A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The present disclosure is a non-provisional of and claims priority to U.S. Provisional Application No. 62/472,162 filed on Mar. 16, 2017 and entitled “Apparatus and Methods of Providing an Efficient Radix-R Fast Fourier Transform”, which is incorporated herein by reference in its entirety.
The present disclosure is generally related to the field of data processing, and more particularly to data processing apparatuses and methods of providing fast Fourier transformations, such as devices, systems, and methods that perform real-time signal processing and off-line spectral analysis. In some aspects, the data processing system may implement an efficient, generalized radix-r fast Fourier transformation (FFT) that allows the efficient calculation of discrete Fourier transformations of data of arbitrary size, including prime sizes, which may improve processing efficiency and speed by reducing the overall number of memory accesses (which may be internal to the processor core) needed to complete the operation.
A sampled data signal can be transformed from the time domain to a frequency domain using a Discrete Fourier Transform (DFT). Conversely, a sampled data signal can be transformed from the frequency domain to a time domain using an Inverse DFT (IDFT). The DFT is a fundamental digital signal-processing transformation that provides spectral information (frequency content) for analysis of signals. The DFT allows for signal content to be analyzed in the frequency domain, which allows for efficient computation of the convolution integral that can be used in linear filtering and signal correlation analysis. However, since direct computation of the DFT uses a large number of arithmetic operations, it can be impractical in real-time applications.
In an example, the computational burden is a measure of the number of calculations required. The DFT (and IDFT) process starts with a number (N) of input data points and computes a number (N) of output data points. The DFT is a function of a sum of products (repeated multiplication of two factors). The Fast Fourier Transform (FFT) reduces this computational burden, allowing the FFT to be used in diverse applications, such as digital filtering, audio processing, spectral analysis for speech recognition, and so on. In particular, the FFT utilizes a divide-and-conquer approach that divides the input data into subsets from which the DFT is computed.
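As a rough, non-limiting illustration of this reduction (the function names below are hypothetical and used only for demonstration), the approximate operation counts of the direct DFT and a radix-2 FFT can be compared as:

```python
import math

def direct_dft_ops(n):
    # Direct DFT: each of the N outputs is a sum of N products,
    # so roughly N * N multiply-accumulate operations.
    return n * n

def fft_ops(n):
    # Radix-2 FFT: log2(N) stages of roughly N operations each.
    return int(n * math.log2(n))

# At N = 1024 the direct DFT needs roughly 100x more operations.
print(direct_dft_ops(1024), fft_ops(1024))
```

For N = 1024, the sketch reports 1,048,576 operations for the direct DFT versus 10,240 for the FFT, which illustrates why the divide-and-conquer approach is attractive for real-time applications.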
The FFT algorithm can be memory access and storage intensive. For example, to calculate a radix-4 FFT butterfly, four pieces of data and three “twiddle” coefficients can be read from memory, and four pieces of resultant data are written back to memory. In an FFT implementation, an address generator can be used to compute the addresses (locations in memory) where input data, output data, and twiddle coefficients will be stored and retrieved from memory. The time required to read input data and twiddle coefficients from the memory and to write results back to memory affects the overall time to compute the FFT. The time required to calculate the address can also impact the overall time to compute the FFT.
In some embodiments, a system can be configured to utilize a generalized, Radix-r FFT, which implements a word counter and shifting counter in a decimation-in-time (DIT) process or in a decimation-in-frequency (DIF) process to achieve a self-sorting radix-r algorithm in which access to the coefficient multiplier's memory can be reduced as compared to conventional radix-r algorithms.
In certain embodiments, systems, methods and circuits are disclosed that can utilize a generalized FFT process with an FFT address generator that can compute the FFT of an input data having a size that is a multiple of an arbitrary integer without adding to the memory requirements. In an example of one possible advantage provided by the generalized FFT process with the FFT address generator described herein, the systems, methods, and circuits can reduce memory access relative to prior address generators by regrouping the data with its corresponding coefficient multiplier.
Embodiments of a generalized radix-r FFT, as disclosed herein, can be used in a wide range of signal processing and fast computational algorithms. The reduction in computational time provided by the generalized radix-r FFT finds applications in both real-time signal processing and off-line spectral analysis. Further, the generalized radix-r FFT can be used in a variety of applications, including speech, satellite, and terrestrial communications; wired and wireless digital communications; multi-rate signal processing; target tracking and identification; radar and sonar systems; machine monitoring; seismology; biomedicine; encryption; video processing; gaming; convolutional neural networks; digital signal processing; image processing; speech recognition; computational analysis; autonomous cars; deep learning; and other applications.
In some embodiments, an apparatus can include a memory configured to store data at a plurality of addresses. The apparatus can further include a generalized radix-r fast Fourier transform (FFT) processor configured to determine a plurality of FFTs for any positive integer Discrete Fourier Transform (DFT) by utilizing three counters to access the data and the coefficient multipliers at each stage of the FFT processor.
In one possible aspect, the positive integer DFT can be a multiple of an integer. In another possible aspect, the positive integer DFT can be a prime number. In still another aspect, the generalized radix-r fast FFT processor can be configured to perform at least one of a Decimation in Frequency (DIF) operation and a Decimation in Time (DIT) operation. In still another aspect, the generalized radix-r fast FFT processor may include an address generator configured to reduce accesses to coefficient multipliers of the FFTs stored by the plurality of addresses of the memory by regrouping data with their corresponding coefficient multipliers.
In other embodiments, an apparatus may include an input configured to receive input data having a size that is a multiple of an arbitrary integer a. The apparatus may further include a memory configured to store data at a plurality of addresses and may include a generalized radix-r fast Fourier transform (FFT) processor coupled to the input and to the memory. The generalized radix-r FFT processor may be configured to determine an FFT of the input data using three counters to access data and coefficient multipliers at each stage of the FFT processor.
In still other embodiments, an apparatus may include a memory configured to store data at a plurality of addresses. The apparatus may further include a generalized radix-r fast Fourier transform (FFT) processor configured to determine a plurality of FFTs for any positive integer Discrete Fourier Transform (DFT) by utilizing three counters to access the data and the coefficient multipliers at each stage of a plurality of stages of the FFT processor. The plurality of stages may include an FFT stage and at least one butterfly stage.
In the following discussion, the same reference numbers are used in the various embodiments to indicate the same or similar elements.
Despite many new technologies, the Fourier transform may remain the workhorse of signal processing analysis. The Fast Fourier Transform (FFT) is an algorithm that can be applied to compute the Discrete Fourier Transform (DFT) and its inverse, both of which can be optimized to remove redundant calculations. These optimizations can be made when the number of samples to be transformed is an exact power of two; if it is not, the input can be zero padded to the nearest power of two.
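For illustration only (a minimal sketch, not the disclosed apparatus; the function names are hypothetical), zero padding to the nearest power of two can be performed as:

```python
def next_power_of_two(n):
    # Smallest power of two that is greater than or equal to n.
    p = 1
    while p < n:
        p *= 2
    return p

def zero_pad(samples):
    # Append zeros so the sequence length becomes a power of two.
    target = next_power_of_two(len(samples))
    return samples + [0.0] * (target - len(samples))
```

For example, a 5-sample input would be padded to 8 samples, with three zeros appended at the end.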
The present disclosure may be embodied in one or more address generators that can be used in conjunction with one or more butterfly processing elements. The one or more address generators can be configured to support a generalized radix-r FFT that allows the efficient calculation of discrete Fourier transforms of arbitrary sizes, including prime sizes. In some embodiments, the present disclosure may utilize a computing device including an interface coupled to a processor and configured to receive data. The processor may be configured to apply a butterfly computation, which may include a simple multiplication of input data with an appropriate coefficient multiplier. In the context of an FFT computation, a butterfly computation is a portion of the DFT computation that combines the results of smaller DFTs into a larger DFT (or vice versa) or segments a larger DFT into smaller DFTs. These smaller DFTs may be written to or read from memory, and such read/write operations contribute to the overall speed of the DFT computation. Embodiments of a system in accordance with the present disclosure may include one or more simple address generators (AGs), which can compute address sequences from a small parameter set that describes the address pattern.
A processor may be configured to implement a butterfly operation (or may be configured to compute the mathematical transformations), and dataflow may be controlled by an independent device or by another processor of the device. In an embodiment, peripheral devices may be used to control data transfers between an I/O (Input/Output) subsystem and a memory subsystem in the same manner that a processor can control such transfers, reducing core processor interrupt latencies and conserving digital signal processor (DSP) cycles for other tasks, leading to increased performance. Embodiments described herein may present a generalized radix-r FFT that allows the efficient calculation of DFTs of arbitrary size, including prime sizes.
Referring now to
In some embodiments, the one or more CPU cores 102 can include internal memory 114, such as registers and memory management. Further, the one or more CPU cores 102 can include an address generator 116 including a plurality of counters 118. In some embodiments, the one or more CPU cores 102 can be coupled to a floating-point unit (FPU) processor 104.
The one or more CPU cores 102 can be configured to process data using FFT DIF operations or FFT DIT operations. Embodiments of the present disclosure utilize an address generator 116 including a plurality of counters 118 to provide generalized radix-r FFTs, which allow for the efficient calculation of discrete Fourier transforms of arbitrary sizes, including prime sizes. The address generator 116 and the counters 118 can be used to reduce the overall number of memory accesses (read operations and write operations) for the various FFT calculations, thereby enhancing the overall efficiency, speed and performance of the one or more CPU cores 102.
It should be appreciated that the FFT operations may be managed using a dedicated processor or processing circuit. In some embodiments, the FFT operations may be implemented as CPU instructions that can be executed by the individual processing cores of the one or more CPU cores 102 in order to manage memory accesses and various FFT computations.
In order to appreciate the improvements to the processing cores provided by the present disclosure, it is important to understand at least one possible implementation of the FFT computations. In the following discussion of
Further, the four signals can be decomposed into eight signals using the interlace decomposition at a fourth stage 208. The eight signals can be decomposed into sixteen signals using the interlace decomposition at a fifth stage 210.
Each of the stages uses an array of a size that is a power of two. If the data size is not a power of two, it can be zero padded to the nearest power of two. As used herein, the term “zero padded” refers to the insertion of a plurality of zeros at the beginning or end of a sequence in order to fill the array to form an array having a size that is a power of two. All of the above-cited algorithms require data sizes that are a power of two and, if a data size is not, it must be zero padded accordingly. Zero padding from a natural computation size to the nearest power-of-two size introduces increased computational complexity and memory requirements and reduces accuracy, especially in multidimensional problems.
The definition of the DFT is represented by the following equation:
X(k) = Σ_{n=0}^{N−1} x(n)·w_N^{nk}, k = 0, 1, . . . , N−1, (1)
where x(n) represents the input sequence, X(k) represents the output sequence, N represents the transform length, and w_N represents the Nth root of unity, w_N = e^{−j2π/N}. Both the input sequence (x(n)) and the output sequence (X(k)) are complex valued sequences of length N = r^S, where the variable r represents the radix and the variable S represents the number of stages.
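A minimal Python sketch of the direct DFT defined above (illustrative only; this O(N²) form is what the FFT embodiments described herein avoid) is:

```python
import cmath

def dft(x):
    # Direct evaluation of X(k) = sum_n x(n) * w_N^(n*k),
    # with w_N = exp(-j*2*pi/N), the Nth root of unity.
    n_total = len(x)
    w = cmath.exp(-2j * cmath.pi / n_total)
    return [sum(x[n] * w ** (n * k) for n in range(n_total))
            for k in range(n_total)]
```

For example, the DFT of a unit impulse [1, 0, 0, 0] is a flat spectrum of ones, and the DFT of a constant sequence concentrates all energy in the k = 0 bin.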
The decimation-in-time (DIT) FFT first rearranges the input elements into bit-reversed order, then builds up the output transform in log2 N iterations. For example, the DIT FFT computes an 8-point DFT in three stages as depicted in
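One common way to realize this structure is sketched below (a generic iterative radix-2 DIT FFT in Python; this is an illustration of the bit-reverse-then-iterate pattern, not the generalized radix-r address-generator approach of the present disclosure):

```python
import cmath

def bit_reverse_indices(n):
    # Map each index to its bit-reversed counterpart (n a power of two).
    bits = n.bit_length() - 1
    return [int(format(i, "0{}b".format(bits))[::-1], 2) for i in range(n)]

def fft_dit(x):
    # Iterative radix-2 decimation-in-time FFT; len(x) must be a power of two.
    n = len(x)
    a = [x[i] for i in bit_reverse_indices(n)]  # stage 0: bit-reversed order
    size = 2
    while size <= n:
        w_step = cmath.exp(-2j * cmath.pi / size)  # twiddle increment
        for start in range(0, n, size):
            w = 1 + 0j
            for j in range(size // 2):
                u = a[start + j]
                t = w * a[start + j + size // 2]
                a[start + j] = u + t              # butterfly sum
                a[start + j + size // 2] = u - t  # butterfly difference
                w *= w_step
        size *= 2
    return a
```

An 8-point input runs through log2 8 = 3 butterfly stages, matching the three-stage structure described above.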
In the embodiments of
In general, higher radix butterfly implementations can reduce the communication burden. For example, a sixteen-point DFT can be determined in two stages of radix-4 butterflies, as shown in
It is also possible to derive FFT algorithms that first perform a set of log2 N iterations on the input data and then rearrange the output values into bit-reversed order. These are called decimation-in-frequency (DIF) algorithms. One possible example of a three-stage eight-point DIF FFT process is described below with respect to
The integers n and k in equation (1) (for the case N=2γ) can be expressed in binary numbers depicted in the following equations:
n = 2^{γ−1}·n_{γ−1} + 2^{γ−2}·n_{γ−2} + . . . + n_0, (2)
and
k = 2^{γ−1}·k_{γ−1} + 2^{γ−2}·k_{γ−2} + . . . + k_0, (3)
in which the binary digits n_i and k_i can take only the values 0 and 1. Accordingly, equation (1) can be rewritten as follows:
Based on equation (4), the γ-tuple sum can be divided into γ separate summations as follows:
The computation of equation (1) can be divided into log2 N = γ stages, where each stage can have a computational complexity of N. As a result, the total computational complexity can be decreased from N^2 to N log2 N. If the result needs to be in the natural order, an unscrambling stage for Xγ can be included. The signal flow graph for an 8-point radix-2 DIF FFT is described below with respect to
In the illustrated example of
The radix-2 butterfly can include two complex additions and one complex multiplication. A conceptual representation of the radix-2 butterfly is described below with respect to
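In Python-style pseudocode (illustrative only; the function name is hypothetical), a single radix-2 butterfly performs exactly one complex multiplication and two complex additions:

```python
def radix2_butterfly(a, b, w):
    # One radix-2 butterfly: one complex multiplication (w * b)
    # followed by two complex additions (sum and difference).
    t = w * b
    return a + t, a - t
```

With the unity twiddle factor w = 1, the butterfly reduces to a simple sum and difference of its two inputs.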
The basis of the radix-r FFT is that a DFT can be divided into r smaller DFTs, each of which is divided into r smaller DFTs, in a continuing process that results in a combination of r point DFTs. By properly dividing the DFT into partial DFTs, the system can control the number of multiplications and stages. In some embodiments, the number of stages may correspond to the amount of global communication, the amount of memory accesses, or any combination thereof. Thus, advantages can be achieved by reducing the number of stages.
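The divide-and-conquer principle above can be sketched as follows (a generic recursive decomposition in Python; the function name and fallback behavior are assumptions for illustration, not the address-generator-based implementation disclosed herein):

```python
import cmath

def dft_radix_r(x, r):
    # Split an N-point DFT into r interleaved DFTs of size N/r, recursing
    # while r divides the current length; falls back to the direct DFT
    # otherwise (which covers prime lengths at the leaves).
    n = len(x)
    if n % r != 0 or n == r:
        w = cmath.exp(-2j * cmath.pi / n)
        return [sum(x[m] * w ** (m * k) for m in range(n)) for k in range(n)]
    sub = [dft_radix_r(x[q::r], r) for q in range(r)]  # r smaller DFTs
    w = cmath.exp(-2j * cmath.pi / n)
    out = [0j] * n
    for k in range(n):
        # Recombine: X(k) = sum_q w_N^(q*k) * SubDFT_q(k mod N/r).
        out[k] = sum(w ** (q * k) * sub[q][k % (n // r)] for q in range(r))
    return out
```

Choosing a larger r divides the transform into fewer, larger pieces and therefore fewer recombination stages, consistent with the stage-count trade-off described above.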
Conceptually, the FFT address generator can provide a simple mapping of the three indices (FFT stage, butterfly, and element) to the addresses of the multiplier coefficients. At the outset, equation (1) can be expressed in compact form as depicted in equation (8) below:
for k = 0, 1, . . . , N−1, and with p = 0, 1, . . . , (N/r)−1 and q = 0, 1, . . . , r−1, with
X = [X(p), X(p+N/r), X(p+2N/r), . . . , X(p+(r−1)N/r)]^T, (9)
W_N = diag(w_N^0, w_N^p, w_N^{2p}, . . . , w_N^{(r−1)p}), (10)
Therefore, by defining [T_r]_{l,m} as the element at the lth row and mth column of the matrix T_r, equation (11) can be rewritten as follows:
[T_r]_{l,m} = w_N^{⟨mlV⟩_N}, (12)
where l = 0, 1, . . . , r−1, m = 0, 1, . . . , r−1, ⟨x⟩_N represents the operation x modulo N, and w_N^{(m,v,s)} represents the entries of the twiddle factor matrix as follows:
W_N^{(v,s)} = diag(w_N^{(0,v,s)}, w_N^{(1,v,s)}, . . . , w_N^{(r−1,v,s)}), (13)
where the index r represents the FFT's radix, the index v = 0, 1, . . . , V−1 indexes the words of size r (V = N/r), and the index s = 0, 1, . . . , S indexes the stages (or iterations, S = log_r N − 1). Further, equation (13) can be expressed for the different stages in an FFT process as follows:
for the DIF process. Equation (14) can be expressed as follows:
for the DIT process, where l=0, 1, . . . , r−1 is the lth butterfly's output, m=0, 1, . . . , r−1 is the mth butterfly's input and └x┘ represents the integer part operator of x.
As a result, the lth transform output during each stage can be illustrated according to the following equation:
for the DIF process and
for the DIT process.
The read address generator (RAG), the write address generator (WAG), and the coefficient address generator (CAG) can be used for the DIF and DIT processes, respectively. The mth butterfly input data of the vth word x(m) at the sth stage (sth iteration) is given by the RAG, per equations (12) and (13) for the DIF process and per equation (14) for the DIT process, as follows:
and for s>0
and for the DIT process
where the butterfly input m = 0, 1, . . . , r−1, v = 0, 1, . . . , V−1, and s = 0, 1, . . . , S, with S = log_r N − 1.
For both cases, the lth processed butterfly output X(l,v,s) (l = 0, 1, . . . , r−1) for the vth word at the sth stage should be stored into the memory address location given by the WAG as follows:
WAG(l,v,s)=l(N/r)+v, (21)
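Equation (21) can be sketched directly in Python (an illustration only; the function name is hypothetical, and N = 16 with r = 4 is an assumed example configuration):

```python
def wag(l, v, n, r):
    # Write address generator of equation (21): WAG(l, v, s) = l*(N/r) + v.
    # The address depends only on the output index l and the word index v,
    # not on the stage s, so outputs land in natural order at every stage.
    return l * (n // r) + v

# Every (l, v) pair maps to a distinct address in 0..N-1.
addresses = sorted(wag(l, v, 16, 4) for l in range(4) for v in range(4))
```

Because the r·V = N generated addresses form a permutation of 0 through N−1, each stage writes its outputs without collisions and in natural order.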
It should be noted that, for both algorithms, the input and output data are in natural order during each stage of the FFT process, known at all stages as Ordered Input Ordered Output (OIOO) algorithms. The coefficient multipliers (twiddle factors or twiddle coefficients), which are used during each stage and which are fed to the mth butterfly input of the vth word x(m) at the sth stage (sth iteration), are provided as follows:
for the DIF process and
for the DIT process. Based on equations (15), (20), (21), and (23), the generalized radix-r FFT can be implemented in a field-programmable gate array (FPGA), a circuit, or software that can execute on a processor. Regardless of how the mathematical processes are implemented, the generalized radix-r FFT can be used with a variety of different circuits, devices, and systems.
At 706, the method 700 can include computing the first stage. At 708, the method 700 may include computing the S−1 stages. At 710, the method 700 may include executing the butterfly computations with trivial multiplication using unitary Twiddle factors. At 712, the method 700 can include executing the butterfly computations with non-trivial multiplications using the complex Twiddle factors.
At 714, if the selected stage is not greater than the total number of stages minus one, the method 700 can include incrementing the stage counter at 716. The method 700 then returns to 708 to compute the S−1 stages. Returning to 714, if the selected stage is greater than the total number of stages minus one, the method 700 can terminate at 718.
In the source code 800, a plurality of “for” loops are nested to iteratively determine the read data addresses and the twiddle (coefficient) factor addresses and to determine the x-integer for the butterfly FFT. The illustrative source code 800 may correspond to equations 17, 20, and 23 above.
In some embodiments, the generalized radix-r FFT operations and the associated address generator and counters disclosed herein take advantage of the occurrence of multiplications by one. For example, the elements of the twiddle factor matrix that are equal to one can be easily predicted when the shifting counter in both cases is equal to zero (i.e., v < r^s or v < r^{S−s}). The trivial multiplication by one (w^0) during the entire FFT process is consequently avoided. Thus, embodiments of the present disclosure may take advantage of this mathematical equivalence to ensure that trivial multiplications do not contribute to the computational load.
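A minimal sketch of this optimization (illustrative Python; the generic zero-exponent test shown here stands in for the shifting-counter prediction described above):

```python
import cmath

def apply_twiddle(value, exponent, n):
    # Multiply by w_N^exponent, skipping the multiplication entirely when
    # the exponent is congruent to zero (w^0 = 1).
    if exponent % n == 0:
        return value          # trivial multiplication avoided
    return value * cmath.exp(-2j * cmath.pi * exponent / n)
```

Skipping these cases removes one complex multiplication per trivial twiddle factor, which accumulates to a meaningful saving over all words and stages.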
Additionally, as can be seen in the source code 800 of
However, the modulo (MOD) operation is more costly in terms of processor operations than multiplication and thus can be more computationally intensive.
Many FFT users may prefer natural-order outputs of the computed FFT, which is why many developers have concentrated their efforts on reducing the computational impact of the bit-reversal stage, the first stage of the DIT process, known as the bit-reversal data shuffling technique. The DIT FFT has been attractive in fixed-point implementations because DIT processes executed in fixed-point arithmetic have been shown to be more accurate than decimation-in-frequency (DIF) processes. Furthermore, for many hardware architectures it is desirable to reorder the intermediate stages of the FFT algorithm in order to facilitate operation on consecutive data elements. To these ends, a number of alternative implementations have been proposed. One such alternative implementation may adopt an out-of-place algorithm in which the output array is distinct from the input array.
For example, in a bit-reversal technique developed by Rius and De Porrata-Doria (J. M. Rius and R. De Porrata-Doria, “New FFT Bit-Reversal Algorithm,” IEEE Transactions on Signal Processing, Vol. 43, No. 4, April 1995, pp. 991-994), the operational count, excluding the index calculations for each stage, is as follows:
N−2 integer additions,
2(N−2) integer increments,
(log2 N)−1 multiplications by 2,
(log2 N)−1 divisions by 2, (25)
plus two more divisions, N/2 and N/4. In equation (25), multiplications and divisions can be efficiently implemented using bit-shift operations. Further, this Rius implementation uses a storage table of N/2 index numbers. In contrast, a faster bit-reversal permutation is described by Prado (J. Prado, “A New Fast Bit-Reversal Permutation Algorithm Based on Symmetry,” IEEE Signal Processing Letters, Vol. 11, No. 12, December 2004, pp. 933-936). An even faster implementation is described by Pei and Chang (S. Pei and K. Chang, “Efficient Bit and Digital Reversal Algorithm Using Vector Calculation,” IEEE Transactions on Signal Processing, Vol. 55, No. 3, March 2007, pp. 1173-1175). The implementation described by Pei and Chang provides a significant improvement in the operation count, which includes N shifts, N additions, and an index adjustment, but requires the use of O(N) memory.
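For reference, a straightforward (unoptimized) bit-reversal permutation, against which such optimized algorithms are measured, can be sketched in Python as:

```python
def bit_reverse_permute(x):
    # Reorder a power-of-two-length sequence into bit-reversed index order.
    n = len(x)
    bits = n.bit_length() - 1
    out = [None] * n
    for i in range(n):
        # Reverse the bits of index i (e.g., for n=8: 1 = 001 -> 100 = 4).
        rev = int(format(i, "0{}b".format(bits))[::-1], 2)
        out[rev] = x[i]
    return out
```

For n = 8, index 1 (binary 001) moves to position 4 (binary 100), index 3 (011) to 6 (110), and so on.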
However, embodiments of the radix-r implementation described herein do not utilize memory to store a table of index numbers. Thus, the overall memory accesses can be reduced as compared to the prior implementations. Table 1 below depicts the memory storage for the three implementations described above.
By examining equations (16) and (17), it can be determined that the data in both algorithms are grouped with their corresponding coefficient multipliers at each stage, because the mth coefficient multiplier of the lth butterfly output shifts if, and only if, v (v = 0, 1, . . . , V−1) is equal to r^{S−s} in the DIF process or to r^s in the DIT process. As a result, and since V = N/r = r^S, the total number of shifts during each stage in the DIT process is r^s and the total number of shifts during each stage in the DIF process is r^{S−s}. Therefore, by implementing the word counter r^{S−s} (word-counter = 0, 1, . . . , r^{S−s}−1) and the shifting counter r^s (shift-counter = 0, 1, . . . , r^s−1) in the DIT process, or the word counter r^s and the shifting counter r^{S−s} in the DIF process, embodiments of systems, methods, and devices can achieve highly efficient, self-sorting DIT/DIF radix-r processes through which accesses to the coefficient multiplier's memory are reduced as compared with conventional radix-r DIT/DIF processes.
The DIF FFT can be derived based on the above equations and the discussion below. For the first iteration (i.e., s = 0), equation (20) may equal equation (21) because the second term of the equation is equal to v and the third term is equal to zero. Thus, for the first iteration, the RAG and the WAG may have the same structure.
In fact, when s=0, the third term of equation (20) can be determined as follows:
and since r^S = V, equation (26) can be determined as follows:
Since v = 0, 1, . . . , V−1, the quotient ⌊v/r^S⌋ is therefore always equal to zero. Similarly, the second term, involving vr, reduces to v when s = 0.
Also, for the first iteration when s = 0, the coefficient address generator (CAG) illustrated in equation (23) can be expressed for a conventional radix-r butterfly, where the term ⟨mlV⟩_N represents the adder tree matrix T_r, as follows:
As a result, the first iteration involves no twiddle factor multiplication.
For s ≥ 1, modulo and integer part operations dominate the workload in the read and coefficient address generators. The notation ⟨A⟩_B denotes A modulo B, which is equal to the residue (remainder) of the division of A by B, and ⌊A/B⌋ denotes the quotient (integer part) of the division of A by B. The arithmetical modulo operation, in a hardware implementation, can be represented by a resettable counter. During each stage, V words (v = 0, 1, . . . , V−1) may be processed. Thus, the third term of equations (20) and (23) is a function of r^s and can be replaced by the arithmetical modulo operation. In fact, since v varies between 0 and (V−1), the third term can be expressed as follows:
and will vary between 0 and r^s−1. As a result, the integer part operation in equations (20) and (23) can be simplified as follows:
for I = 0, 1, . . . , r^s−1, s = 0, 1, . . . , S, and S = log_r N − 1, where S is the number of stages.
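The resettable-counter replacement for the modulo operation can be sketched as follows (illustrative Python; the function name is hypothetical):

```python
def modulo_by_counter(values, b):
    # Compute v mod b for consecutive v using a resettable counter
    # instead of a division/modulo instruction, mirroring the hardware
    # counter described above.
    counter = 0
    out = []
    for _ in values:
        out.append(counter)
        counter += 1
        if counter == b:
            counter = 0   # reset, emulating the modulo wrap-around
    return out
```

Because v advances by one on every word, the counter's increment-and-reset cycle reproduces ⟨v⟩_B exactly, with no division hardware required.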
Based on equations (31) and (33), for s ≥ 1, r^{S−s} words may encounter trivial multiplication (i.e., w^0 = 1). As a result, the proposed simplified algorithm can be based on three simple counters, as follows:
1. Stage or Iteration Counter
s = 0, 1, . . . , S,
S = log_r N − 1; (32)
2. Shifting Counter
I = 0, 1, . . . , r^s−1; (33)
and
3. Word Counter
M = 0, 1, . . . , r^{S−s}−1. (34)
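The three counters can be sketched as nested loops (illustrative Python; the function name, enumeration order, and tuple layout are assumptions for demonstration only):

```python
def counter_schedule(r, n):
    # Enumerate (stage, shift, word) triples for the three counters:
    # stage s = 0..S, shifting counter i = 0..r**s - 1, and
    # word counter m = 0..r**(S - s) - 1, with S = log_r(N) - 1.
    total_stages = 0
    size = r
    while size < n:
        size *= r
        total_stages += 1      # S = log_r(N) - 1
    schedule = []
    for s in range(total_stages + 1):
        for i in range(r ** s):
            for m in range(r ** (total_stages - s)):
                schedule.append((s, i, m))
    return schedule
```

Each stage contributes r^s · r^{S−s} = r^S = V triples, so the schedule visits exactly V words per stage across all S + 1 stages.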
One possible implementation of the DIT radix-r address generator, which uses some of the above equations, is described below with respect to
At 1008, the method 1000 can include determining a read address generator for each word. The method 1000 can also include executing the butterfly Radix-r, at 1010. At 1012, the method 1000 may include determining a write address generator for each word. At 1014, if the current word counter (v) is not greater than the total number of words minus one, the method 1000 may include incrementing the word counter, at 1016. The method 1000 may then return to 1008 to determine the read address generator.
Otherwise, at 1014, if the word counter is greater than the total number of words minus one, the method 1000 may include initializing a plurality of parameters, at 1018. At 1020, the method 1000 can include initializing a plurality of additional parameters. At 1022, the method 1000 may include determining the read address generator. At 1024, the method 1000 can include executing the butterfly Radix-r. At 1026, the method 1000 may include determining the write address generator. At 1028, if the word counter (v) is not greater than the total number of words (B) minus one, the method 1000 may include incrementing the word counter, at 1030. The method may then return to 1022 to determine the read address generator.
Otherwise, at 1028, if the word counter (v) is greater than the total number of words minus one, the method 1000 may include initializing a plurality of parameters, at 1032. At 1034, the method 1000 may include initializing further parameters. At 1036, the method 1000 can include determining a read address generator. At 1038, the method 1000 may include executing the Radix-r butterfly. At 1040, the method 1000 can include determining the write address generator.
At 1042, if the iteration counter (L) is not greater than a total number of words (B) minus one, the method 1000 may include incrementing the iteration counter, at 1044. The method 1000 may return to 1036 to determine the read address generator.
Returning to 1042, if the iteration counter (L) is greater than the total number of words (B) minus one, the method 1000 may advance to 1046. If, at 1046, the word counter (v) is not greater than a total number of words minus two, the method 1000 may include incrementing the word counter at 1048. The method 1000 may then advance to 1034 to initialize a plurality of parameters.
Returning to 1046, if the word counter (v) is greater than the total number of words minus two, the method 1000 can include advancing to 1050. If, at 1050, the stage counter (s) is not greater than the total number of stages minus one, the method 1000 may include incrementing the stage counter, at 1052. The method 1000 may then return to 1020 to initialize a plurality of parameters. Otherwise, at 1050, if the stage counter (s) is greater than the total number of stages minus one, the method 1000 may terminate, at 1054.
Returning to 1111, if the word counter (v) is greater than the total number of words minus one, the method 1100 may include initializing a plurality of parameters, at 1114. At 1116, the method 1100 may include initializing additional parameters. At 1118, the method 1100 may include determining the read address generator. At 1120, the method 1100 can include executing the butterfly Radix-r. At 1122, the method 1100 can include determining the write address generator. At 1124, if the word counter (v) is not greater than the total number of words minus one, the method 1100 may include incrementing the word counter at 1126. The method 1100 may then return to 1118 to determine the read address generator.
Returning to 1124, if the word counter (v) is greater than the total number of words minus one, the method 1100 may initialize a plurality of parameters at 1128 and 1130. The method 1100 may include determining a read address generator at 1132, executing the butterfly Radix-r at 1134, and determining a write address generator at 1136. At 1138, if the word counter (L) is not greater than the number of words minus one, the method 1100 may include incrementing the word counter at 1140 and then returning to 1130 to initialize some of the parameters.
Returning to 1138, if the word counter (L) is greater than the total number of words minus one, the method 1100 may include setting the input (Xin) equal to the output (Xout), at 1142. At 1144, if the shifting counter (v) is not greater than the total number of shifts minus two, the method 1100 may include incrementing the shifting counter at 1146 and returning to 1130 to initialize some of the parameters.
Returning to 1144, if the shifting counter (v) is greater than the total number of shifts minus two, the method 1100 may advance to 1148. At 1148, if the stage counter (s) is not greater than the total number of stages minus one, the method 1100 may increment the stage counter (s) at 1150. The method 1100 may then return to 1116 to initialize some of the parameters. Returning to 1148, if the stage counter (s) is greater than the total number of stages minus one, the method 1100 may terminate at 1152.
In general, the method 1000 in
It should be appreciated that the method 1000 of
In some embodiments, an apparatus, such as a processor, a central processing unit, or other data processing circuit, can be configured to implement the methods described with respect to at least one of
In the illustrated example, the butterfly input (Bin) includes the input function. The butterfly Radix-r 1502 may be configured to generate a butterfly output (Bout) as well as the butterfly write-address output (Bw).
It should be appreciated that the examples above utilize Matlab® for the purpose of demonstrating the functionality; the generalized radix-r FFT functionality may be programmed utilizing other programming languages or utilizing software modules implemented in a variety of different programming languages and configured to share information. The examples provided are for illustrative purposes only and are not intended to be limiting.
In conjunction with the methods, devices, and systems described above with respect to
The processes, machines, and manufactures (and improvements thereof) described herein are particularly useful improvements for computers that process complex data. Further, the embodiments and examples herein provide improvements in the technology of image processing systems. In addition, the embodiments and examples herein improve the functioning of a computer by enhancing the speed of the processor in handling complex mathematical computations, reducing the overall number of memory accesses (read and write operations) performed to complete the computations. Thus, the FFT implementations described herein provide technical advantages, such as a system in which real-time signal processing and off-line spectral analysis are performed more quickly than in conventional devices, because the overall number of memory accesses (which can introduce delays) is reduced. Further, the radix-r FFT can be used in a variety of data processing systems to provide faster, more efficient data processing. Such systems may include speech, satellite, and terrestrial communications; wired and wireless digital communications; multi-rate signal processing; target tracking and identification; radar and sonar systems; machine monitoring; seismology; biomedicine; encryption; video processing; gaming; convolutional neural networks; digital signal processing; image processing; speech recognition; computational analysis; autonomous cars; deep learning; and other applications. For example, the systems and processes described herein can be particularly useful in any system in which it is desirable to process large amounts of data in real time or near real time.
While technical fields, descriptions, improvements, and advantages are discussed herein, these are not exhaustive and the embodiments and examples provided herein can apply to other technical fields, can provide further technical advantages, can provide for improvements to other technologies, and can provide other benefits to technology. Further, each of the embodiments and examples may include any one or more improvements, benefits and advantages presented herein.
The illustrations, examples, and embodiments described herein are intended to provide a general understanding of the structure of various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. For example, in the flow diagrams presented herein, in certain embodiments, blocks may be removed or combined without departing from the scope of the disclosure. Further, structural and functional elements within the diagram may be combined, in certain embodiments, without departing from the scope of the disclosure. Moreover, although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown.
This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the examples, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description. Additionally, the illustrations are merely representational and may not be drawn to scale. Certain proportions within the illustrations may be exaggerated, while other proportions may be reduced. Accordingly, the disclosure and the figures are to be regarded as illustrative and not restrictive.
Number | Date | Country
---|---|---
62472162 | Mar 2017 | US