The present disclosure is directed generally to structures and methods for ultrasonic Fourier transform.
There is a continued need for ultrafast devices and methods for computation to solve multi-dimensional nonlinear differential equations describing, for example, charge evolution in high-density plasmas. For example, there are a vast variety of computational plasma problems concerning high-energy fusion, such as predicting the instabilities in the Z-pinch, a plasma confinement system that uses an electrical current in the plasma to generate a magnetic field that compresses it. Indeed, controlling instabilities in nonlinear high-density plasmas is critical to understanding the limits of potential fusion energy-enabling phenomena such as the Z-pinch, among other aspects. Since Z-pinch experiments are expensive and time-consuming, there is a continued need for rapid, such as ultrasonic, processing for the 2-D fast Fourier transform (2DFFT).
The embodiments described herein are directed to a computing architecture and method that can compute a 2D Fourier transform at a very fast rate, much faster (e.g., 100-1000 times) than is possible with traditional von Neumann architectures, and using much lower energy than is required by current electronic FFT (Fast Fourier Transform) computational devices or methods. The architecture includes arrays of piezoelectric pixels sandwiching a bulk ultrasonic transmission medium (e.g., silicon, sapphire, silicon-carbide) to generate ultrasonic waves that, when added at a distance, provide an approximation to the 2DFFT of the input pattern. The embodiments described herein are also directed to the method of performing the ultrasonic 2DFFT utilizing the computing apparatus. Applications of the apparatus and method include but are not limited to solving multidimensional nonlinear differential equations such as the Vlasov equation, image processing for computer vision, medical imaging, image recognition, general signal processing, and more.
According to an embodiment is provided a device configured for low-energy ultrasonic 2D Fourier transform analysis. The device includes: (i) a first layer comprising an array of piezoelectric pixels; (ii) a second layer comprising an array of piezoelectric pixels; (iii) a third layer, positioned between the first and second layers, comprising a bulk ultrasonic transmission medium; wherein the second layer array of piezoelectric pixels is in the Fourier plane of an input signal of the first layer array of piezoelectric pixels.
According to an embodiment, the first layer array of piezoelectric pixels is a top layer, and wherein the second layer array of piezoelectric pixels is a bottom layer.
According to an embodiment, the bulk ultrasonic transmission medium comprises silicon, sapphire, silicon-carbide, and/or combinations thereof.
According to an embodiment, the array of piezoelectric pixels of the first and/or second layer comprises PZT, aluminum nitride, zinc oxide, and/or combinations thereof.
According to an embodiment, the array of piezoelectric pixels of the first or second layer is configured to generate an ultrasonic wave for an approximation to the 2D Fourier transform of the input signal.
According to an embodiment, the array of piezoelectric pixels of the first and/or second layer comprises GHz transducers. According to an embodiment, the GHz transducers are approximately 0.5 to 5 GHz.
According to an embodiment, the device further includes one or more signal interconnects located externally to the first, second, and third layers.
According to an embodiment, the device further includes a complementary metal-oxide-semiconductor (CMOS) stack positioned between a top surface of the third layer and a bottom surface of the first layer.
According to an embodiment, the device further includes one or more signal interconnects and/or optical interconnects integrated by the CMOS stack.
According to an embodiment, at least one of the first and second arrays of piezoelectric pixels is configured to emit one or more waves into the third layer to produce a Fourier transform of the input signal phase and magnitude of voltages.
According to an embodiment, the device further includes a Fresnel lens configured to focus the input signal onto the array of piezoelectric pixels of the second layer.
According to an embodiment, the first or second array of piezoelectric pixels is configured to generate an ultrasonic wave and is divided into a plurality of concentric rings.
According to an embodiment, the first or second array of piezoelectric pixels is configured to generate an ultrasonic wave and is divided into a plurality of binary weighted slices.
According to an embodiment, the piezoelectric pixels in the first or second array of piezoelectric pixels can vary in width. According to an embodiment, the piezoelectric pixels in the first or second array of piezoelectric pixels vary in width from 1 wavelength to 10 wavelengths.
According to an embodiment, at least some of the piezoelectric pixels in the first and/or second array are spaced in height relative to other piezoelectric pixels in the array by a fraction of a wavelength.
According to an embodiment is a method for low-energy ultrasonic 2D Fourier transform analysis using a device comprising: (i) a first layer comprising an array of piezoelectric pixels; (ii) a second layer comprising an array of piezoelectric pixels; (iii) a third layer, positioned between the first and second layers, comprising a bulk ultrasonic transmission medium, wherein the second layer array of piezoelectric pixels is in the Fourier plane of an input signal of the first layer array of piezoelectric pixels, the method comprising the steps of: emitting, by the array of piezoelectric pixels of the first layer, one or more acoustic waves through the third layer; receiving, by the array of piezoelectric pixels of the second layer, the emitted one or more acoustic waves; and determining, from the received one or more acoustic waves, a Fourier transform of the one or more acoustic waves.
According to an embodiment, a far-field pattern of the array of piezoelectric pixels of the first layer is the Fourier transform of the input signal of the first layer.
According to an embodiment, the device further includes a complementary metal-oxide-semiconductor (CMOS) stack positioned between a top surface of the third layer and a bottom surface of the first layer.
These and other aspects of the invention will be apparent from reference to the embodiment(s) described hereinafter.
In the drawings, like reference characters refer to the same parts throughout the different views, and the drawings are not necessarily to scale.
The present disclosure describes a computing architecture and method configured to compute a 2D Fourier transform at an ultrasonic rate with low energy requirements. The architecture includes arrays of piezoelectric pixels sandwiching a bulk ultrasonic transmission medium (e.g., silicon, sapphire, silicon-carbide) to generate ultrasonic waves that provide an approximation to the 2DFFT of the input pattern. An aspect of the invention is an ultrasonic Fourier transform computing apparatus (“SFT computer” or “SFTC”). An embodiment of the SFTC includes a bulk ultrasonic wave transmission medium (e.g., silicon, sapphire, and/or others) having a top surface and a bottom surface, an array of piezoelectric (e.g., PZT, aluminum nitride, zinc oxide, etc.) GHz (0.5 to 5 GHz) transducers disposed on the top surface of the bulk ultrasonic wave transmission medium, and an array of piezoelectric GHz (0.5 to 5 GHz) transducers disposed on the bottom surface of the bulk ultrasonic wave transmission medium, wherein the bottom surface-located piezoelectric array is in the Fourier plane of the input signal of the top surface-located piezoelectric array. In one embodiment, control and processing electronics and signal interconnects are located external to the SFTC, although other designs are possible.
An embodiment of the SFTC includes a bulk ultrasonic wave transmission medium, a complementary metal-oxide-semiconductor (CMOS) stack disposed on the top surface of the ultrasonic wave transmission medium, an array of piezoelectric GHz (0.5 to 5 GHz) transducers disposed on the top surface of the CMOS stack, and an array of piezoelectric GHz (0.5 to 5 GHz) transducers disposed on the bottom surface of the bulk ultrasonic wave transmission medium, wherein the bottom surface-located piezoelectric array is in the Fourier plane of the input signal of the top surface-located piezoelectric array. In this embodiment, control and processing electronics and signal interconnects are provided internally (integrated) by the CMOS stack and the optical interconnect(s).
An embodiment of the SFTC includes a bulk ultrasonic wave transmission medium (as above), a CMOS stack disposed on the top surface of the ultrasonic wave transmission medium, an optical interconnect disposed on top of the CMOS stack, an array of piezoelectric GHz (0.5 to 5 GHz) transducers disposed on top of the optical interconnect, and an array of piezoelectric GHz (0.5 to 5 GHz) transducers disposed on the bottom surface of the bulk ultrasonic wave transmission medium, wherein the bottom surface-located piezoelectric transducer array is in the Fourier plane of the input signal top surface-located piezoelectric transducer array. In this embodiment, control and processing electronics and signal interconnects are provided internally (integrated) by the CMOS stack and the optical interconnect(s).
A single-processor 2D FFT requires roughly O(5N² log N) steps. Here the factor of 5 originates from local data loading and transferring. In addition to this time, there is a set of N² reads and writes for the initial state and final answer. The time for reading and writing from memory can be accelerated to 100-500 Gbits/second using optical data loading over on-chip optical data lines, and newly developed very fast memories can achieve Tbit/s data rates. Data can be loaded in bulk using deep pipelining stages. This corresponds to a tRW of 16-40 ps per 16-bit word. For a 1024×1024 FFT, the time to load and unload can be ˜30-70 μs. For the calculation time, tcalc is approximately 5N² log₂N clock periods, without pipelining. For a 3 GHz clock computer, one can estimate the calculation time to be 17.5 ms. One can parallelize the FFT among M cores, but the speedup is not great, as data transfer across the cores limits the computation speed.
To maximize the speed of calculation, it is best to make the computation time comparable to the data loading and reading time. The calculation time of the SFFT is dictated by the travel time of the 2D sonic energy across the silicon to form the FFT. The travel time is ttransit=L/csound, where csound is the speed of sound and L is the travel length. L is estimated to be on the order of 1 cm, and since the speed of sound is 9000 m/s, the calculation time is estimated to be ˜11.6 μs. This is comparable to the data input and output loading time. Following the acquisition of the analog data, this data is converted to the digital domain using ADC converters. The ADC conversion can be performed with a flash converter with speeds exceeding 1 GSPS at 16 bits, for a tADC,single=1 ns. With N such ADC converters operating in parallel on the N² outputs, the total ADC conversion time is tADC=tADC,single×N. The number of ADCs can be decreased to save space and power and still be well ahead of the traditional FFT calculation approach. For the nominal 1024 pixels, this translates to a conversion time of approximately 1 μs.
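As an illustrative, non-limiting sketch, the following Python estimate reproduces the figures quoted above from the stated assumptions (1024×1024 data, a 3 GHz clock, 16-40 ps per 16-bit word, 1 ns flash ADC conversions); all parameter values are assumptions for illustration rather than measured results.

```python
import math

# Rough timing estimates from the assumptions stated above (illustrative only).
N = 1024                                   # points per dimension (N x N input)
f_clock = 3e9                              # conventional processor clock (Hz)

# Conventional single-processor 2D FFT: ~5*N^2*log2(N) steps at one step per clock.
t_calc = 5 * N**2 * math.log2(N) / f_clock
print(f"von Neumann 2D FFT calculation: {t_calc * 1e3:.1f} ms")        # ~17.5 ms

# Loading (or unloading) the N^2 16-bit words at an assumed 16-40 ps per word.
t_load_fast, t_load_slow = 16e-12 * N**2, 40e-12 * N**2
print(f"data load per pass: {t_load_fast * 1e6:.0f}-{t_load_slow * 1e6:.0f} us")

# N flash ADCs at ~1 ns per conversion digitize the N^2 outputs in N * 1 ns.
t_adc = 1e-9 * N
print(f"ADC conversion with N converters: {t_adc * 1e6:.1f} us")        # ~1 us
```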
Also described is an approach in which photodetectors receive light signals that are processed to control the amplitude and phase for the transmit plane. For the case of processing real-time optical images, the need for electronic data input is eliminated, and that portion of the time is removed. If optical microcantilevers are used to modulate light reflected in the receive plane, then the image is also not read out electronically, so that part of the time is eliminated as well. For the case of pure optical-in and optical-out operation, one may see very high speedup gains limited only by the transit times of the ultrasonic wavefronts.
Referring to
The array of 2D piezoelectric pixels can launch waves into the bulk silicon to produce a Fourier Transform of the input phase and magnitude of voltages applied at the input frame. CMOS integration allows very low energy and fast operation. Referring to
According to an embodiment, the piezoelectric transducer array can control both the spatial amplitude and the phase of an ultrasonic wavefront propagating through a silicon substrate. At the focal plane of this wavefront, one can obtain the Fourier transform of the input, as the propagating k-vectors add up according to the laws of diffraction, captured using receiving CMOS. The output ultrasonic pressure field is the Fourier transform of the input pressure field, using for example the following:
A thin-film aluminum nitride (AlN) transducer array integrated with a CMOS substrate is used to implement the SonicFFT as shown in
The time of travel of the sonic pulses generated from the transducers to the Fourier plane is limited by the speed of sound. Each ultrasonic transducer width will be:
Hence, the width of the transmitting aperture on the initial plane is A=Nλ. The receiving aperture should be a distance A away to capture the wavefronts from all of the pixels. Therefore, the time of calculation is:
If one equates one clock cycle of the ultrasound to one clock cycle of a microprocessor, one can state that a 2D FFT using the SonicFFT technique has O(N) computational complexity. The physical properties of wave propagation and addition perform the cross additions needed for a 2D FFT, which in effect implements in real time what a massively parallel processor with complexity O(N² log₂ N) would compute.
According to an embodiment, the SonicFFT uses the fact that an ultrasonic transducer pixel array can control both the spatial amplitude and phase of an ultrasonic wavefront propagating through a silicon substrate. At the focal plane of this wavefront, one can obtain the Fourier transform of the initial wavefront, due to the propagating k-vectors adding up by the laws of diffraction. The lateral dimension of the transmission array is L=Nλ, where N is the number of pixels and λ is the ultrasonic wavelength. This wavelength is 9 μm for a frequency of 1 GHz. Hence, a 1000×1000 array of pixels would occupy an area of 9×9 mm. The length the wavefront needs to traverse is also approximately N wavelengths, so that the k-vectors add up without losing energy out the sides. Hence, the distance to be traversed is approximately L=Nλ as well. The transit time, or calculation time, is:
where c is the speed of sound (9000 m/s for silicon), ν is the frequency, and T is the period of the ultrasound. Hence, it is seen that at GHz frequencies, which are comparable to the clock frequencies of modern multi-core processors, the number of cycles consumed in the microprocessor can be compared to the transit time of the ultrasonic wavefront. The order of computation can be thought of as O(N), as it takes ~N cycles of the GHz clock to get the computation done. This analog-domain FFT occurs in O(N) time for an N×N 2D array of data, drastically faster than the O(N² log N) for a single microcontroller.
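As a non-limiting numerical illustration of this scaling argument, the short Python sketch below evaluates the wavelength, aperture, and transit time for the example values given above (1 GHz, 9000 m/s, a 1000×1000 array); since ttransit=Nλ/c=N/f, the transit occupies approximately N cycles of the GHz clock.

```python
# Illustrative check of the O(N) argument with the example values above.
c = 9000.0             # speed of sound in silicon (m/s)
f = 1e9                # ultrasonic frequency (Hz)
N = 1000               # pixels per side of the transmit array

lam = c / f            # wavelength: 9 um
L = N * lam            # array side and propagation distance: ~9 mm
t_transit = L / c      # = N / f, i.e. ~1 us
cycles = t_transit * f # number of ultrasonic (clock) cycles during transit

print(f"lambda = {lam * 1e6:.1f} um, L = {L * 1e3:.1f} mm")
print(f"transit time = {t_transit * 1e6:.2f} us (~{cycles:.0f} cycles, i.e. ~N)")
```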
Principle of SonicFFT
Referring to
Assuming sinusoidal excitation, the normal component of the transducer velocity is
where ξ0 is the spatial variation (apodization function) across the transducer, v0 is a constant velocity amplitude, and ω is the frequency at which the transducer is driven. As the wavefront has undergone Fraunhofer diffraction through the transducer, for a circular piston transducer of radius a and thickness
the Fraunhofer integral can be approximately modeled as a Rayleigh integral. In the Fraunhofer approximation, R can be assumed constant during the integration. Using the Rayleigh integral equation, the velocity potential for a far-field point on the observation plane is:
Taking k=ω/c0 and dS=dx1dy1, and expressing the transducer by an aperture function Ω(x1,y1), where Ω(x1,y1)=1 in S and 0 outside S, the Fraunhofer approximation for the velocity potential can now be written:
The spatial frequencies in the x and y directions are
The pressure phasor is related to the velocity potential phasor by p=jωρ0ϕN. Therefore, the pressure at the observation point can be written as:
The integral portion can be recognized to be a 2D Fourier Transform of the product of the aperture and apodization functions:
From the above expression, it can be seen that the Fourier transform is scaled by an amplitude term
and a phase term
Since, in the Fraunhofer approximation, R is constant and the (x1²+y1²) term can be neglected, the pressure wave on the receiving plane can be described as the product of a constant term, a phase term, and the Fourier transform of the apodization function times the aperture function. One can recover the Fourier transform by dividing the pressure wave by a constant factor.
In order to calculate the effective scanning region for the SonicFFT, it is necessary to find the maximum span in frequency. The span in frequency space on the x and y axes should be −Bx≤fx≤Bx and −By≤fy≤By, where Bx and By are the maximum bandwidths of interest.
One can relate fx and fy to kx and ky as follows:
2πfx=kx
2πfy=ky (Eq. 8)
And one can relate kx and ky to the spatial coordinates x0 and y0 on the observation plane:
where k is 2π/λ and z is the distance between the source and observation planes. Combining the above expressions, it is found that:
assuming Bx=By=Bx,y.
To give a tighter bound on the dimensions of the receive array, one must take a closer look at what this bandwidth should be. While the bandwidth of interest depends on the problem to be solved, it is also restricted by the dimensions of the receive array.
Since the effective bandwidths of a circular aperture of radius a and a square aperture of width 2a are both 5/a, one can relate this to the maximum span in frequency space as follows:
Substituting into previous equations defining kx and ky, the final equation is obtained.
As discussed in the derivation, to reach the final equation of the SonicFFT, one must assume that the (x1²+y1²) term and higher-order terms in R can be neglected; that is, the transducer must operate in the Fraunhofer zone, which imposes a condition relating the transducer width to the propagation distance. The error between the Rayleigh integral and the Fraunhofer approximation is then less than 5%. In this process, the silicon thickness is z=725 μm. Therefore, the transducer width should be less than 103.12 μm. For the best impedance match, a square transducer with width=100 μm was chosen for the testing. A possible layout is shown in
With the tested transducer, width=100 μm (a=50 μm), thickness z=725 μm, speed of sound in silicon c=8433 m/s, and resonant frequency f=1.2 GHz, one can calculate the maximum scanning span x0,max/min=y0,max/min=±611.4 μm. As the effective bandwidth is
the Nyquist sampling frequency is
Thus, ideally the received SonicFFT should be sampled at space interval
which is less than or equal to 5 μm. However, to expedite the simulation, a sample point was obtained every 11 μm and Gaussian interpolation in Python was used to smooth out the result.
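The smoothing step mentioned above can be sketched as follows; the field values here are placeholders, and scipy's Gaussian filter stands in for whichever Python interpolation routine was actually used.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

# Placeholder for the coarsely sampled receive-plane magnitude (one sample per 11 um).
coarse = np.abs(np.random.randn(112, 112))

# Upsample to a finer grid and apply Gaussian smoothing to approximate the
# "Gaussian interpolation" step described above.
upsample = 11                                  # interpolate back to ~1 um spacing
fine = zoom(coarse, upsample, order=1)         # bilinear upsampling
smoothed = gaussian_filter(fine, sigma=upsample / 2.0)
print(coarse.shape, "->", smoothed.shape)
```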
Referring to
The definition of the 2-D Fourier transform F(kx,ky) of a function f(x,y) is as follows:
F(kx, ky) = ℱ[f(x,y)] = ∫∫ f(x,y) e^(−j(kx·x + ky·y)) dx dy, where both integrals run from −∞ to ∞.
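For reference, a discrete counterpart of this definition can be evaluated with an FFT; the circular aperture below is merely an example input, and the multiplication by the sample spacings approximates the continuous integral.

```python
import numpy as np

n, dx = 512, 1e-6                                    # grid size and 1 um sample spacing
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
f_xy = (X**2 + Y**2 <= (50e-6) ** 2).astype(float)   # example aperture: 50 um radius circle

# Discrete approximation of F(kx, ky); fftshift centers zero frequency.
F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(f_xy))) * dx * dx
fx = np.fft.fftshift(np.fft.fftfreq(n, d=dx))        # spatial frequency (cycles/m)
kx = 2 * np.pi * fx                                  # angular spatial frequency (rad/m)
print(F.shape, kx.min(), kx.max())
```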
Field propagation physics can be used to obtain a Fourier transform in two ways—using a lens and without using a lens but applying fixed correction factors.
It can be shown that given the described arrangement in
Referring to
This relation can be summed up in the following equation, illustrating that the field at the focal plane is a scaled Fourier transform of the input.
The received field on the focal plane can then be multiplied by a factor jfλ/A to cancel out the constant scale factor, leaving only the Fourier transform. This relation is valid when the Fresnel and paraxial approximations apply. While this result is shown for optical fields, the same wave phenomena apply to acoustic fields as well.
For the lensless case, the propagation distance must be extended into the Fraunhofer zone. It can be shown that for a monochromatic source S located on a source plane (xs, ys), as shown in
The integral in Equation (16) is simply the Fourier transform of Ω(xs,ys)ξ(xs,ys) evaluated at the spatial frequencies kx=−k·x0/z and ky=−k·y0/z. Thus, Equation (16) can be written as:
This expression shows that the far field pressure field is simply the Fourier transform of the product of the apodization and aperture functions multiplied by a constant term and a phase term.
From the above expression, it can be seen that the Fourier transform is scaled by an amplitude term C and a phase term Φ:
To retrieve the Fourier transform from the pressure wave, these terms must be canceled out by multiplying by a constant scale factor 1/C and applying a coordinate dependent phase correction to each receive transducer pixel on the observation plane to cancel out Φ. Note that there will be an additional amplitude factor from the pressure to voltage conversion in the piezoelectric transducers that will need to be canceled out as well.
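A minimal sketch of this correction step is given below, assuming the phase term takes the standard Fraunhofer form exp(−jk(x0²+y0²)/(2z)); the received field, the constant C, and the grid parameters are placeholders rather than values prescribed by the disclosure.

```python
import numpy as np

# Undo the constant amplitude term C and a coordinate-dependent phase on the
# receive plane (phase form assumed to be the standard Fraunhofer quadratic term).
lam, z = 8e-6, 725e-6                      # assumed wavelength and propagation distance
k = 2 * np.pi / lam
n, pitch = 128, 5e-6                       # assumed receive-array size and pixel pitch
x0 = (np.arange(n) - n // 2) * pitch
X0, Y0 = np.meshgrid(x0, x0, indexing="ij")

p_rx = np.ones((n, n), dtype=complex)      # placeholder received pressure phasors
C = 1.0                                    # placeholder constant amplitude factor
phase = np.exp(-1j * k * (X0**2 + Y0**2) / (2 * z))

fourier_estimate = (p_rx / C) * np.conj(phase)   # per-pixel phase correction
```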
The lensless approach is more complex and results in more errors due to the coordinate dependent phase term. Therefore, a system that uses an acoustic lens for Fourier transform calculation is proposed in
Accuracy and SNR of FFT in Frequency Domain with Focusing
In optics, it is possible to apply a distribution U(x,y) at the front focal plane of a lens and obtain the Fourier Transform at the back focal plane of the lens.
In the previous section, the Fraunhofer approximation was applied, ignoring the (x1²+y1²) term in the expansion below.
If one does not ignore that term, one will instead have the Fresnel approximation, where the additional phase term appears in the integral.
Now there are two phase terms that must be cancelled in order to achieve the Fourier Transform—one preceding the integral, and one within the integral. To see how to cancel out the
phase term within the integral, one must first place a lens immediately next to the input plane at z=0. The phase contribution from the lens is:
Therefore, at the observation plane, our pressure phasor is now:
It is seen that if the observation plane is situated at z=F, then the terms
within the integral will cancel out and the integral becomes:
So how does one cancel out the
term preceding the integral? Neglecting evanescent waves, one notes that, if one has a pressure wave p(x,y;z;t) propagating in the z direction, with sinusoidal drive at frequency ω, then its pressure phasor p(x,y;z;ω) is related by:
p(x,y;z;t) = p(x,y;z;ω) e^(jωt) (Eq. 24)
One can write it as:
where S(kx,ky;z) is the angular spectrum of p(x,y;z;ω); essentially, this writes the pressure as the aggregate of its plane-wave components. If the wave is propagating from a source plane at z′=0 to an observation plane at z′=z, then through use of the Helmholtz theorem, it can be shown that:
where S(kx,ky;0) is the spectrum of the pressure wave at the source plane z′=0.
Looking at the previous two equations, it is seen that the spectra at the source and observation planes are related as follows:
This relation can be described as the wave propagation transfer function. Applying a Taylor approximation (essentially applying Fresnel approximation to this term as well), one can rewrite this phase term as:
Therefore, it is noted that if one propagates an initial distribution at z=0 a distance z=d before entering the lens, as shown in the figure below, one is able to cancel out the phase factor outside the integral using the steps shown in
Assume the input plane is the x2,y2 plane, the plane where the lens is situated is the x1,y1 plane, and the observation plane is the x0,y0 plane. One can express the Fourier transform of the pressure wave at the input plane as S2(kx,ky)=ℱ(p(x2,y2)), where p(x2,y2) is the pressure at the input plane. Similarly, one can express the Fourier transform of the pressure at the lens plane as S1(kx,ky).
From the previous discussion of the wave propagation transfer function, one notes that:
One can use this to relate S1 and S2:
From our discussion of the phase cancellation properties of a lens at the focal point, one can write the pressure distribution on the observation plane as p(x0,y0):
where it has been assumed that R≈z=F, with F the focal distance of the lens. One can write this in terms of the Fourier transform of the input plane as:
Noting that kx=−kx0/z and ky=−ky0/z:
It is seen that if d=F, then the phase terms cancel out, leaving:
which is simply the Fourier transform multiplied by a constant phase and amplitude scale factor.
One advantage of using a lens is that it gives the Fourier transform directly; therefore, one does not need to scale the array elements with a phase term. Furthermore, operating under the Fresnel approximation allows closer distances than using the Fraunhofer approximation.
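As a numerical sketch of the propagation step used in this derivation, the angular-spectrum transfer function can be applied with FFTs as below; the sign convention assumes an e^(+jωt) time dependence, and the source field is only an example. Applying the quadratic lens phase described above at the lens plane and propagating a further distance F would then realize the d=F arrangement.

```python
import numpy as np

def propagate_angular_spectrum(p0, dx, lam, z):
    """Propagate a monochromatic field a distance z via the angular spectrum:
    S(kx, ky; z) = S(kx, ky; 0) * exp(-1j * z * sqrt(k^2 - kx^2 - ky^2)),
    assuming an exp(+j*omega*t) time dependence; evanescent components
    (kx^2 + ky^2 > k^2) are suppressed."""
    n = p0.shape[0]
    k = 2 * np.pi / lam
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    kz2 = k**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    H = np.where(kz2 > 0, np.exp(-1j * kz * z), 0.0)   # transfer function
    return np.fft.ifft2(np.fft.fft2(p0) * H)

# Example: a 100 um radius piston source propagated 4 mm through silicon (lam = 8 um).
lam, dx, n = 8e-6, 2e-6, 1024
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
source = (X**2 + Y**2 <= (100e-6) ** 2).astype(complex)
field = propagate_angular_spectrum(source, dx, lam, 4e-3)
```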
Notably, there are constraints on a system implemented with an ideal lens. As an initial constraint, the Fresnel approximation must hold and Z1=Z2=F. In
Regarding the size of the receive array, from the previous discussion on Shannon-Nyquist sampling and on ensuring that most of the spectral power is captured, there are the previous constraints:
Models typically use a far-field approximation, so the focal length of the lens should be in the Fraunhofer zone. One can state that the pressure in the far field is:
where a is the radius of the piston, ρ0 is the density of the medium, c0 is the speed of sound in the medium, u0 is the piston velocity, and 2P0 is the maximum axial pressure.
To apply a focus operation to the field to simulate the effects of the lens at the observation plane focal point, one must multiply the field by the following phase factor:
where zf is the focal length of the lens.
While in actuality the system uses square transducers and square arrays, one can first consider the entire transmit aperture to be circular and transmitting from its entire area (i.e., an array of infinitesimally small transducers), in order to begin comparing beam patterns with expected Fourier transforms, and then explore the effects of discretizing the array into finite-sized transducer elements.
Assume a transmit array that is 100 μm in radius, with the ultrasound waves propagating at 1 GHz in a medium with a speed of sound of 8000 m/s, giving a wavelength of 8 μm. In order for this array to be in the Fraunhofer zone, the focal length must be:
To reduce error from the approximation, the transducer is placed a little further than this, at 4 mm, and it is assumed that there is a perfect lens that can focus at that distance. Before plotting the equation, it is noted that there is a singularity point in the far-field piston diffraction pattern when sin(θ)→0, that is, when x=0 and y=0. Noting that
one can find the value at x=0 and y=0 as follows:
It was previously noted that the pressure at the focal point of a lens is a scaled and constant-phase-shifted Fourier transform (assuming r≈zF≡F):
Rewriting this in terms of P0, by using the relation k=ω/c0:
Therefore, it is necessary to divide by the factor jkP0/(2πF) to recover the Fourier transform. The following two plots are generated for an aperture of radius 100 μm, velocity 8000 m/s, frequency 1 GHz, wavelength 8 μm, and zf=4 mm. The lens focusing has been applied by multiplying the piston field by the lens phase contribution, and the pressure field has been scaled by jkP0/(2πF) to enable a direct comparison with the Fourier transform. Note that, to reduce computation time, the receive plane used for the calculation does not capture 98% of the spectral power, as stated as a requirement previously.
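A quick sanity check of these parameters, and of the constant scale factor used to recover the transform, can be sketched as follows; the Rayleigh distance πa²/λ is used here as one common far-field criterion, and P0 is left as a placeholder.

```python
import numpy as np

a, c, f = 100e-6, 8000.0, 1e9          # aperture radius, sound speed, frequency
lam = c / f                             # 8 um
z_f = 4e-3                              # focal length / observation distance
print(f"Rayleigh distance ~ {np.pi * a**2 / lam * 1e3:.2f} mm, z_f = {z_f * 1e3:.1f} mm")

# Constant factor by which the focal-plane pressure is divided to recover the
# Fourier transform (P0, the maximum axial pressure, is a placeholder here).
k = 2 * np.pi / lam
P0 = 1.0
scale = 1j * k * P0 / (2 * np.pi * z_f)
# fourier_estimate = focal_plane_pressure / scale
```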
The x-y coordinates can be transformed to the appropriate spatial frequencies fx and fy using the relations:
For the most direct comparison, one can first compare this to the Fourier transform of a circle with radius
where a is 100×10⁻⁶ m (100 μm). The plots for y=0 are shown in
Error=Magnitude(FFT−FUS)
%Error=100%×Magnitude(FFT−FUS)/Magnitude(FFT) (Eq. 43)
where FFT is the Fourier transform computed analytically (or, later on, using an FFT) and FUS is the Fourier transform computed using ultrasound. This is simply a comparison of the change in vector length in complex space. The locations with maximum error are at the nulls of the Fourier transform. While the percent error may seem large at these points, this is the result of dividing a small value by an even smaller value that is almost zero; it is further noted that the actual error is very small at these points.
Normalizing the error by the maximum amplitude in the Fourier transform gives a value that is a better comparison than a percent-error plot. It is seen that the error is quite acceptable for a Fourier transform computed using the field from a piston transducer under a far-field approximation.
If one wants to use the aperture to represent the function
instead of
what is the appropriate scaling to use? It can be seen that the Fourier transform of the function
has amplitude c2 times that of the Fourier transform of the function
Furthermore, the frequency at which the first zero is located for
is 1/c times the frequency of the first zero of
Summarizing the above, it is seen that the amplitude must be scaled by c² and the frequency axes must be multiplied by 1/c to be able to use the transmit array aperture to represent the Fourier transform of a circle with a radius c times that of the aperture radius.
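This scaling behavior follows from the similarity theorem of the 2D Fourier transform and can be checked numerically; the snippet below verifies that widening a circular aperture by a factor c multiplies the peak of its transform by approximately c², using example dimensions only.

```python
import numpy as np

n, dx = 1024, 1e-6                              # grid points and 1 um sample spacing
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

def circ_ft_peak(radius):
    """Peak magnitude of the FFT of a circular aperture (equals its area)."""
    aperture = (X**2 + Y**2 <= radius**2).astype(float)
    F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture))) * dx * dx
    return np.abs(F).max()

c_scale = 2.0
ratio = circ_ft_peak(c_scale * 50e-6) / circ_ft_peak(50e-6)
print(f"peak ratio ~ {ratio:.2f} (expected c^2 = {c_scale**2:.1f})")
```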
It is desirable to discretize the receive plane into discrete transducer elements. A transducer integrates the received pressure on its surface, converting this force into a voltage. To convert the pressure profile into transducer voltages, one must integrate the pressure on each transducer.
It is noted that the maximum spatial wavelength of interest is the diameter of the transmit array aperture Dx. Therefore, the minimum spatial frequency of interest is Δfx=1/Dx. From the relations Δkx=2π×Δfx and Δkx=kΔxRX/z, it is seen that the transducer pitch should be no larger than ΔxRX=zΔkx/k. However, because of sampling on the receive plane as well, one should work with transducer pitches that are half of that or smaller:
Therefore, for the above transmit apertures, one needs ΔxRX≤80 μm. Shown in
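These pitch requirements can be reproduced directly from the relations above, as in the short sketch below using the example aperture (200 μm diameter), wavelength (8 μm), and distance (4 mm) from this section.

```python
import numpy as np

lam, z = 8e-6, 4e-3                    # wavelength and source-to-receive distance
k = 2 * np.pi / lam
D_x = 200e-6                           # transmit aperture diameter
delta_fx = 1.0 / D_x                   # finest spatial-frequency increment of interest
delta_kx = 2 * np.pi * delta_fx
pitch = z * delta_kx / k               # from delta_kx = k * delta_x_RX / z  -> 160 um
print(f"pitch = {pitch * 1e6:.0f} um, recommended <= {pitch / 2 * 1e6:.0f} um")
```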
It can be seen that, in practice, one wants to oversample on the receive plane to get an image that more closely represents the Fourier transform. In
One must also consider the error contributed by the integration over each transducer. The smaller the transducer is, the closer it should represent the value of the pressure at its center, because there will be less contribution from off-center points. Before comparing directly against the Fourier transform, it is desirable to first compare how well it captures the pressure profile. Shown in
Shown in
Referring to
A key requirement of SFFT is to be able to control phase and amplitude of the triggered pulse. Described herein (such as with regard to
In order to increase the sonic signal at the receiver, a Fresnel lens can focus the ultrasonic wavefronts onto the receiver. Since, according to an embodiment, the wavelength is 9 μm, a Fresnel lens can be fabricated from a silicon dioxide/silicon composite layer to form a binary spatial distribution of index of refraction. Simulations of the FFT with the lens have verified the increase in the intensity of the received image while enabling the SFFT to be measured within an aperture that is equal to the depth over which the wavefront propagates.
Apodization Function Amplitude
To implement the amplitude of the apodization function, ξ(xs,ys), in Eqn. (16), different transducer pixels in the transmitting plane (xs,ys) should be assigned different actuating voltage amplitudes corresponding to the input spatial function. One way to implement this is to use a Digital to Analog Converter (DAC) circuit for each transmitting pixel. That allows the amplitude information to be stored digitally in a RAM and then loaded to the pixels to compute the FFT. One issue with the DAC approach is that a conventional DAC's area scales exponentially with the number of bits. Consequently, if high resolution is needed, having a DAC circuit for each pixel becomes infeasible given the high density of the transmitting pixel array.
A 4-bit version of the proposed Sonic DAC is shown in
Equation 45 shows that the received pressure along the propagation axis is linearly proportional to the area of the ring (ao²−ai²). As a result, the sensed pressure from the binary-weighted transmitting rings is also binary weighted. As shown in
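One way to size such binary-weighted rings is sketched below; the overall pixel radius is an assumed example, and the ring boundaries are chosen so that the ring areas, and hence the on-axis pressure contributions, follow a 1:2:4:8 ratio.

```python
import numpy as np

bits = 4
r_outer = 50e-6                                   # assumed overall pixel radius
weights = 2 ** np.arange(bits)                    # [1, 2, 4, 8]
cum_fraction = np.cumsum(weights) / weights.sum() # cumulative area fractions
radii = r_outer * np.sqrt(np.concatenate(([0.0], cum_fraction)))  # ring boundaries

areas = np.pi * (radii[1:] ** 2 - radii[:-1] ** 2)
print(np.round(radii * 1e6, 2))                   # boundary radii in um
print(np.round(areas / areas[0], 3))              # ~[1, 2, 4, 8]
```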
Another way of dividing the transducer pixel is to cut it into binary-weighted slices instead of rings, as shown in
Apodization Function Phase
To implement the phase term in the apodization function in Equation 16, the RF switch control signal is delayed by a digitally controlled delay (
Digital DAC with Both Amplitude and Phase Control
Referring to
As acoustic waves travel in the silicon substrate, they experience multiple reflections due to the high acoustic impedance mismatch between silicon and air. That results in multiple echoes received by the receiving transducer, as shown in
To eliminate multiple reflections, a λ/4 matching layer needs to be deposited on the back side of the silicon, followed by a thick, lossy absorbing layer, as shown in
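A back-of-the-envelope sizing of such a matching layer is sketched below; the silicon sound speed follows the value cited above, the silicon density is the standard bulk value, and the absorber impedance and matching-layer sound speed are placeholders for whatever materials are actually used.

```python
import numpy as np

f = 1e9                                  # operating frequency (Hz)
rho_si, c_si = 2330.0, 8433.0            # silicon density (kg/m^3) and sound speed (m/s)
Z_si = rho_si * c_si                     # ~19.6 MRayl

Z_abs = 3e6                              # placeholder impedance of the lossy absorber (Rayl)
Z_match = np.sqrt(Z_si * Z_abs)          # classic quarter-wave matching condition

c_match = 2500.0                         # placeholder sound speed in the matching layer (m/s)
t_quarter = (c_match / f) / 4.0          # lambda/4 thickness at the operating frequency
print(f"Z_match ~ {Z_match / 1e6:.1f} MRayl, thickness ~ {t_quarter * 1e6:.2f} um")
```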
Pzflex simulations in
In some cases the input to the 2D array needs to be driven optically. In this case, photodiode arrays, very much like the pixels in a camera array, can be used to receive the amplitude and phase of an image, which can then modulate the RF drive of the AlN pixels. Such a circuit is shown below. The two photodiodes receive optical energy and drive the AlN pixel with the right phase and amplitude using the analog CMOS circuit. Alternatively, one can use just one pixel to control the amplitude of the AlN transducer in the case where an image is acquired and its Fourier transform is required. See, for example,
In another embodiment, such as that shown in
On the receive side, if done in RF CMOS, a multitude of RF receivers is needed, with the ability to mix and acquire I and Q values. Another method is to bend a mirror in proportion to the input RF voltage on the receive electrodes. This can be done using electrostatically driven mirror pixels. For example, the voltage generated on an I and Q receiver can be applied to an electrostatic mirror. The force across an electrostatic actuator is
where V1 and V2 are the voltages across the gap between the movable mirror and the rigid electrode. Hence, the force produces a displacement proportional to V1²−2V1V2+V2². In order to get large displacements, V1 can be a DC bias voltage to increase displacement. V2 can be the sense voltage with either a sine or cosine phase shift. Two pixels with sine and cosine voltages would lead to an effective optical phase shift on the reflected signal that is proportional to the phase difference between the two pixels, providing an optical output proportional to the phase of the input ultrasonic amplitude. See, for example,
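The I/Q idea above can be illustrated with a toy numerical example; the received voltage, its amplitude, and its phase are placeholders, and the mixing and averaging below simply recover the phase that the sine- and cosine-referenced pixels encode.

```python
import numpy as np

f = 1e9                                        # ultrasonic / RF frequency (Hz)
t = np.arange(0, 20e-9, 1e-12)                 # 20 ns of samples (20 full periods)
phi_true = 0.7                                 # placeholder phase of the received wave (rad)
v_rx = 0.5 * np.cos(2 * np.pi * f * t + phi_true)     # placeholder received voltage

# Mix with cosine and sine references and average over whole periods.
I = 2.0 * np.mean(v_rx * np.cos(2 * np.pi * f * t))   # in-phase component
Q = -2.0 * np.mean(v_rx * np.sin(2 * np.pi * f * t))  # quadrature component
print(f"amplitude ~ {np.hypot(I, Q):.2f}, phase ~ {np.arctan2(Q, I):.2f} rad")
```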
While embodiments of the present invention have been particularly shown and described with reference to certain exemplary embodiments, it will be understood by one skilled in the art that various changes in detail may be effected therein without departing from the spirit and scope of the invention as defined by claims that can be supported by the written description and drawings. Further, where exemplary embodiments are described with reference to a certain number of elements, it will be understood that the exemplary embodiments can be practiced utilizing either fewer or more than the certain number of elements.
This application claims priority to U.S. Provisional Patent Application Ser. No. 62/675,415, filed on May 23, 2018, and entitled “Ultrasonic Fourier Transform Analog Computing Apparatus, Method, and Applications,” the entirety of which is incorporated herein by reference.
This invention was made with government support under Grant No.: N66001-12-C-2009 awarded by the Intelligence Advanced Research Projects Activity. The government has certain rights in the invention.