Imaging technologies based on frequencies include ultrasound, radar, sonar, and radio astronomy. Ultrasound, radar, and sonar systems generally transmit a signal and determine the position and/or shape of an object by processing the reflections of that signal to create an image, whereas radio astronomy processes signals assumed to originate from distant sources.
Standard transducers comprise either a single active element that both generates and receives high-frequency sound waves, or two paired elements, one for transmitting and one for receiving. Phased arrays, in contrast, comprise transducer assemblies whose elements can each be pulsed separately. The transducer assemblies may be arranged in a strip (linear array), a ring (annular array), a circular matrix (circular array), a curve, or a more complex shape. A phased array system varies the time between a series of outgoing pulses in such a way that the individual wave fronts generated by each element in the array combine with each other to add or cancel energy in predictable ways that effectively steer and shape the sound beam or radio signal.
Beamforming is a signal processing technique that precisely aligns the phases of an incoming signal across the different parts of an array to form a well-understood beam with a specific direction and focus depth. The signal from each element is delayed so that, when the signals are summed, contributions arriving from a chosen direction add coherently. However, each independent beam requires its own analog path or delay-and-sum calculation, causing increasing complication and computational complexity as the size of the array, and the corresponding number of independent beams, increases.
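The delay-and-sum operation described above can be sketched as follows. This is a minimal illustration only, not the method claimed herein; the linear geometry, integer-sample delays, and circular `np.roll` shifting are simplifying assumptions.

```python
import numpy as np

def delay_and_sum(element_positions, signals, fs, c, theta):
    """Delay-and-sum beamformer for a linear array (illustrative sketch).

    element_positions: (N,) element x-coordinates in meters
    signals: (N, T) sampled waveforms, one row per element
    fs: sample rate in Hz; c: propagation speed in m/s
    theta: steering angle in radians from broadside
    """
    n_elem, n_samp = signals.shape
    # Plane-wave arrival delay of each element relative to the array origin
    delays = element_positions * np.sin(theta) / c      # seconds
    shifts = np.round(delays * fs).astype(int)          # whole samples
    out = np.zeros(n_samp)
    for sig, s in zip(signals, shifts):
        out += np.roll(sig, -s)   # advance each channel so wavefronts align
    return out / n_elem
```

Summed output power is large when the steering angle matches the true arrival direction and small otherwise, which is how one such delay-and-sum path forms a single independent beam.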
There is therefore a need for alternative methods of signal processing to align the phases of an incoming signal to efficiently create images from phased arrays in imaging technology.
Described herein is a method of beamform signal processing to enhance image quality and angular resolution from signals received from a phased array in near- and far-field imaging data, and to transform raw transducer data into a form that can be displayed on a screen with a high degree of image resolution. The methods and systems described herein may take a single- or multi-dimensional image generated from a transducer array and generate an image having one more dimension than that from which it was generated; e.g., 2D images may be generated from 1D data, 3D images from 2D data, et cetera. The signal processing systems and methods described herein may be applied to radar, sonar, ultrasound, and radio astronomy.
In the methods and systems described herein, translations are made from spatial input data to angular output. Two or more raw input signals are combined to form a similar number of output signals in a series of subarrays across an array of input sensors/transducers spaced less than a wavelength apart. In some embodiments, such transducers may be about 0.7 wavelengths apart, or any fraction of a wavelength less than one.
In some embodiments, a first group of raw input signals from a first set of neighboring transducers in a phased array is combined to form a first set of output angular signals, and a second group of raw input signals from a second set of neighboring sensors in the phased array is combined to form a second set of output angular signals. Two or more raw input signals may be combined to form a set of output angular signals until the spatial input from the entire sensor array has been converted to angular information of resolution at the Abbe limit. Each set of output angular signals is then combined with adjacent angular beams from neighboring subarrays to refine the angular resolution, with the amount of angular resolution increasing with each subsequent pairing such that a fourth set in the hierarchy would have twice as much angular resolution as the second set in the hierarchy. The output angular signals may be combined in the same or different ways to refine angular resolution and improve image quality data. For example, inputs can be combined in any desired adjacent fashion including, but not limited to, up, down, azimuthal angle θ, and polar angle φ. In some embodiments, additional refinements to the data may be added such that some or all of the data, including, but not limited to, the raw data, may be apodized.
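For narrowband, far-field signals, this pairwise hierarchy, in which each level combines neighboring sub-array beams and doubles the angular resolution, reduces to the familiar radix-2 butterfly. The sketch below operates on complex element phasors, which is an assumption for illustration; the methods described herein operate on sampled time-domain data.

```python
import cmath

def hierarchical_beams(x):
    """Hierarchically pair neighboring sub-array beams into full-array beams.

    Each recursion level pairs the beams of two interleaved sub-arrays with a
    progressive phase delay, doubling the number of resolvable directions; for
    phasor inputs this is exactly a radix-2 decimation-in-time FFT.
    """
    n = len(x)
    if n == 1:
        return list(x)
    even = hierarchical_beams(x[0::2])   # beams from even-indexed elements
    odd = hierarchical_beams(x[1::2])    # beams from odd-indexed elements
    beams = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)  # inter-subarray delay for beam k
        beams[k] = even[k] + w * odd[k]            # energy adds for beam k
        beams[k + n // 2] = even[k] - w * odd[k]   # and cancels for its opposite
    return beams
```

Because each element's signal is touched only log2(N) times, an N-element array yields N beams in O(N log N) operations instead of the O(N²) of independent delay-and-sum paths.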
The sample frequency used to generate the raw input signals in the methods and compositions described herein is at least two samples per wavelength or period (T). In some embodiments, the sampling frequency may be about 10 times per period, or any fraction in between including, but not limited to, 3, 4, 5, 6, 6.25, 7, 7.5, 8, 9, and the like. The same or different sample frequencies may be used to generate each group of raw input signals, creating a frequency-agnostic system. In some embodiments, the beam directions may be determined in parallel.
In some embodiments, a method of generating a multi-dimensional image from a phased array with one less dimension than the multi-dimensional image may include receiving a signal of amplitude (A) from an array of sensors over time (t); apodizing the received raw signals A; pairing each signal A received by a sensor with an adjacent signal above, below, and across, forming a first set of paired signals B; compensating for the time delay due to the location difference of each sensor receiving the signal; pairing each pair in the first set of paired signals B in accordance with their spatial placement and angle in volume in the phased array to form a second set of paired signals C; and/or pairing each pair in the second set of paired signals C in accordance with their spatial placement and angle in volume in the phased array to form a third set of paired signals D. Such third sets of paired signals D and/or subsequently grouped signals may be used to produce the multi-dimensional image.
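The A → B → C → D progression above can be sketched schematically for a single pairing direction. The `pair_up` helper, the fixed one-sample delay compensation, and the Hann apodization window are illustrative assumptions, not the claimed method, which pairs in multiple directions and angles.

```python
import numpy as np

def pair_up(signals, shift):
    """Pair each channel with its right-hand neighbor, advancing the neighbor
    by `shift` samples to compensate its later arrival (hypothetical helper)."""
    return np.array([signals[i] + np.roll(signals[i + 1], -shift)
                     for i in range(len(signals) - 1)])

def hierarchical_pipeline(raw, shift=1):
    # Apodize the raw element signals A with a window across the aperture
    window = np.hanning(len(raw))
    a = raw * window[:, None]
    b = pair_up(a, shift)   # first set of paired signals B
    c = pair_up(b, shift)   # second set of paired signals C
    d = pair_up(c, shift)   # third set of paired signals D
    return d
```

Each pairing level consumes one channel of aperture (8 raw channels yield 5 third-level pairs here) while accumulating delay-compensated energy from progressively larger sub-arrays.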
In some embodiments, a beamforming apparatus may include a phased array and a processor configured to translate spatial input data from beam signals of angles between +π/4 radians and −π/4 radians to angular output by apodizing each set of signals, interpolating the angular information, and forming an image from the interpolated angular information. The phased array may include transducers separated by a distance L=λ/1.44 configured to receive signals of a frequency (f) with a period (T) and a time delay of +T/4 to −T/4 between signals received by adjacent transducers. In some embodiments, raw input data is paired according to space and time to create a first set of paired received signals. The first set of received signals is grouped according to space and time to create a second set of paired signals. In further embodiments, each of the second set of paired signals is grouped according to space, angle and time to create a third set of paired signals. In additional embodiments, each of the third set of paired signals is paired according to space, angle and time to create a fourth set of paired signals. Such grouping may continue hierarchically to combine angles of interest with adjacent angular beams from neighboring sub-arrays to refine angular resolution until all of the spatial input has been converted to angular information of resolution at the Abbe limit and used to generate images from phased arrays.
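A short worked computation under assumed medium and frequency values (1540 m/s and 5 MHz are illustrative choices, not taken from the text): the stated pitch L = λ/1.44 reproduces the "about 0.7 wavelengths" spacing described earlier, and the stated maximum ±T/4 adjacent-element delay corresponds, for this pitch, to a steering angle of roughly 21 degrees.

```python
import math

c = 1540.0   # m/s: assumed speed of sound in soft tissue (illustrative)
f = 5e6      # Hz: assumed transducer center frequency (illustrative)
wavelength = c / f              # ~0.308 mm
T = 1.0 / f                     # period, 200 ns
L = wavelength / 1.44           # element pitch specified above
pitch_fraction = L / wavelength           # 1/1.44 ~ 0.694 wavelengths
# Steering angle reached by the stated maximum +/- T/4 adjacent-element delay:
theta = math.asin(c * (T / 4) / L)        # sin(theta) = 1.44/4 = 0.36
```

Note these relationships are frequency-independent: both the pitch fraction and the angle depend only on the ratio L/λ, consistent with the frequency-agnostic character of the system.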
To the accomplishment of the foregoing and related ends, certain illustrative aspects of the system are described herein in connection with the following description and the attached drawings. The features, functions and advantages that have been discussed can be achieved independently in various embodiments of the present disclosure or may be combined in yet other embodiments, further details of which can be seen with reference to the following description and drawings. This summary is provided to introduce a selection of concepts in a simplified form that are elaborated upon in the Detailed Description. This summary is not intended to identify key features or essential features of any subject matter described herein.
To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.
“Abbe diffraction limit” in this context refers to the principle that light with wavelength λ, traveling in a medium with refractive index n and converging to a spot with half-angle θ, will make a spot with radius d = λ/(2n sin θ).
“Anisotropic” in this context refers to exhibiting properties with different values when measured in different directions.
“Apodization” in this context refers to the process of altering a signal (such as one emitted by an electrical sensor) to make its mathematical description smoother and more easily manipulatable.
“Far Field” in this context refers to the region from the antenna where the radiation pattern does not change shape with distance. It satisfies the following three conditions: R > 2D²/λ, R ≫ D, and R ≫ λ.
“FLOPS” in this context refers to floating-point operations per second where “floating-point” is a method of encoding real numbers within the limits of finite precision available on computers.
“Fresnel zone” in this context refers to a series of concentric ellipsoidal regions of alternating double-strength and half-strength volumes of a wave's propagation, caused by a wave following multiple paths as it passes by an object and is partially diffracted by it, resulting in constructive and destructive interference as the different-length paths go in and out of phase.
“Near Field” in this context refers to the area in the immediate vicinity of an antenna, having a boundary of R < 0.62√(D³/λ), where D is the maximum linear dimension of the antenna and λ is the wavelength.
“Nyquist limit” in this context refers to the minimum sampling rate required to avoid aliasing. Specifically, there must be at least two samples per period of the wave being observed.
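A short numerical illustration of this limit: a tone sampled below twice its frequency is indistinguishable from a lower-frequency alias.

```python
import numpy as np

f_sig, fs = 700.0, 1000.0   # 700 Hz tone sampled at 1 kHz, below its 1400 Hz Nyquist rate
n = np.arange(1000)         # sample indices
sampled = np.sin(2 * np.pi * f_sig * n / fs)
# The samples exactly match a sign-flipped 300 Hz tone (fs - f_sig = 300 Hz):
alias = -np.sin(2 * np.pi * (fs - f_sig) * n / fs)
```

With fewer than two samples per period, the 700 Hz and 300 Hz tones produce identical sample sequences, so the original frequency cannot be recovered.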
“Piezoelectric” in this context refers to the ability of certain materials to generate an AC (alternating current) voltage when subjected to mechanical stress or vibration, or to vibrate when subjected to an AC voltage, or both.
“Secant of an angle” in this context refers to the length of the hypotenuse divided by the length of the adjacent side. The abbreviation is sec: sec θ = hypotenuse/adjacent. It is equal to the reciprocal of the cosine, 1/cos θ.
“Voxel” in this context refers to each of an array of elements of volume that constitute a notional three-dimensional space, especially each of an array of discrete elements into which a representation of a three-dimensional object is divided.
Described herein are methods and systems for translating spatial input data to angular output for a phased array in order to generate images with high resolution. Phased arrays may be in any shape desired including, but not limited to, linear, curved, annular, rectangular, circular, matrix, and the like. The methods and systems described herein decrease computational costs, allowing for increasingly complex phased arrays to be used in imaging technologies with near or far field focus including, but not limited to, ultrasound, radar, sonar and radio astronomy. In some embodiments, the system can produce images of high resolution and frame rate as well as handle high numbers of array elements.
During pulse propagation in a phased array, echoes are scattered by acoustic impedance perturbations and detected by the array elements. The data is sampled at a rate of about four to about ten times the fundamental frequency (f), though any rate in that range can be used including, but not limited to, 5, 6, 7, 8, and 9 times f, or any fraction thereof, and digitally integrated through beamforming, allowing reception in a predefined direction corresponding to the transmission angle, and optimal focus at each depth.
In some embodiments, raw input signals received by a transducer in a phased array are combined hierarchically using spatial, angular, and time coordinates: the placement of the input signal in space (Cartesian coordinates), the angle in volume (i.e., the direction facing into the void), and the time delay (i.e., the difference in time for a beam to return based on the position of a transducer). The number of signals combined at each level in the hierarchy may vary from about 2 to about the square root of the total sensor/transducer count. In some embodiments, the translation of the input data from a plurality of raw input signals may take place using parallel processing.
As shown in the phased array with beamforming apparatus in
Receive beamformer 114 may proceed as shown in
In additional embodiments, receive beamformer 114 may proceed as shown in
Signal pairing or grouping occurs in a hierarchical manner using spatial and angular coordinates from the phased array as well as time displacement based on the location of the transducers in relation to one another. As shown in
In a phased array, sensors/transducers are separated by a constant distance L. As shown in
α: depth
β: depth/cos(arctan(λ/(1.4·depth)))
β: √(depth² + L²/2)
χ: √(depth² + 2L²)
δ: √(depth² + 2L²)
ε: √(depth² + L²/8)
l: √(depth² + 12.5L²)
γ: √(depth² + 4.5L²)
As shown in
As shown in
This first set of paired received signals B is then combined as shown in
Cijkm(t) = X[2i−m]j·Bi[2j−m](t + (2m−3)) + Y[2i]j·Bi[2j](t − (2m−3))
where Xij, Yij are constant tables derived from the windowing function for the full aperture such that X and Y are apodizing functions, i and j represent coordinates in space (Cartesian) and where k corresponds to azimuthal angle θ and m represents polar angle φ. Inputs can be combined in any desired adjacent fashion including up, down, azimuthal angle θ and polar angle φ.
The first set of paired signals B is combined to form:
In an alternative embodiment as shown in
The direction of a signal in a phased array may be detected by selectively delaying the signals received from each sensor and running cross-correlations of the received return signals. While this is feasible for phased arrays with a small number of sensors, the computational cost for a larger array scales as the square of the number of sensors in the array. By hierarchically pairing sensors in space and angle, and accounting for time, the computational cost scales as the number of sensors times the logarithm of the number of sensors rather than the square, decreasing the computational cost considerably. As shown in Table 1, even 512×512 inputs and outputs calculated according to the methods described herein can be calculated orders of magnitude more efficiently than by traditional methods, as shown in lines 1-5 (delay and sum (4.12E+04) versus 1.61E+02) of Table 1.
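The scaling argument can be checked directly. The operation counts below are idealized (constant factors and per-operation costs omitted) and are not the figures from Table 1.

```python
import math

n = 512 * 512                     # number of inputs (and outputs), as in Table 1
naive = n * n                     # delay-and-sum: every beam scans every sensor
hierarchical = n * math.log2(n)   # pairwise hierarchy: log2(n) combining levels
speedup = naive / hierarchical    # n / log2(n)
```

For n = 262,144 the hierarchy needs only 18 pairing levels, so the idealized speedup is n/18, roughly four orders of magnitude.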
This is additionally shown in
In various embodiments, system 1000 may comprise one or more physical and/or logical devices that collectively provide the functionalities described herein. In some embodiments, system 1000 may comprise one or more replicated and/or distributed physical or logical devices.
In some embodiments, system 1000 may comprise one or more computing resources provisioned from a “cloud computing” provider, for example, Amazon Elastic Compute Cloud (“Amazon EC2”), provided by Amazon.com, Inc. of Seattle, Wash.; Sun Cloud Compute Utility, provided by Sun Microsystems, Inc. of Santa Clara, Calif.; Windows Azure, provided by Microsoft Corporation of Redmond, Wash., and the like.
System 1000 includes a bus 1002 interconnecting several components including a network interface 1008, a display 1006, a central processing unit 1010, and a memory 1004.
Memory 1004 generally comprises a random access memory (“RAM”) and permanent non-transitory mass storage device, such as a hard disk drive or solid-state drive. Memory 1004 stores an operating system 1012 as well as routine 200, routine 300 and routine 400.
These and other software components may be loaded into memory 1004 of system 1000 using a drive mechanism (not shown) associated with a non-transitory computer-readable medium 1016, such as a floppy disc, tape, DVD/CD-ROM drive, memory card, or the like.
Memory 1004 also includes database 1014. In some embodiments, system 1000 may communicate with database 1014 via network interface 1008, a storage area network (“SAN”), a high-speed serial bus, and/or via any other suitable communication technology.
In some embodiments, database 1014 may comprise one or more storage resources provisioned from a “cloud storage” provider, for example, Amazon Simple Storage Service (“Amazon S3”), provided by Amazon.com, Inc. of Seattle, Wash., Google Cloud Storage, provided by Google, Inc. of Mountain View, Calif., and the like.
References to “one embodiment” or “an embodiment” do not necessarily refer to the same embodiment, although they may. Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively, unless expressly limited to a single one or multiple ones. Additionally, the words “herein,” “above,” “below” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. When the claims use the word “or” in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list, unless expressly limited to one or the other. “Logic” refers to machine memory circuits, non-transitory machine readable media, and/or circuitry which by way of its material and/or material-energy configuration comprises control and/or procedural signals, and/or settings and values (such as resistance, impedance, capacitance, inductance, current/voltage ratings, etc.), that may be applied to influence the operation of a device. Magnetic media, electronic circuits, electrical and optical memory (both volatile and nonvolatile), and firmware are examples of logic. Logic specifically excludes pure signals or software per se (however does not exclude machine memories comprising software and thereby forming configurations of matter). Those skilled in the art will appreciate that logic may be distributed throughout one or more devices, and/or may be comprised of combinations of memory, media, processing circuits and controllers, other circuits, and so on.
Therefore, in the interest of clarity and correctness logic may not always be distinctly illustrated in drawings of devices and systems, although it is inherently present therein. The techniques and procedures described herein may be implemented via logic distributed in one or more computing devices. The particular distribution and choice of logic will vary according to implementation.
Those having skill in the art will appreciate that there are various logic implementations by which processes and/or systems described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes are deployed. “Software” refers to logic that may be readily readapted to different purposes (e.g. read/write volatile or nonvolatile memory or media). “Firmware” refers to logic embodied as read-only memories and/or media. “Hardware” refers to logic embodied as analog and/or digital circuits. If an implementer determines that speed and accuracy are paramount, the implementer may opt for a hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a solely software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware. Hence, there are several possible vehicles by which the processes described herein may be effected, none of which is inherently superior to the others in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary.
Those skilled in the art will recognize that optical aspects of implementations may involve optically-oriented hardware, software, and or firmware. The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood as notorious by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. Several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of skill in the art in light of this disclosure. 
In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies equally regardless of the particular type of signal bearing media used to actually carry out the distribution. Examples of a signal bearing media include, but are not limited to, the following: recordable type media such as floppy disks, hard disk drives, CD ROMs, digital tape, flash drives, SD cards, solid state fixed or removable storage, and computer memory.
In a general sense, those skilled in the art will recognize that the various aspects described herein which can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or any combination thereof can be viewed as being composed of various types of “circuitry.” Consequently, as used herein “circuitry” includes, but is not limited to, electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes and/or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes and/or devices described herein), circuitry forming a memory device (e.g., forms of random access memory), and/or circuitry forming a communications device (e.g., a modem, communications switch, or optical-electrical equipment).
Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use standard engineering practices to integrate such described devices and/or processes into larger systems. That is, at least a portion of the devices and/or processes described herein can be integrated into a network processing system via a reasonable amount of experimentation.
The foregoing described aspects depict different components contained within, or connected with different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
Embodiments of methods and systems for interpreting beam direction imaging data have been described. The following claims are directed to said embodiments, but do not preempt interpreting beam direction imaging data in the abstract. Those having skill in the art will recognize that numerous other approaches to interpreting beam direction imaging data are possible and/or utilized commercially, precluding any possibility of preemption in the abstract. However, the claimed system improves, in one or more specific ways, the operation of a machine system for interpreting beam direction imaging data, and thus distinguishes from other approaches to the same problem/process in how its physical arrangement of a machine system determines the system's operation and ultimate effects on the material environment. The terms used in the appended claims are defined herein in the glossary section, with the proviso that the claim terms may be used in a different manner if so defined by express recitation.
This application is a continuation of U.S. patent application Ser. No. 15/133,474, filed on Apr. 20, 2016, which is incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5617371 | Williams | Apr 1997 | A |
5903516 | Greenleaf et al. | May 1999 | A |
6063030 | Vara et al. | May 2000 | A |
6120450 | Li | Sep 2000 | A |
6607489 | Hoctor | Aug 2003 | B2 |
7423578 | Tietjen | Sep 2008 | B1 |
7667639 | Cheng et al. | Feb 2010 | B2 |
7750849 | Hjelmstad | Jul 2010 | B2 |
8517946 | Kim | Aug 2013 | B2 |
9030354 | Natarajan | May 2015 | B2 |
9132913 | Shapiro et al. | Sep 2015 | B1 |
9323445 | Kritt et al. | Apr 2016 | B2 |
9342156 | Huh | May 2016 | B2 |
9986969 | Call et al. | Jun 2018 | B2 |
10401492 | Brooks | Sep 2019 | B2 |
10624612 | Sumi | Apr 2020 | B2 |
20020173721 | Grunwald | Nov 2002 | A1 |
20020173722 | Hoctor et al. | Nov 2002 | A1 |
20030055334 | Steinbacher et al. | Mar 2003 | A1 |
20040102700 | Asafusa | May 2004 | A1 |
20070200760 | Hjelmstad | Aug 2007 | A1 |
20070239001 | Mehi et al. | Oct 2007 | A1 |
20070259158 | Friedman et al. | Nov 2007 | A1 |
20080012753 | Cheng | Jan 2008 | A1 |
20080114239 | Randall et al. | May 2008 | A1 |
20080306385 | Jago | Dec 2008 | A1 |
20100030076 | Vortman et al. | Feb 2010 | A1 |
20100160784 | Poland | Jun 2010 | A1 |
20100251823 | Adachi | Oct 2010 | A1 |
20110077524 | Oshiki et al. | Mar 2011 | A1 |
20110208052 | Entrekin | Aug 2011 | A1 |
20120157851 | Zwirn | Jun 2012 | A1 |
20130234891 | Natarajan et al. | Sep 2013 | A1 |
20130253317 | Gauthier | Sep 2013 | A1 |
20140046188 | Yen et al. | Feb 2014 | A1 |
20140058266 | Call et al. | Feb 2014 | A1 |
20140164965 | Lee et al. | Jun 2014 | A1 |
20140219059 | Younghouse | Aug 2014 | A1 |
20150293223 | Park et al. | Oct 2015 | A1 |
20160161589 | Benattar | Jun 2016 | A1 |
20160161594 | Benattar | Jun 2016 | A1 |
20160161595 | Benattar | Jun 2016 | A1 |
20160165338 | Benattar | Jun 2016 | A1 |
20160165341 | Benattar | Jun 2016 | A1 |
20170307755 | Brooks | Oct 2017 | A1 |
20170343668 | Brooks et al. | Nov 2017 | A1 |
20180000453 | Hunter et al. | Jan 2018 | A1 |
20180055483 | Hunter | Mar 2018 | A1 |
20190353975 | Didomenico | Nov 2019 | A1 |
Number | Date | Country | |
---|---|---|---|
20190324139 A1 | Oct 2019 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15133474 | Apr 2016 | US |
Child | 16404497 | US |