Apparatus and method for imaging objects with wavefields

Information

  • Patent Application
  • Publication Number
    20070282200
  • Date Filed
    September 09, 2005
  • Date Published
    December 06, 2007
Abstract
A transmission wave field imaging method, comprising the transmission of an incident wave field into an object, the incident wave field propagating into the object and at least partially scattering. Also includes the measuring of a wave field transmitted, at least in part, through the object to obtain a measured wave field, the measured wave field based, in part, on the incident wave field and the object. Additionally, the processing of the measured wave field utilizing a recursive reconstruction algorithm to generate an image data set representing at least one image of the object.
Description
INCORPORATION BY REFERENCE

Material on one (1) compact disc accompanying this patent and labeled COPY 1, a copy of which is also included and is labeled as COPY 2, is incorporated herein by reference. The compact disc has a software appendix thereon in the form of the following three (3) files:


Appendix A.doc, Size: 16 Kbytes, Created: Aug. 13, 2001;


Appendix B.doc, Size: 19 Kbytes, Created: Aug. 13, 2001; and


Appendix C.doc, Size: 1 Kbytes, Created: Aug. 13, 2001.


COPYRIGHT NOTICE

A portion of the disclosure of this patent document contains material to which a claim of copyright protection is made. The copyright owner(s) has/have no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but reserves all other rights in the copyrighted work.


BACKGROUND OF THE INVENTION

It has long been known that elastic, electromagnetic or acoustic waves in homogeneous and layered environments in the frequency range of a fraction of a cycle per second up to hundreds of millions of cycles per second and higher can be propagated through many solids and liquids. Elastic waves are waves that propagate through solids, and have components of particle motion both parallel (longitudinal, or pressure, wave) and perpendicular (shear wave) to the direction of propagation of the wave energy itself. In contrast to this, acoustic waves are those waves that generate particle motion that is exclusively parallel to the propagation of wave energy. Electromagnetic waves have components of variation of field strength solely in the direction perpendicular to the direction of propagation. All of these types of waves may be used to image the acoustic longitudinal wavespeed and absorption, the electromagnetic wavespeed and absorption, the shear wavespeed, and the density of the material through which the wave energy has traveled.


It is also known that scattering is produced not only by spatial fluctuations in acoustic impedance, which is the product of mass density times wavespeed, but also by independent fluctuations in electromagnetic permeability, permittivity and conductivity, elastic compressibility, shear modulus, density, and absorption. These lead to variations in phase speed (which is the speed of propagation of fronts of constant phase) and in impedance (for the electromagnetic case, the ratio of the electric to the magnetic field strength). The net property of an object that describes the phenomenon of scattering in a given modality is called the “scattering potential”.
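To make the notion concrete, the sketch below computes a complex scattering potential under one common acoustic convention, gamma = (k/k0)² − 1 with a complex wavenumber k = omega/c + i·alpha. This convention, and the function name, are illustrative assumptions; the patent does not commit to one specific formula.

```python
import numpy as np

def scattering_potential(c, alpha, c0, omega):
    """Complex acoustic scattering potential under the assumed convention
    gamma = (k/k0)^2 - 1, with k = omega/c + i*alpha.  The real part then
    reflects wavespeed contrast and the imaginary part absorption."""
    k0 = omega / c0                                      # ambient wavenumber
    k = omega / np.asarray(c) + 1j * np.asarray(alpha)   # local complex wavenumber
    return (k / k0) ** 2 - 1.0
```

Where the object's wavespeed and absorption match the ambient medium, the potential vanishes, which is why only the scatterer shows up in the reconstruction.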


Whereas other imaging methods have been applied to only one modality, or have been restricted to acoustic or elastic media, the method described in this patent is applicable to any type of wave motion, whether electromagnetic, elastic (including shear wave effects) or acoustic (the scalar approximation valid in liquids and gases).


Furthermore, the ambient media may have some forms of structure (layering) or microstructure (porosity) relevant to the medical, geophysical, or nondestructive imaging applications envisioned for this technology. In the prior art, the presence of this layering or porosity has greatly diminished the effectiveness of the imaging process. The method of this patent minimizes the obscuring effect of such structures in the ambient media. In addition, we have made several changes relative to previous U.S. Pat. No. 4,662,222 that significantly extend the applicability and speed of our algorithm. These changes are described, in part, below:


As discussed in U.S. Pat. No. 4,662,222, by one of the present authors, these principles can be used to image scattering bodies within a given medium. Some definitions will clarify the procedures and methods used:


The direct or forward scattering problem is concerned with a determination of the scattered energy or fields when the elastic or electromagnetic properties of the scattering potential are known.


The inverse scattering problem, by contrast, consists in the use of scattered electromagnetic, elastic, or acoustic waves to determine the internal material properties of objects embedded in a known (ambient) medium. An incident wave field is imposed upon the ambient medium and the scatterer. The scattered field is measured at detectors placed a finite distance from the scattering objects. The material parameters of the scatterer are then reconstructed from the information contained in the incident and scattered fields. In other words, as defined herein, acoustic or electromagnetic imaging using inverse scattering techniques is intended to mean electronic or optical reconstruction and display of the size, shape, and unique distribution of material elastic or electromagnetic and viscous properties of an object scanned with acoustic, electromagnetic or elastic energy, i.e., reconstruction of that scattering potential which, for a given incident field and for a given wave equation, would replicate a given measurement of the scattered field for any source location.
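As a deliberately tiny illustration of the forward half of this pairing, the sketch below solves a discretized Lippmann-Schwinger-type system u = u_inc + G·diag(gamma)·u for the total field on a small grid. The grid, the precomputed Green's matrix G, and the function name are assumptions made for illustration, not the patent's implementation.

```python
import numpy as np

def forward_field(u_inc, gamma, G):
    """Toy direct-scattering solve: given the incident field u_inc, a
    scattering potential gamma, and a precomputed Green's matrix G,
    return the total field u satisfying u = u_inc + G @ (gamma * u)."""
    n = len(u_inc)
    A = np.eye(n, dtype=complex) - G @ np.diag(gamma)
    return np.linalg.solve(A, u_inc)
```

An inverse-scattering procedure runs this machinery in reverse: it adjusts gamma until the fields such a model predicts at the detector positions match the measured ones.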


The description of prior art summarized in U.S. Pat. No. 4,662,222 (referred to as “the previous patent”) is also relevant here. It is sufficient to say here that the use of the conjugate gradient algorithms discussed in the previous patent brought the attainment of inverse scattering into practical reality. Prior to that patent, only linear approximations to the full inverse scattering problem were amenable to solution, even with the aid of modern high-speed digital computers. The resolution achievable with the inverse scattering methods described in the previous patent was far superior to that of any ultrasound imaging in medicine previous to that time.
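For reference, the kind of iteration credited here is, at its core, the textbook conjugate gradient method. The minimal sketch below is a standard CG recurrence for a Hermitian positive-definite system A x = b, shown only to fix ideas; it is not the previous patent's actual routine.

```python
import numpy as np

def cg_solve(A, b, iters=50, tol=1e-10):
    """Textbook conjugate gradient for a Hermitian positive-definite
    system A x = b: update the estimate along A-conjugate directions
    until the residual norm drops below tol."""
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    p = r.copy()             # first search direction
    rs = np.vdot(r, r)
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / np.vdot(p, Ap)   # optimal step along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r)
        if np.sqrt(abs(rs_new)) < tol:
            break
        p = r + (rs_new / rs) * p     # next A-conjugate direction
        rs = rs_new
    return x
```

The appeal for inverse scattering is that each iteration needs only matrix-vector products, so the large dense systems arising from discretized wave equations never have to be factored.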


This invention is designed to provide improved imaging of bodies using wave field energy such as ultrasound energy, electromagnetic energy, elastic wave energy or Biot wave energy. In particular it is designed to provide improved imaging for medical diagnostic applications, for geophysical imaging applications, for environmental imaging applications, for seismic applications, for sonar applications, for radar applications and for similar applications where wave field energy is used to probe and image the interior of objects.


In particular, this invention has important applications to the field of medical diagnosis, with specific value to improved breast cancer detection and screening. The present method of choice is mammography. This choice is supported not by the outstanding performance of mammography, but rather by the fact that it is the most diagnostically effective modality for its present cost per exam. Another reason for the widespread use of mammography is the inertia of changing to other modalities. It is known that hand-probe-based clinical reflection ultrasound can match the performance of mammography in many cases, but only in the hands of specially trained ultrasound radiologists. Today, and in the foreseeable future, there are more mammography machines and radiologists trained to use them than there are trained ultrasound radiologists with access to high quality ultrasound scanners. Sophisticated breast cancer diagnostic centers use both mammography and ultrasound to compensate for weaknesses in either approach when used alone. When used alone, ultrasound and mammography require a biopsy to remove a sample of breast tissue for pathology lab analysis to determine whether a sampled lesion is cancerous or benign.


Even when mammography and ultrasound are used together, the specificity for discriminating between cancer and fibrocystic condition, or between a cancerous tumor and a fibroadenoma, is not high enough to eliminate the need for biopsy in 20 to 30 percent of lesions. Given that early diagnosis of breast cancer can ensure survival and that one woman in eight will have breast cancer in her life, it is important to the general population for cancer to be detected as early as possible. Detection of cancer at an early stage for biopsy is thus very important. However, biopsy of benign lesions is traumatic to the patient and expensive. A mammogram costs about $90, but a biopsy is about ten times more expensive. Thus, it is important that a breast cancer diagnostic system have high specificity and sensitivity to eliminate unnecessary biopsies. Increasing the rate of diagnostic true positives to near 100 percent would identify, without biopsy, all lesions that are in fact cancer. However, it is also necessary to increase the rate of true negatives to near 100 percent to eliminate biopsy of benign lesions. Neither mammography nor ultrasound, nor their joint use, has provided the combination of sensitivity and specificity needed to eliminate biopsy or to detect all cancers early enough to ensure long-term survival after breast cancer surgery for all women who have had breast exams.


There do not seem to be any obvious improvements to present mammography or hand-probe-based clinical reflection ultrasound that would significantly improve these statistics. However, there is reason to believe that inverse scattering ultrasound tomography or electrical impedance tomography can provide improved diagnostic sensitivity or specificity, whether used separately, jointly with each other, or in combination with conventional ultrasound and/or mammography. Inverse scattering ultrasound imaging provides several advantages over present clinical reflection ultrasound, which uses hand-held ultrasound probes. A hand-held probe is a transducer assembly that scans an area of the patient's body below the probe. Probes consisting of mechanical scanning transducers and probes comprising electronically scanned arrays of transducer elements are both well developed technologies.


Inverse scattering has the following advantages over clinical reflection ultrasound: (1) two separate images can be made, one of speed of sound and one of acoustic absorption; (2) these images are a quantitative representation of actual acoustic bulk tissue properties; (3) the images are machine independent; (4) the images are operator independent; (5) the images are nearly free of speckle and other artifacts; (6) all orders of scattering tend to enhance the images and do not contribute to artifacts or degrade the images; (7) the speed-of-sound and absorption images separate tissues into classes (fat, benign fluid-filled cyst, cancer, and fibroadenoma), although the cancer and fibroadenoma values tend to overlap; (8) the speed-of-sound and acoustic absorption images have excellent spatial resolution of 0.3 to 0.65 mm at or below 5 MHz; and (9) the speed-of-sound images can be used to correct reflection tomography for phase aberration artifacts and improve ultrasound reflectivity spatial resolution. Inverse scattering also easily discriminates fat from other tissues, while mammography and present clinical ultrasound cannot.


Because of the similar values of speed of sound and acoustic absorption between cancer and fibrocystic condition (including fibroadenoma), it is not known whether inverse scattering will provide the required high level of specificity to eliminate biopsy. Perhaps this performance could be achieved if inverse scattering were combined with reflection ultrasound (including reflection tomography) or with mammography or with both.


A traditional problem with inverse scattering imaging is the long computing time required to make an image. Diffraction tomography is an approximation to inverse scattering that uses a first-order perturbation expansion of some wave equation. Diffraction tomography is extremely rapid in image computation, but suffers a fatal flaw for medical imaging: it produces images that are blurred and not of acceptable quality for diagnostic purposes, and it is not quantitatively accurate. In our last patent application, we addressed the computational speed problem of inverse scattering and showed how to increase the calculation speed by two orders of magnitude over finite difference methods and integral equation methods. This speedup was achieved by use of a parabolic marching method. The improvement in speed was sufficient to allow single 2-D slices to be imaged at frequencies of 2 MHz using data collected in the laboratory. However, making a full 3-D image with a 3-D algorithm, or even from several stacked 2-D image slices made with a 2-D algorithm, would have required computing speed not available then and only becoming available now.


Another imaging modality that has been investigated for detecting breast cancer is EIT (electrical impedance tomography). This modality has been investigated for many years and many of its features are well known. One of its great advantages for breast cancer detection is its high selectivity between cancer and fibrocystic conditions, including fibroadenoma. Cancer has high electrical conductivity (low electrical impedance), while fibrocystic conditions and fibroadenoma have low electrical conductivity (high electrical impedance). However, EIT has poor spatial resolution. EIT also requires the use of inversion algorithms similar (in a general sense) to those of inverse scattering. In addition, EIT algorithms have mostly been used to make 2-D images. Work on making 3-D EIT images is less developed because of the increased computer run time of 3-D algorithms.


Other problems with mammography may be listed, such as the pain associated with compressing the breast between two plates, done to reduce the thickness of the breast in order to improve image contrast and cancer detection potential. Another problem is the accumulated ionizing radiation dose over years of mammography. It is true that a single mammogram has a very low x-ray dose, especially with modern equipment. However, if mammograms are taken every year from age 40 or earlier, by age 60 to 70 the accumulated dose begins to contribute to breast cancer. The effect is slight, and the benefits of diagnosis outweigh this risk. Nevertheless, it would be an advantage to eliminate this effect. Mammography is also not useful in many younger women (because of their tendency toward greater breast density) or in older women with dense breasts (such as lactating women). About 15 percent of women have breasts so dense that mammography has no value, and another 15 percent have breasts dense enough that the value of mammography is questionable. Thus 30 percent of all mammograms are of questionable or zero value because of image artifacts from higher-than-normal breast density.


SUMMARY OF THE INVENTION

The present invention describes a transmission wave field imaging method, comprising the transmission of an incident wave field into an object, the incident wave field propagating into the object and at least partially scattering. The invention also describes the measuring of a wave field transmitted, at least in part, through the object to obtain a measured wave field, the measured wave field based, in part, on the incident wave field and the object. The invention further describes the processing of the measured wave field utilizing a recursive reconstruction algorithm to generate an image data set representing at least one image of the object.


The imaging method above also describes a recursive reconstruction algorithm that calculates a predicted total wave field utilizing a single independent variable, the single independent variable representing scatter potential. The recursive reconstruction algorithm represents a gradient-based recursive algorithm that solves for an independent scatter potential variable, the gradient-based recursive algorithm treating the total wave field as a dependent variable depending upon the independent scatter potential variable. The processing calculates a predicted total wave field based on the incident wave field and generates the image data set based on a comparison of the predicted total wave field and the measured wave field.
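A heavily simplified sketch of such a gradient-based recursion illustrates the structure described above: the scattering potential gamma is the only independent unknown, the predicted total field is recomputed from gamma on each pass, and gamma is stepped along the negative gradient of the misfit between predicted and measured fields. The Born-linearized forward model used here is an assumption chosen for brevity; the patent's algorithm is more general.

```python
import numpy as np

def reconstruct(u_inc, G, d_meas, steps=200, lr=0.1):
    """Gradient-based reconstruction sketch: gamma is the independent
    variable; the field is treated as dependent on it.  Each iteration
    predicts the total field from gamma (Born approximation here),
    forms the residual against the measured field d_meas, and updates
    gamma along the negative gradient of 0.5*||residual||^2."""
    gamma = np.zeros(len(u_inc), dtype=complex)
    for _ in range(steps):
        pred = u_inc + G @ (gamma * u_inc)           # predicted total field
        resid = pred - d_meas                        # data misfit
        grad = np.conj(u_inc) * (G.conj().T @ resid) # misfit gradient w.r.t. gamma
        gamma = gamma - lr * grad
    return gamma
```

The comparison of predicted and measured fields mentioned in the text is exactly the `resid` term: when it is driven toward zero, the recovered gamma explains the measurements.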


The imaging method above also describes transmitting multiple incident wave fields from associated multiple source positions. The incident wave field includes multiple frequency components, the recursive reconstruction algorithm separately processing the multiple frequency components.


The imaging method above also describes an incident wave field representing an ultrasound wave field. The method incorporates the displaying of at least one image as at least two co-displayed images, the at least two images including a speed-of-sound image and an attenuation image. The incident wave field represents an ultrasound wave field, and the processing includes generating a speed-of-sound data set and an attenuation data set, the speed-of-sound data set representing speed of the ultrasound wave field through the object, the attenuation data set representing attenuation of the ultrasound wave field within the object. The incident wave field comprises at least one of acoustic energy, elastic energy, electromagnetic energy and microwave energy.


The imaging method above also describes the object as a breast, and a transmission source being tilted to direct the incident wave field toward the chest wall of a patient to image features close to the chest wall. The transmitting and receiving are performed by a transmitter and receiver, respectively, the transmitter and/or the receiver are moved with respect to the object.


The imaging method above also describes repeating the transmitting and receiving at multiple slices within the object to generate a 3D volume of image data sets, displaying 2-D images representative of slices through the object, and displaying a 3-D image of at least a portion of the object. The measuring is performed at multiple positions about the object.


The present invention also describes a transmission wave field imaging method wherein an incident wave field is transmitted into an object, the incident wave field propagating into the object and at least partially scattering. The invention also includes the measuring of the total wave field transmitted, at least in part, through the object to obtain a measured wave field, the measured wave field based, in part, on the incident wave field and the object. Furthermore, the method describes processing the measured wave field utilizing a gradient-based reconstruction algorithm to generate an image data set representing at least one image of the object. The gradient-based algorithm solves for an independent scatter potential variable and treats the measured wave field as a dependent variable depending upon the independent scatter potential variable.


The present invention also describes a transmission wave field imaging method wherein an ultrasound incident wave field is transmitted into an object, the incident wave field propagating into the object and at least partially scattering. The invention further describes the measuring of an ultrasound total wave field transmitted, at least in part, through the object, wherein the measured wave field is defined, in part, by the incident wave field and the object. Furthermore, the invention obtains speed-of-sound and attenuation values, the speed-of-sound values corresponding to the speed of the incident wave field through the tissue and the attenuation values corresponding to an amount of attenuation of the incident wave field propagated through the tissue. The invention describes the construction of a speed-of-sound image from the speed-of-sound values, an attenuation image from the attenuation values, and the displaying of the speed-of-sound image and the attenuation image.


The imaging method described above also calculates a scattering potential based on the measured wave field wherein the attenuation image and the speed-of-sound image are representative of a component of the scattering potential. The speed-of-sound and attenuation values represent ranges of gray scale values for pixels comprising the speed-of-sound and attenuation images.


The imaging method described above utilizes an inverse scattering algorithm to reconstruct the speed-of-sound and attenuation images based on the measured wave field. The processing calculates a predicted total wave field based on the incident wave field and the constructing generates the speed-of-sound and attenuation images based on a comparison of the predicted total wave field and the measured wave field.


The present invention also describes a method for displaying transmitted ultrasound breast images. Transmitted ultrasound data sets are obtained representative of a measured wave field transmitted, at least in part, through an object, wherein the measured wave field is defined, in part, by an ultrasound incident wave field, a scattering potential and the object through which the wave is transmitted. Speed-of-sound and attenuation values are also obtained. The speed-of-sound values correspond to speed of ultrasound incident wave fields through tissue, the attenuation values correspond to an amount of attenuation of ultrasound incident wave fields propagated through the tissue. Speed-of-sound images are constructed from the speed-of-sound values and attenuation images are constructed from the attenuation values. The method further describes the displaying of those images.


The present invention also describes a transmission wave field imaging device. The device comprises a transmitter for transmitting an incident wave field into an object, wherein the incident wave field propagates into the object and at least partially scatters. The device includes a receiver for measuring the total wave field transmitted, at least in part, through the object to obtain a measured wave field. The measured wave field is based, in part, on the incident wave field and the object. The device also includes a processing module for processing the measured wave field. The module utilizes a recursive reconstruction algorithm to generate an image data set representing at least one image of the object.


The imaging device above also describes a processor module comprising a plurality of processors operated in parallel, and a PCI bus interconnecting the processing module and the transmitter and/or the receiver. The processor module may be implemented on any of a personal computer, a hand-held device, a multi-processor system, or a network PC. The transmitter operates in a pulse-echo mode to perform reflective tomography and is rotated about the object to obtain measured data at multiple receive positions around the object; the measured data defines the measured wave field. A display displays the image, based on the image data set, using the speed-of-sound image and/or the attenuation image.


The imaging device above also describes the receiver and/or the transmitter as including transducer elements arranged in at least one of a circular array, a linear array, a 2D planar array and a checkerboard 2D array. The transmitter sequentially transmits into the object first and second incident wave fields having different first and second fundamental frequencies, respectively, and sequentially transmits a high frequency incident wave field and a low frequency incident wave field.


The present invention also describes a transmission wave field imaging system. The system comprises a transmitter for transmitting an incident wave field into an object, wherein the incident wave field propagates into the object and at least partially scatters. The system includes a receiver measuring a total wave field transmitted, at least in part, through the object to obtain a measured wave field. The measured wave field is defined, in part, by the incident wave field, a scattering potential and the object. The measured wave field includes at least one of speed of sound information and attenuation information characterizing the object. The system also includes a processing module for processing, on-site and in real time, the measured wave field to generate an image of the object. The image comprises pixel values based on at least one of the speed of sound information and the attenuation information characterizing the object. The system further comprises a display for displaying, on-site and in real time, the image containing at least one of the speed of sound information and the attenuation information of the object.


The wave field imaging system above also describes a breast scanner including the transmitter and receiver, and the processing module that generates the image substantially during a breast scan. The processor module calculates a scattering potential from the measured wave field. The scattering potential includes a real component and an imaginary component, the real component defining the speed of sound information and the imaginary component defining the attenuation information.
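The split described above can be sketched numerically. The sketch assumes the same illustrative convention as before, gamma = (k/k0)² − 1 with k = omega/c + i·alpha; the patent states only that the real component carries the speed information and the imaginary component the attenuation information, so the exact mapping below is an assumption.

```python
import numpy as np

def speed_and_attenuation(gamma, c0, omega):
    """Split a complex scattering potential map into a speed-of-sound
    image and an attenuation image, assuming gamma = (k/k0)^2 - 1
    with complex wavenumber k = omega/c + i*alpha."""
    k0 = omega / c0
    k = k0 * np.sqrt(np.asarray(gamma) + 1.0)  # complex wavenumber map
    c = omega / k.real                         # phase-speed (speed-of-sound) image
    alpha = k.imag                             # attenuation image
    return c, alpha
```

In a display pipeline of the kind described, the two returned arrays would be mapped to gray-scale pixel values and co-displayed as the speed-of-sound and attenuation images.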


The wave field imaging system above also describes a phase detector joined to the receiver wherein the phase detector outputs real and imaginary signal components. The transmitter and receiver repeat the transmitting and receiving operations at multiple slices within the object to collect a 3D volume of data and the display displays 2-D images representative of slices through the object. The display also displays a 3-D image of at least a portion of the object.


The wave field imaging system above also describes a movable carriage supporting the transmitter and receiver, wherein the movable carriage rotates about an axis of a cylindrical chamber configured to receive the object. The movable carriage may also be adjusted vertically along the axis of the cylindrical chamber. A fluid tank is configured to receive at least one of the object, the transmitter and the receiver.


Additional objects and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objects and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other objects and features of the invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as further set forth hereinafter.




BRIEF DESCRIPTION OF THE DRAWINGS

In order that the manner in which the above-recited and other advantages and objects of the invention are obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:



FIG. 1 is a side-elevational view shown partially in cross section which schematically illustrates an acoustic scanner which may be used with the apparatus and method of the present invention;



FIG. 1A is a two dimensional view of a cross-borehole geometry imaging method, according to the present invention, using inverse scattering;



FIG. 1B is a two dimensional view of a reflection geometry imaging method, according to the present invention, using inverse scattering with both a wave energy source means and wave energy detection means juxtaposed to the image field;



FIG. 1C is a two dimensional view of a reflection geometry imaging method, according to the present invention, using inverse scattering with a remote wave energy source means and remote wave energy detection means;



FIG. 2 is a perspective view illustrating one type of configuration which may be used for the transducer arrays employed in the scanner of FIG. 1;



FIG. 3 is a top view of the transducer arrays shown in FIG. 2;



FIGS. 4A-4B are schematic diagrams illustrating one embodiment of an electronic system that may be used to implement the apparatus and method of the present invention, and generally illustrate the invention in both the acoustic and electromagnetic wave field cases;



FIG. 4A illustrates, with the use of an array processor, the scanning of an object using multiple frequencies, the frequency of an oscillator being changed several times at each illustrated transmitter position, so that sequential multiple frequencies may be transmitted by changing the frequency of the oscillator;



FIGS. 4C-4F schematically illustrate how electronic multiplexing may be used to eliminate the need for mechanical rotation, thus further enhancing system performance;



FIG. 4C shows n transducer elements in a circular array closely spaced to form a continuous array surrounding a body for use in the present inventive method of electronically multiplexing the array elements to eliminate the need for mechanical rotation;



FIG. 4D illustrates the transducers arranged in a linear fashion for pure electronic multiplexing of a linear transducer array according to the present invention; the elements in FIGS. 4C and 4D may be switched to act as either a transmitter or a receiver by use of an active switch;



FIG. 4E is an electrical schematic diagram showing how the elements in FIGS. 4C and 4D may be switched to act as either a transmitter or a receiver;



FIG. 4F shows how a passive network of diodes and resistors may be used to allow a single element to act as either a transmitter or a receiver, or in both capacities by the use of diode networks to isolate the transmitter from the receiver circuits according to the present invention;



FIG. 4G is an electrical schematic illustration of a Bow-Tie Antenna used with the apparatus and method of the present invention;



FIG. 4H is an electrical schematic illustration of a Log-Periodic Antenna used with the apparatus and method of the present invention;



FIG. 4I is an electrical schematic illustration of a Spiral Antenna used with the apparatus and method of the present invention;



FIGS. 5A and 5B schematically illustrate another electronic system which can be used to implement the apparatus and method of the present invention;



FIG. 5A shows multiple frequency signals transmitted by selecting and generating a type of waveform which inherently includes multiple frequencies, and using a parallel processor, so as to produce repetitive narrow bandwidth signals or a wide bandwidth signal, which differs from the system shown in FIG. 4A in that the amplified analog signals output by a preamplifier are digitized by a high-speed analog-to-digital converter rather than being input to a phase detector;



FIG. 5B shows a waveform generator having five basic components;



FIG. 6A is a system block diagram of an optical microscope used with the apparatus and method of the present invention;



FIG. 6B is a perspective view which schematically illustrates an interferometer used with the apparatus and method of the present invention;



FIG. 7 shows parallelization of the forward problem according to the present invention;



FIG. 8 illustrates a generic imaging algorithm, according to the present invention;



FIG. 9 is a photograph of a television display screen showing an image that simulates a cancer and an active image obtained as the inverse scattering solution using the method and a computer simulation of the apparatus of the present invention;



FIGS. 9A through 9D detail an expanded imaging algorithm with respect to that shown in FIG. 8 which schematically illustrates a flow diagram for programming a computer or combination of computer and array processor or parallel processor to enable the computer or combination mentioned to control the electronic system of FIG. 4 or 5 and the apparatus of FIGS. 1-3 so as to rapidly develop a reconstruction of the image of an object from scattered acoustic energy using inventive scattering techniques, in accordance with one presently preferred method of the invention;



FIG. 10 shows the generic solution to the forward problem according to the present invention;



FIGS. 11A-1, 11A-2 show the generic solution to the forward problem (or any square system) using a biconjugate gradient algorithm according to the present invention;



FIGS. 11B-1, 11B-2 show the generic solution to the forward problem using the stabilized conjugate gradient algorithm, according to the present invention;



FIG. 11C illustrates the application of the Lippmann-Schwinger Operator to the internal field estimate according to the present invention;



FIG. 12 illustrates the application of the Lippmann-Schwinger Operator in the presence of layering to the internal field estimate according to the present invention;



FIG. 13 illustrates the application of the Hermitian conjugate of the Lippmann-Schwinger operator;



FIG. 14 illustrates the application of the second Lippmann-Schwinger operator in the generic case;



FIG. 15 illustrates the application of the Hermitian conjugate of the second Lippmann-Schwinger operator. FIG. 15 also illustrates a subroutine for the Top Loop Box in FIG. 8, according to the present invention;



FIG. 16 illustrates a rectangular scattering matrix according to the present invention;



FIG. 17 illustrates a N-by-N scattering region according to the present invention;



FIG. 18 illustrates a N-by-N scattering coalesced region according to the present invention;



FIGS. 19A and B illustrate application of the Hermitian conjugate of the Jacobian Matrix in the presence of layering according to the present invention;



FIGS. 20A through 20C illustrate application of the Jacobian Matrix in the generic case (free space—or no layering), according to the present invention;



FIG. 21 illustrates the propagation or transportation of fields from image space to detector position;



FIG. 22 illustrates application of the Hermitian conjugate of the transportation of fields from image space to detector position;



FIGS. 23A and B illustrate application of the Jacobian Matrix with correlation, according to the present invention;



FIG. 24 shows a transducer coupling according to the present invention;



FIGS. 25A, B, and C illustrate application of the Hermitian Jacobian Matrix in the generic case according to the present invention;



FIG. 26 illustrates the scattering subroutine, according to the present invention;



FIG. 27 shows the relevant geometry for the R reflection coefficients;



FIGS. 28A through 30C are photographs of computer renderings of computer models and their reconstructions using inverse scattering;



FIG. 31: Illustration of the water bath scanner and the associated two transducer arrays;



FIG. 32: Tilting cot for patient positioning when scanning using the water bath scanner;



FIG. 33.a: Block Diagram of the total water bath scanner system;



FIG. 33.b: Block Diagram of the transmit and receive MUX;



FIG. 34.a: Side view of the ultrasound compression plate scanner for transmission inverse scattering and reflection ultrasound;



FIG. 34.b: End view of the ultrasound compression plate scanner for transmission inverse scattering and reflection ultrasound;



FIG. 34.c: Isometric view of the compression plate scanner showing options for placing transducer elements either inside or outside of the top and bottom compression plates;



FIG. 34.d: Isometric view of the compression plate scanner showing options for placing side compression plates with optional transducer elements either inside or outside of the side compression plates;



FIG. 34.e: Isometric view of the compression plate scanner showing the option for placing front compression plates with optional transducer elements either inside or outside of the front compression plates;



FIG. 35.a: Speed of sound model of a breast compressed between two plates as in a mammogram;



FIG. 35.b: Reconstructed image of sound speed (including a random component) using a ½ wavelength pixel parabolic inverse scattering algorithm;



FIG. 35.c: Attenuation model of a breast compressed between two plates as in a mammogram;



FIG. 35.d: Reconstructed image of the acoustic attenuation using a ½ wavelength pixel parabolic inverse scattering algorithm;



FIG. 35.e: B-scan image created from simulated data for the model defined in FIGS. 35.a and 35.c with scattering from changes in speed of sound;



FIG. 36.a: “Time of flight image” i.e., speed of sound map (image) obtained from real time of flight data collected on a laboratory scanner through application of a time of flight CT algorithm;



FIG. 36.b: Sound speed map (image) obtained after 15 steps of the new fast 2 wavelength pixel algorithm starting with the same data as the time of flight CT algorithm;



FIG. 37.a: Normalized true speed of sound image, γ=[cO/c(x,y)−1], used to generate simulated data to test the 2 wavelength pixel algorithm;



FIG. 37.b: Normalized speed of sound image, γ=[cO/c(x,y)−1], reconstructed by the 2 wavelength pixel algorithm;



FIG. 37.c: Sampled true and reconstructed 2 wavelength pixel images through the horizontal, center line;



FIG. 38.a: A 120 by 120 pixel image of Re(γ) for the true object;



FIG. 38.b: A 120 by 120 pixel image of the Re(γ) reconstructed using the straight line, time of flight CT algorithm;



FIG. 38.c: A 120 by 120 pixel image made by the new fast parabolic algorithm;



FIG. 39.a: True 3-D speed of sound x-y image for breast cancer model on plane y=75;



FIG. 39.b: Parabolic reconstructed 3-D speed of sound image for the breast cancer model on plane y=75;



FIG. 40.a: Commercial B-scan Image of the cancerous breast tissue;



FIG. 40.b: Reflection tomography image of the corresponding slice of the cancerous breast tissue shown in the companion B-scan image, FIG. 40.a;



FIG. 41: The discrete propagator for Δ=2λ, Δ=λ and Δ=λ/2 for the new ‘big pixel’ parabolic method;



FIG. 42: 2.5 MHz to 5 MHz bandwidth time signal through a cylinder; and



FIG. 43: Squared envelope of the 2.5 MHz to 5 MHz bandwidth time signal through a cylinder.




DEFINITIONS

Definition 1. An inverse scattering imaging method is any imaging method which totally or partially uses wave equation modeling and which utilizes a nonlinear operator relating the inversion data to the image components.


Definition 2. The inversion data is defined by that transformation of the wavefield data which is the range of the nonlinear operator. The domain of the nonlinear operator is the image components.


Definition 3. A parabolic propagation step is an arithmetic operation which computes the wavefield on a set of surfaces, given the wavefield on another, disjoint set of surfaces, using a parabolic differential (or difference) equation in the derivation of the arithmetic operation.


Definition 4. By λ we mean the wavelength in the imbedding medium for the maximum temporal frequency in the wavefield energy.


Definition 5. By Δ we mean the average spatial separation of the points used to discretize the wavefield in the computer implementation of the method.


Definition 6. By a short convolution operation we mean a discrete convolution which is faster if done by direct summation rather than by FFT.


DETAILED DESCRIPTION OF INVENTION

The following explanation of notation is given to facilitate an understanding of the invention. Unless otherwise noted, the notation will essentially be the same as in U.S. Pat. No. 4,662,222. The scattering potential γ changes from point to point within an object or body, as well as changing with the frequency of the incident field. Thus

γ_ωj ≡ γ_ω(x_j), x_j ∈ R³

is used to signify the scattering potential at pixel j (or point j) for the incident field at frequency ω. γ_ω can be considered to be the vector composed of the values for all pixels j:

γ_ω ≡ (γ_ω1, γ_ω2, . . . , γ_ωN)^T,

where T indicates "transpose". For purposes of exposition and simplicity, we will henceforth consider the case where γ is independent of frequency, although this is not necessary. This convention is also essentially the same as that employed in U.S. Pat. No. 4,662,222.


Notation


Vector Field Notation


The following notation will be used throughout the patent to denote a vector field describing elastic waves, or electromagnetic waves:
f(r) ≡ (f_x(x,y,z), f_y(x,y,z), f_z(x,y,z)) ≡ (f_x, f_y, f_z)^T

(T denotes "transpose") is used to represent the total field.

f^i(r) ≡ (f_x^i(x,y,z), f_y^i(x,y,z), f_z^i(x,y,z)) ≡ (f_x^i, f_y^i, f_z^i)^T

(T denotes "transpose") is used to denote the incident field. The incident field is the field that would be present if there were no object present to image. In the case of layering in the ambient medium, it is the field that results from the unblemished, piecewise constant layering.


The scattered field is the difference between the total field and the incident field; it represents that part of the total field that is due to the presence of the inhomogeneity, i.e. the "mine" for the undersea ordnance locater, the hazardous waste cannister for the hazardous waste locator, the school of fish for the echo-fish locator/counter, and the malignant tumor for the breast scanner:

f^s(r) ≡ f(r) − f^i(r) ≡ (f_x^s(x,y,z), f_y^s(x,y,z), f_z^s(x,y,z)) ≡ (f_x^s, f_y^s, f_z^s)^T

(T denotes "transpose"). f_ωφ^inc(r) denotes the scalar incident field coming from direction (source position) φ at frequency ω. The r could represent either a 3-dimensional or a 2-dimensional position vector.


Scalar Field Notation


A scalar field is indicated by a nonbold f(r), r ∈ R³.


EXAMPLE 1
Acoustic Scattering—Scalar Model Equations

This first example is designed to give the basic structure of the algorithm and to point out why this particular implementation is so fast compared to the present state of the art. The background medium is assumed to be homogeneous (no layering). This example will highlight the exploitation of the convolutional form via the FFT, the use of the Frechet derivative in the Gauss-Newton and FR-RP algorithms, the use of the biconjugate gradient algorithm (BCG) for the forward problems, and the independence of the forward problems across views and frequencies. It will also set up some examples which will elucidate the patent terminology. The field can be represented by a scalar quantity f. The object is assumed to have finite extent. The speed of sound within the object is c = c(x). The speed of sound in the background medium is the constant c₀. The mass density is assumed to be constant. The attenuation is modelled as the imaginary part of the wavespeed. These are simplifying assumptions, which we make so as to focus on the important aspects of the imaging algorithm. By no means is the imaging algorithm restricted to the scalar case, either in theory, or, more importantly, in practical implementation.


We consider the two dimensional case first (equivalently, the object and incident fields are assumed to be constant in one direction). Given an acoustic scatterer with constant mass density and constant cross-section in z, illuminated by a time harmonic incident field, f_ωφ^inc, with e^(iωt) dependence and source position (incident angle) φ, the total field satisfies the following integral equation:

f_ωφ^inc(ρ) = f_ωφ(ρ) − ∫∫ γ(ρ′) f_ωφ(ρ′) g_ω(|ρ − ρ′|) dx′dy′,  where γ = c₀²/c² − 1  (1)


where ρ = (x, y) and where f ≡ f_ωφ is the field internal to the object at frequency ω, resulting from the incident field f_ωφ^inc from source position φ. The 2-D Helmholtz Green's function is given by:

g_ω(|ρ − ρ′|) = (k_ω²/4i) H₀^(2)(k_ω|ρ − ρ′|),  k_ω = ω/c₀

where H₀^(2) is the Hankel function of the second kind and zeroth order.


Now it is required to compare the field measured at the detectors, with the field as predicted by a given guess, γ(n) for γ. To accomplish this, first define the scattered field at the detectors as the total field minus the incident field.

f_ωφ^sc(ρ) ≡ f_ωφ(ρ) − f_ωφ^inc(ρ)


This represents the field due entirely to the scattering potential. Using this relation to rewrite (1), gives

f_ωφ^sc(ρ) = ∫∫ γ(ρ′) f_ωφ(ρ′) g_ω(|ρ − ρ′|) dx′dy′  (2)


These two equations, (1) and (2), are the basis of the imaging algorithm. They must be discretized and then implemented on the computer in order to solve practical imaging problems. The purpose of the apparatus and method herein described is to solve the discretized form of these equations without making any linearizing assumptions (such as are used in conventional diffraction tomography), and in real time, with presently available computers.


Discretization of the Acoustic Free-Space Lippmann Schwinger Integral Equation and Green's Function—2D Free Space Case


Let us for a moment drop the frequency and source position subscripts. The scalar field "γf" is given by

γf(x,y) ≡ γ(x,y) f(x,y)


Discretization of the integral equation is achieved by first decomposing this function into a linear combination of certain (displaced) basis functions
γf(x,y) ≈ Σ_{n=1}^{N_x} Σ_{m=1}^{N_y} a_nm s(x − nδ, y − mδ)  (3)

where it has been assumed that the scatterer γ lies within the support [0, N_xδ] × [0, N_yδ], a rectangular subregion.


The basis functions s can be arbitrary except that we should have cardinality at the grid nodes:

s(0, 0) = 1

whereas for all other choices of n′, m′:

s(n′δ, m′δ) = 0,  (n′, m′) ≠ (0, 0).


An example of such a function is the 2-D "hat" function. Our algorithm uses the "sinc" function as its basic building block: the Whittaker sinc function, which is defined as:

sinc(x) = sin(πx)/(πx)


The two dimensional basis functions are defined by the tensor product:
s(x − nδ, y − mδ) = sinc((x − nδ)/δ) · sinc((y − mδ)/δ)  (4)

If the equality in equation (3) is presumed to hold at the grid points x = nδ, y = mδ, the coefficients a_nm can be determined to be precisely the values of the field γf at the grid points:

γf(x,y) ≈ Σ_{n=1}^{N_x} Σ_{m=1}^{N_y} γ(nδ, mδ) f(nδ, mδ) s(x − nδ, y − mδ)  (5)

Now this expression for the product γf can be substituted into (1):

f^i(x,y) = f(x,y) − (k_ω²/4i) Σ_n Σ_m γ_nm f_nm ∫∫ H₀^(2)(k_ω|ρ − ρ′|) s(x′ − nδ, y′ − mδ) d²ρ′  (6)

In particular this equation holds at

(x, y) = (nδ, mδ)

for which we get:

f_nm^i = f_nm − Σ_{n′=1}^{N_x} Σ_{m′=1}^{N_y} γ_{n′m′} f_{n′m′} g_{n−n′, m−m′}  (7)

n = 1, . . . , N_x; m = 1, . . . , N_y, where the 2-D discrete Green's function is defined as:

g(n − n′, m − m′) ≡ (k_ω²/4i) ∫∫ H₀^(2)(k_ω|(nδ − x′, mδ − y′)|) s(x′ − n′δ, y′ − m′δ) d²ρ′  (8)


Although it is not obvious that g depends on n, n′, m, and m′ only through the differences n − n′ and m − m′, this is indeed the case: substituting the transformation

x′ → x′ + n′δ

y′ → y′ + m′δ

into this last equation shows explicitly that the discretized Green's function g does in fact depend only on the differences n − n′ and m − m′. Thus the discrete equation (7) has inherited the convolutional form of the continuous equation (1). It is this fact which allows the use of the 2-D FFT to compute the summation in (7) in only order N_xN_y log₂(N_xN_y) arithmetic operations, and it is therefore critical to the real-time speed of the algorithm.
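The convolutional structure of (7) can be made concrete in code. The following sketch (Python/NumPy; all function names are our own illustration, and the large-argument asymptotic form of H₀^(2) stands in for the exact sinc-quadrature kernel of equation (8), which `scipy.special.hankel2` would supply exactly) applies the discretized operator (I − G_ω[γ]) with a zero-padded 2-D FFT, so that one application costs order N_xN_y log(N_xN_y) instead of (N_xN_y)²:

```python
import numpy as np

def green_kernel(nx, ny, k, delta):
    """Sampled 2-D Green's kernel g(d1, d2) for offsets d1 in
    [-(nx-1), nx-1] and d2 in [-(ny-1), ny-1].  As a stand-in for the
    sinc-quadrature integral of equation (8) we sample the Helmholtz
    kernel using the large-argument form of the Hankel function,
    H0^(2)(x) ~ sqrt(2/(pi x)) exp(-i(x - pi/4)), times the cell area."""
    d1 = np.arange(-(nx - 1), nx)
    d2 = np.arange(-(ny - 1), ny)
    r = delta * np.hypot(*np.meshgrid(d1, d2, indexing="ij"))
    r[nx - 1, ny - 1] = delta / 4          # crude regularization at r = 0
    h0 = np.sqrt(2 / (np.pi * k * r)) * np.exp(-1j * (k * r - np.pi / 4))
    return (k ** 2 / 4j) * h0 * delta ** 2

def kernel_fft(ker, nx, ny):
    """Embed the (2nx-1, 2ny-1) kernel on a (2nx, 2ny) grid in wraparound
    order and return its 2-D FFT, so circular convolution reproduces the
    linear convolution of (7) on the image region."""
    pad = np.zeros((2 * nx, 2 * ny), dtype=complex)
    pad[: 2 * nx - 1, : 2 * ny - 1] = ker
    pad = np.roll(pad, (-(nx - 1), -(ny - 1)), axis=(0, 1))
    return np.fft.fft2(pad)

def lippmann_schwinger_apply(f, gamma, g_hat):
    """Apply (I - G[gamma]) to the field f, i.e. equation (7)/(9)."""
    nx, ny = f.shape
    pad = np.zeros(g_hat.shape, dtype=complex)
    pad[:nx, :ny] = gamma * f              # pointwise multiplication [gamma]f
    conv = np.fft.ifft2(g_hat * np.fft.fft2(pad))[:nx, :ny]
    return f - conv
```

Because the padded grid is 2N_x × 2N_y, the circular convolution computed by the FFT agrees exactly with the linear convolution sum of (7) on the image region; only the kernel values themselves are a simplification here.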


Expression of the Discretized Lippmann-Schwinger Equation in Compact Notation


The discretized Lippmann-Schwinger equation can be rewritten using a compact operator notation. We write (7) as:

f_ωφ^i = (I − G_ω[γ]) f_ωφ  (9)


where Gω represents 2-D discrete convolution with the discrete, 2-D Green's function for frequency ω and [γ] denotes the operator consisting of pointwise multiplication by the 2-D array γ. I denotes the identity operator.


In exactly the same manner, and with the same definitions, we can write the scattered field equation (2) as:

f_ωφ^sc = G_ω[γ] f_ωφ  (10)


for the scattered field inside of the object, and finally

f_ωφ^meas ≡ T(f_ωφ^sc) = T(G_ω[γ] f_ωφ)  (11)


for the measured value of the scattered field at the detectors. The T operator is used to indicate the “truncation” of the calculated scattered field, calculated on the entire convolution range (a rectangle of samples [1,Nx]×[1,Ny] covering γ), onto the detectors which lie outside the support of γ. (see FIGS. 1A/B/C.). In other words T is simply the operation of “picking” out of the rectangular range of the convolution, those points which coincide with receiver locations.


Extension of the Method to Remote Receivers with Arbitrary Characteristics


In the event that the receivers do not lie within the range of the convolution and/or they are not simple point receivers, we need to modify the measurement equations (11). It is well known that, given source distributions within an enclosed boundary, the scattered field everywhere external to the boundary can be computed from the values of the field on the boundary by an application of Green's theorem with a suitably chosen Green's function, i.e.:
f^s(ρ) = ∫_B f^s(ρ′) P(ρ, ρ′) dl′,  ρ outside boundary B  (12)

where P is the Green's function, the integral is taken around the boundary, and dl′ is the differential arclength (in the 3-D case, the integral is on an enclosing surface). Equation (12) allows for the construction of a matrix operator which maps the boundary values of the rectangular support of the convolution (2N_x + 2N_y − 4 values in the discrete case) to values of the scattered field external to the rectangle. Furthermore, this "propagator matrix" can be generalized to incorporate more complex receiver geometries. For example, suppose that the receiver can be modeled as an integration of the scattered field over some support function, i.e.:


v_n ≡ signal from receiver n = ∫∫ f^s(ρ) S_n(ρ) ds  (13)

where S_n is the support of receiver n. Then from (12):

v_n = ∫_B f^s(ρ′) {∫∫ S_n(ρ) P(ρ, ρ′) ds} dl′ = ∫_B f^s(ρ′) P(n, ρ′) dl′,  where P(n, ρ′) ≡ ∫∫ S_n(ρ) P(ρ, ρ′) ds  (14)

Discretizing the integral gives the matrix equation:

v_n = Σ_{i=1}^{N_b} P_ni f_i^s,  n = 1, . . . , N_d  (15)

where Nb=2Nx+2Ny−4 is the number of boundary (border) pixels and Nd is the number of receivers. Equation (15) defines the matrix that we shall henceforth refer to as P or the “propagator matrix” for a given distribution of external receivers. The equation:

v_ωφ^meas = P_ω(G_ω[γ] f_ωφ)  (16)

includes (11) as a special case for which the receivers are point receivers inside the convolution support. Note that P is a function of frequency, but is not a function of source position. vωφmeas is a vector of dimension Nd.


The added flexibility of this propagator matrix formulation is particularly advantageous when interfacing our algorithms with real laboratory or field data. Oftentimes the precise radiation patterns of the transducers used will not be known a priori. In this event, the transducers must be characterized by measurements. The results of these measurements can be easily incorporated into the construction of the propagator matrix P, allowing the empirically determined transducer model to be accurately incorporated into the inversion. Equations (9) and (16) then provide in compact notation the equations which we wish to solve for the unknown scattering potential, γ. First we consider the forward problem, i.e. the determination of the field f_ωφ for a known object function γ and known incident field, f_ωφ^i. Then we establish how this forward problem is incorporated into the solution of the inverse problem, i.e., the determination of γ when the incident fields and the received signals from a set of receivers are known. Note that the field internal to the object is also, along with the object function γ, an unknown in the inverse problem.
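On the discrete level, the two integrals in (14) collapse into a single matrix product. The sketch below (Python/NumPy; the function name and argument layout are our own illustration, not the patent's code) assembles P from tabulated Green's function values and discretized receiver support functions:

```python
import numpy as np

def propagator_matrix(support_weights, green_boundary):
    """Assemble the propagator matrix P of equations (14)-(15).

    green_boundary[j, i]: Green's function P(rho_j, rho'_i) from boundary
    pixel i (of the Nb = 2Nx + 2Ny - 4 border pixels) to external field
    sample point j.
    support_weights[n, j]: discretized receiver support S_n at sample
    point j, times the area element ds.

    The inner integral of (14) becomes a weighted sum over sample points,
    so row n of the product maps boundary field values f_i^s to the
    receiver signal v_n of (15).
    """
    return support_weights @ green_boundary
```

For ideal point receivers, each row of support_weights contains a single unit entry, and row n of P reduces to the row of Green's function values for that receiver position.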


Since (9) is linear in fωφ it can be solved by direct inversion of the linear system:

f = (I − G[γ])⁻¹ f^i  (17)

In discrete form, this is just a matrix equation (after linearly ordering the 2-D supports of the domain and range into vectors). Since the dimension of the range and domain of the linear system is N_xN_y, the system matrix has a total of (N_xN_y)² elements. The arithmetic required to solve the system by standard direct means is thus order (N_xN_y)³. In other words, a doubling of the edge dimension of the problem space will increase the CPU time required by a factor of 2⁶ = 64 times! The arithmetic work required will quickly become intolerable as the size of the problem increases. It is precisely this large growth rate which has convinced many researchers not to pursue inverse scattering approaches based on integral equations. One could, of course, go to an iterative method. A single iteration of an iterative linear system solver, such as biconjugate gradients, requires order (N_xN_y)² operations (essentially a matrix-vector product). It can be shown that the growth rate in the number of required iterations for sufficient convergence of the BCG algorithm is order N for this equation. Thus, the overall computational complexity is order N⁵: only one order of N has been saved over direct inversion. Since inverse problems in 2-D generally require order N views, the iteration must be done N times and we are back to order N⁶ computation for BCG.


The key to overcoming this objection is the convolutional form of (7). If this is exploited by use of the FFT algorithm, the computation needed to perform a matrix-vector product is only order N_xN_y log₂(N_xN_y). This allows the BCG algorithm to be applied to a single view with order N³ log₂(N) operations and to all views with order N⁴ log₂(N) operations. Due to the slow growth rate of log₂(N), this is essentially a reduction of two orders of N over nonconvolutional methods. Also, this convolutional approach avoids the necessity of storing the order N⁴ computer words needed to contain the linear system in full matrix form. Only order N² words are needed due to the shift invariance of the discrete Green's kernel in (7). It is these two savings, more than any other advance that we have made, that allow us to perform inverse scattering in reasonable time for meaningfully sized problems. The use of BCG over the original CG algorithm also represents a major advance since it converges in essentially the square root of the number of iterations needed by CG. This combination of FFT and CG algorithm was originally developed in [Borup, 1989, Ph.D. dissertation, Univ. of Utah, Salt Lake City], herein incorporated by reference.
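A BCG solver for the forward problem (9) only needs the action of the system operator and of its transpose; in the full method those actions are the FFT convolutions described above. The sketch below (Python/NumPy; our own unpreconditioned, transpose-variant BiCG for illustration, not the patent's code) makes that structure concrete:

```python
import numpy as np

def bicg(matvec, matvec_t, b, tol=1e-10, maxiter=200):
    """Unpreconditioned biconjugate gradients for a square (generally
    non-Hermitian) complex system A x = b, given only the action of A
    (matvec) and of its unconjugated transpose A^T (matvec_t).
    In the imaging code, matvec would be the FFT-based application of
    (I - G_w[gamma])."""
    b = np.asarray(b, dtype=complex)
    x = np.zeros_like(b)
    r = b - matvec(x)                  # residual
    rt = r.conj()                      # shadow residual (a common choice)
    p, pt = r.copy(), rt.copy()
    rho = rt @ r                       # bilinear (unconjugated) inner product
    bnorm = np.linalg.norm(b)
    for _ in range(maxiter):
        q = matvec(p)
        alpha = rho / (pt @ q)
        x = x + alpha * p
        r = r - alpha * q
        rt = rt - alpha * matvec_t(pt)
        if np.linalg.norm(r) <= tol * bnorm:
            break
        rho, rho_old = rt @ r, rho
        beta = rho / rho_old
        p = r + beta * p
        pt = rt + beta * pt
    return x
```

For the radially symmetric discrete Green's kernel, convolution is its own transpose (G^T = G), so the transpose action for the Lippmann-Schwinger system is I − [γ]G_ω, and each BiCG iteration stays at order N_xN_y log(N_xN_y).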


The Imaging or Inverse Problem


In order to solve the imaging problem, we need a set of equations relating the unknown scattering potential, γ, and total fields, f_ωφ, with measurements of the scattered field on a set of external detectors. These detector equations are given in (16). Equation (16) and the internal field equations (9) are the equations which are solved simultaneously to determine γ and the f_ωφ. There are N_xN_y unknowns corresponding to the γ values at each of the grid points, and N_x × N_y × Ω × Φ unknowns corresponding to the unknown fields, where Ω is the number of frequencies and Φ is the number of source positions (angles). We have improved upon the state of the art by considering the internal field equations to define f_ωφ for a given γ. Thus the total number of unknowns is reduced to N_xN_y.


The total number of measurement equations is N_d × Ω × Φ, where N_d is the number of detectors. In the general case, where the sources and detectors do not completely surround the object, the problem of determining γ is "ill-posed" in a precise mathematical sense; therefore, in order to guarantee a solution, the number of equations is chosen to be larger than the number of pixel values (N_d × Ω × Φ > N_xN_y) for overdetermination. Then the system is solved in the least squares sense. More specifically, the solution of (9) and (16) for γ and the set of fields, f_ωφ, in the least squares sense is obtained by minimizing the real valued, nonlinear functional:
min_γ Σ_ωφ ‖r_ωφ‖² ≡ min_γ Σ_ωφ ‖v_ωφ^meas − P_ω G_ω[f_ωφ] γ‖²  (18)

subject to the satisfaction of the total field equations, (9), as constraints. The vector r_ωφ of dimension N_d is referred to as the "residual" for frequency ω and angle φ.


The methods used to solve the nonlinear optimization problem in our inverse scattering algorithms are, thus far, all "gradient methods" in that second derivative information is not used (as it is in the full Newton and quasi-Newton methods). The principal computation involves the calculation of the gradient vector of (18). A straightforward calculation gives the gradient formula ∇f(x) = J^H(x) r(x), where the superscript H denotes the Hermitian transpose (complex conjugate transpose) of the matrix and J is the Jacobian of the nonlinear system:
J(x) = [ ∂a₁/∂x₁ . . . ∂a₁/∂x_N
          .                 .
         ∂a_M/∂x₁ . . . ∂a_M/∂x_N ]  (19)


The simplest gradient algorithm is the Gauss-Newton (GN) iteration. The GN iteration for finding a solution to ∇f=0 is given by:

r^(n) = y − a(x^(n))  (20.1)
δx^(n) = (J_n^H J_n)⁻¹ J_n^H r^(n)  (20.2)
x^(n+1) = x^(n) + δx^(n)  (20.3)

where a is the vector of nonlinear equations. This iteration is well defined assuming that the columns of Jn remain linearly independent. Since (20.2) is equivalent to the quadratic minimization problem:
min_{δx^(n)} ‖J_n δx^(n) − r^(n)‖²  (21)

it can be solved by the minimum residual conjugate gradient method (MRCG). This approach also ensures that small singular values in Jn will not amplify noise if care is taken not to overconverge the iteration.


In the previous patent U.S. Pat. No. 4,662,222 the fields fωφ were considered to be independent variables. In this present patent these fields are considered to be dependent variables, with dependence upon γ given implicitly by

f_ωφ^(n) = (I − G_ω[γ^(n)])⁻¹ f_ωφ^inc


In order to find the Jacobian expression we must then differentiate the residual vector defined in (18) with respect to γ. The details of this calculation are given in [Borup, 1992.]. The result is:
J_ωφ = ∂r_ωφ/∂γ = P_ω G_ω (I − [γ]G_ω)⁻¹ [f_ωφ]  (22)

The final Gauss-Newton algorithm for minimizing (18) subject to equality constraints (9) is:


GN 1. Select an initial guess, γ(0)


GN 2 Set n=0


GN 3 Solve the forward problems using (biconjugate gradients) BCG:
f_ωφ^(n) = (I − G_ω[γ^(n)])⁻¹ f_ωφ^inc,  ω = 1, . . . , Ω;  φ = 1, . . . , Φ


GN 4 Compute the detector residuals:
r_ωφ^(n) = v_ωφ^meas − P_ω G_ω[f_ωφ^(n)] γ^(n),  ω = 1, . . . , Ω;  φ = 1, . . . , Φ


GN 5 If Σ_ωφ ‖r_ωφ^(n)‖² < ε, stop.


GN6 Use MRCG to find the least squares solution to the quadratic minimization problem.
min_{δγ^(n)} Σ_ωφ ‖r_ωφ^(n) + P_ω G_ω (I − [γ^(n)]G_ω)⁻¹ [f_ωφ^(n)] δγ^(n)‖²

for δγ(n).


GN7 Update γ: γ^(n+1) = γ^(n) + δγ^(n).


GN8 Set n=n+1, go to GN 3
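On a small dense problem, steps GN 1 through GN 8 can be exercised end to end. The following toy sketch (Python/NumPy; a single frequency and view, with dense solves and np.linalg.lstsq standing in for the BCG forward solver and the MRCG inner loop, and all names our own) recovers a weak scatterer from noiseless simulated detector data:

```python
import numpy as np

def gauss_newton_inverse_scatter(G, P, f_inc, v_meas, n_outer=8):
    """Toy dense-matrix version of steps GN 1-GN 8 for one frequency/view.
    G: discrete Green's matrix; P: propagator matrix; f_inc: incident
    field; v_meas: detector data."""
    n = f_inc.size
    gamma = np.zeros(n, dtype=complex)            # GN 1: initial guess
    for _ in range(n_outer):                      # GN 2 / GN 8: outer loop
        # GN 3: forward problem f = (I - G [gamma])^-1 f_inc
        f = np.linalg.solve(np.eye(n) - G * gamma, f_inc)
        # GN 4: detector residuals r = v_meas - P G [f] gamma
        r = v_meas - P @ (G @ (f * gamma))
        if np.linalg.norm(r) < 1e-12:             # GN 5: convergence test
            break
        # GN 6: least squares correction using the Jacobian of (22),
        # J = P G (I - [gamma] G)^-1 [f], here built as a dense matrix
        J = P @ G @ np.linalg.solve(np.eye(n) - np.diag(gamma) @ G,
                                    np.diag(f))
        dgamma = np.linalg.lstsq(J, r, rcond=None)[0]
        gamma = gamma + dgamma                    # GN 7: update
    return gamma
```

With weak contrast the linearization is nearly exact, so the outer loop drives the residual toward zero in a handful of iterations; the full method replaces the dense solves with the FFT/BCG and MRCG machinery described in the text.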


The crux of the GN iteration is GN 6 where the overdetermined quadratic minimization problem is solved for the scattering potential correction. This correction is approximated by applying a set of M iterations of the MRCG algorithm. The details of GN 6 are:


GN.6.1 Initialize the zeroth iterate of MRCG: δγ0=0


GN.6.2 Initialize the MRCG residuals equal to the GN outer loop residuals: rωφ0=rωφ(n)


Note that iterates pertaining to the MRCG iteration are indexed without the ( ) in order to distinguish them from outer GN loop iterates.


GN.6.3 Compute the gradient of the MRCG quadratic functional:
g_0 = Σ_ωφ [f_ωφ^(n)*] (I − G_ω^*[γ^(n)*])⁻¹ G_ω^* P_ω^H r_ωφ^0


GN.6.4 Set the initial search direction equal to minus the gradient: p0=−g0


GN.6.5 Set m=0


GN.6.6 Compute
t_ωφ^m = P_ω G_ω (I − [γ^(n)]G_ω)⁻¹ [f_ωφ^(n)] p_m,  ω = 1, . . . , Ω;  φ = 1, . . . , Φ


GN.6.7 Compute the quadratic step length:
α_m = ‖g_m‖² / Σ_ωφ ‖t_ωφ^m‖²


GN.6.8 Update the solution of the quadratic minimization: δγ_{m+1} = δγ_m + α_m p_m


GN.6.9 Update the MRCG search residuals: r_ωφ^{m+1} = r_ωφ^m + α_m t_ωφ^m


GN.6.10 Compute the gradient of the MRCG quadratic functional:
g_{m+1} = Σ_ωφ [f_ωφ^(n)*] (I − G_ω^*[γ^(n)*])⁻¹ G_ω^* P_ω^H r_ωφ^{m+1}


GN.6.11 Compute
β_m = ‖g_{m+1}‖² / ‖g_m‖²


GN.6.12 Update the MRCG search direction: p_{m+1} = −g_{m+1} + β_m p_m


GN.6.13 If m = M, go to GN.6.15


GN.6.14 m = m + 1, go to GN.6.6


GN.6.15 Equate the solution of the quadratic minimization problem with the last iterate of MRCG: δγ(n)=δγM


GN.6.16 Return to the GN algorithm at GN 7


A problem with the algorithm above is the presence of the inverse of the transposed total field equation, (I − [γ^(n)]G_ω)⁻¹, in the computation of the Jacobian and its adjoint in GN.6.3 and GN.6.10. Since we do not invert (or even store) these large linear systems, the occurrence of these inverse operators must be dealt with. This problem can, however, be overcome by computing the action of this operator on a given vector, as needed during the MRCG iteration, by a few iterations of BCG. When performed in this way, the shift invariance of the Green's function is exploited in a maximal way. No large matrices need to be inverted or stored. This is because the Jacobian implementation now consists exclusively of diagonal and shift invariant kernels (diagonal kernels such as pointwise multiplication by γ or f, and the shift invariant kernel composed of convolution with the Green's function). Such kernels can be implemented efficiently with the FFT as previously described.


An even greater increase in numerical efficiency can be obtained in cases for which the approximation:

(I − [γ^(n)]G_ω)⁻¹ ≈ I + [γ^(n)]G_ω  (23)


(which is similar to the Born approximation of the forward problem) can be used in the Jacobian. This has been found to be the case for many acoustic scattering problems for biological tissue, and for EM problems for which the dielectric contrast is small. The other two gradient algorithms that are used to solve the inverse scattering equations are the Fletcher-Reeves and Ribiere-Polak algorithms. For the inverse scattering equations, they are given by the iteration:


RP 1. Select an initial guess, γ(0)


RP 2. Solve the forward problems using (biconjugate gradients) BCG
f_ωφ^inc = (I − G_ω[γ^(0)]) f_ωφ^(0),  ω = 1, . . . , Ω;  φ = 1, . . . , Φ


for the internal fields, fωφ(0)


RP 3 Compute the detector residuals:
r_ωφ^(0) = P_ω G_ω[f_ωφ^(0)] γ^(0) − v_ωφ^meas,  ω = 1, . . . , Ω;  φ = 1, . . . , Φ


RP 4 Compute the gradient:
g^(0) = Σ_ωφ [f_ωφ^(0)*] (I − G_ω^*[γ^(0)*])⁻¹ G_ω^* P_ω^H r_ωφ^(0)


RP 5 Compute the search direction: p^(0) = −g^(0)


RP 6 n=0


RP 7 Compute the Jacobian operating on the search direction:
t_ωφ^(n) = P_ω G_ω (I − [γ^(n)]G_ω)⁻¹ [f_ωφ^(n)] p^(n),  ω = 1, . . . , Ω;  φ = 1, . . . , Φ


RP 8 compute the step length (quadratic approximation):
α_n = −Re Σ_ωφ ⟨r_ωφ^(n), t_ωφ^(n)⟩ / Σ_ωφ ‖t_ωφ^(n)‖²


RP 9 Update the solution: γ^(n+1) = γ^(n) + α_n p^(n)


RP 10 Solve the forward problems using (biconjugate gradients) BCG:
f_ωφ^inc = (I − G_ω[γ^(n+1)]) f_ωφ^(n+1),  ω = 1, . . . , Ω;  φ = 1, . . . , Φ


RP 11 Compute the detector residuals:
r_ωφ^(n+1) = P_ω G_ω[f_ωφ^(n+1)] γ^(n+1) − v_ωφ^meas,  ω = 1, . . . , Ω;  φ = 1, . . . , Φ


RP 12 If Σ_ωφ ‖r_ωφ^(n+1)‖ < ε, stop


RP 13 Compute the gradient:
g^(n+1) = Σ_ωφ [f_ωφ^(n+1)*] (I − G_ω^*[γ^(n+1)*])⁻¹ G_ω^* P_ω^H r_ωφ^(n+1)


RP 14 Compute:

β_n = ‖g^(n+1)‖² / ‖g^(n)‖²  (Fletcher-Reeves)

β_n = ⟨g^(n+1), g^(n+1) − g^(n)⟩ / ‖g^(n)‖²  (Ribiere-Polak)


RP 15 Update the search direction: p^(n+1) = −g^(n+1) + β_n p^(n)


RP 16 n=n+1, go to RP 7


The distinction between FR and RP lies in the calculation of β_n in RP 14. It is generally believed that the RP calculation is more rapidly convergent. Our experience indicates that this is true, and so we generally use the RP iteration rather than FR.


Comparison of the linear MRCG iteration and the nonlinear RP iteration reveals that they are very similar. In fact, RP is precisely a nonlinear version of MRCG. Note that the only difference between them lies in the fact that the RP residuals, computed in steps RP 10 and RP 11, involve recomputation of the forward problems, while the MRCG residuals are simply recurred in GN.6.9 (additional, trivial, differences exist in the computation of the α's and β's). In other words, the RP iteration updates the actual nonlinear system at each iteration, while MRCG simply iterates on the quadratic functional (the GN linearization). The overall GN-MRCG algorithm contains two loops (the outer linearization loop and the inner MRCG loop), while the RP algorithm contains only one loop. Since the RP algorithm updates the forward solutions at each step, it tends to converge faster than GN with respect to total iteration count (the number of GN outer iterations times the number, M, of inner loop MRCG iterates). The GN method is, however, generally faster, since an MRCG step is cheaper than an RP step due to the need for forward recomputation in the RP step. The overall codes for the GN-MRCG algorithm and the RP algorithm are so similar that a GN-MRCG code can be converted to an RP code with about 10 lines of modification.


Before leaving this example it is important to note that the examples given in this patent all assume that the different frequencies, ω, and views, φ, are computed in serial fashion. Another important link in the real time implementation of our algorithm, however, is the fact that the different frequencies and different views are independent in both the forward problem and Jacobian calculations, and therefore can be computed independently and in parallel. This should be clear by examining, for example, the computations in GN.3 and GN.6.3. The forward problem calculations, GN.3, are completely independent, and in the gradient calculation, GN.6.3, the computations are independent in frequency and view up to the point at which these contributions are summed to give the gradient. The GN and RP algorithms could thus be executed on a multinode machine in parallel, with node intercommunication required only 2 to 3 times per step in order to collect sums over frequency and view number and to distribute frequency/view independent variables, such as scattering potential iterates, gradient iterates, etc., to the nodes.
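The frequency/view independence can be illustrated with a worker pool that evaluates per-(ω, φ) gradient contributions concurrently and then collects the sum, the lone communication step mentioned above. The contribution function below is a hypothetical random stand-in for the per-pair forward solve and adjoint of GN.6.3, not the patent's actual Jacobian code.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def gradient_contribution(omega, phi, gamma):
    """Stand-in for the per-(omega, phi) contribution of GN.6.3:
    a deterministic stub Jacobian J and residual r, returning J^H r."""
    rng = np.random.default_rng(1000 * omega + phi)  # deterministic stub
    J = rng.standard_normal((4, gamma.size))
    r = J @ gamma                                    # stub residual
    return J.T @ r                                   # contribution to the sum

def total_gradient(gamma, n_freq=3, n_view=4):
    """All (frequency, view) pairs are independent, so they can be
    dispatched to workers; only the final sum needs communication."""
    pairs = [(w, p) for w in range(n_freq) for p in range(n_view)]
    with ThreadPoolExecutor() as pool:
        parts = pool.map(lambda wp: gradient_contribution(*wp, gamma), pairs)
    return sum(parts)   # the single collect-and-sum communication step
```

The same structure carries over to a multinode machine, with the pool replaced by message-passing nodes and the `sum` replaced by a reduction across nodes.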


EXAMPLE 2
Image Reconstruction in Layered Ambient Media Using Optimization of a Bilinear or Quadratic Objective Function Containing All Detector Measurement Equations, and with Internal Fields Represented as a Function of the Scattering Potential

The examples in the previous patent U.S. Pat. No. 4,662,222 were characterized by the consideration of both the internal fields and the object function γ as independent variables of equal importance. The example given above, however, has the distinguishing characteristic that the object function γ is considered to be the sole independent variable, and the internal field resulting from the interaction of the incident field and γ is considered to be a function of this γ. We may write
f^sc_θω = f^sc_θω(γ),  θ = 1, …, Θ;  ω = 1, …, Ω  (24)

for the scattered field.


The difference between this example and the previous one is that the background medium, in which the object is assumed to be buried, was assumed to be homogeneous in the previous case, whereas it is here assumed that the background is layered. The residual is defined in the same way as in the previous example, with d_θω used to represent the N_d-length vector of the measured scattered field at the detector positions. That is, the residual vectors for all ω, θ are defined in the following way.
r_θω ≡ f^sc_θω(γ) − d_θω = 0,  θ = 1, …, Θ;  ω = 1, …, Ω  (25)


The functional F(γ), dependent upon γ, is defined in the same way as in example 1:

F(γ) ≡ Σ_θω ‖r_θω‖²_2  (26)


This is the functional, dependent upon the object function γ in a highly nonlinear manner, which must be minimized in order to determine the γ values at each gridpoint in the object space.


It is again necessary to compute the Jacobian:
(∂f^sc_θω/∂γ) = T∘((I − G_ω·[γ])^{-1}(G_ω·[f_θω])),  θ = 1, …, Θ;  ω = 1, …, Ω  (27)

where θ again refers to the multiple views and ω to the multiple frequencies available to us. That is, we again assume that the noise level in the system is zero. We apply the Gauss-Newton algorithm to the associated least squares problem and get the same result. This leads to the overdetermined system described above (the notation is identical to the previous example; the difference lies in the actual form of the layered Green's function).
[T∘([I − G_k·[γ]]^{-1} G_k) I_{Θ×Θ}] [[f_1k]; …; [f_Θk]] δγ^(n) = −[r^(n)_1k; …; r^(n)_Θk],  for each k  (28)

which must be solved for the γ-update δγ^(n). The left hand side of the above formula is given by
[T∘([I − G_k·[γ]]^{-1} G_k) I_{Θ×Θ}] = diag( TG_k(I − [γ^(n)]G_k)^{-1}, …, TG_k(I − [γ^(n)]G_k)^{-1} )  (29)


Again, we can use the complex analytic version of the Hestenes overdetermined conjugate gradient algorithm, adapted for least squares problems to iteratively solve this system. This is equivalent to finding the minimum norm solution.
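A minimal sketch of such a conjugate gradient least-squares solver (CGLS, i.e. conjugate gradients on the normal equations) for a complex overdetermined system follows, with an explicit matrix A standing in for the operator form used in the text:

```python
import numpy as np

def cgls(A, b, n_iter=100, tol=1e-12):
    """Iteratively minimize ||A x - b||_2 for a complex overdetermined
    system, as in the least-squares solution of the Gauss-Newton update
    equations for the gamma-update. Only products with A and A^H are
    needed, so A could equally be an FFT-implemented operator."""
    x = np.zeros(A.shape[1], dtype=complex)
    r = b - A @ x              # residual in data space
    s = A.conj().T @ r         # gradient of the least-squares functional
    p = s.copy()
    norm_s0 = np.linalg.norm(s)
    for _ in range(n_iter):
        q = A @ p
        alpha = np.vdot(s, s) / np.vdot(q, q)
        x = x + alpha * p
        r = r - alpha * q
        s_new = A.conj().T @ r
        if np.linalg.norm(s_new) < tol * norm_s0:
            break
        beta = np.vdot(s_new, s_new) / np.vdot(s, s)
        p = s_new + beta * p
        s = s_new
    return x
```

Starting the iteration from x = 0, as here, yields the minimum norm least-squares solution mentioned above.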


The formula for the Jacobian in the layered medium situation in the presence of multiple frequencies, is,
(∂f^sc_θω/∂γ) = T∘((I − G_ω·[γ])^{-1}(G_ω·[f_θω])),  θ = 1, …, Θ;  ω = 1, …, Ω  (30)

where Gω is the Layered Green's function for the frequency ω. Therefore, in effect, to determine the γ-update, δγ, we merely solve the multiple view problem for each particular frequency, that is, we solve the overdetermined system:
[T∘([I − G_k·[γ]]^{-1} G_k) I_{Θ×Θ}] [[f_1k]; …; [f_Θk]] δγ^(n) = −[r^(n)_1k; …; r^(n)_Θk],  k = 1, …, Ω  (31)

which in component form is:
[T·([I − G_k·[γ]]^{-1} G_k)][f_jk] δγ^(n) = −r^(n)_jk,  k = 1, …, Ω;  j = 1, …, Θ  (32)


See also the computer code in this patent for the complete description of this procedure.


For multiple view and multiple frequency inversion then, the inversion scheme in layered media reads:


STEP 1. Choose an initial guess for the scattering potential, γ(o).


STEP 2. Set n=0


STEP 3. Solve the forward problems

(I − G_ω·[γ^(n)]) f_θω = f^inc_θω,  θ = 1, …, Θ;  ω = 1, …, Ω

using biconjugate gradients, and use the result to compute the forward maps φ_θω(γ^(n)) and the residuals

r^(n)_θω ≡ φ_θω(γ^(n)) − d_θω,  θ = 1, …, Θ;  ω = 1, …, Ω


STEP 4. Determine the L2 norm of the total residual vector

tr^(n) ≡ [r^(n)_11, …, r^(n)_Θ1, …, r^(n)_1Ω, …, r^(n)_ΘΩ]

STEP 5. If ‖tr^(n)‖² = Σ_{ω=1}^{Ω} Σ_{θ=1}^{Θ} ‖r^(n)_θω‖² < ε then stop.


STEP 6. Solve the N_d·Θ·Ω by N overdetermined system:

T·diag( ([I − G_1·[γ]]^{-1} G_1) I_{Θ×Θ}, …, ([I − G_Ω·[γ]]^{-1} G_Ω) I_{Θ×Θ} ) [[f_11]; …; [f_ΘΩ]] δγ^(n) = −[r^(n)_11; …; r^(n)_ΘΩ]

in the least squares sense for δγ(n), using the conjugate gradient algorithm adapted for least squares problems.


STEP 7. Update γ via the formula

γ^(n+1) = γ^(n) + δγ^(n)


STEP 8. Set n = n + 1, and go to STEP 3


The actual values of the angles of incidence θ are chosen to diminish as much as possible the ill conditioning of the problem. Equally spaced values of θ in [0, 2π) are ideal, but experimental constraints may prohibit such values, in which case the multiple frequencies are critical.
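The eight STEPs above can be condensed into a small driver loop. This sketch uses a dense least-squares call in place of the conjugate gradient inner solver of STEP 6, and `forward` and `jacobian` are hypothetical stand-ins for the layered-medium operators:

```python
import numpy as np

def gauss_newton(forward, jacobian, d, gamma0, n_outer=50, eps=1e-18):
    """Skeleton of STEPs 1-8. `forward(gamma)` stacks the forward maps
    over all views/frequencies; `jacobian(gamma)` returns the stacked
    overdetermined Jacobian matrix (both stand-ins for the layered
    operators of the text)."""
    gamma = gamma0.copy()                        # STEP 1 (initial guess)
    for n in range(n_outer):                     # STEP 2 / STEP 8 (loop)
        r = forward(gamma) - d                   # STEP 3: residuals
        if np.linalg.norm(r) ** 2 < eps:         # STEPs 4-5: stopping test
            break
        J = jacobian(gamma)                      # STEP 6: least-squares solve
        dgamma, *_ = np.linalg.lstsq(J, -r, rcond=None)
        gamma = gamma + dgamma                   # STEP 7: update gamma
    return gamma
```

With a small zero-residual toy model the loop converges to the true parameters in a handful of outer iterations, illustrating the quadratic convergence of Gauss-Newton near the solution.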


For biological contrasts (γ<0.15 say) the following approximation, which is essentially identical to the assumption in the previous example, is valid:

[I − G_ω[γ]]^{-1} ≈ [I + G_ω[γ]]  (33)


This is the same Born-like approximation, restricted to the Jacobian calculation only, that is assumed in example 1. This approximation has a much larger radius of convergence than the standard Born approximation, and substantially reduces the computational complexity of the problem.


How to Implement the Convolution and Correlation in Layered Medium Imaging


All of the above calculations have been carried out without using the fact that the operation of matrix multiplication with G is in fact the sum of a convolution and a correlation, which is transformed into a pair of convolutions. Symbolically G_L = G_R + G_V, where G_R is the correlation and G_V is a convolution operator. The actual numerical implementation of the above formulas, therefore, is somewhat different than would naively appear. The implementation of the correlation with convolutions is a little more challenging than the convolution in the homogeneous case, in that a change of variables must be used to convert the correlation to a convolution before the Fast Fourier Transforms (FFTs) can be applied. A "reflection" operator must be applied to γ before the convolution is applied. This operator is denoted by Z, and is defined by:

(Zf)(x, y, z) ≡ f(x, y, h − z)  (34)

The use of this operator is incorporated into the action of the Lippmann-Schwinger equation on the field f. That is, the equation for the internal fields, which in example 1 read:

(I − Gγ)f = f^inc

Now becomes

(I − G_Vγ − G_RZγ)f = f^inc


This change has non-trivial ramifications for the construction of the algorithms discussed above. The FFT implementation of this is given below. To prevent the notation from getting out of hand we use [ ] to indicate that a diagonal matrix is constructed out of a vector, so that [γ]f denotes pointwise multiplication of the vectors γ and f. Also F is used to denote the Fourier transform, and * is used to indicate convolution. In this notation, the Lippmann-Schwinger equation becomes, in the symbolic form adopted above,

(I − G_Vγ − G_RZγ)f = f − G_V*([γ]f) − G_R*Z([γ]f) = f − F^{-1}{ F(G_V)·F([γ]f) + F(G_R)·F(Z([γ]f)) }  (35)


Substantial savings are achieved through the observation that the Fourier Transforms of G_V and G_R need only be done once, then stored, and finally, that the reflection operator, Z, will commute with Fourier transformation in the x and y (horizontal) directions, since it only involves reflection in the z (vertical) direction.
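A 1-D illustration of this FFT strategy follows (a sketch only: the patent's operator is 3-D and its kernels come from the layered Green's function, while here the kernels are arbitrary). The kernel transforms are computed once, and the reflection Z is applied as an index reversal so the correlation is evaluated as a convolution:

```python
import numpy as np

def apply_layered_LS(gv_hat, gr_hat, gamma, f):
    """One application of f -> f - Gv*([gamma]f) - Gr*Z([gamma]f)
    on a 1-D vertical grid. gv_hat and gr_hat are the FFTs of the
    zero-padded kernels Gv and Gr, computed once and stored."""
    n = f.size
    gf = gamma * f                         # pointwise [gamma] f
    def fft_conv(k_hat, v):
        # zero-pad to the kernel-transform length so the circular FFT
        # product reproduces the linear convolution on the first n samples
        pad = np.zeros(k_hat.size, dtype=complex)
        pad[:n] = v
        return np.fft.ifft(k_hat * np.fft.fft(pad))[:n]
    conv = fft_conv(gv_hat, gf)            # convolution part
    corr = fft_conv(gr_hat, gf[::-1])      # Z (reflection) turns the
                                           # correlation into a convolution
    return f - conv - corr
```

In the 3-D case the same pattern applies, with Z reversing only the vertical index, so it commutes with the horizontal FFTs exactly as stated above.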


There are changes also, for example in the computation of f. The biconjugate gradient algorithm requires the use of the adjoint in the solution of the above equation. This adjoint differs from the homogeneous ambient medium case with the introduction of the correlation operator. Also, there are substantial changes in the implementation of the overdetermined conjugate gradient algorithm used to solve (in the least squares sense) the overdetermined system of equations. In this section, as before, we incorporate the k²_sc factor into the G_V and G_R terms. Of course, rather than carry out the inversion of a matrix, we use biconjugate gradients and Fast Fourier Transforms (FFTs), which require the Lippmann-Schwinger operator, which we denote by LS, and its adjoint, which are defined by the following formulas:

(LS)f ≡ (I − F^{-1}[(Ĝ_V·F) + (Ĝ_R·F∘Z)])([γ]f)  (36)

where Ĝ_V ≡ F(G_V) and Ĝ_R ≡ F(G_R) are the Fourier Transforms of G_V and G_R, and need only be calculated once, then stored. The unitarity of F is used below to determine the adjoint (LS)^H.

(LS)^H ≡ ((I − F^{-1}[(Ĝ_V·F) + (Ĝ_R·F∘Z)])[γ])^H = [γ̄](I − [(F^{-1}Ĝ_V^H) + (Z^H F^{-1}Ĝ_R^H)]F) = [γ̄](I − [(F^{-1} conj(Ĝ_V)) + (Z F^{-1} conj(Ĝ_R))]F)  (37)

where we have used the unitarity of the Fourier Transformation F (F^H = F^{-1}), the fact that the multiplications by Ĝ_V and Ĝ_R are pointwise (so that their adjoints are complex conjugates), and the fact that Z^H = Z. For practical problems it is best to obtain a reasonable first guess using low frequency data, then to build on this using the higher frequency information.
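Before using such an adjoint inside BCG, it is standard practice to verify it numerically: ⟨Lf, g⟩ must equal ⟨f, L^H g⟩ for random complex vectors. The sketch below uses a dense matrix A as a stand-in for the FFT-implemented Green's operator, with L = (I − A)∘[γ] mirroring the structure of (36) and (37):

```python
import numpy as np

def adjoint_check(L, LH, n, rng):
    """Return the two inner products <L f, g> and <f, L^H g> for random
    complex vectors; they must agree if LH is the true adjoint of L."""
    f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    g = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return np.vdot(L(f), g), np.vdot(f, LH(g))

# a minimal example: L f = f - A([gamma] f), so L^H g = g - conj(gamma)*(A^H g)
rng = np.random.default_rng(7)
n = 12
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
gamma = rng.standard_normal(n) + 1j * rng.standard_normal(n)
L = lambda f: f - A @ (gamma * f)
LH = lambda g: g - np.conj(gamma) * (A.conj().T @ g)
lhs, rhs = adjoint_check(L, LH, n, rng)
```

An operator pair failing this test would make the biconjugate gradient iteration silently stagnate or diverge, so the check is cheap insurance.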


The Actual Construction of the Layered Green's Function


Up to this point, we have assumed the existence of the layered Green's function, but have not shown explicitly how to construct it. This is an essential step. For simplicity we deal with the scalar case, although the inclusion of shear wave phenomena is conceptually no more difficult than this case. The details are given in [Wiskin, 1991]. The general idea is based upon standard ideas in the literature concerning reflection coefficients in layered media. See for example [Muller, G., 1985, Journal of Geophysics, vol. 58] which is herein incorporated by reference. The novelty is the combination of the reflection coefficients into a bona fide Green's function, and the utilization of this in a forward problem, then more importantly, in the inverse problem solution. The procedure involves decomposing the point response in free space into a continuum of plane waves. These plane waves are multiply reflected in the various layers, accounting for all reverberations via the proper plane wave reflection/transmission coefficients. The resulting plane waves are then resummed (via a Weyl-Sommerfeld type integral) into the proper point response, which in essence, is the desired Green's function in the layered medium. The final result is:
G_V(x − x′; Δz − Δz′) = ∫ e^{iωu(x−x′)} C(u) [e^{iωb_sc|Δz−Δz′|} + R⁺R⁻ e^{−iωb_sc|Δz−Δz′|}] du, and  (38)

G_R(x − x′; Δz, Δz′) = ∫ e^{iωu(x−x′)} C(u) [R⁺ e^{−iωb_sc(Δz+Δz′)} + R⁻ e^{iωb_sc(Δz+Δz′)}] du, where C(u) = (1 − R⁺R⁻)^{-1} iωu/b_sc  (39)


R⁻ and R⁺ are the recursively defined reflectivity coefficients described in Muller's paper, and u is the horizontal slowness,

u ≡ sin θ / c

Recall that Snell's law guarantees that u will remain constant as a given incident plane wave passes through several layers.


ω is the frequency


b_sc is the vertical slowness for the particular layer hosting the scattering potential


This Green's function for imaging inhomogeneities residing within ambient layered media must be discretized by convolving it with the sinc (really jinc) basis functions described below. This is done analytically in [Wiskin, 1991], and the result is given below.


Discretization of Layered Medium Lippmann-Schwinger Equation


Unfortunately the basis functions that were used in the free space case (example 1) cannot be used to give the discretization in the layered medium case because of the arbitrary distribution of the layers above and below the layer which contains the scattering potential. The sinc functions may continue to be used in the horizontal direction, but the vertical direction requires the use of a basis function with compact support (i.e. should be nonzero only over a finite region). The sampled (i.e. discrete) Green's operator in the layered case is defined by

G_L(j, k, m) = G_L(jδ, kδ, mδ)  (40)

Where the three dimensional basis functions are of the form:

B(x) = B(x, y, z) = S_(j,δ)(x) S_(k,δ)(y) Λ_m(z),  (41)

(I is the set of integers) and where:

x = (x, y, z) ∈ R³,  x_jkm ≡ (jδ, kδ, mδ),  j, k, m ∈ I

The basis functions in the horizontal (x-y) plane, S_(j,δ), are based upon the Whittaker sinc function:

sinc(x) = sin(πx)/(πx)

The basis functions in the vertical (z) direction, on the other hand, are given by

Λm(Z)=Λ(z−mδ),

where Λ(z) is the “tent” function:
Λ(z) = (1/δ)(z + δ) for z ∈ [−δ, 0];  Λ(z) = −(1/δ)(z − δ) for z ∈ [0, δ];  Λ(z) = 0 otherwise
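The two basis families can be written down directly. This is a small sketch of the sampling behavior only; note that `np.sinc` already includes the π factors of the Whittaker sinc:

```python
import numpy as np

def sinc_basis(x, j, delta):
    """Whittaker sinc basis in the horizontal directions, S_(j,delta)(x):
    equals 1 at the gridpoint x = j*delta and 0 at every other gridpoint."""
    return np.sinc((x - j * delta) / delta)   # np.sinc(t) = sin(pi t)/(pi t)

def tent_basis(z, m, delta):
    """Compactly supported 'tent' basis in the vertical direction,
    Lambda_m(z) = Lambda(z - m*delta), nonzero only on
    [(m-1)*delta, (m+1)*delta]."""
    t = z - m * delta
    return np.where(np.abs(t) < delta, 1.0 - np.abs(t) / delta, 0.0)
```

At the gridpoints both families reduce to the Kronecker delta, which is what makes the sampled Green's operator of equation (40) well defined.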

The form of the discretized Lippmann Schwinger equation is
f^inc(nδ, mδ, lδ) ≡ f^inc_nml = f(nδ, mδ, lδ) − k²_sc Σ_{n′m′l′} Ĝ^V_{π/δ}(n−n′, m−m′, l−l′) γ(n′δ, m′δ, l′δ) f(n′δ, m′δ, l′δ) − k²_sc Σ_{n′m′l′} Ĝ^R_{π/δ}(n−n′, m−m′, l+l′) γ(n′δ, m′δ, l′δ) f(n′δ, m′δ, l′δ)  (42)

where the vectors Ĝ^R_{π/δ} and Ĝ^V_{π/δ} are the result of first convolving the Green's function with the basis functions, and then sampling at the gridpoints. The superscript R refers to the correlation part whereas the superscript V refers to the convolution part of the Green's function. Both are computed via Fast Fourier Transforms, and are very fast. In the free space case the correlation part is zero. That is, we make the definition for L = V or R (i.e. convolution or correlation):
Ĝ^L_{π/δ}(n, m, l) ≡ ∫_{R³} S(x) S(y) Λ(z) G_L(nδ − x, mδ − y, lδ − z) dx dy dz,  (43)

Substitution into equation (17) yields the form (42). In matrix notation, this equation gives:

f = (I − k²_sc Ĝ^V*[γ] − k²_sc Ĝ^R*Z[γ])^{-1} f^inc  (44)

where now * represents discrete convolution in 3-D. Now that we have constructed the integral equations in discrete form, we must derive the closed form expression for the layered Green's operators. The construction of the closed form expression of the sampled and convolved acoustic layered Green's function Ĝ^L_{π/δ} from the layered Green's function given in equations (38) and (39) above is given by performing the integration in (43) analytically. This process is carried out in [Wiskin, 1991]. The final result is, for the convolutional part,
G_V(m, 0) = A ∫_0^{u_δ} J_0 C(u) [2/(iωb_sc)] { R⁻R⁺[−1 + (iωb_scδ − 1)e^{iωb_scδ}] + [1 − e^{−iωb_scδ}(iωb_scδ − 1)] } du,  n = 0

G_V(m, n) = A ∫_0^{u_δ} J_0 C(u) [1/(ω²b²_scδ)] {2 − e^{iωb_scδ} − e^{−iωb_scδ}} {e^{−iωb_sc nδ} + R⁻R⁺ e^{iωb_sc nδ}} du,  n ≥ 1,

m ∈ [0, M−1], n ∈ [−N+1, N−1]  (45)

and

G_R(m, n) = A ∫_0^{u_δ} J_0 C(u) [1/(ω²b²_scδ)] {2 − e^{iωb_scδ} − e^{−iωb_scδ}} { R⁺ e^{iωb_sc(h+2d)} e^{−iωb_sc nδ} + R⁻ e^{iωb_sc(h+2d)} e^{iωb_sc nδ} } du,  m ∈ [0, M−1], n ∈ [−N+1, N−1]  (46)

In the above formulas Jo≡Jo(uω|m|δ) is the zeroth order Bessel Function, and the upper limit uδ is defined as:
u_δ = 1/c_δ = k_δ/ω = π/(δω)

for wavenumber kδ=π/δ. Also,

C(u) ≡ (1 − R⁻R⁺)^{-1} S_c

where
S_c = iωu/b_sc

with bsc defined as the vertical slowness in the layer containing the scattering point. When this layer is assumed to have acoustic wave velocity, it is given explicitly by:
b_sc ≡ √(1/β²_sc − u²).


These expressions give the discretized 3-D acoustic layered Green's function, which is used in the layered media discretized Lippmann-Schwinger equation to obtain the solution field within the inhomogeneity. It is also used in the detector equations (9), which correspond to the detector equations (11).


This patent differs from the previous art in the important aspect of including the correlation part of the Green's function. This correlation part is zero for the free space case. This correlational part is directly applicable, as it occurs above, to the fish echo-locator/counter, and to the mine detection device in the acoustic approximation. (There are specific scenarios where the acoustic approximation will be adequate, even though shear waves are clearly supported to some degree in all sediments.) The inclusion of shear waves into the layered media imaging algorithm (the technical title of the algorithm at the heart of the hazardous waste detection device, the fish echo-locator, and the buried mine identifier) is accomplished in Example 6 below. Actually a generalized Lippmann-Schwinger equation is proposed. This general equation is a vector counterpart to the acoustic layered Green's function, and must be discretized before it can be implemented. The process of discretization is virtually identical to the method revealed above. Furthermore, the BCG method for solving the forward problem, the use of the sinc basis functions, and the use of the Fast Fourier Transform (FFT) are all carried out exactly as they are in the acoustic (scalar) case.


EXAMPLE 3
Imaging with Electromagnetic Wave Energy

The use of electromagnetic energy does not greatly affect the basic idea behind the imaging algorithm, as this example will demonstrate. Furthermore, we will see that it is possible to combine the layered Green's function with the electromagnetic free space Green's function to image materials within layered dielectrics. In fact this process of combining the layered Green's function with Green's functions derived for other structures and/or modalities than the free space acoustic case can be extended almost indefinitely. Another example is given below, where the combination of the acoustic Biot Green's function with the layered Green's function is carried out. Further extensions that are not detailed here are: (1) combining elastic (including shear wave energy) wave equations with the layered Green's function, (2) combining the elastic Biot equations with the layered Green's function, and (3) combining the elastic and electromagnetic wave equations to model transducers. For simplicity, we consider a homogeneous, isotropic, non-magnetic background material. The time dependence will be e^{iωt}, the magnetic properties of free space are summarized in ẑ_o ≡ iωμ_o, and the electric properties are summarized in ŷ_o ≡ iωε_o. Within the object being imaged the magnetic properties are ẑ ≡ ẑ_o ≡ iωμ_o (the equivalence with the free space value is the nonmagnetic media assumption). The electric properties of the object being imaged are summarized in ŷ ≡ σ + iωε_rε_o, which is the complex admittivity of the object. The larger σ is, the less able the medium is to support electromagnetic wave transport. These properties are combined in
k²_o ≡ −ŷ_o ẑ_o = ω²μ_oε_o ≡ ω²/c²_o  (47)

which is a measure of the number of wave cycles present per unit distance for a given frequency ω, when the wave speed of propagation is c_o. The object's electrical properties (recall it is assumed non-magnetic for simplicity) are summarized in the "object function", γ (the normalized difference from the surrounding medium):
γ(r) ≡ (ŷ − ŷ_o)/ŷ_o = (σ + iωε_rε_o)/(iωε_o) − 1 = −iσ/(ωε_o) + ε_r − 1  (48)

The total electric field equations are:
E^i(r) = E(r) − (k²_o + ∇∇·) ∫ γ(r′) E(r′) [e^{−ik_o|r−r′|} / (4π|r−r′|)] d³r′  (49)

where Ei(r) is the 3-D incident field, E(r) is the 3-D total field.


The construction of the sinc basis discretization and the GN and RP algorithms for this 3-D EM case is essentially equivalent to the 2-D scalar acoustic case described above. See [Borup, 1989] for the details of the discretization and solution of the forward problem by FFT-BCG. The vector-tensor nature of the fields—Green's function is the only new feature and this is easily dealt with.


For simplicity, we now look at the situation where there is no z dependence in either the object function, i.e. γ(x, y, z) = γ(x, y), or the incident field. The vector ρ = (x, y) is used for the position vector in (x,y)-space.
E^i(ρ) = E(ρ) − (k²_o + ∇∇·) ∫ γ(ρ′) E(ρ′) (1/4i) H₀^(2)(k_o|ρ−ρ′|) d²ρ′  (50)

In matrix form, equation (50) looks like:

E^i(ρ) = E(ρ) − ∫ [ k²_o + ∂²/∂x²   ∂²/∂x∂y        0
                     ∂²/∂y∂x         k²_o + ∂²/∂y²   0
                     0               0               k²_o ] γ(ρ′) E(ρ′) (1/4i) H₀^(2)(k_o|ρ−ρ′|) d²ρ′  (51)


From this form of the equation, it is clear that the electric field in the z-direction is uncoupled with the electric field in the x-y plane. Thus, in this situation the field consists of two parts. The so-called transverse electric (TE) mode, in which the electric field is transverse to the z direction, and the transverse magnetic (TM) mode, in which the electric field has a nonzero component only in the z direction. The TM mode is governed by the scalar equation:
E^i_z(ρ) = E_z(ρ) − (k²_o/4i) ∫ γ(ρ′) E_z(ρ′) H₀^(2)(k_o|ρ−ρ′|) d²ρ′,  (52)

where E^i(ρ) ≡ (E^i_x, E^i_y, E^i_z) and E(ρ) ≡ (E_x, E_y, E_z), ρ = (x, y),

which is identical to the 2-D acoustic equation discussed previously. Thus, the 2-D TM electromagnetic imaging algorithm is identical to the 2-D acoustic case discussed in detail above.


The TE mode is given by the following equation:
(E^i_x(ρ), E^i_y(ρ))ᵀ = (E_x(ρ), E_y(ρ))ᵀ − (1/4i) ∫ [ k²_o + ∂²/∂x²   ∂²/∂x∂y
                                                        ∂²/∂y∂x         k²_o + ∂²/∂y² ] γ(ρ′) (E_x(ρ′), E_y(ρ′))ᵀ H₀^(2)(k_o|ρ−ρ′|) d²ρ′  (53)

The field is a two component vector and the Green's function is a 2×2 tensor Green's function:

G = [ g_xx  g_xy ; g_yx  g_yy ] = (1/4i) [ k²_o + ∂²/∂x²   ∂²/∂x∂y ; ∂²/∂y∂x   k²_o + ∂²/∂y² ] H₀^(2)(k_o|ρ−ρ′|)  (54)

In compact notation:
E^i = (I − GΓ)E  (55)

where: E^i = [E^i_x; E^i_y], E = [E_x; E_y], I = [I 0; 0 I], and Γ = [γ 0; 0 γ]


This equation also has a convolution form and can thus be solved by the FFT-BCG algorithm as described in [Borup, 1989]. The construction of the GN-MRCG and RP imaging algorithms for this case is identical to the 2-D acoustic case described above, with the exception that the fields are two component vectors and the Green's operator is a 2×2 component Green's function with convolutional components.


Finally, note that the presence of layering parallel to the z direction can also be incorporated into these 2-D algorithms in essentially the same manner as above. Special care must be taken, of course, to ensure that the proper reflection coefficient is in fact used. The reflection coefficient for the TM case is different from the TE case.


The Electromagnetic Imaging Algorithm


The electromagnetic imaging algorithm is virtually the same as the imaging algorithm for the scalar acoustic mode shown in the previous example, and therefore, will not be shown here. The Microwave Nondestructive Imager is one particular application of this imaging technology, another example is the advanced imaging optical microscope.


EXAMPLE 4
Advanced Imaging Microscope

The microscope imaging algorithm requires the complex scattered wave amplitude or phasor of the light (not the intensity, which is a scalar) at one or more distinct pure frequencies (as supplied by a laser). Since optical detectors are phase insensitive, the phase must be obtained from an interference pattern. These interference patterns can be made using a Mach-Zehnder interferometer. From a set of such interference patterns taken from different directions (by rotating the body, for example), the information to compute the 3-D distribution of electromagnetic properties can be extracted. A prototype microscope system is shown in FIG. 6A and discussed in the Detailed Description Of The Drawings. There are two interferometer paths A and B, where A is the path containing the microscope objective lenses and sample and B is the reference path containing a beam expander to simulate the effect of the objective lenses in path A. These paths are made to be nearly identical. If the paths are equal in length and if the beam expander duplicates the objective lenses, then the light from these two paths has equal amplitude and phase, and the resulting interference pattern will be uniformly bright. When either path is disturbed, an interference pattern will result. In particular, if path A is disturbed by placing a sample between the objective lenses, an interference pattern containing information about the object will result. When path B has an additional phase shift of 90 degrees inserted, the interference pattern will shift (a spatial displacement) correspondingly by 90 degrees.


The problem of extracting the actual complex field on a detector (such as a CCD array face) from intensity measurements can be solved by analysis of the interference patterns produced by the interferometer. It is possible to generate eight different measurements by: (1) use of shutters in path A and path B of the interferometer; (2) inserting or removing the sample from path A; and (3) using 0 or 90 degree phase shifts in path B. It is seen that all eight measurements are necessary.


The procedure to extract amplitude and phase from the interference patterns can be seen after defining variables to describe the eight measurements. Let:

f^(det)A_oφωx = A_oφωx e^(iωt + iα(φ,ω,x))

be the incident field on the detector from path A (with no energy in path B) when no sample is in place,

A_oφωx = |A_oφωx|,
f^(det)B_oφωx = B_1φωx e^(iωt + iβ(φ,ω,x))

be the incident field on the detector from reference path B, when no additional phase shift is introduced in path B (with no energy in path A), B_1φωx = |B_1φωx|,

f^(det)B_oφωx = B_2φωx e^(iωt + iβ(φ,ω,x) + iπ/2)

be the incident field on the detector from reference path B when π/2 additional phase shift is introduced in path B (with no energy in path A),

B_2φωx = |B_2φωx|,

f^(det)C_oφωx = C_o1φωx e^(iωt + iα(φ,ω,x))

be the field on the detector when both path A and path B, are combined when no sample is in place,

C_o1φωx = |C_o1φωx|,

f^(det)C_oφωx = C_o2φωx e^(iωt + iθ(φ,ω,x) + iπ/2)

be the field on the detector when both path A and path B2 are combined when no sample is in place,

C_o2φωx = |C_o2φωx|,

f^(det)A_φωx = A_φωx e^(iωt + iΨ(φ,ω,x))

be the field on the detector from path A (with no energy in path B) when a sample is present,

A_φωx = |A_φωx|,

f^(det)C_φωx = C_1φωx e^(iωt + iσ1(φ,ω,x))

be the field on the detector when both path A and path B1 are combined when the sample is in place,

C_1φωx = |C_1φωx|,

f^(det)C_φωx = C_2φωx e^(iωt + iσ2(φ,ω,x))

be the field on the detector when both path A and path B2 are combined when the sample is in place,

C_2φωx = |C_2φωx|,

φ be the rotation angle of the sample holder


ω be the frequency of the light


x be the 2-D address of a pixel on the detector.


Now consider one pixel, one rotation angle, and one frequency of light, so that we may suppress these indices temporarily. The measurement made by the CCD camera is not of the field but rather of the square of the modulus of the field (i.e., the intensity). Thus, eight intensity fields can be measured on the detector:

M₁² = A_oφωx²   M₂² = B_1φωx²
M₃² = B_2φωx²   M₄² = C_o1φωx²
M₅² = C_o2φωx²  M₆² = A_φωx²
M₇² = C_1φωx²   M₈² = C_2φωx²


Because of the nature of an interferometer, the eight measurements are not independent. In particular, M₄² is dependent on M₁² and M₂²; M₅² is dependent on M₁² and M₃²; M₇² is dependent on M₁², M₂², M₄² and M₆²; and M₈² is dependent on M₁², M₃², M₅² and M₆². Also, M₃² = KM₂² (where K = 1 for a perfect phase shifter). Thus,

B₂² = KB₁²


There are four ways to add the two beams, by specifying sample in or sample out and by specifying a 0-degree or 90-degree phase shift. For sample out:

M₄² = C_o1φωx² = |A_o e^(iωt+iα(φ,ω,x)) + B₁ e^(iωt+iβ(φ,ω,x))|² = A_o² + B₁² + 2A_oB₁ cos(β−α),

M₅² = C_o2φωx² = |A_o e^(iωt+iα(φ,ω,x)) + B₂ e^(iωt+iβ(φ,ω,x)+iπ/2)|² = A_o² + B₂² + 2A_oB₂ cos(β−α+π/2) = A_o² + B₂² − 2A_oB₂ sin(β−α),

(β−α) = −arctan{ M₂[M₅² − M₁² − M₃²] / M₃[M₄² − M₁² − M₂²] }


For sample in:

M₇² = C₁φωx² = |A_φωx e^(iωt+iΨ(φ,ω,x)) + B₁φωx e^(iωt+iβ(φ,ω,x))|² = A² + B₁² + 2AB₁ cos(β−Ψ),

M₈² = C₂φωx² = |A_φωx e^(iωt+iΨ(φ,ω,x)) + B₂φωx e^(iωt+iβ(φ,ω,x)+iπ/2)|² = A² + B₂² − 2AB₂ sin(β−Ψ),

(β−Ψ) = −arctan{ M₂[M₈² − M₆² − M₃²] / M₃[M₇² − M₆² − M₂²] }
(β−α) − (β−Ψ) = (Ψ−α) = phase shift introduced by the sample
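The phase-extraction chain can be checked numerically. The sketch below simulates the eight amplitude measurements from assumed values of A_o, B₁, B₂, A and the phases α, β, Ψ, then recovers Ψ − α with the arctangent expressions above (written so the numerator carries the sine term and the denominator the cosine term; a single-branch arctangent is valid for phase differences below π/2):

```python
import numpy as np

def recover_phase_shift(M):
    """Recover the sample phase shift (Psi - alpha) from the eight
    measured amplitudes M[1..8] (index 0 unused), combining the
    sample-out and sample-in interferometer relations."""
    M1, M2, M3, M4, M5, M6, M7, M8 = M[1:9]
    beta_alpha = -np.arctan(M2 * (M5**2 - M1**2 - M3**2)
                            / (M3 * (M4**2 - M1**2 - M2**2)))
    beta_psi = -np.arctan(M2 * (M8**2 - M6**2 - M3**2)
                          / (M3 * (M7**2 - M6**2 - M2**2)))
    return beta_alpha - beta_psi   # = (beta-alpha) - (beta-Psi) = Psi - alpha
```

For wider phase ranges the same ratios could be fed to a two-argument arctangent, but the single-branch form mirrors the expressions in the text.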

The scattering equations are

f(x) = f^(inc)(x) + C(x−y)*([γ(y)]_D f(y)),

f^(sc-det)(z) = f^(det)(z) − f^(inc)(z) = D(z−x)*([f(x)]_D γ(x))


Here, the symbol * means convolution and the symbol [.]D means the vector [.] is replaced by a diagonal matrix. When written in terms of the phase shift introduced by the sample these equations become:

(f(x)/f^(inc)(x)) − 1 = C(x−y)*([γ(y)]_D (f(y)/f^(inc)(x)))

f^(sc-det)(z)/f^(inc)(z) = (f^(det)(z)/f^(inc)(z)) − 1 = D(z−x)*([f(x)/f^(inc)(z)]_D γ(x))


If the incident field is known, then these equations can be solved. If the incident field is not known explicitly, but is known to be a plane wave incident normal to the detector, then the phase and amplitude are constant and can be divided out of both pairs of equations. It is the latter case that is compatible with the data collected by the interferometer. If the incident field is not a plane wave, then the amplitude and phase everywhere must be estimated, but the constant (reference) phase and amplitude will still cancel. The incident field can be estimated by inserting known objects into the microscope and solving for the incident field from measured interference patterns or from inverse scattering images.


The basic microscope has been described in its basic and ideal form. A practical microscope will incorporate this basic form but will also include other principles to make it easier to use or easier to manufacture. One such principle is immunity to image degradation due to environmental factors such as temperature changes and vibrations. Both temperature changes and vibration can cause anomalous differential phase shifts between the probing beam and reference beam in the interferometer. These effects are minimized by both passive and active methods. Passive methods isolate the microscope from vibrations as much as possible by adding mass and viscous damping (e.g. an air supported optical table). Active methods add phase shifts in one leg (e.g. the reference leg) to cancel the vibration or temperature induced phase shifts. One such method we propose is to add one or more additional beams to actively stabilize the interferometer. This approach will minimize the degree of passive stabilization that is required and enhance the value of whatever minimal passive stabilization is present.


Stabilization will be done by adjusting the mirrors and beam splitters to maintain parallelism and equal optical path length. A minimum of three stabilizing beams per mirror is necessary to accomplish this end optimally (since three points uniquely determine a plane, including all translation). These beams may be placed on the edge of the mirrors and beam splitters at near maximum mutual separation. The phase difference between the main probing beam and main reference beam occupying the center of each mirror and beam splitter will remain nearly constant if the phase difference of the two component beams for each stabilizing beam is held constant. Note that one component beam propagates clockwise in the interferometer, while the other propagates counterclockwise. Stabilization can be accomplished by use of a feedback loop that senses the phase shift between the two component beams for each stabilizing beam, and then adjusts the optical elements (a mirror in our case) to hold each phase difference constant. This adjustment of the optical elements may be done by use of piezoelectric or magnetic drivers.


We now describe how use of the several stabilizing beams operates to stabilize the interferometer and the microscope. Since the stabilization of a single beam is known to those working with optical interferometers, the description given here will be brief. The description is easily seen by a simple mathematical analysis of the interaction of the two components of the stabilizing beam. The electric field of the light at a point in space may be represented

E=Eoexp(i2πft),


where Eo contains the spatial dependence and exp(i2πft) contains the temporal component, where f is the frequency of oscillation and t is time. For the remainder of this discussion we assume exp(i2πft) time dependence at a single frequency f and this factor will be omitted. Let the electric field of the output of the interferometer for one component beam be A and let the electric field output of the second component be B exp iθ. Then the combined output is [A+B exp(iθ)]. The light intensity of the combined output that is detected is

|A + B exp(iθ)|² = A² + B² + 2AB cos θ.


The feedback conditions can be easily derived from the above equation. The feedback signal can be derived by: (1) synchronous detection of the summed beams when one of the beams, say beam B, is modulated; or (2) by use of the differences in intensity of the transmitted and reflected beam exiting the final beam splitter. We now describe the operation of the first method of synchronous detection.


Let θ = (θ_o + ε sin(ωt)) represent phase modulation of the phase θ around the equilibrium phase θ_o by amount ε at frequency ω. The frequency ω is typically 100 Hz to 1000 Hz. Suppose that a phase difference of zero degrees between beam A and beam B is required. Setting θ_o = δ and noting that both ε and δ are small, we see that cos θ = cos(δ + ε sin(ωt)). By use of the trigonometric identity for the cosine of the sum of two angles we get cos θ = cos δ cos(ε sin(ωt)) − sin δ sin(ε sin(ωt)). On expanding this expression to second order in ε and δ we obtain

cos θ ≈ (1 − δ²/2) − (ε²/2)(1 − δ²/2) sin²(ωt) − δε sin(ωt).

We wish to force δ to zero for optimal phase lock. There are two ways to accomplish this end: (1) by maximizing the modulus of the signal component (ε²/2)(1−δ²/2)sin²(ωt); or (2) by minimizing the modulus of the signal component δε sin(ωt). The former method is accomplished by mixing (multiplying) the detected intensity |A+B exp(iθ)|² by sin²(ωt), low pass filtering, and then maximizing the signal by feedback changes to δ.
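The synchronous-detection error signal can be illustrated numerically. In this sketch (all values illustrative), the detected intensity for θ = δ + ε sin(ωt) is mixed with sin(ωt) and low-pass filtered by averaging over whole modulation periods; the result is approximately proportional to −ABεδ, so its sign tells the feedback loop which way to push δ:

```python
import math

def demodulated_error(delta, eps=0.01, A=1.0, B=1.0,
                      periods=10, samples_per_period=400):
    """Mix the detected intensity with sin(wt) and low-pass filter by
    averaging over an integer number of modulation periods.  Near lock
    the average is approximately -A*B*eps*delta: a signed error signal."""
    n = periods * samples_per_period
    total = 0.0
    for k in range(n):
        x = 2.0 * math.pi * k / samples_per_period   # x = omega * t
        theta = delta + eps * math.sin(x)
        intensity = A * A + B * B + 2.0 * A * B * math.cos(theta)
        total += intensity * math.sin(x)
    return total / n

print(demodulated_error(+0.05) < 0.0)      # positive delta -> negative signal
print(demodulated_error(-0.05) > 0.0)      # negative delta -> positive signal
print(abs(demodulated_error(0.0)) < 1e-9)  # at lock the signal nulls
```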


To lock the phase difference to π/2, a modification of the above procedure may be used. Set θ=π/2+δ+ε sin(ωt). Then

cos θ = cos(π/2+δ+ε sin(ωt)) = −sin(δ+ε sin(ωt)) = −sin δ cos(ε sin(ωt)) − cos δ sin(ε sin(ωt)).


On expanding this expression to second order in ε and δ we obtain

cos θ = −(1 − δ²/2)ε sin(ωt) − δ(1 − (ε²/2)sin²(ωt)).


Again, we wish to force δ to zero for optimal phase lock. There are again two ways to accomplish this end: (1) by maximizing the modulus of the signal component (1−δ²/2)ε sin(ωt); or (2) by minimizing the modulus of the signal component

δ(1 − (ε²/2)sin²(ωt)).

The former method is accomplished by mixing the detected intensity with sin(ωt), low pass filtering, and maximizing the result by changing δ through feedback. The latter method is accomplished by mixing the detected intensity with sin²(ωt), low pass filtering, and minimizing the resulting signal by feedback changes to δ.


We now describe a second, non-heterodyning method of phase locking beams A and B to a constant phase difference. The net intensities of the respective transmitted and reflected beams from a beam splitter have different dependences on the phase difference of the two beams it combines. Let the two beams incident on the beam splitter be A and B exp(iθ). Then the two exit beams from the beam splitter have respective intensities |A+B exp(iθ)|² and |A−B exp(iθ)|². This follows from a difference in phase of 180 degrees in the reflected beam, depending on whether the incident medium has a higher or lower dielectric constant than the medium beyond the reflecting surface. The difference of the signals from detectors that detect the transmitted and reflected beams is given by |A+B exp(iθ)|² − |A−B exp(iθ)|² = 4AB cos θ. If the desired phase angle is π/2 and the error is δ, then 4AB cos θ = −4AB sin δ ≈ −4ABδ. Thus, the error signal δ can be used in a feedback loop to drive the difference in the two optical paths to π/2. This technique can be refined by dividing by the product of A and B. Since A is proportional to B, dividing by A² will be equivalent. The value of A is proportional to the sampled laser power that is used to generate beam A and beam B exp(iθ).
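A minimal numerical sketch of this balanced-detector scheme (unit amplitudes and a hypothetical loop gain are illustrative assumptions): the difference of the two detector outputs supplies the error signal, and a proportional feedback loop drives the phase difference to π/2.

```python
import cmath
import math

def balanced_error(A, B, theta):
    """Difference of transmitted and reflected detector intensities:
    |A + B e^{i theta}|^2 - |A - B e^{i theta}|^2 = 4*A*B*cos(theta)."""
    it = abs(A + B * cmath.exp(1j * theta)) ** 2
    ir = abs(A - B * cmath.exp(1j * theta)) ** 2
    return it - ir

# simple proportional feedback loop driving theta toward pi/2
A = B = 1.0
theta = math.pi / 2 + 0.3        # start with a 0.3 rad phase error
gain = 0.1                       # hypothetical loop gain
for _ in range(200):
    # near pi/2, 4AB*cos(theta) = -4AB*sin(delta) ~ -4AB*delta,
    # so adding gain*error shrinks the error delta each step
    theta += gain * balanced_error(A, B, theta)
print(abs(theta - math.pi / 2) < 1e-6)  # prints True
```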


EXAMPLE 5
Extension to Biot Theory (Acoustic Approximation)

In the article [Boutin, 1987, Geophys. J. R. astr. Soc., vol. 90], herein incorporated by reference, a Green's function is developed for the determination of the response to a scattering point located in an isotropic, homogeneous, porous medium that supports elastic waves. The implementation of this Green's function in an imaging algorithm has never been carried out before. In this section, we have adapted that approach to derive an acoustic approximation to the fully elastic Biot theory, which enables us to present a simplified practical tool for the imaging of porosity-like parameters in a geophysical context. The implementation of the full elastic Biot imaging algorithm using the Green's function of [Boutin, 1987] in place of the one derived in [Wiskin, 1992] is no different from the discretization employed here. The use of the sinc basis functions, the FFTs, the biconjugate gradients, and so on, is identical. We have developed a model that incorporates the parameters of fluid content, porosity, permeability, etc., but instead of the 3+1 degrees of freedom of the two-phase elastic model, has only 1+1, corresponding to the two independent variables Ps′ and Pl′, the pressure fields within the solid (modelled as a liquid in this approximation) and the truly liquid phase, respectively. As with the previous example, the use of the Biot theory Green's function can be combined profitably with layering, to image material in a layered background.


Define (see [Boutin, 1987] for a more complete discussion of the parameters):

    • ω as the frequency of the interrogating wave
    • ρl as the density of the liquid phase
    • ρs as the density of the solid phase
    • n as a saturation parameter
    • K(ω) as the generalized, explicitly frequency-dependent Darcy coefficient introduced via homogenization theory.


α₀, θ, and ρ̄ are parameters defined in the following way:

ρ̄ = (1−n)ρ_s + ρ_l[n + ρ_lω²K(ω)/iω]

θ = K(ω)/iω

α₀ = α + ρ_lω²K(ω)/iω = α + ρ_lω²θ


The “acoustic two phase Green's function” is the Green's function obtained by solving the above system with a distributional right-hand side, where δ(x) is the Dirac delta distribution. That is, we seek the solution to the following matrix operator equation: BG_Biot + I_{2×2}δ(x) = 0, where BG_Biot is a function on R³ or R² with values in R², and I_{2×2} is the two-dimensional identity matrix. B is given by:
$$B=\begin{bmatrix}\theta\nabla^{2}-\beta & -\alpha_{0}\\ -\alpha_{0} & \lambda\nabla^{2}+\omega^{2}\bar{\rho}\end{bmatrix}$$

The author obtained
$$G_{Biot}=\frac{1}{4\pi\theta\lambda\alpha_{0}(\delta_{2}^{2}-\delta_{1}^{2})}\left\{(w\cdot d)\begin{bmatrix}\lambda & \alpha_{0}\\ 0 & \theta\end{bmatrix}+(w\cdot v)\begin{bmatrix}\bar{\rho}\omega^{2} & 0\\ \alpha_{0} & -\bar{\beta}\end{bmatrix}\right\}$$

where w: R³→R² is the 2-component vector defined by:
$$w=\begin{pmatrix}e^{-i\delta_{1}|x|}/|x|\\ e^{-i\delta_{2}|x|}/|x|\end{pmatrix}$$

and x∈R³ for the acoustic two-phase 3D model, and where d and v are 2-component vectors (representing the two phases) given by
$$d=\begin{pmatrix}-\delta_{1}^{2}\\ \delta_{2}^{2}\end{pmatrix},\qquad v=\begin{pmatrix}1\\ -1\end{pmatrix}$$


A very similar analysis gives an equation and corresponding Green's operator for the acoustic two-phase 2D (two spatial independent variables) case, in which w: R²→R² and x∈R². The operator is very similar except that the w vector contains Hankel functions of the zeroth order:
$$w=\begin{pmatrix}\tfrac{i}{4}H_{0}^{(2)}(\delta_{1}|x|)\\ \tfrac{i}{4}H_{0}^{(2)}(\delta_{2}|x|)\end{pmatrix}$$


Notice that the existence of the two wavenumbers δ₁ and δ₂ in this acoustic approximation to Biot theory guarantees the existence of a fast and a slow compressional wave, a distinguishing characteristic of the classical elastic Biot theory.


More importantly, this algorithm provides us with a computationally feasible means of inverting for phenomenological parameters derived from the classical elastic two-phase Biot theory.


Finally, it is important to point out that convolving the above functions with the sinc basis functions analytically is done in exactly the same manner with G_Biot = G_{2×2} above as it was done above, because x occurs only in the form of the scalar acoustic Green's operator. This is why it is possible to create very efficient algorithms based upon the sinc-Galerkin method and FFTs as described in [Borup, 1992], using the Green's operator described above.


It is now possible to obtain an algorithm for the inversion of the 2D Biot porous medium by analogy with the algorithm based upon the Lippmann-Schwinger equation used earlier. This new algorithm is based upon a “two phase” Lippmann-Schwinger type equation derived in a future publication.
$$r^{0}(x)=r(x)-\begin{bmatrix}\bar{b}_{1} & 0\\ 0 & \bar{b}_{2}\end{bmatrix}\int G_{2\times2}(x-\xi)\,D(\xi)\,r(\xi)\,d^{3}\xi,\qquad\text{where}\quad D(x)=\begin{bmatrix}\gamma_{1} & 0\\ 0 & \gamma_{2}\end{bmatrix}$$

The object functions γj are given by
$$\gamma_{j}(x)\equiv(-1)^{j}\left(\frac{b_{j}(x)}{\bar{b}_{j}}-1\right),\qquad j=1,2$$


This equation is discretized in the same manner as above, and the convolution character is preserved. With the explicit form of the discretized Biot Lippmann-Schwinger Integral Equation the imaging problem is solved by applying the analogous conjugate gradient iterative solution method to the Gauss-Newton Equations in a manner exactly analogous to the discussion above (see future publication for details).


Finally, as mentioned above, it is certainly possible to construct, using the above principles, a layered Green's function for the Biot theory in exact analogy with the construction for the acoustic layered Green's Operator. This operator will be a matrix operator (as is the acoustic two phase Green's Operator constructed above), and will consist of a convolutional and correlational part (as does the acoustic single phase Green's Operator constructed earlier). Because of these convolutional and correlational parts, the FFT methods discussed above are directly applicable, making the algorithm feasible from a numerical standpoint.


EXAMPLE 6
Non-Perturbative Inversion of Elastic Inhomogeneous Structures Buried within a Layered Half Space

In elastic media (by convention in this patent, media that support shear wave activity) the relevant parameters are λ and μ (the Lamé parameters), ρ (density), and absorption. The inversion for these elastic parameters (i.e., the first and second Lamé parameters λ and μ) follows virtually the same prescription as was outlined above for the acoustic scalar case. In a manner similar to the electromagnetic inversion problem discussed above, it is possible to break up the arbitrary three-dimensional vector u(x,y,z), representing the displacement, into components that propagate independently. The exact description of this procedure is given in [Wiskin, 1991] and [Muller, 1985]. This example will not deal with this decomposition, since the idea behind it is more easily seen by looking at the electromagnetic example given earlier.


The idea here is to incorporate the above solution to the layered inversion problem directly into the Green's operator. As before, in the electromagnetic and the acoustic case, this Green's function is used in the integral equation formulation of the inversion problem of imaging an inhomogeneous body extending over an arbitrary number of layers.


The following is the general elastic partial differential equation which governs the displacement vector field u in the general case where λ and μ both depend upon x∈R³. When λ and μ are independent of x, we have the homogeneous elastic case.
$$-\omega^{2}\rho(x)u_{i}(x)-\frac{\partial}{\partial x_{i}}\left[\lambda(x)\frac{\partial}{\partial x_{k}}u_{k}(x)\right]+\frac{\partial}{\partial x_{k}}\left[\mu(x)\left\{\frac{\partial}{\partial x_{i}}u_{k}(x)+\frac{\partial}{\partial x_{k}}u_{i}(x)\right\}\right]=f_{i}(x)$$

f_i(x) represents the applied body force, and μ(x) and λ(x) are the Lamé parameters; their dependence upon x∈R³ is the result of both the inhomogeneity to be imaged and the ambient layered medium. In particular, u_i(y⃗), y⃗∈R³, is the ith component (i=1, 2, 3) of the displacement field at the point y⃗, and u_i⁰(y⃗) is the ith component of the incident field.

ρ₁(x⃗) + ρ₀(z) = ρ(x⃗)

is the total density variation; it consists of the 3-D variation in ρ₁ and the vertical 1-D variation in ρ₀.
λ₁(x⃗) + λ₀(z) = λ(x⃗)

is the total variation in λ, the first Lamé parameter; λ₁ has 3-D variation, and λ₀ has 1-D vertical variation.

μ₁(x⃗) + μ₀(z) = μ(x⃗)

is the total variation in the second Lamé parameter μ. The μ₁ has variation in 3-D, the μ₀ has variation in 1-D (vertical).

ρ(x⃗) = density = ρ₁(x⃗) + ρ₀(z),

where ρ₀(z) represents the layered medium without the object, and ρ₁(x⃗) represents the object to be imaged.


λ₀(z) and μ₀(z) are the z-dependent Lamé parameters representing the layered medium.


λ₁(x⃗) and μ₁(x⃗) are the Lamé parameters representing the object to be imaged.


The above differential equation is converted to the following elastic “generalized Lippmann-Schwinger equation”
$$u_{i}(\vec{y})=u_{i}^{0}(\vec{y})-\int_{vol}G_{im}^{(LAY)}(\vec{y},\vec{x})\,u_{m}(\vec{x})\,d\vec{x},\qquad i=1,2,3$$

by means of the kernel G_im^(LAY)(y⃗,x⃗), which is constructed below. The volume integral is performed over the finite volume that contains the image space (and the object being imaged, for example, an ore body, submarine mine, oil reservoir, etc.). This is the basic integral equation which forms the theoretical foundation for the layered Green's operator approach to inverse scattering in a layered medium.


This kernel is a 3 by 3 matrix of functions which is constructed by a series of steps:


Step 1:


Beginning with the acoustic (scalar or compressional) Green's function given by
$$G(k_{0}|\vec{r}-\vec{r}'|)=\frac{e^{ik_{0}|\vec{r}-\vec{r}'|}}{4\pi|\vec{r}-\vec{r}'|}$$

Step 2:


The free space elastic Green's matrix is a 3 by 3 matrix of functions, built up from the free space Green's function. Its component functions are given as
$$G_{im}(k_{T},k_{L};|\vec{r}-\vec{r}'|)=\mu\,G(k_{T}|\vec{r}-\vec{r}'|)\,\delta_{im}+\frac{\mu}{k_{T}^{2}}\frac{\partial^{2}}{\partial x_{i}\partial x_{m}}\left[G(k_{T}|\vec{r}-\vec{r}'|)-G(k_{L}|\vec{r}-\vec{r}'|)\right]$$

Step 3:


Next the layered Green's function G_im^L(y⃗,x⃗) for a layered elastic medium is defined. It is defined in terms of G_im, the components i,m=1, …, 3 of the elastic Green's function in homogeneous elastic media given above. The components of the layered Green's matrix are integrals over Bessel functions and reflection coefficients, in essentially the same manner that the acoustic layered Green's function consisted of integrals, over wavenumber, of the acoustic reflection coefficients. This dyadic is patterned after [Muller, 1985], in the manner discussed in [Wiskin, 1992].


Step 4:


Finally the layered Green's kernel G_im^(LAY) is constructed, for x⃗∈R³:
$$G_{im}^{(LAY)}(\vec{y},\vec{x})=-\omega^{2}G_{im}^{L}(\vec{y},\vec{x})\,\rho_{1}(\vec{x})-\frac{\partial}{\partial x_{m}}\left\{\left(\frac{\partial}{\partial x_{j}}G_{ij}^{L}(\vec{y},\vec{x})\right)\lambda_{1}(\vec{x})\right\}+\frac{\partial}{\partial x_{j}}\left\{\left(\frac{\partial}{\partial x_{m}}G_{ij}^{L}(\vec{y},\vec{x})+\frac{\partial}{\partial x_{j}}G_{im}^{L}(\vec{y},\vec{x})\right)\mu_{1}(\vec{x})\right\}$$

where G_im^L(y⃗,x⃗) is the layered Green's function for the elastic layered medium. The progressive constructions can be represented in the following way, beginning with the acoustic free space Green's function:
$$G(k_{0}|\vec{r}-\vec{r}'|)=\frac{e^{ik_{0}|\vec{r}-\vec{r}'|}}{4\pi|\vec{r}-\vec{r}'|}\;\longrightarrow\;G_{im}(\vec{y},\vec{x})\;\longrightarrow\;G_{im}^{L}(\vec{y},\vec{x})\;\longrightarrow\;G_{im}^{(LAY)}(\vec{y},\vec{x})$$


Alternatively, in words:

Acoustic free space Green's function → elastic free space Green's function → elastic layered Green's function → kernel for the elastic layered Lippmann-Schwinger equation


Using the last “kernel” in this series in the generalized elastic layered Lippmann-Schwinger Equation gives the vector displacement.
$$u_{i}(\vec{y})=u_{i}^{0}(\vec{y})-\int_{vol}G_{im}^{(LAY)}(\vec{y},\vec{x})\,u_{m}(\vec{x})\,d\vec{x}$$

which is then discretized using the sinc basis in exactly the same way as for the previous examples; the FFT, biconjugate gradient, and conjugate gradient algorithms are then applied to this vector equation, in the same way as done above. Thus the elastic modality (including shear waves) is accounted for.


The construction of the layered Green's function in the full elastic case (with shear wave energy included) is slightly more complicated than the purely scalar case. For this reason, we look at the construction in more detail; see also [Wiskin, 1993].


For discretization of the resulting equations see the electromagnetic case discussed in example 3.


The Construction of the Continuous Form of the Layered Green's Operator G^L in the 3-D Case with Full Elastic Mode Conversion


Now we proceed with the construction of the continuous variable form of the Layered Green's operator. The closed form expression for the discretized form of the bandlimited approximation to this continuous variable Green's operator is constructed in a manner exactly analogous to the vector electromagnetic case discussed in example 3.


By analogy with the acoustic (scalar) situation, the construction of the layered Green's operator can be viewed as a series of steps beginning with the 3D free space Green's operator.


Step a) Decompose a unit strength point source into a plane wave representation (Weyl-Sommerfeld Integral). For the elastodynamic (vector) case we must consider the three perpendicular directions of particle motion, which we take to be the horizontal direction, the direction of propagation, and the direction perpendicular to the previous two. Muller has given the representation for a point source in elastic media in [Muller, 1985]


Step b) Propagate and reflect the plane waves through the upper and lower layers to determine the total field at the response position. The total field will consist of two parts: u^up(r), the upward propagating particle velocity, and u^d(r), the downward propagating particle velocity at position r. This propagation and reflection is carried out analytically by means of the reflection coefficients, which in the elastic case are the matrices R⁻ and R⁺ (2×2 matrices) and the scalars r⁻ and r⁺. The matrices correspond to the P-SV waves (compressional and vertically polarized shear waves) for the case of horizontal stratification (x is the horizontal, and z the vertical coordinate, positive downward). The scalar coefficients correspond to the SH (horizontally polarized) shear waves, which propagate without mode conversion to the other types for the case of horizontally layered media.


R⁻ and r⁻ represent the wave field reflected from the layers below the source position, and R⁺ and r⁺ represent the cumulative reflection coefficient from the layers above the source position, for the matrix and scalar cases respectively.


These total reflection coefficients are formed recursively from the standard reflection coefficients found in the literature (e.g., Muller [1985], and Aki and Richards, Quantitative Seismology, Freeman and Co., 1980, herein included as a reference).


FIGS. 1-C and 27 should be kept in mind throughout the following discussion. As in the scalar case, in order to have a single reference position, R⁺ and R⁻ are both given with respect to the top of the layer that contains the scattering point, which is denoted by “sc”. That is, R⁺ represents the reflection coefficient for an upgoing wave at the interface between layer sc and layer sc−1. FIG. 27 displays the relevant geometry.


In our case the expressions S_u and S_d (and their matrix counterparts) are those derived from the Sommerfeld integral representation of a point source of unit strength, given in Muller [see also Aki and Richards, 1980]. For example:

S_d = e^{iωb_sc(z′−z_sc)} = e^{iωb_scΔz′}
S_u = e^{−iωb_sc(z′−z_sc)} = e^{−iωb_scΔz′}

for the scalar case.


As shown in detail by Muller, the total contribution of the upward travelling wave is given by

s_u + r⁻r⁺s_u + (r⁻r⁺)²s_u + (r⁻r⁺)³s_u + … = [1 + r⁻r⁺ + (r⁻r⁺)² + (r⁻r⁺)³ + …]s_u = [1 − r⁻r⁺]⁻¹s_u

for the transverse shear wave (SH or horizontal shear wave), and

S_u + R⁻R⁺S_u + (R⁻R⁺)²S_u + (R⁻R⁺)³S_u + … = [1 + R⁻R⁺ + (R⁻R⁺)² + (R⁻R⁺)³ + …]S_u = [1 − R⁻R⁺]⁻¹S_u

for the P-SV matrix case. Note that some care must be exercised regarding the convergence of the matrix series; however, for practical situations we can omit this detail.
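The geometric series of multiple reflections sums to the closed form [1 − R⁻R⁺]⁻¹S_u, which can be checked numerically for the matrix case. In this sketch the 2×2 reflection matrices and the source vector are arbitrary illustrative values chosen so that the series converges (spectral radius of R⁻R⁺ well below one):

```python
def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(X, v):
    """2x2 matrix times 2-vector."""
    return [sum(X[i][k] * v[k] for k in range(2)) for i in range(2)]

Rm = [[0.2, 0.1], [0.05, 0.3]]    # hypothetical R- (illustrative values)
Rp = [[0.25, 0.0], [0.1, 0.2]]    # hypothetical R+
Su = [1.0, 0.5]                   # hypothetical upgoing source term

P = matmul(Rm, Rp)
# partial sums of the Neumann series Su + P Su + P^2 Su + ...
term, series = Su[:], Su[:]
for _ in range(60):
    term = matvec(P, term)
    series = [series[i] + term[i] for i in range(2)]

# direct solve of (I - P) x = Su using the closed-form 2x2 inverse
a, b = 1 - P[0][0], -P[0][1]
c, d = -P[1][0], 1 - P[1][1]
det = a * d - b * c
x = [( d * Su[0] - b * Su[1]) / det,
     (-c * Su[0] + a * Su[1]) / det]
print(all(abs(series[i] - x[i]) < 1e-12 for i in range(2)))  # prints True
```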


Step c) The process of forming the total field at the response point must be broken up into two cases:


Case 1: the response point is above the scattering point, that is, Δz−Δz′<0 (Δz is the distance from the interface above the scattering layer to the response point; the z axis is positive downward), and


Case 2: the response point is below the scattering point, that is, Δz−Δz′>0. Furthermore, each case consists of an upward travelling and a downward travelling wave:


Upward travelling wavefield:


First, in case 1:


[1 − R⁻R⁺]⁻¹S_u and [1 − r⁻r⁺]⁻¹s_u represent the contribution to u^up from the upward travelling part of the source. A similar expression can be formed for the contribution from the downward travelling part of the source; it is

[1 − R⁻R⁺]⁻¹R⁻S_d, and [1 − r⁻r⁺]⁻¹r⁻s_d


Thus, the total upward component of the wave field at the response point is formed from:
$$u^{up}=\begin{pmatrix}(1-R^{-}R^{+})^{-1}(S_{u}+R^{-}S_{d})\\ (1-r^{-}r^{+})^{-1}(s_{u}+r^{-}s_{d})\end{pmatrix}$$
(the first row is the P-SV matrix case, the second the scalar SH case).


Case 2: the response point is below the scattering point, that is, Δz−Δz′>0.


The result here is similar, the change occurring in the coefficients of S_u and s_u:
$$u^{up}=\begin{pmatrix}(1-R^{-}R^{+})^{-1}(R^{-}R^{+}S_{u}+R^{-}S_{d})\\ (1-r^{-}r^{+})^{-1}(r^{-}r^{+}s_{u}+r^{-}s_{d})\end{pmatrix}$$

Downward Component of Wavefield


Case 1: A similar expression gives the downward component of the total wave field at the response point r, for case 1:
$$u^{down}=\begin{pmatrix}(1-R^{+}R^{-})^{-1}R^{+}R^{-}S_{d}\\ (1-r^{+}r^{-})^{-1}r^{+}r^{-}s_{d}\end{pmatrix}+\begin{pmatrix}(1-R^{+}R^{-})^{-1}R^{+}S_{u}\\ (1-r^{+}r^{-})^{-1}r^{+}s_{u}\end{pmatrix}$$
or
$$u^{down}=\begin{pmatrix}(1-R^{+}R^{-})^{-1}(R^{+}R^{-}S_{d}+R^{+}S_{u})\\ (1-r^{+}r^{-})^{-1}(r^{+}r^{-}s_{d}+r^{+}s_{u})\end{pmatrix}$$


Case 2: For case 2 the result is similar; here the response point resides below the scattering point, that is, Δz−Δz′>0:
$$u^{down}=\begin{pmatrix}(1-R^{+}R^{-})^{-1}(S_{d}+R^{+}S_{u})\\ (1-r^{+}r^{-})^{-1}(s_{d}+r^{+}s_{u})\end{pmatrix}$$


Step d) The final step in the process is the recombination of the plane waves to obtain the total wavefield (upgoing and downgoing waves) at the response position:


For the scalar case 1


Δz−Δz′<0, and:

G^L(x−x′, y−y′, z/z′) = u^up + u^d,
with
u^up = ∫ J₀(uω|r−r′|) [1 − R⁻R⁺]⁻¹ [S_u + R⁻S_d] e^{iωb_sc(z−z_sc)} du
u^d = ∫ J₀(uω|r−r′|) [1 − R⁻R⁺]⁻¹ [R⁺S_u + R⁺R⁻S_d] e^{−iωb_sc(z−z_sc)} du

so that

G^L(x−x′, y−y′, z/z′) = ∫ J₀(uω|r−r′|) [1 − R⁻R⁺]⁻¹ [S_u + R⁻S_d] {e^{iωb_sc(z−z_sc)} + R⁺e^{−iωb_sc(z−z_sc)}} du

While for case 2, Δz−Δz′>0, similar algebra gives:

G^L(x−x′, y−y′, z/z′) = ∫ J₀(uω|r−r′|) [1 − R⁻R⁺]⁻¹ [S_d + R⁺S_u] {e^{−iωb_sc(z−z_sc)} + R⁻e^{iωb_sc(z−z_sc)}} du

For case 1 this can be rewritten as:
$$G^{L}(r-r',z/z')=A\int_{u=0}^{\infty}C(u;|r-r'|)\left\{S_{u}e^{i\omega b_{sc}\Delta z}+R^{-}S_{d}e^{i\omega b_{sc}\Delta z}+R^{+}S_{u}e^{-i\omega b_{sc}\Delta z}+R^{+}R^{-}S_{d}e^{-i\omega b_{sc}\Delta z}\right\}du$$

where the coefficient function C(u; |r−r′|) is given by:
$$C(u;|r-r'|)=J_{0}(u\omega|r-r'|)\,(1-R^{-}R^{+})^{-1}\,\frac{i\omega u}{b_{sc}}$$

For case 2, GL can be written as:

G^L(x−x′, y−y′, z/z′) = ∫ J₀(uω|r−r′|) (1 − R⁻R⁺)⁻¹ {R⁺R⁻S_u e^{iωb_scΔz} + R⁻S_d e^{iωb_scΔz} + R⁺S_u e^{−iωb_scΔz} + S_d e^{−iωb_scΔz}} du


Finally, using the definitions for Su and Sd and recognizing that products such as SdeiωbscΔz can be rewritten as

S_d e^{iωb_scΔz} = e^{iωb_scΔz′} e^{iωb_scΔz} = e^{iωb_sc(Δz+Δz′)}

the operator GL turns out (after rearrangement of the above equations) to be the sum of a convolution and a correlation kernel:

G^L(r−r′, z/z′) = G^R(r−r′, z+z′) + G^V(r−r′, z−z′).


Case I has (Δz−Δz′)≦0, where Δz=z−z_sc and Δz′=z′−z_sc, with z_sc being the z co-ordinate of the top of the layer that contains the scatterer. In fact G^R and G^V turn out to be given by the following formulae (they appear to differ from the integrals above because of the rearrangement that leads to the decomposition into convolutional and correlational parts):
$$G^{R}(r-r',\Delta z/\Delta z')=A\int_{u=0}^{\infty}J_{0}(u\omega|r-r'|)\,C(u)\left\{R^{+}e^{-i\omega b_{sc}(\Delta z+\Delta z')}+R^{-}e^{i\omega b_{sc}(\Delta z+\Delta z')}\right\}du$$
$$G^{V}(r-r',\Delta z/\Delta z')=A\int_{u=0}^{\infty}J_{0}(u\omega|r-r'|)\,C(u)\left\{R^{+}R^{-}e^{-i\omega b_{sc}(\Delta z-\Delta z')}+e^{i\omega b_{sc}(\Delta z-\Delta z')}\right\}du$$

where, now:
$$C(u)=(1-R^{+}R^{-})^{-1}\,\frac{i\omega u}{b_{sc}}$$


R⁻ and R⁺ are the recursively defined reflectivity coefficients described in Muller's paper,


u is the horizontal slowness,
u = sin θ / c

Recall that Snell's law guarantees that u will remain constant as a given incident plane wave passes through several layers.
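This invariance of the horizontal slowness is easy to verify numerically (the layer wavespeeds below are hypothetical; any consistent units work):

```python
import math

# horizontal slowness u = sin(theta)/c is preserved across layers
c = [1500.0, 1800.0, 2400.0]      # hypothetical layer wavespeeds (m/s)
theta0 = math.radians(30.0)       # incidence angle in layer 0
u = math.sin(theta0) / c[0]       # fixed by the incident plane wave

# refraction angle in each layer follows from Snell's law
angles = [math.asin(u * ci) for ci in c]

# recomputing u in every layer returns the same value
same = all(abs(math.sin(t) / ci - u) < 1e-15 for t, ci in zip(angles, c))
print(same)  # prints True
```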


ω is the frequency


bsc is the vertical slowness for the particular layer hosting the scattering potential


Case II consists of the case where (Δz−Δz′)≧0. The G^R and G^V are now given by the following:
$$G^{V}(r-r',\Delta z/\Delta z')=A\int_{u=0}^{\infty}J_{0}(u\omega|r-r'|)\,C(u)\left\{R^{+}R^{-}e^{i\omega b_{sc}(\Delta z-\Delta z')}+e^{-i\omega b_{sc}(\Delta z-\Delta z')}\right\}du$$
and
$$G^{R}(r-r',\Delta z/\Delta z')=A\int_{u=0}^{\infty}J_{0}(u\omega|r-r'|)\,C(u)\left\{R^{-}e^{i\omega b_{sc}(\Delta z+\Delta z')}+R^{+}e^{-i\omega b_{sc}(\Delta z+\Delta z')}\right\}du$$


These expressions can be combined into one equation by the judicious use of absolute values. The correlational part of the Green's operator can be transformed into a convolution by a suitable change of variables. The resulting Green's function for imaging inhomogeneities residing within ambient layered media must be discretized by convolving with the sinc (really jinc) basis functions described below. This is done analytically and the result is given below.
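The change of variables is simple to demonstrate in discrete form: a correlation kernel evaluated at z+z′ becomes an ordinary convolution once the input is index-reversed, so the same fast (FFT) machinery serves both the convolutional and the correlational parts. The kernel and signal below are arbitrary illustrative values:

```python
n = 6
f  = [1.0, -2.0, 0.5, 3.0, 0.0, 1.5]          # illustrative input samples
gr = [1.0 / (1 + m) for m in range(2 * n - 1)]  # hypothetical kernel G_R(z+z')

# direct correlation: out[z] = sum_{z'} G_R(z+z') f(z')
corr = [sum(gr[z + zp] * f[zp] for zp in range(n)) for z in range(n)]

# same result as a convolution against the reversed signal:
# with f_rev[k] = f[n-1-k], sum_k gr[(z-k)+(n-1)] f_rev[k] = corr[z]
frev = f[::-1]
conv = [sum(gr[(z - k) + (n - 1)] * frev[k] for k in range(n))
        for z in range(n)]

print(all(abs(corr[z] - conv[z]) < 1e-12 for z in range(n)))  # prints True
```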


The same process is carried out as detailed above, in order to determine the P-SV total wavefield (the 2 by 2 matrix case), and is not repeated.


The matrix case is handled in exactly the same manner, with allowance for the differing algebra associated with matrices; the convolutional structure is preserved for the same reason as shown above in the scalar situation.


On avoiding convergence problems due to the presence of derivatives in the elastic Green's function (i.e. with shear wave motion)


One difficulty with this formulation is the presence of four derivatives acting upon the acoustic free space Green's function in the construction of the dyadic G^(LAY). We have overcome this problem (see related code in Appendix E) by the following method:


Given an inhomogeneous distribution of isotropic density ρ and Lamé parameters λ and μ, imbedded in a homogeneous medium with density ρ0 and Lamé parameters λ0 and μ0, the total infinitesimal displacement vector u satisfies the partial differential equation

ω²ρu_i + (λu_{j,j})_{,i} + [μ(u_{i,j}+u_{j,i})]_{,j} = 0  (1)

while the incident displacement field u_i^i satisfies

ω²ρ₀u_i^i + (λ₀u_{j,j}^i)_{,i} + [μ₀(u_{i,j}^i+u_{j,i}^i)]_{,j} = 0  (2)

where ρ0, λ0, and μ0 are the homogeneous parameters of the imbedding medium. Subtracting (2) from (1) and rearranging results in
$$u_{i}^{s}+\left(\frac{1}{k_{p}^{2}}-\frac{2}{k_{s}^{2}}\right)(u_{j,j}^{s})_{,i}+\frac{1}{k_{s}^{2}}\left[u_{i,j}^{s}+u_{j,i}^{s}\right]_{,j}=-f_{i}\qquad(3)$$

for the scattered displacement field u^s = u − u^i. The inhomogeneous term f is given by
$$f_{i}=\left(\frac{\rho}{\rho_{0}}-1\right)u_{i}+\left(\frac{1}{k_{p}^{2}}-\frac{2}{k_{s}^{2}}\right)\left\{\left(\frac{\lambda}{\lambda_{0}}-1\right)u_{j,j}\right\}_{,i}+\frac{1}{k_{s}^{2}}\left\{\left(\frac{\mu}{\mu_{0}}-1\right)(u_{i,j}+u_{j,i})\right\}_{,j}\qquad(4)$$

where the respective shear-wave and compression-wave velocities cs and cp and corresponding wave numbers ks and kp are given by
$$c_{s}^{2}=\frac{\mu_{0}}{\rho_{0}};\qquad k_{s}^{2}=\frac{\omega^{2}}{c_{s}^{2}};\qquad c_{p}^{2}=\frac{\lambda_{0}+2\mu_{0}}{\rho_{0}};\qquad k_{p}^{2}=\frac{\omega^{2}}{c_{p}^{2}}\qquad(5)$$


for the imbedding medium. Introducing the scattering potentials
$$\gamma_{\rho}=\frac{\rho}{\rho_{0}}-1;\qquad\gamma_{\lambda}=\left(\frac{1}{k_{p}^{2}}-\frac{2}{k_{s}^{2}}\right)\left(\frac{\lambda}{\lambda_{0}}-1\right);\qquad\gamma_{\mu}=\frac{1}{k_{s}^{2}}\left(\frac{\mu}{\mu_{0}}-1\right)\qquad(6)$$
gives
$$f_{i}=\gamma_{\rho}u_{i}+\{\gamma_{\lambda}u_{j,j}\}_{,i}+\{\gamma_{\mu}(u_{i,j}+u_{j,i})\}_{,j}\qquad(7)$$

Solution of (3) by the introduction of the free-space Green's function results

u_i^s = G_{ij} * f_j = ∫_v f_j(r′) G_{ij}(r−r′) dv′  (8)

where * denotes 3-D convolution, v is the support of the inhomogeneity, and the Green's function is given by [Aki and Richards, Quantitative Seismology, Freeman and Co., 1980, herein included as a reference]:
$$G_{ij}(r-r')=k_{s}^{2}\,g(k_{s}|r-r'|)\,\delta_{ij}+\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}\left\{g(k_{s}|r-r'|)-g(k_{p}|r-r'|)\right\}\qquad(9)$$

where g(kR) is the scalar Helmholtz Green's function
g(kR) = e^{ikR}/(4πR)  (10)

where e^{−iωt} time dependence has been assumed. Inserting (7) into (8) and integrating by parts yields the following integral wave equation:

u_i^s = G_{ij}*{γ_ρu_j} + G_{ij,j}*{γ_λu_{k,k}} + G_{ij,k}*{γ_μ(u_{j,k}+u_{k,j})}  (11)

For now, consider the case where γμ=0 and note that:
$$G_{ij,j}=k_{s}^{2}\frac{\partial}{\partial x_{i}}g(k_{s}R)+\frac{\partial}{\partial x_{i}}\left(\nabla^{2}g(k_{s}R)-\nabla^{2}g(k_{p}R)\right)\qquad(12)$$

using

(∇2+k2)g(kR)=−δ(|r−r′|)  (13)

reduces (12) to:
$$G_{ij,j}=k_{p}^{2}\frac{\partial}{\partial x_{i}}g(k_{p}R)\qquad(14)$$

which will henceforth be denoted Ci. We have now arrived at the integral equation:
$$\begin{bmatrix}u_{x}^{i}\\ u_{y}^{i}\\ u_{z}^{i}\end{bmatrix}=\begin{bmatrix}u_{x}\\ u_{y}\\ u_{z}\end{bmatrix}-\begin{bmatrix}G_{xx} & G_{xy} & G_{xz} & C_{x}\\ G_{yx} & G_{yy} & G_{yz} & C_{y}\\ G_{zx} & G_{zy} & G_{zz} & C_{z}\end{bmatrix}*\begin{bmatrix}\gamma_{\rho} & 0 & 0 & 0\\ 0 & \gamma_{\rho} & 0 & 0\\ 0 & 0 & \gamma_{\rho} & 0\\ 0 & 0 & 0 & \gamma_{\lambda}\end{bmatrix}\begin{bmatrix}u_{x}\\ u_{y}\\ u_{z}\\ \nabla\cdot u\end{bmatrix}\qquad(15)$$

since u_{k,k} = ∇·u.


The integral equation (15) can be solved by application of the 3-D FFT to compute the indicated convolutions, coupled with the biconjugate gradient iteration or a similar conjugate gradient method [Jacobs, 1981]. One problem with (15), however, is the need to compute ∇·u at each iteration. Various options for this include taking finite differences or the differentiation of the basis functions (sinc functions) by FFT. Our experience with the acoustic integral equation in the presence of density inhomogeneity indicates that it is best to avoid numerical differentiation. Instead, the system is augmented as:
$$\begin{bmatrix}u_{x}^{i}\\ u_{y}^{i}\\ u_{z}^{i}\\ \nabla\cdot u^{i}\end{bmatrix}=\begin{bmatrix}u_{x}\\ u_{y}\\ u_{z}\\ \nabla\cdot u\end{bmatrix}-\begin{bmatrix}G_{xx} & G_{xy} & G_{xz} & C_{x}\\ G_{yx} & G_{yy} & G_{yz} & C_{y}\\ G_{zx} & G_{zy} & G_{zz} & C_{z}\\ C_{x} & C_{y} & C_{z} & D\end{bmatrix}*\begin{bmatrix}\gamma_{\rho} & 0 & 0 & 0\\ 0 & \gamma_{\rho} & 0 & 0\\ 0 & 0 & \gamma_{\rho} & 0\\ 0 & 0 & 0 & \gamma_{\lambda}\end{bmatrix}\begin{bmatrix}u_{x}\\ u_{y}\\ u_{z}\\ \nabla\cdot u\end{bmatrix}\qquad(16)$$
where
$$D=C_{x,x}+C_{y,y}+C_{z,z}=k_{p}^{2}\nabla^{2}g(k_{p}R)=-k_{p}^{2}\left\{k_{p}^{2}g(k_{p}R)+\delta(R)\right\}\qquad(17)$$


Iterative solution of (16) for the four unknown fields u_x, u_y, u_z, and ∇·u now involves no numerical differentiation of u, since all derivative operators have been applied analytically to the Green's function. The incident component ∇·u^i is assumed known. Equation (16) can be written symbolically as

Ui=(I−G[γ]diag)U  (18)

where U^i and U are the augmented 4-vectors in Eq. (16), [γ]_diag is the diagonal operator composed of γ_ρ and γ_λ, and G is the 4×4 dyadic Green's function in Eq. (16).
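The role of the biconjugate gradient iteration in solving systems of the form U^i = (I − G[γ]diag)U can be sketched on a small one-dimensional analogue. Everything here is illustrative: a made-up even kernel g stands in for the discretized Green's function (the direct O(n²) convolution below stands where the FFT would be used), and a manufactured scattering potential γ defines the operator:

```python
import math

def g(m):
    """Even, decaying kernel: a stand-in for a discretized Green's function."""
    return 0.3 * math.exp(-abs(m))

def convolve(x):
    """Direct discrete convolution (the FFT would do this in O(n log n))."""
    n = len(x)
    return [sum(g(i - j) * x[j] for j in range(n)) for i in range(n)]

def apply_A(gamma, x):      # A x = x - G*(gamma x)
    gc = convolve([gamma[j] * x[j] for j in range(len(x))])
    return [x[i] - gc[i] for i in range(len(x))]

def apply_At(gamma, x):     # A^T x = x - gamma (G*x); valid since g is even
    gc = convolve(x)
    return [x[j] - gamma[j] * gc[j] for j in range(len(x))]

def bicg(gamma, b, iters=100):
    """Biconjugate gradient solution of A x = b for nonsymmetric A."""
    n = len(b)
    x = [0.0] * n
    r = b[:]; rt = r[:]; p = r[:]; pt = rt[:]
    rho = sum(a * c for a, c in zip(rt, r))
    for _ in range(iters):
        Ap, Atpt = apply_A(gamma, p), apply_At(gamma, pt)
        alpha = rho / sum(a * c for a, c in zip(pt, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * a for ri, a in zip(r, Ap)]
        rt = [ri - alpha * a for ri, a in zip(rt, Atpt)]
        rho_new = sum(a * c for a, c in zip(rt, r))
        if abs(rho_new) < 1e-28:
            break
        beta = rho_new / rho
        rho = rho_new
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        pt = [ri + beta * pi for ri, pi in zip(rt, pt)]
    return x

gamma = [0.0, 0.1, 0.3, 0.2, 0.0, 0.05, 0.0, 0.0]   # illustrative potential
u_true = [1.0, 0.8, -0.2, 0.5, 1.1, -0.7, 0.3, 0.9]  # manufactured field
u_inc = apply_A(gamma, u_true)   # corresponding incident field
u = bicg(gamma, u_inc)           # recover the total field
print(max(abs(a - b) for a, b in zip(u, u_true)) < 1e-8)
```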


Inclusion of γμ Scattering to Give the General Elastic-wave Integral Equations


We now give a method for solving integral equations in the general case for the inhomogeneous material properties ρ, λ, and μ.


Because the inclusion of the more complicated term in γ_μ causes the matrix notation used above to require breaking the equations into parts on several lines, we elect, for efficiency of space, to use the more compact tensor notation. This should cause no difficulty, because the translation is clear on comparing the special case of γ_μ=0 in the general tensor equations which follow with the previous matrix notation for this same case.


First we give again (see Eq (11)) the integral equations (which in practice have been discretized by sinc basis functions) for inhomogeneous ρ, λ, and μ. Here * means convolution (the integration operation):

u_i^s = G_{ij}*{γ_ρu_j} + G_{ij,j}*{γ_λu_{k,k}} + G_{ij,k}*[γ_μ(u_{j,k}+u_{k,j})]  (19)

We note that both the displacement field u_j and its derivatives u_{j,k} appear. We choose not to compute derivatives of the fields numerically, and so avoid the associated numerical instability. Instead, we augment the above equation with additional equations that compute the derivatives in a more stable way by solving for them directly. Thus, solving for (u_{i,l}+u_{l,i}) directly is more efficient and replaces computation of nine fields with only six. Forming the six unique symmetric-derivative pairs, we obtain an integral equation for these pairs.

u_{i,l}^s + u_{l,i}^s = (G_{ij,l}+G_{lj,i})*[γ_ρu_j] + (G_{ij,kl}+G_{lj,ki})*[γ_μ(u_{j,k}+u_{k,j})] + (G_{ij,jl}+G_{lj,ji})*[γ_λu_{k,k}]  (20)

Note that
$$G_{ij,jl}=k_{p}^{2}\frac{\partial^{2}}{\partial x_{i}\partial x_{l}}g(k_{p}R)\qquad(21)$$

We now augment the system Eq. (19) for the three components of uis with the system Eq. (20) which has six pairs of unique component derivatives.


These nine equations can also be placed in a matrix form similar to that of the augmented system given in Eq (16). The augmented system of nine components could be solved for γρ, γλ, and γμ, assuming knowledge of the fields ui and derivatives ui,k. Since the fields and derivatives are also unknown, we must solve for them simultaneously with γρ, γλ, and γμ. This is done by adding these constraint equations. These nine equations are composed of the three equations for ui
u_i^i = u_i − G_{ij}*(γ_ρu_j) − G_{ij,j}*[γ_λu_{k,k}] − G_{ij,k}*[γ_μ(u_{j,k}+u_{k,j})]  (22)

and the six equations for u_{j,k}+u_{k,j}
u_{i,l}^i + u_{l,i}^i = (u_{i,l}+u_{l,i}) − (G_{ij,l}+G_{lj,i})*[γ_ρu_j] − (G_{ij,kl}+G_{lj,ki})*[γ_μ(u_{j,k}+u_{k,j})] − (G_{ij,jl}+G_{lj,ji})*[γ_λu_{k,k}]  (23)


These 9 equations can also be placed in a matrix form similar to that of the augmented system given in Eq (16).


Solving these twelve equations (Eqs. (19), (22), (23)) for γ_ρ, γ_λ, and γ_μ, the fields, and their derivatives is accomplished using the Fréchet derivative methods described next. As in Eq. (16) for the μ=μ₀ case, Eq. (23) can be written symbolically as

Ui=(I−G[γ]diag)U  (24)

where the operators G and [γ]_diag are now 9×9, and the vectors U and U^i consist of the three components of u and its six symmetric derivative sum-pairs.


EXAMPLE 7
Cylindrical Coordinate Methods for Numerical Solution of the SIE

The above examples all use the FFT-BCG algorithm for imaging. This method is one of three methods discussed in this patent. The other two methods are


(1) The cylindrical coordinate recursion method (Cylindrical Recursion)


(2) Rectangular recursion of scattering matrix method (Rectangular Recursion for short reference)


We now discuss the cylindrical recursion method:


7.1 Cylindrical Coordinate SIE Formulation and Discretization


We begin with the acoustic scattering integral equation (SIE) for the constant density case:
$$f^{i}(\vec{\rho})=f(\vec{\rho})-\frac{k_{0}^{2}}{4i}\int\gamma(\vec{\rho}')\,f(\vec{\rho}')\,H_{0}^{(2)}(k_{0}|\vec{\rho}-\vec{\rho}'|)\,d^{2}\rho'\qquad(1.1)$$

Let each field be expanded in a Fourier series in the angular variable:
$$f(\vec{\rho})=\sum_{n}f_{n}(\rho)\,e^{in\phi}$$

Then using Graf's addition theorem:
$$H_{0}^{(2)}(k_{0}|\vec{\rho}-\vec{\rho}'|)=\begin{cases}\sum_{n}H_{n}^{(2)}(k_{0}\rho)J_{n}(k_{0}\rho')\,e^{in(\phi-\phi')}, & \rho>\rho'\\ \sum_{n}H_{n}^{(2)}(k_{0}\rho')J_{n}(k_{0}\rho)\,e^{in(\phi-\phi')}, & \rho<\rho'\end{cases}\qquad(1.2)$$

results in the cylindrical coordinate form of (1.1):
$$f_{n}^{i}(\rho)=f_{n}(\rho)-\frac{\pi k_{0}^{2}}{2i}\int_{0}^{a}\left\{\sum_{m}\gamma_{n-m}(\rho')f_{m}(\rho')\right\}B_{n}(k_{0}\rho)\,C_{n}(k_{0}\rho')\,\rho'\,d\rho'\qquad(1.3)$$

where the kernel, which is separably symmetric, is
$$B_{n}(k_{0}\rho)\,C_{n}(k_{0}\rho')=\begin{cases}H_{n}^{(2)}(k_{0}\rho)J_{n}(k_{0}\rho'), & \rho>\rho'\\ H_{n}^{(2)}(k_{0}\rho')J_{n}(k_{0}\rho), & \rho<\rho'\end{cases}\qquad(1.4)$$


Henceforth assume that k₀=1. Discretizing the radial integral by the trapezoidal rule using an increment of Δ results in:
$$f_{l,n}^{i}=f_{l,n}-\left\{h_{l,n}\sum_{l'=1}^{l}(\gamma f)_{l',n}\,j_{l',n}+j_{l,n}\sum_{l'=l+1}^{L}(\gamma f)_{l',n}\,h_{l',n}\right\}\qquad(1.5)$$
where
$$f_{l,n}=f_{n}(l\Delta),\qquad h_{l,n}=\Delta\frac{\pi}{2i}H_{n}^{(2)}(l\Delta),\qquad j_{l,n}=\Delta\frac{\pi}{2i}J_{n}(l\Delta)\qquad(1.6)$$
and
$$(\gamma f)_{l,n}=l\sum_{m}\gamma_{l,n-m}f_{l,m}\qquad(1.7)$$


Notice that the extra l′ resulting from the ρ′ in the integral in (1.3) has been placed in the definition of (γf)_{l′,n}. Equation (1.5) is the final discrete linear system for the solution of the scattering integral equation in cylindrical coordinates. It can be rewritten as:
$$f_{l}^{i}=f_{l}-\left\{[h_{l}]\sum_{l'=1}^{l}[j_{l'}][\gamma_{l'}*]f_{l'}+[j_{l}]\sum_{l'=l+1}^{L}[h_{l'}][\gamma_{l'}*]f_{l'}\right\}\qquad(1.8)$$


where the vector components are the Fourier coefficients:
$$f_{l}=\begin{bmatrix}f_{l,-N}\\ \vdots\\ f_{l,N}\end{bmatrix}\qquad(1.9)$$

or, equivalently,

{f_l}_n = f_{l,n},  n = −N, …, N

where N is the range of the truncated Fourier series (The vectors are length 2N+1). The notation [x] denotes a diagonal matrix formed from the vector elements:
$$[x]=\begin{bmatrix}x_{-N} & & 0\\ & \ddots & \\ 0 & & x_{N}\end{bmatrix}\qquad(1.10)$$

and the notation [γl′*] denotes a convolution (including the l′ factor):
$$\{[\gamma_{l'}*]f_{l'}\}_{n}=l'\sum_{m=-N}^{N}\gamma_{l',n-m}f_{l',m},\qquad n=-N,\ldots,N.\qquad(1.11)$$

Writing (1.8) out in full gives the matrix equation:
$$\begin{bmatrix}f_{1}^{i}\\ \vdots\\ f_{L}^{i}\end{bmatrix}=A\begin{bmatrix}f_{1}\\ \vdots\\ f_{L}\end{bmatrix}\qquad(1.12)$$
where
$$A=\begin{bmatrix}I & & 0\\ & \ddots & \\ 0 & & I\end{bmatrix}-\begin{bmatrix}[j_{1}][h_{1}] & [j_{1}][h_{2}] & \cdots & [j_{1}][h_{L}]\\ [h_{2}][j_{1}] & [j_{2}][h_{2}] & \cdots & [j_{2}][h_{L}]\\ \vdots & & \ddots & \vdots\\ [h_{L}][j_{1}] & [h_{L}][j_{2}] & \cdots & [j_{L}][h_{L}]\end{bmatrix}\cdot\begin{bmatrix}[\gamma_{1}*] & & 0\\ & \ddots & \\ 0 & & [\gamma_{L}*]\end{bmatrix}\qquad(1.13)$$


Notice that the kernel is composed of an L×L block matrix with (2N+1)×(2N+1) diagonal matrix components, and that the L×L block matrix is symmetric-separable, i.e.:
$$L_{nm}=\begin{cases}[j_{n}][h_{m}], & n\le m\\ [h_{n}][j_{m}], & n>m\end{cases}\qquad(1.14)$$

where L_{nm} is one of the block components of the L×L matrix.


For reference in the next section, we make the definitions:
$$s_{l}\equiv\sum_{l'=l}^{L}[h_{l'}][\gamma_{l'}*]f_{l'}\qquad(1.15)$$
$$p_{l}\equiv\sum_{l'=1}^{l-1}[j_{l'}][\gamma_{l'}*]f_{l'}\qquad(1.16)$$

7.2 Recursive Solution of the Cylindrical Coordinate SIE


The recursion for the 2-D cylindrical equation is:


Initialize:

p_L = 0,   {s_L}_n = f^s_{L,n}/h_{L,n}   (2.1)


For l = L, …, 1

s_{l−1} = s_l − [j_l][γ_l*]{[h_l]s_l + [j_l]p_l + f^i_l}
p_{l−1} = p_l + [h_l][γ_l*]{[h_l]s_l + [j_l]p_l + f^i_l}


Next l.


Thus far, we have assumed that the number of angular coefficients is the same for each radius l. In fact, the number of angular coefficients should decrease as l decreases. We find that for Δ = λ/4, an accurate interpolation is achieved with N_l = 2l − 1, n = −N_l, …, N_l Fourier coefficients. To introduce this modification, we must slightly redefine the operators as:
{[j_l]x}_n = { j_{l,n} x_n,   |n| ≤ N_l
               0,             |n| > N_l }   (2.2)

{[γ_l*]x}_n = { l Σ_{m=−N_l}^{N_l} γ_{l,n−m} x_m,   |n| ≤ N_l
                0,                                  |n| > N_l }   (2.3)

where the vector operated on is always of full length 2N+1, where N is now the maximum number of Fourier coefficients, N = N_L. The total field at each layer, l, is given by:
f_{l,n} = { h_{l,n}s_{l,n} + j_{l,n}p_{l,n} + f^i_{l,n},   |n| ≤ N_l
            0,                                             |n| > N_l }   (2.4)

In order to eliminate the need to know the starting values, s_L, we note that at each iteration, s_l and p_l are linear functions of s_L:

s_l = A_l s_L + b_l,   p_l = C_l s_L + d_l   (2.5)

where the matrices A_l, C_l (dimension (2N+1)×(2N+1)) and the vectors b_l, d_l (dimension 2N+1) have the initial values:

A_L = I, C_L = 0, b_L = 0, d_L = 0   (2.6)

Using (2.5) and (2.6) in the recursion above and equating common terms leads to a matrix and a vector recursion:

Initialize: A_L = I, C_L = 0, b_L = 0, d_L = 0   (2.7)


For l = L, …, 1

A_{l−1} = A_l − [j_l][γ_l*]{[h_l]A_l + [j_l]C_l}
b_{l−1} = b_l − [j_l][γ_l*]{[h_l]b_l + [j_l]d_l + f^i_l}
C_{l−1} = C_l + [h_l][γ_l*]{[h_l]A_l + [j_l]C_l}
d_{l−1} = d_l + [h_l][γ_l*]{[h_l]b_l + [j_l]d_l + f^i_l}

Then using the fact that s_0 = 0 leads to the solution:

A_0 s_L + b_0 = 0,   s_L = −A_0^{−1} b_0,   f^s_{L,n} = h_{L,n} s_{L,n}   (2.8)

for the scattered field at the outer boundary. The recursion above, together with (2.4), can then be used to evaluate the total field internal to the object.
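The matrix/vector recursion can be sketched numerically. The following is a minimal illustrative sketch, not the patent's implementation: the operator shapes, random test values, and the C/d signs (chosen to match the p-accumulator derivation) are assumptions. With γ = 0 the vector recursion stays zero, so the solved scattered-field coefficients must vanish, which provides a simple sanity check.

```python
import numpy as np

def conv_op(l, gamma_l, X, N):
    # Apply [gamma_l*] of (1.11): convolution over the angular (Fourier)
    # index, including the extra factor l from the radial quadrature.
    # gamma_l stores gamma_{l,k} for k = -2N..2N at position k + 2N.
    M = 2 * N + 1
    out = np.zeros_like(X)
    for i in range(M):              # output index n = i - N
        for j in range(M):          # input index  m = j - N
            out[i] += gamma_l[(i - j) + 2 * N] * X[j]
    return l * out

def recursion_A_b(h, j, gamma, f_inc, N):
    # Matrix/vector recursion (2.7): sweep l = L..1 from the initial
    # values A_L = I, C_L = 0, b_L = 0, d_L = 0 and return A_0, b_0.
    L, M = h.shape[0], 2 * N + 1
    A, C = np.eye(M, dtype=complex), np.zeros((M, M), dtype=complex)
    b, d = np.zeros((M, 1), dtype=complex), np.zeros((M, 1), dtype=complex)
    for l in range(L, 0, -1):
        hl, jl = h[l - 1][:, None], j[l - 1][:, None]
        tAC = hl * A + jl * C                        # [h_l]A_l + [j_l]C_l
        tbd = hl * b + jl * d + f_inc[l - 1][:, None]
        gAC = conv_op(l, gamma[l - 1], tAC, N)
        gbd = conv_op(l, gamma[l - 1], tbd, N)
        A, C = A - jl * gAC, C + hl * gAC
        b, d = b - jl * gbd, d + hl * gbd
    return A, b

L, N = 4, 2
rng = np.random.default_rng(0)
h = rng.standard_normal((L, 2 * N + 1)) + 1j * rng.standard_normal((L, 2 * N + 1))
j = rng.standard_normal((L, 2 * N + 1)) + 0j
f_inc = rng.standard_normal((L, 2 * N + 1)) + 0j
gamma0 = np.zeros((L, 4 * N + 1), dtype=complex)     # gamma = 0: no scatterer
A0, b0 = recursion_A_b(h, j, gamma0, f_inc, N)
sL = -np.linalg.solve(A0, b0)                        # s_L = -A_0^{-1} b_0
```

With no scatterer, A_0 remains the identity and b_0 remains zero, so s_L = 0 (no scattered field), as the recursion predicts.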


Notice that the LHS matrix recursion in (2.7) is independent of the incident field. Once it is used to compute A_0, the RHS vector recursion can be done for any number of incident fields. Concurrent solution for any number of incident fields can be obtained by replacing the RHS vector recursion with the matrix recursion:

B_L = 0, D_L = 0   (2.9)

B_{l−1} = B_l − [j_l][γ_l*]{[h_l]B_l + [j_l]D_l + F^i_l}
D_{l−1} = D_l + [h_l][γ_l*]{[h_l]B_l + [j_l]D_l + F^i_l}

where the matrices B_l, D_l are (2N+1)×N_v, where N_v is the number of views, and the matrix of incident fields, F^i_l, is given by:
F^i_l = [ f^{i,1}_{l,−N}   …   f^{i,N_v}_{l,−N}
              ⋮                     ⋮
          f^{i,1}_{l,N}    …   f^{i,N_v}_{l,N} ]   (2.10)

where the superscript after i is the view number (note that although the incident matrix is written as (2N+1)×N_v, the entries are zero for row indices N_l < |n| ≤ N). A more compact recursion can be obtained by concatenating the two recursions to give:

G_L = [A_L, B_L] = [I, 0],   H_L = [C_L, D_L] = [0, 0]   (2.11)


For l = L, …, 1

G_{l−1} = G_l − [j_l][γ_l*]{[h_l]G_l + [j_l]H_l + [0, F^i_l]}
H_{l−1} = H_l + [h_l][γ_l*]{[h_l]G_l + [j_l]H_l + [0, F^i_l]}

where the matrices G_l, H_l are (2N+1)×(2N+1+N_v) and, in the notation [A, B], the first matrix is (2N+1)×(2N+1) and the second is (2N+1)×N_v. Then G_0 = [A_0, B_0] and the solution for all views is given by:
F^s_L = [ f^{s,1}_{L,−N}   …   f^{s,N_v}_{L,−N}
              ⋮                     ⋮
          f^{s,1}_{L,N}    …   f^{s,N_v}_{L,N} ] = [h_L] A_0^{−1} B_0   (2.12)


A slightly modified recursion can be shown to yield the scattering matrix of the object. Recall that the total field is given by:

f_l = [h_l]s_l + [j_l]p_l + f^i_l   (2.13)

Any externally generated incident field can be expanded in a Bessel series, which implies that there exists a sequence, g^i_{−N}, …, g^i_N, such that:
f^i(ρ,φ) = Δ(π/2i) Σ_n g^i_n J_n(ρ) e^{inφ}   (2.14)

(recall that we are assuming that k0=1) which means that

f^i_l = [j_l] g^i   (2.15)

Redefining p_l → p_l + g^i then gives f_l = [h_l]s_l + [j_l]p_l and leads easily to the iteration:

G_L = [I, 0],   H_L = [0, I]   (2.16)


For l = L, …, 1

G_{l−1} = G_l − [j_l][γ_l*]{[h_l]G_l + [j_l]H_l}
H_{l−1} = H_l + [h_l][γ_l*]{[h_l]G_l + [j_l]H_l}

where the matrices are now (2N+1)×2(2N+1). The last iterate, G_0 = [A_0, B_0], yields the scattering matrix:

S(γ)=A0−1B0  (2.17)

for the body γ which relates the incident field coefficients to the scattering coefficients:
g^s = S g^i   where   f^s(ρ,φ) = Δ(π/2i) Σ_n g^s_n H_n^{(2)}(ρ) e^{inφ}   (2.18)


for all incident field coefficient vectors, gi. Notice that in the previous notation:

g^i_n = f^i_{L,n}/j_{L,n},   g^s_n = f^s_{L,n}/h_{L,n}   (2.19)

7.3 Computational Formulas for the Jacobian and its Adjoint


In order to apply the Gauss-Newton iteration to the solution of the imaging problem, we must first derive recursive formulas for the application of the Jacobian and its adjoint. The Jacobian of the scattering coefficient vector, s_L, operating on a perturbation in γ is given by:
Jδγ = Σ_{l=1}^{L} Σ_{n=−N_l}^{N_l} δγ_{l,n} ∂s_L/∂γ_{l,n}   (3.1)

From (2.8) we get:

s′_L = −A_0^{−1}(b′_0 + A′_0 s_L)   (3.2)

where the prime denotes differentiation followed by summation over the elements of δγ:
s′_L = Jδγ = Σ_{l=1}^{L} Σ_{n=−N_l}^{N_l} δγ_{l,n} ∂s_L/∂γ_{l,n}   (3.3a)

b′_0 = Σ_{l=1}^{L} Σ_{n=−N_l}^{N_l} δγ_{l,n} ∂b_0/∂γ_{l,n}   (3.3b)

A′_0 = Σ_{l=1}^{L} Σ_{n=−N_l}^{N_l} δγ_{l,n} ∂A_0/∂γ_{l,n}   (3.3c)

Equation (3.2) provides the formula for computing Jδγ if recursive formulas for A′_0 and b′_0 can be found. Define the notation:
A′_l = Σ_{l′=l+1}^{L} Σ_{n=−N_{l′}}^{N_{l′}} δγ_{l′,n} ∂A_l/∂γ_{l′,n}   (3.4a)

C′_l = Σ_{l′=l+1}^{L} Σ_{n=−N_{l′}}^{N_{l′}} δγ_{l′,n} ∂C_l/∂γ_{l′,n}   (3.4b)


(note the lower limit of the l′ summation). Then using the LHS recursion in (2.7), it is simple to show that:

A′_{l−1} = A′_l − [j_l]{[γ_l*]{[h_l]A′_l + [j_l]C′_l} + [δγ_l*]{[h_l]A_l + [j_l]C_l}}   (3.5a)
C′_{l−1} = C′_l + [h_l]{[γ_l*]{[h_l]A′_l + [j_l]C′_l} + [δγ_l*]{[h_l]A_l + [j_l]C_l}}   (3.5b)

A further reduction in computational requirements can be achieved by noting from (3.2) that we do not need A′_0 but rather A′_0 S_L, where S_L is the matrix whose columns are the vectors s_L for each view. The matrix A′_0 is (2N+1)×(2N+1) while the matrix A′_0 S_L is (2N+1)×N_v. Thus define:

A′^s_l ≡ A′_l S_L,   C′^s_l ≡ C′_l S_L,   A^s_l ≡ A_l S_L,   C^s_l ≡ C_l S_L   (3.6)

Then we get the recursion:

A^s_L = A_L S_L = I S_L = S_L,   C^s_L = 0,   A′^s_L = 0,   C′^s_L = 0   (3.7a)


For l = L, …, 1

A′^s_{l−1} = A′^s_l − [j_l]{[γ_l*]{[h_l]A′^s_l + [j_l]C′^s_l} + [δγ_l*]{[h_l]A^s_l + [j_l]C^s_l}}   (3.7b)
C′^s_{l−1} = C′^s_l + [h_l]{[γ_l*]{[h_l]A′^s_l + [j_l]C′^s_l} + [δγ_l*]{[h_l]A^s_l + [j_l]C^s_l}}   (3.7c)
A^s_{l−1} = A^s_l − [j_l][γ_l*]{[h_l]A^s_l + [j_l]C^s_l}   (3.7d)
C^s_{l−1} = C^s_l + [h_l][γ_l*]{[h_l]A^s_l + [j_l]C^s_l}   (3.7e)

Similarly, for the b's and d's we get the recursion:

B_L = 0, D_L = 0, B′_L = 0, D′_L = 0   (3.8a)


For l = L, …, 1

B′_{l−1} = B′_l − [j_l]{[γ_l*]{[h_l]B′_l + [j_l]D′_l} + [δγ_l*]{[h_l]B_l + [j_l]D_l + F^i_l}}   (3.8b)
D′_{l−1} = D′_l + [h_l]{[γ_l*]{[h_l]B′_l + [j_l]D′_l} + [δγ_l*]{[h_l]B_l + [j_l]D_l + F^i_l}}   (3.8c)
B_{l−1} = B_l − [j_l][γ_l*]{[h_l]B_l + [j_l]D_l + F^i_l}   (3.8d)
D_{l−1} = D_l + [h_l][γ_l*]{[h_l]B_l + [j_l]D_l + F^i_l}   (3.8e)

where the matrices are all (2N+1)×N_v. The final Jacobian is then:

S′_L = −A_0^{−1}(B′_0 + A′^s_0)   (3.9)

where the columns of S′_L are the vectors Jδγ for each view, n = 1, …, N_v.


We can of course concatenate these two recursions to give:

G_L = [S_L, 0],   H_L = [0, 0],   G′_L = [0, 0],   H′_L = [0, 0]   (3.10a)


For l = L, …, 1

G′_{l−1} = G′_l − [j_l]{[γ_l*]{[h_l]G′_l + [j_l]H′_l} + [δγ_l*]{[h_l]G_l + [j_l]H_l + [0, F^i_l]}}   (3.10b)
H′_{l−1} = H′_l + [h_l]{[γ_l*]{[h_l]G′_l + [j_l]H′_l} + [δγ_l*]{[h_l]G_l + [j_l]H_l + [0, F^i_l]}}   (3.10c)
G_{l−1} = G_l − [j_l][γ_l*]{[h_l]G_l + [j_l]H_l + [0, F^i_l]}   (3.10d)
H_{l−1} = H_l + [h_l][γ_l*]{[h_l]G_l + [j_l]H_l + [0, F^i_l]}   (3.10e)

where, at the last iterate, G′_0 = [A′^s_0, B′_0]. The matrices (G's and H's) in this recursion are all (2N+1)×2N_v. This is the form of the Jacobian recursion used in the imaging programs.


EXAMPLE 8
Rectangular Scattering Matrix Recursion

This section describes a new recursive algorithm that uses scattering matrices for rectangular subregions. The idea for this approach is an extension and generalization of the cylindrical coordinate recursion (CCR) method discussed in the previous section, and the computational complexity is reduced even further relative to CCR. The CCR algorithm derives from the addition theorem for the Green's function expressed in cylindrical coordinates. The new approach generalizes this by using Green's theorem to construct propagation operators (a kind of addition theorem analogue) for arbitrarily shaped, closed regions. In the following, it is applied to the special case of rectangular subregions, although any disjoint set of subregions could be used.


Consider two rectangular subregions A and B of R² as shown in FIG. 16. Although the regions are drawn as disjoint, assume that they touch at the center of the figure. Let C denote the external boundary of the union of A and B. Define the scattering operator, S_A, of region A as the operator that gives the outward moving scattered field on boundary A, given the inward moving field, due to external sources, evaluated on boundary A. Similarly, define the scattering matrix S_B for boundary B. The goal is to find the scattering matrix for boundary C, given S_A and S_B, the scattering matrices for A and B.


Let the incident field due to sources external to boundary C, evaluated on boundary A, be denoted f_A^i, and similarly define f_B^i. For the total problem (A and B both containing scatterers) there exists a net, inward moving field at boundary A. Denote this field f_A^in and similarly define f_B^in. The total field leaving boundary A is then f_A^out = S_A f_A^in. Knowledge of the radiated field on a closed boundary due to internal sources allows the field external to the boundary to be computed at any point. Let the operator that maps f_A^out onto boundary B be denoted T_BA (the rectangular translation operator from A to B), and similarly let the operator mapping f_B^out onto boundary A be denoted T_AB.


The total, inward moving field at boundary A has two parts: that due to the incident field external to C and that due to sources internal to boundary B. From the foregoing definitions, it should be obvious that the inward moving fields satisfy:

f_A^in = f_A^i + T_AB S_B f_B^in   (1a)
f_B^in = f_B^i + T_BA S_A f_A^in   (1b)

Solving for the total inward moving fields gives:
[f_A^in]   [ I           −T_AB S_B ]^{−1} [f_A^i]
[f_B^in] = [ −T_BA S_A    I        ]      [f_B^i]   (2)


The total scattered field at boundary A has two components: one from inside A and one from inside B. It should be obvious that the total scattered fields at boundaries A and B are given by:

f_A^s = S_A f_A^in + T_AB S_B f_B^in   (3a)
f_B^s = T_BA S_A f_A^in + S_B f_B^in   (3b)

or

[f_A^s]   [ S_A         T_AB S_B ] [f_A^in]
[f_B^s] = [ T_BA S_A    S_B      ] [f_B^in]   (4)

Combining (2) and (4) gives:
[f_A^s]   [ S_A         T_AB S_B ] [ I           −T_AB S_B ]^{−1} [f_A^i]
[f_B^s] = [ T_BA S_A    S_B      ] [ −T_BA S_A    I        ]      [f_B^i]   (5)

Assuming that the scattering operators are invertible, then we have the equivalent form:
[f_A^s]   [ I      T_AB ] [ S_A^{−1}   −T_AB     ]^{−1} [f_A^i]
[f_B^s] = [ T_BA   I    ] [ −T_BA       S_B^{−1} ]      [f_B^i]   (6)

The scattered field on the boundary C can be obtained from fAs and fBs by simple truncation (and possible re-ordering depending on how the boundaries are parameterized). Let the operator that does this be denoted:
f_C^s = [C_A  C_B] [f_A^s]
                   [f_B^s]   (7)

Let the incident field on boundary C due to external sources be denoted f_C^i. There exist operators D_A and D_B that operate on f_C^i to give f_A^i and f_B^i (similar to the external translation operators T_AB and T_BA). Thus:
f_C^s = [C_A  C_B] [ I      T_AB ] [ S_A^{−1}   −T_AB     ]^{−1} [D_A]
                   [ T_BA   I    ] [ −T_BA       S_B^{−1} ]      [D_B] f_C^i   (8)

from which we see that the scattering matrix for boundary C is given by:
S_C = [C_A  C_B] [ I      T_AB ] [ S_A^{−1}   −T_AB     ]^{−1} [D_A]
                 [ T_BA   I    ] [ −T_BA       S_B^{−1} ]      [D_B]   (9)

Equation (9) then gives the core computation for our rectangular scattering matrix recursion algorithm. Technical details concerning the existence and discrete construction of the translation and other needed operators have been omitted in this write-up. We have, however, written working first-cut programs that implement (9).
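Forms (5) and (6) are algebraically equivalent (a push-through identity applied to the block system). The sketch below checks this numerically with random dense matrices standing in for the discretized operators S_A, S_B, T_AB, T_BA; the sizes and scales are arbitrary assumptions for illustration, not values from this document.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6                                   # toy boundary discretization size

def cmat(scale):
    return scale * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

SA = np.eye(n) + cmat(0.3)              # keep S_A, S_B invertible for form (6)
SB = np.eye(n) + cmat(0.3)
TAB, TBA = cmat(0.2), cmat(0.2)         # stand-ins for translation operators

Z = np.zeros((n, n))
D = np.block([[SA, Z], [Z, SB]])        # block-diagonal scattering
T = np.block([[Z, TAB], [TBA, Z]])      # off-diagonal translation
I2 = np.eye(2 * n)
f_i = rng.standard_normal((2 * n, 1)) + 0j   # stacked [f_A^i; f_B^i]

# form (5): note that [S_A, T_AB S_B; T_BA S_A, S_B] = (I + T) D
f_s_5 = (I2 + T) @ D @ np.linalg.solve(I2 - T @ D, f_i)
# form (6): (I + T) (D^{-1} - T)^{-1}
f_s_6 = (I2 + T) @ np.linalg.solve(np.linalg.inv(D) - T, f_i)
```

The agreement of the two forms rests on the identity D(I − TD)^{−1} = (I − DT)^{−1}D.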


An O(N3) Algorithm for Computing All Scattering Views Based on Rectangular Scattering Matrix Recursion


Consider a region containing scattering material covered by an N×N array of rectangular subregions as shown in FIG. 16.


Again, although drawn as disjoint, assume that the subregions touch. Assume that the scattering matrix for each subregion is known (for example, if each subregion is only λ/2 on edge, the calculation of S is trivial given the material enclosed). Now coalesce 2×2 sets of these scattering matrices into larger scattering matrices (this coalescing of four subregions into one is similar to the algorithm defined above for coalescing two subregions). There are N/2×N/2 such 2×2 blocks to coalesce. When done, we have scattering matrices for the set of larger subregions shown in FIG. 18.


This process is continued until, at the final stage, the scattering matrix for the total region is computed. Note that the physical parameters of the scattering media (speed of sound, absorption, etc.) are used only in the first stage (computation of the N×N array of scattering matrices).


Assuming that N is a power of two, the algorithm will terminate after log₂(N) such stages with the scattering matrix for the whole region. A careful accounting of the computation required reveals that the total computation is O(N³). The resulting scattering matrix then allows fast calculation of the scattered field anywhere external to the total region for any incident field (angle of view).
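The accounting can be made concrete. Under the assumed cost model that a stage-k merge combines blocks whose boundaries hold O(2^k) samples at O((2^k)³) cost per merge (an illustrative model, not a statement of the actual operation counts), the total over all log₂(N) stages sums to exactly 2N³ − 2N², i.e., O(N³):

```python
def total_cost(N):
    # Sum merge costs over the log2(N) coalescing stages: stage k performs
    # (N / 2^k)^2 merges, each on boundaries of size O(2^k), at an assumed
    # O((2^k)^3) cost per merge.
    cost, k = 0, 1
    while 2 ** k <= N:
        cost += (N // 2 ** k) ** 2 * (2 ** k) ** 3
        k += 1
    return cost
```

The geometric growth of per-merge cost is exactly balanced by the geometric decay of the merge count, leaving a sum dominated by the final stage.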


Although derived assuming that N is a power of 2, the algorithm can be generalized by including 3×3 (or any other sized) coalescing at a stage, thereby allowing algorithms for any N (preferably N should have only small prime factors, say 2, 3, and 5). Also, there is no reason that the starting array of subregions cannot be N×M (by using more general n×m coalescing at some stages).


In addition, the existence of layering can be included. If layering occurs above and below the total region so that the total inhomogeneity resides in a single layer, then the algorithm proceeds as before. Once the total region scattering matrix has been obtained, its interaction with the external layering can be computed. If inhomogeneous layer boundaries lie along horizontal borders between rows of subregions, then the translation matrices can be modified when coalescing across such boundaries, properly including the layer effects. This is an advantage over our present layered Green's function algorithms, which require that the inhomogeneity lie entirely within a single layer (this can be fixed in our present algorithms, but at the expense of increased computation).


The O(N³) computation of this approach is superior to the O(N⁴ log₂(N)) computation of our original FFT-BCG approach and to our present recursion based on cylindrical coordinates, which is O(N³ log₂(N)).


EXAMPLE 9
Modeling System Transfer Function Including Driving Voltage, Transmitting Transducers, Receiving Transducers, Preamplifiers, Analog to Digital Converter, Etc.

Let the transfer functions of the transmitting waveform generator, power amplifier, transmitting multiplexer, transmitting transducer, ocean/sediment, receiving transducers, preamplifiers, differential amplifier, differential waveform generator, and analog to digital converter be, respectively, TTWG, TPA, TTM, TTT, TO/S, TRT, TRM, TDA, TDWG, and TDAC. These separate transfer functions can be identified with the system hardware. Then the total transfer function is:

Ttotal = TDAC(TDA1 TRM TRT TO/S TTT TTM TPA TTWG − TDA2 TDWG)


Note that the term TDA2 TDWG is subtracted in order to remove direct path energy and to remove reverberations in the transducers and the platform; this subtraction effectively increases the analog to digital converter dynamic range. The signal in the differential waveform generator is programmed to produce a net zero signal output from the analog to digital converter for the case of no sediment present.


Recall that the equation for the scattered field f(sc) (from the sediment) at a transducer (not the output voltage) at a given temporal frequency is given in terms of the transducer-to-sediment Green's function D, the sediment's acoustic properties γ and the internal field in the sediment f by

f(sc)=Dγf


The field (at a given temporal frequency) within the sediment itself, f, is given in terms of the incident field f(inc), the sediment's acoustic properties γ, and the sediment-to-sediment Green's function C by

f(inc)=(I−Cγ)f.


On combining these two equations, we eliminate the internal field and find the scattered field in terms of the incident field and the sediment properties:

f(sc)=Dγ(I−Cγ)−1f(inc)


These equations involving C and D are true for both the free space Green's function [Borup, D. T., et al., “Nonperturbative Diffraction Tomography Via Gauss-Newton Iteration Applied to the Scattering Integral Equation,” Ultrasonic Imaging] and our newly developed layered Green's functions [Johnson, S. A., D. T. Borup, M. J. Berggren, J. W. Wiskin, and R. S. Eidens, “Modelling of inverse scattering and other tomographic algorithms in conjunction with wide bandwidth acoustic transducer arrays for towed or autonomous sub-bottom imaging systems,” Proc. of Mastering the Oceans through Technology (Oceans 92), 1992, pp. 294-299]. We now combine the scattering equations with the transfer functions. First, identify f(sc) with TO/S TTT TTM TPA TTWG and f(inc) with TTT TTM TPA TTWG. Next we note that measuring f(inc) is a direct way of finding the product TTT TTM TPA TTWG. We also note that Ttotal can be written as

Ttotal(γ, TTWG) = TDAC(TDA1 TRM TRT Dγ(I − Cγ)⁻¹ TTT TTM TPA TTWG − TDA2 TDWG).


Then Ttotal(γ, TTWG) is a nonlinear operator that transforms (γ, TTWG) into recorded signals Ttotal-measured. Thus, for a given set of Ttotal-measured measurements and for a given TTWG, we may in principle find γ by a nonlinear inverse operator

γ=Ttotal−1(Ttotal-measured(γ,TTWG))


Since the exact form of Ttotal(γ,TTWG) is not known, we find γ by a Gauss-Newton iteration method. This requires that the Jacobian of Ttotal(γ,TTWG) be computed. The Jacobian is readily computed in closed form [Borup, 1992] and is given by

J(γ(n)) = −∂Ttotal(γ(n), TTWG)/∂γ.


Then the Gauss-Newton iteration for computing γ is given by:


(1) set n=1, estimate a value for γ(n);


(2) compute J(γ(n));


(3) solve

JT(γ(n)) J(γ(n)) δγ(n) = −JT(γ(n)) [Ttotal-measured − Ttotal(γ(n), TTWG)]   for δγ(n);


(4) update γ(n) by the formula

γ(n+1)(n)+δγ(n);


(5) if ‖Ttotal-measured − Ttotal(γ(n), TTWG)‖ < ε then set γ = γ(n+1)


and quit, else go to step 2.
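The five steps above are a standard Gauss-Newton loop. Below is a minimal sketch on a made-up toy forward operator (the operator T_toy, starting guess, and tolerance are illustrative assumptions; the text computes the Jacobian in closed form, whereas here it is approximated by forward differences):

```python
import numpy as np

def gauss_newton(T, y_meas, gamma0, tol=1e-9, max_iter=50):
    # Steps (1)-(5): iterate gamma <- gamma + dgamma, where
    # J^T J dgamma = J^T [y_meas - T(gamma)]  (the text's minus sign is
    # folded in by taking J = +dT/dgamma rather than J = -dT/dgamma).
    gamma = np.asarray(gamma0, dtype=float)
    eps = 1e-7
    for _ in range(max_iter):
        r = y_meas - T(gamma)                    # data residual
        if np.linalg.norm(r) < tol:              # step (5) convergence test
            break
        J = np.column_stack([(T(gamma + eps * e) - T(gamma)) / eps
                             for e in np.eye(gamma.size)])
        gamma = gamma + np.linalg.solve(J.T @ J, J.T @ r)   # steps (3)-(4)
    return gamma

def T_toy(g):                                    # hypothetical forward model
    return np.array([g[0] ** 2, g[0] * g[1], g[1] ** 2 + g[0]])

gamma_true = np.array([1.5, -0.5])
gamma_hat = gauss_newton(T_toy, T_toy(gamma_true), [1.0, -1.0])
```

For a zero-residual problem with a reasonable starting guess, the iteration converges to the true parameters in a handful of steps.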


The extra dynamic range provided by the differential waveform generator/analog to digital converter circuit raises questions as to the optimal setup procedure (e.g., how many bits to span the noise present with no signal). We have modeled the signal to noise ratio that can be achieved by a beamformer which delays and sums multiple channel signals, each channel being such a circuit. We find, as a rule of thumb, that the lowest order one or two bits should span either the noise or the signal, whichever is smaller (i.e., for signal to noise ratios greater than unity the noise should be spanned, but for signal to noise ratios less than unity the signal should be spanned). When using commercial 16 or 18 bit analog to digital converters, this method may well extend their range to 20 bits or more.


2. Model Electrical Crosstalk, Acoustic Crosstalk


Electrical cross talk can be removed by use of knowledge of the cross coupling matrix M. Let Vn(true) be the true electrical signal at transducer n and let Vm(meas) be the measured signal at transducer m. Then Vn(meas) = Σm Mnm Vm(true). We observe for small cross talk that the matrix M has the form M = D2(I+ε)D1, where I is the identity matrix, D1 and D2 are diagonal matrices, and ε is the differential cross talk matrix whose elements are small in value. We seek

V(true) = M⁻¹ V(meas)

which, by the binomial theorem, gives (I+ε)⁻¹ ≈ (I−ε). Thus,

M⁻¹ ≈ D1⁻¹(I−ε)D2⁻¹.

Once D1, D2, and M are measured, the problem of removing cross talk is quite inexpensive numerically (even if ε is not small, the exact inverse M⁻¹ can be computed once and stored). If the matrix M turns out to be noninvertible (as can be the case for large magnitude coupling), then we can alternatively concatenate M onto the inverse scattering equation to give:

vωφ(meas) = Mω Pω Gω [γω] fωφ

to which the inverse scattering algorithm can be directly applied.


We believe that cross talk can be removed by good fabrication techniques including careful shielding. Nevertheless, the above numerical method can be used if necessary.
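The first-order inverse M⁻¹ ≈ D1⁻¹(I − ε)D2⁻¹ can be sketched numerically; the sizes, seed, and coupling level below are illustrative assumptions. To first order in ε the true signals are recovered, and the exact inverse can also be applied directly when ε is not small:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
D1 = np.diag(1.0 + 0.1 * rng.standard_normal(n))    # diagonal gain matrices
D2 = np.diag(1.0 + 0.1 * rng.standard_normal(n))
eps = 1e-3 * rng.standard_normal((n, n))            # small differential cross talk
np.fill_diagonal(eps, 0.0)
M = D2 @ (np.eye(n) + eps) @ D1                     # M = D2 (I + eps) D1

v_true = rng.standard_normal(n)
v_meas = M @ v_true
# first-order (binomial) inverse: residual error is O(||eps||^2)
v_rec = np.linalg.inv(D1) @ (np.eye(n) - eps) @ np.linalg.inv(D2) @ v_meas
v_exact = np.linalg.solve(M, v_meas)                # exact inverse, if needed
```

The approximate inverse costs only two diagonal scalings and one small matrix-vector product per channel vector, which is why the correction is numerically inexpensive.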


Acoustic cross talk can be removed by several methods: (1) acoustic baffling of each transducer; (2) calibration of individual (isolated) transducers, computing the acoustic coupling in the array from wave equation methods, then inverting the model by a cross talk matrix as above; (3) direct measurement of the coupling in a finished array to find the cross talk matrix and then inverting as shown above.


A more difficult type of cross talk to remove is the direct mechanical coupling between transducers. This problem will be attacked by using vibration damping techniques in mounting each transducer on the frame. We believe that such damping methods will eliminate direct mechanical coupling. As a backup, we note that modeling of the coupled system is theoretically possible and has been successfully accomplished by the university's AIM Lab for circular mounting geometries (by derivation of a new “total system Green's function” for the imaging system that includes cross coupling between elements).


EXAMPLE 10
Inclusion of Transducer Coupling

In the event that significant coupling exists between the object to be imaged and the transducers (and/or coupling between transducers is not negligible), a computationally efficient means of incorporating this coupling into the inverse scattering algorithm is needed. Equivalently, the transducers must be included as part of the scattering problem. This section details a computational algorithm for achieving this incorporation.


Consider an object to be imaged, γ, illuminated by a transmitter, T, with the scattering received by a receiver, R, as shown in FIG. 24. Let C denote a closed surface separating γ from the transceivers.


Let S be the scattering matrix of γ which, given the incident field generated from sources outside of C, gives the outward moving scattered field evaluated on C. This operator can be computed by solving a sufficient number of forward scattering problems for the object γ.


Let PR denote the operator that computes the field impinging on R due to sources inside of C from the scattered field evaluated on C. This is a simple propagation operator computable by an angular spectrum technique.


Let PT denote the operator that computes the field impinging on T due to sources inside of C from the scattered field evaluated on C. This is a simple propagation operator computable by an angular spectrum technique.


Let ART denote the operator that computes the field impinging on R due to scattering from T (it operates on the net total field incident on T). This operator is computed by a moment method analysis of the transmitter structure.


Let ATR denote the operator that computes the field impinging on T due to scattering from R (it operates on the net total field incident on R). This operator is computed by a moment method analysis of the receiver structure.


Let BT denote the operator that computes the field on C due to scattering from T (it operates on the net total field incident on T). This operator can be computed by a moment method analysis of the transmitter structure.


Let BR denote the operator that computes the field on C due to scattering from R (it operates on the net total field incident on R). This operator can be computed by a moment method analysis of the receiver structure.


Assume that the transmitter T also produces a field, fi, due to externally applied excitation (electrical). Denote by fCi the values of this field on C. Denote by fRi the values of this field on the receiver surface. We assume that these field values are known (i.e., we know the free-space radiation characteristics of the transmitter).


Given these definitions, the total field incident from outside of C evaluated on C is given by:

fCtot = fCi + BT fTtot + BR fRtot   (1)

where the superscript tot indicates the field incident on the particular element due to all other sources. For T and R we have:

fTtot = PT S fCtot + ATR fRtot   (2)
fRtot = fRi + PR S fCtot + ART fTtot   (3)


Note that the formula for fTtot has no superscript-i term, since the incident field emanates from its interior. Solving (1)-(3) for the tot fields gives:
[fCtot]   [ I        −BT     −BR  ]^{−1} [fCi]
[fTtot] = [ −PT S     I      −ATR ]      [ 0  ]   (4)
[fRtot]   [ −PR S    −ART     I   ]      [fRi]


The size of this matrix operator is O(N×N), so computation of its inverse does not require much CPU time (mere seconds). In order to compute the signal received by the receiver transducer, we take fRtot computed in (4) and compute the surface currents (EM case) or surface velocities (acoustic case), from which the signal from the receiver can be computed.


This procedure for analyzing a scatterer in the presence of coupling between the T/R pair includes all orders of multiple interaction allowing transducers with complex geometries to be incorporated into our inverse scattering algorithms.
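Equations (1)-(4) amount to a single block linear solve. The sketch below uses random dense stand-ins for the operators (the sizes and scales are assumptions; in practice the blocks come from the moment-method and angular-spectrum computations described above) and checks that the solved fields satisfy equations (1)-(3):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5                                   # samples per surface in this toy model

def op(scale=0.1):
    return scale * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

S = op(0.3)                             # scattering operator of the object on C
BT, BR = op(), op()                     # T -> C and R -> C scattering
PT, PR = op(), op()                     # C -> T and C -> R propagation
ATR, ART = op(), op()                   # R -> T and T -> R interaction
fC_i = rng.standard_normal((n, 1)) + 0j
fR_i = rng.standard_normal((n, 1)) + 0j

I = np.eye(n)
K = np.block([[I, -BT, -BR],
              [-PT @ S, I, -ATR],
              [-PR @ S, -ART, I]])
rhs = np.vstack([fC_i, np.zeros((n, 1)), fR_i])
fC, fT, fR = np.split(np.linalg.solve(K, rhs), 3)
```

Because the solve is exact, all orders of multiple interaction between C, T, and R are included automatically, which is the point made in the text.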


EXAMPLE 11
Frequency Dependent Scattering Parameters

Throughout the previous sections it has been assumed that γ is independent of frequency. Suppose now that γ is a function of frequency. In the event that only a single frequency is needed (transmission mode with encircling transducers and only one complex parameter to be imaged), this is not a problem: the algorithm will simply image the 2-D or 3-D distribution of γ evaluated at that frequency. However, in reflection mode, or when imaging multiple parameters, multiple frequencies are needed. This increases the number of unknowns to Ω·Nx·Ny if we naively seek a separate image at each frequency. Since multiple frequencies were already needed to complete the data for a frequency independent unknown, we have no way of correspondingly increasing the number of equations by a factor of Ω. Instead, consider approximating the frequency variation with a set of parameters at each pixel:

γnm(ω) ≈ γnm^(0) Ψ^(0)(ω) + γnm^(1) Ψ^(1)(ω) + … + γnm^(q−1) Ψ^(q−1)(ω)

where the basis functions Ψ are selected based on the physics of the frequency dependence (we may, for example, simply use the monomial basis, Ψ^(n)(ω) ≡ ω^n; alternatively, a rational function basis might be chosen, since simple relaxation dispersion is a rational function). The formula for the GN update for this case is:

Pω Gω (I − [γω] Gω)⁻¹ [fωφ] Mωj δγ^(j) = −Tωφ

where summation over j is assumed and the matrix M is given by:
M = [ Ψ^(0)(ω_1)   Ψ^(1)(ω_1)   …   Ψ^(q−1)(ω_1)
      Ψ^(0)(ω_2)   Ψ^(1)(ω_2)   …   Ψ^(q−1)(ω_2)
         ⋮                              ⋮
      Ψ^(0)(ω_Ω)   Ψ^(1)(ω_Ω)   …   Ψ^(q−1)(ω_Ω) ]


Solution for the q·Nx·Ny unknowns, assuming that q is sufficiently smaller than Ω, can then be carried out via the GN-MRCG or RP algorithms. We have already verified the success of this approach using quadratic polynomial models of the frequency variation

(q = 3, Ψ^(n)(ω) ≡ ω^n).
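For the monomial case, recovering the q per-pixel coefficients from Ω > q frequency samples is an ordinary least-squares problem in the basis matrix. The frequencies and coefficients below are made-up illustrative values:

```python
import numpy as np

q = 3                                           # quadratic model, Psi^(n)(w) = w^n
omegas = np.array([1.0, 1.5, 2.0, 2.5, 3.0])    # Omega = 5 > q frequencies
M = np.vander(omegas, q, increasing=True)       # rows [1, w, w^2]: the basis matrix

coeffs_true = np.array([0.2, -0.05, 0.01])      # gamma^(0), gamma^(1), gamma^(2)
gamma_w = M @ coeffs_true                       # per-frequency values at one pixel
coeffs_fit, *_ = np.linalg.lstsq(M, gamma_w, rcond=None)
```

Because the samples are noise-free and Ω > q, the least-squares fit recovers the coefficients exactly; with measured data the same fit gives the best quadratic approximation of the dispersion.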


We have discussed three separate algorithms for imaging:


(1) FFT-BCG


(2) Cylindrical Recursion


(3) Rectangular Recursion


We have given examples of how this technology is used in various scenarios. We have shown how various modalities of wave propagation are amenable to the imaging algorithm. Any propagating substance or energy that can be successfully modelled with the wave equation is suitable. We have also included examples of simulated and actual data, thereby establishing the concept as workable.


The apparatus and method of the present invention holds promise for many useful applications in various fields, including seismic surveying, nondestructive testing, sonar, radar, ground penetrating radar, optical microscopes, x-ray microscopes, and medical ultrasound imaging, to name just a few. For purposes of illustrating the utility of the present invention, the detailed description which follows will emphasize the apparatus and method of the invention in the context of a system for use in performing ultrasound imaging of human organs, such as the breast. However, it will be appreciated that the present invention as claimed herein may employ other forms of energy such as microwaves, light or elastic waves and furthermore may be used in other fields and is not intended to be limited solely to medical acoustic imaging.


1. The Scanner and Transducer Configuration


Reference is first made to FIG. 1 which generally illustrates one type of scanner which may be used to implement the apparatus and method of the present invention for purposes of medical ultrasound imaging of a human breast or other organs. As shown in FIG. 1, the scanning apparatus generally designated at 30 includes fixed base 32. Wheels 38 and 40 are attached to the underside of a movable carriage base 34. Small shoulders 42-45 formed on the upper surface of cylindrical pedestal 36 define a track along which the wheels 38 and 40 are guided.


A stepping motor 46 mounted within the fixed base 32 is joined by a shaft 48 to a small pinion gear 50. Pinion gear 50 engages a large drive gear 52. Pillars 54-57 are rigidly joined at one end to the top of drive gear 52 and at the opposite end to the underside of movable carriage base 34. Bearing block 58 supports drive gear 52 and movable carriage base 34.


Stepping motor 46 may be operated to turn the drive gear 52 which in turn will cause the movable carriage base 34 to rotate on top of the cylindrical pillar 36 within the tracks defined by shoulders 42-45. As hereinafter more fully described, rotation of the movable carriage base 34 may be employed to insure that an object is fully scanned from every possible angle.


With continued reference to FIG. 1, it will be seen that movable carriage base 34 has an inner cylindrical wall 60 and an outer cylindrical wall 62. The outer wall 62 and inner cylindrical wall 60 of movable carriage base 34 define a generally cylindrical chamber 64. Vertical drive motor 66 is mounted within chamber 64 and is connected by a shaft 68 to a circular ring of transducer arrays generally designated at 70. Vertical drive motor 66 permits the circular ring of transducer arrays 70 to be vertically adjusted. Slide bracket 72 is mounted within the chamber 64 and serves to guide in a sliding manner, the ring of transducer arrays 70 when it is vertically adjusted.


The ring of transducer arrays 70 is electrically connected through line 74 to components of an electronic system which may be housed in part within the chamber 64, as schematically indicated at 76. As hereinafter more fully described, the electronic system is used to control transmission and reception of acoustic signals so as to enable reconstruction therefrom of an image of the object being scanned.


Circular bracket 78 is attached to the top of the outer wall 62 of movable carriage base 34. A flexible, transparent window 80 extends between circular bracket 78 and the inner cylindrical wall 60 so as to enclose the transducer arrays 70 and vertical drive motor 66 within the chamber 64. The length of flexible window 80 is greater than the distance between bracket 78 and inner cylindrical wall 60. Window 80 thus serves as a flexible yet water-tight seal which permits vertical motion of the transducer arrays 70 for purposes of vertical focusing. Acoustically transparent window 80 may be made of any suitable material, such as plastic or rubber.


A stationary water tank generally designated 86 is adapted to fit within the movable carriage base 34. Water tank 86 consists of a fixed top plate 88 rigidly attached to vertical support bars 82 and 84. Support bars 82 and 84 are mounted on the fixed base 32. The length of support bars 82 and 84 is chosen such that the fixed top plate 88 of water tank 86 will be slightly suspended above the bracket 78 of movable carriage 34. Thus, a space 87 is provided between bracket 78 and fixed top plate 88. Additionally, a space 89 will be provided between side 94 and bottom 95 of water tank 86 and cylindrical wall 60 and bottom 61 of movable carriage 34. A third support bar 83 extends through a central hole (not shown) provided in block 58 and drive gear 52. Support bar 83 also extends through a watertight opening 85 provided in the bottom 61 of movable carriage 34. Support bar 83 thus helps to support water tank 86 in spaced relation from movable carriage 34. Since water tank 86 is suspended in spaced relation from movable carriage base 34, water tank 86 will remain stationary as movable carriage 34 is rotated. As hereinafter more fully described, rotation of the carriage 34 permits the transducer arrays 70 to scan the object 98 from every possible position around the object 98.


Fixed top plate 88 has a short downwardly extending lip 90 which extends over the end of circular bracket 78. A rubber-covered window 92 extends between the lip 90 and side 94 of the water tank. Window 92 encloses within space 89, water 97, or some other suitable acoustic medium so as to acoustically couple the transducer array 70 to the water 96 contained in tank 86. The rubber-covered window 92 also permits acoustic energy signals to be transmitted therethrough by the transducer arrays 70 and insures that the patient will be protected in the event window 92 should be broken.


The scanning apparatus generally described above may be employed to scan various parts of the human anatomy as, for example, a patient's breast, as schematically illustrated at 98.


The scanning apparatus generally described above may be used for nonmedical applications and may employ acoustic or electromagnetic energy wavefields. For example, the same scheme may be used with acoustic transducers for scanning nonbiological specimens such as plastic parts. As a further example, the same scheme may be used with microwave wavefields. In this latter case, the use of a coupling medium such as water 96 and water 97 is optional. Indeed, at frequencies over one GHz, the microwave absorption of water limits the size of the apparatus to a few centimeters. However, the use of air permits the use of electromagnetic energy over the spectrum from DC to far ultraviolet and soft x-rays. For microwave imaging, the ring of acoustic transducer arrays 70 is replaced with a corresponding ring of microwave transducers (i.e., antennas).


Reference is next made to FIGS. 2-3. FIG. 2 generally illustrates one suitable type of transducer configuration for the transducer arrays of FIG. 1. As shown in FIG. 2, the transducer configuration consists of eight transmitter arrays 100-107 and eight corresponding receiver arrays 108-115. The transmitter arrays 100-107 are thin, cylindrically-shaped transducer arrays which provide point-source or line-source segment transmission of acoustic energy. The receiver arrays 108-115 are arc-wise shaped arrays which are interposed between each pair of transmitter arrays 100-107. For purposes hereinafter more fully described, every other receiver array (e.g., receiver arrays 108, 110, 112 and 114) has a shortened arc-wise length.


Each of the transducer arrays 100-115 may be any of several well-known types of transducers. For example, transducers 100-115 may be piezoelectric transducers which produce ultrasound energy signals directly from high-frequency voltages applied to the transducer. Alternatively, the transducer arrays 100-115 may be magnetostrictive transducers having a magnetic coil (not shown) which receives the electrical oscillations and converts them into magnetic oscillations which are then applied to the magnetostrictive material to produce ultrasound energy signals.


With continued reference to FIG. 2, it will be seen that the transducer arrays 100-115 are arranged so as to form a cylindrical ring of arrays which encircles the object 98. By encircling the object with the transducer arrays 100-115, the arrays 100-115 may be quickly commutated by either mechanical methods, electronic methods or by a combination of both methods so as to completely scan the object in a much shorter time. In the illustrated embodiment, commutation is achieved both by mechanical rotation by stepping motor 46 and by electronic triggering of transmitter arrays 100-107 in sequence, as described more fully below.


Commutation of the transmitter arrays 100-107 permits acoustic energy to be transmitted from every possible position about the object, thus ensuring that the data received (i.e. scattered acoustic energy) is complete. Commutation of the receiver arrays 108-115 ensures that all spaces between receiver arrays 108-115 (known as “sound holes”) will be covered, thus providing for accurate collection of all acoustic energy that is transmitted through or scattered by the object 98. However, commutation of the receiver arrays 108-115 is not necessary where transmitter arrays 100-107 are also used to receive acoustic signals. The circular configuration of transducer arrays 100-115 permits certain parts of the body to be scanned which would otherwise be inaccessible because of bones or other obstructions in the tissue.


The method for commutating the arrays 100-115 is best understood by reference to FIG. 3. First, each of the transmitter arrays 100-107 is sequentially triggered so as to transmit acoustic energy. Immediately after each transmitter array 100-107 is triggered, arrays 108-115 receive acoustic energy signals that have been either transmitted through or scattered by the object being scanned. Once this procedure has been followed for each of the transmitter arrays 100-107, the ring of arrays 70 is then mechanically rotated counterclockwise through a small angle, as schematically represented by arrow 116. The mechanical rotation is achieved by the stepping motor 46 (see FIG. 1) which rotates the movable carriage base 34, as described above.


After rotation of the arrays 100-115 to a second position, each of the transmitter arrays 100-107 is again sequentially triggered and data are again collected through receiver arrays 108-115. This procedure is repeated until acoustic energy has been transmitted at each possible point about the object.
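The combined mechanical and electronic commutation described above can be sketched as a simple nested loop, a minimal Python illustration only: the array counts match FIG. 2, but the rotation step count, trace length, and the `fire_and_record` placeholder are assumptions, not part of the patent.

```python
import numpy as np

# Sketch of the commutation schedule: at each mechanical rotation step,
# every transmitter array fires in sequence and all receiver arrays record
# the transmitted and scattered energy.
N_TX = 8            # transmitter arrays 100-107
N_RX = 8            # receiver arrays 108-115
N_STEPS = 45        # mechanical rotation steps (assumed)
N_SAMPLES = 256     # samples per received trace (assumed)

def fire_and_record(step, tx):
    """Placeholder for one transmit-receive event: one trace per receiver."""
    rng = np.random.default_rng(step * N_TX + tx)
    return rng.standard_normal((N_RX, N_SAMPLES))

data = np.zeros((N_STEPS, N_TX, N_RX, N_SAMPLES))
for step in range(N_STEPS):       # rotation by stepping motor 46
    for tx in range(N_TX):        # electronic sequencing via the multiplexer
        data[step, tx] = fire_and_record(step, tx)

print(data.shape)   # (45, 8, 8, 256): step x transmitter x receiver x time
```

The complete data set is indexed by rotation step, transmitter, receiver, and time sample; the reconstruction algorithms of section 3 consume such a set.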


Where the arrays 100-107 are used only for transmitting acoustic energy, a second series of rotations must then be effected to cover the sound holes between each pair of receiver arrays 108-115. For example, by rotating transmitter array 101 to the position occupied by the transmitter array 100, receiver arrays 109, 111, 113 and 115 will, because of their longer arcuate length, cover the spaces previously occupied by transmitter arrays 101, 103, 105 and 107. This procedure is repeated until all sound holes have been covered.


It should be noted that, for a fixed circumference, decreasing the length of each array and increasing the number of arrays permits electronic commutation to reduce the angle through which the ring of transducer arrays must be rotated to achieve complete collection of both echo and transmission data.


It should also be noted that, in principle, no mechanical rotation of the array of detectors is necessary if every element is small enough and can be made to act as either a receiver or a transmitter. Such an arrangement is more expensive than the technique illustrated in FIGS. 1, 2, and 3.


The transducer array 70 generally describes an implementation for an acoustic scanner and, in particular, of a breast scanner. In this case, the transducer elements generate and detect acoustic energy wavefields. However, it can be seen that the acoustic transducers can be replaced with microwave transducers (antennas). The functions of interleaving, rotation, and commutation are identical for either microwave or acoustic transducers.


2. The Electronic System


Reference is next made to FIGS. 4A-4B which schematically illustrate an electronic system which may be used to implement the apparatus and method of the present invention. It is clear from present practices in acoustics and radar technology that in both disciplines quadrature detection and analog to digital conversion are well known and widely used. Therefore, FIGS. 4A-4B generally illustrate the invention in both the acoustic and electromagnetic wavefield cases. For clarity and simplicity, the electronic system is described in the context of acoustic wavefields. Following this acoustic description, additional discussion of the applicability of FIGS. 4A-4B to microwave energy wavefields is provided.


As hereinafter more fully described, the electronic system generates the acoustic energy that is propagated through and scattered by the object 98. The electronic system thereafter detects and processes the acoustic energy signals that are scattered by and transmitted through the object 98, and then communicates the processed signals to a computer (CPU) which interprets the signals and outputs the result in the form of a visual display or printed output.


In the transmission mode, CPU 118 causes an oscillator 128 to output a waveform which is amplified by a power amplifier 130 before being sent through multiplexer 132 to one of the transmitters. CPU 118 controls the multiplexer 132 so as to sequence each transmitter array 100-107 used to generate the acoustic energy propagated through the acoustic medium and object. If desired, after it is amplified, the waveform can also be input to an impedance matching transformer (not shown) and to a series of RC or RLC networks connected in parallel across the transmitter arrays 100-107, as illustrated and described in U.S. Pat. No. 4,222,274 (hereinafter the Johnson '274 patent), which is incorporated herein by reference. The impedance matching transformer may be used to achieve maximum power transfer while the RC or RLC networks may be used to distribute power across each transmitter array 100-107 in a way that decreases the side lobes in the transmitted signal.


Each of the acoustic receivers 108-115 (FIG. 2) is connected through a multiplexer 134 which is also controlled by CPU 118. In the receive mode, the detected signals may also be input through a delay line (not shown) to an analog adder and time variable gain circuit (not shown) to vertically focus the signals and to compensate for signal attenuation, as shown and described in the Johnson '274 patent. CPU 118 causes multiplexer 134 to sequence in turn each of the acoustic receivers 108-115 so as to gather transmitted or scattered acoustic energy around the entire circumference of the object. From receiver multiplexer 134, detected acoustic energy signals are amplified by a preamplifier 136 which may be used to logarithmically amplify the detected signal to reduce the storage space required for each signal after it is digitized. The amplified signal is then processed by a phase detector 138.


The operation and components of phase detector 138 are best illustrated in FIG. 4B. As there shown, phase detector 138 receives the amplified signal as schematically represented by line 137 on which the signal is input to multipliers 154 and 156. The signal generated by oscillator 128 is input as shown at line 150 to one of the multipliers 156, and the signal from oscillator 128 is also shifted 90 degrees by the phase shifter 152 and then input as shown at line 153 to the other multiplier 154. Thus, each signal detected at the acoustic receivers is multiplied at multiplier 154 by a signal which is 90 degrees out of phase with the signal which is used to multiply the detected signal at the other multiplier 156. During the receive mode, the switch controller 158 and reset 166 are controlled as shown at line 123 by CPU 118 so that controller 158 closes each switch 164 and 165 after integrators 168 and 170 are reset. The resulting signals from multipliers 154 and 156 are then filtered by low-pass filters 160 and 162 and integrated by integrators 168 and 170. The integrated signals are then output, as shown at lines 139 and 140 to the analog to digital converters (ADCs) 142 and 143 (see FIG. 4A). The two signals which are output by phase detector 138 electronically represent the real and imaginary mathematical components of the acoustic signals which are detected at the acoustic receivers 108-115.
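The in-phase and quadrature paths of phase detector 138 can be illustrated numerically. The following is a minimal Python sketch with assumed frequencies and amplitudes: the received signal is multiplied by the oscillator output (multiplier 156) and by a 90-degree-shifted copy (multiplier 154), then low-pass filtered and integrated, modeled here as averaging over whole cycles.

```python
import numpy as np

# Quadrature (I/Q) detection sketch: recover the modulus and phase of a
# received tone relative to oscillator 128. All values are illustrative.
fs = 1.0e6                      # sample rate (assumed)
f0 = 50.0e3                     # oscillator frequency (assumed)
spc = int(fs / f0)              # 20 samples per cycle
t = np.arange(100 * spc) / fs   # integrate over 100 full cycles

amp, phase = 0.7, 0.4           # modulus and phase carried by the echo
rx = amp * np.cos(2 * np.pi * f0 * t + phase)

i_comp = 2 * np.mean(rx * np.cos(2 * np.pi * f0 * t))    # in-phase (real) path
q_comp = 2 * np.mean(rx * -np.sin(2 * np.pi * f0 * t))   # quadrature (imaginary) path

print(round(np.hypot(i_comp, q_comp), 3))    # 0.7  (recovered modulus)
print(round(np.arctan2(q_comp, i_comp), 3))  # 0.4  (recovered phase)
```

The two outputs correspond to the real and imaginary components delivered to ADCs 142 and 143.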


Once the received signals have been digitized, they are input and stored in the memory of an array processor 120. Alternately, a parallel processor or other special purpose high-speed computational device may be used for even higher computational speed. As hereinafter more fully described, CPU 118 in combination with the array processor 120 is programmed to then reconstruct the acoustic image using inverse scattering techniques, several alternatives of which are described more fully in Examples 1-3 of section 3 below. Once the acoustic image is reconstructed, it may be output either visually at a display 124 or in printed form at printer 122, or stored on a disc or other storage medium 123. Mechanical scan devices represented at 126 correspond to the motors 46 and 66 (FIG. 1) for controlling commutation and vertical positioning of the transmitter and receiver arrays 100-107 and 108-115, and are controlled by CPU 118. Other configurations of transducers can be scanned by mechanical means through use of mechanical scanning device 126.


The electronic circuits of FIGS. 4A-4B are effective over frequency ranges from fractions of a Hz to hundreds of GHz. The transmitter multiplexer 132 can drive signals to either acoustic or electromagnetic energy transmitting transducers. Likewise, receiving multiplexer 134 can receive signals from either acoustic or electromagnetic energy transducers. The operation of the circuit in FIG. 4A is independent of the source of the received signal at receiver multiplexer 134 or the destination of the signal from transmitter multiplexer 132. The operation of the phase detector of FIG. 4B is independent of the type of wavefield used.


Reference is next made to FIGS. 4C-4F which schematically illustrate how electronic multiplexing may be used to eliminate the need for mechanical rotation, thus further enhancing system performance. In the following description, the term transducer applies equally well to both acoustic and electromagnetic energy types. In particular, FIG. 4C shows n transducer elements 131a through 131n of a circular array closely spaced to form a continuous array surrounding the body. The array is shown as a one-dimensional structure, but clearly could be a two-dimensional structure as in FIG. 2. FIG. 4D illustrates the n elements 131a through 131n arranged in a linear fashion. It is also clear that a planar or checkerboard two-dimensional array of elements could be used.


Such one-dimensional and two-dimensional arrays of receivers and transmitters have a direct application to advanced medical imaging instruments where motion of the array is undesirable or to seismic exploration in which such movements are difficult. FIG. 4E illustrates how each element 131a through 131n may be switched to either a transmitter circuit or a receiver circuit. Here, for example, element 131a is switched by switch 137a to either a receiver circuit 133a or a transmitter circuit 135a. FIG. 4F shows how a passive network of diodes and resistors may be used to allow a single element to act as either a transmitter or a receiver, or in both capacities. For example, in the transmit mode, diodes 139 are driven into conduction by the transmit signal on line 135a. With two silicon diodes in series in each parallel leg, the voltage drop is a few volts. Thus, for an applied transmit signal of 20 volts or more, only a small percentage of signal power is lost across diodes 139. Diodes 139 are arranged in a series-parallel network so that either polarity of signal is passed to transducer element 131a with negligible loss. In the transmit mode, resistors 145, 147, and 149 and diodes 141 and 143 prevent excessive and harmful voltage from appearing at output 133a, which leads to the preamplifier, multiplexer, or analog-to-digital circuits that follow. In operation, resistor 145, diode 141, and resistor 149 act as a voltage divider for the large transmit voltage present at the transducer element 131a. Diodes 141 are arranged with opposing polarity to provide a path for any polarity of signal above their turn-on voltage of about 0.7 to 1.0 volts. The values of resistors 145 and 149 are typically chosen so that the impedance of resistor 145 is greater than or equal to that of the internal impedance of transducer element 131a. Resistor 149 is chosen to be some fraction of resistor 145, such as one-fifth. Resistor 147 typically is chosen to be about equal to the resistance of resistor 149. Thus, during transmission, the voltage appearing at output 133a is only the conduction voltage drop across diodes 143.
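A rough numeric illustration of the transmit-mode protection described above follows. The component values are hypothetical, chosen only to be consistent with the stated ratios (resistor 145 about equal to the transducer impedance, resistor 149 about one-fifth of resistor 145, resistor 147 about equal to resistor 149); the simplified divider topology is likewise an assumption for illustration.

```python
# Transmit-mode protection network of FIG. 4F, illustrative values only.
Z_TRANSDUCER = 50.0        # ohms, internal impedance of element 131a (assumed)
R145 = 50.0                # chosen >= Z_TRANSDUCER
R149 = R145 / 5.0          # one-fifth of R145
R147 = R149                # about equal to R149
V_TX = 20.0                # volts, applied transmit signal
V_DIODE = 0.7              # silicon diode conduction drop, volts

# The series divider attenuates the large transmit voltage, and diodes 143
# then clamp the residual so that only about one conduction drop appears at
# output 133a, as stated above.
v_divided = V_TX * R149 / (R145 + R149)
v_at_output = min(v_divided, V_DIODE)

print(round(v_divided, 2))   # 3.33 V after the divider
print(v_at_output)           # 0.7 V at the preamplifier side
```

The preamplifier thus sees at most about one diode drop during transmission, well below the applied 20 volts.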


In the receiving mode, signals received at transducer element 131a are typically less than one diode voltage drop (about 0.7 volt) and thus are isolated from transmitter 135a, since point 135a is a ground and diodes 139 are not conducting. Diodes 141 and 143 are not conducting and therefore output 133a is not shunted. Thus, the preamplifier following 133a would only see an impedance of resistor 145 plus resistor 147 plus that of the transducer element 131a. In practice, resistor 145 plus resistor 147 can be made about equal to or less than the impedance of transducer element 131a to minimize signal loss. It is clear that the principles illustrated in FIGS. 4C-4F may also be applied in the case of wide bandwidth signal transmission as described further below in connection with FIGS. 5A-5B.



The diode network of FIG. 4F for isolating the transmitter from the receiver circuits is commonly used in ultrasound scanners and sonar systems. Similar circuits are used in some radar systems. At radar frequencies, other methods of isolation, such as gas diodes, directional couplers, and active diode switches, are also used.



FIGS. 5A and 5B schematically illustrate another electronic system which can be used to implement the apparatus and method of the present invention. In the description that follows, the circuit is specifically designed for driving acoustic transmitting transducers and for digitizing signals from acoustic transducers, but the same circuit applies to electromagnetic transducers (antennae) as well. In FIG. 5A, oscillator 128 has been replaced by waveform generator 129 that can produce repetitive narrow bandwidth signals or a wide bandwidth signal. The advantage of using a suitable wide bandwidth signal is that in one transmit-receive event, information at all frequencies of interest may be collected. Typically, it will be preferable to scan the object under consideration with signals having several different frequencies. This may be especially true in cases where it is not possible to encircle the object with a ring of transducer arrays, as in the case of seismic imaging. Moreover, by using multiple frequencies, it will typically be possible to obtain more data for use in reconstructing the image. The lowest frequency used for the signal must be such that the wavelength of the signal is not significantly larger than the object to be imaged. The highest frequency must be such that the acoustic signal may be propagated through the object without being absorbed to such an extent as to render detection of the scattered signal impossible or impractical. Thus, depending upon the absorption properties and size of the object which is to be scanned, use of multiple frequencies within the range indicated will typically enhance the ability to more accurately reconstruct the image of the object.


The use of multiple frequencies or signals containing many frequencies also has the advantage of obtaining data that may be used to accurately reconstruct frequency dependent material properties.


In the electronic system of FIG. 4A, in order to scan the object using multiple frequencies, the frequency of oscillator 128 must be changed several times at each transmitter position. This increases the time involved in transmitting and detecting the scattered acoustic energy from which the image is reconstructed. In the electronic system of FIG. 5, a special waveform is used which inherently includes multiple frequencies within it. The type of waveform generated by waveform generator 129 may be the well-known Tanaka-Iinuma kernel or the Ramachandran and Lakshminarayanan kernel, as illustrated and described in the Johnson '274 patent. Swept frequency signals such as the well-known frequency modulated chirp or other types of waveforms could also be used to provide acceptable results.
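A swept-frequency waveform of the kind mentioned above can be sketched in a few lines. This is a minimal illustration only; the sample rate, band edges, and pulse length are assumptions, not values from the patent.

```python
import numpy as np

# Linear FM chirp: one transmit event sweeps the whole band of interest so
# that information at many frequencies is collected at once.
fs = 10.0e6                     # sample rate (assumed)
f_lo, f_hi = 0.5e6, 2.5e6       # swept band (assumed)
n = 1000                        # samples in the pulse
t = np.arange(n) / fs
T = n / fs                      # pulse duration, 100 microseconds

k = (f_hi - f_lo) / T           # linear sweep rate, Hz per second
chirp = np.cos(2 * np.pi * (f_lo * t + 0.5 * k * t ** 2))

# The magnitude spectrum is concentrated in the swept band:
spec = np.abs(np.fft.rfft(chirp))
freqs = np.fft.rfftfreq(n, 1 / fs)
in_band = spec[(freqs >= f_lo) & (freqs <= f_hi)].mean()
out_band = spec[freqs > 3.5e6].mean()
print(in_band > out_band)       # True
```

A single such pulse thus carries energy across the entire band, which is the stated advantage over changing the oscillator frequency several times at each transmitter position.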


As shown in FIG. 5B, the waveform generator 129 basically comprises five elements. An electronic memory device 178 is connected to the CPU 118 through read-and-write lines 179 and 181. CPU 118 determines the numerical value for a series of discrete points on the selected waveform. These numerical values are stored in binary form in memory device 178. Each of the discrete values stored in the memory device 178 is then sent to a digital to analog converter (DAC) 180. DAC 180 then transforms each of these digital values into a corresponding analog pulse which is then input to a sample and hold circuit 182. Sample and hold circuits, such as that schematically illustrated at 182, are well-known in the art and operate to hold each analog signal for a predetermined period of time. Counter-timer 184 is used to control the amount of time that each signal is held by the sample and hold circuit 182. Clock and synchronizer 183 is triggered by computer 118 and advances the memory address in memory 178, synchronizes the DAC 180, and controls the sample and hold 182.


With each clock pulse from counter-timer 184, the sample and hold circuit 182 retrieves one of the analog signals from DAC 180 and then holds the value of that signal for the duration of the clock pulse. Sample and hold circuit 182 thus outputs the approximate shape of the desired waveform which is then input to a low pass filter circuit 186 which is used to smooth the shape of the waveform before amplification and transmission through multiplexer 132 to one of the transmitters.
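The memory, DAC, sample-and-hold, and low pass filter chain just described can be modeled numerically. The following Python sketch uses an assumed table size and clock ratio, with the sample and hold modeled as a zero-order hold and the low pass filter modeled crudely as a short moving average.

```python
import numpy as np

# Waveform generator path of FIG. 5B: discrete values stored in memory 178
# are converted by DAC 180, held for one clock period by sample and hold
# 182, and smoothed by low pass filter 186.
stored = np.sin(2 * np.pi * np.arange(32) / 32)   # waveform table (assumed)

HOLD = 8                               # clock periods per stored value (assumed)
held = np.repeat(stored, HOLD)         # staircase output of the sample and hold

kernel = np.ones(HOLD) / HOLD          # crude stand-in for the low pass filter
smooth = np.convolve(held, kernel, mode="same")

# Smoothing reduces the staircase discontinuities before amplification:
max_step_raw = np.max(np.abs(np.diff(held)))
max_step_smooth = np.max(np.abs(np.diff(smooth)))
print(max_step_smooth < max_step_raw)  # True
```

The smoothed output approximates the desired waveform shape, which is then amplified and routed through multiplexer 132.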


The electronic system of FIG. 5A also differs from the system shown in FIG. 4A in that the amplified analog signals output by the preamplifier 136 are digitized by a high-speed analog-to-digital converter 144 rather than being input to a phase detector. The digitized signals are then transformed by a complex fast Fourier transform (hereinafter “FFT”) operation provided by a parallel processor 146 prior to being input and stored in the memory of the CPU 118. As described further in section 3 below, CPU 118 then reconstructs the image using a novel inverse scattering technique. Use of array processor 120 with CPU 118 in FIG. 4A and parallel processor 146 with CPU 118 in FIG. 5A is arbitrary and in principle either array processor 120 or parallel processor 146 or special purpose hard wired computational hardware could also be used in either FIG. 4A or 5A to accomplish the image reconstruction, as described further in section 3.
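The FFT step described above can be illustrated with a short sketch: the digitized wideband trace is Fourier transformed and the complex value (modulus and phase) at each frequency of interest is extracted. The sample rate, record length, and test frequencies below are assumptions chosen so each test tone falls exactly on an FFT bin.

```python
import numpy as np

# Receive path of FIG. 5A: digitize, then take a complex FFT and read off
# the complex amplitude at each frequency of interest.
fs = 8.0e6
n = 1024
t = np.arange(n) / fs

# Simulated digitized trace with two known frequency components:
trace = 1.5 * np.cos(2 * np.pi * 0.5e6 * t + 0.3) \
      + 0.8 * np.cos(2 * np.pi * 1.0e6 * t - 0.7)

spectrum = np.fft.rfft(trace) / (n / 2)   # scaled so bins read in amplitude
freqs = np.fft.rfftfreq(n, 1 / fs)

for f in (0.5e6, 1.0e6):
    k = int(round(f * n / fs))            # exact bin: f is a multiple of fs/n
    print(round(abs(spectrum[k]), 3), round(float(np.angle(spectrum[k])), 3))
# prints: 1.5 0.3   then   0.8 -0.7
```

The recovered complex amplitudes at each frequency are the quantities stored for the inverse scattering reconstruction.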


Optical Microscope


The operation of the inverse scattering microscope can be seen in block diagram form in FIG. 6A. This figure explains the operation of a specially modified Mach-Zehnder interferometer based detection system. This detection system produces the information from which phase and modulus information is produced that is needed by an inverse scattering algorithm. A monochromatic, coherent light source such as laser 500 produces an approximately parallel beam of light 501 that passes through polarizer 502 and passes into beam expander and spatial filter 504. Such beam expanders are well known and a representative form is shown wherein the input beam enters a positive lens 506 with focal length L1. A pin hole 508 is placed at a distance of L1 from lens 506 to remove unwanted spatial frequency spectral components. Light from pin hole 508 is reconverted into output parallel beam 509 by a lens with focal length L2, the pinhole 508 being placed at a distance L2 from this lens. The output of beam expander 504 is beam 509, which passes to beam splitter 510 and is split into two beams: beam 511 and beam 513. Beam 511 is the precursor of the main beams 515 and 516. Beam 513 is the precursor of stabilizing beams 517 and 518.


The operation of main beams 515 and 516 is now described. Beam 515 is the precursor of a beam that will pass through the interferometer in a clockwise direction and will pass through and scatter from the sample and will act as a probing beam. Main beam 516 is the precursor of a beam that will not pass through the sample, but will act as a reference beam as it traverses the interferometer in a counterclockwise direction. The probing beam and the reference beam will be combined to produce various interference patterns from which phase and modulus information can be obtained. Beam 515 passes through shutter 520 and then is reflected by mirror 522 to become beam 519. Beam 519 passes through objective lens 523 into sample holder 525. The light that is scattered by the sample and the sample holder is collected by objective lens 526 and passed as beam 527 to beam splitter 528. Companion main beam 516 passes from beam splitter 514 through shutter 530 to phase shifter 532. Phase shifter 532 can be set to introduce either a 0 degree phase shift or a 90 degree phase shift. Beam 533 is the output of phase shifter 532.


The purpose of objective lens 526 is to collect as large an angle of scattered light as possible. Objective lens 523 serves several purposes including: (1) that of focusing a small spot of light onto the sample holder of higher intensity than normally would be present in beam 519 in order to improve the signal to noise ratio; and (2) that of restricting the illumination to a narrow, focal spot region with desirable, plane-wave-like phase fronts with lateral amplitude apodizing.


Light beam 531 from shutter 530 passes through transmission-type phase shifter 532 where its phase is shifted either 0 degrees or 90 degrees. Beam 533 exits transmission-type phase shifter 532 and passes to mirror 536. At mirror 536 beam 533 is reflected to become light beam 539. Mirror 536 is driven by optional mirror driver 538 in small amplitude, periodic motion at frequency ω and in a direction perpendicular to the mirror face in its resting position. As mirror 536 vibrates it introduces a small periodic phase shift modulation to beam 537, but this phase shift is negligible. The purpose of either transmission-type phase modulator 534 or mirror driver 538 is to impart a small periodic phase shift, typically from 1 degree to 10 degrees at frequency ω, into beam 518 in order to stabilize the main beams 527 and 541 as combined to form beam 543. Either of these phase modulation methods is workable either separately or in combination. Usually, for cost reasons, only one of these phase modulation methods will be implemented in hardware.


Beam 539 emerges from mirror 536 and passes through beam transformer 540 and thence to beam splitter 528. Beam transformer 540 serves the purpose of adjusting the phase fronts and lateral amplitude of beam 541 to match those of beam 527 at beam splitter 528 and at camera 544. While such matching may not be strictly necessary, it does provide a simpler interference pattern that is mostly a function of the inserted object and not a function of the interferometer itself. Beam transformer 540 could be constructed in a number of ways, but one obvious way would be to replicate objective lens pair 523 and 526, however with one of the following modifications: (1) the sample holder and object are omitted; or (2) the sample holder is present but no object is present.


Beams 541 and 527 are combined by beam splitter 528, now acting as a beam combiner, and produce an interference pattern on the sensitive one-dimensional or two-dimensional solid state array of detector cells in camera 544. The intensity pattern on the array of detector cells is then converted into an electrical signal that is sent to the digitizer and computer 545 where this signal is digitized and stored in memory. Computer 545 controls the angular position of sample holder 525 by means of rotation device 524. Rotation device 524 also serves the function, under direction of computer 545, of inserting or removing the object from the beam. Rotation device 524 may be a stepping motor or a worm gear driven rotation stage as is common in optical technology. Computer 545 also controls the transmission of shutters 520 and 530, controls the phase of phase shifter 532, and controls the digitizing of camera 544. Computer 545 performs the important role of creating different diffraction patterns by changing the combinations of: (1) sample in or sample out; (2) main beam A only or main beam B only or their sum; or (3) phase shifter in main beam set at 0 degrees or at 90 degrees. From these different diffraction patterns, as described by the eight measurements and equations in the summary of invention section, computer 545 can compute the scattered detected field from the object. By including various test objects of known properties, the incident field present at the object can be calculated. The set of scattered fields calculated for each of the various respective adjusted angular positions of the object holder 525 is used as the detected scattered fields in the inverse scattering algorithms. Likewise, the incident field in and around the sample holder is taken as the incident fields in the inverse scattering algorithms.
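The way intensity-only measurements combine to yield a complex field can be sketched numerically. The patent's own eight-measurement scheme appears in the summary of invention section; the illustration below instead uses a standard four-intensity phase-shifting recovery (sample alone, reference alone, and interference with the reference at 0 and 90 degrees), which is an assumption chosen to show the principle, not the patent's exact procedure.

```python
import numpy as np

# Phase-shifting recovery of a complex scattered field S from intensity
# patterns, using a known (here real-valued) reference beam R.
rng = np.random.default_rng(0)
field = rng.standard_normal(16) + 1j * rng.standard_normal(16)  # unknown S
ref = 2.0                                                       # reference beam R

I_s = np.abs(field) ** 2              # sample beam only
I_r = np.full(16, ref ** 2)           # reference beam only
I_0 = np.abs(ref + field) ** 2        # interference, reference at 0 degrees
I_90 = np.abs(1j * ref + field) ** 2  # interference, reference at 90 degrees

re = (I_0 - I_r - I_s) / (2 * ref)    # Re(S) from the 0-degree pattern
im = (I_90 - I_r - I_s) / (2 * ref)   # Im(S) from the 90-degree pattern
recovered = re + 1j * im

print(np.allclose(recovered, field))  # True
```

Repeating such a recovery at each detector cell and each angular position of the sample holder yields the set of detected scattered fields consumed by the inverse scattering algorithms.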


Returning to FIG. 6A, note that a second set of beams, defined as stabilizing beams, is shown; these beams also are caused to interfere by the same mirrors and beam splitters as are traversed by the main beams. The function and justification for using these stabilizing beams is described in the summary of invention section. Note beam 513 is reflected by mirror 512 into beam splitter 514 to become beams 517 and 518. Beam 517 is reflected by mirror 522 to become beam 521 which then passes to beam splitter 528 where it is reflected into detector 542a and transmitted into detector 542b. Beam 518 passes through phase modulator 534 to mirror 536 where it is reflected and becomes beam 537. A fraction of beam 537 then passes through beam splitter 528 to detector 542a where it interferes with the reflected version of beam 521. A fraction of beam 537 is also reflected into detector 542b where it combines with a fraction of beam 521. Detector 542a typically would be a semiconductor pn diode or pin diode. The output of detectors 542a and 542b passes to locking amplifier system 546 which adjusts the mean phase shift imposed by mirror 536.


Locking amplifier system 546 functions in the following manner. The signal from detector 542a passes into the locking amplifier system to amplifier 547. The output of amplifier 547 is the input of multiplier 548. Multiplier 548 has a second input 550a. The product of the output of amplifier 547 and input 550a passes through low pass filter 549 and thence both to noninverting amplifier 550h and to inverting amplifier 550i. Switch 550j selects either the output of noninverting amplifier 550h or that of inverting amplifier 550i. The output of switch 550j passes to switch 550s and thence to summing amplifier 550g. Input 550a to multiplier 548 originates in oscillator 550f at frequency ω. The output of oscillator 550f is shown as signal 550l and is split into two parts: one part, signal 552n, is an input to summing amplifier 550g; the other part, signal 550p, is in turn split into two signals: signal 550c and signal 552q. Signal 550c is one input to switch 550b. Signal 552q is imposed on both inputs of multiplier 550e. The effect of multiplying a signal by itself is to produce sum and difference signals. The difference signal is at zero frequency while the sum frequency is at double the input frequency. The output of multiplier 550e passes through high pass filter 550d to switch 550b. High pass filter 550d rejects the zero frequency signal but passes the doubled frequency. An offset constant voltage 550m is also an input of summing amplifier 550g. Summing amplifier 550g sums signals 550m, 550k and 552n. The output of summing amplifier 550g is then sent to a phase modulator, which may be mirror driver 538 or phase modulator 534. If the output of summing amplifier 550g is sent to mirror driver 538, then this signal is first amplified by amplifier 507. If the output of summing amplifier 550g is sent to phase modulator 534, then this signal is first amplified by amplifier 505.


The phase locking circuits can function in several modes. In a first mode, heterodyning techniques are used. In the summary of the invention section, four different heterodyning methods are described. One of two heterodyne methods to lock the two component beams at the same phase is now described. The output of amplifier 547 is multiplied by a signal of frequency ω at multiplier 548 and the resulting signal is low pass filtered in low pass filter 549. The output is then passed through inverting amplifier 550i or noninverting amplifier 550h and thence through switches 550j and 550s to summing amplifier 550g and thence to phase modulator 534 or mirror driver 538. The feedback loop adjusts phase modulator 534 or mirror driver 538 to minimize the signal component at ω present in the signal output of amplifier 547.


One of two heterodyne methods to lock the two component beams in quadrature (i.e., at a 90 degree phase difference) is now described. The output of amplifier 547 is multiplied by a signal of frequency 2ω from high pass filter 550d. The signal from high pass filter 550d passes through switch 550b to multiplier 548 and thence to low pass filter 549. From low pass filter 549 the signal passes through either inverting amplifier 550i or noninverting amplifier 550h to switch 550j, thence to switch 550s, and thence as input 550k to summing amplifier 550g. The output from summing amplifier 550g then drives either phase modulator 534 or mirror driver 538 to maximize the signal component at 2ω present in the output of amplifier 547.
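The frequency doubling performed by multiplier 550e rests on the identity cos²(ωt) = ½ + ½cos(2ωt). A short numerical check (illustrative only, not from the patent software) confirms that squaring a cosine concentrates energy at zero frequency, which high pass filter 550d rejects, and at twice the input frequency, which it passes:

```python
import numpy as np

# Squaring a cosine (multiplier 550e with both inputs tied together)
# produces a zero-frequency term and a term at twice the input frequency:
#   cos^2(w t) = 1/2 + (1/2) cos(2 w t)
fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)
f0 = 50.0
doubled = np.cos(2 * np.pi * f0 * t) ** 2

spectrum = np.abs(np.fft.rfft(doubled)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
# Energy appears only at 0 Hz (removed by high pass filter 550d) and 2*f0.
peak = freqs[np.argmax(spectrum[1:]) + 1]   # ignore the DC bin
print(peak)   # 100.0
```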


The phase locking circuit can operate in a second mode in which no heterodyning techniques are used. In this mode, the output signal from detector 542a and the output signal from detector 542b are passed to differential amplifier 552q, whose output is proportional to the difference in voltage between these two input signals. The output of the differential amplifier is low pass filtered by low pass filter 550r to remove high frequency noise and passed to switch 550s and thence to summing amplifier 550g. The output of the summing amplifier is then sent to either mirror driver 538 or phase modulator 534. The advantage of this method is a reduction in parts, since no heterodyning circuits are required. Also significant is the elimination of phase modulators: only the mirror driver for zero frequency baseband stabilization adjustments is necessary. A possible disadvantage is an increased sensitivity to systematic errors.


Referring to FIG. 6B, a perspective drawing of the interferometer section of the optical microscope of FIG. 6A is shown. This drawing is useful for visualizing the spatial relationships between the main imaging beam (located in the center of the mirrors and beam splitters) and the stabilizing beams (located at the edges of the mirrors and beam splitters). The main beam passes through the sample and objective lenses. The stabilizing beams pass around and miss the objective lenses and sample. The main beam is collected by solid state camera k, but the stabilizing beams are collected by photodetectors m1, m2, n1, n2, p1 and p2. The normals to mirrors e2 and e3 and the normals to beam splitters e1 and e4 are mutually parallel. Main beam a1 enters the interferometer and is split by beam splitter e1 into beams a2 and a4. Beam a2 passes through shutter f and through phase shifter g to mirror e2. Beam a2 is reflected at mirror e2 as beam a3, which passes through wave front transformer L1 to beam splitter e4, where it combines with beam a8 and passes to the solid state camera k. Main beam a4 passes through shutter h to mirror e3, where it is reflected and becomes beam a5. Beam a5 passes through objective lens i and lens j and becomes beam a8. The sample is placed between objective lens i and objective lens j as shown in FIG. 6A.


The paths of the stabilization beams are now described. The stabilization beams comprise three parallel beams. These beams enter beam splitter e1 as respective beams b1, c1, and d1. They leave in the clockwise direction as respective beams b4, c4, and d4 and reach mirror e3. They leave mirror e3 as respective beams b5, c5, and d5 and reach beam splitter e4. Beams b1, c1, and d1 incident at beam splitter e1 also produce respective counterclockwise directed beams b2, c2 and d2, which leave beam splitter e1 and are reflected from mirror e2 to become respective beams b3, c3, and d3. Beams b3, c3 and d3 reach beam splitter e4, where they combine with respective beams b5, c5 and d5. This combination then exits beam splitter e4 as respective pairs b6, c6 and d6 to the right, and as respective pairs b7, c7, and d7 to the top. Beams b6, c6, and d6 are detected by respective detectors m1, n1 and p1. Beams b7, c7 and d7 are detected by detectors m2, n2, and p2, respectively. Outputs of m1 and m2 are combined to form a first feedback signal for mirror and beam splitter alignment. Outputs of n1 and n2 are combined to form a second feedback signal for mirror and beam splitter alignment. Outputs of p1 and p2 are combined to form a third feedback signal for mirror and beam splitter alignment. The stabilization scheme shown in FIG. 6B is the nonheterodyne method described in the brief summary and object of the invention section. Clearly, the heterodyne methods may also be used.


The function of the feedback signals is now described. The signal outputs of detectors p1 and p2 are respectively q1 and q2, and these pass into differential amplifier r1. The output s1 of differential amplifier r1 passes to linear actuator t1, which moves mirror e2. Not shown is a solid frame on which linear actuators t1, t2 and t3 are mounted, as well as components e1, e3, e4, i, j, m1, n1, p1, m2, n2, p2, and h. Linear actuator t1 provides motion parallel to the normal of mirror e2. Detectors m1 and m2 produce respective signals u1 and u2 that pass to differential amplifier r2, whose output, signal s2, drives linear actuator t2 at mirror e2. Detectors n1 and n2 produce respective signals v1 and v2 that pass to differential amplifier r3 and thence as signal s3 to linear actuator t3 on mirror e2. Once mirrors e2 and e3 and beam splitters e1 and e4 are aligned, the stabilizing feedback loops just described will maintain alignment.



FIGS. 28A and 28B show an example of the application of inverse scattering to medical imaging through the use of computer simulation. FIGS. 28A and 28B also illustrate the powerful impact of a large inverse scattering image. FIG. 28A is a photograph of a cross section of a human through the abdomen, such as could appear in an anatomy atlas. The image was actually made on a magnetic resonance clinical scanner. This image is 200 by 200 pixels, each pixel being 1/4 wavelength square. It was used as the starting image in the process of creating synthetic scattering data. The pixel values in this image were assigned to a set of speed of sound values in a range that is typical for soft tissue. This range is typically plus or minus 5 percent of the speed of sound of water. Using this image of speed of sound, the forward scattering algorithm then computed the scattered field at a set of detectors on the perimeter of the image for the case of incident plane waves for 200 source directions equally spaced in angle around 360 degrees. The detectors enclosed the cross section on all sides and numbered 199×4=796. This set of synthetic scattering data was used to compute the inverse scattering image of FIG. 28B.



FIG. 28B shows the inverse scattering image computed from the synthetic data generated from the image in FIG. 28A. Note that the quantitative values in each pixel are preserved and that the spatial resolution is only slightly degraded.



FIGS. 29A and 29B show that the quality of inverse scattering images of a test object computed from real data collected in the laboratory is excellent. The laboratory data was collected using microwave energy at 10 GHz. FIG. 29A is the image of the cross section of a test object that is a thin walled plastic pipe with an outside diameter of 6 cm and a wall thickness of 4 mm. This pipe was illuminated by microwaves at 10 GHz from a horn antenna. A second, receiving horn antenna was rotated about the sample through 120 degrees in both the clockwise and counterclockwise directions from the forward propagation direction to pick up the scattered field. The receiving antenna rotates around an axis that is parallel to the axis of the pipe. The transmitting antenna and the receiving antenna lie in a common plane that is perpendicular to the axis of the pipe. FIG. 29B is the inverse scattering image made from the data collected by the receiving antenna. The thickness of the pipe wall has been broadened because the true pipe wall thickness is less than one half wavelength at 10 GHz. The reconstructed wall thickness is about 0.6 wavelength.



FIGS. 29C, 29D and 29E demonstrate through computer simulation that three different and independent acoustic parameters can be imaged accurately by inverse scattering. FIGS. 29C, 29D and 29E show respective images of compressibility, density and acoustic absorption. The images are 50 by 50 pixels, each pixel being ¼ wavelength square, and were reconstructed from frequencies in the range from fmax to fmax/50. The sources were 8 plane waves equally distributed around 360 degrees. The receivers surrounded the object and were placed on the perimeter of the image space. In FIG. 29C an image of true compressibility in the test object is on the right, while the reconstructed compressibility is shown on the left. The compressibility scattering potential component has a range of values spanning 0.2 for black, 0.1 for gray and 0 for white. The compressibility region is square in shape. Note that little cross coupling from density and absorption is present. FIG. 29D shows on the right an image of true density in the test object, while the reconstructed density is shown on the left. The density scattering potential component has a range of values spanning 0 for black, −0.025 for gray and −0.05 for white. The density region is circular in shape. Note that little cross coupling from absorption is present and a small amount from compressibility is present. FIG. 29E shows on the right an image of true acoustic absorption in the test object, while the reconstructed acoustic absorption is shown on the left. The attenuation scattering potential component has a range of values spanning 0 for white, 0.01 for gray and 0.02 for black. The absorption region is triangular in shape. Note that little cross coupling from density is present, but a little cross coupling from compressibility is present.



FIGS. 30A, 30B and 30C describe the application of inverse scattering to the case where sources and detectors are limited to a single line (as in the case of 2-D scattering) or a single plane (as in the case of 3-D scattering). FIG. 30A shows the true scattering potential values for a test object. In this example, the scattering potential variation corresponds to different dielectric constants. The background dielectric constant corresponds to earth with a dielectric constant of 18 and extends to infinity on both sides and below the image space. Air, with a dielectric constant of unity, lies above and next to the top of the image. The small black circle has a dielectric constant of 21.6; the small white circle on the right has a dielectric constant of 16.2; and the larger gray circle in the center has a dielectric constant of 19.8. The pixels are ¼ wavelength on a side at the highest frequency. A set of 32 frequencies uniformly distributed between the highest and 1/32 of the highest is used. The image space is 64 pixels horizontally and 32 pixels vertically. 64 detectors and 5 sources are uniformly placed across the top of the image space. The scattering potential values, relative to the background dielectric constant of 18 (scattering potential = ε/εbackground − 1), span the range of 0.1 for black to −0.1 for white. A dielectric constant of 18 therefore corresponds to a scattering potential of zero.
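The scattering potential definition given above can be checked directly against the dielectric constants listed for FIG. 30A. A small illustrative computation (not part of the patent software; the function name is hypothetical):

```python
def scattering_potential(eps, eps_background=18.0):
    """Scattering potential relative to the background dielectric
    constant, as defined in the text: gamma = eps/eps_background - 1."""
    return eps / eps_background - 1.0

# Dielectric constants listed for FIG. 30A (background earth = 18):
for eps in (18.0, 19.8, 16.2, 21.6):
    print(eps, round(scattering_potential(eps), 3))
```

For example, the gray circle (ε = 19.8) gives γ = 0.1 and the white circle (ε = 16.2) gives γ = −0.1, matching the stated grayscale span.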



FIG. 30B shows the inverse scattering reconstruction generated by imaging data synthesized from the model in FIG. 30A (please note that the image in this figure was accidentally video reversed). The synthesized data was inverted using a layered Green's function. Note the excellent spatial resolution and excellent quantitative accuracy of the reconstruction.



FIG. 30C shows an image generated by using conventional radar beam forming methods. This conventional method places echoes at a given range and angle by processing the transmitted signals into transmit beams and the received signals into receiver beams. This conventional method is mathematically equivalent to seismic migration imaging. Such methods are linear, not nonlinear as is inverse scattering. As such, linear methods do not accurately model multiple scattering, reverberation from the air-soil interface, and refraction. Such images are not quantitative. Note the inferior quality of this image relative to the inverse scattering image in FIG. 30B.


Inverse Scattering Scenarios



FIG. 1-A shows a typical imaging scenario, such as might be encountered in geophysical applications. Half space 1300 represents air, with a speed of sound equal to about 344 meters/sec. The image space 1312 is divided up into “pixels” (1320), which cover the region to be imaged. The object to be imaged, 1310, is located within this pixel grid. The term “pixel” is usually associated with two dimensions; however, we apply the same discretization procedure in 3-D (voxels), and higher dimensions (or lower) are certainly possible. For the sake of brevity and clarity, we refer throughout this patent to “pixels”, even though their n-dimensional counterpart is meant.


The layers 1302, 1304, 1306, and 1316 are arbitrary both in vertical dimension, as well as speed of sound associated with them. The sources 1314, and receivers 1298, are in both boreholes 1308.



FIG. 1-B shows a typical scenario such as might be encountered in nondestructive evaluation or in geophysical imaging. The source positions 1314 are located along the top of the image space at the interface of half space 1340 (sea-water) and layer 1342. Layer 1344 is basement sediment, and the image space 1312 is divided up into pixels 1320. The detectors 1348 are also located at the sea-water interface. Notice that the sources and receivers in this case, and the previous case, are juxtaposed to the image space 1312, which contains the object to be imaged, 1350.



FIG. 1-C shows the reflection scenario, as in FIG. 1-B, with the twist that the sources and receivers (1314 and 1298, respectively) are located at some finite distance from the image space. This remote source/receiver geometry is also implementable with the cross-borehole geometry of FIG. 1A. The layers 1302 and the target to be imaged 1310 are as before in FIG. 1A. The pixels 1320 are as before. The speeds of sound are such as to be appropriate to the geophysical case.


Software Description


Having described in the previous section some examples which illustrate mathematical and physical models that may be used to model scattered wavefield energy and to process data so as to reconstruct an image of material parameters using inverse scattering theory, reference is next made to FIG. 8.


This figure schematically illustrates a flow diagram for programming CPU 118, or the combination of CPU 118 and array processor 120 or parallel processor 146, in order to enable CPU 118 or said combination to control the electronic system of FIGS. 4 and 5 and the apparatus of FIGS. 1, 2, and 3 in accordance with one presently preferred method of the invention. It should also be understood that special purpose computational hardware may be constructed to incorporate the flow diagrams of FIGS. 8-26, which are also explicitly encoded in Appendices A-J. These flow diagrams of FIGS. 8-26 are merely illustrative of one presently preferred method for programming CPU 118 using the scalar Helmholtz wave equation or vector wave equation or similar wave equation for modeling scattered wavefield energy as described in Example 1 in the Summary of the Invention section. Any suitable computer language which is compatible with the type of CPU 118 that is used in the electronic system can be used to implement the program illustrated in the diagrams and flowcharts. As shown in FIG. 8, CPU 118 is programmed so that it will first cause the transmitter multiplexer 132 to sequence in turn each transducer or antenna so that each transmitter or antenna will in turn propagate a signal at a selected frequency. In the case of FIG. 4A, for each transmitter position sequential multiple frequencies may be transmitted by changing the frequency of the oscillator 128. On the other hand, in the system of FIG. 5A multiple frequency signals are transmitted by selecting and generating a type of waveform which inherently includes multiple frequencies. As schematically illustrated in Step 202, for each transmitter from which a signal is sent, CPU 118 next causes a receiver multiplexer 134 to sequence each receiver array 108-115 so as to detect the scattered wavefield energy at each position around the ring of transducers 70.
The source receiver geometry represented by Step 202 is not restricted to the ring of transducers 70. One alternative source receiver geometry is depicted in FIG. 1A, which depicts inverse scattering in a cross-borehole geometry relevant to geophysical imaging. FIG. 1A shows four layers representing sedimentation sandwiched between two half spaces which represent, respectively, the ocean and the basement sediment. The image space is depicted and an object to be imaged, 1310, is shown. The sources and receivers are located down two boreholes which exist on either side of the image space. The receiver arrays 1308 detect scattered wavefield energy for each frequency that is transmitted. As described previously, the electronic system then amplifies the detected signals and processes them in order to develop the electronic analog of the real and imaginary components of each signal and in order to digitize each detected signal.


Once the scattered wavefield energy has been detected, processed, and digitized, CPU 118 next determines the incident field fincωφ, as indicated at step 204. The incident field is determined from the wavefield energy [i.e., frequency and amplitude components of the transmitted waveform] and the transducer geometry from which the incident field is transmitted. The process of determining this incident field may involve direct measurement of the incident field or calculation of the incident field using the appropriate Green's function, whether it be Biot, elastic, electromagnetic, scalar, vector, free space or some combination or permutation of these. The incident field is determined at each pixel in the image space for each source position φ=1, . . . , Φ and for each frequency ω, which varies from 1 to Ω. In step 206, the scattered wavefield is measured, recorded and stored at the wavefield transducer receivers for each ω varying from 1 to Ω, for each source position varying from 1 to Φ, and at each wavefield transducer receiver position. As described in step 208, the initial estimate for the scattering potential is input directly into CPU 118 or estimated within CPU 118 from data input to it from some outside source. The initial estimate for the scattering potential can be set at either 0 or at an average value which is derived from the average material parameters estimated a priori for the object. When applicable, a lower resolution image of γ, such as produced by a more approximate method such as the time of flight method described in U.S. Pat. No. 4,105,018 (Johnson), may also be used to advantage (in the acoustic case) to help minimize the number of iterations. Similar approximations for other forms of wavefield energy may be used to advantage to help minimize the number of iterations.
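As one concrete illustration of step 204, the incident field for a unit-amplitude plane-wave source can be calculated directly at each pixel of the image space. The sketch below is illustrative only; the grid size, spacing, and function name are assumptions, not the patent's code:

```python
import numpy as np

def incident_plane_wave(nx, ny, dx, wavelength, angle):
    """Unit-amplitude plane-wave incident field exp(i k.r) sampled on an
    nx-by-ny pixel grid with spacing dx; 'angle' sets the propagation
    direction.  A minimal stand-in for the calculation in step 204."""
    k = 2 * np.pi / wavelength
    kx, ky = k * np.cos(angle), k * np.sin(angle)
    x = dx * np.arange(nx)
    y = dx * np.arange(ny)
    X, Y = np.meshgrid(x, y, indexing="ij")
    return np.exp(1j * (kx * X + ky * Y))

# One such field is needed per source position phi and per frequency omega;
# quarter-wavelength pixels (dx = wavelength/4) as in the examples above.
f_inc = incident_plane_wave(64, 64, dx=0.25, wavelength=1.0, angle=0.0)
print(f_inc.shape, abs(f_inc[0, 0]))
```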


CPU 118 next determines in step 210 an estimate of the internal fields for each pixel in the image space, for each frequency ω, and for each source position φ. This estimate is derived from the incident field and the estimated scattering potential. This step corresponds to step 222 in FIG. 9A as well as to step 210 in FIG. 8. FIG. 7 is also relevant here. As is indicated in step 360 of FIG. 7, the present scattering potential estimate is used simultaneously in several independent processors, so that the forward problems are farmed out to these independent processors, which simultaneously solve the forward problems for each frequency and for each source position of the incident wavefield. These independent processors are labeled 368-380 in FIG. 7. It may be that there are not enough parallel processors available to farm out each incident field for each frequency and source position. However, effort should be made to delegate and solve as many forward problems independently and in parallel as possible. Ovals 362, 364 and 366 indicate that the forward problems for all possible source positions and a given fixed frequency can be calculated in one processor while, simultaneously, the forward problems for a second frequency are calculated in a second parallel independent processor. There is great flexibility in this approach and this example is not meant to be restrictive; for example, it is not difficult to see that the multiple frequency and source position parts of the Jacobian calculation can also be done in parallel in the same manner.
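The farming-out of independent forward problems described for FIG. 7 can be sketched with a worker pool. The following is an illustrative skeleton only: `solve_forward` is a hypothetical placeholder for the BCG/BiSTAB solve, and a thread pool stands in for the independent processors 368-380 (CPU-bound solvers would use separate processes or machines):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def solve_forward(task):
    """Placeholder for one forward solve at frequency index w and source
    position p; the real solver would run BCG or BiSTAB (FIGS. 11A/11B)."""
    w, p = task
    return (w, p, f"field[{w},{p}]")   # stands in for the total field

def solve_all(n_freq, n_src, workers=4):
    """Farm the independent (omega, phi) forward problems out to a pool
    of workers and collect the results, as central processor 382 does."""
    tasks = list(product(range(n_freq), range(n_src)))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(solve_forward, tasks))
    return {(w, p): f for w, p, f in results}

fields = solve_all(n_freq=2, n_src=3)
print(len(fields))   # 6 independent forward problems
```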


The total fields are then stored in central processor 382 of FIG. 7. For a given and fixed frequency ω and fixed source position φ, the forward problem consists of using equation 1 or 17 of the Summary of the Invention section to determine the total field fωφ given the scattering potential γ, the incident field fincωφ, and the appropriate Green's function. This Green's function could be the scalar Green's function, which is essentially the Hankel function of the second kind of zero order; the elastic Green's function, which is derived from the scalar Helmholtz Green's function; the Biot Green's function; or the electromagnetic vector or scalar Green's function; as well as a layered Green's function derived from one of the preceding Green's functions in the manner described in the Summary of the Invention section. The process whereby this is done is elucidated in FIGS. 11A-1, 11A-2.


Before applying this algorithm, it is necessary to discretize equation 1 or 17 of the Summary of the Invention section so that it may be input into the CPU. This process is described in detail in the Summary of the Invention. The subroutine that solves the forward problem is given explicitly in Appendix B for the two-dimensional case and in Appendix F for the three-dimensional forward problem solution. Our algorithm uses the Whittaker sinc function, which is defined on page 18, equation E of the Summary of the Invention. We use the sinc function because of its optimal approximating qualities. However, the infinite support of the sinc function precludes its use in certain situations; more specifically, in the presence of layering, one-dimensional “hat” functions are required. This is outlined in detail after eqn (41) in the Summary of the Invention. The number of frequencies that are used in the inverse scattering problem is a function of the source receiver geometry, among other things. FIG. 1B shows the reflection-only source-receiver geometry prevalent in geophysics. In this scenario it is imperative to use multiple frequencies, which distinguishes this case from FIG. 1A where, because the source receiver geometry allows transmission mode imaging, only a small number of frequencies may be used. An example of such code using transmission mode imaging is given in Appendix H.
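For reference, the Whittaker sinc function has the interpolating property that it equals one at its own grid node and zero at every other node, which is part of what makes it attractive as an approximating basis. A brief numerical illustration (numpy's `np.sinc` uses the same normalized definition sin(πx)/(πx)):

```python
import numpy as np

# The Whittaker sinc basis function sinc(x) = sin(pi x)/(pi x).
# Interpolating property: 1 at its own grid node, 0 at every other node.
nodes = np.arange(-5, 6)
values = np.sinc(nodes)
print(values)   # ~1 at x = 0, ~0 at the other integer nodes
```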


All forward problems for all possible frequencies and all possible source positions are solved with one call of the subroutine in Appendix B for the two-dimensional case, or Appendix F for the three-dimensional case, whichever is relevant. FIG. 10 gives in detail the procedure for solving for the internal fields at all possible source positions and all possible frequencies. From step 222 in FIG. 9A we enter step 340 of FIG. 10. In step 340 of FIG. 10, all of the files that contain all of the Green's functions for all possible frequencies and the files that contain the incident fields for all possible frequencies and all possible source positions are rewound. In step 342 of FIG. 10, ω is set equal to 1, i.e., we indicate that we are about to calculate the forward problem for the first frequency. In step 344 of FIG. 10, the Green's function for this particular frequency is read into memory. In step 346 of FIG. 10, the source position index φ is set equal to 1, indicating that we are solving the forward problem for the first source position. In step 348, the incident field for this particular φ, i.e., source position, and this particular ω, i.e., frequency, is read into memory. In step 350, the forward problem for this particular frequency and this particular source position is solved using either biconjugate gradients or BiSTAB. This process is detailed in FIGS. 11A-1/11A-2 for the biconjugate gradient or BCG algorithm and FIGS. 11B-1/11B-2 for the BiSTAB or biconjugate gradient stabilized algorithm. In FIG. 11A-1, in step 450 which follows step 350 in FIG. 10, the predetermined maximum number of biconjugate steps and the tolerance are read into memory, and the forward problem, which in discrete form reads (I−Gγ)f=finc, is solved. For purposes of generality, we consider the BCG or biconjugate gradient algorithm applied to the system Ax=y, where A is a square matrix and x and y are vectors of suitable dimension.
Step 452 consists of reading in an initial guess x0 and calculating an initial residual, which is obtained by applying A to the initial guess x0 and subtracting the result from the right-hand side y. In step 454, the initial search direction p0, bi-search direction q, and bi-residual s are calculated and stored. In step 456, we set n equal to 0. In step 458, we calculate the step length αn by taking the inner product of the two vectors s and r, the bi-residual and residual respectively, and dividing by a quantity which is the inner product of two vectors. These two vectors are 1) q, the bi-search direction, and 2) the result of the application of the operator A to the present search direction p. In step 460 of FIG. 11A-1, the guess for x is updated as per the equation that is shown there. In step 462, the residual r is updated as per the formula given, and the bi-residual s is updated as per the formula given. (H indicates Hermitian conjugate.) In particular, the bi-residual is given by the previous bi-residual sn minus a quantity which is given as α*n, the complex conjugate of αn, multiplied by a vector which is the result of the operation of the Hermitian conjugate of the operator A applied to the vector q, the bi-search direction. This is carried out by a call to FIG. 13, which we discuss below. In step 464, we check to see if the stop condition has been satisfied, i.e., whether the present updated residual is less than or equal to ε (which is given a priori) multiplied by the initial residual in absolute value. Step 466 consists of the determination of β, the direction parameter, which is given as the ratio of two numbers. The numerator of this ratio is the inner product of the new updated bi-residual with the updated residual, and the denominator is the inner product of the previous residual with the previous bi-residual, which has already been calculated in box 458.
In step 468, the calculated β is used to update and obtain a new bi-search direction and a new search direction, given by the formulas shown. In step 470, n is incremented by 1 and control is taken back to step 458. In step 458, a new step length is calculated, etc. This loop is repeated until the condition in step 464 is satisfied, at which time control is returned to step 350 in FIG. 10. The forward problem for this particular frequency and this particular source position has now been solved and the result, i.e. the internal field, is now stored.
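The BCG iteration of steps 450-470 can be summarized in a short sketch. The code below is an illustrative reimplementation written from the step descriptions above (using the conjugated inner product and taking the bi-residual initially equal to the residual, one common choice); it is not the Appendix B code. It is applied here to a small dense Hermitian positive-definite system, for which BCG reduces to ordinary conjugate gradients:

```python
import numpy as np

def bcg(A, y, x0, tol=1e-10, max_steps=200):
    """Biconjugate-gradient solve of A x = y, following steps 450-470:
    residual r, bi-residual s, search direction p, bi-search direction q."""
    x = x0.copy()
    r = y - A @ x                      # step 452: initial residual
    s = r.copy()                       # step 454: bi-residual (one common choice)
    p, q = r.copy(), s.copy()          # search and bi-search directions
    r0norm = np.linalg.norm(r)
    rho = np.vdot(s, r)                # <s, r>
    for _ in range(max_steps):
        Ap = A @ p
        alpha = rho / np.vdot(q, Ap)   # step 458: step length
        x = x + alpha * p              # step 460: update the guess
        r = r - alpha * Ap             # step 462: update residual ...
        s = s - np.conj(alpha) * (A.conj().T @ q)   # ... and bi-residual (A^H q)
        if np.linalg.norm(r) <= tol * r0norm:       # step 464: stop condition
            break
        rho_new = np.vdot(s, r)
        beta = rho_new / rho           # step 466: direction parameter
        p = r + beta * p               # step 468: new search direction
        q = s + np.conj(beta) * q      #           new bi-search direction
        rho = rho_new
    return x

rng = np.random.default_rng(1)
B = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
A = B.conj().T @ B + 8 * np.eye(8)     # Hermitian positive definite test system
y = rng.normal(size=8) + 1j * rng.normal(size=8)
x = bcg(A, y, np.zeros(8, dtype=complex))
print(np.linalg.norm(A @ x - y))       # small residual
```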


In step 352, back in FIG. 10, it is determined if φ is equal to Φ, i.e. have we computed all the forward problems for this particular frequency for all possible source positions? If not, then φ is incremented by 1 in step 354 and control is returned to step 348 of FIG. 10. If so, we have calculated the forward problem for all possible source positions φ and control is transferred to step 356, where it is determined if ω is equal to Ω, i.e. have we calculated the forward problem for all possible frequencies ω? If not, ω is incremented by 1 in box 357 and control is returned to step 344. If so, we have completed the computation of the internal fields for all possible source positions and all possible frequencies and have stored the results, i.e. we have finished the subroutine in Appendix B or Appendix F for the two-dimensional or three-dimensional problem, respectively. Control is now transferred to the main program, which is in Appendix A for the two-dimensional case and in Appendix E for the three-dimensional case. This corresponds to step 224 in FIG. 9A. This also corresponds to step 212 of FIG. 8. FIGS. 9A, 9B, 9C, 9D together comprise an expanded version of the inverse scattering algorithm in its entirety, which is given in compressed form by FIG. 8. Before proceeding further to describe in detail step 212 of FIG. 8, i.e. also step 226 of FIG. 9A, we mention that the BCG algorithm just described may be replaced by the BiSTAB (biconjugate gradient stabilized) algorithm of FIGS. 11B-1/11B-2. Furthermore, in the calculation of both the BiSTAB algorithm and the biconjugate gradient algorithm, the implementation of the operation of the operator A acting on the vector x is carried out in a very specific manner which is described in FIG. 11C. Furthermore, in FIG. 13, the Hermitian conjugate of the operator A, denoted AH, is applied to vectors in its domain. Specifically, in step 400 of FIG. 11C, we come in from any step in FIG.
11B-1/11B-2 or 11A-1/11A-2, respectively the biconjugate gradient stabilized algorithm or the biconjugate gradient algorithm, which requires the calculation of the operation of the operator A acting on the vector “x”. In step 402, the pointwise product of the scattering potential γ and the present estimate of the internal field at frequency ω and source position φ is formed and stored. In step 404, the Fast Fourier Transform (FFT) of this pointwise product is taken. In step 406, the result of the Fast Fourier Transform (FFT) taken in step 404 is pointwise multiplied by Gω, which is the Fast Fourier Transform (FFT) of the Green's function at frequency ω, which has been stored and retrieved. In step 408, the inverse Fast Fourier Transform (FFT) is taken of the pointwise product that was formed in the previous step. In step 410 this result is stored in vector y. In step 412 the difference, taken pointwise, of fωφ, the present value for the internal field at frequency ω and source position φ, and the constructed vector y is formed. Step 414 returns control to either the BCG or the BiSTAB algorithm, whichever has been called by step 350 of FIG. 10. The application of the operator which is described in FIG. 11C is carried out multiple times for a given call of the biconjugate gradient or the BiSTAB algorithm; it is imperative, therefore, that this step be computed extremely quickly. This explains the presence of the fast Fourier transforms to compute the convolutions that are required. We refer to this operator, written symbolically as (I−Gγ), as the Lippmann-Schwinger operator. FIG. 13 calculates the adjoint or Hermitian conjugate of the given Lippmann-Schwinger operator and applies it to a given vector. Step 580 of FIG. 13 takes control from the appropriate step in the biconjugate gradient algorithm. It is important to note that the BiSTAB algorithm does not need to compute the adjoint of the Lippmann-Schwinger operator. Therefore, FIG. 13 is relevant only to FIGS.
11A-1 and 11A-2, i.e. the biconjugate gradient algorithm. In step 580 of FIG. 13 the Fast Fourier Transform (FFT) of the field f, to which the adjoint of the Lippmann-Schwinger operator is being applied, is taken. In step 582 of FIG. 13, the Fast Fourier Transform (FFT) computed above is pointwise multiplied by G*ω, the complex conjugate of the Fast Fourier Transform (FFT) of the Green's function at frequency ω. In step 584, the inverse Fast Fourier Transform (FFT) of this pointwise product is taken and stored in vector y. In step 586, the pointwise product of the complex conjugate of γ, the present estimate of the scattering potential, is taken with the vector y; the result is stored in vector y. In step 590, the difference is formed between the original field, to which the adjoint of the Lippmann-Schwinger operator is being applied, and the vector y just calculated. Now (I−Gγ)Hfωφ has been calculated and control is returned to box 462 of FIG. 11A-1/11A-2, the biconjugate gradient algorithm, where use is made of the newly created vector AHq. Next the stop condition is checked at step 464.
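The FFT-based application of the Lippmann-Schwinger operator (FIG. 11C) and of its adjoint (FIG. 13) can be sketched as follows. This is an illustrative reconstruction from the step descriptions, not the appendix code; the function names are assumptions. It also verifies the defining adjoint relation ⟨Au, v⟩ = ⟨u, AHv⟩ on random data:

```python
import numpy as np

def apply_LS(f, gamma, G_hat):
    """(I - G gamma) f via FFTs, as in FIG. 11C: pointwise product with
    the scattering potential, FFT, pointwise multiply by the Green's
    function spectrum G_hat, inverse FFT, subtract from f."""
    return f - np.fft.ifft2(G_hat * np.fft.fft2(gamma * f))

def apply_LS_adjoint(f, gamma, G_hat):
    """(I - G gamma)^H f as in FIG. 13: FFT, pointwise multiply by
    conj(G_hat), inverse FFT, pointwise multiply by conj(gamma),
    subtract from f."""
    return f - np.conj(gamma) * np.fft.ifft2(np.conj(G_hat) * np.fft.fft2(f))

# Verify the adjoint relation <A u, v> = <u, A^H v> on random data.
rng = np.random.default_rng(2)
shape = (16, 16)
gamma = rng.normal(size=shape) + 1j * rng.normal(size=shape)
G_hat = rng.normal(size=shape) + 1j * rng.normal(size=shape)
u = rng.normal(size=shape) + 1j * rng.normal(size=shape)
v = rng.normal(size=shape) + 1j * rng.normal(size=shape)

lhs = np.vdot(v, apply_LS(u, gamma, G_hat))         # <A u, v>
rhs = np.vdot(apply_LS_adjoint(v, gamma, G_hat), u) # <u, A^H v>
print(abs(lhs - rhs))   # ~0
```

The FFTs implement the convolution with the Green's function in O(N log N) operations, which is why this step can be computed quickly enough to sit inside the inner solver loop.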


We now return to step 224 of FIG. 9A, i.e. step 212 of FIG. 8, where we determine the scattered field at the detector locations for all possible source positions, i.e. incident fields, and all possible frequencies ω. This process is carried out in FIG. 26, to which we now turn.


Step 1150 of FIG. 26 transfers control to step 1152 of FIG. 26, where ω is set equal to 1. Control is then transferred to step 1154, where the Green's function for this particular ω is read into memory. Control is then transferred to step 1156, where φ is set equal to 1. Control is then transferred to step 1158, where the internal field estimate for this source and this frequency is read into memory. Control is then transferred to step 1160, where the incident field is subtracted from the estimated total internal field. In step 1162 the truncation or propagation operator shown in FIG. 21 is performed. When the truncation or propagation has been performed, control is transferred back to FIG. 26, and in step 1164 the calculated scattered field is stored. In step 1166 it is determined if φ is equal to Φ; if it is not, then φ is incremented by one in step 1170, and control is returned to step 1158 of FIG. 26. If φ is equal to Φ, then control is transferred to step 1172, where it is determined if ω is equal to Ω. If it is not, ω is incremented by one in step 1174, and control is returned to step 1154 of FIG. 26. If ω is equal to Ω, then control is returned to step 226 of FIG. 9A, the inverse scattering subroutine.
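The double loop of FIG. 26 amounts to subtracting the incident field from each estimated total field and passing the result through the truncation/propagation operator. A minimal sketch, assuming the fields are held in dictionaries keyed by (ω, φ) and that a propagate callable plays the role of FIG. 21 (the data layout and names are assumptions for illustration only):

```python
import numpy as np

def scattered_fields_at_detectors(total_fields, incident_fields, propagate):
    """Sketch of the FIG. 26 loop: for every frequency omega and source
    position phi, subtract the incident field from the estimated total
    internal field (steps 1158-1160), then apply the truncation or
    propagation operator of FIG. 21 (steps 1162-1164)."""
    d = {}
    for key, f_total in total_fields.items():
        f_scattered = f_total - incident_fields[key]  # scattered = total - incident
        d[key] = propagate(f_scattered)               # to the detector positions
    return d
```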


Now consider FIG. 21, which was called by FIG. 26 above.


Step 1118 of FIG. 21 transfers control to step 1122 where the input is truncated to the border pixels. Control is passed to step 1124 which determines if the receivers are distant from the image space as in FIG. 1C or whether the receivers are juxtaposed to the image space as in FIG. 1A or 1B. If the detectors are in fact juxtaposed to the image space, then control is transferred to step 1126 and from there to the calling subroutine. If the detectors are not juxtaposed to the image space, the fields are propagated to the detector positions. The details of this propagation are given in the “EXTENSION OF THE METHOD TO REMOTE RECEIVERS WITH ARBITRARY CHARACTERISTICS” section of this patent. Actually, there are several methods which could be used to accomplish this. For example, an application of Green's theorem is one possibility. The beauty and utility of the method shown in the “EXTENSION OF THE METHOD TO REMOTE RECEIVERS WITH ARBITRARY CHARACTERISTICS” section however, is that the normal derivatives of the Green's function, or the fields themselves, do not need to be known or estimated.


The input to FIG. 21 is a vector of length NxNy. FIG. 21 represents the subroutine that is called either by the Jacobian calculation or immediately after the calculation of the internal fields. In the first case, the Jacobian calculation, the input consists of box 720 of FIG. 20B. In the second case the input is the scattered field calculated for a particular frequency ω and a particular source position φ. The purpose of FIG. 21 is to propagate the input (the scattered field in the image space) to the detector positions. In step 1124 it is determined whether the receivers are distant. If the detector positions are located at some distance from the image space, the subroutine passes control from step 1124 to step 1128.


In step 1128, the matrix P, constructed earlier, is used to propagate the field to the remote detectors. In step 1130 of FIG. 21, control is returned to the program which called the subroutine. The matrix P will change when correlation is present; the fundamental form of the propagation algorithm, however, remains unchanged.
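The two operations of FIG. 21 for remote receivers, truncation to the border pixels followed by multiplication by the propagation matrix P, can be sketched directly. The construction of P is described elsewhere in the text; here it is simply assumed given, and the grid size is illustrative.

```python
import numpy as np

def propagate_to_detectors(field, border_mask, P):
    """Sketch of FIG. 21 for remote receivers: truncate the image-space
    field to its border pixels (step 1122), then multiply by the
    precomputed propagation matrix P (step 1128)."""
    border_values = field[border_mask]  # truncation to the border pixels
    return P @ border_values            # propagation to the detector positions
```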


These scattered fields are stored in vector dωφ. Control is then transferred to step 228 in FIG. 9A, which subtracts the measured scattered field from the calculated scattered field and stores the result in one vector rωφ, the residual vector for frequency ω and source position φ. This step is repeated until all possible frequencies and all possible source positions have been accounted for, whereupon the residual vectors rωφ for all possible ω's and φ's are stored in one large residual vector, shown in FIG. 9A as step 228. We now apply the Hermitian conjugate of the Jacobian to the residual vector r just calculated in order to establish the search direction Pn with n=0, i.e. the initial search direction. The search direction is given explicitly, in step 238, by the negative of the Hermitian conjugate of the Jacobian acting upon the residual vector r.


There are in effect at least two possibilities regarding the use of the Jacobian: either the full Jacobian or an approximation to it may be used. The choice rests with the programmer, who bases the decision on the magnitude of the γ which is to be solved for in the full inverse scattering problem. If γ is large, the exact Jacobian must be used. On the other hand, if γ is reasonably small, although still too large for the Born or Rytov approximation, the approximation to the full exact Jacobian shown in FIG. 9A, step 236, may be used. The application of the exact or approximate Jacobian is discussed later.


Now we step through FIGS. 9A to 9D to get an overall view of how the algorithm works.


Step 238 computes the initial search direction P(0). Step 240 of FIG. 9A zeros the initial guess for δγ, which is the Gauss-Newton (or Ribiere-Polak, or other) update to γ, the scattering potential; γ itself can initially be set to 0. A large portion of the rest of FIGS. 9A through 9D is concerned with the calculation of δγ, which is, of course, also used in FIG. 8.


Step 212 of FIG. 8 has estimated the scattered field at the detector positions. Step 214 of FIG. 8 now determines whether the residual (the magnitude squared of the difference between the measured and calculated scattered fields, over all source positions and frequencies) is less than some predetermined tolerance ε. If the condition is satisfied, control is transferred to Step 216, where the reconstructed image of the scattering potential is displayed or stored. If the condition is not met, then a new estimate of the scattering potential γ is calculated, in a manner made explicit below, from the updated internal field calculated in Step 210 of FIG. 8, the most recent estimate of the scattering potential, the estimated scattered field at the detectors calculated in Step 212 of FIG. 8, and the measured scattered field at the detectors. This updated scattering potential estimate is derived in Step 218 and is then used in Step 210 of FIG. 8 to calculate a new update of the estimate of the internal fields, for each frequency ω and each source position φ. Then in Step 212 of FIG. 8 a new estimated scattered field at the detectors is calculated by means of Green's theorem, as shown in FIG. 26. This new estimate of the scattered field is then used in Step 214, as before, to determine whether the magnitude of the residual between the measured and calculated scattered fields is less than ε, the predetermined tolerance. If the tolerance is exceeded, the loop just described is repeated until either the tolerance is satisfied or the maximum number of iterations of this loop, predetermined by the operator, has been exceeded.
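The outer loop of FIG. 8 can be sketched as a short driver routine. The callables below are stand-ins for the subroutines described in the text: forward() represents the internal-field solve plus the Green's theorem propagation (Steps 210-212), and solve_normal_eqs() represents the conjugate gradient solution for the Gauss-Newton correction (Steps 244-314); both names are hypothetical.

```python
import numpy as np

def gauss_newton(gamma0, forward, solve_normal_eqs, d_measured, eps, max_iters):
    """Skeleton of the FIG. 8 outer loop (a sketch, not the patented code):
    iterate scattering-potential updates until the squared residual between
    the measured and calculated scattered fields falls below eps (Step 214)
    or the operator-set iteration cap is reached."""
    gamma = gamma0
    for _ in range(max_iters):
        r = forward(gamma) - d_measured          # residual vector (Step 214)
        if np.linalg.norm(r) ** 2 < eps:         # tolerance test
            break
        gamma = gamma + solve_normal_eqs(gamma, r)  # Gauss-Newton update (Step 218)
    return gamma
```

On a linear toy problem this reduces to Newton's method and converges in one step, which makes the skeleton easy to check.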


We have just concluded a general description of the overall inverse scattering algorithm, leaving out many important details. We now look at each of the steps in more detail. First, we explain how the δγ0 shown in Step 240 of FIG. 9A is related to Step 218 of FIG. 8. δγ0 is the first estimate of δγ, the vector which is solved for in Steps 244 through 314 in FIGS. 9B through 9D. This δγ is the correction term which is applied to the present estimated scattering potential γ(n) in Step 314 of FIG. 9D in order to get the updated scattering potential γ(n+1). Once this new updated scattering potential has been determined, control transfers to Step 316 of FIG. 9D, where it is determined if the maximum number of Gauss-Newton corrections has been exceeded. Each pass through Steps 244 through 314 to calculate δγ is one Gauss-Newton step. These Gauss-Newton steps are repeated until one of two things occurs: either the maximum number of Gauss-Newton corrections is exceeded, or the tolerance condition of Step 294 is satisfied. It is important to reiterate that which was noted earlier, namely, that the description of the Gauss-Newton algorithm here does not preclude the use of a suitable Ribiere-Polak or Fletcher-Reeves non-linear algorithm, or other non-linear system solver. The changes in detail do not change the overall character of the minimization algorithm.


Appendix A contains the main program for imaging in two dimensions, representing the code associated with FIGS. 9A through 9D. More specifically, Appendix A deals with the two-dimensional situation, whereas Appendix E deals with the three-dimensional imaging situation.


It is important to note that the code in Appendices A, B, C, F, and G deals with the most general case. This means that it can account for layering, as is discussed in the Summary of the Invention, Example 2, so that the Lippmann-Schwinger operator described earlier incorporates both a convolution and a correlation part. If it is known a priori that there is negligible layering in the geophysical or non-destructive testing or other scenario envisioned, then it is appropriate to use the code given in Appendix H and Appendix I, which deals with the case of no correlation part in the Green's function. The great simplification of the code that results from this omission and specialization is reflected in the fact that the code in Appendix H contains both the main imaging program and the forward problem solution. Furthermore, this code also computes the two-dimensional free space Green's function from scratch. This is in contradistinction to the codes in Appendices A, B, and C, where, in the two-dimensional case in the presence of layering, three separate programs are used to solve the inverse scattering problem, the forward problem, and the construction of the layered Green's function. The construction of this layered Green's function, which is discussed in Example 2 for the acoustic scalar case, requires a substantial amount of time for its computation. However, it is important to note that once the Green's function has been constructed for a given layered scenario, in a geophysical context, for example, it is not necessary to recalculate this Green's function iteratively. Rather, the Green's function, once it has been determined by methods discussed in the literature (see, for example, [Johnson et al., 1992] or [Wiskin et al., 1993]), is used in the code in Appendix A and Appendix B to complete the imaging problem.
To display the versatility of this inverse imaging code, note that the code in Appendix H deals with the transmission mode imaging problem, the particular modality being electromagnetic wave propagation, in particular in the microwave frequency range. Note also that Appendix H incorporates within it the situation where the detectors are at some finite distance away from the image space, i.e. they are not juxtaposed to the image space. The implication is that Green's theorem or some type of "scattering matrix" is required to propagate the field from the border pixels of the image space to the detector positions at some finite distance away from the image space. Such a scattering matrix was constructed earlier.


It is also important to note that all of the Appendices, with the exception of Appendix D, deal with the rectangular iterative method. Appendix D, in distinction, deals with the two-dimensional cylindrical recursion method, which solves the forward problems in less time than the rectangular recursion method for Gauss-Newton iteration. It is also important to recognize that the construction of the layered Green's function, as shown in the Summary of the Invention, Example 2, shows explicitly how the convolution and correlation are preserved even though the layers above and below the layer containing the image space are arbitrarily distributed. The preservation of convolution and correlation is a critical element of the ability to image objects in a layered or Biot environment (Biot referring to the Biot theory of wave propagation in porous media) in real time. The reflection coefficients which are used in the construction of the layered Green's function in the acoustic, elastic and electromagnetic cases are well known in the literature; see, for example, [Muller, 1985] or [Aki and Richards, 1980]. The incorporation of this Green's function for layered media in the acoustic, elastic and electromagnetic cases for the express purpose of imaging objects buried within such a layered medium is novel. The ability to image in real time is critical to practical application of this technology in the medical, geophysical, non-destructive testing and microscopic environments.


We now elucidate the steps shown in FIG. 9A. We have already discussed in detail the steps up to 240. Step 242 of FIG. 9A begins the conjugate gradient loop. Step 244 of FIG. 9A sets n equal to 0.


This n is the counter that keeps track of the number of conjugate gradient steps that have been performed in the pursuit of δγ, the update to the scattering potential estimate. Step 246 of FIG. 9A again determines if the full Jacobian will be used. As in previous step 230 of FIG. 9A, the answer depends on the magnitude of γ: in the case that γ is large, the approximation given in Step 250 is not appropriate. Furthermore, the actual implementation of the algorithm depends on whether correlation is present; the two cases are sufficiently different to warrant separate flow charts. FIG. 20A/B/C deals with the free space case. Considering this case first, JACFLAG is set to one and Step 252 is computed via FIG. 20A/B/C. Step 700 of FIG. 20A transfers the algorithm control to Step 702, where the disk file containing the Green's functions and the disk file containing the internal fields for all possible ω and all possible φ source positions are rewound. Control is then transferred to Step 704, where ω, the index for the frequency, is set equal to 1. In Step 706 thereafter, the complex Fast Fourier Transform (FFT) of the Green's function at this particular frequency is retrieved. In Step 708, φ, the counter corresponding to the source positions, is set equal to 1. Step 710 in FIG. 20A/B/C retrieves the internal field for this particular source position φ, and Step 712 takes the point-wise product of this internal field, retrieved in Step 710, with the δγ which is the input to FIG. 20A/B/C and is also the same δγ which is sought in the main program of FIGS. 9A through 9D. In Step 252 of FIG. 9B, from which FIG. 20A/B/C was called, the Jacobian is applied to p, the search direction for δγ. That is, p is the input to FIG. 20A/B/C; i.e., the δγ in this call of FIG. 20A/B/C is the p of Step 252 in FIG. 9B. In Step 714 of FIG. 
20A/B/C, either the inverse of a certain operator or the first order binomial approximation of that inverse is applied to the point-wise product produced in Step 712. This operator, (I−γG), is similar to, but not the same as, the Lippmann-Schwinger operator, which is represented as (I−Gγ). Note that the determinator of which operator, (I−γG)−1 or (I+γG), is applied is the JACFLAG variable defined in FIG. 9B; JACFLAG=1 corresponds to application of (I−γG)−1.


It is important to note that the inverse of this operator, (I−γG)−1, is never explicitly calculated in Step 714. The actual computation of the inverse of this operator, which in discrete form is represented as a matrix, would be computationally very expensive. Therefore, the action of the inverse of the operator (I−γG) on the point-wise product calculated in Step 712 is calculated by using bi-conjugate gradients or BiCGStab, as indicated in Step 714 of FIG. 20A/B/C. This bi-conjugate gradient algorithm is the same bi-conjugate gradient algorithm shown in FIG. 11A-1/11A-2 that was used to calculate the application of the Lippmann-Schwinger operator (I−Gγ) to a vector f. However, it is important to note that the operator represented by the matrix A in Step 450 of FIG. 11A-1 is not the Lippmann-Schwinger operator in this case; rather, it is the above (I−γG) operator. As before, the BiCGStab, or bi-conjugate gradient stabilized, algorithm may be used in place of the bi-conjugate gradient algorithm; this BiCGStab algorithm is shown in FIG. 11B-1/11B-2. The application of the algorithm shown in FIG. 11C for the actual application of the operator A in BiCGStab, FIG. 11B-1/11B-2, or BCG, FIG. 11A-1/11A-2, will of course differ, in that the point-wise multiplication by γ is performed after the convolution with the Green's function G appropriate to this frequency ω. The changes that must be made to FIG. 11C are clear: namely, the point-wise product shown in Step 402 is taken after the computation of the inverse Fast Fourier Transform (FFT) in Step 408 instead of before. A similar comment applies to FIG. 13, where the Hermitian conjugate of I−γG is computed. In this case the Hermitian conjugate of the operator I−γG requires that the point-wise product with the conjugate of γ, which is carried out in Step 586 of FIG. 13, be calculated before the Fast Fourier Transform (FFT) is taken in Step 580 of FIG. 13. Step 254 of FIG. 
9B now calculates the magnitude of the vector computed in Step 252, namely the application of the operator A to the search direction p. Step 254 of FIG. 9B also computes the magnitude of the vector which results from the application of the Hermitian conjugate of the operator A to the residual vector r. Again, A is either the exact Jacobian or the approximation to the Jacobian shown in Step 250. Step 256 computes the square of the ratio of the two magnitudes computed in Step 254; this ratio is denoted by α. Step 258 of FIG. 9B now updates δγ, which is the Gauss-Newton correction sought in the loop from Step 244 to Step 314 of FIGS. 9B through 9D. The update is given explicitly in Step 258 of FIG. 9B. At the same time the residual is updated via the formula shown explicitly in Step 258 of FIG. 9B. Step 260 transfers control to Step 286 in FIG. 9C, which determines if the full, exact Jacobian will be used or the approximation shown explicitly in Step 290. Depending upon which A is defined, i.e. either the exact Jacobian or the approximation, JACFLAG is set to 1 or 0 and Step 292 is carried out. This is the application of the Hermitian conjugate of the matrix A, which is defined in either Step 290 or Step 288 depending upon whether the exact Jacobian is used; the Hermitian conjugate of A is applied to the updated residual vector r. Step 294 calculates the ratio of two numbers: the first number is the magnitude of the updated residual calculated in Step 292, and the second number is the magnitude of the scattered field for all possible ω's, i.e. frequencies, and φ's, i.e. source positions. This ratio is compared to ε, the pre-determined tolerance. If the ratio is smaller than the tolerance, control is transferred to Step 296, which stores and/or displays the image of the scattering potential. If the tolerance is not satisfied, control is transferred to Step 298, where β is calculated as shown explicitly in Step 298. 
Step 300 calculates the new search direction, which is given by the explicit formula shown. Note that the Hermitian conjugate applied to the residual vector r has already been calculated in the previous Step 298. Control now transfers to Step 304 of FIG. 9D, and it is determined in Step 310 whether the maximum number of iterations allowed for the conjugate gradient solution has been exceeded. If not, then n, the counter for the conjugate gradient steps, is incremented by one, and control is transferred to Step 246 of FIG. 9B. If, however, the maximum number of conjugate gradient iterations has in fact been exceeded, control is transferred to Step 314 of FIG. 9D. This is the step, discussed previously, in which the scattering potential is updated by the δγ vector which has been computed by the conjugate gradient iterations. Control is then transferred to Step 316 of FIG. 9D, where it is determined if the maximum number of Gauss-Newton corrections, specifically the δγ's, has been calculated. If the maximum number of Gauss-Newton steps (see the earlier note regarding Gauss-Newton vs. Ribiere-Polak) has been exceeded, then control is transferred to Step 320, where the present scattering potential estimate is stored and/or displayed. If the maximum number of Gauss-Newton corrections has not been exceeded, then the index parameter that counts the number of Gauss-Newton corrections that have been computed is incremented by 1, and control is transferred to Step 308, which transfers to Step 284 (arrow) of FIG. 9C, which transfers control to Step 244 (arrow) of FIG. 9B, which transfers control to Step 236 (arrow) of FIG. 9A, which transfers control to Step 222 of FIG. 9A. Again, as before, in Step 222 the forward problems for all possible ω's and all possible φ's are calculated. More specifically, control is transferred to Step 340 of FIG. 10. This figure has been discussed in detail above.
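The matrix-free solves referred to throughout these steps can be illustrated with a minimal, unpreconditioned BiCGStab in the spirit of FIG. 11B-1/11B-2: only a matrix-vector product routine is supplied, and the operator is never formed, let alone inverted. The operator below is a toy (I−γG) in which γ acts pointwise after an FFT-based convolution, mirroring the order described in the text; the grid size, γ scaling, and synthetic Green's-function spectrum are assumptions for illustration only.

```python
import numpy as np

def bicgstab(matvec, b, tol=1e-10, max_iter=200):
    """Minimal unpreconditioned BiCGStab (van der Vorst variant), a sketch
    of the solver in FIG. 11B-1/11B-2; only matvec(v) is required."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    r_hat = r.copy()                 # shadow residual
    rho = alpha = omega = 1.0
    v = np.zeros_like(b)
    p = np.zeros_like(b)
    for _ in range(max_iter):
        rho_new = np.vdot(r_hat, r)
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = matvec(p)
        alpha = rho / np.vdot(r_hat, v)
        s = r - alpha * v
        t = matvec(s)
        omega = np.vdot(t, s) / np.vdot(t, t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

# Toy (I - gamma G) system; gamma acts pointwise AFTER the convolution with G.
n = 16
rng = np.random.default_rng(1)
gamma = 0.1 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
G_hat = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (n, n)))  # unit-modulus toy spectrum

def apply_A(vec):
    f = vec.reshape(n, n)
    Gf = np.fft.ifft2(G_hat * np.fft.fft2(f))  # convolution with G via FFT
    return (f - gamma * Gf).ravel()            # (I - gamma G) f

x = rng.standard_normal(n * n) + 1j * rng.standard_normal(n * n)
y = bicgstab(apply_A, x)  # solve (I - gamma G) y = x without forming a matrix
```

Because the γ used here is small, the operator is well conditioned and the solver converges well within the iteration cap.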


The loop just described and displayed in FIGS. 9A through 9D is now reiterated as many times as necessary to satisfy the tolerance condition shown in Step 294 of FIG. 9C, or until the maximum number of Gauss-Newton iteration steps has been performed.


It is appropriate to study in detail the application of the Lippmann-Schwinger operator and the Jacobian in the situation where layering is present. It is important to note that the discussion heretofore has emphasized the special situation where layering is not present, so that the free space or similar Green's function, not involving any correlation calculation, has been used. This specialization is appropriate for pedagogical reasons; however, the importance of the layered Green's function makes it necessary to discuss in detail the application of the Lippmann-Schwinger operator in this case. Before doing this, however, we look at FIG. 25A/B/C, where the Hermitian conjugate of the Jacobian calculation is considered in the free space case. Just as in the calculation of the Jacobian itself (or its approximation), the variable JACFLAG determines whether BCG (or a variant thereof) is applied or whether the binomial approximation shown is used.


In Step 600 the full Hermitian conjugate of the Jacobian calculation is undertaken, with the disk files containing the Green's functions and the disk files containing the internal fields for each ω and each φ being rewound. In Step 604 ω is set equal to 1, i.e. the first frequency is considered. In Step 606 the complex Fast Fourier Transform (FFT) of the Green's function at this particular frequency ω is retrieved from disk, where it has been stored previously. In Step 608 φ is set equal to 1, indicating that we are dealing with the first source position. Step 610 retrieves the internal field fωφ for this source position φ at frequency ω. Step 612 propagates the scattered field from the detector positions to the border pixels of the image space. Also in Step 612 the field values at the non-border pixels of the image space are set to 0. In Step 614 the FFT of the field resulting from Step 612 is computed and stored in vector x. In Step 616 the complex conjugate of the FFT of the Green's function G is point-wise multiplied by x, the vector computed in Step 614. In Step 620 either the inverse of the operator I−G*γ* or the approximation (I+G*γ*) is applied to this result. As before, the actual computation of this inverse operator is computationally prohibitive; therefore, the application of the inverse of I−G*γ* to the vector x is carried out by using bi-conjugate gradients or BiCGStab as discussed above. That is, the system [I−G*γ*]y=x is solved for y using BCG; see FIGS. 11A-1/11A-2. In Step 622 of FIG. 25A/B/C the point-wise multiplication by the complex conjugate of fωφ, the internal field for this frequency and source position, is carried out. This point-wise multiplication is the product with the result of Step 620. Step 624 stores the result of this point-wise multiplication. Step 626 determines if φ is equal to Φ, the maximum number of source positions. 
If the maximum number of source positions has not yet been reached, then φ is incremented by 1 to indicate movement to the next source position. Transfer of control is now made to Step 610, where the internal field for this particular source position is retrieved. The loop from Step 610 to Step 626 is performed iteratively until the maximum number of source positions has been reached. When this occurs, it is determined in Step 630 if ω is equal to Ω, the maximum number of frequencies. If the maximum number of frequencies has been reached, then control is transferred to Step 634, which transfers control back to FIG. 9A, where the Hermitian conjugate of the Jacobian, or its approximation, was being applied. If, however, the maximum number of frequencies has not been reached, then transfer is made to Step 632, where the frequency index is incremented by 1, and control is transferred to Step 606 of FIG. 25A/B/C, i.e. the retrieval of the complex Fast Fourier Transform (FFT) of the Green's function at the new frequency ω. This loop is now repeated until all possible frequencies and all possible source positions have been exhausted. At this time control is transferred to the inverse scattering routine in FIGS. 9A through 9D.


It is appropriate now to turn to that part of FIG. 20A/B/C which details the application of the reduced Jacobian calculation, which is used by FIGS. 9A through 9D, i.e. the Gauss-Newton iteration, in the situation where it is permissible to use the approximation to the Jacobian in place of the exact Jacobian. This calculation is called the reduced Jacobian calculation by virtue of the fact that no inverse operator is required to be calculated in this figure. Therefore, the total computational complexity of this part of FIGS. 20A through 20C (i.e. Step 715) is less than that of the corresponding Step 714 of FIGS. 20A through 20C, in which the full exact Jacobian calculation is carried out. When the Jacobian calculation in FIGS. 20A through 20C is being carried out to compute Step 252 in FIG. 9B, δγ is in fact pn, the search direction for the conjugate gradient step in determining δγ, which in turn is the update to the scattering potential determined by this particular Gauss-Newton step. In Step 715a) the point-wise product is stored in vector x. In Step 715b) the Fast Fourier Transform (FFT) of the point-wise product is computed, and in Step 715c) this FFT is point-wise multiplied by the FFT of the Green's function for this particular frequency ω, and the inverse FFT of the point-wise product is taken to yield G convolved with the product f·δγ, which result is stored in vector y. In Step 715d) the point-wise product of vector y is taken with γ, and the vector x computed and stored in Step 715a) is added to it. The Fast Fourier Transform (FFT) of this sum is calculated in Step 716 of FIGS. 20A through 20C, and the algorithm finishes exactly as described previously. 
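Steps 715a) through 715d), the reduced Jacobian term for one frequency and source position, can be sketched directly; since the inverse (I−γG)−1 is replaced by the first-order binomial approximation (I+γG), no iterative solve is required. The array names below are illustrative.

```python
import numpy as np

def reduced_jacobian_term(delta_gamma, f, gamma, G_hat):
    """Sketch of Steps 715a)-d): apply the reduced (approximate) Jacobian
    term, in which (I - gamma G)^-1 is replaced by (I + gamma G)."""
    x = f * delta_gamma                       # Step 715a): point-wise product
    y = np.fft.ifft2(G_hat * np.fft.fft2(x))  # Steps 715b)-c): G convolved with f*dgamma
    return x + gamma * y                      # Step 715d): x plus gamma . y
```

With γ set to zero this reduces to the bare product f·δγ, which provides a quick sanity check.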
Recall that propagation takes place in the scenario where the receivers are at some finite distance from the image space, in contradistinction to the truncation operation, which takes place when the receivers are at the border pixels of the image space. The truncation or propagation operator is discussed in FIG. 21 for the general case, in which correlation, i.e. layering, is present or absent.


It is very important to note that the correlation that is calculated in the case of layering is, without exception, calculated as a convolution by a change of variables. It is the presence of the convolution which allows the introduction of the Fast Fourier Transform (FFT) and allows this algorithm to be completed in real time when the Fast Fourier Transform (FFT) is used in conjunction with the other specific algorithms and devices of this apparatus.
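The change of variables can be demonstrated in one dimension: a circular correlation equals a circular convolution with the index-reversed sequence, so it can be computed with FFTs and one pointwise product, exactly the property exploited above. A toy sketch (sizes arbitrary):

```python
import numpy as np

def circular_correlation_direct(g, x):
    """Circular correlation c[n] = sum_m g[(n+m) mod N] * x[m], naively."""
    N = len(g)
    return np.array([sum(g[(n + m) % N] * x[m] for m in range(N))
                     for n in range(N)])

def circular_correlation_fft(g, x):
    """The same correlation after the change of variables: reversing the
    index of x turns the correlation into a circular convolution, which is
    a pointwise product in the Fourier domain."""
    x_reflected = np.roll(x[::-1], 1)  # x_r[m] = x[(-m) mod N]
    return np.fft.ifft(np.fft.fft(g) * np.fft.fft(x_reflected))
```

The two routines agree to machine precision, which is the identity that lets the correlational part of the layered Green's function ride on the same FFT machinery as the convolutional part.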


It is now appropriate to consider in detail the action of the Lippmann-Schwinger operator and the Jacobian in the case when layering is present. FIGS. 23A and 23B display this case.


It is also important to note, regarding the Jacobian calculation with correlation, that both the reduced and the exact form have their counterpart in the Hermitian conjugate. This is discussed in FIG. 19A/B. In Step 654, the rewinding of the disk file with the Green's functions and the disk file with the internal fields is carried out, and ω is set equal to 1. In Step 658 the complex Fast Fourier Transform (FFT) of the Green's function at this particular frequency ω is retrieved. Step 658 also sets φ (the source position) equal to 1. Step 660 retrieves the internal field for this φ. Step 662 calculates the Hermitian conjugate of the truncation operator, which places the values of the appropriate field in the border positions of the image space, with zeros everywhere else. Step 664 point-wise multiplies the result of Step 662 by the complex conjugate of the FFT of the Green's function. Step 668 takes the inverse FFT and stores this in vector y. Now, in Step 670, depending upon the value of JACFLAG, either the full inverse (I−G*γ*)−1 (via BCG or BiCGStab, etc.) or (I+G*γ*) is applied directly to y. The second alternative is carried out in the standard manner. That is, one takes the point-wise multiplication of the complex conjugate of γ, the scattering potential estimate at this time, with the vector y. One then takes the Fast Fourier Transform (FFT) of the result and point-wise multiplies it by the complex conjugate of the FFT of the Green's function. Finally one adds the vector y and stores the result in vector y (overwriting y). Step 672 multiplies the final result point-wise with the complex conjugate of the internal field for this particular frequency and source position, and stores the result. Step 672 transfers control to Step 674, where it is determined if φ is equal to Φ. If not, then φ is incremented by 1 in Step 676 and control is returned to Step 660. 
If yes, control is transferred to Step 680, where it is determined if ω is equal to Ω; if not, ω is incremented by 1 in Step 682 and control is transferred to Step 656. If ω is equal to Ω, control is transferred to Step 684, where control is returned to the inverse scattering main algorithm, FIGS. 9A through 9D.


It is important to note at this point that when the operator I−Gγ is applied to the vector T1, and also when the Lippmann-Schwinger operator I−Gγ is applied to any input vector in the layering scenario, FIG. 12 must be followed. Step 556 of FIG. 12 forms the vectors which store the Fast Fourier Transform (FFT) of the convolutional and correlational parts of the layered Green's function separately. These convolutional and correlational parts are computed by Appendices C and G for the two- and three-dimensional cases, respectively, and are given explicitly by the formulas on page 35 of the Summary of the Invention section.


Step 558 passes control to step 560, where the vector of pointwise products shown explicitly there is formed. This pointwise product of γ, the scattering potential estimate, and f, the internal field, is Fourier transformed in step 564, while in parallel, in step 562, the reflection operator is applied to the pointwise product γf computed in step 560. In step 566, following step 562, the Fourier transform of the reflected product γf is taken. In steps 568 and 570, there is pointwise multiplication by the convolutional part and the correlational part of the layered Green's function, respectively. The correlational part of the layered Green's function is pointwise multiplied by the FFT computed in step 566; the fast Fourier transformed convolutional part of the layered Green's function is pointwise multiplied by the FFT computed in step 564. In step 572, the vector W, which is the difference of the results of steps 568 and 570, is computed and stored. In step 574, the inverse Fast Fourier Transform (FFT) of the vector W is computed. In step 576, the final answer, which is the action of the Lippmann-Schwinger operator upon the field f, is computed for the layered medium case, i.e. the case in which correlation exists. This process of applying the Lippmann-Schwinger operator in the presence of layering, established in FIG. 12, is used explicitly in the calculation of the forward problems in step 222 of FIG. 9A in the presence of layering. It is also carried out explicitly in Appendices B and F for the two- and three-dimensional cases, respectively, and it is required in some of the calculations of the application of the Jacobian in the layered medium case.
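The FIG. 12 sequence can be sketched as follows; the reflection operator and the sign convention combining the two parts are assumptions here (the text specifies only that W is the difference of the two products), so the sketch should be read as illustrative of the data flow, not as the patented code.

```python
import numpy as np

def layered_lippmann_schwinger(f, gamma, Gc_hat, Gr_hat, reflect):
    """Sketch of FIG. 12: the Lippmann-Schwinger operator when the layered
    Green's function splits into a convolutional part (FFT Gc_hat) and a
    correlational part (FFT Gr_hat).  The reflect callable stands in for
    the step 562 reflection operator (an assumption of this sketch)."""
    gf = gamma * f                             # step 560: pointwise product
    conv = Gc_hat * np.fft.fft2(gf)            # steps 564, 568: convolutional part
    corr = Gr_hat * np.fft.fft2(reflect(gf))   # steps 562, 566, 570: correlational part
    w = np.fft.ifft2(conv - corr)              # steps 572, 574: difference W, inverse FFT
    return f - w                               # step 576: action on the field f
```

With the correlational part set to zero, this reduces to the free-space operator described earlier, which serves as a consistency check.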


The operator described by FIG. 14, (I−γG), is called the "second Lippmann-Schwinger operator." It is in fact the transpose of the Lippmann-Schwinger operator, i.e., of (I−Gγ). Note that it is not the Hermitian adjoint of the Lippmann-Schwinger operator, precisely because no complex conjugate is taken.


The input to FIG. 14 is Fast Fourier transformed in step 1222. The result is then pointwise multiplied by the Fast Fourier Transform (FFT) of the Green's function for this particular frequency. In step 1226, the result is inverse Fast Fourier transformed and stored in vector y. The pointwise product γ·y is then computed and stored in vector y. In step 1230, the difference between the input vector x and y is formed. Finally, control is transferred back to the calling subroutine.



FIG. 15 details the application of the Hermitian conjugate of the second Lippmann-Schwinger operator.


In FIG. 15, control is transferred to step 1242, where the pointwise product is formed between the complex conjugate of the scattering potential estimate and the input to this subroutine, x. The Fast Fourier Transform (FFT) is then applied to this product in step 1244. In step 1246, pointwise multiplication by G*, the complex conjugate of the FFT of the Green's function at frequency ω, is effected. In step 1248, the inverse FFT of the product is taken, and the result is stored in y. In step 1252, the difference is formed between the input to this subroutine, x, and the computed vector y, and the result is stored in y.
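FIGS. 14 and 15 together admit a compact numerical sketch, assuming the free-space Green's function is applied by pointwise multiplication in the Fourier domain and that its FFT `G_hat` is precomputed (an illustrative name, not the appendix code). The adjoint identity ⟨y, Ax⟩ = ⟨Aᴴy, x⟩ gives a quick consistency check on the pair.

```python
import numpy as np

def second_ls(x, gamma, G_hat):
    """(I - gamma G) x, per FIG. 14: FFT, multiply by G_hat, IFFT,
    pointwise multiply by gamma, subtract from x."""
    y = np.fft.ifft(G_hat * np.fft.fft(x))    # steps 1222-1226: y = G x
    return x - gamma * y                      # step 1230

def second_ls_adjoint(x, gamma, G_hat):
    """Hermitian conjugate (I - gamma G)^H x, per FIG. 15."""
    y = np.conj(gamma) * x                            # step 1242
    y = np.fft.ifft(np.conj(G_hat) * np.fft.fft(y))   # steps 1244-1248
    return x - y                                      # step 1252
```

Note that with the unnormalized FFT convention, Gᴴ is still applied as IFFT of a pointwise product, but with the conjugated spectrum, which is exactly the step sequence of FIG. 15.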



FIG. 21 delineates the propagation operator that takes fields in the image space and propagates them to remote receivers at some distance from the image space. This propagator is discussed in more detail elsewhere.


In FIG. 21, the input is truncated to the border pixels in step 1122. Next it is determined whether the receivers are remote or juxtaposed to the image space. If the detectors are in fact juxtaposed to the image space, control is returned to the calling subroutine.


If the detectors are remote, the field is propagated in step 1128 with the aid of the propagator matrix P (described in the section EXTENSION OF THE METHOD TO REMOTE RECEIVERS WITH ARBITRARY CHARACTERISTICS). Control is then transferred to the calling subroutine.



FIG. 22 details the Hermitian conjugate of the propagation operator discussed in FIG. 21:


In FIG. 22, which details the Hermitian conjugate of the propagation operator P, it is first determined in step 1262 whether the receivers are at some remote location. If not, control is transferred directly to step 1268. If the detectors are in fact remote, control is passed to step 1266, where the fields are propagated via the propagation matrix to the border pixels of the image space. In either case, control then passes to step 1270 and on to the calling subroutine.



FIG. 23A/B determines the effect of the Jacobian on an input vector x in the presence of layering. Step 1050 transfers control to step 1052, where ω is set equal to 1. The files containing the Green's function for this frequency (both the correlational and convolutional parts), the internal field estimates, and the propagator matrix are also rewound. Control is then transferred to step 1054, where the propagator matrix and the Green's function for this particular frequency are read into memory; also in step 1054, φ, the source position index, is set equal to 1. Control is then transferred to step 1056, where the file containing the internal field estimate for this particular frequency and source position is read into memory. Control is then transferred to step 1058, where the pointwise product of the internal field estimate and the input vector x is computed. Next, in step 1060, it is determined whether the approximate Jacobian or the full Jacobian will be used. If the full Jacobian is to be used, control is transferred to step 1062, where biconjugate gradients or stabilized biconjugate gradients is used to solve for the action of the inverse of the matrix I−γG acting on T1; recall that T1 is the temporary vector that holds the pointwise product computed in step 1058. Control is then transferred to step 1066. If in step 1060 the determination is made to perform the approximate Jacobian operation, control is transferred to step 1064. In step 1064, the Fast Fourier Transform (FFT) of the vector T1 is taken and stored in temporary vector T2; the reflection operator is also applied to the vector T1, and the result is stored in vector T3.
The FFT is then taken of the reflected result stored in T3, and T2 is replaced by the pointwise product of the convolutional part of the Green's function with the previous T2, added to the pointwise product of T3 with the correlational part of the Green's function for this particular frequency. The inverse Fast Fourier Transform (FFT) is applied to vector T2, and the result is stored in vector T2. Finally, the pointwise product of γ and T2 is formed, the result is added to vector T1, and the sum is stored in vector T1. Regardless of whether step 1064 or step 1062 has just been completed, in step 1066 the reflection operator is applied to the result of either box, and the result is Fast Fourier transformed into vector T3; the original vector T1 is also Fast Fourier transformed, and the result is stored in vector T2. T2 is then pointwise multiplied by the convolutional part of the layered Green's function at this particular frequency, and the result is added to the pointwise product of T3 with the correlational part of the layered Green's function at this frequency. The result is stored in vector T2. Finally, the inverse Fast Fourier Transform (FFT) is applied to the resulting vector T2. Control is then transferred to step 1068, where the propagation/truncation operator given in FIG. 21 is applied to the result. In step 1070, it is determined whether φ is equal to Φ. If φ is not equal to Φ, control is transferred to step 1072, where φ is incremented by 1, and control is then transferred back to step 1056. If φ is equal to Φ, control is transferred to step 1074, where it is determined whether ω is equal to Ω. If ω is not equal to Ω, control is transferred to step 1076, where ω is incremented by 1, and control is transferred back to step 1054. If ω is equal to Ω, control is transferred to step 1080, and from there control is returned to the calling subroutine.
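Stripped of the layered-medium bookkeeping, the Jacobian-vector product of FIG. 23A/B has the structure J x = P G (I − γG)⁻¹ (f ∘ x): pointwise product of the internal field with x, solve with the second Lippmann-Schwinger operator, apply the Green's operator, then truncate/propagate to the receivers. A small dense sketch follows; the patent solves the inner system with biconjugate gradients, for which a direct dense solve stands in here, and all array names are illustrative.

```python
import numpy as np

def jacobian_times(x, f, gamma, G, P):
    """J x = P G (I - diag(gamma) G)^(-1) (f * x), dense sketch.

    f: internal field estimate, gamma: scattering potential estimate,
    G: dense Green's operator matrix, P: propagation/truncation matrix.
    """
    n = f.size
    t1 = f * x                              # step 1058: pointwise product
    A = np.eye(n) - np.diag(gamma) @ G      # second Lippmann-Schwinger operator
    t = np.linalg.solve(A, t1)              # step 1062 (BiCG in the patent)
    return P @ (G @ t)                      # steps 1066-1068: Green's op, then propagate
```

For γ = 0 the solve is the identity and the product reduces to the Born-level sensitivity P G (f ∘ x).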


In FIG. 19A, the Hermitian conjugate of the Jacobian with correlation present is applied to an input vector x. In step 654 of FIG. 19A, ω is set equal to 1, and the files containing the Green's functions for all frequencies are rewound. In step 658, the Green's function for this particular frequency, both the convolutional and correlational parts, is read into memory, and φ is set equal to 1. In step 660, the file containing the internal field estimate for this particular source position and frequency is read into memory. In step 662, the Hermitian conjugate of the propagation operator is applied to the vector x, and the result is stored in vector T1. In step 664, the reflection operator is applied to the vector T1 and the result is stored in vector T3; Fast Fourier Transforms are applied to both vector T1 and vector T3. Finally, the pointwise product of the complex conjugate of the FFT of the convolutional part of the layered Green's function with T1 is formed, and the result is added to the pointwise product of T3 with the complex conjugate of the FFT of the correlational part of the layered Green's function. The result is stored in vector T1. Control is then transferred to step 668, where the inverse Fast Fourier Transform (FFT) of T1 is taken and the result is stored in vector T1. In step 670, either the approximation or the full action of the inverse of the Hermitian conjugate of the second Lippmann-Schwinger operator is applied, as before. If the full inverse operator is applied, it is approximated by the solution of a system via biconjugate gradients or stabilized biconjugate gradients. Otherwise, in step 670 the approximation I+G*γ* is applied to the vector T1 resulting from step 668: the pointwise product with γ*, i.e., the complex conjugate of γ, is taken,
and the Fast Fourier Transform (FFT) of the result is pointwise multiplied by G*, the complex conjugate of the Green's function, with care being taken to ensure that the convolutional and correlational parts are handled separately, as before. This result is added to T1 and stored in T1. Control is then transferred to step 672, where the result is pointwise multiplied by the complex conjugate of the internal field estimates and added to the accumulated result of the previous frequencies and source positions. Control then passes to step 674, where it is determined whether φ is equal to Φ; if not, φ is incremented by 1 and control is transferred to step 660; if so, control is transferred to step 680, where it is determined whether ω is equal to Ω. If not, ω is incremented by 1 and control is transferred back to step 658. If so, control is transferred back to the calling subroutine.


Embodiments within the scope of the present invention also include computer-readable media having computer-executable instructions or data structures stored thereon. Such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. When information is transferred or provided over a network or another communications connection to a computer, the computer properly views the connection as a computer-readable medium. Thus, such a connection is also properly termed a computer-readable medium. Combinations of the above should also be included within the scope of computer-readable media. Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.


The following discussion is intended to provide a brief, general description of a suitable computing environment in which the invention may be implemented. Although not required, the invention can be implemented in the general context of computer-executable instructions, such as program modules, being executed by computers in network environments. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein.


Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked through a communications network. In the distributed computing environment, program modules may be located in both local and remote memory storage devices.


Water Bath Scanner compatible with Reflection tomography and Large Pixel Inverse Scattering


General features of the proposed water bath scanner are shown in FIG. 31. It is seen that the general architecture is a pair of 2-D arrays (actually 1.75-D arrays) that are rotated about a vertical axis below the water surface. Each array can act as a transmitter or a receiver in both pulse echo mode (B-scan and reflection tomography) and in transmission mode (inverse scattering tomography). This arrangement allows a complete 360 degree tomographic scan to be obtained in only 180 degrees of rotation. Each array can be tilted up to image features close to the chest wall. The respective tilt axes are mutually parallel. The arrays as a group can be raised and lowered to image different slices of the breast. An optional latex rubber shield (or a woven nylon stocking-like shield) holds the breast stationary to minimize motion artifacts during scanning. The shield is held at the top of the water tank by a ring and at the bottom by a tube that passes through a hole in the rotation shaft.


A slight vacuum can be applied to the tube to make a tighter fit, as needed for the latex version. This shield is disposable and contributes to the esthetics and sanitation of the scanner. A tilting cot, shown in FIG. 32, provides a comfortable means of patient positioning. The patient stands on the foot rest when the cot is vertical. The cot then tilts to the horizontal position for the scanning process. The breast is then pendant in the water bath tank. One breast at a time is scanned. The tilt process is reversed for patient exit.


Some of the important specifications for the scanner are given in TABLE 1 below. Said table illustrates the performance of both reflection tomography and inverse scattering transmission tomography. The in-plane and normal-to-plane spatial resolution of reflection tomography will approach 0.2 mm and 2 to 3 mm, respectively (the vertical resolution may be improved to 1 to 1.5 mm with special apodizing functions).


The number of views for reflection tomography will be determined experimentally, but is best estimated as the number of depth resolution lengths per circumference of a circle of diameter equal to the lateral resolution of a single view. Thus, N=λπ[2R/A]/[c/(2Pf)]=4πλRPf/[cA]=4πRP/A, where λ is wavelength, R is range to target from the array, A is the active lateral aperture of the array, c is the speed of sound, P is the fractional bandwidth, and f is the center frequency of the signal. The total data collection time for one slice is 12 seconds for the system specified in TABLE 1.
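The chain of equalities above can be checked numerically; the final form follows because λf = c, so the frequency-dependent factors cancel. The values below are illustrative, not the TABLE 1 design point.

```python
import math

# Illustrative values (not the TABLE 1 design point)
c = 1500.0    # speed of sound in water, m/s
f = 5.0e6     # center frequency, Hz
P = 0.70      # fractional bandwidth
R = 0.0762    # range to target from array, m (3 inches)
A = 0.05      # active lateral aperture, m
lam = c / f   # wavelength

N1 = lam * math.pi * (2 * R / A) / (c / (2 * P * f))  # as first written
N2 = 4 * math.pi * lam * R * P * f / (c * A)          # intermediate form
N3 = 4 * math.pi * R * P / A                          # after lam * f = c cancels
```

All three expressions agree to machine precision, confirming the algebra term by term.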


The in-plane spatial resolution of inverse scattering can be varied by choice of the pixel size (this ability is not possible with reflection tomography and is one of the new developments we report). In TABLE 1 we use 2 wavelength pixels (0.6 mm at 5 MHz) as an example.


The number of views for inverse scattering tomography will be taken to be the same as for diffraction tomography, which is N=πD/λ, where D is the breast diameter. For inverse scattering this generalizes to N=πD/p=667 at 5 MHz for D=5 inches and p=pixel size=2 wavelengths. Using many views at one frequency is equivalent to using fewer views with more frequencies per view. This creates a speed-up factor in data collection, since many frequencies can be collected at each view in the same time required to collect a single frequency. The frequency sample interval required to replace the view-to-view angle separation for a complete high quality image reconstruction is given by Δf=2πf/N=2πf/[πD/p]=2pf/D=0.047 MHz. The frequency bandwidth for 16 frequencies is then 0.047 MHz×16=0.752 MHz and requires only 667/16=42 views. For this example with p=2 wavelengths, the spatial resolution is 0.6 mm. If p=1 wavelength were used, the spatial resolution would be 0.3 mm. The total single-slice data collection time for 42 views and 16 frequencies is 4.3 seconds for the system specified in TABLE 1.
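The view and frequency counts quoted above can be reproduced directly from the stated formulas (assuming c = 1500 m/s for water, as elsewhere in this disclosure):

```python
import math

c = 1500.0          # speed of sound in water, m/s
f = 5.0e6           # center frequency, Hz
D = 5 * 0.0254      # breast diameter: 5 inches in meters
p = 2 * c / f       # pixel size = 2 wavelengths = 0.6 mm

N = math.pi * D / p            # views needed at a single frequency
df = 2 * p * f / D             # frequency step replacing one view-angle step
n_freq = 16
views_needed = math.ceil(N / n_freq)   # views when 16 frequencies are used
band = n_freq * df                     # total bandwidth for 16 frequencies
```

The computed values round to the figures in the text: about 667 single-frequency views, a 0.047 MHz frequency step, 42 views at 16 frequencies, and roughly 0.75 MHz of bandwidth.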


The block diagram of the scanner electronics is shown in FIG. 33.a. Starting on the left, the transducer array is 8 by 96 elements. Each row has a receiver multiplexer (MUX) that feeds one of 8 analog-to-digital converters (ADCs). Two ADCs are packaged on one GaGe™ (Montreal, QC, Canada) “CompuScope 12100™” card, which is clocked at 30 million samples per second (www.gageapplied.com). Each of the 8 channels digitizes 6,000 samples simultaneously per transmit event. Each transmit event is a transmission from a single element or from a single column (or part of a column) of elements. Also shown is a potential future expansion (using even and odd columns to separate MUXs) where 4 more GaGe cards are added to each computer or to 4 additional separate computers. If the PCI bus in each computer runs at 100 Mbytes/sec, then each channel on the GaGe card runs at 25 samples/microsecond (two bytes/sample) into the PCI bus. Then 6,000 samples will be transferred in 6000/25=240 microseconds on a single channel. This closely matches the 30 samples/microsecond digitization rate. The computational load can be handled in several ways: (1) array processor boards; (2) Apple G4 computers with their velocity engine that, when programmed properly and matched to compatible problems, operates at up to 4 GFLOPS (4 billion floating-point operations per second); or (3) parallel computers. Our first choice would be to place one dual G4 PCI card in one slot of each of the 4 computers. This would provide 4×2=8 G4 computers, which would provide a maximum of 32 GFLOPS. We could also put two G4 cards in each computer for a maximum of 64 GFLOPS.

TABLE 1. Scanner Specifications

Diameter of water bath tank: 9 inches
Depth of water bath tank: 13 inches
Water temperature stability: ±0.1 degrees F
Ground fault and safety protection: yes
Patient hand controlled interrupt: yes
Breast stabilizing disposable liner: yes
Antibacterial water in tank: yes
Disposable sanitary liners per patient: yes
Inter-array separation: 5 ± 2 inches
Center frequency of arrays: 5 MHz
Percent bandwidth of arrays: 70 percent
Size of each array (elements): 8 by 96
Waterproof cable built in: yes
Number of sub cables/array: 768
Impedance of sub cable (Ohms): 50
Length of arrays: 4.91 inches
Height of arrays: 0.736 inches
Vert. rotation of arrays: 370 degrees
Vert. motion of arrays: 6 inches
Tilt of arrays: 0 ± 30 degrees
Arrays identical: yes
Center lateral resolution/view, r = 3″: 1.47 mm
Collection time per reflection view: 0.34 sec
Number of views for reflection tomography: 19 to 36
Total collect time for reflection slice: 12 sec
Spatial resolution for reflection tomography: 0.2 mm planar, 2-3 mm vert.
Contrast resolution, reflection tomography: 15 to 1
Number of reflection vertical slices: up to 153
Collect time for inverse scattering view: 0.102 sec
Number of views for inverse scattering tomography: 667 @ 1 freq; 42 @ 16 freq
Total collect time/slice for inverse scattering: 68 sec @ 1 freq; 4.3 sec @ 16 freq
Spatial resolution, inverse scattering tomography: 0.3-0.6 mm planar, 2-3 mm vert.
Contrast resolution, inverse scattering tomography: 100 to 1
Frequency separation to replace angle separation for 667 views: 47 kHz
Frequency band (16 × 47 kHz): 752 kHz
Number of inverse scattering vertical slices: up to 153


The question of overloading the PCI bus has been examined and it is not a problem, since our algorithms are coarse grained and each distributed process needs minimal communication with the other processes. Thus, similar operations can run in parallel on all the G4 nodes and possibly on parts of each node (each G4 has 4 parallel SIMD processors). SIMD (single instruction, multiple data) computers also work well with our coarse-grained algorithms. The natural process divisions are frequency, view angle, and slice height.


The signals to the transmitter elements are generated by an arbitrary waveform generator card (such as the GaGe™ CompuGen 1100) that is placed in the PCI bus of one of the computers, which acts as the master computer. This master computer synchronizes the actions of the other computers. This is not a time-consuming task, since each computer's processing is largely independent of the other nodes. The output of the waveform generator is amplified by a linear amplifier and sent to the transmitter MUX. The MUX will be designed to allow future expansion, using 2 to 8 linear amplifiers placed after, respectively, 2 to 8 delay lines (connected to the common waveform generator) to form transmit beams. The MUX circuits will be solid state analog switches of either the diode network type or of the Supertex™ (Sunnyvale, Calif. 94089) MOS (metal oxide semiconductor) type. We have used the diode network type with success in one of our present scanners. The diode network requires more printed circuit (PC) board space, but has lower series resistance (about 5 Ohms) and low shunt capacitance. The Supertex approach uses less PC board space (the HV20822 has 16 switches in a 0.354 inch square package) but has higher series resistance (about 25 Ohms) and shunt capacitance (20 pF off, 50 pF on). The diode network is a safe bet. However, we will look at placing the Supertex switches in the MUX circuit bays close to the arrays to reduce cable capacitance. We will also consider using preamplifiers or buffer amplifiers after the Supertex switches to reduce the effect of their higher resistance and capacitance. The Supertex is not a bad choice and is used in many commercial ultrasound scanners with normal cable lengths from probe to chassis, where space savings is important in the chassis. The MUX circuits will use two stages for both the transmitter and receiver connections.


The tank and scan motion control will be constructed and assembled per methods and components well known to the art. The arrays can be built by transducer jobbers such as Blatek, State College, Pa. The engineering and tooling for the arrays has already been completed and allows the present array we are using (8 rows and 16 columns) to be replicated 6 times per new array to make a 96 by 8 element array.


A more detailed understanding of the MUX shown in FIG. 33.a is given in FIG. 33.b. The MUX circuit is divided into two natural parts, the transmitter MUX and the receiver MUX. The transmitter MUX is shown in the top part of FIG. 33.b. The transmitter portion consists of a programmable arbitrary waveform generator, a power amplifier, a protection circuit (which blocks the signal if the voltage exceeds a critical threshold), and a programmable gain module that can provide independent gains to each of 8 separate output lines. The 8 independent output lines drive the 8 independent rows of the 96-column transducer. Each row has six multiplexers, labeled A through F, that multiplex to 16 respective elements on that row, so that 6×16=96 elements per row. Each of the said six multiplexers drives one row of an 8 row by 16 column subarray. The 6×8=48 multiplexers, labeled S1A through S8A for column A through S1F through S8F for column F, are 16-to-1 Supertex MOS analog high-voltage switches.


The receiver multiplexer is shown in the bottom half of FIG. 33.b and consists of two subsections. The first subsection consists of six modules that multiplex 128 to 16. This is accomplished by 16 daughter cards that each multiplex 8 to 1 using diode gates; 8×16=128 inputs, matching each subarray's 8 rows by 16 columns=128 elements. The outputs of the six modules in the first subsection are then routed to a matrix switch that selects which of the 16 outputs of each module are connected to either 8 or 16 analog-to-digital converters. The matrix switch has 2×6×16 nodes and thus allows either 8 or 16 ADCs to be used. The node switches can be diode switches or selectable preamps. Outputs can be paralleled when deselected without loading the input to any ADC channel.


The design and construction of multiplexer circuit boards is a process well known in the art. We have experience in designing such multiplexers for the compression plate inverse scattering scanner and other scanners in our lab.


The circuit board fabrication can be sent to a jobber. The circuit board layout work can be done with Orcad or similar software. The design and coding of the software for data acquisition is a well-known process in the art. For example, one can use C++ or the Labview™ application software from National Instruments™ (Austin, Tex. 78759-3504) for the main data acquisition program to allow the use of a virtual instrument panel (showing scanner array position, digitized waveforms, gain settings, etc.). The inner loops of the Labview program will be written in C or C++ for speed. Algorithm implementation and programming may be done in Fortran, C, or C++.


Reflection Tomography and Inverse Scattering Imaging Algorithms


The algorithms used for reflection tomography are described further below. The exact algorithms for inverse scattering are given by Borup et al. in [D. T. Borup, S. A. Johnson, W. W. Kim, and M. J. Berggren, “Nonperturbative diffraction tomography via Gauss-Newton iteration applied to the scattering integral equation,” Ultrasonic Imaging 14, 69-85, (1992), and Wiskin, J W, Borup, D T, and Johnson, S A, “Inverse Scattering from arbitrary two-dimensional objects in stratified environments via a Green's Operator”, J. Acoust. Soc. Am. Pt.1, August 1997.]. We show here an improved, faster version of the parabolic method, which is one of three faster versions of inverse scattering that use forward-propagating wave solutions. This method is approximate but much more accurate than the Born or Rytov methods, and closely approaches the accuracy of the exact solution. An outline of the theory is given here; a full account of several approaches using this method may be found in the literature and in our patent.


We start with the Helmholtz wave equation

[∂²/∂x² + ∂²/∂y² + k²(x, y)] f(x, y) = 0.


Next we take the Fourier transform with respect to y and then assume an approximate factorization of the resultant equation. Note that the convolution theorem of Fourier transforms is used, where ∗ denotes the convolution operator. Let f̃(x, λ) denote the Fourier transform of f(x, y) with respect to y. The approximate factored equation (obtained by forcing commutation of all variables) is then:
(∂/∂x + i√(k̃²(x, λ)∗ − λ²))(∂/∂x − i√(k̃²(x, λ)∗ − λ²)) f̃(x, λ) = 0.


We next note that the two factors represent waves moving to the left and to the right along the x axis. Taking only the wave moving from left to right, we have
(∂/∂x − i√(k̃²(x, λ)∗ − λ²)) f̃⁺(x, λ) = 0,

where the superscript + indicates that part of {tilde over (f)}(x, λ) moving in the +x direction (left to right).


It is a parabolic differential equation. For the special case k²(x, y) = k₀² = const., the factorization is exact, convolution becomes multiplication, and we can solve the equation for f̃⁺(x, λ) and then take the inverse Fourier transform to get the propagation formula:
f⁺(x, y) = (1/2π) ∫₋∞^∞ f̃⁺(x₀, λ) e^{i(x−x₀)√(k₀²−λ²)} e^{iλy} dλ,   x > x₀ > 0,

which approximates propagation in breast tissue, where the contrast in k² is small, i.e., k² ≈ k₀².


This formula is the basis for a multitude of marching approaches for solving the direct scattering problem. The general idea is to perform the integral on a line parallel to the y-axis and to propagate the angular spectra forward by an increment of x. Then the process is repeated for additional increments of x. The method for dealing with k2(x,y)≠const. varies between methods (including a binomial expansion of the square root). Our approach follows this general pattern but adds additional refinements to improve accuracy and speed. This completes the outline of the parabolic method.
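A single marching increment of this general pattern can be sketched as the textbook split-step scheme: propagate the angular spectrum exactly in a background k₀, then apply the contrast k − k₀ as a thin phase screen. This is an illustration of the basic pattern only, not the patent's refined marching method.

```python
import numpy as np

def parabolic_step(f, k, k0, dx, dy):
    """Advance the one-way field f(y) by dx via split-step propagation.

    Homogeneous propagation in k0 is done exactly in the angular-spectrum
    (lambda) domain; the deviation k - k0 is applied as a thin phase screen.
    """
    lam = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dy)        # transverse wavenumbers
    kx = np.sqrt((k0**2 - lam**2).astype(complex))          # axial wavenumber per mode
    f = np.fft.ifft(np.exp(1j * dx * kx) * np.fft.fft(f))   # exact k0 propagation
    return f * np.exp(1j * (k - k0) * dx)                   # phase screen for contrast
```

The complex square root makes modes with λ² > k₀² decay, which is the correct evanescent behavior. In a homogeneous medium (k = k₀) a plane wave simply acquires the phase e^{i k₀ dx} per step.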


The parabolic method has allowed us to design the proposed scanner to use inverse scattering at 5 MHz. This has been accomplished using several tricks. One trick is to use larger pixels to reduce the computational load. Other tricks involve further approximations that increase speed but do not degrade performance. The basis of inverse scattering imaging is to solve, by optimization methods, the following least squares problem for the vector of independent variables γ = γ(x, y), whose components are the image parameters. Let f(measured)(xdet) be the measured field on the detectors and let f(calculated)(xdet, γ, f(inc)) be the computed received field as a function of an estimate of γ and the incident field f(inc)(x) in the body and on the detectors. Then the scattering potential is found by minimizing the squared norm of the measurement residual R, where R is defined as the difference, in Hilbert space, between the measured and computed fields on the detectors.
min_γ {‖f(measured)(xdet) − f(calculated)(xdet, γ, f(inc))‖²} = min_γ {‖R‖²}


We have shown how this optimization problem may be solved by Gauss-Newton, Fletcher-Reeves, Polak-Ribiere and other methods [D. T. Borup, S. A. Johnson, W. W. Kim, and M. J. Berggren, “Nonperturbative diffraction tomography via Gauss-Newton iteration applied to the scattering integral equation,” Ultrasonic Imaging 14, 69-85, (1992); J W Wiskin, D T Borup, S A Johnson, “Inverse Scattering from arbitrary two dimensional objects in Stratified Environments via a Green's Operator”; APPARATUS AND METHOD FOR IMAGING WITH WAVEFIELDS USING INVERSE SCATTERING TECHNIQUES, S. A. JOHNSON ET AL.], herein incorporated by reference. Formerly we used integral equations or finite difference time domain (FDTD) methods to solve the forward problem (given γ and f(inc)(x), compute f(calculated)(xdet, γ, f(inc))). The Jacobian is defined as J ≡ ∂f(calculated)/∂γ.

The solution, γ, is found by the iteration

J(n)δγ(n)=−R(n)
γ(n+1)=γ(n)+δγ(n)


The derivation and use of the parabolic Jacobian is based on a recursion process and is found in our patent [APPARATUS AND METHOD FOR IMAGING WITH WAVEFIELDS USING INVERSE SCATTERING TECHNIQUES, S. A. JOHNSON ET AL.] in a subsection called “Inverse problem and construction of Jacobian” in the section called “EXAMPLE 12, PARABOLIC MARCHING METHODS”.


We form a Jacobian for the parabolic method and incorporate it as part of the above iterative formula. We next show how the new parabolic method performs in terms of speed and accuracy by simulating the geometry and parameters for scanners similar to the proposed scanner specified in TABLE 1.
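The iteration J(n)δγ(n) = −R(n), γ(n+1) = γ(n) + δγ(n) is ordinary Gauss-Newton. A toy scalar example (an exponential-decay fit with hypothetical data, standing in for the acoustic forward model) shows the shape of the loop:

```python
import numpy as np

def gauss_newton(residual, jacobian, gamma0, iters=20):
    """Minimize ||residual(gamma)||^2 via J dg = -R, gamma += dg."""
    gamma = np.atleast_1d(np.asarray(gamma0, dtype=float))
    for _ in range(iters):
        R = residual(gamma)
        J = jacobian(gamma)
        # Solve the linearized normal equations in the least-squares sense
        dg, *_ = np.linalg.lstsq(J, -R, rcond=None)
        gamma = gamma + dg
    return gamma

# Toy forward model: f(x; gamma) = exp(-gamma x), "measured" with gamma = 2
x = np.linspace(0.0, 1.0, 20)
meas = np.exp(-2.0 * x)
res = lambda g: np.exp(-g[0] * x) - meas                  # residual R(gamma)
jac = lambda g: (-x * np.exp(-g[0] * x)).reshape(-1, 1)   # Jacobian dR/dgamma
```

In the imaging problem, R is the Hilbert-space residual on the detectors and J is the (parabolic) Jacobian; here both are scalar-parameter stand-ins.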


EXAMPLES OF EMBODIMENT OF INVENTION
Example 1

In this example we consider the design of a compression plate ultrasound scanner combining both reflection imaging and transmission inverse scattering imaging. This is not the geometry for the water bath scanner, yet we show that both geometries can produce images of sound speed and attenuation.


We specify two arrays facing each other, with their faces mutually parallel. Each array can be a 1-D array or a 2-D array of transducer elements. We have constructed a compression plate scanner where each array is a 1-D array with 256 elements with λ/2 element separation at 2 MHz. A drawing of this compression plate configuration is shown in FIG. 34.a and 34.b. Therein is shown the side view of a breast between two compression plates and an end view of the top and bottom linear array transducers and their translation motions. The two opposing transducers allow transmission data to be collected that includes scattering and diffraction into wide angles thus improving spatial resolution. The motion of the top and bottom linear 1-D array or 2-D array is included to image successive slices of the breast. The top and bottom compression plates are also shown.


The array geometry can be extended to provide improved imaging by adding side reflecting plates or side transducer arrays which also serve as side compression plates. This extension in function is shown in FIGS. 34.c and 34.d. In particular, FIG. 34.c shows an isometric view of a compression plate scanner showing options for placing transducer elements either inside or outside of the top and bottom compression plates. We also note that the top and bottom transducer arrays can be used for reflection imaging as well. The side plates with transducers may also be used for reflection imaging. FIG. 34.d Shows an isometric view of a compression plate scanner showing options for placing side compression plates with optional transducer elements either inside or outside of the side compression plates. A front compression plate and optional reflector or transducer array can also be added as a front plate as is shown in FIG. 34.e.


A simulation of the compression plate scanner is described next. The bandwidth for the reflection imaging mode was set equal to a Blackman window from 0 to 2 MHz with a center frequency of 1 MHz. The reflection mode beam former used a 16 element segment of the array to create a beam perpendicular to the array face, which was then translated across the array. The beam former was focused at all ranges for both the transmitter and receiver elements, and a Hamming window was used to apodize both the transmitter and receiver beams. FIG. 35.a shows the speed of sound model of a breast compressed between two plates as in a mammogram. FIG. 35.c shows the corresponding attenuation model. The tissue model for the simulation was constructed with five tissue speeds of sound and attenuations. For speed of sound: Cfat=1458 m/s, Cparenchyma=1519 m/s, Ctumor=1564 m/s, Ccyst=1568 m/s, Cskin=1525 m/s. For attenuation: afat=0.41 dB/cm/MHz, aparenchyma=0.81 dB/cm/MHz, atumor=1.18 dB/cm/MHz, acyst=0.10 dB/cm/MHz, askin=0.81 dB/cm/MHz. A value of 1500 m/s for water was assumed. A random component with mean 0, uniformly distributed from −5% to 5% of the average speed of sound, was added to the speed of sound model. The data was generated with the generalized Born approximation corrected for time delay and attenuation loss through the tissue. The generalized Born scattering model and image arrays were 512 by 200 with a λ/4 (at 2 MHz) pixel size. A reflection image was made by software that simulates a reflection mode scanner. FIG. 35.e shows the reflection mode image created from the simulated data for the model defined in FIGS. 35.a and 35.c. The image range is −50 dB (black) to 0 dB (white). Notice the noise and speckle caused by the random speed of sound component. The speckle caused by the random component prevents the identification of the targets (except possibly the cyst, although there are speckle artifacts at least as bright).
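The tissue model above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the region layout, names, and random seed are invented; only the tissue speed values, the 512 by 200 grid, and the ±5% uniform random component come from the text.

```python
import numpy as np

# Illustrative reconstruction of the simulation tissue model (not the
# authors' code): tissue speeds of sound are taken from the text; the
# region layout and seed are invented for the sketch.
SPEEDS = {"water": 1500.0, "fat": 1458.0, "parenchyma": 1519.0,
          "tumor": 1564.0, "cyst": 1568.0, "skin": 1525.0}

def make_speed_map(nx=512, ny=200, seed=0):
    rng = np.random.default_rng(seed)
    c = np.full((nx, ny), SPEEDS["fat"])       # fat background
    c[:, :4] = SPEEDS["skin"]                  # thin skin layer (illustrative)
    c[200:312, 60:140] = SPEEDS["parenchyma"]  # illustrative parenchyma region
    c[240:260, 90:110] = SPEEDS["tumor"]       # illustrative tumor
    c[150:170, 90:110] = SPEEDS["cyst"]        # illustrative cyst
    # random component: mean 0, uniform over +/-5% of the average speed
    c += rng.uniform(-0.05 * c.mean(), 0.05 * c.mean(), size=c.shape)
    return c

c = make_speed_map()
```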


The inverse scattering data was generated by computer simulation using a direct scattering solver. This solver generated data at all elements on the receiving array for each element transmitting on the opposite transmitter array. The inverse scattering image was formed by software that inverted this simulated data. Perfectly reflecting plates (infinite impedance) were assumed to exist on the other two sides. Two frequencies, 1 MHz and 2 MHz, were used. The image array was 256 by 100, λ/2 at 2 MHz, pixels. For 30 iterations, the algorithm required 1.5 hr. of CPU time on a 400 MHz Pentium II. The model defined in FIGS. 35.a and 35.c was used to generate data which was then inverted with the parabolic inverse scattering algorithm.


The performance of the standard parabolic method with ½ wavelength pixels is shown in FIGS. 35.a through 35.d.



FIG. 35.a shows the true speed of sound model. FIG. 35.b shows the reconstructed image of sound speed using the ½ wavelength pixel parabolic inverse scattering algorithm. Values range from 1419 m/s (black) to 1602 m/s (white). Notice that, unlike the reflection mode image, all targets are visible. If sources and receivers are added to the sides, the reconstructed image becomes almost identical to the original, but this simulation has only the top and bottom arrays. Note the accurately reproduced random component of the speed of sound.



FIG. 35.c shows the true corresponding attenuation model. Note that no random components were added to the attenuation for this simulation. FIG. 35.d shows the reconstructed image of the acoustic attenuation using the ½ wavelength pixel parabolic inverse scattering algorithm. Notice that all of the targets are clearly visible. The original parabolic method used λ/2 pixels. With 2λ pixels, there are 4×4=16 times fewer pixels, 4 times fewer sources, and 4 times the inner loop speed, for a total factor of 4^4=256. At 5 MHz the 2λ pixel spatial resolution is 0.6 mm. With the new parabolic method two options still exist: (1) use λ/2 or λ pixels and run the imaging program overnight for those slices that are wanted at 0.3 mm spatial resolution; or (2) increase computing power. By Moore's law, computing speed increases 2.5 times per 2 years, so in about 12 years 0.3 mm resolution may be possible in some breast types.


Example 2

Inverse scattering images from real lab data from two ¼ inch, 5 MHz Panametrics™ videoscan transducers facing each other, using the parabolic inverse scattering algorithm.


This example illustrates that our breakthrough in large pixel parabolic codes is practical. We applied it to existing time of flight data (digitized waveforms) that were collected to be analyzed by time of flight tomography (a modified x-ray CT algorithm). No transducer array was used as a receiver (thus no diffraction pattern was recorded). Nevertheless, the parabolic inverse scattering method has defined the two holes in an agar phantom somewhat better than the straight line time of flight method (see FIG. 36).



FIG. 36.a shows the “time of flight image,” i.e., the speed of sound map (image) obtained from real time of flight data collected on a laboratory scanner through application of a time of flight CT algorithm. Minimum=black=1477 m/s; maximum=white=1560 m/s.



FIG. 36.b shows the sound speed map (image) obtained after 15 steps of the new fast 2 wavelength pixel algorithm, starting with the same real lab data as used by the time of flight CT algorithm. Minimum=black=1465 m/s; maximum=white=1560 m/s. Although the data used was not designed for this new algorithm, the image is better than the corresponding “time of flight image”. Based on this evidence we therefore point out that, with hardware designed to be compatible with and optimized for this new method, we can rapidly compute images of speed of sound at 5 MHz that exceed the spatial resolution of time of flight images by a factor of 10 (0.6 mm vs. 6 mm). With further effort we can obtain images of 0.3 mm and even 0.15 mm resolution (by combining reflection and transmission data). We have tested this claim with computer simulation, and the results support the claim, as we show in FIGS. 37 and 38 in the next example.


Conclusion for Example 2: The new fast algorithm has been tested in adverse conditions with poor data (incomplete angular spectrum) from the laboratory and performs better than the “gold standard” time of flight CT algorithm. Its predicted performance should be about 10 times better when using data that is designed to meet its computational and interface requirements. This will be verified in the next two examples.


Example 3

Inverse scattering images from simulated data from ¼ inch piston transducers using the parabolic inverse scattering algorithm.


For this computer simulation example we use the following parameters: frequency=5 MHz; number of frequencies=1; pixel dimension=2λ; image size=100 by 100 pixels=2.36 by 2.36 inches; element aperture=0.236 inches; 90 element translation positions for 90 view angles. Speed of sound values (m/s): cwater=1500, cskin=1505, cfat=1490, cparenchyma=1505, ctumor=1510. The image size can easily be doubled or more for real data. The results of this simulation are shown in FIGS. 37.a and 37.b. FIG. 37.a shows the normalized true speed of sound image, [c0/c(x,y)−1], used to generate the simulated data. FIG. 37.b shows the normalized image, [c0/c(x,y)−1], reconstructed by the 2 wavelength pixel algorithm.


Conclusion for Example 3: Using an accurate model of the two 5 MHz, ¼ inch diameter piston transducers, rather than lab data collected with transducers that were not accurately aligned and calibrated, the spatial resolution is about 4 times better than with the lab data. In fact, the images are remarkable. This simulation experiment suggests a follow-up simulation experiment in which a much larger receiver aperture is used to sample a much greater part of the scattered (diffracted) waves in the forward direction. In this case an additional factor of improvement should be obtained. This experiment is performed in the next example.


Example 4

Images made from simulated data for the same phantom as in EXAMPLE 3, but with the ¼ inch diameter piston transducers replaced with a transducer array of 120 elements, each 2 wavelengths in width and center to center separation.


See FIG. 38 for the comparison of the time of flight image using the CT algorithm with straight acoustic rays vs. the new, high speed inverse scattering method with 2λ (two wavelength) square pixels. The phantom for generating the computer simulated data (used by the inversion algorithms in FIGS. 37 and 38) is the same, but the transducers are different (FIG. 37 uses piston transducers, while FIG. 38 uses a 120 element array). The parameters used in FIG. 38 are: D=pixel size=2λ at 5 MHz=0.6 mm; numeric array size=180 by 120 pixels=10.8 by 7.2 cm=4.25 by 2.84 in; outside circle diameter=6 cm=2.36 in; the receivers are located at 151 points centered on the 180 pixel border; the left and right hand sides of the 180 by 120 array are assumed to be perfectly reflecting (infinite impedance) boundaries; 120 views; 2 frequencies at 2.5 MHz and 5 MHz; cfat=1485; cparen=1515; ctumor=1520; ccyst=1520; and cskin=1515.



FIG. 38.a shows a 120 by 120 pixel image of [c0/c(x,y)−1] for the true object. FIG. 38.b shows a 120 by 120 pixel image of [c0/c(x,y)−1] reconstructed using the straight line, time of flight CT algorithm. FIG. 38.c shows a 120 by 120 pixel image made by the new fast parabolic algorithm. Note the big improvement over the CT method.


Note the greater accuracy of the image from the new algorithm (right) vs. the time of flight (i.e. CT) image (center). Also note that using an array (FIG. 38.c) produces a much better image than a single piston receiver (FIG. 37.b).


Conclusion for Example 4: Increasing the receiver aperture does indeed improve spatial resolution, as predicted by theory. The new algorithm is stable, accurate, and fast, and can use this increased data set size. The results of the four (4) Examples are self consistent, and a speed-up factor of 256 over previous parabolic methods is verified.


Transition to 3-D Imaging


The transition from 2-D algorithms (either standard parabolic or fast multi-pixel parabolic) to 3-D is straightforward and only involves replacing the 1-D convolution kernel with a 2-D convolution kernel and other obvious dimensional changes. We have reprogrammed our standard 2-D parabolic algorithm to make 3-D inverse scattering images. The image pixel space for the following test case was 150 by 150 in the (x,y) tomographic plane with 60 stacked planes (z axis direction). FIGS. 39.a and 39.b show the results of this simulation.



FIG. 39.a shows the 3-D breast model with the true speed of sound on a z-x plane at y=75. In particular, the image is given by: speed of sound c(x, 75, z); positive contrast=4.6%; negative contrast=5.4%; gray scale limits are max black=−6% and max white=8%. Note that the three simulated tumors near the center are spatially resolved and reconstructed accurately.


We conclude from these images that 3-D imaging is a straightforward modification of 2-D imaging.


Value of incorporating reflectivity imaging by reflection tomography.


We show next that incorporating reflection tomography imaging with the inverse scattering algorithm is straightforward and valuable. The speed of sound image made by inverse scattering can be used with ray tracing (either a straight ray approximation or actual ray tracing) as a correction map or correction table to adjust the delay times for back propagation of data from a common transmission and reception transducer array. This process is well known in the art. The main difficulty to date has been obtaining an accurate speed of sound image with which to proceed with the mapping process to generate the correction table. The inverse scattering method fulfills this requirement almost perfectly. We show the result of straight line ray tracing through a speed of sound image made by time of flight tomography to correct a reflection tomography image of human breast tissue in a thin plastic cylinder.
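As a concrete illustration of the correction-table idea, the sketch below computes the straight-ray delay correction for one vertical ray through a speed-of-sound image. Function and variable names are hypothetical; the slowness-difference integral is the standard straight-ray approximation described above.

```python
import numpy as np

# Sketch of a straight-ray correction-table entry (names hypothetical):
# for a vertical ray through column j of a speed-of-sound image c with
# pixel size dx (m), the one-way delay correction relative to the
# background speed c0 is the integral of the slowness difference.
def straight_ray_delay(c, j, dx, c0=1500.0):
    """One-way time-delay correction in seconds for a straight vertical ray."""
    return float(np.sum(1.0 / c[:, j] - 1.0 / c0) * dx)

c = np.full((100, 100), 1500.0)
c[40:60, :] = 1560.0                      # a fast inclusion 20 pixels thick
dt = straight_ray_delay(c, 50, 0.3e-3)    # negative: wave arrives early
```

For a real correction table this would be evaluated per receiver and per beam direction, with bent rays if refraction is modeled.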



FIG. 40.a shows an image made with a commercial B-scan system of the cancerous breast tissue in the cylinder. A 3.5 MHz ultrasound sector probe was used while the phantom was submerged under water. The commercial scanner used 32 beam former channels.



FIG. 40.b shows a reconstructed image made by reflection tomography of the same breast tissue sample at approximately the same level, made with a water bath reflection tomography scanner at 5 MHz. This image was reconstructed from 12 vertical source/receiver pairs on a 420×420 grid using 0.25 mm wide pixels. Corrections in time delays were made for speed of sound variations (but not for refraction effects). It takes approximately 1.5 hr of CPU time to compute this image on a 27 MFLOP computer. The breast is encapsulated in a plastic container whose walls are clearly visible, and the bright spots on the walls are four marker strings. The greatly improved quality of the reflection tomography image is obvious.


Invention Disclosures for the Parabolic Inverse Scattering Algorithm


—Large (2λ) Pixels


In our original parabolic disclosure (example 12 of APPARATUS AND METHOD FOR IMAGING WITH WAVEFIELDS USING INVERSE SCATTERING TECHNIQUES, S. A. Johnson et al.), the forward propagation of the wavefield from f(x,y) to f(x,y+Δ), where Δ is the pixel size, was performed by Fourier transforms via:

f(x,y+Δ)=Fλ→x−1{p(λ)Fx′→λ{f(x′,y)}}  101.1

where F and F−1 denote the forward and inverse Fourier transforms and the exact propagator, p, is given by:
p(λ)=exp(−iΔ sqrt(k02−λ2))  101.2

This propagator is exact. By the convolution theorem, 101.1 can be rewritten as:

f(x,y+Δ)=∫f(x′,y)p(x−x′)dx′  101.3
where
p(x)=Fλ→x−1{p(λ)}  101.4


In other words, the propagator can be implemented by convolution. The question is, for a discrete version of these equations, which is faster: FFTs and multiplication in the frequency domain, or convolution in the spatial domain? FIG. 41 shows our discrete propagator, p(iΔ), for 3 cases: Δ=2λ, Δ=λ, and Δ=λ/2, where λ is the wavelength.


Notice that for Δ=λ/2, if we assume that we can cut off the propagator at i=±20, then the implementation of the propagator by a direct convolution sum would require N*41 operations. This convolution length is probably too long to be any faster than the FFT implementation. However, the Δ=λ case can be cut off at i=±6 and the Δ=2λ case can be cut off at i=±3. In particular, we have found that the Δ=2λ case can be implemented with a cut off at i=±3 with no discernible error relative to the FFT implementation, provided that the propagator is multiplied by a Blackman window:

p̂(i)=p(i)(0.42+0.5 cos(πi/(nfilt+1))+0.08 cos(2πi/(nfilt+1))), i=−nfilt, . . . , nfilt

where nfilt=3 for the Δ=2λ case. Without this filter multiplication, the parabolic propagation was found to be unstable. Apparently, even though the neglected values are small, simple truncation of the propagator results in at least one eigenvalue slightly greater than 1, which causes exponential growth over large numbers of parabolic steps.


We had originally thought that the parabolic algorithm required Δ≦λ/2 in order to maintain accuracy for scattering from biological tissue contrasts. This is certainly true if one requires scattering at angles out to 45 deg. from the incident field propagation direction. However, for computation of all scattering in a ±15 deg. cone, Δ=2λ sampling is sufficient. Of course, in a typical medical imaging problem there will be scattering at large angles; however, suppose that the receivers are 2λ segments. Then the receivers do not see this large angle scattering energy anyway. Thus, the Δ=2λ algorithm will accurately compute the received data. Since the pixel size is now larger, the number of views needed for a complete dataset is also reduced by a factor of 4 relative to the Δ=λ/2 algorithm.
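A Nyquist-style estimate (our plausibility check, not necessarily the authors' derivation) makes the ±15 deg. figure concrete: transverse sampling at interval Δ supports transverse wavenumbers up to π/Δ, so sin(θmax)=λ/(2Δ), which gives about 14.5 deg. for Δ=2λ.

```python
import math

# Nyquist-style plausibility check (ours, not necessarily the authors'
# derivation): transverse sampling at interval D supports transverse
# wavenumbers up to pi/D, so sin(theta_max) = lambda / (2 D).
def max_scatter_angle_deg(pixel_size_in_wavelengths):
    """Largest scattering angle representable at the given transverse
    pixel size, expressed in wavelengths."""
    return math.degrees(math.asin(min(1.0, 0.5 / pixel_size_in_wavelengths)))

theta_2lam = max_scatter_angle_deg(2.0)   # ~14.5 deg, consistent with the
                                          # +/-15 deg cone quoted above
```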


Compared with the previous Δ=λ/2 algorithm, the speedup in compute time is given by: 4 times fewer views × 16 times fewer pixels × the speedup factor of the short convolution vs. the FFT propagator.


Even assuming that the short convolution buys only a factor of 4 over the FFT (it is likely better than this for nfilt=3), the algorithm is still 256 times faster than the previous algorithm for the same sized object at the same frequency.


Pseudo Code Definitions of the Plane Wave Parabolic Algorithms for the Frequency Domain


Parameter Definitions


c0=background medium speed of sound in m/s


* denotes the complex conjugate

|x|2=x x*

∥array∥2=Σi,j,... |array(i,j,...)|2, where i,j,... are the indices of the array

⟨array1,array2⟩=Σi,j,... array1(i,j,...)* array2(i,j,...)

⊗ denotes discrete convolution

Δ=pixel dimension


nx=x-axis array dimension (x is normal to the parabolic propagation direction)


ny=y-axis array dimension (y is parallel to the parabolic propagation direction)


nr=the number of receivers per view


nv=the number of plane wave views from 0 to 360 deg.


nf=the number of frequencies used


nfilt defines the width of the short convolution propagator: the values f(n−nfilt) to f(n+nfilt) are used to propagate one step for field index f(n).


ne defines the number of samples used to define a receiver element. Each receiver extends from −ne to ne, i.e., is 2ne+1 samples of width Δ long. For a point receiver, ne is set equal to 0.


Propagator Definition


The following pseudo code defines the short convolution propagator generation.

δγ = 2π/(nx Δ)
for lf = 1,...,nf
  for n = 1,...,nx
    γ = (n−1) δγ
    if n > nx/2, γ = (n−nx−1) δγ
    if γ < k0(lf) then
      temp(n) = exp(−i Δ sqrt(k0(lf)2 − γ2))
    else
      temp(n) = exp(−Δ sqrt(γ2 − k0(lf)2))
    endif
  next n
  (The next step is done by the FFT algorithm:)
  for m = 1,...,nx
    temp2(m) = 0
    for n = 1,...,nx
      temp2(m) = temp2(m) + temp(n) exp(i 2π(n−1)(m−1)/nx)/nx
    next n
  next m
  for n = 0,...,nfilt
    wind = 0.42 + 0.5 cos(πn/(nfilt+1)) + 0.08 cos(2πn/(nfilt+1))
    prop(n,lf) = temp2(n+1) wind
  next n
  for n = −nfilt,...,−1
    prop(n,lf) = prop(−n,lf)
  next n
next lf


The window (wind=Blackman in the present case) applied above to the short convolution propagator is essential for stability.
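The propagator generation above can be transcribed into Python roughly as follows. This is a sketch: the function name and the |γ|&lt;k0 form of the branch test are our choices; the windowing and tap count follow the pseudo code.

```python
import numpy as np

# Python transcription of the propagator-generation pseudo code (sketch):
# build the exact spectral propagator, inverse-FFT to the spatial domain,
# keep 2*nfilt+1 taps, and apply the Blackman window (essential for
# stability). Returns prop indexed -nfilt..nfilt as prop[k + nfilt].
def make_prop(nx, delta, k0, nfilt=3):
    dgam = 2.0 * np.pi / (nx * delta)
    n = np.arange(nx)
    gam = np.where(n > nx // 2, (n - nx) * dgam, n * dgam)
    spec = np.where(
        np.abs(gam) < k0,   # propagating vs. evanescent branch
        np.exp(-1j * delta * np.sqrt(np.clip(k0**2 - gam**2, 0.0, None))),
        np.exp(-delta * np.sqrt(np.clip(gam**2 - k0**2, 0.0, None))),
    )
    temp2 = np.fft.ifft(spec)                   # spatial-domain propagator
    i = np.arange(nfilt + 1)
    wind = (0.42 + 0.5 * np.cos(np.pi * i / (nfilt + 1))
                 + 0.08 * np.cos(2.0 * np.pi * i / (nfilt + 1)))
    half = temp2[:nfilt + 1] * wind             # windowed taps 0..nfilt
    return np.concatenate([half[:0:-1], half])  # mirror: prop(-n) = prop(n)
```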


Definition of the Propagator Operation


The Notation:

temp→temp⊗prop(.,lf) will henceforth be used to denote the following computation:

for n = 1,...,nx
  temp2(n) = temp(n)
next n
for n = −nfilt+1,...,0
  temp2(n) = temp(1−n)
next n
for n = nx+1,...,nx+nfilt
  temp2(n) = temp(2nx−n+1)
next n
for m = 1,...,nx
  temp(m) = 0
  for n = m−nfilt,...,m+nfilt
    temp(m) = temp(m) + temp2(n) prop(m−n,lf)
  next n
next m

Note that this implementation applies the boundary condition for an infinite impedance boundary (f=0) at the array ends 1 and nx.
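A Python transcription of this short-convolution step might look as follows (a sketch; the names are ours, while the mirrored end extension and tap indexing follow the pseudo code):

```python
import numpy as np

# Sketch of the temp -> temp (x) prop(., lf) operation defined above:
# the field is extended by mirroring at both ends, as in the pseudo code,
# and the short (2*nfilt+1 tap) convolution is applied.
def apply_prop(temp, prop):
    """Propagate field temp one step with the short-convolution kernel
    prop, stored with index k = -nfilt..nfilt at prop[k + nfilt]."""
    nfilt = (len(prop) - 1) // 2
    nx = len(temp)
    # extended field temp2(n) for n = -nfilt+1 .. nx+nfilt (1-based)
    ext = np.empty(nx + 2 * nfilt, dtype=complex)
    ext[:nfilt] = temp[:nfilt][::-1]            # temp2(n) = temp(1-n)
    ext[nfilt:nfilt + nx] = temp
    ext[nfilt + nx:] = temp[nx - nfilt:][::-1]  # temp2(n) = temp(2nx-n+1)
    out = np.zeros(nx, dtype=complex)
    for k in range(-nfilt, nfilt + 1):          # out(m) += prop(k) temp2(m-k)
        out += prop[k + nfilt] * ext[nfilt - k:nfilt - k + nx]
    return out
```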


Definition of the Rotation Operator


The Notation

tr=Rot(t,φ)

will henceforth denote the rotation of the function in array t(nx,ny) by φ radians, followed by placement into array tr(nx,ny) via bilinear interpolation.


Definition of the Receiver Location Array


The nr receivers are located at the points mr(lr), lr=1, . . . , nr where each mr(lr) is an element of [1+ne, . . . , nx−ne]


Definition of the Receiver Sensitivity Function


Receiver lr is centered at point mr(lr) and extends from mr(lr)−ne to mr(lr)+ne. The array wr(−ne:ne,nf) is set to be the receiver sensitivity function on the support of the receiver for each frequency 1, . . . , nf.


Code 1. Frequency domain, plane wave source, parabolic algorithm

common k0,Δ,prop,wr,mr
common /gamma/ γ
complex γ(nx,ny), grad(nx,ny), tg(nx,ny), p(nx,ny), f(nr,nv,nf), r(nr,nv,nf), tr(nr,nv,nf), wr(−ne:ne,nf), prop(−nfilt:nfilt,nf)
real k0(nf), freq(nf)
integer mr(nr)
set Δ = pixel size in m
set freq(lf), lf=1,...,nf in Hz
k0(lf) = 2π freq(lf)/c0, lf=1,...,nf
set the prop() array as described in the “Propagator Definition” section above
set mr as described in the “Definition of the Receiver Location Array” section above
set wr as described in the “Definition of the Receiver Sensitivity Function” section above
select a residual termination tolerance ε1
select a gradient magnitude termination tolerance ε2
select a maximum number of iterations, nrp
read the data f(nr,nv,nf)
resdata = ∥f∥
set γ equal to a starting guess
call scatter(γ,r)
r → r − f
res0 = ∥r∥/resdata
call jacobian_adjoint(r,grad)
r1 = ∥grad∥2
p → −grad
g0 = ∥grad∥
for lrp = 1,...,nrp
  call jacobian(p,tr)
  α = −Re⟨r,tr⟩/∥tr∥2
  γ → γ + α p
  call scatter(γ,r)
  r → r − f
  res(lrp) = ∥r∥/resdata
  call jacobian_adjoint(r,tg)
  g(lrp) = ∥tg∥/g0
  if res(lrp) < ε1, go to line 100
  if g(lrp) < ε2, go to line 100
  β = Re⟨tg, tg−grad⟩/r1
  if β < 0, β → 0
  r1 = ∥tg∥2
  p → −tg + β p
  grad → tg
next lrp
100 write γ to a datafile
stop

subroutine scatter(γ,f)
common k0,Δ,prop,wr,mr
complex t(nx,ny), γ(nx,ny), temp(nx), f(nr,nv,nf), wr(−ne:ne,nf), prop(−nfilt:nfilt,nf), tr(nx,ny)
real k0(nf)
integer mr(nr)
for lf = 1,...,nf
  t(n,m) = exp(−i k0(lf) Δ γ(n,m)), n=1,...,nx, m=1,...,ny
  for lv = 1,...,nv
    φ = 2π(lv−1)/nv
    tr = Rot(t,φ)
    temp(n) = 1, n=1,...,nx
    for m = 1,...,ny
      temp → temp⊗prop(.,lf)
      temp(n) = temp(n) tr(n,m), n=1,...,nx
    next m
    for lr = 1,...,nr
      f(lr,lv,lf) = 0
      for le = −ne,...,ne
        f(lr,lv,lf) = f(lr,lv,lf) + temp(mr(lr)+le) wr(le,lf)
      next le
    next lr
  next lv
next lf
return

subroutine jacobian(δγ,δf)
common k0,Δ,prop,wr,mr
common /gamma/ γ
complex t(nx,ny), γ(nx,ny), temp(nx), temp2(nx), wr(−ne:ne,nf), prop(−nfilt:nfilt,nf), tr(nx,ny), δtr(nx,ny), δf(nr,nv,nf)
real k0(nf)
integer mr(nr)
for lf = 1,...,nf
  t(n,m) = exp(−i k0(lf) Δ γ(n,m)), n=1,...,nx, m=1,...,ny
  coeff = −i k0(lf) Δ
  for lv = 1,...,nv
    φ = 2π(lv−1)/nv
    tr = Rot(t,φ)
    δtr = Rot(δγ,φ)
    temp(n) = 1, n=1,...,nx
    temp2(n) = 0, n=1,...,nx
    for m = 1,...,ny
      temp → temp⊗prop(.,lf)
      temp2 → temp2⊗prop(.,lf)
      for n = 1,...,nx
        temp2(n) = (temp2(n) + temp(n) δtr(n,m) coeff) tr(n,m)
        temp(n) = temp(n) tr(n,m)
      next n
    next m
    for lr = 1,...,nr
      δf(lr,lv,lf) = 0
      for le = −ne,...,ne
        δf(lr,lv,lf) = δf(lr,lv,lf) + temp2(mr(lr)+le) wr(le,lf)
      next le
    next lr
  next lv
next lf
return

subroutine jacobian_adjoint(δf,δγ)
common k0,Δ,prop,wr,mr
common /gamma/ γ
complex t(nx,ny), δt(nx,ny), γ(nx,ny), temp(nx), wr(−ne:ne,nf), prop(−nfilt:nfilt,nf), tr(nx,ny), δtv(nx,ny), δtvr(nx,ny), δf(nr,nv,nf)
real k0(nf)
integer mr(nr)
δt(n,m) = 0, n=1,...,nx, m=1,...,ny
for lf = 1,...,nf
  t(n,m) = exp(−i k0(lf) Δ γ(n,m)), n=1,...,nx, m=1,...,ny
  coeff = −i k0(lf) Δ
  for lv = 1,...,nv
    φ = 2π(lv−1)/nv
    tr = Rot(t,φ)
    temp(n) = 1, n=1,...,nx
    for m = 1,...,ny
      temp → temp⊗prop(.,lf)
      for n = 1,...,nx
        δtv(n,m) = temp(n)
        temp(n) = temp(n) tr(n,m)
      next n
    next m
    temp(n) = 0, n=1,...,nx
    for lr = 1,...,nr
      for le = −ne,...,ne
        temp(mr(lr)+le) = temp(mr(lr)+le) + δf*(lr,lv,lf) wr(le,lf)
      next le
    next lr
    for m = ny,...,1
      for n = 1,...,nx
        δtv(n,m) = δtv(n,m) temp(n)
        temp(n) = temp(n) tr(n,m)
      next n
      temp → temp⊗prop(.,lf)
    next m
    δtvr = Rot(δtv,−φ)
    δt(n,m) = δt(n,m) + δtvr(n,m) tr(n,m) coeff, n=1,...,nx, m=1,...,ny
  next lv
next lf
δγ(n,m) = δt*(n,m), n=1,...,nx, m=1,...,ny
return
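Stripped of the parabolic machinery, the outer loop of Code 1 is a Polak-Ribiere conjugate gradient minimization of ∥f(γ)−fdata∥2. The sketch below (ours, not the authors' code) keeps that loop structure but abstracts scatter, jacobian, and jacobian_adjoint into supplied callables:

```python
import numpy as np

# Sketch of the outer loop of Code 1: Polak-Ribiere conjugate gradient
# minimization of ||scatter(gamma) - f_data||^2, with the forward model
# and its Jacobian/adjoint supplied as callables.
def cg_invert(scatter, jacobian, jacobian_adjoint, f_data, gamma0,
              nrp=50, eps1=1e-8, eps2=1e-8):
    gamma = gamma0.copy()
    resdata = np.linalg.norm(f_data)
    r = scatter(gamma) - f_data
    grad = jacobian_adjoint(r)
    r1 = np.vdot(grad, grad).real
    p = -grad
    g0 = np.linalg.norm(grad)
    for _ in range(nrp):
        tr = jacobian(p)
        alpha = -np.vdot(tr, r).real / np.vdot(tr, tr).real  # exact line step
        gamma = gamma + alpha * p
        r = scatter(gamma) - f_data
        tg = jacobian_adjoint(r)
        if np.linalg.norm(r) / resdata < eps1 or np.linalg.norm(tg) / g0 < eps2:
            break
        beta = max(0.0, np.vdot(tg, tg - grad).real / r1)    # Polak-Ribiere
        r1 = np.vdot(tg, tg).real
        p = -tg + beta * p
        grad = tg
    return gamma
```

With the parabolic routines of Code 1 substituted for the three callables this is the same iteration; for a linear forward model the loop reduces to classical linear conjugate gradients.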


The Enveloped Time Domain Parabolic Algorithm


The second invention disclosure for the parabolic inverse scattering algorithm is a means of avoiding local minima when imaging an object with many π phase shifts and when starting from a zero (or a somewhat poor) initial guess. If one has only one frequency and the phase shift through the object is less than π radians, then one can converge to the global, correct solution from a starting guess of zero. If the phase shift is greater than π, then the algorithm will converge to a local minimum if a zero starting guess is used. Extrapolating this, if one has a bandwidth from fmin to fmax, then convergence to the correct solution will occur from a zero starting guess only if the phase shift through the object is less than π at fmin. For biological tissue at ultrasound frequencies, say 5 MHz, phase shifts on the order of 10π are encountered. This means that fmin needs to be on the order of 1/10th of fmax. This is not possible with the typically 50% bandwidth transducers available at present.


In order to get around this difficulty, we have examined the use of starting guesses computed by straight line time of flight imaging. In this approach the time delay of the acoustic pulse through the body is extracted from the data. Assuming that the energy took a straight line path through the body to each receiver (not a good assumption, due to ray bending (refraction) and diffraction effects), the speed of sound image can be reconstructed by well known x-ray CT type algorithms. We have found that these starting guesses will work up to a point, but as we increase the size and contrast of the body up to that of tissue, we find that the time of flight image is not sufficiently close to the truth to allow inverse scattering imaging with 50% bandwidth transducers.


One may reasonably ask: why does the time of flight algorithm work with 50% bandwidth transducers? The answer is that, even though the time pulse that travels through the target has only 50% bandwidth, it is still possible to deduce the time delays.



FIG. 32 shows the time signal computed for a 2.4 cm diameter, 1520 m/s cylinder illuminated by a plane wave for 20 frequencies from 2.5 to 5 MHz, at a receiver located at the center of the receiving array. The frequency domain data for this case is ftotal(ω)/finc(ω), which was then transformed to time. Note that this division by finc in the frequency domain removes the time delay due to the propagation from source to receiver through the background medium (water, 1500 m/s), and so the center of the time pulse gives the time shift due to the cylinder.


Suppose now that we attempted to find the center of the waveform in FIG. 32 with an optimization algorithm (say Newton's method) starting with t=0 (sample 51) as the starting guess. Further suppose that we defined the center as coincident with the maximum wave magnitude.


Clearly we would find the negative peak near sample 51, a local maximum. This is analogous to what happens with the parabolic inversion algorithm starting with a zero guess. The time of flight algorithm, on the other hand, finds the time delay by first taking the envelope of the waveform. The envelope squared of the signal shown in FIG. 32 is shown in FIG. 33.


It is now a simple matter to find the time delay to the waveform center with either an optimization algorithm or by simply detecting the index of the maximum value.
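The envelope-based delay pick can be demonstrated in a few lines: synthesize a band-limited frequency-domain signal with a known delay, form the complex analytic time waveform from its positive-frequency bins only, and take the argmax of the envelope squared. The sizes, band indices, and delay below are illustrative, not taken from the laboratory data.

```python
import numpy as np

# Demonstration (illustrative sizes) of the envelope-squared delay pick:
# a band-limited spectrum with a known delay is placed in the positive-
# frequency bins only, giving a complex analytic time waveform whose
# envelope-squared peak sits at the true delay.
nt, nmin, nmax, delay = 256, 26, 51, 40          # delay in samples
lf = np.arange(nmin, nmax + 1)
center, band = 0.5 * (nmin + nmax), 0.5 * (nmax - nmin)
wt = (0.42 + 0.5 * np.cos(np.pi * (lf - center) / band)
           + 0.08 * np.cos(2.0 * np.pi * (lf - center) / band))
spec = np.zeros(nt, dtype=complex)
spec[lf] = wt * np.exp(-2j * np.pi * lf * delay / nt)  # delayed pulse
analytic = np.fft.ifft(spec) * nt                # complex analytic waveform
env2 = (analytic * analytic.conj()).real         # envelope squared
picked = int(np.argmax(env2))                    # recovers the true delay
```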


The preceding discussion suggests that the parabolic inversion algorithm might avoid local minima, even with 50% bandwidth transducers, if one operated on the envelope of the time waveform as the data. It is a simple matter to take the frequency domain output of the parabolic algorithm, transform to time, and then take the envelope. Unfortunately, the envelope operator is not differentiable and the parabolic inversion algorithm relies on gradient direction optimization (conjugate gradients). This difficulty can, however, be overcome by making the square of the envelope the time domain output data. This function is differentiable and a conjugate gradient optimization algorithm can be applied. Specifically, the new functional to be minimized is:

F[γ]=∥env(fparabolic(γ,t))2−env(fdata(t))2∥2  102.1

(with multiple receivers and views of course). Derivation of the Jacobian and its adjoint for the new functional is straightforward. The resulting forward scattering calculations and the Jacobian and its adjoint are best viewed in the following pseudo code form.


Pseudo Code for the Time Domain Parabolic Algorithm:


For the time domain code, we first set the parameter Δf=the frequency domain sample interval, so that the total time period T=1/Δf is sufficient to contain the minimum and maximum time shifts in the data. We then specify the minimum and maximum frequencies by their indices nmin and nmax:


fmin=minimum frequency=nmin Δf,


fmax=maximum frequency=nmax Δf,


using the desired fmax and fmin.


We then select the number of time steps, nt>nmax, and set the time domain sample interval as:


Δt=1/(nt Δf). Then nf, the number of frequencies, is set equal to nmax−nmin+1, and the frequency list is set as:

freq(lf)=(lf−1+nmin)Δf, lf=1, . . . ,nf
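A minimal Python sketch of this grid setup, for the illustrative choice fmin=2.5 MHz, fmax=5 MHz, Δf=50 kHz (the specific values are ours, not from the text):

```python
import numpy as np

# Sketch of the time/frequency grid setup described above, for the
# illustrative choice fmin = 2.5 MHz, fmax = 5 MHz, df = 50 kHz.
df = 50e3                            # frequency sample interval (Hz)
T = 1.0 / df                         # total time window, 20 microseconds
nmin, nmax = 50, 100                 # fmin = nmin*df, fmax = nmax*df
fmin, fmax = nmin * df, nmax * df
nt = 128                             # number of time steps, nt > nmax
dt = 1.0 / (nt * df)                 # time-domain sample interval
nf = nmax - nmin + 1
freq = (np.arange(nf) + nmin) * df   # freq(lf) = (lf-1+nmin)*df
```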


Now, suppose that the array fc(nr,nv,nf) contains the frequency domain total field computed by the parabolic algorithm for nr receivers and nv views, and let the array fiω(nr,nf) contain the incident field for the parabolic algorithm. We then transform this frequency domain data to the envelope squared of the time domain data via the operations:

for lv = 1,...,nv
  for lr = 1,...,nr
    fctemp(i) = 0, i = 1,...,nt
    for lf = 1,...,nf
      fctemp(lf+nfmin) = (fc(lr,lv,lf)/fiω(lr,lf)) wt(lf)
    next lf
    fctemp → FFT−1nt{fctemp}
    for n = 1,...,nt
      fω(lr,lv,n) = fctemp(n)
      f(lr,lv,n) = fctemp(n) fctemp*(n)
    next n
  next lr
next lv


Now f(nr,nv,nt) contains the time envelope squared of the simulated signals. Note that we have also computed the array fω(nr,nv,nt), which contains the complex analytic time domain waveforms. This array is needed for the Jacobian and Jacobian adjoint calculations. The frequency domain filter wt(nf) is given by:

freqc = (fmax+fmin)/2
freqb = (fmax−fmin)/2
wt(lf) = (0.42 + 0.5 cos(π(freq(lf)−freqc)/freqb) + 0.08 cos(2π(freq(lf)−freqc)/freqb)) exp(−i 2π freq(lf) Δt (nt/2)), lf=1,...,nf


and is applied to prevent sidelobes in the envelope and to shift the time=0 index in the array from 1 to nt/2+1.


In order to compute the gradient (Jacobian adjoint operating on the residual), we first compute the time domain residual array:

r(lr,lv,lt)=fdata(lr,lv,lt)−fs(lr,lv,lt), lt=1, . . . ,nt, lv=1, . . . ,nv, lr=1, . . . ,nr


We then compute the frequency domain residual, rω(nr,nv,nf), by the calculation:

for lv = 1,...,nv
  for lr = 1,...,nr
    for n = 1,...,nt
      fctemp(n) = 2 r(lr,lv,n) fω(lr,lv,n)
    next n
    fctemp → FFTnt{fctemp}
    for lf = 1,...,nf
      rω(lr,lv,lf) = fctemp(lf+nfmin) (wt(lf)/fiω(lr,lf))*
    next lf
  next lr
next lv


Once the frequency domain residual array is computed, we then apply the original Jacobian adjoint of the frequency domain parabolic algorithm:

∇F=JH•rω, where J is the frequency domain parabolic Jacobian and the superscript H denotes the Hermitian adjoint.

The Jacobian operating on a perturbation to γ (which is also needed to perform a conjugate gradient optimization) is similarly computed.


Note on the Jacobian implementation: the operation of converting to time domain data, then taking the envelope and squaring it, is concatenated onto the operation of propagating the fields via the parabolic algorithm at the respective frequencies. It follows that the implementation of the Jacobian is obtained from the standard chain rule of vector calculus. This implementation is in fact given explicitly in the above pseudo code, but is restated here for further elucidation of the process.


CONCLUSION

We have implemented this enveloped time domain parabolic algorithm and have verified that it does indeed converge from a zero starting guess. The final image is somewhat degraded (lower resolution) relative to the frequency domain algorithm with a good starting guess, due to the loss of some information in the envelope operation. However, this code does produce a sufficiently accurate starting guess for the frequency domain algorithm, which can then refine the resolution of the image.


The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims
  • 1. A transmission wave field imaging method, comprising: transmitting an incident wave field into an object, the incident wave field propagating into the object and, at least, partially scattering; measuring a wave field transmitted, at least in part, through the object to obtain a measured wave field, the measured wave field based, in part, on the incident wave field and the object; and processing the measured wave field utilizing a recursive reconstruction algorithm to generate an image data set representing at least one image of the object.
  • 2. The method of claim 1, wherein the recursive reconstruction algorithm calculates a predicted total wave field utilizing a single independent variable, the single independent variable representing scatter potential.
  • 3. The method of claim 1, wherein the recursive reconstruction algorithm represents a gradient-based recursive algorithm that solves for an independent scatter potential variable, the gradient-based recursive algorithm treating the total wave field as a dependent variable depending upon the independent scatter potential variable.
  • 4. The method of claim 1, wherein the processing calculates a predicted total wave field based on the incident wave field and generates the image data set based on a comparison of the predicted total wave field and the measured wave field.
  • 5. The method of claim 1, wherein the transmitting includes transmitting multiple incident wave fields from associated multiple source positions.
  • 6. The method of claim 1, wherein the incident wave field includes multiple frequency components, the recursive reconstruction algorithm separately processing the multiple frequency components.
  • 7. The method of claim 1, wherein the incident wave field represents an ultrasound wave field and the method further comprises displaying the at least one image as at least two images co-displayed, the at least two images including a speed-of-sound image and an attenuation image.
  • 8. The method of claim 1, wherein the incident wave field represents an ultrasound wave field and the processing includes generating a speed-of-sound data set and an attenuation data set, the speed-of-sound data set representing speed of the ultrasound wave field through the object, the attenuation data set representing attenuation of the ultrasound wave field within the object.
  • 9. The method of claim 1, wherein the incident wave field comprises at least one of acoustic energy, elastic energy, electromagnetic energy and microwave energy.
  • 10. The method of claim 1, wherein the object is a breast, the method further comprising tilting a transmission source to direct the incident wave field toward a chest wall of a patient to image features close to the chest wall.
  • 11. The method of claim 1, wherein the transmitting and receiving are performed by a transmitter and receiver, respectively, the method further comprising moving at least one of the transmitter and receiver with respect to the object.
  • 12. The method of claim 1, further comprising repeating the transmitting and receiving at multiple slices within the object to generate a 3D volume of image data sets.
  • 13. The method of claim 1, further comprising displaying 2-D images representative of slices through the object.
  • 14. The method of claim 1, further comprising displaying a 3-D image of at least a portion of the object.
  • 15. The method of claim 1, wherein the measuring is performed at multiple positions about the object.
  • 16. A transmission wave field imaging method, comprising: transmitting an incident wave field into an object, the incident wave field propagating into the object and, at least, partially scattering; measuring a total wave field transmitted, at least in part, through the object to obtain a measured wave field, the measured wave field based, in part, on the incident wave field and the object; and processing the measured wave field utilizing a gradient-based reconstruction algorithm to generate an image data set representing at least one image of the object, the gradient-based algorithm solving for an independent scatter potential variable, the gradient-based algorithm treating the measured wave field as a dependent variable depending upon the independent scatter potential variable.
  • 17. A transmission wave field imaging method, comprising: transmitting an ultrasound incident wave field into an object, the incident wave field propagating into the object and, at least, partially scattering; measuring an ultrasound total wave field transmitted, at least in part through the object, the measured wave field being defined, in part, by the incident wave field and the object; obtaining speed-of-sound and attenuation values, the speed-of-sound values corresponding to a speed of the incident wave field through tissue, the attenuation values corresponding to an amount of attenuation of the incident wave field propagating through the tissue; constructing a speed-of-sound image from the speed-of-sound values; constructing an attenuation image from the attenuation values; and displaying the speed-of-sound image and the attenuation image.
  • 18. The method of claim 17, further comprising calculating a scattering potential based on the measured wave field, the attenuation image being representative of a component of the scattering potential.
  • 19. The method of claim 17, further comprising calculating a scattering potential based on the measured wave field, the speed-of-sound image being representative of a component of the scattering potential.
  • 20. The method of claim 17, wherein the speed-of-sound and attenuation values represent ranges of gray scale values for pixels comprising the speed-of-sound and attenuation images.
  • 21. The method of claim 17, further comprising utilizing an inverse scattering algorithm to reconstruct the speed-of-sound and attenuation images based on the measured wave field.
  • 22. The method of claim 17, wherein the processing calculates a predicted total wave field based on the incident wave field and the constructing generates the speed-of-sound and attenuation images based on a comparison of the predicted total wave field and the measured wave field.
  • 23. The method of claim 17, wherein the object is a breast, the method further comprising tilting a transmission source to direct the incident wave field toward a chest wall of a patient to image features close to the chest wall.
  • 24. The method of claim 17, wherein the transmitting and receiving are performed by a transmitter and receiver, respectively, the method further comprising moving at least one of the transmitter and receiver with respect to the object.
  • 25. The method of claim 17, further comprising repeating the transmitting and receiving at multiple slices within the object to collect a 3D volume of transmission data.
  • 26. The method of claim 17, wherein the speed-of-sound and attenuation images represent 2-D images of slices through the object.
  • 27. The method of claim 17, wherein the speed-of-sound and attenuation images represent 3-D images of at least a portion of the object.
  • 28. A method for displaying transmitted ultrasound breast images, comprising: obtaining a transmitted ultrasound data set representative of a measured wave field transmitted, at least in part, through an object, the measured wave field being defined, in part, by an ultrasound incident wave field, a scattering potential and the object; and obtaining speed-of-sound and attenuation values, the speed-of-sound values corresponding to a speed of ultrasound incident wave fields through tissue, the attenuation values corresponding to an amount of attenuation of ultrasound incident wave fields propagating through the tissue; constructing a speed-of-sound image from the speed-of-sound values; constructing an attenuation image from the attenuation values; and displaying the speed-of-sound image and the attenuation image.
  • 29. A transmission wave field imaging device, comprising: a transmitter for transmitting an incident wave field into an object, the incident wave field propagating into the object and, at least, partially scattering; a receiver measuring a total wave field transmitted, at least in part, through the object to obtain a measured wave field, the measured wave field based, in part, on the incident wave field and the object; and a processing module processing the measured wave field utilizing a recursive reconstruction algorithm to generate an image data set representing at least one image of the object.
  • 30. The device of claim 29, wherein the processing module comprises a plurality of processors operated in parallel.
  • 31. The device of claim 29, further comprising a PCI bus interconnecting the processing module and at least one of the transmitter and the receiver.
  • 32. The device of claim 29, wherein the processing module is implemented on one of a personal computer, a hand-held device, a multi-processor system and a network PC.
  • 33. The device of claim 29, wherein the transmitter operates in a pulse echo mode to perform reflective tomography.
  • 34. The device of claim 29, wherein the receiver is rotated about the object to obtain measured data at multiple receive positions around the object, the measured data defining the measured wave field.
  • 35. The device of claim 29, further comprising a display for displaying an image based on the image data set as at least one of a speed of sound image and an attenuation image.
  • 36. The device of claim 29, wherein at least one of the receiver and the transmitter includes transducer elements arranged in at least one of a circular array, a linear array, a 2D planar array and a checkerboard 2D array.
  • 37. The device of claim 29, wherein the transmitter sequentially transmits first and second incident wave fields into the object having different first and second fundamental frequencies, respectively.
  • 38. The device of claim 29, wherein the transmitter sequentially transmits a high frequency incident wave field and a low frequency incident wave field.
  • 39. A transmission wave field imaging system, comprising: a transmitter for transmitting an incident wave field into an object, the incident wave field propagating into the object and, at least, partially scattering; a receiver measuring a total wave field transmitted, at least in part, through the object to obtain a measured wave field, the measured wave field being defined, in part, by the incident wave field, a scattering potential and the object, the measured wave field including at least one of speed of sound information and attenuation information characterizing the object; a processing module for processing, on-site and in real-time, the measured wave field to generate an image of the object, the image comprising pixel values based on at least one of the speed of sound information and the attenuation information characterizing the object; and a display for displaying, on-site and in real-time, the image containing at least one of the speed of sound information and the attenuation information of the object.
  • 40. The system of claim 39, further comprising a breast scanner including the transmitter and receiver, the processing module generating the image substantially during a breast scan.
  • 41. The system of claim 39, wherein the processing module calculates a scattering potential from the measured wave field, the scattering potential including a real component and an imaginary component, the real component defining the speed of sound information and the imaginary component defining the attenuation information.
  • 42. The system of claim 39, further comprising a phase detector joined to the receiver, the phase detector outputting real and imaginary signal components.
  • 43. The system of claim 39, wherein the transmitter and receiver repeat the transmitting and receiving operations at multiple slices within the object to collect a 3D volume of data.
  • 44. The system of claim 39, wherein the display displays 2-D images representative of slices through the object.
  • 45. The system of claim 39, wherein the display displays a 3-D image of at least a portion of the object.
  • 46. The system of claim 39, further comprising a movable carriage supporting the transmitter and receiver, the movable carriage rotating about an axis of a cylindrical chamber that is configured to receive the object.
  • 47. The system of claim 39, further comprising a movable carriage supporting the transmitter and receiver, the movable carriage being vertically adjusted along an axis of a cylindrical chamber that is configured to receive the object.
  • 48. The system of claim 39, further comprising a fluid tank configured to receive at least one of the object, the transmitter and receiver.
PRIORITY CLAIM

This is a continuation of U.S. patent application Ser. No. 10/615,569, filed Jul. 7, 2003, which is a continuation of U.S. patent application Ser. No. 10/024,035, filed on Dec. 17, 2001, now U.S. Pat. No. 6,636,584, which is a continuation of U.S. patent application Ser. No. 09/471,106, filed on Dec. 21, 1999, now U.S. Pat. No. 6,587,540, which is a continuation-in-part of U.S. patent application Ser. No. 08/706,205, filed on Aug. 29, 1996, now abandoned, which is a continuation-in-part of U.S. patent application Ser. No. 08/486,971, filed on Jun. 22, 1995, now abandoned, which is a continuation-in-part of U.S. patent application Ser. No. 07/961,768, filed on Oct. 14, 1992, now U.S. Pat. No. 5,588,032, all of which are herein incorporated by reference.

Continuations (3)
Number Date Country
Parent 10615569 Jul 2003 US
Child 11222541 Sep 2005 US
Parent 10024035 Dec 2001 US
Child 10615569 Jul 2003 US
Parent 09471106 Dec 1999 US
Child 10024035 Dec 2001 US
Continuation in Parts (3)
Number Date Country
Parent 08706205 Aug 1996 US
Child 09471106 Dec 1999 US
Parent 08486971 Jun 1995 US
Child 08706205 Aug 1996 US
Parent 07961768 Oct 1992 US
Child 08486971 Jun 1995 US