A non-invasive three-dimensional optical video of patient tissue such as the brain, acquired using multiple wavelengths, could reveal useful information including real-time spectroscopic information about the imaging volume, which can be used to produce highly-specific quantitative maps of many different biomarkers in parallel. Such maps can represent tissue parameters such as blood oxygenation, glucose, clots, swelling, and neuron firing; see, for example, "In Vivo Observations of Rapid Scattered Light Changes Associated with Neurophysiological Activity", Rector et al., in In Vivo Optical Imaging of Brain Function (2009), which is incorporated herein by reference in its entirety. This could lead to new diagnostic approaches for many medical conditions such as traumatic brain injury and tumors, and could also provide maps of brain activation patterns, with implications for psychiatric diagnostics, communication systems for paraplegics and others, control of prosthetics, and brain-machine interfaces more generally.
In certain spectral windows, particularly including red and near-infrared (NIR), light from non-invasive external light sources can penetrate through the skin and skull into the target tissue (e.g., the brain) deeply enough to yield meaningful data. Unfortunately, red and NIR light undergoes multiple scattering, which obscures the spatial structure of the target tissue and thus makes it very challenging to obtain a high-resolution spatial map. There is currently no good solution to this problem.
Embodiments of the present invention are directed to computer-implemented arrangements for multi-frequency ultrasonically-encoded optical tomography of target tissue such as a brain of a patient. A light source is configured for generating light input signals to the target tissue. An ultrasound transducer array is configured for placement on the outer surface of the target tissue and has multiple ultrasound transducers each generating a different time-dependent waveform to form a plurality of ultrasound input signals to an imaging volume within the target tissue. An optical sensor is configured for sensing scattered light signals from the imaging volume, wherein the scattered light signals include light input signals modulated by acousto-optic interactions with the ultrasound input signals. Data storage memory is configured for storing optical tomography software, the scattered light signals, and other system information. An optical tomography processor includes at least one hardware processor coupled to the data storage memory and configured to execute the optical tomography software including instructions to perform spectral analysis of the scattered light signals to create a three-dimensional image map representing biomarker characteristics of the target tissue.
In further specific embodiments, the spectral analysis of the scattered light signals includes heterodyning the scattered light signals with a local oscillator light signal corresponding to frequency-shifted light from the light source. The light source may be configured for generating non-invasive light input signals to the target tissue and for generating light input signals at a plurality of different wavelengths (e.g., red and/or infrared light), and the light source may include a spatial light modulator device. The different time-dependent waveforms may represent different ultrasound frequencies.
The following discussion and examples are set forth in terms of red/infrared imaging of the brain, but the various techniques discussed may be useful for any medium which is highly scattering to light. Other specific applications include other tissues (e.g., breast cancer diagnostics), imaging in turbid water, microwave probing of the brain and other tissues, microwave probing of pipes and other infrastructure, and so on. Also, although the discussion is set forth using terms like "light" and "optical", these terms will be understood to refer generically to electromagnetic radiation, which could be of any specific frequency from ultraviolet to radio.
The multi-frequency tomography approach illustrated in
An ultrasound transducer array 302 is configured for placement on the outer surface of the target tissue and has multiple ultrasound transducers 303 each operating at a different ultrasound frequency to generate ultrasound input signals to an imaging volume within the target tissue 102. The ultrasound transducer array 302 might specifically have, for example, 10,000 individual ultrasound transducers 303 arranged in a 100×100 square. There may be as few as 10 total ultrasound transducers 303, or as many as 100,000, and they could be arranged in various possible shapes such as a square, circle, annulus, several patches, etc. The spacing between the ultrasound transducers 303 may usefully be related to half the ultrasound wavelength (typically 1 mm or less). A different continuous-wave ultrasound frequency is applied to each individual ultrasound transducer 303. For example, one ultrasound transducer 303 may be vibrating at 5.0000 MHz, another might be at 5.0001 MHz, and so on. For clarity of discussion, ultrasound scattering, refraction, etc. are omitted here and it is assumed that each ultrasound transducer 303 creates clean, smooth, outgoing spherical wavefronts in the target tissue 102. (The effects of ultrasound scattering, refraction, etc. are discussed further below.)
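As a concrete illustration of this frequency assignment, the following minimal sketch (in Python) generates one unique continuous-wave drive frequency per transducer; the array size, base frequency, and 100 Hz step follow the example above, while the soft-tissue sound speed and all variable names are illustrative assumptions added here:

```python
import numpy as np

# Example parameters from the discussion above: a 100x100 array in which
# every transducer is driven at its own continuous-wave frequency.
N_SIDE = 100             # transducers per side (100 x 100 = 10,000 total)
F_BASE = 5.0000e6        # base ultrasound frequency, Hz
F_STEP = 100.0           # frequency separation between transducers, Hz

# One unique drive frequency per transducer: 5.0000 MHz, 5.0001 MHz, ...
frequencies = F_BASE + F_STEP * np.arange(N_SIDE * N_SIDE)
frequencies = frequencies.reshape(N_SIDE, N_SIDE)  # indexed by grid position

# Half-wavelength element pitch (assuming ~1540 m/s sound speed in soft
# tissue), consistent with the "half the ultrasound wavelength" guideline.
SPEED_OF_SOUND = 1540.0                  # m/s
pitch = (SPEED_OF_SOUND / F_BASE) / 2    # ~0.15 mm between elements
```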
An optical sensor 304 is configured for sensing scattered light signals from the imaging volume in the target tissue 102, wherein the scattered light signals include light input signals modulated by acousto-optic interactions with the ultrasound input signals. The optical sensor 304 may specifically include a multi-mode fiber or fiber bundle that collects light scattered out of the target tissue 102 at one or more specific locations and directs it onto a fast single-pixel detector.
Data storage memory 306 is configured for storing optical tomography software, the scattered light signals, and other system information. An optical tomography processor 305 includes at least one hardware processor coupled to the data storage memory and configured to execute the optical tomography software including instructions to perform spectral analysis of the scattered light signals from the optical sensor 304 to create a three-dimensional image map representing biomarker characteristics of the target tissue 102.
Due to the different ultrasound frequencies, each specific location in the target tissue 102 is subjected to a different time-dependent waveform, distinguished by the relative phase and amplitude of each frequency component. For example, in
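The distinct waveform seen at each location can be sketched as follows (a minimal illustration under the clean-spherical-wavefront idealization stated above; the function name and the 1540 m/s sound speed are assumptions added here):

```python
import numpy as np

def pressure_at_point(point, transducer_positions, transducer_freqs, t,
                      speed_of_sound=1540.0):
    """Idealized pressure waveform at one tissue location: a sum of clean
    spherical waves, one per transducer.  Each component arrives with a
    distance-dependent phase delay and 1/r amplitude falloff, so every
    location experiences its own characteristic waveform."""
    r = np.linalg.norm(transducer_positions - point, axis=1)  # distances, m
    phase = 2 * np.pi * transducer_freqs * (t - r / speed_of_sound)
    return np.sum(np.cos(phase) / r)
```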
The spectral analysis performed by the tomography processor 305 includes a post-processing step that converts the amplitude and phase information associated with each ultrasound transducer into the three-dimensional map. This can be thought of (in many ways) as a “holographic reconstruction”. The spectral analysis may be based on a computer model that treats each ultrasound transducer as emitting an ultrasound wave with the phase and amplitude inferred from the amplitude and phase of the corresponding frequency component of the detector data. (The phase may or may not need to be sign-flipped, depending on the sign conventions used.) As all these waves propagate and interfere in the computational simulation, they create a three-dimensional intensity profile corresponding to the three-dimensional map that is sought. This computer model should include effects such as ultrasound refraction, diffraction, reflection, and scattering (to the extent that these are known).
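A minimal sketch of such a reconstruction follows (assuming the idealized free-space propagation just described, with the refraction, diffraction, reflection, and scattering corrections omitted; the function and parameter names are hypothetical):

```python
import numpy as np

def holographic_reconstruction(transducer_positions, complex_amps,
                               transducer_freqs, grid_points,
                               speed_of_sound=1540.0):
    """Back-propagation sketch: re-emit from each transducer a spherical
    wave with the amplitude and phase recovered from its frequency
    component of the detector data, sum the complex fields over a 3D grid
    of candidate voxels, and take the intensity."""
    field = np.zeros(len(grid_points), dtype=complex)
    for pos, amp, f in zip(transducer_positions, complex_amps,
                           transducer_freqs):
        r = np.linalg.norm(grid_points - pos, axis=1)  # voxel distances, m
        k = 2 * np.pi * f / speed_of_sound             # acoustic wavenumber
        # The phase sign may need flipping, per the convention note above.
        field += amp * np.exp(-1j * k * r) / r
    return np.abs(field) ** 2      # three-dimensional intensity profile
```

A full implementation would replace the free-space spherical-wave factor with a propagation model that includes the tissue-dependent effects noted above.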
The three-dimensional map produced by the tomography processor 305 reflects the product of local light intensity, local light output probability (i.e. the probability for light at this point to eventually reach the optical sensor 304), and acousto-optic coefficient (which in turn is related to refractive index and other properties of the materials and their configuration).
With reference to the simple example shown in
Due to acousto-optic interactions, if (for example) 400 THz light goes into the brain, the scattered light exiting is mostly still at 400 THz, but in the example above it would have sidebands at (400 THz ± 5.0000 MHz), (400 THz ± 5.0001 MHz), etc. The spectrum analyzer in the tomography processor 305 should therefore see a strong peak at the heterodyne offset frequency f_shift (the frequency shift between the light source and the local oscillator), with 10,000 pairs of sidebands, one pair for each ultrasound transducer 303. Each pair of sidebands is caused by one particular ultrasound transducer 303, and analysis of the detector output yields the amplitude and phase with which the ultrasonic waves from that particular ultrasound transducer 303 are interacting with the light, in the aggregate. The post-processing analysis ("holographic reconstruction") is as above.
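As an illustration of this spectral analysis step, the complex amplitude of each transducer's sideband could be read off the digitized detector output roughly as follows (a hedged sketch: windowing, calibration, and noise handling are omitted, and the names are hypothetical):

```python
import numpy as np

def demodulate_sidebands(detector_signal, sample_rate, f_shift,
                         transducer_freqs):
    """Extract, for each transducer frequency f_k, the complex amplitude of
    the heterodyne sideband at f_shift + f_k.  With 100 Hz frequency
    separation, the record must span at least 1/(100 Hz) = 10 ms so that
    neighboring sidebands fall into distinct frequency bins."""
    spectrum = np.fft.rfft(detector_signal)
    freqs = np.fft.rfftfreq(len(detector_signal), d=1.0 / sample_rate)
    indices = [np.argmin(np.abs(freqs - (f_shift + f_k)))
               for f_k in transducer_freqs]
    return spectrum[indices]   # amplitude and phase, one per transducer
```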
In the embodiment in
An equivalent functionality could also be accomplished using frequency comb techniques, somewhat along the lines of dual-comb spectroscopy. More specifically, the light input would be one frequency comb, and the local oscillators would be a different comb. If the two combs have different tooth spacings, the result would be similar to that in
One advantageous feature of such arrangements is speed. A complete new data set is obtained in a time equal to the inverse of the frequency separation between transducers (e.g., 10 ms for a 100 Hz separation). Partial information is available even faster, though it is more difficult (but not impossible) to interpret. And this is a whole three-dimensional image at each such interval, not just one imaging volume (voxel) at a time; indeed, in multiple-wavelength embodiments, it is a whole three-dimensional image with spatially-resolved spectral information.
This quasi-continuous monitoring can be advantageous for many different applications. One example is mapping brain activation patterns for purposes such as psychological studies, psychiatric diagnoses, brain-machine interfaces for paraplegics, and others. These activation patterns have important high-speed dynamics which usefully can be captured, and for brain-machine interfaces, it is critical to minimize the delay between brain activation and its detection. Another example is that with a high data rate, an embodiment can effectively perform computational correction for motion of the ultrasound transducer array relative to the imaged anatomical features. Implementation would be generally along the lines of the digital image stabilization techniques used in many cameras. Another example is that with a high data rate, a variety of temporal filters can be applied to extract additional information. For example, it is possible to extract just the image or spectral changes that are in synchrony with the pulse rate, by combining measurement data with a heart-rate monitor and then using typical lock-in amplifier-type techniques. Or conversely, the pulse-related changes can be suppressed in the data output. As another example, frequency filtering may enable the sensing of neural activity such as gamma waves.
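As one example of such temporal filtering, pulse-synchronous changes could be extracted with a lock-in-style correlation against a heart-rate reference (a minimal sketch; the function name and data layout are hypothetical, and the heart rate is assumed to come from the separate monitor mentioned above):

```python
import numpy as np

def pulse_locked_component(voxel_timeseries, frame_rate, heart_rate_hz):
    """Correlate each voxel's time series (last axis = time) against a
    complex reference at the heart rate.  The returned complex amplitude
    per voxel isolates pulse-synchronous changes; re-synthesizing and
    subtracting that component would instead suppress them."""
    n_frames = voxel_timeseries.shape[-1]
    t = np.arange(n_frames) / frame_rate
    reference = np.exp(-2j * np.pi * heart_rate_hz * t)
    # Factor of 2 recovers the amplitude of a real oscillation A*cos(...).
    return 2.0 * (voxel_timeseries * reference).mean(axis=-1)
```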
Another appealing feature is the image resolution, which should be comparable to the ultrasound wavelength used, typically 1 mm or less, which is similar to fMRI. Embodiments also provide good signal-to-noise ratio (SNR); low-noise, high-sensitivity heterodyne receivers can be implemented via various known techniques including, for example, balanced detection, local oscillators with high power and intensity-stabilization feedback, etc. Embodiments can be implemented at favorably low size, weight, power, and cost. For example, the input light is single-pixel in the sense that a spatial light modulator (SLM) is not required, and the output light is also single-pixel in the sense that no detector array is required. Although the ultrasound transducers must be driven with many different frequencies, it helps that each follows a simple continuous sinusoidal waveform, which is generally easy to synthesize.
It might be useful to include a spatial light modulator (SLM) as part of the light source module, in order to improve the efficiency with which light transmits into (and back out of) the general region being imaged, particularly through the skin and skull. (See "Light finds a way through the maze", John Pendry, Physics 1, 20 (2008).) The SLM settings could be optimized using existing 3D data available through the device, as this data indirectly indicates the three-dimensional light intensity profile, conveniently including only those photons which eventually reach the optical sensor. While it would increase system complexity, this could provide higher (perhaps dramatically higher) signal-to-noise ratio if input light power is held constant, or reduced light input power for the same signal-to-noise ratio (reducing the risk of skin burning, etc.). If a multi-mode fiber is used to carry the input light, the SLM could be located before the light enters the fiber, rather than at the patient's head. An SLM is not the only non-invasive way to increase light transmission through the skin and skull into a region of interest; alternatives include finely adjusting the optrode angle, and/or position, and/or light wavelength, in order to find a configuration where transmission into the region of interest is higher than usual. Similarly, there could be a spatial light modulator or other adjuster at the output side, in order to increase the efficiency with which light, having exited from the tissue, reaches the small detector.
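One plausible way to carry out such an SLM optimization is sequential wavefront shaping, sketched below (the segment count, trial-phase grid, and the `measure_feedback` callable, standing in for the instrument feedback described above, are all hypothetical):

```python
import numpy as np

def optimize_slm_phases(n_segments, measure_feedback, n_trials=8):
    """Coordinate-ascent wavefront optimization: step each SLM segment
    through a set of trial phases and keep whichever phase maximizes the
    feedback metric (e.g., reconstructed light intensity in the region of
    interest, taken from the device's own 3D data)."""
    phases = np.zeros(n_segments)
    trials = np.linspace(0.0, 2.0 * np.pi, n_trials, endpoint=False)
    for seg in range(n_segments):
        scores = []
        for p in trials:
            phases[seg] = p
            scores.append(measure_feedback(phases))
        phases[seg] = trials[int(np.argmax(scores))]   # keep the best phase
    return phases
```

Iterating over the segments more than once, or using more sophisticated global schemes, would typically improve the result further.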
Overall, the geometrical arrangement of which transducers use which frequencies does not matter much; however, this design parameter can have some indirect consequences. For example, pairs of transducers with especially close frequencies (for example, 5.4792 MHz vs. 5.4793 MHz) should probably be placed farther apart from each other to reduce undesirable cross-talk via electrical and/or mechanical coupling.
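For example, candidate frequency-to-transducer assignments could be screened so that adjacent-in-frequency channels land far apart on the array (a minimal sketch of one possible heuristic, not a prescribed method; the names are hypothetical):

```python
import numpy as np

def assign_frequencies(positions, frequencies, n_candidates=100, seed=0):
    """Among random frequency-to-transducer assignments, keep the one that
    maximizes the minimum physical distance between transducers whose
    frequencies are closest, reducing cross-talk between such pairs."""
    rng = np.random.default_rng(seed)
    order = np.argsort(frequencies)      # adjacent-in-frequency ordering
    best_perm, best_score = None, -np.inf
    for _ in range(n_candidates):
        perm = rng.permutation(len(positions))  # transducer for frequency i
        placed = positions[perm][order]         # positions in frequency order
        score = np.linalg.norm(np.diff(placed, axis=0), axis=1).min()
        if score > best_score:
            best_perm, best_score = perm, score
    return best_perm
```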
The modulated scattered light output could be tapped at multiple points and/or fed into multiple heterodyne detectors to improve SNR. This might be accomplished as simply as putting multiple fast detectors side-by-side in the same optical sensor unit.
Typically an optical diode protects the laser light source. And the path lengths of the two optical paths to the heterodyne receiver should be approximately equal. The laser linewidth should be sufficiently narrow and frequency sufficiently stable so as to obtain high-contrast narrow-bandwidth beat notes that are spectrally well separated from each other. For example, a 1 GHz linewidth allows heterodyne beat notes to be visible with up to about 1 foot of optical path length discrepancy between the two paths that are being interfered. On the other hand, subject to these constraints, the laser frequency could be dithered or broadened to a certain extent to reduce the distracting effects of laser speckle in the images.
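The 1-foot figure follows from the standard coherence-length estimate (a generic back-of-envelope relation, not specific to this system):

```latex
L_{\mathrm{coh}} \approx \frac{c}{\Delta\nu}
  = \frac{3\times10^{8}\ \mathrm{m/s}}{1\times10^{9}\ \mathrm{Hz}}
  = 0.3\ \mathrm{m} \approx 1\ \mathrm{ft}
```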
A single instrument could potentially be configured to take measurements using both the modality described above, and also other modalities such as traditional ultrasound, photoacoustic imaging, various fNIRS or diffuse optical tomography techniques, and so on. For example, a traditional ultrasound scan could reveal the acoustic scattering, speed of sound profile, and other parameters that could make the “holographic reconstruction” step (see above) more accurate. As another example, the technique here could be combined with focused ultrasound brain stimulation, in order to not only read but also modify neurological states. As still another example, the technique here could be combined with high-intensity focused ultrasound in order to destroy a tumor while monitoring progress.
Higher-order acousto-optic interactions could produce extra sidebands or contribute to already-existing sidebands in the modulated scattered light, for example, at the ultrasound sum- or difference-frequencies. It may be beneficial to reduce the ultrasound amplitude sufficiently to minimize these types of interactions and so make the data analysis more tractable. However, to the extent that they are present, they could be used in the spectral analysis and could even increase the image resolution (because sum-frequency waves have a shorter wavelength).
As previously mentioned, the computational ultrasound wave propagation part of the holographic reconstruction process should account for effects such as ultrasound refraction, diffraction, reflection, and scattering, to the extent that these are known. These parameters can be predicted from typical anatomy and/or measured by conventional ultrasound and/or inferred from the three-dimensional image itself. For example, assuming that sound travels at a different speed in the skull than elsewhere, then if the skull thickness profile is estimated incorrectly, it might cause the three-dimensional map to have a warped appearance with smooth surfaces appearing wavy. Using such a map, the skull thickness profile could be corrected based on prior knowledge about the shapes of anatomical features. As another example, if a surface has an incorrectly-estimated ultrasound reflection coefficient, then a spurious mirror-reflected copy of features might appear in the three-dimensional map. But this duplication, if recognized, could be used to correct the ultrasound reflection coefficient in the computer model, thus fixing or mitigating the erroneous duplication and so improving the fidelity of the map.
Spectroscopic information can also be obtained by using optical filters to split up different wavelengths, and then having one heterodyne detector for each wavelength. This increases the system complexity but may increase SNR. Spectroscopic information also can be obtained simply by turning one wavelength on, then the next wavelength, etc. But that would impair temporal resolution and perhaps SNR.
There are two prior techniques known in the literature that are somewhat similar to what is described herein in the sense that: (1) three-dimensional spatially-resolved and potentially spectrally-resolved information is obtained, and (2) the resolution is related to ultrasound wavelengths because ultrasound is ultimately used to encode or detect the position. One such approach is known by various terms including ultrasonically-encoded optical tomography, acousto-optic tomography, or ultrasound guide star; see "Time-reversed ultrasonically encoded optical focusing into scattering media", Xu et al., Nature Photonics 5, 154 (2011) (incorporated herein by reference in its entirety). Another such approach is known as photoacoustic imaging; see, e.g., "Imaging cancer with photoacoustic radar", Mandelis, Physics Today 70, 42 (2017) (incorporated herein by reference in its entirety). But in their specifics, these two techniques are very different from each other and from the technique described herein.
Photoacoustic imaging uses a very different detailed mechanism, using light to create ultrasonic waves and then detecting that ultrasound with piezo transducers, whereas the embodiments of the present invention described herein use piezo transducers to create ultrasonic waves that modulate light in a way that is detected optically. So in one sense, the two approaches are opposites. In addition, embodiments of the present invention enable a better signal-to-noise ratio and allow measuring many wavelengths at once without losing spatial or temporal resolution. Moreover, photoacoustic imaging measures almost purely absorption, whereas embodiments of the present invention are also sensitive to the acousto-optic coefficient, which is related to refractive index and other parameters. In this respect, the two techniques might be complementary, and, as mentioned above, it is conceivable that the same system devices could support both sensing modalities.
Ultrasonically-encoded optical tomography has previously generally used single-frequency ultrasound phased arrays (as in
Even though embodiments of the present invention have been discussed in terms of using an SLM on the input light, the purpose and details are quite different from those prior techniques. In ultrasound guide star (and other known techniques), the SLM is used to focus light to one voxel, and then get data about just that one voxel, with a separate phase map for each voxel. In embodiments of the present invention, the SLM provides more light into a relatively large-volume general region (e.g., through the skull into the brain, and/or deeper into the brain, and/or in the general direction of the light output) much larger than an image voxel. Spatial resolution comes from the ultrasound frequency encoding, not from the SLM, and hence this technique can obtain images much faster, and with greatly reduced requirements on the speed, size, resolution, and location of the SLM.
Diffuse optical tomography typically just sends light in at one point and collects it at another point. Hence it has far lower resolution than the approach used in embodiments of the present invention, which obtains a whole three-dimensional map for each input and output rather than merely one data point. For example, "Mapping distributed brain function and networks with diffuse optical tomography", Eggebrecht et al., Nature Photonics 8, 448 (2014) refers to ~1.5 cm resolution as "high-density diffuse optical tomography", even though it probes volume elements perhaps 3 orders of magnitude larger than those of the approach described above for embodiments of the present invention (cm³ instead of mm³). fNIRS (functional near infrared spectroscopy) methods all have similar resolution limitations. Optical coherence tomography (OCT) has higher resolution, but much shallower depth in highly-scattering tissues, since OCT uses photons that only scatter once, whereas the present invention can get good data from photons that have scattered very many times.
Magnetic resonance imaging (MRI) senses different characteristics than light does and also has extremely high size, weight, power, and cost, and is not portable, and generally cannot be used on patients with metal implants (e.g. pacemakers, cochlear implants, etc.). Positron-emission tomography (PET) also observes different characteristics than light does, and has high size, weight, power, and cost, and is not portable, and is sometimes not usable due to the ionizing radiation. Ultrasound (by itself) similarly observes different characteristics than light does. EEG and MEG tend to have far lower resolution than the sub-mm voxels discussed here, and again, they see very different things than light does.
Embodiments of the invention may be implemented in part in any conventional computer programming language or hardware description language such as VHDL, SystemC, Verilog, assembly, etc. Alternative embodiments of the invention may be implemented as pre-programmed hardware elements, other related components, or as a combination of hardware and software components.
Embodiments can be implemented in part as a computer program product for use with a computer system. Such implementation may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium. The medium may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented with wireless techniques (e.g., microwave, infrared or other transmission techniques). The series of computer instructions embodies all or part of the functionality previously described herein with respect to the system. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Furthermore, such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies. It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the network (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software (e.g., a computer program product).
Although various exemplary embodiments of the invention have been disclosed, it should be apparent to those skilled in the art that various changes and modifications can be made which will achieve some of the advantages of the invention without departing from the true scope of the invention.
This application claims priority from U.S. Provisional Patent Application 62/653,646, filed Apr. 6, 2018, and U.S. Provisional Patent Application 62/621,100, filed Jan. 24, 2018, and U.S. Provisional Patent Application 62/582,391, filed Nov. 7, 2017, and U.S. Provisional Patent Application 62/559,779, filed Sep. 18, 2017, all of which are incorporated herein by reference in their entireties.