Phase-based optical interferometric techniques are widely employed in optical distance measurements in which sub-wavelength distance sensitivity is required. Optical distance is defined as the product of the refractive index and the length. However, most such techniques are limited by an issue widely known in the field as 2π ambiguity or integer ambiguity, which can be defined as the difficulty in telling the interference fringes of an axial scan apart from each other. An unmodified harmonic phase based low coherence interferometry (LCI) method can be used to determine the differential optical distance, (nλ1−nλ2)L, where nλ1 and nλ2 are the refractive indices at the two harmonically related low coherence wavelengths and L is the physical thickness.
The problem lies in the fact that unmodified LCI is unable to tell the interference fringes of an axial scan apart from each other, described herein as the 2π ambiguity issue. It is a problem that plagues most phase-based optical interferometric techniques. As a result, these techniques are unable to determine optical distance absolutely. Therefore, most such techniques are used in applications, such as evaluating the texture of continuous surfaces or detecting time-dependent distance changes, in which phase unwrapping is possible through comparison of phases between adjacent points or over small time increments.
In many applications it is important to quantitatively measure the phase of light transmitted through or reflected from a sample. In particular, the phase of light transmitted through or reflected from biological samples can form a powerful probe of structure and function in living or nonliving cells.
Interferometry is a versatile technique for measuring the phase of light. One common problem in quantitative interferometry is the susceptibility to phase noise due to external perturbations such as vibrations, air motions, and thermal drafts. There remains a need for systems for phase measurement which solve the problem of phase noise.
Interferometry is one way to access the phase information associated with a specimen. Techniques such as phase contrast and Nomarski microscopy use optical phase only as a contrast agent and do not provide quantitative information about its magnitude. Several techniques exist for measuring the phase of light transmitted through nearly transparent samples. These include digitally recorded interference microscopy (DRIMAPS) and noninterferometric detection of phase profiles via the transport of intensity equation.
Reflection interferometry is capable of sensitivity much smaller than the wavelength of light used. Measurements on the scale of fractions of nanometers or smaller are common in metrology and microstructure characterization. However, little work has been done in nanometer-scale interferometry in weakly reflecting samples such as biological cells and tissues. Optical coherence tomography (OCT), an interferometric technique used with biological samples, is primarily concerned with the amplitude rather than the phase of interference from reflected light, and is therefore limited in resolution to the coherence length of the light used, typically 2-20 microns.
Phase-referenced reflection interferometry has been used to measure the volume changes in a monolayer of cells. The harmonic phase-based interferometer used requires two sources, is relatively slow (5 Hz), and has a phase sensitivity of about 20 mrad over this bandwidth. There thus remains a need for effective systems for phase measurement which solve the problem of phase noise and assist in developing different imaging applications.
Preferred embodiments of the present invention are directed to systems for phase measurement which address the problems such as phase noise, for example, using combinations of a number of strategies including, but not limited to, common-path interferometry, phase referencing, active stabilization and differential measurement. Embodiments are directed to optical devices for imaging tissue or small biological objects with light. These embodiments can be applied to the fields of, for example, cellular physiology and neuroscience. These preferred embodiments are based on principles of phase measurements and imaging technologies. The scientific motivation for using phase measurements and imaging technologies is derived from, for example, cellular biology at the sub-micron level which can include, without limitation, imaging origins of dysplasia, cellular communication, neuronal transmission and implementation of procedures using the genetic code. The structure and dynamics of sub-cellular constituents cannot be currently studied in their native state using the existing methods and technologies including, for example, x-ray and neutron scattering. In contrast, light based techniques with nanometer resolution enable the cellular machinery to be studied in its native state. Thus, preferred embodiments of the present invention include systems based on principles of interferometry and/or phase measurements and are used to study cellular physiology. These systems include principles of low coherence interferometry (LCI) using optical interferometers to measure phase, or light scattering spectroscopy (LSS) wherein interference within the cellular components themselves is used, or in the alternative the principles of LCI and LSS can be combined to result in systems of the present invention.
The preferred embodiments for phase measurement and imaging systems include actively stabilized interferometers, isolation interferometers, common path interferometers and can include phase contrast microscopy using spatial light modulation.
In a preferred embodiment, the methods of the present invention are directed at an accurate phase-based technique for measuring arbitrarily long optical distances, preferably with sub-nanometer precision. A preferred embodiment of the present invention employs an interferometer, for example, a Michelson interferometer, with harmonically related light sources, one continuous wave (CW) and a second source having low coherence (LC). The low coherence source provides a broad spectral bandwidth, preferably a bandwidth of greater than 5 nm for a 1 micron (μm) wavelength, for example; the required bandwidth can vary as a function of wavelength and application. By slightly adjusting the center wavelength of the low coherence source between scans of the target sample, the phase relationship between the heterodyne signals of the CW and low coherence light can be used to measure the separation between reflecting interfaces with sub-nanometer precision. As this method is completely free of 2π ambiguity, an issue that plagues most phase-based techniques, it can be used to measure arbitrarily long optical distances without loss of precision. An application of a preferred embodiment of the method of the present invention is the precision determination of the refractive index, at a given wavelength, of a sample with a known physical thickness. Another application of a preferred embodiment of the method of the present invention is the precision determination of a sample's physical thickness with a known refractive index. A further application of a preferred embodiment of the method of the present invention is the precision determination of the refractive index ratio at two given wavelengths.
In an alternate preferred embodiment, the low coherence light source provides a sufficiently broad bandwidth light, preferably greater than 5 nm, to provide simultaneously a first low coherence wavelength and a second low coherence wavelength with the respective center wavelengths separated from each other by more than approximately 2 nm. The frequency spectra for the low coherence wavelengths do not significantly overlap. An additional detector and filters are disposed in the interferometer to transmit and detect the two low coherence wavelengths.
The preferred embodiment methods can be used to make precise optical distance measurements. From such measurements, optical properties of target objects can be accurately measured. By measuring the dispersion profile of the target, structural and/or chemical properties of the target can be evaluated. The dispersion profile maps out the refractive index differences at various wavelengths. In the biomedical context, preferred embodiments of the present invention can be used to accurately determine the dispersion properties of biological tissues in a non-contact and non-invasive manner. Such dispersion determination can be used on the cornea or aqueous humor of the eye. The sensitivity achieved can be sufficient to detect glucose concentration dependent optical changes. In a preferred embodiment of the present invention method, the blood glucose level can be determined through non-invasive measurements of the dispersion profile of either the aqueous humor and/or vitreous humor or the cornea of the eye. A preferred embodiment of the present invention can be applied as a measurement technique in semiconductor fabrication to measure small features formed during the manufacturing of integrated circuits and/or optoelectronic components. As the preferred embodiment of the method is non-contact and non-destructive, it can be used to monitor the thickness of semiconductor structures or optical components as they are being fabricated.
In accordance with a preferred embodiment of the present invention that uses a Mach-Zehnder heterodyne interferometer, a method for measuring phase of light passing through a portion of a sample includes the steps of providing a first wavelength of light, directing light of the first wavelength along a first optical path and a second optical path, the first optical path extending onto a sample medium to be measured and the second path undergoing a change in path length, and detecting light from the sample medium and light from the second optical path to measure a change in phase of light passing through two separate points on the sample medium. The medium comprises biological tissue such as, for example, a neuron. The method includes using a photodiode array or a photodiode-coupled fiber bundle to image the phase of the sample at a plurality of positions simultaneously. The method further includes the step of frequency shifting the light in the second optical path. The method includes providing a helium-neon laser light source that emits the first wavelength or a low coherence light source.
In accordance with another aspect of the invention an actively stabilized interferometer is used in a method for measuring phase of light passing through a portion of a sample, including the steps of providing a first signal and a second signal generated by a first light source and a second light source, respectively, the second light source being a low coherence source. The method includes directing the first signal and the second signal along a first optical path and a second optical path, varying a path length difference between the first optical path and the second optical path, generating an output signal indicative of the sum of the first and the second signal with an optical path delay between them, modulating the output signal at an interferometer lock modulation frequency, and determining the phase of the sample by the time evolution of the interferometer lock phase. The first and second signals are both low coherence signals. The method further includes demodulating the first signal by a mixer or a lock-in amplifier. The method includes electronically generating the interferometer lock phase.
In accordance with another aspect of the present invention, a dual beam reflection interferometer is used in a system for measuring phase of light passing through a portion of a sample. The system includes a first light source that generates a first signal, an interferometer that generates a second signal with two pulses separated by a time delay from the first signal, a first optical path from the interferometer in communication with the sample and a second optical path from the interferometer in communication with a reference, and a detector system that measures a first heterodyne signal from the first and the second signal from the sample and the reference, respectively, and the interference between the light reflected from the sample and the reference. The system includes detecting a phase of the heterodyne signal indicative of the phase of the sample reflection relative to the reference reflection. The first signal is a low coherence signal. The first light source can include, without limitation, one of a superluminescent diode or multimode laser diode. The second path of the interferometer further includes a first path and a second path, and the second path has acousto-optic modulators. The system includes an optical pathway including an optical fiber. The system includes a vibration-isolated heterodyne Michelson interferometer. The interferometer further includes a mirror attached to a translation stage to adjust an optical path length difference. The detection system comprises a first detector that detects a signal reflected from the sample and a second detector that detects a signal reflected from the reference.
In accordance with another aspect, the present invention provides a method for imaging a sample using phase contrast microscopy and spatial light modulation. In various embodiments, the method includes illuminating the sample, the light originating from the sample due to the illumination having low frequency spatial components and high frequency spatial components. The phase of the low frequency spatial component is shifted to provide at least three phase shifted low frequency spatial components. Preferably, the phase is shifted in increments of, for example, π/2 to produce phase shifted low frequency spatial components with phase shifts of π/2, π and 3π/2.
The unshifted low frequency spatial component and at least three phase shifted low frequency spatial components are separately interfered with the high frequency spatial component along a common optical path to produce an intensity signal for each separate interference. An image, or phase image, is then generated for the sample using at least four of the intensity signals, for example.
In accordance with another aspect, the present invention provides a method for non-contact optical measurement of a sample having reflecting surfaces, having the steps of providing a first light source that generates a first signal; generating a second signal with two pulses separated by a time delay from the first signal using a dual-beam interferometer; providing a first optical path from the interferometer in communication with the sample and a second optical path from the interferometer in communication with a reference; measuring a first heterodyne signal from the first and the second signal from the sample and the reference, respectively, and the interference between the light reflected from the sample and the reference; and detecting a phase of the heterodyne signal indicative of the phase of the sample reflection relative to the reference reflection.
In a preferred embodiment, the first signal is a low coherence signal. The first light source can be a superluminescent diode or multimode laser diode. The interferometer further comprises a first path and a second path, the second path having acousto-optic modulators. The method further includes an optical pathway including an optical fiber. The sample can be a portion of a nerve cell.
In a preferred embodiment, the interferometer includes a vibration-isolated heterodyne Michelson interferometer. The interferometer further includes a mirror attached to a translation stage to controllably adjust an optical path length difference. Preferred embodiments include a heterodyne low coherence interferometer to perform the first non-contact and first interferometric measurements of nerve swelling. The biophysical mechanisms of nerve swelling can be imaged and analyzed using individual axons in accordance with a preferred embodiment of the present invention. The dual-beam low coherence interferometer may have many other applications in measuring nanometer-scale motions of living cells. Other embodiments can include a microscope based on the interferometer to detect mechanical changes in single neurons associated with action potentials. A related interferometric method is also used to measure cell volume changes in cultured cell monolayers.
Another aspect of the present invention includes a fiber optic probe for optically imaging a sample, having a housing with a proximal end and a distal end, a fiber collimator in the proximal end of the housing coupled to a light source; and a graded index lens in the distal end of the housing, the lens having a first and second surface wherein the first surface is the reference surface and wherein the numerical aperture of the probe provides efficient light gathering from scattering surfaces of a sample. The probe further includes mounting the fiber optic probe on a translator stage to perform at least one of two-dimensional phase imaging and three-dimensional confocal phase imaging. The translator stage includes a scanning piezo translator. The numerical aperture of the probe is in a range of approximately 0.4 to 0.5.
The foregoing and other features and advantages of the system and method for phase measurements will be apparent from the following more particular description of preferred embodiments of the system and method as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
The present invention is directed at phase crossing based systems and methods for measuring optical distances that overcome the integer or 2π ambiguity problem by introducing a dispersion imbalance in an interferometer. A preferred embodiment of the method is able to measure the relative height difference of two adjacent points on a surface with precision. Further, the refractive index of a sample can be found to an accuracy that is limited only by the precision with which the physical thickness of the sample can be experimentally measured.
The substitution of one of the low coherence light sources with a continuous wave (CW) light source in harmonic phase based interferometry (HPI) allows the use of the associated CW heterodyne signal as a form of optical ruler by which the low coherence heterodyne signal can be measured. The low coherence light source provides a spectral bandwidth, for example, greater than 5 nm for a 1 micron wavelength. One of the benefits of using such a modified HPI is that the measured phase is now sensitive to the length scale nL instead of (nλ1−nλ2)L.
Interferometric optical distance measuring systems employing readily available low coherence light sources have achieved resolution on the order of tens of wavelengths. While this technique is relatively insensitive, it does not have to contend with the 2π ambiguity issue. A preferred embodiment includes a low coherence interferometry method that uses phase to measure arbitrarily long optical distances with sub-nanometer precision. This method uses a low coherence phase crossing technique to determine the integer number of interference fringes, and additional phase information from the measurement to accurately obtain the fractional fringe. In addition, it provides depth resolution and can be used for tomographic profiling of stratified samples. As the method can measure long optical distances with precision, it can be used to determine refractive indices of a plurality of materials accurately. As this is a phase-based method, the refractive index thus found is the phase refractive index and not the group refractive index.
The center wavelength of low coherence light is then adjusted by approximately 1-2 nm and a second set of ψCW and ψLC values is measured. From these two sets of readings, the various interfaces in the target sample can be localized with sub-nanometer precision. The processing of data for localization is described hereinbelow.
Consider a sample which consists of a single interface at an unknown distance x1 from the beamsplitter 14. The distance from the beamsplitter 14 to the reference mirror 32, x, is a known quantity at each time point in the scan of the reference mirror.
A method to find an approximate value for x1 is by scanning x and monitoring the resulting heterodyne signal in the recombined low coherence light beam. When x is approximately equal to x1, a peak in the heterodyne signal amplitude is expected. The precision of such a method is limited by the coherence length, lc, of the light source and the signal-to-noise quality of the heterodyne signal. Under realistic experimental conditions, the error in x1 thus determined is unlikely to be better than a fifth of the coherence length.
Given that the coherence length of a typical low coherence source is nominally about 10 μm, this means that the error in such a means of length determination is limited to about 2 μm.
In considering the phase of the heterodyne signal, the varying component of the heterodyne signal detected can be expressed as:
where Eref and Esig are the electric field amplitudes of the reference and signal fields, respectively, k is the optical wavenumber, and ω is the optical frequency. The factor of 2 in the exponents is due to the fact that light travels twice the path, going to the mirror/sample and back to the beamsplitter.
Note that when x matches x1 exactly, the heterodyne signal is expected to peak; the two returning beams are in constructive interference. This property is therefore used to localize the interface: x1 is found by finding the value of x for which the two beams are in constructive interference. Since phase can be measured accurately, such an approach gives a length sensitivity of about 5 nm. Unfortunately, this approach is complicated by the fact that there are multiple values of x for which the heterodyne signal peaks; specifically, the heterodyne signal peaks at:

x=x1+aλ/2

where a is an integer and λ is the optical wavelength. This is a manifestation of the 2π ambiguity issue.
The preferred embodiment includes a method to distinguish the correct peak. Note that when x=x1 exactly, the heterodyne signal peaks regardless of the optical wavelength. On the other hand, the subsequent peaks are wavelength dependent, as illustrated in
The CW light source is needed in such a localization method for two reasons. First, it is very difficult in practice to know the value of x absolutely and accurately in an interferometer. The CW component of the interferometer permits highly accurate measurements of x to be made as the reference mirror is scanned. In a specific preferred embodiment, to determine the distance between two interfaces in the sample, a count is made of the number of CW interference fringes that occur between where x1 is equal to the distance to the first interface as shown in
Second, the prior described method for localization of the interface may partly fail if there is a phase shift associated with the reflection process. For example, if the surface is metallic, the phase shift is non-trivial and the phase of the heterodyne signal takes on some other value when x=x1 exactly. While the prior method allows the correct interference fringe to be identified where x=x1, sub-wavelength sensitivity may be compromised. The presence of the CW heterodyne signal allows the difference phase to be found via the HPI method. Knowledge of this value allows the localization of the interface with a high level of sensitivity.
The principle of the HPI method can be illustrated through the exemplary embodiment of a sample of thickness, L, and refractive index, n775 nm, at a wavelength of 775 nm. The two interfaces of the sample are at optical distances x1 and x2 (where x2=x1+n775 nmL) from the beamsplitter, respectively. Note that the method only works if the optical distance separation is greater than the coherence length of the low coherence light source, which is typically between 1 micron and 100 microns. Otherwise, the heterodyne phase signals associated with the interfaces merge together and result in inaccurate interface localization. For clarity of explanation, the incorporation of the phase shifts associated with reflection is deferred until later.
with RLC,j the reflectivity of the interface j at the low coherence wavelength, k the optical wavenumber, a=4 ln(2)/lc, lc the coherence length, x the distance of the reference mirror from the beamsplitter, and hc(x) a piecewise continuous function with value of 1 for |x|<2lc and 0 otherwise. The factors of 2 in the exponents are due to the effective doubling of optical paths in the backreflection geometry. Equation 3 reflects the fact that phase cannot be measured far beyond the coherence envelopes, due to noise. Although the coherence envelopes modeled are Gaussian in profile, the same phase treatment is valid for any slowly varying envelope profile.
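The behavior described by Equation 3 can be sketched numerically. The following Python fragment is an illustrative model only, not the claimed apparatus; the interface positions, reflectivities, coherence length, and envelope constant are hypothetical values chosen for the sketch (the envelope constant is written with units of inverse length squared so that the Gaussian argument is dimensionless).

    import numpy as np

    # Hypothetical parameters for illustration only (not measured values)
    wavelength_lc = 775e-9             # low coherence center wavelength (m)
    k_lc = 2 * np.pi / wavelength_lc   # optical wavenumber
    l_c = 10e-6                        # coherence length (m)
    a = 4 * np.log(2) / l_c**2         # Gaussian envelope constant (assumed 1/m^2 form)

    # Two interfaces at optical distances x1 and x2 = x1 + n*L from the beamsplitter
    x1 = 100e-6
    x2 = x1 + 1.45 * 237e-6
    interfaces = [(x1, 0.04), (x2, 0.04)]   # (optical distance, reflectivity) pairs

    x = np.linspace(x1 - 30e-6, x2 + 30e-6, 200000)  # reference mirror positions

    def envelope(dx):
        # Gaussian coherence envelope, cut off far outside the coherence length (stand-in for hc)
        env = np.exp(-a * dx**2)
        env[np.abs(dx) > 2 * l_c] = 0.0
        return env

    # Sum of fringe packets, one per interface; the factor of 2 reflects the double pass
    signal = sum(np.sqrt(R) * envelope(x - xj) * np.cos(2 * k_lc * (x - xj))
                 for xj, R in interfaces)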
The phase of the CW heterodyne signal is given by:
with RCW,j the reflectivity of the interface j at the CW wavelength, n1550 nm the sample's refractive index at the CW wavelength, and kCW the CW optical wavenumber. If the low coherence wavenumber kLC is chosen such that:

kLC=2kCW+Δ (5)

where Δ is a small intentionally added shift, then a difference phase, ψD, of the following form is obtained:
The above quantity provides both the approximate number of fringes in the interval (x2−x1) and the fractional fringe, which provides sub-wavelength precision.
As the parameter Δ is varied by a small amount (corresponding to a wavelength shift of approximately 1-2 nm), the slope of ψD(x) pivots around the points where x=x1 and x=x2. In other words, the phase scans at different values of Δ cross at those points. The optical distance from x1 to x2 can be found by counting the fringes that ψCW(x) goes through between the two phase crossing points. Twice the quantity thus found is denoted by Sfringe, which is not necessarily an integer value, and corresponds to the number of fringes at the low coherence wavelength. In the event that multiple phase crossing points occur for a single interface, the point that corresponds to the position of the interface can be found by making multiple scans at additional values of Δ. The position of the interface is the only location where ψD(x) will cross for all Δ values.
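One way to carry out the crossing-point search and CW fringe count described above is sketched below in Python; the sign-change crossing detector and the array-based inputs are illustrative assumptions rather than a prescribed implementation.

    import numpy as np

    def crossing_points(x, psi_d_a, psi_d_b):
        # x positions where two difference-phase scans (taken at different delta) cross
        diff = psi_d_a - psi_d_b
        idx = np.where(np.diff(np.sign(diff)) != 0)[0]        # sign changes bracket a crossing
        return x[idx] - diff[idx] * (x[idx + 1] - x[idx]) / (diff[idx + 1] - diff[idx])

    def cw_fringe_count(x, psi_cw, x_a, x_b):
        # number of 2*pi wraps of the CW heterodyne phase between the two crossing points
        mask = (x >= min(x_a, x_b)) & (x <= max(x_a, x_b))
        unwrapped = np.unwrap(psi_cw[mask])
        return abs(unwrapped[-1] - unwrapped[0]) / (2 * np.pi)

Twice the count returned by cw_fringe_count between the two crossing points then plays the role of Sfringe in the discussion above.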
The phase shift information is used to further localize the interface separation. Specifically, the difference between the phase shifts at x=x1 and x=x2 is:
This measures the fractional fringe with great sensitivity.
The absolute optical separation (x2−x1) can be determined with precision from Sfringe and Sphase through the following equation:
where ΔS=res(Sfringe)−Sphase and U( ) is a unit step function. Here, int( ) and res( ) denote the integer and fractional parts of the argument, respectively. The first term localizes the optical distance to the correct integer number of fringes by minimizing the error between Sphase and the fractional part of Sfringe. The error of an optical separation determination is limited only by the measurement error of Sphase. In an experiment such error translates to an error in (n775 nmL)measured of approximately 0.5 nm. The measurement error of Sfringe needs only be smaller than half a fringe so that the correct interference fringe can be established; having satisfied this criterion, it does not enter into the error of (n775 nmL)measured. The maximum measurable optical distance depends simply on the ability of the system to accurately count fringes between two crossing points and the frequency stability of the light sources.
The above equation is a condensed expression of the method for finding the correct fringe and the fractional fringe. The operation can be illustrated through the following example and
where a is an integer. Given the value of Sfringe, the possible values of (n775 nm L)measured can be limited to the following 3 values:
The candidate value that lies closest to the value given by Sfringe is the correct estimate of (n775 nmL)measured.
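The selection expressed by the equation above, choosing the integer fringe number whose combination with the fractional fringe lies closest to Sfringe, can be sketched as follows in Python; the function and variable names are assumptions for illustration, and the factor of one half reflects the double pass noted earlier.

    def optical_separation(s_fringe, s_phase, wavelength_lc):
        # s_fringe: fringe count between the crossing points (accurate to better than half a fringe)
        # s_phase: fractional fringe from the difference phase (sets the final precision)
        candidates = [int(s_fringe) - 1, int(s_fringe), int(s_fringe) + 1]
        best = min(candidates, key=lambda m: abs((m + s_phase) - s_fringe))
        return (best + s_phase) * wavelength_lc / 2.0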
In preferred embodiments for interferometry measurements based on harmonically related light sources, the appropriately chosen pair of light sources and the method of extracting difference phase allows the minimization and preferably the elimination of the effect of jitter in the interferometer, which would otherwise make high precision optical distance measurement impossible. The elimination of jitter also allows the comparison of scans performed at different times.
To demonstrate the capability of a preferred embodiment of the method, the system is used to probe the optical distance between the top and bottom surface of a fused quartz cover slip having a physical thickness, L=237±3 μm. In this embodiment there is a π phase shift associated with reflection from the first interface, that marks a positive refractive index transition.
Hence, there is a e−iπ term associated with the factors RLC,1 and Rcw,1 in Equations 1 and 2. This results in a correction factor of half on Sfringe and Sphase.
The experimental data yields an optical absolute distance measurement with sub-nanometer precision. The optical distance found is associated with the low coherence light source. The CW heterodyne signal serves as an optical ruler. If L of the quartz cover slip is known precisely, n775 nm for quartz at the wavelength 775.0 nm can be found to a very high degree of accuracy from (n775 nmL)measured.
Alternatively, without knowing the exact value of L, the refractive index ratio at two different wavelengths can be determined by measuring the corresponding optical distances using low coherence light at these wavelengths and CW light at their respective harmonics. Using a range of low coherence wavelengths, the dispersion profile of a material can be determined accurately. The dispersion profile maps out the refractive index differences at various wavelengths. The experimental results in accordance with a preferred embodiment predict that a precision of approximately seven significant figures can be achieved with an approximately 1 mm thick sample.
In another preferred embodiment the light sources of the system are changed to a low coherence superluminescent diode (SLD) emitting at 1550.0 nm and a CW Ti:Sapphire laser emitting at 775.0 nm. By adjusting the operating current through the SLD the center wavelength is changed by about 2 nm; this is adequate to achieve phase crossing. Using this preferred embodiment of the present invention system, the optical distance can be measured at 1550.0 nm. Taking the ratio of the result of this measurement with the previous measurement, the ratio of the refractive indices n775 nm/n1550 nm for quartz can be determined. It should be noted that the index ratios found are for harmonically related wavelengths due to the sources used in the preferred embodiments. Refractive index ratios for other wavelengths can be measured with other appropriate choices of light sources. For comparison, the corresponding data for glass and acrylic plastic are tabulated in Table 2 as measurements of n775 nm/n1550 nm for different materials.
Note that some of the equations used when the low coherence wavelength is half that of the CW wavelength are slightly different from the equations previously presented herein. For example:
Preferred embodiments of the methods for overcoming 2π ambiguity are of significant use in applications such as high precision depth ranging and high precision refractive index determination of thin film solid state materials.
The use of the preferred methods can be illustrated through consideration of a slab of glass. There exist systems that can measure the distance from the systems to the averaged center of the glass slab very accurately. There are also systems that can measure the roughness of the glass surface very accurately. A preferred embodiment of the present invention system measures with nanometer sensitivity the thickness of the glass slab end-face.
The steps in the implementation of a preferred embodiment of the method to determine the optical distance are illustrated in the flow chart 124 in
The two difference phases found from the two scans are then superposed on each other on a graph with the x-axis representing the displacement of the reference mirror per step 138. It should be noted that the extraction of difference phases can also be done with the appropriate light sources or chromatic filters or software/hardware signal processing on a single scan.
The next step in method 124 includes determining the phase crossing points on a graph to mark the locations of the sample interfaces per step 140. By counting the number of times the heterodyne signal associated with the CW light wraps over by 2π between the two crossing points, the optical separation between the interfaces is determined per step 142 with an accuracy of about a fraction of a wavelength, for example, approximately 0.2 of a wavelength. By measuring the difference phase at the crossing points, the separation is further localized and/or refined to a very small fraction of a wavelength, for example, approximately 0.001 of a wavelength.
In another preferred embodiment as illustrated in
The light beams are then incident on the detectors and their heterodyne signals are processed in the fashion discussed with respect to
The two difference phases are then superposed on each other on a graph with the x-axis representing the displacement of the reference mirror per step 202. The remaining steps 204, 206, 208 are similar to steps 140, 142, 144 as discussed with respect to
The preferred embodiment of the method can be used to absolutely measure arbitrarily long optical distances with sub-nanometer precision. The preferred embodiment of the system can be free space based or fiber based.
The input light 256 includes approximately harmonically related low coherence light having a wavelength λ1 and a CW light beam having a wavelength λ2 which travel in fiber 251. The composite beam is divided in two, one part of the signal is incident on the target lens 254 and sample 256 and travels in fiber 253 while the other is incident on the reference mirror 266 via a lens 268 and travels in fiber 251. The movement of the reference mirror introduces a Doppler shift on the reflected beam. The reflected beams are recombined and then separated into their component wavelength components by means of the dichroic mirror 258. These wavelength components are measured separately with photodetectors 260, 262. The resulting heterodyne signals at their respective Doppler-shifted frequencies are bandpassed around their respective center heterodyne frequencies and Hilbert transformed to extract the corresponding phases of the heterodyne signals, ψCW and ψLC.
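The bandpass-and-Hilbert-transform step described above can be sketched digitally as follows; the sampling rate, center frequencies, and filter order are placeholder assumptions, and scipy's analytic-signal routine stands in for whatever processing hardware or software is actually used.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def heterodyne_phase(samples, fs, f_center, bandwidth):
        # Bandpass the detected heterodyne signal around its Doppler-shifted center
        # frequency, then take the phase of the analytic signal (Hilbert transform).
        low = (f_center - bandwidth / 2) / (fs / 2)
        high = (f_center + bandwidth / 2) / (fs / 2)
        b, a = butter(4, [low, high], btype="bandpass")
        filtered = filtfilt(b, a, samples)
        return np.unwrap(np.angle(hilbert(filtered)))

    # e.g. psi_cw = heterodyne_phase(cw_detector_record, fs=1e6, f_center=50e3, bandwidth=10e3)
    #      psi_lc = heterodyne_phase(lc_detector_record, fs=1e6, f_center=25e3, bandwidth=10e3)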
The preferred embodiment methods can be used to make precise optical distance measurements. From such measurements, optical properties of target objects can be accurately measured. By measuring the dispersion profile of the target, structural and/or chemical properties of the target can be evaluated. In the biomedical context, preferred embodiments of the present invention can be used to accurately determine the dispersion property of biological tissues in a non-contact and non-invasive manner. Such dispersion determination can be used on the cornea or aqueous humor of the eye. The sensitivity achieved can be sufficient to detect glucose concentration dependent optical changes. In a preferred embodiment of the present invention method, the blood glucose level can be determined through non-invasive measurements of the dispersion profile of either the aqueous humor, the vitreous humor, or the cornea of the eye.
As discussed hereinbefore, phase based interferometry methods are able to measure optical distances very sensitively. However, they are typically limited in their applications by a problem that is widely known in the field as the 2π ambiguity problem. The crux of this problem is that it is impossible to differentiate a length of 10.1 wavelengths from a length of 11.1 wavelengths. The preferred embodiments of the present invention overcome this limitation and allow absolute optical distance measurements with sub-nanometer accuracy.
There are numerous phase based methods that measure changes in optical distances with a sensitivity in the nanometer range. As long as the change is small and gradual, the change can be continuously tracked. There are low coherence methods that measure absolute optical distance by tracking the delay in arrival at the detector of light reflected from different interfaces of the reflector, with a sensitivity of approximately microns. As discussed hereinbefore, the simultaneous use of a CW and a low coherence light source in an interferometer provides for the methods to measure optical distance. The heterodyne phases of the signals associated with the two wavelengths are intrinsically related. By processing the phase per the preferred embodiments, motional noise is minimized and preferably eliminated from the measurements.
An application of a preferred embodiment is the glucose level determination using the measurement of the refractive index of the vitreous and/or aqueous humor of the eye. The sensitivity of this technique affords the ability to measure chemical concentrations with a sensitivity that is clinically relevant. One of the more obvious applications of the method of a preferred embodiment is the determination of blood glucose level through measurements performed on the eye. The glucose level of the fluid in the eye mirrors that of the blood with clinically insignificant time delay.
The method of a preferred embodiment measures the optical path lengths of the vitreous and/or aqueous humor layer in the eye using at least two separate sets of wavelengths as illustrated in
In the event that such a refractive index ratio is insufficient for absolute blood glucose level determination due to the presence of other chemicals that are changing in the vitreous and/or aqueous humor, a more complete range of optical path length measurements can be made at a range of other wavelengths. This set of more complete measurements allows the determination of glucose level and other chemical concentrations by fitting the measurements to known dispersion profiles of glucose and other chemicals.
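A minimal sketch of the fitting step just described, assuming the dispersion profiles of the relevant chemicals are available as optical-path-length increments per unit concentration at the measured wavelengths; the linear model and the variable names are assumptions for illustration.

    import numpy as np

    def fit_concentrations(measured_nl, baseline_nl, increments):
        # measured_nl: optical path lengths n(lambda)*L measured at several wavelengths
        # baseline_nl: expected optical path lengths of the medium without the analytes
        # increments:  (num_wavelengths x num_chemicals) change in optical path length
        #              per unit concentration, from known dispersion profiles
        delta = np.asarray(measured_nl) - np.asarray(baseline_nl)
        concentrations, *_ = np.linalg.lstsq(np.asarray(increments), delta, rcond=None)
        return concentrations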
A preferred embodiment of the present invention can be applied as a measurement technique in semiconductor fabrication. As the preferred embodiment of the method is non-contact and non-destructive, it can be used to monitor the thickness of semiconductor structures as they are being fabricated. In addition, the composition of the semiconductor structures can be assessed in much the same manner as that discussed with respect to the characterization of the vitreous and/or aqueous humor measurements.
Phase Measurement and Imaging Systems:
Alternate preferred embodiments of the present invention are directed to imaging small biological objects or features with light. These embodiments can be applied to the fields of, for example, cellular physiology and neuroscience. These preferred embodiments are based on principles of phase measurements and imaging technologies. The scientific motivation for using phase measurements and imaging technologies is derived from, for example, cellular biology at the sub-micron level which can include, without limitation, imaging origins of dysplasia, cellular communication, neuronal transmission and implementation of the genetic code. The structure and dynamics of sub-cellular constituents cannot be currently studied in their native state using the existing methods and technologies including, for example, x-ray and neutron scattering. In contrast, light based techniques with nanometer resolution enable the cellular machinery to be studied in its native state. Thus, preferred embodiments of the present invention include systems based on principles of interferometry and/or phase measurements and are used to study cellular physiology. These systems include principles of low coherence interferometry (LCI) using optical interferometers to measure phase, or light scattering spectroscopy (LSS) wherein interference within the cellular components themselves is used, or in the alternative the principles of LCI and LSS can be combined in systems of the present invention.
The preferred embodiments for phase measurement and imaging systems include actively stabilized interferometers, isolation interferometers, common path interferometers and interferometers that provide differential measurements. The embodiments directed to differential measurement systems include two-point heterodyne interferometers and dual beam interferometers. Embodiments using common path interferometers can include phase contrast microscopy using spatial light modulation.
Optical low coherence interferometry (LCI) has found many applications in the study of biological media. The most widely used LCI technique is optical coherence tomography (OCT), which images the 2D or 3D backscattering profile of a biological sample. The LCI technique has been described by Drexler, W. et al., in “In vivo ultrahigh-resolution optical coherence tomography,” Optics Letters, Vol. 24, No. 17, pages 1221-1223, the entire teachings of which are incorporated herein by reference. OCT has a depth sensitivity limited by the coherence length of the light source used. Ultra-broadband sources have resolved features of size on the order of 1 micron.
Phase-sensitive low coherence interferometry is sensitive to sub-wavelength optical path changes in a sample. The primary difficulty in phase-sensitive LCI is phase noise due to optical path fluctuations in the arms of the interferometer. A laser beam of a different wavelength passing through a nearly identical optical path can be used to measure the interferometer phase noise, which is then subtracted from a similarly noisy sample signal to extract the real sample phase shifts. Other researchers have employed orthogonal laser polarizations along a common optical path to measure differential phase contrast or birefringence with high phase sensitivity. In both techniques, the reference arm path is scanned and computer calculations are required to extract the phase (via a Hilbert transform) from the resulting fringe data; in addition, a phase unwrapping algorithm must be used to eliminate the 2π ambiguity in the phase measurements. The fringe scanning and information processing procedures substantially reduce the speed of the measurement, and may increase noise.
Systems Including Actively Stabilized Interferometers:
Preferred embodiments of the present invention use LCI methods in which active stabilization of an interferometer by a reference beam allows continuous detection of very small phase shifts with high bandwidths and minimal computer processing. Reference beam locking to an arbitrary phase angle gives a direct sample phase measurement without reference arm scanning. Preferred embodiments provide two-dimensional and three-dimensional phase imaging.
Preferred embodiments rely on active stabilization of a Michelson interferometer by a reference laser beam. A schematic of a preferred embodiment of an actively stabilized interferometer 300 is shown in
The phase difference between the two interferometer arms is modulated sinusoidally:
φ=ψ+ψd sin(Ωt) (16)
where ψ=k(L1−L2)=kΔL is the phase difference between the two arms, ψd<2π is the modulation depth, and Ω is the modulation frequency. The detected interferometer signal is given by a coherent addition of beams from the two interferometer arms:
I=I1+I2+2(I1I2)1/2 cos φ (17)
The nonlinear relation between I and φ results in a detected signal with frequency components at many harmonics of modulation frequency Ω. The first (IΩ) and second (I2Ω) harmonic terms are given by
IΩ=4J1(ψd)(I1I2)1/2 sin ψ sin(Ωt) (18)
I2Ω=4J2(ψd)(I1I2)1/2 cos ψ cos(2Ωt) (19)
Demodulation of IΩ and I2Ω at Ω and 2Ω, respectively, is performed via a mixer 316 or lock-in amplifier, and the two signals are amplified to give equal amplitudes as a function of ψ:
V1=V0 sin ψ (20)
V2=V0 cos ψ (21)
Using analog or digital circuits, the linear combination Vθ is calculated with θ as a time varying parameter:
Vθ=cos θ*V1−sin θ*V2=V0 sin(ψ−θ) (22)
This signal is used as an error signal to lock the interferometer to any zero crossing with positive slope. Vθ(t) is integrated, filtered, and amplified before being fed back to the phase modulator (high frequency) and path length modulator (low frequency) to actively cancel interferometer noise. The linear combination Vθ(t) is used as the error signal in order to allow locking to an arbitrary phase θ.
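The demodulation and error-signal arithmetic of Equations 18-22 can be modeled digitally as follows; in the embodiment this is performed by a mixer or lock-in amplifier and analog electronics, so the code below is only an illustrative sketch with assumed sampling parameters and a simple moving-average low-pass filter.

    import numpy as np

    def lockin(signal, reference, fs, f_cut=1e3):
        # Multiply by the reference and low-pass filter (simple moving average)
        mixed = signal * reference
        window = max(1, int(fs / f_cut))
        return 2 * np.convolve(mixed, np.ones(window) / window, mode="same")

    def error_signal(detector, t, omega_mod, theta, fs):
        # V1 ~ V0*sin(psi) from the first harmonic, V2 ~ V0*cos(psi) from the second;
        # in practice the two are amplified to equal amplitude before being combined.
        v1 = lockin(detector, np.sin(omega_mod * t), fs)
        v2 = lockin(detector, np.cos(2 * omega_mod * t), fs)
        # Eq. 22: V_theta = cos(theta)*V1 - sin(theta)*V2 = V0*sin(psi - theta)
        return np.cos(theta) * v1 - np.sin(theta) * v2

Integrating, filtering, and amplifying the returned error signal before feeding it back to the phase modulator and path length modulator corresponds to the feedback loop described above.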
The stabilized interferometer may be combined with phase-sensitive low coherence interferometry as described herein. All of the system setups described herein may be implemented via free space optics or fiber optics. For clarity, the illustrations show free space optics implementations.
The schematic for a reference-beam stabilized interferometer for optical delay phase-sensitive LCI is shown in
The sample 382 is placed on a cover glass which has been anti-reflection-coated at the LC wavelength on the side in contact with the sample. The LC beam is focused by a microscope objective 380 through the glass and sample. Backscattered light is collected by the same optics and focused onto the detector 366. The detected signal is an autocorrelation of the backscattered field with time delay ΔL/c. It can be shown to display interference fringes at zero delay and at delays corresponding to twice the optical path lengths between pairs of scattering or reflective surfaces in the sample arm illustrated in
With ΔL≈2nd, the sample signal is demodulated by a mixer or lock-in amplifier at the modulation frequency to give a continuous measurement of sample phase. The interferometer lock phase θ may, in turn, be electronically varied in order to lock to a zero crossing in the demodulated low coherence signal. In this manner the time evolution of the interferometer lock phase is used as a direct measurement of the sample phase. This lock scheme has the advantage of being independent of the amplitude of the sample signal.
This system, in accordance with a preferred embodiment, resembles a dual-beam optical coherence tomography (OCT) technique in that an optical delay is prepared before the low coherence light enters the sample, and the detected signals are insensitive to changes in distance between sample and interferometer. In an alternative preferred embodiment, a Mach-Zehnder interferometer configuration to prepare the low coherence beam may also be used.
By introducing a variable attenuator into one or both of the interferometer arms, the relative amplitudes of the two time-delayed fields can be adjusted to optimize the interference signal.
A reference-beam stabilized phase-sensitive LCI schematic is shown in
The signal beam has a coherence length several times smaller than the cover glass thickness, in order to distinguish between sample and back surface reflections. The reference arm length is adjusted to give interference fringes from the sample, and as previously described the signal is demodulated to give the sample phase as illustrated in
Compared with the optical delay method described with respect to
Preferred embodiments can image sample phase over an area using two methods. In a preferred first method, the incident beam may be scanned in the X-Y direction on the sample, as in most implementations of OCT. In the embodiment including the reference-beam stabilized LCI, care must be taken to maintain the reference beam interferometer lock as the beam is scanned. In accordance with a second method, a charge-coupled device (CCD) or photodiode array may be used to detect the signals without scanning.
For CCD imaging, measurement of relative phase may be performed by analyzing a sequence of 4 images, each varying in phase from the previous by π/2.
For high bandwidth phase imaging the signals from a photodiode array may be individually demodulated at both the first and second harmonics; this allows phase at each pixel to be measured without ambiguity.
The higher sensitivity and bandwidth of the preferred embodiments of the present invention interferometric systems open new possibilities for the measurement of small optical phase shifts in biological or nonbiological media. For example, the preferred embodiments enable studies of cell membrane motions and fluctuations. Dual wavelength, phase-sensitive LCI has been used to observe cell volume regulation and membrane dynamics of a human colon cell culture. Recently, low-frequency oscillations in the cell membranes have been observed subsequent to the addition of sodium azide to the culture. The preferred LCI embodiments allow the study of membrane dynamics on smaller time scales, where thermally-driven fluctuations and mechanical vibrations may be more important. Two-dimensional imaging methods in accordance with a preferred embodiment of the present invention allow the study of membrane fluctuations in collections of interacting cells. Oscillations and correlations can provide information related to cell signaling.
Preferred embodiments of the present invention can be used for measurements of neuron action potentials. There is great interest in neuroscience for improved optical methods for noninvasively monitoring the electrical signals of neurons. Current methods rely on calcium-sensitive or voltage-sensitive dyes, which have a number of problems, including short lifetimes, phototoxicity, and slow response times.
It has been known for several decades that the action potential is accompanied by optical changes in nerve fibers and cell bodies. In addition, nerves have been shown to exhibit a transient increase in volume during excitation. These changes have been interpreted in terms of phase transitions in the cell membrane, and index shifts due to reorientation of dipoles in the cell membrane.
Phase-sensitive LCI methods in accordance with the preferred embodiments may be used to measure the optical and mechanical changes associated with the action potential. Increased bandwidths allow sensitive phase measurements on the ˜1 ms time scale of the action potential. Preferred embodiments of the present invention can be used to provide noninvasive long-term measurements of neural signals and provide the ability to image many neurons simultaneously. The embodiments help with analyzing the spatio-temporal patterning of neural activity, which is important to understanding the brain. Small (≈10^−4 rad) index shifts and membrane fluctuations that are known to accompany the action potential can be detected in preferred embodiments of the present invention that provide a high level of sensitivity, speed, and high bandwidth (>1 kHz). These embodiments use noise-canceling methods such as isolation, which prevents noise from entering; stabilization methods, which use feedback elements to cancel noise; differential measurement, which provides for noise cancellation without feedback; and common path interferometry, which minimizes noise influence.
The embodiments as described herein can be used in many medical applications. For example, cortex mapping can be performed during neurosurgery with an improvement in the speed and resolution compared with electrode methods of the prior art. Further, the preferred embodiments can be used to localize epileptic foci during neurosurgery. The embodiments can also enable the monitoring of the retinal nerve activity in an eye. Additional applications of the preferred embodiments of the present invention include two-dimensional and three-dimensional scanning due to the high speed provided by the embodiments; high dynamic range and DC rejection provided by the photodiode detectors; nanometer-scale imaging in cell biology; characterization of epithelial tissue and detection of vibrations of membranes, for example, but not limited to, auditory cells and blood vessels.
Systems Including Dual-Beam Interferometers:
A preferred embodiment of the present invention includes a fiber-based optical delay phase-sensitive low coherence interferometer for integration into a conventional light microscope. Simultaneous electrical and optical measurements can be performed in cultures of hippocampal neurons. A preferred embodiment includes an imaging system containing a photodiode array or a rapidly scanning beam. A method for optical excitation of neurons in combination with LCI measurements of action potentials can form an extremely useful new tool for investigating neural network dynamics, synaptic plasticity, and other fundamental problems in neuroscience.
Another embodiment applies phase-sensitive imaging techniques to brain slices and even neurons in vivo. Tracking and compensation for motions of the brain surface is a considerable challenge. Optical scattering limits the depth from which neuronal signals may be extracted, but a depth on the order of 100 microns may be possible.
Preferred embodiments of actively stabilized interferometers described herein before have included two wavelength systems wherein a first wavelength is used for stabilization and the second wavelength for phase measurement.
A collimated laser beam or low coherence light source is split into sample 586 and reference paths by beamsplitter 584. The sample beam passes through the sample 586 and lenses L1 (objective lens) 588 and L2 (tube lens) 590 before the final beamsplitter 592. Lenses L1 588 and L2 590 have focal lengths f1 and f2 respectively and constitute a microscope with magnification M=f2/f1. The lenses are aligned such that the distance between sample 586 and L1 588 is f1, the distance between L1 and L2 is f1+f2, and the image plane is located a distance f2 from L2.
The reference beam passes through two acousto-optic modulators 594 AOM1 and AOM2 which are driven by RF fields at frequencies ω1 and ω2, respectively. Irises are used to select the +1 order diffracted beam from AOM1 and −1 order from AOM2. Therefore, light at frequency ω0 incident on AOM1 exits the second pinhole at frequency ωR=ω0+Ω where Ω=ω1−ω2. This two AOM configuration is used in order to obtain a relatively low heterodyne frequency Ω on the order of 100 kHz. A low heterodyne frequency may be preferable for the use of high sensitivity photodetectors, and also facilitates optical alignment since Ω may be set equal to zero with only a very small change in beam direction. A single AOM may be used if a higher heterodyne frequency is desired. A processing device such as, for example, computer 609 and image display 611 are in communication with the system.
The frequency shifted reference beam is expanded by lenses L3 598 and L4 600 which are separated by a distance equal to the sum of their focal lengths. The signal and reference fields at the two image planes can be described in complex notation by
ES(x,y,t)=ES0(x,y)exp[i(φS(x,y,t)+φN,S(x,y,t)−ωt)] (23)
ER(x,y,t)=ER0(x,y)exp[i(φN,R(x,y,t)−(ω+Ω)t)] (24)
Here x and y are the transverse coordinates along the optical path, φS(x,y,t) is the sample phase under investigation, φN,S(x,y,t) and φN,R(x,y,t) represent interferometer noise in the sample and reference arms, and Es0(x,y), ER0(x,y) are the field amplitude profiles which may be, for example, but not limited to, Gaussian.
The sample phase φS(x,y,t) may be expressed in terms of the time-dependent refractive index distribution of the sample nS(x,y,z,t):
where z is the axial coordinate and the integration is carried out over the depth of the sample. Note the magnification factor M.
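Although the equation itself is not reproduced here, a relation of the standard form for phase accumulated along the optical path is consistent with this description; a sketch, assuming vacuum wavelength λ and the stated magnification M, is

φS(x,y,t)=(2π/λ)∫nS(x/M,y/M,z,t)dz

where the integral runs over the depth of the sample and the image-plane coordinates (x,y) map to sample coordinates (x/M,y/M).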
The intensity at the two image planes is given by
I±=|ES±ER|2=|ES0|2+|ER0|2±2|ES0||ER0|cos [φS(x,y,t)+φN,S(x,y,t)−φN,R(x,y,t)+Ωt] (26)
This heterodyne signal is detected by two photodiodes PD1 604 and PD2 606 located at positions (x1, y1) and (x2, y2). The light may be collected through optical fibers or pinholes. The AC components of the detected intensities are given by
I1(t)=2|ES0||ER0|cos [φS(x1,y1,t)+φN,S(x1,y1,t)−φN,R(x1,y1,t)+Ωt] (27)
I2(t)=−2|ES0||ER0|cos [φS(x2,y2,t)+φN,S(x2,y2,t)−φN,R(x2,y2,t)+Ωt] (28)
The phase difference between heterodyne signals I1 and −I2 is then measured by lock-in amplifiers or a phase detector circuit 608.
If one now assumes that interferometer noise is independent of transverse position, that is
φN,S(x1,y1,t)=φN,S(x2,y2,t) (30a)
φN,R(x1,y1,t)=φN,R(x2,y2,t) (30b)
then the measured phase difference is simply the difference in sample phase at the selected points:
Φ12(t)=φS(x1,y1,t)−φS(x2,y2,t) (31)
This method in accordance with a preferred embodiment of the present invention may be implemented with any number of photodetectors at the image planes subject only to physical constraints. A photodiode array or photodiode-coupled fiber bundle may be used to image the phase at many positions simultaneously. Any single detector may be chosen as a “reference” detector relative to which the phase differences at all other points are measured.
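A digital sketch of the phase-difference measurement of Equations 27-31 follows; the lock-in amplifier or phase detector circuit is modeled here with the analytic signal (Hilbert transform), which is an assumption for illustration rather than the hardware of the embodiment.

    import numpy as np
    from scipy.signal import hilbert

    def two_point_phase_difference(i1, i2):
        # Phase of heterodyne signal I1 relative to -I2 (the minus sign follows Eq. 28).
        # Interferometer noise assumed equal at the two detectors cancels in the
        # subtraction, leaving phi_S(x1,y1,t) - phi_S(x2,y2,t) as in Eq. 31.
        phase1 = np.unwrap(np.angle(hilbert(i1)))
        phase2 = np.unwrap(np.angle(hilbert(-np.asarray(i2))))
        return phase1 - phase2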
The schematic for the imaging Mach-Zehnder heterodyne interferometer is shown in
of light which has passed through a sample 673.
The optical layout is similar to the two-point Mach-Zehnder heterodyne interferometer, described with respect to
The time-dependent intensity distribution at the CCD image plane is given by
I−(x,y,t)=|ES−ER|2=|ES0|2+|ER0|2−2|ES0||ER0|cos [φS(x,y,t)+φN,S(x,y,t)−φN,R(x,y,t)+Ωt] (32a)
Stroboscopic phase shift interferometry is used to image this heterodyne fringe pattern in a phase sensitive manner. This requires “gating” of the detection at the CCD and can be performed in several ways. An intensified CCD can be gated by controlling the intensifier voltage. A large-aperture electro-optic cell in front of the CCD can be used as a fast shutter. In the system illustrated in
A photodiode aligned (via fiber optic if necessary) to the first image plane is used to obtain the following signal as in the two-point heterodyne interferometer
I1(t)=2|ES0||ER0|cos [φS(x1,y1,t)+φN,S(x1,y1,t)−φN,R(x1,y1,t)+Ωt] (32b)
Gating signals are then derived from heterodyne signal I1 as follows. An electronic comparator outputs “high” when the heterodyne signal is positive with positive slope. This corresponds to a gate signal with phase 0. Similar signals at phase shifts of π/2, π, and 3π/2 can be generated by triggering when the heterodyne signal is positive with negative slope, negative with negative slope, and negative with positive slope, respectively. The heterodyne 687 and gate 688-691 signals are shown in
The gate signals are then used in succession to gate the CCD detector. The sequence is controlled by a computer 685. Light is allowed to fall onto the CCD only when the gate signal is “high”. Four exposures are captured by the CCD, one for each gate signal, each integrated over an equal number of heterodyne periods so that the four exposures have comparable intensities. The four measured fringe images are called I0(x,y), Iπ/2(x,y), Iπ(x,y), I3π/2(x,y). Then the relative sample phase can be calculated by
due to the phase shifts between each of the four frames. Other methods for phase shifting and calculating the phase can also be used, for example, those described by Creath, K., “Phase-Measurement Interferometry Techniques,” in Progress in Optics, Vol. XXVI, E. Wolf, Ed., Elsevier Science Publishers, Amsterdam, 1988, pp. 349-393, the entire teachings of which are incorporated herein by reference. Furthermore, the interferometer noise, insofar as it is constant over the image plane, is cancelled via reference to the correlated noise heterodyne signal I1(t). Stroboscopic phase imaging can be considered a form of “bucket” integration, wherein the integration is performed over time with reference to a common heterodyne reference signal.
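Since the expression for the relative phase is not reproduced above, the following Python sketch (not part of the original disclosure) applies the standard four-step estimator to four synthetic fringe images as an assumed form of that calculation.

```python
import numpy as np

def four_step_phase(I0, I90, I180, I270):
    """Standard four-step phase-shifting estimator (an assumed form; the
    patent's own expression is not reproduced in the text above)."""
    return np.arctan2(I270 - I90, I0 - I180)

# Synthetic fringe images for a known test phase map phi(x, y).
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
phi = 1.2 * np.exp(-(x**2 + y**2) / 0.2)          # placeholder phase object
frames = [1 + 0.8 * np.cos(phi + d) for d in (0, np.pi/2, np.pi, 3*np.pi/2)]

phi_est = four_step_phase(*frames)
print(np.max(np.abs(phi_est - phi)))              # ~0 (phi stays within +/- pi)
```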
Stroboscopic phase imaging can also be performed with a dual-beam heterodyne interferometer in accordance with a preferred embodiment of the present invention. This requires a low coherence source at a wavelength, such as 850 nm, that can be detected by a CCD. It also requires a modification of the sample beam delivery system to an imaging system as shown in
Preferred embodiments of the present invention include a dual-beam reflection interferometer. A preferred embodiment of dual-beam reflection interferometry includes an isolated dual-beam heterodyne LCI. The heterodyne dual beam interferometer 620 is shown in
A low coherence source 622 such as a superluminescent diode (SLD) or multimode laser diode is coupled into a single mode optical fiber which enters a vacuum chamber 640 through a vacuum feedthrough. Enclosed within the chamber is a vibration-isolated free space heterodyne Michelson interferometer. The low coherence beam is launched from the fiber via a collimating lens and split by a beamsplitter 626. The arms of the interferometer (called 1 (656) and 2 (658)) contain acousto-optic modulators (AOM1 628 and AOM2 634) driven by RF fields at frequencies ω1 and ω2. In each arm the positive-shift 1st order diffracted beam is selected by a pinhole. The light is focused by the lenses 630 and 636 and then reflected by the mirrors M1 632 and M2 638 back to the AOMs. The lenses are placed at a distance of one focal length away from both the AOMs and the mirrors. This design allows the AOM retroreflection alignment to be maintained across the spectrum of the low coherence (broad spectrum) light.
Since the AOMs are operated in double-pass configuration, the incident light at frequency ω0 is shifted to ω0+2ω1 and ω0+2ω2 after passing through arms 1 (656) and 2 (658), respectively. The frequency difference between two beams having passed through arms 1 and 2 is Ω=2(ω1−ω2).
One of the mirrors, M1 632, is attached to a translation stage to adjust the optical path length difference Δl=l1−l2 between the two arms. The combined beam after passing through the two arms can be considered as a beam with two pulses separated by a time delay Δl/c. The reflections from the two interferometer arms are focused back into the fiber by the collimator 660 and exit the chamber 640.
An optical circulator is used to separate the backreflected beam from the incident beam. The light is launched as a free space beam by another collimator 662 and focused on the sample 642, passing first through a partially reflective surface 664. The backscattered light is collected by the same collimator and detected by a photodiode 650 after passing through another optical circulator. The optical delay in the Michelson interferometer is adjusted to match the optical path difference Δs between the reflection from the sample S and the reflection from the reference surface. When this condition Δl=Δs holds to within the coherence length lc of the source, a heterodyne signal at frequency Ω is detected due to interference between light reflected from surfaces S 642 and R 664. The phase of the heterodyne signal, measured relative to the local oscillator provided by mixing and doubling the two AOM driving fields, represents a measure of the phase of the sample reflection relative to the reference reflection.
The length Δs must be substantially longer than the coherence length lc, in order to keep a heterodyne signal from being created by a single surface reflection. It is also assumed that the sample thickness is smaller than the glass thickness Δs, so that signals are referenced to the glass surface, not scattering from the sample.
A quantitative description of the interferometer follows. Consider first a monochromatic source with wave number k0. The electric field amplitude at the input of the Michelson interferometer can be described by
Ei=Ai cos(k0z−ω0t) (33)
The electric field returning from the beamsplitter after passing through the AOMs is given by a sum of fields from the two arms of the interferometer:
Em=E1+E2=Ai cos(2k1l1−(ω0+2ω1)t)+Ai cos(2k2l2−(ω0+2ω2)t) (34)
where k1=k0+2ω1/c and k2=k0+2ω2/c
This dual beam is now incident on the sample. Let s1 be the optical distance to the reference reflection and s2 the optical distance to the sample reflection. If the reflectivities of the reference and sample reflections are R1 and R2 respectively, and multiple reflections are ignored, the field reflected from the sample is:
Es=Ai√R1 cos [2k1(l1+s1)−(ω0+2ω1)t]+Ai√R1 cos [2k2(l2+s1)−(ω0+2ω2)t]+Ai√R2 cos [2k1(l1+s2)−(ω0+2ω1)t]+Ai√R2 cos [2k2(l2+s2)−(ω0+2ω2)t] (35)
The detected intensity iD is proportional to the field amplitude squared:
where optical frequency oscillation terms have been ignored and the wavenumber shift Ω/c due to the frequency shift is assumed to be negligible compared with the inverse of the path length differences Δs and Δl.
To model a low coherence (broadband) source, it is assumed that it has a Gaussian power spectral density with center wavenumber k0 and full width at half maximum (FWHM) spectral bandwidth Δk
The detected intensity for low coherence radiation is found by integrating the monochromatic result over the spectral distribution:
ĩD=∫iD(k)S(k)dk=(R1+R2)[1+F(Δl)cos(2k0Δl−Ωt)]+2√(R1R2)[2F(Δs)+F(Δl+Δs)cos(2k0(Δl+Δs)−Ωt)+F(Δl−Δs)cos(2k0(Δl−Δs)−Ωt)] (38)
where
is the source coherence function for the chosen spectral density. Here lc is the coherence length.
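As the explicit form of the coherence function is not reproduced above, the following Python sketch (not part of the original disclosure) uses a Gaussian envelope as one common convention to show how terms with a path mismatch much larger than lc are suppressed; the coherence length value is a placeholder.

```python
import numpy as np

def F(delta, lc):
    # An assumed Gaussian coherence envelope (one common convention); the
    # exact form used in the text is not reproduced above.
    return np.exp(-(delta / lc) ** 2)

lc = 15e-6                                # ~15 um coherence length (placeholder)
for mismatch in (0.0, 5e-6, 50e-6, 500e-6):
    print(mismatch, F(mismatch, lc))      # ~1 when matched, ~0 when mismatch >> lc
```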
If the path length difference is chosen such that Δl=Δs to within the coherence length, and Δl>>lc, then the dominant time-dependent signal is of the form
ĩD(AC)=2√(R1R2)F(Δl−Δs)cos(2k0(Δl−Δs)−Ωt) (41)
By measuring the phase of this signal relative to the local oscillator 652 LO=cos(Ωt), changes in Δs can be measured. Note that isolation of the Michelson interferometer is required to keep phase noise from influencing the measurement through changes in Δl.
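For illustration (not part of the original disclosure), the following Python/NumPy sketch demodulates a signal of the form of Equation 41 against quadrature copies of the local oscillator and converts the recovered phase 2k0(Δl−Δs) into a path mismatch; the wavelength, amplitude, and sampling values are placeholder assumptions.

```python
import numpy as np

# Placeholder parameters for quadrature demodulation of Eq. (41).
lam0 = 1.55e-6                       # assumed source center wavelength, m
k0 = 2 * np.pi / lam0
Omega = 2 * np.pi * 100e3
fs = 5e6
t = np.arange(0, 2e-3, 1 / fs)

mismatch = 20e-9                     # dl - ds: a 20 nm path mismatch (placeholder)
amp = 0.2                            # stands in for 2*sqrt(R1*R2)*F(dl - ds)
signal = amp * np.cos(2 * k0 * mismatch - Omega * t)      # Eq. (41)

# Mix against quadrature copies of LO = cos(Omega*t) and low-pass by averaging.
I = np.mean(signal * np.cos(Omega * t))
Q = np.mean(signal * np.sin(Omega * t))
phase = np.arctan2(Q, I)             # = 2*k0*(dl - ds), modulo 2*pi
print(phase / (2 * k0) * 1e9, "nm")  # ~20 nm
```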
Light from a low coherence source 702 is split into upper and lower paths by a fiber optic coupler 706. The upper path is similar to the dual-beam interferometer described hereinabove with respect to
A quantitative description of the interferometer with the case of a monochromatic source follows. The upper path field can be written as:
E1=Ai cos(2k0l1−(ω0+2ω1−2ω3)t)+Ai cos(2k0l2−(ω0+2ω2−2ω3)t) (42)
and the lower path is (again assuming the sample contains two reflections at positions s1 and s2):
E2=Ai√R1 cos [2k0s1−ω0t]+Ai√R2 cos [2k0s2−ω0t] (43)
The path lengths of the fiber optic cables have been assumed to be equal between the two arms. The mirror 740 associated with the AOM 736 with frequency ω3 may be translated to equalize the path lengths.
The AC component of the photodetector signal is given by
where Ω13=2(ω1−ω3) and Ω23=2(ω2−ω3). The polychromatic case for Gaussian spectral distribution gives:
ĩD=∫iD(k)S(k)dk∝√R1[F(l1−s1)cos(2k0(l1−s1)−Ω13t)+F(l2−s1)cos(2k0(l2−s1)−Ω23t)]+√R2[F(l1−s2)cos(2k0(l1−s2)−Ω13t)+F(l2−s2)cos(2k0(l2−s2)−Ω23t)] (45)
Suppose that to within the coherence length, l1≈s1 and l2≈s2, and furthermore Δl, Δs>>lc. Then the dominant terms are
ĩD∝√R1F(l1−s1)cos(2k0(l1−s1)−Ω13t)+√R2F(l2−s2)cos(2k0(l2−s2)−Ω23t) (46)
Next, these two frequency components are combined in a mixer and a bandpass filter selects the difference frequency Ω12=Ω13−Ω23=2(ω1−ω2):
X=√(R1R2)F(l1−s1)F(l2−s2)cos(2k0(l1−l2−(s1−s2))−Ω12t) (47)
A phase-sensitive detector then measures the phase of this signal relative to a local oscillator at Ω12 generated by mixing and doubling the AOM drive fields. The measured phase is φ=2k0(Δl−Δs).
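For illustration (not part of the original disclosure), the following Python/NumPy sketch mimics this mixing step: two tones at Ω13 and Ω23 carrying the phases of Equation 46 are multiplied, and the difference-frequency component at Ω12 is selected by demodulation; the frequencies and phases are placeholder values.

```python
import numpy as np

# Placeholder tones standing in for the two components of Eq. (46).
fs = 20e6
t = np.arange(0, 2e-3, 1 / fs)
O13 = 2 * np.pi * 1.10e6             # Omega13 (placeholder)
O23 = 2 * np.pi * 1.00e6             # Omega23 (placeholder)
O12 = O13 - O23                      # difference frequency Omega12

theta1 = 0.9                         # stands in for 2*k0*(l1 - s1)
theta2 = 0.3                         # stands in for 2*k0*(l2 - s2)
a = np.cos(theta1 - O13 * t)
b = np.cos(theta2 - O23 * t)

mixed = a * b                        # mixer output: sum and difference tones

# Select the difference-frequency tone by demodulating at Omega12.
z = np.mean(mixed * np.exp(1j * O12 * t))
print(np.angle(z))                   # ~0.6 = theta1 - theta2 = 2*k0*(dl - ds)
```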
The phase shifter is used to compensate interferometer noise which may have some effect on the phase measurement despite its differential nature. The phase of the photodiode signal component
√R1F(l1−s1)cos(2k0(l1−s1)−Ω13t) (48)
is measured and used as an error signal to lock the reflection from s1, via the phase shifter, to a constant phase.
In an embodiment using an actual sample, there will not be two discrete reflections but rather a distribution of scattering. By setting the reference arm positions, this interferometer measures the phase difference between light scattered from two different depths.
The embodiment described with respect to
Preferred embodiments of the present invention minimize and preferably eliminate the noise and include an optical-referenced measurement to reduce drifts and noise, or use an AOM tuning voltage provided by a precision voltage source or alternatively use polarization-maintaining fiber optic components.
This dual-referenced interferometry embodiment combines the transverse reference point and the reflection-referenced phase measurement. Ideally, the reference point and the sample object are located on the same glass. This has the additional benefit of canceling any tilt, vibration and/or expansion effects in the cover glass. As illustrated in
φ(t)=(φ1−φ1′)−(φ2−φ2′) (49)
A preferred embodiment of the dual-referenced interferometer includes photodetectors that have similar gains and frequency responses to cancel the noise. Further, polarization maintaining components and fibers can be used to address polarization effects in the fiber. In particular, polarization mode dispersion in the optical circulator creates a variable delay between the two orthogonal polarizations, which leads to amplitude and phase noise; this can be mitigated by using polarization maintaining components. In a preferred embodiment, a digital bandpass filter is used to address harmonics found in the optical signals.
The phase change corresponding to the displacement of the mirror 888 is graphically illustrated in
Preferred embodiments of the present invention are directed to a system including a dual-beam low coherence interferometer used to perform non-contact measurements of small motions of weakly reflecting surfaces such as nerve displacement motions during an action potential. Nerve fibers exhibit rapid outward lateral surface displacements during the action potential. This “swelling” phenomenon, which is generally attributed to water influx into axons, was first observed in crab nerve and later in a number of other invertebrate and vertebrate preparations. All observations of nerve swelling to date have relied on optical or piezoelectric sensors placed in physical contact with the nerve. An optical non-contact method for measuring nerve displacements can eliminate contact-related artifacts and permit the imaging of activity of multiple nerves simultaneously in their native state.
A preferred embodiment of the present invention includes a dual-beam heterodyne low coherence interferometer and its application to measuring the swelling effect in a lobster nerve bundle. Prior interferometric attempts to observe nerve swelling were unsuccessful: because of low sensitivity, they failed to detect any movement associated with the action potential in frog or lobster nerves. More recently, the refractive index changes in a nerve during the action potential have been successfully measured using a transmitted light interferometer.
Measurement of nerve displacements, of the order of nanometers over millisecond time scales, requires a fast and stable interferometric measurement system capable of recording from surfaces of low reflectivity. In accordance with a preferred embodiment, a dual-beam system, composed of both single-mode fiber and free space elements, is shown in
The output from each of the two ports of the Michelson interferometer is a dual beam composed of two low coherence fields with different frequency shifts and a variable delay. One of the dual beams is incident on the nerve chamber setup (detailed in
ΔLS and ΔLR are the round trip optical path differences between reflections from surfaces 1 and 2 of the sample and reference gaps, respectively. The various components are adjusted so that the three path lengths ΔL, ΔLS and ΔLR are all equal to within the source coherence length. When this condition is satisfied the photodetectors 932, 962 (New Focus 2011) record heterodyne signals at frequency Ω=2(ω1−ω2)=200 kHz due to interference between: (1) light which traverses arm 1 of the Michelson interferometer and reflects from surface 2 of the sample (or reference gap) and (2) light which traverses arm 2 of the Michelson interferometer and reflects from surface 1 of the sample (or reference gap). The phase difference between the two heterodyne signals (up to a multiple of 2π) is φ(t)=k0[(ΔLS−ΔL)−(ΔLR−ΔL)]=k0(ΔLS−ΔLR), where k0 is the central wave number of the source. The quantity most susceptible to phase noise, the Michelson path delay ΔL, is cancelled in this differential measurement method. Polarization independent optical circulators 926, 930, 960 are used to maximize detected power and keep the reflected light from re-entering the Michelson interferometer. A polarization controller (not shown) is used to minimize the effects of polarization-mode dispersion in the fiber optical components.
To measure the phase difference φ(t) the outputs of the photodetectors are digitized by a 12-bit A/D card (National Instruments PCI-6110) at 5M samples/s. A sequence of instructions in a computer calculates the phase difference between the two signals via a Hilbert transform, and expresses the phase shift as a relative surface displacement d(t)=φ(t)/2k0.
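For illustration (not part of the original disclosure), the following Python/SciPy sketch performs the Hilbert-transform phase comparison described above on synthetic heterodyne records and converts the result to a displacement d(t)=φ(t)/2k0; the source wavelength and test motion are placeholder assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def relative_displacement(sig_sample, sig_ref, lam0=850e-9):
    """Differential phase of two heterodyne records via the Hilbert transform,
    expressed as a surface displacement d(t) = phi(t) / (2*k0). A sketch of
    the processing described above; the source center wavelength lam0 is an
    assumed placeholder, not a value taken from the text."""
    k0 = 2 * np.pi / lam0
    phase_s = np.unwrap(np.angle(hilbert(sig_sample)))
    phase_r = np.unwrap(np.angle(hilbert(sig_ref)))
    return (phase_s - phase_r) / (2 * k0)   # common-mode phase noise cancels

# Synthetic test: 200 kHz heterodyne carriers sampled at 5 MS/s, with a 3 nm,
# 300 Hz sinusoidal displacement encoded on the sample-channel phase.
fs = 5e6
t = np.arange(0, 10e-3, 1 / fs)
Omega = 2 * np.pi * 200e3
k0 = 2 * np.pi / 850e-9
d_true = 3e-9 * np.sin(2 * np.pi * 300 * t)
sig_ref = np.cos(Omega * t)
sig_sample = np.cos(Omega * t + 2 * k0 * d_true)

d_est = relative_displacement(sig_sample, sig_ref)
print(np.max(np.abs(d_est - d_true)))       # small, apart from end effects
```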
To verify that the interferometer performs a displacement measurement, the nerve setup was replaced with a planar Fabry-Perot cavity with cavity spacing sinusoidally modulated at 300 Hz frequency and 27 nm amplitude using a piezo transducer. Dual-beam interferometer measurements of amplitude and frequency were in good agreement with values determined by monitoring the transmission of a 632.8 nm helium-neon laser beam as the cavity was scanned over several microns.
In accordance with a preferred embodiment, the walking leg nerve (˜1 mm diameter, ˜50 mm long) from an American lobster (Homarus americanus) was dissected and placed on a nerve chamber machined from acrylic as illustrated in
The displacements were observed in about half of the nerve preparations and varied in amplitude from 0 to 8 nm for 5 mA stimulation. The large variability may reflect differences in the nerves themselves or in the preparation procedure. Similar displacement amplitudes have also been reported using nerves of crab and crayfish. In a recent study of lobster nerve swelling using an optical lever, displacements about 10 times smaller were observed, which may reflect artifacts of that technique.
In order to control for artifacts such as thermal expansion from ohmic heating of the nerve due to the stimulus current, the peak electrical and displacement signals of a single nerve were measured as the stimulus current was varied (as illustrated in
Illustrated in
The system shown and described in connection with
Illustrated in
Quantitative Phase Microscopy Using Spatial Light Modulation:
In another aspect, the present invention provides microscopy systems and methods that combine phase contrast microscopy and phase shifting interferometry. The systems and methods of the present invention can be applied in a transmission geometry and reflection geometry. In various embodiments, the methods and systems utilize a common optical path for waves of different spatial frequency and shift the phase between waves of different spatial frequency that originate from the same point on a sample.
The phase of optical fields has been used for many years to provide the sub-wavelength accuracy needed in many applications. For example, biological systems, which are inherently weak scatterers, have been rendered visible using principles of phase contrast microscopy. Interferometry is one way to access the phase information and, therefore, various interferometric techniques have been developed over the years with the purpose of retrieving the phase associated with a specimen. Techniques such as phase contrast and Nomarski microscopy, although very useful and popular methods, use the optical phase just as a contrast agent and do not provide quantitative information about its magnitude.
Phase shifting techniques, on the other hand, are able to determine the phase information in a quantitative manner, and various interferometric schemes have been proposed over the past decades. Differential phase contrast techniques based on polarization optics have been interfaced with common optical coherence tomography. The bucket integration technique, as a particular case of phase shifting interferometry, has also been used for two-dimensional phase imaging. Most of these interferometers, however, require creating two physically separated beams, which makes them susceptible to uncorrelated environmental noise. This problem often requires specific measures to actively cancel the noise. Phase lock loops have been used for this purpose. What is needed is a microscopy system and method that reduces or eliminates uncorrelated noise from the interferometric signal.
The systems and methods of the present invention interfere different spatial frequencies of the light originating from a sample using a common optical path. In various embodiments, the present invention provides systems and methods that provide a phase image of a sample that is substantially free of uncorrelated environmental phase noise. In addition, in preferred embodiments, the methods of the present invention can obtain a phase image even in the presence of phase singularities when a low coherence illumination source is used.
In various preferred embodiments, the present invention provides an instrument that is not sensitive to environmental phase noise and that can provide highly accurate and stable phase information over an arbitrary exposure time. In various embodiments, the invention is based on the description of an image as an interference pattern. One example of such a description is Abbe's imaging theory. Each point in the image plane is considered a superposition (interference) of waves propagating at different angles with respect to the optical axis. If we consider the zero-order scattering from the sample, as the reference of an interferometer, the image can be viewed as an interference between the zero-order field and the fields traveling off the optical axis.
The amplitude of the electric field in the image plane and the intensity in the image plane can be represented as:
Eimage=exp(i·φ0)+exp(i·φ1) (50)
Iimage∝cos(φ1−φ0) (51)
where Eimage represents the amplitude of the electric field of the light at a point on the imaging plane, φ represents the phase of the light, and Iimage represents the intensity of the light at a point on the imaging plane, and where the subscripts 0 and 1 represent, respectively, the zero-order component and the higher order components in Equations 50-56, for example, and unitary amplitudes have been considered. Equation 51 illustrates that for variations of phase, Δφ=φ1−φ0, which are small compared to π across the sample, the intensity in the image plane varies slowly, which is equivalent to saying that the image lacks contrast. However, by shifting the phase of the zero-order φ0 by π/2, the image intensity distribution can be represented as:
I
image∝sin(φ1−φ0) (52)
Equation 52 illustrates that now the intensity in the image plane is very sensitive around the value Δφ=0, which is equivalent to saying the image presents significant contrast even for purely phase objects.
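A quick numerical check of this contrast argument (not part of the original disclosure; the value of the phase variation is arbitrary):

```python
import numpy as np

dphi = 0.05                  # a small phase variation, in radians
print(1 - np.cos(dphi))      # ~1.2e-3: the cosine response of Eq. 51 barely moves
print(np.sin(dphi))          # ~5.0e-2: the sine response of Eq. 52 is ~40x larger
```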
In addition to improving intensity contrast, shifting the phase of the zero-order light component can also provide quantitative information about the phase distribution of the object. For example, consider shifting phase of the zero-frequency component by an amount δ that can be changed controllably. The total electric field E(x,y)image and the intensity, Iimage(x,y;δ) at any point (x,y) in the image plane, keeping in mind that the zero-order field is constant over the image plane, can be represented by:
E(x,y)image=E0exp[i·(φ0+δ)]+E1(x,y)exp[i·φ1(x,y)] (53)
Iimage(x,y;δ)∝I0+I1(x,y)+2·√(I0·I1(x,y))·cos [φ1(x,y)−φ0+δ] (54)
where I0 is the intensity associated with the low frequency components and I1 is the intensity associated with the high frequency components.
Herein, we generally refer to the order of the light coming from the sample. However, when an SLM is used, in practice, it is very difficult to controllably shift only the phase of the zero-order component of the light from the sample. Accordingly, in preferred embodiments, we shift the phase of the low frequency spatial components, which contain all the zero-order light. It is to be understood, however, that the systems and methods of the present invention can be practiced by shifting the phase of only the zero-order component and that shifting the phase of other orders is not required.
By varying δ, one can obtain Δφ(x,y)=φ1(x,y)−φ0, and the expression:
The phase associated with the sample φ can be obtained using Equations 53 and 54, using, for example, a phasor representation of the total electric field E(x,y). The phase associated with the object can be represented by:
In Equation 56, β=I1/I0 represents the ratio between the intensities associated with the high and low spatial frequency components, respectively. Values of β can be obtained, for example, from Iimage(x,y;δ) at four values of δ.
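Because Equations 55 and 56 are not reproduced above, the following Python/NumPy sketch (not part of the original disclosure) writes out the phasor reconstruction directly: it assumes the standard four-step estimator for Δφ and treats β as the field-amplitude ratio |E1|/|E0| (the square root of the intensity ratio quoted above). These choices are assumptions, not the patent's verbatim expressions.

```python
import numpy as np

def sample_phase(I_d0, I_d90, I_d180, I_d270, beta):
    """Assumed reconstruction: four-step estimate of dphi = phi1 - phi0, then
    the phase of the total field E = E0*(1 + beta*exp(i*dphi)) relative to the
    zero-order field, with beta = |E1|/|E0| (assumed amplitude ratio)."""
    dphi = np.arctan2(I_d270 - I_d90, I_d0 - I_d180)
    return np.arctan2(beta * np.sin(dphi), 1 + beta * np.cos(dphi))

# Synthetic check with a known dphi map and beta = 0.5 (placeholder values).
x, y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
dphi_true = 0.8 * np.exp(-(x**2 + y**2))
beta = 0.5
frames = [1 + beta**2 + 2 * beta * np.cos(dphi_true + d)
          for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]

phi_total = sample_phase(*frames, beta)
expected = np.arctan2(beta * np.sin(dphi_true), 1 + beta * np.cos(dphi_true))
print(np.max(np.abs(phi_total - expected)))       # ~0
```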
In various embodiments, the systems and methods of the invention are based on a transmission geometry for the microscopy system.
A wide variety of devices can be used to control the SLM and acquire images of the sample. For example, in various embodiments, a computer 1250 controls the modulation of the SLM 1216, incrementing δ by π/2, and also preferably synchronizes the image acquisition of the detector 1230. The operation of Equation 55 can be performed in real time; thus the speed of the displayed phase images is limited in preferred embodiments only by the acquisition time of the detector 1230 and the refresh rate of the SLM 1216.
A wide variety of illumination modes and illumination sources can be used to provide illumination 1260 for a transmission geometry of the present invention. The illumination can be performed in either bright or dark field mode. In addition, there are no specific requirements on the coherence properties of the source used. The system and methods of the present invention can use laser light, partially coherent radiation, or “white” light such as, for example, from a discharge lamp. The illumination source should, however, have good spatial coherence.
As illustrated in
In various embodiments, the systems and methods of the invention are based on a reflection geometry for the microscopy system. The difference between the transmission and the reflection geometry is in the illumination geometry. The transmission geometry can be transformed into a reflection geometry.
Referring to
A wide variety of devices can be used to control the SLM and acquire images of the sample. For example, in various embodiments a computer 1350 controls the modulation of the SLM 1316, incrementing δ by π/2, and also preferably synchronizes the image acquisition of the detector 1330. The operation of Equation 55 can be performed in real time; thus the speed of the displayed phase images in preferred embodiments is limited only by the acquisition time of the detector 1330 and the refresh rate of the SLM 1316.
A reflection geometry in accordance with a preferred embodiment of the present invention can also include illumination 1360 such as used, for example, in a transmission geometry. Suitable transmission illumination modes include, but are not limited to, bright field and dark field modes. As in a transmission geometry in accordance with the present invention, there are no specific requirements on the coherence properties of the illumination source used. The system and methods of the present invention can use laser light, partially coherent radiation, or “white” light such as, for example, from a discharge lamp. The illumination source, however, should have good spatial coherence.
In a reflection geometry in accordance with the present invention, the interfering low frequency and high frequency fields are also components of the same beam and thus share a common optical path. The low frequency and high frequency components are thus affected in a similar fashion by phase noise, and various embodiments of the systems of the present invention can be considered as an optically noise-free quantitative phase microscope. For example, in various embodiments, a phase sensitivity of λ/1,000 is possible over arbitrary time scales of acquisition.
In various embodiments, the present invention provides a phase contrast microscopy system utilizing spatial light modulation that comprises an imaging module and a phase imaging module. The imaging and phase imaging modules can, for example, be built independently, facilitating their use in existing optical microscopes.
The phase imaging head 1450 comprises a lens L3 1454 used to form the Fourier transform of the image onto a spatial light modulator (SLM) 1456. A central region of the SLM 1456 can apply, relative to the remainder of the SLM, a controllable phase shift δ to the central zone of the incident beam 1460 and reflects the entire incident beam 1460. The central zone of the incident beam 1460 corresponds to the low spatial frequency waves. The lens L3 1454 can also serve as the second lens of a 4-f system, creating a final image on a detector 1470, such as, for example, a CCD, using a beam splitter BS 1472.
Control of the SLM and the acquisition of images of the sample can be accomplished, for example, using a computer 1480 which controls the modulation of the SLM 1456, incrementing δ by π/2, and also preferably synchronizes the image acquisition of the detector 1470. The computer can be a stand-alone computer, for example, provided with the phase imaging head, or the “computer” can comprise instructions, in accordance with the present invention, that are resident on a computer associated with the microscope. The operation of Equation 55 can be performed in real time; thus the speed of the displayed phase images is limited in preferred embodiments only by the acquisition time of the detector 1470 and the refresh rate of the SLM 1456.
In various embodiments, the transverse resolution of the systems of the invention can be improved by the addition of a 4-f system. A 4-f system can be used in both transmission geometries and reflection geometries. In addition, a 4-f system can be used in systems including a calibration system. A 4-f system facilitates taking advantage of other Fourier operations performed to the image.
A 4-f system can be added to both the various transmission geometry embodiments of the present invention and the reflection geometry embodiments of the present invention. A reflection source (such as, for example, the second illumination source 1302 at
The systems and methods for phase contrast microscopy utilizing spatial light modulation have a wide variety of applications. These systems and methods can, for example, be used to image micrometer and nanometer scale structures. Important classes of applications lie in the investigation of inter-cellular and intra-cellular organization, dynamics and behavior. The stability provided by using a common optical path for both low and high frequency components and the ability to perform measurements in transmission and backscattering modes make various preferred embodiments of the present invention suitable for investigating single cells and ensembles of cells over extended periods of time, from a few hours to days. Thus, in various embodiments, the phase imaging provided by preferred embodiments of the present invention is used to provide information about slow dynamical processes of cells, such as, for example, the transformations in size and shape of a living cell during a life cycle, from mitosis to cell death.
In various preferred embodiments, the methods and systems of the present invention are used to investigate with nanometer precision the process of cell separation after division, and provide information about the dimensions, properties or both of the cell membrane. A phenomenon that has received particular attention lately is programmed cell death (apoptosis). Given that apoptosis can be controlled in the laboratory, in various preferred embodiments, the methods and systems of the present invention are used to investigate the transformation induced in the cell during this process. In various preferred embodiments, the methods and systems of the present invention are used to investigate and detect differences in the life cycles of various types of cells (e.g., cancerous vs. normal).
It is expected that confluent layers of cells have to a certain extent mutual interactions that can lead to a collective mechanical behavior. In various preferred embodiments, the methods and systems of the present invention are used to investigate this mutual interaction by, for example, performing cross correlations between different points of a phase image obtained in accordance with the present invention.
The phase imaging provided by preferred embodiments of the present invention can also be used to provide information about fast dynamical processes of cells, such as, for example, responses to stimuli. For example, processes such as cell volume regulation are responses of the living cell to a biochemical stimulus. The time scale for this response may be anywhere from milliseconds to minutes and should be measurable with high accuracy using preferred embodiments of the systems and methods of the present invention. In various preferred embodiments, the methods and systems of the present invention are used to investigate the response of cells to biochemical stimulus and measure the mechanical properties of the cell structure (for example, cytoskeleton).
In various preferred embodiments, the methods and systems of the present invention are used to investigate cell structure information which has important implications, for example, in understanding the phenomenon of organelle transport inside the cell, as well as in creating artificial biomaterials. In various preferred embodiments, the methods and systems of the present invention are used to investigate cell structure by, for example, using mechanical vibrations to excite the cell membrane and measuring the amplitude of the cell membrane oscillations to, for example, relate them with the mechanical properties of the cell and cellular matter. Traditionally, magnetic or trapped beads are used to excite this motion. In various preferred embodiments, the methods and systems of the present invention are used to investigate cell structure using magnetic or trapped beads to excite mechanical vibrations, the photon pressure of a femtosecond laser pulse to cause mechanical excitation, or both.
One important class of applications is the investigation of intra-cellular organization and dynamics of cell organelles. In various preferred embodiments, the methods and systems of the present invention are used to investigate the transport of various particles inside a cell.
In addition to the diversity of biological investigations, preferred embodiments of the present invention are suitable for industrial applications, such as, for example, the investigation of semiconductor nano-structures. The semiconductor industry lacks a fast and reliable means of assessing wafer quality during the nano-fabrication process. In various embodiments, the methods and systems of the present invention are used to provide nanometer scale information about semiconductor structures in, for example, a quantitative manner. In preferred embodiments, nanometer scale information is provided in a measurement time on the order of a second.
Referring to
A lens L3 1630 is used to form a Fourier transform of the image onto a first spatial light modulator (SLM) 1632. The SLM 1632 is used to apply a controllable phase shift δ to the central zone of the incident beam 1634. In one embodiment, the lens L3 1630 serves as a second lens of a 4-f system creating a final image on a first detector 1636 (here illustrated as a CCD) using a second beam splitter 1638. In one embodiment, the system illustrated in
A variety of devices and schemes can be used to control the system of
In various embodiments, the systems and methods of the present invention include dynamic focus using, for example, a microlens. In various embodiments, the systems and methods of the present invention include parallel focus to, for example, image two or more points on a sample at the same time. In various embodiments, the systems and methods of the present invention include coherence function shaping to, for example, access in depth several points at the same time.
The phase contrast microscopy systems of the present invention utilizing spatial light modulation can be operated in two modes. In the first mode, referred to hereafter as the “amplitude mode,” Fourier filtering can be obtained and calibration performed. In the second mode, referred to hereafter as the “phase mode,” wavefronts of the light can be reconstructed and phase imaging performed.
In the “phase mode,” in various embodiments, there is no polarizer before the SLM, and the light is aligned with the fast axis of the SLM. The incident light is phase shifted, for example, according to the addressed values on the SLM.
In the “amplitude mode” a polarizer is placed in front of the SLM. The incident light on the SLM is phase shifted (as, for example, in the “phase mode”) and the polarization is rotated. As the light reflects from the SLM, it returns through the polarizer and the signal is attenuated. Therefore, there is a calibrated decrease in amplitude based on the SLM phase shift.
Examples are provided in which a transmission geometry in accordance with the present invention has been used and examples are provided in which a reflection geometry in accordance with the present invention has been used. The rotation unwrapped notation which appears, for example, in
In this example, a well calibrated sample has been investigated and illustrates that the present invention can provide quantitative information at the nanometer (nm) scale. The sample consisted of metal deposition on a glass substrate, followed by etching. The metal deposit pattern was in the shape of the numeral eight and the thickness of the metal layer was about 140 nm, as measured with a nano-profilometer.
As illustrated by
In this example, onion cells were phase imaged using a transmission geometry in accordance with the present invention. An intensity image 2500 of the onion cells is shown in
The intensity image,
Preferred embodiments of the present invention include the use of the coherent decomposition of a low-coherence optical image field into two different spatial components that can be controllably shifted in phase with respect to each other to develop a phase imaging instrument. The technique transforms a typical optical microscope into a quantitative phase microscope, characterized by high accuracy and a sensitivity of λ/5,500. The results obtained on live biological cells suggest that the instrument in accordance with a preferred embodiment of the present invention has a great potential for quantitative investigation of biological structure and dynamics.
Phase contrast and differential interference contrast (DIC) microscopy are capable of providing high-contrast intensity images of transparent biological structures, without sample preparation. The structural information encoded in the phase of light is retrieved through an interference process. However, while both techniques reveal the structure of the sample in the transversal (x-y) plane, the information provided on the longitudinal (z) axis is largely qualitative.
As described herein before, phase shifting interferometry has been used for quite some time in quantitative metrology of phase samples, and various interferometric techniques have been proposed. The phase noise due to air fluctuations and mechanical vibrations inherently present in any interferometer makes the quantitative retrieval of the phase associated with an optical field particularly challenging in practice. Preferred embodiments described herein include techniques to overcome this obstacle.
Further, a non-interferometric technique based on the irradiance transport equation has been proposed for full-field phase imaging, at the expense of time-consuming numerical computations. Using spatial light modulation with laser radiation, phase images of λ/30 sensitivity have been also obtained. Digitally recorded interference microscopy with automatic phase shifting (DRIMAPS) is a method that makes use of conventional interference microscopes and provides phase images of biological samples. Although in DRIMAPS no precautions are taken against the phase noise, which ultimately limits the sensitivity of any phase measurement technique, the potential of this instrument for applications in cell biology has been demonstrated.
A preferred embodiment of the present invention includes a low-coherence phase microscope (LCPM) as a new instrument for biological investigation. The technique transforms a traditional optical microscope into a quantitative phase microscope characterized by very good accuracy and extremely low noise. The principle of the technique relies on the coherent decomposition of the field associated with an optical image into its spatial average and a spatially varying field, which can be controllably shifted in phase with respect to each other. Let E(x,y) be the complex image field, assumed to be stationary over the spatial domain of interest. This field can be expressed as
E(x,y)=E0+E1(x,y) (57)
where E0 is the spatial average and E1 the spatially varying component of E. Thus an arbitrary image can be regarded as the result of an interference phenomenon between a plane wave (the average field) and a spatially varying field. It should be noted that, as a result of the central ordinate theorem, E0 and E1 can be identified, at each point of the image, with the zero- and high-spatial-frequency components of the field E, respectively. Consequently, these two spatial components can be easily separated and independently phase modulated by performing a Fourier decomposition.
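As a brief illustration of this decomposition (not part of the original disclosure), the Python/NumPy sketch below separates a synthetic image field into its spatial average E0 (the zero-frequency Fourier component) and the varying part E1, and shifts only E0 by δ, mimicking the modulation applied in the Fourier plane; the test object and all values are placeholders.

```python
import numpy as np

def shift_zero_order(E, delta):
    # The DC term of the 2-D FFT is (up to normalization) the spatial average
    # E0; shifting only that term by delta emulates the Fourier-plane modulation.
    F = np.fft.fft2(E)
    F[0, 0] *= np.exp(1j * delta)
    return np.fft.ifft2(F)

x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
phi = 1.5 * np.exp(-(x**2 + y**2) / 0.3)       # a smooth test phase object
E = np.exp(1j * phi)                           # unit-amplitude transparent sample

# Intensity images with the zero order shifted by 0, pi/2, pi and 3*pi/2.
frames = [np.abs(shift_zero_order(E, d)) ** 2
          for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
print([float(f.std()) for f in frames])        # image contrast varies with delta
```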
The experimental setup is depicted in
In Equation 58, the factor β represents the amplitude ratio of the two field components, β(x,y)=|E1(x,y)|/|E0|. The parameter β is measured by operating the PPM as a π/2 wave plate (amplitude mode) that selectively filters the two spatial frequency components. Thus, using Equation 58, the spatial phase distribution of a given transparent sample can be uniquely retrieved. The optimal value of the on-axis modulated area in the Fourier plane was found to be 160×160 μm2, while the FWHM intensity-based diffraction spot associated with the optical system at the same plane had a diameter of approximately 100 μm. Since the numerical computations of Equation 58 are virtually instantaneous, the speed of the phase image retrieval is limited only by the refresh rate of the PPM, which in one embodiment is 8 Hz. However, the overall speed of the technique can potentially be increased by using other spatial modulators, such as ferroelectric liquid crystals.
In order to demonstrate its potential for quantitative phase imaging, the LCPM technique was applied for investigating various standard samples.
The LCPM instrument was further used to phase image live biological cells.
The phase image of a whole blood smear is shown in
In order to assess the stability of the instrument against the phase noise and, ultimately, quantify its sensitivity, a cell well containing culture medium only (no cells) was imaged over a period of 100 minutes, at intervals of 15 s.
Thus, preferred embodiments of the present invention include a low-coherence phase microscope, which is characterized by high accuracy and a λ/5,500 level of sensitivity. The preliminary results on live cancer and red blood cells suggest that the apparatus and methods have the potential to become a valuable tool for structure and dynamics investigation of biological systems. By incorporating a traditional optical microscope in the system setup, the instrument in accordance with a preferred embodiment of the present invention is characterized by high versatility and particular ease of use.
The claims should not be read as limited to the described order or elements unless stated to that effect. Therefore, all embodiments that come within the scope and spirit of the following claims and equivalents thereto are claimed as the invention.
The present application is a divisional of U.S. application Ser. No. 10/871,610 filed Jun. 18, 2004, which is a continuation-in-part of U.S. application Ser. No. 10/823,389 filed Apr. 13, 2004, which is a continuation-in-part of U.S. patent application Ser. No. 10/024,455 filed Dec. 18, 2001 and claims the benefit of U.S. Provisional Application No. 60/479,732 filed Jun. 19, 2003. The entire contents of the above applications are incorporated herein by reference in their entirety.
The invention was supported, in whole or in part, by Grant No. P41-RR02594 from the National Institutes of Health. The Government has certain rights in the invention.