The work leading to this invention has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no 258461-TERATOMO.
The present invention relates to methods for obtaining both amplitude and phase information in imaging a specimen, and, more particularly, to holographic imaging with a synthetic reference wave, enabling various advantages such as efficient subwavelength holography.
Holography has found a place in various domains of technology where waves scattered from an object are used to image the object. Holography is a form of coherent imaging in which the phase of a scattered wave conveys information about the scattering object, and it does so by interference between a wave scattered by the object and a wave that is either derived from the same source used to illuminate the object or bears a known (typically fixed) phase relationship to the illuminating wave.
Examples of coherent imaging are known in acoustics, in radar, as well as in optical holography. Interference of a scattered wave with the illuminating wave itself gives rise to fringes in the plane of a detector, while an example of post-detection interference of a detected scattered wave with a non-radiated signal derived from the illuminating source is U.S. Pat. No. 6,825,674 (to Smith). The phase relation between the illuminating source and the signal used as a phase reference may be fixed, or may be modulated, as in U.S. Pat. No. 3,856,986 (to Macovski), for example, and the source, reference, or detector may be spatially swept, as in U.S. Pat. No. 3,889,226 (to Hildebrand), or U.S. Pat. No. 3,640,598 (to Neeley), for example. Processing of synthetic aperture radar (SAR) data with a coherent optical processor (where processing is performed in the visible portion of the electromagnetic spectrum) is termed a ‘quasi-holographic’ system, and may be used to yield reconstructed radar images, as described by Pasmurov et al., Radar Imaging and Holography (The Institution of Engineering and Technology, London, 2009; hereinafter, Pasmurov), which is incorporated herein by reference.
In all of the foregoing applications, the illuminating beam is diffracted from an aperture defined by the source of illuminating radiation and impinges upon the scattering object with a wavefront characterized by a finite radius of curvature. Moreover, in all of the foregoing applications, illumination of the scattering object is either monochromatic or quasi-monochromatic, which is to say that the range of wavelengths impinging on the scattering object is contiguous in frequency, and is narrow in comparison with the central wavelength, where “narrow” typically connotes a bandwidth-to-wavelength ratio of less than 5%.
In the various fields of optical microscopy, including, for example, optical scanning microscopy, or confocal microscopy (with additional modalities discussed below), the phase of light scattered by, or transmitted through, a specimen may carry important information. However, under ordinary circumstances, phase information is lost in the recording process because optical detectors record only the squared amplitude of the field and not the phase. While it is possible to determine the phase interferometrically at each pixel of the image separately, this requires the direct or indirect acquisition of multiple data points at each pixel, as is obvious from simple information-theoretic considerations, since amplitude and phase components must be distinguished. That is, at the least, the reference field must be varied to obtain data at three delays so as to determine the amplitude of the unknown field, the phase of the unknown field, and the amplitude of the reference. In practice, noise in the measurements makes this minimalistic scheme unstable, and many data must be obtained to perform a stable, nonlinear fit to a sinusoid. This typically requires the acquisition of tens to a few hundred data points distributed over a cycle of the interferogram. The result of this requirement is an acquisition time multiplied by a large factor, and substantially increased apparatus complexity.
As an alternative to point-by-point interferometry, holography records the optical amplitude and phase over an entire image by introduction of a reference wave. The interference between the optical field and the reference wave creates a specific interference pattern that encodes the optical amplitude and phase. The fact that the whole field is recorded gives rise to the illusion of a three-dimensional scene, the aspect of holography best known in popular culture.
Methods of optical scanning microscopy, in particular, have had tremendous impact in almost all fields of science and technology, especially in biology. While optical phase measurements provide an added level of insight, they have come at the cost of increased complexity, longer measurement times, and smaller images. In wide-field methods, holographic measurements are fast and relatively easy. The power of holography is that it encodes the amplitude and phase of the optical field at the measurement plane over the entire image and generally does not require the acquisition of any more data than are acquired in the nonholographic version of the same modality. Holography has proven difficult to implement in far-field scanning methods and is seemingly impossible in near-field and super-resolved imaging for reasons now described in detail.
A generic, prior art holography apparatus is shown in
$$I(\mathbf{r}) = |U_r(\mathbf{r})|^2 + |U_s(\mathbf{r})|^2 + U_s^*(\mathbf{r})\,U_r(\mathbf{r}) + U_r^*(\mathbf{r})\,U_s(\mathbf{r}). \qquad (1)$$
In the case that the reference field is a plane wave which in the plane of detection is of the form Ur = Ar exp(i k∥·r), the Fourier transform (FT) of the intensity with respect to position is
$$\tilde{I}(\mathbf{q}) = |A_r|^2\,\delta(\mathbf{q}) + C(\mathbf{q}) + A_r\,\tilde{U}_s^*(\mathbf{k}_\parallel - \mathbf{q}) + A_r^*\,\tilde{U}_s(\mathbf{k}_\parallel + \mathbf{q}), \qquad (2)$$
where the tilde indicates the FT with respect to position, δ is the Dirac delta function, and C is the autocorrelation of Ũs. Of the four terms, the first term is a constant, proportional to the square of the reference amplitude, and may be subtracted or ignored. The second, autocorrelation, term is the square of the scattered field and is typically considered negligible with respect to the other terms and may be made so by increasing the amplitude of the reference field. The so-called direct term, Ar*Ũs(k∥+q), and conjugate term, ArŨs*(k∥−q), proportional, respectively, to the scattered field and its conjugate, are shifted by −k∥ and k∥, i.e., they are separated by 2|k∥| in the FT plane. (The conjugate of a complex quantity has an imaginary component of equal magnitude and opposite sign.)
In the case that the scattered field is spatially bandlimited to the Ewald circle of reflection (as discussed in Born & Wolf, Principles of Optics (Cambridge UP, 7th ed., 1999), pp. 699-703), the direct and conjugate images do not overlap and may be obtained separately by simply filtering the Fourier transform with a filter matched to the spatial bandwidth of the field and centered at q=−k∥. More precisely, assuming Us is band-limited and that D(q)=1 for q in the support of Ũs (where Ũs is non-zero) and D(q)=0 otherwise, then, if D(k∥+q)D(k∥−q)=0, and assuming the autocorrelation term is negligible, Ũs(k∥+q) = Ar D(k∥+q) Ĩ(q)/|Ar|². Either direct or conjugate image, once filtered, may be referred to herein as an “isolated crossterm.” Thus, off-axis holography provides a means to determine the complex field, both phase and amplitude, from a single image so long as the field is band-limited.
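By way of illustration only, and not as a description of any claimed embodiment, the following numerical sketch (in Python) simulates a recorded intensity of the form of Eq. (1) for an off-axis plane-wave reference and recovers the complex field by the Fourier filtering just described; the test object, array size, reference amplitude, and filter radius are arbitrary assumptions chosen for the simulation.

```python
import numpy as np

# Off-axis hologram simulation and reconstruction, following Eqs. (1) and (2).
N = 256
x = np.arange(N)
X, Y = np.meshgrid(x, x, indexing="ij")

# Hypothetical band-limited scattered field: a Gaussian amplitude with a
# smooth phase modulation (purely illustrative).
Us = np.exp(-((X - N / 2) ** 2 + (Y - N / 2) ** 2) / (2 * 20.0 ** 2)) \
    * np.exp(1j * 0.5 * np.pi * np.cos(2 * np.pi * X / 64))

# Off-axis plane-wave reference, Ur = Ar exp(i k_par . r), with Ar large so
# that the autocorrelation term is negligible.
Ar = 10.0
k_par = np.array([2 * np.pi * 0.25, 0.0])          # [rad/pixel]
Ur = Ar * np.exp(1j * (k_par[0] * X + k_par[1] * Y))

I = np.abs(Ur + Us) ** 2                           # Eq. (1): the recorded hologram

# Fourier transform, Eq. (2): direct and conjugate terms appear at -k_par, +k_par.
I_ft = np.fft.fftshift(np.fft.fft2(I))
q = np.fft.fftshift(np.fft.fftfreq(N)) * 2 * np.pi
QX, QY = np.meshgrid(q, q, indexing="ij")

# Filter D centered on the direct term at q = -k_par (radius chosen so that the
# conjugate term and the autocorrelation term are excluded).
D = (QX + k_par[0]) ** 2 + (QY + k_par[1]) ** 2 < (2 * np.pi * 0.1) ** 2

# Demodulate: shift the isolated crossterm back to the origin and divide by Ar*.
Us_rec = np.fft.ifft2(np.fft.ifftshift(I_ft * D)) \
    * np.exp(1j * (k_par[0] * X + k_par[1] * Y)) / Ar
print("residual:", np.max(np.abs(Us_rec - Us)))    # small when the field is band-limited
```

The same filtering operation, applied instead around the conjugate term, recovers the complex conjugate of the field; only one of the two isolated crossterms is needed.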
In far-field holography (where fields emanating from a point are effectively spherical waves, i.e., where points of constant phase lie on a sphere), the physics of wave propagation guarantees that the scattered field is spatially bandlimited to the Ewald circle of reflection, which is to say that the support of Ũs is contained within a circle of radius 2π/λ. The component of the wavevector of the reference wave in the detector plane may likewise be as large as 2π/λ. Thus, the direct and conjugate terms may always be determined separately by off-axis holography with sufficiently oblique illumination.
As has long been known, the physics of waves (the fact that propagating fields are spatially bandlimited) imposes limits on resolution in standard far-field imaging systems, and these resolution limits are indeed manifested in the low-resolution reconstruction of the phase and amplitude in
However, a marriage of holographic phase imaging and superresolved imaging has remained seemingly impossible because of the fundamental issue illustrated in
In order to separate the direct and conjugate terms, a much larger reference wavevector would be needed. A superresolved imaging system working at a wavelength of 10 microns with 10-nm resolution would require a k∥ three orders of magnitude greater than the free-space wavenumber. No practicable mechanism exists to physically generate such a reference field. Superoscillatory reference waves with k∥ a few times larger than the free-space wavenumber have been tried (as by Bozhevolnyi et al., Near-field Optical Holography, Phys. Rev. Lett., vol. 77, pp. 3351-54 (1996), incorporated herein by reference). However, superoscillatory reference waves that are only several times larger than the free-space wavenumber of the imaging beam do not suffice, and at least two coregistered, nondegenerate holograms are needed in order to independently determine phase and amplitude, as shown by Carney et al., A computational lens for the near-field, Phys. Rev. Lett., vol. 92, 163903 (2004), incorporated herein by reference. Indeed, confocal scanning holography, as described by Jacquemin et al., A low-error reconstruction method for confocal holography to determine 3-dimensional properties, Ultramicroscopy, vol. 117, pp. 24-30 (2012), has proven elusive.
To enable phase imaging with a single, superresolved hologram, reference fields with wavevectors hundreds, or even thousands, of times the size of the free-space wavenumber are required. Such fields will be referred to herein as “hyperoscillatory waves.” In order to achieve true superresolution holography, it would thus be desirable to provide an effectively hyperoscillatory reference wave.
In accordance with embodiments of the present invention, a method is provided for imaging at least one of a phase and an amplitude characterizing a scattered field that emanates from a sample. The method has steps of:
In accordance with another embodiment of the present invention, the step of processing the hologram to generate an image may include:
In alternate embodiments of the present invention, sequentially scanning a parameter may include sequentially scanning a position in physical space. The illuminating beam may include a beam of photons, such as in at least one of the infrared, terahertz, visible, ultraviolet, and x-ray portions of the electromagnetic spectrum. Sequentially scanning the position may include scanning a probe position relative to the sample.
In further embodiments of the invention, sequentially scanning the position may include stepping an illuminating beam to a plurality of positions at successive fields of view, each field of view encompassing a portion of the sample. At least one of the illuminating beam and the reference beam may be quasi-monochromatic. A plurality of illuminating or reference beams may be distinguished in the step of detecting. Superimposing the reference beam and field associated with the scattered field may include superimposing a plurality of reference beams that traverse distinct reference paths.
In other embodiments of the invention, the single crossterm may be one of a direct term and a conjugate term in a Fourier transform of the holograms. The scattered field may be characterized by a wavelength substantially equal to a wavelength characterizing the illuminating beam. Alternatively, the scattered field may be characterized by a wavelength that is distinct from any wavelength characterizing the illuminating beam, and at least one of the illuminating and reference beams is one of the group including a beam of photons, plasmons, phonons, surface waves, surface polaritons and massive particles. At least one of the illuminating and reference beams may propagate in free space or may be a guided wave.
In yet other embodiments of the present invention, sequentially stepping the local probe to a plurality of successive probe positions may include motion of a nanoparticle within the medium. At least one of the illuminating beam and the reference beam may include a plurality of wavelengths, which may, or may not, be distinct. A frequency characterizing the illuminating beam may be scanned.
In further embodiments of the invention, the probe may be a local probe, or, more specifically, a near-field probe. Sequentially scanning a parameter may include sequentially scanning at least one physical parameter, such as voltage, temperature or magnetic field, applied to a portion of the sample. It may also include monitoring a dynamic process or movement on the sample. Additionally, the superimposed reference beam and field associated with the scattered field may be wavelength-resolved for each locus to produce a plurality of detector signals, and the steps of recording the detected signal and processing the hologram to generate an image may be applied to each of the plurality of detector signals. Wavelength-resolving may be performed by means of a grating. Alternatively, the superimposed reference beam and field associated with the scattered field for each locus may be demultiplexed to produce a plurality of wavelength-resolved detector signals.
In yet other embodiments of the invention, stepping the local probe to a plurality of successive probe positions may include motion of a nanoparticle within a medium, where the motion may be induced by at least one of an electric field, a magnetic field, and optical tweezers.
Sequentially stepping a local probe to a plurality of successive probe positions may include stepping the local probe to a plurality of successive probe positions in three dimensions, and stepping in a third dimension is accomplished by vibration of the local tip at a fundamental frequency. The detector signals may be simultaneously demodulated at harmonics of the fundamental frequency.
Sequentially stepping the illuminating beam to a plurality of successive fields of view may include scanning confocal microscopy and may include stepping the illuminating beam to a plurality of successive fields of view in three dimensions. Imposing a specified phase function on a reference beam relative to the illuminating beam includes modulating a phase characterizing the reference beam. In particular, the reference field phase may be modulated by reflection from a movable mirror.
In accordance with further embodiments of the invention, the reference beam may be generated from the illuminating beam by partial reflection at a beamsplitter chosen from the group of a dielectric surface, a thin metal film, and a profiled mirror. The scattered field may be collected with a light-concentrating element, and detecting the superimposed reference beam and field associated with the scattered field may include detection by means of a photodetector. Imposing a specified phase function on a reference beam relative to the illuminating beam includes incrementing a phase characterizing the reference beam as a linear function of time, and may also include wrapping the phase modulo 2π.
In accordance with other embodiments of the present invention, a reference beam phase function may be imposed in such a manner as to position crossterms associated with distinct frequencies in specified positions in the transform space. Imposing a specified phase function on the reference beam relative to the illuminating beam may include nonlinear reference phase functions. Imposing a specified phase function on the reference beam relative to the sample beam may include feedback from topography of the physical medium, and may be tailored to identify defects in manufactured samples. There may also be successive imaging to provide dynamic imaging of a developing sample.
In accordance with another aspect of the present invention, an apparatus is provided for imaging at least one of a phase and an amplitude characterizing a scattered field emanating from a sample. The apparatus has a scanning mechanism for sequentially scanning a parameter characterizing the sample to a plurality of successive loci of a hologram dimension, a source for generating an illuminating beam, and a phase modulator for imposing a specified phase function on a reference beam relative to the illuminating beam. Additionally, the apparatus has a detector for detecting the reference beam and a field associated with the scattered field to generate a detector signal and a processor adapted for processing the hologram to generate an image of at least one of phase and amplitude associated with the scattered field.
In other embodiments of the invention, the scanning mechanism may be a scanning local probe or a scanning near-field probe, and the scanning local probe may be a cantilevered tip of an atomic force microscope. The phase modulator may be a translating mirror. The source may emit multiple discrete wavelengths, or may be swept in wavelength. A dispersive element may resolve wavelength components prior to detection, and a demultiplexer may be provided for separating components of the detector signal.
In accordance with another aspect of the present invention, a method is provided for recording a hologram, having steps of:
a. sequentially scanning a parameter characterizing the sample to a plurality of successive loci of a hologram dimension;
b. illuminating the sample with a focusing illuminating beam;
c. imposing a specified phase function on a reference beam relative to the illuminating beam;
d. superimposing with the reference beam a field associated with the scattered field;
e. detecting the superimposed reference beam and field associated with the scattered field for each locus to produce a detected signal; and
f. recording the detected signal as a function of locus in order to obtain a hologram.
In another embodiment, the hologram may be rendered as a classical viewable hologram.
In yet another embodiment, an apparatus is provided for recording a hologram. The apparatus has a scanning mechanism for sequentially scanning a parameter characterizing the sample to a plurality of successive loci of a hologram dimension and a source for generating an illuminating beam. A phase modulator imposes a specified phase function on a reference beam relative to the illuminating beam, and a detector detects the reference beam and a field associated with the scattered field to generate a detector signal. Finally, the apparatus has a processor adapted for recording the detected signal as a function of locus, thereby generating a hologram.
The foregoing features of the invention will be more readily understood by reference to the following detailed description, taken with reference to the accompanying drawings, in which:
In accordance with embodiments of the present invention, the concept of a “synthetic reference wave” (SRW) is employed, corresponding to the “artificial reference field,” as that term was used in the '296 Provisional application. Use of an SRW, as now described with respect to a variety of embodiments, allows circumvention of the usual physical limits, described above with reference to the prior art, in order to achieve novel holographic effects on scales both larger and smaller than the diffraction limit. Techniques in accordance with embodiments of the present invention may be referred to herein as Synthetic Optical Holography (SOH).
In an exemplary application of the present invention, detailed below, scattering-type scanning near-field optical microscopy (s-SNOM) is combined with synthetic reference wave holography to achieve nanoscale infrared images with unprecedented speed and simplicity. Specifically, as embodiments of the present invention are described, a 2.25-megapixel image of nanowire arrays, acquired in less than an hour, will be shown as an example of how the present invention may be applied. Methods in accordance with the present invention may advantageously shift phase-resolved SNOM from the kilopixel range into the realm of digital cameras. They may also be advantageously applied to many kinds of scanning microscopy, for instance, confocal microscopy, structured illumination, STED, Raman and two-photon scanning microscopy. Where the scattered field is characterized by a frequency that is distinct from any frequency characterizing the illuminating beam, a reference beam is employed that is coherently converted to the frequency of the scattered field of interest.
The term “image” shall refer to any multidimensional representation, whether in tangible or otherwise perceptible form, or otherwise, whereby a value of some characteristic (amplitude, phase, etc.) is associated with each of a plurality of locations corresponding to dimensional coordinates of an object in physical space, though not necessarily mapped one-to-one thereonto. Thus, for example, the graphic display of the spatial distribution of some field, either scalar or vectorial, such as field intensity or phase, for example, constitutes an image. So, also, does an array of numbers, such as a 3D holographic dataset, in a computer memory or holographic medium. Similarly, “imaging” refers to the rendering of a stated physical characteristic in terms of one or more images.
An image (in the generalized sense defined in the preceding paragraph) containing both phase and amplitude information associated with a scattered field may be referred to herein as a “hologram.” It is to be understood that, for further analysis or display of data, not all of the information contained within a hologram must be used.
The terms “object,” “sample,” and “specimen” shall refer, interchangeably, to a tangible, non-transitory physical medium or object capable of being rendered as an image, whether in transmission or in scattering.
Where a field is incident upon a sample, the “incident field” shall refer to that field present in the absence of the sample, and the “scattered field” is defined to be the difference between the total field at the detector when the sample is present and the incident field. The difference between the total field measured in the vicinity of the sample or in the far field may arise due to elastic or inelastic processes, such as diffraction, scattering, fluorescence, stimulated emission or stimulated emission depletion, etc.
The word “probe,” as used herein, and unless otherwise required by specific context of usage, shall encompass both any illuminating beam 205 (shown in
The field at the sample may be referred to as “the local field.” In some instances, the local field may be reflected, transmitted, or scattered towards the detector, and, in other instances, the local field may be guided to a detector, via a tapered glass fiber, transmission line, or otherwise. In all cases, the field at the detector shall be referred to as “a field associated with the scattered field,” whether the local field propagates to the detector via free space, or whether the field is guided to the detector.
In a wave function of the form Ur[r(t)] = Ar(t) e^(−i φr(t)), the function φr(t) shall be referred to herein as the “phase function.”
As used herein, and in any appended claims, “modulation” as referring to phase in the context of any sort of interferometric measurement is to be understood as applying either to a reference beam or to a signal beam, or to any combination of the two such that the relative phase between the reference and scattered waves incident upon a detector is varied.
It is to be noted that while tip 222 may serve for purposes both of a “local probe” and a “near field probe,” these are utterly separate functional concepts and must, under no circumstances, ever be conflated. A “near field probe” is merely one form of a local probe, and both terms are now rigorously defined to avoid any misunderstanding:
A “local probe,” as the term is used herein, shall be defined as a localization of the field used to probe the sample by any means. This includes, for example, a focus, associated with a focal point which is scanned. Such a focus may be formed by refractive, reflective, or diffractive elements. The local probe may also be formed by other means of field-localization such as the placement of an aperture in the near-zone of the sample. The local probe may generally interrogate a finite volume of the sample but is associated with a point that is scanned.
A “near field probe,” as used herein, is a local probe that, additionally, consists of a material element which interacts with the sample through the field in the near zone of the sample, that is, in the region where evanescent fields are appreciable. The near-field probe will usually consist of elements of high spatial frequency, that is, elements having features with a local radius of curvature smaller than the wavelength. The probe, like the local probe of which it is a special case, is associated with a point which is scanned. It may be made of any material, but is commonly metallic, a semiconductor, or a dielectric. The common near-field probe is a sharp needle-like structure with a tip brought into close proximity to, or even touching, the sample.
The verb “record,” as used herein and in any appended claims, shall refer to reducing data to a format that is sufficiently fixed and non-transient as to be capable of being subsequently read by a person or machine. Thus, for example, fixing a detector signal in a memory stage of a processor shall constitute “recording” the phenomenon—say, field intensity—that is sensed by the detector.
As used herein as a matter of notational convention, and as indicated in the apparatus
The term “white light position,” referring to any interferometer, shall denote the point where the phase difference between light traversing both interferometer arms is zero for all wavelengths.
The adjective “arbitrary,” and the adverb “arbitrarily,” as applied to reference fields that may be generated in accordance with the present invention, are used in the sense of “unconstrained” or “unrestricted,” and subject to specification by the designer or user of a system, rather than in a sense of randomness. It is to be understood, however, that the designer may specify random basis functions, for example, for purposes of compressive holography, as further described below, and randomness by design is not excluded from the scope of the present invention.
Examples of Sources
Reference will be made in the present description to a variety of types of illumination, of which several examples will now be presented, without limitation, and to which reference may then be made infra without further elucidation.
A source of electromagnetic radiation (also referred to herein as a “source,” or as an “illuminating source”) may be referred to as a “broadband source” when it is characterized by a bandwidth which exceeds the spectral resolution of the detection scheme, in the case of spectrally resolved measurements, or by a bandwidth which exceeds the frequency defined by the inverse of the longest delay introduced in the reference arm. Examples, provided without limitation, of sources characterized herein as broadband, include thermal radiation sources such as a glow bar, as conventionally employed in Fourier transform infrared (FTIR) spectroscopy, and plasma sources such as a current-induced high-temperature argon arc source, described by Bridges, et al., Characterization of argon arc source in the infrared, Metrologia, vol. 32, pp. 625-628 (1995), incorporated herein by reference. In another example, coherent broadband infrared light may be generated by a difference frequency generator (DFG), as described by Amarie et al., Broadband-infrared assessment of phonon resonance in scattering-type near-field microscopy, Phys. Rev. B, vol. 83, 045404, (2011), incorporated herein by reference. Synchrotron radiation in the x-ray spectrum provides a further example of broadband radiation.
Another class of sources useful in various embodiments of the present invention includes sources that emit multiple discrete wavelengths simultaneously, λ1, λ2, . . . , λN. Examples of such sources, provided without limitation, include: frequency comb light sources, lasers emitting multiple-frequency radiation, gas discharge lamps, and emission from a broadband source filtered with a Fabry-Perot resonator.
Certain embodiments of the present invention may be practiced with purely monochromatic sources, such as quantum cascade lasers or gas lasers, for example, which continuously emit at the same single wavelength. In other embodiments, monochromatic frequency scanning sources may be employed, emitting radiation at only one single wavelength at a given time. The wavelength may be swept between two defined limits. The wavelength sweeping may be continuous, partially continuous or discrete, manual or automatic. For discrete sweeping, the wavelengths λ1, λ2, . . . , λN are sequentially emitted. Examples of monochromatic frequency scanning sources include quantum cascade lasers, gas lasers and other laser types using a motorized grating or rotating mirror to sweep the frequency.
“Multichromatic frequency scanning sources” emit radiation simultaneously at a number M of discrete wavelengths numbered λ1, λ2, . . . , λM. Each individual wavelength λm may be swept from wavelength λm,a to wavelength λm,b. Sweeping may be discrete, partially continuous or continuous. Sweeping of the wavelength λm may be independent of the sweeping of other wavelengths, or, alternatively, all wavelengths λm may be swept dependently. An example of emission falling within the rubric of a multichromatic frequency scanning source is emission from several monochromatic frequency scanning sources superimposed to form a single output beam, where each source may be swept independently.
Use of SRWs, defined above, is introduced herein as a new concept in holography, as applied to imaging systems that make use of serial data acquisition such as confocal microscopy (
The synthetic reference wave (SRW), introduced in accordance with the present invention, solves the fundamental problem of generating hyperoscillatory reference fields, and it also provides a means for readily implementing novel holographic schemes in a number of serial coherent imaging modalities. In a regular holography experiment, the entire image is acquired in one moment and the reference field must satisfy the reduced wave equation,
$$\left(\nabla^2 + \frac{\omega^2}{c^2}\right) U_r(\mathbf{r}) = 0$$
(where ω is the optical frequency and c is the velocity of light), limiting the available fields. However, in a serial data acquisition system such as scanning microscopy, the data at each point are also acquired at different moments in time. The reference field may be adjusted arbitrarily (in the sense defined above) between measurements so as to achieve any desired spatial dependence.
More explicitly, the intensity, I(r), that is scattered from a scanning probe tip, or, in another embodiment of the invention, that is either reflected from, as shown in
Examples of apparatus in which embodiments of the present invention may advantageously be implemented are now described with reference to
A source 202 that is pulsed, or that is swept in frequency, is also within the scope of the present invention. Alternatively, a scanning near-field optical microscope 220 (described infra, with reference to
Beam 204 is split in a Michelson (or other) interferometric configuration, by means of beamsplitter 206, for example. Of course, it is to be understood that
A second portion of beam 204 is used to form reference beam Ur, the phase of which may be modified by phase modulator 212 (depicted as a translating mirror, by way of example, although any means, such as a spatial light modulator (SLM), etc., is within the scope of the present invention). Alternative methods for generating the reference beam Ur include the use of a separate source that emits coherently with source 202 at either the same frequency or at a different frequency. In this case, a further method for controlling the phase of the reference beam Ur is made available by detuning the emission frequency of the second source with respect to the frequency that is to be detected in the scattered field Us.
As a reminder, it is to be noted that, as used herein, and in any appended claims, “modulation” as referring to phase in the context of any sort of interferometric measurement is to be understood as applying either to a reference beam or to a signal beam, or to any combination of the two such that the relative phase between the reference and scattered waves incident upon a detector is varied. A variety of modulation techniques encompassed within the scope of the present invention are described below.
The configurations shown in
When the field Us scattered from the sample, or a field associated with a field scattered from the sample, in the sense defined above, is superimposed with one or more reference beams Ur at detector 214, a detector signal derived from detector 214 is processed by processor 230 such that an interferogram is recorded and subsequently processed as described in detail hereinafter. Processor 230 may be referred to interchangeably herein as a controller. The combination of components used to form an interferogram may be referred to collectively herein as an interferometer, designated generally by numeral 216 (shown in
Confocal microscopy is a particularly advantageous embodiment of the present invention because the scanning may be performed in three dimensions. The added dimensionality makes even more space available in the Fourier domain for the multiplexing of signals, as discussed in further detail below.
An alternative embodiment of the present invention is a scanning near-field optical microscopy (SNOM) apparatus, designated generally by numeral 220 in
It is to be noted that while tip 222 may serve for purposes both of a “local probe” and a “near field probe,” these are utterly separate functional concepts and must, under no circumstances, ever be conflated. A “near field probe” is merely one form of a local probe, and both terms have been rigorously defined in the Definition section above, which definitions are to apply under all circumstances.
Scanning tip 222 may be the cantilevered tip of an atomic force microscope (AFM), for example. As in the case of
In a particular exemplary embodiment, an illumination scheme may be employed where the illuminating beam is s-polarized, i.e., parallel to the long axis of an object being imaged (a nanoantenna in Example I below), and perpendicular to the probing tip 222, in order to ensure an efficient excitation of the imaged object and to avoid a direct excitation of the tip 222. As is well known in the field of SNOM, any contribution to the detected signal from the field scattered from the sample, or from any of the macroscopic parts of the instrument (i.e., everything that is not the sharp end of the probe), needs to be removed as background. Background contributions may be suppressed in a variety of ways, and the scope of the present invention is not limited by the particular modality employed. The remaining near-field signal arises from the interaction of the end of the tip with the sample, where a highly localized field is generated, and it is this signal that is desired for purposes of near-field imaging. This part of the signal is strongly dependent on the height of the tip above the sample.
It is typical in SNOM that the signal is obtained by tapping the tip 222 and locking in on that tapping or recording harmonics of that tapping. This processing of the signal is referred to herein as “demodulation.” It is the common practice because it separates the signal from the so-called background. In nanoholography, the background is in the low-frequency part of the hologram, and so demodulation is not typically necessary. It is possible to directly record the detector signal, reconstruct holograms, and apply a method to remove the background in the images. For example, a linear function, or any other function derived from the approach curve, may be subtracted to remove the background; demodulation is thus not the only method of background removal. As should be abundantly clear, demodulation is not necessary for holographic SNOM; indeed, this may be a particular advantage of SOH in some cases.
Merely by way of example, and in no way suggesting that these techniques are required in the practice of the present invention, it is noted that in some methods (pseudoheterodyne), tip 222 taps the surface of sample 210 and the background is rejected by demodulation of the acquired signal at higher harmonics of the tapping frequency Ω. (But the reader is reminded that demodulation need not be performed in accordance with practice of the invention.) For example, tip 222 may oscillate at a frequency in the sub-MHz range (tapping-mode AFM), with the background rejected by subsequent demodulation of the detector signal at 3Ω. It is to be understood that particular examples of SNOM modalities are provided by way of example only, and not by way of limitation of the scope of the present invention.
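Solely to illustrate what demodulation at a harmonic of the tapping frequency entails (and, as emphasized above, such demodulation is not required for SOH), a numerical lock-in sketch in Python follows; the sampling rate, tapping frequency, record length, and choice of harmonic are assumed values.

```python
import numpy as np

# Lock-in-style demodulation of a detector trace at the n-th harmonic of the
# tapping frequency Omega.  Illustrative only; SOH does not require this step.
fs = 2.0e6                           # sampling rate [Hz] (assumed)
Omega = 250.0e3                      # tapping frequency [Hz] (assumed)
t = np.arange(0, 1e-3, 1 / fs)       # 1 ms of detector samples

detector = np.random.rand(t.size)    # placeholder for the recorded detector signal

n = 3                                # demodulation harmonic (e.g., 3*Omega)
reference = np.exp(-1j * 2 * np.pi * n * Omega * t)
s_n = 2 * np.mean(detector * reference)   # complex amplitude at frequency n*Omega
amplitude, phase = np.abs(s_n), np.angle(s_n)
```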
Indeed, in accordance with novel methods in accordance with the present invention, higher harmonic demodulation for background suppression is not necessary at all, insofar as filtering may be achieved in the spatial Fourier transform, or by a host of other means currently known in the art or yet to be invented, all of which are encompassed within the scope of the present claims.
Volume Scanning with a Near-Field Tip
In accordance with embodiments of the present invention, the near-field probe (whether a cantilever AFM tip, or any other nanoprobe, moved by any means) may move in a volume above the physical medium being imaged, but the invention does not require that it be so moved. Other techniques of volume scanning are discussed below, with reference to
Accordingly, the SRW of the present invention is introduced with any chosen spatial dependence, by controlling the reference field as a function of time. Included in the possible reference fields is the hyperoscillatory plane wave needed in a superresolved hologram, a nanohologram. A linear-in-time phase function is particularly attractive when matched to a constant scanning rate. Such a phase function is given by φr(t)=γt+φ0, where γ denotes the constant rate of change of the reference phase. The FT of the data is as in Eq. (2), and exhibits an effective k∥ determined by γ. Explicitly, data are obtained at positions in the image plane, r(t)=(x, y, z0)=(vxt, vyt, z0), where vx and vy are rates at which the sampled region is moving in respective directions in the frame of reference of the sample. The phase of the reference may thus be rewritten as a function of position, φr(r)=k∥·r+φ0, where kx=γ/vx, and ky=γ/vy. By a combination of making γ sufficiently large and vx or vy sufficiently small, a hyperoscillatory SRW is created, that is, |k∥|>>2π/λ.
Suppose that the tip 222 is scanned along the principal directions of a Cartesian system, moving rapidly in the x direction such that x=vxt, and that after traveling the total scan length X, it returns at the same speed and moves one step, Δy, in the y direction. The effective velocity in the y direction is then vy=vxΔy/(2X). The phase of the reference may thus be rewritten as a function of position, φr(r)=k∥·r+φ0, where kx=γ/vx, and ky=γ/vy=2γX/(vxΔy). The effective k∥ may thus be seen to be controlled by the rate of change of the phase in the reference arm, the length of the fast scanning axis, the speed of scanning, or the step size along the slow axis. Real systems require a finite turn-around time, T. This may be taken into account by modifying the effective slow-axis scanning speed, vy=Δy[(2X/vx)+T]^(−1). The effective k∥ may be made as large as desired, making accessible the hyperoscillatory fields needed in super-resolved imaging and near-field phase-resolved imaging, nanoholography.
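The dependence of the effective wavevector on the phase rate γ and the scan parameters may be made concrete with a short numerical sketch (Python); the scan speed, scan length, step size, turn-around time, and phase rate below are illustrative assumptions only and are not taken from the Examples.

```python
import numpy as np

# Effective SRW wavevector for a linear-in-time reference phase,
# phi_r(t) = gamma * t + phi_0, during constant-velocity raster scanning.
wavelength = 9.3e-6               # free-space wavelength [m]
k0 = 2 * np.pi / wavelength       # free-space wavenumber [rad/m]

gamma = 2 * np.pi * 5.0           # reference phase rate [rad/s] (5 fringes per second)
v_x = 20e-6                       # fast-axis scan speed [m/s]
X = 15e-6                         # fast-axis scan length [m]
dy = 10e-9                        # slow-axis step size [m]
T_turn = 0.05                     # turn-around time per line [s]

v_y = dy / (2 * X / v_x + T_turn)     # effective slow-axis speed
k_x = gamma / v_x                     # effective wavevector, fast axis
k_y = gamma / v_y                     # effective wavevector, slow axis

print(f"k_x / k0 = {k_x / k0:.1f}")
print(f"k_y / k0 = {k_y / k0:.0f}  (hyperoscillatory when >> 1)")
```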
Spectrally-Resolved Synthetic Optical Holography
A series of embodiments of the present invention are now described, wherein data are collected explicitly as a function of frequency. This may be accomplished by using a spectrometer (designated generally by numeral 245 in
Illumination source 240, in the embodiment of
Detection is performed by an array of one-pixel detectors 244. Each detector (designated by an index #n) detects the beam related to a corresponding wavelength λn. Recording the nth detected signal 248 as a function of probe position is performed in order to obtain the corresponding nth hologram 255. The recording of all detector signals #1 to #N is performed simultaneously, thus a stack of N holograms is recorded. Each hologram pertains solely to a single corresponding wavelength λn. Processing of each hologram 255 proceeds according to the SOH method as generally described herein with respect to a single-wavelength source.
In accordance with other embodiments of the invention, the number of detectors 244 may be greater or fewer than the number N of discrete wavelengths λi emitted by source 240, although use of fewer detectors results in a loss of information. In accordance with yet another embodiment, source 240 may be a continuum source, with individual detectors 244 detecting a portion of the spectrum dispersed by dispersive element 242.
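As an illustration of the per-channel processing just described, the following sketch (Python) applies a single-wavelength SOH reconstruction, of the kind outlined with reference to Eq. (2), independently to each hologram of the recorded stack; the function name, array shapes, effective wavevector, and filter radius are assumptions made solely for the sketch.

```python
import numpy as np

def reconstruct_soh(hologram, k_par, filter_radius):
    """Isolate the direct term near q = -k_par and demodulate it, returning the
    complex image (up to the constant reference amplitude); see Eq. (2)."""
    ny, nx = hologram.shape
    H = np.fft.fftshift(np.fft.fft2(hologram))
    qy = np.fft.fftshift(np.fft.fftfreq(ny)) * 2 * np.pi
    qx = np.fft.fftshift(np.fft.fftfreq(nx)) * 2 * np.pi
    QY, QX = np.meshgrid(qy, qx, indexing="ij")
    D = (QX + k_par[0]) ** 2 + (QY + k_par[1]) ** 2 < filter_radius ** 2
    y, x = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    carrier = np.exp(1j * (k_par[0] * x + k_par[1] * y))
    return np.fft.ifft2(np.fft.ifftshift(H * D)) * carrier

# Stack of N holograms, one per detector (i.e., per wavelength); placeholder data.
N, ny, nx = 4, 128, 128
stack = np.random.rand(N, ny, nx)

k_par = (2 * np.pi * 0.25, 0.0)              # assumed effective SRW wavevector [rad/pixel]
images = [reconstruct_soh(h, k_par, filter_radius=2 * np.pi * 0.1) for h in stack]
amplitudes = [np.abs(u) for u in images]     # one amplitude image per wavelength
phases = [np.angle(u) for u in images]       # one phase image per wavelength
```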
Spectrally-Resolved Synthetic Optical Holography—Multiple Reference Arms
With a single reference arm 249, governed, for example, by motion of reference mirror 212 (which may be a piezo-mirror, for example), the phase modulation of the reference beam acts on all wavelengths emitted by source 240. Using multiple reference arms enables phase modulation of only a subset of the wavelengths. A reference field derived from a single multi-frequency source may be divided into independent reference arms either by means of a dispersive element such as a prism or grating, or fields may be derived from multiple sources which independently provide input to the reference arms and are combined to form a single illuminating field used to interrogate the sample. Separate modulation of the relative phase of the reference for distinct wavelengths allows a better separation between the direct and conjugate terms in the case that the individual wavelengths λn cover a large spectral band, as when the multi-wavelength source spans an octave or more, for example (λN:λ1>2).
Instead of emitting a plurality of wavelengths, as described above, source 240 may include a broadband source, in another embodiment of the invention. In that case, each one-pixel detector 244 detects a wavelength band [λn−, λn+]. Thus, the nth hologram #n contains amplitude and phase information on the scattered field that is spectrally integrated from λn− to λn+.
Spectral-Resolved SOH with Swept Sources and a One-Pixel Detector
A further embodiment of the invention is now described with reference to
The local probe 222 (shown in
The foregoing procedure yields a stack of holograms, #1 to #N, where each hologram 255 contains only a single wavelength λn. Processing of each hologram 255 is performed by the SOH method, as described herein, assuming a single-wavelength source.
It is to be understood that while trace 268 of source wavelength as a function of time shows time segments 262 of emission by source 260 at each wavelength to be substantially equal in duration, embodiments in which wavelength segments are sampled for unequal durations of time are also within the scope of the present invention. Additionally, the reference phase, governed, for example, by an offset in the position of mirror 212, may be changed during the course of a frequency sweep of source 260.
Source 260, or a controller 261 associated therewith, may send a synchronization signal 263 when a new sweep is started. A further synchronization signal (not shown) may be sent, as well, at each wavelength switch from λn to λn+1. These synchronization signals 263 may be used by the device that records the detector signal (SOH controller 265) to achieve the demultiplexing of the detector signal. The functionality of a demultiplexer is designated by numeral 264, although it is understood that demultiplexing may be performed by controller 265. The synchronization signal may be of either electrical or optical format (such as a TTL signal via cable or optical cable), or it may involve a change of the source emission (e.g., blanking or attenuation of the source emission during a short time). Synchronization signals are not required, however, in order to implement the methods described with reference to
In accordance with other embodiments, a synchronization signal 263 may also be issued by the SOH controller, telling the source 260 to (a) start a new sweep, or to (b) switch the wavelength of emission from λn to λn+1.
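Purely by way of illustration, the following sketch (Python) shows how a continuously recorded detector trace might be demultiplexed into per-wavelength holograms on the basis of sweep-start synchronization marks; the data layout, the regular sweep timing, and the per-segment averaging are assumptions and not requirements of the invention.

```python
import numpy as np

# Time-division demultiplexing of a detector trace into per-wavelength holograms.
n_pixels = 1000            # probe positions recorded along the scan
n_wavelengths = 8          # wavelengths emitted in each sweep
samples_per_step = 16      # detector samples acquired per wavelength segment

# One full sweep through all wavelengths is recorded at each probe position.
detector_trace = np.random.rand(n_pixels * n_wavelengths * samples_per_step)

# Sample indices at which the source signalled the start of a new sweep
# (assumed regular here; in practice derived from synchronization signal 263).
sweep_starts = np.arange(n_pixels) * n_wavelengths * samples_per_step

holograms = np.empty((n_wavelengths, n_pixels))
for p, start in enumerate(sweep_starts):
    sweep = detector_trace[start : start + n_wavelengths * samples_per_step]
    # Average the samples within each wavelength segment, yielding one datum
    # per wavelength per pixel, i.e., one entry in each of the N holograms.
    holograms[:, p] = sweep.reshape(n_wavelengths, samples_per_step).mean(axis=1)
# holograms[n] is hologram #n (a single line here; a 2-D image in a raster scan).
```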
Spectrally-Resolved SOH with Swept Multichromatic Source and Multiple Detectors
In accordance with a further embodiment of the invention, parallel multiplexing techniques described with reference to
Additionally, the combination of “Spectral-resolved SOH with swept sources and a one-pixel detector” with “Spectroscopic detection using an array of one-pixel detectors” is also possible within the scope of the present invention. In this case, N×M holograms are recorded, each hologram #(m,n) containing only the single wavelength λm,n.
Reference Measurements
For each emitted wavelength λn, the intensity of emission by source 260 (referred to herein as a “reference measurement”) may be recorded in parallel with transcription into digital memory of the corresponding hologram, as is common in instrumentation. This may be accomplished by adding a further detector (not shown) to the setup, or with the same detector 214 used for recording the hologram data. In the latter case, a multiplex approach may be employed for mixing the hologram data with the reference measurement data, thus allowing for demultiplexing after the detector signal has been recorded. Additionally, the instantaneous wavelength of the source emission may be determined independently by an appropriate measurement. For example, this can be accomplished by recording an interferogram of the source emission, either with the interferometer 216 used for recording the hologram or with an additional interferometer setup. Alternatively, the wavelength may be determined with a spectrometer.
If the local probe 222 (shown in
Furthermore, if the local probe 222 is vibrated or stepped in height, as, for example, in SNOM, a reference measurement may be obtained by comparing the detector signal at the different heights of the local probe.
The reference measurements, as heretofore described, may be used for correcting errors of the source in tuning to the same wavelength λn with the same emission intensity in repeated measurements, as in repeated sweeps of local probe 222 or illuminating beam 205 at different locations on sample 210.
Arbitrary SRW
Whereas discussion herein of the phase function φr(t) as a function of time t largely considers a linear ramp as a function of time, it is to be understood that such consideration is only by way of example and for heuristic convenience. In point of fact, various functions of time may be advantageous under different circumstances. One specific way to implement a phase function is to describe it as a function of the (x,y) position of the local probe, i.e., φr(x,y). In case a piezo-actuated reference mirror 212 is used, for example, this can be accomplished as shown in
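As a simple illustration of prescribing φr(x,y) with a piezo-actuated reference mirror, the following sketch (Python) converts an arbitrarily chosen phase map into the corresponding mirror-displacement map using the normal-incidence relation φ = 4πd/λ; the scan geometry, wavelength, and example phase function are assumptions made for illustration.

```python
import numpy as np

# Convert a prescribed reference phase function phi_r(x, y) into a mirror
# displacement map d(x, y), using phi = 4*pi*d/lambda (normal incidence).
wavelength = 9.3e-6                      # [m] (assumed)
nx, ny = 512, 512
x = np.linspace(0.0, 15e-6, nx)          # probe positions, fast axis [m]
y = np.linspace(0.0, 15e-6, ny)          # probe positions, slow axis [m]
X, Y = np.meshgrid(x, y, indexing="ij")

# Example phase map: a hyperoscillatory linear ramp along y (one fringe per
# 40 nm) plus a weak quadratic term along x.  Any function of (x, y) may be used.
k_y = 2 * np.pi / 40e-9
phi_r = k_y * Y + 1.0e10 * X ** 2        # [rad]

# Mirror displacement required at each probe position, wrapped modulo lambda/2
# (a lambda/2 translation corresponds to a 2*pi phase shift).
d = (phi_r * wavelength / (4 * np.pi)) % (wavelength / 2)
```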
Generalization to Q+1 Dimensions
In any of the modalities (such as spectroscopic dispersion of a broadband source after recombination of sample and reference beams, or temporal sweeping of monochromatic or polychromatic sources) discussed above with reference to
If variations with frequency of the detected field are not too rapid (according to the criterion laid out below), at each pixel a Fourier transform (FT) of the acquired data yields a time-domain response in which the direct and conjugate images may be identified and determined by filtering: a form of one-dimensional holography. The single-point case is described below. On the other hand, if variations of the field as a function of frequency are so rapid that the FT is not sufficiently “time-limited,” the cross-information across pixels may be taken advantage of. That is, by a Q+1-dimensional FT, the data may be transformed to the spatial-frequency/time domain. The usual shift in the spatial frequency domain due to the phase ramp in the reference arm can be used, together with the separation on the time axis, to sufficiently separate the direct and conjugate terms. Thus, analysis of the acquired data may be performed simply as a Q+1-dimensional generalization of the Q-dimensional case.
In the case, moreover, where spatial variations in the axes 1, 2, . . . , Q are so rapid that the FT is not sufficiently “time-limited”, but the spectral variations in frequencies (Q+1 axis) are not too rapid, then a Q+1 dimensional generalization can be applied and a sufficient separation of the direct and conjugate terms may be achieved.
For example, in the typical case of stepping in (x,y) directions, the stack of 2D holograms may be merged to a 3D data volume. Processing of the data then is applied to the 3D data volume, rather than to the 2D holograms individually. This transfers the individual 2D SRWs to a single 3D SRW. A separation of the direct and conjugate term is possible when the variations in at least one axis are not too rapid.
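A minimal sketch (Python) of such Q+1-dimensional processing follows, here assuming Q = 2 (a stack of 2D holograms merged into one 3D data volume) and an ellipsoidal pass-band whose center and widths are chosen arbitrarily for illustration.

```python
import numpy as np

# Q+1-dimensional processing: treat the hologram stack as one 3-D volume and
# isolate the direct term by filtering after a 3-D Fourier transform.
volume = np.random.rand(32, 128, 128)            # (stack axis, y, x); placeholder data

V = np.fft.fftshift(np.fft.fftn(volume))
freqs = [np.fft.fftshift(np.fft.fftfreq(n)) * 2 * np.pi for n in volume.shape]
QW, QY, QX = np.meshgrid(*freqs, indexing="ij")

# Assumed location and extent of the direct term in the transform space.
w0, ky0, kx0 = 0.5, 1.2, 0.3
ww, wy, wx = 0.3, 0.4, 0.2
D = (((QW + w0) / ww) ** 2 + ((QY + ky0) / wy) ** 2 + ((QX + kx0) / wx) ** 2) < 1.0

direct = np.fft.ifftn(np.fft.ifftshift(V * D))   # complex-valued direct term
amplitude, phase = np.abs(direct), np.angle(direct)
```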
Examples of particular embodiments of the invention employing spectrally resolved modalities as described above with reference to
In accordance with one embodiment, scattered field Us may be associated with tip 222 acting as a near-field probe, as defined above. By keeping reference mirror 212 at a fixed position during the acquisition of the stack of holograms, the reference phase is constant across each hologram. Modulation of the reference phase is provided by placing the reference mirror at a distance d≠0 from the white light position of the interferometer. The white light position is the point where the phase difference between both interferometer arms is zero for all wavelengths. The reference phase is then a function of the wavelength λ according to φ=4πd/λ, where normal incidence on the reference mirror is assumed as an example.
Discretization and Phase Wrapping
Returning now to a general discussion of methods in accordance with all forms of SOH discussed herein, the effects of spatial discretization (i.e., discretization of r) should be carefully considered. As the probe is scanned over the sample, the field is measured at time intervals Δt, leading to a spatial-sample spacing, Δx. Since the length X of travel along the x̂ direction is generally expected to be much larger than Δy, ky will usually be large compared to kx. Suppose that kyΔy=n2π, where n is an integer. Clearly, integer multiples of 2π in the phase have no physical consequence. Thus ky may be replaced with ky−2πn/Δy; that is, only the value of ky modulo 2π/Δy is of physical consequence.
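The effect of the wrapping may be illustrated numerically (Python); the nominal wavevector and step size below are assumed values only.

```python
import numpy as np

# The slow-axis wavevector is defined only modulo 2*pi/dy, because the
# hologram is sampled at discrete steps dy.
dy = 10e-9                         # slow-axis step size [m] (assumed)
k_y_nominal = 5.0e9                # nominal effective wavevector [rad/m] (assumed)

k_alias = 2 * np.pi / dy           # aliasing period in the transform space
k_y_effective = k_y_nominal % k_alias
print(k_y_effective / k_alias)     # fractional position within one alias period
```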
Optimal Sampling and Phase Ramp
Given a fixed number of pixels along the direction of the phase ramp and assuming the autocorrelation term is negligible, the shift,
In cases where, for whatever reason, the autocorrelation term cannot be made insignificant by increasing the amplitude of the reference, filtering may be applied in order to reduce or eliminate the autocorrelation term. An increase in both the sampling rate and
Advantages of the Present Invention with Respect to the Synchronization Requirement
Phase-resolved SNOM, whether phase-shifting or pseudoheterodyne, requires that the motion of the mirror be synchronized with the motion of the probe, which greatly increases cost and complexity. Because the phase is determined at each pixel individually, phase-resolved near-field imaging has, to date, been slower than amplitude-only SNOM and atomic force microscopy, and therefore provides smaller images than the AFM on which it is based. Because only one datum needs to be acquired at each pixel and no synchronization is required between the probe and the reference arm, data may be acquired very fast using nanoholography, in principle as fast as AFM.
Steps in accordance with methods of the present invention are depicted, by way of example, in flowchart 300 of
Application of an SRW in accordance with an embodiment of the present invention is now described with reference to
The s-SNOM employed in this Example is based on an atomic force microscope (AFM) where a dielectric tip acts as a scattering near-field probe 222. The sample 210 and the tip 222 are illuminated from the side under an angle of about 50° to the surface normal with a focused laser beam at a wavelength of λ=9.3 μm. The tip is scanned over the surface of the sample, and the p-polarized scattered field is recorded as a function of tip position spaced at 10 nm intervals. In order to implement nanoholography, the light scattered out of the tip-sample system was superimposed with a reference field with a linear-in-time phase function. The reference phase was controlled to change by 2π every 4 steps in the y-direction, as in the simulation shown in
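For orientation, and assuming that the 10-nm spacing quoted above applies along the slow (y) axis, a reference phase that advances by 2π every 4 steps corresponds to an effective slow-axis wavevector of
$$k_y = \frac{2\pi}{4\,\Delta y} = \frac{2\pi}{40\ \mathrm{nm}} \approx 230\times\frac{2\pi}{\lambda}, \qquad \lambda = 9.3\ \mu\mathrm{m},$$
i.e., a synthetic reference wave oscillating more than two orders of magnitude faster than any physically realizable free-space wave at this wavelength, satisfying the hyperoscillatory condition |k∥| >> 2π/λ introduced above.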
The improvements in speed of the imaging in nanoholography enable much larger images to be acquired, thus shifting routine practice in phase-resolved near-field microscopy from kilopixels into the realm of digital cameras, namely, megapixels.
The topography and optical images in
Examples provided above illustrate significant advantages of nanoholography as a technique, providing, as it does, important insights into the nanoscale structure of samples. Nanoholography makes this emerging technology more broadly accessible by easing implementation, and more practically useful by dramatically reducing imaging time.
Generally, the SRW concept opens new horizons in holography. Holography is no longer constrained to physically realizable reference fields. These ideas may be applied advantageously not only to near-field methods like SNOM where nanoholography is made possible, but also to other scanned imaging techniques including far-field scanning methods like scanning confocal microscopy where the probe position, r(t), is replaced by the position of the focus.
While examples have been provided where the phase function of the reference is a simple phase ramp in t, it is to be understood that Eq. (1) points to a large range of options within the scope of the present invention. Nonlinear (in time) reference wave functions may be used to provide a rapidly oscillating reference in select regions of the sample and so acquire high spatial-bandwidth data only where needed, further speeding image acquisition. Reconfigurable reference fields may be made into Hadamard masks or random basis functions used to implement compressive holography, which has recently shown great potential in imaging sparse objects from minimal data sets. In SNOM, the reference field may be controlled by feedback from the AFM topography to paint onto the acquired image a linear phase gradient that follows the contours of the sample. Specialized SRWs may be tailored to rapidly identify defects and irregularities in manufactured samples.
Synthetic optical holography, as described in accordance with the present invention, may advantageously provide the fastest image acquisition in near-field phase imaging today, improving imaging times by one to two orders of magnitude. While current schemes rely on pixel-by-pixel interferometry, where many data must be acquired for each pixel, in holography the information is distributed (multiplexed) so that only one datum is acquired at each pixel. Thus, the speed of imaging in nanoholography is limited only by the mechanical speed of the underlying AFM. Moreover, nanoholography is relatively easy to implement. With constant-velocity scanning and a linear phase ramp in the reference, no synchronization is required between the reference arm and the probe. An error in synchronization between scanning tip position and phase simply amounts to an offset in the origin of time for the phase function, and the additional global phase may be wrapped into φ0. The optical system essentially becomes decoupled from the mechanical AFM system, greatly simplifying implementation.
Multiplex Acquisition of Other Data
In prior art holography it is well-known that multiple images may be written into the same holographic plate and recovered separately. This is done by encoding each image on a different reference wave. If two images are acquired from fields which do not interfere, either because they are mutually incoherent or because the field components are orthogonal, then the multiplex recording of holographic images may be carried out simultaneously.
Frequency-Multiplexing Along a Single Axis in the Fourier Domain
In accordance with embodiments of the present invention, wavelength multiplexing may be accomplished with a single reference arm, as shown in
Making Use of the Whole Fourier Space
In the event that the reference arm delay is varied rapidly enough to induce phase wrapping in the y-direction, a further advantage may be gained in separating the signals at distinct frequencies. Explicitly, suppose that k1y = n·2π/Δy, that is
Thus
A particularly advantageous embodiment may be employed in conjunction with confocal microscopy where the field of view 201 (shown in
Multiplexing with Multiple Reference Arms
Referring now to
implying that the two signals will be displaced by collinear vectors in the Fourier space. The foregoing is provided solely by way of example, and without limitation. It is to be understood that the wavevectors ks∥ and kp∥ are not required to bear any relation to one another within the scope of the present invention. However, once the phase-wrapping effect is taken into account, as in the single-arm multifrequency case, appropriate choice of rates of phase variations, scan length, step size, etc. allows for the construction of nonparallel
Examples of Three-Dimensional Recording Method for Near-Field Tomography
Examples are now provided of three-dimensional methods for near-field tomography. Non-SOH methods of near-field tomography are described in U.S. Pat. No. 7,978,343 (to Sun et al.), which is incorporated herein by reference. As has been made clear above, use of tip 222 as a local probe in other contexts of SOH is never to be confused with its use as a near-field probe, an example of which is now described.
Stepping the local probe to a plurality of successive probe positions in three dimensions may be implemented by literally stepping the probe in three dimensions. This means that the probe position is updated and the probe remains stationary at that position for some time. In SNOM, however, local probe 222 is vibrated in height, so that the third dimension is accessed not in a discrete but in a continuous fashion. This allows the situation to be treated as a 2D stepping process for the x,y coordinates, with multiplexing used to encode the height variation. Data can be recovered using a time-division demultiplexing approach or a spectral-domain demultiplexing approach, quite similar to methods described above in the context of Spectrally-Resolved SOH. At each voxel, amplitude and phase are obtained. This data set can be used for tomographic reconstruction as described in Sun et al., Nanoscale optical tomography using volume-scanning near-field microscopy, Applied Physics Letters, vol. 95, pp. 121108-1 to 121108-3 (2009).
The description of SOH as provided herein may similarly also apply to other methods that use vibration of the local probe, as will be apparent to persons of ordinary skill in the art. In particular, stepping may be continuous in one, two, or all, dimensions, within the scope of the present invention.
One such example, described now with reference to
In accordance with another embodiment within the scope of the present invention, described now with reference to
Detector signal 706 is recorded during one oscillation cycle T=1/Ω with a digitizer card 712. The sampling rate is chosen such that N data points are acquired during time T. Each data point #n contains the momentary detector signal I at the time when data point #n was recorded. Storing data point #n as a function of the probe position produces a hologram #n. Thus, a stack of holograms #1 . . . #N is obtained. Each hologram #n may be processed individually, or the stack of holograms #1 . . . #N may be considered a (Q+1)-dimensional space (here, Q=2) and processed as described above.
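By way of illustration only, the following Python sketch (with assumed array shapes and illustrative names, not taken from the disclosure) indicates how the digitized stream may be rearranged into the stack of holograms, and how an FFT along the oscillation axis provides the spectral-domain (harmonic) demultiplexing mentioned above:

```python
import numpy as np

ny, nx, N = 200, 200, 32                     # scan size and samples per oscillation cycle
raw = np.random.rand(ny * nx * N)            # placeholder for the digitizer stream

# Sample #n at every probe position forms hologram #n: a stack of N holograms.
holograms = raw.reshape(ny, nx, N).transpose(2, 0, 1)     # shape (N, ny, nx)
hologram_3 = holograms[2]                    # e.g., hologram #3, processed individually

# Spectral-domain demultiplexing: an FFT along the oscillation axis yields one
# hologram per harmonic of the tip-oscillation frequency.
harmonics = np.fft.fft(holograms, axis=0)
second_harmonic_hologram = harmonics[2]
```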
Alternatives to Stepping the Probe
Mechanical stepping of the probe is only one way of scanning that may be employed in recording a hologram in accordance with the present invention. Other parameters characterizing a physical medium may also be scanned within the scope of the invention. Other scanning methods include: (a) scanning a voltage applied across a portion of the sample, (b) scanning the temperature of the sample, (c) scanning any other physical parameter, such as magnetic field, applied to a portion of the sample, or (d) “scanning” the time by monitoring a dynamic process or movement of a developing sample. In such cases, the x- and y-axes of the hologram do not indicate distance, but rather voltage, temperature, time etc. This applies to all SOH methods. A “scanning mechanism” shall refer to any one of the physical devices, all known in the art, for imposing the aforesaid variations in the values of physical parameters associated with a sample.
The value of a scanned parameter at any particular step of the scanning process shall be referred to herein as a “locus” (or, plural, “loci”). Thus, a locus is a dimension of a hologram that need not be physical space but may include such parameters characterizing the sample as voltage, temperature, etc.
Particularly, each applied variation spans a new dimension of the hologram. For example, scanning the local probe in x,y directions and further varying the sample voltage creates a 3-dimensional hologram.
Scanning methods applied to other parts of a microscope setup, instead of to the sample 210, are also within the scope of the present invention. For example, scanning may be applied to the source 204 or to modulators introduced in the interferometer 216 (shown in
Alternative Multiplexing Modalities
Whereas multiplexed SOH has been described above in the context of a swept source (with reference to
For example, within the scope of the present invention, scanning may also encompass such other operations as (a) scanning a voltage applied to part of the sample 210, or (b) illuminating the sample 210 with a second source to provoke changes in the sample. Moreover, instead of (or, in addition to) applying multiplexing with respect to some aspect of the sample, multiplexing may also be applied to other parts of a microscope setup. For example, multiplexing may be applied to the source or to modulators introduced in the interferometer 216.
Common-Path (FTIR-Like) Synthetic Optical Holography
The term “Synthetic Optical Holography,” as used herein and in any appended claims, may also be applied where sample 210 is disposed, not within one arm of an interferometer 216, as depicted, for example, in
In FTIR configuration 800 of
It is typically preferred that strong sources be used with FTIR-like methods, allowing for short integration times (typically, in the millisecond regime). With thermal sources, as an example of a weak source, integration times are in the seconds-to-minutes range, thus the full speed of this method cannot be achieved. The embodiment described with reference to
SOH Using One or More Arrays of Pixel Detectors
In embodiments of the present invention described above, the scattered field is evaluated for only a single position of the local probe or illuminating beam at a given time. Moreover, the superposition with the reference field is detected by a one-pixel detector. This concept can be extended by detecting the scattered field in a spatially resolved way, described now with reference to
Beam 204 from source 202 is split by beamsplitter 206 into illuminating beam 205 and reference beam Ur. As sample 210 is scanned in the y direction, scattered beam Us from sample 210 is collected by imaging system 820 and combined with reference beam Ur at line camera 822, spatially resolving the x direction.
A phase function is generated on reference beam Ur by employing amplitude or phase modulation. The signal of the detectors is recorded as a function of the sample position, yielding one line of the hologram; a 2D hologram 824 is built up by recording the data line by line. The hologram 824 may then be processed using SOH techniques described elsewhere herein.
It is noteworthy that oblique incidence of the reference beam is not a requirement. The reference beam may also be superimposed with the scattered fields at the camera by sending the reference beam through the camera lens, and/or at normal incidence to line camera 822.
In accordance with a further embodiment of the present invention, described now with reference to
While one- and two-dimensional arrays have been described, it should be understood that any configuration of one-pixel detectors may be employed, within the scope of the present invention. Moreover, more than one illuminating beam may illuminate sample 210 in practicing the present invention. In the embodiments described in the immediately preceding sections, as in other embodiments of the present invention, it is to be understood that alternative methods of stepping may be employed, as described above, including, in particular, a series of images taken of a dynamic process, such as movements in biological samples, for example. Similarly, methods employing spectroscopic detection described above may also be employed in conjunction with detector arrays, within the scope of the present invention.
Alternative Data-Processing Schema in Monochromatic and Spectrally-Resolved Implementations of SOH and Quick-Look Holographic Imaging
It is to be understood that, within the scope of the present invention as claimed, it is not required that data supporting the entirety of the ultimate images be acquired prior to Fourier transformation. A contemporaneous data set may be analyzed, and images may be derived on the basis of a partial dataset acquired up to an intermediate point in the course of sequential scanning. This allows for a provisional view of analyzed data, while additional data are continuously acquired and aggregated into a larger data set, on the basis of which amplitude and phase images may be recalculated.
Description of data processing in the context of SOH, as presented herein, has focused on classical holographic processing methods of taking the FT of the data and then filtering in the FT domain before transforming back by inverse Fourier transform (IFT). It is to be understood that such description has been by way of heuristic example and without limitation. Of course, the FT-filter-IFT process may be accomplished in a single step as a convolution in the domain of raw data. The convolution kernel has an effectively limited range, and, consequently, finite truncations of the kernel would seem reasonable. Thus, several local methods are suggested in which the original data are processed using only the pixels in a small neighborhood of the pixel of interest, or some other subset of the data.
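A minimal sketch of this classical processing chain, in Python, is given below; it assumes a linear SRW phase ramp along the fast (x) scan axis with a known carrier frequency and a unit-amplitude reference, and the function name and parameter values are illustrative rather than taken from the disclosure:

```python
import numpy as np

def reconstruct_soh(hologram, kx_c=2 * np.pi * 0.25, halfwidth=2 * np.pi * 0.1):
    """FT the hologram, keep one sideband, IFT, and demodulate the SRW carrier."""
    ny, nx = hologram.shape
    x = np.arange(nx)
    fx = np.fft.fftshift(np.fft.fftfreq(nx)) * 2 * np.pi      # spatial frequency, rad/pixel

    F = np.fft.fftshift(np.fft.fft2(hologram))
    mask = np.abs(fx[None, :] - kx_c) < halfwidth              # select the +kx_c sideband
    sideband = np.fft.ifft2(np.fft.ifftshift(F * mask))

    # The +kx_c sideband carries conj(Es)*Er; for a unit-amplitude reference,
    # demodulating the carrier and conjugating recovers the scattered field Es.
    E = np.conj(sideband * np.exp(-1j * kx_c * x)[None, :])
    return np.abs(E), np.angle(E)                              # amplitude and phase images

# amplitude, phase = reconstruct_soh(hologram)
# The same result may be obtained as a single convolution of the raw data with the
# (truncatable) kernel corresponding to the Fourier-domain mask, which motivates the
# local processing schemes described next.
```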
In a “two-line” method, in the case of homodyne interferometric detection with phase stepping, a series of detector signals is recorded for a stepwise-shifted reference phase. Details about the latter can be found, for example, in Novak, Five-Step Phase-Shifting Algorithms with Unknown Values of Phase Shift, Optik, vol. 114, pp. 63-68 (2003), and in Surrel, Design of Algorithms for Phase Measurements by the Use of Phase Stepping, Applied Optics, vol. 35, pp. 51-60 (1996), both of which articles are incorporated herein by reference. A stepwise phase-shift method has been implemented in SNOM, as described by Taubner et al., Performance of visible and mid-infrared scattering-type near-field optical microscopes, Journal of Microscopy, vol. 210, pp. 311-14 (2003), incorporated herein by reference. There, a line is scanned with a reference phase setting of 0°, and the same line is rescanned with a reference phase setting of 90°. While the first line yields a term proportional to the cosine term of the complex-valued line, the second line yields the sine term. Accordingly, a single complex-valued line can be constructed from these two lines.
For purposes of SOH, a “two-line” method, similar to what is done in phase-stepping homodyne interferometry, may be employed, but the 90-degree phase steps are taken across adjacent lines using the SRW method. Assuming that the phase shift of the reference wave between two lines is 90°, the first line can be considered to contain the cosine term, the second line the sine term, the third line the −cosine term, and so forth, of the complex-valued electric field Es of the sample arm. Two lines can be pairwise combined to form a single line containing complex-valued data by a simple addition (i=sqrt(−1)):
New Line 1 = 1*Line 1 + 1i*Line 2
New Line 2 = 1i*Line 2 − 1*Line 3.
It should be noted that the phase step need not be 90°, nor need it be equal across all lines. When the phase step is known, the complex-valued field can be calculated, within the limits of phase-stepping interferometric detection. In this case, the factors “1”, “1i”, “−1” are replaced by the corresponding coefficients, which can be different for each line.
More generally, N lines can be combined to form one line, as described in phase-stepping homodyne interferometry. By combining 3 or more lines into one line, the autocorrelation term C can be suppressed.
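By way of illustration only, a Python sketch of the line combination above is given below; it assumes the steady 90-degree phase step between successive lines (coefficients 1, 1i, −1, −1i, . . . ), and the function name is illustrative:

```python
import numpy as np

def combine_two_line(hologram):
    """Combine adjacent lines pairwise into complex-valued lines:
    New Line k = c_k * Line k + c_(k+1) * Line (k+1), with c = 1, 1j, -1, -1j, 1, ..."""
    c = (1j) ** np.arange(hologram.shape[0])       # coefficients for a 90-degree step
    weighted = c[:, None] * hologram
    return weighted[:-1] + weighted[1:]            # one complex line per adjacent pair

# For a known but non-uniform phase step phi_k, the coefficients may be replaced by
# exp(1j * phi_k); combining three or more lines per output line, as noted above,
# also allows the autocorrelation term C to be suppressed.
```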
Other local data processing schemes may also be practiced within the scope of the present invention, of which several examples are provided now.
Given any grouping of N pixels, the known SRW values, together with the data, lead to N equations in 2N unknowns (the phase and amplitude, or the real and imaginary parts, of the scattered field at each pixel). Some assumption must be made to reduce the number of unknowns. In the two-line method, there is an assumption that the field at a particular pixel in line 1 is the same as the field at the adjacent pixel in line 2. Thus the number of unknowns is reduced by the necessary factor of 2. In the FT method (the classical holographic processing), the bandwidth of the image is cut in half, avoiding a strict equivalence between any two pixels but globally reducing the number of unknowns, again by a factor of two. Alternatives will suggest themselves to persons of skill in the art, and are encompassed within the scope of the present invention.
In a further example, sets of four pixels may be grouped, assuming the scattered field to be the same at each pixel. Given the values of the SRW, the data may then be fit to a sinusoid. Such an approach is very local and could be implemented quickly as data are collected. It can also be implemented with any SRW as long as the SRW varies enough to produce sufficiently linearly independent data. That is, the SRW need not mimic a plane wave. It is, however, also over-constrained, in that it reduces the number of unknowns by a factor of 4. Thus, this approach fails to make use of some of the available information across pixels, so-called mutual information.
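A minimal Python sketch of such a four-pixel fit follows; it assumes the SRW phase at each grouped pixel is known and the reference amplitude is R_amp, and all names and numerical values are illustrative:

```python
import numpy as np

def fit_four_pixels(I, phi, R_amp=1.0):
    """Least-squares fit of I_n = a + b*cos(phi_n) + c*sin(phi_n) over one pixel group;
    returns the complex scattered field assumed common to the group."""
    A = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
    (a, b, c), *_ = np.linalg.lstsq(A, I, rcond=None)
    return (b + 1j * c) / (2.0 * R_amp)            # b = 2*R*Re(E), c = 2*R*Im(E)

# Noise-free check with an illustrative field and four known SRW phases:
phi = np.array([0.0, 0.9, 1.7, 2.8])
E_true = 0.3 * np.exp(1j * 1.2)
I = np.abs(E_true + np.exp(1j * phi))**2
print(fit_four_pixels(I, phi))                     # ~ 0.3*exp(1.2j)
```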
In yet another example, a subset of the data may be treated by the classical FT method. That is, the SRW behaves like a plane-wave over a block of the data and this block of data is treated in the usual way by FT-filter-IFT. This allows for a windowed approach to the problem, taking advantage of the mutual information across the block and allowing for adaptive sampling schemes to treat samples that have patches of high variation and low variation. The disadvantage is that this approach ignores available information across blocks.
Moreover, as in the single-frequency case, the N-pixel data problem can be thought of in a more general setting as a problem of solving for the 2N unknowns that represent the scattered field, with M discrete frequencies, and, without spectrally-resolved detection, this becomes 2MN unknowns from N data. For instance, with two discrete frequencies and sufficient variation in the SRW, the complex scattered field at each frequency may be determined from data at four pixels by assuming the scattered field is the same at all four pixels and then fitting the data to two sinusoids.
In the case of spectrally-resolved detection, the data consist of N pixels at M frequencies, and the resulting data set of MN data must be used to find the 2MN unknown values of the scattered field. As described above, this can be accomplished with the usual FT-filter-IFT approach, but now in one dimension higher. Just as in the single-frequency case, the key point is an assumption that reduces the number of unknowns by at least a factor of two. This can also be done by taking patches of the data, as described above for the single-frequency case.
Methods for Leveraging Reference Arm Motion
Among alternate embodiments of the present invention, many of which will be evident and matters of design choice to skilled artisans, methods of generating fast or long-running reference arm stages are now described with reference to
Typical piezo stages have a travel range of between several hundred micrometers and a few millimeters. Particularly for longer wavelengths, such as infrared and THz light, these ranges may be insufficient for generating large k-vectors or for recording holograms with thousands of lines. The situation is worse if frequency-multiplexing without wrapping, as described herein, is applied. Consequently, it is advantageous to amplify the effect of a single piezo-stage-mounted mirror 860 on the phase by having the beam 861 make multiple passes between two mirrors, one a fixed mirror 862 and the other the piezo-mounted mirror 860. For instance, by directing the beam in obliquely and using multiple bounces between the two mirrors, an N-fold amplification of the piezo movement may be achieved, N−1 being the number of reflections from the added mirror.
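The following arithmetic sketch (Python; the numbers, and the assumption of near-normal incidence under which each reflection from the moving mirror adds twice its displacement to the optical path, are illustrative only) indicates the scale of the gain:

```python
import math

wavelength_um   = 10.0    # illustrative mid-infrared wavelength
piezo_travel_um = 300.0   # illustrative piezo travel
n_bounces       = 4       # reflections off the piezo-mounted mirror (N-fold amplification)

# Total reference-phase excursion over the full travel, assuming near-normal incidence.
phase_excursion = 2 * math.pi * (2 * n_bounces * piezo_travel_um) / wavelength_um

# If, say, the SRW is to advance by pi per scan line, this excursion supports:
lines = phase_excursion / math.pi
print(f"{phase_excursion:.0f} rad total, i.e. about {lines:.0f} lines at a pi step per line")
```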
Additionally, as described with reference to
Various SOH Phase and Amplitude Modulation Modalities
As has been pointed out above, “modulation,” as the term is used herein, while generally described in the context of varying the relative phase between a field scattered by a sample and a reference beam, is not so limited within the scope of the present invention, and is described in those terms merely for convenience and heuristic purposes. A variety of means for controlling, and thus modulating, the amplitude and phase of the reference beam relative to the scattered beam are now described by way of example, and without limitation.
Devices based on reflection:
Devices based on refraction:
Electronic control
Scanning
While the foregoing methods of modulation may be deliberately employed for the generation of phase functions, phase modulation by some of these methods may also occur unintentionally, producing unwanted phase shifts in the reconstructed images. Such contributions may be corrected for by, for example, (a) calibration measurements, or (b) taking into account the actual geometry and other characteristics of the setup to create a correction function. The foregoing discussion applies to amplitude modulation and to any other modulation as well.
Correction of Errors in the Modulation of the Reference Arm
As is clear from the description above, phase modulation and other modulation techniques applied to the reference arm are a fundamental part of SOH. Any errors introduced in the modulation may degrade the performance of SOH; consequently, care must be taken to compensate for errors introduced during the course of phase modulation. Where a piezo stage 870 (shown, for example, in
In accordance with embodiments of the present invention, an error in piezo stage response may be taken into account by also recording the measured position of the piezo stage simultaneously with the detector signal. By comparison with the SRW matrix (the setpoint), a correction to the reconstructed amplitude and phase images may be applied by interpolating, or “resampling,” the acquired data onto the desired grid, which is linear in piezo position. Similarly, a pre-characterization of the modulation device may be performed, yielding a set of data that can be used for correction; in this case, even modulators without a direct read-out option can be corrected. Similar corrections may be applied in the case of other phase modulators, as will be apparent to persons of ordinary skill in instrumentation design.
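A minimal Python sketch of the resampling correction is given below; it assumes that the reference-arm piezo readout is recorded for every detector sample of a line, and the names are illustrative:

```python
import numpy as np

def resample_line(detector_signal, piezo_readout, n_pixels):
    """Interpolate one acquired line onto a grid that is linear in measured piezo position."""
    target = np.linspace(piezo_readout.min(), piezo_readout.max(), n_pixels)  # setpoint grid
    order = np.argsort(piezo_readout)             # np.interp requires increasing abscissae
    return np.interp(target, piezo_readout[order], detector_signal[order])

# Applying resample_line to every line before reconstruction removes distortions caused
# by a nonlinear or hysteretic piezo response; a pre-characterization of the modulator
# may supply piezo_readout where no direct read-out is available.
```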
Configuration for Implementing SOH in Confocal Microscopy
An embodiment of the present invention is now described with reference to
Lens 882 and mirror 884 are mounted on a linear piezo stage 890 and moved together to generate a phase function.
Exemplary Applications of SOH Techniques
Several applications are now listed, without limitation, in which SOH techniques, as described herein in accordance with the present invention, may advantageously be applied.
Infrared Holography, where Phase Contrast in Reflection or Scattering Yields a Measure of Absorption:
It has recently been demonstrated that the phase in a SNOM experiment may be related to the local absorption. Using sources emitting a plurality of wavelengths, swept sources, or broadband sources, complex-valued spectra of the sample may be obtained. The complex-valued spectra contain not only the reflection coefficient, as would be obtained from a non-interferometric measurement, but also the absorption of the sample. More details on this application may be found in Huth et al., Infrared-Spectroscopic Nanoimaging with a Thermal Source, Nature Materials, vol. 10, pp. 352-56 (2011), which is incorporated herein by reference.
Printing Holograms Acquired with SOH:
By appropriately scaling the raw hologram from a one-frequency experiment and printing the result using a printer, holograms measured with SOH may be turned into classical viewable holograms. Additionally, raw holograms from experiments involving a plurality of frequencies may be recorded by a processor and turned into classical viewable color holograms by means of a printer. It is, moreover, possible to process SOH holograms, magnifying imaged structures to visible scales.
THz Holography in Microscopy and Scanning Systems:
SOH can also be applied at THz frequencies because the key element of SOH is the reflection of light from a mirror, which works from visible to THz frequencies; even x-ray mirrors and radar mirrors are possible. Imaging is difficult in the THz region of the spectrum because camera technology is lacking. This makes the SOH concept attractive for THz because only a one-pixel detector is needed. SOH with THz radiation may enable holographic microscopy at THz frequencies, yielding amplitude and phase maps of the sample. Moreover, a 3D THz scanner using SOH techniques may be used in security applications. A depth range of a few meters can be expected at THz frequencies, making THz radiation particularly attractive for 3D scanning of large objects such as humans, cars, etc.
Rapid SNOM and Megapixel SNOM:
In SNOM, amplitude- and phase-resolved measurement of the light that is scattered or collected by tip 222 (shown, for example, in
The ability to record more than 1000 pixels per second enables applications such as the acquisition of megapixel near-field images with SNOM in times on the order of minutes or hours. This is useful for screening applications in biomedicine, graphene production, semiconductor fabrication, etc.
A further application is rapid SNOM where a kilopixel near-field image can be recorded in a time frame on the order of seconds. Acquiring a series of near-field images allows the visualization of dynamic processes.
Megapixel SNOM and rapid SNOM are limited by the scanning speed of the underlying AFM. The combination of SOH with fast AFM techniques may lead to video-rate phase-resolved SNOM.
Embodiments of the present invention may advantageously be considered as providing interferometric amplification of scattered signals, potentially by several orders of magnitude, thereby providing for the measurement and imaging, with substantial signal-to-noise ratio, of weak signals. Thus, an advantage over non-holographic modalities may be provided even with respect to intensity images (squared amplitude images).
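The amplification argument may be made explicit (standard interferometric detection, stated here for illustration only):

```latex
I = |E_s + E_r|^2 = |E_s|^2 + |E_r|^2 + 2\,|E_s|\,|E_r|\cos(\varphi_s - \varphi_r),
```

so that, for |Er| much greater than |Es|, the detected interference term 2|Es||Er| exceeds the direct intensity |Es|² by the factor 2|Er|/|Es|, which may amount to several orders of magnitude.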
In preferred embodiments of the present invention, certain disclosed methods for nanoholographic imaging may be implemented as a computer program product for use with a computer system. Such implementations may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium. The medium may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented with wireless techniques (e.g., microwave, infrared or other transmission techniques). The series of computer instructions embodies all or part of the functionality previously described herein with respect to the system. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Furthermore, such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies. It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the network (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software (e.g., a computer program product).
The embodiments of the invention described above are intended to be merely exemplary; numerous variations and modifications will be apparent to those skilled in the art. For example, the function of a scanning near-field probe may be served by a nanoparticle inserted into the sample and driven, for instance, by a magnetic field, or allowed to drift by Brownian motion or some other diffusive process. An appropriately fast-rising phase ramp of the reference may then be used, with assumptions about the particle path, to form a 1-D hologram of the field scattered by the particle-sample system as the particle moves.
While described herein in the context of photons, techniques described herein may be applied to plasmons, phonons, surface waves, surface polaritons and massive particles, the latter, for example, as in scanning electron microscopy, all without departure from the scope of the present invention. The illuminating beam or reference beam may propagate as a free-space-propagating beam or as a guided beam. These and all other such variations and modifications are intended to be within the scope of the present invention as defined in any appended claims.
The present application claims the priority of U.S. Provisional Patent Application Ser. No. 61/705,296, filed Sep. 25, 2012, and incorporated herein by reference. It may be referred to herein as the “'296 Provisional.”