The invention relates to a method of performing sub-surface imaging of a specimen in a charged-particle microscope of a scanning transmission type, comprising the following steps:
The invention also relates to a charged-particle microscope in which such a method can be performed.
Charged-particle microscopy is a well-known and increasingly important technique for imaging microscopic objects, particularly in the form of electron microscopy. Historically, the basic genus of electron microscope has undergone evolution into a number of well-known apparatus species, such as the Transmission Electron Microscope (TEM), Scanning Electron Microscope (SEM), and Scanning Transmission Electron Microscope (STEM), and also into various sub-species, such as so-called “dual-beam” tools (e.g. a FIB-SEM), which additionally employ a “machining” Focused Ion Beam (FIB), allowing supportive activities such as ion-beam milling or Ion-Beam-Induced Deposition (IBID), for example. More specifically:
More information on some of the topics elucidated here can, for example, be gleaned from the following Wikipedia links:
http://en.wikipedia.org/wiki/Electron_microscope
http://en.wikipedia.org/wiki/Scanning_electron_microscope
http://en.wikipedia.org/wiki/Transmission_electron_microscopy
http://en.wikipedia.org/wiki/Scanning_transmission_electron_microscopy
As an alternative to the use of electrons as irradiating beam, charged-particle microscopy can also be performed using other species of charged particle. In this respect, the phrase “charged particle” should be broadly interpreted as encompassing electrons, positive ions (e.g. Ga or He ions), negative ions, protons and positrons, for instance. As regards ion-based microscopy, some further information can, for example, be gleaned from sources such as the following:
In all cases, a Scanning Transmission Charged-Particle Microscope (STCPM) will comprise at least the following components:
An example of a method as set forth in the opening paragraph above is known from so-called HAADF-STEM tomography (HAADF=High-Angle Annular Dark Field), in which the beam parameter P is beam incidence angle (beam tilt) relative to (a plane of) the specimen, and in which the measurement set M is a so-called “tilt series” or “sinogram”. See, for example, the following publication:
Although prior-art techniques such as set forth in the previous paragraph have produced tolerable results up to now, the current inventors have worked extensively to provide an innovative alternative to the conventional approach. The results of this endeavor are the subject of the current invention.
It is an object of the invention to provide a radically new method of investigating a specimen using an STCPM. In particular, it is an object of the invention that this method should allow sub-surface imaging of the specimen using alternative acquisition and processing techniques to those currently used.
These and other objects are achieved in a method as set forth in the opening paragraph above, which method is characterized in that:
The current invention makes use of integrated vector field (iVF) imaging, which is an innovative imaging technique set forth in co-pending European Patent Applications EP 14156356 and EP 15156053, and co-pending U.S. patent application U.S. Ser. No. 14/629,387 (filed Feb. 23, 2015), which are incorporated herein by reference and will be referred to hereunder as the "iVF documents". Apart from this difference in the nature of the employed image I, the invention also differs from the prior art in that the adjusted beam parameter P (which is (incrementally) changed so as to obtain the measurement set M) is axial focus position instead of beam tilt. Inter alia as a result of these differences and various insights attendant thereto, which will be elucidated in greater detail below, the invention can make use of a different mathematical approach to perform depth-resolution on the measurement set M.
Of significant importance in the present invention is the insight that an iVF image is essentially a map of electrostatic potential φ(x,y) in the specimen, whereas a HAADF-STEM image is a map of φ²(x,y) [see next paragraph also]. As a result of this key distinction, one can make use of a linear imaging model (and associated deconvolution techniques) in the present invention, whereas one cannot do this when using HAADF-STEM imagery. More specifically, this linearity allows image composition/deconvolution to be mathematically treated as a so-called Source Separation (SS) problem (e.g. a Blind Source Separation (BSS) problem), in which an acquired image is regarded as being a convolution of contributions from a collection of sub-sources distributed within the bulk of the specimen, and in which sub-source recovery can be achieved using a so-called Inverse Problem Solver; this contrasts significantly with HAADF-STEM tomography, in which image reconstruction is based on mathematics that rely on "line-of-sight" or "parallax" principles (based on so-called Radon Transforms). The SS approach is made possible because the aforementioned linearity implies minimal/negligible interference between said sub-sources, and it preserves phase/sign, which is lost when working with a quadratic entity such as φ²(x,y) [as in HAADF-STEM]. Moreover, the irradiating charged-particle beam in an STCPM can effectively be regarded as passing directly through the specimen, with negligible lateral spread (scattering); as a result, there will be relatively little loss of lateral resolution in an associated SS problem. Examples of mathematical techniques that can be used to solve an SS problem as alluded to here include, for example, Principal Component Analysis (PCA), Independent Component Analysis (ICA), Singular Value Decomposition (SVD) and Positive Matrix Factorization (PMF). More information with regard to SS techniques can, for example, be gleaned from:
At this point, it should be noted that the current invention is substantially different from the technique commonly referred to as “Confocal STEM” or “SCEM” (SCEM=Scanning Confocal Electron Microscopy). In this known technique:
By way of comparison, the iVF imaging situation (derived in more detail toward the end of this description; see expression (23) below) can be written as:

$I_{iVF}(\vec{r}_p) = C\,\big(|\psi_{in}(\vec{r})|^2 * \varphi(\vec{r})\big)(\vec{r}_p)$

in which the "*" operator indicates a cross-correlation, and in which linear dependence on $\varphi(\vec{r})$ is immediately evident.
For good order (and purposes of comparison), the imaging situation in a HAADF-STEM is given by:
$I_{\mathrm{HAADF\text{-}STEM}}(\vec{r}_p) = C_{\mathrm{HAADF}}\,\big(|\psi_{in}(\vec{r})|^2 * \varphi^2(\vec{r})\big)(\vec{r}_p)$
where $C_{\mathrm{HAADF}}$ is a constant whose value depends on particulars of the employed detector configuration, and "*" again indicates a cross-correlation. It is clear that, in this technique, imaging is a function of $\varphi^2(\vec{r})$ [as already stated above].
For purposes of completeness, it is noted that the “iVF documents” referred to above inter alia make it clear that:
In a particular embodiment of the current invention, an approach is adopted wherein:
Step 4: Post-process the obtained sequence using de-noising and restoration methods (a minimal sketch is given after the reference below). Using such an approach, the relative thickness of the computed slices (layers/levels) can be adjusted by suitable choice of the focus increments applied during acquisition of the focus series. This can result in very high depth resolution in many applications. Although the example just given makes specific use of PCA, one could also solve this problem using ICA or another SS technique. For more information on the above-mentioned Karhunen-Loève Transform, see, for example:
http://en.wikipedia.org/wiki/Karhunen%E2%80%93Lo%C3%A8ve_theorem
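As an illustration of Step 4, the following minimal Python sketch applies simple de-noising to the computed depth slices; the specific filter chain (Gaussian followed by median filtering) is an assumed choice for illustration only, since the method leaves the de-noising/restoration algorithm open:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def postprocess_slices(slices, sigma=1.0, size=3):
    """Step 4 sketch: simple de-noising of the computed depth slices.
    `slices` has shape (K, H, W): one 2D image per depth layer.
    Gaussian + median filtering are illustrative, not prescribed, choices.
    """
    out = np.empty_like(slices)
    for i, s in enumerate(slices):
        out[i] = median_filter(gaussian_filter(s, sigma), size=size)
    return out
```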
In a variant/special case of the embodiment set forth in the previous paragraph, the following applies:
In the context of the present invention, the set {Pn} (={Fn}) can be referred to as a “focus series” (as already alluded to above). The skilled artisan will understand that the cardinality of this set, and the (incremental) separation of its elements, are matters of choice, which can be tailored at will to suit the particulars of a given situation. In general, a larger cardinality/closer spacing of elements can lead to higher deconvolution resolution, but will generally incur a throughput penalty. In a typical instance, one might, for example, employ a cardinality of the order of about 20, with focus increments of the order of about 5 nm; such values are exemplary only, and should not be construed as limiting.
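By way of illustration only, the following Python sketch builds such a focus series with the exemplary values just quoted (cardinality 20, 5 nm increments); the acquisition function is a hypothetical placeholder, since instrument-control APIs vary, and here it returns synthetic data so the sketch runs stand-alone:

```python
import numpy as np

# Illustrative focus series {F_n}: cardinality 20 with 5 nm increments,
# the exemplary (non-limiting) values quoted above.
focus_values = 5e-9 * np.arange(20)        # axial focus positions F_n, in meters

def acquire_ivf_image(F):
    """Placeholder for the instrument-specific call that records an iVF
    image at axial focus F; returns synthetic data for illustration."""
    rng = np.random.default_rng(int(F * 1e12))
    return rng.standard_normal((256, 256))

# Measurement set M = {(I_n, F_n)}: one iVF image per focus value.
M = [(acquire_ivf_image(F), F) for F in focus_values]
```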
The invention will now be elucidated in more detail on the basis of exemplary embodiments and the accompanying schematic drawings, in which:
The specimen S is held on a specimen holder H. As here illustrated, part of this holder H (inside enclosure E) is mounted in a cradle A′ that can be positioned/moved in multiple degrees of freedom by a positioning device (stage) A; for example, the cradle A′ may (inter alia) be displaceable in the X, Y and Z directions (see the depicted Cartesian coordinate system), and may be rotated about a longitudinal axis parallel to X. Such movement allows different parts of the specimen S to be irradiated/imaged/inspected by the electron beam traveling along axis B′ (and/or allows scanning motion to be performed as an alternative to beam scanning [using deflector(s) D], and/or allows selected parts of the specimen S to be machined by a (non-depicted) focused ion beam, for example).
The (focused) electron beam B traveling along axis B′ will interact with the specimen S in such a manner as to cause various types of “stimulated” radiation to emanate from the specimen S, including (for example) secondary electrons, backscattered electrons, X-rays and optical radiation (cathodoluminescence). If desired, one or more of these radiation types can be detected with the aid of sensor 22, which might be a combined scintillator/photomultiplier or EDX (Energy-Dispersive X-Ray Spectroscopy) module, for instance; in such a case, an image could be constructed using basically the same principle as in a SEM. However, of principal importance in a (S)TEM, one can instead/supplementally study electrons that traverse (pass through) the specimen S, emerge (emanate) from it and continue to propagate (substantially, though generally with some deflection/scattering) along axis B′. Such a transmitted electron flux enters an imaging system (combined objective/projection lens) 24, which will generally comprise a variety of electrostatic/magnetic lenses, deflectors, correctors (such as stigmators), etc. In normal (non-scanning) TEM mode, this imaging system 24 can focus the transmitted electron flux onto a fluorescent screen 26, which, if desired, can be retracted/withdrawn (as schematically indicated by arrows 26′) so as to get it out of the way of axis B′. An image (or diffractogram) of (part of) the specimen S will be formed by imaging system 24 on screen 26, and this may be viewed through viewing port 28 located in a suitable part of a wall of enclosure E. The retraction mechanism for screen 26 may, for example, be mechanical and/or electrical in nature, and is not depicted here.
As an alternative to viewing an image on screen 26, one can instead make use of the fact that the depth of focus of the electron flux emerging from imaging system 24 is generally quite large (e.g. of the order of 1 meter). Consequently, various types of sensing device/analysis apparatus can be used downstream of screen 26, such as:
Note that the controller/computer processor 10 is connected to various illustrated components via control lines (buses) 10′. This controller 10 can provide a variety of functions, such as synchronizing actions, providing setpoints, processing signals, performing calculations, and displaying messages/information on a display device (not depicted). Needless to say, the (schematically depicted) controller 10 may be (partially) inside or outside the enclosure E, and may have a unitary or composite structure, as desired. The skilled artisan will understand that the interior of the enclosure E does not have to be kept at a strict vacuum; for example, in a so-called “Environmental (S)TEM”, a background atmosphere of a given gas is deliberately introduced/maintained within the enclosure E. The skilled artisan will also understand that, in practice, it may be advantageous to confine the volume of enclosure E so that, where possible, it essentially hugs the axis B′, taking the form of a small tube (e.g. of the order of 1 cm in diameter) through which the employed electron beam passes, but widening out to accommodate structures such as the source 4, specimen holder H, screen 26, camera 30, detector 32, spectroscopic apparatus 34, etc.
In the context of the current invention, the following specific points deserve further elucidation:
A further explanation will now be given regarding some of the mathematical techniques that can be used to obtain an iVF image as employed in the present invention.
Integrating Gradient Fields
As set forth above, a measured vector field $\tilde{E}(x,y) = \big(\tilde{E}_x(x,y),\, \tilde{E}_y(x,y)\big)^T$ can (for example) be derived at each coordinate point (x,y) from detector segment differences using the expressions:
where, for simplicity, spatial indexing (x,y) in the scalar fields $\tilde{E}_x$, $\tilde{E}_y$ and $S_{i=1,\dots,4}$ has been omitted, and where superscript T denotes transposition.
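Since the segment-difference expressions depend on the detector geometry, the following Python sketch uses an assumed quadrant numbering purely for illustration; the normalization by the summed signal is likewise an assumption:

```python
import numpy as np

def ivf_components(S1, S2, S3, S4):
    """Estimate the measured vector field components (Ex~, Ey~) from the
    four segment signals of a quadrant detector.

    The mapping of segments to difference signals below is an assumed
    (illustrative) numbering; the actual expressions depend on the
    detector geometry and calibration used.
    """
    total = S1 + S2 + S3 + S4
    total = np.where(total == 0, 1e-12, total)   # avoid division by zero
    Ex = (S1 + S4 - S2 - S3) / total             # horizontal difference signal
    Ey = (S1 + S2 - S3 - S4) / total             # vertical difference signal
    return Ex, Ey
```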
It is known from the theory of (S)TEM contrast formation that $\tilde{E}$ is a measurement of the actual electric field E in an area of interest of the imaged specimen. This measurement is inevitably corrupted by noise and distortions caused by imperfections in optics, detectors, electronics, etc. From basic electromagnetism, it is known that the electrostatic potential function φ(x,y) [also referred to below as the potential map] is related to the electric field by:
$E = -\nabla\varphi$   (3)
The goal here is to obtain the potential map at each scanned location of the specimen. However, the measured electric field in its noisy form $\tilde{E}$ will most likely not be "integrable", i.e. it cannot be derived from a smooth potential function by the gradient operator. The search for an estimate $\tilde{\varphi}$ of the potential map given the noisy measurements $\tilde{E}$ can be formulated as a fitting problem, resulting in the minimization of the objective functional J defined as:
$J(\varphi) = \iint \big\| (-\nabla\varphi) - \tilde{E} \big\|^2\, dx\,dy = \iint \big\| \nabla\varphi + \tilde{E} \big\|^2\, dx\,dy$   (4)
where $\|\cdot\|$ denotes the Euclidean (L2) norm.
One is essentially looking for the closest fit to the measurements, in the least squares sense, of gradient fields derived from smooth potential functions φ.
To be at the sought minimum of J one must satisfy the Euler-Lagrange equation:

$\frac{\partial F}{\partial \varphi} - \frac{d}{dx}\frac{\partial F}{\partial \varphi_x} - \frac{d}{dy}\frac{\partial F}{\partial \varphi_y} = 0$   (5)

where $F(\varphi, \varphi_x, \varphi_y) = \|\nabla\varphi + \tilde{E}\|^2$ is the integrand of (4), which can be expanded to:

$\frac{\partial}{\partial x}\big(\varphi_x + \tilde{E}_x\big) + \frac{\partial}{\partial y}\big(\varphi_y + \tilde{E}_y\big) = 0$   (6)

finally resulting in:

$\nabla^2 \varphi = -\left(\frac{\partial \tilde{E}_x}{\partial x} + \frac{\partial \tilde{E}_y}{\partial y}\right) = -\nabla\cdot\tilde{E}$   (7)

which is the Poisson equation that one needs to solve to obtain $\tilde{\varphi}$.
Poisson Solvers
Using finite differences for the derivatives in (7) one obtains:

$\frac{\varphi_{i-1,j} - 2\varphi_{i,j} + \varphi_{i+1,j}}{\Delta^2} + \frac{\varphi_{i,j-1} - 2\varphi_{i,j} + \varphi_{i,j+1}}{\Delta^2} = -\frac{\tilde{E}_{x;i+1,j} - \tilde{E}_{x;i-1,j}}{2\Delta} - \frac{\tilde{E}_{y;i,j+1} - \tilde{E}_{y;i,j-1}}{2\Delta}$   (8)

where Δ is the so-called grid step size (assumed here to be equal in the x and y directions). The right-side quantity in (8) is known from the measurements and will be lumped together in a term $\rho_{i,j}$ to simplify notation:

$\rho_{i,j} = -\frac{\tilde{E}_{x;i+1,j} - \tilde{E}_{x;i-1,j}}{2\Delta} - \frac{\tilde{E}_{y;i,j+1} - \tilde{E}_{y;i,j-1}}{2\Delta}$   (9)

which, after rearranging, results in:
$\varphi_{i-1,j} + \varphi_{i,j-1} - 4\varphi_{i,j} + \varphi_{i,j+1} + \varphi_{i+1,j} = \Delta^2 \rho_{i,j}$   (10)
for i=2, . . . , N−1 and j=2, . . . , M−1, with (N,M) the dimensions of the image to be reconstructed.
The system in (10) leads to the matrix formulation:
Lφ=ρ (11)
where φ and ρ represent the vector forms of the potential map and the measurements, respectively (each of length N·M, the number of image pixels). The so-called Laplacian matrix L has dimensions (N·M)×(N·M), but is highly sparse and has a special form called "tridiagonal with fringes" for the discretization scheme used above. So-called Dirichlet and Neumann boundary conditions are commonly used to fix the values of $\tilde{\varphi}$ at the edges of the potential map.
The linear system of (11) tends to be very large for typical (S)TEM images, and will generally be solved using numerical methods, such as the bi-conjugate gradient method. Similar approaches have previously been used in topography reconstruction problems, as discussed, for example, in the journal article by Ruggero Pintus, Simona Podda and Massimo Vanzi, 14th European Microscopy Congress, Aachen, Germany, pp. 597-598, Springer (2008). One should note that other forms of discretization of the derivatives can be used in the previously described approach, and that the overall technique is conventionally known as the Poisson solver method. A specific example of such a method is the so-called multi-grid Poisson solver, which is optimized to numerically solve the Poisson equation starting from a coarse mesh/grid and proceeding to a finer mesh/grid, thus increasing integration speed.
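The following Python sketch illustrates the route of (8)-(11) with SciPy: it assembles the sparse "tridiagonal with fringes" Laplacian under a Dirichlet (φ = 0) boundary and solves (11) with the bi-conjugate gradient stabilized method. It is a minimal sketch, not an optimized (e.g. multi-grid) implementation:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_poisson(Ex, Ey, delta=1.0):
    """Recover the potential map from measured gradient components by
    solving the discretized Poisson equation (10)-(11).
    Dirichlet boundary (phi = 0 at the edges), 5-point Laplacian."""
    N, M = Ex.shape
    # Right-hand side rho = -div(E), via central differences (eq. 9);
    # the first array axis is taken as x.
    rho = np.zeros((N, M))
    rho[1:-1, :] -= (Ex[2:, :] - Ex[:-2, :]) / (2 * delta)
    rho[:, 1:-1] -= (Ey[:, 2:] - Ey[:, :-2]) / (2 * delta)
    # 2D 5-point Laplacian as a Kronecker sum of 1D second differences.
    def lap1d(n):
        return sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
    L = sp.kronsum(lap1d(M), lap1d(N)).tocsr()   # acts on row-major flattening
    phi, info = spla.bicgstab(L, delta**2 * rho.ravel())
    if info != 0:
        raise RuntimeError("bicgstab did not converge")
    return phi.reshape(N, M)
```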
Basis Function Reconstruction
Another approach to solving (7) is to use the so-called Frankot-Chellappa algorithm, which was previously employed for depth reconstruction from photometric stereo images. Adapting this method to the current problem, one can reconstruct the potential map by projecting the derivatives onto the space of integrable Fourier basis functions. In practice, this is done by applying the Fourier Transform FT(⋅) to both sides of (7) to obtain:
$(\omega_x^2 + \omega_y^2)\,FT(\varphi) = \sqrt{-1}\,\big(\omega_x\,FT(\tilde{E}_x) + \omega_y\,FT(\tilde{E}_y)\big)$   (12)

from which $\tilde{\varphi}$ can be obtained by Inverse Fourier Transform (IFT):

$\tilde{\varphi} = IFT\!\left(\frac{\sqrt{-1}\,\big(\omega_x\,FT(\tilde{E}_x) + \omega_y\,FT(\tilde{E}_y)\big)}{\omega_x^2 + \omega_y^2}\right)$   (13)
The forward and inverse transforms can be implemented using the so-called Discrete Fourier Transform (DFT), in which case the assumed boundary conditions are periodic. Alternatively, one can use the so-called Discrete Sine Transform (DST), which corresponds to the use of the Dirichlet boundary condition (φ=0 at the boundary). One can also use the so-called Discrete Cosine Transform (DCT), corresponding to the use of the Neumann boundary conditions (∇φ·n=0 at the boundary, n being the normal vector at the given boundary location).
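A minimal Python sketch of this basis-function route, implementing (12)-(13) with the DFT (and hence periodic boundary conditions); swapping in DST/DCT transforms would give the Dirichlet/Neumann variants mentioned above:

```python
import numpy as np

def solve_poisson_fft(Ex, Ey, delta=1.0):
    """Frankot-Chellappa style reconstruction of the potential map,
    implementing (12)-(13) with the DFT (periodic boundaries)."""
    N, M = Ex.shape
    wx = 2 * np.pi * np.fft.fftfreq(N, d=delta).reshape(-1, 1)
    wy = 2 * np.pi * np.fft.fftfreq(M, d=delta).reshape(1, -1)
    num = 1j * (wx * np.fft.fft2(Ex) + wy * np.fft.fft2(Ey))
    den = wx**2 + wy**2
    den[0, 0] = 1.0          # avoid division by zero at the DC term
    phi_hat = num / den
    phi_hat[0, 0] = 0.0      # fix the (undetermined) mean of the potential
    return np.real(np.fft.ifft2(phi_hat))
```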
Generalizations and Improved Solutions
While working generally well, the Poisson solver and Basis Function techniques can be enhanced further by methods that take into account sharp discontinuities in the data (outliers). For that purpose, the objective function J can be modified to incorporate a different residual error R (in (4), the residual error was R(v)=∥v∥²). One can, for example, use exponents of less than two, including so-called Lp-norm-based objective functions:

$J(\varphi) = \iint \big\| (-\nabla\varphi) - \tilde{E} \big\|_p^p\, dx\,dy, \qquad 1 \le p < 2$   (14)
The residual can also be chosen from the set of functions typically used in so-called M-estimators (a commonly used class of robust estimators). In this case, R can be chosen from among functions such as the so-called Huber, Cauchy, and Tukey functions. Again, the desired result of this modification of the objective function will be to avoid overly smooth reconstructions and to account more accurately for real/physical discontinuities in the datasets. Another way of achieving this is to use anisotropic weighting functions wx and wy in J:
$J(\varphi) = \iint \Big[ w_x(\epsilon_x^{\,k-1})\big(-\varphi_x - \tilde{E}_x\big)^2 + w_y(\epsilon_y^{\,k-1})\big(-\varphi_y - \tilde{E}_y\big)^2 \Big]\, dx\,dy$   (15)
where the weight functions depend on the residuals:
$\epsilon_x^{\,k-1} = R\big(-\varphi_x^{\,k-1}, \tilde{E}_x\big) \qquad \text{and} \qquad \epsilon_y^{\,k-1} = R\big(-\varphi_y^{\,k-1}, \tilde{E}_y\big)$   (15a)
at iteration k−1.
It can be shown that, for the problem of depth reconstruction from photometric stereo images, the use of such anisotropic weights, which can be either binary or continuous, leads to improved results in the depth map recovery process.
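The following Python sketch illustrates one possible iteratively re-weighted scheme in the spirit of (15)-(15a), using binary weights derived from a robust (median-based) residual scale. The particular damping strategy, replacing flagged outlier measurements by the current fit, is an assumption made for illustration; the sketch reuses solve_poisson_fft() from above:

```python
import numpy as np

def solve_poisson_robust(Ex, Ey, n_iter=5, kappa=2.5, delta=1.0):
    """Iteratively re-weighted gradient-field integration, cf. (15)-(15a).
    Binary weights from the residuals of the previous iterate flag
    outlier gradient measurements, which are replaced by the current
    fit before re-solving."""
    Ex_w, Ey_w = Ex.copy(), Ey.copy()
    phi = solve_poisson_fft(Ex_w, Ey_w, delta)
    for _ in range(n_iter):
        gx, gy = np.gradient(phi, delta)              # current fitted gradients
        rx, ry = -gx - Ex, -gy - Ey                   # residuals, cf. (15a)
        sx = np.median(np.abs(rx)) / 0.6745 + 1e-12   # robust (MAD) scales
        sy = np.median(np.abs(ry)) / 0.6745 + 1e-12
        wx = np.abs(rx) < kappa * sx                  # binary weights w_x
        wy = np.abs(ry) < kappa * sy                  # binary weights w_y
        Ex_w = np.where(wx, Ex, -gx)                  # damp flagged outliers
        Ey_w = np.where(wy, Ey, -gy)
        phi = solve_poisson_fft(Ex_w, Ey_w, delta)
    return phi
```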
In another approach, one can also apply a diffusion tensor D to the vector fields $\nabla\varphi$ and $\tilde{E}$, with the aim of smoothing the data while preserving discontinuities during the process of solving for $\tilde{\varphi}$, resulting in the modification of (4) into:
$J(\varphi) = \iint \big\| D(-\nabla\varphi) - D(\tilde{E}) \big\|^2\, dx\,dy$   (16)
Finally, regularization techniques can be used to restrict the solution space. This is generally done by adding penalty functions in the formulation of the objective criterion J such as follows:
$J(\varphi) = \iint \Big[ \big\| (-\nabla\varphi) - \tilde{E} \big\|^2 + \lambda f(\nabla\varphi) \Big]\, dx\,dy$   (17)
The regularization function f(∇φ) can be used to impose a variety of constraints on φ for the purpose of stabilizing the convergence of the iterative solution. It can also be used to incorporate into the optimization process prior knowledge about the sought potential field or other specimen/imaging conditions.
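As a concrete illustration, choosing an (assumed) Tikhonov-style penalty f(∇φ) = ∥∇φ∥² in (17) merely rescales the Laplacian by (1 + λ), so the Fourier-domain solver above needs only a one-line change:

```python
import numpy as np

def solve_poisson_fft_reg(Ex, Ey, lam=0.1, delta=1.0):
    """Variant of solve_poisson_fft() with the Tikhonov-style penalty
    f(grad phi) = ||grad phi||^2 added to J as in (17); an illustrative
    choice of regularization function, giving (1 + lambda) * Laplacian."""
    N, M = Ex.shape
    wx = 2 * np.pi * np.fft.fftfreq(N, d=delta).reshape(-1, 1)
    wy = 2 * np.pi * np.fft.fftfreq(M, d=delta).reshape(1, -1)
    num = 1j * (wx * np.fft.fft2(Ex) + wy * np.fft.fft2(Ey))
    den = (1.0 + lam) * (wx**2 + wy**2)   # regularized Laplacian symbol
    den[0, 0] = 1.0
    phi_hat = num / den
    phi_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(phi_hat))
```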
Position Sensitive Detector (PSD)
Using a Position Sensitive Detector (PSD) and measuring a thin, non-magnetic specimen, one obtains (by definition) the vector field image components as components of the center of mass (COM) of the electron intensity distribution $I_D(\vec{k}, \vec{r}_p)$ at the detector plane:
$I_x^{COM}(\vec{r}_p) = \iint_{-\infty}^{\infty} k_x\, I_D(\vec{k}, \vec{r}_p)\, d^2\vec{k} \qquad I_y^{COM}(\vec{r}_p) = \iint_{-\infty}^{\infty} k_y\, I_D(\vec{k}, \vec{r}_p)\, d^2\vec{k}$   (18)
where $\vec{r}_p$ represents the position of the probe (focused electron beam) impinging upon the specimen, and $\vec{k} = (k_x, k_y)$ are coordinates in the detector plane. The full vector field image can then be formed as:

$\vec{I}^{COM}(\vec{r}_p) = I_x^{COM}(\vec{r}_p)\,\vec{x}_0 + I_y^{COM}(\vec{r}_p)\,\vec{y}_0$   (19)

where $\vec{x}_0$ and $\vec{y}_0$ are unit vectors in two perpendicular directions.
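A minimal Python sketch of the discrete form of (18) for a single probe position; scanning the probe and stacking the two outputs per (19) yields the component images (array and variable names are illustrative):

```python
import numpy as np

def com_components(I_D, kx, ky):
    """Discrete version of the centre-of-mass integrals (18) for one
    probe position r_p. `I_D` is the recorded 2D detector intensity
    (shape len(ky) x len(kx)); `kx`, `ky` are 1D detector-plane
    coordinate axes."""
    dk2 = (kx[1] - kx[0]) * (ky[1] - ky[0])        # pixel area d^2k
    Ix = dk2 * np.sum(I_D * kx[np.newaxis, :])     # integral of kx * I_D
    Iy = dk2 * np.sum(I_D * ky[:, np.newaxis])     # integral of ky * I_D
    return Ix, Iy
```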
The electron intensity distribution at the detector is given by:
$I_D(\vec{k}, \vec{r}_p) = \big| \mathcal{F}\{\psi_{in}(\vec{r} - \vec{r}_p)\, e^{i\varphi(\vec{r})}\}(\vec{k}) \big|^2$   (20)
where $\psi_{in}(\vec{r} - \vec{r}_p)$ is the impinging electron wave (i.e. the probe) illuminating the specimen at position $\vec{r}_p$, $e^{i\varphi(\vec{r})}$ is the transmission function of the specimen, and $\mathcal{F}$ denotes the Fourier transform to the detector plane. The phase $\varphi(\vec{r})$ is proportional to the specimen's inner electrostatic potential field. Imaging $\varphi(\vec{r})$ is the ultimate goal of any electron microscopy imaging technique. Expression (19) can be re-written as:

$\vec{I}^{COM}(\vec{r}_p) = C\,\big(|\psi_{in}(\vec{r})|^2 * \vec{E}(\vec{r})\big)(\vec{r}_p)$   (21)
where $\vec{E}(\vec{r}) = -\nabla\varphi(\vec{r})$ is the inner electric field of the specimen (the negative gradient of the electrostatic potential field of the specimen), C is a constant, and the operator "*" denotes cross-correlation. It is evident that the obtained vector field image depends linearly on this inner field; consequently, it can be integrated to yield a scalar (potential-like) image:

$\tilde{\varphi}^{iVF}(\vec{r}_p) = -\int_l \vec{I}^{COM} \cdot d\vec{l}$   (22)
using any arbitrary path l. This arbitrary path is allowed because, in the case of non-magnetic specimens, the only field is the electric field, which is a conservative vector field. Numerically this can be performed in many ways (see above). Analytically it can be worked out by introducing (21) into (22), yielding:

$iVF(\vec{r}_p) = C\,\big(|\psi_{in}(\vec{r})|^2 * \varphi(\vec{r})\big)(\vec{r}_p)$   (23)
It is clear that, with this proposed integration step, one obtains a scalar field image that directly represents φ({right arrow over (r)}), as already alluded to above.
The linearity assumptions in image formation elucidated above can be represented in the model:
$Q = A I$   (24)
in which:
$I = (I_1, I_2, \dots, I_N)^T$ is the set of iVF images acquired by varying focus value;
$Q = (Q_1, Q_2, \dots, Q_N)^T$ is a set of source images that are statistically de-correlated and that represent information coming from different depth layers (levels);
$A = (\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_N)^T$ is a square matrix transforming the original images into so-called principal components.
PCA decomposition obtains the factorization in equation (24) by finding a set of orthogonal components, starting with a search for the one with the highest variance. The first step consists in minimizing the (least-squares reconstruction) criterion:

$J(\mathbf{a}_1) = \big\| I - \mathbf{a}_1 (\mathbf{a}_1^T I) \big\|^2, \qquad \|\mathbf{a}_1\| = 1$   (25)

which is equivalent to maximizing the variance of the first component $Q_1 = \mathbf{a}_1^T I$.
The next step is to subtract the found component from the original images, and to find the next layer with highest variance.
At iteration 1&lt;k≤N, we find the kth row of the matrix A by solving:

$\mathbf{a}_k = \arg\max_{\|\mathbf{a}\|=1} \operatorname{var}\!\left(\mathbf{a}^T \Big(I - \sum_{i=1}^{k-1} \mathbf{a}_i Q_i\Big)\right)$   (26)
It can be shown (see, for example, literature references [1] and [3] referred to above) that successive layer separation can be achieved by using so-called Eigenvector Decomposition (EVD) of the covariance matrix ΣI of the acquired images:
$\Sigma_I = \mathbb{E}\{I I^T\} = E D E^T$   (27)
in which:
E is the orthogonal matrix of eigenvectors of $\Sigma_I$;
$D = \operatorname{diag}(d_1, \dots, d_N)$ is the diagonal matrix of Eigenvalues.
The principal components can then be obtained as:

$Q = E^T I$   (28)
The Eigenvalues are directly related to the variance of the different components:

$d_i = \sigma^2(Q_i) = \operatorname{var}(Q_i)$   (29)
In cases in which noise plays a significant role, the components with lower weights (Eigenvalues) may be dominated by noise. In such a situation, the inventive method can be limited to the K (K&lt;N) most significant components. The choice to reduce the dimensionality of the image data can be based on the cumulative energy and its ratio r to the total energy:

$r = \frac{\sum_{i=1}^{K} d_i}{\sum_{i=1}^{N} d_i}$   (30)
One can choose a limit for the number of employed layers K based on a suitable threshold value t. A common approach in PCA dimensionality reduction is to select the lowest K for which one obtains r≥t. A typical value for t is 0.9 (selecting components that represent 90% of the total energy).
Noise effects can be minimized by recombining several depth layers with a suitable weighting scheme. Additionally, re-weighting and recombination of layers can be useful to obtain an image contrast similar to the original images. In the previously described PCA decomposition, the strongest component (in terms of variance) is commonly associated with the background (matrix) material. Adding this component to depth layers enhances the visual appearance and information content of the obtained image. One can achieve the effect of boosting deeper-lying layers, reducing noise, and rendering proper contrast by re-scaling the independent components by their variances and reconstructing the highest-energy image using the rescaled components, as follows:

$\tilde{I} = \sum_{i=1}^{K} \operatorname{var}(Q_i)\, Q_i$   (31)
The skilled artisan will appreciate that other choices for the linear weighting of depth layers can also be used.
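The following Python sketch ties together (24), (27), (28) and (30): eigenvector decomposition of the N×N image covariance of an iVF focal series, followed by energy-based selection of the K most significant depth layers. The per-image centering step, and the omission of the re-weighting of (31), are simplifications made for illustration:

```python
import numpy as np

def pca_depth_layers(stack, t=0.9):
    """Depth-layer separation of an iVF focal series by eigenvector
    decomposition of the image covariance, following (24)-(30).
    `stack` has shape (N, H, W): N iVF images at N focus values."""
    N, H, W = stack.shape
    X = stack.reshape(N, -1)
    X = X - X.mean(axis=1, keepdims=True)       # remove per-image mean
    cov = (X @ X.T) / X.shape[1]                # N x N covariance, cf. (27)
    d, E = np.linalg.eigh(cov)                  # eigenvalues, ascending order
    order = np.argsort(d)[::-1]                 # sort by descending variance
    d, E = d[order], E[:, order]
    Q = E.T @ X                                 # principal components, cf. (28)
    r = np.cumsum(d) / np.sum(d)                # cumulative energy ratio (30)
    K = int(np.searchsorted(r, t) + 1)          # smallest K with r >= t
    return Q[:K].reshape(K, H, W), d[:K]
```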
As an alternative to the PCA decomposition set forth above, one can also employ an SS approach based on ICA. In ICA, one assumes a linear model similar to (24). The main difference with PCA is that one minimizes a higher-order statistical independence criterion (higher than the second-order statistics in PCA), such as so-called Mutual Information (MI):

$MI(Q) = \sum_{i=1}^{N} H(Q_i) - H(Q)$   (32)

with marginal entropies computed as:

$H(Q_i) = -\int p(q_i)\, \log p(q_i)\, dq_i$   (33)

and the joint entropy:

$H(Q) = -\int p(\mathbf{q})\, \log p(\mathbf{q})\, d\mathbf{q}$   (34)

in which p(⋅) denotes the corresponding (marginal or joint) probability density of the separated components.
Other criteria, such as the so-called Infomax and Negentropy criteria, can also be optimized in ICA decomposition. Iterative methods, such as FastICA, can be employed to efficiently perform the associated depth layer separation task. Adding more constraints to the factorization task can lead to more accurate reconstruction. If one adds the condition that the sources (layers) render positive signals and that the mixing matrix is also positive, one moves closer to the real physical processes underlying image formation. A layer separation method based on such assumptions may use the so-called Non-Negative Matrix Decomposition (NNMD) technique with iterative algorithms.
For more information, see, for example, literature references [1] and [2] cited above.
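For comparison, a minimal sketch of the ICA route using the FastICA algorithm mentioned above; the use of scikit-learn, and the choice to treat pixels as samples and focal images as mixtures, are implementation assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_depth_layers(stack, K=4):
    """ICA-based alternative to the PCA separation above, using FastICA.
    `stack` has shape (N, H, W): N iVF images at N focus values."""
    N, H, W = stack.shape
    X = stack.reshape(N, -1)
    ica = FastICA(n_components=K, whiten="unit-variance", random_state=0)
    # scikit-learn treats rows as samples, so transpose: pixels x images.
    S = ica.fit_transform(X.T)    # pixels x K independent sources
    return S.T.reshape(K, H, W)   # K separated depth layers
```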
Foreign Application Priority Data

Number | Date | Country | Kind
15163623 | Apr. 2015 | EP | regional

References Cited (U.S. Patent Application Publications)

Number | Name | Date | Kind
20070194225 | Zorn | Aug. 2007 | A1
20130037715 | Boughorbel | Feb. 2013 | A1
20130193322 | Blackburn | Aug. 2013 | A1
20150243474 | Lazic et al. | Aug. 2015 | A1

Other Publications

A. J. D'Alfonso et al., "Depth sectioning in scanning transmission electron microscopy based on core-loss spectroscopy," Ultramicroscopy, Elsevier, Amsterdam, NL, vol. 108, no. 1, Oct. 25, 2007, pp. 17-28.
Niels de Jonge et al., "Three-Dimensional Scanning Transmission Electron Microscopy of Biological Specimens," Microscopy and Microanalysis, Springer, New York, NY, U.S., vol. 16, no. 1, Feb. 2010, pp. 54-63.

Publication

Number | Date | Country
20160307729 A1 | Oct. 2016 | US