Method of performing tomographic imaging in a charged-particle microscope

Information

  • Patent Grant
  • Patent Number
    10,403,469
  • Date Filed
    Thursday, April 14, 2016
  • Date Issued
    Tuesday, September 3, 2019
Abstract
A method is presented for sub-surface imaging of a specimen in a charged-particle microscope. A series of images with individual members In is collected, with the value of a beam parameter P varied for each image, thereby compiling a measurement set M={(In, Pn)}, where P is the focus position along the particle-optical axis. The image data are recorded using signals from a segmented detector. The signals from the segments are combined and compiled to yield a vector field. Mathematical processing then deconvolves the vector field, resulting in depth-resolved imagery of the specimen.
Description

The invention relates to a method of performing sub-surface imaging of a specimen in a charged-particle microscope of a scanning transmission type, comprising the following steps:

    • Providing a beam of charged particles that is directed from a source along a particle-optical axis through an illuminator so as to irradiate the specimen;
    • Providing a detector for detecting a flux of charged particles traversing the specimen;
    • Causing said beam to follow a scan path across a surface of said specimen, and recording an output of said detector as a function of scan position, thereby acquiring a scanned charged-particle image I of the specimen;
    • Repeating this procedure for different members n of an integer sequence, by choosing a value Pn of a variable beam parameter P and acquiring an associated scanned image In, thereby compiling a measurement set M={(In, Pn)};
    • Using computer processing apparatus to automatically deconvolve the measurement set M and spatially resolve it into a result set representing depth-resolved imagery of the specimen.


The invention also relates to a charged-particle microscope in which such a method can be performed.


Charged-particle microscopy is a well-known and increasingly important technique for imaging microscopic objects, particularly in the form of electron microscopy. Historically, the basic genus of electron microscope has undergone evolution into a number of well-known apparatus species, such as the Transmission Electron Microscope (TEM), Scanning Electron Microscope (SEM), and Scanning Transmission Electron Microscope (STEM), and also into various sub-species, such as so-called “dual-beam” tools (e.g. a FIB-SEM), which additionally employ a “machining” Focused Ion Beam (FIB), allowing supportive activities such as ion-beam milling or Ion-Beam-Induced Deposition (IBID), for example. More specifically:

    • In a SEM, irradiation of a specimen by a scanning electron beam precipitates emanation of “auxiliary” radiation from the specimen, in the form of secondary electrons, backscattered electrons, X-rays and photoluminescence (infrared, visible and/or ultraviolet photons), for example; one or more components of this emanating radiation is/are then detected and used for image accumulation purposes, and/or spectroscopic analysis (as in the case of EDX (Energy-Dispersive X-Ray Spectroscopy), for example).
    • In a TEM, the electron beam used to irradiate the specimen is chosen to be of a high-enough energy to penetrate the specimen (which, to this end, will generally be thinner than in the case of a SEM specimen); the flux of transmitted electrons emanating from the specimen can then be used to create an image, or produce a spectrum (as in the case of EELS, for example; EELS=Electron Energy-Loss Spectroscopy). If such a TEM is operated in scanning mode (thus becoming a STEM), the image/spectrum in question will be accumulated during a scanning motion of the irradiating electron beam.


More information on some of the topics elucidated here can, for example, be gleaned from the following Wikipedia links:


http://en.wikipedia.org/wiki/Electron_microscope


http://en.wikipedia.org/wiki/Scanning_electron_microscope


http://en.wikipedia.org/wiki/Transmission_electron_microscopy


http://en.wikipedia.org/wiki/Scanning_transmission_electron_microscopy


As an alternative to the use of electrons as irradiating beam, charged-particle microscopy can also be performed using other species of charged particle. In this respect, the phrase “charged particle” should be broadly interpreted as encompassing electrons, positive ions (e.g. Ga or He ions), negative ions, protons and positrons, for instance. As regards ion-based microscopy, some further information can, for example, be gleaned from sources such as the following:

  • http://en.wikipedia.org/wiki/Scanning_Helium_Ion_Microscope
  • W. H. Escovitz, T. R. Fox and R. Levi-Setti, Scanning Transmission Ion Microscope with a Field Ion Source, Proc. Nat. Acad. Sci. USA 72(5), pp 1826-1828 (1975).
  • http://www.innovationmagazine.com/innovation/volumes/v7n1/coverstory3.shtml


    It should be noted that, in addition to imaging and/or spectroscopy, a charged-particle microscope (CPM) may also have other functionalities, such as examining diffractograms, performing (localized) surface modification (e.g. milling, etching, deposition), etc.


In all cases, a Scanning Transmission Charged-Particle Microscope (STCPM) will comprise at least the following components:

    • A radiation source, such as a Schottky electron source or ion gun.
    • An illuminator, which serves to manipulate a “raw” radiation beam from the source and perform upon it certain operations such as focusing, aberration mitigation, cropping (with a stop/iris/condensing aperture), filtering, etc. It will generally comprise one or more charged-particle lenses, and may comprise other types of particle-optical component also. If desired, the illuminator can be provided with a deflector system that can be invoked to cause its output beam to perform a scanning motion across the specimen being investigated.
    • A specimen holder, on which a specimen under investigation can be held and positioned (e.g. tilted, rotated). If desired, this holder can be moved so as to effect a scanning motion of the beam w.r.t. the specimen. In general, such a specimen holder will be connected to a positioning system such as a mechanical stage.
    • An imaging system, which essentially takes charged particles that are transmitted through a specimen (plane) and directs (focuses) them onto analysis apparatus, such as a detection/imaging device, spectroscopic apparatus, etc. As with the illuminator referred to above, the imaging system may also perform other functions, such as aberration mitigation, cropping, filtering, etc., and it will generally comprise one or more charged-particle lenses and/or other types of particle-optical components.
    • A detector, which may be unitary or compound/distributed in nature, and which can take many different forms, depending on the radiation/entity being recorded. Such a detector may, for example, be used to register an intensity value, to capture an image, or to record a spectrum. Examples include photomultipliers (including solid-state photomultipliers, SSPMs), photodiodes, (pixelated) CMOS detectors, (pixelated) CCD detectors, photovoltaic cells, etc., which may, for example, be used in conjunction with a scintillator film, for instance. For X-ray detection, use is typically made of a so-called Silicon Drift Detector (SDD), or a Silicon Lithium (Si(Li)) detector, for example. Typically, an STCPM will comprise several detectors, of various types.


      In what follows, the invention may by way of example sometimes be set forth in the specific context of electron microscopy. However, such simplification is intended solely for clarity/illustrative purposes, and should not be interpreted as limiting.


An example of a method as set forth in the opening paragraph above is known from so-called HAADF-STEM tomography (HAADF=High-Angle Annular Dark Field), in which the beam parameter P is beam incidence angle (beam tilt) relative to (a plane of) the specimen, and in which the measurement set M is a so-called “tilt series” or “sinogram”. See, for example, the following publication:

  • C. Kübel et al., Recent advances in electron tomography: TEM and HAADF-STEM tomography for materials science and semiconductor applications, Microscopy and Microanalysis November 2005, pp. 378-400:
  • http://www.researchgate.net/publication/6349887_Recent_advances_in_electron_tomography_TEM_and_HAADF-STEM_tomography_for_materials_science_and_semiconductor_applications


    In this known technique, the deconvolution/spatial resolution (“reconstruction”) of the set M can be performed using various mathematical tools. For example:
    • SIRT: Simultaneous Iterative Reconstruction Technique.
      • See, for example:
  • http://www.vcipt.org/pdfs/wcipt1/s2_1.pdf
  • P. Gilbert, Journal of Theoretical Biology, Volume 36, Issue 1, July 1972, Pages 105-117.
    • DART: Discrete Algebraic Reconstruction Technique.
      • See, for example: http://en.wikipedia.org/wiki/Algebraic_reconstruction_technique and http://www.emat.ua.ac.be/pdf/1701.pdf, and references therein.
    • FST: Fourier Slice Theorem.
      • See, for example, the book by A. C. Kak and Malcolm Slaney, Principles of Computerized Tomographic Imaging, IEEE Press, 1999; in particular, chapter 3, especially sections 3.2 and 3.3.
    • WBP (Weighted Back Projection) and POCS (Projection Onto Convex Sets), etc.


Although prior-art techniques such as set forth in the previous paragraph have produced tolerable results up to now, the current inventors have worked extensively to provide an innovative alternative to the conventional approach. The results of this endeavor are the subject of the current invention.


It is an object of the invention to provide a radically new method of investigating a specimen using an STCPM. In particular, it is an object of the invention that this method should allow sub-surface imaging of the specimen using alternative acquisition and processing techniques to those currently used.


These and other objects are achieved in a method as set forth in the opening paragraph above, which method is characterized in that:

    • Said variable beam parameter P is focus position (F) along said particle-optical axis;
    • Said scanned image I is an integrated vector field image, obtained by:
      • Embodying said detector to comprise a plurality of detection segments;
      • Combining signals from different detection segments so as to produce a vector output from the detector at each scan position, and compiling this data to yield a vector field;
      • Mathematically processing said vector field by subjecting it to a two-dimensional integration operation.


The current invention makes use of integrated vector field (iVF) imaging, which is an innovative imaging technique set forth in co-pending European Patent Applications EP 14156356 and EP 15156053, and co-pending U.S. patent application U.S. Ser. No. 14/629,387 (filed Feb. 23, 2015), which are incorporated herein by reference and will be referred to hereunder as the "iVF documents". Apart from this difference in the nature of the employed image I, the invention also differs from the prior art in that the adjusted beam parameter P (which is (incrementally) changed so as to obtain the measurement set M) is axial focus position instead of beam tilt. Inter alia as a result of these differences, and of various insights attendant thereto (which will be elucidated in greater detail below), the invention can make use of a different mathematical approach to perform depth-resolution on the measurement set M.


Of significant importance in the present invention is the insight that an iVF image is essentially a map of electrostatic potential φ(x,y) in the specimen, whereas a HAADF-STEM image is a map of φ²(x,y) [see next paragraph also]. As a result of this key distinction, one can make use of a linear imaging model (and associated deconvolution techniques) in the present invention, whereas one cannot do this when using HAADF-STEM imagery. More specifically, this linearity allows image composition/deconvolution to be mathematically treated as a so-called Source Separation (SS) problem (e.g. a Blind Source Separation (BSS) problem), in which an acquired image is regarded as being a convolution of contributions from a collection of sub-sources distributed within the bulk of the specimen, and in which sub-source recovery can be achieved using a so-called Inverse Problem Solver; this contrasts significantly with HAADF-STEM tomography, in which image reconstruction is based on mathematics that rely on "line-of-sight" or "parallax" principles (based on so-called Radon Transforms). The SS approach is made possible because the aforementioned linearity implies minimal/negligible interference between said sub-sources, and it preserves phase/sign, which is lost when working with a quadratic entity such as φ²(x,y) [as in HAADF-STEM]. Moreover, the irradiating charged-particle beam in an STCPM can effectively be regarded as passing directly through the specimen, with negligible lateral spread (scattering); as a result, there will be relatively low loss of lateral resolution in an associated SS problem. Examples of mathematical techniques that can be used to solve an SS problem as alluded to here include, for example, Principal Component Analysis (PCA), Independent Component Analysis (ICA), Singular Value Decomposition (SVD) and Positive Matrix Factorization (PMF). More information with regard to SS techniques can, for example, be gleaned from:

  • [1] P. Comon and C. Jutten, Handbook of Blind Source Separation: Independent Component Analysis and Applications, Academic Press, 2010.
  • [2] A. Hyvarinen and E. Oja, Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5):411-430, 2000.
  • [3] I. T. Jolliffe, Principal Component Analysis, Series: Springer Series in Statistics XXIX, 2nd ed., Springer, NY, 2002.
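
By way of a concrete, purely illustrative example of the Source Separation principle invoked above, the short Python sketch below mixes two synthetic, statistically independent signals with a known matrix and then recovers them blindly using the FastICA implementation of scikit-learn; all signal names and values are hypothetical and serve only to illustrate the linear mixing/unmixing idea.

```python
# Illustrative toy example (not taken from the measurement data of this disclosure):
# two synthetic "sub-source" signals are mixed linearly and then recovered with ICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)

# Two hypothetical, statistically independent sub-source signals.
s1 = np.sign(np.sin(2 * np.pi * 5 * t))        # square-wave-like source
s2 = np.sin(2 * np.pi * 13 * t + 0.4)          # sinusoidal source
S = np.column_stack([s1, s2])

# Linear mixing: each observation is a weighted sum of the sub-sources, plus noise.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = S @ A.T + 0.02 * rng.standard_normal(S.shape)

# Blind source separation: recover the independent components (up to scale/order).
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)

print("estimated mixing matrix:\n", ica.mixing_)
```

In the context of the present invention, the "sources" are the depth-layer contributions and the "mixtures" are the iVF images of the focus series (see Embodiments 3 and 4 below).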


At this point, it should be noted that the current invention is substantially different from the technique commonly referred to as “Confocal STEM” or “SCEM” (SCEM=Scanning Confocal Electron Microscopy). In this known technique:

    • One is not performing any imaging deconvolution. Instead, one is assuming that, for a given focus position within a specimen, all imaging information is produced within the corresponding focal plane (so-called “waist” of the incoming focused charged-particle beam/probe), with no significant contributions from overlying or underlying layers. Consequently, for each employed focus setting, there is no mixing/degeneration/convolution of imaging information from a stack of co-contributing layers, and thus no associated deconvolution/disentanglement problem.
    • Considering a single layer of the specimen (thinner than the beam waist), one is not imaging φ(x,y) (since one is not using iVF imagery), but one is instead imaging a complicated non-linear function of φ(x,y). More specifically, the image formation process in SCEM is given by the expression:

      $I_{\mathrm{SCEM}}(\vec{r}_p) = \left|\left(\psi_{in}^{L1}(-\vec{r})\,\psi_{in}^{L2}(\vec{r}) * e^{i\varphi(\vec{r})}\right)(\vec{r}_p)\right|^2$

      wherein:
    • $\psi_{in}^{L1}(-\vec{r})$ and $\psi_{in}^{L2}(\vec{r})$ are wave functions respectively associated with the condenser lens (illuminator) and projection lens (imaging system);
    • $\psi_{in}^{L1}(-\vec{r})$ describes the probe (irradiating charged-particle beam) impinging upon the specimen;
    • I represents intensity and $\vec{r}_p$ denotes the scanning coordinate/probe position;
    • The “*” operator indicates a convolution.


      In contrast, in the present invention (with iVF imaging), one instead obtains:








$I_{\mathrm{iVF}}(\vec{r}_p) = \frac{1}{2\pi}\left(\left|\psi_{in}(\vec{r})\right|^2 * \varphi(\vec{r})\right)(\vec{r}_p)$







in which the "*" operator indicates a cross-correlation, and in which linear dependence on $\varphi(\vec{r})$ is immediately evident.


For good order (and purposes of comparison), the imaging situation in a HAADF-STEM is given by:

$I_{\mathrm{HAADF\text{-}STEM}}(\vec{r}_p) = C_{\mathrm{HAADF}}\left(\left|\psi_{in}(\vec{r})\right|^2 * \varphi^2(\vec{r})\right)(\vec{r}_p)$

where $C_{\mathrm{HAADF}}$ is a constant whose value depends on particulars of the employed detector configuration, and "*" again indicates a cross-correlation. It is clear that, in this technique, imaging is a function of $\varphi^2(\vec{r})$ [as already stated above].

    • One needs to make use of a special type of detector, viz. a so-called pinhole detector. The reason such a detector is necessary is to simplify the imaging mathematics, by allowing a rather complicated (generic) “window function” to be replaced by a much simpler (specific) Dirac delta function (this simplification has already been processed in the mathematics set forth in the previous item). Such a pinhole detector needs to be carefully manufactured (to have a well-defined, sufficiently small pinhole) and kept properly aligned with the imaging beam, thus introducing extra complications relative to the present invention.


For purposes of completeness, it is noted that the “iVF documents” referred to above inter alia make it clear that:

    • To obtain iVF imagery, the detector in the current invention may, for example, be a four-quadrant detector, pixelated detector or Position-sensitive Detector (PSD), for instance.
    • To obtain iVF imagery, the vector(ized) output from such a detector will be subjected to a two-dimensional integration operation (e.g. Vector Field Integration or Gradient Field Integration), thus yielding a scalar image.
    • If desired, a “raw” iVF image can be post-processed, e.g. by applying high-, low- or band-pass filtering, Opening Angle Correction (OAC), or deconvolution correction, for instance, thus obtaining a so-called PiVF image.
    • If desired, one can apply a Laplacian operator to an iVF or PiVF image, thus yielding so-called LiVF or LPiVF images, respectively.


      Any such iVF, PiVF, LiVF or LPiVF image is considered as falling within the scope of the “integrated vector field image I” of the current invention.


In a particular embodiment of the current invention, an approach is adopted wherein:

    • The specimen is conceptually sub-divided into a (depth) series [S1, . . . , Sm] of m slices disposed along and normal to said particle-optical axis;
    • For each value of n, the corresponding image In is expressed as a linear sum Σ_{j=1}^{j=m} i_n(S_j) of discrete sub-images, each associated with a different one of said slices.


      For example, in a particular instance of such an embodiment, PCA is applied to a set of m spatially aligned (and, if necessary, scaled) iVF images acquired at different/incremental focus values. After mean-centering each image and applying PCA, one obtains a set of m de-correlated images that are related to the input ones by linear transformations (i.e. each input image can be expressed as a linear combination of these de-correlated images). The linear mappings can be obtained using various suitable methods, such as a so-called Karhunen-Loeve Transform, for example. New information in iVF images acquired at increasingly deeper focus is mostly due to signals coming from new depth layers reached by the incident charged-particle beam; the effect of PCA de-correlation thus results in the effective separation of the different depth layers. The inventors observed that sets of images with lower Eigenvalues in a Karhunen-Loeve Transform correspond to deeper layers. In the image associated with these deeper components, top layers are canceled using information from all available shallower-focus images. Based on these observations, one can develop an exemplary algorithm that uses m input images, as follows:
    • Step 1: Acquire m iVF images at increasingly deep focal levels (focus series).
    • Step 2: Laterally align and/or scale the image sequence thus obtained.
    • Step 3: To compute (distil) the image associated with a discrete layer (level) of ordinal k counted from the specimen surface (k=1 . . . m):
      • Apply PCA decomposition to the first k images in the sequence.
      • Boost independent components having low weight (which emanate from deeper layers); this can, for example, be done by multiplying such components by a weighting factor that is equal or proportional to the reciprocal of their PCA (e.g. Karhunen-Loeve) Eigenvalue.
      • Reconstruct a depth image with re-weighted independent components (including, for example, a background (matrix or bulk) component).


Step 4: Post-process the obtained sequence using de-noising and restoration methods. Using such an approach, the relative thickness of the computed slices (layers/levels) can be adjusted by suitable choice of the focus increments applied during acquisition of the focus series. This can result in very high depth resolution in many applications. Although the example just given makes specific use of PCA, one could also solve this problem using ICA or another SS technique. For more information on the above-mentioned Karhunen-Loeve Transform, see, for example:


http://en.wikipedia.org/wiki/Karhunen%E2%80%93Lo%C3%A8ve_theorem
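
A minimal numerical sketch of the exemplary algorithm above is given below (Python/numpy), assuming that the m focus-series iVF images have already been acquired, laterally aligned and scaled, and that they are supplied as a hypothetical array ivf_stack of shape (m, height, width); the eigen-decomposition of the image covariance plays the role of the Karhunen-Loeve Transform, and low-weight components are boosted by the reciprocal of their Eigenvalue, as described in Step 3.

```python
# Minimal sketch under the stated assumptions; Step 4 (de-noising/restoration) is omitted.
import numpy as np

def depth_layer(ivf_stack, k, eps=1e-12):
    """Distil the image associated with depth level k (1-based) from the
    first k images of an aligned focus series (Step 3 above)."""
    m, H, W = ivf_stack.shape
    X = ivf_stack[:k].reshape(k, -1).astype(float)   # first k images as row vectors
    X -= X.mean(axis=1, keepdims=True)               # mean-centre each image

    cov = X @ X.T / X.shape[1]                       # k x k image covariance
    d, E = np.linalg.eigh(cov)                       # Eigenvalues in ascending order
    Q = E.T @ X                                      # de-correlated components

    w = 1.0 / (d + eps)                              # boost low-weight (deeper) components
    layer = (w[:, None] * Q).sum(axis=0)             # re-weighted recombination
    return layer.reshape(H, W)

# One reconstructed image per depth level:
# layers = [depth_layer(ivf_stack, k) for k in range(1, ivf_stack.shape[0] + 1)]
```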


In a variant/special case of the embodiment set forth in the previous paragraph, the following applies:

    • For each given focus value Pn, a particular slice SBn is associated with a position of best focus within the specimen;
    • i_n(S_j) is set to zero for each integer j≠Bn, so that In is taken to derive solely from SBn. In this embodiment, one is essentially assuming that a relatively large fraction of the imaging information originates from the plane of best focus of the impinging charged-particle beam, to the extent that one chooses to ignore contributions coming from outside that plane. When viewed in longitudinal cross-section within the specimen, such a beam has a (quasi-) hourglass shape, with a relatively narrow "waist" (or beam cross-over) in its middle. The charged-particle intensity (energy per unit area) in this waist region will be maximal compared to the rest of the beam cross-section, and what the current embodiment essentially assumes is that imagery from this high-intensity waist will overwhelm any imagery contributions coming from outside the waist. The inventors have observed that this is a reasonable assumption to make when, for example, the beam has a relatively large opening angle (~numerical aperture), in which case the abovementioned hourglass will have relatively broad (and, thus, low-intensity) "shoulders and hips" compared to its waist. For example, an opening angle greater than about 20 mrad could be expected to give this effect.


In the context of the present invention, the set {Pn} (={Fn}) can be referred to as a “focus series” (as already alluded to above). The skilled artisan will understand that the cardinality of this set, and the (incremental) separation of its elements, are matters of choice, which can be tailored at will to suit the particulars of a given situation. In general, a larger cardinality/closer spacing of elements can lead to higher deconvolution resolution, but will generally incur a throughput penalty. In a typical instance, one might, for example, employ a cardinality of the order of about 20, with focus increments of the order of about 5 nm; such values are exemplary only, and should not be construed as limiting.





The invention will now be elucidated in more detail on the basis of exemplary embodiments and the accompanying schematic drawings, in which:



FIG. 1 renders a longitudinal cross-sectional elevation of an STCPM in which an embodiment of the current invention can be carried out.



FIG. 2 depicts a plan view of a particular embodiment of a segmented detector (quadrant detector) that can be used in the subject of FIG. 1, in accordance with the current invention.



FIG. 3 depicts a plan view of another embodiment of a segmented detector (pixelated detector) that can be used in the subject of FIG. 1, in accordance with the current invention.





EMBODIMENT 1


FIG. 1 is a highly schematic depiction of an embodiment of a STCPM M according to the current invention, which, in this case, is a (S)TEM (though, in the context of the current invention, it could just as validly be an ion-based or proton microscope, for example). In the Figure, within a vacuum enclosure E, an electron source 4 (such as a Schottky emitter, for example) produces a beam (B) of electrons that traverse an electron-optical illuminator 6, serving to direct/focus them onto a chosen part of a specimen S (which may, for example, be (locally) thinned/planarized). This illuminator 6 has an electron-optical axis B′, and will generally comprise a variety of electrostatic/magnetic lenses, (scan) deflector(s) D, correctors (such as stigmators), etc.; typically, it can also comprise a condenser system (in fact, the whole of item 6 is sometimes referred to as “a condenser system”).


The specimen S is held on a specimen holder H. As here illustrated, part of this holder H (inside enclosure E) is mounted in a cradle A′ that can be positioned/moved in multiple degrees of freedom by a positioning device (stage) A; for example, the cradle A′ may (inter alia) be displaceable in the X, Y and Z directions (see the depicted Cartesian coordinate system), and may be rotated about a longitudinal axis parallel to X. Such movement allows different parts of the specimen S to be irradiated/imaged/inspected by the electron beam traveling along axis B′ (and/or allows scanning motion to be performed as an alternative to beam scanning [using deflector(s) D], and/or allows selected parts of the specimen S to be machined by a (non-depicted) focused ion beam, for example).


The (focused) electron beam B traveling along axis B′ will interact with the specimen S in such a manner as to cause various types of “stimulated” radiation to emanate from the specimen S, including (for example) secondary electrons, backscattered electrons, X-rays and optical radiation (cathodoluminescence). If desired, one or more of these radiation types can be detected with the aid of sensor 22, which might be a combined scintillator/photomultiplier or EDX (Energy-Dispersive X-Ray Spectroscopy) module, for instance; in such a case, an image could be constructed using basically the same principle as in a SEM. However, of principal importance in a (S)TEM, one can instead/supplementally study electrons that traverse (pass through) the specimen S, emerge (emanate) from it and continue to propagate (substantially, though generally with some deflection/scattering) along axis B′. Such a transmitted electron flux enters an imaging system (combined objective/projection lens) 24, which will generally comprise a variety of electrostatic/magnetic lenses, deflectors, correctors (such as stigmators), etc. In normal (non-scanning) TEM mode, this imaging system 24 can focus the transmitted electron flux onto a fluorescent screen 26, which, if desired, can be retracted/withdrawn (as schematically indicated by arrows 26′) so as to get it out of the way of axis B′. An image (or diffractogram) of (part of) the specimen S will be formed by imaging system 24 on screen 26, and this may be viewed through viewing port 28 located in a suitable part of a wall of enclosure E. The retraction mechanism for screen 26 may, for example, be mechanical and/or electrical in nature, and is not depicted here.


As an alternative to viewing an image on screen 26, one can instead make use of the fact that the depth of focus of the electron flux emerging from imaging system 24 is generally quite large (e.g. of the order of 1 meter). Consequently, various types of sensing device/analysis apparatus can be used downstream of screen 26, such as:

    • TEM camera 30. At camera 30, the electron flux can form a static image (or diffractogram) that can be processed by controller 10 and displayed on a display device (not depicted), such as a flat panel display, for example. When not required, camera 30 can be retracted/withdrawn (as schematically indicated by arrows 30′) so as to get it out of the way of axis B′.
    • STEM detector 32. An output from detector 32 can be recorded as a function of (X,Y) scanning position of the beam B on the specimen S, and an image can be constructed that is a "map" of output from detector 32 as a function of X,Y. Typically, detector 32 will have a much higher acquisition rate (e.g. 10⁶ points per second) than camera 30 (e.g. 10² images per second). In conventional tools, detector 32 can comprise a single pixel with a diameter of e.g. 20 mm, as opposed to the matrix of pixels characteristically present in camera 30; however, in the context of the present invention, detector 32 will have a different structure (see below), so as to allow iVF imaging to be performed. Once again, when not required, detector 32 can be retracted/withdrawn (as schematically indicated by arrows 32′) so as to get it out of the way of axis B′ (although such retraction would not be a necessity in the case of a donut-shaped annular dark field detector 32, for example; in such a detector, a central hole would allow beam passage when the detector was not in use).
    • As an alternative to imaging using camera 30 or detector 32, one can also invoke spectroscopic apparatus 34, which could be an EELS module, for example.


      It should be noted that the order/location of items 30, 32 and 34 is not strict, and many possible variations are conceivable. For example, spectroscopic apparatus 34 can also be integrated into the imaging system 24.


Note that the controller/computer processor 10 is connected to various illustrated components via control lines (buses) 10′. This controller 10 can provide a variety of functions, such as synchronizing actions, providing setpoints, processing signals, performing calculations, and displaying messages/information on a display device (not depicted). Needless to say, the (schematically depicted) controller 10 may be (partially) inside or outside the enclosure E, and may have a unitary or composite structure, as desired. The skilled artisan will understand that the interior of the enclosure E does not have to be kept at a strict vacuum; for example, in a so-called “Environmental (S)TEM”, a background atmosphere of a given gas is deliberately introduced/maintained within the enclosure E. The skilled artisan will also understand that, in practice, it may be advantageous to confine the volume of enclosure E so that, where possible, it essentially hugs the axis B′, taking the form of a small tube (e.g. of the order of 1 cm in diameter) through which the employed electron beam passes, but widening out to accommodate structures such as the source 4, specimen holder H, screen 26, camera 30, detector 32, spectroscopic apparatus 34, etc.


In the context of the current invention, the following specific points deserve further elucidation:

    • The detector 32 is embodied as a segmented detector, which, for example, may be a quadrant sensor, pixelated CMOS/CCD/SSPM detector, or PSD, for instance. Specific embodiments of such detectors are shown in plan view in FIGS. 2 and 3, and will be discussed below.
    • If a charged-particle beam propagating along the particle-optical axis B′ traverses the specimen S without undergoing any scattering/deflection in the specimen, then it will impinge (substantially) symmetrically on the center/origin O of the detector 32, and (essentially) give a “null” reading. This situation is shown in more detail in FIGS. 2 and 3, which show Cartesian axes X, Y with an origin at point O, on which is centered a dashed circle that schematically represents an impingement footprint F′ of a (ghost) charged-particle beam with barycenter C′, such that:
      • In FIG. 2, this footprint F′ is symmetrically overlaid on detection quadrants (electrodes) Q1, Q2, Q3, Q4. If the detection signals (electrical currents) from these quadrants are respectively denoted by S1, S2, S3, S4, then this situation will yield zero difference signals S1-S3 and S2-S4 between opposing pairs of quadrants.
      • In FIG. 3, which depicts an orthogonal matrix of detection pixels p (e.g. in a CMOS detector, possibly with an overlaid scintillation layer), there is zero deviation between the elected origin O of said pixel matrix and barycenter C′.
    • If, on the other hand, a charged-particle beam undergoes some scattering/deflection in the specimen S, it will land on the detector 32 at a position displaced from the origin O. In this context, FIGS. 2 and 3 show a beam footprint F with barycenter C that is no longer centered on O. The position of point C with respect to O defines a vector V, with an associated magnitude (length) and direction (pointing angle with respect to X axis, for example). This vector V can be expressed in terms of the coordinates (XC, YC) of point C, which can be distilled as follows:
      • In FIG. 2, one can derive (rudimentary) estimators for XC, YC using the following formulae:











$X_C \propto \frac{S_1 - S_3}{S_1 + S_2 + S_3 + S_4}, \qquad Y_C \propto \frac{S_2 - S_4}{S_1 + S_2 + S_3 + S_4}$  (1)











      • In FIG. 3, one can derive values for XC, YC by examining output signals from the various pixels p, because pixels p that are impinged upon by the beam footprint F will give a different output signal (electrical resistance, voltage or current, for example) to pixels p outside the footprint F. The location of C can then be directly deduced by noting the coordinates of that particular pixel that yields an extremal signal, or indirectly determined by mathematically calculating the barycenter of the cluster of pixels p impinged on by F, or via a hybrid technique that combines both approaches, for example.


        The skilled artisan will understand that the size of beam footprint F can be altered by adjusting the so-called “camera length” of the STCPM of FIG. 1, for example.



    • As the input charged-particle beam B is scanned across the specimen S so as to trace out a two-dimensional scan path (area), the approach set forth in the previous item can be used to obtain a value of V for each coordinate position along said scan path. This allows compilation of a “map” of vector V as a function of scan position on the specimen S, which amounts to a mathematical field (and also a physical field, in that the vector V can be assigned a (proportional) physical meaning, such as electrostatic field vector).

    • The vector field resulting from the previous step can now be integrated two-dimensionally, so as to obtain an integrated vector field (iVF) image that is intrinsic to the current invention. This aspect of the invention will be elucidated in more detail in the next Embodiment (which again makes specific reference to (S)TEM, but is equally applicable to a generic STCPM).

    • In accordance with the present invention, such an iVF image is obtained at each of a series {Fn} of different focus values (focus position being the beam parameter P that, in accordance with the current invention, is selected to be varied so as to obtain input data pairs for the ensuing mathematical deconvolution procedure; this contrasts with HAADF-STEM tomography, for example, where the varied parameter P is chosen to be beam tilt). Focus value may, for example, be varied by:
      • Changing a (focus) setting of (at least one of) the particle-optical components in illuminator 6; and/or
      • Changing a Z position of specimen holder H.


        In this way, one accrues a measurement set M={(In, Pn)}={(In, Fn)}, in which In is the iVF image corresponding to a given focus value Fn. According to the invention, this measurement set M can then be (automatically) deconvolved/spatially resolved into a (tomographic) result set representing depth-resolved/depth-reconstructed imagery of the specimen, e.g. using SS mathematical techniques as set forth above. See, in this regard, Embodiments 3 and 4 below; a schematic sketch of this acquisition loop is given directly hereunder.
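
The following Python sketch illustrates the data flow of this acquisition loop only; the three helper functions are hypothetical stand-ins (returning dummy data so that the sketch runs) for the instrument control and iVF processing steps described above, and no real microscope control API is implied.

```python
import numpy as np

def set_focus(F_n):
    """Hypothetical placeholder: adjust an illuminator setting or the holder Z position."""
    pass

def acquire_vector_field(shape=(64, 64)):
    """Hypothetical placeholder: per-scan-position vector output of the segmented detector."""
    return np.zeros(shape), np.zeros(shape)

def integrate_vector_field(Vx, Vy):
    """Hypothetical placeholder: two-dimensional integration -> scalar iVF image (Embodiment 2)."""
    return np.zeros_like(Vx)

def acquire_focus_series(focus_values):
    """Compile the measurement set M = {(I_n, F_n)} of iVF images."""
    M = []
    for F_n in focus_values:
        set_focus(F_n)
        Vx, Vy = acquire_vector_field()
        I_n = integrate_vector_field(Vx, Vy)
        M.append((I_n, F_n))
    return M          # input for the deconvolution step of Embodiments 3 and 4

M = acquire_focus_series(np.arange(20) * 5e-9)   # e.g. ~20 focus values spaced by ~5 nm
```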





EMBODIMENT 2

A further explanation will now be given regarding some of the mathematical techniques that can be used to obtain an iVF image as employed in the present invention.


Integrating Gradient Fields


As set forth above, a measured vector field $\tilde{E}(x,y)=(\tilde{E}_x(x,y), \tilde{E}_y(x,y))^T$ can (for example) be derived at each coordinate point (x,y) from detector segment differences using the expressions:











$\tilde{E}_x = \frac{S_1 - S_3}{S_1 + S_2 + S_3 + S_4}$  (2a)

$\tilde{E}_y = \frac{S_2 - S_4}{S_1 + S_2 + S_3 + S_4}$  (2b)








where, for simplicity, spatial indexing (x,y) in the scalar fields $\tilde{E}_x$, $\tilde{E}_y$ and $S_i$ (i=1, . . . , 4) has been omitted, and where superscript T denotes the transpose of a matrix.
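
A minimal sketch of (2a)/(2b) in Python/numpy is given below, in which S1 . . . S4 are hypothetical 2D arrays holding the four quadrant signals recorded at each scan position (a small constant guards against division by zero in empty regions):

```python
import numpy as np

def measured_field(S1, S2, S3, S4, eps=1e-12):
    """Return the normalised difference signals (E~x, E~y) for every scan position."""
    total = S1 + S2 + S3 + S4
    Ex = (S1 - S3) / (total + eps)   # eq. (2a)
    Ey = (S2 - S4) / (total + eps)   # eq. (2b)
    return Ex, Ey
```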


It is known from the theory of (S)TEM contrast formation that $\tilde{E}$ is a measurement of the actual electric field E in an area of interest of the imaged specimen. This measurement is inevitably corrupted by noise and distortions caused by imperfections in optics, detectors, electronics, etc. From basic electromagnetism, it is known that the electrostatic potential function φ(x,y) [also referred to below as the potential map] is related to the electric field by:

E=−∇φ  (3)


The goal here is to obtain the potential map at each scanned location of the specimen. But the measured electric field in its noisy form $\tilde{E}$ will most likely not be "integrable", i.e. cannot be derived from a smooth potential function by the gradient operator. The search for an estimate $\tilde{\varphi}$ of the potential map given the noisy measurements $\tilde{E}$ can be formulated as a fitting problem, resulting in the functional minimization of an objective function J defined as:

$J(\varphi) = \iint \left\|(-\nabla\varphi) - \tilde{E}\right\|^2\,dx\,dy = \iint \left\|\nabla\varphi + \tilde{E}\right\|^2\,dx\,dy$  (4)

where $\nabla\varphi = (\varphi_x, \varphi_y)^T = \left(\frac{\partial\varphi}{\partial x}, \frac{\partial\varphi}{\partial y}\right)^T$.







One is essentially looking for the closest fit to the measurements, in the least squares sense, of gradient fields derived from smooth potential functions φ.


To be at the sought minimum of J one must satisfy the Euler-Lagrange equation:

$\frac{\partial\left\|\nabla\varphi+\tilde{E}\right\|^2}{\partial\varphi} - \frac{d}{dx}\,\frac{\partial\left\|\nabla\varphi+\tilde{E}\right\|^2}{\partial\varphi_x} - \frac{d}{dy}\,\frac{\partial\left\|\nabla\varphi+\tilde{E}\right\|^2}{\partial\varphi_y} = 0$  (5)








which can be expanded to:

$-\frac{d}{dx}\,\frac{\partial\left[(\varphi_x+\tilde{E}_x)^2+(\varphi_y+\tilde{E}_y)^2\right]}{\partial\varphi_x} - \frac{d}{dy}\,\frac{\partial\left[(\varphi_x+\tilde{E}_x)^2+(\varphi_y+\tilde{E}_y)^2\right]}{\partial\varphi_y} = 0$  (6)








finally resulting in:

$\frac{\partial^2\varphi}{\partial x^2} + \frac{\partial^2\varphi}{\partial y^2} = -\left(\frac{\partial\tilde{E}_x}{\partial x} + \frac{\partial\tilde{E}_y}{\partial y}\right)$  (7)








which is the Poisson equation that one needs to solve to obtain $\tilde{\varphi}$.


Poisson Solvers


Using finite differences for the derivatives in (7) one obtains:

$\frac{\varphi_{i+1,j} - 2\varphi_{i,j} + \varphi_{i-1,j}}{\Delta^2} + \frac{\varphi_{i,j+1} - 2\varphi_{i,j} + \varphi_{i,j-1}}{\Delta^2} = -\left(\frac{(\tilde{E}_x)_{i+1,j} - (\tilde{E}_x)_{i-1,j}}{\Delta} + \frac{(\tilde{E}_y)_{i,j+1} - (\tilde{E}_y)_{i,j-1}}{\Delta}\right)$  (8)








where Δ is the so-called grid step size (assumed here to be equal in the x and y directions). The right-hand side of (8) is known from measurements and will be lumped together in a term $\rho_{i,j}$ to simplify notation:

$\frac{\varphi_{i+1,j} - 2\varphi_{i,j} + \varphi_{i-1,j}}{\Delta^2} + \frac{\varphi_{i,j+1} - 2\varphi_{i,j} + \varphi_{i,j-1}}{\Delta^2} = \rho_{i,j}$  (9)








which, after rearranging, results in:

$\varphi_{i-1,j} + \varphi_{i,j-1} - 4\varphi_{i,j} + \varphi_{i,j+1} + \varphi_{i+1,j} = \Delta^2\rho_{i,j}$  (10)

for i=2, . . . , N−1 and j=2, . . . , M−1, with (N,M) the dimensions of the image to be reconstructed.


The system in (10) leads to the matrix formulation:

Lφ=ρ  (11)

where φ and ρ represent the vector form of the potential map and measurements, respectively (the size of these vectors is N·M, the number of image pixels). The so-called Laplacian matrix L has dimensions (N·M)×(N·M) but is highly sparse and has a special form called "tridiagonal with fringes" for the discretization scheme used above. So-called Dirichlet and Neumann boundary conditions are commonly used to fix the values of $\tilde{\varphi}$ at the edges of the potential map.


The linear system of (11) tends to be very large for typical (S)TEM images, and will generally be solved using numerical methods, such as the bi-conjugate gradient method. Similar approaches have previously been used in topography reconstruction problems, as discussed, for example, in the journal article by Ruggero Pintus, Simona Podda and Massimo Vanzi, 14th European Microscopy Congress, Aachen, Germany, pp. 597-598, Springer (2008). One should note that other forms of discretization of the derivatives can be used in the previously described approach, and that the overall technique is conventionally known as the Poisson solver method. A specific example of such a method is the so-called multi-grid Poisson solver, which is optimized to numerically solve the Poisson equation starting from a coarse mesh/grid and proceeding to a finer mesh/grid, thus increasing integration speed.
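
By way of illustration, a compact Python/scipy sketch of such a Poisson-solver integration is given below; it assembles the sparse 5-point Laplacian of (9)-(11), uses central differences for the divergence on the right-hand side, assumes Dirichlet (zero) boundary values for simplicity, and uses a direct sparse solve where a bi-conjugate gradient or multi-grid solver could equally be substituted. The array names are hypothetical.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def integrate_poisson(Ex, Ey, delta=1.0):
    """Recover the potential map from the measured field components (2D arrays)."""
    N, M = Ex.shape

    # Right-hand side rho = -(dEx/dx + dEy/dy), cf. (7)-(9), via central differences.
    rho = np.zeros((N, M))
    rho[1:-1, :] -= (Ex[2:, :] - Ex[:-2, :]) / (2.0 * delta)
    rho[:, 1:-1] -= (Ey[:, 2:] - Ey[:, :-2]) / (2.0 * delta)

    # Sparse 5-point Laplacian ("tridiagonal with fringes"), Dirichlet boundaries.
    def lap1d(n):
        return sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    L = (sp.kron(lap1d(N), sp.identity(M)) + sp.kron(sp.identity(N), lap1d(M))) / delta**2

    phi = spsolve(L.tocsr(), rho.ravel())   # solve L * phi = rho, eq. (11)
    return phi.reshape(N, M)
```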


Basis Function Reconstruction


Another approach to solving (7) is to use the so-called Frankot-Chellappa algorithm, which was previously employed for depth reconstruction from photometric stereo images. Adapting this method to the current problem, one can reconstruct the potential map by projecting the derivatives onto the space of integrable Fourier basis functions. In practice, this is done by applying the Fourier Transform FT(⋅) to both sides of (7) to obtain:

$(\omega_x^2 + \omega_y^2)\,\mathrm{FT}(\varphi) = -\sqrt{-1}\,\left(\omega_x\,\mathrm{FT}(\tilde{E}_x) + \omega_y\,\mathrm{FT}(\tilde{E}_y)\right)$  (12)

from which $\hat{\varphi}$ can be obtained by Inverse Fourier Transform (IFT):

$\hat{\varphi} = \mathrm{IFT}\left(-\sqrt{-1}\;\frac{\omega_x\,\mathrm{FT}(\tilde{E}_x) + \omega_y\,\mathrm{FT}(\tilde{E}_y)}{\omega_x^2 + \omega_y^2}\right)$  (13)







The forward and inverse transforms can be implemented using the so-called Discrete Fourier Transform (DFT), in which case the assumed boundary conditions are periodic. Alternatively, one can use the so-called Discrete Sine Transform (DST), which corresponds to the use of the Dirichlet boundary condition (φ=0 at the boundary). One can also use the so-called Discrete Cosine Transform (DCT), corresponding to the use of the Neumann boundary conditions (∇φ·n=0 at the boundary, n being the normal vector at the given boundary location).
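
A minimal Python/numpy sketch of this Fourier-basis integration is shown below; it uses the DFT (hence periodic boundary conditions), and the sign of the imaginary unit follows numpy's transform convention (the opposite convention yields the −√−1 factor of (13)). The input arrays are hypothetical measured field components.

```python
import numpy as np

def integrate_fourier(Ex, Ey, delta=1.0):
    """Recover the potential map by projection onto Fourier basis functions, cf. (12)-(13)."""
    N, M = Ex.shape
    wx = 2.0 * np.pi * np.fft.fftfreq(N, d=delta)[:, None]   # angular frequencies, axis 0
    wy = 2.0 * np.pi * np.fft.fftfreq(M, d=delta)[None, :]   # angular frequencies, axis 1

    num = 1j * (wx * np.fft.fft2(Ex) + wy * np.fft.fft2(Ey))
    den = wx**2 + wy**2
    den[0, 0] = 1.0                      # avoid division by zero at the DC term
    Phi = num / den
    Phi[0, 0] = 0.0                      # the potential is defined up to a constant
    return np.real(np.fft.ifft2(Phi))
```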


Generalizations and Improved Solutions


While working generally well, the Poisson solver and Basis Function techniques can be enhanced further by methods that take into account sharp discontinuities in the data (outliers). For that purpose, the objective function J can be modified to incorporate a different residual error R (in (4), the residual error was R(v)=∥v∥²). One can, for example, use exponents of less than two, including so-called Lp-norm-based objective functions:

$J(\varphi) = \iint R(-\nabla\varphi,\tilde{E})\,dx\,dy = \iint \left\|(-\nabla\varphi)-\tilde{E}\right\|^{\frac{1}{p}}\,dx\,dy, \qquad p \geq 1$  (14)







The residual can also be chosen from the set of functions typically used in so-called M-estimators (a commonly used class of robust estimators). In this case, R can be chosen from among functions such as the so-called Huber, Cauchy, and Tukey functions. Again, the desired result from this modification of the objective function will be to avoid overly smooth reconstructions and to account more accurately for real/physical discontinuities in the datasets. Another way of achieving this is to use anisotropic weighting functions wx and wy in J:

$J(\varphi) = \iint w_x(R_x^{k-1})\left(-\varphi_x-\tilde{E}_x\right)^2 + w_y(R_y^{k-1})\left(-\varphi_y-\tilde{E}_y\right)^2\,dx\,dy$  (15)

where the weight functions depend on the residuals:

$R_x^{k-1} = R(-\varphi_x^{k-1},\tilde{E}_x)$ and $R_y^{k-1} = R(-\varphi_y^{k-1},\tilde{E}_y)$  (15a)

at iteration k−1.


It can be shown that, for the problem of depth reconstruction from photometric stereo images, the use of such anisotropic weights, which can be either binary or continuous, leads to improved results in the depth map recovery process.
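
As a small illustration, the Python sketch below computes one possible (Huber-type, continuous) choice of such weights from the residuals of the previous iterate; the threshold c and the use of numpy's gradient operator are illustrative assumptions, not prescriptions of the method.

```python
import numpy as np

def huber_weights(residual, c=1.0):
    """M-estimator weight: 1 inside the threshold, c/|r| for outlying residuals."""
    r = np.abs(residual)
    return np.where(r <= c, 1.0, c / np.maximum(r, 1e-12))

def anisotropic_weights(phi_prev, Ex, Ey, delta=1.0, c=1.0):
    """Weights w_x, w_y of (15), derived from the residuals (15a) of iterate k-1."""
    phi_x, phi_y = np.gradient(phi_prev, delta)
    Rx = (-phi_x) - Ex
    Ry = (-phi_y) - Ey
    return huber_weights(Rx, c), huber_weights(Ry, c)
```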


In another approach, one can also apply a diffusion tensor D to the vector fields ∇φ and $\tilde{E}$ with the aim of smoothing the data while preserving discontinuities during the process of solving for $\hat{\varphi}$, resulting in the modification of (4) into:

$J(\varphi) = \iint \left\|D(-\nabla\varphi) - D(\tilde{E})\right\|^2\,dx\,dy$  (16)


Finally, regularization techniques can be used to restrict the solution space. This is generally done by adding penalty functions in the formulation of the objective criterion J such as follows:

$J(\varphi) = \iint \left[\left\|(-\nabla\varphi) - \tilde{E}\right\|^2 + \lambda f(\nabla\varphi)\right]dx\,dy$  (17)


The regularization function ƒ(∇φ) can be used to impose a variety of constraints on φ for the purpose of stabilizing the convergence of the iterative solution. It can also be used to incorporate into the optimization process prior knowledge about the sought potential field or other specimen/imaging conditions.


Position Sensitive Detector (PSD)


Using a Position Sensitive Detector (PSD) and measuring a thin, non-magnetic specimen, one obtains (by definition) the vector field image components as components of the center of mass (COM) of the electron intensity distribution $I_D(\vec{k},\vec{r}_p)$ at the detector plane:

$I_x^{COM}(\vec{r}_p) = \iint_{-\infty}^{\infty} k_x\,I_D(\vec{k},\vec{r}_p)\,d^2\vec{k}, \qquad I_y^{COM}(\vec{r}_p) = \iint_{-\infty}^{\infty} k_y\,I_D(\vec{k},\vec{r}_p)\,d^2\vec{k}$  (18)

where $\vec{r}_p$ represents the position of the probe (focused electron beam) impinging upon the specimen, and $\vec{k}=(k_x,k_y)$ are coordinates in the detector plane. The full vector field image can then be formed as:

$I^{COM}(\vec{r}_p) = I_x^{COM}(\vec{r}_p)\cdot\vec{x}_0 + I_y^{COM}(\vec{r}_p)\cdot\vec{y}_0$  (19)

where $\vec{x}_0$ and $\vec{y}_0$ are unit vectors in two perpendicular directions.
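
In discrete form, the COM components of (18) can be computed from a single recorded detector frame as in the short sketch below (Python/numpy); the frame, the pixel coordinate grids and the normalisation by total intensity are illustrative assumptions.

```python
import numpy as np

def com_components(frame, kx=None, ky=None):
    """Centre-of-mass components of one detector frame, cf. (18)."""
    ny, nx = frame.shape
    if kx is None or ky is None:
        # Default pixel coordinates, centred on the detector origin O.
        ky, kx = np.meshgrid(np.arange(ny) - (ny - 1) / 2.0,
                             np.arange(nx) - (nx - 1) / 2.0, indexing="ij")
    total = frame.sum()
    Ix = (kx * frame).sum() / total   # discrete counterpart of the k_x integral
    Iy = (ky * frame).sum() / total
    return Ix, Iy
```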


The electron intensity distribution at the detector is given by:

$I_D(\vec{k},\vec{r}_p) = \left|\mathrm{FT}\!\left\{\psi_{in}(\vec{r}-\vec{r}_p)\,e^{i\varphi(\vec{r})}\right\}(\vec{k})\right|^2$  (20)

where $\psi_{in}(\vec{r}-\vec{r}_p)$ is the impinging electron wave (i.e. the probe) illuminating the specimen at position $\vec{r}_p$, and $e^{i\varphi(\vec{r})}$ is the transmission function of the specimen. The phase $\varphi(\vec{r})$ is proportional to the specimen's inner electrostatic potential field. Imaging $\varphi(\vec{r})$ is the ultimate goal of any electron microscopy imaging technique. Expression (19) can be re-written as:

$I^{COM}(\vec{r}_p) = \frac{1}{2\pi}\left(\left|\psi_{in}(\vec{r})\right|^2 \star \nabla\varphi(\vec{r})\right)(\vec{r}_p) = -\frac{1}{2\pi}\left(\left|\psi_{in}(\vec{r})\right|^2 \star \vec{E}(\vec{r})\right)(\vec{r}_p)$  (21)








where $\vec{E}(\vec{r})=-\nabla\varphi(\vec{r})$ is the inner electric field of the specimen (the negative gradient of its electrostatic potential field), and the operator "★" denotes cross-correlation. It is evident that the obtained vector field image $I^{COM}(\vec{r}_p)$ directly represents the inner electric field $\vec{E}(\vec{r})$ of the specimen. Its components are set forth in (18) above. Next, an integration step in accordance with the current invention is performed, as follows:

$I^{ICOM}(\vec{r}_p) = \int_{l=\vec{r}_{ref}}^{\vec{r}_p} I^{COM}(\vec{r})\cdot d\vec{l}$  (22)








using any arbitrary path l. This arbitrary path is allowed because, in the case of non-magnetic specimens, the only field is the electric field, which is a conservative vector field. Numerically this can be performed in many ways (see above). Analytically it can be worked out by introducing (21) into (22), yielding:











$I^{ICOM}(\vec{r}_p) = \frac{1}{2\pi}\left(\left|\psi_{in}(\vec{r})\right|^2 \star \varphi(\vec{r})\right)(\vec{r}_p)$  (23)







It is clear that, with this proposed integration step, one obtains a scalar field image that directly represents $\varphi(\vec{r})$, as already alluded to above.


EMBODIMENT 3

The linearity assumptions in image formation elucidated above can be represented in the model:

Q=AI  (24)

in which:


I=(I1, I2, . . . , IN)T is the set of iVF images acquired by varying focus value;


Q=(Q1, Q2, . . . , QN)T is a set of source images that are statistically de-correlated and that represent information coming from different depth layers (levels);


A=(a1, a2, . . . , aN)T is a square matrix transforming the original images into so-called principal components.


PCA decomposition obtains the factorization in equation (24) by finding a set of orthogonal components, starting with a search for the one with the highest variance. The first step consists in minimizing the criterion:










$a_1 = \underset{\|a\|=1}{\arg\max}\; E\left\{\left(a^T I\right)^2\right\}$  (25)








The next step is to subtract the found component from the original images, and to find the next layer with highest variance.


At iteration 1<k≤N, we find the kth row of the matrix A by solving:










$a_k = \underset{\|a\|=1}{\arg\max}\; E\left\{\left(a^T\left(I - \sum_{i=1}^{k-1} w_i w_i^T I\right)\right)^2\right\}$  (26)







It can be shown (see, for example, literature references [1] and [3] referred to above) that successive layer separation can be achieved by using so-called Eigenvector Decomposition (EVD) of the covariance matrix ΣI of the acquired images:

$\Sigma_I = E\{I\,I^T\} = EDE^T$  (27)

in which:


E is the orthogonal matrix of eigenvectors of ΣI;


D=diag(d1, . . . , dN) is the diagonal matrix of Eigenvalues.


The principal components can then be obtained as

$Q = E^T I$  (28)

The Eigenvalues are directly related to the variance of the different components:

di=(var(Qi))2  (29)


In cases in which noise plays a significant role, the components with lower weights (Eigenvalues) may be dominated by noise. In such a situation, the inventive method can be limited to the K (K<N) most significant components. The choice to reduce the dimensionality of the image data can be based on the cumulative energy and its ratio to the total energy:









$r = \frac{\sum_{i=1}^{K} d_i}{\sum_{i=1}^{N} d_i}$  (30)








One can choose a limit for the number of employed layers K based on a suitable threshold value t. A common approach in PCA dimensionality reduction is to select the lowest K for which one obtains r≥t. A typical value for t is 0.9 (selecting components that represent 90% of the total energy).


Noise effects can be minimized by recombining several depth layers with a suitable weighting scheme. Additionally, re-weighting and recombination of layers can be useful to obtain an image contrast similar to the original images. In the previously described PCA decomposition, the strongest component (in terms of variance) is commonly associated with the background (matrix) material. Adding this component to depth layers enhances the visual appearance and information content of the obtained image. One can achieve the effect of boosting deeper-lying layers, reducing noise, and rendering proper contrast by re-scaling the independent components by their variances and reconstructing the highest-energy image using the rescaled components, as follows:









$Q = E D^{-\frac{1}{2}} E^T I$  (31)








The skilled artisan will appreciate that other choices for the linear weighting of depth layers can also be used.
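
A compact Python/numpy sketch of (27)-(31) is given below; the input is a hypothetical array of N focus-series images, the eigen-decomposition of their covariance supplies E and D, the cumulative-energy ratio of (30) selects K, and the re-scaled reconstruction follows (31). It is a simplified illustration, not a complete implementation of the method.

```python
import numpy as np

def pca_depth_components(images, t=0.9):
    """Eigenvalue decomposition of the image covariance, cf. (27)-(31)."""
    N, H, W = images.shape
    I = images.reshape(N, -1).astype(float)
    I -= I.mean(axis=1, keepdims=True)              # mean-centre each image

    cov = I @ I.T / I.shape[1]                      # covariance matrix, cf. (27)
    d, E = np.linalg.eigh(cov)                      # Eigenvalues (ascending) and eigenvectors
    d, E = d[::-1], E[:, ::-1]                      # re-order to descending Eigenvalues

    r = np.cumsum(d) / np.sum(d)                    # cumulative energy ratio, eq. (30)
    K = int(np.searchsorted(r, t) + 1)              # smallest K with r >= t

    Q = E.T @ I                                     # principal components, cf. (28)
    d_safe = np.maximum(d, 1e-12)
    Q_rescaled = (E * d_safe**-0.5) @ (E.T @ I)     # E D^(-1/2) E^T I, eq. (31)
    return Q[:K].reshape(K, H, W), Q_rescaled.reshape(N, H, W), d
```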


EMBODIMENT 4

As an alternative to the PCA decomposition set forth above, one can also employ an SS approach based on ICA. In ICA, one assumes a linear model similar to (24). The main difference with PCA is that one minimizes a higher-order statistical independence criterion (higher than the second-order statistics in PCA), such as so-called Mutual Information (MI):










$\mathrm{MI}(Q_1,\ldots,Q_N) = \sum_{i=1}^{N} H(Q_i) - H(Q)$  (32)








With marginal entropies computed as:










$H(Q_i) = -\sum_{k=1}^{S} P(Q_i = q_k)\,\log\left(P(Q_i = q_k)\right)$  (33)








and the joint entropy:










$H(Q) = -\sum_{k=1}^{S} P(Q_1 = q_k,\ldots,Q_N = q_k)\,\log\left(P(Q_1 = q_k,\ldots,Q_N = q_k)\right)$  (34)








in which:

    • P(Q) is the probability distribution of the imaging quantity Q;
    • qk is a possible value for said imaging quantity; and
    • S is the total number of scanned sites on the specimen (e.g. in the case of rectangular images, this is the product of height and width).


Other criteria, such as the so-called Infomax and Negentropy, can also be optimized in ICA decomposition. Iterative methods, such as FastICA, can be employed to efficiently perform the associated depth-layer separation task. Adding more constraints to the factorization task can lead to more accurate reconstruction. If one adds the condition that the sources (layers) render positive signals and that the mixing matrix is also positive, one moves closer to the real physical processes underlying image formation. A layer separation method based on such assumptions may use the so-called Non-Negative Matrix Decomposition (NNMD) technique with iterative algorithms.


For more information, see, for example, literature references [1] and [2] cited above.
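
For the simple case of two quantised images, the mutual-information criterion of (32)-(34) can be evaluated from histograms as in the short Python/numpy sketch below; in practice, dedicated ICA routines such as FastICA optimize related criteria directly, and the bin count used here is an illustrative choice.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete probability distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(Q1, Q2, bins=64):
    """MI(Q1, Q2) = H(Q1) + H(Q2) - H(Q1, Q2), cf. (32)-(34) for N = 2."""
    p1, _ = np.histogram(Q1, bins=bins)
    p2, _ = np.histogram(Q2, bins=bins)
    p12, _, _ = np.histogram2d(Q1.ravel(), Q2.ravel(), bins=bins)
    p1 = p1 / p1.sum()
    p2 = p2 / p2.sum()
    p12 = p12 / p12.sum()
    return entropy(p1) + entropy(p2) - entropy(p12.ravel())
```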

Claims
  • 1. A method of performing sub-surface imaging of a specimen in a charged-particle microscope of a scanning transmission type, comprising: providing a beam of charged particles that is directed from a source along a particle-optical axis through an illuminator so as to irradiate the specimen;providing a detector for detecting a flux of charged particles traversing the specimen;
  • 2. A method according to claim 1, wherein: the specimen is conceptually sub-divided into a series [S1, . . . , Sm] of m slices disposed along and normal to said particle-optical axis; for each value of n, the corresponding image In is expressed as a linear sum Σ_{j=1}^{j=m} i_n(S_j) of discrete sub-images, each associated with a different one of said slices.
  • 3. A method according to claim 2, wherein: for each given focus value Pn, a particular slice SBn is associated with a position of best focus within the specimen; i_n(S_j) is set to zero for each integer j≠Bn, so that In is taken to derive solely from SBn.
  • 4. A method according to claim 3, wherein said beam irradiates the specimen with an opening angle of at least 20 milliradians.
  • 5. A method according to claim 1, wherein said deconvolution is performed using a Source Separation algorithm.
  • 6. A method according to claim 5, wherein said Source Separation algorithm is selected from the group consisting of Independent Component Analysis, Principal Component Analysis, Non-Negative Matrix Factorization, and combinations and hybrids hereof.
  • 7. A charged-particle microscope of a scanning transmission type comprising: a specimen holder, for holding a specimen;a source, for producing a beam of radiation;an illuminator, for directing said beam so as to irradiate said specimen;an imaging system, for receiving a flux of charged particles transmitted through the specimen and directing it onto a detector;deflectors for causing said beam to traverse a scan path relative to a surface of the specimen;
  • 8. The method of claim 2, wherein said deconvolution is performed using a Source Separation algorithm.
  • 9. The method of claim 3, wherein said deconvolution is performed using a Source Separation algorithm.
  • 10. The method of claim 4, wherein said deconvolution is performed using a Source Separation algorithm.
  • 11. The apparatus of claim 7, wherein the controller is further configured to conceptually subdivide the sample into a series [S1, . . . , Sm] of m slices disposed along and normal to said particle-optical axis, where for each value of n, the corresponding image In is expressed as a linear sum Σj=1j=min(Sj) of discrete sub-images, each associated with a different one of said slices.
  • 11. The apparatus of claim 7, wherein the controller is further configured to conceptually subdivide the sample into a series [S1, . . . , Sm] of m slices disposed along and normal to said particle-optical axis, where for each value of n, the corresponding image In is expressed as a linear sum Σ_{j=1}^{j=m} i_n(S_j) of discrete sub-images, each associated with a different one of said slices.
  • 13. The apparatus of claim 7, wherein said illuminator directs said beam to irradiate the specimen with an opening angle of at least 20 milliradians.
  • 14. The apparatus of claim 7, wherein said controller is further configured to perform said deconvolution using a Source Separation algorithm.
  • 15. The apparatus of claim 14, wherein said Source Separation algorithm is selected from the group consisting of Independent Component Analysis, Principal Component Analysis, Non-Negative Matrix Factorization, and combinations and hybrids hereof.
  • 16. The apparatus of claim 11, wherein said controller is further configured to perform said deconvolution using a Source Separation algorithm.
  • 17. The apparatus of claim 16, wherein said Source Separation algorithm is selected from the group consisting of Independent Component Analysis, Principal Component Analysis, Non-Negative Matrix Factorization, and combinations and hybrids hereof.
  • 18. The apparatus of claim 12, wherein said controller is further configured to perform said deconvolution using a Source Separation algorithm.
  • 19. The apparatus of claim 18, wherein said Source Separation algorithm is selected from the group consisting of Independent Component Analysis, Principal Component Analysis, Non-Negative Matrix Factorization, and combinations and hybrids hereof.
Priority Claims (1)
Number Date Country Kind
15163623 Apr 2015 EP regional
US Referenced Citations (4)
Number Name Date Kind
20070194225 Zorn Aug 2007 A1
20130037715 Boughorbel Feb 2013 A1
20130193322 Blackburn Aug 2013 A1
20150243474 Lazic et al. Aug 2015 A1
Non-Patent Literature Citations (2)
Entry
A.J. D'Alfonso, et al., “Depth sectioning in scanning transmission electron microscopy based on core-loss spectroscopy,” Ultramicroscopy, Elsevier, Amsterdam, NL, vol. 108, No. 1, Oct. 25, 2007, pp. 17-28.
Niels De Jonge et al., “Three-Dimensional Scanning Transmission Electron Microscopy of Biological Specimens,” Microscopy and Microanalysis, Springer, New York, NY, U.S., vol. 16, No. 1, Feb. 2010, pp. 54-63.
Related Publications (1)
Number Date Country
20160307729 A1 Oct 2016 US