1. Field of the Invention
The present invention relates to an electron microscope and, more particularly, to a method and an electron microscope for forming an image on the basis of comparison between images obtained by tilting a specimen or an electron beam.
2. Background Art
JP Patent Publication (Kokai) No. 2005-19218 (corresponding to U.S. Pat. No. 7,064,326) describes an example of a technique for constructing a three-dimensional reconstructed image of a specimen by changing the angle at which an electron beam is applied to the specimen and combining images (transmission images) obtained at a plurality of angles. From a three-dimensional image of a specimen constructed by such a method, the three-dimensional structure of the specimen can be analyzed.
JP Patent Publication (Kokai) No. 2005-19218 also describes an example of three-dimensional reconstruction in which a shift in the position of the specimen is corrected by cutting, by means of two-dimensional correlation processing with respect to a reference image, the same visual field out of a series of transmission images obtained while tilting the specimen.
In a transmission electron microscope, if an image for three-dimensional construction is taken in while the tilt angle of the stage or the beam is large, a peripheral blur occurs due to the tilt and, therefore, the degree of matching between an image at a large tilt and an image at a smaller tilt is reduced. If positioning is based on the degree of matching between such images, suitable three-dimensional image construction cannot be performed. There is also a problem that setting the degree of matching itself is difficult.
JP Patent Publication (Kokai) No. 2005-19218 describes a technique for correcting a position shift by searching for the same visual field as that of a reference image, but gives no consideration to the existence of a peripheral blurred region resulting from tilting.
Also, in a visual field including a characteristic contrast in a portion of an image as shown in
The present invention provides an image forming method involving comparison between images, for three-dimensional image construction or the like, and an electron microscope for forming such images, capable of obtaining, with high accuracy or efficiency, the information required for the comparison.
To solve the above-described problems, according to one aspect of the present invention, there is provided an image forming method in which an image is formed on the basis of comparison between a plurality of images obtained by applying an electron beam to a specimen at different tilt angles, the method including: obtaining a first transmission image with the electron beam applied in a first direction and a second transmission image with the electron beam applied in a direction different from the first direction, the second transmission image corresponding to an area on the specimen smaller than that corresponding to the first transmission image and being formed within a region different from a peripheral blurred region resulting from the tilting; and making a search in the first transmission image by using the second transmission image. An apparatus for forming an image in accordance with this image forming method is also provided.
According to the above-described aspect of the present invention, in a method and an apparatus for forming an image on the basis of comparison between images, a high search accuracy can be obtained regardless of the existence of the peripheral blurred region caused by tilting of the beam or the specimen.
A concrete example of a three-dimensional image construction method using a transmission electron microscope will be described as one form of implementation of the present invention. The transmission electron microscope described below has an electron gun, a converging lens through which an electron beam is applied to a specimen, a mechanism for deflecting the electron beam, the specimen, an objective lens which is focused on the specimen, an imaging lens which expands the electron beam transmitted through the specimen, a mechanism for taking in a transmission image as image data, a mechanism for computing a luminance distribution of the image data, a mechanism for making a comparison between the luminance distribution of the image data after changing an objective lens current and the luminance distribution of the image data before changing the objective lens current, and a monitor which displays values for the results of the comparison between the luminance distributions together with the transmission image.
The following techniques are conceivable for computing, for example, the amount of movement between images in the above-described comparison step.
About a Computation Method Used in this Example
The operation of the transmission electron microscope having the above-described construction will be described by using an example of image correlation shown in
Each of the two images f1(m, n) and f2(m, n) is a natural image, where m = 0, 1, 2, . . . , M−1 and n = 0, 1, 2, . . . , N−1.
Discrete Fourier images F1(u, v) and F2(u, v) of f1(m, n) and f2(m, n) are defined by equations (1) and (2), respectively:
F1(u, v) = A(u, v)·e^(jθ(u, v)) (1)
F2(u, v) = B(u, v)·e^(jφ(u, v)) (2)
In these equations, u=0, 1, 2 . . . M−1; v=0, 1, 2 . . . N−1; A(u, v) and B(u, v) are amplitude spectra; and θ(u, v) and φ(u, v) are phase spectra.
In phase correlation, when a parallel movement occurs between the two images, the position of the correlation peak is shifted by a value corresponding to the amount of movement. A method of deriving the amount of movement will be described.
First, assume that the original image f2(m, n) is moved by r′ in the x-direction to give f4(m, n) = f2(m + r′, n). Equation (2) is then transformed into equation (3):
F4(u, v) = B(u, v)·e^(j(φ(u, v)+2πr′u/M)) (3)
If the amplitude spectrum B(u, v) is replaced by a constant, a phase image independent of the image contrast is obtained. The phase image F4′(u, v) of f4 is shown by equation (4):
F4′(u, v) = e^(j(φ(u, v)+2πr′u/M)) (4)
The phase image F1′(u, v) is multiplied by the complex conjugate of F4′(u, v) to obtain a synthetic image H14(u, v) shown by equation (5):
H14(u, v) = F1′(u, v)·F4′*(u, v) = e^(j(θ(u, v)−φ(u, v)−2πr′u/M)) (5)
A correlation intensity image G14(r, s) shown by equation (6) below is obtained by inverse Fourier transform of the synthetic image H14(u, v):
G14(r, s) = (1/(M·N))·Σ_u Σ_v H14(u, v)·e^(j2π(ur/M+vs/N)) (6)
According to equation (6), if a position shift r′ in the x-direction exists between the two images, the position of the peak of the correlation intensity image is shifted by −r′. Since the correlation is computed from phase components, the amount of movement can be computed even if the two images differ in lightness and contrast. If a position shift in the x-direction exists between the two images, a peak occurs at a position of ΔG (pixels) from the center of the correlation intensity image. For example, a correlation intensity image such as shown in
For example, if a shift in the x-direction by 2 pixels exists between the two images, the resulting synthetic image is a wave pattern having two periods. This image is inverse Fourier transformed to obtain a correlation intensity image in which a peak occurs at a position shifted by 2 pixels from the center. This ΔG (pixels) corresponds to the amount of movement on the light receiving surface of a sensor and is converted into the amount of movement Δx on the specimen surface. If the diameter of the light receiving surface of the sensor is L, the magnification of the transmission electron microscope at the light receiving surface is M, and the number of pixels on the light receiving surface of the sensor is Lm, then Δx is as shown by equation (7).
Δx=ΔG(pixels)×L/Lm(pixels)/M (7)
Δx is the amount of movement on the specimen surface between the two images.
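The phase-only correlation of equations (1) to (7) can be illustrated with a minimal NumPy sketch such as the following; the function and parameter names are illustrative, and the fftshift step merely moves the zero-shift peak to the image center so that the offset of the peak corresponds to ΔG.

```python
import numpy as np

def phase_only_correlation(f1, f2):
    """Correlation intensity image of two equally sized grayscale images,
    computed from the phase spectra only (equations (1)-(6))."""
    F1 = np.fft.fft2(f1)
    F2 = np.fft.fft2(f2)
    # Phase images: the amplitude spectra A(u, v) and B(u, v) are discarded.
    H = np.exp(1j * np.angle(F1)) * np.conj(np.exp(1j * np.angle(F2)))
    # Inverse Fourier transform of the synthetic image gives G(r, s);
    # fftshift places the zero-shift peak at the image center.
    return np.fft.fftshift(np.abs(np.fft.ifft2(H)))

def peak_offset(G):
    """Offset ΔG (in pixels) of the correlation peak from the image center."""
    peak = np.unravel_index(np.argmax(G), G.shape)
    center = (G.shape[0] // 2, G.shape[1] // 2)
    return peak[0] - center[0], peak[1] - center[1]

def movement_on_specimen(delta_g_pixels, sensor_diameter_l, sensor_pixels_lm, magnification_m):
    """Equation (7): Δx = ΔG × L / Lm / M, converting the movement on the
    sensor's light receiving surface into the movement on the specimen surface."""
    return delta_g_pixels * sensor_diameter_l / sensor_pixels_lm / magnification_m
```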
Description will be made of the accuracy of the amount of movement between images, the magnification and the angle of rotation. In correlation computation using only phase components, the peak appearing in the correlation intensity is a δ peak because only the phase is mathematically used. For example, if a shift by 1.5 pixels occurs between the two images, the resulting synthetic image is a wave pattern having 1.5 periods. When this image is inverse Fourier transformed, a peak rises at a position shifted by 1.5 pixels from the center of the correlation intensity image. However, no pixel exists at the 1.5-pixel position, and the value of the δ peak is therefore divided between the adjacent first and second pixels.
Then the centroid of pixels having a high degree of matching is taken and the true δ peak position is computed from the divided values. The computation result is obtained with an accuracy of about 1/10 pixel. Also, since the peak of the correlation intensity image is a δ peak, the similarity between the two images is evaluated from the height of the peak of the correlation intensity image. For the image f1(m, n), if the height of the peak is “Peak” (pixels), the degree of matching (%) is shown by equation (8):
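A minimal sketch of the centroid computation just described, assuming a correlation intensity image G such as the one produced by the earlier phase-correlation sketch; the one-pixel window around the integer peak is an illustrative choice.

```python
import numpy as np

def subpixel_peak_position(G, window=1):
    """Estimate the true δ-peak position of a correlation intensity image G to
    sub-pixel accuracy from the centroid of the values around the integer peak."""
    r0, c0 = np.unravel_index(np.argmax(G), G.shape)
    rows = slice(max(r0 - window, 0), min(r0 + window + 1, G.shape[0]))
    cols = slice(max(c0 - window, 0), min(c0 + window + 1, G.shape[1]))
    patch = G[rows, cols]
    idx_r, idx_c = np.indices(patch.shape)
    total = patch.sum()
    # The divided peak values weight the neighboring pixel positions,
    # recovering the true peak position to roughly 1/10 pixel.
    return (rows.start + (idx_r * patch).sum() / total,
            cols.start + (idx_c * patch).sum() / total)
```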
Degree of matching(%)=(Peak)/(m×n)×100 (8)
For example, if the number of pixels to be processed is 128×128 pixels, and if “Peak” is 16384 (pixels), the degree of matching=(16384)/(128×128)×100=100(%).
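For reference, with NumPy's inverse FFT (which already divides by the number of pixels) the m×n division of equation (8) is implicit in the peak height, so under that assumption the degree of matching reduces to the following sketch:

```python
def degree_of_matching(correlation_intensity):
    """Degree of matching (%) from the peak height of the correlation intensity
    image, equation (8). With a normalized inverse FFT a perfect match gives a
    peak height of 1.0, i.e. a degree of matching of 100%."""
    return float(correlation_intensity.max()) * 100.0
```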
An example of template image searching by image template matching is shown below.
Correlation computation shown by equation (9) is performed with respect to all pixels in a designated area of a source image, and the point at which the matching degree coefficient (r) is maximized (at most 1.0) is detected as indicating the amount of movement. The degree of matching is defined as r×100.
f: Source image
g: Template image
n: Number of effective pixels in template area
(1 ≦ n ≦ 65536: 256×256)
If this method is used, a high degree of matching is maintained even under variations in lightness and blurring, because the correlation coefficient computation equation itself normalizes the data.
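Equation (9) itself is not reproduced above; assuming it is the usual zero-mean normalized correlation coefficient, which is consistent with the normalization property just mentioned, a brute-force sketch of the search might look like the following. The names are illustrative, and the double loop is written for clarity rather than speed.

```python
import numpy as np

def correlation_coefficient(f_patch, g_template):
    """Matching degree coefficient r between a source-image patch f and the
    template g of the same shape (assumed zero-mean normalized correlation);
    r = 1.0 for a perfect match."""
    f = f_patch.astype(float) - f_patch.mean()
    g = g_template.astype(float) - g_template.mean()
    denom = np.sqrt((f * f).sum() * (g * g).sum())
    return float((f * g).sum() / denom) if denom > 0 else 0.0

def template_search(source, template):
    """Evaluate r at every position of the designated area of the source image
    and return the best position together with the degree of matching (r × 100)."""
    th, tw = template.shape
    best_r, best_pos = -1.0, (0, 0)
    for y in range(source.shape[0] - th + 1):
        for x in range(source.shape[1] - tw + 1):
            r = correlation_coefficient(source[y:y + th, x:x + tw], template)
            if r > best_r:
                best_r, best_pos = r, (y, x)
    return best_pos, best_r * 100.0
```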
The computation is performed on the area of the template image and the corresponding area of the source image. For a normalized correlation search in accordance with the present invention, three stages are set: a setup stage, a training stage and a search stage. In the setup stage, the template image is cut out of an input image. In the training stage, the cut-out image is registered as the template image for the normalized correlation search. In the search stage, a search is made with the template registered in the training stage. In computation of the amount of movement, the moved position is computed as shown in
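The three stages can be organized as in the sketch below; skimage.feature.match_template is used here only as one readily available implementation of a normalized correlation search, not as the method actually employed by the apparatus, and any equivalent implementation (such as the template_search sketch above) could be substituted.

```python
import numpy as np
from skimage.feature import match_template  # one available NCC implementation

class NormalizedCorrelationSearch:
    """Setup, training and search stages of a normalized correlation search."""

    def setup(self, input_image, top, left, size):
        # Setup stage: cut the template image out of the input image.
        return input_image[top:top + size, left:left + size]

    def train(self, template):
        # Training stage: register the cut-out image as the search template.
        self.template = template

    def search(self, source):
        # Search stage: search the source image with the registered template;
        # returns the matched position and the degree of matching (r × 100).
        response = match_template(source, self.template)
        y, x = np.unravel_index(np.argmax(response), response.shape)
        return (y, x), float(response[y, x]) * 100.0
```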
As shown in
In the above equation, I represents a resampled image, R represents a restored image, and (dx_{i,j}, dy_{i,j}) represents an estimated value of the movement vector in grid (i, j).
An amount of defocus Δf is computed by substituting ΔX, computed by equation (7), into equation (11):
Δf = ΔX/(α×M) − CS×α² (11)
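Equation (11) can be transcribed directly as below; interpreting α as the beam tilt angle, M as the magnification and CS as the spherical aberration coefficient is an assumption, since the symbols are not defined in this excerpt.

```python
def defocus_from_image_movement(delta_x, alpha, magnification_m, cs):
    """Amount of defocus per equation (11): Δf = ΔX / (α × M) − CS × α².
    delta_x is the image movement obtained from equation (7); alpha,
    magnification_m and cs follow the assumed interpretations above."""
    return delta_x / (alpha * magnification_m) - cs * alpha ** 2
```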
One embodiment of the present invention will be described with reference to the flowchart of
Subsequently, the degree of matching ("Image compare") and the number of times correction is to be made ("Correction"), for the prevention of erroneous operation, are input. The specimen stage is then tilted to a designated tilt angle, and a field of view is found.
In “Auto Focus”, a magnification ratio is input as shown in
An image is thereafter taken in by tilting the electron beam at an angle of +α and is recorded in 128×128 pixels as a template (2), as shown in
If the template size is increased, the shift caused at each tilt angle is increased because correction is made at the image center. Therefore the template size may be made as small as possible for an improvement in accuracy.
After automatic focusing, a high-resolution image in 1k×1k or more pixels is taken in and the specimen is tilted. During tilting, the amount of image movement is computed according to a sampling time input, and correction is made with the specimen stage shown in
The computation methods in the present embodiment ensure a high degree of matching even in the case of tilting at ±60°, as shown in
After the completion of specimen tilting, the image is taken in and stored as a template (3). At this time, the final amount of movement and degree of matching are computed from the search area (1) and the template (3) by using the above-described computation methods (C), (D), and (E) adopted in the present embodiment, and positioning is performed with the specimen stage shown
The above-described operation is repeatedly performed to take in images until a set tilt angle is reached. The above-described computation methods (A) and (B) adopted in the present embodiment may be used.
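The acquisition flow of this embodiment might be sketched schematically as follows; the microscope-control calls (acquire_image, tilt_stage, auto_focus, move_stage, search) are hypothetical placeholders for the actual stage, focusing, take-in and search mechanisms and are not part of the original description.

```python
def crop_center(image, size):
    """Cut a size × size template out of the image center."""
    cy, cx = image.shape[0] // 2, image.shape[1] // 2
    return image[cy - size // 2:cy + size // 2, cx - size // 2:cx + size // 2]

def acquire_tilt_series(scope, template_size, tilt_limit, tilt_step, match_threshold=80.0):
    """Schematic tilt-series loop: tilt, refocus, compute the field-of-view
    shift against search area (1) and reposition before each recording."""
    images = []
    search_area = scope.acquire_image()              # search area (1)
    angle = 0.0
    while angle < tilt_limit:
        angle += tilt_step
        scope.tilt_stage(angle)                      # tilt the specimen stage
        scope.auto_focus()                           # beam-tilt autofocus as described above
        template = crop_center(scope.acquire_image(), template_size)   # template (3)
        shift, matching = scope.search(search_area, template)
        if matching >= match_threshold:
            scope.move_stage(*shift)                 # positioning with the specimen stage
        images.append(scope.acquire_image())         # high-resolution image for reconstruction
    return images
```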
One embodiment of the present invention will be described with reference to the flowchart of
A template size, a tilt angle and a tilt step are first input, and the specimen stage is tilted. After specimen tilting, a field of view is found and an image is taken in and recorded as search area (1).
The specimen is tilted, and an image of the template size centered on the input image center is taken in and stored as template (2).
The amount of movement is computed by using the above-described computation methods (C), (D), and (E) adopted in the present embodiment. If the degree of matching is equal to or lower than 80%, the template size is increased and the amount of movement and the degree of matching are again computed.
If the degree of matching exceeds 80%, correction is made by using image shifting shown in
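The adaptive template-size logic of this flow can be sketched as follows; the doubling step, the size limit and the search callback are illustrative assumptions, with search(source, template) expected to return the shift and the degree of matching, for example the template_search sketch shown earlier.

```python
def movement_with_adaptive_template(search_area, image, initial_size, max_size, search):
    """If the degree of matching is 80% or lower, enlarge the central template
    and recompute the amount of movement, as in the flow described above."""
    size = initial_size
    shift, matching = (0, 0), 0.0
    while size <= max_size:
        cy, cx = image.shape[0] // 2, image.shape[1] // 2
        # Cut the template out of the image center at the current size.
        template = image[cy - size // 2:cy + size // 2, cx - size // 2:cx + size // 2]
        shift, matching = search(search_area, template)
        if matching > 80.0:
            break                                    # correct by image shifting
        size *= 2                                    # enlarge the template and retry
    return shift, matching
```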
One embodiment of the present invention will be described with reference to the flowchart of
Description will next be made of the flow shown in
If the specimen is tilted as shown in
The specimen stage is thereafter tilted to a designated tilt angle to find a field of view. Subsequently, an image is taken in and recorded as search area (1). The specimen is thereafter tilted, an image is taken in, and a central image is recorded as template (2). Correction is made by means of image shifting shown in
As described above, a first transmission image of a specimen is obtained with an electron beam applied in a first direction, and a second transmission image is obtained with the electron beam applied in a direction different from the first direction. The second transmission image corresponds to an area on the specimen narrower than that corresponding to the first transmission image, and is formed within a region different from a peripheral blurred region resulting from tilting. A search using the second transmission image is made in the first transmission image. In this way, a high search accuracy can be achieved regardless of the existence of the peripheral blurred region caused by tilting of the beam or the specimen.
Foreign application priority data: Japanese Patent Application No. 2007-065979, filed March 2007.