The present invention relates to a charged particle beam apparatus.
Along with advances in semiconductor devices in recent years, device structures have become increasingly complicated. For semiconductor manufacturers who manufacture advanced devices, how rapidly and efficiently a fabrication process for such devices can be developed is an important issue. In semiconductor fabrication-process development, it is essential to optimize the conditions for processing a material deposited and laminated on a silicon (Si) wafer into the shape as designed, and observation of the processed pattern shape in cross section is therefore necessary.
Since the processed pattern of an advanced semiconductor device has a fine structure at the nanometer level, high-resolution charged particle beam apparatuses such as a transmission electron microscope (TEM) and a scanning electron microscope (SEM) are used for observation of the processed pattern shape in cross section.
At present, observation of the processed shape of a wafer cross section using a charged particle beam apparatus is left to manual operation, and requires considerable time and effort for the observation-field-of-view search and the imaging operation. Therefore, in order to enhance the speed and efficiency of semiconductor fabrication-process development, an apparatus is demanded which automates the observation operation as much as possible and acquires a large amount of observation data in a rapid and personnel-saving manner.
Furthermore, also for observation targets other than semiconductors, such as metallic materials and biological samples, demand for an apparatus which can acquire a large amount of observation data in a rapid and personnel-saving manner has increased because of advances in materials informatics technology and the like.
In view of the above issue, for example, PTL 1 discloses a method of automatically correcting the position of an observation field of view in SEM observation by using a pattern matching technique. This method reduces the workload when an operator adjusts the field of view to an observation target position. However, this disclosure considers only detection of a target pattern in a state in which the substrate is seen from the upper surface side (top view).
In observation using a charged particle beam apparatus, in order to grasp the observation position, a shape or structure which exists on the sample and can serve as a marker is found, and the field of view is shifted to the observation position with reference to that marker. However, when the target observation part is a sample cross section, the positional relation between the final observation part and the marker is difficult to grasp from a top-view image. Therefore, with the method disclosed in PTL 1, a situation may occur in which a pattern serving as the marker cannot be detected, or the observation field of view is set to a wrong position. Furthermore, since visually searching for the field of view takes time, the demand to acquire a large amount of observation data in a rapid and personnel-saving manner cannot be satisfied.
An object of the present disclosure is to provide a charged particle beam apparatus having a function to automatically recognize a marker pattern in observation of a sample cross section utilizing the charged particle beam apparatus.
In field-of-view search on a charged particle beam image, in many cases a marker can be found more easily in a “tilt image”, in which the sample is observed in an inclined state such that both the upper surface and the cross section of the sample can be seen, than in an observation image in which the sample cross section directly faces the charged particle beam (direct-front image). For example, in a case in which the observed sample is a semiconductor wafer on which a pattern is formed, or a coupon cut from such a wafer, and assuming that the in-plane directions of the sample are the X and Y directions and the thickness direction of the sample is the Z direction, the pattern scale in the X and Y directions is much larger than the processed-pattern scale in the Z direction. Therefore, it is highly likely that a large-scale shape or structure exists, and a marker with high visibility can easily be found even in a low-magnification image. Thus, in the field-of-view search for the observation part in a low-magnification image, use of a tilt image rather than a top-view image is considered to make identification of the field of view easier than in the conventional technique.
An exemplary charged particle beam apparatus of the disclosure automatically recognizes a marker which serves as a reference for an observation position, by a machine learning model or template matching using a tilt image, shifts the field of view to a given observation position with reference to the marker, and performs cross-section observation. The machine learning model is generated based on actual observation images, or based on a three-dimensional model which is generated from two-dimensional layout data such as design data and which includes a cross section. In detail, an exemplary charged particle beam apparatus of the disclosure includes an imaging device which acquires image data of a sample at a given magnification by radiating a charged particle beam to the sample, a computer system which executes, by using the image data, arithmetic processing for the field-of-view search at the time of acquisition of the image data, and a display unit on which a graphical user interface (GUI) for inputting setting parameters for the field-of-view search is displayed. The imaging device is provided with a sample stage configured to be capable of transferring the sample by at least two drive shafts, and of shifting the imaging field of view in accordance with positional information of the sample obtained by the computer system. The computer system includes a classifier which outputs, in response to input of image data of a tilt image in which the sample is imaged in an inclined state with respect to the charged particle beam, positional information of one or more feature parts existing in the tilt image. The classifier is trained in advance by using training data in which the input is image data of a tilt image and the output is positional information of the feature part. The computer system executes, on new tilt image data inputted into the classifier, processing to output positional information of the feature part.
According to the embodiments of the present disclosure, a charged particle beam apparatus can be achieved which largely reduces the time and effort required for the field-of-view search in observation of a sample cross section, and which is capable of automatically capturing the cross-section image.
Hereinafter, embodiments of the present disclosure are described in detail. However, the disclosed contents of the respective embodiments are not limited to the described examples; configurations in which the disclosure of the respective embodiments and the elemental technologies suggested therein are suitably combined within the knowledge of a person skilled in the art are also included in the scope of the embodiments.
A first embodiment realizes, in a charged particle beam apparatus whose imaging device is a scanning electron microscope (SEM), a function to automatically recognize the field of view for an observation target object, and proposes a method for automatic observation of a sample by using the function.
The electron gun 11 includes a radiation source which emits an electron beam 12 accelerated at a given acceleration voltage. The emitted electron beam 12 is concentrated by the condenser lens 13 and the objective lens 15, and is radiated on a sample 20. The deflection coils 14 deflect the electron beam 12 by a magnetic field or an electric field, and thus a surface of the sample 20 is scanned by the electron beam 12.
The sample stage 17 has a function to transfer the sample 20 along a given drive shaft, or to incline or rotate the sample 20 about a given drive shaft, in order to shift the imaging field of view of the imaging device 10, and is provided with an actuator, such as a motor or a piezoelectric element, for this function.
The secondary electron detector 16 is, for example, an Everhart-Thornley (E-T) detector including a scintillator, a light guide, and a photomultiplier tube, or a semiconductor detector, and detects a secondary electron 100 emitted from the sample 20 irradiated with the electron beam 12. The detection signal outputted from the secondary electron detector 16 is transmitted to the image forming unit 31. Note that, in addition to the secondary electron detector 16, a backscattered electron detector which detects a backscattered electron, and a transmission electron detector which detects a transmission electron, may be provided.
The image forming unit 31 includes an AD converter which converts the detection signal transmitted from the secondary electron detector 16 into a digital signal, an arithmetic unit which forms an observation image of the sample 20 based on the digital signal outputted from the AD converter, and the like (none of which are illustrated). As the arithmetic unit, for example, an MPU (Micro Processing Unit), a GPU (Graphics Processing Unit), or the like is used. The observation image formed by the image forming unit 31 is sent to the display unit 35 to be displayed thereon, and to the computer system 32 to be subjected to various processing.
The computer system 32 includes an interface 900 which inputs and outputs data and commands from and to an external device, a processor or CPU (Central Processing Unit) 901 which executes various arithmetic processing on provided information, a memory 902, and a storage 903.
The storage 903 includes, for example, an HDD (Hard Disk Drive), an SSD (Solid State Drive), or the like, and stores software 904 which constitutes the field-of-view search tool according to this embodiment, and a training data DB (database) 44. The software (field-of-view search tool) 904 of this embodiment may include, as functional blocks, for example, a feature classifier 45 which extracts the marker pattern 23 for the field-of-view search from inputted image data, and an image processing unit 34 which calculates, from the position of the detected marker pattern on the image, the position coordinates of the marker pattern 23 with reference to positional information of the sample stage 17.
The memory 902 illustrated in
The feature classifier 45 is a program implementing a machine learning model, and is trained by using image data of the marker pattern 23 stored in the training data DB 44 as training data. When new image data is inputted into the trained feature classifier 45, the position of the learned marker pattern is extracted from the image data, and the center coordinates of the marker pattern in the new image data are outputted. The outputted center coordinates are used for identification of an ROI (Region Of Interest) at the time of the field-of-view search, and various positional information calculated based on the center coordinates is transmitted to the controller 33 to be used for drive control of the sample stage 17.
The image processing unit 34 executes processing such as edge-line detection of the wafer surface based on image processing, and calculation and evaluation of image sharpness when focus adjustment, astigmatism correction, and the like are automatically executed on a cross-section image in which the sample cross section directly faces the field of view.
The controller 33 is an arithmetic unit which controls the respective units, and processes and transmits data generated by the respective units. The controller 33 is, for example, a CPU, an MPU, or the like. The input unit 36 is a device to input an observation condition for observation of the sample 20, and to input a command for execution, stop, etc., of the observation. The input unit 36 may be configured by, for example, a keyboard, a mouse, a touch panel, a liquid crystal display, or a combination thereof. The display unit 35 displays a GUI (Graphical User Interface) which constitutes an operation screen for an operator, and a captured image.
Next, with reference to
On the upper surface 22, a shape and a structure larger than the above-mentioned fine structure (that is, the marker pattern 23, which can be used as a marker during the field-of-view search) are formed. As the marker pattern 23, for example, a characteristic shape marker for identification of a chip processing region on the wafer, a processed pattern including label information, or the like may be used.
Orthogonal axes XYZ illustrated at an upper right in
When the fine shape on the cut surface 21 is observed, the electron beam 12 is radiated substantially perpendicularly to the cut surface 21, and the region of a cross-section-observation field of view 24 is observed. However, in many cases the manually cut surface 21 is not fully orthogonal to the upper surface 22, and the attachment angle when the operator places the sample 20 on the sample stage 17 is not necessarily the same every time.
Therefore, as angle adjustment shafts for making the cut surface 21 orthogonal to the electron beam 12, the first tilt shaft 61 and the second tilt shaft 62 are provided on the sample stage 17. The first tilt shaft 61 is a drive shaft for rotating the sample 20 in the Y-Z plane. Since the longitudinal direction of the cut surface 21 is the X-axis direction, the rotational angle of the first tilt shaft 61 is adjusted when the tilt angle at which the sample 20 is inclined and observed from an oblique direction (that is, the tilt angle of a tilt image) is adjusted. Similarly, the second tilt shaft 62 is a drive shaft for rotating the sample 20 in the X-Z plane. When the field of view is at the direct-front position with respect to the cut surface 21, adjusting the rotational angle of the second tilt shaft 62 rotates the image about an axis in the up-and-down direction passing through the center of the field of view.
A configuration of the sample stage 17 is described with reference to
Next, a training procedure in the feature classifier 45 of this embodiment is described with reference to
In order to automatically execute the field-of-view search according to this embodiment, the feature classifier 45 which detects the marker pattern 23 needs to be built in advance of the sample observation. The flowchart in
First, the sample 20 is placed on the sample stage 17 in the charged particle beam apparatus illustrated in
The acquired tilt image is displayed on the main screen 401. The tilt image includes the cut surface 21, the upper surface 22, and the marker pattern 23. At Step S302 in
Returning to the discussion of
At Step S307, the operator inputs the acquired training data to the feature classifier 45 to perform training. A configuration example of a GUI screen used by the operator at the time of training is illustrated in
In an upper row of the field-of-view search tool screen illustrated in
In a case where the batch input mode in units of a folder is executed, first, the input button 412 is pressed to specify a folder storing the training data. The name of the specified folder is shown in a folder name display field 413. To change the specified folder name, a clear button 414 is pressed. A training start button 415 is pressed to start training of the model. Next to the training start button 415, a status display field 416 indicating the status is displayed. When “Done” is displayed in the status display field 416, the training step at Step S307 ends.
In a case where the training is performed in the individual input mode, the operator presses the input button 412 to specify the folder, and then selects the “Folder” tab 417 to activate the lower-row screen. When the lower-row screen is activated, the training data 43 stored in the specified folder is displayed as thumbnails on the training data display screen 418. The operator inputs checkmarks as appropriate into the checkmark input fields 420 displayed in the thumbnail images. Note that the selection can be cancelled by pressing a clear button 423. Reference numeral 421 denotes a checkmark input field after a checkmark has been inputted. The displayed thumbnail images can be changed by operating the scroll buttons 419 displayed on both ends of the screen.
When the individual selection of the training data ends, the operator presses an enter button 422 to confirm the input result. After the confirmation, a training start button 424 is pressed to start the training. Next to the training start button 424, a status display field 425 indicating the status is displayed, and when “Done” is displayed in the status display field 425, the training step in the individual input mode ends.
When the training has been carried out to some extent, a check is executed to determine whether the training is complete (Step S308). The check can be executed by inputting, into the feature classifier 45, an image of a pattern having a known dimension, causing the feature classifier 45 to estimate the dimension, and determining whether the percentage of correct answers is above a given threshold. If the percentage of correct answers is below the given threshold, it is determined whether additional training data needs to be created (in other words, whether unused training data is stored in the storage 903) (Step S309). If there is a stock of training data suitable for the training, the process returns to Step S307 to execute additional training of the feature classifier 45. If no stock of training data exists, creation of new training data is determined to be necessary, and the process returns to Step S301 to execute again the flow in
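For illustration only, the check at Step S308 can be sketched as follows in Python. The sketch recasts the check as comparing a predicted marker center against a known answer position; the file layout, the tolerance, and the `detect` callable (a stand-in for the trained classifier's inference entry point) are assumptions, not part of the disclosure.

```python
# Hypothetical completeness check (Step S308): each validation image
# "*.png" is assumed to have a "*.json" label {"cx": ..., "cy": ...}
# holding the known answer; `detect` stands in for the trained
# feature classifier's inference call.
import json
from pathlib import Path

import cv2
import numpy as np

def correct_answer_rate(val_dir, detect, tol_px=20.0):
    n_total = n_correct = 0
    for label_path in Path(val_dir).glob("*.json"):
        truth = json.loads(label_path.read_text())
        image = cv2.imread(str(label_path.with_suffix(".png")), cv2.IMREAD_GRAYSCALE)
        pred = detect(image)  # (cx, cy) in pixels, or None if not found
        n_total += 1
        if pred is not None and np.hypot(pred[0] - truth["cx"],
                                         pred[1] - truth["cy"]) <= tol_px:
            n_correct += 1
    return n_correct / max(n_total, 1)

# Training is judged complete when the rate clears a preset threshold:
# if correct_answer_rate("validation/", classifier_detect) >= 0.9: ...
```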
As a method for the machine learning, an object detection algorithm using a deep neural network (DNN), or a cascade classifier, may be used. When a cascade classifier is used, both positive images containing the marker pattern 23 and negative images not containing the marker pattern 23 are set as the training data 43, and Step S307 in
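As one concrete possibility, a cascade classifier trained offline (for example with OpenCV's `opencv_traincascade` tool) could be applied to a tilt image as in the following minimal sketch; the file names are illustrative, and the disclosure does not prescribe this specific API.

```python
# Minimal marker detection sketch with an OpenCV cascade classifier that
# was trained on positive images containing the marker pattern 23 and
# negative images not containing it.
import cv2

cascade = cv2.CascadeClassifier("marker_cascade.xml")           # trained offline
tilt_image = cv2.imread("tilt_image.png", cv2.IMREAD_GRAYSCALE)

# detectMultiScale returns one (x, y, w, h) box per detected marker.
boxes = cascade.detectMultiScale(tilt_image, scaleFactor=1.1, minNeighbors=5)

# Convert each box to the center coordinates used to identify the ROI.
centers = [(x + w / 2.0, y + h / 2.0) for (x, y, w, h) in boxes]
print(centers)
```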
Next, a field-of-view search sequence after the completion of the training is described with reference to
The condition setting step for the field-of-view search at Step S502 includes, as illustrated in
When the condition setting for the field-of-view search ends, a test run of the field-of-view search is executed (Step S503). The test run is a step in which a tilt image of the sample 20 is acquired at the magnification set in advance, and the output of the center coordinates of the marker pattern from the feature classifier 45 is confirmed. Depending on the number of target imaging parts and the set magnification, one or a plurality of tilt images of the sample cross section may need to be captured. In the case in which a plurality of images are captured, the computer system 32 automatically transfers the sample stage 17 in the X-axis direction by a certain distance and acquires an image, and this transfer and acquisition are repeated. The feature classifier 45 is operated on the plurality of tilt images acquired in this manner, and detects the marker pattern 23. The detection result is displayed on the main GUI 400 in a form in which the marker indicating the ROI and the acquired image are superimposed on each other. The operator confirms from the obtained output result whether the ROI of the marker pattern included in the image has been correctly extracted.
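The step-and-repeat acquisition described above might be driven by a loop of the following form; `stage`, `acquire_image`, and `detect_marker_centers` are stand-ins for the instrument-control and inference calls, which are not disclosed.

```python
# Hedged sketch of the step-and-repeat test run (Step S503): step the
# stage in X by a fixed pitch, capture a tilt image at each stop, and
# run the feature classifier on every frame.
def run_fov_search(stage, acquire_image, detect_marker_centers,
                   x_start_mm, x_end_mm, pitch_mm):
    rois = []
    x = x_start_mm
    while x <= x_end_mm:
        stage.move_x(x)                  # assumed stage-control call
        frame = acquire_image()          # tilt image at the preset magnification
        for (cx, cy) in detect_marker_centers(frame):
            rois.append({"stage_x_mm": x, "center_px": (cx, cy)})
        x += pitch_mm
    return rois                          # numbered ROI1, ROI2, ... downstream
```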
If a problem occurs as a result of the test run, processing to solve the malfunction is performed at Step S504-2. Possible problems include, for example, a case in which the marker pattern 23 within the field of view is not found and the center coordinates of the marker pattern 23 cannot be outputted even by operation of the feature classifier 45, and a case in which a region other than the marker pattern 23 is erroneously recognized as the marker pattern 23 and incorrect center coordinates are outputted. When a problem related to the imaging device or the whole apparatus, such as a failure in the optical system, occurs, the test run processing is temporarily suspended.
In a case in which the test run goes well without a problem occurring, condition setting for image auto capture (i.e., image acquisition at a high magnification) is performed (Step S505). Note that the test run at Step S503 and the malfunction confirmation step at Step S504 may be omitted; after the condition setting for the field-of-view search (Step S502), the process may proceed to the condition setting step for the image auto capture (Step S505) to start the actual run without the test run.
Step S505 includes, as illustrated in
Here, a GUI used during the execution of the flowcharts in
The main GUI 400 is the same one as described with reference to
The GUI 600 illustrated in
The stage condition setting panel 604 is a setting field for registering, to the computer system 32, information on each of XYZ coordinates of the sample stage 17, a first tilt angle (the rotational angle about the first tilt shaft 61 in
The registration can be cancelled by pressing a clear button (Clear) 613. The operation of the registration button 612 and the clear button 613 is common to the following description. Note that an execution button (Run) 614 is a button for instructing the computer system 32 to start the field-of-view search; when this button is pressed, Step S503 (test run) in
When an adjustment button 609 is pressed to finely adjust the field of view of the tilt image, each of the XYZ coordinates or the tilt angles of the sample stage 17 is changed in the positive or negative direction. The image after the change is displayed on the main screen 401 in real time, and the operator, while looking at the image, registers the condition of the sample stage 17 at which the field of view determined to be most appropriate is obtained. Note that when the field of view is adjusted such that the direct-front image of the cut surface 21 is shown on the main screen 401 in the state where “High magnification capture” is selected, the stage condition in this state is the stage condition for the direct-front condition. By this condition being registered to the computer system 32, Step S505-2 in
Moreover, the setting and registration of the stage direct-front condition may be performed through automatic adjustment based on a given algorithm instead of the manual adjustment described above. As the algorithm for adjusting the tilt angle of the sample 20, an algorithm may be adopted which acquires tilt images at various tilt angles and calculates the tilt angle by numerical calculation based on the edge line of the wafer extracted from the images.
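One way to extract such an edge line, shown purely as a sketch, is a Hough-transform line fit on the tilt image; the Canny and Hough parameters are assumptions, and the mapping from the measured edge angle to a stage tilt command is instrument-specific.

```python
# Illustrative wafer edge-line angle estimate: detect edges, fit straight
# line segments, and take the longest segment as the wafer surface edge.
import cv2
import numpy as np

def edge_line_angle_deg(image_gray):
    edges = cv2.Canny(image_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                            minLineLength=image_gray.shape[1] // 2, maxLineGap=10)
    if lines is None:
        return None
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
```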
The magnification setting panel 605 is a setting field for setting the final magnification for the high-magnification imaging, and the halfway magnifications used when the magnification is increased from the imaging magnification during the field-of-view search to the final magnification. In the field to the right of “Current”, the imaging magnification of the tilt image currently displayed on the main screen 401 is displayed. The field to the right of “Final” in the middle row is for setting the final magnification, which is selected by an adjustment button similar to that in the stage condition setting panel 604. “Step*” in the lower row is a setting field for setting the step number of a halfway magnification counted from the imaging magnification of the tilt image; when the adjustment button at the right side of the setting field is operated, a number is displayed in the “*” field in the manner of “Step1”, “Step2”, and so on. Further to the right of that adjustment button, a magnification setting field for setting the imaging magnification at each step is displayed. The halfway magnification is set by operating the adjustment button in the same manner as described above. After the setting ends, when the registration button 612 is pressed in the same manner as described above, the set final magnification and halfway magnifications are registered to the computer system 32.
The ROI size setting panel 606 is a setting field for registering the size of the ROI. When the number of pixels is set by using the ROI size setting panel 606, a range of the set number of pixels in the up-and-down and left-and-right directions, centered on the center coordinates of the ROI outputted by the feature classifier 45, is imaged. When the appropriate number of pixels is set by operating an adjustment button and the registration button 612 is then pressed, the set number of pixels is registered to the computer system 32.
The final observation position setting panel 607 is a setting field for setting the center position of the field of view at the time of imaging at the final magnification, as a distance from the marker pattern 23. The main screen 401 shows the tilt image of the sample cross section together with the ROI 25 for the marker pattern setting. The operator drags and drops the selection tool 410 to a desired final observation position 426 by operating the pointer 409, whereby relative positional information on the final observation position with respect to the marker pattern 23 can be set. On the final observation position setting panel 607, the distance from the center coordinates of the ROI 25 in the X direction is displayed in a “Left” or “Right” display field, and the distance in the Z direction is displayed in an “Above” or “Below” display field.
In a case in which a plurality of final observation positions are to be set, the drag-and-drop of the selection tool 410 is repeated. Moreover, as will be described later, a keyboard, a numeric keypad, etc., provided in the input unit 36 may be used to directly input a numerical value into each of the “Left”, “Right”, “Above”, and “Below” display fields. This method is highly convenient for the operator in a case in which, for example, a plurality of images are captured at a determined interval (for example, at equal intervals) from a reference position apart from the marker pattern 23 by a given distance.
The setting of the optical conditions for the field-of-view search and the high-magnification image capturing is performed by using the GUI 400, which is the main GUI. In the state where the GUI 600 is displayed, when a button related to an optical condition in the selection panel 404 or the operation panel 405 of the GUI 400 is pressed, a screen for setting the optical condition is displayed. For example, when the “Scan” button 427 is pressed, a scanning speed setting panel 608 is displayed, and the operator operates a setting thumb 611 while looking at an indicator 610, thus setting the scanning speed for the imaging to an appropriate value. When the registration button 612 is pressed after the setting, the set scanning speed is registered to the computer system 32. By registering to the computer system 32, in the procedure described above, the optical conditions such as the acceleration voltage and the beam current value while switching between “FOV search” and “High magnification capture”, Step S502-1 in
Note that, in the description with reference to
Returning to
At this step, a tilt image of a given region of the sample cross section is captured. Image data obtained from the captured image is sequentially inputted into the feature classifier 45, and center-coordinate data of the marker pattern is outputted. The outputted center-coordinate data is given a serial number such as ROI1, ROI2, etc., and is stored in the storage 903 together with the meta information described above.
When the field-of-view search ends, the controller 33 calculates a transfer amount of the sample stage 17 based on the current stage positional information and the center-coordinate data of each ROI, and field-of-view shift to the position of the marker pattern 23 is executed (Step S507). After the field-of-view shift, a high-magnification image is acquired at the final observation position in accordance with the high-magnification image auto capture condition set at Step S505 (Step S508). Below, details of Step S508 are described with reference to
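As a sketch under assumed conventions (stage axes parallel to the image axes, known pixel size at the search magnification), the transfer-amount calculation at Step S507 reduces to converting the ROI center's offset from the image center into physical units:

```python
# Hypothetical pixel-to-stage conversion for the field-of-view shift.
def stage_target_mm(stage_pos_mm, roi_center_px, image_size_px, pixel_size_um):
    cx, cy = roi_center_px
    w, h = image_size_px
    dx_mm = (cx - w / 2.0) * pixel_size_um * 1e-3   # offset from image center
    dy_mm = (cy - h / 2.0) * pixel_size_um * 1e-3
    return stage_pos_mm[0] + dx_mm, stage_pos_mm[1] + dy_mm

# Example: ROI detected at (800, 300) in a 1024x768 image with 0.5 um pixels,
# while the stage sits at (10.0 mm, 5.0 mm).
print(stage_target_mm((10.0, 5.0), (800, 300), (1024, 768), 0.5))
```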
After the field of view is shifted to the position of the marker pattern 23 at Step S507 in
Through the execution of Steps S508-1 and S508-2, the observation field of view shifts to the final observation position and also reaches the direct-front condition with respect to the sample cross section; the magnification of the field of view is then increased (Step S508-3). The magnification is increased in accordance with the halfway magnifications set in the magnification setting panel 605 in FIG. 6A.
At Step S508-4, the computer system 32 executes focus adjustment and astigmatism correction processing. As an algorithm for the correction processing, for example, a method may be used in which images are acquired while the current values of the objective lens and the astigmatism correction coil are swept within a given range, the image sharpness is evaluated by applying a fast Fourier transform (FFT) or wavelet transform to the acquired images, and the setting condition with a high score is derived. Correction processing for other aberrations may be included as necessary. At Step S508-5, the computer system 32 executes imaging at the increased magnification, and image data of the current field of view is acquired.
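A minimal sketch of an FFT-based sharpness score for such a sweep is given below; the cutoff ratio and the sweep driver are assumptions, not values from the disclosure.

```python
# The fraction of spectral energy above a cutoff spatial frequency rises
# as the image comes into focus; the sweep keeps the setting with the
# highest score.
import numpy as np

def sharpness_score(image_gray, cutoff_ratio=0.1):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image_gray.astype(float))))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)        # radial frequency map
    high = spectrum[r > cutoff_ratio * min(h, w)].sum()
    return high / spectrum.sum()

# best_setting = max(sweep_settings, key=lambda s: sharpness_score(acquire(s)))
```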
At Step S508-6, the computer system 32 executes a first field-of-view deviation correction. The first field-of-view deviation correction of this embodiment includes correction processing of the horizontal line of the image, and correction processing of the deviation of the field-of-view center position. However, other necessary field-of-view deviation correction processing may be executed according to the magnification.
First, the correction processing of the horizontal line is described. As illustrated in
Next, the correction processing of deviation of the field-of-view center position is described. Immediately after the field-of-view shift at Step S508-1, although the position set in the final observation position setting panel 607 in
Note that, in a case in which the computer system 32 is provided with another feature classifier which has been trained by using images obtained in the magnification increasing process as training data, the center-coordinate data of the field of view may be obtained by directly inputting the image data acquired at Step S508-5 into that feature classifier, without using the template matching.
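The template matching mentioned above can be sketched with normalized cross-correlation as follows; rescaling the template to the new magnification, which would be needed in practice, is omitted for brevity.

```python
# Locate a patch cut from the pre-zoom image inside the post-zoom image.
import cv2

def locate_template(image_gray, template_gray):
    result = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    th, tw = template_gray.shape
    center = (max_loc[0] + tw / 2.0, max_loc[1] + th / 2.0)
    return center, max_val    # matched center in pixels, and the match score
```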
Moreover, the field-of-view deviation correction at this step may be executed not by adjustment of the sample stage 17 but by image shift. In this case, the adjustment amount of the field-of-view deviation is converted by the computer system 32 into control information for the scanning region of the electron beam in the X and Y directions, and sent to the controller 33. The controller 33 controls the deflection coils 14 based on the received control information, and executes the adjustment of the field-of-view deviation by the image shift.
At Step S508-7, it is determined whether the adjustment amount in the first field-of-view deviation correction executed at Step S508-6 is appropriate. In
At Step S508-8, a second field-of-view deviation correction is executed. In the second field-of-view deviation correction, a deficiency or excess in the adjustment amount of the rotational angle θ, or in the adjustment amount of the X-drive shaft or the Y-drive shaft, is obtained by image processing, and the sample stage 17 is re-adjusted. In the case in which the original field-of-view center is not positioned at the field-of-view center, at Step S508-8, the image before the execution of the specified-distance transfer and the image after the execution of the transfer are compared, the actual transfer distance is measured, and correction is executed by adding the deficient amount. In a case in which a target object for the above-described processing does not exist in the field of view, the magnification is changed to a lower magnification, and the above-described processing is executed after an image-classifiable target object is brought within the field of view.
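As one illustrative way to measure the actually realized transfer, the images before and after the stage transfer can be registered by phase correlation; treating the commanded-minus-measured shift as the deficiency to re-apply is an assumption of this sketch, not a stated requirement of the disclosure.

```python
# Sub-pixel shift measurement between the pre- and post-transfer images.
import cv2
import numpy as np

def measured_shift_px(before_gray, after_gray):
    a = np.float32(before_gray)
    b = np.float32(after_gray)
    (dx, dy), response = cv2.phaseCorrelate(a, b)
    return dx, dy, response   # shift in pixels and a confidence value

# deficiency_px = commanded_shift_px - measured shift; re-issued as a correction.
```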
Note that the second field-of-view deviation correction of this step may be executed not by adjustment of the sample stage 17, but by image shift. The first field-of-view deviation correction processing and the second field-of-view deviation correction processing described above may comprehensively be referred to as “fine adjustment”.
At Step S508-9, it is determined whether the current imaging magnification matches the final observation magnification set in the magnification setting panel 605
At Step S508-10, the optical condition is changed to the optical condition for the high-magnification image capturing which is set in the GUI 400 in
At Step S509, it is determined, based on the serial number of the ROI imaged at Step S508, whether imaging at the final observation positions ends for all the ROIs extracted in the field-of-view search. If the imaging does not end, the processing returns to Step S507 to execute the field-of-view shift to the next ROI. If the imaging ends, the automatic capturing processing of this embodiment ends (Step S510).
During the execution of the automatic capturing processing, a status of the automatic capturing processing is displayed on the GUI illustrated in
The high-magnification image displayed on the main screen 401 is a cross-section image at a high magnification at which the shape of the processed pattern 26 formed on the wafer can be confirmed; the magnification is ×200k, as shown in the magnification adjustment field 403. Moreover, in order to display the imaging part of the high-magnification image in an emphasized manner, the marker pattern and a marker 428 indicating the final imaging position are displayed on the sub screen 407.
As described above, as a result of building the feature classifier 45 for the marker pattern 23 by using a set of 250 pieces of training data and a cascade classifier, and implementing in the apparatus the flow presented in
Next, a scanning electron microscope according to a second embodiment is described. The second embodiment proposes a scanning electron microscope provided with a sample stage 17 having a structure different from that in the first embodiment. The target sample and the methods of building the automatic capturing flow and the field-of-view recognizing function are the same as those in the first embodiment; only the configuration of the sample stage 17 is different.
Next, a scanning electron microscope according to a third embodiment is described. The third embodiment describes a sequence which can shorten the entire execution time compared to the automatic capturing sequence discussed in the first embodiment. The entire configuration of the charged particle beam apparatus which executes the automatic capturing sequence, and the GUI used by the operator according to the third embodiment, are similar to those in the first embodiment, and redundant description is thus omitted below. Mainly the different points are described while referring to
With reference to a flowchart in
In the flowchart in
At Step S508-6-4, processing is executed to determine whether to skip the processing from Step S508-6-5 to Step S508-6-8. The determination criterion is whether the current magnification is higher or lower than a threshold set in advance (it may instead be determined whether the current magnification is at or above the threshold). This is because, when the magnification is low, the amount of field-of-view center deviation on the image attributed to the magnification increase is small (the possibility that the original field-of-view center deviates out of the field of view is low). Empirically, it is known that the amount of field-of-view center deviation grows large enough for the center to deviate out of the field of view when the magnification is approximately ×50k to ×100k. Moreover, when the magnification is increased from the imaging magnification for the field-of-view search to the final observation magnification, it is desirable to increase the magnification in a phased manner to avoid occurrence of field-of-view shift. The magnification setting for the halfway magnification in the GUI in
If the current magnification is determined at Step S508-6-4 to be higher than the threshold, the deviation correction of the field-of-view center at Step S508-6-5 is executed. This processing is similar to the field-of-view center deviation correction included in the “first field-of-view deviation correction” at Step S508-6 in
Thereafter, the determination step at Step S508-9 and the optical condition changing step at S508-10 are executed, and then the target high-magnification image is acquired at Step S508-11. The processing at these steps is already described in the first embodiment, and thus description thereof is omitted.
According to the automatic capturing sequence of the third embodiment, the focus adjustment and astigmatism correction in the magnification increasing process, as well as the first field-of-view deviation correction and the second field-of-view deviation correction, can be omitted depending on the situation. Since the time-consuming optical adjustment, such as the focus adjustment and astigmatism correction, and the time-consuming image processing, such as the first and second field-of-view deviation corrections, can be skipped, the total time required for the series of observation steps can be reduced. Furthermore, the reduction effect increases as the number of images captured on the sample increases. Note that the flow may be configured by adopting only one of the determination steps at Steps S508-6-2 and S508-6-4 (that is, a flow in which only the focus adjustment and astigmatism correction, or only the first and second field-of-view deviation corrections, are skipped); in this case as well, a reduction in the total required time may be obtained. As described above, according to the automatic capturing sequence of the third embodiment, a charged particle beam apparatus with a large capturing throughput for cross-section images is achievable.
Next, a scanning electron microscope according to a fourth embodiment is described. The scanning electron microscope of the fourth embodiment is different from the embodiments described above in that layout data such as design data is used to build the feature classifier 45 for the marker pattern 23.
An external server 905 is connected to the computer system 32 directly or through a network. In
With reference to
Although the layout data 40 is device design data such as CAD data, a two-dimensional image generated based on the design data, a photograph observed by using an optical microscope, or the like may be used instead. In this layout diagram, the region indicated by reference numeral 70 corresponds to the side remaining as the observation sample.
Next, the operator sets, on the layout data 40 by using the pointer 409 and the selection tool 410, the region of interest (ROI) 25 including the marker pattern 23 to be automatically detected during the observation. After the setting, the operator presses the registration button 432 to register the region of interest (ROI) 25 to the computer system 32.
After the setting of the cut line 71 and the ROI 25, in response to an instruction by the operator, the cross-section 3D image data generation unit 41 starts processing to generate a 3D geometric image 72 (pseudo tilt image) from the layout data 40. The instruction by the operator is given, for example, by pressing a button on a GUI (not illustrated) to start the generation processing of training data from the layout data. When the start button is pressed, the processor 901 executes a program of the cross-section 3D image data generation unit 41, and a three-dimensional (3D) model is built on the computer system 32 based on the layout data corresponding to the ROI 25. The processor 901 further changes the tilt angle and the observation scale of the 3D model in an imaginary space, and a plurality of 3D geometric images 72 with different observation conditions are thereby automatically generated. A second row in
The processor 901 automatically executes, by using the coordinate information of the ROI 25 specified on the main GUI 400, image cutout processing in which a region including the ROI is cut out of the 3D geometric image 72, and generates a 3D tilt image 73. A third row in
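For illustration, the rendering performed by the cross-section 3D image data generation unit 41 can be approximated by extruding the 2D layout into a height map and projecting it with a simple shear; real geometry handling and SEM contrast formation would be far richer than this sketch, and all names and values below are assumptions.

```python
# Crude pseudo tilt-view rendering from a 2D layout mask.
import numpy as np

def render_tilt(height_map_nm, tilt_deg, px_nm=1.0):
    h, w = height_map_nm.shape
    shear = np.sin(np.radians(tilt_deg)) / px_nm
    out_h = h + int(np.ceil(shear * height_map_nm.max())) + 1
    image = np.zeros((out_h, w))
    for y in range(h):
        for x in range(w):
            y_proj = y + int(round(shear * height_map_nm[y, x]))
            image[y_proj, x] = max(image[y_proj, x], height_map_nm[y, x])
    return image   # brightness encodes height, a stand-in for SEM contrast

# Layout mask (1 = patterned region) extruded to a 50 nm-tall structure
# and viewed at a 30-degree tilt.
mask = np.zeros((64, 64))
mask[20:40, 10:50] = 1.0
tilt_view = render_tilt(mask * 50.0, tilt_deg=30.0)
```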
A method for generating the similar image 74 is described in detail. As illustrated in
The image style transfer model 46 includes, for example, a neural network. In this case, learning can be carried out by using a data set for image recognition model learning, without using actual sample images or the layout data 40. In a case in which a data set of structure images similar to the target generated image and the corresponding real images (actual SEM observation images corresponding to the 3D tilt images 73) exists, the data set may be used for learning of the image style transfer model. In such a case, since the image style transfer model can directly output the similar image from the 3D tilt image, a style image is not required when the similar image 74 is generated from the 3D tilt image 73. Furthermore, instead of the image style transfer model 46, an electron beam simulator may be used to generate the similar image 74.
Note that the cross-section 3D image data generation unit 41 and the similar image generation unit 42 may be operated in the external server 905 as illustrated in
The similar image 74 generated in the procedure described above is stored in the training data DB 44 in the storage 903. In the case in which the training of the feature classifier 45 is performed by using the generated similar image 74, in the same procedure as the operation described with reference to
The charged particle beam apparatus described in the fourth embodiment does not require the operator to prepare the training data 43 by capturing a large amount of SEM images in order to build the feature classifier 45. The training data is automatically registered to the training data DB 44 merely by the operator setting the cut line 71 and the region of interest (ROI) 25 with reference to the layout data 40, and the feature classifier 45 can thus be built. This means that the charged particle beam apparatus of this embodiment can substantially omit the process from Step S301 to Step S306 in
Next, a scanning electron microscope according to a fifth embodiment is described. The fifth embodiment describes a method of field-of-view search (automatic detection of the marker pattern 23) using the similar image 74 generated by the method of the fourth embodiment. In this embodiment, pattern matching, rather than the machine-learning-based feature classifier 45, is used for detection of the marker pattern 23.
The 3D tilt image 73 and the similar image 74 are generated from the layout data 40 by a method similar to that in the fourth embodiment. In conventional pattern matching, as mentioned in the problem discussed earlier, acquiring a large amount of actual observation images under various conditions is required to obtain reference images for the pattern matching, which poses a practical problem. In contrast, in the method of this embodiment, the similar images 74 generated from the layout data 40 can be outputted mechanically, and thus a large amount of reference images for pattern matching can be prepared without human effort. Therefore, pattern matching from a tilt image, which has conventionally been difficult, becomes achievable.
Next, a scanning electron microscope according to a sixth embodiment is described. The sixth embodiment proposes a configuration which supplementarily supports part of the observation operation performed by an operator. In the sixth embodiment, the feature classifier 45 for the marker pattern 23 is built by any one of the methods described in the first to fourth embodiments. The operator then causes the feature classifier 45 to operate during SEM observation so as to detect the marker pattern 23 in real time. At the same time, a function to display the region of interest (ROI) 25 including the detected marker pattern 23 on the GUI 400 of the display unit 35 is provided.
Next, a scanning electron microscope according to a seventh embodiment is described. The seventh embodiment provides a configuration example which achieves automatic alignment of layout data such as design data, and a sample position in actual SEM observation. Note that the feature classifier 45 of the computer system 32 is already trained by using a real image or a pseudo SEM image.
Operation on the layout data in the seventh embodiment is described with reference to
An operator reads out, on the GUI, the desired layout data 40 in the same procedure as described in the fourth embodiment, and sets, on the GUI, the cut line 71 of the sample and the position (X coordinate) of the marker pattern 23 as the detection target object with reference to the layout data 40. The position setting of the cut line 71 and the marker pattern 23 is performed by using the pointer 409 and the selection tool 410 similarly to
Through this operation, the X coordinate of each marker pattern 23 on the layout data is registered to the computer system 32, and an X-coordinate list 77 on the layout data is obtained. Next, tilt images at a low magnification are acquired while the sample stage 17 is transferred in the X-axis direction in a step-and-repeat manner. This imaging processing is performed from a position where the left end of the sample 20 in the X direction is brought within the field of view, to a position where the right end thereof is brought within the field of view. During this process, as illustrated in
In the method described above, as illustrated in
Next, with reference to
Image data of the captured tilt images is sequentially stored in the storage 903, and when the imaging from one end of the sample 20 to the other in the X direction ends, the imaging step ends. Next, at Step S502-5, the processor 901 executes the comparison processing of the X-coordinate list 77 and the X-coordinate list 78 illustrated
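Assuming the two lists are already matched one-to-one in order, the comparison processing can be sketched as a least-squares linear fit whose slope and intercept form the transformation data between the layout space and the real space; the numerical values below are purely illustrative.

```python
# Associate layout-space X coordinates (list 77) with detected real-space
# X coordinates (list 78) by a 1D affine fit.
import numpy as np

layout_x = np.array([120.0, 340.0, 560.0, 780.0])   # from the layout data (um)
stage_x = np.array([2.115, 2.337, 2.554, 2.776])    # detected markers (mm)

scale, offset = np.polyfit(layout_x, stage_x, 1)    # slope and intercept

def layout_to_stage_x(x_layout_um):
    return scale * x_layout_um + offset             # transformation data

print(layout_to_stage_x(450.0))   # predicted stage X for a layout position
```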
Moreover, the transformation data which associates the layout space with the real-space coordinates may be utilized not only when searching for the marker pattern, but also during the field-of-view shift to the final observation position. Since coordinate deviation does not occur in the layout data even when it is displayed in an enlarged manner, by the layout data being displayed on the GUI in an enlarged manner, the operator can precisely specify the final observation position on the layout data at a resolution corresponding to the final observation magnification. Furthermore, the transformation data also allows the computer system 32 to precisely grasp the coordinates of the final observation position in the real space, and thus field-of-view deviation attributed to the magnification increase is eliminated in principle (in practice, field-of-view deviation occurs due to error contained in the transformation data). The same effect applies to the case in which the feature classifier 45 is built by using real images as the training data.
Note that although in the above description the coordinate data of the marker pattern in the real space is used to calculate a stage transfer amount, pattern pitch information of the marker pattern may be used to calculate the stage transfer amount.
Next, a scanning electron microscope according to an eighth embodiment is described. The eighth embodiment describes a configuration example in which the disclosure is applied to observation not of a semiconductor sample but of a metallic material structure. In a cross section of a metallic material structure, feature structures such as a peritectic structure and a eutectic structure appear, and an operator performs field-of-view observation and detailed analysis, such as elementary analysis, while focusing on these feature structures. In the following description of this embodiment, the premised apparatus configuration is the same as that in the embodiments described above, as illustrated in
A peritectic structure illustrated in
When image data of a metallic structure surface acquired by the imaging device is inputted into the built feature classifier A 45a and the feature classifier B 45b, as schematically illustrated in
The GUI in
After completion of the setting described above, when a start button 1704 is pressed, the automatic execution flow of the elementary analysis starts: newly captured image data of the metallic material structure is inputted into the feature classifier A 45a and the feature classifier B 45b, and the peritectic structure and the eutectic structure contained in the metallic material structure are extracted as ROIs together with information on their center coordinates. Based on the image data of the extracted ROIs, positional information of the A phase, the B phase, the C phase, the D phase, and the E phase is obtained in units of pixels by using contrast. For detection of the positional information of each phase, a machine learning method such as semantic segmentation may be used.
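As a sketch only, the per-pixel phase labeling by contrast could be as simple as bucketing gray levels with fixed thresholds; the thresholds and the A-to-E phase mapping are illustrative, and the embodiment may instead use semantic segmentation as noted above.

```python
# Label pixels of an extracted ROI into phases by gray-level thresholds.
import numpy as np

def label_phases(roi_gray, thresholds=(60, 120, 180, 220)):
    labels = np.digitize(roi_gray, bins=list(thresholds))   # 0..4 -> A..E
    return {phase: np.argwhere(labels == i)                 # pixel coords per phase
            for i, phase in enumerate("ABCDE")}
```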
Then, with the computer system 32 and the controller 33 working together, the field-of-view shifts to the respective ROIs are automatically executed in order, and imaging processing at a high magnification (a second magnification higher than the first magnification), and elementary analysis processing (EDX mapping, EDS, etc.) for the focus field of view specified by the operator on the GUI, are automatically executed. Such an embodiment is particularly effective in development in which a large amount of data is acquired highly efficiently, as in materials informatics. Note that, differently from the other embodiments described above, in this embodiment a tilt image is not required for the training of the feature classifier, and the feature classifier can also be trained by using images captured from above the sample.
Next, a scanning electron microscope according to a ninth embodiment is described. The ninth embodiment proposes an example in which the technology of the disclosure is applied to a charged particle beam apparatus provided with an FIB-SEM (Focused Ion Beam-Scanning Electron Microscope) as an imaging device.
Other than the embodiments described above, configuration examples having the following features are also suitable.
1. A charged particle beam apparatus having a function to execute a field-of-view search test run based on instruction by an operator, and a recording medium storing a program to implement the function.
2. A charged particle beam apparatus having a function to detect a problem caused during execution of field-of-view search and automatically stop the field-of-view search, and a recording medium storing a program to implement the function.
3. The charged particle beam apparatus of the above section, having a function to resume a flow of the field-of-view search which is automatically stopped, from a process where the flow is stopped, and a recording medium storing a program to implement the function.
4. A charged particle beam apparatus including a GUI which displays design data of an observation target sample; a computer system which executes processing to match, based on real image data acquired by an imaging device, coordinate information of an ROI set by an operator on the design data with coordinate information of the observation target sample in a real space, and which calculates a sample stage transfer amount based on the coordinate information of the ROI in the real space obtained through the matching; and a stage which moves based on the calculated stage transfer amount; and a recording medium storing a program to implement the processing.
5. The charged particle beam apparatus described above, having a function to execute field-of-view shift in the real space based on coordinate information of a final observation position set by the operator on the design data, and a recording medium storing a program to implement the function.
6. A charged particle beam apparatus comprising an imaging device and a computer system storing a first feature classifier trained by using real image data including a first shape and a second feature classifier trained by using real image data including a second shape, wherein elementary analysis is automatically executed by radiating a charged particle beam to regions on a sample corresponding to a first coordinate and a second coordinate which are outputted when new image data is inputted into the first feature classifier and the second feature classifier, and a recording medium storing a program to implement the automatic execution processing.
7. The charged particle beam apparatus described above, including a GUI to set a type of elementary analysis to be executed to each of the first shape and the second shape, wherein the elementary analysis set on the GUI is automatically executed by a charged particle beam being radiated to the regions on the sample corresponding to the first coordinate and the second coordinate, and a recording medium storing a program to implement the automatic execution processing.
Note that the present invention is not limited to the above-described embodiments, but includes various modifications.
The embodiments are described in detail in order to make the invention easy to understand, and the invention is not necessarily limited to embodiments including all the described configurations. Moreover, a part of the configuration of one embodiment may be replaced by the configuration of another embodiment, and the configuration of one embodiment may be added to the configuration of another embodiment. Furthermore, for a part of the configuration of each embodiment, addition, deletion, and replacement of another configuration are possible. Moreover, each of the configurations, functions, processing units, processing means, etc., described above may be implemented by hardware, for example by designing a part or all of them as an integrated circuit.