The present invention relates to a technology for an analysis system that analyzes a sample.
Semiconductor devices have been miniaturized. In particular, in semiconductor devices with 3-dimensional structures, higher density and larger capacity have been achieved in combination with stacking technologies, and stacking structures have come to include more layers. To manage the dimensions of multi-layered pattern structures, it is necessary to evaluate the patterns in each layer. To improve the quality of semiconductor devices, formation of vertical and uniform patterns is indispensable, and quick and highly accurate evaluation of pattern shapes is required.
As existing evaluation schemes, there is a scheme of obtaining depth information of patterns by observing a sample while shaping it with a focused ion beam (FIB), and a scheme of obtaining depth information of a pattern by observing a mechanically polished sample in a charged particle beam apparatus and predicting the slope angle of the polished surface.
For example, WO2016/002341A (PTL 1) discloses a technology for processing a sample into a tapered shape using an FIB, acquiring a top surface observation image of a formed slope surface using an electron microscope, and calculating a depth of a pattern based on a start position of a downward slope surface, a scanning distance of an electron beam, and a slope angle.
JP2010-97768A (PTL 2) discloses a technology for using a measurement optical system in combination with a charged particle beam apparatus, although the application target is not a semiconductor device and the measurement optical system is not used as a means for obtaining depth information.
A means using an FIB can evaluate a pattern with high accuracy, but there are problems in that the processing region is narrow, the evaluation takes time, and it is difficult to acquire data again. A means for predicting the slope angle of a polished surface enables rapid evaluation, but there is a problem in that the accuracy of the evaluation value of a pattern is low since depth information of the pattern cannot be calculated other than by prediction. Further, the means using an FIB also has a problem in that observation throughput decreases as future stacking structures gain more layers, since the layers need to be identified one by one.
A technology capable of acquiring depth information of a multi-layered structure of a sample quickly and accurately without using an FIB is required.
An object of the present invention is to provide an analysis system technology capable of acquiring depth information of a sample quickly and accurately.
A representative embodiment of the present invention has the following configuration. An analysis system according to an embodiment is an analysis system including a computer system. The computer system acquires 3-dimensional coordinate information of a top surface shape of a sample having a stacking structure, the 3-dimensional coordinate information being measured by a top surface shape measurement apparatus, acquires 2-dimensional coordinate information based on a photographic image of the sample imaged by a charged particle beam apparatus, performs coordinate conversion for association between the 3-dimensional coordinate information of the top surface shape measurement apparatus and the 2-dimensional coordinate information of the charged particle beam apparatus, acquires association data as a result of the conversion, and acquires depth information on a coordinate system of the charged particle beam apparatus based on the association data.
According to a representative embodiment of the present invention, it is possible to acquire depth information of a sample quickly and accurately with an analysis system technology. Other problems, configurations, effects, and the like will be clarified in the description of the embodiments.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the drawings, the same reference numerals are given to the same units in principle, and repeated description thereof will be omitted. In the drawings, to facilitate understanding of the present invention, the expressions of constituent elements do not necessarily represent actual positions, sizes, shapes, ranges, or the like.
In the description of a process performed by a program, the program, a function, a processing unit, or the like serves as the main entity in some cases. The main hardware entity for the program, the function, the processing unit, or the like is a processor, or a controller, an apparatus, a computer, a system, or the like including the processor. The computer performs a process with the processor in accordance with a program read onto a memory while appropriately using resources such as a memory and a communication interface. Accordingly, a predetermined function, a processing unit, or the like is implemented. The processor is configured as a semiconductor device such as a CPU or a GPU, that is, as a device or a circuit capable of performing predetermined calculation. The process is not limited to software program processing and can also be implemented by a dedicated circuit. As the dedicated circuit, an FPGA, an ASIC, a CPLD, or the like can be applied.
The program may be installed in advance as data in a target computer or may be distributed to be installed as data from a program source to a target computer. The program source may be a program distribution server on a communication network or may be a non-transitory computer-readable storage medium (for example, a memory card or a magnetic disk). The program may include a plurality of modules. A computer system may include a plurality of apparatuses. The computer system may be configured as a cloud computing system or the like.
Various types of data or information have structures such as a table or a list, but the present invention is not limited thereto. Expressions such as identification information, an identifier, an ID, a name, and a number can be substituted for one another. In the description, the X, Y, and Z directions or the like are used in some cases. These directions (in other words, axes) intersect each other, typically orthogonally. In particular, the Z direction is assumed to be a direction corresponding to top and bottom, a height, a depth, a thickness, or the like.
An analysis system according to an embodiment is a system capable of acquiring depth information of a multi-layered structure included in a sample. The analysis system according to the embodiment includes a computer system that includes a processor. The analysis system includes hardware and software executing the following steps. In other words, an analysis method corresponding to the analysis system is a method in which the computer system or the like executes the following steps.
(a) A step of processing the top surface of a sample such as a semiconductor device having a multi-layered structure (in other words, a stacking structure) into, for example, a bowl-like or tapered shape using a polishing technology such as a dimple grinder or ion milling, thereby forming a slope surface serving as an observation surface.
(b) A step of acquiring, as a height map, 3-dimensional coordinate information of the top surface shape of the sample viewed in the top surface direction by irradiating the sample top surface having the slope surface with light from the top surface direction using a top surface shape measurement apparatus such as an optical interference microscope (CSI).
(c) A step of observing and imaging the sample in a charged particle beam apparatus such as a scanning electron microscope (SEM) and acquiring 2-dimensional coordinate information based on detected particles such as secondary electrons or reflected electrons (in other words, based on a photographic image).
(d) A step in which the computer system acquires the 3-dimensional coordinate information of the top surface shape measurement apparatus and the 2-dimensional coordinate information of the charged particle beam apparatus.
(e) A step in which the computer system performs coordinate conversion for association between the 3-dimensional coordinate information of the top surface shape measurement apparatus and the 2-dimensional coordinate information of the charged particle beam apparatus and generates association data. The coordinate conversion is performed by a projective conversion method using, for example, markers. In the association data, height depth information in the top surface shape measurement apparatus is associated with the 2-dimensional coordinate information of a stage coordinate system in the charged particle beam apparatus.
(f) A step in which the computer system, on the charged particle beam apparatus side, corrects, with respect to the height map acquired in the foregoing (d), the height in consideration of a slight slope or the like of the sample occurring when the height map is acquired in the foregoing (b).
(g) A step in which a user designates an observation target position of the sample by using a depth or the like from the sample top surface on a graphical user interface (GUI) screen provided by the computer system.
(h) A step of moving, in the charged particle beam apparatus, to the observation target position corresponding to the designated depth or the like on the 3-dimensional coordinate information of the association data based on the designation of the observation target position, focusing the charged particle beam using an objective lens, and capturing an observation image.
The computer system 1 includes a processor 201, a memory 202, a storage device 203, a communication interface 204, and an input/output interface 205, which are connected to each other via a bus. The storage device 203 stores various types of data and programs. The programs include analysis software 210 to be described below. A display device 206 and an input device 207 are connected to the input/output interface 205. The communication interface 204 is connected to a LAN 9. An external device such as an MES 5 stores design data and the like of the sample.
In step S1, the user produces a slope surface (a surface in which a stacking structure is exposed) in a part of the top surface of a sample (for example, a 3-dimensional NAND device) with the polishing apparatus 4. The user may acquire design data of the sample with the computer system 1 from the MES 5 or the like. The design data is data including 3-dimensional data of the stacking structure in a coordinate system of the sample.
In step S2, the user sets markers for the coordinate conversion on the sample. In a specific example, markers of four points are formed in advance on a sample holder holding the sample so as to be commonly used in the CSI and the SEM. The present invention is not limited thereto. For example, when there is a pattern of three points or more which are distinguishable and identifiable on the sample top surface (near a slope surface), the points may be used as markers. The markers may be formed on the sample top surface by the polishing apparatus 4 or another apparatus.
The user sets the sample to which the markers are attached in the top surface shape measurement apparatus (CSI), and measures and acquires a height map (3-dimensional coordinate information) by the CSI with an entire region of the slope surface of the sample as a target. The height map also includes 3-dimensional coordinate information of the markers. The computer system 1 acquires the height map (3-dimensional coordinate information).
On the CSI side, accuracy of data may be improved by further performing surface correction to be described below.
In step S3, the user sets the sample to which the markers are attached in the charged particle beam apparatus (SEM) and captures 2-dimensional coordinate information (in other words, an image) with the SEM. The computer system 1 acquires the 2-dimensional coordinate information.
In step S4, the user performs, with the computer system 1 (the analysis software 210), a coordinate conversion process from the height map (3-dimensional coordinate information) of the CSI to the 2-dimensional coordinate information of the SEM. The coordinate conversion process is implemented by a projective conversion method using the position coordinate information of the four markers. The computer system 1 acquires CSI-SEM association data as a result of the coordinate conversion.
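For reference, the following is a minimal sketch, in Python with NumPy, of how such a projective conversion coefficient can be computed from four marker correspondences and applied to a CSI coordinate. The function names and the example coordinates are illustrative assumptions, not values or software of the embodiment.

```python
import numpy as np

def fit_projective_transform(src_xy, dst_XY):
    """Solve the 8 coefficients of a projective transform (homography) from
    four point correspondences (x, y) -> (X, Y), with the last element fixed to 1."""
    A, b = [], []
    for (x, y), (X, Y) in zip(src_xy, dst_XY):
        A.append([x, y, 1, 0, 0, 0, -x * X, -y * X])
        b.append(X)
        A.append([0, 0, 0, x, y, 1, -x * Y, -y * Y])
        b.append(Y)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def csi_to_sem(H, x, y):
    """Map a CSI coordinate (x, y) onto the SEM stage coordinate system."""
    X, Y, w = H @ np.array([x, y, 1.0])
    return X / w, Y / w

# Illustrative marker coordinates on the CSI side and the SEM side (not real data).
csi_markers = [(0.0, 0.0), (10.0, 0.2), (10.1, 8.0), (0.2, 8.1)]
sem_markers = [(2.0, 3.0), (52.0, 4.5), (53.0, 44.0), (3.5, 45.0)]
H = fit_projective_transform(csi_markers, sem_markers)
print(csi_to_sem(H, 5.0, 4.0))   # one CSI point mapped to SEM (X, Y)
```

An affine transformation from three points could be used instead, but the four-point projective form also absorbs perspective-like distortion between the two coordinate systems, which is consistent with the 4-point alignment described later.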
In step S5, the computer system 1 further performs a height correction process on the association data. The height correction is a correction that takes into account a slight slope or the like of the sample occurring when the height map is acquired with the CSI. The height correction process is performed so that designated reference height positions (which can be designated, for example, by the user on a GUI screen) come to the same height. The computer system 1 acquires association data after the height correction as a result of the height correction process.
In step S6, the user designates an observation target position by using a depth or the like from the top surface of the sample based on the association data after the height correction on the GUI screen of the computer system 1.
In step S7, the computer system 1 controls an imaging condition or the like of the SEM based on the observation target position designated by the user, causes the SEM to capture an observation image, and displays the observation image on the GUI screen. The computer system 1 analyzes the observation image and displays analysis result information on the GUI screen.
On the SEM coordinate system (X, Y, Z), 2-dimensional coordinate information (X, Y) is acquired. The 2-dimensional coordinate information includes the 2-dimensional coordinate information of the positions of the four markers 7 (M1, M2, M3, M4). Alternatively, on the same SEM coordinate system, the 2-dimensional coordinate information of the four markers 7, which are located outside the sample 6, is acquired in association with the 2-dimensional coordinate information of the sample 6 itself. For example, the 2-dimensional coordinates of the marker M1 are (X1, Y1). Pattern matching of the markers 7 is used as a technical means for detecting the positions of the markers 7 from an image (2-dimensional coordinate information) of the SEM.
In the height map (the 3-dimensional coordinate information, data 302), the computer system 1 can obtain the height depth information (zi) in the CSI corresponding to the position 401 (Xi, Yi) in the SEM. The height depth information (zi) can be converted into height information (Zi) on the Z axis of the SEM coordinate system by the association and conversion. That is, in the association data 304 or the association data 305, 3-dimensional coordinate information (X, Y, Z) on the SEM coordinate system can be configured.
The height depth information (zi) can be converted into depth information (for example, denoted by reference sign D) from the top surface in the coordinate system (a coordinate system known from the design data 301 or the like) of the sample 6. That is, in the association data 304 or the association data 305, 3-dimensional coordinate information (X, Y, D) on a sample coordinate system associated with the SEM coordinate system can also be configured. In addition, 3-dimensional coordinate information (X, Y, L) in a format in which the number of layers (for example, denoted by reference sign L) is associated with a depth D can also be configured.
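The association data described above can be pictured as a set of parallel arrays keyed by the SEM stage coordinates. The following is a minimal sketch of such a structure; the class and field names are assumptions for illustration, not the embodiment's actual data format.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class AssociationData:
    """One record per measured point of the slope surface, expressed on the
    SEM stage coordinate system after the coordinate conversion."""
    X: np.ndarray   # SEM stage X coordinates
    Y: np.ndarray   # SEM stage Y coordinates
    Z: np.ndarray   # height on the SEM coordinate system (after height correction)
    D: np.ndarray   # depth from the sample top surface
    L: np.ndarray   # layer number corresponding to the depth D

# Illustrative content for three points of the slope surface (arbitrary values).
assoc = AssociationData(
    X=np.array([10.0, 10.5, 11.0]),
    Y=np.array([20.0, 20.0, 20.0]),
    Z=np.array([-0.10, -0.25, -0.40]),
    D=np.array([0.10, 0.25, 0.40]),
    L=np.array([1, 2, 3]),
)
```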
When the function (A) is used, in step 411, the user first designates and inputs a depth (or a layer or the like) from the top surface of the sample as an observation target position on the GUI screen of the computer system 1. The computer system 1 applies the designated depth information to the CSI-SEM association data 305. In step 412, the computer system 1 acquires, as an output from the association data 305, the 3-dimensional coordinate information (X, Y, Z) in the SEM corresponding to the designated depth. The acquired 3-dimensional coordinate information (X, Y, Z) may include other candidates (other positions at the same layer and the same height), and one or more 3-dimensional positions can be acquired automatically. In step 413, the computer system 1 controls the imaging condition or the like of the SEM in accordance with the acquired 3-dimensional position, captures an observation image in the SEM, and displays the observation image on the GUI screen.
When the function (B) is used, the user first views an observation image captured with the SEM on the GUI screen of the computer system 1 and, in step 421, designates and inputs the 2-dimensional coordinates of a point of interest in the observation image. The computer system 1 applies the designated point (the 2-dimensional coordinates) to the CSI-SEM association data 305. In step 422, the computer system 1 acquires, as an output from the association data 305, the 3-dimensional coordinate information (X, Y, Z) in the SEM corresponding to the designated point. In step 423, the computer system 1 calculates the depth position or the like from the top surface of the sample by conversion from the acquired 3-dimensional position and displays the depth position on the GUI screen.
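Both functions can be expressed as simple lookups on the association data: function (A) searches for coordinates whose depth matches the designated value within a tolerance, and function (B) returns the depth and layer of the record nearest to a designated (X, Y). The following is a minimal sketch assuming the array-style association data sketched above; the tolerance value and function names are illustrative assumptions.

```python
import numpy as np

def candidates_at_depth(assoc, depth, tol=0.02):
    """Function (A): return SEM (X, Y, Z) candidates at the designated depth."""
    mask = np.abs(assoc.D - depth) <= tol
    return np.column_stack((assoc.X[mask], assoc.Y[mask], assoc.Z[mask]))

def depth_at_point(assoc, X, Y):
    """Function (B): return depth and layer number at the designated SEM point."""
    i = np.argmin((assoc.X - X) ** 2 + (assoc.Y - Y) ** 2)   # nearest measured point
    return assoc.D[i], assoc.L[i]

# Usage with the illustrative 'assoc' object sketched above:
# positions = candidates_at_depth(assoc, depth=0.25)   # -> candidate (X, Y, Z) rows
# d, layer = depth_at_point(assoc, X=10.5, Y=20.0)     # -> (0.25, 2)
```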
(B) illustrates an observation target region (in other words, a candidate region) 600 corresponding to the depth D designated in (A) in a plan view of the slope surface 60 on the X-Y plane. A straight region corresponding to a certain layer (for example, the third layer) is the observation target region 600. In the layer, patterns such as the channel holes 601 indicated by ellipses are formed.
The computer system 1 acquires 3-dimensional coordinate information (X, Y, Z) at the observation target position in the SEM from the depth designated in (A) based on the association data 305 after the height correction.
(D) illustrates a case in which an observation image is acquired with the SEM at each point of the observation target positions of (C). The number of acquired observation images, the size of the observation images, and the like can be set by the user. Among the observation images acquired at the one or more observation target positions at the same designated depth (the same layer), the user can observe the image at an appropriate position.
As described above, in the analysis system according to the embodiment, the user can perform detailed observation in the SEM almost automatically (automatically except for settings and inputs by the user) by moving to the 2-dimensional coordinates (X, Y) corresponding to a target depth (or layer) from the sample top surface based on the coordinate conversion from the CSI to the SEM and the association data. In this way, by observing the aimed position using accurate depth information, it is possible to analyze and evaluate a channel hole shape or the like of a sample (for example, a 3-dimensional NAND device) with high accuracy in a short time.
In the embodiment, as the coordinate conversion and the association, the 2-dimensional coordinates (xy coordinates) of the depth data (depth map) measured with the CSI are associated with the 2-dimensional coordinates (XY coordinates) on the SEM coordinate system. The coordinate conversion and the association can be implemented by a projective conversion method (in other words, a 4-point alignment method) using, for example, the markers 7.
In this system, depth information of the entire slope surface 60 can be ascertained by acquiring a height map of the CSI over the entire region of the polished slope surface 60 and performing the coordinate conversion and the height correction. Therefore, this system can automatically estimate points (a plurality of observation target positions and candidates) at the same designated depth or layer.
When a certain acquired point (observation target position) is not appropriate for imaging and observation due to an abnormality such as a foreign matter, different points at the same depth/layer can be set as observation target positions so as to re-obtain data. In particular, in the case of mechanical polishing, a foreign matter such as polishing waste is likely to remain, unlike with the FIB. When a plurality of points at the same depth/layer are acquired as observation candidates, appropriate measures can be taken even in this case.
An analysis system according to a first embodiment, which is a detailed configuration example, will be described with reference to the drawings.
First, a configuration example of the sample 6 will be described. The sample 6 (in other words, an observation target) is, for example, a thin piece taken from a part of a semiconductor wafer in which various semiconductor devices are formed. Accordingly, examples of the sample 6 include, as conceptual and specific examples, a semiconductor substrate, a semiconductor element such as a transistor formed on the semiconductor substrate, a large scale integrated circuit (LSI) device in which a plurality of transistors are highly integrated, a multi-layered wiring layer including a plurality of gate electrodes, an inter-layer insulating film formed between such layers, and a semiconductor memory device including a 3-dimensional NAND device. In the example of the first embodiment, the sample 6 is a 3-dimensional NAND device.
The slope surface 60 of the sample 6 includes a plurality of patterns 61 (for example, the channel holes 601).
A display device 1120 and an operational device 1121 provided inside or outside of the CSI are electrically connected to the general control unit C10. An example of the display device 1120 is a liquid crystal display, and an example of the operational device 1121 is a mouse and a keyboard. The user can operate and use the CSI with the display device 1120 and the operational device 1121.
Inside the lens-barrel 102, a white light source 103, a first beam splitter 104, a second beam splitter 105, an objective lens 106, a reference surface 107, and a camera 108 are provided.
The stage 109 and the stage control device 110 connected to the stage 109 are provided outside of the lens-barrel 102 and are installed in the atmosphere. The sample 6 can be mounted on the stage 109 via the sample holder 8. When the sample 6 is measured with the CSI, the sample holder 8 on which the sample 6 is mounted is installed on the stage 109. In this example, the stage 109 of the CSI is a stage that can move on four axes (X, Y, Z, T) (where T is a tilt direction). The present invention is not limited thereto, and the stage 109 may be another type of stage. The stage control device 110 can displace the position and direction of the stage 109. When the stage 109 is displaced, the position and direction of the sample 6 are displaced.
The white light source 103 emits irradiation light WL1 (whose optical axis is indicated by a one-dot chain line). The first beam splitter 104 and the second beam splitter 105 split the emitted irradiation light WL1 into two pieces of light, emit one piece of light WL1a to the reference surface 107, and emit the other piece of light WL1b to the top surface of the sample 6. The objective lens 106 focuses the irradiation light WL1 to converge on the sample 6 installed on the stage 109. The reflected light from the reference surface 107 and the reflected light from the sample 6 are collected as one piece of reflected light WL2 through the first beam splitter 104 and the second beam splitter 105 and formed as an image in the camera 108.
The general control unit C10 includes an optical system control unit C11, a stage control unit C12, and a calculation unit C13 and generally controls these units. Therefore, in the description, control performed by each of the optical system control unit C11, the stage control unit C12, and the calculation unit C13 may be described as being performed by the general control unit C10. The general control unit C10 is simply called a control unit, a controller, a control device, or the like in some cases. The general control unit C10 can be implemented by a computer system, a dedicated circuit, or the like.
The optical system control unit C11 is electrically connected to the white light source 103, the first beam splitter 104, the second beam splitter 105, the objective lens 106, and the reference surface 107 to control operations of these units. The optical system control unit C11 controls an imaging condition of the camera 108 to obtain a signal (image information) imaged by the camera 108. The optical system control unit C11 controls focus by the objective lens 106.
The stage control unit C12 is electrically connected to the stage control device 110 to control an operation of each driving mechanism included in the stage control device 110. The stage control unit C12 controls the stage control device 110 to move the stage 109, so that a position or a field of view of measurement on the top surface of the sample 6 can be set.
The calculation unit C13 includes a top surface information acquisition unit C14, an instruction input unit C15, and a storage unit C16. The calculation unit C13 has a function of performing program processing by a processor.
The top surface information acquisition unit C14 is connected to the camera 108 and converts a signal (image information) of the reflected light WL2 detected by the camera 108 into 3-dimensional coordinate information (a height map).
The instruction input unit C15 receives information input by the user on the screen of the display device 1120 (or the display device 206).
The lens-barrel 132 is mounted on the sample chamber 137 to configure a charged particle beam column. The lens-barrel 132 includes the electron gun 133, a condenser lens 134, a deflection coil 135, and an objective lens 136. The electron gun 133 can radiate the charged particle beam EB1 downwards in the Z direction. The condenser lens 134 causes the charged particle beam EB1 to converge. The deflection coil 135 deflects the charged particle beam EB1 to scan it over the top surface of the sample 6. The objective lens 136 causes the charged particle beam EB1 to converge on the top surface of the sample 6.
Inside the sample chamber 137, the sample holder 8 for mounting and holding the sample 6, a stage (in other words, a sample stand) 139 on which the sample holder 8 is placed, a stage control device 1310 connected to the stage 139 and driving the stage 139, and the like are installed. Although not illustrated, a mechanism for conveying the sample 6 onto the stage 139, an inlet/outlet port, and the like are provided in the sample chamber 137.
When the sample 6 is observed and analyzed in the SEM, the sample holder 8 on which the sample 6 is mounted is conveyed into the sample chamber 137 through the inlet/outlet port and is installed on the stage 139. When the sample 6 is taken out from the sample chamber 137, the sample holder 8 on which the sample 6 is mounted is conveyed out of the sample chamber 137 via the inlet/outlet port.
The stage control device 1310 can displace a position and a direction of the stage 139 under the control of the stage control unit C2. When the stage 139 is displaced, a position and a direction of the sample 6 are displaced. In this example, the stage 139 of the SEM is a stage that can move on five axes (X, Y, Z, T, R). T is a tilt direction (a direction sloped with respect to the X-Y plane) and R is a rotational axis. The present invention is not limited thereto and the stage 139 may be, for example, a stage or the like that can move on four axes (X, Y, T, R) except for the Z axis.
The stage control device 1310 includes an XY-axis driving mechanism that can be driven in the directions (X, Y) parallel to the placement surface of the SEM, a Z-axis driving mechanism that can be driven in the direction (Z) perpendicular to the placement surface, an R-axis driving mechanism that can be driven in the rotational direction (R), and a T-axis driving mechanism that can be driven in the tilt direction (T). These driving mechanisms can be used to analyze any point (including the markers 7) of the sample 6 and the sample holder 8 installed on the stage 139. A target point of the sample 6 can be moved to the center of the imaging field of view of the SEM and tilted in any direction by these mechanisms.
A detector 1311 is provided in the lens-barrel 132 (or may be provided in the sample chamber 137). When the charged particle beam EB1 is radiated to the top surface of the sample 6 in the observation and analysis of the sample 6, particles EM2 such as secondary electrons or reflected electrons released from the top surface of the sample 6 can be detected by the detector 1311. The detector 1311 converts the particles EM2 into an electric signal and detects the electric signal. The detector 1311 outputs the detected signal (in other words, image information).
A display device 1320 and an operational device 1321 electrically connected to the general control unit C0 are provided outside of or inside the SEM. The user operates the operational device 1321 to perform work while viewing the screen of the display device 1320. Accordingly, various types of data and information are input to the general control unit C0, and various types of data and information are output from the general control unit C0.
The general control unit C0 includes a scanning signal control unit C1, a stage control unit C2, and a calculation unit C3 and generally controls these units. Therefore, in the description, control performed by each of the scanning signal control unit C1, the stage control unit C2, and the calculation unit C3 may be described as being performed by the general control unit C0. The general control unit C0 is simply called a control unit (in other words, a controller or a control device) in some cases.
The scanning signal control unit C1 is electrically connected to the electron gun 133, the condenser lens 134, the deflection coil 135, and the objective lens 136 to control the operations of these units. For example, the electron gun 133 receives a control signal from the scanning signal control unit C1, generates the charged particle beam EB1, and radiates the charged particle beam EB1 downwards toward the sample 6 in the Z direction. Each of the condenser lens 134, the deflection coil 135, and the objective lens 136 receives a control signal from the scanning signal control unit C1 and excites a magnetic field. The magnetic field of the condenser lens 134 converges the charged particle beam EB1 so that the beam diameter becomes appropriate. The charged particle beam EB1 is deflected by the magnetic field of the deflection coil 135 to scan 2-dimensionally in the X and Y directions over the top surface of the sample 6. The charged particle beam EB1 converges again on the top surface of the sample 6 by the magnetic field of the objective lens 136. The scanning signal control unit C1 adjusts the excitation strength of the objective lens 136 to focus the charged particle beam EB1 on the top surface of the sample 6.
The stage control unit C2 is electrically connected to the stage control device 1310 and has a function of controlling an operation of each driving mechanism included in the stage control device 1310 and constantly linking a field of view of the SEM with coordinates of the stage 139.
The calculation unit C3 includes an image acquisition unit C4, an other-apparatus data reading unit C5, an instruction input unit C6, a storage unit C7, and a pattern shape analysis unit C8. The calculation unit C3 has a function of performing program processing of the processor.
The image acquisition unit C4 is connected to the detector 1311 and controls the detector 1311. The image acquisition unit C4 processes the signal of the particles EM2 such as secondary electrons or reflected electrons detected by the detector 1311 to convert the signal into a photographic image 1401.
The other-apparatus data reading unit C5 reads data and information from other apparatuses including the CSI.
The instruction input unit C6 receives information input by the user on the screen of the display device 1320 using the operational device 1321. The general control unit C0 stores data and information such as information regarding the sample 6, the imaging condition of the SEM, the coordinates of the stage 139, the photographic image 1401, the coordinate data (2-dimensional coordinate information) 1402, and the height map acquired from the CSI in the storage unit C7. The various types of data and information are stored and managed in association with each other.
The pattern shape analysis unit C8 analyzes the shapes of the plurality of patterns (the patterns 601).
The calculation unit C3 can perform analysis of stage coordinates or a pattern shape and calculation of depth information of a multi-layered structure or the like, as will be described below, using the information received by the instruction input unit C6 and the data information stored in the storage unit C7.
The general control unit C10 of the above-described CSI or the general control unit C0 of the SEM may be integrated into one unit and mounted as a part of the computer system 1.
The sample holder 8 includes four markers 7 for the coordinate conversion. The four markers 7 are referred to as M1, M2, M3, and M4 to distinguish them from each other. In this example, as illustrated, two markers 7 are formed on the upper surface of each of the two right and left sample fixing portions 81. The total of four markers 7 (M1 to M4) are disposed at positions corresponding to the vertexes of a rectangle. The present invention is not limited thereto. The markers 7 may be disposed near the sample 6 or near the sample fixing portions 81 of the sample holder 8. The four markers 7 are disposed at the same height.
The markers 7 are provided to perform coordinate conversion (the above-described step S4) called a projective conversion method that is more accurate than affine transformation. In the projective conversion method, coordinate conversion is performed using coordinates of four points located on the same plane.
The markers 7 are formed as markers having edges on the order of micrometers so that a computer or a user can ascertain or designate the coordinates of the markers 7 as accurately as possible in both the SEM and the CSI during the coordinate conversion. The four markers 7 preferably have different shapes so that the computer and the user can identify them. When the four markers 7 can be identified using another means, the four markers 7 may have the same shape. In this example, the basic shape of the four markers 7 is a square when viewed from above. Further, as illustrated in the enlarged view, a marker ID (in this example, a number) is engraved on the top surface of the square. Since the shape of the marker ID of each marker 7 is different, the computer and the user can identify each marker 7.
As a modification example, the markers 7 may be formed on the sample surface. When there are points with a plurality of identifiable shapes formed on the sample top surface, the points can be used as the markers 7. The user can convey the sample holder 8 on which the sample 6 is held from the CSI to the SEM as it is. The installation of the sample holder 8 inside the sample chamber 137 of the SEM is performed by the user, but a conveyance robot or the like may be used.
The screen examples are referred to as appropriate in the description.
The analysis system performs a processing flow including the above-described step S1 and the subsequent steps, as described below.
In step S101, the user installs the sample holder 8 on which the sample 6 is mounted on the stage 109 of the CSI.
In step S102, in the CSI, the general control unit C10 receives an instruction to measure the shape of the top surface of the sample 6 from the user and starts measuring the shape of the top surface of the sample 6. The shape of the top surface of the sample 6 measured by the CSI is acquired as 3-dimensional coordinate information (the 3-dimensional coordinate information 1201).
The shape of the top surface is measured including the positions of the markers 7.
In step S102, when the height map (3-dimensional coordinate information) of the CSI is acquired, the user can also determine the acceptability of the sample 6 using the data of the height map before the sample 6 is inserted into the SEM for observation.
In step S103, the general control unit C10 of the CSI (or the computer system 1) performs surface correction on the 3-dimensional coordinate information acquired in step S102. The surface correction is not essential, but accuracy can be improved when it is performed. In step S103, a slope of the entire sample 6, which may occur depending on the installation state of the sample 6 in the CSI, is corrected through the surface correction. The surface correction is a correction that makes the top surface of the sample 6 parallel to the installation surface (for example, a horizontal surface) of the stage 109, without a slope, in the 3-dimensional coordinate information (height map). As the surface correction, plane approximation correction or the like can be applied. By performing the surface correction, it is possible to improve the accuracy of the 3-dimensional coordinate information acquired in the CSI and thereby the accuracy of each subsequent process using the 3-dimensional coordinate information.
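One common way to implement such a plane approximation correction is to fit a least-squares plane to the height map and subtract the fitted tilt, so that the overall slope caused by the installation state of the sample is removed. The following is a minimal sketch in Python/NumPy; the grid and values are illustrative, not measurement data, and this is only one possible implementation of the correction.

```python
import numpy as np

def level_height_map(x, y, z):
    """Fit z ~ a*x + b*y + c by least squares and subtract the fitted tilt
    a*x + b*y, so that the fitted plane becomes horizontal (plane approximation)."""
    A = np.column_stack((x.ravel(), y.ravel(), np.ones(x.size)))
    coeff, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)   # coeff = [a, b, c]
    a, b, _ = coeff
    return z - (a * x + b * y)

# Illustrative tilted height map on a small grid (not measurement data).
xx, yy = np.meshgrid(np.arange(5.0), np.arange(5.0))
zz = 0.01 * xx - 0.02 * yy + 0.001 * np.random.randn(5, 5)
zz_leveled = level_height_map(xx, yy, zz)
```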
In step S104, the user conveys the sample holder 8 on which the sample 6 is mounted from the CSI to the SEM. The user installs the sample holder 8 on the stage 139 inside the sample chamber 137 of the SEM.
The user reads the 3-dimensional coordinate information (height map) after the surface correction in step S103 into the general control unit C0 of the SEM. The general control unit C0 stores the 3-dimensional coordinate information (height map) in the storage unit C7 in response to an operation of the user.
In step S105, irradiation with the charged particle beam EB1 by the SEM is started as imaging preparation. At this time, the user operates the operational device 1321 while viewing the screen of the display device 1320 so that the charged particle beam EB1 is radiated by the SEM. Also as imaging preparation, the observation condition (in other words, the imaging condition) of the charged particle beam EB1 of the SEM is set based on an operation of the user on the operational device 1321 while viewing the screen of the display device 1320. Subsequently, the user focuses the charged particle beam EB1 on the top surface of the sample 6 and performs general alignment including a change of magnification or the like. The preparation of step S105 may instead be performed at the stage at which imaging is actually performed.
In step S106, the analysis software 210 (also referred to as an application) is started in response to an operation of the user. When the application is started, an operational screen is displayed.
Steps S107 to S110 are performed by the user on the operational screen as follows.
The condition setting field 1801 includes a 3-dimensional coordinate information (CSI data) selection button B4, a marker pattern image selection field 1802 (a low-magnification selection button B5, a middle-magnification selection button B6, and a high-magnification selection button B7), a marker position coordinate registration field 1803 (a marker position coordinate registration button B8), a sample information input field 1804 (a number-of-layers input button B9, a one-layer thickness input button B10, and a sample information registration button B11), and a height reference coordinate registration field 1805 (a correction method selection button B12, a 2-dimensional information display field B13, a height reference coordinate input field B14, and a height reference coordinate registration button B15), in this order from the top.
In step S107, when the user operates the CSI data selection button B4 (3-dimensional coordinate information reference), the 3-dimensional coordinate information stored in the storage unit C7 of the SEM in step S104 is read. The processor of the general control unit C0 reads the various types of data onto the memory, performs processing based on the program, and appropriately stores the processing results in the storage unit C7.
In step S108, a photographic image of the markers 7 (a marker pattern image) for pattern matching is registered in the marker pattern image selection field 1802.
The image for the pattern matching of the markers 7 is an image in which some or all of the markers 7 for performing the coordinate conversion are cut out. In this example, images of three types of resolutions (magnifications) such as a low magnification (LM), a middle magnification (MM), and a high magnification (HM) can be registered. The image for the pattern matching of the markers 7 is used to automatically detect position coordinates (X, Y) of the markers 7 for pattern matching during the coordinate conversion. Here, in the image for the pattern matching of the markers 7, positions serving as marks such as edges of the markers 7 are disposed at the center. In this example, a pattern of each resolution can be selected and set for pattern matching with higher accuracy.
In step S109, the position coordinates of the markers 7 for the coordinate conversion are registered. The user confirms and registers the position coordinates of the markers 7 in the marker position coordinate registration field 1803. In the marker position coordinate registration field 1803, coordinates (x, y) of the centers of the markers 7 acquired in the CSI and coordinates (X, Y) of the centers of the markers 7 acquired in the SEM which are stored in advance in the storage unit C7 are registered by the number of markers 7 used during the coordinate conversion. At this time, when the user confirms the coordinates of the markers 7 (in this example, three) of the table of the marker position coordinate registration field 1803 and then operates a marker position coordinate registration button B8, marker position coordinate information is registered.
In the marker position coordinate registration field 1803, the processor may input and present values automatically at an early stage. In the case of an automatic input, for example, as 2-dimensional coordinates (x, y) of the CSI, 2-dimensional coordinates (x, y) of the markers 7 on an external CSI stage coordinate system of the height map of the CSI are input. In the case of a user input, the user may view data of the CSI map or the like on the screen and input 2-dimensional coordinates (x, y) of the markers 7. For 2-dimensional coordinates (X, Y) of the SEM, in the case of an automatic input, the processor inputs position coordinates of the markers 7 detected from the photographic image. Alternatively, in the case of a user input, the user may designate and input the positions of the markers 7 from the photographic image of the SEM on the screen.
The operations of steps S107 and S109 can also be implemented as operations executed by the application of step S111 to be described below. In this case, the operations of the user can be kept to a minimum and the operations can be fully automated.
In step S110, information regarding the sample 6 is input, height reference coordinates are designated, and any observation position is designated (set). First, as the input of the information regarding the sample 6, the user inputs information regarding the sample 6 (3-dimensional NAND device) in the sample information input field 1804. In this example, in the sample information input field 1804, the number of layers of the multi-layered structure (such as the multi-layered structure 62), the thickness of one layer, and the like are input.
The input of the sample information is not limited to a manual input by the user. For example, the design data 301 of the sample 6 acquired from the MES 5 or the like may be used.
Here, the calculation unit C3 of the general control unit C0 can acquire depth information of the multi-layered structure of the sample 6 and information regarding the number of layers by combining the sample information (the number of layers of the multi-layered structure, the thickness of one layer or the thickness of each layer, and the depth at which the first layer of the multi-layered structure begins, or the like) input by the user with the 3-dimensional coordinate information of the CSI.
That is, the calculation unit C3 can know how deep a predetermined position in the 3-dimensional coordinate information of the CSI is from the top surface TS of the sample 6 and to which layer of the multi-layered structure the predetermined position corresponds. In other words, the depth information of the multi-layered structure includes the number of layers and the depth, from the top surface TS of the sample 6, of the predetermined position on the 3-dimensional coordinate information.
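The conversion from a depth to a layer number can be illustrated as follows, assuming for simplicity a uniform layer thickness and a known depth at which the first layer begins; the function name, parameters, and numerical values are illustrative assumptions, not values from the embodiment.

```python
def depth_to_layer(depth, first_layer_start, layer_thickness, n_layers):
    """Convert a depth from the sample top surface into a layer number.
    Layer 1 is the layer closest to the top surface; None means the depth is
    outside the multi-layered structure."""
    if depth < first_layer_start:
        return None                      # still above the first layer
    layer = int((depth - first_layer_start) // layer_thickness) + 1
    return layer if layer <= n_layers else None

# Illustrative numbers only: the first layer begins 0.05 um below the top
# surface, each layer is 0.05 um thick, and there are 96 layers in total.
print(depth_to_layer(0.23, first_layer_start=0.05,
                     layer_thickness=0.05, n_layers=96))   # -> 4
```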
In the designation of the height reference coordinates in step S110, the user registers the height reference coordinates in the height reference coordinate registration field 1805. The user displays a secondary electron or reflected electron image on the display device 1320 of the SEM and first moves a first point of the multi-layered structure to the center of the display field B13 of the secondary electron or reflected electron image using the operational device 1321. The user then operates the height reference coordinate registration button B15 at the coordinates of the center. In this example, the first point is a point of the first layer of the multi-layered structure.
When the 2-point correction is selected as the correction method in the designation of the height reference coordinates, two points are designated as the height reference coordinates in the same manner. When the 3-point correction is selected, the first to third points are designated as the reference coordinates in the same manner.
In this example, the first layer of the multi-layered structure 62 is the layer (the first layer L1) closest to the top surface TS of the sample 6.
In the sample 6 in which the multi-layered structure can be clearly confirmed in the SEM, a layer whose layer number can be easily confirmed, such as the first layer of the multi-layered structure, is easily designated as the height reference coordinates. However, depending on the sample 6, it is also assumed that the multi-layered structure cannot be clearly confirmed. In this case, for example, the position of a specific pattern of the sample 6 or the position of a structure with another shape can also be designated as the height reference coordinates (X, Y).
In the setting of any observation target position in step S110, the user designates observation target positions in the display area B18.
The 2-point correction and the 3-point correction have been described as the setting of the height reference coordinates, but the setting of the height reference coordinates may be a height setting in which at least one point of the sample is used. When the horizontal surface setting of CSI coordinates or SEM coordinates is completed or unnecessary, the setting of one point can be performed as a height setting.
In the table B19 on the right, the coordinates of each observation position designated in the display area B18 are displayed. Alternatively, the coordinates of each observation position can be input directly in the table B19. One set of observation position coordinates corresponds to one observation field of view.
For example, when a variation in a structure in the same layer is evaluated, a plurality of points in the same layer are designated as observation regions.
For example, when a variation in a structure over a plurality of layers is evaluated, a plurality of observation regions over the plurality of layers may be designated. For example, two points indicated by Numbers 1 and 5 are designated or two points indicated by Numbers 2 and 6 are designated. When a single field of view is observed, a single field of view (observation position coordinates) can also be designated. The designated observation regions are observed in an order of the numbers or are observed in any order selected by the user.
When the observation target positions are designated in step S110, observation target regions may be limited on the GUI screen, and the observation target positions may be designated within the limited regions.
As a modification, another processing flow configuration example may be as follows. Even in a state in which the user has not input and set information for all items on the operational screen, the processor provisionally executes the application of step S111 and performs the coordinate conversion and association between the CSI and the SEM. Thereafter, the user can set, for example, a height reference position or the like on the operational screen, and the processor can perform the height correction using the height reference position.
In step S111, the application of the analysis software 210 is performed. When the user operates the automatic coordinate conversion execution button B1 of the operational screen, the application (the general control unit C0 of the SEM) performs a process such as coordinate conversion automatically based on various types of data information input and set in steps S107 to S110.
First, when the automatic coordinate conversion execution button B1 is operated, a magnification equivalent to that of the low-magnification image for the pattern matching of the markers 7 is set in the SEM. Subsequently, the general control unit C0 reads the 3-dimensional coordinate information of the CSI stored in the storage unit C7 and extracts the information regarding the coordinates of the markers 7 for the coordinate conversion included in the 3-dimensional coordinate information. The general control unit C0 calculates the coordinates of the markers 7 on the SEM from this information and moves the stage 139 to the vicinity of the first point (the first marker M1) of the markers 7.
Next, the general control unit C0 reads the low-magnification image of the marker pattern image read in step S108 and performs a pattern matching process using an image of the first point of the marker 7 acquired at a low magnification. After the pattern matching, the general control unit C0 calculates a deviation amount of the central position of the marker 7 from the marker pattern image and stores correct coordinates of the marker 7 in the storage unit C7. This operation is performed similarly on the other three points (M2 to M4) of the markers 7.
Further, the general control unit C0 performs similar operations in order using the middle-magnification image and the high-magnification image for the pattern matching of the markers 7 and accurately registers the coordinates of the markers 7 on the SEM. At this time, an imaging magnification of the marker pattern image and the magnification of the secondary electron or reflected electron image displayed in the display area of the secondary electron or reflected electron image on the display device 1320 of the SEM are set to the same magnification.
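The marker-detection part of this sequence can be illustrated with ordinary normalized cross-correlation template matching. The sketch below uses Python with OpenCV on synthetic placeholder images; the actual image acquisition, stage movement, and magnification switching are apparatus-specific and omitted, so this shows only how the center of one marker and its deviation from the expected position might be obtained at a single magnification.

```python
import numpy as np
import cv2

def locate_marker(image, template, expected_xy):
    """Find a marker by normalized cross-correlation and return its detected
    center, the deviation from the expected position (pixels), and the score."""
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)        # best-match location
    th, tw = template.shape[:2]
    center = (top_left[0] + tw / 2.0, top_left[1] + th / 2.0)
    deviation = (center[0] - expected_xy[0], center[1] - expected_xy[1])
    return center, deviation, score

# Synthetic placeholder: a bright square "marker" in a dark field.
image = np.zeros((200, 200), np.uint8)
image[80:110, 120:150] = 255
template = image[78:112, 118:152].copy()
center, deviation, score = locate_marker(image, template, expected_xy=(100, 100))
# Repeating this with low-, middle-, and high-magnification templates narrows
# the deviation step by step (the coarse-to-fine registration described above).
```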
The general control unit C0 calculates a conversion coefficient of the projective conversion method using xy coordinate information of the markers 7 on the 3-dimensional coordinate information of the CSI and XY coordinate information of the markers 7 on the SEM, and converts all the xy coordinates of the 3-dimensional coordinate information of the CSI into XY coordinates on the SEM using the projective conversion method.
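Once the conversion coefficient (a 3x3 projective matrix) is known, converting all the xy coordinates of the CSI height map can be done in one vectorized operation. The following is a minimal sketch; OpenCV is used here only to obtain the matrix from the four markers, and the coordinates are illustrative assumptions.

```python
import numpy as np
import cv2

# Four marker coordinates on the CSI side (x, y) and the SEM side (X, Y); illustrative values.
csi_pts = np.float32([[0.0, 0.0], [10.0, 0.2], [10.1, 8.0], [0.2, 8.1]])
sem_pts = np.float32([[2.0, 3.0], [52.0, 4.5], [53.0, 44.0], [3.5, 45.0]])
H = cv2.getPerspectiveTransform(csi_pts, sem_pts)   # projective conversion coefficient

# Convert every (x, y) of the height-map grid into SEM coordinates in one step.
xx, yy = np.meshgrid(np.linspace(0.0, 10.0, 50), np.linspace(0.0, 8.0, 40))
pts = np.stack([xx, yy, np.ones_like(xx)], axis=-1) @ H.T   # homogeneous coordinates
XX = pts[..., 0] / pts[..., 2]
YY = pts[..., 1] / pts[..., 2]
# XX, YY now hold the SEM stage coordinates of each height-map pixel; together
# with the measured heights they form the CSI-SEM association data.
```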
Subsequently, the general control unit C0 corrects a height of the 3-dimensional coordinate information after the coordinate conversion so that the positions are at the same height using the reference height coordinates of, for example, three points registered in step S110.
The height correction will be supplemented. After the coordinate conversion, it is necessary to perform the height correction. This is because a slope or the like which cannot be corrected through the surface correction (step S103), such as a slight slope of the sample occurring when the height map is acquired with the CSI, may remain, and such a residual slope would make the depth obtained from the 3-dimensional coordinate information deviate from the actual depth. The height correction removes this residual slope using the registered height reference coordinates.
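A minimal sketch of the 3-point version of this height correction is given below (the 2-point correction is the one-dimensional analogue): a plane is passed through the three reference points, which are expected to lie at the same height (for example, three points on the first layer), and the tilt of that plane is subtracted from every Z value so that the three points come to the same height. The variable names and numerical values are illustrative assumptions.

```python
import numpy as np

def height_correct(Z, X, Y, ref_points):
    """3-point height correction: ref_points holds three (X, Y, Z) reference
    coordinates that should share the same height. The plane Z = a*X + b*Y + c
    through them is found and its tilt a*X + b*Y is removed from the whole map."""
    P = np.array(ref_points, float)
    A = np.column_stack((P[:, 0], P[:, 1], np.ones(3)))
    a, b, c = np.linalg.solve(A, P[:, 2])
    return Z - (a * X + b * Y)           # the three references now all equal c

# Illustrative map: a bowl-like slope surface plus an unwanted residual tilt.
X, Y = np.meshgrid(np.arange(5.0), np.arange(5.0))
surface = -0.02 * ((X - 2.0) ** 2 + (Y - 2.0) ** 2)   # toy polished (bowl) shape
tilt = 0.01 * X + 0.02 * Y                            # residual tilt to remove
Z = surface + tilt
# Three non-collinear points whose true surface height is the same (same "ring"):
refs = [(0.0, 2.0, Z[2, 0]), (2.0, 0.0, Z[0, 2]), (4.0, 2.0, Z[2, 4])]
Z_corrected = height_correct(Z, X, Y, refs)
```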
The general control unit C0 associates the number of layers with the depth from the sample top surface in the 3-dimensional coordinate information after the height correction, based on the information input in step S110. Accordingly, the user can designate observation positions not only with the depth but also with the number of layers.
Finally, the general control unit C0 moves the stage 139 to the observation position registered in step S110 by controlling the stage control device 1310 from the stage control unit C2. The general control unit C0 irradiates the top surface TS of the sample 6 with the charged particle beam EB1 in the Z direction under the control of the scanning signal control unit C1 and performs focusing at the observation position registered in step S110 for imaging using the objective lens 136.
When a plurality of observation positions are designated in step S110, the plurality of observation positions can be imaged and observed by performing the process continuously and automatically, for example, in order from the top of the table on the operational screen.
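The continuous, automatic imaging of the registered observation positions can be outlined as a simple loop. The `sem` object and its `move_stage`, `autofocus`, and `capture` methods below are hypothetical placeholders standing in for the apparatus control path (the stage control unit C2 and the scanning signal control unit C1); they are not an actual API of the embodiment.

```python
def observe_positions(sem, positions):
    """Image each registered observation position in order.
    'positions' is an iterable of (number, X, Y, Z) tuples as registered in the
    observation-position table; 'sem' is a hypothetical controller object."""
    images = {}
    for number, X, Y, Z in positions:
        sem.move_stage(X, Y)    # hypothetical: bring the target into the field of view
        sem.autofocus(Z)        # hypothetical: focus the beam using the known height Z
        images[number] = sem.capture()   # hypothetical: acquire the observation image
    return images

# Usage sketch with the table registered in step S110 (values illustrative):
# images = observe_positions(sem_controller, [(1, 10.0, 20.0, -0.10),
#                                             (2, 10.5, 20.0, -0.25)])
```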
The coordinates and the depth information of the multi-layered structure acquired in step S111 and the like are recorded in a recording table.
As described above, the analysis system can acquire the 3-dimensional information of the sample 6 on the order of nanometers and acquire the depth information of the multi-layered structure quickly and accurately.
Depending on the sample 6, the observation surface (the slope surface 60) formed through the polishing process does not have a desired top surface shape in some cases. For example, there is unevenness on the observation surface in some cases. In these cases, the user can quickly determine whether the shape of the observation surface is acceptable based on the 3-dimensional coordinate information of the CSI. For example, when the unevenness of the observation surface is large, the user can use another observation range at the same height.
In step S112, the general control unit C0 analyzes the plurality of patterns (for example, the patterns (channel holes) 601 and the patterns 61) included in the observation image.
In the image reading setting field 2402, the user inputs the number of layers (L) or the depth (D) of the sample 6 and operates the reading button B33. Accordingly, the general control unit C0 displays a photographic image captured with the SEM in step S112 in the photographic image display field 2401. When the user operates the reference button B34, the user can also select and refer to a previously acquired photographic image. In this example, the photographic image display field 2401 displays a top surface image including the patterns 61 in a case where the slope surface 60 is a flat surface.
Subsequently, when the user operates the pattern detection button B31, the general control unit C0 detects the plurality of patterns 61 using an image recognition technology, assigns identification numbers to the plurality of patterns 61, and also displays the numbers in the photographic image display field 2401.
Subsequently, when the user operates the pattern analysis button B32, the pattern shape analysis unit C8 automatically measures the diameter of each of the plurality of patterns 61 at the observation coordinates (x, y, z) using an image recognition technology. The pattern shape analysis unit C8 acquires pattern shape information such as a major-axis diameter, a minor-axis diameter, an average diameter, and roundness for each of the plurality of patterns 61. The pattern shape information is displayed in a table.
The observation coordinates (x, y, z) described here are coordinates of a central position of a photographic image which is observed. Accordingly, the calculated number of layers is the number of layers at the central position of the photographic image which is observed.
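The diameter and roundness measurements can be illustrated with standard contour analysis. The sketch below uses OpenCV 4 on a synthetic test image; the binarization threshold, the synthetic image, and the field names are illustrative assumptions and do not represent the embodiment's actual image recognition processing.

```python
import numpy as np
import cv2

def analyze_patterns(gray, threshold=128):
    """Detect hole-like patterns in a grayscale image and measure their shape."""
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for number, c in enumerate(contours, start=1):
        if len(c) < 5:                      # cv2.fitEllipse needs at least 5 points
            continue
        (cx, cy), axes, _ = cv2.fitEllipse(c)
        major_d, minor_d = max(axes), min(axes)
        area, perimeter = cv2.contourArea(c), cv2.arcLength(c, True)
        roundness = 4.0 * np.pi * area / perimeter ** 2 if perimeter > 0 else 0.0
        results.append({"number": number, "center": (cx, cy),
                        "major_diameter": major_d, "minor_diameter": minor_d,
                        "average_diameter": 0.5 * (major_d + minor_d),
                        "roundness": roundness})
    return results

# Synthetic test image with two bright elliptical "channel holes".
img = np.zeros((120, 120), np.uint8)
cv2.ellipse(img, (40, 60), (10, 14), 0, 0, 360, 255, -1)
cv2.ellipse(img, (85, 60), (12, 12), 0, 0, 360, 255, -1)
print(analyze_patterns(img))
```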
The general control unit C0 can record the acquired pattern shape information in a recording table (pattern information).
Further, the observation image acquired in step S111 and the pattern analysis result obtained in step S112 can be displayed simultaneously in association with the observation position indicated on the 3-dimensional height map.
As described above, the analysis system according to the first embodiment can acquire depth information of the multi-layered structure 62 of the sample 6 and can also acquire pattern shape information of the plurality of patterns included in the sample 6 through analysis based on the coordinate conversion and linkage between the CSI and the SEM.
According to the first embodiment, the scheme in which the user designates the depth or the layer from the sample top surface and then observes the position of the designated depth or layer with the SEM has been mainly described. The present invention is not limited thereto. In the first embodiment, similarly to the function (B) described above, the user can also designate a position of interest on an observation image and obtain the depth or the layer corresponding to that position, which can be displayed, for example, in the height display field 1807 in the lower part of the screen.
The present invention has been described specifically based on the embodiments, but the present invention is not limited to the above-described embodiments and can be modified in various forms without departing from the gist of the present invention.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2021/036215 | 9/30/2021 | WO |