The present invention relates to a method and apparatus for effectively inspecting a circuit pattern with a scanning charged particle microscope.
In forming a circuit pattern on a semiconductor wafer, a method such as the following is employed: a coating material (referred to as a resist) is applied on the semiconductor wafer; a mask (a reticle) for exposure of the circuit pattern is stacked on the resist; a visible ray, an ultraviolet ray or an electron beam is irradiated thereon to expose and develop the resist, so that a circuit pattern of the resist is formed on the semiconductor wafer; and the semiconductor wafer is further etched with the circuit pattern of the resist as a mask, for example.
In design and manufacturing of semiconductor devices, it is important to manage dust emission in manufacturing devices such as exposing and etching devices, and to evaluate the shape of the circuit pattern formed on the wafer. Because the circuit pattern is very fine, image capturing and inspection are performed with a scanning charged particle microscope having a high image capture magnification.
The scanning charged particle microscopes include a scanning electron microscope (SEM), a scanning ion microscope (SIM) and the like. Further, SEM-type image capturing devices include a critical dimension scanning electron microscope (CD-SEM) or a defect review scanning electron microscope (DR-SEM).
A region imaged with the scanning charged particle microscope in order to evaluate the shape of the pattern is referred to as an evaluation point, hereinafter EP in short. In order to capture an image of the EP with a small image capture deviation amount and with a high image quality, a part or all of the adjustment points, i.e. addressing points (hereinafter APs), autofocus points (hereinafter AFs), automatic astigmatism correction points (hereinafter ASTs) and auto brightness and contrast points (hereinafter ABCCs), are set as required, and the EP is imaged after addressing, autofocus adjustment, automatic astigmatism correction and auto brightness and contrast adjustment are performed at the respective adjustment points. The image capture deviation amount in the addressing is corrected by matching a SEM image of the AP, whose coordinates are known and which is previously registered as a registered template, with a SEM image observed in the actual image capture sequence; the deviation amount of the matching is considered as the deviation amount of the image capture position. The evaluation points (EPs) and the adjustment points (APs, AFs, ASTs, ABCCs) are collectively referred to as image capture points. The size, coordinates and image capture conditions of the EPs, the image capture conditions and adjustment method of each adjustment point, the image capture sequence of the image capture points and the registered template are managed as an image capture recipe, and the scanning charged particle microscope performs imaging of the EPs based on the image capture recipe.
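The addressing correction described above can be sketched as follows. This is an illustrative example only: the matching here uses a brute-force sum-of-squared-differences search over a tiny grayscale array, whereas an actual device may use any matching algorithm registered in the image capture recipe; all function names and data layouts are assumptions.

```python
# Illustrative sketch of the addressing step: the registered template (cut
# out around the AP, whose coordinates are known) is matched against the
# observed SEM image, and the matching offset is taken as the image capture
# deviation. SSD matching and pure-Python lists are simplifying assumptions.

def match_template(image, template):
    """Return (row, col) of the best template position by sum of squared differences."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = sum(
                (image[y + j][x + i] - template[j][i]) ** 2
                for j in range(th) for i in range(tw)
            )
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

def capture_deviation(matched_pos, expected_pos):
    """Deviation of the image capture position: matched minus expected."""
    return (matched_pos[0] - expected_pos[0], matched_pos[1] - expected_pos[1])
```

The deviation returned by `capture_deviation` would then be used to correct subsequent moves of the field of vision.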
Conventionally, the image capture recipe has been manually created by an operator, which is a task requiring much effort and time. On the other hand, a semiconductor inspection system is disclosed in which APs are determined based on design data of a circuit pattern of a semiconductor described in the GDS II format or the like and data in the APs is further cut out from the design data and registered in an image capture recipe as a registered template, in order to reduce a burden of the image capture recipe generation (PATENT LITERATURE 1: JP-A-2002-328015).
Further, a “panorama composition technology” is disclosed which generates a seamless image by connecting a plurality of images separately captured, as means of obtaining an image having a wide field of vision with the scanning charged particle microscope (PATENT LITERATURE 2: JP-A-2010-067516).
Conventionally, the quality of a circuit pattern in an evaluation point (EP) has been evaluated by imaging a region on a wafer as the EP in a fixed point inspection. However, it has not been easy to efficiently inspect, with a scanning charged particle microscope, a disconnection or a shape defect in the circuit pattern that can cause an electrical fault, if the coordinates of the EP are not determined as a fixed point. For example, even if an electrical fault such as a disconnection is found by a burn-in test in which a probe is applied on the circuit pattern at two certain points, it has not been easy to precisely specify where between the two points the problem occurs. This also applies to a case where it is not clear whether a fault is present and a fault inspection of certain circuit patterns having the same electrical potential is desired. This is because it is difficult to determine the inspection region that can contain the cause of an electrical fault, and the inspection region is generally wide and cannot fit within the field of vision of the scanning charged particle microscope. In the latter case, enlargement of the field of vision can be expected from the panorama composition technology described in Patent Literature 2, but an efficient inspection is difficult from the viewpoint of the number of imaging actions. Further, the field of vision can be enlarged to some degree by imaging with a low image capture magnification, but the image resolution is reduced, which entails a risk of reduced inspection performance.
The present invention provides a method for efficiently and automatically inspecting a disconnection or a shape defect in the circuit pattern, which can cause an electrical fault, with a scanning charged particle microscope. In order to solve the problem, the present invention provides a method and apparatus for evaluating a circuit pattern having the following characteristics.
The present invention provides a method for evaluating a circuit pattern including: a permissible distance value specification step of specifying a permissible distance value, the permissible distance value being a permissible value of the distance between adjacent first and second images included in a plurality of images obtained by imaging an evaluation pattern; an image capture region determination step of determining an image capture region which includes at least a part of the evaluation pattern and in which the distance between the adjacent images is smaller than the permissible distance value specified in the permissible distance value specification step; and an image capture step of performing imaging of the evaluation pattern in the image capture region determined in the image capture region determination step to obtain a plurality of images.
According to the present invention, a pattern inspection apparatus and a pattern inspection method for enhancing an inspection throughput of the circuit pattern can be provided.
The present invention provides, in design or manufacturing procedures of a semiconductor device, an apparatus and method for efficiently inspecting a disconnection or a shape defect in circuit patterns formed on a wafer, which can cause electrical failures, by imaging the circuit patterns with a scanning charged particle microscope which is an image capturing device. Although embodiments according to the present invention will be described hereinafter in respect of a scanning electron microscope (SEM), which is one of the scanning charged particle microscopes, the present invention is not limited to the SEM, but is applicable to other scanning charged particle microscopes such as a scanning ion microscope (SIM). Further, the present invention is not limited to inspection of the semiconductor device, but is applicable to inspection of samples having patterns required to be imaged and evaluated.
An electro-optical system 202 includes an electron gun 203 therein to generate an electron ray 204. After the electron ray emitted from the electron gun 203 is narrowed by a condenser lens 205, the irradiation position and diaphragm of the electron ray are controlled by a deflector 206 and an objective lens 208 so that the electron ray is irradiated and focused onto a semiconductor wafer 201, which is a sample disposed on a stage 221. From the semiconductor wafer 201 irradiated with the electron ray, secondary electrons and backscattered electrons are emitted; the secondary electrons are diverted from the trajectory of the irradiating electron ray by an ExB deflector 207 so as to be detected by a secondary electron detector 209, while the backscattered electrons are detected by backscattered electron detectors 210 and 211, which are arranged in different directions from each other. The secondary electrons and backscattered electrons detected by the secondary electron detector 209 and the backscattered electron detectors 210 and 211 are converted into digital signals by A/D converters 212, 213 and 214, input to a processing and control part 215 and stored in an image memory 217, where image processing is performed by a CPU 216 depending on purposes. Although the embodiment including two detectors for the backscattered electron image is shown in
The processing and control part 215 in
The details of the measurement recipe will be described hereinafter. The measurement recipe is a file specifying image processing algorithms and processing parameters for performing evaluation, such as defect detection and pattern shape measurement, in the captured SEM images. The SEM obtains inspection results by processing the SEM images based on the measurement recipe. Specifically, the measurement recipe specifies a method of determining, for each part of the evaluation pattern, a length measurement value of the pattern shape, a pattern contour line, an image characteristic amount for evaluating the pattern, a deformation amount of the pattern shape, the normality or abnormality of the pattern shape based on the above described information, and the like. The presence or absence of electrical failures, or the degree of risk of electrical failures even if they have not yet occurred, can be quantitatively identified by observing changes in pattern shape or texture associated with changes in exposure conditions, the optical proximity effect (OPE), electromigration or the like, the presence or absence of sticking of foreign bodies originating from manufacturing devices or the like, or the position of the sticking (pattern deformation, disconnection or a short between wirings is caused depending on the position of the sticking), etc.
Further, the processing and control part 215 is connected to a processing terminal 218 (which includes input and output means such as a display, a keyboard and a mouse) and includes a graphical user interface (GUI) for displaying images or the like to the user and accepting inputs from the user. Reference numeral 221 denotes an XY stage, which moves the semiconductor wafer 201 to allow image capturing at any position on the semiconductor wafer. Changing the image capture position with the XY stage 221 is referred to as a stage shift, and changing the observation position by deflecting the electron ray with the deflector 206 or the like is referred to as an image shift. Generally, the stage shift provides a wider range of motion but a lower positioning accuracy of the image capture position; conversely, the image shift provides a narrower range of motion but a higher positioning accuracy of the image capture position.
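The trade-off between the stage shift and the image shift described above can be illustrated with a small sketch: prefer the higher-accuracy image shift whenever the required movement fits within its range, otherwise fall back to the stage shift. The 3.0 μm image-shift range and all names are assumed values for illustration, not taken from the specification.

```python
# Illustrative (assumed) maximum radius over which the image shift can move
# the field of vision; beyond this, only a stage shift can reach the target.
IMAGE_SHIFT_RANGE_UM = 3.0

def choose_shift(current_um, target_um):
    """Pick the field-of-vision movement method for a move between two
    positions given in micrometers, as (x, y) tuples."""
    dx = target_um[0] - current_um[0]
    dy = target_um[1] - current_um[1]
    dist = (dx ** 2 + dy ** 2) ** 0.5
    # Image shift: narrow range of motion, high positioning accuracy.
    # Stage shift: wide range of motion, lower positioning accuracy.
    return "image shift" if dist <= IMAGE_SHIFT_RANGE_UM else "stage shift"
```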
A recipe generating part 222 in
An image capture recipe is a file specifying an image capture sequence of the SEM. In other words, the image capture recipe specifies coordinates of an image capture region to be imaged as an evaluation target (referred to as an evaluation point (EP)), and an imaging procedure for imaging the EPs without a position deviation and with a high precision. A plurality of EPs may be present on one wafer and the EPs cover the entire wafer for inspection of the entire surface of the wafer.
Firstly, in step 401 of
The accuracy of the deviation amount determined by the matching may not be sufficient, because the image capture magnification of the OM image in step 402 is low. Therefore, in step 403, a SEM image is captured by irradiation of the electron ray 204 and alignment is performed with the SEM image. Although there is a risk of the pattern to be imaged being out of the FOV depending on the deviation amount of the wafer, because the FOV of the SEM is smaller than that of the optical microscope, an approximate deviation amount is known from step 402 and the irradiation position of the electron ray 204 is moved in consideration of that deviation amount. Specifically, firstly, in step 404, the image capture position of the SEM is moved to an autofocus pattern 423 for alignment pattern imaging and imaging is performed to determine a parameter of the autofocus adjustment. Then, the autofocus adjustment is performed based on the determined parameter. Next, in step 405, the image capture position of the SEM is moved to an alignment pattern 424 and imaging is performed. Then, by matching the previously prepared data (template) for matching in the alignment pattern 424 with the SEM image, a more accurate deviation amount of the wafer is calculated.
Alignments with the optical microscope and the SEM in steps 402, 403 are performed at a plurality of positions on the wafer and a large origin deviation of the wafer and rotation of the wafer are calculated based on the position deviation amount determined at the plurality of positions (global alignment). In
On completion of the alignment at the wafer level, in step 407, more accurate positioning (addressing) and image quality adjustment are performed for each evaluation pattern (EP) to capture an image of the EP. The addressing is performed in order to cancel the stage shift error which occurs when moving the field of vision to each EP. Specifically, firstly, the stage shift to the EP 433 is performed. In other words, the stage 221 is moved so that the vertical incidence position of the electron ray 204 is at the center of the EP. The vertical incidence position of the electron ray 204 is referred to as the Move coordinates (hereinafter, MP) and shown by a cross mark 426. Although an example wherein the MP is set at the center position of the EP will be described here, the MP may also be set on the periphery of the EP. Once the MP 426 is determined, the stage is no longer moved from there and a range 427 (dotted frame), in which the field of vision can be moved only with the image shift, is determined. Of course, in practice, there is a deviation by the amount of a stop error in the stage shift, even if the stage shift is performed to the MP. Next, in step 408, the image capture position of the SEM is moved to an autofocus pattern 428 for addressing pattern imaging (hereinafter, AF) with the image shift and imaging is performed. Then, a parameter of the autofocus adjustment is determined and the autofocus adjustment is performed based on the determined parameter. Next, in step 409, the image capture position of the SEM is moved to an addressing pattern 429 (hereinafter, AP) and imaging is performed. Then, by matching the previously prepared data (template) for matching in the AP 429 with the SEM image, a more accurate stage shift error is calculated. In subsequent image shifts, the field of vision is moved so as to cancel the calculated stage shift error.
Next, in step 410, the image capture position of the SEM is moved to an AF 430 for EP imaging with the image shift and imaging is performed. Then, a parameter of the autofocus adjustment is determined and the autofocus adjustment is performed based on the determined parameter. Next, in step 411, the image capture position of the SEM is moved to an automatic astigmatism correction pattern 431 (hereinafter, AST) with the image shift and imaging is performed. Then, a parameter of the automatic astigmatism correction is determined and the astigmatism correction is performed based on the determined parameter. The automatic astigmatism correction means that astigmatism is corrected so that the sectional shape of the focused electron ray becomes spot-like, in order to obtain an image having no distortion in SEM imaging. Next, in step 412, the image capture position of the SEM is moved to an auto brightness and contrast pattern 432 (hereinafter, ABCC) with the image shift and imaging is performed. Then, a parameter of the auto brightness and contrast is determined and the auto brightness and contrast adjustment is performed based on the determined parameter. The auto brightness and contrast adjustment means that, in order to obtain a sharp image having appropriate brightness and contrast in the EP imaging, a setting is made so that a full or almost full contrast is formed between the highest and lowest parts of the image signal, by adjusting parameters such as the voltage value of a photomultiplier in the secondary electron detector 209. Because the field of vision is moved to the AF for the AP, and to the AP, AF, AST and ABCC for the EP, with the image shift, these fields of vision are required to be set within the range 427 in which the image shift can be performed.
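The idea of the auto brightness and contrast adjustment can be illustrated as a linear remapping of the image signal so that its lowest and highest parts span the full output range. In the actual device the adjustment is made through detector parameters such as the photomultiplier voltage; the gain/offset computation below is an assumed analogy, and all names are illustrative.

```python
# Sketch of the ABCC idea (assumed analogy): compute a gain and offset so
# that the minimum of the signal maps to `lo` and the maximum maps to `hi`,
# forming a full contrast between the highest and lowest parts of the signal.

def abcc_parameters(signal, lo=0, hi=255):
    s_min, s_max = min(signal), max(signal)
    gain = (hi - lo) / float(s_max - s_min)
    offset = lo - s_min * gain
    return gain, offset

def apply_abcc(signal, gain, offset):
    """Apply the adjustment to each sample of the image signal."""
    return [round(v * gain + offset) for v in signal]
```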
After performing the addressing and the image adjustment in step 407, in step 413, the image capture position is moved to the EP with the image shift and imaging is performed.
On completion of imaging of all Nb EPs (step 414), in step 415, the wafer is removed from the SEM device.
Here, the alignment and the image adjustment in steps 404, 405 and 408 to 412 described above may be partly omitted or the order of the steps may be changed, depending on situations.
Here, in view of the problem of sticking (contamination) of contaminants on the sample caused by the electron ray irradiation, the adjustment points (AP, AF, AST, ABCC) are generally set so that their image capture regions do not overlap the EP. When one region is imaged twice, phenomena such as a darker image or variations in pattern line width appear more strongly in the second image due to contamination. Therefore, in order to preserve the pattern shape in an EP used for evaluation of the evaluation pattern, the various adjustments are performed with patterns around the EP and the EP is imaged with the adjusted parameters, so as to minimize the electron ray irradiation to the EP.
In this way, the image capture sequence includes the coordinates of the various image capture patterns (EP, AP, AF, AST, ABCC), their sizes (field of vision or image capture magnification), the image capture order (including the means of moving the field of vision to each image capture pattern (stage shift or image shift)) and the image capture conditions (probe current, acceleration voltage, scanning direction of the electron beam, or the like). The image capture sequence is specified by the image capture recipe. The data (template) for matching used for alignment or addressing is also registered in the image capture recipe. Further, the matching algorithms (an image processing method and image processing parameters) for alignment or addressing are also registered in the image capture recipe. The SEM captures images of the EPs based on the image capture recipe.
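As an illustrative sketch, the contents of such an image capture recipe might be organized as follows. The field names, values and file names are assumptions for illustration only and do not correspond to any actual recipe file format; what matters is that the coordinates, sizes, order, movement means, conditions and matching templates listed above travel together in one recipe.

```python
# Assumed data layout for an image capture recipe: one entry per image
# capture point, listed in imaging order, with the EP last so that all
# adjustments are applied before the evaluation image is captured.
recipe = {
    "image_capture_points": [
        {"type": "AF",   "center_um": (10.0, 12.0), "fov_um": 2.0, "move": "image shift"},
        {"type": "AP",   "center_um": (11.0, 12.5), "fov_um": 2.0, "move": "image shift",
         "template": "ap_template.bmp",   # registered matching template (assumed name)
         "matching": {"algorithm": "ncc", "threshold": 0.8}},
        {"type": "AST",  "center_um": (12.0, 11.0), "fov_um": 1.5, "move": "image shift"},
        {"type": "ABCC", "center_um": (12.5, 13.0), "fov_um": 1.5, "move": "image shift"},
        {"type": "EP",   "center_um": (11.5, 12.0), "fov_um": 1.0, "move": "image shift",
         "conditions": {"probe_current_pA": 8.0, "acceleration_voltage_V": 500,
                        "scan_direction": "x"}},
    ],
}

def image_capture_order(rec):
    """Return the types of the image capture points in imaging order."""
    return [p["type"] for p in rec["image_capture_points"]]
```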
a) shows how a circuit pattern 100 (an evaluation pattern) to be evaluated is determined among a plurality of circuit patterns formed on the wafer. Although circuit patterns other than the evaluation pattern are not shown, such circuit patterns are generally present around the evaluation pattern. Patterns selected as evaluation patterns include (1) a pattern in which an electrical failure occurs, found by a burn-in test or the like, (2) a pattern for which it is estimated by litho-simulation or the like that a fault is likely to occur, (3) a pattern which is a wiring important to the circuit and whose quality should be particularly carefully inspected, or the like. The evaluation pattern may be determined automatically or specified by a user, based on these criteria. In respect of a way of specifying the evaluation pattern by the user, as shown in
In order to efficiently inspect the circuit pattern, the position and shape of the evaluation pattern are recognized by determining the evaluation pattern as described above, and images are captured separately several times so that at least a part of the evaluation pattern is included in each field of vision. Here, the images can be efficiently captured by setting a permissible distance value between any adjacent first and second images.
The distance between adjacent images will be described. The distance between images may be a distance between the centers of adjacent images, a distance between the ends of adjacent images, or a length of the evaluation pattern included in an overlap region of the adjacent images or a space between the adjacent images. A plurality of definitions of the distance will be described in reference to
a) shows a case where the distance between two EPs 500, 501 is given as the distance between the centers 502, 503 of the respective image capture ranges. The distances Ax, Ay between the centers in the X and Y directions are denoted by reference numerals 504, 505, and the EPs are determined so that Ax, Ay, MAX(Ax, Ay), MIN(Ax, Ay), SQRT(Ax^2+Ay^2) or the like satisfies the permissible distance value. Here, MAX(a, b) and MIN(a, b) are the maximum and minimum values of a and b, respectively, and SQRT(a) is a function that returns the square root of a.
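The center-to-center distance definitions above can be sketched as follows; the function name and the dictionary layout are illustrative assumptions.

```python
import math

# Sketch of the center-distance definitions: Ax and Ay are the X and Y
# distances between the centers of two image capture ranges, and the
# combined measures MAX, MIN and SQRT(Ax^2 + Ay^2) are derived from them.

def center_distances(center1, center2):
    ax = abs(center2[0] - center1[0])
    ay = abs(center2[1] - center1[1])
    return {
        "Ax": ax,
        "Ay": ay,
        "MAX(Ax, Ay)": max(ax, ay),
        "MIN(Ax, Ay)": min(ax, ay),
        "SQRT(Ax^2 + Ay^2)": math.sqrt(ax ** 2 + ay ** 2),
    }
```

Any one of these measures (or a combination) can then be compared against the permissible distance value.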
b) shows a case where the distance between the EPs 506, 507 is given as a width of the overlap region of both image capture regions. Widths Bx, By of the overlap region in the X, Y directions are denoted by reference numerals 508, 509, respectively. Although
d) shows a case where the distance between the EPs 514, 515 is given as a length D (517) of the evaluation pattern 516 included in the overlap region of both image capture regions. Although
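Assuming each EP image capture region is an axis-aligned rectangle, the overlap widths Bx, By used in the definitions above can be computed as in this sketch; the rectangle representation as (xmin, ymin, xmax, ymax) tuples is an assumption.

```python
# Sketch of the overlap-width definition: Bx and By are the widths of the
# overlap region of two image capture regions in the X and Y directions.
# A non-positive width means the regions do not overlap along that axis.

def overlap_widths(ep1, ep2):
    """ep1, ep2: rectangles as (xmin, ymin, xmax, ymax)."""
    bx = min(ep1[2], ep2[2]) - max(ep1[0], ep2[0])
    by = min(ep1[3], ep2[3]) - max(ep1[1], ep2[1])
    return bx, by

def regions_overlap(ep1, ep2):
    bx, by = overlap_widths(ep1, ep2)
    return bx > 0 and by > 0
```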
In order to efficiently capture the image of the evaluation pattern, it is effective that the distance as shown in
The permissible value of the distance between adjacent images (the permissible distance value) will be described. The permissible distance value may be given as one value or as a region (minimum value, maximum value). The permissible distance value is roughly categorized into the following two, depending on a magnitude of the value.
(Condition 1) a permissible distance value with which the image capture regions of the adjacent first and second images overlap each other.
(Condition 2) a permissible distance value with which the image capture regions of the adjacent first and second images do not overlap each other.
b) is an example where the EPs are determined so that the distance between adjacent images satisfies the permissible distance value of condition 1, for the evaluation pattern (the part of the pattern 100 except the part 120). In this figure, nine EPs (referred to as EP1 to EP9, sequentially) shown by dotted frames 103 to 111 are arranged. The coordinates and number of the EPs are determined so that the distance between any adjacent EPs, which is represented by the distance 112 between the coordinates of the centers (shown by cross marks) of EP1 (103) and EP2 (104), satisfies the permissible distance value of condition 1, and there is an overlap region between the adjacent EPs. If the minimum value of the distance between adjacent images is set as the permissible distance value in the case of condition 1, the adjacent images cannot be closer than the minimum value. Therefore, the length of the evaluation pattern that is redundantly imaged is restricted to a certain degree and the evaluation pattern can be efficiently imaged. On the other hand, if the maximum value is set, there is always an overlap region between adjacent images. Therefore, any part of the evaluation pattern is likely to be included in one of the captured images, so that inspection omission can be avoided.
c) shows an example where the EPs are determined so that the distance between adjacent images satisfies the permissible distance value of condition 2, for the evaluation pattern (the part of the pattern 100 except the part 120). In this figure, six EPs (referred to as EP1 to EP6, sequentially) shown by dotted frames 113 to 118 are arranged. The coordinates and number of the EPs are determined so that the distance between any adjacent EPs, which is represented by the distance 119 between the coordinates of the centers (shown by cross marks) of EP1 (113) and EP2 (114), satisfies the permissible distance value of condition 2, and there is a space between the adjacent EPs. In the case of condition 2, since there is a space between adjacent images, there is a risk of inspection omission in a part of the evaluation pattern lying in a space where imaging is not performed. However, by setting the minimum and maximum values of the distance between adjacent images as the permissible distance value, the evaluation pattern can be sampled and inspected at a constant rate, so that there is no non-uniformity of inspection positions and the overall tendency of the quality can be recognized.
In the conditions 1 and 2, either or both of the maximum value and minimum value may be set.
Further, as an embodiment including both conditions 1 and 2, the permissible distance value may be given as a region (minimum value, maximum value) wherein the minimum value is the distance with which adjacent images overlap each other and the maximum value is the distance with which there is a space between the adjacent images.
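A minimal sketch of checking a permissible distance value given as a region (minimum value, maximum value), and of classifying the two conditions by comparing the center distance with the field-of-vision size, might look as follows. The one-dimensional comparison against a single square field-of-vision size is a simplifying assumption of this sketch.

```python
# Sketch: for square fields of vision of side `fov`, a center distance
# smaller than `fov` means the adjacent image capture regions overlap
# (condition 1); otherwise there is a space between them (condition 2).

def classify(center_distance, fov):
    return "condition 1 (overlap)" if center_distance < fov else "condition 2 (space)"

def satisfies(center_distance, minimum, maximum):
    """Check the permissible distance value given as (minimum, maximum)."""
    return minimum <= center_distance <= maximum
```

With a minimum below `fov` and a maximum above it, a single region covers the embodiment that includes both conditions.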
The positions of the plurality of evaluation points (EPs) are optimized so as to satisfy, as much as possible, the permissible distance value given in this way, and the evaluation pattern can be inspected based on the group of captured images of the EPs.
Variations of determination of the image capture sequence according to the present invention are roughly categorized into the following three modes:
(Mode 1) determining the image capture sequence before imaging, by previously recognizing the position and shape of the evaluation pattern using the design data or the like (referred to as an off-line determination mode).
(Mode 2) determining the image capture sequence based on captured images, in the course of repetition of imaging (referred to as an on-line determination mode).
(Mode 3) using both of the off-line determination mode and the on-line determination mode (referred to as a combined determination mode).
These three modes can be executed by switching among them with a GUI or the like. The details will be described hereinafter, sequentially.
Alternatively, in place of the design data, steps 600, 601 may be performed based on a low magnification image, obtained in advance, of a wide region including at least the evaluation pattern, captured with the scanning charged particle microscope or the optical microscope at a magnification lower than the image capture magnification of the EP (606). In order to avoid inspection omission, a high image resolution is required for the EP image used to inspect the evaluation pattern. On the other hand, when an image is used to recognize the evaluation pattern, a certain level of image resolution is sufficient. Further, the low magnification image generally has a wide field of vision and is convenient for recognition of the evaluation pattern.
Although the design data, the low magnification image or both of them may be used in steps 600, 601, a case using the design data will be particularly described in the following description. In step 600, the design data is displayed on a screen and the evaluation pattern can be specified, as shown
In step 601, the image capture sequence is determined with, as inputs, a permissible distance value between adjacent images (EPs) 607, a field of vision or image capture magnification of the EP 608 and a permissible image capture deviation amount of the EP 609, in addition to the layout information for the pattern obtained from the design data or the like. The field of vision or image capture magnification of the EP, which is one of the image capture conditions, can be given as a range, such as 1 μm to 2 μm. The image capture sequence is optimized in the computer so as to satisfy the constraint conditions on the distance between the EPs, the field of vision of the EP, the permissible image capture deviation amount of the EP and the like, and the image capture sequence can thus be determined automatically.
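One possible sketch of such an automatic determination is to drop EP centers along the evaluation pattern (modeled here as a polyline) at a fixed spacing chosen to satisfy the permissible distance value. The even-spacing strategy and the polyline model are assumptions of this sketch; the specification only requires that the constraint conditions be met.

```python
# Sketch: walk along a polyline (list of (x, y) vertices) and place an EP
# center every `spacing` units of arc length, starting from the first vertex.
# The spacing would be chosen from the permissible distance value and the
# field of vision of the EP.

def place_eps(polyline, spacing):
    eps = [polyline[0]]
    walked = 0.0  # arc length accumulated since the last placed EP
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        seg = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        pos = 0.0  # distance consumed along this segment
        while walked + (seg - pos) >= spacing:
            pos += spacing - walked
            t = pos / seg
            eps.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            walked = 0.0
        walked += seg - pos
    return eps
```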
An example of the image capture sequence will be described in reference to
Thus,
(a) It is required that the AP is previously determined. (b) There is not always an appropriate AP around the EP. (c) Throughput is decreased because it takes a considerable time to capture the image of the AP and estimate the image capture deviation.
Thus, the present invention is characterized in that the actual image capture position of an m-thly imaged EP among the EP group is estimated based on the image of the m-thly imaged EP, and the stage shift amount or image shift amount to the image capture position of an n-thly (n>m) imaged EP is adjusted based on the estimated actual image capture position. In other words, the image capture deviation amount which occurs at the EP is estimated from the EP image, and the stage shift amount or image shift amount to the next EP to be imaged is determined so as to cancel the image capture deviation amount. In this way, the accumulation of image capture deviation amounts with each repetition of moving the field of vision can be avoided. Further, it is not required to capture an image of an AP or the like only for addressing. The image capture deviation at the EP can be estimated by matching the actually captured EP image against the design data or the low magnification image at the predetermined image capture position of the EP. Furthermore, if the EP is determined according to an arrangement rule (for example, the evaluation pattern is at the center of the field of vision of the EP image) in the determination of the image capture sequence, the image capture deviation can be estimated by recognizing the position of the evaluation pattern in the actually captured EP image and detecting its deviation from the center of the image. In
The position and size of the EP can be also determined in consideration of such an image capture deviation.
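The deviation-cancelling shift described above can be sketched as follows, assuming the position of the evaluation pattern has already been detected in the m-th EP image and the arrangement rule places the pattern at the image center. The sign convention and all names are assumptions of this sketch.

```python
# Sketch of avoiding accumulated drift: the deviation of the m-th EP image
# is estimated as the detected pattern position minus the image center, and
# the planned shift to the n-th EP (n > m) is corrected by that amount so
# the deviation does not carry over.

def corrected_shift(planned_shift, detected_pattern_pos, image_center):
    dev = (detected_pattern_pos[0] - image_center[0],
           detected_pattern_pos[1] - image_center[1])
    return (planned_shift[0] - dev[0], planned_shift[1] - dev[1])
```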
In step 602 in
In step 603, an image is captured according to the image capture recipe (610) to obtain a captured image 611 in the EP. In step 604, inspection results 612 of the evaluation pattern are obtained by processing the captured image based on the measurement recipe. The inspection results include a part or all of a length measurement value of the pattern shape, a pattern contour line, an image characteristic amount for evaluating the pattern and a deformation amount of the pattern shape, for each part of the evaluation pattern. They also include the normality or abnormality of the pattern shape based on the above described information. Based on this information, the user can monitor the determination of defect positions or risk positions in the evaluation pattern, or the quality of the evaluation pattern. Here, if imaging of the EP is performed several times, the timings of the image capturing in step 603 and the evaluation pattern inspection in step 604 can be arbitrarily changed. In other words, the imaging and the inspection may be performed alternately in such a way that, immediately after imaging EP1, inspection of the evaluation pattern in EP1 is performed while the next EP2 is being imaged, or the inspection of the evaluation patterns in all EPs may be collectively performed after all EPs are imaged.
A processing in a case where a pattern is branched will be described in reference to
In comparison to this,
Further, a variation of the shape of the EP image capture region is shown in
According to the present invention, it is characterized in that a plurality of patterns that are electrically interconnected to each other are specified based on positions of contact holes and the plurality of patterns are set as the evaluation pattern. For example, it is characterized in that, in determining a problem position when finding an electrical fault such as a disconnection, not only is a circuit pattern represented as one closed figure inspected as the evaluation pattern, but patterns that are electrically interconnected to the circuit pattern are also included in the evaluation pattern and inspected. Here, for circuit patterns stacked in layers on the wafer, it is difficult to determine directly whether two patterns present in different layers are electrically interconnected. Thus, the interconnecting relationship is determined based on the positions of the contact holes that interconnect patterns in different layers. The positions of the contact holes can be determined from design data, captured images or the like.
a) shows a specific example of inspecting an electrical path which bridges two layers (referred to as an upper layer and a lower layer) in stack layers stacked in the Z-axis direction. In this figure, there are two upper layer patterns 900, 901 (shown hatched from top right to bottom left), two lower layer patterns 902, 903 (shown hatched from top left to bottom right) and two contact holes 904, 905 (shown as clear squares) electrically interconnecting the upper layers and the lower layers. These representations are drawn from the design data, for example. For this configuration, consider inspecting, as the evaluation pattern, the electrical path between a start point and an end point specified with mouse cursors 906, 907. For example, the upper layer pattern 900 and the lower layer pattern 902 appear to cross each other in the XY plane. However, because there is no contact hole there, the patterns are insulated and have no electrical interconnection to each other. On the other hand, the upper layer pattern 900 and the lower layer pattern 903 are connected via the contact hole 904 and have an electrical interconnection. In view of the above, the electrical path from the start point 906 to the end point 907 is identified by a thick arrow 930 and the parts of the patterns through which the thick arrow 930 passes are set as the evaluation pattern. When the permissible distance value is set so that EPs overlap each other to some degree and the EPs are determined with the Rectangular scan mode as the image capture mode of the EPs, nine EPs (EP1 (908) to EP9 (916)) for inspecting the evaluation pattern are arranged, for example.
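The interconnection determination described above can be illustrated with a small sketch (the data structures are hypothetical simplifications; real design data is far richer): each pattern is a node, a contact hole whose position falls inside the footprints of both an upper layer pattern and a lower layer pattern creates an edge, and the electrical path from the start point to the end point is found with a breadth-first search.

```python
from collections import deque

def electrically_connected(patterns, contacts, start, goal):
    """patterns: name -> (layer, (x0, y0, x1, y1) footprint box).
    contacts: list of (x, y) contact hole centers. Two patterns in
    different layers are treated as interconnected when some contact
    hole falls inside both footprints."""
    def inside(box, p):
        x0, y0, x1, y1 = box
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

    # Build the adjacency relation from the contact hole positions.
    adj = {name: set() for name in patterns}
    for c in contacts:
        hit = [n for n, (layer, box) in patterns.items() if inside(box, c)]
        for a in hit:
            for b in hit:
                if a != b and patterns[a][0] != patterns[b][0]:
                    adj[a].add(b)
                    adj[b].add(a)

    # Breadth-first search along the electrical path.
    seen, queue = {start}, deque([start])
    while queue:
        n = queue.popleft()
        if n == goal:
            return True
        for nb in adj[n] - seen:
            seen.add(nb)
            queue.append(nb)
    return False
```

With boxes mimicking the figure, a hole at a crossing of patterns 900 and 903 connects them, while the crossing of 900 and 902 without a hole stays insulated.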
According to the present invention, it is characterized in that an image capture region (EP) is determined in consideration of attribute information in each part of an evaluation pattern. The attribute information means information for determining a priority of inspection, such as a deformation property of the pattern. In other words, determination criteria of the EPs include constraint conditions on the position and shape of the evaluation pattern, the distance between the EPs, the field of vision of the EPs, the permissible image capture deviation amounts of the EPs or the like, as described above. However, in addition to these criteria, attribute information such as the deformation property of the pattern can be considered for the evaluation pattern in the EPs. The deformation property of the pattern can be predicted with litho-simulation of the circuit pattern shape provided in an EDA (Electronic Design Automation) tool, for example. Further, attribute information about the deformation property of the pattern can be calculated from the pattern shape by introducing knowledge about deformation of the pattern shape, such as “corners of the pattern are in danger of being rounded”, “an isolated pattern is in danger of being thinner” and “a line end is in danger of being retracted”. For example, an EP determination criterion given by the permissible distance value between adjacent EPs that is input in step 607 of
A specific example of determining EPs in consideration of attribute information will be described in reference to
An example of the above described knowledge about deformation of the pattern shape will now be described for the evaluation pattern 1101 in reference to
An example of determining EPs in consideration of attribute information will be shown in
As an embodiment of a case where information such as the design data cannot be used and the position and shape of the evaluation pattern cannot be previously recognized, it is characterized in that a position of the evaluation pattern outside a first image capture region is estimated based on a first image obtained by imaging the first image capture region and a second image capture region is set so that the estimated evaluation pattern is imaged. In other words, the evaluation pattern included in the captured image is recognized from the image and, if it is determined that the evaluation pattern continues beyond the image, a next image capture position is determined so that the evaluation pattern outside the image is included in the field of vision and imaging is performed. By repeating this process, images can be captured while tracking the evaluation pattern. Further, the image capture sequence determined here while imaging can be recorded and saved as the image capture recipe. Such a determination mode of the image capture sequence is referred to as an on-line determination mode.
Rectangular frames 1201 to 1204 denote processing contents and frames with rounded corners 1205 to 1210 denote information used for the processes. The on-line determination mode will be described hereinafter in reference to
In step 1202 with m=1, EP1 is imaged. A specific example is shown in
In determination of the evaluation pattern, a pattern in the EP1 image may be automatically recognized as the evaluation pattern or the evaluation pattern may be specified by the user among the patterns in the EP1 image. Although there is only one pattern in the EP1 image in this example, if a plurality of patterns are imaged, one or more of them may be selected as the evaluation pattern.
By processing the EP1 image based on a measurement recipe in step 1203 with m=1, inspection results 1210 of the evaluation pattern are obtained (similarly to step 604 and the inspection result 612 in the off-line determination mode). Here, if imaging of the EP is performed several times, timings of the image capturing in step 1202 and the evaluation pattern inspection in step 1203 can be arbitrarily changed, similarly to steps 603, 604 of the off-line determination mode. In other words, inspection of the evaluation pattern in the EP is performed for each EP imaging (an example thereof is shown in
In step 1204 with m=1, the evaluation pattern in the EP1 image is recognized and if it is determined that imaging of all regions of the evaluation pattern is completed, the processing is finished. In the EP1 image 1301 shown in
Thereafter, steps 1201 to 1204 are repeated until the entire evaluation pattern is imaged at an interval of the permissible distance value.
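The step of deriving the next image capture region from the current EP image can be sketched as follows (a purely geometric illustration with hypothetical arguments, not the device's actual tracking logic): if the recognized evaluation pattern reaches the image border at some exit point, the next EP center is placed at the permissible distance value along the direction toward that exit point; otherwise the pattern ends inside the image and tracking is complete.

```python
import math

def next_capture_center(center, fov, exit_point, step):
    """If the recognized evaluation pattern reaches the border of the
    current field of view (a square of side fov) at exit_point, place
    the next EP center at distance step from the current center along
    the direction toward the exit point; otherwise tracking is done."""
    dx, dy = exit_point[0] - center[0], exit_point[1] - center[1]
    if max(abs(dx), abs(dy)) < fov / 2:
        return None  # pattern ends inside the image: tracking complete
    norm = math.hypot(dx, dy)
    return (center[0] + step * dx / norm, center[1] + step * dy / norm)
```

Repeatedly feeding the returned center back in reproduces the tracking loop of steps 1201 to 1204 until `None` is returned.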
Now, several examples of a way of determining an (m+1)-th image capture region from an m-th EP will be further described.
A way of estimating EP3 (1303) from EP2 (1302) is shown in
The image capture sequence after EP7 (1307) will be described in
Here, the following processes (A) to (D) can be performed also in the on-line determination mode, in the same manner as in the off-line determination mode.
(A) As shown in
(B) If the evaluation pattern is branched in the imaged EP, a pattern to be tracked may be selectively specified as shown in
(C) As shown in
(D) As shown in
An embodiment using both the off-line determination mode and the on-line determination mode will be described. Such a determination mode of the image capture sequence is referred to as a combined determination mode. In this mode, firstly, the image capture sequence is determined off-line, with layout information for the pattern obtained from the design data or the like, according to the off-line determination mode. However, the design data or the like can differ from the actual pattern shape, so the off-line determination may not always be successful. In order to determine a correct image capture sequence in consideration of such a difference in shape, a simulated shape of the actual pattern estimated with litho-simulation or the like from the design data may be used as the layout information, but the precision of the estimation may not yet be sufficient. Thus, imaging is performed according to the image capture sequence that is determined off-line, and the image capture sequence is examined based on the currently captured image and changed as required.
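The combined determination mode can be sketched as follows (the interface is hypothetical: `observe(center)` stands in for imaging an EP and measuring the offset of the recognized pattern from its expected position): the off-line sequence is followed, and whenever the observed offset exceeds a tolerance, the correction is carried into the subsequent EP positions.

```python
def combine_sequence(offline_centers, observe, tolerance):
    """Follow the off-line image capture sequence, but after each
    capture compare the observed pattern position with the expected
    one; offsets beyond the tolerance are carried into later EPs."""
    corrected, carry = [], (0.0, 0.0)
    for cx, cy in offline_centers:
        target = (cx + carry[0], cy + carry[1])
        corrected.append(target)
        off = observe(target)
        if abs(off[0]) > tolerance or abs(off[1]) > tolerance:
            carry = (carry[0] + off[0], carry[1] + off[1])
    return corrected
```

If every observation reports a constant offset, later EP centers drift by the accumulated correction, which mirrors how the off-line sequence is changed as required during imaging.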
A specific example will be described in reference to
The embodiment of a system configuration according to the present invention will be described in reference to
In
b) shows how the components 1506, 1508, 1509, 1510 and 1512 to 1514 in
An example of a GUI for inputting various information, setting or displaying image capture recipe generation and output, and controlling the SEM device in the present invention is shown in
The window 1602 is a display for creating and confirming the image capture sequence. By selection with check boxes in a window 1605, design data, simulated shape of an actual pattern estimated from the design data with litho-simulation or the like, a circuit diagram or the like can be displayed in an overlapped manner. In this example in the figure, the design data is displayed. The user can specify an evaluation pattern on the window 1602 with a mouse, a keyboard or the like. Additionally, the image capture sequence determined with the off-line determination mode, the on-line determination mode and the combined determination mode described hereinafter can be displayed.
A window 1607 is a setting screen for determining the image capture sequence with the off-line determination mode. In determination of the image capture sequence, the layout information for the evaluation pattern or its surrounding patterns is required. Therefore, the information used as the layout information is specified in a window 1608. Options include design data or litho-simulation data, a low magnification image captured with a SEM or an optical microscope, etc. A window 1609 is a screen for specifying processing parameters for determining the image capture sequence. A permissible distance value, an EP size and a permissible image capture deviation, which are examples of the processing parameters, are specified in steps 1610, 1611 and 1612, respectively. Because there are a plurality of definitions of the distance between EPs as described in
A window 1621 is a screen for setting a method for imaging with a SEM. Image capturing can be performed based on an image capture recipe by selecting an “imaging method 1” with a radio button in an imaging method window 1622 and specifying the image capture recipe in a box 1623. If the image capture recipe created by pressing the button 1615 is specified in the box 1623, images can be captured in the image capture sequence according to the off-line determination mode. Further, by checking a check box 1624, images can be captured with the combined determination mode of the image capture sequence described in
A window 1616 is a screen for displaying captured images and can display the captured image of the EP group (for example, EP1 (1617)). The images of the adjustment points can also be displayed (not shown). By checking an item “performing alignment between images”, which is one of the items in a window 1620 for specifying the display method, the group of the EP images can be displayed so that the images are connected to each other at overlap regions. Although the layout information such as the design data and the captured images are displayed vertically in the respective windows 1602, 1616 in the display example in this figure, both windows may also be cascaded by switching from a radio button “tile vertically” to a radio button “cascade” in the window 1606 for specifying the display method. By cascading the windows, the difference in shape between the design data and the actual pattern can be clearly visualized, for example. Further, if the windows 1602, 1616 are “tiled vertically” as shown in the display example in this figure, by checking a check box “synchronize captured image and display position” in the window 1606, when a vertical or horizontal scroll bar in one window 1602 or 1616 is moved, the scroll bar in the other window is also moved synchronously so that a corresponding image can be displayed. Furthermore, in imaging with the on-line determination mode, captured images can be serially displayed in the window 1621 to accept, as required, designation of an evaluation pattern, designation of a tracked pattern in imaging of a branched pattern, designation of an image capture sequence including image capture regions of EPs or the like from the user, and they can be reflected in imaging.
Further, by checking a check box “defect candidate” in the window 1619, defect points or possible defect points in the evaluation pattern can be displayed, as shown in the frames 1604 or 1618. This display is based on the pattern evaluation results according to the measurement recipe. The evaluation pattern in the frame 1618 is significantly thin in comparison to the evaluation pattern in the frame 1604, which indicates to the user that a defect is likely to occur. Further, by checking a check box “pattern shape deformation estimation amount” in the window 1619, a vector indicating a difference from the design data at each point on the contour line of the evaluation pattern can be calculated and displayed.
A display variation of the pattern shape evaluation result in the window 1616 is shown in
With the foregoing measures, the present invention can effectively inspect a disconnection or a shape defect in the circuit pattern, which can cause an electrical fault, with an image capturing device. As a result, determination of the cause of a fault found in an electrical test or the like, or determination of a part affecting the process window due to deformation of the pattern shape or the like, even if an electrical fault does not occur, can be performed quickly. Further, the image capture recipe for this inspection can be created automatically and rapidly, so that a reduction in inspection preparation time (recipe creation time) and obviation of operator skills can be expected.
Thus, the present invention is characterized by the following contents, as described above.
(1) A method for evaluating an evaluation pattern with a group of images obtained by imaging a particular circuit pattern (an evaluation pattern) formed on a semiconductor wafer with a SEM in a plurality of actions while shifting an image capture position is characterized by including: an evaluation pattern determination step of determining the evaluation pattern among the circuit patterns; a permissible distance value specification step of specifying a permissible value of distance (a permissible distance value) between any adjacent first image and second image included in the group of images; an image capture region determination step of determining an image capture region of the group of images so that at least a part of the evaluation pattern is included in the image capture region and the adjacent images satisfy the permissible distance value; and an image capture step of capturing the group of images of the evaluation pattern by imaging the determined image capture region of the group of images. Supplemental explanation about this characterization will be given. In order to efficiently inspect the circuit pattern, not only is the field of vision enlarged, but the circuit pattern to be inspected is specified as an evaluation pattern and images are captured separately several times so that at least a part of the evaluation pattern is included in the field of vision. Here, the images can be efficiently captured by setting the permissible distance value between any adjacent first image and second image.
The permissible distance value may be given as one value or as a region (minimum value, maximum value). Further, the permissible distance value is roughly categorized into the following two cases, depending on the magnitude of the value:
(a) a permissible distance value with which the image capture regions of the adjacent first and second images overlap each other, and
(b) a permissible distance value with which the image capture regions of the adjacent first and second images do not overlap each other.
The distance between images may be a distance between the centers of adjacent images, a distance between the ends of adjacent images, or a length of the evaluation pattern included in the overlap region of the adjacent images (in the case of (a)) or in the space between the adjacent images (in the case of (b)), for example. Although any of these definitions of the distance between adjacent images may be employed, the following description assumes that the distance between the centers of the images is given.
Firstly, if the minimum value of the distance between adjacent images is set as the permissible distance value in the case of (a), the adjacent images cannot be closer than the minimum value. Therefore, the length of the evaluation pattern redundantly imaged is restricted to a certain degree and the evaluation pattern can be efficiently imaged. On the other hand, if the maximum value is set, there is at least an overlap region between adjacent images. Therefore, any part of the evaluation pattern is likely to be included in either of the captured images so that inspection omission can be avoided.
In the case of (b), since there is a space between adjacent images, there is a risk of inspection omission for a part of the evaluation pattern in the space where imaging is not performed. However, by setting the minimum value and maximum value of the distance between adjacent images as the permissible distance value, the evaluation pattern can be sampled and inspected at a constant rate so that there is no non-uniformity of inspection positions and the overall tendency of the quality can be recognized.
In (a) and (b), either or both of the maximum value and minimum value may be set.
Further, as an embodiment including both (a) and (b), the permissible distance value may be given as a region (minimum value, maximum value) wherein the minimum value is the distance with which adjacent images overlap each other and the maximum value is the distance with which there is a space between the adjacent images.
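The cases (a) and (b) and the (minimum value, maximum value) region can be checked with a small sketch (the helper is hypothetical; the distance is taken between EP centers, and a square field of view of side `fov` is assumed):

```python
def check_spacing(centers, fov, d_min=None, d_max=None):
    """For each pair of adjacent EP centers, report the center-to-center
    distance, whether it satisfies the permissible distance value given
    as (d_min, d_max) (either bound may be omitted), and whether the two
    square fields of view of side fov overlap (case (a)) or not (case (b))."""
    results = []
    for (x0, y0), (x1, y1) in zip(centers, centers[1:]):
        d = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        ok = (d_min is None or d >= d_min) and (d_max is None or d <= d_max)
        overlapping = max(abs(x1 - x0), abs(y1 - y0)) < fov
        results.append((d, ok, overlapping))
    return results
```

A minimum bound alone limits redundant imaging, a maximum bound alone guarantees either overlap or a bounded sampling interval, matching the discussion of (a) and (b) above.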
The positions of a plurality of evaluation points (EP) are optimized so as to satisfy as much as possible the permissible distance value given in this way and the evaluation pattern can be inspected based on the group of captured images of the EPs.
(2) In the image capture region determination step described in the item (1), it is characterized in that the image capture region of the evaluation pattern is determined based on the design data of the circuit pattern having at least the evaluation pattern.
In order to determine the image capture region (EP) so as to include the evaluation pattern, the position and shape of the evaluation pattern have to be recognized, firstly. As an embodiment for this purpose, the evaluation pattern is recognized using design data, which is layout information for the circuit pattern formed on the wafer. Further, it is characterized in that the image capture sequence is determined by using the design data. The image capture sequence includes at least the above described image capture positions of the EPs and, besides these, a part or all of image capture positions, image capture conditions, an image capture order, various adjustment methods or the like of the EPs and various adjustment points (AP, AF, AST, ABCC).
(3) In the image capture region determination step described in the item (1), it is characterized in that a low magnification image of a wide region including at least the evaluation pattern is previously obtained with the scanning charged particle microscope or the optical microscope, the low magnification image being captured with a lower magnification than the image capture magnification in the imaging step described in the item (1), and the image capture region of the evaluation pattern is determined based on the low magnification image. Similarly to (2), the low magnification image is used as an embodiment of recognizing the position and shape of the evaluation pattern. In order to avoid inspection omission, a high image resolution is required for the image used to inspect the evaluation pattern. On the other hand, when the image is used to recognize the evaluation pattern, a certain level of image resolution is sufficient. Further, the low magnification image generally has a wide field of vision and is convenient for recognition of the evaluation pattern. Further, it is characterized in that the image capture sequence is determined by using the low magnification image, in a similar manner to (2).
(4) In the imaging step described in the item (1), it is characterized in that the actual image capture position of an m-thly imaged image among the image group is estimated based on the m-thly imaged image and a stage shift amount or image shift amount to the image capture position of an n-thly (n>m) imaged image is adjusted based on the estimated actual image capture position.
Measures for changing the image capture position to any EP in the scanning charged particle microscope include a stage shift, which changes the irradiation position of the charged particles by moving the stage on which the wafer is mounted, and an image shift, which changes the irradiation position of the charged particles by changing their trajectory with a deflector. Both shifts have a limited positioning accuracy, so that an image capture deviation occurs. Generally, in order to reduce the image capture deviation in the EP, it is required to initially capture an image of a positioning pattern whose coordinates and template are provided, referred to as an addressing point (AP), in order to estimate the position deviation amount. However, such addressing has the following problems. (a) It is required that the AP is determined in advance. (b) There is not always a suitable AP around the EP. A suitable AP means an AP having a unique pattern shape from which the image capture deviation can be estimated. Further, in order to reduce sample damage by irradiation of the charged particles, it is generally required that the AP is selected from a region not overlapping the EP. (c) Throughput is decreased because it takes a considerable time to capture the image of the AP and estimate the image capture deviation. In particular, the number of AP imaging actions is large because a plurality of EPs are imaged in the present invention. In order to solve this problem, AP imaging is obviated, or the number of AP imaging actions is reduced, by making use of the imaging of a plurality of EPs in the present invention. In other words, the image capture deviation amount which occurs in the EP is estimated from the EP image and the stage shift amount or image shift amount to the next EP to be imaged is determined so as to cancel the image capture deviation amount.
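The cancellation described above can be sketched as follows (a minimal illustration with hypothetical coordinate tuples, not the device's actual control interface): the actual position of the current EP is its commanded position plus the deviation estimated from the EP image, and the shift to the next EP is taken from that actual position rather than the commanded one.

```python
def corrected_shift(next_center, commanded_center, estimated_deviation):
    """The actual position of the current EP is its commanded position
    plus the deviation estimated from the EP image; the shift to the
    next EP is computed from that actual position, which cancels the
    deviation instead of letting it accumulate."""
    actual_x = commanded_center[0] + estimated_deviation[0]
    actual_y = commanded_center[1] + estimated_deviation[1]
    return (next_center[0] - actual_x, next_center[1] - actual_y)
```

With a zero estimated deviation this reduces to the nominal shift, so each EP image refines the next move instead of letting errors pile up across field-of-view moves.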
In this way, the accumulation of the image capture deviation amounts with each repetition of moving the field of vision can be avoided. Further, it is not required to capture the image of the AP or the like only for addressing.
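The choice between the two measures for moving the field of vision, the stage shift and the image shift, can be sketched as follows (the range limit value and the interface are hypothetical assumptions): an image shift deflects the beam and is fast but has a small range, while a stage shift moves the wafer and covers any distance.

```python
def choose_shift(current, target, image_shift_limit):
    """Select the field-of-view move: use the fast image shift whenever
    the move fits within its deflection range, otherwise fall back to
    a stage shift, which can cover any distance."""
    dx, dy = target[0] - current[0], target[1] - current[1]
    if max(abs(dx), abs(dy)) <= image_shift_limit:
        return ("image_shift", (dx, dy))
    return ("stage_shift", (dx, dy))
```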
(5) In the image capture region determination step described in the item (1), it is characterized in that the image capture sequence for imaging the image capture region with the scanning charged particle microscope is determined and saved as the image capture recipe.
The image capture recipe is a file specifying the image capture sequence for imaging the EPs with high precision and without a position deviation, and the scanning charged particle microscope operates based on the image capture recipe. Once the image capture recipe is created, wafers having the same circuit pattern can be automatically inspected any number of times. Further, by sharing the recipe among a plurality of scanning charged particle microscopes, a plurality of wafers can be inspected in parallel. Further, for similar wafers, an image capture recipe can be created in a short time by slightly modifying the above described image capture recipe.
(6) In the evaluation pattern determination step described in the item (1), it is characterized in that a plurality of patterns that are electrically interconnected to each other are specified based on positions of contact holes and the plurality of patterns are set as the evaluation pattern.
For example, it is characterized in that, in determining a problem position when finding an electrical fault such as a disconnection, not only is a circuit pattern represented as one closed figure inspected as the evaluation pattern, but patterns that are electrically interconnected to the circuit pattern are also included in the evaluation pattern and inspected. Here, for circuit patterns stacked in layers on the wafer, it is difficult to determine directly whether two patterns present in different layers are electrically interconnected. Thus, the interconnecting relationship is determined based on the positions of the contact holes that interconnect patterns in different layers. The positions of the contact holes can be determined from design data, captured images or the like.
(7) In the image capture region determination step described in the item (1), it is characterized in that the image capture region is determined in consideration of attribute information in each part of the evaluation pattern. The attribute information is information determining a priority of inspection, such as a deformation property of the pattern. In other words, in addition to the EP determination criterion in which the distance between adjacent EPs satisfies the specified permissible distance value as described in the item (1), attribute information such as the deformation property of the pattern can be considered for the evaluation pattern in the EPs. The deformation property of the pattern can be predicted with litho-simulation of the circuit pattern shape provided in an EDA (Electronic Design Automation) tool, for example. Further, attribute information about the deformation property of the pattern can be calculated from the pattern shape by introducing knowledge about deformation of the pattern shape, such as “corners of the pattern are in danger of being rounded”, “an isolated pattern is in danger of being thinner” and “a line end is in danger of being retracted”.
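One way to turn the quoted deformation knowledge into attribute information can be sketched as follows (the part names and weights are illustrative assumptions only, not values from the invention): each part type of the evaluation pattern receives a risk score, and high-risk parts get a smaller effective spacing so that EPs are placed more densely there.

```python
# Hypothetical rule table turning the quoted deformation knowledge into
# inspection priorities; the part names and weights are illustrative only.
RISK_RULES = {
    "corner": 3,         # corners are in danger of being rounded
    "isolated_line": 2,  # an isolated pattern is in danger of being thinner
    "line_end": 3,       # a line end is in danger of being retracted
    "straight_run": 1,   # long straight runs deform least
}

def part_priority(part_type):
    """Inspection priority of one part of the evaluation pattern."""
    return RISK_RULES.get(part_type, 1)

def spacing_for(part_type, base_spacing):
    """Shrink the permissible distance value for high-risk parts so
    that EPs are placed more densely there."""
    return base_spacing / part_priority(part_type)
```

This complements the distance-based criterion: the spacing rule keeps coverage uniform, while the attribute rule concentrates EPs where a defect is most likely.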
The EP determination criterion wherein the distance between adjacent EPs satisfies the specified permissible distance value described in the item (1) has viewpoints of avoiding redundant imaging with too much overlap, avoiding non-uniformity of image capture points, and the like. On the other hand, an EP determination criterion based on attribute information such as the deformation property of the evaluation pattern described in the item (7) has a viewpoint of preferentially imaging a place where a defect is likely to occur. In EP determination, either or both of these criteria may be used.
(8) In the image capture region determination step and imaging step described in the item (1), it is characterized in that a position of the evaluation pattern outside a first image capture region is estimated based on a first image obtained by imaging the first image capture region and a second image capture region is set so that the estimated evaluation pattern is imaged.
As an embodiment of a case where information such as the design data cannot be used and the position and shape of the evaluation pattern cannot be previously recognized, the evaluation pattern included in the captured image is recognized from the image and, if it is determined that the evaluation pattern continues beyond the image, a next image capture position is determined so that the evaluation pattern outside the image is included in the field of vision and imaging is performed. By repeating this process, images can be captured while tracking the evaluation pattern. Further, the image capture sequence determined here while imaging can be recorded and saved as the image capture recipe.
The determination modes of the image capture sequence are roughly categorized into three modes: an off-line determination mode determining the image capture sequence before imaging, by previously recognizing the position and shape of the evaluation pattern using the design data or the like, as described in the items (2) and (3); an on-line determination mode determining the image capture sequence based on captured images, in the course of the repetition of imaging, as described in the item (8); and a combined determination mode using both the off-line determination mode and the on-line determination mode. Supplemental explanation about the last mode, i.e. the combined determination mode, will be given. In this mode, firstly, the image capture sequence is determined off-line with the design data or the like according to the off-line determination mode. However, the design data or the like can differ from the actual pattern shape, so the off-line determination may not always be successful. Thus, imaging is performed according to the image capture sequence that is determined off-line, and the image capture sequence is examined based on the currently captured image and changed as required. These three modes can be executed by switching among them with a GUI or the like.
According to the present invention, determination of a cause of a fault found in an electrical test or the like, or determination of a part affecting a process window due to deformation of the pattern shape or the like, even if an electrical fault does not occur, can be quickly performed. Further, the image capture recipe for this inspection can be automatically and rapidly created and reduction in inspection preparation time (recipe creation time) and obviation of operator skills can be expected. It should be noted that the present invention is not limited to the above described embodiments, but includes various variations. For example, the above described embodiments have been described in detail for purposes of clear explanation of the present invention and the present invention is not limited to embodiments including all components described above. Further, a part of a configuration of an embodiment can be replaced by a configuration of another embodiment and it is also possible to add a configuration of other embodiments to a configuration of an embodiment. Further, for a part of a configuration of each embodiment, addition, deletion or substitution of other configurations is possible.
Additionally, a part or all of the configurations, features, processing parts, processing means or the like described above can be realized in hardware, e.g. designed with integrated circuits. Further, the configurations, features or the like described above may be realized in software, in such a manner that a processor interprets and runs a program realizing each feature. Information such as programs, tables and files realizing each feature can be stored in a recording device such as a memory, a hard disk or an SSD (Solid State Drive), or on a recording medium such as an IC card, an SD card or a DVD.
Further, only the control lines and information lines considered necessary for explanation are shown; not all control lines and information lines of an actual product are necessarily shown. In practice, almost all configurations may be considered to be connected to each other.
100 . . . circuit pattern, 101, 102 . . . mouse cursor, 103-111, 113-118 . . . image capture range of evaluation point (EP), 112, 119 . . . distance between EPs, 120 . . . a part of the pattern 100, 200 . . . x-y-z coordinates (coordinates of electro-optical system), 201 . . . semiconductor wafer, 202 . . . electro-optical system, 203 . . . electron gun, 204 . . . electron ray (primary electron), 205 . . . condenser lens, 206 . . . deflector, 207 . . . ExB deflector, 208 . . . objective lens, 209 . . . secondary electron detector, 210, 211 . . . backscattered electron detector, 212-214, 215 . . . processing and control part, 216 . . . CPU, 217 . . . image memory, 218, 225 . . . processing terminal, 219 . . . stage controller, 220 . . . deflector control part, 221 . . . stage, 222 . . . recipe creating part, 223 . . . image capture recipe generating device, 224 . . . measurement recipe generating device, 226 . . . database server, 227 . . . database (storage), 301-306 . . . incident direction of focused electron ray, 307 . . . surface of sample, 308 . . . Ix-Iy coordinates (image coordinates), 309 . . . image, 416 . . . wafer, 417-420 . . . chip to be aligned, 421 . . . chip, 422 . . . image capture range of OM alignment pattern, 423 . . . image capture range of autofocus pattern for SEM alignment pattern imaging, 424 . . . image capture range of SEM alignment pattern, 425 . . . partly enlarged range of design data, 426 . . . MP, 427 . . . range in which image shift can be performed from MP, 428 . . . AF, 429 . . . AP, 430 . . . AF, 431 . . . AST, 432 . . . ABCC, 433 . . . EP, 500, 501, 506, 507, 510, 511, 514, 515, 518, 519 . . . EP, 502, 503 . . . center of EP, 504, 505, 508, 509, 512, 513, 517, 521, 522 . . . distance between EPs, 516, 520 . . . evaluation pattern, 700, 747 . . . evaluation pattern, 701-704, 722-725, 748-750 . . . EP (setting point), 705-708, 727-730, 731 . . . maximum image capture deviation range, 709-712, 732-735, 736 . . . maximum image capture deviation amount in x direction, 713-716, 737-740, 741 . . . maximum image capture deviation amount in y direction, 717-720, 742-745 . . . actually imaged EP position, 726 . . . AP (setting point), 746 . . . actually imaged AP position, 800 . . . pattern, 801, 802, 821 . . . mouse cursor, 803, 804 . . . a part of pattern 800, 805-819 . . . EP, 900, 901, 917, 918 . . . upper layer pattern, 902, 903, 919, 920 . . . lower layer pattern, 904, 905 . . . contact hole, 906, 907 . . . mouse cursor, 908-916, 921-929 . . . EP, 930 . . . electrical path between mouse cursor points 906, 907, 1001-1014 . . . EP, 1015, 1016 . . . electrical path between mouse cursor points 906, 907, 1100-1102 . . . pattern, 1103-1108 . . . a part of pattern 1101, 1109-1116, 1124-1130 . . . EP, 1117-1123, 1131-1136 . . . distance between EPs, 1300, 1310, 1312, 1315 . . . pattern, 1301-1308 . . . EP, 1309 . . . distance between EPs, 1314, 1317 . . . direction in which expected pattern continues, 1313, 1316 . . . motion vector between EPs, 1401, 1402 . . . pattern, 1403-1410, 1414-1421 . . . EP, 1411, 1422 . . . AP, 1423-1425 . . . direction in which expected pattern continues, 1501 . . . mask pattern design device, 1502 . . . mask drawing device, 1503 . . . exposing and developing device, 1504 . . . etching device, 1505, 1507 . . . SEM device, 1506, 1508 . . . SEM controlling device, 1509 . . . EDA tool server, 1510 . . . database server, 1511 . . . database, 1512 . . . image capture and measurement recipe creating arithmetic device, 1513 . . . image capture and measurement recipe server, 1514 . . . image processing server (shape measurement and evaluation), 1515 . . . network, 1516 . . . integration server and arithmetic device for EDA tool, database management, image capture and measurement recipe creation, image processing (shape measurement and evaluation), image capture and measurement recipe management, and SEM control, 1601 . . . GUI window, 1602 . . . pattern layout and image capture sequence display window, 1603, 1617 . . . EP, 1604, 1618 . . . evaluation pattern risk position, 1605, 1619 . . . display data selection window, 1606, 1620 . . . display method selection window, 1607 . . . off-line determination mode setting window, 1608 . . . processing data selection window, 1609, 1625 . . . processing parameter setting window, 1610 . . . permissible distance value setting window, 1611 . . . EP size setting box, 1612 . . . permissible image capture deviation amount setting box, 1613 . . . image capture sequence optimization executing button, 1614 . . . image capture sequence confirming button, 1615, 1627 . . . image capture recipe saving button, 1616 . . . captured image display window, 1621 . . . image capture control setting window, 1622 . . . image capture method setting window, 1623 . . . image capture recipe specification box, 1624 . . . combined determination mode selection check box, 1626 . . . image capture starting button, 1700 . . . evaluation pattern in which pattern normality and abnormality are displayed in gray scale, 1701 . . . gauge of gray scale value, 1702 . . . risk position
Number | Date | Country | Kind |
---|---|---|---|
2012-122638 | May 2012 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2013/061966 | 4/24/2013 | WO | 00 |