This application is the U.S. National Phase under 35 U.S.C. §371 of International Application No. PCT/JP2011/006129, filed on Nov. 2, 2011, which in turn claims the benefit of Japanese Application No. 2010-289105, filed on Dec. 27, 2010, the disclosures of which Applications are incorporated by reference herein.
The present invention relates to an inspection apparatus and an inspection method which are used for inspecting a substrate for a defect. Typically, the present invention relates to a defect inspecting apparatus for inspecting a sample on which a pattern has been created. An example of such a sample is a wafer used for manufacturing a semiconductor device. In particular, the present invention relates to an optical defect inspecting apparatus.
In a process of manufacturing a semiconductor device, operations are carried out a number of times. The operations include film formation making use of sputtering and/or chemical vapor deposition, flattening making use of chemical/mechanical polishing and patterning making use of lithography and/or etching. In order to sustain a high yield of the semiconductor device, the wafer is pulled out from the manufacturing process and inspected for a defect.
The defect existing on the surface of the wafer is a foreign substance, a bulge, a scratch or a pattern defect (such as a short, an opening or a hole aperture defect).
A first objective of the inspection for a defect is management of conditions of the manufacturing apparatus whereas a second objective thereof is identification of a process generating the defect and a cause of the defect. With the semiconductor device becoming finer and finer, the defect inspecting apparatus is required to have a high detection sensitivity.
On the wafer, several hundreds of devices (each referred to as a chip) having the same pattern are created. In addition, in a device such as a memory, a large number of cells having repetitive patterns are created. The defect inspecting apparatus adopts a method of comparing images of adjacent chips or images of adjacent cells with each other.
An optical defect inspecting apparatus for taking an image of a wafer by radiating light to the wafer has a high throughput in comparison with a defect inspecting apparatus of another type. Thus, a large number of optical defect inspecting apparatus are used for inline inspection. An example of the defect inspecting apparatus of another type is a defect inspecting apparatus radiating an electron beam or the like to a wafer.
The conventional optical defect inspecting apparatus is described in patent reference 1 which is JP-A-2005-521064. In the conventional optical defect inspecting apparatus described in patent reference 1, a plurality of movement lenses generate a plurality of spot beams from a beam generated by a laser-beam source and the spot beams are then radiated to a wafer. While the spot beams are being used for scanning lines, detectors for the spot beams are moved in parallel to give a high throughput in comparison with a defect inspecting apparatus making use of a single spot beam.
Other technologies are described in patent references 2 to 4.
With the semiconductor device becoming finer and finer, serious problems arise: the optical defect inspecting apparatus is required to have a higher detection sensitivity, and the optical system employed in the apparatus is required to provide a higher S/N ratio.
Since the strength of a signal indicating a fatal defect decreases with the semiconductor device becoming finer and finer, in order to give a high S/N ratio, it is necessary to reduce noise caused by light scattered by the wafer. Pattern edge roughness and surface roughness, which each serve as a scattered light source, are spread over the entire wafer. The present inventors have discovered that contraction of the illuminated area is an effective technique for reducing such noise. That is to say, if the illuminated area has a spot shape for example, reduction of the dimensions of the spot beam is an effective technique for decreasing such noise.
In accordance with the technology described in patent reference 1, a movement lens is created by making use of an acousto-optic device. A refraction distribution is generated by temporally and spatially controlling the propagation of an acoustic wave inside the medium. Since aberration remains, however, there is a limit to the reduction of the dimensions of the spot beam. In addition, there is also a limit to the scanning speed of the spot beam, so that it is difficult to further increase the throughput.
It is thus an object of the present invention to provide a high-sensitivity and high-throughput defect inspecting apparatus making use of a small-size spot beam to serve as an apparatus for semiconductor devices which become finer and finer.
The present invention typically has characteristics described as follows.
The present invention is characterized in that the present invention has a temporal and spatial division optical system for creating a plurality of temporally and spatially divided illuminated areas on a sample. In this case, the technical term “illuminated area” is used to express the area of a spot illumination, a line illumination or a fine-line illumination obtained by squeezing a line illumination. As an alternative, the technical term “illuminated area” can also be used to express the area of an illumination obtained by making a spot illumination or a line illumination small. In addition, the technical term “temporal division” is typically used to express creation of a plurality of illuminated areas at different times on an object of the inspection. On the other hand, the technical term “spatial division” is typically used to express creation of a plurality of illuminated areas separated from each other on an object of the inspection. The present invention is characterized in that the present invention controls at least one of the temporal division and the spatial division.
The present invention is characterized in that illuminated areas are discretely created at different times on a sample and, on a detector side, the illuminated areas are detected as a continuous signal.
The present invention is characterized in that an illumination optical system thereof arranges the illuminated areas along a single line on the sample.
The present invention is characterized in that a temporal and spatial division optical system comprises: a pulse-beam generating unit for generating a pulse beam; a temporal division unit for dividing the pulse beam and providing a temporal difference; a spatial division unit for dividing the pulse beam and providing a spatial difference; and an integration unit for radiating the pulse beam temporally divided by the temporal division unit and spatially divided by the spatial division unit to the sample as a plurality of illuminated spots.
The present invention is characterized in that at least one of the number of the illuminated areas, dimensions of the illuminated areas and distances between the illuminated areas can be changed.
The present invention is characterized in that the present invention includes a scanning section for scanning a sample in a direction perpendicular to the line.
The present invention is characterized in that a detection optical system thereof is a detection optical system of a dark visual field type.
The present invention is characterized in that an illumination optical system thereof creates a plurality of the illuminated spots on the sample from a direction perpendicular to the sample.
The present invention is characterized in that an illumination optical system thereof creates a plurality of the illuminated spots on the sample from a slanting direction inclined with respect to the sample.
The present invention is characterized in that the present invention includes a plurality of detection optical systems and a plurality of image sensors and each of the detection optical systems and each of the image sensors are used for taking an image.
The present invention is characterized in that the present invention carries out processing to combine a plurality of taken images.
The present invention is characterized in that a detection optical system thereof is a detection optical system of a bright visual field type.
The present invention is characterized in that a defect inspecting apparatus according to the present invention is a defect inspecting apparatus for inspecting a sample on which wires have been created; and the defect inspecting apparatus has a processing section for sampling a detection result from a sensor at a frequency computed from pitches of the wires.
The present invention is characterized in that: the sensor is a sensor having at least one pixel; and the present invention has a control section for changing a start time at which an image taking operation is started and an end time at which the image taking operation is ended, within a period corresponding to the size of one pixel of the sensor.
The present invention is characterized in that the present invention has a spatial division optical system for creating a plurality of illuminated spots separated from each other along a plurality of lines parallel to each other on the sample.
The present invention is characterized in that the present invention has: a mask on which a plurality of apertures are laid out; and a projection optical system for projecting the image of the apertures on the sample.
The present invention is characterized in that the illumination optical system has: an array-formed light source on which a plurality of light emitting devices are laid out; and a projection optical system for projecting the image of the light emitting devices on the sample.
The present invention exhibits typical effects described below. The effects may be exhibited independently of each other or exhibited as a combination of the effects.
The present invention is explained by referring to the figures as follows. It is to be noted that what are disclosed in embodiments described below can be implemented independently of each other or implemented by combining them.
First Embodiment
A first embodiment of the present invention implements a dark visual field defect inspecting apparatus for inspecting a semiconductor wafer by temporal and spatial division illumination.
A rough configuration of the first embodiment is shown in
A beam emitted by the light source 1 is reflected by a mirror 2 and propagates to the temporal/spatial-division optical system 3. In the temporal/spatial-division optical system 3, the beam is adjusted to a beam having a predetermined shape, a predetermined polarization and a predetermined power. The temporal/spatial-division optical system 3 also divides the beam temporally as well as spatially in order to emit a plurality of beams. The temporal and spatial division of the beam will be described later in detail.
The beams emitted by the temporal/spatial-division optical system 3 are each converged by the illumination optical system 4 into a spot shape. The spot-shaped beams are radiated to different locations on the wafer 5 in a direction perpendicular to the wafer 5. The radiation positions of the spot beams are on a line on the wafer 5. The line is parallel to the Y axis.
Light scattered by the wafer 5 is converged by the detection optical system 6. The direction of the optical axis of the detection optical system 6 is inclined with respect to the direction perpendicular to the wafer 5, forming a predetermined angle in conjunction with the perpendicular direction. Since regularly reflected light is emitted to the outside of an aperture of the detection optical system 6, a dark visual field image at a plurality of spot-beam positions creates an image on the image sensor 7.
An A/D converter (shown in none of the figures) converts the image into a digital signal which is supplied to the image processing system 8. Concurrently with the operations described above, the stage 9 is moved in the direction of the X axis for a scanning purpose.
The image processing system 8 is used for storing a reference image taken from a chip which is adjacent to the inspected chip and has the same pattern as the inspected chip. The image processing system 8 outputs a difference image between an inspected image taken from the inspected chip and the reference image after the image processing system 8 has carried out processing such as position collation for the inspected and reference images. The image processing system 8 detects a defect by comparing the luminance of the difference image with a threshold value set in advance.
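The comparison carried out by the image processing system 8 amounts to aligning the two images, taking their difference and applying the threshold. The following is a minimal sketch of that flow in Python, assuming the inspected and reference images are already available as digitized arrays; the function name, the integer-offset alignment and the example values are illustrative assumptions rather than the actual processing of the apparatus.

```python
import numpy as np

def detect_defects(inspected, reference, offset, threshold):
    """Compare an inspected-chip image with the stored reference image.

    inspected, reference : 2-D arrays (digitized sensor output)
    offset               : (dy, dx) integer result of the position collation
    threshold            : luminance threshold set in advance
    """
    # Position collation: bring the reference image onto the inspected image.
    aligned_ref = np.roll(reference, shift=offset, axis=(0, 1))

    # Difference image between the inspected image and the reference image.
    diff = np.abs(inspected.astype(float) - aligned_ref.astype(float))

    # A defect is reported wherever the difference luminance exceeds the threshold.
    ys, xs = np.nonzero(diff > threshold)
    return list(zip(ys.tolist(), xs.tolist()))
```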
The coordinates of the position of the defect are supplied to the control system 10 and displayed on the operation system 11 at an inspection end determined in advance.
Next, the temporal and spatial division processing of a beam is explained in detail by referring to
Beams are parallel beams.
To begin with, the temporal division is explained. The input beam is a pulse beam of linearly polarized light. First of all, the temporal-division unit 12a divides the input beam into beams L1 and L1′ at a strength ratio of 1:3. The temporal-division unit 12b divides the beam L1′ into beams L2 and L2′ at a strength ratio of 1:2. The temporal-division unit 12c divides the beam L2′ into beams L3 and L4 at a strength ratio of 1:1.
Since the beams L1, L2, L3 and L4 travel optical paths of different lengths before reaching the spatial-division units, time differences corresponding to the optical-path-length differences are generated between the output pulses. That is to say, by providing a mechanism (which can be mechanical or optical) for changing the lengths of the optical paths along which the beams L1, L2, L3 and L4 propagate to the spatial-division units, the division interval of the temporal division can be changed. In addition, in order to obtain a required optical-path difference for example, an optical fiber having a proper length can be provided between the temporal-division units.
In addition, the beams L1, L2, L3 and L4 have strengths equal to each other. As the temporal-division unit, it is possible to make use of typically a ½ wavelength plate and a polarized-light beam splitter. By setting the optical axis of the ½ wavelength plate in a direction determined in advance with respect to the polarized-light beam splitter, the beam divisions described above can be carried out.
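The split ratios quoted above (1:3, 1:2 and 1:1) are exactly the ratios needed to end up with four pulses of equal strength, and each pulse is delayed by the extra optical path it travels (Δt = ΔL/c). The short sketch below, written in Python with illustrative path-length differences, merely verifies this arithmetic; it is not part of the apparatus.

```python
C = 299_792_458.0  # speed of light in vacuum [m/s]

def temporal_division(input_power, extra_path_m):
    """Cascade of the temporal-division units 12a, 12b and 12c.

    extra_path_m : extra optical path travelled by beams L1..L4 before the
                   spatial-division units (illustrative values, in metres).
    Returns (power, delay in seconds) for each of L1..L4.
    """
    # Unit 12a splits the input at 1:3 -> L1 carries 1/4, L1' carries 3/4.
    L1, L1p = input_power * 1 / 4, input_power * 3 / 4
    # Unit 12b splits L1' at 1:2 -> L2 carries 1/4, L2' carries 1/2.
    L2, L2p = L1p * 1 / 3, L1p * 2 / 3
    # Unit 12c splits L2' at 1:1 -> L3 and L4 each carry 1/4.
    L3, L4 = L2p / 2, L2p / 2
    powers = [L1, L2, L3, L4]                 # all equal: each is 1/4 of the input
    delays = [dL / C for dL in extra_path_m]  # time difference from path difference
    return list(zip(powers, delays))

# Example: successive path differences of 0, 0.3, 0.6 and 0.9 m give ~1 ns steps.
print(temporal_division(1.0, [0.0, 0.3, 0.6, 0.9]))
```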
Next, the spatial division is described as follows. The spatial-division unit 13a divides the beam L1 into beams L11 and L12 having propagation directions different from each other at a strength ratio of 1:1.
By the same token, the beam L2 is divided into beams L21 and L22 whereas the beam L3 is divided into beams L31 and L32. In the same way, the beam L4 is divided into beams L41 and L42. As the spatial-division unit, it is possible to make use of a Wollaston prism and a ½ wavelength plate, a diffraction grating, an acousto-optical device or the like.
As an example, one spatial-division unit may comprise a plurality of Wollaston prisms having optical characteristics different from each other, a plurality of ½ wavelength plates or a plurality of diffraction gratings. In this case, the division interval of the spatial division can be changed. In addition, if the acousto-optical device is used, the division interval of the spatial division can be changed by carrying out control to vary the driving signal of the acousto-optical device.
Next, integration of beams is described as follows. The beams L41, L42, L31 and L32 are supplied to the integration unit 14c which outputs the beams to the integration unit 14b. The beams L41, L42, L31, L32, L21 and L22 are supplied to the integration unit 14b which outputs the beams to the integration unit 14a. The beams L41, L42, L31, L32, L21, L22, L11 and L12 are supplied to the integration unit 14a which outputs the eight beams having propagation directions different from each other at four pulse time differences. These pulse time differences each correspond to the sum of the optical-path differences at the temporal-division unit and at the integration unit. As the integration unit, it is possible to make use of typically a ½ wavelength plate and a polarized-light beam splitter.
Next, time charts of the temporal/spatial-division optical system 3 are explained by referring to
The time difference ΔT is the time difference caused by the temporal/spatial-division optical system 3 as described before. After the lapse of a light-emission period ΔTs of the light source, the radiations of the beams to the positions on the wafer are repeated in the same way. With the radiation times different from each other, even if the positions of the radiated beams are adjacent to each other, noises in the image sensor can be suppressed in the same way as a case in which the positions of the radiated beams are separated away from each other.
In addition, by dividing a beam, the peak value of the radiation output can be made small in comparison with the peak value of the light generated by the light source. Thus, there is provided a merit that the radiation damage of the wafer can be reduced.
Next, by referring to
In addition, the distance Ss between the positions of 2 spot beams radiated at the same time is set to a value greater than the resolution of the detection optical system so that scattered light is not mixed in other pixels of the image sensor. (It is desirable to set the distance Ss to a sufficiently large value.)
In addition, the distance St between the positions of 2 spot beams adjacent to each other is set to such a value that, if these spot beams are projected on the Y axis, they overlap each other. By moving the stage in the direction of the X axis under this condition in a scanning operation, an image can be taken with no gaps. That is to say, by combining a set of spot beams and a stage scanning operation, it is possible to obtain an image taking area equivalent to that obtained as a result of illumination making use of a line beam.
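Whether a given set of spot positions really leaves no gaps can be checked by projecting the spots onto the Y axis, as described above. The following sketch performs that check under the simplifying assumption that each spot acts as a segment whose width equals the spot diameter; the names and numbers are illustrative.

```python
def covers_without_gaps(spot_y_positions, spot_diameter):
    """Return True if the Y-axis projections of the spots overlap or touch,
    so that an X-direction stage scan leaves no unilluminated stripes."""
    ys = sorted(spot_y_positions)
    for y_prev, y_next in zip(ys, ys[1:]):
        # The projected centre-to-centre distance St must not exceed the diameter.
        if y_next - y_prev > spot_diameter:
            return False
    return True

# Eight spots of 1-micron diameter laid out 0.8 micron apart along the Y axis:
print(covers_without_gaps([0.8 * i for i in range(8)], spot_diameter=1.0))  # True
```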
It is to be noted that the profiles of the strengths of the spot beams form a Gaussian distribution. For this reason, in
Expressed in other words, if seen from the side of the image sensor 7, the total sum of strength profiles of illuminated spots shown in
Expressed in still other words, temporally and spatially discrete and different illuminated areas created on the sample are detected by a detector as a continuous signal.
With such a configuration, it is possible to carry out a scanning operation equivalent to that of a wafer illuminated by a line beam whose strength profile is made approximately or essentially flat over the illuminated areas Y1 to Y8.
It is to be noted that the method for radiating an illuminated spot obtained as a result of temporal and spatial division of light is by no means limited to this embodiment. That is to say, it is possible to adopt any method by optically laying out the temporal-division unit, the spatial-division unit and the integration unit with a high degree of freedom as long as, in accordance with the method, a pulse beam is temporally and spatially divided and an illuminated spot obtained as a result of the temporal and spatial division of the pulse beam is created on the sample.
The image sensor 7 is typically a CCD 1-dimensional sensor or a CCD 2-dimensional sensor, that is, a photoelectric conversion sensor for converting light into an electrical signal. In the case of a CCD 1-dimensional sensor (a rectangular pixel), the X-axis direction dimension of the whole illuminated area is set to a value smaller than the long-side direction dimension of the pixel. As will be described later, a rectangular pixel is capable of taking an image by adoption of an oversampling technique.
In addition, a multi-pixel photon counter (MPPC) can be used as the image sensor 7. Since the MPPC is appropriate for detection of extremely weak light, the MPPC is effective for detection of an infinitesimal defect.
The configuration described above makes it possible to carry out both the high-sensitivity inspection making use of a spot beam having a dimension of about 1 micron and the high-throughput inspection based on a visual-field dimension corresponding to a line beam.
In addition, in the first embodiment, during a stage scanning operation, at least one of the number of spot beams, the dimension of the spot beam and the interval between the spot beams is controlled dynamically in order to change the length of the illuminated area. For example, a liquid-crystal shutter is provided on the downstream side of the temporal/spatial-division optical system 3 to serve as a control means for controlling operations to block and transmit light for every spot beam. As shown in
In addition, examples of the light source 1 include not only a pulse laser, but also a continuous wave laser. Other examples of the light source 1 are an LED and a continuous oscillation light source such as a discharge lamp. In the case of a continuous oscillation light source, a means for converting the beam generated by the light source into pulses is provided on the upstream side of the temporal/spatial-division optical system 3. In the case of a visible-light area, an ultraviolet-light area or a far-ultraviolet-light area, a proper light source is selected in accordance with a wavelength and a power which are required by the area.
The detection optical system 6 can be typically an optical system of a refraction type comprising a lens, a reflection type comprising a mirror, a refraction/reflection type combining a lens and a mirror or a diffraction type comprising a Fresnel zone plate.
In addition, as shown in
In the case of a ring-shaped aperture, the solid angle over which scattered light is collected can be increased. Thus, it is possible to assure a sufficient signal strength even if the scattered light from a defect is weak.
Second Embodiment
A second embodiment of the present invention is shown in
In the second embodiment, a detection optical system 6a and an image sensor 7a take a dark visual field image whereas a detection optical system 6b and an image sensor 7b take another dark visual field image. These images are supplied to the image processing system 8. It is also possible to further provide a detection optical system and an image sensor which are not shown in the figure.
For a case in which a plurality of detection optical systems are provided as is the case with the second embodiment,
In the second embodiment, an image having a high S/N ratio is selected from a plurality of images and used in order to increase the probability of the defect detection in comparison with the first embodiment making use of only a single image. In addition, by carrying out processing of integrating a plurality of images to generate an output image, it is possible to raise the S/N ratio of the output image in comparison with that of the original image and further increase the probability of the defect detection.
Third Embodiment
In the following description, a third embodiment of the present invention (referred to simply as the third embodiment) is explained. The third embodiment is shown in
Light scattered by the wafer 5 is converged by the detection optical system 6. The optical axis of the detection optical system 6 is perpendicular to the wafer 5. Since regularly reflected light is emitted to the outside of an aperture of the detection optical system 6, a dark visual field image at a plurality of spot-beam positions creates an image on the image sensor 7.
In the third embodiment, the optical axis of the detection optical system 6 is perpendicular to the wafer 5. Thus, the solid angle over which scattered light is collected can be made larger than in the first embodiment.
In addition, also in the third embodiment, it is possible to provide a plurality of sets each comprising a detection optical system 6 and an image sensor 7. For a plurality of provided detection optical systems,
In this case, at least the elevation angles (the angles from the surface of the wafer 5) of the optical axes of the detection optical systems or the azimuth angles of the optical axes of the detection optical systems are different from each other. In general, the angle distribution of scattered light coming from a defect varies from defect to defect in accordance with, among others, the type of the defect, the dimensions of the defect, the shape of the pattern and the structure of the underlying layers. In addition, the angle distribution of scattered light coming from a noise source also varies from noise source to noise source in accordance with, among others, the shape of the pattern and the structure of the underlying layers. Thus, by selecting an image having a high S/N ratio from a plurality of images and making use of the selected image, the probability of the defect detection can be made higher than in a case in which a defect is detected by making use of only a single image. In addition, by carrying out processing of integrating a plurality of images in order to generate an output image, the S/N ratio of the output image can be made higher than that of the original images, so that it is possible to further increase the probability of the defect detection.
Fourth Embodiment
A fourth embodiment of the present invention implements a dark visual field defect inspecting apparatus for inspecting a semiconductor wafer for a defect by carrying out spatial-division illumination.
The fourth embodiment is different from the first to third embodiments described above in that, in the case of the fourth embodiment, illuminated spots are created as a result of beam spatial division making use of an optical device having apertures. (An example of such an optical device is a mask 17 to be described later.)
Next, the following description explains an operation carried out to take an image by radiating spot beams and performing scanning based on a moving stage. FIG. 12 is a diagram showing wafer illumination making use of spot beams obtained as a result of spatial division of light emitted by a light source. Illumination lines 1, 2 and 3 are perpendicular to the X-axis direction (that is, the scanning direction of the stage). If the detection optical system carries out detections in an inclined direction, a defocus state is generated outside the optical axis. Thus, the distance between the illumination lines is set to a value not exceeding the focus depth. Spot beams are laid out along each of the illumination lines in such a way that the spot beams do not overlap each other. The dimension of each of the spot beams is set to such a value that noises caused by scattered light can be reduced sufficiently. For example, the dimension of each of the spot beams is set to about 1 micron. In addition, the length of each of the illumination lines is made equal to the dimension of the visual field of the defect inspecting apparatus. The distance between spot beams is set to such a value that the spot beams would overlap each other if each of the illumination lines were projected on the Y axis. If the stage is moved under these conditions in the direction of the X axis in a scanning operation, an image can be taken without generating gaps. That is to say, by combining a set of spot beams and a stage scanning operation, it is possible to obtain an image taking area equivalent to that obtained as a result of illumination making use of a line beam.
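The layout constraints described above (illumination lines spaced within the focus depth, spots on one line not overlapping, Y-axis projections of all spots overlapping) can be captured as a small geometric construction. The sketch below, with illustrative values for the spot size, the focus depth and the visual-field length, is one possible way to generate such a layout; it is not taken from the embodiment itself.

```python
def layout_spots(n_lines, spot_size, line_pitch, focus_depth, field_length):
    """Generate (x, y) spot centres on illumination lines perpendicular to the
    X (scanning) direction.

    Constraints reflected here:
      - the lines are spaced by line_pitch, which must not exceed the focus depth;
      - spots on one line are spaced n_lines * spot_size apart, so they do not overlap;
      - line k is offset in Y by k * spot_size, so the Y projections of all spots
        touch or overlap and an X-direction stage scan leaves no gaps.
    """
    assert line_pitch <= focus_depth, "illumination lines must stay within the focus depth"
    step = n_lines * spot_size
    spots = []
    for k in range(n_lines):
        y = k * spot_size
        while y < field_length:                  # fill the visual field along the line
            spots.append((k * line_pitch, y))
            y += step
    return spots

# Example: three lines, 1-micron spots, 2-micron line pitch, 5-micron focus depth,
# 12-micron visual field.
for x, y in layout_spots(3, 1.0, 2.0, 5.0, 12.0):
    print(f"x = {x:.1f} um, y = {y:.1f} um")
```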
The image sensor 7 is typically a CCD 1-dimensional sensor or a CCD 2-dimensional sensor.
In the case of a CCD 1-dimensional sensor (a rectangular pixel), the X-axis direction dimension of the whole illuminated area is set to a value smaller than the long-side direction dimension of the pixel. In addition, the Y-axis direction dimension of the spot beam is set to a multiple of the short-side dimension of the pixel. As will be described later, a rectangular pixel is capable of taking an image by adoption of an oversampling technique.
In the case of a CCD 2-dimensional sensor, on the other hand, the distance between the illumination lines is set to a multiple of the dimension of the pixel. In addition, the Y-axis direction dimension of the spot beam is also set to a multiple of the dimension of the pixel. With both the dimension of the spot beams and the layout of the spot beams fixed, the dimension of the pixels in the 2-dimensional sensor is small in comparison with the dimension of the pixels in the 1-dimensional sensor. Thus, an operation can be carried out to take an image at a high resolution.
In addition, a multi-pixel photon counter (MPPC) can be used as the image sensor 7. Since the MPPC is appropriate for detection of extremely weak light, the MPPC is effective for detection of an infinitesimal defect.
The configuration described above makes it possible to carry out both the high-sensitivity inspection making use of a spot beam having a dimension of about 1 micron and the high-throughput inspection based on a visual-field dimension corresponding to a line beam.
In the structure of the mask 17, as shown in
In addition, as shown in
Examples of the light source 1 include not only a pulse laser, but also a continuous wave laser. Other examples of the light source 1 are an LED and a continuous oscillation light source such as a discharge lamp. In the case of a visible-light area, an ultraviolet-light area or a far-ultraviolet-light area, a proper light source is selected in accordance with a wavelength and a power which are required by the area.
Fifth Embodiment
A fifth embodiment of the present invention is shown in
Sixth Embodiment
A sixth embodiment of the present invention is shown in
Seventh Embodiment
A seventh embodiment of the present invention is shown in
The seventh embodiment employs an array-formed light source 19 which includes a plurality of light emitting devices laid out 2-dimensionally. Typically, each of the light emitting devices is an LED.
The control system 10 has a function of controlling each of the light emitting devices to transmit and block light.
In the seventh embodiment, the dimension of the spot beams and the distance between the beams can be set with a high degree of freedom. Thus, the seventh embodiment can be adapted with ease to a variety of pixel dimensions.
In addition, during a stage scanning operation, the number of spot beams is controlled dynamically in order to change the length of the illuminated area. Thus, a function to change the length of the illuminated area is effective for inspection of the edge of the wafer. In comparison with the fourth embodiment, the seventh embodiment does not have a mask illumination optical system and a mask. Thus, the seventh embodiment has a merit that the configuration of the defect inspecting apparatus can be made simple.
Eighth Embodiment
An eighth embodiment implements a variation of the spatial division. The eighth embodiment is characterized in that the eighth embodiment implements a spatial division illumination optical system which is configured by making use of mainly a continuous wave laser and an acousto-optical device. (The continuous wave laser is referred to hereafter as a CW laser.) The following description explains mainly the spatial division illumination optical system.
A continuous wave laser beam LS0 radiated from a light source propagates to an acousto-optical device 801. The acousto-optical device 801 is controlled by a driving signal generated by a control section 802 as a signal having a certain frequency. Thus, the acousto-optical device 801 is capable of handling the continuous wave laser beam LS0 as pulse laser beams LS1 and LS2 which have a time difference depending on the frequency of the driving signal. It is to be noted that, by controlling the frequency, it is possible to change the time difference between the pulse laser beams LS1 and LS2. The pulse laser beams LS1 and LS2 are reflected by mirrors 803 and 804 respectively and supplied to power/polarization/ON-OFF control systems 805 and 806 respectively. The power/polarization/ON-OFF control systems 805 and 806 have respectively a λ/2 plate and a λ/4 plate which are each used for illuminance and polarization control. In addition, each of the power/polarization/ON-OFF control systems 805 and 806 also includes a shutter for carrying out illumination ON and OFF control. It is thus possible to present a spatial division illumination optical system making use of a CW (continuous wave) laser.
Ninth Embodiment
A ninth embodiment implements another variation of the spatial division. The ninth embodiment is characterized in that the ninth embodiment implements a spatial division illumination optical system which is configured by making use of mainly a continuous wave laser and a liquid-crystal shutter. (The continuous wave laser is referred to hereafter as a CW laser.) The following description explains mainly the spatial division illumination optical system.
A continuous wave (CW) laser beam LS0 radiated from a light source propagates to a λ/2 plate 901. After passing through the λ/2 plate 901, the continuous wave laser beam LS0 propagates to a polarized beam splitter 902 which splits the continuous wave laser beam LS0 into 2 beams. Liquid-crystal shutters 903 and 904 are provided on the downstream side of the polarized beam splitter 902. The two beams are supplied to the liquid-crystal shutters 903 and 904 respectively. The ON and OFF states of the liquid-crystal shutters 903 and 904 are controlled by a control section 802 so as to provide a time difference between the states. Thus, it is possible to handle the continuous wave laser beam LS0 as pulse laser beams LS1 and LS2 which depend on the time difference between the ON and OFF states of the liquid-crystal shutters 903 and 904. The pulse laser beams LS1 and LS2 are supplied to power/polarization/ON-OFF control systems 905 and 906 respectively. The power/polarization/ON-OFF control systems 905 and 906 have respectively a λ/2 plate and a λ/4 plate which are each used for illuminance and polarization control. In addition, each of the power/polarization/ON-OFF control systems 905 and 906 also includes a shutter for carrying out illumination ON and OFF control.
It is to be noted that the time difference described above can be generated by controlling the shutters employed in the power/polarization/ON-OFF control systems 905 and 906. In addition, if the time difference is set to 0, illuminations are carried out at the same time.
Tenth Embodiment
Next, a tenth embodiment is described as follows. The tenth embodiment is obtained by radiating the pulse laser beams LS1 and LS2 in the eighth or ninth embodiment at elevation angles different from each other.
The tenth embodiment is characterized in that, in the tenth embodiment, an area of radiation to the object of the inspection is created from a plurality of elevation angles at a certain time difference. Then, scattered light generated by the object of the inspection is detected at the elevation angles. Detection results include additional information on the time difference used at the illumination time and additional information on the elevation angles adopted at the detection time.
Scattered light generated by the wafer 10015 is converged by lenses 10013 and 10014 and detected by detectors 10005 and 10006 before being subjected to photoelectric conversions. Analog signals obtained as a result of the photoelectric conversions are then converted by A/D conversion sections 10007 and 10008 into digital signals.
In this case, if seen from the detector side, it is impossible to know a time at which the detected signal has been generated. In order to solve this problem, the tenth embodiment is configured as follows.
In the tenth embodiment, a mirror 10018 is provided on the optical path of the pulse laser beam LS1 whereas a mirror 10010 is provided on the optical path of the pulse laser beam LS2. Then, the pulse laser beam LS1 is detected by a photodiode 10009 whereas the pulse laser beam LS2 is detected by a photodiode 10011. Subsequently, a detected signal ADS1 output by the photodiode 10009 and a detected signal ADS2 output by the photodiode 10011 are supplied to a logical add section 10012 as well as multiplexers 10016 and 10017. A signal ADS output by the logical add section 10012 is supplied to the A/D conversion sections 10007 and 10008 whereas signals output by the A/D conversion sections 10007 and 10008 are supplied to the multiplexers 10016 and 10017 respectively.
The multiplexer 10016 adds information on the time difference to the signal output by the detector 10005. To put it more concretely, the following pieces of information are added to the signal output by the detector 10005.
By the same token, the multiplexer 10017 adds information on the time difference to the signal output by the detector 10006. To put it more concretely, the following pieces of information are added to the signal output by the detector 10006.
That is to say, it is possible to make a statement that, in the tenth embodiment, information on the time difference used at the illumination time and information on the elevation angles adopted at the detection time are added to detection results.
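The role of the photodiodes 10009 and 10011, the logical add section 10012 and the multiplexers 10016 and 10017 can be pictured as a tagging step: each digitized sample is stamped with the pulse (LS1 or LS2) that produced it and with the elevation angle of the detector that observed it. The following sketch is a purely illustrative software analogue of that data flow under the assumption that each sample is attributed to the monitored pulse nearest in time; the names and structures are assumptions, not the actual circuit.

```python
from dataclasses import dataclass

@dataclass
class TaggedSample:
    value: float            # digitized scattered-light intensity
    source_beam: str        # "LS1" or "LS2" (which pulse illuminated the wafer)
    elevation_deg: float    # elevation angle of the detector that observed it
    time: float             # acquisition time [s]

def tag_samples(adc_samples, ls1_pulse_times, ls2_pulse_times, elevation_deg):
    """Attach illumination-source and detection-angle information to ADC samples.

    adc_samples     : list of (time, value) pairs from one A/D conversion section
    ls*_pulse_times : pulse times observed by the monitoring photodiodes
    """
    tagged = []
    for t, v in adc_samples:
        # Attribute the sample to whichever monitored pulse is nearest in time.
        d1 = min(abs(t - p) for p in ls1_pulse_times)
        d2 = min(abs(t - p) for p in ls2_pulse_times)
        source = "LS1" if d1 <= d2 else "LS2"
        tagged.append(TaggedSample(v, source, elevation_deg, t))
    return tagged
```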
The shape of the defect, the type of the defect and the like appear as differences in elevation and azimuth angles of scattered light. In the tenth embodiment, information on the illumination elevation angle and information on the detection elevation angle are known correctly. Thus, it is possible to improve the precision of classification of defects.
In addition, in the description of the tenth embodiment, different illumination and detection elevation angles have been explained. However, the explanation also holds true for the azimuth angle.
Eleventh Embodiment
Next, an eleventh embodiment is described as follows. In the case of the first to tenth embodiments described above, a spatial filter shown in none of the figures can be provided on the Fourier surface of the detection optical system in order to eliminate effects of diffracted light coming from typically a circuit pattern created on the object of the inspection and detect only scattered light coming from a defect by making use of a detector.
By providing only the spatial filter, however, the diffracted light cannot be blocked in some cases. This is because, on the circuit pattern, in spite of the fact that there are a plurality of patterns such as a logic section created as a complicated pattern and a peripheral section created as a repetitive pattern, blocked light patterns of the spatial filter are uniform. That is to say, even though the spatial filter is capable of blocking diffracted light coming from a certain area, the spatial filter is not capable of completely blocking diffracted light coming from other areas. Thus, the detector inevitably detects also diffracted light coming from an area other than a defect and raises an undesirable problem of saturation. The eleventh embodiment is an embodiment for solving this problem.
In the eleventh embodiment, prior to inspection, as shown in
Next, illumination light is radiated and scattered light is detected by making use of a sensor having a plurality of pixels. As a result, saturation characteristics shown in
Next, an actual inspection is carried out. At that time, on the basis of the wafer coordinates 2001 and the chip coordinates 2002, information on the area from which each pixel of the sensor is detecting scattered light is obtained. Then, the saturation characteristic is controlled for each pixel of the sensor. To put it more concretely, from a signal of a carrier system carrying the object of the inspection, the area is detected on the basis of the wafer coordinates 2001 and the chip coordinates 2002. After the saturation characteristic shown in
As described above, the saturation characteristic can be controlled for every pixel in order to prevent the sensor from getting saturated. It is to be noted that the control method according to the eleventh embodiment can be applied to the other embodiments.
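A minimal software picture of this per-pixel control is a lookup: the wafer coordinates 2001 and the chip coordinates 2002 identify which pattern area (for example, a logic section or a repetitive peripheral section) a pixel is currently imaging, and a saturation setting recorded for that area in advance is applied to the pixel. The sketch below assumes an area map and a per-area setting table; both are illustrative and not part of the embodiment.

```python
def select_pixel_settings(pixel_positions, area_map, area_settings):
    """Return a saturation-control setting for every sensor pixel.

    pixel_positions : {pixel_id: (wafer_x, wafer_y)} current imaging positions,
                      derived from the wafer and chip coordinates
    area_map        : callable (x, y) -> area name, e.g. "logic" or "peripheral",
                      built from coordinate information before the inspection
    area_settings   : {area name: saturation setting measured in advance}
    """
    settings = {}
    for pixel_id, (x, y) in pixel_positions.items():
        area = area_map(x, y)
        # Apply the saturation characteristic recorded for this area so that
        # diffracted light from the circuit pattern does not saturate the pixel.
        settings[pixel_id] = area_settings[area]
    return settings
```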
Twelfth Embodiment
Next, a twelfth embodiment is described as follows. The twelfth embodiment is another variation for preventing the sensor from getting saturated.
The twelfth embodiment monitors the amount of accumulated electric charge in order to control the amount of accumulated electric charge for every pixel.
To be more specific,
On the other hand,
Thirteenth Embodiment
Next, a thirteenth embodiment is described as follows. The thirteenth embodiment implements at least one of the temporal-division/spatial-division illumination, the spatial-division illumination and the temporal-division illumination which are disclosed in this specification. In addition, the thirteenth embodiment also implements the optical layout of the detection optical systems.
In the thirteenth embodiment, a light beam 24001 is radiated to the wafer 24003 at an incidence angle θ (in an illumination in a slanting direction), creating at least one of a temporal-division/spatial-division illuminated spot 24011, a spatial-division illuminated spot 24012 and a temporal-division illuminated spot 24013, which are disclosed in this specification, on the wafer 24003.
The thirteenth embodiment includes detection optical systems 24005 and 24006 for detecting scattered light in order to create an image. As shown in
In addition, as shown in
In addition, if seen from the observing point of a detector for detecting an image created by the detection optical systems 24005 and 24006, the thirteenth embodiment has the following characteristics:
With the configuration described above, the effects of noise are small in comparison with a case in which a beam is actually radiated as a line-shaped illumination, and, as seen from the detector, it is possible to obtain effects equivalent to those of a case in which a line-shaped illuminated area is essentially created.
In addition, since the strength profile seen from the detector is made flat, noise is reduced in comparison with a case in which a beam is actually radiated as a line-shaped illumination, and it is possible to obtain effects equivalent to those of a case in which scanning is carried out by making use of line-shaped illumination light having a strength profile that is essentially flat over a wide range. On top of that, with the optical layout of this embodiment, higher-sensitivity scanning can be carried out.
Fourteenth Embodiment
Next, a fourteenth embodiment is described as follows. The fourteenth embodiment is described by explaining mainly detection of scattered light and processing of an image. It is to be noted that configurations adopted by the fourteenth embodiment to implement the defect inspecting apparatus can be properly configured to be identical with those of the first to seventh embodiments and, in addition, line-shaped illumination light not temporally and spatially divided can be used in the fourteenth embodiment. The configurations include the configuration of the illumination optical system.
As a result, a signal shown in
It is to be noted that, if the sampling operation is carried out at a sampling frequency sufficiently higher than the frequency of the signal generated by the wire (that is, if the sampling operation is carried out at a sampling frequency equal to or higher than a frequency computed in accordance with the sampling theorem) so that the strength of the signal generated by the wire can be recovered by signal interpolation, the problem described above can be solved.
On the other hand,
To be more specific,
In the case of the fourteenth embodiment, the sampling operation is carried out at intervals approximately equal to the wire pitch p.
In accordance with the fourteenth embodiment, when a signal of a comparison object is compared with a signal of an adjacent circuit pattern (that is, a pattern, such as a wire pattern or a hole pattern, created on the object of the inspection), a circuit pattern at a corresponding position in an adjacent die, or a circuit pattern computed from CAD information, the difference in signal between the comparison object and the circuit pattern becomes smaller, so that defect detection noise can be decreased.
As described above, in accordance with the fourteenth embodiment, the sampling interval is adjusted to the wire pitch p of the wire bundle 1021. However, another statement can also be expressed as follows. The timing to sample a taken image by making use of the detector 1912 is adjusted among circuit patterns subjected to signal comparison processing.
Thus, a sampling operation can be carried out at a sampling interval equal to the wire pitch p, a fraction of the wire pitch p or a multiple of the wire pitch p. A sampling operation carried out at a sampling interval equal to a fraction of the wire pitch p has a merit that the image-taking resolution in the sampling direction (or the scanning direction) can be made better and the detection sensitivity can be improved. A sampling operation carried out at a sampling interval equal to a multiple of the wire pitch p has a merit that fewer image-taking pixels are required so that the detection speed can be raised.
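In practice the chosen sampling interval has to be translated into a sampling clock for the detector 1912. The sketch below performs that conversion under the assumption of a constant stage speed; the factor k selects sampling at the wire pitch p itself (k = 1), at a fraction of it (k < 1, better resolution) or at a multiple of it (k > 1, fewer pixels and higher speed). The stage speed and pitch values are illustrative.

```python
def sampling_parameters(wire_pitch_um, stage_speed_um_per_s, k=1.0):
    """Derive the sampling interval and sampling frequency from the wire pitch.

    wire_pitch_um        : pitch p of the wire bundle [um]
    stage_speed_um_per_s : scanning speed of the stage [um/s]
    k                    : interval in units of the pitch (1 = the pitch itself,
                           <1 = fraction of the pitch, >1 = multiple of the pitch)
    """
    interval_um = k * wire_pitch_um                    # spatial sampling interval
    sample_rate_hz = stage_speed_um_per_s / interval_um
    return interval_um, sample_rate_hz

# Example: 0.2 um wire pitch, 100 mm/s stage speed, sampling at the pitch itself.
print(sampling_parameters(0.2, 100_000.0, k=1.0))      # (0.2 um, 500 kHz)
```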
Next, the following description explains how the sampling interval is determined. If the pitch of the circuit patterns is known in advance, what needs to be done is to determine the sampling interval from the pitch information.
If the pitch of the circuit patterns is not known in advance, on the other hand, what needs to be done is to determine the sampling interval in accordance with a sequence explained by referring to
As shown in the figure, the sequence begins with a step 2201 at which the initial value of the sampling interval is set.
Then, at the next step 2202, an image is acquired on the basis of a sampling interval determined in advance. This acquired image is an image at a position serving as a comparison object of the signal comparison processing.
Then, at the next step 2203, for the acquired image, a difference between images is computed. Subsequently, at the next step 2204, a sum of the absolute values of the differences is computed.
Then, at the next step 2205, the sampling interval is changed. (That is to say, the sampling interval is changed by typically 10%.) Then, another image is acquired and a sum of the absolute values of the differences is computed. These operations are carried out in order to create a graph shown in
The graph plotted in
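The sequence of steps 2201 to 2205 is essentially a one-dimensional search: the sampling interval is swept, each candidate is scored by the sum of the absolute values of the differences between the acquired image and the comparison image, and the interval giving the minimum is adopted. A compact restatement of that loop is sketched below; acquire_image and the sweep range are stand-ins for the actual apparatus functions, not part of the embodiment.

```python
import numpy as np

def find_sampling_interval(acquire_image, reference_image, initial_interval,
                           n_steps=10, relative_step=0.10):
    """Sweep the sampling interval and return the one whose acquired image
    differs least from the comparison image (steps 2201 to 2205).

    acquire_image : callable interval -> 2-D image taken at that interval
                    (a stand-in for the actual image acquisition)
    """
    best_interval, best_score = None, float("inf")
    interval = initial_interval                      # step 2201: set the initial value
    for _ in range(n_steps):
        image = acquire_image(interval)              # step 2202: acquire an image
        diff = image - reference_image               # step 2203: difference between images
        score = float(np.sum(np.abs(diff)))          # step 2204: sum of absolute differences
        if score < best_score:
            best_interval, best_score = interval, score
        interval *= 1.0 + relative_step              # step 2205: change the interval by ~10%
    return best_interval                             # the minimum of the plotted graph
```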
The above description has explained a sampling method according to the fourteenth embodiment in the case of a single detection optical system.
Fifteenth Embodiment
Next, the following description explains problems raised by a plurality of detection optical systems and a method for solving the problems in accordance with a fifteenth embodiment.
To put it in detail,
On the other hand,
In order to solve the problem, an image taking method according to the fifteenth embodiment is provided.
The image taking method according to the fifteenth embodiment is characterized in that the image taking operation start S and the image taking operation end E can be arbitrarily changed to times within a period corresponding to the size of one pixel of the detector. The changes of the image taking operation start S and the image taking operation end E can be considered to be changes of the image taking operation start S and the accumulation period Δt. In this way, by controlling the image taking timing in one pixel, a shift of an image taking position in one pixel can be corrected. As a result, it is possible to solve the problem explained earlier by referring to
In the case of the fifteenth embodiment, on the other hand, the start time tS and the end time tE are controlled in order to shift the start time tS by the shift of the image taking position in a pixel. Thus, an image can be obtained by correcting the image taking position shifted by a Z-axis direction movement of the wafer.
It is to be noted that, if the direction of the optical axis of a detection optical system is inclined with respect to the direction normal to the wafer, the pixel dimension on the wafer is lengthened in the direction in which the optical axis of a detection optical system is inclined. Thus, the on-wafer pixel dimension for the detection optical system 1011 is different from the on-wafer pixel dimension for the detection optical system 1061.
As shown in
In the detection optical system 1061, in which the direction of the optical axis is inclined with respect to the direction normal to the wafer, on the other hand, control is executed to shift the time S (tSd) and the time E (tEd) of the focus-shifted position from the time S (tSc) and the time E (tEc) of the best focus position, so that the effects of the focus shift can be eliminated.
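The shift tSd − tSc applied in the inclined detection optical system 1061 can be estimated geometrically: a wafer displacement Δz along the Z axis moves the imaged position on the wafer by roughly Δz·tanθ along the scanning direction, and dividing that by the stage speed converts the positional shift into a timing shift within the pixel. The sketch below works through this estimate under those assumptions; it is not the calibration actually stored by the timing control mechanism 1092.

```python
import math

def timing_shift(delta_z_um, tilt_deg, stage_speed_um_per_s):
    """Timing correction for a detection optical system whose axis is tilted.

    delta_z_um           : wafer altitude change from the best-focus position [um]
    tilt_deg             : inclination of the detection axis from the wafer normal
    stage_speed_um_per_s : scanning speed of the stage [um/s]
    Returns the shift to apply to the image-taking start and end times [s].
    """
    # Lateral shift of the imaged position caused by the altitude change.
    lateral_shift_um = delta_z_um * math.tan(math.radians(tilt_deg))
    # Convert the positional shift into a shift of the accumulation window.
    return lateral_shift_um / stage_speed_um_per_s

# Example: 0.5 um altitude change, 30 degree tilt, 100 mm/s stage speed.
print(timing_shift(0.5, 30.0, 100_000.0))   # about 2.9 microseconds
```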
The configuration of the defect inspecting apparatus according to the fifteenth embodiment is explained by referring to
In the defect inspecting apparatus according to the fifteenth embodiment, first of all, the wafer-altitude measuring mechanism 1091 measures a wafer altitude in the vicinity of the focus position of the detection optical systems 1011 and 1061. Then, the wafer-altitude measuring mechanism 1091 supplies the altitude information to the timing control mechanism 1092. In the timing control mechanism 1092, for a signal obtained from the detector, electric charge is accumulated and transmitted with a timing determined in advance.
It is to be noted that the timing determined in advance is a timing determined from the inclination angle of the optical axis of the detection optical system and the Z-axis direction position of the wafer. In the signal integrating processor 1093, a signal output by the timing control mechanism 1092 is subjected to signal processing and defect detection processing.
It is to be noted that, in order to prevent the altitude of the wafer from being much shifted, it is possible to add an automatic focus adjustment mechanism for controlling the altitude of the wafer.
Next, the following description explains a method for measuring a shift of the image taking timing between the detection optical systems 1011 and 1061. As described above, geometrical calculation can also be adopted. In accordance with this method, however, actual measurements are carried out as explained below.
In order to measure a shift of the image taking timing, what needs to be done is to take the image of the same defect (standard particles are also OK) and measure the magnitude of a signal.
The embodiments described above implement a dark visual field defect inspecting apparatus taking a semiconductor wafer to be inspected for a defect as the object of inspection. However, the present invention can also be applied to a bright visual field defect inspecting apparatus.
In addition, the present invention can also be applied widely to mirror wafers with no patterns created thereon and samples with patterns created thereon. The samples with patterns created thereon include a magnetic storage medium and a liquid-crystal device.
Number | Date | Country | Kind |
---|---|---|---|
2010-289105 | Dec 2010 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP2011/006129 | 11/2/2011 | WO | 00 | 6/24/2013 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2012/090371 | 7/5/2012 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
6248988 | Krantz | Jun 2001 | B1 |
6724473 | Leong et al. | Apr 2004 | B2 |
7643139 | Ohshima | Jan 2010 | B2 |
7864310 | Okawa | Jan 2011 | B2 |
8922764 | Urano | Dec 2014 | B2 |
20050258366 | Honda | Nov 2005 | A1 |
20060139629 | Ohshima | Jun 2006 | A1 |
20080013084 | Matsui | Jan 2008 | A1 |
20100004875 | Urano | Jan 2010 | A1 |
20100188656 | Matsui | Jul 2010 | A1 |
Number | Date | Country |
---|---|---|
06-347418 | Dec 1994 | JP |
08-261949 | Oct 1996 | JP |
10-282010 | Oct 1998 | JP |
2003-004654 | Jan 2003 | JP |
2005-517906 | Jun 2005 | JP |
2005-521064 | Jul 2005 | JP |
2005-283190 | Oct 2005 | JP |
2005-300581 | Oct 2005 | JP |
2006-078421 | Mar 2006 | JP |
2006-162500 | Jun 2006 | JP |
2008-14849 | Jan 2008 | JP |
2008-020359 | Jan 2008 | JP |
2008-268140 | Nov 2008 | JP |
2009-276273 | Nov 2009 | JP |
2010-014635 | Jan 2010 | JP |
2010-236966 | Oct 2010 | JP |
WO-03069263 | Aug 2003 | WO |
WO-03083449 | Oct 2003 | WO |
Entry |
---|
English translation of Korean Office Action issued in Korean Application No. 10-2013-7016614 dated Jun. 17, 2014. |
Japanese Office Action, w/English translation thereof, issued in Japanese Application No. 2010-289105 dated Dec. 24, 2013. |
International Search Report issued in International Application No. PCT/JP2011/006129 dated Feb. 14, 2012. |
English translation of Notification of Reasons for Refusal issued in Japanese Patent Application No. 2010-289105 dated Jul. 8, 2014. |
Number | Date | Country | |
---|---|---|---|
20130286191 A1 | Oct 2013 | US |