The present disclosure relates generally to optical metrology and, more particularly, to optical metrology of memory structures including buried CMOS structures.
One approach to meeting demands for increased performance of memory devices (e.g., 3D memory devices) while maintaining or reducing physical size is to fabricate CMOS circuitry (e.g., logic circuitry) beneath memory array structures. This approach is commonly referred to as a complementary metal-oxide-semiconductor (CMOS) under array (CuA) technique. However, CuA techniques present unique challenges for metrology systems used for process control since the underlying CMOS circuitry may influence measurements of the memory array structures. There is therefore a need to develop systems and methods to address the above challenges.
A method is disclosed, in accordance with one or more illustrative embodiments of the present disclosure. In embodiments, the method includes generating optical measurement data for one or more training samples including complementary metal-oxide-semiconductor (CMOS) under array (CuA) devices, wherein the CuA devices include CMOS structures disposed beneath periodic memory array structures. In embodiments, the method includes generating reference data for the one or more training samples, wherein the reference data includes measurements of geometric parameters of the CuA devices. In embodiments, the method includes training a machine learning model with the optical measurement data for the one or more training samples and the reference data. In embodiments, the method includes generating optical measurement data for one or more test samples including CuA devices. In embodiments, the method includes determining one or more measurements of the geometric parameters of the CuA devices on the one or more test samples using the machine learning model with the optical measurement data for the one or more test samples.
A system is disclosed in accordance with one or more illustrative embodiments of the present disclosure. In embodiments, the system includes a controller including one or more processors configured to execute program instructions causing the one or more processors to implement a measurement recipe by: receiving optical measurement data for one or more training samples including complementary metal-oxide-semiconductor under array (CuA) devices, wherein the CuA devices include CMOS structures disposed beneath periodic memory array structures; receiving reference data for the one or more training samples, wherein the reference data includes measurements of geometric parameters of the CuA devices; training a machine learning model with the optical measurement data for the one or more training samples and the reference data; receiving optical measurement data for one or more test samples including CuA devices; and determining one or more measurements of the geometric parameters of the CuA devices on the one or more test samples using the machine learning model with the optical measurement data for the one or more test samples.
A system is disclosed in accordance with one or more illustrative embodiments of the present disclosure. In embodiments, the system includes an optical characterization system. In embodiments, the system includes a reference characterization system. In embodiments, the system includes a controller including one or more processors configured to execute program instructions causing the one or more processors to implement a measurement recipe by: receiving optical measurement data for one or more training samples including complementary metal-oxide-semiconductor under array (CuA) devices from the optical characterization system, wherein the CuA devices include CMOS structures disposed beneath periodic memory array structures; receiving reference data for the one or more training samples, wherein the reference data includes measurements of geometric parameters of the CuA devices from the reference characterization system; training a machine learning model with the optical measurement data for the one or more training samples and the reference data; receiving optical measurement data for one or more test samples including CuA devices from the optical characterization system; and determining one or more measurements of the geometric parameters of the CuA devices on the one or more test samples using the machine learning model with the optical measurement data for the one or more test samples.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not necessarily restrictive of the invention as claimed. The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and together with the general description, serve to explain the principles of the invention.
The numerous advantages of the disclosure may be better understood by those skilled in the art by reference to the accompanying figures.
Reference will now be made in detail to the subject matter disclosed, which is illustrated in the accompanying drawings. The present disclosure has been particularly shown and described with respect to certain embodiments and specific features thereof. The embodiments set forth herein are taken to be illustrative rather than limiting. It should be readily apparent to those of ordinary skill in the art that various changes and modifications in form and detail may be made without departing from the spirit and scope of the disclosure.
Embodiments of the present disclosure are directed to systems and methods for optical metrology of complementary metal-oxide-semiconductor (CMOS) under array (CuA) devices based on a machine learning model trained on optical measurement data of fully-fabricated CuA devices (e.g., full loop data) and reference data generated with a different tool providing ground truth measurements of parameters of interest.
A CuA structure (e.g., a CuA memory structure) may generally include logic circuitry (e.g., CMOS logic circuitry, or the like) physically located (e.g., disposed) beneath memory array structures (e.g., 3-dimensional (3D) memory stacks, 3D NAND structures, or the like). As used herein, the term CuA structure may cover a wide variety of designs of the logic and memory array structures. In this way, the present disclosure is not limited to any particular CuA architecture.
Optical metrology is commonly used for semiconductor process control since it may provide relatively high measurement throughput and is typically non-destructive. In optical metrology, a sample is illuminated with light and a measurement is generated based on corresponding light emanating from the sample. Optical metrology of sub-surface features typically requires that light propagate through at least upper portions of the sample to reach the sub-surface features of interest. As a result, optical metrology systems typically utilize wavelengths of light selected to propagate through the structures of interest with relatively low absorption.
However, in the case of CuA structures, incident light may interact with both memory array structures and buried CMOS structures in a way that may impair the ability to generate isolated measurements for the memory array structures. It is contemplated herein that existing optical metrology techniques may be inadequate to accurately characterize CuA structures, particularly as the dimensions shrink and device complexity increases. For example, some techniques may rely on wavelengths of light that are in a transparency window for the memory array structures of interest, but may be at least partially absorbed by the underlying logic circuitry. As an illustration, some logic circuitry may utilize poly-silicon layers that absorb light with wavelengths shorter than around 450 nanometers (nm). In this case, optical metrology with wavelengths lower than about 450 nm may generate isolated measurements of memory array structures. However, such a technique may be limited to select CuA designs that incorporate such absorbing materials, may provide limited sensitivity for deep structures, and may further provide limited value for broadband optical measurement methods that benefit from multi-wavelength data. As another example, some techniques rely on supervised training of artificial neural networks using optical measurement data with labels generated through additional measurement methods. However, these techniques may have various limitations such as, but not limited to, a requirement of high sampling of ground truth reference data, substantial time required to generate sufficient labels for training, limited performance for deeply buried structures, insensitivity to process changes, and general inapplicability for CuA structures outside the training dataset.
In some embodiments of the present disclosure, a machine learning model is trained to generate measurements of CuA devices or portions thereof based on training data that includes optical measurement data of fully-formed CuA devices and reference data providing ground truth measurements of parameters of interest. In this way, although the optical measurement data may be influenced by the presence of underlying CMOS structures, isolated measurements of periodic memory array structures may be generated. In particular, the machine learning model may identify relationships between the optical measurement data of fully-formed CuA devices and the ground truth measurements.
Referring now to
The characterization sub-system 102 may include any components or combination of components suitable for generating measurement data on a sample 104.
In some embodiments, the characterization sub-system 102 includes an optical characterization sub-system 102 to generate measurement data based on interaction of the sample 104 with light. For example, the characterization sub-system 102 may include, but is not limited to, a spectroscopic ellipsometer (SE), an SE with multiple angles of illumination, an SE measuring Mueller matrix elements (e.g., using rotating compensator(s)), a single-wavelength ellipsometer, a beam profile ellipsometer (angle-resolved ellipsometer), a beam profile reflectometer (angle-resolved reflectometer), a broadband reflective spectrometer (spectroscopic reflectometer), a single-wavelength reflectometer, an angle-resolved reflectometer, an imaging system, a scatterometer (e.g., speckle analyzer), or any combination thereof.
In some embodiments, the characterization sub-system 102 includes an x-ray characterization sub-system 102 to generate measurement data based on interaction of the sample 104 with x-rays. For example, the characterization sub-system 102 may include, but is not limited to, a small-angle x-ray scattering (SAXS) system or an x-ray reflection scatterometry (SXR) system.
In some embodiments, the characterization sub-system 102 includes a particle-beam characterization sub-system 102 to generate measurement data based on interaction of the sample 104 with a particle beam such as, but not limited to, an electron beam (e-beam), an ion beam, or a neutral particle beam.
In some embodiments, a characterization sub-system 102 provides multiple types of measurements. In some embodiments, a measurement system 100 includes multiple characterization sub-systems 102, each providing a different combination of one or more measurements. Further, the measurement system 100 may be provided as a single tool or multiple tools. A single tool providing multiple measurement configurations is generally described in U.S. Pat. No. 7,933,026 issued on Apr. 26, 2011, which is incorporated herein by reference in its entirety. Multiple tool and structure analysis is generally described in U.S. Pat. No. 7,478,019 issued on Jan. 13, 2009, which is incorporated herein by reference in its entirety.
Further, U.S. Pat. No. 10,458,912, titled “Model based optical measurements of semiconductor structures with anisotropic dielectric permittivity,” issued on Oct. 29, 2019; U.S. Pat. No. 11,573,077, titled “Scatterometry based methods and systems for measurement of strain in semiconductor structures,” issued on Feb. 7, 2023; U.S. Pat. No. 11,036,898, titled “Measurement models of nanowire semiconductor structures based on re-useable sub-structures,” issued on Jun. 15, 2021; U.S. Pat. No. 11,555,689, titled “Measuring thin films on grating and bandgap on grating,” issued on Jan. 17, 2023; U.S. Pat. No. 11,156,548, titled “Measurement methodology of advanced nanostructures,” issued on Oct. 26, 2021; and U.S. Pat. No. 10,794,839, titled “Visualization of three-dimensional semiconductor structures,” issued on Oct. 6, 2020, are all incorporated herein by reference in their entirety.
In some embodiments, the controller 106 includes one or more processors 108 configured to execute a set of program instructions maintained in a memory 110, or memory device, where the program instructions may cause the processors 108 to implement various actions.
The one or more processors 108 of a controller 106 may include any processor or processing element known in the art. For the purposes of the present disclosure, the term “processor” or “processing element” may be broadly defined to encompass any device having one or more processing or logic elements (e.g., one or more micro-processor devices, one or more application specific integrated circuit (ASIC) devices, one or more field programmable gate arrays (FPGAs), or one or more digital signal processors (DSPs)). In this sense, the one or more processors 108 may include any device configured to execute algorithms and/or instructions (e.g., program instructions stored in memory). In some embodiments, the one or more processors 108 may be embodied as a desktop computer, mainframe computer system, workstation, image computer, parallel processor, networked computer, or any other computer system configured to execute a program configured to operate or operate in conjunction with the characterization sub-system 102, as described throughout the present disclosure. Moreover, different subsystems of the measurement system 100 may include a processor or logic elements suitable for carrying out at least a portion of the steps described in the present disclosure. Therefore, the above description should not be interpreted as a limitation on the embodiments of the present disclosure but merely as an illustration. Further, the steps described throughout the present disclosure may be carried out by a single controller or, alternatively, multiple controllers. Additionally, the controller 106 may include one or more controllers housed in a common housing or within multiple housings. In this way, any controller or combination of controllers may be separately packaged as a module suitable for integration into measurement system 100.
The memory 110 may include any storage medium known in the art suitable for storing program instructions executable by the associated one or more processors 108. For example, the memory 110 may include a non-transitory memory medium. By way of another example, the memory 110 may include, but is not limited to, a read-only memory (ROM), a random-access memory (RAM), a magnetic or optical memory device (e.g., disk), a magnetic tape, a solid-state drive and the like. It is further noted that the memory 110 may be housed in a common controller housing with the one or more processors 108. In some embodiments, the memory 110 may be located remotely with respect to the physical location of the one or more processors 108 and the controller 106. For instance, the one or more processors 108 of the controller 106 may access a remote memory (e.g., server), accessible through a network (e.g., internet, intranet and the like).
The controller 106 may be communicatively coupled with any component or combination of components of the measurement system 100. In some embodiments, the controller 106 may receive data (e.g., measurement data, or the like) from one or more components of the measurement system 100. In some embodiments, the controller 106 controls one or more components of the measurement system 100 via drive signals. More generally, the controller 106 may implement any steps described in the present disclosure.
In some embodiments, the controller 106 generates one or more measurements of the sample 104 based at least in part on measurement data generated by the characterization sub-system 102. Generating measurements of parameters of interest may involve a number of algorithms, which may be executed by the controller 106. For example, optical interaction of the incident beam with the sample 104 may be modeled using an EM (electro-magnetic) solver and may utilize algorithms such as, but not limited to, rigorous coupled-wave analysis (RCWA), finite element method (FEM), method of moments, surface integral method, volume integral method, or finite-difference time-domain (FDTD) method. The sample 104 may be modeled (e.g., parametrized) using a geometric engine, a process modeling engine, or a combination of both. The use of process modeling is generally described in U.S. Pat. No. 10,769,320 issued on Sep. 8, 2020, which is incorporated herein by reference in its entirety. A geometric engine is implemented, for example, in AcuShape software by KLA Corporation.
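By way of a non-limiting illustration, the following sketch shows a simple optical forward model for an unpatterned film stack using the transfer-matrix method at normal incidence. It stands in for the more rigorous solvers named above (e.g., RCWA, FEM, or FDTD), which are generally required for patterned structures; the layer indices and thicknesses are illustrative assumptions and are not part of any particular CuA design or of the AcuShape software.

```python
# Minimal transfer-matrix sketch of an optical forward model for an
# unpatterned film stack at normal incidence. This stands in for the
# rigorous solvers named above (RCWA, FEM, FDTD, etc.); all layer values
# below are illustrative assumptions.
import numpy as np

def reflectance(wavelengths_nm, layers, n_ambient=1.0, n_substrate=3.9):
    """Reflectance spectrum of a planar film stack at normal incidence.

    layers: list of (refractive_index, thickness_nm) tuples, top to bottom.
    """
    R = np.empty_like(wavelengths_nm, dtype=float)
    for i, lam in enumerate(wavelengths_nm):
        M = np.eye(2, dtype=complex)
        for n, d in layers:
            delta = 2.0 * np.pi * n * d / lam  # phase thickness of the layer
            layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
            M = M @ layer
        # Characteristic fields of the stack terminated by the substrate.
        B, C = M @ np.array([1.0, n_substrate], dtype=complex)
        r = (n_ambient * B - C) / (n_ambient * B + C)
        R[i] = np.abs(r) ** 2
    return R

# Example: a hypothetical oxide-like film on a silicon-like substrate.
wl = np.linspace(250.0, 800.0, 200)
spectrum = reflectance(wl, layers=[(1.46, 120.0)])
```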
The controller 106 may analyze collected measurement data using any suitable combination of data fitting and/or optimization techniques such as, but not limited to, libraries, fast-reduced-order models, regression, statistical methods, machine-learning algorithms (e.g., neural networks, support-vector machines (SVM), principal component analysis (PCA), independent component analysis (ICA), local-linear embedding (LLE), or dimensionality reduction techniques more generally), sparse representation techniques, Fourier transform techniques, wavelet transform techniques, or Kalman filtering. Statistical model-based metrology is generally described in U.S. Pat. No. 10,101,670, titled "Statistical model-based metrology," to Pandev et al., issued on Oct. 16, 2018, which is incorporated herein by reference in its entirety. The controller 106 may also analyze collected measurement data using algorithms that do not include modeling, optimization, and/or fitting. Patterned wafer characterization is generally described in U.S. Pat. No. 10,502,694 issued on Dec. 10, 2019, which is incorporated herein by reference in its entirety. In some embodiments, the controller 106 utilizes one or more algorithms to promote matching of measurements from the same or different tool types (e.g., different instances or configurations of a characterization sub-system 102).
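By way of a non-limiting illustration, the following sketch combines two of the analysis techniques named above, principal component analysis (PCA) for dimensionality reduction followed by regression, using the scikit-learn library; the spectra and reference values are synthetic placeholders rather than measurement data.

```python
# Hedged sketch of one analysis combination named above: PCA to reduce
# spectral dimensionality, followed by regression against a parameter of
# interest. The arrays are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 512))              # 200 sites x 512 spectral channels
cd_nm = rng.normal(loc=30.0, scale=1.5, size=200)  # reference CD values per site

model = make_pipeline(PCA(n_components=10), Ridge(alpha=1.0))
model.fit(spectra, cd_nm)
predicted_cd = model.predict(spectra)
```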
The controller 106 may be designed to provide efficient performance through any suitable techniques such as, but not limited to, parallelization, distribution of computation, load balancing, multi-service support, dynamic load optimization, or the like. Further, the controller 106 may implement any steps using any type or combination of configurations such as, but not limited to, dedicated hardware (e.g., FPGAs, or the like), software, or firmware.
The controller 106 may further generate any type of measurement of the sample 104 (or a portion thereof) based at least in part on measurement data from the characterization sub-system 102. In some embodiments, the controller 106 generates a metrology measurement such as, but not limited to, an overlay measurement, a critical dimension (CD) measurement, a shape measurement (e.g., a height measurement, a tilt measurement, a sidewall angle measurement, or the like), a stress measurement, a composition measurement, a bandgap measurement, a measurement of electrical properties, or a measurement of process conditions (e.g., focus and/or dose conditions, a resist state, a partial pressure, a temperature, a focusing model, or the like). In some embodiments, the controller 106 generates an inspection measurement in which one or more defects on the sample 104 are at least one of identified or classified.
The measurement system 100 and any of its components (e.g., the characterization sub-system 102, the controller 106, or the like) may be configured to implement a recipe (e.g., a measurement recipe), which may define various configuration parameters and/or steps to be performed in a measurement or a series of measurements.
For example, a recipe may include various aspects of a design of a sample 104 (e.g., a design of CuA devices 202 on a sample 104) including, but not limited to, a layout of features on one or more sample layers, feature sizes, or feature pitches. As another example, a recipe may include illumination parameters such as, but not limited to, an illumination wavelength, an illumination pupil distribution (e.g., a distribution of illumination angles and associated intensities of illumination at those angles), a polarization of incident illumination, a spatial distribution of illumination, or a sample height. By way of another example, a recipe may include collection parameters such as, but not limited to, a collection pupil distribution (e.g., a desired distribution of angular light from the sample to be used for a measurement and associated filtered intensities at those angles), collection field stop settings to select portions of the sample of interest, polarization of collected light, or wavelength filters. By way of another example, a recipe may include various processing steps (e.g., steps that may be implemented by the controller 106 to generate measurements based on measurement data generated according to the recipe).
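By way of a non-limiting illustration, the recipe parameters enumerated above might be grouped into a configuration object along the lines of the following sketch; the field names and values are illustrative assumptions rather than an actual tool recipe format.

```python
# Illustrative sketch of a measurement recipe grouped as a configuration
# object. Field names and values are assumptions for illustration only; an
# actual recipe format is tool-specific.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MeasurementRecipe:
    # Sample design aspects
    target_pitch_nm: float = 90.0
    target_layers: List[str] = field(default_factory=lambda: ["memory_array", "source", "cmos"])
    # Illumination parameters
    wavelength_range_nm: Tuple[float, float] = (250.0, 800.0)
    illumination_polarization: str = "TE"
    illumination_angle_deg: float = 65.0
    # Collection parameters
    collection_polarization: str = "TE"
    collection_field_stop_um: float = 25.0
    # Processing steps applied by the controller
    processing_steps: List[str] = field(default_factory=lambda: ["normalize", "pca", "ml_inference"])

recipe = MeasurementRecipe()
```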
Referring now to
The memory array structures 204 may include any number or type of structures suitable for forming a memory array. For example, the memory array structures 204 may include, but are not limited to, 3D NAND structures formed from patterned features 208 within a multi-layer stack 210. Further, such memory array structures 204 are typically periodic structures with periodicity along one or more dimensions.
The CMOS structures 206 may include any number or type of structures fabricated beneath the memory array structures 204. For example, the CMOS structures 206 may be, but are not required to be, suitable for controlling and/or powering the memory array structures 204. In this way, the combination of the CMOS structures 206 and the memory array structures 204 may form a memory device (e.g., a 3D memory device). Further, the CMOS structures 206 may typically have a spatially-varying distribution such that the number and/or design of the constituent features may not be periodic across the CuA device 202. In this way, the CMOS structures 206 may generally be described as non-periodic. However, it is noted that CMOS structures 206 may exhibit local periodicity in some regions.
Further, the memory array structures 204 and/or the CMOS structures 206 may generally have any design such that the term CuA device 202 as used herein is not limited to any particular design. For example, a CuA device 202 may include intervening layers between the memory array structures 204 and the CMOS structures 206 such as, but not limited to, a source layer 212 (e.g., a poly-silicon source layer, or the like). As another example, though not shown, a CuA device 202 may include intervening layers between the CMOS structures 206 and a substrate 214.
Referring now to
It may be desirable to generate measurements of the constituent structures of a CuA device 202 (e.g., the CMOS structures 206 and/or the memory array structures 204) at various stages of a fabrication process. Such measurements may include, but are not limited to, metrology measurements or defect measurements (e.g., inspection measurements). Metrology measurements may include, but are not limited to, overlay measurements between structures fabricated with different lithographic exposures, critical dimension (CD) measurements of one or more features, heights of one or more features, or tilts of one or more features. Inspection measurements may include, but are not limited to, identification and/or characterization of defects of a fabrication process (e.g., unwanted features, missing features, improperly shaped or positioned features, or the like). Further, such measurements may be used for a wide variety of purposes including, but not limited to, process control, disposition, or estimating performance of fabricated CuA devices 202.
Measurements may be generated after any process step for the fabrication of a CuA device 202. For example, measurements may be generated after the fabrication of CMOS structures 206 and/or after the fabrication of memory array structures 204 to form a full CuA device 202. For the purposes of illustration, measurements of a full CuA device 202 including both memory array structures 204 and underlying CMOS structures 206 are referred to herein as “full loop” measurements.
It is contemplated herein that measurements at one process step may generally provide information about any features fabricated on the sample 104 depending on the interaction of an illumination beam 114 with the sample 104. In this way, it may be challenging to perform isolated measurements of newly fabricated features. For example, full loop measurements may generally provide information for or be impacted by both the memory array structures 204 and underlying CMOS structures 206, which may limit or impair the ability to generate isolated measurements of the memory array structures 204.
In some embodiments, measurements of various test structures may be generated to assist in generating isolated measurements of certain features. For example, measurements of a test structure including memory array structures 204 without corresponding buried CMOS structures 206 are referred to herein as “short loop” measurements.
Further, measurements at any process step may generally be generated using any suitable technique including, but not limited to, optical techniques, x-ray techniques, particle-based techniques, or the like. However, different measurement techniques may provide different tradeoffs. For example, optical measurement techniques may generally provide non-destructive measurements with a high measurement throughput but may have limited resolution or may be limited to certain types of structures (e.g., periodic structures) based on the corresponding analysis or modeling steps. Optical measurements are thus commonly utilized during run-time when throughput is particularly important. As another example, x-ray and/or particle-based techniques may provide higher resolution than some optical techniques but may suffer from relatively low throughput and/or may be destructive measurements. As a result, such techniques are commonly used for reference measurements.
However, it is contemplated herein that it may not be feasible or desirable in all applications to generate all possible types of measurements at all measurement steps. In such cases, different techniques may be utilized to generate measurements of specific structures (e.g., isolated measurements of memory array structures 204) depending on available data.
It is contemplated herein that method 300 may be suitable for, but not limited to, applications in which it is desirable to generate isolated measurements of memory array structures 204 without the use of short loop measurements. One goal of this approach is to develop a machine learning model to correlate optical measurement data (e.g., data generated by an optical characterization sub-system 102) of CuA devices 202 (e.g., full loop data) with measurements of interest of the CuA devices 202 such as, but not limited to, an overlay measurement, a critical dimension (CD) measurement, a shape measurement (e.g., a height measurement, a tilt measurement, a sidewall angle measurement, or the like), a stress measurement, a composition measurement, a bandgap measurement, a measurement of electrical properties, a measurement of process conditions (e.g., focus and/or dose conditions, a resist state, a partial pressure, a temperature, a focusing model, or the like), identification of defects, or classification of defects. Such a machine learning model may be trained using optical measurement data of CuA devices 202 on training samples (e.g., full loop data on the training samples) as well as reference data of the same training samples. The reference data may be generated with any suitable tool such as, but not limited to, an x-ray characterization sub-system 102, a particle-beam characterization sub-system 102, or the like. Further, the reference data may provide direct ground truth measurements of the parameters of interest. Once trained, the machine learning model may be used to generate measurements of CuA devices 202 on test samples (e.g., during production) based on optical measurement data of the test samples.
In some embodiments, the method 300 includes a step 302 of generating optical measurement data for one or more training samples including CuA devices 202. The optical measurement data may include any type of data generated by any type of optical system such as, but not limited to, the optical characterization sub-system 102 depicted in
Further, in some embodiments, the one or more training samples include known variations of the CuA devices 202 (e.g., known variations associated with process deviations). In this way, the impact of such variations may be identified and incorporated into the models of the method 300 described below. Such a procedure may be referred to as a design of experiments (DOE) and may improve the robustness of the method 300.
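By way of a non-limiting illustration, a design of experiments for the training samples might be tabulated as a full-factorial set of intentionally varied process parameters, as in the following sketch; the parameter names and offsets are illustrative assumptions.

```python
# Hedged sketch of a design of experiments (DOE) for the training samples:
# a full-factorial table of intentionally varied process parameters. Values
# and parameter names are illustrative assumptions.
from itertools import product

etch_depth_offsets_nm = [-10.0, 0.0, 10.0]
cd_offsets_nm = [-2.0, 0.0, 2.0]
tilt_offsets_deg = [-0.5, 0.0, 0.5]

doe = [
    {"wafer": i, "etch_depth_offset_nm": d, "cd_offset_nm": c, "tilt_offset_deg": t}
    for i, (d, c, t) in enumerate(product(etch_depth_offsets_nm, cd_offsets_nm, tilt_offsets_deg))
]
# 27 wafers spanning the assumed process window; each wafer would later be
# measured optically (step 302) and with the reference tool (step 304).
```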
In some embodiments, the method 300 includes a step 304 of generating reference data for the one or more training samples, where the reference data includes measurements of geometric parameters of the CuA devices 202. Geometric parameters may include any physical properties of any associated structures such as, but not limited to, CD, feature height, feature tilt (e.g., sidewall angles), film thickness, overlay between features fabricated with different lithographic exposures, or composition.
The reference data may be generated using any suitable technique. In some embodiments, the reference data is generated with a high-resolution characterization sub-system 102 such as, but not limited to, an x-ray characterization sub-system 102 or a particle-beam characterization sub-system 102. Non-limiting examples include, but are not limited to, transmission electron microscope (TEM) data, scanning electron microscope (SEM) data (e.g., critical dimension SEM (CD-SEM) data, electron-beam SEM (EB-SEM) data, or the like), SAXS data (e.g., transmission SAXS (T-SAXS) data, CD-SAXS data, or the like), x-ray photoelectron spectroscopy (XPS) data, or x-ray diffraction (XRD) data. It is noted that CD-SAXS data may be particularly suitable for use as reference data, but this is not limiting on the present disclosure.
In this way, accurate measurements of the various geometric parameters associated with the CuA devices 202 may be provided either directly from the reference data or through an analysis of the reference data.
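By way of a non-limiting illustration, per-site reference measurements (e.g., CD-SAXS or TEM results) might be aligned with the corresponding optical measurement sites to form training labels, as in the following sketch; the column names and values are illustrative assumptions.

```python
# Hedged sketch of merging reference measurements (e.g., CD-SAXS or TEM) with
# optical measurement sites to form training labels. Column names and values
# are illustrative assumptions.
import pandas as pd

optical_sites = pd.DataFrame({
    "wafer": [1, 1, 2], "site": [3, 7, 3],
    "spectrum_id": ["w1_s3", "w1_s7", "w2_s3"],
})
reference = pd.DataFrame({
    "wafer": [1, 1, 2], "site": [3, 7, 3],
    "cd_nm": [29.8, 30.4, 31.1], "height_nm": [5110.0, 5095.0, 5120.0],
})

# Keep only sites that have both an optical spectrum and a reference label.
labels = optical_sites.merge(reference, on=["wafer", "site"], how="inner")
```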
In some embodiments, the method 300 includes a step 306 of training a machine learning model with the optical measurement data (e.g., from step 302) and the reference data (e.g., from step 304).
The machine learning model may thus determine relationships between physical geometric parameters of interest on the CuA devices 202 (or portions thereof) as provided by the reference data and the optical measurement data, which is the type of data that may be generated by a characterization sub-system 102 during production. As described previously herein, the optical measurement data of the fully-fabricated CuA devices 202 (e.g., full loop data) may be influenced by both the memory array structures 204 and the underlying CMOS structures 206. In applications where short loop data (e.g., optical measurement data of memory array structures 204 without underlying CMOS structures 206) is not available for any reason, it may be difficult or impossible to provide isolated measurements of the memory array structures 204 using traditional techniques. However, it is contemplated herein that a machine learning model trained in step 306 with the full loop optical measurement data and reference data providing ground truth measurements (e.g., of the memory array structures 204) may generate isolated measurements of the memory array structures 204 notwithstanding the influence of the underlying CMOS structures 206.
The machine learning model may include any suitable type of machine learning technique known in the art and may include any combination of learning techniques including, but not limited to, supervised, unsupervised, or reinforcement learning techniques. In some embodiments, the machine learning model is a neural network model.
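By way of a non-limiting illustration, step 306 might be implemented as supervised training of a small feed-forward neural network that maps full loop optical spectra to reference geometric parameters, as in the following sketch using the scikit-learn library; the network architecture, data sizes, and values are illustrative assumptions rather than a required implementation.

```python
# Hedged sketch of step 306: supervised training of a small feed-forward
# neural network that maps full loop optical spectra to reference geometric
# parameters. Architecture, sizes, and data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
train_spectra = rng.normal(size=(500, 512))   # full loop optical data (step 302)
train_labels = rng.normal(size=(500, 3))      # reference data, e.g., CD, height, tilt (step 304)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000, random_state=0),
)
model.fit(train_spectra, train_labels)        # step 306: train the model
```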
In some embodiments, the method 300 includes a step 308 of generating optical measurement data for one or more test samples including CuA devices 202. In some embodiments, the method 300 includes a step 310 of determining one or more measurements of the geometric parameters of the CuA devices 202 on the one or more test samples using the machine learning model with the optical measurement data for the one or more test samples.
Once trained, the machine learning model may be used to generate measurements of the CuA devices 202 or portions thereof (e.g., the memory array structures 204 or the CMOS structures 206) based on optical measurement data from additional test samples. The machine learning model may provide any type of measurement such as, but not limited to, an overlay measurement, a critical dimension (CD) measurement, a shape measurement (e.g., a height measurement, a tilt measurement, a sidewall angle measurement, or the like), a stress measurement, a composition measurement, a bandgap measurement, a measurement of electrical properties, a measurement of process conditions (e.g., focus and/or dose conditions, a resist state, a partial pressure, a temperature, a focusing model, or the like), identification of defects, or classification of defects.
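By way of a non-limiting illustration, and continuing the training sketch above, steps 308 and 310 might be implemented by applying the trained model to optical measurement data from the test samples; the data below are placeholders.

```python
# Hedged sketch of steps 308 and 310, continuing the training sketch above
# (reuses the `model` pipeline and `rng` defined there). The test data are
# placeholders for optical measurement data from production test samples.
test_spectra = rng.normal(size=(25, 512))        # step 308: optical data for test samples
estimated_params = model.predict(test_spectra)   # step 310: estimated CD, height, tilt per site

for site, (cd, height, tilt) in enumerate(estimated_params):
    print(f"site {site}: CD={cd:.2f} nm, height={height:.1f} nm, tilt={tilt:.2f} deg")
```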
Referring now to
In some embodiments, the characterization sub-system 102 is an optical measurement sub-system that generates measurement data based on interaction of the sample 104 with light.
In some embodiments, the characterization sub-system 102 includes an illumination source 112 configured to generate at least one illumination beam 114. The illumination from the illumination source 112 may include one or more selected wavelengths of light including, but not limited to, ultraviolet (UV) radiation, visible radiation, or infrared (IR) radiation. For example, the characterization sub-system 102 may include one or more apertures at an illumination pupil plane to divide illumination from the illumination source 112 into one or more illumination beams 114 or illumination lobes. In this regard, the characterization sub-system 102 may provide dipole illumination, quadrupole illumination, or the like. Further, the spatial profile of the one or more illumination beams 114 on the sample 104 may be controlled by a field-plane stop to have any selected spatial profile.
The illumination source 112 may include any type of illumination source suitable for providing at least one illumination beam 114. In some embodiments, the illumination source 112 is a laser source. For example, the illumination source 112 may include, but is not limited to, one or more narrowband laser sources, a broadband laser source, a supercontinuum laser source, a white light laser source, or the like. In some embodiments, the illumination source 112 includes a laser-sustained plasma (LSP) source. For example, the illumination source 112 may include, but is not limited to, a LSP lamp, a LSP bulb, or a LSP chamber suitable for containing one or more elements that, when excited by a laser source into a plasma state, may emit broadband illumination. In some embodiments, the illumination source 112 includes a lamp source. In some embodiments, the illumination source 112 may include, but is not limited to, an arc lamp, a discharge lamp, an electrode-less lamp, or the like.
The illumination source 112 may provide the one or more illumination beams 114 using free-space techniques and/or optical fibers.
In some embodiments, the characterization sub-system 102 directs the illumination beam 114 to the sample 104 through at least one illumination lens 116 (e.g., an objective lens) via an illumination pathway 118. The illumination pathway 118 may include one or more optical components suitable for modifying and/or conditioning the illumination beam 114 as well as directing the illumination beam 114 to the sample 104. In some embodiments, the illumination pathway 118 includes one or more illumination-pathway optics 120 to shape or otherwise control the illumination beam 114. For example, the illumination-pathway optics 120 may include, but are not limited to, one or more field stops, one or more pupil stops, one or more polarizers, one or more filters, one or more beam splitters, one or more diffusers, one or more homogenizers, one or more apodizers, one or more beam shapers, or one or more mirrors (e.g., static mirrors, translatable mirrors, scanning mirrors, or the like).
The characterization sub-system 102 may position the sample 104 for a measurement using any suitable technique. In some embodiments, as illustrated in
In some embodiments, the characterization sub-system 102 includes at least one collection lens 124 to capture light or other radiation emanating from the sample 104, which is referred to herein as collected light 126, and direct this collected light 126 to one or more detectors 128 through a collection pathway 130. The collection pathway 130 may include one or more optical elements suitable for modifying and/or conditioning the collected light 126 from the sample 104. In some embodiments, the collection pathway 130 includes one or more collection-pathway optics 132 to shape or otherwise control the collected light 126. For example, the collection-pathway optics 132 may include, but are not limited to, one or more field stops, one or more pupil stops, one or more polarizers, one or more filters, one or more beam splitters, one or more diffusers, one or more homogenizers, one or more apodizers, one or more beam shapers, or one or more mirrors (e.g., static mirrors, translatable mirrors, scanning mirrors, or the like).
The characterization sub-system 102 may generally include any number or type of detectors 128. For example, the characterization sub-system 102 may include at least one single-pixel detector 128 such as, but not limited to, a photodiode, an avalanche photodiode, or a single-photon detector. As another example, the characterization sub-system 102 may include at least one multi-pixel detector 128 such as, but not limited to, a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) device, a line detector, or a time-delay integration (TDI) detector.
A detector 128 may be located at any selected location within the collection pathway 130. In some embodiments, the characterization sub-system 102 includes a detector 128 at a field plane (e.g., a plane conjugate to the sample 104) to generate an image of the sample 104. In some embodiments, the characterization sub-system 102 includes a detector 128 at a pupil plane (e.g., a diffraction plane) to generate a pupil image. In this regard, the pupil image may correspond to an angular distribution of light from the sample 104 at the detector 128. For instance, diffraction orders associated with diffraction of the illumination beam 114 from the sample 104 (e.g., an overlay target on the sample 104) may be imaged or otherwise observed in the pupil plane. In a general sense, a detector 128 may capture any combination of reflected (or transmitted), scattered, or diffracted light from the sample 104.
The illumination pathway 118 and the collection pathway 130 of the characterization sub-system 102 may be oriented in a wide range of configurations. For example, as illustrated in
In some embodiments, the illumination source 112 is an x-ray source configured to generate an x-ray illumination beam 114 having any particle energies (e.g., soft x-rays, hard x-rays, or the like). The characterization sub-system 102 may then include any combination of components suitable for capturing an associated collection signal 134, which may include, but is not limited to, x-ray emissions, optical emissions, or particle emissions.
For example, the characterization sub-system 102 may include an x-ray illumination lens 116 suitable for collimating or focusing an x-ray illumination beam 114 and collection pathway lenses (not shown) suitable for collecting, collimating, and/or focusing the collection signal 134 from the sample 104. Further, the characterization sub-system 102 may include various illumination-pathway optics (not shown) and/or collection-pathway optics (not shown) such as, but not limited to, x-ray collimating mirrors, specular x-ray optics such as grazing incidence ellipsoidal mirrors, polycapillary optics such as hollow capillary x-ray waveguides, multilayer optics or systems, or any combination thereof. Such optics may further include, but are not limited to, an x-ray monochromator (e.g., a crystal monochromator such as a Loxley-Tanner-Bowen monochromator, or the like), x-ray apertures, x-ray beam stops, or diffractive optics (e.g., zone plates). In embodiments, the characterization sub-system 102 includes an x-ray detector 128.
In one embodiment, the illumination source 112 includes a particle source (e.g., an electron beam source, an ion beam source, or the like) such that the illumination beam 114 includes a particle beam (e.g., an electron beam, an ion beam, or the like). The illumination source 112 may include any particle source known in the art suitable for generating a particle illumination beam 114. For example, the illumination source 112 may include, but is not limited to, an electron gun or an ion gun. In another embodiment, the illumination source 112 is configured to provide a particle beam with a tunable energy. For example, an illumination source 112 including an electron source may, but is not required to, provide an accelerating voltage in the range of 0.1 kilovolt (kV) to 30 kV. As another example, an illumination source 112 including an ion source may, but is not required to, provide an ion beam with an energy in the range of 1 kilo-electron-volt (keV) to 50 keV.
In another embodiment, the illumination pathway 118 includes one or more particle focusing elements (e.g., an illumination lens 116, a collection lens 124, or the like). For example, the one or more particle focusing elements may include, but are not limited to, a single particle focusing element or one or more particle focusing elements forming a compound system. In another embodiment, the one or more particle focusing elements include an illumination lens 116 configured to direct the particle illumination beam 114 to the sample 104. Further, the one or more particle focusing elements may include any type of electron lens known in the art including, but not limited to, electrostatic, magnetic, uni-potential, or double-potential lenses.
In another embodiment, the characterization sub-system 102 includes one or more particle detectors 128 to image or otherwise detect particles emanating from the sample 104. For example, the detector 128 may include an electron collector (e.g., a secondary electron collector, a backscattered electron detector, or the like). As another example, the detector 128 may include a photon detector (e.g., a photodetector, an x-ray detector, a scintillating element coupled to a photomultiplier tube (PMT) detector, or the like) for detecting electrons and/or photons from the sample surface.
The herein described subject matter sometimes illustrates different components contained within, or connected with, other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “connected” or “coupled” to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “couplable” to each other to achieve the desired functionality. Specific examples of couplable include but are not limited to physically interactable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interactable and/or logically interacting components.
It is believed that the present disclosure and many of its attendant advantages will be understood by the foregoing description, and it will be apparent that various changes may be made in the form, construction, and arrangement of the components without departing from the disclosed subject matter or without sacrificing all of its material advantages. The form described is merely explanatory, and it is the intention of the following claims to encompass and include such changes. Furthermore, it is to be understood that the invention is defined by the appended claims.