1. Field of the Invention
The present invention relates to a shape measurement method, a shape measurement apparatus, a program, and a recording medium for acquiring shape data of an aspheric subject surface.
2. Description of the Related Art
In recent years, aspheric optical elements have often been used in optical apparatuses such as cameras, optical drives, and exposure apparatuses. Further, with the improvement in accuracy of these optical apparatuses, the aspheric optical elements have been required to achieve higher accuracy in both height and lateral coordinates. For example, lenses used in cameras for professional use should have a height accuracy of 20 nm or better and a lateral coordinate accuracy of 50 μm or better.
Realization of such high shape accuracy requires a shape measurement apparatus that can highly accurately measure a shape of an aspheric lens surface.
Japanese Unexamined Patent Application Publication (Translation of PCT Application) No. 2008-532010 discusses a scanning interferometer as one such apparatus. The scanning interferometer is configured to measure a shape of a whole subject surface by scanning the subject surface along an optical axis of the interferometer. The scanning interferometer forms an interference fringe by causing reference light reflected from a reference spherical surface and subject light reflected from the subject surface to interfere with each other. Then, the scanning interferometer analyzes the interference fringe to acquire a phase, and acquires the shape of the subject surface based on the phase.
To acquire the phase accurately from the analysis of the interference fringe, a spatial change in the intensity of the interference light should be gradual, i.e., the interference fringe should be in a sparse state. To achieve this state, the two light beams that form the interference light should travel in directions substantially parallel to each other. However, of the two wave fronts that form the interference light on the reference spherical surface, the reference light is a spherical wave while the subject light is an aspheric wave. Therefore, this condition cannot be satisfied over the whole region of the wave front of the interference light. The condition is satisfied only in a partial region corresponding to the subject light reflected substantially perpendicularly from the subject surface, and this region appears as a ring zone if the subject surface is axially symmetric. Therefore, the phase of the interference fringe can be accurately calculated only in this ring zone region.
Scanning the subject surface relative to the reference spherical surface in a direction along the optical axis of the interferometer changes the radius of the ring zone region where the interference fringe is sparse according to the scanning position. The measurement is performed by repeatedly moving the subject surface and imaging the interference fringe by an imaging unit. As a result, the phase of the interference fringe over the whole subject surface can be acquired as a plurality of divided ring zone regions.
To form shape data of the whole subject surface, first, phase data of the interference fringe in a narrower ring zone region where the phase has an extremal value is extracted from the phase distribution of each of the interference fringes in the ring zones. After that, height data of the plurality of ring zones is calculated by multiplying the phase values by the wavelength of the light source, thereby forming the shape data.
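As a rough illustration of the phase-to-height conversion described above, the following minimal Python sketch assumes a double-pass (Fizeau-type) geometry in which a phase change of 2π corresponds to half a wavelength of surface height, i.e., height = phase × λ/(4π). The proportionality factor is an assumption; the text above only states that the phase value is multiplied by the wavelength of the light source.

```python
import numpy as np

def ring_zone_height(phase_rad, wavelength_m):
    """Convert extracted ring-zone phase values to height data.

    Assumption: double-pass (Fizeau-type) geometry, so a 2*pi phase
    change corresponds to lambda/2 of surface height, giving
    height = phase * wavelength / (4*pi).
    """
    return phase_rad * wavelength_m / (4.0 * np.pi)

# Example: a phase of pi/2 rad at a 633 nm source is ~39.6 nm of height.
print(ring_zone_height(np.pi / 2, 633e-9))
```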
As described above, the measurement of the shape of the optical element requires not only high height accuracy but also high lateral coordinate accuracy. One of the causes of a reduction in the lateral coordinate accuracy of the scanning interferometer is an aberration of its optical system. A lateral aberration may be generated due to, for example, a misplacement of an optical element in the scanning interferometer, generating a distortion of 100 μm or more in the interference fringe and resulting in an error in the lateral coordinates of the shape data. The error in the lateral coordinates due to such an aberration of the optical system should be eliminated in order to measure the shape highly accurately.
One possible approach is to adopt, in the scanning interferometer, a method discussed in Japanese Patent Application Laid-Open No. 9-61121. More specifically, first, a mask having a plurality of apertures formed at known positions is placed over a standard device having an aspheric surface shaped in a similar manner to the subject surface, and this device is used as a calibrator. These apertures serve as characteristic points of the calibrator.
Next, this calibrator is scanned along the optical axis of the interferometer in a similar manner to the subject surface, and the positions of the apertures are read out at respective scanning positions during scanning. After that, lateral coordinates are calibrated with respect to the phase data of each interference fringe using the read aperture positions as lateral coordinate references. Then, shape data is formed from results thereof.
However, the positions of the characteristic points read out during the calibration contain a distortion due to a deviation of a scanning axis when the calibrator is scanned. This distortion is generated only due to an error in alignment of the calibrator, and is not contained in the data acquired by scanning the subject surface. Therefore, the above-described method leads to an erroneous correction of the distortion due to the deviation of the scanning axis.
The present invention is directed to a shape measurement method, a shape measurement apparatus, a program, and a recording medium that allow shape data to be more accurately acquired than conventional techniques.
According to an aspect of the present invention, a shape measurement method includes emitting subject light as a spherical wave to an aspheric subject surface, causing the subject surface to be scanned relative to a reference spherical surface that faces the subject surface along an optical axis of the subject light, and acquiring shape data of the subject surface by a calculation unit based on phase data of an interference fringe generated when the subject light reflected by the subject surface and reference light reflected by the reference spherical surface interfere with each other. The shape measurement method further includes causing an imaging unit to image the interference fringe generated from interference between the subject light and the reference light at each scanning position when the subject surface is scanned relative to the reference spherical surface along the optical axis of the subject light to form a captured image, causing the calculation unit to acquire the captured image from the imaging unit, performing a phase distribution calculation in which the calculation unit extracts a ring zone region where the interference fringe is sparse in the captured image from each captured image acquired in the image acquisition, and calculates a phase distribution of the interference fringe in each ring zone region, performing a deviation component analysis in which the calculation unit acquires a deviation component having an orientation and an amount both unchangeable along a circumferential direction of a circle centered at the optical axis of the subject light by analyzing the interference fringe contained in each image captured in the image acquisition, performing calibrator image acquisition in which, after the imaging unit images an interference fringe generated from interference between reflection light from a calibrator and reflection light from the reference spherical surface at each scanning position when the calibrator having a plurality of characteristic points is scanned relative to the reference spherical surface to form a captured image, the calculation unit acquires the captured image from the imaging unit, causing the calculation unit to calculate positions of the respective characteristic points from each captured image acquired in the calibrator image acquisition, causing the calculation unit to calculate errors between the calculated positions of the respective characteristic points and actual positions of the respective characteristic points, performing a distortion component calculation in which the calculation unit calculates a distortion component having an orientation and an amount, at least one of which is changeable along the circumferential direction of the circle centered at the optical axis of the subject light, based on the errors, and causing the calculation unit to calculate the shape data corrected based on the deviation component and the distortion component.
Further features and aspects of the present invention will become apparent from the following detailed description of exemplary embodiments with reference to the attached drawings.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the invention and, together with the description, serve to explain the principles of the invention.
Various exemplary embodiments, features, and aspects of the invention will be described in detail below with reference to the drawings.
The scanning interferometer 400 includes a laser light source 401 as a light source, a beam splitter 414, and a wavemeter 430. A linearly-polarized plane wave is emitted from the laser light source 401. A part of this light is transmitted through the beam splitter 414, and a part of this light is reflected to be incident on the wavemeter 430.
Further, the scanning interferometer 400 includes a lens 402, an aperture plate 403 having an aperture, a polarized beam splitter 404, a quarter-wave plate 405, a collimator lens 406, a Fizeau lens 407, an aperture plate 409 having an aperture, and a lens 410. Further, the scanning interferometer 400 includes a movement mechanism 420 as a scanning unit, and a driving device 490 that drives and controls the movement mechanism 420.
The laser light transmitted through the beam splitter 414 is converted into a circularly-polarized plane wave having an increased beam diameter by passing through the lens 402, the aperture of the aperture plate 403, the polarized beam splitter 404, the quarter-wave plate 405, and the collimator lens 406.
The Fizeau lens 407 has a reference spherical surface 407a that faces the subject surface W1a. The plane wave transmitted through the collimator lens 406 is incident on the Fizeau lens 407, and is converted into a spherical wave by the time it reaches the reference spherical surface 407a. The reference spherical surface 407a is a spherical surface, and a center thereof coincides with a center of the spherical wave incident on the reference spherical surface 407a. In other words, the spherical wave is incident perpendicularly to the reference spherical surface 407a over the whole region. A part of the spherical wave incident on the reference spherical surface 407a is reflected by the reference spherical surface 407a as reference light, and a part of the spherical wave is transmitted through the reference spherical surface 407a as subject light.
The reference light is perpendicularly reflected by the reference spherical surface 407a, thereby traveling as a spherical wave even after the reflection, similar to the light before the entry into the reference spherical surface 407a. The subject light transmitted through the reference spherical surface 407a is a spherical wave, but becomes an aspheric wave after being reflected by the subject surface W1a of the subject W1, and is then incident on the reference spherical surface 407a again. A part of the subject light incident on the reference spherical surface 407a again is transmitted through the reference spherical surface 407a, and is combined with the reference light reflected from the reference spherical surface 407a, by which interference light, i.e., an interference fringe is generated.
The interference light combined on the reference spherical surface 407a is converted into a circularly-polarized plane wave by passing through the Fizeau lens 407. After that, the interference light is converted into a linearly-polarized plane wave having a reduced beam diameter by passing through the collimator lens 406, the quarter-wave plate 405, the polarized beam splitter 404, the aperture of the aperture plate 409, and the lens 410. The camera 440 is in an imaging relationship with the subject surface W1a, and an image of an interference fringe 501 illustrated in the drawings is formed on the camera 440.
The movement mechanism 420 includes a movable stage 412 on which the subject W1 or a calibrator Wc as a lateral coordinate calibrator is mounted, and a lead 413 fixed to the movable stage 412. The movement mechanism 420 can move the subject W1 or the calibrator Wc along an optical axis C1 of the Fizeau lens 407.
The subject surface W1a is processed based on an axially symmetric design shape z0(h), and is placed in such a manner that an axis of the subject surface W1a substantially coincides with an optical axis of the interferometer 400, i.e., the optical axis C1 of the Fizeau lens 407.
Further, a position of the subject W1 in a direction perpendicular to the optical axis C1, and an angle of the subject W1 relative to the optical axis C1 can be finely adjusted by the movable stage 412. Further, the subject W1 is scanned along the optical axis C1 by the lead 413.
The present exemplary embodiment is based on a case in which the subject surface W1a of the subject W1 is scanned relative to the reference spherical surface 407a, but scanning may be carried out in any manner as long as relative scanning is achieved between the subject surface W1a and the reference spherical surface 407a. In other words, the reference spherical surface 407a may be scanned relative to the subject surface W1a, or both of the surfaces 407a and W1a may be scanned. In this case, the whole interferometer 400 may be scanned, or only the Fizeau lens 407 may be scanned.
The ROM 452, the RAM 453, the HDD 454, the recording disk drive 455, and the various kinds of interfaces 461 to 465 are connected to the CPU 451 via a bus 456. The ROM 452 stores a basic program such as a Basic Input/Output System (BIOS). The RAM 453 is a storage device that temporarily stores a result of calculation made by the CPU 451.
The HDD 454 is a storage unit that stores, for example, various kinds of data that are results of the calculation made by the CPU 451. In addition, the HDD 454 stores a program 457 for causing the CPU 451 to perform various kinds of calculation processing, which will be described below. The CPU 451 performs the various kinds of calculation processing based on the program 457 recorded (stored) in the HDD 454.
The recording disk drive 455 can read out various kinds of data, a program, and the like recorded in a recording disk 458.
The wavemeter 430 is connected to the interface 461. The wavemeter 430 measures an emission wavelength of the laser light source 401, and outputs a result of the measurement. The CPU 451 receives a signal that indicates the wavelength data from the wavemeter 430 via the interface 461 and the bus 456.
The camera 440 is connected to the interface 462. The camera 440 outputs a signal that indicates a captured image. The CPU 451 receives the signal that indicates the captured image from the camera 440 via the interface 462 and the bus 456.
A monitor 470 is connected to the interface 463. Various kinds of images (for example, the image captured by the camera 440) are displayed on the monitor 470. An external storage device 480 such as a rewritable nonvolatile memory or an external HDD is connected to the interface 464. The driving device 490 is connected to the interface 465. The CPU 451 controls the lead 413 via the driving device 490, thereby controlling scanning of the subject W1 or the calibrator Wc.
First, in step S1, the number of scanning steps N (N is a positive integer of 2 or more), and a position Vm of the subject surface W1a in each scanning step m (m=1, 2, . . . , N) are determined as scanning conditions when the subject W1 is scanned. This position Vm is defined as a distance in the direction along the optical axis C1 from a position where a curvature radius of a light wave front (a spherical wave 301) contacting a top of the subject surface W1a is equal to a curvature radius Ro of the subject surface W1a at the top of the subject surface W1a (refer to the drawings).
It is desirable that the position Vm in each step m is determined in such a manner that the distance hm covers the whole subject surface W1a at equal intervals in light of the relationship expressed by the equation (1). Further, it is desirable that the number of scanning steps N is determined according to the lateral coordinate resolution required for the intended shape data.
After the scanning conditions are determined, in step S2, the subject W1 is aligned in such a manner that an axis (an optical axis) of the aspheric surface of the subject surface W1a coincides with the optical axis C1. At this time, the position and the angle of the subject W1 are adjusted by operating the stage 412 while observing the interference fringe.
After the subject W1 is aligned, in step S3, the CPU 451 moves the subject W1 to a first measurement position Vm=1 along the optical axis C1, and then captures an image of an interference fringe Im=1(x, y) on the camera 440 while measuring a wavelength λm=1 of the laser light source 401 by the wavemeter 430. Here, (x, y) represents an orthogonal coordinate system (an imaging coordinate system) on the camera 440. After that, according to the scanning conditions, the CPU 451 repeats moving the subject W1 to the position Vm, capturing the image of the interference fringe Im(x, y), and measuring the wavelength λm. In other words, in step S3, the camera 440 captures the images of the interference fringes generated from interference between the subject light and the reference light at the respective scanning positions when the subject surface W1a is scanned relative to the reference spherical surface 407a along the optical axis C1 of the subject light, and the CPU 451 acquires the captured images from the camera 440. Further, in step S3, the CPU 451 acquires the wavelength data from the wavemeter 430 in addition to acquiring the captured images from the camera 440. This step S3 corresponds to an image acquisition step and a wavelength acquisition step, i.e., image acquisition processing and wavelength acquisition processing, which are performed by the CPU 451.
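The acquisition loop of step S3 can be summarized in the following Python sketch. The stage, camera, and wavemeter objects and their move/grab/read methods are hypothetical stand-ins for the movement mechanism 420 (driven via the driving device 490), the camera 440, and the wavemeter 430; this illustrates the control flow only, not the actual control interface.

```python
def acquire_scan(stage, camera, wavemeter, positions):
    """Acquire one fringe image and one wavelength per scanning step.

    `stage`, `camera`, and `wavemeter` are hypothetical driver objects
    (move/grab/read are assumed method names). Mirrors step S3: move to
    V_m, capture I_m(x, y), record lambda_m, for m = 1 .. N.
    """
    images, wavelengths = [], []
    for v_m in positions:                     # positions V_1 .. V_N
        stage.move(v_m)                       # scan along the optical axis C1
        images.append(camera.grab())          # interference fringe I_m(x, y)
        wavelengths.append(wavemeter.read())  # source wavelength lambda_m
    return images, wavelengths
```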
After acquiring the image data of the interference fringes and the wavelength data through all the scanning steps, in step S4, the CPU 451 calculates phase distributions of the interference fringes from the respective interference fringes (a phase distribution calculation step, or phase distribution calculation processing). Because the interference fringe 501 formed by the reflection light around the position corresponding to h=hm among the reflection light from the subject surface W1a is sparse, the CPU 451 extracts this ring zone region from each captured image and calculates a phase distribution Φm(x, y) of the interference fringe in the region.
In calculating the shape data, the CPU 451 uses only a phase φm of the interference fringe on a circle 502 illustrated in the drawings, which corresponds to the position h=hm, from among each phase distribution Φm(x, y).
However, actually, deviation components A1 and A2 illustrated in the drawings, each having an orientation and an amount both unchangeable along the circumferential direction, are contained in the respective phase distributions Φm(x, y) due to a deviation of the scanning axis.
First, in step S5, the CPU 451 calculates and corrects these deviation components A1 and A2 that indicate the distortions illustrated in the drawings (a deviation component analysis step, or deviation component analysis processing).
More specifically, the CPU 451 substitutes r = √((x − x0,m)² + (y − y0,m)²) into an appropriate function g(r) such as a polynomial, and performs fitting on the respective interference fringe phases Φm(x, y) while changing x0,m and y0,m. The CPU 451 corrects the equation (2) by the x0,m and y0,m calculated in this manner, thereby acquiring an equation (3).
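A sketch of this center-finding fit, assuming g(r) is a low-degree polynomial in r and that each phase map is given as a 2-D array with NaN outside the sparse ring zone. Both assumptions are illustrative; the text only calls for "an appropriate function g(r)".

```python
import numpy as np
from scipy.optimize import least_squares

def fit_fringe_center(phi, deg=4):
    """Estimate the fringe center (x0_m, y0_m) of one phase map Phi_m(x, y).

    Assumptions: g(r) is a polynomial of degree `deg` in r, and `phi`
    is a 2-D array that is NaN outside the sparse ring zone.
    """
    ys, xs = np.nonzero(~np.isnan(phi))      # pixels inside the ring zone
    vals = phi[ys, xs]

    def residual(p):
        x0, y0 = p[0], p[1]
        r = np.hypot(xs - x0, ys - y0)       # radius about the trial center
        return np.polyval(p[2:], r) - vals   # g(r) minus the measured phase

    ny, nx = phi.shape
    p0 = np.concatenate([[nx / 2, ny / 2], np.zeros(deg + 1)])
    sol = least_squares(residual, p0)
    return sol.x[0], sol.x[1]                # fitted x0_m, y0_m
```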
In this manner, the deviation components A1 and A2 that indicate the distortions illustrated in the drawings are corrected.
Next, the CPU 451 calculates the distortion components A3 to A5 illustrated in the drawings, which are caused by the aberration of the optical system, by calibrating the lateral coordinates with use of the calibrator Wc.
The specific procedure for calibrating the lateral coordinates will be described now. First, in step S6, the calibrator Wc is mounted on the movable stage 412 in such a manner that an optical axis of the calibrator Wc (the aspheric standard device Ws) coincides with the optical axis C1 as closely as possible. Because the observable interference fringe has only a small area, it is difficult to align the calibrator Wc while observing the interference fringe; therefore, a mechanical abutting member or the like is utilized to mount the calibrator Wc. At this time, the optical axis of the calibrator Wc is expected to deviate from the optical axis C1 by approximately 100 μm, but an influence of this offset is removed later, so this does not cause a problem.
Next, in step S7, the calibrator Wc is scanned under the same conditions as the scanning of the subject surface W1a. The CPU 451 acquires captured images I′m(x, y) imaged by the camera 440 in the respective scanning steps m (a calibrator image acquisition step, or calibrator image acquisition processing). More specifically, the camera 440 images the interference fringes generated by the reflection light from the calibrator Wc and the reflection light from the reference spherical surface 407a at the respective scanning positions when the calibrator Wc is scanned relative to the reference spherical surface 407a, and the CPU 451 acquires the captured images I′m(x, y) from the camera 440. In these captured images I′m(x, y), light is not detected in regions covered by the mask Wm, and light is detected only in the regions of the apertures.
Further, in step S8, the CPU 451 extracts I′m(x0,m+(hm/k)cos θ, y0,m+(hm/k)sin θ) from the respective captured images I′m(x, y), converts them into the coordinate system of the subject surface W1a, and sets them as I′m(hm cos θ, hm sin θ).
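The extraction and coordinate relabeling of step S8 might look like the following sketch. The bilinear sampling, the angular sampling count, and treating k as a fixed camera-to-subject scale factor are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_ring(img, x0, y0, h_m, k, n_theta=3600):
    """Sample a captured image I'_m along the circle of radius h_m/k.

    Assumptions: (x0, y0) is the fringe center found earlier, `k` is a
    fixed lateral magnification between camera and subject surface,
    and `n_theta` is an arbitrary angular sampling count. Returns the
    intensities versus theta, i.e. the data relabeled as
    I'_m(h_m cos(theta), h_m sin(theta)).
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    x = x0 + (h_m / k) * np.cos(theta)
    y = y0 + (h_m / k) * np.sin(theta)
    # map_coordinates expects coordinates in (row, col) = (y, x) order.
    vals = map_coordinates(img, np.vstack([y, x]), order=1)
    return theta, vals
```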
The images extracted here are images at positions that substantially coincide with the circle 502 illustrated in the drawings. Then, in step S9, the CPU 451 combines the images extracted in the respective scanning steps m to form an aperture image covering the whole calibrator Wc.
After that, in step S10, the CPU 451 calculates central positions Xp,q and Yp,q of the respective apertures from the aperture image, i.e., the positions of the characteristic points. In other words, the CPU 451 calculates the positions of the respective apertures, which are the respective characteristic points, based on the respective captured images acquired in step S7 by the processes in steps S8 to S10 (a characteristic point position calculation step, or characteristic point position calculation processing).
Next, in step S11, the CPU 451 calculates errors between the calculated positions of the respective apertures (the characteristic points) and the actual positions of the respective apertures (the actual positions of the respective characteristic points) (an error calculation step, or error calculation processing). More specifically, the CPU 451 calculates differences ΔX(pΔh, qΔθ) in an X direction and differences ΔY(pΔh, qΔθ) in a Y direction between the calculated positions of the apertures and the actual positions of the apertures according to an equation (4). The actual positions of the apertures (the actual positions of the characteristic points) may be stored in a storage unit such as the HDD 454 in advance and may be read out by the CPU 451 from the storage unit, or may be acquired from an external apparatus. Alternatively, the CPU 451 may calculate them based on data of p, q, Δh, and Δθ.
In this equation, ΔX(pΔh, qΔθ) and ΔY(pΔh, qΔθ) are distortion data that contains the distortion components A3 to A5 due to the aberration of the optical system, which correspond to the distortions illustrated in the drawings, as well as the deviation components due to the deviation of the scanning axis of the calibrator Wc.
In step S12, the CPU 451 fits to the errors ΔX(pΔh, qΔθ) and ΔY(pΔh, qΔθ) a fitting function of an equation (5), which contains a function corresponding to the distortion components each having an orientation and an amount, at least one of which is changeable along the circumferential direction of the circle centered at the optical axis C1 of the subject light. Then, the CPU 451 calculates the distortion components from the functions of the equation (5) after the fitting and an equation (7) (a distortion component calculation step, or distortion component calculation processing). In other words, the CPU 451 performs fitting on the errors ΔX(pΔh, qΔθ) and ΔY(pΔh, qΔθ) with use of the function of the equation (5) to extract the distortion components.
In the above-described equations, fX,ab(h) and fY,ab(h) are functions defined by the equation (6), and the first and second terms on the right side of the equation (6) correspond to the deviation components illustrated in the drawings, each having an orientation and an amount both unchangeable along the circumferential direction.
In the above-described equations, fX,cde(h, θ) and fY,cde(h, θ) are functions defined by the equation (7). The first, second, and third terms on the right side of the equation (7) correspond to the distortion components A3, A4, and A5 illustrated in the drawings, respectively.
The CPU 451 performs fitting by changing coefficients ka,j, kb,j, kc,j, kd,2, and ke,2 with use of these functions. Then, the CPU 451 extracts the components (fX,cde(h, θ) and fY,cde(h, θ)) having an orientation and an amount, at least one of which is changeable along the circumferential direction, from the lateral coordinate errors (ΔX, ΔY).
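Since the exact terms of equations (5) to (7) are not reproduced here, the following sketch uses a hypothetical basis: polynomials in h for the circumferentially constant (deviation) part, and polynomials in h multiplied by cos θ and sin θ for the circumferentially varying (distortion) part. It shows how one linear least-squares fit can separate the two and retain only the θ-varying component, which is the logic of step S12.

```python
import numpy as np

def split_deviation_distortion(h, theta, dX, max_deg=3):
    """Separate error data dX(h, theta) into a part constant along the
    circumferential direction (deviation, discarded) and a part that
    varies with theta (distortion, kept).

    Hypothetical basis, for illustration only: deviation terms are
    polynomials in h; distortion terms are polynomials in h times
    cos(theta) and sin(theta). Equations (5) to (7) may differ.
    """
    cols = []
    for j in range(max_deg + 1):             # deviation: h-only terms
        cols.append(h ** j)
    for j in range(max_deg + 1):             # distortion: theta-varying terms
        cols.append(h ** j * np.cos(theta))
        cols.append(h ** j * np.sin(theta))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, dX, rcond=None)
    n_dev = max_deg + 1
    # Keep only the theta-varying (distortion) part of the model.
    return A[:, n_dev:] @ coef[n_dev:]
```

The same fit is applied independently to ΔX and ΔY; discarding the h-only coefficients is what removes the scanning-axis deviation of the calibrator from the correction data.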
The relationship between the coordinates (x, y) on the camera 440 and the coordinates (X, Y) on the subject surface W1a can be expressed anew by an equation (8) with use of the extracted lateral coordinate error component.
In step S13, the CPU 451 converts the coordinates in the phases Φm(x, y) with use of this equation (8), and corrects the distortion components A3 to A5 illustrated in the drawings.
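Conceptually, the coordinate conversion of step S13 is a resampling of each phase map onto corrected coordinates. A minimal sketch follows, in which subtracting the fitted distortion from the measured coordinates is an assumed sign convention for equation (8):

```python
import numpy as np
from scipy.interpolate import griddata

def correct_phase_coordinates(phi, X, Y, dX, dY, X_out, Y_out):
    """Resample a phase map onto distortion-corrected coordinates.

    X, Y: measured subject-surface coordinates of each sample of phi.
    dX, dY: fitted distortion components at those samples.
    Subtracting (dX, dY) before resampling is an assumed convention.
    """
    pts = np.column_stack([(X - dX).ravel(), (Y - dY).ravel()])
    return griddata(pts, phi.ravel(), (X_out, Y_out), method="linear")
```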
In other words, according to the present exemplary embodiment, the CPU 451 corrects the deviation components A1 and A2 contained in the respective phase distributions Φm(x, y). In addition, the CPU 451 corrects the distortion components A3 to A5 contained in the respective phase distributions Φm(x, y). Further, the CPU 451 converts the respective phase distributions Φm(x, y) in the coordinate system of the camera 440 into the phase distributions Φm(X, Y) in the coordinate system on the subject surface W1a at the same time as these corrections.
In step S14, the CPU 451 extracts the phase data φm(hm cos θ, hm sin θ) of the interference fringes corresponding to h=hm from the phase distributions Φm (X, Y) in which the distortions are corrected in this manner.
After that, in step S15, the CPU 451 calculates the shape data of the whole subject surface W1a from the phase data φm(hm cos θ, hm sin θ) and the wavelength data λm in the respective steps m. In other words, the CPU 451 calculates the shape data of the subject surface W1a, which is corrected based on the deviation components A1 and A2 and the distortion components A3 to A5, in steps S13 to S15 (a shape data calculation step, or shape data calculation processing).
This series of measurement processes allows the CPU 451 to calculate the shape data in which the lateral coordinates are accurately corrected.
Further, regarding the distortions contained in the shape data acquired by the scanning interferometer 400, the CPU 451 generates the data used for the correction after removing, in step S12, the deviation components each having an orientation and an amount both unchangeable along the circumferential direction centered at the optical axis of the interferometer 400.
In other words, the deviation of the axis when the subject W1 is scanned is different from the deviation of the axis when the calibrator Wc is scanned. Therefore, only the distortion components due to the aberration can be acquired by removing, from the distortion data acquired by scanning the calibrator Wc, the components due to the deviation of the axis during that scanning. The deviation of the axis when the subject W1 is scanned is separately calculated in step S5, so that an accurate correction can be made based on both results. Therefore, the present exemplary embodiment can prevent an erroneous correction regarding the deviation of the axis, thereby preventing the distortions contained in the shape data from increasing.
Further, a more accurate correction can be made because the distortion components used for the correction are calculated by fitting with use of an appropriate hypothetical function in step S12. Further, the distortion components to be corrected can be calculated more easily because the fitting function is simplified by limiting the distortion components used for the correction.
The present exemplary embodiment has described the method for indirectly correcting the lateral coordinates of the shape data by correcting the lateral coordinates of the interference fringe phases, which are the original data of the shape data. However, the method for correcting the lateral coordinates is not limited thereto. The lateral coordinates of the shape data formed from the interference fringe phases may be directly corrected based on the distortion data acquired by scanning of the calibrator Wc and an analysis of the interference fringes. Alternatively, the lateral coordinates may be corrected with respect to the images captured by the camera 440, which are the original data of the interference fringe phases.
Further, in step S12, the distortion components are calculated with use of the fitting function, but the distortion components may be calculated by, for example, interpolating the data.
Next, an operation of a shape measurement apparatus according to a second exemplary embodiment of the present invention will be described. The shape measurement apparatus according to the second exemplary embodiment is configured in a similar manner to the shape measurement apparatus 100 according to the above-described first exemplary embodiment illustrated in the drawings.
Major differences from the above-described first exemplary embodiment are that the subject W2 illustrated in the drawings itself serves as the lateral coordinate calibrator, with reference marks formed in a region other than an optical effective region 801 of a subject surface W2a, and that the subject surface W2a is scanned a plurality of times while being arranged in different directions.
A measurement procedure according to the present second exemplary embodiment will be described below according to the flowchart illustrated in the drawings. First, in step S21, reference marks 803 to 808 serving as the characteristic points are formed in the region other than the optical effective region 801 of the subject surface W2a.
Further, according to the present second exemplary embodiment, these reference marks 803 to 808 are arranged two by two, line-symmetrically about a Y axis, at positions where their distances h from the axis C2 of the aspheric surface are equal. More specifically, the reference marks 803 and 806 constitute a characteristic point group at positions of equal distance h from the optical axis C2 of the subject surface W2a, and the reference marks 804 and 807, and the reference marks 805 and 808, each similarly constitute a characteristic point group at positions of equal distance h. In other words, a plurality of characteristic point groups, three in the present second exemplary embodiment, is formed in the region other than the optical effective region 801 of the subject surface W2a, the groups being placed at different distances h from the optical axis C2.
Suppose that (Xl,1, Yl,1) is the position of the reference mark 805, (Xr,1, Yr,1) is the position of the reference mark 808, (Xl,2, Yl,2) is the position of the reference mark 804, (Xr,2, Yr,2) is the position of the reference mark 807, (Xl,3, Yl,3) is the position of the reference mark 803, and (Xr,3, Yr,3) is the position of the reference mark 806. These positions are expressed by the following equations (9) and (10) in an orthogonal coordinate system (X, Y) in which the axis C2 of the aspheric surface is set as an origin thereof.
The arrangement of the reference marks is not limited thereto. Two or more reference marks may be formed at positions where the distances h thereof are equal, and the reference marks do not necessarily have to be arranged line-symmetrically around the Y axis. Further, a maximum value of k may be a value larger than 3.
After the reference marks 803 to 808 are formed, in step S22, scanning conditions under which the subject surface W2a is scanned are determined.
The scanning conditions in the present second exemplary embodiment are the number of times of scanning M and the arranging directions θj of the subject surface W2a in the respective scans (j=1, 2, . . . , M), in addition to the number of scanning steps N and the positions Vm of the subject surface W2a in the respective steps m. For example, if M is set to 8 and θj is set to π(j−1)/4, the scanning positions are located as illustrated in the drawings.
In the present second exemplary embodiment, the subject surface W2a is arranged in different directions and scanning is performed a plurality of times for the purpose of acquiring distortion data over the whole subject surface W2a by referring to only the reference marks 803 to 808 outside the optical effective region 801.
Therefore, it is desirable that the directions θj are evenly distributed as much as possible within a range of 0 to 2π so that the reference marks 803 to 808 scan various positions on a spherical wave. Further, it is desirable that the value of M is determined according to required accuracy for the lateral coordinate calibration.
After the scanning conditions are determined, first, in step S23, the variable j is set to 1. Then, in step S24, the subject surface W2a is arranged in such a manner that the arranging direction matches the direction θj (firstly, j is set to 1). Then, in step S25, the subject surface W2a is aligned in a similar manner to the above-described first exemplary embodiment. Next, in step S26, the CPU 451 sequentially acquires interference fringes and wavelength values according to the determined scanning conditions N and Vm.
More specifically, the camera 440 images the interference fringes generated from interference between the subject light and the reference light at the respective scanning positions when the subject surface W2a is scanned relative to the reference spherical surface 407a along the optical axis C2, and the CPU 451 acquires the captured images from the camera 440. Further, in step S26, the CPU 451 acquires wavelength data from the wavemeter 430 in addition to acquiring the captured images from the camera 440. This step S26 corresponds to an image acquisition step and a wavelength acquisition step, i.e., image acquisition processing and wavelength acquisition processing, which are performed by the CPU 451.
Next, in step S27, after acquiring the interference fringes and the wavelengths, the CPU 451 acquires interference fringe phases Φj,m(x, y) of the regions where the interference fringes are sparse, in a similar manner to the above-described first exemplary embodiment (a phase distribution calculation step, or phase distribution calculation processing). More specifically, the CPU 451 extracts the ring zone regions where the interference fringes are sparse from the respective images captured in step S26, and calculates the phase distributions Φj,m(x, y) of the interference fringes in the respective ring zone regions. Next, in step S28, the CPU 451 extracts phase data Φj,m(x0,m+(hm/k)cos θ, y0,m+(hm/k)sin θ) corresponding to the phase distribution of the interference fringe on the circle 502 illustrated in the drawings.
After that, in step S29, the CPU 451 converts the coordinate systems of these interference fringes into the coordinate system on the subject surface W2a, and sets them as phase data φj,m(hm cos θ, hm sin θ). Then, in step S30, the CPU 451 generates provisional shape data by using the phase data together with the wavelength data.
After calculating the provisional shape data, in step S31, the CPU 451 determines whether the variable j reaches M. If the variable j does not reach M (NO in step S31), the CPU 451 sets j to j+1, i.e., increments the variable j by one. Then, the processing proceeds to step S24 again. After that, steps S24 to S30 are repeated according to the flowchart. In other words, the CPU 451 acquires, from the camera 440, the images captured at the respective scanning positions of scanning when the scanning is performed a plurality of times while the rotational position of the subject surface W2a is changed around the optical axis C2 of the subject surface W2a, by repeating steps S24 to S30.
By performing the above-described operation, the CPU 451 calculates M pieces of provisional shape data. These provisional shape data pieces each contain a lateral coordinate error due to the deviation components A1 and A2 illustrated in the drawings and the distortion components A3 to A5.
These errors are corrected by referring to the positions of the reference marks 803 to 808 in the provisional shape data. As a procedure therefor, first, the distortion components A3 to A5 illustrated in the drawings are corrected, and thereafter the deviation components A1 and A2 are corrected.
First, in step S32, the CPU 451 reads out the positions of the reference marks 803 to 808 from the respective shape data pieces to acquire the distortion components A3 to A5 illustrated in the drawings.
The reference marks 803 to 808 can be read out by, for example, performing fitting on the shape data around the reference marks 803 to 808 based on the design shapes of the reference marks 803 to 808, and acquiring central positions thereof. In this manner, the CPU 451 calculates the positions (X′l,j,k, Y′l,j,k) and (X′r,j,k, Y′r,j,k) of the reference marks 803 to 808 (k=1, 2, 3, and j=1, 2, . . . , M).
However, these calculated positions of the reference marks 803 to 808 are affected by not only the distortion components A3 to A5 illustrated in the drawings but also the deviation components A1 and A2 due to the deviation of the scanning axis.
Therefore, in step S33, the CPU 451 utilizes a relative positional relationship between the reference marks having an identical value h, which is unaffected by the deviation components A1 and A2 illustrated in the drawings (a relative position calculation step, or relative position calculation processing).
In other words, for each of the pairs of reference marks 803 and 806, 804 and 807, and 805 and 808, the CPU 451 refers to the two calculated positions and acquires the relative position of the calculated position of one mark with respect to the calculated position of the other.
(X1, Y1) is an actual relative position of the reference mark 806 relative to the reference mark 803, (X2, Y2) is an actual relative position of the reference mark 807 relative to the reference mark 804, and (X3, Y3) is an actual relative position of the reference mark 808 relative to the reference mark 805. These relative positions are calculated by an equation (12) from the equations (9) and (10). The actual relative positions (Xk, Yk) may be stored in a storage unit such as the HDD 454 in advance and may be read out from the storage unit by the CPU 451, or may be acquired from an external apparatus. Alternatively, the actual positions (Xl,k, Yl,k) and (Xr,k, Yr,k) may be stored in a storage unit such as the HDD 454 in advance, and the CPU 451 may read out them from the storage unit to calculate the relative positions (Xk, Yk). Further alternatively, the CPU 451 may acquire data of the actual positions (Xl,k, Yl,k) and (Xr,k, Yr,k) from an external apparatus to calculate the relative positions (Xk, Yk). Further alternatively, the CPU 451 may acquire data of hk and φk from a storage unit such as the HDD 454 or an external apparatus to calculate the relative positions (Xk, Yk).
In step S34, the CPU 451 calculates an error amount (ΔXj,1, ΔYj,1) of the relative position of the reference mark 806 relative to the reference mark 803 in the provisional shape data according to an equation (13). Similarly, the CPU 451 calculates an error amount (ΔXj,2, ΔYj,2) of the relative position of the reference mark 807 relative to the reference mark 804 according to the equation (13). Similarly, the CPU 451 calculates an error amount (ΔXj,3, ΔYj,3) of the relative position of the reference mark 808 relative to the reference mark 805 according to the equation (13) (a relative error calculation step, or relative error calculation processing). In other words, the CPU 451 calculates errors between the relative positions calculated in step S33 and the actual relative positions.
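A minimal sketch of the relative-position bookkeeping of steps S33 and S34 follows; taking the difference right-minus-left is an assumed convention for equations (12) and (13). Note how any common shift of both marks of a pair, i.e., the deviation components A1 and A2, cancels in the difference.

```python
import numpy as np

def relative_mark_errors(calc_l, calc_r, act_l, act_r):
    """Errors in the relative positions of mark pairs at equal h.

    calc_* and act_* are arrays of shape (K, 2) holding the calculated
    and actual (X, Y) positions of the left/right marks of each pair.
    The right-minus-left difference is an assumed convention; a common
    shift of both marks (deviation components A1, A2) cancels out.
    """
    rel_calc = calc_r - calc_l   # (X'_r - X'_l, Y'_r - Y'_l)
    rel_act = act_r - act_l      # actual relative positions (X_k, Y_k)
    return rel_calc - rel_act    # error amounts (dX_{j,k}, dY_{j,k})

# Example with K = 3 mark pairs (positions in mm, purely illustrative):
calc_l = np.array([[-10.01, 0.0], [-15.02, 0.0], [-20.00, 0.0]])
calc_r = np.array([[10.02, 0.0], [15.01, 0.0], [20.03, 0.0]])
act_l = np.array([[-10.0, 0.0], [-15.0, 0.0], [-20.0, 0.0]])
act_r = np.array([[10.0, 0.0], [15.0, 0.0], [20.0, 0.0]])
print(relative_mark_errors(calc_l, calc_r, act_l, act_r))
```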
The errors (ΔXj,k, ΔYj,k) are distortion data that contains information regarding the distortions contained in the provisional shape data. However, they are deviation amounts of the relative positions between points away from the axis C2 of the subject surface W2a by an equal distance. Therefore, they do not contain the deviation components A1 and A2 illustrated in the drawings.
Therefore, in step S35, the CPU 451 extracts the components A3 to A5 illustrated in the drawings by fitting, to the errors (ΔXj,k, ΔYj,k), functions corresponding to the distortion components (a distortion component calculation step, or distortion component calculation processing).
According to this method, the CPU 451 can extract the distortion components A3 to A5 illustrated in the drawings without being affected by the deviation of the scanning axis.
The CPU 451 converts the lateral coordinates in the respective shape data pieces with use of the thus-calculated distortion data (fx,cde(h, θ), fy,cde(h, θ)) according to an equation (15).
Based on this coordinate conversion, in step S36, the CPU 451 corrects the distortion components A3 to A5 illustrated in the drawings that are contained in the respective provisional shape data pieces.
After correcting the distortion components A3 to A5 illustrated in the drawings, in step S37, the CPU 451 calculates the deviation components A1 and A2 remaining in the respective shape data pieces as follows.
First, the CPU 451 calculates positions (X″l,j,k, Y″l,j,k) and (X″r,j,k, Y″r,j,k) of the reference marks 803 to 808 in the shape data in which the distortion components A3 to A5 illustrated in the drawings are corrected.
Next, the CPU 451 calculates amounts ΔXj(hk) and ΔYj(hk) of the components A1 and A2 illustrated in the drawings at the respective distances hk from these positions.
The CPU 451 calculates the amounts ΔXj(h) and ΔYj(h) of the deviation components A1 and A2 illustrated in the drawings over the whole subject surface W2a from the amounts ΔXj(hk) and ΔYj(hk) at the respective distances hk.
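How ΔXj(h) and ΔYj(h) are extended from the three sampled radii hk to the whole surface is not specified here; a simple assumed model is a low-degree polynomial fit, as in this sketch:

```python
import numpy as np

def deviation_over_h(h_k, dX_k, dY_k, deg=2):
    """Extend deviation amounts known at the mark radii h_k to all h.

    Assumption: a polynomial of degree `deg` in h models the deviation;
    with three radii and deg=2 the fit passes through the samples
    exactly. Returns callables dX_j(h) and dY_j(h).
    """
    px = np.polynomial.polynomial.polyfit(h_k, dX_k, deg)
    py = np.polynomial.polynomial.polyfit(h_k, dY_k, deg)
    return (lambda h: np.polynomial.polynomial.polyval(h, px),
            lambda h: np.polynomial.polynomial.polyval(h, py))

# Usage: dXj, dYj = deviation_over_h([5.0, 10.0, 15.0],
#                                    [0.01, 0.02, 0.04],
#                                    [0.00, -0.01, -0.02])
# dXj(7.5) then gives the interpolated deviation at h = 7.5.
```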
In step S38, the CPU 451 uses these amounts ΔXj(h) and ΔYj(h) to convert the lateral coordinates in the respective shape data pieces in which the distortion components A3 to A5 illustrated in the drawings are corrected, thereby correcting the deviation components A1 and A2.
Lastly, in step S39, the CPU 451 averages the acquired M shape data pieces to calculate a single shape data piece. In other words, the CPU 451 calculates shape data of the subject surface W2a corrected based on the deviation components A1 and A2 and the distortion components A3 to A5 by performing steps S35 to S39 (a shape data calculation step or shape data calculation processing).
In this manner, the present second exemplary embodiment can calculate shape data with the lateral coordinates accurately corrected by this series of measurement operations.
An experiment of aspheric interference measurement was conducted to compare the lateral coordinate accuracy of the shape data between measurement that uses this method and measurement that does not. As a result, it was confirmed that the measurement that did not use the present second exemplary embodiment had a lateral coordinate error of 100 μm or more, while use of the present second exemplary embodiment reduced this error to 20 μm or less. This indicates that the present second exemplary embodiment is highly effective in reducing a lateral coordinate error in aspheric interference measurement.
Further, according to the present second exemplary embodiment, the CPU 451 calculates, when calculating the distortion components, the relative positional relationship among the plurality of lateral coordinate references placed at an equal distance from the central point. Since no complicated calculation is required at this time, the distortion components can be calculated more easily.
Further, according to the present second exemplary embodiment, the distortions contained in the shape data are corrected with use of the plurality of deviation and distortion components, and therefore can be corrected more accurately.
Further, according to the present second exemplary embodiment, since the deviation components and the distortion components are acquired while the subject surface W2a is scanned at various positions, the distortions contained in the shape data can be corrected more accurately.
Further, according to the present second exemplary embodiment, since an additional lateral coordinate calibrator does not have to be newly prepared, a cost reduction can be realized.
In the present second exemplary embodiment, the distortions in the shape data are directly corrected with use of the distortion data acquired from the positions of the reference marks. However, the correction method is not limited thereto. The distortions in the interference fringe phase data may be corrected with use of the acquired distortion data, and the shape data may be formed from this interference fringe phase data. Alternatively, the distortions in the captured images may be corrected, and the interference fringe phase data may be calculated therefrom. After that, the shape data may be formed.
A third exemplary embodiment will be described as follows. A surface shape measurement apparatus according to the third exemplary embodiment is also configured in a similar manner to the shape measurement apparatus 100 according to the above-described first exemplary embodiment illustrated in the drawings.
A procedure according to the present third exemplary embodiment is performed according to the flowchart illustrated in the drawings.
After calculating the deviations of the aperture positions (errors or distortion data) in step S51, in step S52, the CPU 451 calculates distortion data ΔX′(pΔh, qΔθ) and ΔY′(pΔh, qΔθ) in which the deviation components A1 and A2 illustrated in the drawings are removed, according to an equation (21).
In the equation (21), the second and third terms on the right side indicate overall positional deviations of the 2π/Δθ apertures arranged at positions placed at an equal distance (=pΔh) from the axis of the aspheric surface, i.e., indicate the deviation components A1 and A2 illustrated in the drawings.
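One simple reading of this removal, assumed here since equation (21) itself is not reproduced: at each radius pΔh, subtract from the aperture position errors the mean error over all 2π/Δθ apertures on that ring, which cancels any uniform shift of the ring.

```python
import numpy as np

def remove_circumferential_mean(dX, dY):
    """Remove deviation components A1 and A2 from aperture errors.

    dX, dY have shape (P, Q): P radii (p * dh) by Q angles (q * dtheta).
    Subtracting the per-ring mean is an assumed interpretation of the
    second and third terms of equation (21); a uniform shift of a ring
    contributes the same error to every aperture on it and vanishes.
    """
    dXp = dX - dX.mean(axis=1, keepdims=True)
    dYp = dY - dY.mean(axis=1, keepdims=True)
    return dXp, dYp
```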
After removing the deviation components each having an orientation and an amount both unchangeable along the circumferential direction centered at the optical axis, in step S53, the CPU 451 performs fitting on the resulting distortion data ΔX′ and ΔY′ with use of the equation (7), thereby calculating the distortion data (the distortion components) over the whole subject surface W1a.
After that, the CPU 451 calculates the shape data of the subject surface W1a according to steps S54 to S56, which are similar to steps S13 to S15.
The present invention is not limited to the above-described exemplary embodiments, and can be modified in a number of manners within the technical idea of the present invention by a person having ordinary knowledge in the art to which the present invention pertains.
Specifically, each processing operation in the above-described exemplary embodiments is performed by the CPU 451 serving as the calculation unit of the controller 450. Therefore, the above-described exemplary embodiments may be also achieved by supplying a recording medium storing a program capable of realizing the above-described functions to the controller 450, and causing the computer (the CPU or a micro processing unit (MPU)) of the controller 450 to read out the program stored in the recording medium to execute it. In this case, the program itself read out from the recording medium realizes the functions of the above-described exemplary embodiments, and the program itself and the recording medium storing this program constitute the present invention.
Further, the above-described exemplary embodiments have been described based on the example in which the computer-readable recording medium is the HDD 454, and the program 457 is stored in the HDD 454. However, the present invention is not limited thereto. The program 457 may be recorded in any recording medium as long as this recording medium is a computer-readable recording medium. For example, the ROM 452, the external storage device 480, and the recording disk 458 illustrated in the drawings may be used as the recording medium for supplying the program.
Further, the above-described exemplary embodiments may be realized in such a manner that the program in the above-described exemplary embodiments is downloaded via a network, and is executed by the computer.
Further, the present invention is not limited to the embodiments in which the computer reads and executes the program code, thereby realizing the functions of the above-described exemplary embodiments. The present invention also includes an embodiment in which an operating system (OS) or the like running on the computer performs a part or whole of actual processing based on an instruction of the program code, and this processing realizes the functions of the above-described exemplary embodiments.
Further, the program code read out from the recording medium may be written in a memory provided in a function extension board inserted into the computer or a function extension unit connected to the computer. The present invention also includes an embodiment in which a CPU or the like provided in this function extension board or function extension unit performs a part or whole of the actual processing based on the instruction of the program code, and this processing realizes the functions of the above-described exemplary embodiments.
According to the present invention, since the deviation of the scanning axis and the deviation due to the aberration and the like are corrected, the shape data can be more accurately acquired than the conventional techniques.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2013-029575 filed Feb. 19, 2013, which is hereby incorporated by reference herein in its entirety.