1. Field of the Invention
The present invention relates to a method of acquiring centroid positions of light spots in a wavefront sensor to be used to measure a wavefront of light.
2. Description of the Related Art
Wavefront sensors include ones, such as a Shack-Hartmann sensor, that are constituted by a microlens array and an optical detector. Such a wavefront sensor divides and condenses a wavefront (i.e., a phase distribution) of entering light by the multiple microlenses constituting the microlens array to capture an image of the wavefront as an image of arrayed light spots. A calculation (measurement) can be made of wavefront aberration from positional shift amounts of the light spots shown by light intensity data acquired by the image capturing. Moreover, using such a wavefront sensor enables measuring even a wavefront having a large aberration, which enables measuring a shape of an aspheric surface as well.
However, accurately measuring the wavefront having the large aberration requires accurately acquiring centroid positions of the light spots formed by the microlenses. Japanese Patent Laid-Open No. 2010-185803 discloses a method of previously setting, for each microlens, an area of CCD data in which the centroid position of the light spot is calculated. On the other hand, Japanese Translation of PCT International Application Publication No. JP-T-2002-535608 discloses a method of setting an area in which a centroid position of a specific light spot is calculated by using a position of a light spot adjacent to the specific light spot and of calculating the centroid position in the area.
However, an increase in the wavefront aberration of the light entering the wavefront sensor increases the positional shift amounts of the light spots formed by the microlenses, which causes the centroid positions of the light spots to be located outside the area set by each of the methods respectively disclosed in Japanese Patent Laid-Open No. 2010-185803 and Japanese Translation of PCT International Application Publication No. JP-T-2002-535608. Consequently, the result of the centroid position calculation has an error, making it impossible to measure the wavefront with good accuracy.
The present invention provides a light spot centroid position acquisition method and others, each capable of accurately calculating centroid positions of light spots formed by microlenses even when a wavefront or a wavefront aberration of light entering a wavefront sensor is large. The present invention further provides a wavefront measurement method and a wavefront measurement apparatus each using the light spot centroid position acquisition method.
The present invention provides as an aspect thereof a light spot centroid position acquisition method of acquiring a centroid position of each of light spots formed on an optical detector by multiple microlenses arranged mutually coplanarly in a wavefront sensor to be used to measure a wavefront of light. The method includes a first step of estimating, by using known centroid positions or known intensity peak positions of a first light spot and a second light spot respectively formed by a first microlens and a second microlens in the multiple microlenses, a position of a third light spot formed by a third microlens in the multiple microlenses, the first to third microlenses being collinearly arranged, a second step of setting, by using the estimated position of the third light spot, a calculation target area of a centroid position of the third light spot on the optical detector, and a third step of calculating the centroid position of the third light spot in the calculation target area.
The present invention provides as another aspect thereof a wavefront measurement method including performing the above light spot centroid position acquisition method, and measuring a wavefront of light by using the centroid positions of the light spots.
The present invention provides as yet another aspect thereof a wavefront measurement apparatus including a wavefront sensor including an optical detector and multiple microlenses arranged mutually coplanarly, and a processor configured to perform a light spot centroid position acquisition process to acquire a centroid position of each of light spots formed on the optical detector by the multiple microlenses and configured to measure the wavefront by using the centroid positions of the light spots. The light spot centroid position acquisition process includes a first process to estimate, by using known centroid positions or known intensity peak positions of a first light spot and a second light spot respectively formed by a first microlens and a second microlens in the multiple microlenses, a position of a third light spot formed by a third microlens in the multiple microlenses, the first to third microlenses being collinearly arranged, a second process to set a calculation target area of a centroid position of the third light spot on the optical detector by using the estimated position of the third light spot, and a third process to calculate the centroid position of the third light spot in the calculation target area.
The present invention provides as still another aspect thereof a method of manufacturing an optical element. The method includes measuring a shape of the optical element by using the above wavefront measurement method or apparatus, and manufacturing the optical element by using a result of the measurement.
The present invention provides as further still another aspect thereof a non-transitory computer-readable storage medium storing a light spot centroid position acquisition program to cause a computer to perform a process using the above light spot centroid position acquisition method.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Exemplary embodiments of the present invention will be described below with reference to the attached drawings.
In
In
In this embodiment, with an assumption that the centroid positions (Gx(i−2, j),Gy(i−2, j)) and (Gx(i−1, j),Gy(i−1,j)) of the light spots a and b are known, the centroid position (Gx(i,j),Gy(i,j)) of the light spot c is acquired as described below.
The centroid position (G0x(i,j),G0y(i,j)) of the light spot formed when the plane wavefront light enters the microlens 1a, which is expressed by expression (1), is rounded to an integer value (g0x(i,j),g0y(i,j)) by using the definition expressed by expression (2), where round( ) represents a function to round the number in the parentheses to the integer closest to that number.
g0x(i,j)=round(G0x(i,j))
g0y(i,j)=round(G0y(i,j)) (2)
In this case, the centroid position (Gx(i,j),Gy(i,j)) of the light spot formed by the light (wavefront) entering one microlens is acquired by expression (3).
In expression (3), I(s,t) represents a light intensity at a pixel in the CCD 2 located in a column s and a row t. Symbol n represents a positive real number having a value of approximately 1 to 3. A value 2r+1 represents the number of pixels along each side of a calculation target area (hereinafter referred to as “a centroid calculation area”) on the CCD 2 in which the centroid position of the light spot formed by one microlens is calculated. Since the light spots formed by the other microlenses are present at positions distant by the microlens pitch p from the light spot whose centroid position is to be calculated (this light spot is hereinafter referred to also as “a target light spot”), it is desirable that r be approximately a half of the microlens pitch p, which is expressed by expression (4).
In addition, since light intensity data (measurement data) acquired from the centroid calculation area on the CCD 2 contains a background noise such as a shot noise, a calculation of expression (3) may be performed after light intensity data corresponding to when the CCD 2 receives no light is subtracted from the measured data.
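Under an assumed numpy row/column indexing convention and assumed function and parameter names, the intensity-weighted centroid calculation of expression (3), together with the background subtraction described above, can be sketched as follows:

```python
import numpy as np

def centroid_in_window(I, gx0, gy0, r, n=2, dark=None):
    """Intensity-weighted centroid of one light spot in the (2r+1) x (2r+1)
    centroid calculation area centered on the integer position (gx0, gy0),
    a sketch of expression (3). I[t, s] is the light intensity at row t,
    column s (a numpy convention assumed here); n is the weighting power of
    approximately 1 to 3; dark is an optional no-light frame subtracted to
    suppress background noise such as shot noise."""
    ys = np.arange(gy0 - r, gy0 + r + 1)   # row indices of the area
    xs = np.arange(gx0 - r, gx0 + r + 1)   # column indices of the area
    win = I[np.ix_(ys, xs)].astype(float)
    if dark is not None:
        # subtract the no-light measurement, clamping negatives to zero
        win = np.clip(win - dark[np.ix_(ys, xs)], 0.0, None)
    win = win ** n
    total = win.sum()
    Gx = (win.sum(axis=0) * xs).sum() / total   # weighted column position
    Gy = (win.sum(axis=1) * ys).sum() / total   # weighted row position
    return Gx, Gy
```

Following expression (4), r would be chosen as approximately half the microlens pitch p so that neighboring spots stay outside the area.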
A wavefront W(x,y) and angular distributions ψx(x,y) and ψy(x,y) of the light entering the microlens array 1 and the centroid position (Gx,Gy) of the light spot have among them the relations expressed by expression (5).
For this reason, the wavefront W is calculated from the intensity I as follows. First, the centroid position (Gx,Gy) of the light spot is calculated by using expression (3) for all the microlenses 1a that the plane wavefront light enters, and then the angular distribution of or a differential value of the wavefront of the light (light rays) entering the microlenses 1a is calculated by using expression (5). Next, a two-dimensional integral calculation is performed on the angular distribution of the light rays or the differential value of the wavefront. As an integral calculation method, a method described in the following literature is known: W. H. Southwell, “Wave-front estimation from wave-front slope measurement”, J. Opt. Soc. Am. 70, pp 998-1006, 1980.
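The two-dimensional integral calculation can be sketched as a least-squares zonal reconstruction in the spirit of Southwell's method; the following is an illustrative implementation under assumed names, not necessarily the exact scheme of the cited literature:

```python
import numpy as np

def integrate_slopes(sx, sy, p):
    """Least-squares zonal integration of slope data (a Southwell-type
    sketch). sx and sy hold dW/dx and dW/dy sampled on an N x M grid with
    pitch p; returns W up to the piston term, which is pinned at node (0, 0)."""
    N, M = sx.shape
    idx = np.arange(N * M).reshape(N, M)
    eqs = []  # each entry: (node_a, node_b, target value of W[a] - W[b])
    for i in range(N):
        for j in range(M - 1):  # x direction: average slope between neighbors
            eqs.append((idx[i, j + 1], idx[i, j],
                        0.5 * (sx[i, j + 1] + sx[i, j]) * p))
    for i in range(N - 1):
        for j in range(M):      # y direction
            eqs.append((idx[i + 1, j], idx[i, j],
                        0.5 * (sy[i + 1, j] + sy[i, j]) * p))
    A = np.zeros((len(eqs) + 1, N * M))
    rhs = np.zeros(len(eqs) + 1)
    for k, (a, b, d) in enumerate(eqs):
        A[k, a], A[k, b], rhs[k] = 1.0, -1.0, d
    A[-1, 0] = 1.0  # fix W = 0 at one node to remove the integration constant
    W, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return W.reshape(N, M)
```

For a tilted plane wavefront the reconstruction reproduces the plane exactly, since the difference equations are mutually consistent.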
In the above-described manner, the wavefront W is calculated from the intensity I.
In the calculation method in which the centroid calculation area is fixed beforehand for each microlens 1a, when the wavefront of the entering light is large, such as when the differentiated wavefront satisfies the condition of expression (6) or when the incident angle ψx,y satisfies the condition of expression (7), the position of the light spot falls outside the centroid calculation area, which makes it difficult to calculate the centroid position.
On the other hand, the method disclosed in Japanese Translation of PCT International Application Publication No. JP-T-2002-535608 estimates the centroid position of the target light spot c by using the centroid position of one light spot b adjacent to the target light spot c. For instance, when a position distant by the microlens pitch p from the centroid position of the light spot b is defined as a primary estimation position (gx′(i,j),gy′(i,j)) of the target light spot c as illustrated in
From the estimated position (gx′(i,j),gy′(i,j)) of the target light spot c, the centroid calculation area is set as follows.
x direction: gx′(i,j)−r˜gx′(i,j)+r
y direction: gy′(i,j)−r˜gy′(i,j)+r
The centroid position of the target light spot c is calculated by expression (9).
In addition, a relation between the wavefront W and the centroid position (Gx,Gy) of the light spot in the x direction is expressed by expression (10):
Similarly, in the y direction, a relation between the wavefront W and the centroid position (Gx,Gy) of the light spot is expressed by expression (11).
Thus, when the wavefront W satisfies a condition of expression (12) or (13), the position of the target light spot c is outside of the centroid calculation area, which makes it difficult to calculate the centroid position of the target light spot c.
This embodiment sets the centroid calculation area corresponding to the light spot c formed by the microlens C by using the known centroid positions or known intensity peak positions of the light spots a and b respectively formed by the microlenses A and B arranged on the identical x-y plane on which the microlens C is disposed. The expression “on the identical x-y plane on which the microlens C is disposed” can be rephrased as “on a straight line extending from the microlens C”. Moreover, the number of the light spots (that is, of the microlenses forming these light spots) whose known centroid positions or intensity peak positions are used to set the centroid calculation area only has to be at least two and may be three or more, as described later.
A detailed description will hereinafter be made of the method of setting the centroid calculation area. As illustrated in
gx′(i,j)=round[G0x(i,j)+2{Gx(i−1,j)−G0x(i−1,j)}−{Gx(i−2,j)−G0x(i−2,j)}]
gy′(i,j)=round[G0y(i,j)+2{Gy(i−1,j)−G0y(i−1,j)}−{Gy(i−2,j)−G0y(i−2,j)}] (14)
Also in this embodiment, the centroid calculation area is set, by using the primary estimation position (gx′(i,j),gy′(i,j)) of the target light spot c calculated by expression (14), as follows.
x direction: gx′(i,j)−r˜gx′(i,j)+r
y direction: gy′(i,j)−r˜gy′(i,j)+r
Expression (14) is based on an assumption that a vector from the light spot b to the target light spot c is equal to the vector v from the light spot a to the light spot b. In other words, first-order and second-order differential values of the wavefront are calculated from the known centroid positions of the two light spots a and b, and the position (gx′(i,j),gy′(i,j)) of the target light spot c is estimated by using the differential values. Thereafter, the centroid calculation area is set at the position acquired by adding the vector v to the position of the light spot b.
For the estimation of the primary estimation position (gx′(i,j),gy′(i,j)) of the target light spot c, known intensity peak positions may be used instead of the known centroid positions of the light spots a and b.
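A sketch of the primary estimation of expression (14), assuming dictionary containers G0 (reference spot positions for a plane wavefront) and G (known measured centroids or intensity peak positions), each mapping a microlens index (i, j) to an (x, y) pair; the container and function names are assumptions:

```python
def estimate_spot_position(G0, G, i, j):
    """Primary estimation position of the target spot (i, j) extrapolated
    from the two known spots (i-1, j) and (i-2, j), per expression (14):
    the next spot shift is assumed to continue the linear trend of the
    previous two shifts."""
    est = []
    for axis in (0, 1):  # x component, then y component
        shift1 = G[(i - 1, j)][axis] - G0[(i - 1, j)][axis]
        shift2 = G[(i - 2, j)][axis] - G0[(i - 2, j)][axis]
        # extrapolated shift 2*shift1 - shift2, added to the reference position
        est.append(round(G0[(i, j)][axis] + 2 * shift1 - shift2))
    return tuple(est)  # (gx', gy')
```

The centroid calculation area would then span gx′±r and gy′±r around this estimate.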
On the other hand, in calculating the centroid position (Gx(i,j),Gy(i,j)) of the light spot by substituting the primary estimation position (gx′(i,j),gy′(i,j)) calculated by expression (14) into expression (9), there is a case where the centroid position (Gx(i,j), Gy(i,j)) satisfies a condition of expression (15) or (16). In this case, it is desirable to recalculate the primary estimation position (gx′(i,j),gy′(i,j)) by using expression (17) such that the centroid position (Gx(i,j),Gy(i,j)) is located at a center of the centroid calculation area and then to recalculate the centroid position (Gx(i,j),Gy(i,j)) by using expression (9).
|Gx(i,j)−gx′(i,j)|>0.5 (15)
|Gy(i,j)−gy′(i,j)|>0.5 (16)
gx′(i,j)=round{Gx(i,j)}
gy′(i,j)=round{Gy(i,j)} (17)
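The recalculation described by expressions (15) to (17) can be sketched as an iteration that re-centers the centroid calculation area until the centroid lies within half a pixel of the area center; the helper and all names are assumptions:

```python
import numpy as np

def _centroid(I, gx, gy, r):
    # plain intensity-weighted centroid in a (2r+1) x (2r+1) window
    ys = np.arange(gy - r, gy + r + 1)
    xs = np.arange(gx - r, gx + r + 1)
    win = I[np.ix_(ys, xs)].astype(float)
    t = win.sum()
    return (win.sum(axis=0) * xs).sum() / t, (win.sum(axis=1) * ys).sum() / t

def refine_centroid(I, gx, gy, r, max_iter=5):
    """If the computed centroid moves more than half a pixel from the window
    center (expressions (15)/(16)), re-center the window on the rounded
    centroid (expression (17)) and recompute with expression (9)'s window."""
    Gx, Gy = _centroid(I, gx, gy, r)
    for _ in range(max_iter):
        if abs(Gx - gx) <= 0.5 and abs(Gy - gy) <= 0.5:
            break  # centroid is at the center of the calculation area
        gx, gy = round(Gx), round(Gy)
        Gx, Gy = _centroid(I, gx, gy, r)
    return Gx, Gy
```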
The centroid calculation area is not necessarily required to be a rectangular area and may alternatively be, for example, a circular area whose center is the primary estimation position (gx′(i,j), gy′(i,j)). The wavefront for which the centroid position of the light spot can be calculated by this embodiment (that is, a wavefront that can be measured; hereinafter referred to as “a measurable wavefront”) is expressed by expression (18) or (19).
As an example, a calculation is made of a size of a measurable wavefront for which the centroid position of the light spot can be calculated by the wavefront sensor 3 having values shown by expression (20). In this calculation, the wavefront is expressed by expression (22) by using a coordinate h defined by expression (21), and the size of the wavefront is expressed by a coefficient Z.
In expressions (20) and (22), R represents an analytical radius. Since it is only necessary to calculate the size of the measurable wavefront at a position where a variation of the wavefront is largest, a largest coefficient Z is calculated by regarding h as being equal to R and substituting the above values into expressions (6), (13) and (19). The coefficient Z is derived as Z=5.8[μm] from the method fixing the centroid calculation area and is derived as Z=38.9[μm] from the method setting the centroid calculation area by using the centroid position of the one adjacent light spot.
In contrast, the method of this embodiment enables calculating a centroid position of a light spot formed by a wavefront with a largest allowable size of Z=540[μm]. That is, this embodiment enables providing a measurable wavefront whose size is significantly larger than those provided by the conventional methods.
Although this embodiment has described above the case of primarily estimating the position of the target light spot by using the known centroid positions (or the known intensity peak positions) of the two light spots, the position of the target light spot may alternatively be primarily estimated by using known centroid positions of three or more light spots. For instance, when centroid positions (Gx(i−3,j),Gy(i−3,j)), (Gx(i−2,j),Gy(i−2,j)) and (Gx(i−1,j),Gy(i−1,j)) of light spots formed by three microlenses whose positions are (i−3,j), (i−2,j) and (i−1,j) are known, the primary estimation position (gx′(i,j),gy′(i,j)) of the target light spot formed by the microlens whose position is (i,j) is estimated by using expression (23).
gx′(i,j)=round[G0x(i,j)+3{Gx(i−1,j)−G0x(i−1,j)}−3{Gx(i−2,j)−G0x(i−2,j)}+{Gx(i−3,j)−G0x(i−3,j)}]
gy′(i,j)=round[G0y(i,j)+3{Gy(i−1,j)−G0y(i−1,j)}−3{Gy(i−2,j)−G0y(i−2,j)}+{Gy(i−3,j)−G0y(i−3,j)}] (23)
In this estimation, the measurable wavefront for which the centroid position of the target light spot can be calculated is expressed by expression (24).
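The three-spot estimation of expression (23) can be sketched in the same assumed style, with dictionary containers G0 (reference positions) and G (known centroids) mapping a microlens index to an (x, y) pair:

```python
def estimate_from_three(G0, G, i, j):
    """Primary estimation of the target spot (i, j) from the three known
    spots (i-1, j), (i-2, j) and (i-3, j), per expression (23): a
    second-order extrapolation of the spot shifts, which reproduces any
    shift trend up to quadratic exactly."""
    est = []
    for axis in (0, 1):  # x component, then y component
        s1 = G[(i - 1, j)][axis] - G0[(i - 1, j)][axis]
        s2 = G[(i - 2, j)][axis] - G0[(i - 2, j)][axis]
        s3 = G[(i - 3, j)][axis] - G0[(i - 3, j)][axis]
        est.append(round(G0[(i, j)][axis] + 3 * s1 - 3 * s2 + s3))
    return tuple(est)  # (gx', gy')
```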
The microlenses forming the light spots whose centroid positions (or the intensity peak positions) are known are not necessarily required to be adjacent to the microlens (hereinafter referred to also as “a target microlens”) forming the target light spot. The centroid position of the target light spot may be primarily estimated by using known centroid positions of any light spots formed by the microlenses arranged coplanarly (or collinearly) with the target microlens.
For instance, when (i−2,j) and (i−4,j) represent positions of two microlenses arranged on an identical straight line y=j on which the target microlens whose position is (i,j) is disposed, the primary estimation position (gx′(i,j),gy′(i,j)) of the target light spot may be acquired by using known centroid positions (Gx(i−2, j),Gy(i−2, j)) and (Gx(i−4, j),Gy(i−4,j)) of the light spots formed by the two microlenses and expression (25).
gx′(i,j)=round[G0x(i,j)+2{Gx(i−2,j)−G0x(i−2,j)}−{Gx(i−4,j)−G0x(i−4,j)}]
gy′(i,j)=round[G0y(i,j)+2{Gy(i−2,j)−G0y(i−2,j)}−{Gy(i−4,j)−G0y(i−4,j)}] (25)
Alternatively, when (i−1,j−1) and (i−2,j−2) represent positions of two microlenses arranged on a straight line y=x−i+j, the primary estimation position (gx′(i,j),gy′(i,j)) may be calculated by using known centroid positions (Gx(i−1,j−1),Gy(i−1,j−1)) and (Gx(i−2,j−2),Gy(i−2,j−2)) of light spots formed by the two microlenses and expression (26):
gx′(i,j)=round[G0x(i,j)+2{Gx(i−1,j−1)−G0x(i−1,j−1)}−{Gx(i−2,j−2)−G0x(i−2,j−2)}]
gy′(i,j)=round[G0y(i,j)+2{Gy(i−1,j−1)−G0y(i−1,j−1)}−{Gy(i−2,j−2)−G0y(i−2,j−2)}] (26)
Next, with reference to a flowchart of
At step A-1, the computer selects one light spot whose centroid position is to be calculated first of all and then calculates that centroid position. As the first light spot, the computer can select one located near a centroid of an intensity distribution of the light entering the wavefront sensor 3 or near a center of the CCD 2.
Next, at step A-2, the computer selects, from all the microlenses, a target microlens, that is, a microlens for whose light spot the computer calculates the centroid position by using the above-described light spot centroid position acquisition method. As illustrated in
Next, at step A-3 (a first step), the computer selects, as illustrated in
Next, at step A-4 (a second step), the computer sets the centroid calculation area by using the primary estimation position (gx′(i,j),gy′(i,j)) of the target light spot. When the wavefront to be measured is a divergent wavefront, the value of r representing the size of the centroid calculation area may be set to a value expressed by expression (4) since an interval between the light spots mutually adjacent is long. On the other hand, when the wavefront to be measured is a convergent wavefront, since the interval between the mutually adjacent light spots is short, it is desirable, for example, to calculate an interval between the known centroid positions of the two light spots and to set the value of r to a half of the calculated interval.
Subsequently, at step A-5 (a third step), the computer calculates the centroid position of the target light spot with expression (9) by using the primary estimation position (gx′(i,j),gy′(i,j)) of the target light spot and the value of r.
When the centroid position of the target light spot calculated at this step satisfies the condition of expression (15) or (16), the computer may return to step A-4 to set a new centroid calculation area such that the centroid position of the target light spot is located at a center of the newly set centroid calculation area. In this case, the computer recalculates the centroid position of the target light spot in the newly set centroid calculation area.
Thereafter, at step A-6, the computer determines whether or not the calculation of the centroid positions of all the light spots formed by all the microlenses has been completed. If not completed, the computer returns to step A-2. If completed, the computer ends this process. After returning to step A-2, the computer selects, as a new target microlens, a microlens D(i+1,j) adjacent to the target microlens C for which the calculation of the centroid position of the target light spot has been completed. Then, the computer calculates a centroid position of a target light spot formed by the target microlens D. In this manner, the computer sequentially calculates the centroid position of the target light spot for all the microlenses.
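Steps A-2 to A-5 can be illustrated in a simplified one-dimensional form as a walk along one row of microlenses; the actual method seeds from a spot near the beam centroid and covers the whole two-dimensional array, and all names here are assumptions:

```python
import numpy as np

def _spot_centroid(I, gx, gy, r):
    # intensity-weighted centroid in a (2r+1) x (2r+1) window
    ys = np.arange(gy - r, gy + r + 1)
    xs = np.arange(gx - r, gx + r + 1)
    win = I[np.ix_(ys, xs)].astype(float)
    t = win.sum()
    return (win.sum(axis=0) * xs).sum() / t, (win.sum(axis=1) * ys).sum() / t

def scan_row(I, G0, r, seeds):
    """1-D illustration of the loop of steps A-2 to A-6: given measured
    centroids for the first two lenses of a row (seeds), walk along the row,
    estimating each target spot from the two previously computed centroids
    (expression (14)), setting the window there, and computing its centroid."""
    G = list(seeds)  # known centroids (x, y), extended as the walk proceeds
    for i in range(2, len(G0)):
        # primary estimation position from the two previous spots
        gx = round(G0[i][0] + 2 * (G[i - 1][0] - G0[i - 1][0])
                   - (G[i - 2][0] - G0[i - 2][0]))
        gy = round(G0[i][1] + 2 * (G[i - 1][1] - G0[i - 1][1])
                   - (G[i - 2][1] - G0[i - 2][1]))
        G.append(_spot_centroid(I, gx, gy, r))
    return G
```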
The above-described light spot centroid position acquisition method enables accurately calculating the centroid positions of all the light spots formed by all the microlenses even when the wavefront (or the wavefront aberration) of the light entering the wavefront sensor 3 is large. Moreover, this method performs neither a calculation process searching for an intensity peak position of the light received by the CCD 2 nor a repetitive calculation process including backtracking and therefore enables calculating the centroid positions of all of the light spots at high speed.
The light spot centroid position acquisition method described in this embodiment can be applied not only to a case of using a Shack-Hartmann sensor as the wavefront sensor, but also to a case of using a wavefront sensor constituted by a Shack-Hartmann plate provided with multiple microlenses and a CCD sensor.
In
Light from the light source 4 is condensed by the condenser lens 5 toward the pinhole 6. A spherical wavefront exiting from the pinhole 6 enters the measurement object lens 7. The light (wavefront) transmitted through the measurement object lens 7 is measured by the wavefront sensor 3.
As the light source 4, a single-color laser, a laser diode or a light-emitting diode is used. The pinhole 6 is formed with an aim to produce a spherical wavefront with less aberration and therefore may be constituted alternatively by a single-mode fiber.
As the wavefront sensor 3, a Shack-Hartmann sensor or a light-receiving sensor constituted by a Shack-Hartmann plate provided with multiple microlenses and a CCD sensor is used.
Data (light intensity data) on the wavefront measured by the wavefront sensor 3 is input to the analytical calculator 8. The analytical calculator 8, which is constituted by a personal computer, calculates centroid positions of all of light spots formed on the wavefront sensor 3 according to the light spot centroid position acquisition program described in Embodiment 1 and further calculates the wavefront by using the calculated centroid positions of all the light spots. This calculation enables acquiring aberration of the measurement object lens 7.
In
Light from the light source 4 is condensed by the condenser lens 5 toward the pinhole 6. A spherical wavefront exiting from the pinhole 6 is reflected by the half mirror 9 and then converted by the projection lens 10 into a convergent light. The convergent light is reflected by the reference surface 11a or the measurement object surface 12a, transmitted through the projection lens 10, the half mirror 9 and the imaging lens 13 and then enters the wavefront sensor 3.
When the reference surface 11a of the reference lens 11 or the measurement object surface 12a of the measurement object lens 12 is an aspheric surface, the wavefront of the light entering the wavefront sensor 3 is large.
In order to calibrate optical systems such as the projection lens 10 and the imaging lens 13, this embodiment measures the reference surface 11a having a known surface shape to calculate a shape of the measurement object surface 12a from a difference between the known surface shape of the reference surface 11a and the measurement result of the measurement object surface 12a.
Description will be made of a method of manufacturing the measurement object lens 12, the method including the measurement of the measurement object surface 12a. First, the wavefront sensor 3 receives the light reflected by each of the reference surface 11a and the object surface 12a. Next, the analytical calculator 8 calculates, from light intensity data acquired from the wavefront sensor 3, centroid positions of all light spots according to the light spot centroid position acquisition program described in Embodiment 1.
Then, the analytical calculator 8 calculates, by using the calculated centroid positions of all the light spots, an angular distribution (Sbx,Sby) of the reference surface 11a and an angular distribution (Sx,Sy) of the measurement object surface 12a.
Next, the analytical calculator 8 converts a position (x,y) of each microlens of the wavefront sensor 3 into coordinates (X,Y) on the reference surface 11a. In addition, the analytical calculator 8 converts the angular distribution (Sx,Sy) of the measurement object surface 12a and the angular distribution (Sbx,Sby) of the reference surface 11a respectively into angular distributions (Sx′,Sy′) and (Sbx′,Sby′) on the reference surface 11a.
Thereafter, the analytical calculator 8 calculates a shape difference between the reference surface 11a and the measurement object surface 12a by using a difference between the angular distributions (Sx′−Sbx′,Sy′−Sby′) and by using the coordinates (X,Y). The shape (actual shape) of the measurement object surface 12a can be calculated by adding the shape of the reference surface 11a to the shape difference.
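The final shape calculation can be sketched in one dimension: integrate the slope difference between the measurement object surface and the reference surface along the converted coordinate and add the known reference shape. The trapezoidal cumulative sum and all names are assumptions standing in for the actual two-dimensional integration:

```python
import numpy as np

def shape_from_slope_difference(X, s_meas, s_ref, z_ref):
    """1-D sketch of the last step of the embodiment: integrate the slope
    difference (Sx' - Sbx') over the converted coordinates X and add the
    known reference surface shape z_ref to obtain the actual shape."""
    ds = np.asarray(s_meas, float) - np.asarray(s_ref, float)
    # trapezoidal cumulative integration of the slope difference
    dz = np.concatenate(
        ([0.0], np.cumsum(0.5 * (ds[1:] + ds[:-1]) * np.diff(X))))
    return np.asarray(z_ref, float) + dz
```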
From a difference between the actual shape of the object surface 12a thus calculated (measured) and a target shape thereof, lateral coordinates and shape correction amounts for shaping the measurement object lens 12 are calculated. Then, a shaping apparatus (not illustrated) shapes the measurement object surface 12a. This series of processes enables providing a target lens (measurement object lens) 12 whose surface 12a has the target shape.
The above embodiments enable calculating, at high speed and with good accuracy, the centroid positions of the light spots formed by the microlenses even when the wavefront of the light entering the wavefront sensor or the wavefront aberration of the light is large. This enables performing wavefront measurement using the wavefront sensor at high speed and with good accuracy.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2014-161973, filed on Aug. 8, 2014, which is hereby incorporated by reference herein in its entirety.