The present invention relates to a phase distribution measuring apparatus and a phase distribution measuring method.
In conventional phase distribution measuring apparatuses and phase distribution measuring methods, a bright spot centroid operating region is fixed to a section of the light receiving surface corresponding to each condensing lens.
However, in conventional phase distribution measuring apparatuses, when a bright spot deviates greatly, it moves far out of the fixed centroid operating region, so that accurate calculation of the centroid position becomes impossible.
Therefore, the invention was made in view of the above-mentioned problem, and an object thereof is to provide a phase distribution measuring apparatus with which the centroid position can be accurately calculated even when the bright spot greatly deviates.
In order to achieve the above-mentioned object, a phase distribution measuring apparatus of the invention comprises an image pickup device that includes a fly-eye lens composed of a plurality of condensing lenses arranged in a matrix on a plane and a plurality of light receiving elements arranged in a matrix on a light receiving surface, and is arranged so that the light receiving surface becomes parallel to the plane at a distance corresponding to the focal length of the condensing lenses, and a phase calculating device that calculates a phase distribution of light made incident on the fly-eye lens from data outputted from the image pickup device, wherein the phase calculating device comprises a center position calculation means that calculates bright spot center positions at which the luminance on the light receiving surface becomes maximum, and a centroid position calculation means that calculates the centroid positions of the luminances in centroid operating regions centered on the bright spot center positions.
Since the centroid operating regions are set based on the bright spot center positions calculated by the center position calculation means, the centroid operating regions also move according to the deviations of the bright spots. Therefore, even when the bright spots greatly deviate, it becomes possible to calculate the accurate centroid positions.
Preferably, in the phase distribution measuring apparatus of the invention, the phase calculating device further comprises a bright spot area calculation means that calculates the areas with luminances exceeding a predetermined threshold in predetermined regions centered on the bright spot center positions, and the centroid operating regions are set to occupy areas exceeding the areas calculated by the bright spot area calculation means.
Since the centroid operating regions are set so as to exceed the luminance areas calculated by the bright spot area calculation means, the centroid operating regions more reliably include the bright spots.
Preferably, in the phase distribution measuring apparatus of the invention, the center position calculation means calculates bright spot center positions based only on data of luminances exceeding a predetermined reference value in luminance data, and the centroid position calculation means calculates centroid positions based only on the data exceeding the reference value in the luminance data.
Since the operation is carried out based only on the data of luminances exceeding the predetermined reference value in the luminance data, noise occurring when the image pickup device picks up images is eliminated, and the amount of data to be processed is reduced.
Preferably, in the phase distribution measuring apparatus of the invention, the phase calculating device further comprises a smoothing means which converts luminance data of each light receiving element into a weighted average of the same and luminance data of adjacent light receiving elements.
By this smoothing, noise occurring when the image pickup device picks up images is eliminated.
Preferably, in the phase distribution measuring apparatus of the invention, the phase calculating device further comprises a luminance moment calculation means that calculates the moments of the luminances in the centroid operating region, the center position calculation means and the luminance moment calculation means are formed of a hardware operational circuit, and the centroid position calculation means calculates the centroid positions based on outputs of the hardware operational circuit.
Since operations up to the luminance moment calculation with a large amount of data to be processed are executed by the hardware operational circuit, high-speed operations become possible.
Hereinafter, a preferred embodiment of the phase distribution measuring apparatus 1 of the invention is described in detail with reference to the accompanying drawings.
First, the construction of the phase distribution measuring apparatus 1 is described.
As shown in
The A/D converter 210j (j=1 through n2) of the signal processing part 12 comprises an integrating circuit 220j (j=1 through n2) including a charge amplifier 221j (j=1 through n2), a comparator circuit 230j (j=1 through n2), and a capacitance control mechanism 240j (j=1 through n2).
The integrating circuit 220 comprises a charge amplifier 221 that inputs output signals from the CMOS arrays 110 and amplifies charges of the input signals, a variable capacitance part 222 one end of which is connected to an input terminal of the charge amplifier 221 and the other end of which is connected to an output terminal of the charge amplifier 221, and a switching element 223 one end of which is connected to the input terminal of the charge amplifier 221 and the other end of which is connected to the output terminal to switch an integrating operation and a non-integrating operation of the integrating circuit 220 by being turned ON and OFF in response to a reset signal R.
The variable capacitance part 222 comprises capacitance elements C1 through C4 one-side terminals of which are connected to the input terminal of the charge amplifier 221, switching elements SW11 through SW14 that are connected between the other terminals of the capacitance elements C1 through C4 and the output terminal of the charge amplifier 221 and open and close in response to capacitance instruction signals C11 through C14, and switching elements SW21 through SW24 one-side terminals of which are connected between the capacitance elements C1 through C4 and the switching elements SW11 through SW14 and the other terminals of which are connected to the GND level, and which open and close in response to capacitance instruction signals C21 through C24. The capacitances C1 through C4 of the capacitance elements C1 through C4 satisfy the following relationships.
C1=2C2=4C3=8C4
C0=C1+C2+C3+C4
Herein, C0 is the maximum capacitance required in the integrating circuit 220, and satisfies the following relationship when the saturated charge of the photoelectrical converting part 120 is defined as Q0, and the reference voltage is defined as VREF.
C0=Q0/VREF
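The relations above fix the four capacitances uniquely once C0 is known: taking C4 as the unit, C1+C2+C3+C4 = 15·C4 = C0. The following C sketch works this out numerically; the function name and the sample values of Q0 and VREF are illustrative only and do not appear in the text.

```c
#include <assert.h>
#include <math.h>

/* Illustrative sketch: derive the binary-weighted capacitances C1..C4 of
 * the variable capacitance part 222 from the maximum capacitance
 * C0 = Q0 / VREF, using C1 = 2*C2 = 4*C3 = 8*C4 and C0 = C1+C2+C3+C4.
 * All numerical values are hypothetical. */
void split_capacitance(double q0, double vref, double c[4])
{
    double c0 = q0 / vref; /* maximum capacitance required in circuit 220 */
    c[3] = c0 / 15.0;      /* C4, the smallest (unit) capacitance         */
    c[2] = 2.0 * c[3];     /* C3 = 2*C4                                   */
    c[1] = 4.0 * c[3];     /* C2 = 4*C4                                   */
    c[0] = 8.0 * c[3];     /* C1 = 8*C4, half of C0 plus one unit         */
}
```

For a (hypothetical) saturated charge of 15 pC and VREF = 1 V, this gives C1 through C4 of 8, 4, 2, and 1 pF.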
The comparator circuit 230 compares the value of the integral signal Vs outputted from the integrating circuit 220 with the reference value VREF and outputs a comparison result signal Vc. The capacitance control mechanism 240 outputs a capacitance instruction signal C to the variable capacitance part 222 within the integrating circuit 220 according to the value of the comparison result signal Vc, and outputs a digital signal D1 corresponding to the capacitance instruction signal C.
Furthermore, the CMOS sensor 10 has a timing control part 300 (corresponding to a part of the control part 3 shown in
Digital signals transferred and outputted successively from the most significant bit (MSB) for each CMOS array 110 from the signal processing part 12 constructed as described above are stored in a buffer for a data length (4 bits) corresponding to one pixel, and parallel-serial converted into an output image.
Returning to
The image processing device 24 comprises, as functional components, a luminance data calculating part 241, a smoothing part 242, a center position calculating part 243, a bright spot area calculating part 244, and a centroid data processing part 245. The luminance data calculating part 241 has a function for composing digital image data of focal point images on the light receiving surface 11 by analyzing and arranging the outputs of the CMOS sensor 10.
The smoothing part 242 has a function for smoothing by converting luminance data of each pixel in the digital image data calculated by the luminance data calculating part 241 into a weighted average of the same and luminance data of pixels positioned on the upper, lower, left, and right sides.
The center position calculating part 243 has a function for calculating the center positions of the bright spots in the smoothed digital image data.
The bright spot area calculating part 244 has a function for calculating the area (pixel number) of each bright spot.
The centroid data processing part 245 has a function for calculating centroid data in a centroid operating region upon setting the centroid operating region based on the area (pixel number) of each bright spot. This centroid data contains the 0-order luminance moment (the total of the bright spot luminances in the centroid operating region), the first-order luminance moment in the x direction (the horizontal direction on the light receiving surface 11 or in the digital image data), and the first-order luminance moment in the y direction (the vertical direction on the light receiving surface 11 or in the digital image data).
The data accumulation/display part 26 includes a centroid position calculating part 261, a phase calculating part 262, and an interpolating part 263. The centroid position calculating part 261 has a function for calculating the centroid positions of the bright spots based on the centroid data.
The phase calculating part 262 has a function for calculating phases based on deviations of the centroid positions of the bright spots from the centroid initial positions (centroid positions of the bright spots without phase lag).
The interpolating part 263 has a function for acquiring a continuous phase distribution by interpolating the calculated phase data.
Next, the operations of the phase distribution measuring apparatus 1 are described. When a measuring target laser beam passes through the fly-eye lens 30, images of focal points corresponding to the respective condensing lenses 32 are generated on the light receiving surface 11. The images are picked up by the CMOS sensor 10 and data-processed by the phase calculating device 20.
First, the CMOS sensor 10 scans the images on the light receiving surface 11 to pick up one frame (S502). Simultaneously, the luminance data calculating part 241 analyzes and arranges the luminances (4-bit digital data) of the pixels outputted from the CMOS sensor 10 to compose one-frame digital image data P(n) (n: frame number) (S504).
The smoothing part 242 applies smoothing to the digital image data P(n) (S506). Concretely, the weighted averaging of the luminance of each pixel with the luminances of the pixels on the upper, lower, left, and right sides is repeated twice. The algorithm of the smoothing is shown as follows.
dnew(x,y)=[d(x−1,y)+d(x,y−1)+d(x+1,y)+d(x,y+1)+4d(x,y)]/8; d(x,y)=dnew(x,y);
dnew(x,y)=[d(x−1,y)+d(x,y−1)+d(x+1,y)+d(x,y+1)+4d(x,y)]/8; d(x,y)=dnew(x,y);
d indicates the luminance of a pixel, and (x,y) indicates the coordinates of the pixel on the light receiving surface 11 or in the digital image data P(n).
Furthermore, the smoothing part 242 deletes the luminance data equal to or lower than a predetermined reference value from the smoothed digital image data P(n) (S508). By such smoothing and deletion of luminance data equal to or lower than the reference value, noise occurring in the imaging process of the CMOS sensor 10 can be reduced. Since unnecessary data is deleted, the operating speed is also improved.
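As a concrete illustration, steps S506 and S508 can be sketched in C as follows. The image size, the buffered (rather than in-place) update of each pass, and the border handling (border pixels left unchanged) are assumptions made for the sketch; the text does not specify them.

```c
#include <assert.h>
#include <string.h>

#define W 8
#define H 8

/* One pass of the smoothing kernel of S506:
 * dnew = (left + up + right + down + 4*center) / 8.
 * A temporary copy is used so every pixel reads the old values; the
 * one-pixel border is left unchanged (an assumption for this sketch). */
static void smooth_once(int d[H][W])
{
    int tmp[H][W];
    memcpy(tmp, d, sizeof(tmp));
    for (int y = 1; y < H - 1; y++)
        for (int x = 1; x < W - 1; x++)
            d[y][x] = (tmp[y][x - 1] + tmp[y - 1][x] + tmp[y][x + 1]
                       + tmp[y + 1][x] + 4 * tmp[y][x]) / 8;
}

/* Smoothing is applied twice (S506), then luminances at or below the
 * reference value are deleted, i.e. set to zero (S508). */
void smooth_and_threshold(int d[H][W], int ref)
{
    smooth_once(d);
    smooth_once(d);
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            if (d[y][x] <= ref)
                d[y][x] = 0;
}
```

A single isolated spike spreads into a small cross after two passes, and isolated low-level noise is removed by the threshold step.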
The center position calculating part 243 calculates the center position and luminance of each bright spot in the smoothed digital image data P(n) (S510). In detail, the luminance of each pixel is compared with the luminances on the upper, lower, left, and right sides, and when the luminance of this pixel is higher than all luminances on the upper, lower, left, and right sides, this pixel is judged as being at the center position of the bright spot. The algorithm for the center position calculation is shown as follows.
k=0;
for (x=0; x<pixel number in X direction; x++) {
for (y=0; y<pixel number in Y direction; y++) {
if ((d(x,y)>d(x−1,y)) & (d(x,y)>d(x,y−1)) & (d(x,y)>d(x+1,y)) & (d(x,y)>d(x,y+1))) {
p(n,k)[d]=d(x,y); p(n,k)[x]=x; p(n,k)[y]=y; k=k+1;}}}
p(n,k)[d] indicates the luminance at the k-th bright spot center position of the n-th frame, p(n,k)[x] indicates the x coordinate of the k-th bright spot center position of the n-th frame, and p(n,k)[y] indicates the y coordinate of the k-th bright spot center position of the n-th frame.
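A minimal C sketch of the center position search of S510 follows. The image size, the `Spot` structure, and the skipping of border pixels (so that every tested pixel has four neighbours) are assumptions not stated in the text.

```c
#include <assert.h>

#define W 8
#define H 8
#define MAX_SPOTS 16

/* A detected bright spot center: its luminance and pixel coordinates,
 * mirroring p(n,k)[d], p(n,k)[x] and p(n,k)[y] in the text. */
typedef struct { int d, x, y; } Spot;

/* S510: a pixel is a bright spot center when its luminance exceeds all
 * four of its upper, lower, left and right neighbours.  Border pixels
 * are skipped here (an assumption; the pseudocode leaves the border
 * implicit).  Returns the number of spots found. */
int find_centers(int d[H][W], Spot spots[MAX_SPOTS])
{
    int k = 0;
    for (int x = 1; x < W - 1 && k < MAX_SPOTS; x++)
        for (int y = 1; y < H - 1 && k < MAX_SPOTS; y++)
            if (d[y][x] > d[y][x - 1] && d[y][x] > d[y - 1][x]
                && d[y][x] > d[y][x + 1] && d[y][x] > d[y + 1][x]) {
                spots[k].d = d[y][x];
                spots[k].x = x;
                spots[k].y = y;
                k++;
            }
    return k;
}
```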
The bright spot area calculating part 244 calculates the area (pixel number) of each bright spot (S512). In detail, the number of pixels with luminances exceeding the predetermined threshold th in a predetermined region (2h × 2h) centered on the bright spot center position is counted. The algorithm for the bright spot area calculation is shown as follows.
p(n,k)[s]=0;
for (xx=x−h; xx<x+h; xx++) {
for (yy=y−h; yy<y+h; yy++) {
if (d(xx,yy)>th) {
p(n,k)[s]=p(n,k)[s]+1;}}}
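The area count of S512 can be sketched as follows; clipping the 2h × 2h window at the image edge is an assumption made here, since the pseudocode leaves the border case implicit.

```c
#include <assert.h>

#define W 8
#define H 8

/* S512: count the pixels whose luminance exceeds the threshold th inside
 * the 2h x 2h window centered on a bright spot center (cx, cy).  Window
 * pixels falling outside the image are skipped. */
int spot_area(int d[H][W], int cx, int cy, int h, int th)
{
    int area = 0;
    for (int xx = cx - h; xx < cx + h; xx++)
        for (int yy = cy - h; yy < cy + h; yy++)
            if (xx >= 0 && xx < W && yy >= 0 && yy < H && d[yy][xx] > th)
                area++;
    return area;
}
```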
The centroid data processing part 245 calculates a centroid operating region (2r × 2r) corresponding to the bright spot area calculated by the bright spot area calculating part 244 for each bright spot. r is set so as to satisfy, for example, (r−1)² ≦ bright spot area ≦ 4r².
The centroid data processing part 245 calculates the centroid data (the 0-order luminance moment p(n,k)[sum], the first-order luminance moment p(n,k)[x_sum] in the x direction, and the first-order luminance moment p(n,k)[y_sum] in the y direction) of each bright spot (S516, S518, S520), and transfers the centroid data to the data accumulation/display part 26 on the rear stage (S522). The algorithm for the centroid data calculation is shown as follows.
p(n,k)[sum]=0; p(n,k)[x_sum]=0; p(n,k)[y_sum]=0;
for (xx=x−r; xx<x+r; xx++) {
for (yy=y−r; yy<y+r; yy++) {
p(n,k)[sum]=p(n,k)[sum]+d(xx,yy);
p(n,k)[x_sum]=p(n,k)[x_sum]+xx*d(xx,yy);
p(n,k)[y_sum]=p(n,k)[y_sum]+yy*d(xx,yy);}}
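A C sketch of the moment accumulation of S516 through S520; the `Moments` structure, the image size, and the clipping of the 2r × 2r region at the image edge are assumptions made for the sketch.

```c
#include <assert.h>

#define W 8
#define H 8

/* Centroid data of one bright spot: the 0-order luminance moment and the
 * first-order moments in x and y, mirroring p(n,k)[sum], p(n,k)[x_sum]
 * and p(n,k)[y_sum]. */
typedef struct { long sum, x_sum, y_sum; } Moments;

/* S516-S520: accumulate the moments over the 2r x 2r centroid operating
 * region centered on (cx, cy).  Out-of-image pixels are skipped. */
Moments spot_moments(int d[H][W], int cx, int cy, int r)
{
    Moments m = {0, 0, 0};
    for (int xx = cx - r; xx < cx + r; xx++)
        for (int yy = cy - r; yy < cy + r; yy++)
            if (xx >= 0 && xx < W && yy >= 0 && yy < H) {
                m.sum   += d[yy][xx];
                m.x_sum += (long)xx * d[yy][xx];
                m.y_sum += (long)yy * d[yy][xx];
            }
    return m;
}
```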
The above-described processing of the image processing device 24 is carried out by a hardware circuit. Recently, the FPGA (Field Programmable Gate Array) has become practical as a device on which hardware for carrying out image operation processing as described above is easily developed and mounted, and it has become possible to efficiently carry out operations realizing processing suited to the operation targets on hardware. Furthermore, circuit design from a software-style description of the processing contents has become possible by using an HDL (hardware description language), so that hardware for the desired image processing can be prepared easily. Image processing on hardware prepared in this way enables operations at a higher speed than image processing in software on a general-purpose circuit. The CMOS sensor 10 realizes a high frame rate on the order of 1 kHz owing to the serial-parallel processing of the A/D converters 210 corresponding to the respective CMOS arrays 110, and the image processing device 24 can likewise realize a high-speed response on the order of 1 kHz by hardware.
Furthermore, the data outputted to the data accumulation/display part 26 consists of centroid data and other characteristic quantity data, so that the data amount to be processed by the data accumulation/display part 26 can be reduced. For example, in the case of a sensor including a photoelectrical converting part 120 of 128 × 128 pixels, if the image data is outputted without change, the communications data amount is 128 × 128 bytes = 16 Kbytes. By using the luminance data and centroid data obtained by the data processing as communications data instead, the data per bright spot can be reduced to 64 bits = 8 bytes. Therefore, when data on 100 bright spots exists in one image plane, the communications data amount can be compressed to a total of 800 bytes (approximately 1/20 of the image) and outputted. This compression rate becomes higher as a higher-resolution light receiving part is used.
The centroid position calculating part 261 calculates the centroid position of each bright spot based on the centroid data (S524). The algorithm for the centroid position calculation is shown as follows.
(bright spot centroid position in x direction)px=p(n,k)[x_sum]/p(n,k)[sum];
(bright spot centroid position in y direction)py=p(n,k)[y_sum]/p(n,k)[sum];
From the above-mentioned calculation, the centroid position is obtained with sub-pixel precision, that is, in units smaller than one pixel.
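The division of S524 can be sketched as follows; the structure name, the use of double-precision floats for the sub-pixel result, and the sample values are illustrative choices, not from the text.

```c
#include <assert.h>
#include <math.h>

/* Centroid data of one bright spot, mirroring p(n,k)[sum], p(n,k)[x_sum]
 * and p(n,k)[y_sum] in the text. */
typedef struct { long sum, x_sum, y_sum; } CentroidData;

/* S524: the sub-pixel centroid is the ratio of each first-order luminance
 * moment to the 0-order luminance moment. */
void centroid_position(CentroidData c, double *px, double *py)
{
    *px = (double)c.x_sum / (double)c.sum;
    *py = (double)c.y_sum / (double)c.sum;
}
```

Because the division is done in floating point, the result falls between pixel coordinates, which is exactly the sub-pixel property the text describes.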
The phase calculating part 262 calculates the phase wx in the x direction and the phase wy in the y direction based on the centroid position of each bright spot (S526). The algorithm for the phase calculation is shown as follows.
(phase in x direction) wx=(px−px0)/f
(phase in y direction) wy=(py−py0)/f
(px0,py0) indicates the initial values of the centroid position (centroid position of the luminance without phase lag), and f indicates the focal length of the condensing lens 32.
The interpolating part 263 acquires phase distribution data by interpolating the discrete phase data obtained in S526 (S528). Namely, from the phase data calculated for each bright spot corresponding to each condensing lens 32, interpolation between blocks is carried out under a constraint of continuity with the peripheral blocks. For example, when linear interpolation is carried out, from the phase (wx,wy) of a certain block (x,y) and the values of its peripheral blocks, the phase (w′x,w′y) at an intermediate position (x′,y′) between the blocks is expressed as follows by general linear interpolation.
w′x=wx0+(wx1−wx0)*(x′−x0)/(x1−x0)
w′y=wy0+(wy1−wy0)*(y′−y0)/(y1−y0)
However, the calculation above should satisfy x0<x′<x1 and y0<y′<y1.
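The phase calculation of S526 and the one-dimensional linear interpolation of S528 can be sketched together as follows; the function names and all sample values are hypothetical.

```c
#include <assert.h>
#include <math.h>

/* S526: phase from the deviation of the centroid position p from its
 * initial position p0 (the centroid without phase lag), divided by the
 * focal length f of the condensing lens 32. */
double block_phase(double p, double p0, double f)
{
    return (p - p0) / f;
}

/* S528: general linear interpolation of a phase value at position xp
 * between two blocks at x0 and x1, valid for x0 < xp < x1. */
double lerp_phase(double w0, double w1, double x0, double x1, double xp)
{
    return w0 + (w1 - w0) * (xp - x0) / (x1 - x0);
}
```

The same `lerp_phase` form is applied independently in the x and y directions, matching the two interpolation formulas above.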
Following S528, the above-mentioned processing is repeated for the next frame.
In the above-mentioned embodiment, common coordinates are used for all bright spots when the luminance moments are calculated; however, the luminance moment of each bright spot may instead be calculated with the bright spot center position set as the origin. In this case, dividing the first-order luminance moment by the 0-order moment gives the difference between the centroid position and the bright spot center position, and the centroid position is calculated by adding this difference to the coordinates of the bright spot center position.
Next, the effects of the phase distribution measuring apparatus 1 are described. Since the centroid operating region is determined according to the position of each bright spot for each frame, the centroid position can be accurately calculated. Furthermore, any lens shape and pitch can be adopted in designing the fly-eye lens 30.
The invention is applicable to, for example, astronomical observation apparatuses.
Number | Date | Country | Kind
---|---|---|---
2002-29104 | Oct 2002 | JP | national

Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/JP03/12728 | 10/3/2003 | WO | | 4/1/2005