The present invention relates to a microscope apparatus and an image generation method.
Recently, a super-resolution technology for observing a specimen at a resolution higher than that of the microscope optical system has been proposed (Patent Document 1, etc.). Patent Document 1 discloses a method of illuminating a sample with structured illumination to generate a modulated image, obtaining plural modulated images while changing the phase of the structured illumination, and demodulating these plural modulated images by a linear calculation, thereby obtaining a super-resolved image. In general, the linear calculation can be carried out faster than a non-linear calculation, and thus it enables real-time observation or observation close to real time.
Patent Document 1: Japanese Unexamined Patent Application Publication No. Hei 11-242189
However, this calculation is based on the premise that the spatial frequency and the amount of phase change of the structured illumination are uniform over the image. In practice, a real microscope optical system has aberration, and it is therefore difficult to make the spatial frequency and the amount of phase change of the structured illumination uniform. As a result, the conventional method may suffer a demodulating error and generate noise in the super-resolved image.
Therefore, the present invention has an object to provide a microscope apparatus based on structured illumination and an image generation method with which an excellent super-resolved image can be obtained even when an optical system having residual distortion aberration is used.
A microscope apparatus of the present invention is characterized by including: an illuminating optical system that illuminates a sample with light from a light source; a modulating unit that is disposed in the illuminating optical system and spatially modulates the light from the light source; an image-forming optical system that forms a modulated image from the sample illuminated with the spatially modulated light; an imaging unit that picks up the modulated image; a correcting unit that corrects distortion of the modulated image due to at least one of the illuminating optical system and the image-forming optical system; and an image generating unit that generates an image of the sample from the modulated image corrected by the correcting unit.
The modulating unit preferably includes a grating and a grating-modulating unit that modulates the light by moving the grating.
Furthermore, the correcting unit preferably carries out the correction on the basis of data of distortion aberration of at least one of the illuminating optical system and the image-forming optical system.
The correcting unit preferably carries out the correction on the basis of at least one of actual measurement data and design data of the distortion aberration.
The microscope apparatus according to the present invention is preferably further equipped with a recorrecting unit that corrects the distortion of the image of the sample.
Furthermore, the recorrecting unit preferably carries out the correction on the basis of the data of the distortion aberration of the image-forming optical system.
Furthermore, the recorrecting unit preferably carries out the correction on the basis of at least one of the actual measurement data and the design data of the distortion aberration.
According to the present invention, an image generation method that generates a sample image by illuminating a sample with spatially modulated illumination light, forming an image of light from the sample illuminated with the illumination light, and subjecting the obtained image to an image calculating procedure is characterized by including: a correcting step of correcting distortion of the obtained image due to an illuminating optical system and an image-forming optical system; and an image generating step of generating an image of the sample from the corrected image.
According to the present invention, a microscope apparatus and an image generation method are implemented with which an excellent super-resolved image can be obtained even when an optical system having residual distortion aberration is used.
An embodiment of the present invention will be described hereunder. This embodiment is a microscope apparatus to which a structured illumination method is applied.
First, the construction of the microscope apparatus will be described.
Light emitted from a light source (not shown) is guided into the optical fiber 1 to form a secondary light source at the end of the fiber. Illumination light emitted from the secondary light source is converted to collimated light by the collector lens 2 in the illuminating optical system LS1, and is then incident on the grating 3 to produce diffraction components of respective orders. The grating 3 is, for example, a phase-type or amplitude-type one-dimensional transmission grating. The phase type is preferable because the diffraction efficiency of the ±1st-order diffraction components is high.
The diffraction components of the respective orders generated by the grating 3 form spots on a plane conjugate with the pupil of the objective lens 9 by means of the lens 4. Unnecessary diffraction components other than the ±1st-order diffraction components are removed on this plane, and only the ±1st-order diffraction components are deflected by 90° by the light deflecting mirror 5, form a plane conjugate with the sample on the field stop plane F.S. by means of the lens 6, and then form spots on the pupil of the objective lens 9 through the lens 7 and the half mirror 8. In particular, the ±1st-order diffraction components form their spots at positions opposing each other at the outermost peripheral portion of the pupil. The ±1st-order diffraction components emitted from these spots become collimated light beams when they exit the objective lens 9, and intersect at an angle in the neighborhood of the maximum NA of the objective lens 9. The ±1st-order diffraction components thereby form an illumination pattern comprising an interference fringe of substantially uniform spatial frequency, which illuminates (structured-illuminates) the surface of the sample 10.
Light diffracted by the sample 10, containing diffraction components of respective orders, passes through the objective lens 9, is converted to collimated light, and then forms an image of the sample 10 through the half mirror 8 by the second objective lens 11. The imaging unit 12 picks up this image to generate image data, and transmits the image data to the control-calculating unit 13. Since the sample 10 is modulated by the structured illumination, the image of the sample 10 is a "modulated image". This modulated image corresponds to an image obtained by superposing the pattern formed by the ±1st-order diffraction components on the pattern formed by the 0th-order diffraction component, with the spatial frequency of the pattern based on the ±1st-order diffraction components lowered by an amount corresponding to the spatial frequency of the structured illumination.
Here, the microscope apparatus of this embodiment is equipped with a function of obtaining plural image data while changing the phase of the structured illumination (that is, the phase of the illumination pattern on the sample 10). Therefore, an actuator 3A for moving the grating 3 in a direction perpendicular to the lattice lines is provided.
The control-calculating unit 13 controls the actuator 3A and the imaging unit 12 in synchronism with each other, whereby plural image data can be obtained while the phase of the illumination pattern is changed. In this case, N sets of image data Irj′ (j represents a phase number, j=1, 2, 3, . . . , N) are obtained while the grating 3 is shifted by equal amounts, totaling an amount corresponding to one pitch of the lattice.
The control-calculating unit 13 performs a calculation on the obtained N sets of image data Irj′ to obtain the image data of the demodulated image of the sample 10 (the details of the calculation will be described later). The image data of the demodulated image represent a super-resolved image of the sample 10. The image data are transmitted to the display unit 14 and displayed. Programs for the control and the calculation are installed in the control-calculating unit 13 in advance. Some or all of the programs may be installed in the control-calculating unit 13 through a storage medium or via the Internet.
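For illustration only, the synchronized phase-stepped acquisition described above might be scripted as in the following sketch. The camera and actuator objects and their methods (snap, move_to) are hypothetical placeholders assumed for this example; they are not interfaces of the imaging unit 12 or the actuator 3A prescribed by this embodiment.

```python
import numpy as np

def acquire_phase_series(camera, actuator, n_phases, grating_pitch_um):
    """Acquire N modulated images while stepping the grating by one pitch in total.

    `camera` and `actuator` are hypothetical driver objects assumed to expose
    `snap()` (returning a 2-D numpy array) and `move_to(position_um)`.
    """
    step = grating_pitch_um / n_phases      # equal steps totaling one lattice pitch
    images = []
    for j in range(n_phases):
        actuator.move_to(j * step)          # shifting the grating shifts the illumination phase by 2*pi*j/N
        images.append(camera.snap())        # pick up the modulated image I'_rj
    return np.stack(images)                 # shape (N, H, W)
```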
Next, the details of the calculation of the control-calculating unit 13 will be described.
(Step S1)
First, the control-calculating unit 13 subjects the N sets of image data Irj′ (j=1, 2, 3, . . . , N) to distortion correction to obtain N sets of image data Irj (j=1, 2, 3, . . . , N). The concept of the processing of this step S1 is shown in the corresponding drawing.
Here, the distortion of the illumination pattern occurs due to both the aberration (mainly distortion aberration) when the illuminating optical system LS1 projects the grating 3 onto the sample 10 and the aberration (mainly distortion aberration) when the image-forming optical system LS2 projects the sample 10 onto the imaging unit 12 (onto the imaging plane).
Now, it is assumed that the projecting magnification at which the illuminating optical system LS1 projects the grating 3 onto the sample 10 is represented by M1 and the projecting magnification at which the image-forming optical system LS2 projects the sample 10 onto the imaging unit 12 is represented by M2. Furthermore, it is assumed that a coordinate Xg on the grating 3 is projected to a coordinate Xs on the sample 10 and a coordinate Xs on the sample 10 is projected to a coordinate Xi on the imaging unit 12.
At this time, the relationship between the coordinate Xg on the grating 3 and the coordinate Xi on the imaging unit 12 is ideally represented by the following equation:
Xi = M2Xs = M1M2Xg
However, the actual illuminating optical system LS1 and image-forming optical system LS2 have distortion aberrations, and thus the relationships among the coordinates Xg, Xs, and Xi are as follows:
Xs = M1(1 + a1Xg^2 + a2Xg^4 + a3Xg^6 + . . . )Xg,
Xi = M2(1 + c1Xs^2 + c2Xs^4 + c3Xs^6 + . . . )Xs
Accordingly, the relationship between the coordinate Xg on the grating 3 and the coordinate Xi on the imaging unit 12 is represented by the following equation (1):
Xi = M1M2(1 + d1Xg^2 + d2Xg^4 + d3Xg^6 + . . . )Xg    (1)
In the distortion correction of this step S1, therefore, the control-calculating unit 13 may subject each of the image data Irj′ (j=1, 2, 3, . . . , N) to coordinate conversion by using the equation (1).
The coefficients M1, M2, d1, d2, d3, . . . of the equation (1) are determined from at least one of the design data and the actual measurement data of the illuminating optical system LS1 and the image-forming optical system LS2. The larger the number of coefficients d1, d2, d3, . . . , the higher the precision of the correction. Even if the coefficients are limited to the two coefficients d1 and d2, some degree of effect can be obtained. These coefficients are stored in the control-calculating unit 13 in advance.
In the coordinate conversion processing, a pixel interpolating procedure is preferably carried out as needed so that the conversion error is as small as possible. This is because, if a brightness step that does not exist in the actual modulated image appears in the corrected image data Irj (j=1, 2, 3, . . . , N), a noise pattern will appear in the image data of the demodulated image.
According to the above step S1, as shown in the corresponding drawing, the distortion of the illumination pattern is eliminated from each image data Irj, so that the spatial frequency and the amount of phase change of the illumination pattern become substantially uniform over the image.
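As one possible illustration of the coordinate conversion and pixel interpolation of the step S1, the following sketch assumes a rotationally symmetric distortion model with only the two coefficients d1 and d2 of the equation (1), expressed in pixel units. The coefficient values, the absorption of the magnification M1M2 into the pixel scale, and the choice of bilinear interpolation are assumptions made for this example, not values prescribed by this embodiment.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def correct_distortion(img, d1, d2, center=None):
    """Step S1 sketch: resample a modulated image onto distortion-free coordinates.

    Following the form of equation (1), a point at undistorted radius r is assumed
    to be imaged at radius r * (1 + d1*r**2 + d2*r**4); d1 and d2 would come from
    design or measurement data (values in pixel units, not given here).
    """
    h, w = img.shape
    if center is None:
        center = ((h - 1) / 2.0, (w - 1) / 2.0)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    dy, dx = yy - center[0], xx - center[1]
    r2 = dx**2 + dy**2
    scale = 1.0 + d1 * r2 + d2 * r2**2       # radial distortion factor of equation (1)
    # For each ideal output pixel, look up the corresponding distorted source position
    src_y = center[0] + dy * scale
    src_x = center[1] + dx * scale
    # order=1 -> bilinear pixel interpolation, keeping the conversion error small
    return map_coordinates(img, [src_y, src_x], order=1, mode="nearest")
```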
(Step S2)
The control-calculating unit 13 subjects each of the image data Irj (j=1, 2, 3, . . . , N) to two-dimensional Fourier transformation to obtain image data Ikj (j=1, 2, 3, . . . , N) represented in the wave number space. A subscript [r] representing the coordinate r in the real space is affixed to data represented in the real space, and a subscript [k] representing the coordinate k in the wave number space is affixed to data represented in the wave number space.
A two-dimensional FFT method is preferably used for the two-dimensional Fourier transformation. This is because the two-dimensional FFT method can complete the transformation within a realistic time even for image data having a large data amount, such as 1000×1000 pixels.
The concept of the processing of the step S2 is shown in the corresponding drawing.
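As an illustration of the step S2, the two-dimensional FFT of each corrected image may be computed as in the following sketch using NumPy's FFT routines. The use of fftshift to place the zero spatial frequency at the center of the array is merely a convention adopted here for the later rearrangement, not a requirement of the method.

```python
import numpy as np

def to_wavenumber_space(images_r):
    """Step S2 sketch: 2-D Fourier transform of each corrected image I_rj -> I_kj.

    `images_r` has shape (N, H, W); the result is complex, with the zero spatial
    frequency shifted to the center of each plane for convenience.
    """
    return np.fft.fftshift(np.fft.fft2(images_r, axes=(-2, -1)), axes=(-2, -1))
```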
(Step S3)
The control-calculating unit 13 applies the image data Ikj (j=1, 2, 3, . . . , N) to a predetermined calculation equation to separate and extract the 0th-order diffraction component Ik0, the +1st-order diffraction component Ik+1, and the −1st-order diffraction component Ik−1 which are commonly contained in the image data Ikj (j=1, 2, 3, . . . , N). The concept of the processing of this step S3 is shown in the corresponding drawing.
Here, on the assumption that "the spatial frequency of the illumination pattern is uniform on the image", the following holds.
The spatial frequency of the illumination pattern is represented by K (a constant). At this time, when the wave-number representation of the actual pattern Or(r) of the sample 10 is represented by Ok(k) and the transfer function (OTF: Optical Transfer Function) of the image-forming optical system LS2 is represented by Pk(k), the L-order diffraction component IkL is represented as follows.
Ok(k+LK)Pk(k)
Furthermore, the phase (the amount of phase change) of the illumination pattern corresponding to the phase number j is represented as follows irrespective of the coordinate on the image.
2πj/N
Accordingly, the image data Ikj corresponding to the phase number j is represented by the following equation (2).
Ikj(k) = ΣL mL exp(2πiLj/N) Ok(k+LK) Pk(k)    (2)
Here, mL represents the diffraction intensity of the L-order diffraction component IkL.
At this time, if the number of the image data Ikj is set to N = 3, three equations are obtained, and the three diffraction components Ok(k)Pk(k), Ok(k+K)Pk(k), and Ok(k−K)Pk(k) are determined.
Furthermore, if the least squares method is applied with N > 3, not only are these diffraction components determined, but the effect of noise contained in each image data Ikj (j=1, 2, 3, . . . , N) can also be suppressed. In the least squares method, the equation (3) may be used in place of the equation (2).
In the equation (3), it is assumed that bLj = mL·exp(iLφj), where φj is the amount of phase change of the illumination pattern corresponding to the phase number j.
In the step S3, the control-calculating unit 13 separates and extracts the diffraction components Ok(k)Pk(k), Ok(k+K)Pk(k), and Ok(k−K)Pk(k) by applying the image data Ikj (j=1, 2, 3, . . . , N) to the simple equation (2) or (3).
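The separation of the step S3 may be illustrated as follows under the simplifying assumptions of the equation (2): phases of 2πj/N and, in this sketch only, unit diffraction intensities mL (a real instrument would use calibrated values). Each wave-number pixel then yields N linear equations in the three unknown components, solved here by the least squares method.

```python
import numpy as np

def separate_components(images_k):
    """Step S3 sketch: separate the 0th and +/-1st order components from N phase images.

    images_k: complex array of shape (N, H, W) in wave number space (output of step S2).
    Returns (c0, c_plus, c_minus), each of shape (H, W), i.e. Ok(k)Pk(k),
    Ok(k+K)Pk(k) and Ok(k-K)Pk(k), still sitting at their unshifted coordinates.
    Assumes phases phi_j = 2*pi*j/N and unit diffraction intensities m_L.
    """
    n = images_k.shape[0]
    phases = 2 * np.pi * np.arange(n) / n
    # Coefficient matrix of equation (2): columns correspond to orders L = 0, +1, -1
    a = np.stack([np.ones(n),
                  np.exp(1j * phases),
                  np.exp(-1j * phases)], axis=1)      # shape (N, 3)
    b = images_k.reshape(n, -1)                       # (N, H*W)
    comps, *_ = np.linalg.lstsq(a, b, rcond=None)     # least squares, valid for N >= 3
    c0, c_plus, c_minus = (c.reshape(images_k.shape[1:]) for c in comps)
    return c0, c_plus, c_minus
```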
(Step S4)
The control-calculating unit 13 rearranges the extracted diffraction components Ok(k)Pk(k), Ok(k+K)Pk(k), and Ok(k−K)Pk(k) while displacing them in the wave number space by the spatial frequency K of the illumination pattern, thereby obtaining the image data Ik(k) of the demodulated image of the sample 10. The concept of the processing of this step S4 is shown in the corresponding drawing.
(Step S5)
The control-calculating unit 13 subjects the image data Ik(k) to inverse Fourier transformation, thereby obtaining the image data Ir(r). The concept of the processing of this step S5 is shown in the corresponding drawing.
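A sketch of the steps S4 and S5 is given below, assuming that the spatial frequency K of the illumination pattern is known as a whole-pixel shift of the Fourier array. Sub-pixel shifting and weighting of the overlap regions, which a practical implementation would require, are omitted from this illustration.

```python
import numpy as np

def recombine_and_invert(c0, c_plus, c_minus, k_shift_px):
    """Steps S4 and S5 sketch: place the separated components back at their true
    wave-number positions and inverse-transform to obtain the demodulated image.

    k_shift_px: (dy, dx) shift corresponding to the illumination frequency K, in
    whole pixels of the Fourier array (an integer approximation for illustration).
    """
    dy, dx = k_shift_px
    ik = c0.astype(complex)
    ik += np.roll(c_plus, shift=(dy, dx), axis=(0, 1))     # move Ok(k+K)Pk(k) outward by +K
    ik += np.roll(c_minus, shift=(-dy, -dx), axis=(0, 1))  # move Ok(k-K)Pk(k) outward by -K
    # Step S5: back to real space (fftshift was applied in the step S2 sketch, so undo it first)
    return np.real(np.fft.ifft2(np.fft.ifftshift(ik)))
```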
However, the super-resolved image of the sample 10 is projected onto the image data Ir(r) in a distorted state. The reason is as follows.
The distortion correction of the step S1 is a correction that eliminates the distortion of the illumination pattern on the image, that is, it is a combination of the distortion correction for the illuminating optical system LS1 and the distortion correction for the image-forming optical system LS2. On the other hand, the distortion of the sample 10 on the image is not related to the distortion aberration of the illuminating optical system LS1; it is induced only by the distortion aberration of the image-forming optical system LS2. Therefore, with respect to the distortion of the sample 10, the distortion correction of the step S1 described above amounts to "over-correction" by the amount corresponding to the distortion correction of the illuminating optical system LS1.
(Step S6)
Therefore, the control-calculating unit 13 subjects the image data Ir(r) to coordinate conversion by using the following equation (4), thereby applying a negative correction corresponding to the over-correction amount. The equation (4) represents the relationship between the coordinate Xg on the grating 3 and the coordinate Xs on the sample 10. By solving this equation for Xg, Xg is determined as a function of Xs, and Xg is calculated for equally spaced values of Xs, whereby the negative correction is performed.
Xs = M1(1 + a1Xg^2 + a2Xg^4 + a3Xg^6 + . . . )Xg    (4)
The concept of the processing of the step S6 is shown in the corresponding drawing.
The coefficients M1, a1, a2, a3, . . . of the equation (4) are determined in advance from at least one of the design data and the actual measurement data of the illuminating optical system LS1. The larger the number of coefficients a1, a2, a3, . . . , the higher the correcting precision. Even if the coefficients are limited to the two coefficients a1 and a2, some degree of effect can be obtained. These coefficients are stored in the control-calculating unit 13 in advance.
Furthermore, when this coordinate conversion is carried out, a pixel interpolating procedure is preferably carried out as needed so that the conversion error is as small as possible (step S6).
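As an illustration of solving the equation (4) for Xg, the following sketch inverts the polynomial numerically on a dense lookup grid. The grid range, the number of samples, and the restriction to the coefficients a1 and a2 are assumptions made only for this example; the resulting Xg values would then be used, together with pixel interpolation as in the step S1, to resample the image data Ir(r).

```python
import numpy as np

def negative_correction_coords(xs, m1, a1, a2):
    """Step S6 sketch: solve equation (4) for Xg as a function of Xs.

    xs: equally spaced sample-plane coordinates along one axis.
    Returns the grating-plane coordinate Xg for each Xs, obtained by inverting
    Xs = M1*(1 + a1*Xg**2 + a2*Xg**4)*Xg on a dense lookup grid.
    """
    half_range = 1.5 * np.max(np.abs(xs)) / m1
    xg_grid = np.linspace(-half_range, half_range, 4001)
    xs_grid = m1 * (1 + a1 * xg_grid**2 + a2 * xg_grid**4) * xg_grid
    # np.interp requires xs_grid to be increasing, which holds for small a1, a2
    return np.interp(xs, xs_grid, xg_grid)
```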
Next, the effect of the microscope apparatus will be described.
As described above, in this microscope apparatus, the distortion correction (step S1) is applied to the modulated images before the demodulating calculation, so that the nonuniformity of the spatial frequency and of the amount of phase change of the illumination pattern caused by the distortion aberration is removed from the images to be demodulated.
Accordingly, in the microscope apparatus of this embodiment, if the distortion correction is carried out with high precision, a demodulating error hardly occurs even though only the simple calculation equation (the equation (2) or the equation (3)) is used for the demodulating calculation (steps S2 to S5), and an excellent super-resolved image can be obtained.
(Others)
The image data of the demodulated image obtained in this embodiment contain not only the information of the pattern O of the sample 10, but also the information of the transfer function of the image-forming optical system LS2 (that is, the information of the point spread function of the image-forming optical system LS2). Therefore, the control-calculating unit 13 may subject the image data of the demodulated image to deconvolution to remove the information of the transfer function as needed.
However, the information of the distortion aberration of the image-forming optical system LS2 has already been excluded from the image data of the demodulated image. Therefore, in the deconvolution, a function obtained by excluding the distortion-aberration component of the image-forming optical system LS2 from the transfer function may be used in place of the transfer function itself. The super-resolved image of the sample 10 appears sharply in the image data after the deconvolution.
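One common way to carry out such a deconvolution is a Wiener-type inverse filter in the Fourier domain, sketched below. The OTF array (with the distortion-aberration component excluded) and the regularization constant are assumed inputs for this illustration; they are not values specified in this description.

```python
import numpy as np

def wiener_deconvolve(demod_image, otf, eps=1e-3):
    """Divide out the (distortion-free) transfer function from the demodulated image.

    otf: complex OTF sampled on the same Fourier grid as the image, assumed known
    from design or measurement data with the distortion-aberration component excluded.
    eps: small regularization constant to avoid amplifying noise where |OTF| ~ 0.
    """
    spec = np.fft.fft2(demod_image)
    filt = np.conj(otf) / (np.abs(otf) ** 2 + eps)   # Wiener-type inverse filter
    return np.real(np.fft.ifft2(spec * filt))
```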
In the foregoing description, the kind of the sample 10 has not been specified; it may be, for example, a sample stained with a fluorescent material. In this case, the half mirror 8 is replaced by a dichroic mirror, an excitation filter is inserted on the light source side of the dichroic mirror, and a barrier filter is inserted at a position nearer to the imaging unit 12 than the dichroic mirror.
Furthermore, in the foregoing description, the direction of super-resolution has not been specified. If the above information is obtained while the lattice direction of the grating 3 is fixed, a super-resolved image whose resolution is enhanced in the direction perpendicular to the lattice lines is obtained. Furthermore, if the lattice direction of the grating 3 is changed to plural directions and the same information is obtained for each direction, a super-resolved image whose resolution is enhanced in the plural directions can be obtained. When a super-resolved image whose resolution is enhanced in plural directions is to be obtained, the one-dimensional grating 3 may be replaced by a two-dimensional grating (a grating having a lattice formed in a grid shape). With the two-dimensional grating, information in the two directions can be obtained simultaneously.
Furthermore, in place of the demodulating calculation in the steps S2 to S5, the demodulating calculation disclosed in the Japanese Unexamined Patent Application Publication No. Hei 11-242189 may be applied.
According to the method disclosed in the Japanese Unexamined Patent Application Publication No. Hei 11-242189, however, the distortion correction described above is not carried out in the demodulating calculation, and thus distortion occurs in the quantities O(x)*P(x) and O(x)*P_(x) defined in that publication.
Here, taking f1 = f0 and φ1 as references, the quantities Δf2 = f2 − f0, Δf3 = f3 − f0, Δφ2 = φ2 − φ1, and Δφ3 = φ3 − φ1 are introduced, and the approximations 2πΔf2·x << 1, 2πΔf3·x << 1, Δφ2 << 1, and Δφ3 << 1 are adopted. Accordingly, with the method of the Japanese Unexamined Patent Application Publication No. Hei 11-242189, a demodulating error caused by nonuniformity of the amount of phase change of the illumination pattern occurs mainly at the center portion of the image, and a demodulating error caused by nonuniformity of the spatial frequency of the illumination pattern occurs at the peripheral portion of the image. These demodulating errors do not occur in this embodiment, which carries out the distortion correction before the demodulating calculation is carried out.
Priority claimed: Japanese Patent Application No. 2005-295028, filed in Japan in October 2005.
International filing: PCT/JP06/19717, filed October 2, 2006 (WO); §371(c) date: August 28, 2007.