The present invention relates to an illumination optical system that illuminates a sample with light in a microscope, particularly to an illumination optical system suitable for a three-dimensional fluorescence microscope.
Observation of biological samples using microscopes, particularly fluorescence microscopes, is essential for biological studies including applications to medicine. However, when a thick sample is observed by a general (normal) fluorescence microscope, the observed image is formed by superimposition of images of all height positions of the sample through which light is transmitted. That is, an image of the height position plane on which the microscope is focused (in-focus plane) and defocused images of the height position planes on which the microscope is not focused (out-of-focus planes) are superimposed and observed. Thus, the general fluorescence microscope cannot selectively separate and extract only the image of a desired in-focus plane. The effect of selectively separating and extracting only the image of the desired in-focus plane is referred to as “a sectioning effect”.
A fluorescence microscope configured to obtain the sectioning effect on the basis of any of a variety of mechanisms is called a three-dimensional fluorescence microscope and is distinguished from general fluorescence microscopes. The sectioning effect enables producing a stereoscopic three-dimensional image by rendering images of arbitrary in-focus planes on a computer. That is, digital processing enables anyone to view cell structures stereoscopically, a task that has so far been performed mentally by an experienced pathologist or the like.
As a typical three-dimensional fluorescence microscope, a confocal microscope is used. The confocal microscope has a pinhole placed at a convergence point of light coming from a desired in-focus plane so as to allow passage of only the light coming from the desired in-focus plane and to shield light of a low convergence degree coming from the out-of-focus planes. The confocal microscope has a high sectioning effect, but each image capturing covers only a point-like narrow area, so that scanning is needed in order to capture (observe) the entire area of the sample.
Meanwhile, as a method for realizing the sectioning effect by utilizing image processing by the computer, a structured illumination method (see NPL 1) is used.
This method produces, for example, sinusoidal illumination intensity patterns on an object. The intensity patterns are similar figures but are given different initial phases. This method captures multiple images each corresponding to these phases.
Then, the method causes the computer to perform the image processing on the multiple images to obtain the sectioning effect. Such a structured illumination method requires producing the phase with high accuracy, that is, producing the sinusoidal structure whose position is controlled.
Furthermore, a method in which a randomly generated speckle pattern is utilized as illumination is also used (see PTL 1 and NPL 2, NPL 3, NPL 4, NPL 5 and NPL 6). Although this method also uses the image processing by the computer, since the illumination intensity on the object plane depends on the random speckle pattern, the method has an unavoidable disadvantage that intensity unevenness remains in a final image.
Thus, it is desired to develop a three-dimensional fluorescence microscope capable of providing a high-quality sectioning effect without requiring a highly controlled illumination system or scanning of an object plane.
The present invention provides an illumination optical system suitable for realizing such a three-dimensional fluorescence microscope.
The present invention provides as one aspect thereof an illumination optical system configured to illuminate a sample placed on an object plane with light. The illumination optical system includes multiple light source areas which are mutually coherent and arranged separately from one another in a pupil plane of the illumination optical system. Among distances from a center of a pupil of the illumination optical system to centers of the multiple light source areas, at least one of the distances is different from the other distances.
The present invention provides as another aspect thereof a microscope including the above illumination optical system, and a projection optical system configured to form an image of the sample.
Using the illumination optical system of the present invention enables achieving a three-dimensional fluorescence microscope capable of providing a high-quality sectioning effect without requiring a highly controlled illumination system or scanning of an object plane.
Hereinafter, embodiments of the invention will be described with reference to the drawings.
A microscope illumination optical system of each embodiment of the present invention can be used for a three-dimensional microscope that is used for observation of a sample, such as a self-luminous sample whose light emission mechanism is fluorescence or phosphorescence. The microscope of each embodiment can be used as an epi-illumination microscope and as a transmission microscope.
As a specific example, the microscope illumination optical system of each embodiment can be used for a microscope included in a digital slide scanner that is used for observation of a fluorescently stained sample serving as a test sample. The digital slide scanner is an apparatus that scans a preparation used in biological and pathological inspections and the like at high speed and converts scanned images of the preparation into high-resolution digital image data. Furthermore, the microscope illumination optical system of each embodiment can be used, for example, as a sectioning effect provider to provide the sectioning effect to the digital slide scanner including a projection optical system having a large numerical aperture (NA) and to a general fluorescence microscope.
Prior to a detailed description of the microscope illumination optical system of each embodiment, description will be made of problems in the conventional illumination method using the speckle pattern.
NPL 5 and NPL6 disclose a method of extracting only an image of a fluorescent object existing in an in-focus plane by using an image 1 captured under uniform intensity illumination and an image 2 captured under illumination by the speckle pattern. This method first produces an image 3 representing an intensity difference between the image 1 and the image 2 by a computer. Illumination of the object with the speckle pattern can be realized by inserting an optical element such as a frosted glass, which provides a random phase disturbance, into a pupil of an illumination optical system having a light source that emits a coherent excitation light. For simplification of the following description, an intensity distribution of the fluorescent object is defined as O(x, y, z), and an intensity distribution of O(x, y, z)=δ(z) is considered. In the following description, the intensity distribution of the fluorescent object O(x, y, z) is also referred to as “an object O”. The object O is a virtual object that exists only in the plane of z=0 and has a uniform intensity distribution in an x-y direction; the plane of z=0 is referred to as “an in-focus plane”. Moreover, a plane of z=±a (a>0) is representatively referred to as “an out-of-focus plane.”
NPL 5 and NPL 6 disclose a method of extracting, from data of these images 3, data that reflects the intensity distribution O(x, y, z) of the actual fluorescent object. Specifically, a computer takes in the captured images and calculates, for each point (x, y), a standard deviation σ(x, y, z) of intensity in an area near that point.
Therefore, calculating I(x, y, z) by using the following expression (1) provides an image I(x, y, z) that acquires the sectioning effect through σ(x, y, z). That is, I(x, y, 0) has a certain value, whereas I(x, y, a) has almost no value.
I(x,y,z)=Iu(x,y,z)·σ(x,y,z) (1)
In expression (1), Iu(x, y, z) represents an image captured such that a position of a height of z is in focus and the general uniform illumination is performed. The image that acquires the sectioning effect is hereinafter referred to also as “a sectioning image”.
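As an illustration of expression (1), the following is a minimal sketch of the computation, assuming the images are handled as numpy arrays and that σ is evaluated as a local standard deviation in a square window; the window size and the function name are illustrative assumptions, not part of the original disclosure.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sectioning_image(i_uniform, i_structured, window=16):
    """Sketch of expression (1): I = Iu * sigma.

    i_uniform    : image captured under the uniform illumination, Iu(x, y, z)
    i_structured : image captured under the structured (or speckle) illumination
    window       : side length in pixels of the local area used for the
                   standard deviation (an assumed, illustrative value)
    """
    img = i_structured.astype(float)
    # Local standard deviation sigma(x, y, z) = sqrt(E[I^2] - E[I]^2)
    local_mean = uniform_filter(img, window)
    local_mean_sq = uniform_filter(img ** 2, window)
    sigma = np.sqrt(np.maximum(local_mean_sq - local_mean ** 2, 0.0))
    return i_uniform * sigma
```

Out-of-focus planes show little local contrast under the structured illumination, so σ, and hence I, remains small there, which is the sectioning effect described above.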
In this way, an image close to the actual object O can be reconstructed by the computing. However, this method inherently uses, as the illumination of the object, the speckle phenomenon, which is a random phenomenon, and this leads to an unavoidable defect described below.
The originally expected I(x, y, 0) is Iu(x, y, 0), which is uniform in the x-y directions. In practice, however, because the speckle pattern is random, intensity unevenness remains in the calculated I(x, y, 0), which degrades the image quality.
Hence, this embodiment provides an illumination method having the sectioning effect while preventing the image quality degradation due to the illumination unevenness. A principle thereof will hereinafter be described.
This embodiment is based on the following mathematical facts. Generally, a function represented by expression (2) is referred to as “a comb function”.
comb(x,y)=Σδ(x−mp)δ(y−np) (2)
In expression (2), δ represents a Dirac delta function, and p represents the interval (pitch) between adjacent delta-function peaks in the direction of each coordinate axis. In addition, Σ represents a sum over all integers m and n (−∞<m, n<∞).
The mathematical fact concerning the comb function is that, as represented by expression (3), its Fourier transform is also a comb function whose pitch is 1/p.
F[comb(x,y)](f,g)=Σδ(f−m/p)δ(g−n/p) (3)
In expression (3), F represents the Fourier transform. Moreover, f and g represent spatial frequencies corresponding to x and y, respectively.
Generally, performing the Fourier transform on an amplitude distribution P(f, g) (pupil function) in a pupil of an optical system provides an amplitude distribution in an image plane of the optical system. In a case where the optical system is an illumination optical system, the squared absolute value of the amplitude distribution in the image plane is the intensity distribution of light that illuminates a sample object. Thus, setting P(f, g) of the illumination optical system to the comb function achieves a comb function-like illumination light. The comb function-like illumination light provides illumination having a uniform intensity distribution at a uniform pitch on the object plane, which does not cause illumination unevenness.
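The following is a minimal numerical sketch of this relation: a pupil function consisting of a few mutually coherent point sources is Fourier-transformed to give the amplitude on the object plane, whose squared absolute value is a periodic illumination pattern. The grid size and the source positions are illustrative assumptions.

```python
import numpy as np

N = 512                                            # samples per axis (assumed)
pupil = np.zeros((N, N), dtype=complex)
for df, dg in [(40, 0), (-20, 35), (-20, -35)]:    # illustrative source offsets
    pupil[N // 2 + df, N // 2 + dg] = 1.0          # mutually coherent point sources

# Pupil plane -> object plane amplitude, then intensity
amplitude = np.fft.ifft2(np.fft.ifftshift(pupil))
intensity = np.abs(np.fft.fftshift(amplitude)) ** 2
# 'intensity' is a periodic, lattice-like pattern with no random unevenness.
```

With only a few point sources the pattern is a lattice rather than a full comb function, but it is still strictly periodic and free from random unevenness.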
NPL 5 and NPL 6 disclose that a smaller pitch of the illumination light on the object plane allows a smaller calculation region for σ.
Such an illumination without illumination unevenness can be formed by a pupil function having point light sources located at the following coordinates in the pupil plane:
(1/√2,1/√2);
(−1/√2,1/√2);
(−1/√2,−1/√2); and
(1/√2,−1/√2).
Using the method of calculating σ(x, y, 0) described above for an object illuminated with such a periodic illumination light provides a sectioning image free from the intensity unevenness caused by the random speckle illumination.
However, this symmetric illumination has a significant defect. Consider an object O2 that consists of an upper fluorescent object located at z=+1 and a lower fluorescent object located at z=−1, with no fluorescent object at z=0.
When an image of the object O2 is captured, the final image should of course have no intensity at z=0. Even if the final image has a certain intensity there, that intensity must be much lower than those of the images at z=±1.
Consider that the pupil function with the symmetrically arranged point light sources described above is used to illuminate the object O2. Because the point light sources are arranged symmetrically with respect to the center of the pupil, the periodic illumination patterns projected onto the upper and lower fluorescent objects, and hence the fluorescent light patterns emitted from them, are laterally identical.
Therefore, a comb function-like fluorescent light coming from the upper fluorescent object exactly overlaps a comb function-like fluorescent light coming from the lower fluorescent object at the position of z=0, and thereby a light intensity distribution having a very high contrast is formed at z=0. As a result, σ(x, y, 0) takes a large value and a false image appears at z=0.
Hence, this embodiment uses a pupil function P2(f, g) in which point light sources are arranged at the following coordinates:
(−1/√2+a,1/√2+b);
(−1/√2+a,−1/√2+b); and
(1/√2+b,−1/√2+a) (A)
In the coordinates, a and b represent real numbers.
In other words, in the pupil function P2(f, g), the point light sources are displaced from the symmetric arrangement so that, among the distances from the center of the pupil to the point light sources, at least one distance is different from the other distances.
In order to verify this effect, a situation will be described in which the upper fluorescent object located at the position of z=+1 and the lower fluorescent object located at the position of z=−1 are illuminated with the mutually displaced illumination patterns formed by P2. In this situation, the fluorescent lights coming from the upper and lower fluorescent objects overlap at the position of z=0 with a displacement (that is, imperfectly), and therefore form a light intensity distribution having a very low contrast.
As understood from the comparison of these two cases, the asymmetric arrangement of the point light sources in the pupil function P2 keeps σ(x, y, 0) small and thereby prevents a false image from appearing at z=0.
Using the illumination method (that is, the illumination optical system) of this embodiment described above together with the methods disclosed in NPL 5 and NPL 6 enables provision of a good image free from intensity unevenness without performing time-consuming scanning.
Next, description will be made of a preferred arrangement example of the illumination optical system of this embodiment in the three-dimensional fluorescence microscope with reference to the drawings.
Reference numeral 110 denotes an illumination optical system which has a configuration capable of being added to a microscope body constituted by an objective lens 102 and an image sensor 103. Reference numeral 101 denotes an object (sample) placed on an object plane.
In the illumination optical system 110, reference numeral 111 denotes a coherent light source, constituted by a laser or the like, which emits light of a wavelength for exciting a fluorescent sample. Reference numeral 112 denotes an optical element, such as a diffraction grating, a prism or an optical fiber, which has a function of dividing one light beam emitted from the light source 111 into multiple (for example, three) light beams. The optical element 112 is not limited to the diffraction grating, the prism or the like, and may be any other element as long as it is capable of realizing, on a pupil plane 113 of the illumination optical system 110, the pupil function shape characterizing this embodiment. The pupil function shape of this embodiment can be realized by methods that are easy for engineers familiar with microscopes or semiconductor exposure apparatuses, such as a computer-generated hologram (CGH).
The divided light beams are reflected by a dichroic mirror 114 and pass through the objective lens (objective optical system) 102 to illuminate the object 101 with a lattice illumination light intensity distribution. Fluorescent light emitted from the object 101 passes through the objective lens 102, passes through the dichroic mirror 114 and then passes through another objective lens 102 to be imaged on the image sensor 103. An image captured by the image sensor 103 and displayed on a monitor (not shown) is observed by an observer.
Although the number of point light sources in the pupil of the illumination optical system was three in the above description, the number thereof is not limited to three. When the three point light sources are provided, three light beams having incident angles respectively corresponding to the positions of the three point light sources are projected onto the object 101, and thereby a lattice-like illumination pattern is formed on the object 101.
The outline of the embodiment of the present invention was described above. Installing, in a general fluorescence microscope, an illumination unit (illumination optical system) of the embodiment capable of realizing both the asymmetric structure illumination and the illumination having a uniform intensity distribution enables constructing a three-dimensional fluorescence microscope system capable of providing a high-quality sectioning effect. Moreover, the illumination unit can be realized only by performing a simple and easily restorable modification on the general fluorescence microscope. Description will be made of this illumination unit below.
In order to realize the three-dimensional fluorescence microscope, it is necessary to acquire by image capturing (a) an image 1 of a fluorescent sample illuminated with an excitation light having a uniform intensity distribution (this excitation light is hereinafter referred to also simply as “the uniform illumination”) and (b) an image 2 of the fluorescent sample illuminated with the asymmetric structure illumination.
First, description is made of a configuration and an illumination method of the illumination unit realizing the asymmetric structure illumination. The image of the fluorescent sample (hereinafter referred to also as “a fluorescent image”) is acquired by using an image sensor such as a CCD sensor or a CMOS sensor. Since general fluorescence microscopes have multiple camera ports, the following description is made on an assumption that the three-dimensional fluorescence microscope of the embodiment has multiple camera ports. Moreover, since an ocular observation system does not have an essential role in the three-dimensional fluorescence microscope described below, its description (and drawings) is omitted.
In most general fluorescence microscopes, the excitation light filter 201, the dichroic mirror 114 and the fluorescent light filter 203 are combined as one unit and detachably (interchangeably) attached via a rotatable turret. The fluorescent light 302 transmitted through the fluorescent light filter 203 is reflected by a bending mirror 202, passes through an imaging lens 102-B and enters a half mirror 204 to be divided into two light beams. One of the two light beams is introduced to a first camera port 211 to be imaged on an image sensor 103 such as a CCD sensor disposed thereat, and the other one of the two light beams reaches a second camera port 212. The first and second camera ports 211 and 212 are arranged at positions conjugate with the object 101, and an imaging surface of the image sensor 103 is disposed on a plane conjugate with the object 101. A plane optically conjugate with the object 101 which is located inside each of the first and second camera ports 211 and 212 is hereinafter referred to as “an in-camera port conjugate plane.”
Most recent microscopes, including the general fluorescence microscopes, employ an infinity correction method, in which the fluorescent light from the sample is converted into a collimated light flux by the objective lens, propagates as the collimated light without change to the imaging lens and is collected by the imaging lens. Moreover, the microscopes generally use a telecentric optical system on both the image side and the object side. Under the above conditions, in the microscopes employing the infinity correction method, focusing is made on the sample located at a front focal point of the objective lens, and an image of the sample is formed at a rear focal point of the imaging lens. In addition, a rear focal point of the objective lens coincides with a front focal point of the imaging lens. Furthermore, the in-camera port conjugate plane coincides with the rear focal point of the imaging lens in the optical axis direction.
In order to realize the asymmetric structure illumination on the object, it is necessary to divide the excitation light into multiple mutually coherent light beams respectively having predetermined incident angles and to produce an interference region where the multiple light beams overlap one another on the object plane. Hereinafter the mutually coherent light beams are referred to also simply as “light beams”.
When realizing such an asymmetric structure illumination in an existing general fluorescence microscope, where to introduce the excitation light becomes a problem. However, in the above-mentioned general fluorescence microscopes having the camera ports, the conjugate relation between the in-camera port conjugate plane and the object plane can be utilized.
Specifically, a configuration can be employed in which the image sensor 103 such as a CCD sensor is disposed at the first camera port 211 and the excitation light, as the divided multiple light beams for the asymmetric structure illumination, is introduced from the second camera port 212. This configuration introduces the excitation light as the divided multiple light beams into the microscope from the second camera port 212, causes the excitation light to proceed along an optical path (the imaging lens 102-B, the objective lens 102-A and the object 101) that is the reverse of the normal imaging optical path for the object 101, and projects the excitation light onto the object 101. Since the camera port and the sample are arranged at conjugate positions, if the introduced multiple light beams overlap one another on the in-camera port conjugate plane, they also overlap one another on the object plane and interfere with one another, which ensures realization of the asymmetric structure illumination.
A pattern pitch of the asymmetric structure illumination is decided depending on the incident angles of the multiple light beams on the object plane. Each of these incident angles can be defined by an angle formed by each light beam entering the camera port with respect to the optical axis. Specifically, when m represents an imaging magnification of the object with respect to the in-camera port conjugate plane and θ2 represents the incident angle of each light beam on the object plane, the incident angle θ1 formed by each light beam entering the camera port with respect to the optical axis can be decided such that the following relation is satisfied:
sin θ1=sin θ2/m.
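The relation sin θ1 = sin θ2/m can be evaluated with a small helper such as the following sketch; the function name and the example values are illustrative assumptions.

```python
import math

def camera_port_angle(theta2_deg, m):
    """Angle (degrees) at which a beam must enter the camera port so that it
    reaches the object plane at incident angle theta2_deg, given the
    magnification m from the object to the in-camera-port conjugate plane:
    sin(theta1) = sin(theta2) / m."""
    return math.degrees(math.asin(math.sin(math.radians(theta2_deg)) / m))

# Example: a 30-degree incident angle on the object with a 40x magnification
# corresponds to an entry angle of roughly 0.72 degrees at the camera port.
print(camera_port_angle(30.0, 40.0))
```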
Next, description will be made of optical properties of the excitation light entering the object. Since the light source used for the asymmetric structure illumination needs to be coherent, it is desirable to use a laser source as the light source. As the laser source, a semiconductor laser or a gas laser which has an oscillation wavelength in the excitation wavelength region can be used. As one of the properties of the laser light, the beam waist is important. The beam waist is the portion of the laser light (laser beam) where its diameter (beam diameter) is minimum. In other words, the beam diameter of the laser beam increases on both sides of the beam waist along the propagation direction.
Moreover, it is known that the curvature radius of the wavefront of the laser light becomes maximum (planar) at the beam waist. In order to reduce distortion of the intensity pattern of the asymmetric structure illumination, it is desirable that the wavefront of each of the above-mentioned multiple light beams be planar on the object plane. Therefore, it is desirable that, as the multiple light beams forming the asymmetric structure illumination, multiple laser beams each forming its beam waist on the object plane be used. The method for realizing the asymmetric structure illumination by using the general fluorescence microscope and the desirable conditions therein are as described above.
Next, description will be made of a configuration of the illumination unit producing the above-mentioned multiple light beams and modifications of the general fluorescence microscope necessary to install the illumination unit.
In the illumination unit 400, a laser source 111 is provided which emits a laser beam 301 (shown by a dotted line) to excite a fluorescent sample. The laser beam 301 enters a first optical path length adjuster 410. The first optical path length adjuster 410 is constituted by four bending mirrors 411 to 414. The laser beam 301 is reflected by the four bending mirrors 411 to 414 in this order and then enters a condenser lens 401. The laser beam 301 that has passed through the condenser lens 401 and a collimator lens 402 enters a second optical path length adjuster 420.
The second optical path length adjuster 420 is constituted by four bending mirrors 421 to 424. The laser beam 301 is reflected by the four bending mirrors 421 to 424 in this order and then enters a Mach-Zehnder interferometer 430. The laser beam 301 entering the Mach-Zehnder interferometer 430 is subjected to intensity division by a half mirror 431 to be divided into two laser beams 301-A and 301-B. The laser beam 301-A is reflected by a bending mirror 433 and then reaches a half mirror 434. On the other hand, the laser beam 301-B is reflected by a bending mirror 432 and then reaches the half mirror 434.
Each of the two laser beams 301-A and 301-B is reduced in its intensity by the half mirror 434. Then, the two laser beams 301-A and 301-B pass through the mount member 500 and the second camera port 212 to enter the general fluorescence microscope 200. Subsequently, the two laser beams 301-A and 301-B pass through the half mirror 204 and the imaging lens 102-B, are reflected by the bending mirror 202 and then pass through the objective lens 102-A to reach the object 101. The half mirrors 431 and 434 and the bending mirrors 432 and 433 in the Mach-Zehnder interferometer 430 are each provided with an angle adjustment mechanism (not shown). The angle adjustment mechanism enables adjustment of the Mach-Zehnder interferometer 430 such that the two laser beams 301-A and 301-B overlap each other at the second camera port 212 and are projected onto the object surface at respective predetermined incident angles.
Next, description will be made of a desirable beam diameter of the beam waist formed on the object 101 and setting of parameters of the optical system to realize the desirable beam diameter.
The beam diameter on the object 101 decides an illumination area on the object 101; the illumination area should sufficiently cover an observation area. When fobj represents a focal length of the objective lens 102-A and ftube represents a focal length of the imaging lens 102-B, an imaging magnification m from the object 101 to the image sensor 103 is expressed as follows:
m=ftube/fobj.
When Wimage represents half of the diagonal length of the effective image pickup area of the image sensor 103 and Wobj represents half of the diagonal length of the corresponding area on the object 101, the following relation is established:
Wobj=Wimage/m.
Accordingly, the beam diameter on the object 101 should be set to Wimage/m or more. In the following description, the beam diameter on the object 101 is set to Wimage/m.
A position of the beam waist and the beam diameter at the beam waist can be converted by causing the laser beam to pass through a lens. When w1 and w2 respectively represent beam widths (each corresponding to a 1/e2 radius) of the beam waist before and after passage through the lens, d1 and d2 respectively represent distances from the beam waist before and after the passage through the lens to the lens, f represents a focal length of the lens and λ represents a wavelength of the laser, the following relations expressed by expression (4) are established:
w2^2=w1^2·f^2/[(f−d1)^2+(π·w1^2/λ)^2]
d2=f+(w2/w1)^2·(d1−f) (4).
In addition, when w(z) represents a beam diameter at a position away from the beam waist of the laser beam whose beam waist width is wo by a distance z, R(z) represents a curvature radius of a beam wavefront at that position and θwo represents a beam divergence angle at a position sufficiently away from the beam waist, the following relations expressed by expressions (5) and (6) are established:
w(z)^2=wo^2·{1+[z·λ/(π·wo^2)]^2}
R(z)=z·{1+[π·wo^2/(z·λ)]^2} (5)
θwo=λ/(π·wo) (6)
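The following is a minimal sketch of expressions (4) and (6), assuming consistent length units (for example, millimetres) and treating w as the 1/e^2 radius defined above; the function names are illustrative.

```python
import math

def waist_through_lens(w1, d1, f, lam):
    """Expression (4): waist radius w2 produced by a thin lens of focal length
    f, and its distance d2 from the lens, for an input waist of radius w1
    located a distance d1 in front of the lens."""
    zr = math.pi * w1 ** 2 / lam                    # Rayleigh range of the input beam
    w2 = w1 * f / math.sqrt((f - d1) ** 2 + zr ** 2)
    d2 = f + (w2 / w1) ** 2 * (d1 - f)
    return w2, d2

def divergence_angle(wo, lam):
    """Expression (6): far-field divergence angle of a beam with waist radius wo."""
    return lam / (math.pi * wo)
```

With d1=f these formulas reduce to d2=f and w1·w2=f·λ/π, as used below.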
When the laser beam enters the lens with its beam waist located at the front focal point of the lens, which corresponds to d1=f, the second expression of expressions (4) gives d2=f. Therefore, it is understood that the position of the beam waist after passage through the lens coincides with the rear focal point of the lens. Moreover, it is understood that substituting d1=f into the first expression of expressions (4) gives the following relation:
w1·w2=f·λ/π.
Accordingly, in order to set the beam diameter at the beam waist on the object 101 to Wobj, it is necessary that, when Wobj-front represents the beam diameter at the beam waist formed at the focal position of the objective lens 102-A on the imaging lens side (the front focal position of the imaging lens 102-B), the following relation be satisfied:
Wobj-front=fobj·λ/(π·Wobj).
Similarly, a beam diameter Wport2 at a beam waist formed at the second camera port 212 is defined as follows:
Wport2=ftube·λ/(π·Wobj-front).
Substituting fobj·λ/(π·Wobj) for Wobj-front in the above expression provides the following relation:
Wport2=(ftube/fobj)·Wobj=m·Wobj=Wimage.
As described above, on the basis of the illumination area necessary on the object 101, the beam diameter to be obtained at each beam waist for realizing that illumination area can be decided by using expressions (4). As understood from the above expression, changing the imaging magnification m does not cause a variation of the beam diameter Wport2 at the second camera port 212.
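As a numerical illustration of this invariance, the following sketch traces the chain of beam waists from the object to the second camera port 212 for two different magnifications; the specific focal lengths are illustrative assumptions and lengths are in millimetres.

```python
import math

lam, w_image = 488e-6, 4.0                           # wavelength and sensor half diagonal (mm)
for f_obj, f_tube in [(4.0, 160.0), (8.0, 160.0)]:   # two illustrative magnifications
    m = f_tube / f_obj
    w_obj = w_image / m                              # waist needed on the object 101
    w_obj_front = f_obj * lam / (math.pi * w_obj)    # waist between objective and imaging lens
    w_port2 = f_tube * lam / (math.pi * w_obj_front) # waist at the second camera port 212
    print(m, w_port2)                                # w_port2 equals w_image in both cases
```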
A more detailed description of propagation of the beam waist will be made. The laser beam 301 has a certain beam diameter of the beam waist (the beam diameter of the beam waist is hereinafter referred to as “a beam waist diameter”) at a beam emitting portion of the laser source 111. Laser sources respectively have different unique beam waist diameters and different unique beam divergence angles. Relations among the beam waist diameter, the wavefront and the beam divergence angle are decided by expressions (4) to (6) described above.
For example, in a case where the beam waist diameter is in a submillimeter-to-millimeter range, as in many gas lasers, the beam divergence angle in the visible wavelength region is in a milliradian range, whereas the beam waist diameters of most semiconductor lasers are in a micrometer range and their beam divergence angles range from several tens to several hundreds of milliradians.
The configuration described above, with the condenser lens 401 and the collimator lens 402, can accommodate such differences among laser sources.
As described above, since changing the focal length of the objective lens 102-A to change the imaging magnification does not require changing the beam waist diameter at the second camera port 212, the focal lengths of the condenser lens 401 and the collimator lens 402, the distance therebetween and the optical path length described above do not need to be changed once they have been set.
However, in a case where the wavelength of the excitation light or the beam diameter at the beam emitting portion of the light source is changed, for example because the kind of fluorescent dye used to stain the sample is changed, it is necessary to reset the focal lengths of the condenser lens 401 and the collimator lens 402 and the distance therebetween, and to readjust the first and second optical path length adjusters 410 and 420. In order to perform such resetting and readjustment, it is desirable that the collimator lens 402 be a variable focal length lens and that the condenser lens 401 be movable in its optical axis direction.
In a case where the beam diameter at the beam emitting portion of the light source is small and the beam divergence angle there is relatively large, as in semiconductor lasers, the position and diameter of the beam waist formed by the condenser lens 401 differ accordingly, and the above focal lengths and distances are reset by using expressions (4) to (6) in the same manner.
In a case of using three laser beams, a tri-branching optical system can be used in place of the Mach-Zehnder interferometer 430.
Next, description will be made of another configuration to provide the multiple light beams.
An illumination unit 400 of this configuration also forms the beam waist at the in-camera port conjugate plane in the second camera port 212, as in the configuration described above.
Division of the light beam emitted from the light source into the multiple light beams so as to cause them to enter the second camera port 212 is also possible with other configurations. For example, a configuration may be used in which the collimated laser beam enters a diffraction grating 112 and is divided into two diffracted beams.
At the pupil plane 113 of the illumination optical system, the two diffracted beams are collected at mutually different positions respectively corresponding to their incident angles on the second camera port 212.
The optical system constituted by the condenser lens 404 and the second collimator lens 405 can set a conjugate relation between the diffraction grating 112 and the second camera port 212. This setting causes the two divided diffracted beams to overlap each other at the second camera port 212. Moreover, setting the incident angle of the collimated light on the diffraction grating 112 to a non-zero angle makes it possible to realize the asymmetric structure illumination on the object 101.
Next, a method for realizing the uniform illumination will be described. The uniform illumination can be realized by using the epi-illumination optical system originally provided in the general fluorescence microscope. However, constructing a three-dimensional image needs multiple sectioning images captured with step-by-step changes of the focus coordinate. Capturing such multiple sectioning images requires frequent switching between the asymmetric structure illumination and the uniform illumination. If the illumination unit is removed and the fluorescence microscope is restored to its original state with the epi-illumination optical system at each switching, much time is lost. Moreover, frequent operation of the turret holding the optical elements such as the filter and the dichroic mirror and frequent replacement of these optical elements may cause unnecessary vibration or image displacement.
Therefore, it is not realistic to realize the uniform illumination by removing the illumination unit and restoring the microscope to its original state at each switching. Possible alternative methods are, for example, a method (first method) that blocks all of the multiple light beams except one and thereby illuminates the object with the one remaining light beam, and a method (second method) that utilizes the polarization degree of freedom of the light beams so as to prevent the multiple light beams from interfering with one another. The first method can be realized by disposing a light-blocking member that blocks the laser beam 301-A.
Using the above-described methods enables construction of a three-dimensional fluorescence microscope system having a high-quality sectioning effect by installing the illumination unit through a modification of the general fluorescence microscope that is simple and restorable to the original state. A pattern displacement of the asymmetric structure illumination does not influence final image quality as long as the pattern has periodicity. This is because the computer processing for providing the sectioning effect uses standard deviation values calculated in an area near each point of the image, and that area is larger than one period of the pattern.
Next, a detailed description of the fluorescence microscopes provided with the illumination optical system having the pupil function P2 will be made in Embodiments 1 and 2, and a description of specific configurations of the illumination unit will be made in Embodiments 3 to 5.
A microscope illumination optical system of Embodiment 1 (Example 1) has the configuration described above in which three point light sources are provided in the pupil of the illumination optical system (the pupil function P2).
A comparison is made between the method using the random speckle illumination disclosed in NPL 5 and NPL 6 and the method using the lattice illumination formed by the illumination optical system of this embodiment having the pupil function P2.
Although Embodiment 1 described the case of providing three point light sources in the pupil of the illumination optical system, the number of the point light sources is not limited to three, and may be two or four or more. Embodiment 2 (Example 2) describes an illumination optical system that uses two point light sources to provide the sectioning effect.
As described above, the number of light source areas may be any multiple number (two or more).
In experiments performed by the inventor in which image capturing was performed by a three-dimensional microscope using the illumination optical system described in each of Embodiments 1 and 2 with a pixel number of 256×256×256, the time required for the process on a workstation equipped with a 3.33 GHz CPU was within one minute. Providing a dedicated computer program, a parallel distributed environment and optimized hardware such as a graphics accelerator sufficiently enables acquisition of an image within a time shorter than that required for scanning by a confocal microscope.
Embodiment 3 (Example 3) presents a numerical example relating to settings of the illumination area, the beam waist and the beam diameter by using the following specific numerical values (predetermined values), such as the imaging magnification of the microscope, and thereby shows that the configuration described above can be realized with realistic values.
A half length of a diagonal length of the image sensor 103: Wimage=4 mm
A focal length of the objective lens 102-A: Fobj=4 mm
A focal length of the imaging lens 102-B: Ftube=160 mm
An imaging magnification from the object 101 to the image sensor 103: m=40×
A wavelength of the laser source 111: λ=488 nm
A focal length of the condenser lens 401: F1=15 mm
A radius of the beam emitting portion of the laser source 111: Wo=0.26 mm
In Table 1, the unit of the optical parameters other than the wavelength of the laser source 111 is mm. From the given predetermined values (shown in the upper part of Table 1), the optical parameters other than the predetermined values are calculated. The calculated parameters listed below are collectively shown in the lower part of Table 1 and are realistic values.
An optical path length from the beam emitting portion of the laser source 111 via the first optical path length adjuster 410 to the condenser lens 401: D0
A beam diameter of beam waist formed near the focal point of the condenser lens 401: W1
A variable focal length of the collimator lens 402: F2
A distance between the condenser lens 401 and the beam waist formed near the focal point of the condenser lens 401: D1
A distance between the beam waist formed near the focal point of the condenser lens 401 and the collimator lens 402: D2
An optical path length from the collimator lens 402 via the second optical path length adjuster 420 and the Mach-Zehnder interferometer 430 to the in-camera port conjugate plane in the second camera port 212: D3
A beam diameter of the beam waist formed at the in-camera port conjugate plane in the second camera port 212: Wport2
When the distance between the condenser lens 401 and the collimator lens 402 is set to 98.7754 mm + 15.2 mm (that is, D1+D2) and the optical path length D0 from the beam emitting portion of the laser source 111 via the first optical path length adjuster 410 to the condenser lens 401 is adjusted to 933.898 mm, the asymmetric structure illumination provides an illumination area corresponding to the assumed width of Wimage=4 mm.
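As a cross-check of these numbers, the following sketch applies expression (4) to the predetermined values above (lengths in millimetres, beam sizes treated as 1/e^2 radii); it reproduces the 15.2 mm waist distance behind the condenser lens 401 and a collimator focal length close to 98.7754 mm.

```python
import math

lam   = 488e-6    # wavelength of the laser source 111 (mm)
w0    = 0.26      # radius of the beam emitting portion of the laser source 111 (mm)
f1    = 15.0      # focal length of the condenser lens 401 (mm)
d0    = 933.898   # optical path length D0, emitting portion to condenser lens 401 (mm)
w_img = 4.0       # half diagonal of the image sensor 103 (mm)

# Beam waist formed by the condenser lens 401 (expression (4))
zr = math.pi * w0 ** 2 / lam
w1 = w0 * f1 / math.hypot(f1 - d0, zr)
d1 = f1 + (w1 / w0) ** 2 * (d0 - f1)
print(w1, d1)                       # ~0.0038 mm waist located ~15.2 mm behind the lens

# Collimator focal length F2 needed so that the waist at the second camera
# port 212 equals Wimage (the waist is placed at the collimator's front focus,
# so w1 * Wport2 = F2 * lambda / pi)
f2 = w_img * w1 * math.pi / lam
print(f2)                           # ~98.78 mm, consistent with the quoted spacing
```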
Embodiment 4 (Example 4) shows a method that, in the configuration described above, uses one of the multiple light beams used for the asymmetric structure illumination and blocks the other light beams, thereby realizing the uniform illumination.
In this embodiment, a light-blocking mask 501 provided with a light-blocking portion 503 is disposed in the optical path of the multiple light beams. Rotating the light-blocking mask 501 about the optical axis inserts the light-blocking portion 503 into the optical path of the light beams other than one light beam and blocks them, so that the object is illuminated with the one remaining light beam (the uniform illumination).
The light-blocking portion may be moved in ways other than the rotation about the optical axis, as long as the above-mentioned light-blocking function can be provided. For example, the light-blocking mask 501 may be slid so as to be inserted into and removed from the optical path. Moreover, the light-blocking mask 501 may be driven by any driving method. Providing a large number of small-area light-blocking portions 503 on the light-blocking mask 501 makes it possible to switch between the asymmetric structure illumination and the uniform illumination with only a slight movement of the light-blocking mask 501.
The light-blocking portion 503 can be realized by using a material having an extremely low transmittance for light in general, or by using a polarization element that selectively blocks light in a specific polarization state. Moreover, using, as the light-blocking portion 503, a liquid crystal polarizing plate or the like that is capable of dynamically changing a property such as its transmittance eliminates even the necessity of moving the light-blocking mask 501.
Embodiment 5 (Example 5) shows an exemplary configuration of an optical system that includes a branching portion to divide a light beam into multiple light beams and that controls the polarization state of each light beam so as to switch between the asymmetric structure illumination and the uniform illumination.
Description will be made of a configuration of a beam interference optical system 430 constituted by elements 472, 473 and 474. The laser beam from the light source passes through a polarization forming element 471, which normally provides both an s-polarized component (s-wave) and a p-polarized component (p-wave). The s-wave is reflected by a polarization beam splitter 472, whereas the p-wave is transmitted through the polarization beam splitter 472. The s-wave reflected by the polarization beam splitter 472 directly proceeds to the second camera port 212. The p-wave transmitted through the polarization beam splitter 472 is reflected by a bending mirror 473 and thereby changes its proceeding direction. The p-wave is then converted into an s-wave by a half-wave plate 474 whose optic axis is orthogonal to the optical axis and is tilted by 45 degrees with respect to the polarization direction of the p-wave, and reaches the second camera port 212. The two s-waves reaching the second camera port 212 are mutually coherent at the second camera port 212 and on the object 101 and thereby form the asymmetric structure illumination.
On the other hand, in order to realize the uniform illumination, a setting is made such that the polarized light component after passage through the polarization forming element 471 is only the s-wave or only the p-wave. This setting causes only the s-wave or only the p-wave to reach the second camera port 212, which makes it possible to form the uniform illumination.
The sine of the angle θ between the two light beams entering the second camera port 212 is given by sin θ=(d/r)·NA/β, where NA represents the numerical aperture of the objective lens, d/r represents the separation of the two point light sources in the pupil normalized by the pupil radius, and β represents the imaging magnification from the object to the in-camera port conjugate plane.
For example, when NA=0.95, d/r=1 and β=40, sin θ=0.024, which is significantly small. In order to ensure a distance of, for example, 5 cm between the polarization beam splitter 472 and the bending mirror 473 so that they do not mechanically interfere with each other, a distance of approximately 2 m is necessary between these two elements and the in-camera port conjugate plane in the second camera port 212, which increases the size of the illumination unit. A modified configuration can be employed in order to solve this problem.
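The distance quoted above follows from simple geometry; a minimal check under a small-angle assumption (units in metres):

```python
# Two beams that are separated by about 5 cm at the beam splitter and the
# bending mirror, and that must converge on the in-camera-port conjugate
# plane with sin(theta) = 0.024, meet only after roughly 0.05 / 0.024 metres.
sin_theta = 0.024
separation = 0.05                      # metres between splitter and mirror
print(separation / sin_theta)          # ~2.1 m to the conjugate plane
```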
The embodiments described above are merely typical examples, and in the practice of the present invention, various modifications and changes can be made for each embodiment.
This application claims the benefit of Japanese Patent Application Nos. 2012-188223, filed on Aug. 29, 2012 and 2013-174742, filed on Aug. 26, 2013, which are hereby incorporated by reference herein in their entirety.
Provided is an illumination optical system capable of being used for microscopes such as a fluorescence microscope and a digital slide scanner.