The present invention relates to confocal optical microscopy methods, with particular reference to video-confocal microscopy, and also relates to devices for carrying out such methods.
As is well known, confocal microscopy is a microscopic imaging technique that comprises a narrow-field illumination of a sample and a narrow-field capture of the light coming from the sample at the illuminated zone, i.e. a spot.
The optical video-confocal microscopy technique uses a narrow-field illumination of the sample and a wide-field capture of the light coming from the sample, followed by a mathematical processing of the data in order to emulate narrow-field capture.
More in detail, with reference to
In particular, EP 0833181 describes methods for calculating video-confocal microscopy images, in which algorithms are used such as:
I(x,y)=K[max(x,y)−min(x,y)−2Avg(x,y)],
wherein max(x,y), min(x,y) and Avg(x,y) are, for each mode of lighting versus the position u,v, images formed by the maximum values, by the minimum values and by the average values, respectively, of the light intensity at each pair of coordinates x,y, and where K is a gain factor that depends upon the shape of the spots.
EP 0833181 also describes a video-confocal microscopy device for forming the raw images and for carrying out the above-described method.
However, this process, like others of the prior art, has some drawbacks.
In fact, in the video-confocal microscopy techniques, noise and/or a too low number of raw images can cause artifacts in the obtained images, such as “spurious patterning”, which worsen the performance.
However, increasing the scan density would extend the duration of the analysis, and photomodifications of the sample would also be possible. Therefore, the scan density should not exceed a reasonable limit.
Furthermore, the actual confocal and video-confocal techniques can provide an axial resolution power that is much lower than the lateral resolution power, even at the highest apertures. This depends upon diffraction effects and upon the microscope far-field configuration. This limitation makes it difficult to obtain images that can represent at best thin optical sections, which are necessary for accurately investigating the three-dimensional features of thick samples.
Normally, an increase of the spatial resolution power is desirable especially in biology and in medicine, typically in ophthalmology, dermatology and endoscopy, but also in the field of material investigation, for instance microelectronics, nanotechnologies, worked surfaces, non-destructive testing, etc.
From US 20040238731 A1 and US 20050163390 A1, microscopy techniques and microscopes are known in which steps are provided of axial displacement of the focus plane and deep investigation of the samples. However, these techniques and microscopes cannot provide a particularly high image resolution. More in detail, the apparatus of US 20040238731 A1 uses algorithms for obtaining, in particular, chromatic and spatial data of the sample by exploiting the chromatic aberration of its optics. The apparatus of US 20050163390 A1 is used for comparing low resolution optical sections in order to reconstruct the structure of thick samples, by physically overturning the samples and by scanning them twice, in opposite directions, and can be used only for particular types of samples.
It is therefore a feature of the invention to provide a method and a device to overcome the above-mentioned drawbacks of present confocal and video-confocal optical microscopy, in order to extend its use to technical fields in which it can become an easier and cheaper alternative to fluorescence, reflection and even transmission microscopy techniques.
It is a particular feature of the invention to provide a method and a confocal or video-confocal microscopy device that has a better resolution power with respect to what can be obtained at present.
It is another particular feature of the invention to provide a video-confocal microscopy method and device that provide a finer spatial resolution even starting from raw images that have a low signal/noise ratio.
It is also a particular feature of the invention to provide a video-confocal microscopy method and device that provide a finer spatial resolution even starting from a relatively low number of raw images, i.e. from a relatively low scan density.
It is a further particular feature of the invention to provide a video-confocal or confocal microscopy method and device that provide an axial resolution power similar to the lateral resolution power, or in which the axial resolution power differs from the lateral resolution power less than in the prior art.
These and other objects are achieved by a video-confocal microscopy method for creating an image of an optical section of a sample, the method comprising the steps of:
In a first aspect of the invention, the step of computing the final image comprises executing an algorithm configured for calculating, for each light detector element, at least one value of a central moment of order ≧3 of the light intensity distribution, the central moment of order ≧3 having at each coordinate x,y a value that depends upon the asymmetry of the intensity values distribution of each raw image versus the position of the illumination pattern, wherein the central moment is defined as:
mh(x,y) = Avg{[Iu,v(x,y) − Avg(Iu,v(x,y))]^h},   [1]
wherein:
This way, the final image takes higher values at the coordinates x,y of the light detector elements that correspond to positions at which critically focused sample portions are present.
In other words, the moments of order h≧3, which are used in the algorithm for computing the final video-confocal image, contain light intensity distribution data that make it possible to take into account the degree of symmetry/asymmetry of the light intensity distribution at each pixel, i.e. at each light detector element of the detector, versus the position u,v of the illumination pattern.
The pixels, i.e. the detector elements of the detector, at which the light intensity distribution has a higher asymmetry correspond to sample portions that are critically focused, i.e. to sample portions that have a higher density and/or that emit a higher brightness, for instance by fluorescence, by reflection or even by transmission, which points out local unevennesses and specific features of these sample portions.
Therefore, the above-mentioned critically focused sample portions are highlighted as brighter portions in the final confocal image.
Therefore, if central moments of the light intensity distribution are used to calculate the final image, according to the invention, better performances can be achieved than by prior art methods.
The light beams may generally include waves over a wide range of the electromagnetic spectrum, even if most current applications of confocal and video-confocal microscopy use wavelengths between the infrared and the near-UV range. Therefore, the expression “light” is to be understood in a broad sense, and also comprises waves outside the visible range. For the same reason, the word “microscopy”, and the like, as used in this text, may have a wider meaning, which includes high resolution image capture techniques, i.e. high resolution imaging techniques, once the wavelength used is taken into account.
In an exemplary embodiment, the algorithm for computing the final image is defined by the equation:
IAh(x,y) = mh(x,y)/[m2(x,y)]^((h−1)/2)   [2]
wherein:
The value of the central moment may be calculated by formula [1] directly, or by formulas relating the central moments to the simple moments, which are well known to a person skilled in the art to which the invention relates.
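Merely as an illustrative sketch, not forming part of the claimed method, the calculation of formulas [1] and [2] can be organized, for instance in Python with NumPy, as follows; the array layout and the names central_moment, final_image and raw_stack are arbitrary assumptions, not references to an existing implementation:

    import numpy as np

    def central_moment(raw_stack, h):
        # raw_stack: array of shape (N, X, Y), one raw image Iu,v(x,y) per scan position u,v
        avg = raw_stack.mean(axis=0)                    # Avg(Iu,v(x,y)) over all scan positions
        return ((raw_stack - avg) ** h).mean(axis=0)    # mh(x,y), as defined by formula [1]

    def final_image(raw_stack, h=3):
        m_h = central_moment(raw_stack, h)
        m_2 = central_moment(raw_stack, 2)
        # IAh(x,y) = mh(x,y)/[m2(x,y)]^((h-1)/2), formula [2]; a small floor avoids division by zero
        return m_h / np.maximum(m_2, 1e-12) ** ((h - 1) / 2)

For h = 3 the normalization reduces to a simple division by m2(x,y), consistently with formula [3] below.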
Advantageously, the step of computing the final image is carried out by an algorithm defined by formula [2], wherein h=3, i.e. by the formula:
IA3(x,y) = m3(x,y)/m2(x,y)   [3]
In fact, it has been noticed that formula [3] makes it possible to obtain images that have a finer spatial resolution with respect to the prior art, that have fewer random irregularities, and that are less affected by patterning effects due to a possible spatial incoherence of the scan pattern, in particular of the ordered scan pattern, i.e. of the distance between adjacent u,v positions with respect to the matrix of the light detector elements, i.e. the pixels of the image detector, which normally occurs due to the step-by-step scanning of video-confocal microscopy. Formula [3] also makes it possible to obtain images that are less affected by such effects as the Moiré effect and the like.
As described above, the typical asymmetry of the light intensity distribution Iu,v(x,y) is particularly sensitive to the presence of critically focused sample portions, in particular it is sensitive to the presence of a material that is particularly concentrated in determined regions.
More in detail, images are obtained that have a resolution comparable to or higher than what is allowed by the prior art techniques, in which the final image is obtained, for instance, through the algorithm:
I(x,y)=K[max(x,y)−min(x,y)−2Avg(x,y)]
to which EP 0833181 relates.
As an alternative, or in addition, the step of computing the final image may be carried out by an algorithm still defined by formula [2], wherein the order h is an odd integer number ≧5.
In fact, the moments of odd order h are particularly sensitive to the asymmetry of the light intensity distribution at the light detector elements, i.e. at the pixels of the detector, versus the position u,v of the illumination pattern. As described above, this is relevant for detecting the position of critically focused sample portions, through portions of the final image that have a particularly high light intensity.
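As a purely numerical illustration, not taken from the measurements discussed below: a pixel whose intensity over four scan positions is 0, 0, 0, 4 (i.e. it lights up only when the spot passes over critically focused material) has Avg = 1, m2 = 3 and m3 = 6, so that IA3 = m3/m2 = 2; a pixel with the symmetric sequence 0, 2, 2, 4 has m3 = 0 and therefore IA3 = 0, even though its average brightness is higher.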
Furthermore, provided the scan density, i.e. the closeness of adjacent positions of the illumination pattern, and the signal-to-noise ratio are higher than predetermined values, the moments of odd order higher than 3 make it possible to obtain more detailed data, the higher the order h is.
In particular, the step of computing the final image is carried out by an algorithm defined by formula [2], wherein the order h is selected among 5, 7 and 9, i.e. the step of computing is carried out through a formula selected from the group consisting of:
IA5(x,y) = m5(x,y)/[m2(x,y)]^2   [4]
IA7(x,y) = m7(x,y)/[m2(x,y)]^3   [5]
IA9(x,y) = m9(x,y)/[m2(x,y)]^4   [6].
In a possible exemplary embodiment, this algorithm is expressed by a combination of values of central moments of the light intensity distribution, in particular by a linear combination that is expressed by the equation:
IA(x,y) = Σ(i=H′…H″) [ci·mi(x,y)/[m2(x,y)]^((i−1)/2)]   [7]
In particular, the use of the linear combination defined by equation [7] turned out to be useful for minimizing the patterning due to Moiré effect and to the step-by-step scanning.
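Again as a non-limiting sketch, under the same assumptions as the Python fragment above, the linear combination of formula [7] could be computed by a hypothetical helper such as:

    import numpy as np

    def combined_image(raw_stack, coeffs):
        # raw_stack: (N, X, Y) stack of raw images; coeffs: {order i: coefficient ci},
        # e.g. {5: 0.48, 7: 0.36, 9: 0.24} as in the example reported further below
        avg = raw_stack.mean(axis=0)

        def moment(h):
            return ((raw_stack - avg) ** h).mean(axis=0)   # mh(x,y), formula [1]

        m_2 = np.maximum(moment(2), 1e-12)                  # floor to avoid division by zero
        # IA(x,y) = sum over i of ci*mi(x,y)/[m2(x,y)]^((i-1)/2), formula [7]
        return sum(c_i * moment(i) / m_2 ** ((i - 1) / 2) for i, c_i in coeffs.items())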
Naturally, also in the case of formulas [3]-[7], the values of the central moments may be calculated directly from equation [1], or by the above-mentioned formulas that relate the central moments to the simple moments.
In an exemplary embodiment of the invention, the method comprises a step of:
mh,j(x,y) = Avg{[Iu,v,wj(x,y) − Avg(Iu,v,wj(x,y))]^h},
wherein:
The use of provisional images comprising images that relate to above-focus planes and to below-focus planes in order to calculate the final image allows an axial resolution higher than what is allowed by the prior art methods.
Therefore, not only the amount, but also the quality of the obtained data is improved, not only for a lateral but also for an axial displacement of the illumination pattern, i.e. for a displacement about the optimum focus position. In fact, optical sections are obtained that are remarkably thinner than what is allowed by the prior art methods. This is particularly advantageous for studying thick objects.
Advantageously, the maximum distance of the further illumination planes from the optical section is of the same order of magnitude as the wavelength of the light beams.
In an advantageous exemplary embodiment, a step is provided of translationally moving a same illumination pattern according to a direction w transverse to the optical section, in particular according to a direction which is orthogonal to the optical section, wherein this same illumination pattern is sequentially shifted between the illumination plane arranged at the optical section and each of the further illumination planes αj, and said step of scanning the sample is sequentially carried out along each of the illumination planes.
In particular said plurality of further illumination planes comprises only the two further illumination planes that are arranged at opposite sides of the optical section, in particular at about the same distance from the optical section.
In this case, a combination of the provisional images is preferably defined by the equation:
IB(x,y)=IA(x,y)−k|IA−(x,y)−IA+(x,y)| [8]
wherein IA−(x,y) and IA+(x,y) are provisional images of the two planes α−,α+ that are arranged at opposite sides of the optical section, and k is a correction coefficient preferably set between 0.5 and 1.
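A minimal sketch of this combination, assuming the three provisional images have already been computed (for instance with the Python fragments above) and using an arbitrarily chosen k inside the suggested 0.5-1 range, could be:

    import numpy as np

    def axial_combination(ia, ia_minus, ia_plus, k=0.75):
        # ia: provisional image IA(x,y) of the plane at the optical section;
        # ia_minus, ia_plus: provisional images IA-(x,y), IA+(x,y) of the planes below and above it;
        # k: correction coefficient, preferably set between 0.5 and 1 as stated above
        return ia - k * np.abs(ia_minus - ia_plus)   # IB(x,y), formula [8]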
For example, the provisional images may be calculated, in particular, by formulas that are similar to equation [2]:
IA+(x,y) = mh+(x,y)/[m2+(x,y)]^((h−1)/2)   [2′],
and
IA−(x,y) = mh−(x,y)/[m2−(x,y)]^((h−1)/2)   [2″],
where mh+(x,y) and mh−(x,y) are the values of the central moments of order h of the distributions of the light intensity coming from the further illumination planes α+ and α−, respectively, and
As an alternative, the provisional images may be calculated by means of respective combinations of values of central moments of the light intensity distribution, in particular by means of linear combinations that may be obtained from equation [7]:
IA+(x,y) = Σ(i=H′…H″) [ci·mi+(x,y)/[m2+(x,y)]^((i−1)/2)]   [7′],
and
IA−(x,y) = Σ(i=H′…H″) [ci·mi−(x,y)/[m2−(x,y)]^((i−1)/2)]   [7″],
where mi+(x,y) and mi−(x,y) are the values of the central moments of order i of the intensity distribution of the light coming from the further illumination planes α+ and α−, respectively.
For instance, the step of computing the final image is carried out by an algorithm defined by formula [8], wherein h=3, i.e. by the formula:
IB3(x,y) = IA3(x,y) − k·|IA3−(x,y) − IA3+(x,y)|   [9].
In other exemplary cases, the step of computing the final image is carried out by an algorithm defined by formula [8], wherein h is an odd integer number ≧5. Even in this case, if the scan density and the signal-to-noise ratio are high enough, the values of the central moments and the provisional images of higher order allow a higher lateral resolution, the higher h is. In particular, the step of computing the final image is carried out by an algorithm defined by formula [2], wherein h is selected among 5, 7 and 9.
In still other embodiments, the step of computing the final image is carried out by an algorithm defined by formula [7]:
IA(x,y) = Σ(i=H′…H″) [ci·mi(x,y)/[m2(x,y)]^((i−1)/2)]   [7]
In a second aspect of the invention, the above-mentioned objects are achieved by a confocal microscopy method for creating an image of a cross section of interest of a sample, the method comprising the steps of:
The use of provisional images comprising images referring to above-focus planes and to below-focus planes to calculate the final image makes it possible to obtain an axial resolution higher than what is allowed by the prior art methods.
Therefore, not only the amount, but also the quality of the obtained data is improved, not only for a lateral but also for an axial displacement of the illumination pattern, i.e. for a displacement about the optimum focus position. In fact, optical sections are obtained that are remarkably thinner than what is allowed by the prior art methods. This is particularly advantageous for studying thick objects.
These advantages can be achieved also in the field of confocal microscopy, where the information, i.e. the light coming from the sample, is directly collected in a narrow field.
Advantageously, this maximum distance is of the same order of magnitude as the wavelength of the light beams.
In particular,
In other words, the advantages of the method according to the second aspect of the invention can be achieved also in the field of video-confocal microscopy, which differs from the confocal microscopy technique in that the information, i.e. the light coming from the sample, is received in a wide field, and the restriction to a narrow field is carried out analytically. As better explained hereinafter, such advantages can be achieved independently of the algorithm used to calculate the final image.
In a possible exemplary embodiment, a step is provided of translationally moving a same illumination pattern according to a direction w transverse to the section of interest, in particular according to a direction orthogonal to the section of interest, wherein the same illumination pattern is sequentially shifted between the illumination planes βj, and the step of scanning the sample is sequentially carried out on each illumination plane.
Preferably, said plurality of illumination planes comprises an illumination plane selected at the section of interest.
In particular said plurality of illumination planes comprises, at a predetermined distance from said section of interest, only said two illumination planes that are arranged at opposite sides of the section of interest, in particular at about the same distance from the section of interest.
In this case, the combination is preferably defined by the equation:
IB(x,y) = I(x,y) − k·|I−(x,y) − I+(x,y)|   [10]
wherein I(x,y) is an image calculated at illumination plane β0, whereas I−(x,y) and I+(x,y) are provisional images of the two planes β−,β+ at opposite sides of the section of interest, and k is a correction coefficient preferably set between 0.5 and 1.
The provisional images may be calculated by an algorithm selected among the algorithms known from the confocal and video-confocal microscopy methods, such as the algorithms disclosed in EP 0833181, for example by the formula:
I(x,y)=K[max(x,y)−min(x,y)−2Avg(x,y)],
and similar for I−(x,y), I+(x,y), or by an algorithm defined by the formula:
I(x,y) = mh(x,y)/[m2(x,y)]^((h−1)/2)   [2]
where the symbols have the meaning explained above, and similar for I−(x,y), I+(x,y).
As an alternative, the provisional images are calculated by an algorithm defined by formulas deriving from equation [2], i.e. by the formulas:
IA+(x,y) = mh+(x,y)/[m2+(x,y)]^((h−1)/2)   [2′],
and
IA−(x,y) = mh−(x,y)/[m2−(x,y)]^((h−1)/2)   [2″]
As an alternative, the provisional images are calculated as linear combinations of central moments of the light intensity distribution; in particular, they are calculated by an algorithm defined by the formulas deriving from equation [7]:
IA+(x,y) = Σ(i=H′…H″) [ci·mi+(x,y)/[m2+(x,y)]^((i−1)/2)]   [7′],
and
IA−(x,y) = Σ(i=H′…H″) [ci·mi−(x,y)/[m2−(x,y)]^((i−1)/2)]   [7″].
As already described, more in general, the use of provisional images comprising images referring to above-focus planes and to below-focus planes in order to calculate the final image, according to the invention, makes it possible to obtain an axial resolution higher than what is allowed by the prior art methods.
Also in this case, the light emitted from the spots of the illumination plane may comprise reflected and/or transmitted and/or fluorescent light beams coming from the sample at such spots.
The above-mentioned objects are also attained by a confocal microscopy apparatus comprising:
In a particular exemplary embodiment,
In particular, the computing means is configured for making a combination of provisional images of only two further illumination planes that are arranged at opposite sides of a predetermined illumination plane, in particular at substantially the same distance from the illumination plane, the combination being defined by the equation:
IB(x,y)=I(x,y)−k|I−(x,y)−I+(x,y)| [12]
wherein I(x,y) is an image calculated at illumination plane β0, while I−(x,y) and I+(x,y) are provisional images referring to the two further planes β−,β+ which are arranged at opposite sides of plane β0, and k is a correction coefficient preferably set between 0.5 and 1.
The invention will now be shown through the description of exemplary embodiments of the method and of the device according to the invention, exemplifying but not limitative, with reference to the attached drawings, in which like reference characters designate the same or similar parts throughout the figures:
FIGS. 3,4,5 show details of the microscopy system of
With reference to
Apparatus 100 comprises a means 10 for generating a plurality of light beams 19. In the depicted exemplary embodiment, the means 10 for generating light beams 19 comprises a light source 11 and a concentration optical system 12, associated with source 11, which is configured for conveying the light 11′ emitted by source 11 into a single light beam 13.
In this exemplary embodiment, the means 10 for generating light beams 19 also comprises a diaphragm 14 provided with holes 14′ that, in this case, are arranged to form an ordered matrix. Diaphragm 14 is adapted to change a light beam that hits one of its own faces into a plurality of parallel light beams coming out of the opposite face. Diaphragm 14 is arranged to receive on an own face, which cannot be seen in
In this exemplary embodiment, apparatus 100 also comprises a beam divider 16 configured for receiving light beams 19 and for diverting them towards a support 90 on which sample 99 is arranged.
Apparatus 100 preferably comprises a means 20 for concentrating light beams 19 in a plurality of spots 17 of an illumination plane β that corresponds, in the exemplary embodiment of
In this case, illumination plane β crosses a region to be observed of sample 99, in particular plane β is arranged at an optical section of interest of sample 99.
Apparatus 100 also comprises a light sensor means 40, in this case, a wide field sensor means, which is a feature of video-confocal microscopy. The light sensor means may comprise a photoelectric image detector 40, for instance a two-dimensional CCD detector. x,y indicate the coordinates of a plane defined by detector 40.
Apparatus 100 has a scan means for scanning the region to be observed of sample 99, or the optical section π of sample 99, as diagrammatically shown by the double arrows 30, 30′. In this exemplary embodiment, the scan means comprises a translation means 30,30′, not shown in detail, for causing a relative translation movement of diaphragm 14 with respect to the unit of source 10 and of collimator 12, for example according to the two alignment directions u′,v′ of holes 14′ of matrix diaphragm 14. For example, the translation means may comprise stepper motors.
Due to the diversion of beams 19 in beam divider 16, two translation directions u,v of illumination pattern 18 in illumination plane β correspond to the translation directions u′,v′ of diaphragm 14.
As an alternative to diaphragm 14, in an exemplary embodiment not shown, the means 10 for generating light beams 19 and the scan means 30,30′ may comprise a liquid crystal (LCD) light valve optoelectronic device, as well as other devices with no mechanical moving parts, such as light emitter arrays that can be programmed by means of suitable signals.
Still with reference to
The operation of such an arrangement, as well as of an arrangement comprising the above-mentioned alternative scan means, is well known to a skilled person, and its detailed description will be omitted.
Apparatus 100 also comprises a computing means 50, which includes a means 51 for forming a set of raw images 52, each of which is described by a function Iu,v(x,y), where the two subscripts u,v indicate the position of illumination pattern 18 with respect to reference plane β.
Subscripts u,v indicate the lateral position, i.e. a position that can be attained by a translation movement of illumination pattern 18 parallel to reference plane β0. Subscripts u,v take values that depend upon the scan features of scan means 30′; in particular, they take a plurality of values set between 0 and U,V, respectively, where 0 refers to a predetermined position of illumination pattern 18 in plane β, while U,V refer to positions in which each beam has covered the whole distance but one step between this position and the position of an adjacent node of illumination pattern 18 in plane β, according to the directions x,y, respectively. In particular, subscripts u,v take s−1 and t−1 values set between 0 and U and between 0 and V, respectively, where U and V are the pitches of illumination pattern 18 according to directions u,v, respectively, and s,t are the scan densities along directions u,v.
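Purely by way of illustration, and under the simplifying assumption that the pattern is stepped s times by U/s and t times by V/t, the set of raw images could be collected and stacked as follows; acquire_frame is a hypothetical function standing for the detector read-out at a given pattern position, not a component of apparatus 100:

    import numpy as np

    def acquire_raw_stack(acquire_frame, U, V, s, t):
        # U, V: pitches of the illumination pattern along directions u, v
        # s, t: scan densities along directions u, v
        frames = []
        for i in range(s):
            for j in range(t):
                u = i * U / s        # lateral offset of the pattern along direction u
                v = j * V / t        # lateral offset of the pattern along direction v
                frames.append(acquire_frame(u, v))   # raw image Iu,v(x,y)
        return np.stack(frames)      # stack used by the moment-based formulas above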
According to the invention, computing means 50 also comprises a combination means 54 for combining raw images 52, in order to form, i.e. to calculate, a final image 58. The algorithm used for forming final image 58 may be defined by the formula:
I(x,y) = mh(x,y)/[m2(x,y)]^((h−1)/2)   [2]
wherein:
As an alternative, the algorithm used by means 54 for forming final image 58 may be defined by the formula:
IA(x,y) = Σ(i=H′…H″) [ci·mi(x,y)/[m2(x,y)]^((i−1)/2)]   [7]
which, if only one coefficient ci is different from zero, may be one of formulas [3], [4], [5], [6] and similar, which correspond to particular values of i, i.e. to particular orders of the central moment, where the meaning of the symbols is clear from the above.
With reference to
Further illumination planes βj are at respective distances δj from reference plane β0. Distance δj is preferably of the same order of magnitude as the wavelength λ of the waves that form illumination beams 19. At least two of these illumination planes βj are located at opposite sides of reference plane β0, as indicated in
In analogy with apparatus 100, apparatus 200 also comprises a computing means 50 that includes a means 51 for forming a set of raw images 52j, referring to a particular illumination plane. In this case, each raw image is described by a function Iu,v,wj(x,y).
According to the second aspect of the invention, apparatus 200 differs from apparatus 100 since computing means 50 comprises, instead of means 54 for calculating the final image from raw images 52:
Subscripts j and wj indicate the axial distance of illumination pattern 18 with respect to plane β0, i.e. they indicate the position of planes βj that can be attained by a translation movement of illumination pattern 18 according to the transverse, i.e. axial, direction w with respect to reference plane β0, in particular according to a direction perpendicular to reference plane β0. Subscript wj takes values δ−2, δ−1, δ0, δ+1, δ+2, according to the convention used in
As described, in the exemplary embodiment as shown, reference plane β0 is arranged at the optical section of interest π of sample 99. As shown in
The absolute value of −δ1, +δ2, or briefly δ, is advantageously of the same order of magnitude as the wavelength of the light beams.
For example, in the case of
IB(x,y) = I(x,y) − k·|I−(x,y) − I+(x,y)|   [10]
The shape of provisional images I(x,y), I−(x,y), I+(x,y), i.e. the algorithm used by means 55 for constructing provisional images 56′,56″,56′″ may be, in this case, any algorithm known for creating images starting from raw preliminary images 52, for example one of the algorithms described in EP 0833181 and in U.S. Pat. No. 6,016,367.
As an alternative, the algorithm used by means 55 for constructing provisional images 56′,56″,56′″ may be defined by the formula:
I(x,y) = mh(x,y)/[m2(x,y)]^((h−1)/2)   [2]
and similar for I−(x,y), I+(x,y), wherein:
As an alternative, the algorithm used for constructing the provisional images may be defined by the formula:
IA(x,y) = Σ(i=H′…H″) [ci·mi(x,y)/[m2(x,y)]^((i−1)/2)]   [7]
which, if only one of the coefficients ci is different from zero, can correspond to one of formulas [3], [4], [5], [6] and the like, corresponding to a particular value of i, i.e. to a particular order of the central moment.
In other words, in some embodiments means 55 may be configured for executing the same algorithms as the means 54 of
As an alternative, the algorithm used for constructing the provisional images may be defined by the formula:
I(x,y) = maxu,v[Iu,v(x,y)] − Avgu,v[Iu,v(x,y)]   [5]
and similar for I−(x,y), I+(x,y), where the meaning of the symbols is clear from the above.
Even if the method according to the second aspect of the invention has been shown in detail only with reference to apparatus for video-confocal microscopy 200, it may be used in combination with a confocal microscopy apparatus.
IB(x,y)=I(x,y)−k|I−(x,y)−I+(x,y)|, [10]
where the meaning of the symbols is clear from the above.
Some examples of images obtained by the method according to the invention are described below. These images confirm the substantial improvements in the performances that are allowed by the invention, in terms of both lateral and axial spatial resolution.
In one example, images are obtained by means of central moments of the light intensity distribution, calculated for example by the formula that defines such values of central moments:
mh(x,y) = Avg{[Iu,v(x,y) − Avg(Iu,v(x,y))]^h},   [1]
wherein:
IA3(x,y) = m3(x,y)/m2(x,y)   [3].
By this technique, a spatial resolution better than 80 nm could be attained, in the case of both compact and scattered samples.
This corresponds to lateral super-resolution factors of about 3 and to axial super-resolution factors of about 7. It has been noticed that the performances achieved even in different applications are comparable or better than the performances declared by the manufacturers of new concept instruments, which are however expensive and not very versatile instruments.
In
I(x,y)=K[max(x,y)−min(x,y)−2Avg(x,y)];
In particular,
IA3(x,y) = m3(x,y)/m2(x,y)   [3].
These figures show, in comparison to
A further improvement, with respect to the image of
IA(x,y) = c5·m5(x,y)/[m2(x,y)]^2 + c7·m7(x,y)/[m2(x,y)]^3 + c9·m9(x,y)/[m2(x,y)]^4   [7′]
wherein c5 = 0.48, c7 = 0.36, c9 = 0.24, i.e. through a linear combination of the expressions of formulas [4], [5], [6]. Even in this case, improvements were achieved with respect to the images provided by the conventional technique. This is particularly apparent if the regions 26,27 of the image of
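Purely as an illustrative link with the hypothetical combined_image helper sketched after formula [7] above, these coefficients would be passed, for instance, as:

    img = combined_image(raw_stack, {5: 0.48, 7: 0.36, 9: 0.24})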
As described, the technique according to the invention produces some advantages, with respect to the prior art, also in the case of images obtained in the presence of random noise. This is shown in
I(x,y)=K[max(x,y)−min(x,y)−2Avg(x,y)]
in which the raw images have been obtained in the presence of random noise.
In particular,
Also in this case, a further improvement is obtained, with respect to the image of
FIGS. 14,15 and 16 reproduce images of the same optical section obtained through the formulas:
IA5(x,y) = m5(x,y)/[m2(x,y)]^2   [4]
IA7(x,y) = m7(x,y)/[m2(x,y)]^3   [5]
IA9(x,y) = m9(x,y)/[m2(x,y)]^4   [6],
respectively, and are shown here as a reference for the image of
Similar considerations apply for
With reference to the second aspect of the invention, in which the final image of an optical section is obtained as a combination 57 of provisional images 56 associated with respective illumination planes selected proximate to the optical section, a sample was arranged that contained fluorescent synthetic beads with a diameter of about 0.5 μm. With reference to the process shown in
In particular,
IA3(x,y) = m3(x,y)/m2(x,y)   [3],
where the symbols have the meanings explained above, even if it is possible to use video-confocal methods of a different type, for example the methods described in the present invention, methods known in the prior art or, more in general, methods adapted to create optical sections, for example confocal microscopy methods.
IB(x,y)=I(x,y)−k|I−(x,y)−I+(x,y)| [10]
wherein I−, I and I+ are functions that represent the provisional images of
From
The foregoing description of exemplary embodiments of the invention shows the invention from a conceptual point of view, so that others, by applying current knowledge, will be able to modify and/or adapt the specific exemplary embodiments to various applications without further research and without departing from the invention; accordingly, it is meant that such adaptations and modifications shall be considered equivalent to the specific embodiments. The means and the materials used to realize the different functions described herein may have a different nature without, for this reason, departing from the scope of the invention. It is understood that the expressions or the terminology used have a purely descriptive purpose and are therefore not limitative.