The present disclosure relates to the fields of digital microscopy and digital imaging.
Optical microscopes have proved to be an important instrument of scientific and technological research. Optical microscopes traditionally comprise an optical system that forms the image of an object, projected either onto the retina of the human eye or onto an image sensor. The maximum resolution of optical microscopes is limited by diffraction in the optical system and cannot exceed the so-called diffraction limit.
The diffraction limit on the resolution of optical systems is a fundamental physical principle of optics, which derives from the wave nature of light and results in an inevitable limitation of the resolution of optical systems, approximately equal to the wavelength times the relative aperture of the optical system. A more detailed explanation and accurate estimates can be found in [2]. However, these limitations are relevant only for imaging systems that create an image of an object. Much higher optical resolution has been demonstrated by optical near-field microscopes and by the contact lithography techniques of the semiconductor manufacturing industry.
Optical scanning near-field microscopes [1] are based on an optical fiber waveguide that sequentially scans the surface of the investigated object and creates the image of the scanned object line by line from the scanning results. The size of the scanning tip of the optical fiber waveguide, and its distance from the object, can be less than the light wavelength, providing sub-wavelength resolution of the image.
However, the shortcomings of the scanning near-field microscope are its slow speed and low light sensitivity, due to the sequential image acquisition of one pixel at a time through the tiny input opening of the waveguide tip. Moreover, optical near-field microscopes are relatively bulky, expensive and delicate devices, which complicates or prohibits their use in many applications. Thus, a better solution that overcomes the above limitations of near-field microscopes is much needed.
It is an object of the present invention to provide several solutions related to sub-wavelength imaging and microscopy. For this purpose we disclose a Sub-Wavelength imaging Array (SWA), in which the size of the pixels and the separation between neighboring pixels are less than the wavelength of the operating bandwidth. The names image sensor, sub-wavelength image sensor, sub-wavelength pixel array, SWA, pixel array and similar may be used interchangeably to refer to the SWA in this disclosure.
The conventional art teaches that such a sub-wavelength array is impossible to manufacture and use, for several reasons:
(1) A sub-wavelength array has individual pixels smaller than the wavelength, which means smaller than about 0.5 micrometer for visible-band imaging, and it could not be successfully manufactured in the past due to the tiny size of the pixels and their internal circuitry. However, the improved resolution of semiconductor manufacturing now allows this limitation to be overcome, and further shrinking of the pixel size will be possible in the future.
(2) The diffraction limit on the resolution of optical systems is a fundamental physical principle of optics, which derives from the wave nature of light and results in an inevitable limitation of the resolution: Δx = 2.44λF, where λ is the wavelength and F is the relative aperture, defined as the focal length divided by the lens aperture. A more detailed explanation and accurate estimates can be found in [2]. Since for the visible bandwidth 0.4 μm ≤ λ ≤ 0.7 μm (average λ ≈ 0.55 μm), and for most optical systems F ≥ 1.4, we obtain a resolution limit of about Δx ≈ 1.9 μm.
The diffraction limit on the resolution derives from fundamental physics and cannot be directly overcome. To circumvent this limitation, we place the imaged object immediately adjacent to the sub-wavelength array and eliminate the optical system in between. This results in a much smaller limit on the resolution, because the absence of an imaging system is equivalent to a tiny relative aperture; in that case the resolution limit can be calculated by physical simulation of the electromagnetic field pattern. Examples of similar configurations are near-field systems such as the scanning near-field microscope, where an optical fiber with a sub-wavelength opening scans over the object, and sub-wavelength optical lithography. These systems demonstrate sub-wavelength resolution (Δx < λ), which for the visible band (0.4 μm ≤ λ ≤ 0.7 μm) translates into Δx < 0.5 μm. The exact resolution can be calculated by accurate physical simulation of the near-field optics, as described in [2].
(3) Closely spaced pixels in dense image sensors are known to exhibit significant crosstalk, where light that should contribute to a particular pixel eventually contributes to a neighboring pixel. This crosstalk can be caused by optical scattering of photons before their conversion into electric carriers (optical crosstalk), or by scattering of electric carriers within the semiconductor (electrical crosstalk).
Here we disclose novel architectural solutions for the pixels, the boundaries between the pixels and the image sensor array, as well as novel test structures and image processing methods that eliminate or significantly decrease the neighbor-pixel crosstalk.
The disclosed architectural solutions for decreasing the crosstalk include: optical isolation between neighboring pixels; thinning of the active light-sensing layer; additional p-n junction areas below the pixel and between neighboring pixels, to collect and drain away stray carriers; a semiconductor etch at the boundary between neighboring pixels, for electrical and optical isolation of the adjacent pixels; novel calibration structures for measuring the pixel crosstalk, used stand-alone or integrated into the image sensor; and signal processing that eliminates or decreases the crosstalk via deconvolution of the crosstalk from the acquired image.
Multiple different uses and applications of the disclosed sub-wavelength array are possible, such as:
(1) Microfluidic systems and lab-on-a-chip systems, which are micro-systems for medical, chemical and biological analysis and synthesis, where the SWA can be integrated into the system and used as an optical microscopy imaging and analysis tool. For liquid flow control in microfluidic applications of the SWA, we disclose an electrostatic phased array integrated with the SWA.
(2) Diffractive imaging, where not the image of the object but rather a diffraction pattern created by the object is imaged. The diffraction pattern can have features of sub-wavelength size, and therefore diffractive imaging can require sub-wavelength resolution. Diffractive imaging can be performed under monochromatic or narrow-bandwidth illumination, or can be facilitated by a narrow band-pass filter deposited over the sub-wavelength image sensor.
(3) Use of the SWA in various systems: sub-wavelength microscopes, contact microscopes, scanners, micro-electro-mechanical systems (MEMS), high-resolution digital cameras, and other imaging and electro-optical systems; at least in all the fields, systems, devices and applications where prior-art image sensors are currently used.
The presently disclosed subject matter will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings, in which corresponding or like numerals or characters indicate corresponding or like components. Unless indicated otherwise, the drawings provide exemplary embodiments or aspects of the disclosure and do not limit the scope of the disclosure. In the drawings:
The disclosed subject matter is described below with reference to the enclosed drawings.
One technical problem dealt with by the disclosed subject matter is the imaging of object details finer than the wavelength of the operating bandwidth. There were at least two limitations prohibiting such imaging in the prior art.
The first limitation is the diffraction limit of optical systems. In conventional microscopes and imaging systems, an optical system forms an image of the object on the image sensor. The resolution Δx of this optical system cannot be finer than the diffraction limit: Δx ≥ 2.44λF. The relative aperture F of optical systems almost always exceeds 1.4 (F ≥ 1.4), the wavelength λ of the visible band is 0.4 μm ≤ λ ≤ 0.7 μm (μm denotes micrometer), with an average wavelength λ ≈ 0.55 μm, and therefore the diffraction limit for optical systems in the visible bandwidth is about Δx ≈ 2.44 · 0.55 μm · 1.4 ≈ 1.9 μm.
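By way of illustration only, the diffraction limit above can be evaluated numerically; the following is a minimal Python sketch (the function name diffraction_limit_um is ours, for illustration, and is not part of the disclosure):

    # Diffraction-limited resolution: dx = 2.44 * lambda * F
    def diffraction_limit_um(wavelength_um, f_number):
        """Return the diffraction-limited resolution in micrometers."""
        return 2.44 * wavelength_um * f_number

    # Average visible wavelength ~0.55 um at a typical relative aperture F = 1.4:
    print(diffraction_limit_um(0.55, 1.4))  # ~1.88 um, i.e. about 1.9 um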
The diffraction limit on the resolution is relaxed in near-field systems, where the object is closely adjacent to the imaging plane and no optical system is placed between the object and the imaging element. Examples of near-field optical systems with resolution finer than the light wavelength are scanning near-field microscopes, where an optical fiber with an opening smaller than the operating wavelength is scanned along the object, and contact mask lithography, where resolution better than the wavelength is obtained.
However, we do not limit the applications of the disclosed sub-wavelength image sensor to the near-field domain. In some applications it can be advantageous to have sub-wavelength sensor resolution even if the optical system has lower resolution.
The second limitation of the prior art is the significant crosstalk between neighboring pixels of existing image sensors, which increases as the pixel size and the distance between neighboring pixels decrease.
Here we disclose the pixel and array architecture, calibration structures and image processing methods that eliminate or significantly decrease the pixel crosstalk.
The disclosed architectural solutions for decreasing the crosstalk include: optical isolation between neighboring pixels; thinning of the active light-sensing layer; additional p-n junction areas below the pixel and along the boundary between neighboring pixels, to collect and drain away stray carriers; a semiconductor etch at the boundary between neighboring pixels, for electrical and optical isolation of the adjacent pixels; calibration structures for measuring the crosstalk; and signal processing that eliminates or decreases the crosstalk by measuring the point spread function of the crosstalk on the disclosed test structures and then deconvolving the crosstalk from the acquired image.
Microfluidic systems are micro-systems for medical, chemical and biological analysis and synthesis. In medical and biological microfluidic systems, liquid sample analysis may include optical detection, identification and counting of particles in the liquid sample, such as blood or another medical or biological sample. An SWA integrated as part of a microfluidic system may acquire images or a video stream of the sample liquid flowing over it, illuminated by ambient illumination or by a dedicated light source. Automatic processing of the acquired images or video stream may allow specific particles, cells, bacteria or viruses to be detected, identified and counted, and the obtained results to be used as the system output or as intermediate data for further chemical or biological analysis, medical diagnostics or system control.
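By way of illustration, such particle counting can be sketched with standard image processing primitives. The following minimal Python example (assuming NumPy and SciPy are available; the function name and the fixed threshold are illustrative assumptions, not part of the disclosure) counts connected bright regions in a single acquired frame:

    import numpy as np
    from scipy import ndimage

    def count_particles(frame, threshold):
        """Count connected bright regions (candidate particles/cells)
        in a single 2-D intensity frame acquired by the SWA."""
        binary = frame > threshold                # segment candidate particles
        _, num_particles = ndimage.label(binary)  # connected-component labeling
        return num_particles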
One of the important tasks that microfluidic systems deal with is the pumping and transportation of the fluid sample through the system. One known method is electrostatic pumping, where an electrostatic field or a change of surface properties at the liquid/surface boundary creates a force that moves the liquid. Here we disclose an architecture integrating the imaging array with an electrostatic phased array for liquid pumping.
Diffractive imaging is a field where high-resolution imaging by the SWA will be advantageous. In diffractive imaging, the diffraction pattern created by the object is imaged. It can be further used for 3D shape reconstruction of the object, creation of a holographic image of the object, and in various other applications. The diffraction pattern can have features of sub-wavelength size, and therefore diffractive imaging can benefit from the sub-wavelength resolution of the SWA. A diffractive imaging system may comprise an SWA, with an optional illumination source and an optional processing unit. The illumination source may be monochromatic. Optionally, the active bandwidth of the SWA may be limited by covering it with a band-pass filter, or by other means.
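By way of illustration, under monochromatic illumination the far-field diffraction pattern of a thin object is, in the standard Fraunhofer approximation, proportional to the squared magnitude of the Fourier transform of the object's transmission function. A minimal Python sketch of this relation (assuming NumPy; a textbook model, not a description of any particular embodiment) is:

    import numpy as np

    def far_field_intensity(aperture):
        """Fraunhofer approximation: far-field diffraction intensity of a
        complex 2-D aperture/object transmission function."""
        field = np.fft.fftshift(np.fft.fft2(aperture))  # centered far-field amplitude
        return np.abs(field) ** 2                       # measured intensity pattern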
The invention is not limited to any specific pixel architecture, operation mode or layout of the pixels within the array. Multiple variants of the pixel architecture may be implemented (linear pixel, logarithmic pixel, pixel with electronic shutter and other types); various operation modes, such as single frame, multiple frames, video and high dynamic range imaging, may be applied; and multiple layouts of the pixels within the array (rectangular grid, honeycomb, pixels of varying sizes, radial array, and other known or future types) may be used.
Various mechanisms of conversion of light into an electrical signal (closed photodiode, photo-transistor and other types), and various mechanisms of pixel selection, reset and read-out, are possible. Various types of semiconductors may be used, including indirect band-gap semiconductors, such as semiconductors comprising the Si, Ge and C elements and their dopants, and direct band-gap semiconductors, such as semiconductors comprising the Ga, As, In, P, Al and N elements.
The disclosed SWA may be used in any band or sub-band of the electromagnetic spectrum within the wavelength range of 0.1 μm-2.0 μm; for example, in the visible bandwidth of 0.4 μm-0.7 μm, the infrared bandwidth of 0.7 μm-1.1 μm, the ultra-violet bandwidth of 0.2 μm-0.4 μm, or other bandwidths.
The term sub-wavelength is used in the following sense: if the array is sensitive, for example, in the visible band of 0.4 μm-0.7 μm (μm denotes micrometer), then at least one of the dimensions of the pixel, or the separation between the pixels in at least one direction, is less than 0.7 μm.
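For illustration, this definition can be expressed as a simple check (a sketch; the function and argument names are ours, and the 0.7 μm default reflects the visible-band example above):

    def is_sub_wavelength(pitch_x_um, pitch_y_um, max_wavelength_um=0.7):
        """True if the pixel size or pitch is below the longest operating
        wavelength in at least one direction -- the sense of 'sub-wavelength'
        used in this disclosure (here for the visible band, up to 0.7 um)."""
        return min(pitch_x_um, pitch_y_um) < max_wavelength_um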
If the additional structures for crosstalk reduction that are illustrated on
The calibration SWA is covered by an optically shielding layer 305 with an opening window 310. In the ideal case, when pixel crosstalk is absent, all the light signal would be sensed by pixel 320, and zero signal would be sensed by pixels 330, 340 and 350. However, due to the neighbor-pixel crosstalk, some of the light or some of the generated carriers leak from pixel 320 towards its neighboring pixels. The crosstalk calibration structure 300 allows this crosstalk to be measured and later taken into account in the crosstalk deconvolution of images acquired by the SWA.
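By way of illustration, the measurement could proceed as in the following Python sketch (assuming NumPy; the coordinates (cy, cx) of the exposed pixel and the neighborhood radius are illustrative parameters of ours): the signal of each neighbor, normalized by the signal of the exposed pixel, directly estimates the crosstalk fraction toward that neighbor.

    import numpy as np

    def crosstalk_fractions(calib_frame, cy, cx, radius=2):
        """Estimate neighbor-pixel crosstalk from one frame of the
        calibration structure: only the pixel at (cy, cx) is exposed
        through the window, so any signal on its neighbors is attributed
        to crosstalk. Returns a (2*radius+1) x (2*radius+1) map
        normalized by the exposed pixel's signal."""
        patch = calib_frame[cy - radius:cy + radius + 1,
                            cx - radius:cx + radius + 1].astype(float)
        return patch / patch[radius, radius]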
The crosstalk calibration structure may have inverse shielding, where the central pixel 320 is shielded by the light-blocking layer and all the other pixels are open. In that case the signal will be lowest on pixel 320, and the signal on its neighboring pixels will be somewhat decreased as well. This is because the signal of each pixel is composed of the light that falls directly onto it and, partially, of the crosstalk from its neighboring pixels; since the crosstalk contribution from the shielded pixel is missing, the value measured on its neighbors is decreased.
The inverse calibration structure has advantages, since it does not require a dedicated stand-alone calibration structure: a single pixel or a few scattered pixels of the SWA may be shielded, while the majority of pixels remain unshielded and fully operational. The single or few shielded pixels allow the crosstalk calibration, while the rest of the pixels (which constitute the majority) allow normal operation and imaging by the SWA. The image values under the shielded pixels can be reconstructed by interpolation of the values of the neighboring pixels, or by other image processing techniques known in the art.
For example, for an SWA matrix of a million pixels, of size 1000×1000 pixels, only 100 pixels, scattered as a grid at the positions X=50, 150, 250, . . . 950 and Y=50, 150, 250, . . . 950, can be optically shielded, while all the other pixels are kept clear.
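For illustration, the positions of such a calibration grid can be generated as follows (a sketch; the (row, column) ordering is our convention):

    # 10 x 10 grid of shielded calibration pixels in a 1000 x 1000 SWA
    shielded = [(y, x) for y in range(50, 1000, 100)
                       for x in range(50, 1000, 100)]
    assert len(shielded) == 100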
The information from the SWA with shielded calibration pixels may be used in multiple ways. One way is to acquire a calibration image when there is no object and the SWA is illuminated uniformly. From the signal decrease of the pixels neighboring each shielded pixel, the crosstalk between the pixels is obtained. All the neighbors together yield the signal point-spread function, or blurring kernel. Multiple shielded pixels allow more accurate results to be obtained via averaging, which decreases the noise and the effects of illumination and manufacturing non-uniformity.
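By way of illustration, assuming inverse shielding and a kernel normalized to unit sum, the signal deficit (uniform level minus measured signal) around each shielded pixel is proportional to the blur kernel, so the kernel can be estimated as in the following Python sketch (NumPy assumed; the names are ours, and shielded is the position list generated above):

    import numpy as np

    def estimate_kernel(calib, shielded, radius=2):
        """Estimate the crosstalk point-spread function from a uniformly
        illuminated frame with scattered shielded pixels. Under uniform
        illumination I0, the deficit (I0 - signal) around a shielded pixel
        is proportional to the blur kernel; averaging over all shielded
        pixels suppresses noise and non-uniformity."""
        size = 2 * radius + 1
        acc = np.zeros((size, size))
        i0 = np.median(calib)  # estimate of the unshielded uniform level
        for (y, x) in shielded:
            patch = calib[y - radius:y + radius + 1,
                          x - radius:x + radius + 1].astype(float)
            acc += i0 - patch
        return acc / acc.sum()  # normalize the kernel to unit sum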
An alternative way is to calculate the blurring kernel from the operating image itself. Despite the non-uniform and not-known-in-advance image content present in the captured frame, averaging over the multiple shielded pixels and their vicinities allows the image content to be separated from the blurring kernel. Further averaging is possible owing to the fact that the blurring kernel should be symmetric under mirroring and rotation operations.
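The symmetry-based averaging can be illustrated by averaging a kernel estimate over its mirrored and rotated copies (a sketch, assuming NumPy; the function name is ours):

    import numpy as np

    def symmetrize(kernel):
        """Average a kernel estimate with its rotated and mirrored copies,
        exploiting the expected mirror/rotation symmetry of the crosstalk."""
        variants = [np.rot90(kernel, k) for k in range(4)]
        variants += [np.rot90(kernel[:, ::-1], k) for k in range(4)]
        return np.mean(variants, axis=0)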
Image acquisition and deblurring on the basis of the measured blur kernel are performed in step 530. Image deblurring is the inverse problem of obtaining the unblurred image I from the blurred image Y. Multiple ways of image deblurring facilitated by knowledge of the blur kernel K are known in the art, including application of an inverse or pseudo-inverse kernel, I = inv(K) * Y, and solution of the least-squares problem, I = argmin ‖Y − K*I‖², among others. It is known in the art of image and signal processing that knowledge of the blur kernel generally eases the deblurring and improves its accuracy.
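For illustration, one standard realization of such kernel-aware deblurring is regularized inverse filtering in the Fourier domain (a Wiener-like pseudo-inverse; a sketch assuming NumPy, presented as one possible method rather than the only one contemplated):

    import numpy as np

    def deblur(y, kernel, eps=1e-2):
        """Regularized inverse filtering: I ~ conj(K)*Y / (|K|^2 + eps)
        in the Fourier domain, where K is the measured blur kernel and
        eps suppresses noise amplification at weak frequencies."""
        k_pad = np.zeros_like(y, dtype=float)
        kh, kw = kernel.shape
        k_pad[:kh, :kw] = kernel
        # center the kernel so the deconvolution does not shift the image
        k_pad = np.roll(k_pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
        K = np.fft.fft2(k_pad)
        Y = np.fft.fft2(y.astype(float))
        I = np.conj(K) * Y / (np.abs(K) ** 2 + eps)
        return np.real(np.fft.ifft2(I))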
For the calibration method of
The execution order of the flow-chart in this case would be: steps 510 and 530, followed by step 520, followed by step 540.
610 is the n-type region of the photodiode, the core photosensitive element of the pixel; 611 is the same region of its neighboring pixel. 620 is the p-type region, which forms a photodiode together with regions 610 and 611 (e.g. if regions 610 are of n-type, then 620 is of p-type, and vice versa).
615 is the closed p-n junction of the photodiode, formed by applying a reverse bias to the pixel photodiode during the pixel exposure time. It corresponds to the photodiode 220 of
612 is the contact of the pixel photodiode.
The first novel disclosed structure reducing the pixel crosstalk is formed by an additional n-type doping region 650 below the pixel, which is reverse-biased to form a closed p-n junction 625. The junction thickness may be increased by low-concentration doping, forming a p-i-n structure. This reverse-biased junction creates an electric field that absorbs the scattered carriers escaping the working photodiode junction 615 before they reach the junctions of adjacent pixels, thereby reducing the neighbor-pixel crosstalk.
The second novel disclosed structure, reducing the optical pixel crosstalk, is the optical isolation between neighboring pixels by a wall of metal layers and via-connectors between them, shown as 640. It may extend through part of, or the whole height of, the backend dielectric layers 660.
Another method to reduce the neighbor-pixel crosstalk is to decrease the height of the backend layer 660.
The third novel disclosed structure, reducing the electrical pixel crosstalk, is an additional closed diode structure along the boundary between the pixels, shown as doped region 630 and reverse-biased closed diode 635.