The present invention relates to a solid-state image sensor that uses an infrared ray.
Recently, the performance and functionality of digital cameras and digital movie cameras that use a solid-state image sensor such as a CCD or a CMOS sensor (which will be sometimes referred to herein as an “image sensor”) have been enhanced to an astonishing degree. In particular, the size of a pixel structure for use in a solid-state image sensor has been further reduced these days thanks to rapid development of semiconductor device processing technologies, thus getting an even greater number of pixels and drivers integrated together in a solid-state image sensor. And the performance of image sensors has been further enhanced as well. Meanwhile, cameras that use a backside illumination type image sensor, which receives incoming light on its reverse side rather than on its front side where the wiring layer of the solid-state image sensor is located, have been developed just recently and their properties have attracted a lot of attention these days. An ordinary image sensor receives incoming light on its front side with the wiring layer, and therefore, no small part of the incoming light would be lost due to the presence of a complicated structure on the front side. In a backside illumination type image sensor, on the other hand, nothing in its photodetector section will cut off the incoming light, and therefore, almost no part of the incoming light will be lost by the device structure.
A camera that uses such a backside illumination image sensor would further expand the range of image capturing environments. Also, recently, there has been a growing demand for performing image capturing sessions not just in the daytime but also at nighttime using an infrared ray. To meet such a demand, cameras that are dedicated to performing a shooting session at nighttime using a backside illumination image sensor, or cameras that can be used both in the daytime and at nighttime with such a sensor, could be put on the market sometime soon.
While backside illumination image sensors are being developed, a lot of people have reported the results of their research on the device structures and manufacturing processes of such sensors. Meanwhile, how such an image sensor should use the incoming light has also been researched rather extensively. For example, Patent Documents Nos. 1 and 2 disclose a technique for arranging an element with reflecting and condensing functions (such as a reflector or a concave mirror) on the principal surface of an image sensor in order to increase the sensitivity. According to such a technique, the light that has come through the back surface of the image sensor and has been transmitted through a photosensitive cell is made to be incident on the same photosensitive cell again, thereby attempting to increase the optical efficiency. On the other hand, Patent Document No. 3 discloses a technique for achieving the same object by getting the transmitted light reflected by a multilayer film. In any case, most of the light that has been transmitted through a photosensitive cell of an image sensor is an infrared ray due to the light absorbing property of silicon. That is why if the light that has once been transmitted through a photosensitive cell is made to be incident on the same photosensitive cell again, then the sensitivity to infrared rays, among other things, would increase.
As for a normal, non-backside-illumination image sensor (which will be referred to herein as a “frontside illumination image sensor”), a technique for obtaining a color signal and an infrared signal at the same time is disclosed in Patent Document No. 4, for example. According to Patent Document No. 4, color separation filters for transmitting light rays representing the respective colors of RGB and an infrared pass filter that transmits only an infrared ray (IR) are arranged in a matrix consisting of two columns and two rows as shown in
As for how to use an infrared ray in an image sensor, there is a technique for making a light ray that has once been transmitted through a photosensitive cell incident on the same photosensitive cell again by using a reflective mirror in order to increase the optical efficiency. To produce an infrared image by such a technique, however, an infrared pass filter that transmits an infrared ray and absorbs a visible radiation should be used. Also available is a technique for performing an image capturing session in a broad wavelength range, which covers the visible radiation through infrared ray parts of the spectrum, by using four different types of filters that transmit the R, G and B rays and the infrared ray (IR), respectively. To obtain a color image and an infrared image at the same time by such a technique, however, an infrared pass filter and an infrared cut filter should both be used.
It is therefore an object of the present invention to provide a technique for getting an infrared image without using any infrared pass filter or infrared cut filter in a backside or frontside illumination image sensor. Another object of the present invention is to provide a technique for obtaining a color image and an infrared image at the same time even without using such filters.
A solid-state image sensor according to the present invention includes: a semiconductor layer, which has a first surface and a second surface that is opposed to the first surface and that receives incoming light; a number of unit blocks, which are arranged two-dimensionally in between the first and second surfaces of the semiconductor layer and each of which includes first and second photosensitive cells; and a reflecting portion, which is arranged on the same side as the first surface of the semiconductor layer to reflect an infrared ray that has been transmitted through the first photosensitive cell in each unit block and to make that infrared ray incident on either the first or second photosensitive cell. One of the first and second photosensitive cells, which has received the infrared ray that has been reflected from the reflecting portion, outputs a photoelectrically converted signal, in which a component representing the intensity of the infrared ray that has been incident there after having been reflected by the reflecting portion has been added to the photoelectrically converted signal to be output by the other photosensitive cell.
In one preferred embodiment, the reflecting portion includes a number of infrared ray reflecting mirrors, each of which is arranged to face the first photosensitive cell in an associated one of the unit blocks.
In another preferred embodiment, each unit block further includes third and fourth photosensitive cells. The image sensor further includes first, second, third and fourth color separation filters, which are arranged on the same side as the second surface to face the first, second, third and fourth photosensitive cells, respectively. The first and second color separation filters transmit a visible light ray falling within a first wavelength range and the infrared ray in the incoming light received. The third color separation filter transmits a visible light ray falling within a second wavelength range and the infrared ray in the incoming light received. And the fourth color separation filter transmits a visible light ray falling within a third wavelength range and the infrared ray in the incoming light received.
In a specific preferred embodiment, the visible light rays falling within the first, second and third wavelength ranges are green, red and blue rays, respectively.
In still another preferred embodiment, each said unit block further includes third and fourth photosensitive cells. The solid-state image sensor further includes first and second color separation filters, which are arranged on the same side as the second surface to face the third and fourth photosensitive cells, respectively. The first color separation filter transmits a visible light ray falling within a first wavelength range and the infrared ray in the incoming light received, and the second color separation filter transmits a visible light ray falling within a second wavelength range and the infrared ray in the incoming light received.
In a specific preferred embodiment, the visible light rays falling within the first and second wavelength ranges are cyan and yellow rays, respectively.
In yet another preferred embodiment, the first, second, third and fourth photosensitive cells transmit a part of the incident infrared ray at a first ratio.
In yet another preferred embodiment, the solid-state image sensor further includes a light absorbing layer, which is arranged on the same side as the first surface of the semiconductor layer so as to absorb the infrared ray that has been transmitted through the second photosensitive cell in each unit block.
In yet another preferred embodiment, the solid-state image sensor further includes an interconnect layer, which is arranged on the same side as the first surface of the semiconductor layer.
An image capture device according to the present invention includes a solid-state image sensor and a signal processing section for processing an electrical signal supplied from the solid-state image sensor. The solid-state image sensor includes: a semiconductor layer, which has a first surface and a second surface that is opposed to the first surface and that receives incoming light; a number of unit blocks, which are arranged two-dimensionally in between the first and second surfaces of the semiconductor layer and each of which includes first and second photosensitive cells; and a reflecting portion, which is arranged on the same side as the first surface of the semiconductor layer to reflect an infrared ray that has been transmitted through the first photosensitive cell in each unit block and to make that infrared ray incident on either the first or second photosensitive cell. One of the first and second photosensitive cells, which has received the infrared ray that has been reflected from the reflecting portion, outputs a photoelectrically converted signal, in which a component representing the intensity of the infrared ray that has been incident there after having been reflected by the reflecting portion has been added to the photoelectrically converted signal to be output by the other photosensitive cell. The signal processing section calculates the intensity of the infrared ray received at each photosensitive cell by performing processing including calculating the difference between photoelectrically converted signals supplied from the first and second photosensitive cells.
A solid-state image sensor according to the present invention includes a reflecting portion that reflects an infrared ray, which has been transmitted through one of two photosensitive cells, and that makes it incident on either photosensitive cell again. That is why the infrared rays to be converted photoelectrically by the two photosensitive cells should produce signals with mutually different intensities. And by calculating the difference between those two photoelectrically converted signals, the infrared ray components that have been received at the respective photosensitive cells can be obtained. Also, if color filters are used, RGB color signals can also be calculated by subtracting signals representing the infrared ray components from signals representing the intensities of the incoming light received at the photosensitive cells. As a result, an infrared image and a color image can be obtained even without using any infrared pass filter or cut filter.
First of all, the basic principle of the present invention will be described before specific preferred embodiments thereof are described in detail.
A reflecting portion 3 is arranged over the first surface 30a so as to face the photosensitive cell array. The reflecting portion 3 is arranged to reflect an infrared ray that has been transmitted through the photosensitive cell 2a and make it incident on either the photosensitive cell 2a or the photosensitive cell 2b. In the example illustrated in
An interconnect layer 5 is arranged on the first surface 30a of the semiconductor layer 30. Also, as viewed from the photosensitive cell array, a transparent layer 8 and a substrate 6 to support the semiconductor layer 30 are stacked in this order over the first surface.
When light including a visible radiation and an infrared ray is incident on the photosensitive cells 2a and 2b, photoelectric conversion is produced by each of those photosensitive cells 2a and 2b, which outputs a photoelectrically converted signal representing the intensity of the light that has been converted photoelectrically. As measured along the optical axis of the incoming light, each photosensitive cell has a predetermined thickness. Each of those photosensitive cells is arranged so as to convert photoelectrically almost all of the visible radiation and a part of the infrared ray that are included in the incoming light and to just transmit the rest of the infrared ray.
The infrared ray that has been transmitted through the photosensitive cell 2a passes through the transparent layer 8, gets reflected by the infrared ray reflecting mirror 3a, and then enters the photosensitive cell 2a again. As a result, that reflected infrared ray produces another photoelectric conversion at the photosensitive cell 2a. Meanwhile, the infrared ray that has been transmitted through the photosensitive cell 2b is supposed to pass through the transparent layer 8, enter the substrate 6 of the image sensor, and get absorbed there.
Suppose the intensities of the visible radiation and the infrared ray that have been incident on one unit block 2 are identified by W and IR0, respectively. According to the arrangement described above, first of all, the visible radiation with the intensity W and the infrared ray with an intensity ΔIR (<IR0) are converted photoelectrically by the photosensitive cells 2a and 2b. On top of that, the part of the infrared ray that has been reflected by the reflecting mirror 3a and incident on the photosensitive cell 2a again, which has an intensity ΔIR′ (<IR0), is further photoelectrically converted by the photosensitive cell 2a. As a result, the photosensitive cell 2a outputs a photoelectrically converted signal S2a, of which the overall intensity corresponds to W+ΔIR+ΔIR′. On the other hand, the photosensitive cell 2b outputs a photoelectrically converted signal S2b, of which the overall intensity corresponds to W+ΔIR. That is to say, the photosensitive cell 2a outputs a photoelectrically converted signal S2a, which is obtained by adding a component representing the intensity of the extra infrared ray that has been reflected by the reflecting portion 3 and then incident on the photosensitive cell 2a again to the photoelectrically converted signal S2b to be output by the photosensitive cell 2b.
Hereinafter, these intensities ΔIR and ΔIR′ will be described in further detail with reference to
IR = IR0·e^(−ax)  (1)
In this case, if the thickness of each of the photosensitive cells in the solid-state image sensor shown in the figure is identified by d, then ΔIR and ΔIR′ are represented by the following Equations (2) and (3), respectively:
ΔIR = IR0·(1 − e^(−ad))  (2)
ΔIR′ = IR0·e^(−ad)·(1 − e^(−ad))  (3)
Thus, ΔIR and ΔIR′ have mutually different values; as can be seen from Equations (2) and (3), ΔIR′ is smaller than ΔIR.
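To make the relationship between these two absorbed intensities concrete, the following is a minimal numerical sketch of Equations (1) through (3) in Python. The absorption coefficient a and the cell thickness d used here are purely illustrative assumptions, not values taken from this disclosure.

```python
import math

def delta_ir(ir0, a, d):
    # Equation (2): infrared intensity absorbed on the first pass
    # through a photosensitive cell of thickness d.
    return ir0 * (1.0 - math.exp(-a * d))

def delta_ir_prime(ir0, a, d):
    # Equation (3): infrared intensity absorbed on the second pass,
    # after the transmitted ray is reflected back into the cell.
    return ir0 * math.exp(-a * d) * (1.0 - math.exp(-a * d))

# Illustrative numbers only: unit incident intensity and a*d = 0.2.
ir0 = 1.0
a, d = 100.0, 0.002
first = delta_ir(ir0, a, d)         # ~0.181
second = delta_ir_prime(ir0, a, d)  # ~0.148, always smaller than the first pass
print(first, second)
```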
In this example, the infrared ray reflecting mirror 3a is arranged so as to reflect the infrared ray that has been transmitted through the photosensitive cell 2a back and make it enter the photosensitive cell 2a again. However, the present invention is in no way limited to that specific preferred embodiment. For example, the infrared ray reflecting mirror 3a may also be tilted so as to reflect the infrared ray that has been transmitted through the photosensitive cell 2a and make it enter the photosensitive cell 2b.
Supposing the approximation ΔIR ≈ ΔIR′ is satisfied in the arrangement shown in the figure, the photoelectrically converted signals S2a and S2b output by the photosensitive cells 2a and 2b are represented by the following Equations (4) and (5), respectively:
S2a = Ws + 2ΔIRs  (4)
S2b = Ws + ΔIRs  (5)
In Equations (4) and (5), Ws and ΔIRs denote photoelectrically converted signals corresponding to the intensities W and ΔIR, respectively, and 2ΔIRs indicates a signal with twice the magnitude of ΔIRs. By subtracting S2b from S2a, ΔIRs can be obtained. ΔIRs should be proportional to the intensity IR0 of the incoming infrared ray. That is why by calculating ΔIRs in each unit block, the distribution of ΔIRs over the respective unit blocks, i.e., an infrared image, can be obtained. It should be noted that even if ΔIR ≈ ΔIR′ is not satisfied, ΔIRs can still be obtained based on Equations (2) and (3), and therefore, an infrared image can also be obtained in a similar manner. Based on the principle described above, the solid-state image sensor of the present invention can obtain an infrared image even without using any infrared pass filter.
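As a rough illustration of this subtraction, the sketch below computes the ΔIRs distribution from per-unit-block signals S2a and S2b under the approximation ΔIR ≈ ΔIR′. The array values are hypothetical, and real processing would of course also have to handle noise and signal clipping.

```python
import numpy as np

def infrared_image(s2a, s2b):
    # Under Equations (4) and (5), S2a - S2b = delta_IRs for each unit block,
    # so the difference image is the infrared image.
    return np.asarray(s2a, dtype=float) - np.asarray(s2b, dtype=float)

# Hypothetical 2x2 array of unit-block signals.
s2a = np.array([[1.20, 1.15], [1.30, 1.05]])
s2b = np.array([[1.00, 0.98], [1.08, 0.90]])
ir = infrared_image(s2a, s2b)   # distribution of delta_IRs, i.e. an infrared image
```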
According to the present invention, the solid-state image sensor shown in
According to the present invention, the solid-state image sensor does not have to be the backside illumination type as shown in
According to the present invention, to obtain a color image as well as an infrared image, each unit block 2 should include not just the photosensitive cells 2a and 2b but other photosensitive cells as well.
Also, to obtain a color image, color separation filters (which are often called simply “color filters”) need to be arranged to face at least some of the photosensitive cells in each unit block 2 and to receive the incoming light. By arranging an appropriate combination of color filters, each of which is designed to pass a visible radiation falling within its associated wavelength range and an infrared ray, an infrared image and a color image can be obtained at the same time.
Suppose, in the pixel arrangement shown in the figure, a color filter that transmits the red (R) ray and the infrared ray is arranged to face the photosensitive cell 2c, and a color filter that transmits the blue (B) ray and the infrared ray is arranged to face the photosensitive cell 2d. In that case, their photoelectrically converted signals S2c and S2d are represented by the following Equations (6) and (7):
S2c = Rs + ΔIRs  (6)
S2d = Bs + ΔIRs  (7)
In Equations (6) and (7), Rs and Bs denote signal components representing the R and B rays, respectively. If no color filters are arranged to face the photosensitive cells 2a and 2b, their photoelectrically converted signals S2a and S2b are given by Equations (4) and (5), respectively. ΔIRs, Rs, Bs and Ws can then be obtained from Equations (4) through (7). By subtracting Rs and Bs from the Ws thus obtained, a signal component Gs representing the green (G) ray can be obtained. By performing these signal calculations on each unit block 2 and obtaining ΔIRs, Rs, Gs and Bs on a unit block basis, a color image and an infrared image can both be obtained.
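For illustration, here is a short sketch of that per-unit-block calculation, assuming the filter assignment described above (no filters on the cells 2a and 2b, an R filter on 2c and a B filter on 2d) and the approximation ΔIR ≈ ΔIR′; the function name is hypothetical.

```python
def decode_unit_block(s2a, s2b, s2c, s2d):
    # Equations (4)-(7):
    #   S2a = Ws + 2*dIRs, S2b = Ws + dIRs, S2c = Rs + dIRs, S2d = Bs + dIRs
    d_irs = s2a - s2b      # infrared component (Equation (4) minus Equation (5))
    ws = s2b - d_irs       # white (full visible) component from Equation (5)
    rs = s2c - d_irs       # red component from Equation (6)
    bs = s2d - d_irs       # blue component from Equation (7)
    gs = ws - rs - bs      # green component, since Ws = Rs + Gs + Bs
    return d_irs, rs, gs, bs
```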
Hereinafter, specific preferred embodiments of the present invention will be described with reference to
(Embodiment 1)
The image capturing section 100 includes a lens 101 for imaging a given subject, an optical plate 102, a solid-state image sensor 103 for converting optical information, which has been collected by imaging the subject through the lens 101 and the optical plate 102, into an electrical signal by photoelectric conversion, and a signal generating and receiving section 104. In this case, the optical plate 102 includes a quartz crystal low-pass filter for reducing a moire pattern to be caused by a pixel arrangement. The signal generating and receiving section 104 generates a fundamental signal to drive the solid-state image sensor 103, receives a signal from the solid-state image sensor 103, and passes it to the signal processing section 200.
The signal processing section 200 includes a memory 201 to store the signal supplied from the signal generating and receiving section 104, a color signal generating section 202 for generating a signal including color information and infrared ray information (i.e., a color signal) based on the data that has been read out from the memory 201, and an interface (IF) section 203 that outputs the color signal to an external device.
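The data flow through these blocks can be pictured with the following highly simplified sketch; the class and method names are hypothetical and merely stand in for the memory 201, the color signal generating section 202 and the IF section 203 for illustration.

```python
class SignalProcessingSection:
    """Toy stand-in for the signal processing section 200."""

    def __init__(self, decode_block):
        self.memory = []                  # corresponds to the memory 201
        self.decode_block = decode_block  # per-unit-block routine (section 202)

    def store(self, raw_frame):
        # Raw signal handed over by the signal generating and receiving section 104.
        self.memory.append(raw_frame)

    def output(self):
        # Generate color + infrared information per unit block and hand the
        # result to the IF section 203 (here simply returned to the caller).
        raw_frame = self.memory[-1]
        return [self.decode_block(*unit_block) for unit_block in raw_frame]
```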
It should be noted that this configuration is only an example and that according to the present invention, all components but the solid-state image sensor 103 can be an appropriate combination of known elements. Hereinafter, a solid-state image sensor 103 according to this preferred embodiment will be described.
In the solid-state image sensor of this preferred embodiment, multiple color filters are arranged to face respective photosensitive cells. Each of those color filters is designed to transmit a visible radiation falling within a wavelength range that is associated with any of multiple color components and an infrared ray. In the following description, if a color component is identified by C, a color filter that transmits the color component C and an infrared ray will be referred to herein as a “C element”.
According to this preferred embodiment, a Bayer arrangement of color filters that uses red (R), green (G) and blue (B) elements is adopted. As shown in
The structure shown in
In such an arrangement, the incoming light including visible radiations and an infrared ray is transmitted through the micro lenses 4 and the color filters 1a, 1b and 1c, incident on the respective photosensitive cells, and then converted photoelectrically there. As described above, each of the photosensitive cells has its thickness defined so as to convert photoelectrically almost all of the visible radiation (i.e., from blue ray through red ray) and a part of the infrared ray with the intensity Δ IR. That is why the light transmitted through each photosensitive cell consists mostly of the infrared ray. The infrared rays that have been transmitted through the photosensitive cells 2a, 2b and 2d are then incident on the substrate 6 of the image sensor. On the other hand, the infrared ray that has been transmitted through the photosensitive cell 2c is reflected by the infrared ray reflecting mirror 3a, which is arranged right under the photosensitive cell 2c, toward the photosensitive cell 2b that is located diagonally to the mirror 3a and then incident on it. Consequently, the photosensitive cell 2b receives infrared rays, of which the overall intensity is twice as high as any other photosensitive cell's. As a result, the photosensitive cells 2a, 2b, 2c and 2d generate photoelectrically converted signals S2a through S2d represented by the following Equations (8) to (11):
S2a = Rs + ΔIRs  (8)
S2b = Gs + 2ΔIRs  (9)
S2c = Gs + ΔIRs  (10)
S2d = Bs + ΔIRs  (11)
where Rs, Gs, Bs and ΔIRs denote the photoelectrically converted signal components representing the red, green and blue rays and the infrared ray, respectively.
According to this preferred embodiment, infrared rays with two different intensities are supposed to be received by the photosensitive cells associated with the green elements. Consequently, by calculating the difference between their intensities, an infrared ray component can be calculated. Specifically, by subtracting Equation (10) from Equation (9), the infrared ray component ΔIRs can be calculated as in the following Equation (12):
S2b − S2c = ΔIRs  (12)
And by subtracting the infrared ray component ΔIRs from the photoelectrically converted signals generated by the respective photosensitive cells, RGB color components Rs, Gs and Bs can be calculated.
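A minimal sketch of that calculation for one unit block follows, assuming the Bayer assignment of this embodiment (an R element on the cell 2a, G elements on the cells 2b and 2c with the reflected infrared ray reaching 2b, and a B element on the cell 2d); the function name is hypothetical.

```python
def decode_bayer_block(s2a, s2b, s2c, s2d):
    # Equations (8)-(12) of Embodiment 1.
    d_irs = s2b - s2c      # Equation (12): difference of the two green cells
    rs = s2a - d_irs       # red component from Equation (8)
    gs = s2c - d_irs       # green component from Equation (10)
    bs = s2d - d_irs       # blue component from Equation (11)
    return d_irs, rs, gs, bs
```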
As described above, in the image sensor of this preferred embodiment, an infrared ray reflecting mirror 3a is arranged to reflect an infrared ray toward at least one of the two photosensitive cells that are arranged adjacent to each other to receive visible radiations representing the same color. By getting the infrared ray that has been transmitted through one photosensitive cell reflected by the infrared ray reflecting mirror 3a toward the other photosensitive cell and making it incident there, the intensities of the infrared rays received by the two photosensitive cells will be different from each other. As a result, by calculating the signal difference between those two photosensitive cells, the infrared ray component can be obtained. And by subtracting the infrared ray component from each pixel signal, RGB color signals can be obtained. The image capture device of this preferred embodiment can obtain an infrared image and a color image represented by visible radiations at the same time without using any infrared cut filter or infrared pass filter. Consequently, the present invention would achieve significant effects in practice.
In the preferred embodiment described above, the infrared ray reflecting mirror 3a is arranged to make an infrared ray that has been transmitted through the photosensitive cell 2c incident on the photosensitive cell 2b. However, the present invention is in no way limited to that specific preferred embodiment. Alternatively, the infrared ray reflecting mirror 3a may also be arranged parallel to the plane on which the photosensitive cells are arranged as an array to reflect an infrared ray, which has been transmitted through the photosensitive cell 2c, back to the same photosensitive cell 2c. Even so, an infrared image and a color image can also be obtained by calculating a signal difference between the two photosensitive cells 2b and 2c. Furthermore, the infrared ray reflecting mirror 3a does not have to be provided as a single element for each unit block. Instead, an infrared ray reflecting mirror 3a may also be broad enough to cover multiple unit blocks on the same plane. Optionally, the infrared ray reflecting mirror 3a could also be designed to reflect not only an infrared ray but also visible radiations as well. On top of that, the image sensor of this preferred embodiment may further include a light absorbing member, which is arranged on the same side as the first surface 30a so as to face the photosensitive cells 2a, 2b and 2d, in addition to every member of the preferred embodiment described above. By providing such a light absorbing member, it is possible to prevent the light that has been transmitted through each photosensitive cell from being reflected from the substrate 6, for example.
(Embodiment 2)
Hereinafter, a second preferred embodiment of the present invention will be described. The image capture device of the second preferred embodiment of the present invention is the same as the counterpart of the first preferred embodiment described above except the arrangement of color filters in its image sensor. Thus, the following description of the second preferred embodiment will be focused on only those differences from the first preferred embodiment to avoid redundancies.
In this preferred embodiment, the color filters are basically arranged in a matrix consisting of two columns and two rows and function as two transparent elements and two complementary color elements. As shown in
When the image sensor with such an arrangement receives light falling within the visible radiation range and the infrared range from the subject, each photosensitive cell performs photoelectric conversion. Each of the photosensitive cells has its thickness defined so as to convert photoelectrically almost all of the visible radiation and a part of the infrared ray with the intensity ΔIR. That is why the respective photosensitive cells receive the infrared ray with the intensity ΔIR evenly. The infrared ray that has been transmitted through the photosensitive cell 2c, however, is reflected by the infrared ray reflecting mirror 3a back to the same photosensitive cell 2c again. Consequently, the photosensitive cell 2c receives infrared rays of which the overall intensity is twice as high as any other photosensitive cell's. Meanwhile, the infrared ray that has been transmitted through the photosensitive cell 2b is absorbed into the light absorbing layer 7 and is never reflected. As a result, the photosensitive cells 2a, 2b, 2c and 2d generate photoelectrically converted signals S2a through S2d represented by the following Equations (13) to (16):
S2a = Gs + Bs + ΔIRs  (13)
S2b = Rs + Gs + Bs + ΔIRs  (14)
S2c = Rs + Gs + Bs + 2ΔIRs  (15)
S2d = Rs + Gs + ΔIRs  (16)
Just like the image capture device of the first preferred embodiment described above, the image capture device of this preferred embodiment can also obtain the infrared ray component ΔIRs by calculating the differential signal between the photosensitive cells 2c and 2b. And by subtracting that component from the respective pixel signals, Ws (=Rs+Gs+Bs), Cs (=Gs+Bs) and Ys (=Rs+Gs) signals can be obtained. Using the signals thus obtained, the Rs signal can be obtained by subtracting Cs from Ws, the Bs signal can be obtained by subtracting Ys from Ws, and the Gs signal can be obtained by subtracting Rs and Bs from Ws. In this manner, RGB color signals can be obtained.
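The corresponding per-unit-block calculation can be sketched as follows, assuming the filter assignment of this embodiment (a cyan element on the cell 2a, transparent elements on the cells 2b and 2c with the reflected infrared ray reaching 2c, and a yellow element on the cell 2d); again, the function name is hypothetical.

```python
def decode_complementary_block(s2a, s2b, s2c, s2d):
    # Equations (13)-(16) of Embodiment 2.
    d_irs = s2c - s2b      # differential signal of the two transparent cells
    ws = s2b - d_irs       # Ws = Rs + Gs + Bs
    cs = s2a - d_irs       # Cs = Gs + Bs (cyan)
    ys = s2d - d_irs       # Ys = Rs + Gs (yellow)
    rs = ws - cs           # Rs = Ws - Cs
    bs = ws - ys           # Bs = Ws - Ys
    gs = ws - rs - bs      # Gs = Ws - Rs - Bs
    return d_irs, rs, gs, bs
```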
As described above, according to this preferred embodiment, an infrared ray reflecting mirror 3a is arranged to reflect an infrared ray toward at least one of the two photosensitive cells that are arranged adjacent to each other to receive visible radiations representing the same color. By getting the infrared ray that has been transmitted through one photosensitive cell reflected by the infrared ray reflecting mirror 3a back toward the same photosensitive cell and making it incident there, the intensities of the infrared rays received by the two photosensitive cells will be different from each other. As a result, by calculating the signal difference between those two photosensitive cells, the infrared ray component can be obtained. And by subtracting the infrared ray component from each pixel signal, RGB color signals can be obtained.
The image capture device of this preferred embodiment can obtain an infrared image and a color image represented by visible radiations at the same time without using any infrared cut filter or infrared pass filter. On top of that, the image sensor of this preferred embodiment uses transparent elements and color filters representing complementary colors, and therefore, would have higher sensitivity than its counterpart of the first preferred embodiment described above. Consequently, the present invention would achieve significant effects in practice.
In the preferred embodiment described above, the infrared ray reflecting mirror 3a is arranged to make an infrared ray that has been transmitted through the photosensitive cell 2c incident on the same photosensitive cell 2c again. However, the present invention is in no way limited to that preferred embodiment. Alternatively, the infrared ray reflecting mirror 3a may also be arranged so as to define a tilt angle with respect to the plane on which the photosensitive cells are arranged as an array to reflect an infrared ray, which has been transmitted through the photosensitive cell 2c, toward the photosensitive cell 2b and make it incident there. Even so, an infrared image and a color image can also be obtained by calculating a signal difference between the two photosensitive cells 2b and 2c. Furthermore, the infrared ray reflecting mirror 3a does not have to be provided as a single element for each unit block. Instead, the infrared ray reflecting mirror 3a may also be broad enough to cover multiple unit blocks on the same plane.
The light absorbing layer 7 of this preferred embodiment does not have to be provided as a single element for each unit block but may also cover multiple unit blocks as well. Furthermore, the light absorbing layer 7 is not an indispensable element for the present invention in the first place, and therefore, the effect of this preferred embodiment would also be achieved even without the light absorbing layer 7.
In each of the preferred embodiments of the present invention described above, a reflecting mirror 3a that reflects at least an infrared ray needs to be arranged inside the image sensor. In the manufacturing process of a backside illumination image sensor, both the principal surface and back surface sides thereof need to be processed, and therefore, it is easier to introduce the process step of arranging such an infrared ray reflecting mirror into its manufacturing process than it is in the manufacturing process of a conventional frontside illumination image sensor. Consequently, the present invention is also applicable to a frontside illumination image sensor but can be used particularly effectively in a backside illumination image sensor, among other things.
The infrared ray receiving image capture device of the present invention can be used extensively in any infrared camera that uses a solid-state image sensor. For example, the
Number | Date | Country | Kind |
---|---|---|---
2009-051708 | Mar 2009 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---
PCT/JP2010/001409 | 3/2/2010 | WO | 00 | 1/11/2011 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---
WO2010/100897 | 9/10/2010 | WO | A |
Number | Name | Date | Kind |
---|---|---|---
5210433 | Ohsawa et al. | May 1993 | A |
5416345 | Matsunaga | May 1995 | A |
8680591 | Haddad et al. | Mar 2014 | B2 |
8698084 | Jiang et al. | Apr 2014 | B2 |
20060044429 | Toda et al. | Mar 2006 | A1 |
20060188721 | Irvin et al. | Aug 2006 | A1 |
20070059901 | Majumdar et al. | Mar 2007 | A1 |
20090050855 | Majumdar et al. | Feb 2009 | A1 |
20100118243 | Majumdar et al. | May 2010 | A1 |
20110164156 | Hiramoto et al. | Jul 2011 | A1 |
Number | Date | Country |
---|---|---
02-264473 | Oct 1990 | JP |
03-109769 | May 1991 | JP |
11-317510 | Nov 1999 | JP |
2005-006066 | Jan 2005 | JP |
2006-054262 | Feb 2006 | JP |
Entry |
---
International Search Report for corresponding International Application No. PCT/JP2010/001409, mailed May 18, 2010.
Number | Date | Country
---|---|---
20110115919 A1 | May 2011 | US