Cross talk reduction

Information

  • Patent Grant
  • Patent Number
    7,808,022
  • Date Filed
    Tuesday, March 28, 2006
  • Date Issued
    Tuesday, October 5, 2010
Abstract
A method and apparatus for reducing cross-talk between pixels in a semiconductor based image sensor. The apparatus includes neighboring pixels separated by a homojunction barrier to reduce cross-talk, or the diffusion of electrons from one pixel to another. The homojunction barrier is deep enough in relation to the other pixel structures to ensure that cross-pixel electron diffusion is minimized.
Description
TECHNICAL FIELD

The present invention relates generally to a semiconductor based image sensor and, more particularly, to a semiconductor pixel structure for detecting electromagnetic radiation.


BACKGROUND

Semiconductor based sensors and devices for detecting electromagnetic radiation have been implemented in a semiconductor substrate in CMOS or MOS technology. In these sensors, the regions adapted for collecting charge carriers generated by the radiation in the semiconductor substrate are formed of a p-n or an n-p junction photodiode, with the substrate being of n type or p type conductivity, respectively. Such junctions are called collection junctions. Among the image sensors implemented in complementary metal-oxide-semiconductor (CMOS) or MOS technology, image sensors with passive pixels and image sensors with active pixels are distinguished. The difference between these two types of pixel structures is that an active pixel amplifies the charge that is collected on its photosensitive element, whereas a passive pixel does not perform signal amplification and requires a charge-sensitive amplifier that is not integrated in the pixel.


One prior semiconductor based image sensor is illustrated in FIG. 1. In the semiconductor based image sensor of FIG. 1, the photodiode is formed by an n-p collection junction with the substrate being of p type conductivity. The photodiode that collects the charge carriers generated by the radiation is shown on the right, and the diode structure associated with the readout circuitry (unrelated to the detection) is shown on the left of the figure. If the diode structure for the unrelated readout circuitry is placed in the neighborhood of the collection junction of the detector photodiode, part of the charges that otherwise would have reached the collection junction will be collected by junctions or components of the unrelated readout circuitry. The charge carriers generated by light falling on the regions of the detector that are used for readout circuitry, therefore, are mainly collected by the junctions of this readout circuitry. The area taken up by the readout circuitry in the pixels is therefore lost for collecting the radiation, which is essentially the reason for the low "fill factor," or low sensitivity, of active pixel based sensors.


One semiconductor based image sensor, as described in U.S. Pat. No. 6,225,670 and illustrated in FIG. 2, provides a solution to the above-described problem with the image sensor illustrated in FIG. 1. The semiconductor based detector illustrated in FIG. 2 has a small, but effective, barrier between the radiation sensitive volume in the semiconductor substrate and the regions and junctions of the unrelated readout circuitry, and has no barrier, or a lower barrier, between the radiation sensitive volume in the semiconductor and the photodiode collection junction. The collection junction collects all photoelectrons that are generated in the epitaxial layer beneath the surface of the whole pixel. This is possible because the electrons see a small but sufficient electrostatic barrier towards the active pixel circuitry and towards the substrate. The only direction in which no barrier, or only a low barrier, is present is towards the collection junction, so virtually all electrons diffuse towards this junction. Such a pixel structure is also called a "well pixel" because, in practice, the collection junction in such a pixel is implemented as a so-called n-well implantation.


However, the well pixel structure of FIG. 2 may have some cross-talk associated with it. For most applications, the ideal pixel can be considered as a square of silicon, packed in an array of nothing but such squares. The sensitive area is the complete square: the sensitivity is high and constant within the square and zero outside it. That is, light impinging inside the pixel's boundary should contribute to the pixel's signal, and light impinging outside the boundary should not; it should contribute to another pixel's signal. Reality is less ideal. The optical information that ends up in a neighboring pixel's signal is called "optical cross-talk," expressed as the percentage of signal lost to the neighbor. A distinction is sometimes made between left/right/up/down neighbors, and even second and third neighbors. Optical cross-talk is typically also wavelength dependent: short wavelengths typically suffer less from optical cross-talk than longer wavelengths. Optical cross-talk can be directly derived from the "effective pixel shape" (EPS). The EPS can be understood as the pixel response as a function of an infinitesimal light spot that travels over the pixel (and beyond) in the X and Y directions. The EPS for an ideal pixel is a square. The EPS for an ideal pixel and for a real pixel, with the corresponding optical cross-talk, are illustrated in FIG. 3.
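The way optical cross-talk follows from the EPS can be sketched numerically. The following is an illustrative sketch only, not taken from the patent: the EPS is sampled at discrete positions, and cross-talk is computed as the fraction of the total response falling outside the pixel's geometric boundary. The sample positions, boundary, and Gaussian edge blur are all hypothetical.

```python
import math

def cross_talk_from_eps(eps, pixel_lo, pixel_hi):
    """Fraction of a pixel's total response that falls outside its
    own geometric boundary (i.e., signal lost to neighbors).

    eps: list of (position, response) samples of the effective pixel shape.
    """
    total = sum(r for _, r in eps)
    outside = sum(r for x, r in eps if not (pixel_lo <= x < pixel_hi))
    return outside / total

# Ideal pixel: response is 1 inside [0, 10) and 0 elsewhere -> 0% cross-talk.
ideal = [(x, 1.0 if 0 <= x < 10 else 0.0) for x in range(-5, 15)]
print(cross_talk_from_eps(ideal, 0, 10))  # 0.0

# Real pixel: edges blurred by carrier diffusion -> some signal leaks out.
real = [(x, math.exp(-((x - 4.5) / 4.0) ** 2)) for x in range(-5, 15)]
print(cross_talk_from_eps(real, 0, 10))   # a small nonzero fraction
```

A longer-wavelength pixel would simply have a wider blur, giving a larger outside fraction, which matches the wavelength dependence noted above.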



FIG. 4 illustrates the optical cross-talk in a well pixel. Impinging light generates photo-electrons in the p-type epitaxial layer. These diffuse randomly until they reach the depletion layer of the photodiode's collection junction. When electrons are optically generated near the border between two pixels, the electrons can diffuse either way (i.e., to either one of the collection junctions of the two neighboring pixels), as illustrated in FIG. 4. In such image sensors, the border of the pixels becomes "fuzzy." Translated to image quality, this "fuzziness" is the optical cross-talk between pixels; in the image created by a sensor with such pixels, the effect is blurriness, or a lack of sharpness.
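The "either way" diffusion described above can be illustrated with a toy one-dimensional random walk. This is a hypothetical model, not from the patent: an electron generated exactly on the border is equally likely to reach either collection junction, which is the statistical origin of the fuzzy border.

```python
import random

def collected_by(x0, junction_left=-10.0, junction_right=10.0,
                 step=1.0, rng=random):
    """Random-walk a photoelectron until it reaches either pixel's
    collection junction; return which side collects it."""
    x = x0
    while junction_left < x < junction_right:
        x += step if rng.random() < 0.5 else -step
    return "left" if x <= junction_left else "right"

random.seed(1)
trials = 2000
# Electron generated exactly on the border: collected ~50/50 either way.
right = sum(collected_by(0.0) == "right" for _ in range(trials))
print(right / trials)  # close to 0.5 -> the pixel border is "fuzzy"
```

Starting the walk closer to one junction biases collection toward that side, which is why cross-talk falls off with distance from the border.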





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which:



FIG. 1 illustrates one prior semiconductor based image sensor.



FIG. 2 illustrates another, conventional semiconductor based image sensor.



FIG. 3 illustrates the derivation of optical cross-talk from an effective pixel shape.



FIG. 4 conceptually illustrates optical cross-talk in a well pixel.



FIG. 5 illustrates one embodiment of an image sensor implementing the methods and apparatus described herein.



FIG. 6A is a cross sectional view illustrating one embodiment of pixels having a homojunction barrier to reduce optical cross talk.



FIG. 6B is a cross sectional view illustrating another embodiment of pixels having a homojunction barrier formed around a trench.



FIG. 6C is a cross sectional view illustrating yet another embodiment of pixels having a homojunction barrier to reduce optical cross talk.



FIG. 7 illustrates an alternative embodiment of a pixel matrix structure to reduce optical cross-talk.



FIG. 8 illustrates another embodiment of pixel structures to reduce optical cross-talk.





DETAILED DESCRIPTION

A pixel having a structure to reduce cross-talk is described. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques are not shown in detail or are shown in block diagram form in order to avoid unnecessarily obscuring an understanding of this description.


Reference in the description to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification do not necessarily all refer to the same embodiment. Any of the signals provided over various buses described herein may be time multiplexed with other signals and provided over one or more common buses. Additionally, the interconnection between circuit components or blocks may be shown as buses or as single signal lines. Each of the buses may alternatively be one or more single signal lines, and each of the single signal lines may alternatively be buses.



FIG. 5 illustrates one embodiment of an image sensor implementing the methods and apparatus described herein. Image sensor 1000 includes an imaging core 1010 and components associated with the operation of the imaging core. The imaging core 1010 includes a pixel matrix 1020 having an array of pixels (e.g., pixel 300) and the corresponding driving and sensing circuitry for the pixel matrix 1020. The driving and sensing circuitry may include: one or more scanning registers 1035, 1030 in the X- and Y-direction in the form of shift registers or addressing registers; buffers/line drivers for the long reset and select lines; column amplifiers 1040 that may also contain fixed pattern noise (FPN) cancellation and double sampling circuitry; and an analog multiplexer (mux) 1045 coupled to an output bus 1046. FPN has the effect that there is non-uniformity in the response of the pixels in the array. Correcting this non-uniformity requires some type of calibration, for example, multiplying the pixel's signals by, or adding/subtracting to them, a correction amount that is pixel dependent. Circuits and methods to cancel FPN may be referred to as correlated double sampling or offset compensation and are known in the art; accordingly, a detailed description is not provided.
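As a rough sketch of the offset/gain style of FPN correction mentioned above (the calibration numbers are hypothetical, and the per-pixel linear model is an illustrative assumption, not the patent's method):

```python
def correct_fpn(raw, offset, gain):
    """Per-pixel fixed-pattern-noise correction: subtract each pixel's
    dark offset, then divide by its relative gain."""
    return [(r - o) / g for r, o, g in zip(raw, offset, gain)]

# Hypothetical calibration data for a 4-pixel row: dark-frame offsets and
# flat-field gains, measured once per sensor.
raw    = [110.0, 98.0, 105.0, 120.0]
offset = [ 10.0, -2.0,   5.0,  20.0]
gain   = [  1.0,  1.0,   1.0,   1.0]
print(correct_fpn(raw, offset, gain))  # [100.0, 100.0, 100.0, 100.0]
```

After correction, pixels seeing the same illumination report the same value, which is the uniformity the calibration aims to restore.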


The pixel matrix 1020 may be arranged in N rows of pixels by N columns of pixels (with N≧1), with each pixel (e.g., pixel 300) composed of at least a photosensitive element and a readout switch (not shown). A pixel matrix is known in the art; accordingly, a more detailed description is not provided.


The Y-addressing scan register(s) 1030 addresses all pixels of a row (e.g., row 1022) of the pixel matrix 1020 to be read out, whereby the switching elements of all pixels of the selected row are closed at the same time. Each of the selected pixels therefore places a signal on its vertical output line (e.g., line 1023), where it is amplified in the column amplifiers 1040. The X-addressing scan register(s) 1035 provides control signals to the analog multiplexer 1045 to place the output signals (amplified charges) of the column amplifiers 1040 onto the output bus 1046. The output bus 1046 may be coupled to a buffer 1048 that provides a buffered, analog output 1049 from the imaging core 1010.
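The row-select/column-multiplex readout order described above can be modeled in miniature. This is only an illustrative sketch: the gain value and the matrix contents are arbitrary, and the real circuit is analog, not a Python loop.

```python
def read_out(matrix):
    """Sequence pixel values the way the Y register (row select) and the
    X register (column multiplexer) would: one full row at a time."""
    samples = []
    for y, row in enumerate(matrix):          # Y scan register selects a row
        amplified = [2 * v for v in row]      # column amplifiers (gain of 2 here)
        for x, v in enumerate(amplified):     # X register steps the analog mux
            samples.append(((y, x), v))
    return samples

matrix = [[1, 2], [3, 4]]
print(read_out(matrix))
# [((0, 0), 2), ((0, 1), 4), ((1, 0), 6), ((1, 1), 8)]
```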


The output 1049 from the imaging core 1010 is coupled to an analog-to-digital converter (ADC) 1050 to convert the analog imaging core output 1049 into the digital domain. The ADC 1050 is coupled to a digital processing device 1060 to process the digital data received from the ADC 1050 (such processing may be referred to as imaging processing or post-processing). The digital processing device 1060 may include one or more general-purpose processing devices such as a microprocessor or central processing unit, a controller, or the like. Alternatively, digital processing device 1060 may include one or more special-purpose processing devices such as a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or the like. Digital processing device 1060 may also include any combination of a general-purpose processing device and a special-purpose processing device.


The digital processing device 1060 is coupled to an interface module 1070 that handles the information input/output (I/O) exchange with components external to the image sensor 1000 and takes care of other tasks such as protocols, handshaking, voltage conversions, etc. The interface module 1070 may be coupled to a sequencer 1080. The sequencer 1080 may be coupled to one or more components in the image sensor 1000 such as the imaging core 1010, digital processing device 1060, and ADC 1050. The sequencer 1080 may be a digital circuit that receives externally generated clock and control signals from the interface module 1070 and generates internal signals to drive circuitry in the imaging core 1010, ADC 1050, etc. In one embodiment, the voltage supplies that generate the control signals used to control the various components in the pixel structure of FIG. 5 discussed below may be generated by drivers illustrated by control drivers block 1015.


It should be noted that the image sensor illustrated in FIG. 5 is only an exemplary embodiment and an image sensor may have other configurations than that depicted in FIG. 5. For example, alternative embodiments of the image sensor 1000 may include one ADC 1050 for every pixel 300, for every column (i.e., vertical output line 1023), or for a subset block of columns. Similarly, one or more other components within the image sensor 1000 may be duplicated and/or reconfigured for parallel or serial performance. For example, a fewer number of column amplifiers 1040 than pixel matrix columns may be used, with column outputs of the pixel matrix multiplexed into the column amplifiers. Similarly, the layout of the individual components within the image sensor 1000 may be modified to adapt to the number and type of components. In another embodiment, some of the operations performed by the image sensor 1000 may be performed in the digital domain instead of the analog domain, and vice versa.



FIG. 6A is a cross sectional view illustrating one embodiment of pixels having a homojunction barrier to reduce optical cross talk. Two neighboring pixels of the pixel matrix 1020 are illustrated in FIG. 6A: pixel A 601 and pixel B 602. Pixel A and pixel B in the embodiment illustrated in FIG. 6A are formed using an n-p junction photodiode with a substrate 640 of p type conductivity. The n regions 611 and 612 are collection junctions for pixels A and B, respectively, for collecting charge carriers generated by radiation in the epitaxial layer 630 and/or substrate 640. The radiation may be of any type, for example, all forms of light including infrared and ultraviolet as well as the visible spectrum, high energy electromagnetic rays such as x-rays, and nuclear particles. The n regions 611 and 612 form photodiodes with the epitaxial layer 630 in pixels A and B, respectively. The n region 628 is a junction that may be part of the readout circuitry for operating on signals generated by the charge carriers collected by the collection region 611. The fabrication and configuration of a pixel is known in the art; accordingly, a more detailed discussion is not provided. It should be noted that the pixels may include other regions and structures that are not illustrated so as not to obscure an understanding of embodiments of the present invention.


In this embodiment, the border region 610 between the photodiodes of pixel A 601 and pixel B 602 in the pixel matrix includes a homojunction barrier 620 that inhibits electrons that are optically generated (by light 605) in one pixel (e.g., pixel B 602) from diffusing to a neighboring pixel (e.g., pixel A 601). The homojunction barrier 620 may be composed of a deep, heavily doped (denoted by "+") p+ region. In one embodiment, the homojunction barrier 620 may be approximately 2 times or more as heavily doped (denoted by "++") as a region (e.g., epitaxial layer 630) designated as "p−." "Deep" as used herein means protruding deeper into the epitaxial layer 630 than other p regions (e.g., p region 650) in the pixel matrix 1020. In one particular embodiment, the homojunction barrier 620 may be at least approximately 2 times deeper (depth 671) than the depth 672 of the shallower p region 625.


In one embodiment, the homojunction barrier 620 may be disposed in a shallow p region 625. "Shallow" as used herein means protruding less into the epitaxial layer 630 than the n regions (e.g., region 612) in a pixel (e.g., pixel B 602). In one embodiment, the shallow p region 625 may be a "p-well" implant (for example, similar to that described in regards to FIG. 4 at the border between two pixels). Such a p-well may contain an n region 628 that is used in the fabrication of nMOSFETs. Alternatively, the shallow p region 625 may be a p+ implant used, for example, as an nMOSFET source-drain, with the deeper p region being formed as a p-well. It should be noted that in an embodiment where the p+ region of the homojunction barrier 620 has a depth 671 approximately 2 to 4 times the depth 672 of the shallow p region 625, the homojunction region may be referred to as a tub. In yet another embodiment, illustrated in FIG. 6C, the homojunction barrier 620 may not be formed in a shallow p region but, rather, directly in the p− epitaxial layer 630.


The difference in doping concentrations between the p− epitaxial layer 630 and the p+ homojunction barrier 620 creates a weak electrostatic barrier and electric field that counteracts the diffusion of electrons from p− towards p+; hence, it inhibits electrons from passing from one pixel (e.g., pixel B 602) to a neighboring pixel (e.g., pixel A 601). The diffusion of electrons from the area of one pixel to the neighboring pixel is impeded by the p+ region of the homojunction barrier 620 in the p− epitaxial layer 630 disposed between the collection regions 611 and 612. In an alternative embodiment, an epitaxial layer may not be used and the regions may be disposed directly in another type of charge generation layer, for example, tub regions or the substrate. In either configuration, the homojunction barrier 620 may protrude into the substrate. The homojunction barrier 620 may result in a crisper separation of the optical volumes of neighboring pixels by reducing the mixing of signals of neighboring pixels.
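The strength of such a high-low (p+/p−) electrostatic barrier can be estimated from standard semiconductor physics as kT/q · ln(N+/N−). This estimate is not stated in the patent, and the doping concentrations below are hypothetical illustrations.

```python
import math

def barrier_mV(n_plus, n_minus, temp_K=300.0):
    """Built-in potential of a p+/p- high-low junction, kT/q * ln(N+/N-),
    in millivolts: the electrostatic barrier that repels electrons
    approaching the p+ region from the p- side."""
    kT_over_q_mV = 1000.0 * 1.380649e-23 * temp_K / 1.602176634e-19
    return kT_over_q_mV * math.log(n_plus / n_minus)

# A barrier doped ~2x the epitaxial layer yields a "weak" barrier of ~18 mV,
# below kT/q (~26 mV at 300 K)...
print(round(barrier_mV(2e15, 1e15), 1))
# ...while a 1000x doping step yields ~179 mV, several times kT/q.
print(round(barrier_mV(1e18, 1e15), 1))
```

This is consistent with the text above: even a modest doping step produces a small but nonzero barrier that biases electron diffusion away from the border.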



FIG. 6B is a cross sectional view illustrating an alternative embodiment of pixels having a homojunction barrier to reduce optical cross talk. In this embodiment, the homojunction barrier 620 is formed around a trench 680. The formation of a trench is known in the art; accordingly, a detailed description is not provided.


Although formation of the homojunction barrier 620 is discussed at times in relation to an implantation operation for ease of explanation, it should be noted that other fabrication techniques may be used to generate the doped region, for example, diffusion and epitaxial growth. Such fabrication techniques are known in the art; accordingly, a detailed discussion is not provided. In addition, the pixel structures have been illustrated and discussed in regard to an n-p junction photodiode with a p type conductivity substrate only for ease of explanation. In an alternative embodiment, the pixels may be formed using a p-n junction photodiode with an n type conductivity substrate and, correspondingly, an n type homojunction barrier 620.


In alternative embodiments, other structures may be utilized to reduce cross-talk between neighboring pixels, for example, as described below.



FIG. 7 illustrates an alternative embodiment of a pixel structure to reduce cross-talk. In this embodiment, reduction of cross-talk may be achieved by a dummy photodiode collection region 710 (e.g., an n-implant that is typically, but not necessarily, of the same nature as the real photodiode) between the real photodiode collection regions 720 and 730. This dummy photodiode may additionally be covered by a metal light shield 715. Alternatively, the metal light shield 715 need not be used. Although the structure illustrated in FIG. 7 may require additional room for the dummy diode plus buffer space, it may provide an effective countermeasure against cross-talk. Photo-charge that attempts to cross the border between two pixels is collected by the dummy photodiode.



FIG. 8 illustrates another embodiment of a pixel structure to reduce cross-talk. In this embodiment, cross-talk may be reduced by embedding a pixel 801 in a tub region 810 that is deeper than the p-well region 820. The photosensitive volume is then confined to the p-tub 810, and each pixel is contained in a separate p-tub. For example, pixel 801 is contained in p-tub 810 and pixel 802 is contained in p-tub 830. Since electrons cannot diffuse between p-tubs through the n-type substrate 850, there may be no resulting cross-talk at all.


It should be noted that the semiconductor manufacturing processes of fabricating the various regions and layers described above are known in the art; accordingly, more detailed descriptions are not provided.


Embodiments of the present invention have been illustrated with a photodiode device type and CMOS technology for ease of discussion. In alternative embodiments, other device types (e.g., photogate and phototransistor), device technologies (e.g., charge coupled device (CCD) and buried channel CMOS), and process technologies (e.g., nMOS and BiCMOS) may be used. Furthermore, the image sensors discussed herein may be applicable for use with all types of electromagnetic (EM) radiation (i.e., wavelength ranges) such as, for example, visible, infrared, ultraviolet, gamma, x-ray, microwave, etc. In one particular embodiment, the image sensors and pixel structures discussed herein are used with EM radiation in approximately the 300-1100 nanometer (nm) wavelength range (i.e., the visible light to near infrared spectrum). Alternatively, the image sensors and pixel structures discussed herein may be used with EM radiation in other wavelength ranges.


The image sensor and pixel structures discussed herein may be used in various applications including, but not limited to, a digital camera system, for example, for general-purpose photography (e.g., camera phone, still camera, video camera) or special-purpose photography (e.g., in automotive systems, hyperspectral imaging in space borne systems, etc). Alternatively, the image sensor and pixel structures discussed herein may be used in other types of applications, for example, machine and robotic vision, document scanning, microscopy, security, biometry, etc.


Although the present invention has been described with reference to specific exemplary embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention as set forth in the claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims
  • 1. A semiconductor based image sensor having pixels, comprising: in each of the pixels, a first region having dopants of a first conductivity type disposed in a charge generation layer having dopants of a second conductivity type; in a border region between pixels, a second region having dopants of the second conductivity type disposed in the charge generation layer, wherein the second region is deeper in the charge generation layer than the first region; and a third region of the second conductivity type disposed in the charge generation layer, wherein the second region is at least twice as deep in the charge generation layer as the third region, wherein the first region and third region are laterally disposed in relation to one another, and wherein the first region is deeper in the charge generation layer than the third region.
  • 2. The semiconductor based image sensor of claim 1, wherein the first conductivity type is n-type and wherein the second conductivity type is p-type.
  • 3. The semiconductor based image sensor of claim 1, wherein the charge generation layer and the first region in each of the pixels form photodiodes.
  • 4. The semiconductor based image sensor of claim 1, wherein the border region comprises a homojunction barrier comprising the second region having dopants of the second conductivity type.
  • 5. The semiconductor based image sensor of claim 1, wherein the third region is part of readout circuitry for operating on signals being generated by charge carriers collected by the first region.
  • 6. The semiconductor based image sensor of claim 1, wherein the second region is disposed in the third region.
  • 7. The semiconductor based image sensor of claim 6, wherein the charge generation layer is an epitaxial layer, wherein the third region is a p-well and the second region is a p-implant.
  • 8. The semiconductor based image sensor of claim 1, wherein the second region is disposed in the charge generation layer outside of the third region.
  • 9. The semiconductor based image sensor of claim 1, further comprising a trench in the border region, wherein the second region is disposed around the trench.
  • 10. The semiconductor based image sensor of claim 9, wherein the charge generation layer is an epitaxial layer and wherein the third region is a p-well and the second region is a p-implant.
  • 11. The semiconductor based image sensor of claim 1, wherein the charge generation layer is an epitaxial layer and wherein the third region is a p-well and the second region is a p-implant.
  • 12. The semiconductor based image sensor of claim 1, further comprising a fourth region having dopants of the first conductivity type disposed in the third region.
  • 13. A method, comprising: providing a semiconductor based image sensor having neighboring pixels, wherein said pixels comprise photodiodes, each having a first region of dopants of a first conductivity type; receiving radiation in a charge collection layer of the semiconductor based image sensor to generate electrons, wherein said charge collection layer is comprised of dopants of a second conductivity type, wherein the first region is disposed in said charge collection layer at a first depth; and inhibiting the diffusion of electrons that are generated closer to one of the neighboring pixels than to the other of the neighboring pixels using a second region, wherein said second region is comprised of the second conductivity type different from the first conductivity type, wherein said second region has a greater depth in said charge collection layer than the first region, and a third region of the second conductivity type disposed in the charge generation layer, wherein said second region is at least twice as deep in the charge generation layer as the third region, wherein the first region and third region are laterally disposed in relation to one another, and wherein the first region has a greater depth in said charge collection layer than the third region.
  • 14. The method of claim 13, wherein the first conductivity type is n-type and wherein the second conductivity type is p-type.
  • 15. The method of claim 13, wherein the charge collection layer is an epitaxial layer and wherein the second region is a p-implant and the third region is a p-well.
REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. Provisional Application No. 60/666,080, filed on Mar. 28, 2005, which is hereby incorporated by reference.

US Referenced Citations (134)
Number Name Date Kind
3770968 Hession et al. Nov 1973 A
3904818 Kovac Sep 1975 A
4148048 Takemoto et al. Apr 1979 A
4253120 Levine Feb 1981 A
4274113 Ohba et al. Jun 1981 A
4373167 Yamada Feb 1983 A
4389661 Yamada Jun 1983 A
4473836 Chamberlain Sep 1984 A
4484210 Shiraki et al. Nov 1984 A
4498013 Kuroda et al. Feb 1985 A
4565756 Needs et al. Jan 1986 A
4580103 Tompsett Apr 1986 A
4581103 Levine et al. Apr 1986 A
4628274 Vittoz et al. Dec 1986 A
4630091 Kuroda et al. Dec 1986 A
4647975 Alston et al. Mar 1987 A
4696021 Kawahara Sep 1987 A
4703169 Arita Oct 1987 A
4774557 Kosonocky Sep 1988 A
4809074 Imaide et al. Feb 1989 A
4814848 Akimoto et al. Mar 1989 A
4831426 Kimata et al. May 1989 A
4914493 Shiromizu Apr 1990 A
4951105 Yamada Aug 1990 A
4984044 Yamamura Jan 1991 A
4984047 Stevens Jan 1991 A
4998265 Kimata Mar 1991 A
5084747 Miyawaki Jan 1992 A
5101253 Mizutani et al. Mar 1992 A
5122881 Nishizawa et al. Jun 1992 A
5128534 Wyles et al. Jul 1992 A
5144447 Akimoto et al. Sep 1992 A
5146074 Kawahara et al. Sep 1992 A
5153420 Hack et al. Oct 1992 A
5162912 Ueno et al. Nov 1992 A
5164832 Halvis et al. Nov 1992 A
5182446 Tew Jan 1993 A
5182623 Hynecek Jan 1993 A
5191398 Mutoh Mar 1993 A
5258845 Kyuma et al. Nov 1993 A
5270531 Yonemoto Dec 1993 A
5283428 Morishita et al. Feb 1994 A
5296696 Uno Mar 1994 A
5306905 Guillory et al. Apr 1994 A
5307169 Nagasaki et al. Apr 1994 A
5311319 Monoi May 1994 A
5321528 Nakamura Jun 1994 A
5329112 Mihara Jul 1994 A
5335008 Hamasaki Aug 1994 A
5345266 Denyer Sep 1994 A
5434619 Yonemoto Jul 1995 A
5436949 Hasegawa et al. Jul 1995 A
5461425 Fowler et al. Oct 1995 A
5496719 Miwada et al. Mar 1996 A
5519207 Morimoto May 1996 A
5528643 Hynecek Jun 1996 A
5576763 Ackland et al. Nov 1996 A
5578842 Shinji Nov 1996 A
5587596 Chi et al. Dec 1996 A
5608204 Hofflinger et al. Mar 1997 A
5608243 Chi et al. Mar 1997 A
5614744 Merrill Mar 1997 A
5625210 Lee et al. Apr 1997 A
5625322 Gourgue et al. Apr 1997 A
5656972 Norimatsu Aug 1997 A
5668390 Morimoto Sep 1997 A
5675158 Lee Oct 1997 A
5710446 Chi et al. Jan 1998 A
5714753 Park Feb 1998 A
5721425 Merrill Feb 1998 A
5737016 Ohzu et al. Apr 1998 A
5754228 Dyck May 1998 A
5786607 Ishikawa et al. Jul 1998 A
5793423 Hamasaki Aug 1998 A
5808677 Yonemoto Sep 1998 A
5812191 Orava et al. Sep 1998 A
5828091 Kawai Oct 1998 A
5841126 Fossum Nov 1998 A
5841159 Lee et al. Nov 1998 A
5861621 Takebe et al. Jan 1999 A
5872371 Guidash et al. Feb 1999 A
5872596 Yanai Feb 1999 A
5886353 Spivey et al. Mar 1999 A
5898168 Gowda et al. Apr 1999 A
5898196 Hook et al. Apr 1999 A
5903021 Lee et al. May 1999 A
5904493 Lee et al. May 1999 A
5933190 Dierickx Aug 1999 A
5952686 Chou et al. Sep 1999 A
5953060 Dierickx Sep 1999 A
5955753 Takahashi Sep 1999 A
5956570 Takizawa Sep 1999 A
5973375 Baukus et al. Oct 1999 A
5977576 Hamasaki Nov 1999 A
5990948 Sugiki Nov 1999 A
6011251 Dierickx et al. Jan 2000 A
6040592 McDaniel et al. Mar 2000 A
6043478 Wang Mar 2000 A
6051857 Miida Apr 2000 A
6100551 Lee et al. Aug 2000 A
6100556 Drowley et al. Aug 2000 A
6107655 Guidash Aug 2000 A
6111271 Snyman et al. Aug 2000 A
6115066 Gowda et al. Sep 2000 A
6133563 Clark et al. Oct 2000 A
6133954 Jie et al. Oct 2000 A
6136629 Sin Oct 2000 A
6137100 Fossum et al. Oct 2000 A
6166367 Cho Dec 2000 A
6188093 Isogai et al. Feb 2001 B1
6194702 Hook et al. Feb 2001 B1
6204524 Rhodes Mar 2001 B1
6225670 Dierickx May 2001 B1
6239456 Berezin et al. May 2001 B1
6316760 Koyama Nov 2001 B1
6403998 Inoue Jun 2002 B1
6459077 Hynecek Oct 2002 B1
6545303 Scheffer Apr 2003 B1
6570618 Hashi May 2003 B1
6631217 Funatsu et al. Oct 2003 B1
6636261 Pritchard et al. Oct 2003 B1
6778214 Toma Aug 2004 B1
6815791 Dierickx Nov 2004 B1
6825455 Schwarte Nov 2004 B1
6836291 Nakamura et al. Dec 2004 B1
6906302 Drowley Jun 2005 B2
6967316 Lee Nov 2005 B2
6975356 Miyamoto Dec 2005 B1
7199410 Dierickx Apr 2007 B2
7253019 Dierickx Aug 2007 B2
7256469 Kanbe Aug 2007 B2
20020022309 Dierickx Feb 2002 A1
20030011694 Dierickx Jan 2003 A1
20070145503 Dierickx Jun 2007 A1
Foreign Referenced Citations (27)
Number Date Country
2132629 Sep 1993 CA
0260954 Mar 1988 EP
0548987 Jun 1993 EP
0635973 Jan 1995 EP
0657863 Jun 1995 EP
0739039 Oct 1996 EP
0773669 May 1997 EP
0632930 Jul 1998 EP
0858111 Aug 1998 EP
0858212 Aug 1998 EP
0883187 Dec 1998 EP
0903935 Mar 1999 EP
0978878 Feb 2000 EP
2324651 Oct 1998 GB
01-204579 Aug 1989 JP
02-050584 Feb 1990 JP
04088672 Feb 1992 JP
04-207589 Jul 1992 JP
05-030433 Feb 1993 JP
06-284347 Oct 1994 JP
07-072252 Mar 1995 JP
09321266 Dec 1997 JP
9304556 Mar 1993 WO
9319489 Sep 1993 WO
9810255 Mar 1998 WO
9916268 Apr 1999 WO
0055919 Sep 2000 WO
Provisional Applications (1)
Number Date Country
60666080 Mar 2005 US