The method and system relate to the field of yield enhancement in photovoltaic cell thin film production processes and, in particular, to the production of thin film photovoltaic cells.
Thin film (TF) layers are typically layers of metal, semiconductor, or organic material having a high degree of surface uniformity. The layers are produced by depositing the desired material on a flexible or rigid substrate. The substrates may have a variety of sizes ranging from microscopic integrated circuits to large solar panels having dimensions of tens of meters. The substrates may be rigid or flexible, precut sheets or continuous web material. The layers may be deposited by sputtering, electrolytic deposition, printing, or other processes known in the art and may be transparent, translucent, opaque or reflective.
In order to ensure proper functionality, the TF layer should be homogeneous and free of defects such as pinholes, bumps, dish downs, scratches, shorts, cuts etc. A range of complicated production processes and systems that combine partially integrated and stand-alone material deposition systems, optical inspection systems, metrology measurement systems, repair devices etc., are typically used to ensure the integrity of the TF layer or to locate the defects and remove the defective sections of material from the process.
The task becomes more complicated as the size of the substrate increases. For example, PV (photovoltaic) TF production uses substrates of larger size than Flat Panel Display (FPD) substrates. Gen 6 FPD substrates and some later FPD generations may be as large as 220-260 centimeters, and thin-layer solar photovoltaic panels, which are used to convert light energy into electricity, may have dimensions of a few meters. The above mentioned defects significantly reduce production yield and increase the cost of finished products. Yield is especially important in the rapidly growing solar panel market, where removal of defects improves solar panel efficiency and, in many cases, saves a complete solar cell from being discarded as malfunctioning.
The industry still does not possess a fully automated integrated solar panel production line. It is searching for solutions that could enable effective detection and classification of production defects and further support a repair capability. Repair is needed to remedy defective panels exhibiting the above mentioned defects. Such systems could be part of new production lines or additions to existing ones, and would also increase the efficiency of the produced photovoltaic modules, which correlates with the percentage of light energy converted to electricity.
A system and a method for photovoltaic thin film quality control are disclosed. The system illuminates an area of a continuously moving photovoltaic film, acquires an image of the illuminated area, and compares it with a predetermined defect-free image. The difference between the images indicates the presence of photovoltaic thin film defects. The processes of image acquisition, defect detection and classification, geometric feature analysis, and others are concurrent processes performed without slowing the continuously moving photovoltaic film. The system communicates the detected defects to thin film production systems located upstream and downstream and undertakes corrective actions, improving the yield of a thin film production line.
The term “auxiliary image” as used in the present disclosure means an acquired image illuminated by one type of illumination, or characterized by one type of image field, or a combination thereof. For example, an image obtained under green light illumination using the bright field method, or an image obtained by the dark field or bright field method, would be an auxiliary image. An auxiliary image would typically be an image of a line illuminated by any one of the illumination sources.
The term “combined image” as used in the present disclosure means an image generated by a manipulation of two or more auxiliary images.
The term “time slice” as used in the present disclosure means the time during which one line of one auxiliary image is exposed to a selected illumination.
The term “line time” as used in the present disclosure means the sum of the time slices comprising a particular line plus the transition time between lines.
The term “Area type” as used in the present disclosure means a designated portion of the produced web or sheet with predetermined geometric and thin film layer structure characteristics. A current collection line is an example of an “area type”.
The term “Illumination field type” as used in the present disclosure means an illumination field such as a “dark field” or “bright field” created by a light source.
The term “Light source type” as used in the present disclosure means the type of the emitted light such as Infra Red, Red, Green or Blue.
The term “repair” or “defect correction” as used herein means at least one of: isolating material addition, conductor material addition, semiconductor material addition, and excessive material removal, including metal conductors.
The term “Present Method Logic” as used herein means multiperspective imaging (MPI), a technique combining what is seen from multiple perspectives or viewpoints into a single image. Multiperspective images can preserve and depict, within a single context, details that are simultaneously inaccessible from a single view, yet easily interpretable by a viewer or a computer.
The term “article” as used herein means a substrate coated with a thin film coating.
Defect “classification” is a process of associating defects detected in the course of TF layer inspection with predetermined defect types.
Fast switching light sources are LEDs, laser diodes, and sources implemented by combining a “permanent” light source, such as an incandescent or metal halide lamp, with a Digital Micromirror Device (DMD) of the kind used in Digital Light Processing (DLP) projectors.
Any production process that takes place before the thin layer inspection is termed “upstream process” and any process that takes place after the thin layer inspection is termed “downstream process.” Accordingly, such processes are performed on workstations located upstream or downstream.
The system and the method, both as to organization and method of operation, may best be understood by reference to the following detailed description when read with the accompanied drawings, in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the system and the method.
In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and show by way of illustration specific embodiments where the system and method may be practiced. In this regard, directional terminology, such as “top,” “bottom,” “front,” “back,” “leading,” “trailing,” etc., is used with reference to the orientation of the figure(s) being described. Because components of embodiments of the present apparatus can be positioned in a number of different orientations, the directional terminology is used for purposes of illustration and is in no way limiting.
Thin film layers or coating production processes may produce transparent, translucent, opaque or reflective layers. As with any other production process the TF production is not free of defects.
For example, bright field illumination enables detection of pinholes in a reflective coating and bumps on a transparent coating. However, it does not support defect classification, because both the pinholes and the bumps have an identical appearance in the field of view of the optical system. Table 1 indicates known illumination source types or schemes and their drawbacks, where
Bright field illumination (column 204) enables detection of five types of defects but aids in the classification of only one of them. Dark field illumination (column 208) enables detection of four types of defects, although it does not enable classification of the detected defects (column 200). Backlit illumination (column 212) is of use in the detection of three types of defects, but supports classification of only one of them. Generally, Table 1 shows that none of the existing illumination techniques alone is capable of detecting all of the mentioned defects, and the classification ratio of the detected defects is poor. It is also necessary to mention that some of the defects are micron-sized and therefore difficult to detect, while 100% inspection of the controlled part area is required. The use of multiple illumination sources assisting in classification of the detected defects increases the likelihood of reaching a successful classification. The goal of such a setup is the selection of a minimal number of illumination sources enabling classification of all types of defects in a predetermined thin layer.
Illumination unit 416 of quality control system 308 consists of a plurality of illumination sources with each source having different illumination characteristics. Such characteristics for example, may be illumination source intensity, wavelength, polarization, incidence angle, illumination duration and a combination thereof.
At least one illumination source 444 could be configured to produce a bright field illumination on layer 440 deposited on substrate 410. Additional sources may also be used, such as (a) illumination source 448 configured to produce a dark field illumination and/or (b) illumination source 452 configured to provide a backlit illumination and/or transmission illumination. An additional illumination source 456 is configured to illuminate layer 440 with infrared radiation. All illumination sources are configured to illuminate a line rather than a spot on layer 440. (It should be noted that the illuminated line is not shown in the figures.)
Illumination sources 444-456 of illumination unit 416 are arranged such that they allow the building of an optical setup most appropriate for particular TF layer quality control conditions. For example, if needed, the incidence angles, defined as the angle between the incident beam and a perpendicular 460 to the incidence point, may be adjusted. The number of illumination sources used supports proper illumination spectrum selection, providing a variety of wavelengths to be used in the TF production control process.
All of the illumination sources are operative to illuminate layer 440 for at least one time slice, although the time slice varies for each illumination source and to some extent depends on the characteristics of the image acquisition device 420. Two or more of the illumination sources may operate simultaneously, illuminating the same line with a plurality of wavelengths and/or types of illumination.
Illumination sources 444-456 may be monochromatic or broadband sources such as LEDs, laser diodes, quartz-halogen or metal-halide lamps, and others. An appropriate selection of the emission spectra of these sources illuminating an identical location (line) allows a desired spectral mix to be built up. Sources 444-456 may be arranged so as to illuminate a line on coated layer 440. A multi-fiber beam shape transformer with optional cylindrical optics (refractive or reflective) could be used to assist in forming the desired illuminated line. Such a shape transformer, having for example a circular input shape and a linear output shape (at the article side), may be produced as a bundle of fibers. Such fiber bundles are manufactured by many companies, for example Schott AG, 55120 Mainz, Germany; Dolan-Jenner Industries, Inc., Boxborough, Mass. 01719, U.S.A.; and others. Optionally, system 308 may include at least one illumination source providing polarized illumination of desired amplitude and phase characteristics (not shown).
As noted above, articles may have large dimensions, such that a single camera could have difficulty capturing them and an illumination source could have difficulty properly illuminating such a relatively long line.
Slow switching light sources, such as metal halide lamps or similar, require additional modulation devices. In addition, certain optics should be used to convert the usually circular beam into an illuminated line. For example, metal halide (or quartz-halogen) lamp radiation may be modulated in time using a standard DLP device based on the projection technique commercially available from Texas Instruments, Inc., Dallas, Tex., U.S.A. (http://www.dlp.com/). Both standard configurations of “three chip” and “one chip” DLP projectors may be used.
A metal halide light source 610 directs a “white” light beam 614 onto a three-chip DLP projector 618 that provides individually time-controlled Red 622, Green 626, and Blue 630 light beams. A “cold” mirror 646 is arranged at an angle such that it filters out the IR radiation present in each metal halide light source. Mirror 646 transmits the visible R, G, B illumination components 622, 626, 630 of beam 614, such that they are mixed into beam 650, whereas the IR component 634 is reflected onto a radiation absorbing screen 670. An additional metal halide light source 638 provides a broadband time-controlled beam 642. The IR component 666 of time-modulated beam 642 is mixed into the combined time-modulated beam 650, whereas the visible RGB components 654, 658 and 662 of beam 642 are transmitted to screen 670. The components 622, 626, 630 and 666 are combined into one beam 650, which illuminates a line on layer 440.
Image acquisition and processing device 420 (
As noted, the auxiliary images are acquired sequentially or concurrently. For every acquired line of the scanned image, each light source is applied for its own accumulation time.
Since in the course of the sequential image acquisition process the inspected substrate moves with respect to the sensor 420 (
PS(444,448)=(t448/T)*Pixel size,
In a similar way phase shift between images generated by illumination sources 448 and 452, and sources 444 and 452 may be defined:
PS(448,452)=(t452/T)*Pixel size, and PS(444,452)=((T−t448−t452)/T)*Pixel size.
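The phase-shift relations above can be sketched numerically as follows; this is a minimal illustration in which the time-slice values and the pixel size are hypothetical, and transition time between lines is neglected:

```python
def phase_shift(t_offset, line_time, pixel_size):
    """Phase shift between two auxiliary images whose exposures are
    offset by t_offset within one line time T, expressed in the same
    units as pixel_size."""
    return (t_offset / line_time) * pixel_size

# Hypothetical time slices (seconds) and pixel size (micrometers)
t444, t448, t452 = 40e-6, 30e-6, 30e-6
T = t444 + t448 + t452              # line time, transition time neglected

ps_444_448 = phase_shift(t448, T, pixel_size=10.0)             # PS(444,448)
ps_448_452 = phase_shift(t452, T, pixel_size=10.0)             # PS(448,452)
ps_444_452 = phase_shift(T - t448 - t452, T, pixel_size=10.0)  # PS(444,452)
```

With these values the three shifts come out as fractions of a 10-micrometer pixel, which is what the correlation step later compensates for.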
Numerals 744, 748 and 752 mark the process of data read-out from the ASR 1104 into the auxiliary image storage 1108, for data acquired using illumination sources 444, 448, and 452, respectively. The transfer time of the data from ASR 1104 into auxiliary image storage 1108 (
The actual illumination time slice, such as t444, t448 or t452, can be shorter than the transfer time 744. In such a case the ASR 1104 (
First illumination source 444 is applied to line (i) for respective detector (CCD) accumulation time. Concurrently with the acquisition of the line, the image of that line, moves to an analogue shift register (ASR) of the CCD. Next, illumination source 448 is applied to the same line or target area for its own accumulation time. Concurrently the image of line (i) is read from the ASR and stored as “first auxiliary image” of line (i). The process is repeated for the remaining illumination sources with all of the sources illuminating the same line or target area by the required type of illumination. Following completion of line (i) acquisition system control proceeds to the acquisition of the next (i+1) line and so on until the complete panel/frame image is acquired.
The phase shift between images illuminated by a combination of illumination sources 444 and 448 may be expressed as:
PS((444,448),(448,452))=(Max(t444,t(444,448))/T)*Pixel Size.
Between images (lines) illuminated by sources 448 and 452 and images (lines) illuminated by sources 444 and 448, the phase shift will be PS((448,452),(444,448))=(1−Max(t444,t(444,448))/T)*Pixel Size; that is, in units of pixel size, PS((448,452),(444,448))=(1−PS((444,448),(448,452))). The dependence expressed by the above equations enables correlation of the auxiliary images down to a zero phase shift. Line time T in this case is T>=(Max(t444,t(444,448)))+(Max(t452,t(448,452))).
Simultaneously with the acquisition of a plurality of auxiliary images corresponding to one or more of the illumination sources, processing of the images takes place. This processing may include generation of a phase correction factor and initiating pointers to each of the auxiliary images acquired. The correction factor and pointers will be used in the generation of the next target area and combined image. Following generation of a combined image it may be compared to a stored predefined (defect free) image and the deviations between the captured image and the predefined image may be determined. The number of these deviations and their magnitude indicate coating layer quality.
A process of system parameter set-up precedes the image acquisition process. The process of system parameter set-up includes at least one of: setting a list of auxiliary images to be used for a current job and the operating parameters of those images, the target area type, the illumination type, and an algorithm for determining the image geometrical measurements. The target area type may be at least one of cell area, separating line area, laser drills, and current collection “fingers,” together with determination of the image most suitable for geometry measurements. The system parameters acquired during the set-up process will be archived and used in the course of production for inspection and geometrical measurement purposes. Previously archived setup parameters can be reused for similar future jobs. Generally, the above disclosed setup processes enable selection of a minimal number of illumination sources and combinations thereof supporting optimal detection of the defects existing in the inspected thin layer.
The defect detection and classification process will now be explained in detail to further clarify the disclosed method and apparatus, the system parameter set-up process, and the interrelation between the set-up process and the defect detection and classification process.
During the set-up stage 1204, a plurality of auxiliary images will be acquired; the number of acquired auxiliary images is designated by n. A reduced set of the acquired auxiliary images will serve as references for the defect detection and classification processes. The number of images in the reduced set is designated by k, where k is less than n. Each auxiliary image is characterized by two attributes:
1) Local gray level value.
2) Variance of the gray levels in repeated measurements.
This plurality of auxiliary images and their respective attributes can be described by a two dimensional vector system, for example, an image vector IV(k,2), where k is the number of auxiliary images and 2 is the number of attributes. Such a vector enables differentiating between two locations in a k dimensional space using the gray levels and gray level variance. A one dimensional statistical distance vector SDV( ) can be created by dividing the gray level component by the variance component in image vector IV( ) for each auxiliary image.
The created distance vector SDV( ) will represent the statistical probability of misclassification of the auxiliary images, since the images are characterized by a Gaussian spread function.
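The image vector and statistical distance vector described above can be sketched as follows; the gray level and variance values are purely illustrative:

```python
import numpy as np

# Image vector IV(k, 2): k auxiliary images, each described by a local
# gray level (column 0) and the variance of that gray level over
# repeated measurements (column 1). Values are illustrative.
IV = np.array([
    [120.0, 4.0],
    [ 95.0, 2.5],
    [200.0, 8.0],
])

# One-dimensional statistical distance vector SDV( ): gray level divided
# by its variance, one entry per auxiliary image.
SDV = IV[:, 0] / IV[:, 1]
```

The larger an entry of SDV, the better that auxiliary image separates the location from measurement noise under the Gaussian spread assumption.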
A setup learning or calibration process will run on a fully loaded system equipped with suitable hardware. During this process, known defects will be inspected. In order to achieve a maximal signal-to-noise ratio for each reference auxiliary image, the process will optimize the imaging parameters of every illumination source for every target area. Every known defect will be mapped into an N-domain differential space, where N indicates the number of reference auxiliary images.
A reference defect vector will be created and denoted by DV( ), each member of the vector representing a parameter related to a reference auxiliary image (1-N). Each member of the DV( ) vector represents the distance between the nominal gray level value, denoted by NI(i), and the defect gray level value, denoted by DI(i), divided by the variance of NI(i), denoted by VNI(i). This enables a DV( ) vector structure to be created as follows:
DV{[NI(1)−DI(1)]/VNI(1), [NI(2)−DI(2)]/VNI(2), . . . , [NI(n)−DI(n)]/VNI(n)}.
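The DV( ) structure above can be sketched as follows; the gray level and variance values for the three reference auxiliary images are illustrative assumptions:

```python
import numpy as np

def defect_vector(NI, DI, VNI):
    """Reference defect vector DV( ): per reference auxiliary image, the
    distance between the nominal gray level NI(i) and the defect gray
    level DI(i), divided by the variance VNI(i) of the nominal value."""
    NI, DI, VNI = (np.asarray(v, dtype=float) for v in (NI, DI, VNI))
    return (NI - DI) / VNI

# Illustrative values for three reference auxiliary images
DV = defect_vector(NI=[120.0, 95.0, 200.0],
                   DI=[100.0, 90.0, 180.0],
                   VNI=[4.0, 2.5, 8.0])
```

Each entry is thus a variance-normalized contrast, so entries from different illumination sources can be compared on a common statistical scale.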
At the end of the process, a set of reference area type images to be used during the defect detection and classification process will be selected. The auxiliary image selection criteria will include the following factors:
1. The sum of all auxiliary image integration times (SITDDC, Summary Integration Time Defects Detection & Classification) is no longer than the line time allowed by the application.
2. The longest distance out of the group representing the shortest distances measured in vector DV( ). The distances are measured between all points of interest, for example: defects from their nominal locations and defects from other defects.
3. The shortest distance mentioned above is more than a given threshold; the threshold is at least 3.
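The three criteria above amount to a constrained max-min selection, which can be sketched as follows; the candidate layout, integration times, and threshold of 3 are illustrative assumptions:

```python
from itertools import combinations

import numpy as np

def select_image_set(candidates, integration_times, line_time, threshold=3.0):
    """Pick the candidate auxiliary-image subset whose summed integration
    time fits within the line time (criterion 1) and whose shortest
    pairwise distance between points of interest is the longest among
    all candidates (criterion 2), provided that distance exceeds the
    threshold (criterion 3). Returns None if no subset qualifies."""
    best, best_sep = None, threshold
    for subset, points in candidates.items():
        if sum(integration_times[i] for i in subset) > line_time:
            continue  # criterion 1: exceeds the allowed line time
        sep = min(np.linalg.norm(np.asarray(a) - np.asarray(b))
                  for a, b in combinations(points, 2))
        if sep > best_sep:  # criteria 2 and 3 together
            best, best_sep = subset, sep
    return best

# Illustrative candidates: points of interest mapped into DV( ) space
candidates = {
    (444, 448): [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)],  # well separated
    (444, 452): [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)],  # too close
}
integration_times = {444: 40e-6, 448: 30e-6, 452: 30e-6}
chosen = select_image_set(candidates, integration_times, line_time=100e-6)
```

In this toy example the first subset wins because its worst-case separation (5) clears the threshold, whereas the second subset's points lie too close together.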
The set-up process will also identify reference auxiliary images to be used for the purpose of geometrical measurements. For every target area type the process will identify all of its neighboring area types. In addition, borders between the area type and the neighboring area types will be identified and mapped into an N-domain differential space to create a border vector (N is the number of auxiliary images). The border vector is denoted by BVAt1-At2( ), where At1 represents an image area type and At2 is an image area type neighboring At1. The vector BVAt1-At2( ) is comprised of elements representing the distances between the nominal gray levels of image area type At1 and neighboring image area type At2, divided by the maximal variance between At1 and At2, and is computed for every reference auxiliary image. The nominal gray level values for At1 and At2 in an auxiliary image i will be denoted respectively as NIAt1(i) and NIAt2(i), and their maximal variance as VNIAt12(i), thus creating a border vector BVAt1-At2( ) structured as follows:
BVAt1−At2{[NIAt1(1)−NIAt2(1)]/VNIAt12(1), [NIAt1(2)−NIAt2(2)]/VNIAt12(2), . . . , [NIAt1(n)−NIAt2(n)]/VNIAt12(n)}.
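The border vector construction parallels that of DV( ) and can be sketched as follows; the gray levels and variances for the two reference auxiliary images are illustrative:

```python
import numpy as np

def border_vector(NI_at1, NI_at2, VNI_at12):
    """Border vector BVAt1-At2( ): per reference auxiliary image, the
    distance between the nominal gray levels of area type At1 and its
    neighboring area type At2, divided by their maximal variance."""
    NI_at1, NI_at2, VNI_at12 = (np.asarray(v, dtype=float)
                                for v in (NI_at1, NI_at2, VNI_at12))
    return (NI_at1 - NI_at2) / VNI_at12

# Illustrative values for two reference auxiliary images
BV = border_vector(NI_at1=[180.0, 60.0],
                   NI_at2=[140.0, 40.0],
                   VNI_at12=[8.0, 5.0])
```

A large entry means the corresponding auxiliary image shows a statistically sharp border between the two area types, making it a good candidate for geometrical measurements.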
At the end of the setup process, a set of reference auxiliary images to be used during the geometrical measurements of the job will be selected. The auxiliary image selection criteria will include the following factors:
The setup process 1204 produces reference information 1208 to be used for defect detection and defect classification during the job production process. The reference information 1208 comprises a selected minimal set of auxiliary images enabling defect detection and classification, a reference vector for every area type at its nominal production conditions, and a reference vector for every defect. The reference information 1208 is forwarded to detection module 1212 and classification module 1216. The detection reference information 1208 will also include a reduced list of auxiliary images sufficient for performing defect detection. Working on a reduced list of auxiliary images will increase the processing performance of the detection module 1212. The classification reference information 1208 will also include the vectors for all known defects and area types.
During job production, images of the TF layer are acquired. The defect detection module 1212 analyzes the job description parameters 1200 of the incoming job to identify the area types in the reference auxiliary images.
The defect detection module 1212 measures the gray level value of the acquired pixel and the difference between the actual value and the value expected for the area type at that pixel. If the measured difference exceeds a given threshold, the detection module 1212 calculates the vector distance between the nominal location of the area type and the pixel location on the vector. If this distance is within a permitted threshold, no defect is detected and the process is repeated for the next pixel. Otherwise, a defect is detected and the system proceeds to the classification stage.
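The two-stage test performed by the detection module can be sketched as follows; the thresholds and pixel values are illustrative assumptions, not the system's actual parameters:

```python
def detect_defect(pixel_gray, expected_gray, variance,
                  gray_threshold, distance_threshold):
    """Two-stage detection sketch: first compare the measured gray level
    against the value expected for the area type; only if the raw
    difference exceeds gray_threshold is the variance-normalized
    (statistical) distance computed, and a defect is flagged when that
    distance also exceeds distance_threshold."""
    difference = abs(pixel_gray - expected_gray)
    if difference <= gray_threshold:
        return False  # within the expected spread: no defect
    return (difference / variance) > distance_threshold

suspect = detect_defect(pixel_gray=150.0, expected_gray=120.0,
                        variance=4.0, gray_threshold=10.0,
                        distance_threshold=3.0)  # flagged as a defect
clean = detect_defect(pixel_gray=125.0, expected_gray=120.0,
                      variance=4.0, gray_threshold=10.0,
                      distance_threshold=3.0)    # within the threshold
```

The cheap gray-level comparison acts as a filter, so the costlier distance calculation runs only on suspicious pixels, which matches the throughput requirement of a continuously moving line.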
Each detected defect is submitted to the defect classification module 1216 for classification. During classification, the classification module 1216 identifies the area type at which the defect originated. The classification module 1216 measures the distance between the average defect location on the vector and all defect types previously identified for that inspected area type during the setup process. The classification module picks the closest defect type that stands within the tolerances defined for that defect type and classifies the defect as such. In the case where the classification module is unable to reach an unambiguous result, the defect is classified as an “other defect”. The defect classification information, along with the detected defect locations, may be submitted to upstream production steps 304 (
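A nearest-reference classification of this kind can be sketched as follows; the defect names, reference vectors, and tolerance are hypothetical:

```python
import numpy as np

def classify_defect(defect_vec, reference_defects, tolerance):
    """Pick the known defect type whose reference vector lies closest
    (Euclidean distance) to the measured defect vector, provided the
    distance is within the tolerance defined for classification;
    otherwise return "other defect"."""
    best_type, best_dist = "other defect", float("inf")
    defect_vec = np.asarray(defect_vec, dtype=float)
    for name, ref in reference_defects.items():
        dist = float(np.linalg.norm(defect_vec - np.asarray(ref, dtype=float)))
        if dist <= tolerance and dist < best_dist:
            best_type, best_dist = name, dist
    return best_type

# Hypothetical reference vectors from the setup stage
references = {"pinhole": [5.0, 2.0, 2.5], "scratch": [1.0, 6.0, 0.5]}
label = classify_defect([4.8, 2.1, 2.4], references, tolerance=3.0)
other = classify_defect([20.0, 20.0, 20.0], references, tolerance=3.0)
```

The explicit tolerance check is what produces the “other defect” fallback: a measured vector far from every reference is reported as unclassified rather than forced into the nearest known type.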
Production line 300 or a similar line equipped with system 308 and proper repair devices could be used to improve thin layer production yield. Thin layer process defects could be identified according to one of the sequential or concurrent image acquisition methods disclosed above. Identified defects could be tagged according to a set of predetermined production defect references and communicated to the upstream production steps 304, where some parameters of the production process would be changed, or downstream to a repair station. The analysis of the tagged defects is performed without interrupting the movement of the production line. The defects are classified, and repair instructions specifying the type of repair to be performed are issued and communicated to at least one repair station.
There may be a number of image quality control stations on each production line. The stations may have different sets of illumination sources and operate at different resolutions. The stations may communicate with one another. For example, an upstream station may detect certain defects, communicate them to a downstream station, and request a more thorough control sequence with a different illumination source combination or higher resolution. The communication between the stations may trigger, for example, a metrology system for layer thickness measurement or control of other parameters. The stations may communicate to each other images captured at different resolutions or with different illumination sources and process them in a synchronous mode.
There may be a variety of repair station types on the production floor. A repair station for coating layer defect correction would typically include a communication facility for receiving defect location coordinates. The station may be a node on the communication network connecting on-line all or most of the production and engineering equipment related to the process, or it may have a device for reading removable storage media on which the results of the quality control process have been recorded.
A repair station may be part of production line 300 (On-line TF repair station,
The disclosed method and apparatus support operation of system 308 (
The operation of the systems disclosed, according to the method described, will control each of the individually acquired images produced by different illumination sources to provide the full dynamic range of each and every one of the images, regardless of the differences that may exist in combined illumination intensity and sensor sensitivity.
The disclosed method and system are operative for all optical illumination field set-ups, such as bright field, diffusive illumination, dark field, backlit and other illumination types and schemes, as well as in different spectral zones such as IR, UV, and specific colors. The method applies to different physical phenomena, such as fluorescent material reaction to UV light, polarization effects and others, as long as the sensible energy emission disappears during the transition between the time slices.
Although demonstrated on thin layer photovoltaic coating manufacture, the method and system are applicable to wafer based photovoltaic products, crystalline and polysilicon coatings, and almost any large format article manufacturing process, such as Flexible Displays and e-Paper, RFID (passive and active antennae), OLEDs and others.
While the exemplary embodiment of the method of and apparatus for improving production yield of thin layer based products have been illustrated and described, it will be appreciated that various changes can be made therein without affecting the spirit and scope of the method. The scope of the method, therefore, is defined by reference to the following claims:
This application, which is a non-provisional application being filed under 37 CFR 1.53(b) and 35 USC 111, claims the benefit of the priority date of the United States Provisional Application for patent filed on Mar. 30, 2008 and assigned Ser. No. 61/040,914, which application is hereby incorporated by reference.