Advances in plasma processing have provided for growth in the semiconductor industry. To be competitive, a manufacturing company needs to be able to process the substrates into quality semiconductor devices. Tight control of the process parameters is generally needed to achieve satisfactory results during substrate processing. When the processing parameters (e.g., RF power, pressure, bias voltage, ion flux, plasma density, and the like) fall outside of a pre-defined window, undesirable processing results (e.g., poor etch profile, low selectivity, damage to the substrate, damage to the processing chamber, and the like) may result. Accordingly, the ability to identify conditions when the processing parameters are outside the pre-defined windows is important in the manufacture of semiconductor devices.
During substrate processing, certain uncontrolled events may happen that may damage the substrate and/or cause damage to the processing chamber components. To identify the uncontrolled events, data may be collected during substrate processing. Monitoring devices, such as sensors, may be employed to collect data about the various process parameters (such as bias voltage, reflected power, pressure, and the like) during substrate processing. As discussed herein, the term "sensor" refers to a device that may be employed to detect conditions and/or signals of a plasma processing component. For ease of discussion, the term "component" will be used to refer to a single-part or a multi-part assembly in a processing chamber.
The type and amount of data that are being collected by the sensors have increased in recent years. By analyzing the data collected by the sensors in relation to the process module data and the process context data (chamber event data), parameters that are outside of the pre-defined window may be identified. Accordingly, corrective actions (such as recipe adjustment) may be provided to stop the uncontrolled event(s), thereby preventing further damage from occurring to the substrate and/or the processing chamber components.
The invention relates, in an embodiment, to a process-level troubleshooting architecture (PLTA) configured to facilitate substrate processing in a plasma processing system. The architecture includes a process module controller. The architecture also includes a plurality of sensors, wherein each sensor of the plurality of sensors communicates with the process module controller to collect sensed data about one or more process parameters. The architecture further includes a process-module-level analysis server, wherein the process-module-level analysis server communicates directly with the plurality of sensors and the process module controller. The process-module-level analysis server is configured for receiving data, wherein the data include at least one of the sensed data from the plurality of sensors and process module and chamber data from the process module controller. The process-module-level analysis server is also configured for analyzing the data and sending interdiction data directly to the process module controller when a problem is identified during the substrate processing.
The above summary relates to only one of the many embodiments of the invention disclosed herein and is not intended to limit the scope of the invention, which is set forth in the claims herein. These and other features of the present invention will be described in more detail below in the detailed description of the invention and in conjunction with the following figures.
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
The present invention will now be described in detail with reference to a few embodiments thereof as illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without some or all of these specific details. In other instances, well known process steps and/or structures have not been described in detail in order to not unnecessarily obscure the present invention.
Various embodiments are described hereinbelow, including methods and techniques. It should be kept in mind that the invention might also cover articles of manufacture that include a computer readable medium on which computer-readable instructions for carrying out embodiments of the inventive technique are stored. The computer readable medium may include, for example, semiconductor, magnetic, opto-magnetic, optical, or other forms of computer readable medium for storing computer readable code. Further, the invention may also cover apparatuses for practicing embodiments of the invention. Such apparatus may include circuits, dedicated and/or programmable, to carry out tasks pertaining to embodiments of the invention. Examples of such apparatus include a general-purpose computer and/or a dedicated computing device when appropriately programmed and may include a combination of a computer/computing device and dedicated/programmable circuits adapted for the various tasks pertaining to embodiments of the invention.
As aforementioned, to gain a competitive edge, manufacturers have to be able to effectively and efficiently troubleshoot problems that may arise during substrate processing. Troubleshooting generally involves analyzing the plethora of data collected during processing. To facilitate discussion, an example prior art arrangement is described below.
Consider the situation wherein, for example, a manufacturing company may have one or more cluster tools (such as etch tools, cleaning tools, strip tools, and the like). Each cluster tool may have a plurality of processing modules, wherein each processing module is configured for one or more specific processes. Each cluster tool may be controlled by a cluster tool controller (CTC), such as CTC 104, CTC 106, and CTC 108. Each cluster tool controller may interact with one or more process module controllers (PMCs), such as PMCs 110, 112, 114, and 116. For ease of discussion, examples will be provided in relation to PMC 110.
In order to identify conditions that may require intervention, sensors may be employed to collect data (sensed data) about processing parameters during substrate processing. In an example, during substrate processing a plurality of sensors (such as sensors 118, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138, and 140) may interact with the process module controllers to collect data about one or more processing parameters. The types of sensors employed may depend upon the types of data to be collected. For example, sensor 118 may be configured to collect voltage data. In another example, sensor 120 may be configured to collect pressure data. Generally, the sensors that may be employed to collect data from a process module may be of different brands, makes, and/or models. As a result, a sensor may have little or no interaction with another sensor.
Usually, a sensor is configured to collect measurement data about one or more specific parameters. Since most sensors are not configured to perform processing, each sensor may be coupled to a computing module (such as a computer, user interface, and the like). The computing module is usually configured to process the analog data and to convert the raw analog data into a digital format.
In an example, sensor 118 collects voltage data from PMC 110 via sensor cable 144. The analog voltage data received by sensor 118 is processed by a computing module 118b. The data collected by the sensors are sent to a host-level analysis server (such as data box 142). Before sending the data onward to data box 142 over the network connection, the data is first converted from an analog format into a digital format by the computing module. In an example, computing module 118b converts the analog data collected by sensor 118 into a digital format before sending the data over a network path 146 to data box 142.
Data box 142 may be a centralized analysis server that is configured to collect, process, and analyze data from a plurality of sources, including the sensors and the process modules. Usually, one data box may be available to process the data collected during substrate processing by all of the cluster tools of a single manufacturing company.
The actual amount of data that may be transmitted to data box 142 may be significantly less than the amount collected by the sensors. Usually, a sensor may collect a massive amount of data. In an example, a sensor may collect data at rates of up to 1 megabyte per second. However, only a fraction of the data collected by the sensors is sent to data box 142.
One reason for not transmitting the entire data streams collected by the sensors to data box 142 is the network bandwidth limitation when using cost-effective, commercially available communication protocols. The network pipeline to data box 142 may not be able to handle a large volume of data from a plurality of sources (such as sensors 118, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138, and 140) being sent to a single receiver (such as data box 142). In other words, the network path between the sensor arrangements (sensor and computing module) and data box 142 may experience major traffic congestion as data box 142 tries to receive the massive amount of data coming from all of the sensor arrangements. As can be appreciated from the foregoing, if data box 142 is unable to handle the incoming traffic, the data packets being sent may be dropped and may have to be resent, thereby putting an additional burden onto the already heavily congested network pipeline.
In addition, data box 142 may not be able to handle a high volume of incoming data from multiple sources while at the same time performing other important functions, such as processing and analyzing data. As aforementioned, data box 142 is configured not only to receive the incoming data packets but also to process and analyze all of the incoming data streams. Since data box 142 is the analysis server for the different data streams being collected, data box 142 needs sufficient processing capability to perform analysis on the plethora of data streams.
Since data box 142 has limited processing resources, only a fraction of the data collected from each sensor is sent to data box 142. In an example, of the thousands of data items that may be collected by a single sensor, only 10-15 data items at 1-5 hertz may be forwarded to data box 142. In one example, only a summary of the data collected by sensor 118 may be sent to data box 142.
In addition to receiving data from the plurality of sensors, data box 142 may also be receiving data from the process module controllers. In an example, process module data and process context data (chamber-event data) may be collected by each process module controller and forwarded to data box 142. For ease of discussion, process module data and process context data may also be referred to as process module and chamber data. For example, process module data and process context data may be collected by PMC 110 and sent to CTC 104 via a path 148. CTC 104 is not only managing the data from PMC 110 but may also be handling the data from the other process module controllers within the cluster tool (such as PMC 112, PMC 114, and PMC 116).
The data collected by the cluster tool controller is then transmitted to a fab host 102 via a semiconductor equipment communication standard/generic equipment module (SECS/GEM) interface. In an example, CTC 104 transmits data collected from PMCs 110, 112, 114, and/or 116 to fab host 102 through SECS/GEM 156 via a path 150. Fab host 102 may not only be receiving data from CTC 104, but also may be receiving data from other cluster tool controllers, such as CTCs 106 and 108, for example. The data collected by fab host 102 is then forwarded to data box 142 via a path 158. Due to the sheer volume of data being collected, not all data being sent to fab host 102 is forwarded to data box 142. In many instances, only a summary of the data may be transmitted to data box 142.
Data box 142 may process, analyze and/or correlate the data collected by the sensors and the process module controllers. If an anomaly is identified, data box 142 may then determine the source of the problem, such as a parameter that is not in conformance with a recipe step being performed in PMC 110, for example. Once the source of the problem has been identified, data box 142 may send an interdiction in the form of an Ethernet message to fab host 102. Upon receiving the message, fab host 102 may forward the message through SECS/GEM 156 to CTC 104. The cluster tool controller may then relay the message to the intended process module controller, which is PMC 110 in this example.
Unfortunately, the interdiction is usually not provided in real-time. Instead, the interdiction is usually received by the intended process module after the affected substrate has been processed or even after the entire substrate lot has exited the process module. Accordingly, not only may the substrate/substrate lot have been damaged, but one or more processing chamber components may also have been negatively impacted, thereby increasing waste and increasing the cost of ownership.
One reason for the delay is due to the sheer volume of data being received from a plethora of sources. Even if data box 142 may be configured with a fast processor and have sufficient memory to handle the large volume of data streams, data box 142 may still need time to process, correlate and/or analyze all of the data being collected.
Another reason for the delay in receiving the interdiction by the process module is due to the incomplete data streams that are being received by data box 142. Since data box 142 is receiving data from a plethora of sources, the actual data that is being sent to data box 142 is significantly less than the data being collected. In an example, instead of sending the 1 gigahertz data stream that is being collected by sensor 118, only a fraction (about 1-5 hertz) of the data is actually being sent. As a result, even though data box 142 is receiving a high volume of data from all of its sources, the data that is being received is usually incomplete. Thus, determining an uncontrolled event may take time given that data box 142 may not have access to the complete data set from all sources.
In addition, the paths by which the data are being sent to data box 142 may vary. In an example, data are sent directly from a sensor arrangement (that is, a sensor and its computing module) after the analog data has been converted into digital data. In contrast, the data collected by the process module is transmitted over a longer network path (through at least the cluster tool controller and the fab host). Accordingly, data box 142 is unable to complete its analysis until all related data streams have been received.
Not only is the network path between a process module and data box 142 longer, but the data streams sent through this path usually face at least two bottlenecks. The first bottleneck is at the cluster tool controller. Since the data collected by the process modules within a cluster tool is sent to a single cluster tool controller, the data streams from the various process modules have to be funneled through that single controller. Given the sheer volume of data that can be transmitted from each process module, the network path to the cluster tool controller usually experiences heavy traffic congestion.
Once the data has been received by the cluster tool controller, the data is transmitted to fab host 102. The second bottleneck may occur at fab host 102. Given that fab host 102 may be receiving data from various cluster tool controllers, traffic into fab host 102 may also be experiencing congestion due to the high volume of data being received.
Since data box 142 needs the data from the different sources in order to determine an uncontrolled event, the traffic condition between a process module and data box 142 prevents timely delivery of the data streams to data box 142. As a result, precious time is lost before data box 142 has gathered all the necessary data to perform analysis. Furthermore, once an interdiction is prepared, the interdiction has to travel through the same lengthy path back to the affected process module before the interdiction can be applied to perform corrective action.
Another factor contributing to the delay is the challenge of correlating data from the various data sources. Since the data streams being received by data box 142 are usually a summary of the data collected from each sensor and/or process module, correlating the data may be a challenging task since the available data streams may be at different time intervals. In an example, the selected data streams transmitted to data box 142 from sensor 118 may be at a one second interval while the data streams from PMC 110 may be at a two second interval. As a result, correlating data streams may require time before an uncontrolled event may be definitively determined.
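The following is a minimal sketch, for illustration only, of the kind of time alignment such a correlation step may require: pairing each sample of a sparser stream (e.g., the two second interval stream from PMC 110) with the most recent sample of a denser stream (e.g., the one second interval stream from sensor 118). The function and the synthetic streams are hypothetical and are not part of the disclosed architecture.

```python
# Minimal sketch: aligning two data streams sampled at different intervals
# before correlation. Names, values, and intervals are illustrative only.

def align_streams(stream_a, stream_b):
    """Pair each sample in the sparser stream with the most recent sample
    from the denser stream, based on timestamps (seconds)."""
    aligned = []
    i = 0
    for t_b, value_b in stream_b:                 # sparser stream (e.g., 2 s interval)
        # advance through the denser stream until its timestamp passes t_b
        while i + 1 < len(stream_a) and stream_a[i + 1][0] <= t_b:
            i += 1
        t_a, value_a = stream_a[i]
        aligned.append((t_b, value_a, value_b))
    return aligned

# Hypothetical streams: sensor data at 1 s intervals, process module data at 2 s intervals.
sensor_stream = [(t, 100.0 + t * 0.1) for t in range(0, 10)]        # (time, bias voltage)
module_stream = [(t, 15.0 + t * 0.01) for t in range(0, 10, 2)]     # (time, pressure)

for row in align_streams(sensor_stream, module_stream):
    print(row)
```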
An additional challenge for correlating the data is due to the different paths by which the data are being sent to data box 142. As the data is being transmitted through different computers, servers, and the like, the data may be exposed to computer drift, network latency, network loading, and the like. As a result, data box 142 may have difficulty correlating the data from the various sources. Given that a tight correlation is required to quickly identify uncontrolled events, more analysis may be required before an uncontrolled event may be accurately identified.
Another disadvantage of the solution described above is the delay between the occurrence of an uncontrolled event and the application of the corrective interdiction.
To reduce the time delay between the occurrence of the uncontrolled event within the process module and the receipt of the interdiction by the process module, a cluster-level analysis server is provided.
In the cluster-level arrangement, each cluster tool is again controlled by a cluster tool controller (such as CTC 204 or CTC 206) that forwards data to a fab host 202, and the process module controllers (such as PMC 210) and their associated sensors collect data during substrate processing.
Between fab host 202 and CTC 204, a serial tap may be connected to network path 250 to duplicate the data being forwarded to fab host 202. In an example, a serial tap 208 may intercept the data being forwarded by CTC 204 to fab host 202. The data is duplicated, and a copy of the data stream is sent to remote controller 242 via a path 254. If the fab host is connected to more than one cluster tool controller, then a dedicated remote controller is associated with each cluster tool controller. In an example, the data being sent from CTC 206 to fab host 202 via a path 252 is intercepted by another serial tap (256). The data is duplicated and sent via a path 258 to a remote controller (260) that is different than the remote controller (242) associated with CTC 204.
Hence, instead of a single data box handling all the data from the various cluster tools, multiple remote controllers may be available to handle the data from the various cluster tools. In other words, each cluster tool is associated with its own remote controller. Since each remote controller is handling data from fewer data sources (such as the process module controllers and the sensors associated with a single cluster tool), each remote controller is able to handle a higher volume of data from each source. In an example, instead of 30-100 data items being sent, about 40 kB-100 kB of data at 10 hertz may now be received by each remote controller.
Data received from the sensors and the process module controllers are analyzed by the remote controller. If a problem is identified, the remote controller may send an interdiction to the cluster tool controller. In an example, remote controller 242 identifies a problem within PMC 210. An interdiction is sent via paths 254 and 250 through serial tap 208 to CTC 204. Upon receiving the interdiction, CTC 204 forwards the interdiction to the intended process module controller, which is PMC 210 in this example.
Since the remote controller is only responsible for handling data from one cluster tool instead of a plurality of cluster tools (as is done by data box 142), more data may be analyzed and better correlation may exist between the different data sets. As a result, the remote controller may perform better and faster analysis, thereby providing more timely intervention to correct an uncontrolled event within a processing module. In an example, instead of receiving an interdiction to prevent an identified uncontrolled event from happening in the next substrate lot (such as the interdiction provided by data box 142), the interdiction sent by remote controller 242, for example, may enable the process engineers to salvage at least part of the substrate lot that is scheduled to be processed.
Although the remote controller solution is a better solution than the data box solution, the remote controller solution still depends upon summary data to perform its analysis. As a result, problems that may be occurring during substrate processing may remain unidentified. Further, the path between the process module and the remote controller is still not a direct path. As a result, computer drift, network latency, and/or network loading may cause time discrepancies that make it difficult for the remote controller to correlate the data from the sensors with the data from the process modules.
Thus, even though the remote controller solution has increased the timeliness of the interdiction, the remote controller solution is still inadequate. At best, the interdiction may be able to prevent a problem experienced by the affected substrate from occurring during the processing of the next substrate. In a fiercely competitive market where cost needs to be minimized, waste due to damaged substrates and/or downtime due to damaged processing chamber components may translate into market loss. Accordingly, a real-time solution for identifying uncontrolled events is desired.
In accordance with embodiments of the present invention, a process-level troubleshooting architecture (PLTA) is provided in which troubleshooting is performed at the process module level. Embodiments of the invention include a process-level troubleshooting architecture that provides for real-time analysis with real-time interdiction. Embodiments of the invention further include arrangements for load balancing and fault tolerance between sensors.
In an embodiment of the invention, the process-level troubleshooting architecture is a network system in which an analysis server is communicating with a single processing module and its corresponding sensors. In an embodiment, the information being exchanged in the network is bidirectional. In an example, the analysis server may be continually receiving process data from the processing module and sensors. Conversely, the sensors may be receiving data from the processing module and the processing module may be receiving instructions from the analysis server.
Consider the situation wherein, for example, a substrate is being processed. During substrate processing, a plurality of data may be collected. In an example, data about pressure is collected every 100 milliseconds. If the processing takes one hour, 36,000 data items have been collected for the pressure parameter. However, a plurality of other process data (e.g., voltage bias, temperature, etc.), besides pressure data, may also be collected. Thus, a considerable amount of data is being collected by the time the substrate process has completed.
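The figures above can be verified with simple arithmetic; the following sketch works through the sample count, with the number of additional monitored parameters chosen purely for illustration.

```python
# Worked example of the data volume described above; the number of
# monitored parameters is illustrative.

SAMPLE_INTERVAL_S = 0.1          # one pressure reading every 100 milliseconds
PROCESS_DURATION_S = 60 * 60     # a one-hour process

samples_per_parameter = int(PROCESS_DURATION_S / SAMPLE_INTERVAL_S)
print(samples_per_parameter)     # 36000 data items for pressure alone

num_parameters = 20              # hypothetical: pressure, bias voltage, temperature, ...
print(samples_per_parameter * num_parameters)   # 720000 data items per substrate
```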
In the prior art, the data are transmitted to an analysis server that may be configured to service data collected from a plurality of processing modules (such as remote controller 242 discussed above).
In one aspect of the invention, the inventors herein realized that a more accurate and quicker analysis may be performed if more granular data is available for analysis. In order to analyze more data from a single source, the analysis server has to be analyzing data from fewer sources. In an embodiment, an arrangement is provided for processing and/or analyzing data at a process module level. In other words, a process-module-level analysis server is provided for performing analysis for each process module and its corresponding sensors.
In an embodiment, the process-module-level analysis server includes a shared memory backbone that may include one or more processors. Each processor may be configured to interact with one or more sensors. In an example, data collected by sensor 1 may be processed by processor 1 while data collected by sensor 2 is processed by processor 2.
Unlike the prior art, the processors may share their processing power with one another to perform load balancing and fault tolerance. In the prior art, a computing module is configured to handle the data collected by a sensor. Since the computing modules are individual units and usually do not interact with one another, load balancing is usually not performed. Unlike the prior art, the set of processors within the process-module-level analysis server may perform load balancing. In an example, if processor 1 is experiencing data overload while processor 2 is receiving little or no data, processor 2 may be recruited to assist processor 1 in processing the data from sensor 1.
Furthermore, in the prior art, if a computing module is malfunctioning, other computing modules are unable to take over the processing performed by the malfunctioning computing module since the computing modules tend to be of different brands/makes/models. Unlike the prior art, workload may be redistributed between the processors as needed. For example, if processor 2 is unable to perform its function, the workload may be redistributed to other processors until processor 2 is fixed. As can be appreciated from the foregoing, the processors eliminate the need for individual computing modules, thereby also reducing the physical space required to house the computing modules.
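One way to picture load balancing and workload redistribution over a shared backbone is a common work queue that any processor can drain; the sketch below is a simplified illustration under that assumption (using threads and an in-process queue as stand-ins) and is not the disclosed implementation.

```python
# Minimal sketch of load balancing over a shared backbone: sensor readings go
# into one shared queue, and any available processor (worker thread here) can
# service them. If one worker stalls or fails, the others keep draining the
# queue. This is an illustration only, not the patented implementation.
import queue
import threading

work_queue = queue.Queue()          # stands in for the shared memory backbone

def processor(name):
    while True:
        sensor_id, reading = work_queue.get()
        if sensor_id is None:       # shutdown sentinel
            work_queue.task_done()
            break
        # ... convert, filter, and summarize the reading here ...
        print(f"{name} handled sensor {sensor_id}: {reading}")
        work_queue.task_done()

workers = [threading.Thread(target=processor, args=(f"processor_{i}",)) for i in (1, 2)]
for w in workers:
    w.start()

for i in range(10):                 # a burst of readings from sensor 1
    work_queue.put((1, 100.0 + i))
work_queue.join()

for _ in workers:                   # stop the workers
    work_queue.put((None, None))
for w in workers:
    w.join()
```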
In an embodiment of the invention, the processors may be divided into two types: primary processors and secondary processors. Both primary and secondary processors are configured to handle data from sensors. In an example, if secondary processor 1 is associated with sensor 1, then secondary processor 1 usually only processes data coming from sensor 1. Likewise, if secondary processor 2 is associated with sensors 2 and 3, then secondary processor 2 usually only processes data coming from those two sensors (2 and 3).
In an embodiment, the shared memory backbone may include one or more primary processors. The set of primary processors may be configured not only to handle data from the sensors but may also be configured to handle data coming from the processing module. In addition, the set of primary processors is configured to correlate the data between the various sources (such as the sensors and processing module) and perform analysis. If an interdiction is needed, the set of primary processors is configured to send the interdiction to the process module controller.
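As a rough illustration of the division of labor described above, the sketch below has hypothetical secondary processors reduce raw sensor samples to summaries, and a primary processor check the summaries against pre-defined windows before issuing an interdiction. The parameter names, window limits, and corrective action are assumptions made only for the example.

```python
# Hedged sketch of the primary/secondary split: secondary processors reduce raw
# sensor streams to per-sensor summaries, and the primary processor combines the
# summaries with process module data and decides whether to interdict. The
# window limits and parameter names are hypothetical.

PARAMETER_WINDOWS = {
    "bias_voltage": (95.0, 105.0),   # hypothetical pre-defined window
    "pressure": (14.0, 16.0),
}

def secondary_summarize(sensor_name, samples):
    """Secondary processor: reduce a raw sample list to a summary record."""
    return {"sensor": sensor_name, "min": min(samples), "max": max(samples)}

def primary_analyze(summaries, process_module_data):
    """Primary processor: check every summary against its pre-defined window."""
    interdictions = []
    for summary in summaries:
        low, high = PARAMETER_WINDOWS[summary["sensor"]]
        if summary["min"] < low or summary["max"] > high:
            interdictions.append({
                "parameter": summary["sensor"],
                "recipe_step": process_module_data.get("recipe_step"),
                "action": "adjust recipe",       # placeholder corrective action
            })
    return interdictions

summaries = [
    secondary_summarize("bias_voltage", [99.8, 100.2, 107.5]),   # excursion
    secondary_summarize("pressure", [15.0, 15.1, 15.2]),
]
print(primary_analyze(summaries, {"recipe_step": 3}))
```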
The features and advantages of the present invention may be better understood with reference to the figures and discussions that follow.
The data collected by each processing module is gathered by its corresponding process module controller (PMC 306, PMC 308, PMC 310, or PMC 312) and transmitted to a fab host 302 via a cluster tool controller (CTC) 304. The data that may be transmitted by the PMCs may be the same type of data (process module data and process context data) that has been previously sent in the prior art. Unlike the prior art, the data being transmitted to fab host 302 is not relied upon by the processing modules to perform troubleshooting. Instead, the data may be archived and made available for future analysis.
In an embodiment, a process-module-level analysis server (APECS 314) is provided to perform the analysis needed for troubleshooting. Consider the situation wherein a substrate is being etched in PMC 308. During substrate processing, sensors 316, 318, and 320 are collecting data from PMC 308. In an example, sensor 316 is configured to collect voltage bias data from PMC 308. Analog data collected from PMC 308 is sent via sensor cable 328 to sensor 316. Likewise, sensors 318 and 320 may be collecting data via sensor cables 330 and 332, respectively. The data collected by the sensors may then be transmitted via one of the paths 322, 324, and 326 to APECS 314 for processing and/or analysis.
Unlike the prior art, data collected by the sensors do not have to be preprocessed (such as summarized, for example) before being transmitted to the analysis server (APECS 314). In an embodiment, instead of having a computing module to process the data, each sensor may include a simple data converter that may be employed to convert the analog data into digital data before forwarding the data to APECS 314. Alternatively, a data converter, such as a field-programmable gate array (FPGA) may be built into APECS 314, in an embodiment. In an example, each processor may include a data converter algorithm for converting the data into a digital format as part of its processing. As can be appreciated from the foregoing, by eliminating the need for a computing module, less physical space is required to house the cluster tool and its hardware. As a result, the cost of ownership may be reduced.
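For illustration, the analog-to-digital conversion performed by such a data converter may be modeled as simple quantization; the reference voltage and bit resolution below are assumed values, not parameters taken from this disclosure.

```python
# Illustrative sketch of the analog-to-digital conversion step a simple data
# converter performs before the data reach APECS 314. The reference voltage
# and resolution are hypothetical.

def to_digital(analog_voltage, v_ref=10.0, bits=12):
    """Quantize an analog voltage in [0, v_ref) to an integer code."""
    levels = 2 ** bits
    code = int(analog_voltage / v_ref * levels)
    return max(0, min(levels - 1, code))        # clamp to the valid code range

print(to_digital(3.3))    # e.g., 3.3 V -> code 1351 at 12-bit resolution
```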
Since APECS 314 is dedicated to processing data only from one processing module and its corresponding sensors, APECS 314 is able to handle a higher volume of data coming from a single source. In other words, instead of having to pare down the volume of data transmitted from each sensor, APECS 314 is configured to handle most, if not all, of the data collected by each sensor. In an example, instead of just 10-15 data items being sent for analysis, more than two thousand data items from each sensor may now be available for analysis by APECS 314. As a result, the data stream that is available for APECS 314 to process and analyze is a more complete data set.
In an embodiment, APECS 314 is also configured to handle the data coming from the processing module. Unlike the prior art in which the data stream is sent through a lengthy data path through various servers (e.g., cluster tool controller, fab host, etc.) before being received by the analysis server (such as the data box or the remote controller), the data collected by the process module is sent directly to APECS 314 without having to go through other servers. In an example, process module data may be sent from PMC 308 to APECS 314 via a path 334. If an uncontrolled event is identified, an interdiction may be sent directly to PMC 308 via a path 336 without having to go through other servers first.
Further details about the process-module-level analysis server are provided in the discussion that follows.
Data may flow from two main sources: data collected by sensors and data collected by a process module. In an embodiment, APECS 400 is configured to receive incoming data from a plurality of sensors (sensors 410, 412, 414, 416, 420, 422, 424, and 426). Given that some cluster tool owners may have already invested a considerable amount of money into the traditional sensor arrangement (sensor with a computing module), APECS 400 is configured to accept data from both the traditional sensor arrangements and the modified sensors (sensors that do not require a computing module).
In an embodiment, APECS 400 may include an interface, such as Ethernet switch 418, for interacting with traditional sensor arrangements (such as sensors 410, 412, 414, and 416). In an example, data collected by sensor 410 is first converted from an analog format into a digital format by computing module 410b before the digital data is transmitted to APECS 400 (via paths 430, 432, 434, or 436). Ethernet switch 418 is configured to interact with the traditional sensor arrangements to accept the data streams. The data streams are then passed (via paths 446, 448, 450, or 452) to one of the processors (402, 404, 406, and 408) within APECS 400 for processing.
Instead of utilizing a traditional sensor arrangement for measuring process parameters, a modified sensor (one without a computing module) may be employed. Since the data collected does not have to be summarized, a computing module is no longer required for processing. Instead, a modified sensor may include a data converter (not shown), such as an inexpensive FPGA, for converting data from an analog format to a digital format, in an embodiment. Alternatively, instead of installing a data converter within the sensors, a data converter (not shown) may be installed within APECS 400. Regardless of whether the data converter is installed externally or internally to APECS 400, the elimination of the computing module provides a cost saving in the ownership of the cluster tool. In an example, the cost to purchase, house, and maintain the computing module is substantially eliminated.
In an embodiment of the invention, APECS 400 includes a set of processors (402, 404, 406, and 408) for handling the incoming data. The set of processors may be physical processing units, virtual processors, or a combination thereof. Each processor is responsible for handling the data streams from the sources associated with the processor. In an example, data streams flowing in from sensor 422 via a path 440 are handled by processor 404. In another example, data streams collected by sensor 424 are transmitted to processor 406 via a path 442 for processing.
The number of processors and their relationship with the sensors may depend upon a user's configuration. In an example, even though four processors (402, 404, 406, and 408) are shown, more or fewer processors may be employed, and each processor may be associated with one or more sensors.
Each of the processors shares a shared memory backbone 428, in an embodiment. As a result, load balancing may be performed when one or more processors are overloaded. In an example, if the data streams flowing in from sensor 426 via a path 444 are overwhelming processor 408's processing capability, other processors may be recruited to help reduce the load on processor 408.
Besides load balancing, a shared memory backbone also provides an environment for fault tolerance. In other words, if one of the processors is not working properly, the processing previously supported by the malfunctioning processor is redistributed to the other processors. In an example, if processor 406 is not functioning properly and is unable to process the data streams coming from sensor 424, processor 404 may be directed to handle the data streams from sensor 424. Accordingly, the ability to redistribute the workload enables the improperly functioning processor to be replaced without incurring downtime for the entire server.
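A simplified sketch of this fault-tolerance behavior follows: when a processor is detected as malfunctioning, the sensors assigned to it are redistributed among the remaining processors. The assignment table and the least-loaded policy are illustrative assumptions, not details taken from this disclosure.

```python
# Hedged sketch of the fault-tolerance behavior described above: if a processor
# stops responding, the sensors it was handling are redistributed among the
# remaining processors. Processor and sensor identifiers mirror the example.

assignments = {
    "processor_404": ["sensor_422"],
    "processor_406": ["sensor_424"],
    "processor_408": ["sensor_426"],
}

def redistribute(assignments, failed_processor):
    """Move the failed processor's sensors to the least-loaded survivors."""
    orphaned = assignments.pop(failed_processor, [])
    for sensor in orphaned:
        least_loaded = min(assignments, key=lambda p: len(assignments[p]))
        assignments[least_loaded].append(sensor)
    return assignments

# Processor 406 stops functioning; sensor 424 is handed to another processor.
print(redistribute(assignments, "processor_406"))
```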
In an embodiment, two types of processors may exist within APECS 400. The first type of processor is the secondary processor (such as processor 404, 406, or 408). Each secondary processor is configured to process the data streams received from its corresponding sensor(s). Additionally, each secondary processor is configured to analyze the data and to identify any potential problem that may exist with the corresponding sensor(s), in an embodiment.
The second type of processor is known as a primary processor (402). Although a single primary processor is shown, more than one primary processor may be provided. Similar to a secondary processor, a primary processor may be configured to handle data streams from one or more sensors.
Another source of data for a primary processor is a process module. In other words, the process module data and the process context data collected by a process module are processed by the primary processor. In an example, data collected by a process module is sent through a process control bus via a path 454 to APECS 400. The data first traverses Ethernet switch 418 before flowing via path 446 to primary processor 402.
In addition to processing data, the primary processor is also configured to analyze data from multiple sources. In an example, data correlation between data streams from sensors 422 and 424 is performed by primary processor 402. In another example, data correlation between data streams from one or more sensors with data streams from a process module is also performed by primary processor 402.
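As one hypothetical example of such an analysis step, the sketch below computes a Pearson correlation coefficient between two already time-aligned streams; the streams and the choice of correlation measure are assumptions for illustration and are not specified by this disclosure.

```python
# Hedged sketch of one possible correlation step: computing a Pearson
# correlation coefficient between two time-aligned streams (for example, a
# sensor's bias-voltage readings and the process module's RF power setpoints).
# The streams below are synthetic.
import math

def pearson(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

sensor_stream = [100.1, 100.4, 100.9, 101.6, 102.5]        # e.g., bias voltage
module_stream = [500.0, 501.0, 503.0, 506.0, 510.0]        # e.g., RF power

print(pearson(sensor_stream, module_stream))   # close to 1.0 for these streams
```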
Since the data paths for each of the data sources are now of about similar length, correlating the data is significantly less challenging than in the prior art. In an example, since the data flows from the process module to APECS 400 without having to go through other servers (such as a cluster tool controller and/or a fab host), the data streams from the process module do not experience changes due to computer and/or network conditions (such as computer drift, network latency, network loading, and the like) that may have occurred when the data streams had to be transmitted through other servers (such as a cluster tool controller, a fab host, and the like) as described in the prior art arrangements above.
Besides the data path, quicker and more accurate analysis may be performed since a higher volume of more granular data from a single source provides more data points for performing correlation. In the prior art, correlation between data sources is usually difficult because the data available for analysis is usually incomplete, since the prior art analysis server is unable to handle a high volume of data from a plethora of data sources. Unlike the prior art, the number of data sources is significantly reduced since each analysis server is now only responsible for analyzing data from a limited number of sources (the process module and the sensors associated with the process module). Since the number of data sources has been significantly reduced, the analysis server has the capacity to handle a higher volume of data from a single source. Given that more granular details are provided, better correlation may be achieved between the data streams of the various sources.
If a problem (such as an uncontrolled event) is identified, the primary processor is configured to send an interdiction to the process module. In an embodiment, a direct digital output line 456 is employed to send an interdiction from APECS 400 to the process module. With a direct digital output line between the two devices, the interdiction does not have to be first converted into an Ethernet message before the interdiction can be transmitted. Accordingly, the time required to properly format the interdiction and then convert it back is substantially eliminated. Thus, APECS 400 is able to provide real-time or near-real-time interdictions to the process module to handle the uncontrolled event.
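A minimal sketch of such a direct-digital-output interdiction is shown below; the write_line() helper is a hypothetical stand-in for whatever hardware access the output line actually uses.

```python
# Hedged sketch of a direct-digital-output interdiction: when an uncontrolled
# event is detected, the primary processor asserts a dedicated output line
# instead of composing and routing an Ethernet message. The write_line()
# function is a hypothetical stand-in for the hardware driver call.

INTERDICT_LINE = 456          # identifier for the dedicated digital output line

def write_line(line, level):
    """Hypothetical hardware access; here it only records the action."""
    print(f"digital output {line} set to {level}")

def send_interdiction(uncontrolled_event):
    """Assert the interdiction line as soon as a problem is identified."""
    if uncontrolled_event:
        write_line(INTERDICT_LINE, 1)    # signal the process module controller
    else:
        write_line(INTERDICT_LINE, 0)    # keep the line de-asserted

send_interdiction(uncontrolled_event=True)
```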
In an embodiment, a primary processor may also be configured to interact with other devices via a path 458. In an example, if a cluster tool controller sends a request to APECS 400, the request may be sent via path 458 and be handled by primary processor 402. In another example, notification to the fab host may be sent via path 458 and the cluster tool controller.
As can be appreciated from one or more embodiments of the present invention, a process-level troubleshooting architecture is provided. By localizing the analysis server at the process module level, greater data granularity is available for analysis, resulting in quicker and more accurate analysis. With a similar data path for the various data sources, better correlation exists between the various data streams. With quicker and more accurate analysis, troubleshooting may be performed on a more timely basis, and the interdiction may be provided in time to take corrective action that not only prevents the next substrate from being damaged but also addresses the uncontrolled event impacting the affected substrate, thereby saving the affected substrate from being damaged. Thus, fewer substrates are wasted, and damage to the processing chamber components may be substantially reduced.
While this invention has been described in terms of several preferred embodiments, there are alterations, permutations, and equivalents, which fall within the scope of this invention. Although various examples are provided herein, it is intended that these examples be illustrative and not limiting with respect to the invention.
Also, the title and summary are provided herein for convenience and should not be used to construe the scope of the claims herein. Further, the abstract is written in a highly abbreviated form and is provided herein for convenience and thus should not be employed to construe or limit the overall invention, which is expressed in the claims. If the term “set” is employed herein, such term is intended to have its commonly understood mathematical meaning to cover zero, one, or more than one member. It should also be noted that there are many alternative ways of implementing the methods and apparatuses of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.
This application is related to and claims priority under 35 U.S.C. §119(e) to a commonly assigned provisional patent application entitled “Arrangement for Identifying Uncontrolled Events at the Process Module Level and Methods Thereof,” by Huang et al., Application Ser. No. 61/222,024, filed on Jun. 30, 2009, which is incorporated by reference herein.