The present invention relates to controlling work in process flows through a production environment, and more specifically, to a method that balances flow of work through a system based upon capacity of the tools in the manufacturing line.
One aspect of managing production environments relates to controlling work in process (WIP) due to different timing in the manufacturing process for differing tools. In addition, a diverse product mix, shared resources, and maintenance requirements can make WIP difficult to manage. In a manufacturing line, the potential exists for an upstream tool to have a capacity greater than that of the tool immediately following it. This discrepancy in capacity between the upstream tool and the downstream tool inherently results in WIP buildup.
A process time window is a manufacturing process requirement where one or more operations must be completed within a period of specified length. Process time window violations can cause product rework or scrap in the event of WIP buildup. In some instances, a process time window may be established to prevent growth, oxidation, corrosion, evaporation, spoilage or any other degradation of a product, or the introduction of foreign matter. Regardless of whether a specific process time window has been established, it is preferred to move a wafer through a production line with minimal wait time between operations.
WIP buildup may occur due to several variables. Primarily, WIP buildup results from capacity differences between consecutive operations. However, WIP buildup may also result from scheduled or unscheduled repairs, transportation of work between tools, and variations in process steps. For example, the processing time required for a tool may not be static and may vary based on the parameters of the specific process being performed by that tool. WIP buildup may therefore occur when differing processing requirements change the process times for individual tools. The differing process times may result in WIP buildup during certain processes and no WIP buildup in other cases.
In addition, WIP buildup may result from Bank Releases, or releases of product from outside the standard process flow sequence into the manufacturing line. For example, in semiconductor manufacturing, a number of wafers from an alternate processing sequence may be introduced to a downstream tool. The result is that the upstream tool, still processing wafers, creates excess wafers for the downstream tool, resulting in WIP buildup.
The current state-of-the-art methodology for managing WIP is “Range Management”. Range Management breaks the manufacturing flow into units called ranges. Each range has a defined daily run rate (the WIP target) assigned to the last tool in the range. When the target is exceeded, the range is stopped and nothing is allowed to enter the last tool in the range.
However, range management does not account for real time tool capacity variation within a range. In addition, throughput is not optimized and cycle time is not minimized.
A method to manage WIP is necessary to minimize WIP buildup. To accomplish this, the inventors propose a centralized system that references tool processing parameters to determine processing capability. In cases where the upstream tool has a shorter processing time than the downstream tool, the system is used to determine whether product should be processed at the upstream tool so as to avoid WIP buildup at the downstream tool.
One embodiment of the present invention is a method that determines the real time downstream processing capacity and a downstream toolset's fastest predicted run rate. The fastest predicted run rate may be determined using the standard deviation of the run rate. For example, a tool may run at a rate of 60 minutes per run. Assuming a normal distribution of run lengths, the tool may have a standard deviation of 5 minutes per run. This results in the tool having a fastest predicted run rate of 45 minutes per run at a 99.7% confidence level, i.e., three standard deviations (15 minutes) below the mean. Similarly, an upstream tool capacity and an upstream tool fastest predicted run rate are also determined.
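The following is a minimal sketch of how such three-sigma run rate limits may be computed, assuming a normal distribution of run lengths; the function name, units, and the idea of returning both limits together are illustrative only and not part of the claimed method.

```python
# Illustrative sketch only: three-sigma limits of a tool's run-time
# distribution, matching the 60-minute mean / 5-minute standard deviation example above.
def predicted_run_rates(mean_minutes_per_run, std_dev_minutes, n_sigma=3.0):
    """Return (fastest, slowest) predicted run rates in minutes per run."""
    spread = n_sigma * std_dev_minutes
    fastest = mean_minutes_per_run - spread   # e.g. 60 - 15 = 45 minutes per run
    slowest = mean_minutes_per_run + spread   # e.g. 60 + 15 = 75 minutes per run
    return fastest, slowest

print(predicted_run_rates(60, 5))  # (45.0, 75.0)
```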
Once the fastest predicted run rates and the capacities of both the upstream and downstream tools are determined, a calculation is made to determine when the upstream tool capacity is greater than the downstream tool capacity. When the upstream tool capacity is greater, the system adjusts the upstream tool capacity by slowing the upstream tool so that the fastest predicted run rate of the upstream tool overlaps the fastest predicted run rate of the downstream tool.
In another embodiment, where the process time window is sensitive, the user may determine that the primary concern is the process time window and not the production rate. In this case, another embodiment is a method that determines the real time downstream processing capacity and a downstream toolset's slowest predicted run rate. The slowest predicted run rate may be determined using the standard deviation of the run rate. For example, a tool may run at a rate of 60 minutes per run. Assuming a normal distribution of run lengths, the tool may have a standard deviation of 5 minutes per run. This results in the tool having a slowest predicted run rate of 75 minutes per run at a 99.7% confidence level, i.e., three standard deviations (15 minutes) above the mean. Similarly, an upstream tool capacity and an upstream tool slowest predicted run rate are also determined.
Once the slowest predicted run rates and the capacities of both the upstream and downstream tools are determined, a calculation is made to determine when the upstream tool capacity is greater than the downstream tool capacity. When the upstream tool capacity is greater, the system adjusts the upstream tool capacity by slowing the upstream tool so that the slowest predicted run rate of the upstream tool overlaps the slowest predicted run rate of the downstream tool.
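As a rough sketch of the comparison and adjustment in both of the above embodiments, the following assumes run rates are expressed in minutes per run and that the upstream tool can be slowed by adding a fixed delay per run; the Tool structure, attribute names, and delay mechanism are assumptions for illustration, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class Tool:
    # Three-sigma limits of the tool's run-time distribution (minutes per run).
    fastest_minutes_per_run: float
    slowest_minutes_per_run: float
    delay_minutes_per_run: float = 0.0   # added delay used to slow the tool

def align_upstream_tool(upstream: Tool, downstream: Tool, use_slowest: bool = False) -> None:
    """Slow the upstream tool until the selected predicted run-rate limit
    (fastest for throughput, slowest for process time windows) overlaps the
    downstream tool's corresponding limit."""
    if use_slowest:
        up, down = upstream.slowest_minutes_per_run, downstream.slowest_minutes_per_run
    else:
        up, down = upstream.fastest_minutes_per_run, downstream.fastest_minutes_per_run
    # Fewer minutes per run means greater capacity, so the upstream tool is slowed
    # only when its predicted run time is shorter than the downstream tool's.
    if up < down:
        upstream.delay_minutes_per_run += down - up

# Example: upstream limits 45-75 min/run, downstream limits 50-80 min/run.
a, b = Tool(45, 75), Tool(50, 80)
align_upstream_tool(a, b)   # adds a 5-minute delay so the fastest limits overlap
```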
In another embodiment, utilizing the method above, an additional step is added to monitor the WIP in the system. In the event excessive WIP buildup is detected, the upstream tool is further slowed until the WIP buildup level becomes acceptable.
Another embodiment of the present invention is a method that identifies when special cause components will occur and adjusts the upstream tooling to minimize WIP buildup during these operations. Special cause components may result from events such as preventative maintenance, unbanking product or releasing product to a downstream tool out of sequence, or delays caused for other reasons, such as weather-related delays. By understanding the WIP, it is possible to account for the special cause components, for example to implement preventative maintenance, without affecting the total production targets. The process must take into account time sensitive processing sequences within a production environment. The processing sequences perform operations by utilizing one or more tools at each processing step. The method calculates the historical preventative maintenance means and standard deviations for every tool in the line. When a downstream tool needs preventative maintenance, the method shifts the distribution of the upstream tool's processing speed by an amount based on the historical mean plus or minus the standard deviations of preventative maintenance on that tool. For the most probable outcome, the upstream tool is shifted by the mean of the predicted down time of the downstream tool. To minimize wasted time waiting for the downstream tool and maximize throughput, the upstream tool's processing speed is shifted by the left limit of the predicted down time (the fastest predicted time for preventative maintenance). To minimize process time window related errors, the upstream tool's processing speed is shifted by the right limit of the predicted down time (the slowest predicted time for preventative maintenance).

The method once again determines the real time downstream processing capacity and the downstream tool's fastest predicted run rate as described above. The upstream tool's capacity and fastest predicted run rate are also determined. As before, a calculation determines when the upstream tool capacity is greater than the downstream tool capacity. The upstream tool capacity is adjusted by slowing the upstream tool so that the fastest predicted run rate of the downstream tool overlaps the fastest predicted run rate of the upstream tool. Once the tool run rates match, the system determines when a special cause component is identified. When a special cause component is identified, the system determines when the special cause component will occur and its duration. The run rate of the upstream tool is then reduced prior to the slow down time, for a duration equal to the slow down duration.
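The shift selection described above can be summarized in a short sketch, assuming a history of preventative maintenance downtimes is available for the downstream tool; the function names, the use of Python's statistics module, and the objective labels are illustrative assumptions rather than the claimed method.

```python
import statistics

def pm_downtime_limits(historical_downtimes_minutes, n_sigma=3.0):
    """Mean and three-sigma limits of a tool's historical preventative
    maintenance downtimes, in minutes."""
    mean = statistics.mean(historical_downtimes_minutes)
    sd = statistics.stdev(historical_downtimes_minutes)
    return mean - n_sigma * sd, mean, mean + n_sigma * sd   # (left limit, mean, right limit)

def upstream_shift_for_pm(historical_downtimes_minutes, objective="most_probable"):
    """Amount by which to shift the upstream tool's processing speed:
    the left limit to maximize throughput, the right limit to minimize
    process time window errors, and the mean for the most probable outcome."""
    left, mean, right = pm_downtime_limits(historical_downtimes_minutes)
    if objective == "throughput":
        return left
    if objective == "time_window":
        return right
    return mean
```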
Within such a production environment, certain processing activity flows that are applied to lots of work in process items must be completed within a certain processing time window or the work in process items may be destroyed or otherwise deteriorate or expire. For example, where a FOUP is utilized, wafers sitting in the FOUP for extended periods of time may be subject to oxidation, foreign matter accumulation, film degradation, evaporation, or corrosion. Therefore, once a processing flow that has a processing time window limitation has begun, it must be completed within the time limit of the processing flow. For example, if one of the processing activities within a processing flow uses a tool to deposit a material that is easily oxidized, it may be necessary to cover the material with some form of insulator to prevent the material from oxidizing excessively. Thus, in this example, if the material is not covered with the insulator within the processing time window, the entire lot may suffer excessive oxidation and have to be scrapped.
One way to deal with process time windows is to manage each process time window independently by applying work in process limits for each processing flow. This method holds work in process prior to starting a processing flow until the work in process currently within the processing flow is below the limit. However, this method ignores the fact that some of the tools or resources within the processing flow are shared with other processing flows, which creates the risk of releasing too much or too little work to the processing flow. In addition, with this method, work in process limits are based on the “planned” bottleneck operation, which risks releasing too much work in process to the processing flow when non-bottleneck tools are underperforming. Further, with this type of method, when new processing flows are established, it is necessary to set up bottleneck definitions and remain within work in process limits.
The problem of maintaining WIP control for a system that must also handle special cause components, such as preventative or emergency maintenance and time constrained WIP, can occur in any manufacturing environment with multiple tools running at potentially different production rates.
The result is that the manufacturing system needs to handle steady state manufacturing and also be adaptable enough to adjust for variations in the process. The system must be adaptable to handle the introduction of parts out of the normal sequence or a short term downstream delay. Therefore, various embodiments herein propose a methodology to manage WIP through management of upstream tool production rates. The embodiments herein provide a process to determine the appropriate run rate of upstream tooling and to provide for delays in upstream production in the event of a variation in the normal run rate of the downstream tooling.
More specifically, as shown in flowchart form in
Step 230 determines if the fastest predicted run rate of the upstream tool is faster than that of the downstream tool. When this occurs, step 240 adjusts the upstream tool run rate such that the fastest predicted run rates of the two tools overlap, meaning that the upstream tool completes its production to provide the downstream tool with the processed product at the point when the downstream tool is ready to start its next cycle. Overlap, as used in this application, is defined as the run rates of production of the two tools having substantially similar values, those values being the slowest predicted run rates or the fastest predicted run rates.
The traditional approach takes x=B−A, the difference between the average processing times of the downstream and upstream tools, and slows the upstream tool by x, i.e., its new average processing time becomes A+x.
The embodiment first defines the processing range (A1-A2 for the upstream tool, and B1-B2 for the downstream tool) at the 99.7% confidence interval, here expressed in minutes per widget. The upstream tool is then slowed down by y=(B1−A1). Taking the case above, tool A is slowed down by 8−4=4 minutes, the new distribution for tool A becomes 8-10 minutes, and on average it will now take 9 minutes per widget. Clearly, this is different than the 10 minutes per widget obtained with the traditional approach.
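Expressed as a short sketch, with the upstream mean of 5 minutes per widget and downstream mean of 10 minutes per widget implied by the figures above (the variable names are illustrative only):

```python
# Worked numbers from the time-per-widget example above.
A1, A2, A_mean = 4, 6, 5      # upstream tool A: 4-6 minutes per widget (mean of 5 assumed)
B1, B_mean = 8, 10            # downstream tool B: lower limit 8, mean 10 minutes per widget

x = B_mean - A_mean           # traditional shift: 5, so tool A averages A_mean + x = 10 min/widget
y = B1 - A1                   # shift in this embodiment: 8 - 4 = 4
new_range = (A1 + y, A2 + y)  # tool A shifted to 8-10 minutes per widget
new_mean = A_mean + y         # tool A now averages 9 minutes per widget
```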
In another embodiment, the tooling may utilize batch processing. The upstream tool A processes 20 widgets/hour, one widget at a time. The range is 19-21 widgets/hour at the 99.7% confidence interval. The downstream tool B processes 10 widgets/hour, all 10 widgets at one time. The range is 8-11 widgets/hour at the 99.7% confidence interval.
Using the above approach, the ranges are identified in the problem definition. Now the shift is in the opposite direction, because the quantities are expressed in parts per hour instead of time per widget. Here y′=A2−B2=21−11=10, so tool A is shifted from 19-21 to 9-11 widgets/hour. (Note that, in the segment representation, the parts/hour definition is used this time, so tool A is still the one being shifted, and the faster limit is still used, which is now the larger of the two numbers.)
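The corresponding arithmetic for the batch example, again as an illustrative sketch with assumed variable names:

```python
# Worked numbers from the parts-per-hour batch example above.
A1, A2 = 19, 21               # upstream tool A: widgets/hour at the 99.7% interval
B1, B2 = 8, 11                # downstream tool B: widgets/hour at the 99.7% interval

y_prime = A2 - B2             # 21 - 11 = 10; the upper (faster) limits are used here
shifted_range = (A1 - y_prime, A2 - y_prime)   # tool A shifted to 9-11 widgets/hour
```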
In another embodiment shown in flowchart form in
Step 235 determines if the slowest predicted run rate of the upstream tool is faster than the slowest predicted run rate of the downstream tool. When this occurs, step 245 adjusts the upstream tool run rate such that the slowest predicted run rates of the two tools overlap at the completion point, so that the downstream tool is provided with the processed product no sooner than when it is ready to start its next cycle.
The process may also be derived using the percentiles (step 1 in the above approach). For a normal distribution, this can be easily defined by moving a number of standard deviations away from the mean. For example, if one moves +/−3 standard deviations away from the mean, 99.7% of all data will be included. However, most real distributions are not normal. One possible way of approaching such a distribution is to use the method of
Step 410 may be to perform a Box-Cox power transformation to normalize the distribution. In statistics, the Box-Cox distribution (also known as the power-normal distribution) is the distribution of a random variable X for which the Box-Cox transformation on X follows a truncated normal distribution. Step 420 may be to identify the percentile values of the transformed normal data. Step 430 may be to find the inverse of the transformation and plug in the values from step 420. Alternatively, one could resort to fitting to the Johnson distribution, or any number of other methods, as published in the literature.
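A possible sketch of steps 410-430, assuming SciPy's Box-Cox routines are acceptable stand-ins for the transformation and its inverse; the sample data below are placeholders, not measured values.

```python
import numpy as np
from scipy import stats, special

# Placeholder skewed sample of run times (minutes); real data would come from the tools.
run_times = np.random.default_rng(0).lognormal(mean=4.0, sigma=0.25, size=500)

# Step 410: Box-Cox power transformation to approximately normalize the data.
transformed, lam = stats.boxcox(run_times)

# Step 420: percentile values of the transformed, near-normal data
# (+/- 3 standard deviations, i.e. roughly the 0.15th and 99.85th percentiles).
lo = transformed.mean() - 3 * transformed.std()
hi = transformed.mean() + 3 * transformed.std()

# Step 430: invert the transformation to express the limits in the original units.
fastest, slowest = special.inv_boxcox(lo, lam), special.inv_boxcox(hi, lam)
```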
In a further embodiment as illustrated in
In another embodiment illustrated in
Step 560 may be used to determine when the special cause component will occur, a slow down time for the upstream tool, and a slow down duration for the upstream tool. For example, unscheduled parts may be introduced downstream; in semiconductor manufacturing this may be referred to as unbanking wafers. When unbanked wafers are introduced, or wafers are introduced from an alternate upstream path, the slow down time will be the time when the unbanked wafers will be provided to the downstream tool. The slow down duration will be the processing time of the unbanked wafers.
Step 570 may be used to reduce production for the upstream tool at the slow down time for the slow down duration. Thereby, the upstream tool introduces a delay that allows the downstream interruption to be completed without excessive WIP.
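A hypothetical sketch of how the slow down time and duration from steps 560 and 570 might be represented; the names and the per-wafer processing assumption are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class SlowDown:
    start_minutes: float      # slow down time: when the unbanked wafers reach the downstream tool
    duration_minutes: float   # slow down duration: downstream processing time of those wafers

def plan_unbank_slowdown(arrival_time_minutes, n_wafers, downstream_minutes_per_wafer):
    """Reduce upstream production from the wafers' arrival time for as long as
    the downstream tool needs to work off the unbanked wafers."""
    return SlowDown(
        start_minutes=arrival_time_minutes,
        duration_minutes=n_wafers * downstream_minutes_per_wafer,
    )
```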
For instance, tool B has z widgets in front of it. The system may adjust the process such that when tool A makes 8-11 widgets/hour, tool B will not accumulate excess parts. The goal, however, is to clear all parts queued at tool B before the preventative maintenance begins.
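One way to read that goal as arithmetic, under the assumption that the queue drains at the difference between the downstream and upstream rates; the function name and example numbers are illustrative only.

```python
def lead_time_to_drain(z_widgets, upstream_rate_per_hour, downstream_rate_per_hour):
    """Hours before the preventative maintenance at which the upstream tool must
    already be running at its reduced rate so that tool B's queue of z widgets
    is empty when the maintenance begins."""
    net_drain = downstream_rate_per_hour - upstream_rate_per_hour
    if net_drain <= 0:
        raise ValueError("queue will not drain: upstream rate must be below downstream rate")
    return z_widgets / net_drain

lead_time_to_drain(6, 9, 11)   # 3.0 hours for a queue of 6 widgets
```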
A representative hardware environment for practicing the embodiments of the invention is depicted in
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
While it is understood that the process software may be deployed by manually loading it directly onto the client, server and proxy computers via a storage medium such as a CD, DVD, etc., the process software may also be automatically or semi-automatically deployed into a computer system by sending the process software to a central server or a group of central servers. The process software is then downloaded into the client computers that will execute the process software. Alternatively, the process software is sent directly to the client system via e-mail. The process software is then either detached to a directory or loaded into a directory by a button on the e-mail that executes a program that detaches the process software into a directory. Another alternative is to send the process software directly to a directory on the client computer hard drive. When there are proxy servers, the process will select the proxy server code, determine on which computers to place the proxy servers' code, transmit the proxy server code, and then install the proxy server code on the proxy computer. The process software will be transmitted to the proxy server and then stored on the proxy server.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.