Parallel data processing engines are powerful and efficient means of processing large volumes of data, for example, in data integration and data warehousing scenarios. The data processing applications executed by these engines are typically made up of a complex system of processes and/or threads, which are referred to as “operators”, working in parallel to perform all of the required data manipulations. Data is passed from one operator to another via record buffers. Each operator gets the data to be processed from its input buffer, and writes the data it has processed to its output buffer. These buffers are shared with the previous and subsequent operators as their output and input buffers, respectively. The overall throughput of the application is generally determined by the slowest operator in the set, as its rate of consumption and production have a ripple effect throughout the application. A slow operator can create a bottleneck in the process. The entire flow of the process may be affected by a bottleneck, but it is difficult to determine where the bottleneck occurs. It is also difficult to determine if multiple bottlenecks occur, and where they may occur.
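The operator-and-buffer arrangement described above can be sketched with bounded queues standing in for record buffers. This is a minimal illustration, not taken from any particular engine; the two-operator flow, queue sizes, and transforms are all assumptions:

```python
import queue
import threading

def run_operator(transform, in_buf, out_buf):
    """Read records from in_buf, apply transform, write to out_buf.

    A None record serves as an end-of-data marker and is forwarded
    downstream so the next operator also terminates.
    """
    while True:
        record = in_buf.get()           # blocks when the input buffer is empty
        if record is None:
            out_buf.put(None)
            break
        out_buf.put(transform(record))  # blocks when the output buffer is full

# Two operators chained by a shared bounded buffer: the first operator's
# output buffer is the second operator's input buffer.
source = queue.Queue(maxsize=4)
middle = queue.Queue(maxsize=4)
sink = queue.Queue(maxsize=4)

t1 = threading.Thread(target=run_operator, args=(lambda r: r * 2, source, middle))
t2 = threading.Thread(target=run_operator, args=(lambda r: r + 1, middle, sink))
t1.start()
t2.start()

for r in [1, 2, 3]:
    source.put(r)
source.put(None)
t1.join()
t2.join()

results = []
while True:
    r = sink.get()
    if r is None:
        break
    results.append(r)
# results == [3, 5, 7]
```

Because each `put` blocks on a full buffer and each `get` blocks on an empty one, a slow operator throttles both its upstream and downstream neighbors, which is exactly the ripple effect described above.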
According to one embodiment of the present invention, in a method for identifying performance bottleneck status in a parallel data processing environment, implemented by a computing processor, the processor examines data flow associated with the parallel data processing environment to identify at least one operator where an operator type is associated with the operator, at least one buffer, and a relationship that the buffer has with the operator where the relationship is associated with the operator type. The processor monitors the buffer to determine a buffer status associated with the buffer. The processor applies a set of rules to identify an operator bottleneck status associated with the operator where the set of rules is applied to the operator, based on the operator type, the buffer status, and the relationship that the buffer has with the operator. The processor determines a performance bottleneck status associated with the parallel data processing environment, based on the operator bottleneck status.
In one aspect of embodiments disclosed herein, when the method examines data flow associated with the parallel data processing environment, the method identifies a first sub-operator connected to a second sub-operator, where no buffer exists between the first sub-operator and the second sub-operator, and combines the first sub-operator and the second sub-operator to create the operator.
In one aspect of embodiments disclosed herein, when the method examines data flow associated with the parallel data processing environment, the method determines the operator type associated with the operator based on a data partition configuration associated with the buffer, and the relationship that the buffer has with the operator.
In one aspect of embodiments disclosed herein, when the method monitors the buffer to determine the buffer status associated with the buffer, the method determines the buffer status based on a flow of data from the operator to at least one other operator.
In one aspect of embodiments disclosed herein, when the method determines the buffer status based on the flow of data, the method identifies the buffer status as at least one of a buffer FULL status where no data can be written to the buffer, a buffer EMPTY status where no data can be read from the buffer, and a buffer UNKNOWN status where the buffer status is neither the buffer FULL status nor the buffer EMPTY status.
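The three buffer states can be sketched as a simple classifier. The `used`/`capacity` counters here are illustrative stand-ins for whatever occupancy measure a real buffer implementation exposes:

```python
def buffer_status(used, capacity):
    """Classify a buffer as FULL, EMPTY, or UNKNOWN.

    FULL: no data can be written to the buffer.
    EMPTY: no data can be read from the buffer.
    UNKNOWN: neither FULL nor EMPTY.
    """
    if used >= capacity:
        return "FULL"
    if used == 0:
        return "EMPTY"
    return "UNKNOWN"
```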
In one aspect of embodiments disclosed herein, when the method applies the set of rules, the method traverses the data flow associated with the parallel data processing environment to apply the rules to each operator in the data flow.
In one aspect of embodiments disclosed herein, when the method determines the performance bottleneck status associated with the parallel data processing environment, the method renders a visualization of the data flow associated with the parallel data processing environment with respect to the operator bottleneck status associated with the operator. The method identifies that the operator is causing a bottleneck in the parallel data processing environment.
In one aspect of embodiments disclosed herein, when the method determines the performance bottleneck status associated with the parallel data processing environment, the method periodically monitors the buffer, and applies the set of rules over a period of time to monitor performance of the parallel data processing environment. The method plots the performance bottleneck status over the period of time to monitor the performance bottleneck status over the period of time.
In one aspect of embodiments disclosed herein, the method plots the performance bottleneck status with respect to the operator bottleneck status to monitor progress of the operator bottleneck status over the period of time.
System and computer program products corresponding to the above-summarized methods are also described and claimed herein.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java® (Java, and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both), Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
At 201, the method, via the processor monitors the buffer to determine a buffer status associated with the buffer. The buffer is used to transport data between operators. As noted in
At 202, the method, via the processor, applies a set of rules to identify an operator bottleneck status associated with the operator. The set of rules is applied to the operator (based on the operator type), the buffer status, and the relationship that the buffer has with the operator to determine the operator bottleneck status. The operator bottleneck status may indicate whether the operator is causing a bottleneck, whether the operator is not causing a bottleneck, or whether it is unknown if the operator is causing a bottleneck. Table 1 displays example rules that are applied to the operator, based on the operator type, the buffer status, and the relationship that the buffer has with the operator. For example, the buffer may be an input to the operator (i.e., “Input Link Buffer”) or an output to the operator (i.e., “Output Link Buffer”). In an example embodiment, the method applies the rules to the operator, based on the operator type, and the buffer status, taking into account whether the buffer is an Input Link Buffer or an Output Link Buffer.
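Table 1 is not reproduced here; the following hypothetical rule table illustrates the kind of lookup described, keyed on the Input Link Buffer and Output Link Buffer statuses. The specific entries are assumptions for illustration, not the claimed rules, and a real table would also vary by operator type:

```python
# Hypothetical rules keyed by (input buffer status, output buffer status).
RULES = {
    ("FULL", "EMPTY"): "Bottleneck",      # input backs up, output starves
    ("FULL", "FULL"): "Not bottleneck",   # blocked by something downstream
    ("EMPTY", "EMPTY"): "Not bottleneck", # starved by something upstream
    ("EMPTY", "FULL"): "Not bottleneck",  # keeping up; downstream is slow
}

def operator_bottleneck_status(input_status, output_status):
    """Look up the operator bottleneck status; default to Unknown."""
    return RULES.get((input_status, output_status), "Unknown")
```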
For example, in
At 203, the method, via the processor, determines a performance bottleneck status associated with the parallel data processing environment, based on the operator bottleneck status. For example, the operator bottleneck status may be “Bottleneck” (i.e., the operator is the bottleneck in the data flow pipeline), “Not bottleneck” or “Unknown”.
In an example embodiment, when the method examines data flow associated with the parallel data processing environment, the method may perform some preparation before the method determines the operator bottleneck status. For example, the method identifies a first sub-operator connected to a second sub-operator, where no buffer exists between the first sub-operator and the second sub-operator. In this scenario, the method treats both operators as a single operator by combining the first sub-operator and the second sub-operator to create the single operator. In yet another example embodiment, if no buffer exists between Operator A and Operator B, a buffer exists between Operator B and Operator C, but no buffer exists between Operator C and Operator D, then the method combines Operator A and Operator B to create Operator A-B. The method also combines Operator C and Operator D to create Operator C-D, and applies the set of rules to Operator A-B, Operator C-D, and the buffer that exists between these two operators. In other words, the method applies the set of rules based on the respective types of the operators (i.e., Operator A-B and Operator C-D), the interaction between Operator A-B and the buffer, the interaction between Operator C-D and the buffer, and the status of the buffer to determine the operator bottleneck status of each of Operator A-B and Operator C-D.
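The combining step can be sketched as a pass that fuses adjacent operators not separated by a buffer. The flow representation (an ordered operator list plus a set of buffered pairs) is an assumption made for illustration:

```python
def fuse_operators(operators, buffered_pairs):
    """Combine adjacent operators that share no buffer.

    operators: list of operator names in pipeline order.
    buffered_pairs: set of (upstream, downstream) name pairs that have a
    buffer between them; adjacent pairs not in this set are fused.
    """
    fused = [operators[0]]
    for op in operators[1:]:
        prev = fused[-1]
        # The last sub-operator of the fused group determines adjacency.
        last_sub = prev.split("-")[-1]
        if (last_sub, op) in buffered_pairs:
            fused.append(op)
        else:
            fused[-1] = prev + "-" + op
    return fused
```

Applied to the example above (a buffer only between Operator B and Operator C), `fuse_operators(["A", "B", "C", "D"], {("B", "C")})` yields `["A-B", "C-D"]`.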
In an example embodiment, when the method examines data flow associated with the parallel data processing environment, the method determines the operator type associated with the operator based on a data partition configuration that is associated with the buffer, and the relationship that the buffer has with the operator. In serial processing mode, a first operator writes data to a buffer where the next operator (i.e., downstream from the first operator) reads the data. In parallel processing mode, the data may be passed from one operator (i.e., an upstream operator) to another operator (i.e., a downstream operator) in a number of different ways. The data may be partitioned and processed by multiple instances of operators in different nodes, for example, logical or physical. For example, a first operator may partition the data in N number of ways, while a downstream operator may be partitioned to receive the data in M number of ways.
There are many ways in which data may be partitioned. For example, data may be received by a downstream operator partitioned in the same way it was partitioned by the upstream operator. An upstream operator may run in serial mode, and split the data into partitions for a downstream operator that is running in parallel mode. The data may be partitioned at the upstream operator and re-partitioned for a downstream operator (i.e., the upstream operator, running in parallel processing mode, is partitioned N ways while the downstream operator, also running in parallel processing mode, is partitioned M ways). Or, the data may be partitioned at the upstream operator, running in parallel mode, and collected into one partition for the downstream operator that is running in serial mode.
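The four data-movement patterns above can be sketched as a small classifier over the partition counts of the two operators. The function and parameter names are illustrative assumptions:

```python
def data_movement(upstream_parts, downstream_parts, repartitioned=False):
    """Name the data-movement pattern between two connected operators.

    upstream_parts / downstream_parts: number of partitions each
    operator runs with (1 means serial mode).
    repartitioned: True when equal partition counts still involve a
    reshuffle of records between partitions.
    """
    if upstream_parts == 1 and downstream_parts > 1:
        return "split"        # serial upstream feeds a parallel downstream
    if upstream_parts > 1 and downstream_parts == 1:
        return "collect"      # parallel partitions merge for a serial downstream
    if upstream_parts != downstream_parts or repartitioned:
        return "repartition"  # N-way data reshuffled M ways
    return "same"             # partitioning preserved end to end
```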
In an example embodiment, when the method monitors the buffer to determine the buffer status associated with the buffer, the method determines the buffer status based on a flow of data from at least one operator (i.e., the upstream operator) to another operator or operators (i.e., the downstream operator(s)). An operator may have an input buffer(s) and/or an output buffer(s). If an operator is slower than its upstream operator, the upstream operator will write data at a rate faster than the downstream operator can consume it. Eventually, the buffer (i.e., the output buffer of the upstream operator, which is also the input buffer of the downstream operator) will fill up. However, the status of the buffer (i.e., “FULL”) does not, by itself, indicate whether the bottleneck is a result of the downstream operator not processing the data quickly enough. The bottleneck may be a result of another operator further downstream from the downstream operator. Thus, the method monitors the buffer status of both the input buffer and the output buffer of an operator. For example, if the downstream operator has a full input buffer and an empty output buffer, then that downstream operator is the bottleneck. However, if the downstream operator has both a full input buffer and a full output buffer, then that downstream operator is not the bottleneck (most likely another operator further downstream is the bottleneck).
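Under the simplifying assumption of a linear pipeline, the full-input/empty-output rule above can be applied to every operator in one pass. The status strings and list representation are illustrative:

```python
def find_bottlenecks(buffer_statuses):
    """Flag bottleneck operators in a linear pipeline.

    buffer_statuses: statuses of the buffers between consecutive
    operators; operator i sits between buffer i-1 (its input) and
    buffer i (its output). End operators have only one buffer.
    Returns the indices of operators whose input buffer is FULL and
    whose output buffer is EMPTY.
    """
    n_ops = len(buffer_statuses) + 1
    flagged = []
    for i in range(n_ops):
        in_status = buffer_statuses[i - 1] if i > 0 else None
        out_status = buffer_statuses[i] if i < len(buffer_statuses) else None
        # Core rule from the text: full input plus empty output marks
        # the operator as the bottleneck.
        if in_status == "FULL" and out_status == "EMPTY":
            flagged.append(i)
    return flagged
```

For three operators joined by two buffers, `find_bottlenecks(["FULL", "EMPTY"])` flags the middle operator (index 1): its input backs up while its output starves.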
In an example embodiment, when the method determines the buffer status based on the flow of data, the method identifies the buffer status. The buffer status may be a buffer FULL status where no data can be written to the buffer. For example, if the upstream operator processes data faster than the downstream operator can consume the data, eventually, the upstream operator will not be able to write data to the buffer until the downstream operator consumes some of the data in the buffer. The buffer status may be a buffer EMPTY status where no data can be read from the buffer. For example, if the downstream operator is consuming the data (provided by the upstream operator) faster than the upstream operator is writing the data, eventually the downstream operator will need to wait for the upstream operator to write more data before the downstream operator can proceed to consume the data. The buffer status may be a buffer UNKNOWN status where the buffer status is neither the buffer FULL status nor the buffer EMPTY status. For example, the buffer UNKNOWN status may be a temporary status before the upstream operator writes data to the buffer, or the downstream operator consumes the data. Or, the buffer UNKNOWN status may be a result of the upstream operator writing data at the same rate that the downstream operator consumes the data.
In an example embodiment, when the method applies the set of rules, the method traverses the data flow associated with the parallel data processing environment to apply the rules to each operator in the data flow.
In an example embodiment, when the method determines the performance bottleneck status associated with the parallel data processing environment, the method renders a visualization of the data flow associated with the parallel data processing environment with respect to the operator bottleneck status associated with the operator(s). The visualization provides information as to which operator(s) may (or may not) be causing a bottleneck within the parallel processing environment. In an example embodiment, the visualization may be plotted as a chart, or may interactively report where the bottleneck(s) are, and how they shift over time during the execution of the parallel data processing environment.
In an example embodiment, snapshots of the visualization may be taken periodically to determine and track which operator(s) are causing bottlenecks at any given time. The method determines snapshot data of the status of the buffer(s). The method has knowledge of which operator(s) are in the parallel processing data environment, and the respective associated operator types. The method applies the rules to determine if any operator(s) are a bottleneck for that snapshot. The method then continues to track the operator bottleneck status of multiple operators over a plurality of snapshots to plot data flows through the parallel processing data environment. These snapshots may be used to plot a chart depicting the operator bottleneck status of operator(s) over time. The snapshot visualizations may also be used as “live” monitoring of the parallel data processing environment to visualize any current bottlenecks as they progress (i.e., bottlenecks occurring, continuing to occur, being resolved, etc.) and/or shift (i.e., a bottleneck at an upstream operator becomes a bottleneck at a downstream operator).
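One way to accumulate such periodic snapshots into a per-operator history is sketched below. The snapshot format (one dict of operator statuses per sampling interval) is an assumption for illustration:

```python
def track_bottlenecks(snapshots):
    """Build a per-operator bottleneck history from periodic snapshots.

    snapshots: list of {operator_name: bottleneck_status} dicts, one
    per sampling interval, in time order.
    Returns {operator_name: [(time_index, status), ...]}.
    """
    history = {}
    for t, snapshot in enumerate(snapshots):
        for op, status in snapshot.items():
            history.setdefault(op, []).append((t, status))
    return history
```

Such a history is enough to plot each operator's status over time, or to observe a bottleneck appearing at one operator and later shifting to another.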
In an example embodiment, the method may identify that at least one operator is causing a bottleneck in the parallel data processing environment. The method may identify this operator based on the visualization.
In an example embodiment, when the method determines the performance bottleneck status associated with the parallel data processing environment, the method periodically monitors the buffer(s) and applies the set of rules over a period of time to monitor performance of the parallel data processing environment. The method monitors and tracks the performance of the parallel data processing environment, over a period of time, to determine the performance bottleneck status of the entire parallel data processing environment. The method also provides information as to which operators are causing bottlenecks within the parallel data processing environment.
In an example embodiment, the method plots the performance bottleneck status over the period of time to monitor how that status evolves. The performance bottleneck status is the overall status of the parallel data processing environment. Thus, the method tracks whether any bottlenecks exist within the parallel data processing environment. For example, at Time=2, there may be no bottlenecks, at Time=3, there may be one bottleneck, and at Time=4, there may be two bottlenecks at two different operators.
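The plotted series described here amounts to counting bottleneck operators per snapshot. A minimal sketch, again assuming each snapshot is a dict of per-operator statuses:

```python
def bottleneck_counts(snapshots):
    """Count bottleneck operators at each sampling time, e.g. for plotting.

    snapshots: list of {operator_name: bottleneck_status} dicts in
    time order. Returns one count per snapshot.
    """
    return [sum(1 for status in snapshot.values() if status == "Bottleneck")
            for snapshot in snapshots]
```

For the example in the text, three successive snapshots with zero, one, and two bottleneck operators produce the series `[0, 1, 2]`.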
In an example embodiment, the method plots the performance bottleneck status with respect to the operator bottleneck status over the period of time to monitor the progress of the operator bottleneck status over the period of time. The operator bottleneck status is the status of a bottleneck that is associated with an operator. In an example embodiment, the method tracks an individual bottleneck as it appears, as it potentially travels to another operator, or as it disappears.
The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Number | Date | Country
---|---|---
20150193368 A1 | Jul 2015 | US