This application is related to International Patent Application No. PCT/US13/076796, filed on Dec. 20, 2013 and entitled “Generating a visualization of a metric at a level of execution”, and International Patent Application No. PCT/US13/076784, filed on Dec. 20, 2013 and entitled “Discarding data points in a time series”, both of which are hereby incorporated by reference.
Enterprises (e.g., business concerns, educational organizations, government agencies) can depend on reports and analyses of data. To generate the reports and analyses, workloads, such as queries, can be executed in an execution environment. For example, a query engine (e.g., HP Vertica) can execute a query over a database. Database query monitoring tools collect measurements of performance metrics (e.g., memory and CPU usage) while a query is executing. These metric measurements are often made available through log files or system tables. The metric measurements can be used to understand and diagnose query performance issues. Other metrics, such as network activity, can be collected as well.
The following detailed description refers to the drawings, wherein:
Workloads, such as queries, may be executed in an execution environment. For example, a query engine (e.g., HP Vertica) can execute a query over a database. Most database systems include monitoring tools that collect performance metrics for individual queries. These metrics are often low-level metrics that may be incomprehensible to a typical user. Moreover, a long-running query may result in tens or hundreds of thousands of metric measurements. These measurements may be even more numerous and complex if the query engine is a parallel database engine, such as HP Vertica. It is unreasonable to expect a user to comprehend thousands of low-level metrics and be able to understand how they relate to the user's query. Simply providing a high-level overview of the performance of the query is also inadequate as much of the information in the metrics may be lost in the abstraction to the higher level. Additionally, monitoring tools often do not collect all of the metrics that could impact query performance, such as network activity.
According to an embodiment implementing the techniques described herein, a path in a workload may be identified that may be responsible for deviant behavior (e.g., unexpected performance issues) demonstrated during execution of the workload. Multiple measurements of a plurality of metrics relating to execution of the workload in an execution environment may be received. The measurements may be received from database monitoring tools or from other sources. The multiple measurements may be aggregated at multiple levels of execution of the workload. Where the workload is a query, example levels of execution include a query level, a query phase level, a node level, a path level, and an operator level.
In one example, the measurements of a first metric at a first level may be compared to estimates of the first metric at the first level to determine whether the measurements deviate from the estimates by a threshold value. If it is determined that the measurements deviate from the estimates by the threshold value, a path in the workload that may be responsible for the deviation may be identified. For instance, the path may be identified by examining metric measurements at lower levels to identify a logical operator or physical operator exhibiting deviant behavior. In another example, measurements of a first metric at a first level may be compared to measurements of a second metric at the same or a different level to determine whether there is a deviation from an expected correlation between the two metrics. If it is determined that there is a deviation from the expected correlation between the two metrics, a path in the workload that may be responsible for the deviation may be identified.
In either case, the identified path may be presented to a user via a visualization. The visualization may highlight the path, such as by highlighting the path in an execution plan for the workload. The visualization may also include additional information, such as a natural language explanation for the deviation. As a result, the techniques described herein may aid a user in analyzing, troubleshooting, and/or debugging a workload. Furthermore, the techniques described herein may reduce the amount of information to be presented to the user, so as to avoid information overload, and aid the user in focusing on potentially problematic portions of a workload. Additional examples, advantages, features, modifications and the like are described below with reference to the drawings.
Methods 100, 200, 250, and 300 will be described here relative to system 410 of
A controller may include a processor and a memory for implementing machine readable instructions. The processor may include at least one central processing unit (CPU), at least one semiconductor-based microprocessor, at least one digital signal processor (DSP) such as a digital image processing unit, other hardware devices or processing elements suitable to retrieve and execute instructions stored in memory, or combinations thereof. The processor can include single or multiple cores on a chip, multiple cores across multiple chips, multiple cores across multiple devices, or combinations thereof. The processor may fetch, decode, and execute instructions from memory to perform various functions. As an alternative or in addition to retrieving and executing instructions, the processor may include at least one integrated circuit (IC), other control logic, other electronic circuits, or combinations thereof that include a number of electronic components for performing various tasks or functions.
The controller may include memory, such as a machine-readable storage medium. The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions. Thus, the machine-readable storage medium may comprise, for example, various Random Access Memory (RAM), Read Only Memory (ROM), flash memory, and combinations thereof. For example, the machine-readable medium may include a Non-Volatile Random Access Memory (NVRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage drive, a NAND flash memory, and the like. Further, the machine-readable storage medium can be computer-readable and non-transitory. Additionally, system 410 may include one or more machine-readable storage media separate from the one or more controllers.
System 410 may include a number of components. For example, system 410 may include a database 412 for storing measurements 413, an aggregator 414, an analysis module 415, a visualization generator 416, and a web server 417. System 410 may be connected to execution environment 420 and user interface 430 via a network. The network may be any type of communications network, including, but not limited to, wire-based networks (e.g., cable), wireless networks (e.g., cellular, satellite), cellular telecommunications network(s), and IP-based telecommunications network(s) (e.g., Voice over Internet Protocol networks). The network may also include traditional landline or a public switched telephone network (PSTN), or combinations of the foregoing. The components of system 410 may also be connected to each other via a network.
Method 100 may begin at 110, where multiple measurements 413 of a plurality of metrics may be received. The multiple measurements may relate to execution of a workload in an execution environment 420. The multiple measurements may be stored in database 412.
Execution environment 420 can include an execution engine and a storage repository of data. An execution engine can include one or multiple execution stages for applying respective operators on data, where the operators can transform or perform some other action with respect to data. A storage repository refers to one or multiple collections of data. An execution environment can be available in a public cloud or public network, in which case the execution environment can be referred to as a public cloud execution environment. Alternatively, an execution environment that is available in a private network can be referred to as a private execution environment.
As an example, execution environment 420 may be a database management system (DBMS). A DBMS stores data in relational tables in a database and applies database operators (e.g. join operators, update operators, merge operators, and so forth) on data in the relational tables. An example DBMS environment is the HP Vertica product.
A workload may include one or more operations to be performed in the execution environment. For example, the workload may be a query, such as a Structured Query Language (SQL) query. Alternatively, the workload may be some other type of workflow, such as a Map-Reduce workflow to be executed in a Map-Reduce execution environment or an Extract-Transform-Load (ETL) workflow to be executed in an ETL execution environment.
The multiple measurements 413 of the plurality of metrics relate to execution of the workload. For example, the metrics may include performance metrics like elapsed time, execution time, memory allocated, memory reserved, rows processed, and processor utilization. The metrics may also include other information that could affect workload performance, such as network activity or performance within execution environment 420. For instance, poor network performance could adversely affect performance of a query whose execution is spread out over multiple nodes in execution environment 420. Additionally, estimates of the metrics for the workload may also be available. The estimates may indicate an expected performance of the workload in execution environment 420. Having the estimates may be useful for evaluating the actual performance of the workload.
The metrics (and estimates) may be retrieved from the execution environment 420 and received at system 410. The metrics may be measured and recorded at set time intervals by monitoring tools in the execution environment. The measurements may then be retrieved periodically, such as after an elapsed time period (e.g., every 4 seconds). Alternatively, the measurements could be retrieved all at once after the query has been fully executed. The metrics may be retrieved from log files or system tables in the execution environment.
At 120, the multiple measurements may be aggregated by aggregator 414 at multiple levels of execution. A level of execution as used herein is intended to denote an execution perspective through which to view the metric measurements. Where the workload is a query, example levels of execution include a query level, a query phase level, a node level, a path level, and an operator level. These will be illustrated through an example in which HP Vertica is the execution environment 420.
First, monitoring tools in the HP Vertica engine collect metrics for each instance of each physical operator in the physical execution tree of a submitted query. The measurements of these metrics at the physical operator level correspond to the “operator level”. Second, from a user perspective, the query execution plan is the tree of logical operators (referred to as paths in HP Vertica) shown by the SQL explain plan command. Each logical operator (e.g., GroupBy) is a path and comprises one or more physical operators in the physical execution tree (e.g., ExpressionEval, HashGroupBy). Accordingly, the metric measurements may be aggregated at the logical operator level, which corresponds to the “path level”. Third, a physical operator may run as multiple threads on a node (e.g., a parallel tablescan). Additionally, because HP Vertica is a parallel database, a physical operator may execute on multiple nodes. Thus, the metric measurements may be aggregated at the node level, which corresponds to the “node level”.
Fourth, a phase is a sub-tree of a query plan where all operators in the sub-tree may run concurrently. In general, a phase ends at a blocking operator, which is an operator that does not produce any output until it has read all of its input (or, all of one input if the operator has multiple inputs, like a Join). Examples of blocking operators are Sort and Count. Accordingly, the metric measurements may be aggregated at the phase level, which corresponds to the “query phase level”. Fifth, the metric measurements may be reported for the query as a whole. Thus, the metric measurements may be aggregated at a top level, which corresponds to the “query level”.
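The phase decomposition described above can be sketched in a few lines. This is an illustrative sketch only, not HP Vertica's actual plan representation: the `Op` class and the set of blocking operator names are assumptions made for the example.

```python
# Illustrative sketch: partition an operator tree into phases, where a
# blocking operator (one that reads all its input before producing
# output) ends the phase beneath it. Operator names are hypothetical.
BLOCKING = {"Sort", "Count", "HashJoinBuild"}

class Op:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def phases(root):
    """Return a list of phases, each a list of operator names that may
    run concurrently. The sub-tree feeding a blocking operator forms
    its own phase, since the blocking operator consumes it fully
    before emitting any output."""
    result = []
    def walk(op, current):
        current.append(op.name)
        blocking = op.name in BLOCKING
        for child in op.children:
            if blocking:
                sub = []          # input to a blocking operator: new phase
                walk(child, sub)
                result.append(sub)
            else:
                walk(child, current)
    top = []
    walk(root, top)
    result.append(top)
    return result
```

For a plan `GroupBy -> Sort -> Scan`, the Sort is blocking, so the Scan forms one phase and the Sort/GroupBy pair forms another.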
Consequently, metric measurements as interpreted by aggregator 414 form a multi-dimensional, hierarchical dataset where the dimensions are the various levels of execution. The metrics may then be aggregated (rolled up) at the operator level, the path level, the node level, the query phase level, and the query level.
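A roll-up over such a hierarchical dataset can be sketched as follows. The record layout and field names (`query`, `phase`, `path`, `node`, `operator`, `rows_processed`) are assumptions for illustration, not an actual system-table schema:

```python
# Illustrative roll-up: sum one metric at every level of execution.
# Each level is keyed by the hierarchy prefix down to that level.
from collections import defaultdict

LEVELS = ["query", "phase", "path", "node", "operator"]

def rollup(measurements, metric="rows_processed"):
    """measurements: iterable of dicts such as
    {"query": "q1", "phase": 1, "path": 3, "node": "n1",
     "operator": "Scan", "rows_processed": 400}.
    Returns {level: {hierarchy_prefix_tuple: total}}."""
    totals = {level: defaultdict(int) for level in LEVELS}
    for m in measurements:
        for depth, level in enumerate(LEVELS, start=1):
            key = tuple(m[k] for k in LEVELS[:depth])
            totals[level][key] += m[metric]
    return totals
```

With this shape, the query-level total is the sum over all operator instances, while finer keys let the analysis drill down node by node or path by path.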
At 130, the analysis module 415 may determine whether there is a deviation. A deviation as used herein means unexpected performance exhibited during execution of the workload. The unexpected performance may relate to any of the metrics, such as unexpected elapsed time, execution time, memory allocated, memory reserved, rows processed, processor utilization, or network activity. There are various ways in which a deviation may be determined, such as illustrated by
Turning now to
As an example, assume that the workload is a query, “rows processed” is the first metric, the threshold value is “<5%”, and the level of execution is “query phase level”. Thus, rows processed during execution of the query at the query phase level are compared to an estimate of rows that were expected to be processed during execution of the query at the query phase level. If 505K rows were processed but 500K rows were estimated, it is determined that there is no deviation since the measurement (505K rows) deviates from the estimate (500K rows) by less than 5%. On the other hand, if 505K rows were processed but 400K rows were estimated, it is determined that there is a deviation since the measurement (505K rows) deviates from the estimate (400K rows) by at least 5%.
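The threshold test in this example reduces to a relative-error comparison; a minimal sketch (the 5% default mirrors the example above, and the zero-estimate handling is an added assumption):

```python
def deviates(measured, estimated, threshold=0.05):
    """True when the measurement deviates from the estimate by at least
    the threshold fraction (5% by default, as in the example)."""
    if estimated == 0:
        return measured != 0   # assumption: any rows against a zero estimate deviate
    return abs(measured - estimated) / estimated >= threshold

deviates(505_000, 500_000)  # 1% off the estimate -> False (no deviation)
deviates(505_000, 400_000)  # 26.25% off the estimate -> True (deviation)
```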
At 220, if the analysis module 415 determines that the measurements deviate from the estimates by a threshold value, it may identify a path in the workload that may be responsible for the deviation. For example, upon determining that there is a deviation at a higher level (e.g., query phase level), the analysis module 415 may drill down to one or more lower levels (e.g., node level, path level, operator level) and compare measurements of the first metric at that lower level to estimates of the first metric at that level. Returning to the example in the previous paragraph, by drilling down to a lower level, it may be determined that the deviation between measurement (505K rows) and estimate (400K rows) at the query phase level is largely due to a deviation between measurement and estimate of rows at a particular one of the nodes processing the query, at a particular path (logical operator) executing in that query phase, and/or at a particular physical operator executing in that logical operator. As will be described with reference to
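The drill-down step can be sketched by re-applying the same threshold test to per-path measurement/estimate pairs one level down. The dict shape here is an assumption for illustration:

```python
def find_deviant_paths(per_path, threshold=0.05):
    """per_path: {path_id: (measured, estimated)} at a lower level.
    Returns the path ids whose own deviation exceeds the threshold,
    i.e. candidates responsible for the higher-level deviation."""
    deviant = []
    for path_id, (measured, estimated) in per_path.items():
        if estimated and abs(measured - estimated) / estimated >= threshold:
            deviant.append(path_id)
    return deviant

# e.g. the phase-level surplus (505K measured vs. 400K estimated)
# traced to one of two paths in that phase:
find_deviant_paths({1: (100_000, 100_000), 2: (405_000, 300_000)})  # -> [2]
```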
Turning now to
As an example, assume that the workload is a query, “rows processed” is the first metric, “execution time” is the second metric, the level is “node level”, and the correlation is that 100K rows should be processed in about 1 second. Thus, rows processed during execution of the query at the node level are compared to execution time at the node level to determine if there is a deviation from the expected correlation. If 500K rows were processed in about 5 seconds on a particular node, it is determined that there is no deviation since this is consistent with the expected correlation. On the other hand, if 500K rows were processed in 45 seconds on a particular node, it is determined that there is a deviation since there is a significant deviation from the expected correlation. Similar to method 200, a threshold value can be established for determining whether a deviation is significant.
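The correlation check in this example can be sketched as a ratio of observed to expected execution time. The factor-of-two cutoff is an added assumption; the 100K-rows-per-second rate comes from the example above:

```python
def correlation_deviation(rows, seconds, rows_per_second=100_000,
                          threshold=2.0):
    """Compare observed execution time to the time expected under the
    assumed correlation (100K rows ~ 1 second). Returns (ratio,
    deviates); a ratio far from 1 (beyond the threshold factor,
    an assumption here) flags a deviation."""
    expected_seconds = rows / rows_per_second
    ratio = seconds / expected_seconds
    return ratio, ratio >= threshold or ratio <= 1 / threshold

correlation_deviation(500_000, 5)   # ratio 1.0 -> no deviation
correlation_deviation(500_000, 45)  # ratio 9.0 -> deviation
```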
At 220, if the analysis module 415 determines that there is a deviation from the correlation, it may identify a path in the workload that may be responsible for the deviation. For example, the analysis module 415 may drill down to one or more lower levels (e.g., path level, operator level) and compare measurements of the first metric at that lower level to measurements of the second metric at that level. Returning to the example in the previous paragraph, by drilling down to a lower level, it may be determined that the deviation from the correlation at the node level is largely due to a deviation between rows processed and execution time at a particular path (logical operator) executing in that query phase, and/or at a particular physical operator executing in that logical operator. As will be described with reference to
The user interface 430 may allow selection of one or more of the metrics for display in the visualization. For example, the metrics may be selected in any of various ways, such as by clicking on a tab in the user interface 430, each tab representing a different metric. Alternatively, other selection options may be a dropdown menu, radio buttons, textual input, voice input, touch input, and the like. In addition, the user interface 430 may allow selection of one or more levels. The one or more levels may be selected in any of various ways as well, such as via a dropdown menu or by drilling down on a current displayed level via clicking on the displayed level. For instance, clicking on a path may drill down to the operator level for that path. Additionally, system 410 may provide user interface 430 with the explain plan (generated by the execution engine) and any events that occurred during execution (e.g., hash table overflow to disk). In some examples, some or all of this information can be incorporated into the visualization for display to the user, as illustrated in
At 320, a visualization to be displayed on the user interface may be generated. The visualization may identify a path in the workload associated with a deviation (such as identified in methods 100, 200, and 250). The visualization may include graphs, charts, and/or text. The visualization may be generated by visualization generator 416, which interfaces with aggregator 414 to obtain the metric measurements and estimates aggregated at the appropriate level. An example user interface 430 and example visualizations are described with reference to
Portion 540 provides identification information for a query whose metrics are being currently displayed in portion 560. Portion 550 comprises various tabs, each tab corresponding to a metric. The metrics are elapsed time, execution time, memory allocated, memory reserved, rows processed, and history. The history metric represents a view of execution of the entire query up to the current time over one or more metrics. Portion 560 constitutes the representation of the selected metric (elapsed time) at the selected level of execution (node level). The representation is a bar graph representing elapsed time for each node. When the analyze operation has been selected, portion 560 can be updated to highlight metrics relevant to the one or more paths that may be responsible for a deviation.
While this analysis has been illustrated through visualizations 600 and 650, this analysis may be performed by analysis module 415 without the use of visualizations. The visualizations merely illustrate how this information could be presented to a user. In addition, this type of analysis may be performed for any of the metrics, such as comparing memory reserved to memory allocated. Analysis module 415 may be configured with multiple heuristic algorithms for comparing metrics and estimates at various levels to identify deviations between metrics and estimates as well as deviations from expected correlations.
Visualization 700 illustrates a visualization that may be useful for a database administrator or other skilled person. In particular, key operations (physical operators) in the paths may be identified. The key operations may be identified using the query explain plan, where the key operation in a path would be the first operation in the path. Additionally, statistics regarding execution of the operation may be provided, the statistics being based on the metric measurements and estimates. This is illustrated in visualization 700 via boxes 710 and 720. Box 710 identifies the Join operation and provides relevant execution statistics and box 720 identifies the GroupBy operation and provides relevant execution statistics. The skilled user may use this information to analyze the query, check the database schema, consider adding or dropping projections or indices, change resource pool configurations, tune the workload management policies being used, etc.
Visualization 750 illustrates a visualization that may be useful for non-technical people, such as business and data analysts. Nonetheless, the information provided in this visualization may also be useful to the skilled user, such as a database administrator. Here, boxes 760 and 770 provide information about the identified paths in a less technical, more user friendly manner. Essentially, the potential problems with the path are explained in prose. Furthermore, suggestions for fixing the problem are provided. For instance, it is suggested that the statistics used for the computation should be recomputed and the user is urged to consult the database administrator for assistance. Boxes 760 and 770 may be mapped to critical operators in the query statement, such as shown in visualization 700. Alternatively, boxes 760 and 770 may be mapped to relevant portions of a natural language description of the query, as shown here in visualization 750. The natural language description may be automatically generated using known techniques for generating natural language descriptions from query language.
In addition, users of computer 910 may interact with computer 910 through one or more other computers, which may or may not be considered part of computer 910. As an example, a user may interact with computer 910 via a computer application residing on computer 910 or on another computer, such as a desktop computer, workstation computer, tablet computer, or the like. The computer application can include a user interface (e.g., touch interface, mouse, keyboard, gesture input device).
Computer 910 may perform methods 100, 200, 250, and 300, and variations thereof. Additionally, the functionality implemented by computer 910 may be part of a larger software platform, system, application, or the like. For example, computer 910 may be part of a data analysis system.
Computer(s) 910 may have access to a database. The database may include one or more computers, and may include one or more controllers and machine-readable storage mediums, as described herein. Computer 910 may be connected to the database via a network. The network may be any type of communications network, including, but not limited to, wire-based networks (e.g., cable), wireless networks (e.g., cellular, satellite), cellular telecommunications network(s), and IP-based telecommunications network(s) (e.g., Voice over Internet Protocol networks). The network may also include traditional landline or a public switched telephone network (PSTN), or combinations of the foregoing.
Processor 920 may be at least one central processing unit (CPU), at least one semiconductor-based microprocessor, other hardware devices or processing elements suitable to retrieve and execute instructions stored in machine-readable storage medium 930, or combinations thereof. Processor 920 can include single or multiple cores on a chip, multiple cores across multiple chips, multiple cores across multiple devices, or combinations thereof. Processor 920 may fetch, decode, and execute instructions 932-938 among others, to implement various processing. As an alternative or in addition to retrieving and executing instructions, processor 920 may include at least one integrated circuit (IC), other control logic, other electronic circuits, or combinations thereof that include a number of electronic components for performing the functionality of instructions 932-938. Accordingly, processor 920 may be implemented across multiple processing units and instructions 932-938 may be implemented by different processing units in different areas of computer 910.
Machine-readable storage medium 930 may be any electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions. Thus, the machine-readable storage medium may comprise, for example, various Random Access Memory (RAM), Read Only Memory (ROM), flash memory, and combinations thereof. For example, the machine-readable medium may include a Non-Volatile Random Access Memory (NVRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage drive, a NAND flash memory, and the like. Further, the machine-readable storage medium 930 can be computer-readable and non-transitory. Machine-readable storage medium 930 may be encoded with a series of executable instructions for managing processing elements.
The instructions 932-938 when executed by processor 920 (e.g., via one processing element or multiple processing elements of the processor) can cause processor 920 to perform processes, for example, methods 100, 200, 250, and 300, and/or variations and portions thereof.
For example, receiving instructions 932 may cause processor 920 to receive multiple measurements of a plurality of metrics relating to execution of a workload over a database. The workload may be a query. Aggregating instructions 934 may cause processor 920 to aggregate the multiple measurements of the plurality of metrics at multiple levels of execution of the workload. The multiple levels of execution can include a query level, a query phase level, a node level, a path level, and an operator level. Comparing instructions 936 may cause processor 920 to compare measurements of a first metric at a first level to measurements of a second metric at the first level to determine whether there is a deviation from an expected correlation between the two metrics. Identifying instructions 938 may cause processor 920 to identify a path in the workload that may be responsible for the deviation if it is determined that there is a deviation from the expected correlation between the two metrics.
In the foregoing description, numerous details are set forth to provide an understanding of the subject matter disclosed herein. However, implementations may be practiced without some or all of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2013/076779 | 12/20/2013 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2015/094312 | 6/25/2015 | WO | A |
Number | Date | Country | |
---|---|---|---|
20160292230 A1 | Oct 2016 | US |