This invention relates generally to database management systems and external object storage systems, and more particularly to improved methods for optimizing workload performance and costs within database management systems employing external cloud storage, such as public and private cloud storage solutions including Amazon Web Services (AWS), Microsoft Azure, Google Cloud, IBM Cloud, and others.
Traditional database systems typically employ a cost-based query optimizer, which produces the most efficient access and execution plan to retrieve the data satisfying a SQL query submitted by a user. This execution plan determines the steps, their order, the degree of parallelism, and the data access methods that will most efficiently deliver the result for the submitted SQL query. Cardinality estimation, the process of estimating the number of distinct values or rows that will be returned by a specific database query or operation, is a crucial aspect of query optimization and is used to determine the most efficient query execution plan.
As object store database systems are increasingly employed in cloud platforms, such as the Teradata Corporation VantageCloud Lake data platform, an updated process for estimating cardinality in systems where data is saved in objects is desired. Unlike in traditional database systems, it is also crucial to estimate the number of objects that could include qualifying rows, because the cost of accessing objects may be a nontrivial factor in processing a SQL query in object store database systems.
Some implementations of the present disclosure are described with respect to the following figures.
A parallel, scalable network connection is provided between primary cluster 101 and multiple compute clusters 103. This connection provides load balancing between multiple compute clusters and transfers finalized query steps to the compute clusters for execution.
Primary cluster 101 contains a database management system consisting of one or more network compute units or nodes 205 that manage the storage, retrieval, and manipulation of data stored on one or more block storage disks 212 as shown in
Generally, requests in the form of queries 201 are transmitted via a network 203 to the primary cluster 101, and responses are received therefrom. The database management system of primary cluster 101 performs the workload comprised of the one or more queries 201 against a relational database comprised of one or more tables storing data. Specifically, the database management system performs the functions described below, including accepting the workload comprised of the queries 201, generating one or more query execution plans (QEPs) from the queries 201, and then performing the query execution plans to process data retrieved from the tables. Moreover, the results from these functions may be provided directly to clients, may be provided to other systems (not shown) via network 203, or may be stored by the database management system in the database.
As shown in
In one example, each compute unit 205 may include one or more physical processors 206 and memory 207. The memory 207 may include one or more memories and may be computer-readable storage media or memories, such as a cache, buffer, RAM, removable media, hard drive, flash drive, or other computer-readable storage media. Computer-readable storage media may include various types of volatile and nonvolatile storage media. Various processing techniques may be implemented by the processors 206 such as multiprocessing, multitasking, parallel processing and the like, for example.
The compute units 205 may include one or more other processing units such as parsing engine (PE) modules 208 and access modules (AM) 210. As described herein, “modules” are defined to include software, hardware or some combination thereof executable by one or more physical and/or virtual processors. Software modules may include instructions stored in the one or more memories that are executable by one or more processors. Hardware modules may include various devices, components, circuits, gates, circuit boards, and the like that are executable, directed, and/or controlled for performance by one or more processors.
The parsing engine modules 208 and the access modules 210 may each be virtual processors (vprocs) and/or physical processors. In the case of virtual processors, the parsing engine modules 208 and access modules 210 may be executed by one or more physical processors, such as those that may be included in the compute units 205. For example, in
In
The database management system stores data in one or more tables in block storage 212. In one example, the database system may be configured to distribute rows across access modules 210 and their associated block storage 212. These rows may include rows read from object store 105. Each parsing engine module 208 may organize the storage of data and the distribution of table rows and columns. The parsing engine modules 208 may also coordinate the retrieval of data from block storage 212 in response to queries received through connection with a network 203. The network 203 may be wired, wireless, or some combination thereof. The network 203 may be a virtual private network, web-based, directly-connected, or some other suitable network configuration.
In one example system, each parsing engine module 208 includes four primary components: a session control module 300, a parser module 302, an optimizer 304, and a dispatcher module 306 as shown in
As illustrated in
Selecting the optimal query-execution plan may include, among other things, identifying which primary cluster 101, compute clusters 103, and compute units 205 are involved in executing the query and which database tables are involved in the query, as well as choosing which data-manipulation techniques will serve best in satisfying the conditions of the query. To this end, for each parsing engine module 208, the parser module 302 (see
The data dictionary module may specify the organization, contents, and conventions of one or more databases, such as the names and descriptions of various tables maintained by database management system as well as fields of each database, for example. Further, the data dictionary module 406 may specify the type, length, and/or other various characteristics of the stored tables. The database management system typically receives queries in a standard format, such as the structured query language (SQL) put forth by the American National Standards Institute (ANSI). However, other formats, such as contextual query language (CQL), data mining extensions (DMX), and multidimensional expressions (MDX), for example, may be implemented in the database system separately or in conjunction with SQL. The data dictionary may be stored in block storage disks 212 or some other storage device and selectively accessed.
Referring again to
The interconnection 214 may be hardware, software, or some combination thereof. In instances of at least a partial-hardware implementation of the interconnection 214, the hardware may exist separately from any hardware (e.g., processors, memory, physical wires, etc.) included in the compute units 205 or may use hardware common to the compute units 205. In instances of at least a partial-software implementation of the interconnection 214, the software may be stored and executed on one or more of the memories 207 and processors 206 of the compute units 205 or may be stored and executed on separate memories and processors that are in communication with the compute units 205. In one example, interconnection 214 may include multi-channel media such that if one channel ceases to properly function, another channel may be used. Additionally, or alternatively, more than one channel may also allow distributed communication to reduce the possibility of an undesired level of communication congestion among compute units 205.
Compute clusters 103 exist as separate clusters of network-connected nodes independent of primary cluster 101. Each compute cluster 103 is separate and may be specialized. Compute clusters 103 enable the extension and scaling of system compute power.
As shown in
Compute clusters 103 do not have any permanent data. A data dictionary structure exists on a compute cluster, but it serves only the transient needs of the compute cluster. It does not contain table or column descriptions or details about statistics, indexes, or privileges. All that detail is maintained in primary cluster 101.
A compute cluster 103 can read large tables in object storage 105. It can also hold intermediate data, keeping it in memory or in internal drives.
Elasticity and extensible compute power is provided to the database platform via different quantities, configurations, and sizes of compute clusters 103. Each compute cluster 103 stands alone and executes queries that access object storage 105 to perform compute-intensive work such as analytic functions, freeing up primary cluster 101 to perform session management, parsing engine work, and tactical or other short-term work.
Depending on workload, a compute configuration may employ compute clusters having differing quantities of compute nodes 505 and processing capability. A compute cluster having a greater number of compute units or nodes 505 will accordingly have more processors 506, memory 507, and access modules 510. With more access modules, a query or task assigned to a larger compute cluster can execute at a higher level of parallelism and deliver faster response times. Compute clusters can be categorized as Small, Medium, Large, or X-Large depending upon the number of compute units or nodes 505 contained in a compute cluster 103.
A compute configuration may employ zero or many compute clusters, with compute clusters being added to or removed from the configuration to meet workload needs. A compute configuration with zero compute clusters would consist of only primary cluster 101. Groupings of compute clusters can automatically scale up additional compute clusters based on resource demand or the number of active queries.
The optimizer 304 in the primary cluster 101 determines which query steps go to a compute cluster 103 and builds a query plan. During optimization, the work that a query needs to accomplish is broken into several steps. Some of these steps will execute on primary cluster 101, and if appropriate privileges are in place, some steps will execute on a compute cluster 103. Even if there are several compute clusters within a cluster configuration, a single query can only execute steps in one compute cluster. An execution plan may include processing a query step or two in primary cluster 101, and then processing one or more steps on a compute cluster 103. The compute cluster parsing engine 508 receives the query plan from primary cluster 101 and is responsible for dispatching steps down to the compute cluster access modules 510 for execution. When to use a compute cluster 103, what compute clusters to use, and the quantity of clusters to use, is determined by the optimizer in primary cluster 101 at the time the initial query plan is built.
Each cluster in the database environment is independent and isolated from other clusters, though queries can span a primary cluster 101 and one or more compute clusters 103, with communication between primary cluster 101 and compute clusters 103 occurring through a network connection 203. Data and instructions about how to execute query 201 may also be transmitted between the primary and compute clusters by means of a data access layer referred to as a data fabric, such as QueryGrid provided by Teradata Corporation. Results generated by compute clusters 103 are provided through the network or data fabric back to primary cluster 101.
Within an object store, minimum and maximum values of individual columns are typically maintained in objects. This information indicates whether an object might have a row qualifying a given condition. For example, given a query “SELECT c1 as criterial1 FROM ProductTest WHERE test_id between 10 and 12;”, objects whose maximum value of test_id is less than 10 or whose minimum value of test_id is greater than 12 are guaranteed not to have any row qualifying the condition, so those objects need not be accessed to answer the query.
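The min/max pruning described above can be sketched in a few lines; the tuple representation of per-object column statistics below is an illustrative assumption, not the object store's actual metadata format:

```python
# Each object keeps per-column minimum and maximum values.
# An object can be skipped when its [min, max] range cannot
# overlap the query's condition range [q_min, q_max].
def may_qualify(obj_min, obj_max, q_min, q_max):
    """True if the object might contain a qualifying row."""
    return not (obj_max < q_min or obj_min > q_max)

# (min, max) of column test_id per object; condition: BETWEEN 10 AND 12.
objects = [(1, 8), (9, 11), (12, 20), (15, 30)]
to_access = [o for o in objects if may_qualify(*o, 10, 12)]
# (1, 8) and (15, 30) are pruned: they cannot hold test_id in [10, 12].
```

Only the objects whose ranges overlap the condition range are accessed; the others are guaranteed to contribute no qualifying rows.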
Object cardinality estimation is critical for the optimizer to compute the cost of executing queries and produce optimal execution plans. One example is deciding whether to scan all objects or to locate objects by an index (which maintains the minimum and maximum values of objects for each column). Another example is deciding an object arrangement for parallel join processing: if only a few objects are estimated to be accessed for one join relation, those objects can be assigned to all the parallel processing units redundantly, while the objects of the other join relation are distributed across the parallel processing units uniformly and disjointly. The cost estimate is also used by a workload management system to control query executions and meet their service level goals. The object cardinality estimation is more critical than the estimation of the number of blocks in traditional database systems, since the latency of accessing objects is much higher.
Three object cardinality estimation approaches which exploit different statistics maintained within an object store are described herein: (1) general estimation, (2) stratified estimation, and (3) object-range-histogram-based estimation. The described approaches are applicable to estimating accesses to any storage units that maintain the minimum and maximum values of columns. For example, in Parquet, the solution can be used to estimate the number of row groups and/or the number of pages to be accessed.
The object cardinality estimation approaches estimate the number of objects to be accessed for a given query condition q on a column c of a relation r composed of a set of objects O, where each object maintains the minimum value and the maximum value of individual columns.
Value Range. A value range is a set of contiguous values represented with a lower-bound (lb) and upper-bound (ub). A range is represented as [lb:ub] when the lower-bound and the upper-bound are included in the range. When boundaries are not included in a range, the range is represented as either {lb:ub] where lb is not included in the range, [lb:ub}, where ub is not included in the range, or {lb:ub} when neither lb nor ub is included in the range.
Four Types of Value Ranges. This solution considers the following ranges: “global range”, “object range”, “condition range”, and “qualified range”, explained below.
Range Width. The width of a value range is denoted as W([lb:ub]), W([lb:ub}), W({lb:ub]), W({lb:ub}). For discrete data, the width is the number of possible values in the range. So, W([lb:ub])=(ub−lb)/gap+1, W({lb:ub])=W([lb:ub})=((ub−lb)/gap), and W({lb:ub})=((ub−lb)/gap)−1, where gap is a distance between two adjacent values. For the example data in
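The width formulas above translate directly into code; a minimal sketch for discrete data, where `gap=1` corresponds to integer data:

```python
def width(lb, ub, lb_inc=True, ub_inc=True, gap=1):
    """Width of a value range over discrete data with distance `gap`
    between adjacent values: W([lb:ub]) = (ub - lb)/gap + 1, minus
    one for each boundary that is excluded from the range."""
    w = (ub - lb) / gap + 1
    if not lb_inc:
        w -= 1
    if not ub_inc:
        w -= 1
    return w

# For integers (gap = 1):
# W([3:7]) = 5, W({3:7]) = W([3:7}) = 4, W({3:7}) = 3
```

The same helper covers all four boundary conventions, matching the four W(...) forms defined above.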
Sum of Object Range Widths. A new statistic, the sum of object range widths of a column c, denoted as S(c,O) or simply S, is proposed to be maintained as a part of the global statistics Z. The sum is formally defined as S(c,O)=Σo∈O W([O Min(c,o):O Max(c,o)]). For example,
Average of Object Range Widths. The average object range width of a column c, denoted as Aw or Aw(c,O), is defined as Aw(c,O)=S(c,O)/|O|. The average width of a column c is used to generalize object ranges within the global range of the column c. Note that the average width is not maintained in Z but derived from S and |O|.
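A minimal sketch of collecting S and deriving Aw from per-object (O Min, O Max) pairs, assuming discrete data with gap 1:

```python
def sum_of_widths(ranges, gap=1):
    """S(c, O): total of W([O_min : O_max]) over all objects of a
    column, for discrete data with distance `gap` between values."""
    return sum((ub - lb) / gap + 1 for lb, ub in ranges)

ranges = [(1, 10), (5, 14), (20, 29)]   # (O_min, O_max) per object
S = sum_of_widths(ranges)               # 10 + 10 + 10 = 30
Aw = S / len(ranges)                    # average object range width = 10.0
```

Only S is kept in the global statistics Z; Aw is derived on demand from S and |O|, as the text notes.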
Object Cardinality Estimation (E(q) or simply E). Estimating the number of objects to be accessed for a query from O is conceptually the same as estimating the number of object ranges which overlap a query condition range [q min:q max] within a global range [G Min:G Max]. For an unsatisfiable condition q, i.e., one for which a qualified range does not exist (like cr3 over c1 in the illustrated example), E(q) is 0.
Intuition Behind E(q). The general estimate assumes that the |O| object ranges all have the average width Aw and are evenly positioned within the global range; the number of object ranges overlapping the qualified range can then be estimated from the ratio of range widths.
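As a concrete illustration of this intuition, the following sketch estimates E(q) under those two assumptions for discrete data with gap 1. The exact formula here, an overlap window of width W(q)+Aw−1 of possible range positions, is an illustrative reading and not necessarily the document's precise equation:

```python
def general_estimate(n_objects, aw, g_min, g_max, q_min, q_max):
    """Sketch of the general object cardinality estimate.

    Assumes all n_objects ranges have the average width aw and are
    evenly positioned within the global range [g_min : g_max]."""
    # Clamp the condition range to the global range; an unsatisfiable
    # condition (no qualified range) yields an estimate of 0.
    q_min, q_max = max(q_min, g_min), min(q_max, g_max)
    if q_min > q_max:
        return 0.0
    global_width = g_max - g_min + 1
    qualified_width = q_max - q_min + 1
    # A range of width aw overlaps the qualified range iff it starts
    # within a window of qualified_width + aw - 1 positions.
    overlap_window = min(qualified_width + aw - 1, global_width)
    return n_objects * overlap_window / global_width

# 100 objects of average width 10 over [1:1000], condition [101:200]:
# roughly 100 * (100 + 9) / 1000, i.e. about 10.9 objects.
```

The estimate grows with both the qualified range width and the average object range width, which matches the intuition that wider object ranges are more likely to overlap any given condition range.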
Limitation of the General Approach—Variance of Object Range Widths. The general estimate approach assumes that all the objects have the same width as the average. When object range widths diverge significantly, the estimation error can grow. Comparing [15,26] over c4 (804) and c5 (805) in
Stratified Estimation. The stratified estimation approach is to group object value ranges which have similar widths, then use an adjusted average value range for each group in the estimation. That is, E(q) is the sum of the estimated number of objects for each group. The differences of object range widths are assumed to follow a normal distribution. If ten groups are considered for E(q), then each group consists of |O|*10% of the objects. According to the well-known probability density of the normal distribution (https://en.wikipedia.org/wiki/Normal_distribution), 10% of the objects can be estimated to have widths less than Aw−1.29*σ, the next 10% of objects have widths between Aw−1.29*σ and Aw−0.85*σ, and so on, where σ is the standard deviation.
Adjusted Object Count and Adjusted Average Width. Table 1 shows the adjusted object count and the adjusted average width for the estimation in each group. The adjusted average width is the value whose probability density function output is the mid-point of the two outputs of the probability density function for the minimum width and the maximum width in each group.
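The group boundaries and the adjusted average width can be derived from the normal model with standard library tools. `statistics.NormalDist` is used here for the quantiles and densities, and the bisection-based inversion is an illustrative implementation of the pdf-midpoint definition above, not the document's exact procedure:

```python
from statistics import NormalDist

def group_boundaries(aw, sigma, n_groups=10):
    """Width boundaries of equal-population groups under a normal
    model of object range widths (mean aw, std dev sigma)."""
    nd = NormalDist(aw, sigma)
    return [nd.inv_cdf(k / n_groups) for k in range(1, n_groups)]

def adjusted_width(aw, sigma, w_lo, w_hi, tol=1e-9):
    """Value whose normal pdf is the midpoint of pdf(w_lo) and
    pdf(w_hi). The pdf is monotonic on either side of the mean, so
    a simple bisection suffices for a one-sided group."""
    nd = NormalDist(aw, sigma)
    target = (nd.pdf(w_lo) + nd.pdf(w_hi)) / 2
    lo, hi = sorted((w_lo, w_hi))
    increasing = nd.pdf(hi) >= nd.pdf(lo)   # pdf rises toward the mean
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (nd.pdf(mid) < target) == increasing:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# With Aw = 20 and sigma = 5, the lowest decile ends near
# 20 - 1.2816 * 5, matching the Aw - 1.29*sigma figure above.
```

The decile boundaries reproduce the z-scores quoted in the text (1.29, 0.85, and so on), and the adjusted width of each group falls strictly between the group's minimum and maximum widths.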
Maintenance and Computation of the Standard Deviation. To get the standard deviation, the sum of squared object widths for each column c, denoted as S2(c), is to be collected and maintained in Z alongside S(c). The standard deviation is the square root of the difference between S2(c) divided by |O| and the square of S(c) divided by |O|. That is, σ=√((S2(c)/|O|)−(S(c)/|O|)²).
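The maintenance scheme above can be checked with a small sketch: only the two running sums and the object count are needed to recover σ.

```python
from math import sqrt

def stddev_from_sums(s, s2, n):
    """Standard deviation of object range widths from the maintained
    sums: s = sum of widths, s2 = sum of squared widths, n = |O|."""
    return sqrt(s2 / n - (s / n) ** 2)

widths = [8, 10, 12]
s, s2, n = sum(widths), sum(w * w for w in widths), len(widths)
# s2/n - (s/n)^2 = 308/3 - 100, so sigma is about 1.633
```

Because S(c) and S2(c) are simple sums, both can be updated incrementally as objects are created or removed, without rescanning all object statistics.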
Determination of the Number of Groups. The number of groups can be determined based on the significance of the standard deviation. When the standard deviation is trivial or 0, the adjusted widths of all groups are the same regardless of how many groups are considered, and the stratified estimation becomes the same as the general estimate. When the standard deviation is significant, the more groups there are, the better the estimation accuracy. If an adjusted average width falls below the minimum width or above the maximum width, it is clamped to the minimum width or the maximum width, respectively. The statistics of the minimum width and the maximum width can be maintained additionally; otherwise, the minimum width is set to 1 for discrete data and 0 for indiscrete data, and the maximum width is set to the global width W([G Min:G Max]).
Limitation of the General Approach—Skewed Positions of Object Ranges. The general estimate approach assumes that all object ranges are evenly positioned within the global range. When many object ranges are heavily jammed into certain subranges and/or far fewer object ranges are positioned in other subranges, the estimation accuracy may be negatively impacted. Referring to
Object Range Histogram. The object range histogram of a column c, ORH(c) or simply ORH, is a set of mutually exclusive intervals, i.e., subranges that together compose the global range of the column c, which keep information about the objects that overlap the individual subranges. The ORH gives insight into where object ranges are positioned within the global range. For discrete data, the range of the k'th interval is represented as [I Min k(c):I Max k(c)]. For indiscrete data, the interval k can be represented as [I Min k(c):I Max k(c)} or {I Min k(c):I Max k(c)].
Four Object Types. A set of objects which overlap the interval k is denoted as Ok. Objects in Ok are classified into four types. First, there are objects whose ranges cover the interval k entirely and overlap both the interval (k−1) and the interval (k+1); this set is denoted as ↔Ok. Second, there are objects whose ranges overlap the interval k and the interval (k−1), but do not overlap the interval (k+1); this set is denoted as ←Ok. Third, there are objects whose ranges overlap the interval k and the interval (k+1), but do not overlap the interval (k−1); this set is denoted as →Ok. Fourth, there are objects whose ranges overlap the interval k only, overlapping neither the interval (k−1) nor the interval (k+1); this set is denoted as ·Ok.

Statistics of an Interval. For each interval k and a column c, the following statistics can be maintained: the object counts |↔Ok|, |←Ok|, |→Ok|, and |·Ok|; the sums of object range widths Sk(c,↔Ok), Sk(c,←Ok), Sk(c,→Ok), and Sk(c,·Ok); and the sums of squared object range widths S2k(c,↔Ok), S2k(c,←Ok), S2k(c,→Ok), and S2k(c,·Ok).

Scope of the Statistics of an Interval. An object in ↔Ok may belong to ↔O(k+1) or ←O(k+1) of interval (k+1). Likewise, an object in →Ok may belong to ↔O(k+1) or ←O(k+1) of interval (k+1). However, it is obvious that the sum of |↔Ok| and |→Ok| equals the sum of |↔O(k+1)| and |←O(k+1)|.
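Assuming contiguous intervals, so that crossing an interval boundary is equivalent to overlapping the neighboring interval, the four-way classification of an object against an interval can be sketched as:

```python
def classify(obj_min, obj_max, i_min, i_max):
    """Classify an object range against interval [i_min : i_max]:
    'both'  - covers the interval and extends past both boundaries
    'left'  - extends past the lower boundary only (into interval k-1)
    'right' - extends past the upper boundary only (into interval k+1)
    'only'  - falls entirely within the interval."""
    assert obj_max >= i_min and obj_min <= i_max, "object must overlap"
    left = obj_min < i_min    # reaches into interval (k-1)
    right = obj_max > i_max   # reaches into interval (k+1)
    if left and right:
        return 'both'
    if left:
        return 'left'
    if right:
        return 'right'
    return 'only'

# Against interval [10:19]:
# (5, 25) -> 'both', (5, 12) -> 'left', (15, 30) -> 'right', (11, 18) -> 'only'
```

Note that an object extending past both boundaries necessarily covers the interval entirely, so the first case coincides with the first object type defined above.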
Example of Object Range Histogram.
Three Kinds of ORH. Depending on the information kept for each interval, three types of ORHs can be used. As more information is kept for intervals, the complexity of maintaining/using an ORH is increased, but a better estimation accuracy is expected.
E(q) with ORH-1. The first step is to identify the qualified range, i.e., [q min:q max], and the intervals that overlap it. The second step is to combine those intervals into one interval. The first and the second steps can be done progressively while scanning intervals. Supposing m intervals overlap a given query condition range, from interval k through interval (k+m−1), the statistics of the combined interval can be determined as shown below, and E(q) can be computed as if the combined interval were the global range.
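A sketch of this combine-then-estimate procedure follows. The assumption that an ORH-1 interval keeps only an object count and a sum of object range widths is illustrative (the document does not enumerate ORH-1's exact contents), and the estimate on the combined interval reuses the average-width overlap-window reading of the general estimate:

```python
def estimate_orh1(intervals, q_min, q_max):
    """intervals: sorted, mutually exclusive (i_min, i_max, n_objects,
    sum_widths) tuples. Combines the intervals overlapping
    [q_min : q_max] into one, then applies the general estimate as if
    the combined interval were the global range (discrete data, gap 1)."""
    hit = [iv for iv in intervals if iv[1] >= q_min and iv[0] <= q_max]
    if not hit:
        return 0.0
    g_min, g_max = hit[0][0], hit[-1][1]
    n = sum(iv[2] for iv in hit)
    if n == 0:
        return 0.0
    aw = sum(iv[3] for iv in hit) / n       # average width over the span
    q_lo, q_hi = max(q_min, g_min), min(q_max, g_max)
    global_w = g_max - g_min + 1
    window = min((q_hi - q_lo + 1) + aw - 1, global_w)
    return n * window / global_w
```

Restricting the estimate to the overlapped intervals is what lets the ORH correct for skewed object range positions within the global range.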
E(q) with ORH-2 and ORH-3. The estimation with ORH-2 and ORH-3 is performed on individual intervals, and the estimated number of objects to be accessed for each interval are summed up and become the final estimate. The estimate for an interval k is denoted as Ek(q). Table 2 shows five estimation cases for Ek(q), followed by how Ek(q) is computed.
Interval Creation. For the use of an ORH, intervals must be created first as objects are generated. When the global minimum and the global maximum values are known (e.g., when application domain experts know them, when an object-store table is created from another object-store table, or when object-store tables are reorganized, etc.), equal-width intervals can be created up to the threshold of the maximum number of intervals. Otherwise, intervals can be created as new objects are generated and existing intervals do not cover a new object range. Suppose a new object range, in terms of a column c, i.e., [O Min(c,o):O Max(c,o)]. When there is no interval, an interval can be created with the range of [O Min(c,o):O Max(c,o)]. When O Min(c,o)<I Min 1(c), a new interval can be created with the range of [O Min(c,o):I Min 1(c)}. When I Max n(c)<O Max(c,o), where n is the number of intervals in the ORH, a new interval can be created with the range of {I Max n(c):O Max(c,o)].
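The three creation cases can be sketched as follows; the list-of-tuples layout is illustrative, and the open/closed boundary distinction of the ranges above is simplified away:

```python
def cover(intervals, o_min, o_max):
    """Extend a sorted list of contiguous interval ranges so that a new
    object range [o_min : o_max] is covered, following the three cases
    described above (boundary conventions simplified to closed ranges)."""
    if not intervals:
        intervals.append((o_min, o_max))        # first interval
        return intervals
    if o_min < intervals[0][0]:                 # extend below I Min 1
        intervals.insert(0, (o_min, intervals[0][0]))
    if o_max > intervals[-1][1]:                # extend above I Max n
        intervals.append((intervals[-1][1], o_max))
    return intervals
```

A new object falling inside the existing interval span leaves the interval list unchanged; only ranges outside the current span trigger interval creation.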
Interval Merging. The interval merging is to combine two adjacent intervals l and r, and create a new interval k which covers both intervals l and r. This merging can be performed if the number of intervals is greater than or close to the threshold of the maximum number of intervals. For ORH-1, the merging process is simple. For ORH-2 and ORH-3, the per-type object sets must be reassigned. It is obvious that all objects in ←Ol (objects overlapping interval l and interval (l−1) but not interval r) belong to ←Ok, because it is guaranteed that an object in ←Ol cannot span interval (k+1). Likewise, it is obvious that all objects in →Or belong to →Ok, and that all objects in ·Ol and ·Or belong to ·Ok. On the other hand, it is not obvious to which types the objects in →Ol, ↔Ol, ←Or and ↔Or belong, as each of them may map to either of two object types of interval k. One simple strategy to determine an after-merge object type is to assume that the objects in →Ol and ↔Ol correspond to ←Or and ↔Or in proportion to the ratio of |←Or| and |↔Or|. That is, for →Ol, |→Ol|*(|←Or|/(|←Or|+|↔Or|)) objects belong to ·Ok, and |→Ol|*(|↔Or|/(|←Or|+|↔Or|)) objects belong to →Ok. For ↔Ol, |↔Ol|*(|←Or|/(|←Or|+|↔Or|)) objects belong to ←Ok, and |↔Ol|*(|↔Or|/(|←Or|+|↔Or|)) objects belong to ↔Ok.
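One way to realize a proportional redistribution of per-type object counts when merging two adjacent intervals l and r is sketched below; the dictionary layout and the exact proportional rule are illustrative assumptions:

```python
def merge_counts(left, right):
    """Merge per-type object counts of two adjacent intervals l and r
    into counts for the combined interval k. Types: 'both', 'left',
    'right', 'only' (see the four object types). Objects of l that
    cross into r (its 'right' and 'both' objects) are the same objects
    as r's 'left' and 'both' objects; whether a crosser stops inside r
    or continues past it is apportioned by the ratio |left_r| : |both_r|."""
    boundary = right['left'] + right['both']   # objects crossing the l|r edge
    p_stop = right['left'] / boundary if boundary else 0.0
    p_go = 1.0 - p_stop                        # continue past r's upper edge
    return {
        'left':  left['left'] + left['both'] * p_stop,
        'both':  left['both'] * p_go,
        'right': right['right'] + left['right'] * p_go,
        'only':  left['only'] + right['only'] + left['right'] * p_stop,
    }
```

The merged counts preserve the total: boundary-crossing objects are counted once even though both intervals' statistics recorded them, which mirrors the scope property of interval statistics noted earlier.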
Interval Splitting. The interval splitting is to split an interval k into two adjacent intervals l and r. This operation can be performed when the width of interval k is relatively much greater than that of the other intervals and/or when one of the object counts |↔Ok|, |←Ok|, |→Ok|, and |·Ok| is relatively much greater than those of the other intervals. The after-split object types ↔Ol, ←Ol, →Ol and ·Ol of interval l, and ↔Or, ←Or, →Or and ·Or of interval r, are then determined from the statistics of interval k, in a manner analogous to the interval merging above.
Interval Removal. The interval removal is to get rid of an interval so that the interval is not maintained anymore. It can happen when existing objects are removed and no remaining object's range overlaps the interval. Note that removal happens on the first or the last interval, so that the lower boundary of the first interval and the upper boundary of the last interval continue to match the global minimum value and the global maximum value.
Parallelism and Multiple Histograms. When there are multiple parallel processing units (PPUs) to build an ORH, a PPU can build its own ORH for the data assigned to the PPU (i.e., as data objects are generated for the data). It is also possible that one PPU partitions its own data and creates multiple ORHs for each column. In such a case, the final estimate for a column is the sum of the estimates from all individual ORHs. If it is not tolerable to iteratively compute the estimate for each ORH, or if it is not tolerable to keep and maintain multiple ORHs for a column, then those multiple ORHs for a column can be combined. Note that ORHs may have different interval boundaries. To align interval boundaries of two ORHs, the interval splitting operation and the interval merging operation can be performed.
The approaches for estimating cardinality, determining query execution plan costs, and selecting a query plan for execution in systems where data is saved in objects described herein provide a unique approach to improve workload performance and costs within database management systems employing external object storage. In the foregoing description, numerous details are set forth to provide an understanding of the subject disclosed herein. However, implementations may be practiced without some of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations.