The invention relates generally to the field of relational databases and specifically to the field of optimizing queries on databases.
Most query optimizers for relational database management systems (RDBMS) rely on a cost model to choose the best possible query execution plan for a given query. Thus, the quality of the query execution plan depends on the accuracy of cost estimates. Cost estimates, in turn, crucially depend on cardinality estimations of various sub-plans (intermediate results) generated during optimization. Traditionally, query optimizers use statistics built over base tables for cardinality estimates, and assume independence while propagating these base-table statistics through the query plans. However, it is widely recognized that such cardinality estimates can be off by orders of magnitude. Therefore, the traditional propagation of statistics can lead the query optimizer to choose significantly lower-quality execution plans.
Using conditional selectivity as a framework for manipulating query plans to leverage statistical information on intermediate query results can result in more efficient query plans. The number of tuples returned by a database query having a set of predicates, each referencing a set of database tables, can be approximated. The query is decomposed to form a product of partial conditional selectivity expressions. The partial conditional selectivity expressions are then matched with stored statistics on query expressions to obtain estimated partial conditional selectivity values. The selectivity of the query is then estimated by combining the obtained partial conditional selectivity values. The resulting query selectivity estimate can be multiplied by the cardinality of the Cartesian product of the tables referenced in the query to arrive at a cardinality value.
The decomposition of the query can be performed recursively by repeatedly separating conditional selectivity expressions into atomic decompositions. During matching, an error can be associated with a selectivity estimation that is generated using a given statistic, and the statistics with the lowest error may be selected to generate the query selectivity estimate. The error may be based on the difference between a statistic that is generated by an intermediate query result and a statistic on the corresponding base table. Statistics on query expressions that correspond to a subset of the predicates represented in a given selectivity expression may be considered for estimating the selectivity of the given selectivity expression. In an optimizer environment, the decomposition may be guided by the sub-plans generated by the optimizer. A wider variety of queries can be decomposed by transforming disjunctive query predicates into conjunctive query predicates.
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which:
Exemplary Operating Environment
With reference to
A number of program modules may be stored on the hard disk, magnetic disk 129, optical disk 31, ROM 24 or RAM 25, including an operating system 35, one or more application programs 36, other program modules 37, and program data 38. A database system 55 may also be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24 or RAM 25. A user may enter commands and information into personal computer 20 through input devices such as a keyboard 40 and pointing device 42. Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to processing unit 21 through a serial port interface 46 that is coupled to system bus 23, but may be connected by other interfaces, such as a parallel port, game port or a universal serial bus (USB). A monitor 47 or other type of display device is also connected to system bus 23 via an interface, such as a video adapter 48. In addition to the monitor, personal computers typically include other peripheral output devices such as speakers and printers.
Personal computer 20 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 49. Remote computer 49 may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to personal computer 20, although only a memory storage device 50 has been illustrated in
When using a LAN networking environment, personal computer 20 is connected to local network 51 through a network interface or adapter 53. When used in a WAN networking environment, personal computer 20 typically includes a modem 54 or other means for establishing communication over wide area network 52, such as the Internet. Modem 54, which may be internal or external, is connected to system bus 23 via serial port interface 46. In a networked environment, program modules depicted relative to personal computer 20, or portions thereof, may be stored in remote memory storage device 50. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
Cost Estimation Using Cardinality Estimates Based on Statistics on Intermediate Tables
SITs are statistics built over the results of query expressions or intermediate tables, and their purpose is to eliminate error propagation through query plan operators. For the purposes of this description, a SIT is defined as follows: Let R be a table, A an attribute of R, and Q an SQL query that contains R.A in the SELECT clause. SIT(R.A|Q) is the statistic for attribute A on the result of executing query expression Q. Q is called the generating query expression of SIT(R.A|Q). This definition can be extended for multi-attribute statistics. Furthermore, the definition can be used as the basis for extending the CREATE STATISTICS statement in SQL, where instead of specifying the table name of the query, a more general query expression such as a table-valued expression can be used.
In U.S. patent application Ser. No. 10/191,822, which issued as U.S. Pat. No. 6,947,927 on Sep. 20, 2005, incorporated herein by reference in its entirety, the concept of SITs was introduced. A particular method of adapting a prior art query optimizer to access and utilize a preexisting set of SITs for cost estimation was described in detail in this application, which method is summarized here briefly as background information.
Referring to
In general, the use of SITs is enabled by implementing a wrapper (shown in phantom in
According to the embodiment described in application Ser. No. 10/191,822, now U.S. Pat. No. 6,947,927, the transformed plan that is passed to the cardinality estimation module exploits applicable SITs to enable a potentially more accurate cardinality estimate. The original cardinality estimation module requires little or no modification to accept the transformed plan as input. The transformation of plans is performed efficiently, which is important because the transformation will be used for several sub-plans for a single query optimization.
In general, there will be no SIT that matches a given plan exactly. Instead, several SITs might apply to some (perhaps overlapping) portions of the input plan. The embodiment described in application Ser. No. 10/191,822, now U.S. Pat. No. 6,947,927, integrates SITs with cardinality estimation routines by transforming the input plan into an equivalent one that exploits SITs as much as possible. The transformation step is based on a greedy procedure that selects which SITs to apply at each iteration, so that the number of independence assumptions during the estimation for the transformed query plan is minimized. Identifying whether or not a SIT is applicable to a given plan leverages materialized view matching techniques, as can be seen in the following example.
In the query shown in
Because the previous example employed view matching techniques as the main engine to guide transformations, no alternative was explored that exploited both SITs simultaneously. This is a fundamental constraint that results from relying exclusively on materialized view matching to enumerate alternatives. Therefore it is desirable to supplement the enumerated alternatives from materialized view matching with additional alternatives that leverage multiple SITs simultaneously. This is accomplished by using conditional selectivity as a formal framework to reason with selectivity values to identify and exploit SITs for cardinality estimation.
Conditional Selectivity
The concept of conditional selectivity allows expression of a given selectivity value in many different but equivalent ways. This description will focus on conjunctive Select Project Join queries, but the methods herein can be extended to handle more general queries.
An arbitrary SPJ query is represented in a canonical form by first forming the Cartesian product of the tables referenced in the query, then applying all predicates (including joins) to the Cartesian product, and projecting out the desired attributes. Thus, an SPJ query is represented as:
q=πa1, . . . ,am(σp1, . . . ,pj(R1× . . . ×Rn))
where ai are attributes of R1× . . . ×Rn, and pi are predicates over R1× . . . ×Rn (e.g., R1.a≦25, or R1.x=R2.y).
Each set of predicates {pi} that is applied to R1× . . . ×Rn results in the subset of tuples that simultaneously verify all pi. Using bag semantics, projections do not change the size of the output, and therefore projections are omitted from consideration when estimating cardinalities. To estimate the size of the output, or its cardinality, the fraction of tuples in R1× . . . ×Rn that simultaneously verify all predicates pi (i.e., the selectivity of all pi) is approximated, and then this fraction is multiplied by |R1× . . . ×Rn|, which can be obtained by simple lookups over the system catalogs. The use of selectivities to obtain cardinalities results in simpler derivations. The classical definition of selectivity is extended as follows:
Let R={R1, . . . ,Rn} be a set of tables, and P={p1, . . . ,pj}, Q={q1, . . . ,qk} be sets of predicates over Rx=R1× . . . ×Rn. The selectivity of P with respect to σq1, . . . ,qk(Rx), denoted SelR(P|Q), is defined as the fraction of tuples in σq1, . . . ,qk(Rx) that simultaneously verify all predicates in P. Therefore, SelR(P|Q)=|σp1, . . . ,pj,q1, . . . ,qk(Rx)| / |σq1, . . . ,qk(Rx)|.
If Q=Ø, this reduces to Sel(P), which agrees with the traditional definition of selectivity.
In this description, tables(P) denotes the set of tables referenced by a set of predicates P, and attr(P) denotes the set of attributes mentioned in P. To simplify the notation, “P,Q” denotes “P∪Q” and “p,Q” denotes “{p}∪Q”, where p is a predicate and P and Q are sets of predicates. For example, given the following query:
SELECT * FROM R, S, T
WHERE R.x=S.y AND S.a<10 AND T.b>5
the selectivity of q, Sel{R,S,T}(R.x=S.y, S.a<10, T.b>5), is the fraction of tuples in R×S×T that verify all predicates. Additionally, tables(R.x=S.y, S.a<10)={R,S}, and attr(R.x=S.y, S.a<10)={R.x, S.y, S.a}.
In general the task is to estimate Sel(p1, . . . ,pk) for a given query σp1 . . . pk (Rx). Two properties, atomic decomposition and separable decomposition, are verified by conditional selectivity values and allow a given selectivity to be expressed in many equivalent ways. Proofs of the properties are omitted.
Atomic decomposition is based on the notion of conditional probability and unfolds a selectivity value as the product of two related selectivity values:
Sel(P,Q)=Sel(P|Q)·Sel(Q)
The property of atomic decomposition holds for arbitrary sets of predicates and tables, without relying on any assumption, such as independence. By repeatedly applying atomic decompositions over an initial selectivity value S, a very large number of alternative rewritings for S can be obtained, which are called decompositions. The number of different decompositions of Sel(p1, . . . ,pn), denoted by T(n), is bounded as follows: 0.5·(n+1)!≦T(n)≦1.5^n·n! for n≧1.
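For illustration only (this example is not part of the original disclosure, and the representation of predicates is hypothetical), the following Python sketch enumerates the decompositions produced by repeatedly applying atomic decompositions and confirms how quickly their number grows:

```python
from itertools import combinations

def decompositions(preds):
    """Enumerate the decompositions of Sel(preds) obtained by repeatedly applying
    the atomic decomposition Sel(P,Q) = Sel(P|Q) * Sel(Q).  Each decomposition is
    returned as a list of factors (P, Q), read as Sel(P|Q)."""
    preds = frozenset(preds)
    result = [[(preds, frozenset())]]            # the trivial factor Sel(preds | {})
    for k in range(1, len(preds)):               # choose a non-empty proper subset Q
        for q in combinations(preds, k):
            q = frozenset(q)
            p = preds - q
            for rest in decompositions(q):       # recursively decompose Sel(Q)
                result.append([(p, q)] + rest)
    return result

# T(n) grows at least as fast as (n+1)!/2; for n = 4 predicates there are already
# 75 distinct decompositions, consistent with the stated bounds.
print(len(decompositions({"p1", "p2", "p3", "p4"})))
```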
In the presence of exact selectivity information, each possible decomposition of Sel(P) results in the same selectivity value (since each decomposition is obtained through a series of equalities). In reality, exact information may not be available. Instead, a set of SITs is maintained and used to approximate selectivity values. In such cases, depending on the available SITs, some decompositions might be more accurate than others. To determine which decompositions are more accurate, a measure of how accurately Sel(P) can be approximated using the current set of available SITs is assigned to each decomposition of Sel(P). Then approximating Sel(P) can be treated as an optimization problem in which the "most accurate" decomposition of Sel(P) for the given set of available SITs is sought.
A naïve approach to this problem would explore exhaustively all possible decompositions of Sel(P), estimate the accuracy of each decomposition and return the most accurate one. To improve on this approach, the notion of separability is used. Separability is a syntactic property of conditional selectivity values that can substantially reduce the space of decompositions without missing any useful one. It is said that Sel(P|Q) is separable (with Q possibly empty) if non-empty sets X1 and X2 can be found such that P∪Q=X1∪X2 and tables(X1)∩tables(X2)=Ø. In that case, X1 and X2 are said to separate Sel(P|Q). For example, given P={T.b=5, S.a<10}, Q={R.x=S.y}, and S=Sel{R,S,T}(P|Q), X1={T.b=5} and X2={R.x=S.y, S.a<10} separate S. This is because tables(X1)={T} and tables(X2)={R,S}. If S.y=T.z were added to Q, the resulting selectivity expression is no longer separable.
Intuitively, Sel(P|Q) is separable if σP∧Q(Rx) combines some tables by using Cartesian products. It is important to note, however, that even if the original query does not use any Cartesian product, after applying atomic decompositions some of its factors can become separable. The property of separable decomposition, which is applicable where the independence assumption is guaranteed to hold, follows:
Given that X1 and X2 separate Sel(P|Q), let {P1,P2} and {Q1,Q2} be the partitions of P and Q, respectively, such that X1=P1∪Q1 and X2=P2∪Q2, and let R1=tables(X1) and R2=tables(X2). Then SelR(P|Q) can be separated into SelR1(P1|Q1)·SelR2(P2|Q2). For example, X1={T.b=5} and X2={R.x=S.y, S.a<10} separate S=Sel{R,S,T}(T.b=5, S.a<10|R.x=S.y), which yields S=Sel{T}(T.b=5)·Sel{R,S}(S.a<10|R.x=S.y).
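A minimal sketch of the separability test implied by this property (hypothetical code; predicates are modeled simply as name/table-set pairs): Sel(P|Q) is separable exactly when the predicates of P∪Q fall into more than one connected component once predicates that share a table are linked.

```python
def separate(preds):
    """Partition predicates into connected components, where two predicates are
    connected if they reference a common table.  A predicate is modeled as a
    (name, frozenset_of_tables) pair; Sel(P|Q) is separable exactly when the
    predicates of P ∪ Q fall into more than one component."""
    components = []
    for pred in preds:
        touching = [c for c in components if pred[1] & c[1]]
        merged_preds, merged_tabs = [pred], set(pred[1])
        for c in touching:
            merged_preds += c[0]
            merged_tabs |= c[1]
            components.remove(c)
        components.append((merged_preds, merged_tabs))
    return [frozenset(c[0]) for c in components]

# Example from the text: P = {T.b=5, S.a<10}, Q = {R.x=S.y}.
preds = {("T.b=5", frozenset({"T"})),
         ("S.a<10", frozenset({"S"})),
         ("R.x=S.y", frozenset({"R", "S"}))}
parts = separate(preds)
print(len(parts) > 1)                                  # True: the expression is separable
print([sorted(name for name, _ in part) for part in parts])
```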
Using the separable decomposition property, it can be assumed that if H is a statistic that approximates Sel(P|Q) and Sel(P|Q) is separable as Sel1(P1|Q1)·Sel2(P2|Q2), then there are two statistics H1 and H2 that approximate Sel1(P1|Q1) and Sel2(P2|Q2) such that: 1) H1 and H2 combined require at most as much space as H does, and 2) the approximation using H1 and H2 is as accurate as that of H. For example, Sel{R,S}(R.a<10, S.b>20) is separable as Sel{R}(R.a<10)·Sel{S}(S.b>20). In this situation, using two uni-dimensional histograms H(R.a) and H(S.b) to estimate each factor and then multiplying the resulting selectivity values assuming independence (which is correct in this case) will be at least as accurate as directly using a two-dimensional histogram H(R.a, S.b) built on R×S. In fact, the independence assumption holds in this case, so the joint distribution over {R.a, S.b} can be estimated correctly from uni-dimensional distributions over R.a and S.b. For that reason, statistics that directly approximate separable factors of decompositions need not be maintained, since such statistics can be replaced by more accurate and space-efficient ones. Therefore, all decompositions D=S1· . . . ·Sn for which some Si is separable can be discarded without missing the most accurate decompositions.
The separable decomposition property and the above assumption can substantially reduce the search space, since consideration of large subsets of decompositions can be avoided. However, in many cases the search space is still very large. To make the optimization problem manageable, some restrictions can be imposed on the way the accuracy of a decomposition is measured. A dynamic-programming algorithm can then return the most accurate decomposition for a given selectivity value, provided that the function that measures the accuracy of the decompositions is both monotonic and algebraic.
The error of a decomposition, which measures the accuracy of the available set of statistics approximating the decomposition, must verify two properties, monotonicity and algebraic aggregation. Suppose S=Sel(p1, . . . ,pn) is a selectivity value and D=S1· . . . ·Sk is a non-separable decomposition of S such that Si=Sel(Pi|Qi). If statistic Hi is used to approximate Si, then error(Hi,Si) is the level of accuracy of Hi approximating Si. The value error(Hi,Si) is a positive real number, where smaller values represent better accuracy. The estimated overall error for D=S1· . . . ·Sk is given by an aggregate function E(e1, . . . ,ek), where ei=error(Hi,Si).
E is monotonic if every time that ei≦e′i for all i, E(e1, . . . ,en)≦E(e′1, . . . ,e′n). Monotonicity is a reasonable property for aggregate functions representing overall accuracy: if each individual error e′i is at least as high as error ei, then the overall E(e′1, . . . ,e′n) would be expected to be at least as high as E(e1, . . . ,en).
F is distributive if there is a function G such that F(x1, . . . ,xn)=G(F(x1, . . . ,xi), F(xi+1, . . . ,xn)). Two examples of distributive aggregates are max (with G=max) and count (with G=sum). In general, E is algebraic if there are distributive functions F1, . . . ,Fm and a function H such that E(x1, . . . ,xn)=H(F1(x1, . . . ,xn), . . . ,Fm(x1, . . . ,xn)). For example, avg is algebraic with F1=sum, F2=count, and H(x,y)=x/y. For simplicity, for an algebraic E, Emerge(E(x1, . . . ,xi), E(xi+1, . . . ,xn)) is defined as E(x1, . . . ,xn). Therefore, avgmerge(avg(1,2), avg(3,4))=avg(1,2,3,4).
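As a small illustration (hypothetical helper names, not from the original disclosure), an algebraic aggregate such as avg can be carried between recursive calls as a fixed-size (sum, count) pair and merged without revisiting the individual error values:

```python
def avg_partial(errors):
    """Summarize a list of error values as the fixed-size pair (sum, count)."""
    return (sum(errors), len(errors))

def avg_merge(partial1, partial2):
    """Merge two partial summaries; corresponds to Emerge in the text."""
    return (partial1[0] + partial2[0], partial1[1] + partial2[1])

def avg_final(partial):
    """H(F1, F2) = sum / count."""
    total, count = partial
    return total / count

left = avg_partial([1, 2])                   # avg(1, 2) summarized as (3, 2)
right = avg_partial([3, 4])                  # avg(3, 4) summarized as (7, 2)
print(avg_final(avg_merge(left, right)))     # 2.5 == avg(1, 2, 3, 4)
```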
Monotonicity imposes the principle of optimality for error values, and allows a dynamic programming strategy to find the optimal decomposition of Sel(P). The principle of optimality states that the components of a globally optimal solution are themselves optimal. Therefore the most accurate decomposition of Sel(P) can be found by trying all atomic decompositions Sel(P|Q)=Sel(P′|Q)·Sel(Q), recursively obtaining the optimal decomposition of Sel(Q), and combining the partial results. In turn, the key property of an algebraic aggregate E is that a small fixed-size vector can summarize sub-aggregations and therefore the amount of information needed to carry over between recursive calls to calculate error values can be bounded.
Building on the decomposition and error principles discussed above,
In step 410, the algorithm considers an input predicate set P over a set of tables R. The algorithm first checks to see if Sel(P) has already been stored in a memoization table indicated as 490. If the value is stored, the algorithm returns that value and the process ends. If the value has not yet been stored, the algorithm determines if the input selectivity value Sel(P) is separable and, if so, separates Sel(P) into i factors (step 420). For each factor, getSelectivity is recursively called (step 460) and the optimal decomposition is obtained for each factor. Then, partial results and errors are combined in steps 470 and 475 and returned. Otherwise, Sel(P) is not separable and it is passed to steps 430 and 440, where all atomic decompositions Sel(P′|Q)·Sel(Q) are tried. For each alternative decomposition, Sel(Q) is recursively passed to getSelectivity (step 460). Additionally, in step 450 Sel(P′|Q) is approximated using the best available SITs among the set of available statistics 455. If no single statistic is available in step 450, the error for Sel(P′|Q) is set to ∞ and another atomic decomposition of the factor is considered. After all atomic decompositions are explored in steps 440 and 450, the most accurate estimation for Sel(P) (493) and its associated error are calculated in steps 470 and 475 and returned (and stored in the table 490). As a byproduct of getSelectivity, the most accurate selectivity estimation for every sub-query σP′(χ) with P′⊆P is obtained. It can be shown that getSelectivity(R,P) returns the most accurate approximation of Sel(P), for a given definition of error, among all non-separable decompositions. A pseudo code implementation of getSelectivity follows:
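The original pseudo code listing is not reproduced here. The following Python sketch captures the algorithm as just described, under stated assumptions: predicates are modeled as name/table-set pairs, best_stat is a hypothetical callback standing in for step 450, and errors are combined with a simple monotonic max; it is an illustration, not the patented implementation.

```python
import itertools

INF = float("inf")

def separate(preds):
    """Partition a set of predicates into connected components, where two
    predicates are connected if they reference a common table.  A predicate is
    modeled as a (name, frozenset_of_tables) pair; more than one component
    means the corresponding selectivity expression is separable."""
    components = []
    for pred in preds:
        touching = [c for c in components if pred[1] & c[1]]
        merged_preds, merged_tabs = [pred], set(pred[1])
        for c in touching:
            merged_preds += c[0]
            merged_tabs |= c[1]
            components.remove(c)
        components.append((merged_preds, merged_tabs))
    return [frozenset(c[0]) for c in components]

def get_selectivity(preds, best_stat, memo=None):
    """Hedged sketch of getSelectivity: returns (selectivity, error) for Sel(preds).
    best_stat(P, Q) is an assumed callback returning (estimate, error) for the
    best available SIT approximating Sel(P|Q), falling back to base-table
    statistics under independence."""
    if memo is None:
        memo = {}
    key = frozenset(preds)
    if key in memo:                                        # step 410: memoization lookup
        return memo[key]
    if not key:
        memo[key] = (1.0, 0.0)
        return memo[key]

    factors = separate(key)                                # step 420: separability check
    if len(factors) > 1:
        sel, err = 1.0, 0.0
        for factor in factors:                             # step 460: recurse on each factor
            s, e = get_selectivity(factor, best_stat, memo)
            sel, err = sel * s, max(err, e)                 # steps 470/475: combine factors
        memo[key] = (sel, err)
        return memo[key]

    candidates = []                                         # steps 430/440: atomic decompositions
    for k in range(1, len(key) + 1):
        for p_prime in itertools.combinations(key, k):
            p_prime = frozenset(p_prime)
            q = key - p_prime
            cond_sel, cond_err = best_stat(p_prime, q)      # step 450: best SIT for Sel(P'|Q)
            q_sel, q_err = get_selectivity(q, best_stat, memo)
            candidates.append((cond_sel * q_sel, max(cond_err, q_err)))
    memo[key] = min(candidates, key=lambda c: c[1])         # keep the most accurate result (493)
    return memo[key]

# Hypothetical usage: a constant best_stat so the sketch runs end to end.
preds = {("R.a<10", frozenset({"R"})), ("R.x=S.y", frozenset({"R", "S"}))}
print(get_selectivity(preds, lambda P, Q: (0.1, float(len(Q)))))
```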
The worst-case complexity of getSelectivity is O(3^n), where n is the number of input predicates. In fact, the number of different invocations of getSelectivity is at most 2^n, one for each subset of P. Due to memoization, only the first invocation of getSelectivity for each subset of P actually produces some work (the others are simple lookups). The running time of getSelectivity for k input predicates (not counting recursive calls) is O(k^2) for separable factors and O(2^k) for non-separable factors.
Therefore the complexity of getSelectivity is Σk=0..n C(n,k)·O(2^k), or O(3^n). In turn, the space complexity of getSelectivity is O(2^n), needed to store in the memoization table selectivity and error values for Sel(P′) with P′⊆P.
The worst-case complexity of getSelectivity, O(3^n), can be contrasted with the lower bound on the number of possible decompositions of a predicate, Ω((n+1)!). Since (n+1)!/3^n is Ω(2^n), by using monotonic error functions the number of decompositions that are explored is decreased exponentially without missing the most accurate one. If many subsets of P are separable, the complexity of getSelectivity is further reduced, since smaller problems are solved independently. For instance, if Sel(P)=Sel1(P1)·Sel2(P2), where |P1|=k1 and |P2|=k2, the worst-case running time of getSelectivity is O(3^k1+3^k2), which is much smaller than O(3^(k1+k2)).
In step 450, getSelectivity obtains the statistic H to approximate Sel(P|Q) that minimizes error(H, Sel(P|Q)). This procedure consists of 1) obtaining the set of candidate statistics that can approximate Sel(P|Q), and 2) selecting from the candidate set the statistic H that minimizes error(H, Sel(P|Q)).
In general, a statistic consists of a set of SITs. For simplicity the notation is modified to represent SITs as follows. Given query expression q=σp1, . . . ,pk(χ), SIT(a1, . . . ,aj|p1, . . . ,pk) will be used instead of SIT(a1, . . . ,aj|q). That is, the set of predicates of q over χ is enumerated, which agrees with the notation for selectivity values. It should be noted that for the purposes of this discussion SITs will be described as histograms, but the general ideas can be applied to other statistical estimators as well. Therefore H(a1, . . . ,aj|p1, . . . ,pk) is a multidimensional histogram over attributes a1, . . . ,aj built on the result of executing σp1, . . . ,pk(χ). As a special case, if there are no predicates pi, H(a1, . . . ,aj|) is written, which is a traditional base-table histogram. The notion of predicate independence is used to define the set of candidate statistics to consider for approximating a given Sel(P|Q).
Given sets of predicates P1, P2, and Q, it is said that P1 and P2 are independent with respect to Q if the following equality holds: SelR(P1,P2|Q)=SelR1(P1|Q)·SelR2(P2|Q), where R1=tables(P1,Q) and R2=tables(P2,Q). If P1 and P2 are independent with respect to Q, then Sel(P1|P2,Q)=Sel(P1|Q) holds as well. If there is no available statistic approximating Sel(P|Q), but there is an available statistic approximating Sel(P|Q′) with Q′⊆Q, independence between P and Q−Q′ is assumed with respect to Q′, and the available statistic is used to approximate Sel(P|Q). This idea is used to define a candidate set of statistics to approximate Sel(P|Q).
Given that S=Sel(P|Q), where P is a set of filter predicates such as {R.a<5, S.b>8}, the candidate statistics to approximate S are all H(A|Q′) that simultaneously verify the following three properties. 1) attr(P)⊆A (the SIT can estimate the predicates). 2) Q′⊆Q (assuming independence between P and Q−Q′). In a traditional optimizer, Q′=Ø, so P and Q are always assumed independent. 3) Q′ is maximal, i.e., there is no H(A|Q″) available such that Q′⊂Q″⊆Q.
In principle, the set of candidate statistics could be defined in a more flexible way, e.g., including statistics of the form H(A|Q′) where Q′ subsumes Q. The candidate sets of statistics are restricted as described above to provide a good tradeoff between the efficiency of identifying them and the quality of the resulting approximations. For example, given S=Sel(R.a<5|p1,p2) and the available statistics HR(R.a|p1), HR(R.a|p2), HR(R.a|p1,p2,p3), and HR(R.a), the set of candidate statistics for S includes {HR(R.a|p1)} and {HR(R.a|p2)}. HR(R.a) does not qualify since its query expression is not maximal, and HR(R.a|p1,p2,p3) does not qualify since it contains the extra predicate p3.
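A minimal sketch of these candidate-selection rules (hypothetical representation, with a SIT modeled as an attribute set paired with its generating-predicate set):

```python
def candidate_sits(p_attrs, Q, sits):
    """Return the SITs H(A|Q') that may approximate Sel(P|Q): A must cover
    attr(P), Q' must be a subset of Q, and Q' must be maximal among the
    available SITs that satisfy the first two conditions."""
    usable = [(A, Qp) for (A, Qp) in sits
              if p_attrs <= A and Qp <= Q]                    # properties 1 and 2
    return [(A, Qp) for (A, Qp) in usable
            if not any(Qp < other for (_, other) in usable)]  # property 3: maximality

# Example from the text: S = Sel(R.a<5 | p1, p2).
sits = [
    (frozenset({"R.a"}), frozenset({"p1"})),
    (frozenset({"R.a"}), frozenset({"p2"})),
    (frozenset({"R.a"}), frozenset({"p1", "p2", "p3"})),  # extra predicate p3: rejected
    (frozenset({"R.a"}), frozenset()),                    # base histogram: not maximal
]
for A, Qp in candidate_sits(frozenset({"R.a"}), frozenset({"p1", "p2"}), sits):
    print(sorted(A), sorted(Qp))       # the two candidates H(R.a|p1) and H(R.a|p2)
```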
In many cases a predicate set P is composed of both filter and join predicates, e.g., P={T.a<10, R.x=S.y, S.b>5}. To find Sel(P|Q) in this case, several observations about histograms are used. If H1=HR(x,X|Q) and H2=HR(y,Y|Q) are both SITs, the join H1⋈x=yH2 returns not only the value Sel(x=y|Q) for the join, but also a new histogram Hj=HR(x,X,Y|x=y,Q). Therefore Hj can be used to estimate the remaining predicates involving attributes x(=y), X, and Y. As an example, to find Sel(R.a<5, R.x=S.y|Q) given histograms H1=HR1(R.x,R.a|Q) and H2=HR2(S.y|Q), the join H1⋈R.x=S.yH2 returns the scalar selectivity value s1=Sel(R.x=S.y|Q) and also H3=H(R.a|R.x=S.y,Q). The selectivity Sel(R.x=S.y, R.a<5|Q) is then conceptually obtained by the following atomic decomposition: s1·s2=Sel(R.a<5|R.x=S.y,Q)·Sel(R.x=S.y|Q), where s2 is estimated using H3.
As the example shows, Sel(P|Q) can be approximated by getting SITs covering all attributes in P, joining the SITs, and estimating the remaining range predicates in P. In general, the set of candidate statistics to approximate Sel(P|Q) is conceptually obtained as follows: 1) All join predicates in P are transformed to pairs of wildcard selection predicates, yielding P′. For instance, predicate R.x=S.y is replaced by the pair (R.x=?, S.y=?), and therefore Sel(R.x=S.y, T.a<10, S.b>5|Q) results in Sel(R.x=?, S.y=?, T.a<10, S.b>5|Q). 2) Because the join predicates in P were replaced with filter predicates in P′, the resulting selectivity value becomes separable. Applying the separable decomposition property yields Sel1(P′1|Q1)· . . . ·Selk(P′k|Qk), where no Seli(P′i|Qi) is separable. 3) Each Seli(P′i|Qi) contains only filter predicates in P′i, so each candidate set of statistics can be found independently. In order to approximate the original selectivity value with the candidate sets of statistics obtained in this way, all Hi are joined by the attributes mentioned in the wildcard predicates and the actual range predicates are estimated as in the previous example.
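A small sketch of step 1 above, assuming a hypothetical tuple encoding of predicates; it merely splits each join predicate into its pair of wildcard selections so that the resulting expression can be separated into per-table-group factors:

```python
def wildcard_transform(preds):
    """Step 1 above (hypothetical encoding): replace every join predicate
    t1.x = t2.y by the pair of wildcard selections (t1.x = ?, t2.y = ?), so that
    the resulting expression becomes separable into per-table-group factors
    whose candidate SITs can be found independently."""
    out = []
    for pred in preds:
        if pred[0] == "join":                       # ("join", "R.x", "S.y")
            out.append(("wildcard", pred[1]))       # R.x = ?
            out.append(("wildcard", pred[2]))       # S.y = ?
        else:                                       # ("filter", "T.a<10")
            out.append(pred)
    return out

preds = [("join", "R.x", "S.y"), ("filter", "T.a<10"), ("filter", "S.b>5")]
print(wildcard_transform(preds))
# [('wildcard', 'R.x'), ('wildcard', 'S.y'), ('filter', 'T.a<10'), ('filter', 'S.b>5')]
```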
Once the set of candidate statistics to approximate a given Sel(P|Q) is obtained, the one that is expected to result in the most accurate estimation for Sel(P|Q) must be selected, i.e., the statistic H that minimizes the value of error(H, Sel(P|Q)).
In getSelectivity, error(H, S) returns the estimated level of accuracy of approximating selectivity S using statistic H. There are two requirements for the implementation of error(H, S). First, it must be efficient, since error(H, S) is called in the inner loop of getSelectivity. Very accurate but inefficient error functions are not useful, since the overall optimization time would increase and exploiting SITs would therefore become a less attractive alternative. For instance, this requirement bans a technique that looks at the actual data tuples to obtain exact error values.
The second requirement concerns the availability of information to calculate error values. At first sight, it is tempting to reformulate error as a meta-estimation technique. Then, in order to estimate the error between two data distributions (actual selectivity values vs. SIT-approximated selectivity values), additional statistics, or meta-statistics, could be maintained over the difference of such distributions. Estimating error(H, S) would then be equivalent to approximating range queries over these meta-statistics. However, this approach is flawed, since if such meta-statistics existed, they could be combined with the original statistic to obtain more accurate results in the first place. As an example, consider H=HR(R.a|p1) used to approximate S=Sel(R.a<10|p1,p2). If a meta-statistic were available to estimate the values error(H, Sel(c1≦R.a≦c2|p1,p2)), it could be combined with H to obtain a new statistic that directly approximates Sel(R.a<10|p1,p2).
Therefore error values must be estimated using efficient and coarse mechanisms. Existing information such as system catalogs or characteristics of the input query can be used but not additional information created specifically for such purpose.
Application Ser. No. 10/191,822, now U.S. Pat. No. 6,947,927, introduced an error function, nInd, that is simple and intuitive, and uses the fact that the independence assumption is the main source of errors during selectivity estimation. The overall error of a decomposition D=Sel1(P1|Q1)· . . . ·Seln(Pn|Qn), when approximated, respectively, using H1(A1|Q′1), . . . ,Hn(An|Q′n) (Q′i⊆Qi), is defined as the total number of predicate independence assumptions made during the approximation, normalized by the maximum number of independence assumptions in the decomposition (to get a value between 0 and 1). In symbols, this error function is: nInd=(Σi |Pi|·|Qi−Q′i|)/(Σi |Pi|·|Qi|).
Each term in the numerator represents the fact that Pi and Qi−Q′i are assumed independent with respect to Q′i, and therefore the number of predicate independence assumptions is |Pi|·|Qi−Q′i|. In turn, each term in the denominator represents the maximum number of independence assumptions, which occurs when Q′i=Ø, i.e., |Pi|·|Qi|. As a very simple example, consider S=SelR(R.a<10, R.b>50) and the decomposition D=SelR(R.a<10|R.b>50)·SelR(R.b>50). If base-table histograms H(R.a) and H(R.b) are used, the error using nInd is (1·1)/(1·1)=1, i.e., one out of one independence assumption (between R.a<10 and R.b>50). nInd is clearly a syntactic definition that can be computed very efficiently.
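A minimal sketch of nInd (hypothetical representation; each factor is given by its predicate sets P, Q and the set Q′ over which the chosen SIT was built):

```python
def n_ind(factors):
    """nInd error of a decomposition.  Each factor is a (P, Q, Q_prime) triple of
    predicate sets, where the SIT chosen for Sel(P|Q) was built over Q_prime.
    The result is the number of independence assumptions actually made,
    normalized by the maximum possible number (0 = none, 1 = all)."""
    made = sum(len(P) * len(Q - Qp) for P, Q, Qp in factors)
    maximum = sum(len(P) * len(Q) for P, Q, Qp in factors)
    return made / maximum if maximum else 0.0

# Example from the text: D = Sel(R.a<10 | R.b>50) * Sel(R.b>50), estimated with
# base-table histograms H(R.a) and H(R.b), i.e. Q' = {} for the first factor.
factors = [
    ({"R.a<10"}, {"R.b>50"}, set()),   # Sel(R.a<10 | R.b>50) approximated by H(R.a)
    ({"R.b>50"}, set(), set()),        # Sel(R.b>50) approximated by H(R.b)
]
print(n_ind(factors))                  # 1.0: one out of one independence assumption
```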
While nInd is a very simple metric, often many alternative SITs are given the same score, and nInd needs to break ties arbitrarily. This behavior is problematic when there are two or more available SITs to approximate a selectivity value, and while they all result in the same “syntactic” nInd score, the actual benefit of using each of them is drastically different, as illustrated in the following example.
Consider R⋈R.s=S.s(σS.a<10S)⋈S.t=T.tT, with both joins being foreign-key joins, and the following factor of a decomposition: S1=Sel{R,S,T}(S.a<10|R⋈S, S⋈T). If the only candidate SITs to approximate S1 are H1=H{R,S}(S.a|R⋈S) and H2=H{S,T}(S.a|S⋈T), then using the error function nInd each statistic would have a score of ½, meaning that in general each alternative will be chosen at random 50% of the time. However, in this particular case H1 will always be more helpful than H2. In fact, since S⋈S.t=T.tT is a foreign-key join, the distribution of attribute S.a over the result of S⋈S.t=T.tT is exactly the same as the distribution of S.a over base table S. Therefore, S⋈S.t=T.tT is actually independent of S.a<10, and H2 provides no benefit over the base histogram H(S.a).
An alternative error function, Diff, is defined as follows. A single value, diffH, between 0 and 1 is assigned to each available SIT H=H(R.a|Q). In particular, diffH=0 when the distribution of R.a on the base table R is exactly the same as that on the result of executing query expression Q. On the other hand, diffH=1 when such distributions are very different (note that in general there are multiple possible distributions for which diffH=1, but only one for which diffH=0). Using diff values, the Diff error function generalizes nInd by providing a less syntactic notion of independence. In particular, the overall error value of a decomposition D=Sel1(P1|Q1)· . . . ·Seln(Pn|Qn), when approximated using H1, . . . ,Hn, respectively, is given by: Diff=(Σi |Pi|·|Qi|·(1−diffHi))/(Σi |Pi|·|Qi|).
The intuition behind the expression above is that the value |Qi|·(1−diffHi) in the numerator represents a "semantic" number of independence assumptions when approximating Si with Hi, and replaces the syntactic value |Qi−Q′i| of nInd. In fact, in the previous example diffH2=0, and H2 effectively contributes the same as a base-table histogram H(S.a), so in that case the error function is 1 (the maximum possible value). In contrast, for H1=H(S.a|R⋈S), the more different the distributions of S.a on S and on the result of executing R⋈S, the more likely that H1 encodes the dependencies between S.a and {R⋈S, S⋈T}, and therefore the lower the overall error value.
For H=H(R.a|Q), with R′ denoting σQ(R) (the result of evaluating Q over R), diffH is defined as the normalized squared deviation between the frequencies f(R,x) and f(R′,x) of each value x in R and R′, respectively; that is, diffH measures the squared deviation of frequencies between the base-table distribution and the distribution on the result of executing H's generating query expression. It can be shown that 0≦diffH≦1, and that diffH verifies the properties stated above. Values of diff are calculated just once and are stored with each histogram, so there is no overhead at runtime.
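The exact normalization used for diffH is not reproduced here. The sketch below uses one plausible choice, half the sum of squared differences between normalized frequency distributions, which stays within [0,1] and is 0 exactly when the two distributions coincide; it is an assumption for illustration, not necessarily the patented formula.

```python
from collections import Counter

def diff_h(base_values, generated_values):
    """Sketch of a diff-style score: deviation between the normalized frequency
    distributions of an attribute on the base table and on the result of the
    SIT's generating query expression.  The 0.5 factor keeps the value in
    [0, 1]; the exact normalization of the original may differ."""
    base, gen = Counter(base_values), Counter(generated_values)
    n_base, n_gen = sum(base.values()), sum(gen.values())
    return 0.5 * sum((base[x] / n_base - gen[x] / n_gen) ** 2
                     for x in set(base) | set(gen))

# Identical distributions (e.g. S.a unchanged by a foreign-key join): diff = 0,
# so the SIT is no more useful than the base-table histogram.
print(diff_h([1, 1, 2, 3], [1, 1, 2, 3]))     # 0.0
# Completely different distributions: diff reaches 1.
print(diff_h([1, 1, 1, 1], [2, 2, 2, 2]))     # 1.0
```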
In essence, Diff is a heuristic ranking function and has some natural limitations. For instance, it uses a single number (diffH) to summarize the amount of divergence between distributions, and it does not take into account possible cancellation of errors among predicates. However, the additional information used by Diff makes it more robust and accurate than nInd with almost no overhead.
Referring again to the query of
Therefore, both available SITs are exploited simultaneously, producing a much more accurate cardinality estimate for the original query than any alternative produced by previous techniques.
Integration with an Optimizer
The algorithm getSelectivity can be integrated with rule-based optimizers. For q=σp1, . . . ,pk(χ), getSelectivity(R, {p1, . . . ,pk}) returns the most accurate selectivity estimation for both q and all its sub-queries, i.e., Sel(P) for all P⊆{p1, . . . ,pk}. A simple approach to incorporating getSelectivity into an existing rule-based optimizer is to execute getSelectivity before optimization starts, and then use the resulting memoization table to answer selectivity requests over arbitrary sub-queries. This approach follows a pattern similar to those used by prior art frameworks to enumerate candidate sub-plans, in which a first step exhaustively generates all possible equivalent expressions and then, in a second phase, the actual search and pruning is performed. It was later established that this separation is not useful, since only a fraction of the candidate sub-plans generated during exploration is actually considered during optimization. Instead, newer frameworks interleave an exploration-on-demand strategy with the search and pruning phase.
Cascades is a state-of-the-art rule-based optimization framework. During the optimization of an input query, a Cascades-based optimizer keeps track of many alternative sub-plans that could be used to evaluate the query. Sub-plans are grouped together into equivalence classes, and each equivalence class is stored as a separate node in a memoization table (also called a memo). Thus, each node in the memo contains a list of entries representing the logically equivalent alternatives explored so far. Each entry has the form [op, {input1, . . . ,inputn}, {parameter1, . . . ,parameterk}], where op is a logical operator, such as join, inputi is a pointer to some other node (another class of equivalent sub-queries), and parameterj is a parameter for operator op.
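For concreteness (hypothetical names and shapes, not taken from any actual Cascades implementation), a memo can be pictured as a list of nodes, each holding entries of the form described above:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Entry:
    """One logically equivalent alternative in a memo node, of the form
    [op, {input_1, ..., input_n}, {parameter_1, ..., parameter_k}]."""
    op: str                        # logical operator, e.g. "join" or "select"
    inputs: List[int]              # indices of other memo nodes (equivalence classes)
    parameters: List[str]          # parameters of op, e.g. the predicates it applies

@dataclass
class MemoNode:
    """Equivalence class of sub-plans explored so far; for the integration
    described below it also carries the best selectivity estimate found so far."""
    entries: List[Entry] = field(default_factory=list)
    selectivity: Optional[float] = None
    error: float = float("inf")

# Hypothetical memo fragment for a query joining R and S on R.x = S.y:
memo = [
    MemoNode([Entry("scan", [], ["R"])]),              # node 0: access R
    MemoNode([Entry("scan", [], ["S"])]),              # node 1: access S
    MemoNode([Entry("join", [0, 1], ["R.x=S.y"])]),    # node 2: R join S
]
print(len(memo[2].entries))    # 1 alternative explored so far for the join group
```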
During optimization, each node in the memo is populated by applying transformation rules to the set of explored alternatives. Rules consist of antecedent and consequent patterns, and optional applicability conditions. The application of a given transformation rule is a complex procedure that involves: (i) finding all bindings in the memo, (ii) evaluating rule preconditions, (iii) firing the rule, i.e., replacing the antecedent pattern with the consequent pattern, and (iv) integrating the resulting expression (if it is new) to the memo table. As a simple example, the first entry in the node at the top of
The algorithm getSelectivity can be integrated with a Cascades-based optimizer. If the optimizer restricts the set of available statistics, e.g., handles only uni-dimensional SITs, then getSelectivity can be implemented more efficiently without missing the most accurate decomposition. For uni-dimensional SITs, it can be shown that no atomic decomposition Sel(P)=Sel(P′|Q)·Sel(Q) with |P′|>1 will have a non-empty candidate set of statistics, and therefore such decompositions are not useful. In this case, line 10 in getSelectivity can be changed to:
for each P′⊂P, Q=P−P′ such that |P′|≦1 do
without missing any decomposition. Using this optimization, the complexity of getSelectivity is reduced from O(3^n) to O(n·2^n), and the most accurate selectivity estimations will still be returned. As a side note, this is the same reduction in complexity as is obtained when considering only linear join trees during optimization as opposed to bushy join trees.
The search space of decompositions can be further pruned, so that getSelectivity can be integrated with a Cascades-based optimizer by coupling its execution with the optimizer's own search strategy. This pruning technique is then guided by the optimizer's own heuristics, and therefore might prevent getSelectivity from finding the most accurate estimation for some selectivity values. However, the advantage is that the overhead imposed on an existing optimizer is very small and the overall increase in quality can be substantial.
As explained, for an input SPJ query q=σp1, . . . ,pk(χ), each node in the memoization table of a Cascades-based optimizer groups all alternative representations of a sub-query of q. Therefore the estimated selectivity of the sub-query represented by a node n, i.e., Sel(P) for some P⊆{p1, . . . ,pk}, can be associated with each node n in the memo. Each entry in n can be associated with a particular decomposition of the sub-query represented by n.
The node at the top of
Each entry ε in a memo node n divides the set of predicates P represented by n into two groups: (i) the parameters of ε, denoted Pε, and (ii) the predicates in the set of inputs to ε, denoted Q=P−Pε. The entry ε is then associated with the decomposition Sel(P)=Sel(Pε|Q)·Sel(Q), where Sel(Q) is separable into Sel1(Q1)· . . . ·Selk(Qk) and each Seli(Qi) is associated with the i-th input of ε.
In summary, the set of decompositions considered in line 10 of getSelectivity is restricted to exactly those induced by the optimization search strategy. Each time a transformation rule is applied that results in a new entry ε in the node associated with Sel(P), the decomposition D=Sel(Pε|Q)·Sel(Q) is obtained. If D has the smallest error found so far for the current node, Sel(P) is updated using the new approximation. Therefore, the overhead imposed on a traditional Cascades-based optimizer by incorporating getSelectivity results from obtaining, for each new entry ε, the most accurate statistic that approximates Sel(Pε|Q).
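A hedged sketch of this per-entry bookkeeping (hypothetical data shapes; best_stat again stands in for the lookup of the most accurate applicable SIT): whenever a transformation rule produces a new entry, the induced decomposition is scored and the node's estimate is kept only if its error improves on the best found so far.

```python
INF = float("inf")

def on_new_entry(node, entry_preds, input_nodes, best_stat):
    """Called when a transformation rule adds a new entry to the memo node for
    Sel(P).  The entry induces the decomposition
        Sel(P) = Sel(P_entry | Q) * Sel(Q1) * ... * Sel(Qk),
    where Q is the union of the predicates handled by the entry's inputs.
    Nodes are dicts {"sel", "err", "preds"}; best_stat(P, Q) is an assumed
    lookup returning (estimate, error) for the best SIT approximating Sel(P|Q)."""
    q = set().union(*(inp["preds"] for inp in input_nodes))
    cond_sel, cond_err = best_stat(entry_preds, q)
    sel, err = cond_sel, cond_err
    for inp in input_nodes:                      # factors Sel(Qi) from the entry's inputs
        sel *= inp["sel"]
        err = max(err, inp["err"])               # simple monotonic error combination
    if err < node["err"]:                        # keep only the most accurate decomposition
        node["sel"], node["err"] = sel, err

r_node = {"sel": 0.3, "err": 0.0, "preds": {"R.a<10"}}
s_node = {"sel": 1.0, "err": 0.0, "preds": set()}
top = {"sel": None, "err": INF, "preds": {"R.a<10", "R.x=S.y"}}
on_new_entry(top, {"R.x=S.y"}, [r_node, s_node], lambda P, Q: (0.05, 0.5))
print(top)    # the join entry's decomposition becomes the node's best estimate so far
```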
So far in this description all input queries have been conjunctive SPJ queries. Disjunctive SPJ queries can also be handled by the discussed techniques. For that purpose, the identity σp1∨p2(χ)=χ−σ¬p1∧¬p2(χ) is used, and the disjunctive query is translated, using a De Morgan transformation, to selectivity values as Sel(p1∨p2)=1−Sel(¬p1,¬p2). The algorithm then proceeds as before, with the equality above used whenever applicable. For example, Sel{R,S,T}(R.a<5∨(S.b>10∧T.c=5)) is rewritten as 1−Sel{R,S,T}(R.a≧5, (S.b≦10∨T.c≠5)). The second term is separable and is simplified to Sel{R}(R.a≧5)·Sel{S,T}(S.b≦10∨T.c≠5). The second factor can be transformed again to 1−Sel{S,T}(S.b>10, T.c=5), which is again separable, and so on.
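A toy illustration (not from the original disclosure) that checks the identity Sel(p1∨p2)=1−Sel(¬p1,¬p2) on a small Cartesian product:

```python
from itertools import product

# Toy Cartesian product R x S used to check Sel(p1 OR p2) = 1 - Sel(NOT p1, NOT p2).
R = [1, 4, 7]               # values of R.a
S = [2, 9, 12]              # values of S.b
rows = list(product(R, S))

def p1(r, s): return r < 5      # R.a < 5
def p2(r, s): return s > 10     # S.b > 10

sel_or = sum(1 for r, s in rows if p1(r, s) or p2(r, s)) / len(rows)
sel_neg = sum(1 for r, s in rows if not p1(r, s) and not p2(r, s)) / len(rows)
print(sel_or, 1 - sel_neg)  # both 7/9: the De Morgan rewriting preserves the selectivity
```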
The techniques discussed can also be extended to handle SPJ queries with Group-By clauses. In the following query:
Thus, to approximate selectivity values of SPJ queries with Group-By clauses, the selectivity values for SPJ queries must be estimated with set semantics, i.e., taking duplicate values into account. The definition of conditional selectivity can be extended to handle distinct values as described next. If P is a set of predicates and A is a set of attributes, tables(P|A) is defined as the set of tables referenced either in A or in P, and attr(P|A) is defined as the attributes either in A or referenced in P. Let R={R1, . . . ,Rn} be a set of tables, and P and Q be sets of predicates over χ=R1× . . . ×Rn. Let A and B be sets of attributes over R such that attr(P|A)⊆B. The definition of conditional selectivity is extended as:
SelR(P|A|Q|B)=|πA*(σP(πB*(σQ(χ))))| / |πB*(σQ(χ))|
where πA*() is a version of the projection operator that eliminates duplicate values.
The notation S=Sel(P|A|Q|B) is simplified, when possible, as follows. If B contains all attributes in R, |B is omitted from S. Similarly, if A=B, then |A is omitted from S. Finally, if B contains all attributes in R and Q=Ø, the selectivity is written as S=Sel(P|A). The value Sel(P|A) is then the number of distinct A values among the tuples in σP(χ) divided by |χ|. Therefore, for a generic SPJ query with a group-by clause, the quantity Sel(p1, . . . ,pj|a1, . . . ,ak) is to be estimated.
The atomic decomposition definition can be extended as follows. Let R be a set of tables, P a set of predicates over R, and A a set of attributes in R. Then:
Sel(P|A)=Sel(P1|A|P2|B)·Sel(P2|B), where P1 and P2 partition P and attr(P1|A)⊆B.
This generalized atomic decomposition can be integrated with a rule-based optimizer that implements coalescing grouping transformations for queries with group-by clauses. Coalescing grouping is an example of push-down transformations, which typically allow the optimizer to perform early aggregation. In general, such transformations increase the space of alternative execution plans that are considered during optimization. The coalescing grouping transformation shown in
For SPJ queries the atomic and separable decompositions can be used alone to cover all transformations in a rule-based optimizer. In general, the situation is more complex for queries with group-by clauses. The separable decomposition property can be extended similarly as for the atomic property. In some cases rule-based transformations require the operators to satisfy some semantic properties such as the invariant grouping transformation shown in
In the context of SITs as histograms, traditional histogram techniques can be exploited provided that they record not only the frequency but also the number of distinct values per bucket. Referring again to
SITs can be further extended to handle queries with complex filter conditions as well as queries with HAVING clauses. The following query asks for orders that were shipped no more than 5 days after they were placed: SELECT * FROM orders WHERE ship-date−place-date<5. A good cardinality estimation of the query cannot be obtained by just using uni-dimensional base-table histograms over columns ship-date and place-date. The reason is that single-column histograms fail to model the strong correlation that exists between the ship and place dates of any particular order. A multidimensional histogram over both ship-date and place-date might help in this case, but only marginally. In fact, most of the tuples in the two-dimensional space ship-date×place-date will be very close to the diagonal ship-date=place-date, because most orders are usually shipped a few days after they are placed. Therefore, most of the tuples in orders will be clustered in very small sub-regions of the rectangular histogram buckets. The uniformity assumption inside buckets would then be largely inaccurate and result in estimations that are much smaller than the actual cardinality values.
The scope of SIT(A|Q) can be extended to obtain better cardinality estimates for queries with complex filter expressions. Specifically, A is allowed to be a column expression over Q. A column expression over Q is a function that takes as inputs other columns accessible in the SELECT clause of Q and returns a scalar value. For instance, a SIT that can be used to accurately estimate the cardinality of the query above is H=SIT(diff-date|Q), where the generating query Q is defined as: SELECT ship-date−place-date AS diff-date FROM orders. In fact, each bucket in H with range [xL . . . xR] and frequency f specifies that f orders were shipped between xL and xR days after they were placed. Thus, the cardinality of the query above can be estimated accurately with a range query (−∞, . . . ,5] over H.
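A hedged end-to-end sketch of this idea (synthetic data and hypothetical bucket boundaries): a histogram built over the column expression ship-date−place-date answers the range query directly, which a pair of independent single-column histograms could not do.

```python
import bisect
import random

# Synthetic orders: each is (place_date, ship_date), shipped 0-10 days later.
random.seed(0)
orders = [(p, p + random.uniform(0, 10))
          for p in (random.uniform(0, 365) for _ in range(10_000))]

# SIT over the column expression diff_date = ship_date - place_date
# (hypothetical equi-width bucket boundaries).
diffs = [ship - place for place, ship in orders]
bounds = [2.5, 5.0, 7.5, 10.0]                    # bucket upper bounds
buckets = [0] * (len(bounds) + 1)
for d in diffs:
    buckets[bisect.bisect_left(bounds, d)] += 1   # per-bucket frequency

# Cardinality of  SELECT * FROM orders WHERE ship_date - place_date < 5:
# sum the buckets entirely below the constant (5.0 is a bucket boundary here;
# otherwise a uniform fraction of the straddling bucket would be added).
estimate = buckets[0] + buckets[1]
actual = sum(1 for d in diffs if d < 5)
print(estimate, actual)                           # nearly identical counts
```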
This idea can also be used to specify SITs that help estimate the cardinality of queries with group-by and having clauses. The following query:
It can be seen from the foregoing description that using conditional selectivity as a framework for manipulating query plans to leverage statistical information on intermediate query results can result in more efficient query plans. Although the present invention has been described with a degree of particularity, it is the intent that the invention include all modifications and alterations from the disclosed design falling within the spirit or scope of the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
4769772 | Dwyer | Sep 1988 | A |
5724570 | Zeller et al. | Mar 1998 | A |
5806061 | Chaudhuri et al. | Sep 1998 | A |
5950186 | Chaudhuri et al. | Sep 1999 | A |
6029163 | Ziauddin | Feb 2000 | A |
6061676 | Srivastava et al. | May 2000 | A |
6088691 | Bhargava et al. | Jul 2000 | A |
6272487 | Beavin et al. | Aug 2001 | B1 |
6311181 | Lee et al. | Oct 2001 | B1 |
6363371 | Chaudhuri et al. | Mar 2002 | B1 |
6438741 | Al-omari et al. | Aug 2002 | B1 |
6477534 | Acharya et al. | Nov 2002 | B1 |
6516310 | Paulley | Feb 2003 | B2 |
6529901 | Chaudhuri et al. | Mar 2003 | B1 |
6629095 | Wagstaff et al. | Sep 2003 | B1 |
6714938 | Avadhanam et al. | Mar 2004 | B1 |
6778976 | Haas et al. | Aug 2004 | B2 |
6915290 | Bestgen et al. | Jul 2005 | B2 |
6947927 | Chaudhuri et al. | Sep 2005 | B2 |
6961721 | Chaudhuri et al. | Nov 2005 | B2 |
6983275 | Koo et al. | Jan 2006 | B2 |
7010516 | Leslie | Mar 2006 | B2 |
20030018615 | Chaudhuri et al. | Jan 2003 | A1 |
20030120682 | Bestgen et al. | Jun 2003 | A1 |
20030229635 | Chaudhuri et al. | Dec 2003 | A1 |
20040249810 | Das et al. | Dec 2004 | A1 |
20040260675 | Bruno et al. | Dec 2004 | A1 |
20050071331 | Gao et al. | Mar 2005 | A1 |
20060294065 | Dettinger et al. | Dec 2006 | A1 |
Number | Date | Country |
---|---|---|
2416368 | Oct 2003 | CA |
0743607 | Nov 1996 | EP |
1564620 | Aug 2005 | EP |
WO 9215066 | Sep 1992 | WO |
WO 9826360 | Jun 1998 | WO |
WO 0241185 | May 2002 | WO |
WO 02089009 | Nov 2002 | WO |
Number | Date | Country | |
---|---|---|---|
20050004907 A1 | Jan 2005 | US |