Method and apparatus for using conditional selectivity as foundation for exploiting statistics on query expressions

Information

  • Patent Application
  • Publication Number
    20050004907
  • Date Filed
    June 27, 2003
  • Date Published
    January 06, 2005
Abstract
By transforming a query into a product of conditional selectivity expressions, an existing set of statistics on query expressions can be used more effectively to estimate cardinality values. Conditional selectivity values are progressively separated according to rules of conditional probability to yield a set of non-separable decompositions that can be matched with the stored statistics on query expressions. The stored statistics are used to estimate the selectivity of the query and the estimated selectivity can be multiplied by the Cartesian product of referenced tables to yield a cardinality value.
Description
TECHNICAL FIELD

The invention relates generally to the field of relational databases and specifically to the field of optimizing queries on databases.


BACKGROUND OF THE INVENTION

Most query optimizers for relational database management systems (RDBMS) rely on a cost model to choose the best possible query execution plan for a given query. Thus, the quality of the query execution plan depends on the accuracy of cost estimates. Cost estimates, in turn, crucially depend on cardinality estimations of various sub-plans (intermediate results) generated during optimization. Traditionally, query optimizers use statistics built over base tables for cardinality estimates, and assume independence while propagating these base-table statistics through the query plans. However, it is widely recognized that such cardinality estimates can be off by orders of magnitude. Therefore, the traditional propagation of statistics can lead the query optimizer to choose execution plans of significantly lower quality.


SUMMARY OF THE INVENTION

Using conditional selectivity as a framework for manipulating query plans to leverage statistical information on intermediate query results can produce more efficient query plans. The number of tuples returned by a database query having a set of predicates that each reference a set of database tables can be approximated as follows. The query is decomposed to form a product of partial conditional selectivity expressions. The partial conditional selectivity expressions are then matched with stored statistics on query expressions to obtain estimated partial conditional selectivity values. The selectivity of the query is then estimated by combining the obtained partial conditional selectivity results. The resulting query selectivity estimate can be multiplied by the size of the Cartesian product of the tables referenced in the query to arrive at a cardinality value.


The decomposition of the query can be performed recursively by repeatedly separating conditional selectivity expressions into atomic decompositions. During matching, an error can be associated with a selectivity estimation that is generated using a given statistic, and those statistics with the lowest error may be selected to generate the query selectivity estimation. The error may be based on the difference between a statistic that is generated by an intermediate query result and a statistic on the corresponding base table. Statistics on query expressions that correspond to a subset of the predicates represented in a given selectivity expression may be considered for estimating the selectivity of the given selectivity expression. In an optimizer environment, the decomposition may be guided by the sub-plans generated by the optimizer. A wider variety of queries can be decomposed by transforming disjunctive query predicates into conjunctive query predicates.




BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which:



FIG. 1 illustrates an exemplary operating environment for a system for evaluating database queries using statistics maintained on intermediate query results;



FIG. 2 is a block diagram of a prior art optimizer that can be used in conjunction with the present invention;



FIG. 3 is a tree diagram for a query and two alternative execution sub-plans for a prior art optimizer;



FIG. 4 is a block diagram for a method for evaluating database queries using statistics maintained on intermediate query results according to an embodiment of the present invention;



FIG. 5 is a block diagram of a memo table for an optimizer that implements the method of FIG. 4;



FIG. 6 is a tree diagram that illustrates a coalescing grouping transformation of a query that can be used in practice of an embodiment of the present invention; and



FIG. 7 is a tree diagram that illustrates an invariant grouping transformation of a query that can be used in practice of an embodiment of the present invention.




DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Exemplary Operating Environment



FIG. 1 and the following discussion are intended to provide a brief, general description of a suitable computing environment in which the invention may be implemented. Although not required, the invention will be described in the general context of computer-executable instructions, such as program modules, being executed by a personal computer. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.


With reference to FIG. 1, an exemplary system for implementing the invention includes a general purpose computing device in the form of a conventional personal computer 20, including a processing unit 21, a system memory 22, and a system bus 23 that couples various system components including system memory 22 to processing unit 21. System bus 23 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. System memory 22 includes read only memory (ROM) 24 and random access memory (RAM) 25. A basic input/output system (BIOS) 26, containing the basic routines that help to transfer information between elements within personal computer 20, such as during start-up, is stored in ROM 24. Personal computer 20 further includes a hard disk drive 27 for reading from and writing to a hard disk, a magnetic disk drive 28 for reading from or writing to a removable magnetic disk 29, and an optical disk drive 30 for reading from or writing to a removable optical disk 31 such as a CD ROM or other optical media. Hard disk drive 27, magnetic disk drive 28, and optical disk drive 30 are connected to system bus 23 by a hard disk drive interface 32, a magnetic disk drive interface 33, and an optical drive interface 34, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for personal computer 20. Although the exemplary environment described herein employs a hard disk, a removable magnetic disk 29 and a removable optical disk 31, it should be appreciated by those skilled in the art that other types of computer-readable media which can store data that is accessible by a computer, such as random access memories (RAMs), read only memories (ROMs), and the like, may also be used in the exemplary operating environment.


A number of program modules may be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24 or RAM 25, including an operating system 35, one or more application programs 36, other program modules 37, and program data 38. A database system 55 may also be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24 or RAM 25. A user may enter commands and information into personal computer 20 through input devices such as a keyboard 40 and pointing device 42. Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to processing unit 21 through a serial port interface 46 that is coupled to system bus 23, but may be connected by other interfaces, such as a parallel port, game port or a universal serial bus (USB). A monitor 47 or other type of display device is also connected to system bus 23 via an interface, such as a video adapter 48. In addition to the monitor, personal computers typically include other peripheral output devices such as speakers and printers.


Personal computer 20 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 49. Remote computer 49 may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to personal computer 20, although only a memory storage device 50 has been illustrated in FIG. 1. The logical connections depicted in FIG. 1 include local area network (LAN) 51 and a wide area network (WAN) 52. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.


When used in a LAN networking environment, personal computer 20 is connected to local network 51 through a network interface or adapter 53. When used in a WAN networking environment, personal computer 20 typically includes a modem 54 or other means for establishing communication over wide area network 52, such as the Internet. Modem 54, which may be internal or external, is connected to system bus 23 via serial port interface 46. In a networked environment, program modules depicted relative to personal computer 20, or portions thereof, may be stored in remote memory storage device 50. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.


Cost Estimation Using Cardinality Estimates Based on Statistics on Intermediate Tables


SITs are statistics built over the results of query expressions, or intermediate tables, and their purpose is to eliminate error propagation through query plan operators. For the purposes of this description, a SIT is defined as follows: Let R be a table, A an attribute of R, and Q an SQL query that contains R.A in the SELECT clause. SIT(R.A|Q) is the statistic for attribute A on the result of executing query expression Q. Q is called the generating query expression of SIT(R.A|Q). This definition can be extended for multi-attribute statistics. Furthermore, the definition can be used as the basis for extending the CREATE STATISTICS statement in SQL, where instead of specifying only the table name, a more general query expression such as a table-valued expression can be used.
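
For concreteness, a SIT can be thought of as the pairing of a statistical summary with its generating query expression. The following minimal Python sketch captures that bookkeeping; the class name, fields, and example strings are illustrative assumptions, not part of any actual system:

from dataclasses import dataclass

@dataclass(frozen=True)
class SIT:
    # Mirrors the notation SIT(R.A | Q): `attribute` is R.A and
    # `generating_query` is the query expression Q whose result the
    # statistic summarizes. `histogram` stands in for the actual
    # summary (e.g., bucket boundaries and frequencies).
    attribute: str          # e.g. "R.A"
    generating_query: str   # e.g. "SELECT R.A FROM R JOIN S ON R.x = S.y"
    histogram: tuple = ()

# A base-table statistic is the special case where Q is just a table scan:
base_stat = SIT("R.A", "SELECT R.A FROM R")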


In U.S. patent application Ser. No. 10/191,822, incorporated herein by reference in its entirety, the concept of SITs was introduced. A particular method of adapting a prior art query optimizer to access and utilize a preexisting set of SITs for cost estimation was described in detail in this application, which method is summarized here briefly as background information.


Referring to FIG. 2, the query optimizer examines an input query and generates a query execution plan that most efficiently returns the results sought by the query in terms of cost. The cost estimation module and its embedded cardinality estimation module can be modified to utilize statistics on query expressions, or intermediate tables, to improve the accuracy of cardinality estimates.


In general, the use of SITs is enabled by implementing a wrapper (shown in phantom in FIG. 2) on top of the original cardinality estimation module of the RDBMS. During the optimization of a single query, the wrapper will be called many times, once for each different query sub-plan enumerated by the optimizer. Each time the query optimizer invokes the modified cardinality estimation module with a query plan, this input plan is transformed by the wrapper into another one that exploits SITs. The cardinality estimation module uses the transformed plan to arrive at a potentially more accurate cardinality estimate that is returned to the query optimizer. The transformed query plan is thus a temporary structure used by the modified cardinality estimation module and is not used for query execution.


According to the embodiment described in application Ser. No. 10/191,822, the transformed plan that is passed to the cardinality estimation module exploits applicable SITs to enable a potentially more accurate cardinality estimate. The original cardinality estimation module requires little or no modification to accept the transformed plan as input. The transformation of plans is performed efficiently, which is important because the transformation will be used for several sub-plans for a single query optimization.


In general, there will be no SIT that matches a given plan exactly. Instead, several SITs might apply to some (perhaps overlapping) portions of the input plan. The embodiment described in application Ser. No. 10/191,822 integrates SITs with cardinality estimation routines by transforming the input plan into an equivalent one that exploits SITs as much as possible. The transformation step is based on a greedy procedure that selects which SITs to apply at each iteration, so that the number of independence assumptions made during the estimation for the transformed query plan is minimized. Identifying whether or not a SIT is applicable to a given plan leverages materialized view matching techniques, as can be seen in the following example.


In the query shown in FIG. 3(a), R⋈S and R⋈T are (skewed) foreign-key joins. Only a few tuples in S and T verify predicates σS.a<10(S) and σT.b>20(T), and most tuples in R join precisely with these tuples in S and T. In the absence of SITs, independence is assumed between all predicates and the selectivity of the original query is estimated as the product of individual join and filter selectivity values. This produces a very small number, clearly a gross underestimate of the selectivity value. In the presence of the two SITs shown in FIG. 3, the two maximal equivalent rewritings shown in FIGS. 3(b) and 3(c) are explored and one of them is selected as the transformed query plan. Each alternative exploits one available SIT and therefore takes into consideration the correlations introduced by one of the skewed joins. Thus, the resulting estimations, although not perfect, have considerably better quality than when base-table statistics are used.


Because the previous example employed view matching techniques as the main engine to guide transformations, no alternative was explored that exploited both SITs simultaneously. This is a fundamental constraint that results from relying exclusively on materialized view matching to enumerate alternatives. Therefore it is desirable to supplement the enumerated alternatives from materialized view matching with additional alternatives that leverage multiple SITs simultaneously. This is accomplished by using conditional selectivity as a formal framework to reason with selectivity values to identify and exploit SITs for cardinality estimation.


Conditional Selectivity


The concept of conditional selectivity allows expression of a given selectivity value in many different but equivalent ways. This description will focus on conjunctive Select Project Join queries, but the methods herein can be extended to handle more general queries.


An arbitrary SPJ query is represented in a canonical form by first forming the Cartesian product of the tables referenced in the query, then applying all predicates (including joins) to the Cartesian product, and projecting out the desired attributes. Thus, an SPJ query is represented as:

q = πa1, ..., ana(σp1 ∧ ... ∧ pnp(R1 × ... × Rn))

where the ai are attributes of R1 × ... × Rn, and the pi are predicates over R1 × ... × Rn (e.g., R1.a ≤ 25, or R1.x = R2.y).


Each set of predicates {pi} that is applied to R1 × ... × Rn results in the subset of tuples that simultaneously verify all pi. Under bag semantics, projections do not change the size of the output, and therefore projections are omitted from consideration when estimating cardinalities. To estimate the size of the output, or its cardinality, the fraction of tuples in R1 × ... × Rn that simultaneously verify all predicates pi (i.e., the selectivity of all pi) is approximated, and then this fraction is multiplied by |R1 × ... × Rn|, which can be obtained by simple lookups over the system catalogs. The use of selectivities to obtain cardinalities results in simpler derivations. The classical definition of selectivity is extended as follows:
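
The arithmetic is straightforward. The short Python sketch below (with hypothetical numbers) shows how an estimated selectivity is turned into a cardinality using only catalog lookups of table sizes:

from math import prod

def cardinality(selectivity: float, table_sizes: list[int]) -> float:
    # |sigma_{p1 ^ ... ^ pk}(R1 x ... x Rn)| is approximated as the
    # estimated selectivity times |R1| * ... * |Rn|, where the table
    # sizes come from the system catalogs.
    return selectivity * prod(table_sizes)

# e.g., selectivity 0.0002 over |R| = 10_000 and |S| = 5_000:
# cardinality(0.0002, [10_000, 5_000]) == 10_000.0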


Let R = {R1,...,Rn} be a set of tables, and let P = {p1,...,pj} and Q = {q1,...,qk} be sets of predicates over RX = R1 × ... × Rn. The selectivity of P with respect to σq1 ∧ ... ∧ qk(RX), denoted SelR(P|Q), is defined as the fraction of tuples in σq1 ∧ ... ∧ qk(RX) that simultaneously verify all predicates in P. Therefore,
SelR(P|Q) = |σp1 ∧ ... ∧ pj(σq1 ∧ ... ∧ qk(R1 × ... × Rn))| / |σq1 ∧ ... ∧ qk(R1 × ... × Rn)|

If Q = Ø, this reduces to SelR(P), which agrees with the traditional definition of selectivity.


In this description, tables(P) denotes the set of tables referenced by a set of predicates P, and attr(P) denotes the set of attributes mentioned in P. To simplify the notation, “P,Q” denotes “P∪Q” and “p,Q” denotes “{p} ∪Q”, where p is a predicate and P and Q are sets of predicates. For example, given the following query:

  • SELECT * FROM R, S, T
  • WHERE R.x = S.y AND S.a < 10 AND T.b > 5


    the selectivity of q, Sel{R,S,T}(R.x=S.y, S.a<10, T.b>5), is the fraction of tuples in R × S × T that verify all predicates. Additionally, tables(R.x=S.y, S.a<10) = {R,S}, and attr(R.x=S.y, S.a<10) = {R.x, S.y, S.a}.
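
The definition can be checked by brute force on small instances. The following Python sketch computes SelR(P|Q) directly as the fraction of Cartesian-product tuples satisfying Q that also satisfy P; the toy tables and predicates are purely illustrative:

from itertools import product

# Toy instances of R, S, T (each tuple is a dict of attributes).
R = [{"x": i, "a": i % 7} for i in range(20)]
S = [{"y": i % 5, "a": i} for i in range(15)]
T = [{"b": i % 12} for i in range(10)]

def sel(tables, preds, given=()):
    # Sel_R(P|Q): fraction of tuples of the Cartesian product that
    # satisfy Q which also satisfy P (Q empty gives plain selectivity).
    rows = [dict(zip(("R", "S", "T"), combo)) for combo in product(*tables)]
    sat_q = [r for r in rows if all(q(r) for q in given)]
    sat_pq = [r for r in sat_q if all(p(r) for p in preds)]
    return len(sat_pq) / len(sat_q) if sat_q else 0.0

# Sel_{R,S,T}(R.x=S.y, S.a<10, T.b>5) for the example query:
s = sel((R, S, T),
        preds=(lambda r: r["R"]["x"] == r["S"]["y"],
               lambda r: r["S"]["a"] < 10,
               lambda r: r["T"]["b"] > 5))

# Conditional form Sel_{R,S,T}(S.a<10 | R.x=S.y):
s_cond = sel((R, S, T),
             preds=(lambda r: r["S"]["a"] < 10,),
             given=(lambda r: r["R"]["x"] == r["S"]["y"],))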


In general, the task is to estimate SelR(p1,...,pk) for a given query σp1 ∧ ... ∧ pk(RX). Two properties, atomic decomposition and separable decomposition, are verified by conditional selectivity values and allow a given selectivity to be expressed in many equivalent ways. Proofs of the properties are omitted.


Atomic decomposition is based on the notion of conditional probability and unfolds a selectivity value into the product of two related selectivity values:

SelR(P,Q) = SelR(P|Q) · SelR(Q)

The property of atomic decomposition holds for arbitrary sets of predicates and tables, without relying on any assumption such as independence. By repeatedly applying atomic decompositions to an initial selectivity value S, a very large number of alternative rewritings of S, called decompositions, can be obtained. The number of different decompositions of SelR(p1,...,pn), denoted T(n), is bounded as follows: 0.5·(n+1)! ≤ T(n) < 1.5^n·n! for n ≥ 1.
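
The count T(n) follows a simple recurrence: a decomposition either stops, or splits off one non-empty conditional factor and recursively decomposes the rest. The Python sketch below (the function name T and this counting convention are assumptions made for illustration) checks the stated bounds for small n:

from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def T(n: int) -> int:
    # Number of decompositions of Sel_R(p1,...,pn): either keep the
    # value as-is, or split off a non-empty P' of size k and recursively
    # decompose Sel_R(Q) with |Q| = n - k.
    if n <= 1:
        return 1
    return 1 + sum(comb(n, k) * T(n - k) for k in range(1, n))

# Checking 0.5*(n+1)! <= T(n) < 1.5**n * n! for small n:
for n in range(1, 8):
    assert 0.5 * factorial(n + 1) <= T(n) < 1.5**n * factorial(n)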


In the presence of exact selectivity information, each possible decomposition of SelR(P) results in the same selectivity value (since each decomposition is obtained through a series of equalities). In reality, exact information may not be available. Instead, a set of SITs is maintained and used to approximate selectivity values. In such cases, depending on the available SITs, some decompositions might be more accurate than others. To determine which, each decomposition S of SelR(P) is assigned a measure of how accurately S can be approximated using the current set of available SITs. Approximating SelR(P) can then be treated as an optimization problem in which the "most accurate" decomposition of SelR(P) for the given set of available SITs is sought.


A naïve approach to this problem would exhaustively explore all possible decompositions of SelR(P), estimate the accuracy of each decomposition, and return the most accurate one. To improve on this approach, the notion of separability is used. Separability is a syntactic property of conditional selectivity values that can substantially reduce the space of decompositions without missing any useful one. SelR(P|Q) is said to be separable (with Q possibly empty) if non-empty sets X1 and X2 can be found such that P ∪ Q = X1 ∪ X2 and tables(X1) ∩ tables(X2) = Ø. In that case, X1 and X2 are said to separate SelR(P|Q). For example, given P = {T.b=5, S.a<10}, Q = {R.x=S.y}, and S = Sel{R,S,T}(P|Q), X1 = {T.b=5} and X2 = {R.x=S.y, S.a<10} separate S, because tables(X1) = {T} and tables(X2) = {R,S}. If S.y=T.z were added to Q, the resulting selectivity expression would no longer be separable.


Intuitively, SelR(P|Q) is separable if σP∧Q(RX) combines some tables using Cartesian products. It is important to note, however, that even if the original query does not use any Cartesian product, some of its factors can become separable after atomic decompositions are applied. The property of separable decomposition, which is applicable where the independence assumption is guaranteed to hold, follows:


Suppose {P1,P2} and {Q1,Q2} are partitions of P and Q, respectively, such that X1 = P1 ∪ Q1 and X2 = P2 ∪ Q2 separate SelR(P|Q), and let R1 = tables(X1) and R2 = tables(X2). Then SelR(P|Q) can be separated into SelR1(P1|Q1) · SelR2(P2|Q2). For example, S = Sel{R,S,T}(T.b=5, S.a<10 | R.x=S.y) is separated by X1 = {T.b=5} and X2 = {R.x=S.y, S.a<10}, which yields S = Sel{R,S}(S.a<10 | R.x=S.y) · Sel{T}(T.b=5).


Using the separable decomposition property, it can be assumed that if H is a statistic that approximates SelR(P|Q), and SelR(P|Q) is separable as SelR1(P1|Q1) · SelR2(P2|Q2), then there are two statistics H1 and H2 that approximate SelR1(P1|Q1) and SelR2(P2|Q2) such that: 1) H1 and H2 combined require at most as much space as H does, and 2) the approximation using H1 and H2 is as accurate as that of H. For example, Sel{R,S}(R.a<10, S.b>20) is separable as Sel{R}(R.a<10) · Sel{S}(S.b>20). In this situation, using two uni-dimensional histograms H(R.a) and H(S.b) to estimate each factor and then multiplying the resulting selectivity values assuming independence (which is correct in this case) will be at least as accurate as directly using a two-dimensional histogram H(R.a, S.b) built on R×S. In fact, the independence assumption holds in this case, so the joint distribution over {R.a, S.b} can be estimated correctly from uni-dimensional distributions over R.a and S.b. For that reason, statistics that directly approximate separable factors of decompositions need not be maintained, since such statistics can be replaced by more accurate and space-efficient ones. Therefore, all decompositions S = S1 · ... · Sn for which some Si is separable can be discarded without missing the most accurate decompositions.
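
Separability can be decided purely syntactically by grouping predicates into connected components over the tables they reference. A Python sketch of this check follows; the predicate names and table sets are illustrative:

def non_separable_factors(preds: dict[str, frozenset[str]]):
    # Split a set of predicates (predicate name -> tables it references)
    # into the unique non-separable factors: the connected components of
    # the graph that links predicates sharing a table.
    remaining = dict(preds)
    factors = []
    while remaining:
        name, tabs = remaining.popitem()
        group, tabs = {name}, set(tabs)
        changed = True
        while changed:
            changed = False
            for n, t in list(remaining.items()):
                if t & tabs:            # shares a table with the group
                    group.add(n)
                    tabs |= t
                    del remaining[n]
                    changed = True
        factors.append((frozenset(group), frozenset(tabs)))
    return factors

# P u Q = {T.b=5, R.x=S.y, S.a<10} splits into {T.b=5} and
# {R.x=S.y, S.a<10}; adding S.y=T.z would merge everything:
factors = non_separable_factors({
    "T.b=5": frozenset({"T"}),
    "R.x=S.y": frozenset({"R", "S"}),
    "S.a<10": frozenset({"S"}),
})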


The separable decomposition property and the above assumption can substantially reduce the search space, since large subsets of decompositions need not be considered. In many cases, however, the search space is still very large. To make the optimization problem manageable, some restrictions can be imposed on the way the accuracy of a decomposition is measured. A dynamic-programming algorithm can then return the most accurate decomposition for a given selectivity value, provided that the function that measures the accuracy of decompositions is both monotonic and algebraic.


The error of a decomposition, which measures how accurately the available set of statistics approximates the decomposition, must verify two properties: monotonicity and algebraic aggregation. Suppose S = SelR(p1,...,pn) is a selectivity value and D = S1 · ... · Sk is a non-separable decomposition of S such that Si = SelRi(Pi|Qi). If statistic Hi is used to approximate Si, then error(Hi, Si) is the level of accuracy of Hi approximating Si. The value error(Hi, Si) is a positive real number, where smaller values represent better accuracy. The estimated overall error for D = S1 · ... · Sk is given by an aggregate function E(e1,...,ek), where ei = error(Hi, Si).


E is monotonic if, whenever ei ≤ e′i for all i, E(e1,...,en) ≤ E(e′1,...,e′n). Monotonicity is a reasonable property for aggregate functions representing overall accuracy: if each individual error e′i is at least as high as the corresponding error ei, then the overall E(e′1,...,e′n) would be expected to be at least as high as E(e1,...,en).


F is distributive if there is a function G such that F(x1,...,xn) = G(F(x1,...,xi), F(xi+1,...,xn)). Two examples of distributive aggregates are max (with G = max) and count (with G = sum). In general, E is algebraic if there are distributive functions F1,...,Fm and a function H such that E(x1,...,xn) = H(F1(x1,...,xn),...,Fm(x1,...,xn)). For example, avg is algebraic with F1 = sum, F2 = count, and H(x,y) = x/y. For simplicity, for an algebraic E, Emerge(E(x1,...,xi), E(xi+1,...,xn)) is defined as E(x1,...,xn). Therefore, avgmerge(avg(1,2), avg(3,4)) = avg(1,2,3,4).
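
As a concrete illustration of an algebraic aggregate, the Python sketch below implements avg as the distributive pair (sum, count) plus a finalizer; the function names are assumptions made for the example:

# avg is algebraic: the pair (sum, count) is a fixed-size summary that
# can be merged across sub-decompositions and finalized at the end.
def avg_summary(errors):            # F1 = sum, F2 = count
    return (sum(errors), len(errors))

def avg_merge(s1, s2):              # Emerge on the summaries
    return (s1[0] + s2[0], s1[1] + s2[1])

def avg_final(s):                   # H(x, y) = x / y
    return s[0] / s[1]

# avg_merge(avg_summary([1, 2]), avg_summary([3, 4])) gives (10, 4),
# which finalizes to avg_final((10, 4)) == 2.5, i.e. avg(1, 2, 3, 4).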


Monotonicity imposes the principle of optimality on error values and allows a dynamic programming strategy to find the optimal decomposition of SelR(P). The principle of optimality states that the components of a globally optimal solution are themselves optimal. Therefore, the most accurate decomposition of SelR(P) can be found by trying all atomic decompositions SelR(P) = SelR(P′|Q) · SelR(Q) (with Q = P − P′), recursively obtaining the optimal decomposition of SelR(Q), and combining the partial results. In turn, the key property of an algebraic aggregate E is that a small fixed-size vector can summarize sub-aggregations, and therefore the amount of information that needs to be carried between recursive calls to calculate error values is bounded.


Building on the decomposition and error principles discussed above, FIG. 4 illustrates a recursive algorithm, "getSelectivity," designated generally as 400, for obtaining an accurate approximation of a selectivity value. In general, getSelectivity separates a selectivity value into simpler factors and then recursively calls itself to obtain partial selectivity values that are then combined into the requested selectivity value. The algorithm relies on the error function being monotonic and algebraic, and avoids considering decompositions with separable factors. The pruning technique uses the fact that there is always a unique decomposition of SelR(P) into non-separable factors of the form SelRi(Pi). In other words, given a desired SelR(P), repeatedly applying the separable decomposition property until no resulting factor is separable always yields the same non-separable decomposition of SelR(P).


In step 410, the algorithm considers an input set of predicates P over a set of tables R. The algorithm first checks whether SelR(P) has already been stored in a memoization table indicated as 490. If the value is stored, the algorithm returns that value and the process ends. If the value has not yet been stored, the algorithm determines whether the input selectivity value SelR(P) is separable and, if so, separates SelR(P) into i factors (step 420). For each factor, getSelectivity is recursively called (step 460) and the optimal decomposition is obtained for each factor. Then, partial results and errors are combined in steps 470 and 475 and returned. Otherwise, SelR(P) is not separable and it is passed to steps 430 and 440, where all atomic decompositions SelR(P′|Q) · SelR(Q) are tried. For each alternative decomposition, SelR(Q) is recursively passed to getSelectivity (step 460). Additionally, in step 450 SelR(P′|Q) is approximated using the best available SITs among the set of available statistics 455. If no statistic is available in step 450, errorP′|Q is set to ∞ and another atomic decomposition of the factor is considered. After all atomic decompositions are explored in steps 440 and 450, the most accurate estimation for SelR(P) (493) and its associated error are calculated in steps 470 and 475 and returned (and stored in table 490). As a byproduct of getSelectivity, the most accurate selectivity estimation for every sub-query σP′(RX) with P′ ⊆ P is obtained. It can be shown that getSelectivity(R, P) returns the most accurate approximation of SelR(P), for a given definition of error, among all non-separable decompositions. A pseudo-code implementation of getSelectivity follows:

getSelectivity (R: tables, P: predicates over RX)
Returns (SelR(P), errorP) such that errorP is best among all non-separable decompositions
01  if (SelR(P) was already calculated)
02    (SelR(P), errorP) = memoization_table_lookup(P)
03  else if SelR(P) is separable
04    get the standard decomposition of SelR(P):
        SelR(P) = SelR1(P1) · ... · SelRn(Pn)
05    (SPi, errorPi) = getSelectivity(Ri, Pi)   (for each i = 1..n)
06    SP = SP1 · ... · SPn
07    errorP = Emerge(errorP1, ..., errorPn)
08  else  // SelR(P) is non-separable
09    errorP = ∞; bestH = NULL
10    for each P′ ⊆ P, Q = P − P′
        // check atomic decomposition SelR(P′|Q) · SelR(Q)
11      (SQ, errorQ) = getSelectivity(R, Q)
12      (H, errorP′|Q) = best statistic (along with the estimated error)
          to approximate SelR(P′|Q)
13      if (Emerge(errorP′|Q, errorQ) ≤ errorP)
14        errorP = Emerge(errorP′|Q, errorQ)
15        bestH = H
16    SP′|Q = estimation of SelR(P′|Q) using bestH
17    SP = SP′|Q · SQ
18  memoization_table_insert(P, SP, errorP)
19  return (SP, errorP)
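
The following Python sketch mirrors the overall shape of getSelectivity: memoization, the separable shortcut, and the loop over atomic decompositions. It is a simplified illustration, not the patented implementation; the predicate names, the STATS catalog, the sum-based error aggregate E, and the fallback rule for missing statistics are all assumptions made for the example:

from itertools import chain, combinations
from math import inf, prod

# Predicate name -> tables it references (illustrative query: R join S
# with a filter on S, plus an independent filter on T).
TABLES = {"jRS": {"R", "S"}, "sa": {"S"}, "tb": {"T"}}

# Hypothetical catalog: (P', Q) -> (estimate of Sel(P'|Q), error of the
# statistic). Error 0.0 marks an exact SIT; larger values are coarser.
STATS = {
    (frozenset({"sa"}), frozenset({"jRS"})): (0.4, 0.0),
    (frozenset({"sa"}), frozenset()): (0.05, 1.0),
    (frozenset({"jRS"}), frozenset()): (0.001, 0.5),
    (frozenset({"tb"}), frozenset()): (0.2, 0.0),
}
_memo = {}

def _subsets(s):
    return chain.from_iterable(combinations(s, k) for k in range(1, len(s) + 1))

def _factors(preds):
    # Unique split into non-separable factors via shared tables.
    preds, out = set(preds), []
    while preds:
        grp = {preds.pop()}
        tabs = set().union(*(TABLES[p] for p in grp))
        while True:
            more = {p for p in preds if TABLES[p] & tabs}
            if not more:
                break
            grp |= more
            preds -= more
            tabs |= set().union(*(TABLES[p] for p in more))
        out.append(frozenset(grp))
    return out

def get_selectivity(P: frozenset):
    # (selectivity, error) of the best non-separable decomposition,
    # with memoization; the error aggregate E is a plain sum here.
    if not P:
        return 1.0, 0.0
    if P in _memo:
        return _memo[P]
    factors = _factors(P)
    if len(factors) > 1:               # separable: solve each factor alone
        parts = [get_selectivity(f) for f in factors]
        result = (prod(s for s, _ in parts), sum(e for _, e in parts))
    else:                              # try every atomic decomposition
        result = (0.0, inf)
        for Pp in map(frozenset, _subsets(P)):
            Q = P - Pp
            # Exact match on (P', Q), else fall back to a base statistic,
            # assuming independence between P' and Q (a crude stand-in
            # for the candidate-statistic search of step 450).
            hit = STATS.get((Pp, Q)) or STATS.get((Pp, frozenset()))
            if hit is None:
                continue
            s_q, e_q = get_selectivity(Q)
            if hit[1] + e_q < result[1]:
                result = (hit[0] * s_q, hit[1] + e_q)
    _memo[P] = result
    return result

# get_selectivity(frozenset({"jRS", "sa", "tb"})) prefers the SIT on
# S.a conditioned on the join over assuming independence.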


The worst-case complexity of getSelectivity is O(3^n), where n is the number of input predicates. In fact, the number of different invocations of getSelectivity is at most 2^n, one for each subset of P. Due to memoization, only the first invocation of getSelectivity for each subset of P actually produces work (the others are simple lookups). The running time of getSelectivity for k input predicates (not counting recursive calls) is O(k²) for separable factors and O(2^k) for non-separable factors. Therefore the complexity of getSelectivity is O(Σk=1..n C(n,k)·2^k), or O(3^n). In turn, the space complexity of getSelectivity is O(2^n), to store in the memoization table selectivity and error values for SelR(P′) with P′ ⊆ P.


The worst-case complexity of getSelectivity, O(3^n), can be contrasted with the lower bound on the number of possible decompositions of a predicate, O((n+1)!). Since (n+1)!/3^n is Ω(2^n), using monotonic error functions decreases the number of decompositions explored exponentially without missing the most accurate one. If many subsets of P are separable, the complexity of getSelectivity is further reduced, since smaller problems are solved independently. For instance, if SelR(P) = SelR1(P1) · SelR2(P2), where |P1| = k1 and |P2| = k2, the worst-case running time of getSelectivity is O(3^k1 + 3^k2), which is much smaller than O(3^(k1+k2)).


In step 450, getSelectivity obtains the statistic H that approximates SelR(P|Q) while minimizing error(H, SelR(P|Q)). This procedure consists of 1) obtaining the set of candidate statistics that can approximate SelR(P|Q), and 2) selecting from the candidate set the statistic H that minimizes error(H, SelR(P|Q)).


In general, a statistic H consists of a set of SITs. For simplicity, the notation is modified to represent SITs as follows. Given query expression q = σp1 ∧ ... ∧ pk(RX), SITR(a1,...,aj | p1,...,pk) is used instead of SIT(a1,...,aj | q). That is, the set of predicates of q over RX is enumerated, which agrees with the notation for selectivity values. It should be noted that for the purposes of this discussion SITs are described as histograms, but the general ideas can be applied to other statistical estimators as well. Therefore HR(a1,...,aj | p1,...,pk) is a multidimensional histogram over attributes a1,...,aj built on the result of executing σp1 ∧ ... ∧ pk(RX). As a special case, if there are no predicates pi, HR(a1,...,aj |) is written, which is a traditional base-table histogram. The notion of predicate independence is used to define the set of candidate statistics to consider for approximating a given SelR(P|Q).


Given sets of predicates P1, P2, and Q, it is said that P1 and P2 are independent with respect to Q if the following equality holds: SelR(P1,P2|Q) = SelR1(P1|Q) · SelR2(P2|Q), where R1 = tables(P1,Q) and R2 = tables(P2,Q). If P1 and P2 are independent with respect to Q, then SelR(P1|P2,Q) = SelR1(P1|Q) holds as well. If there is no available statistic approximating SelR(P|Q), but there is an available statistic H approximating SelR(P|Q′) with Q′ ⊆ Q, then independence between P and Q−Q′ is assumed with respect to Q′, and H is used to approximate SelR(P|Q). This idea is used to define the candidate set of statistics to approximate SelR(P|Q).


Given S = SelR(P|Q), where P is a set of filter predicates such as {R.a<5, S.b>8}, the candidate statistics to approximate S are all {HR(A|Q′)} that simultaneously verify the following three properties. 1) attr(P) ⊆ A (the SIT can estimate the predicates). 2) Q′ ⊆ Q (assuming independence between P and Q−Q′; in a traditional optimizer, Q′ = Ø, so P and Q are always assumed independent). 3) Q′ is maximal, i.e., there is no HR(A|Q″) available such that Q′ ⊂ Q″ ⊆ Q.


In principle, the set of candidate statistics could be defined in a more flexible way, e.g., by including statistics of the form HR(A|Q′) where Q′ subsumes Q. The candidate sets of statistics are restricted as described above to provide a good tradeoff between the efficiency of identifying them and the quality of the resulting approximations. For example, given S = SelR(R.a<5 | p1,p2) and the statistics HR(R.a|p1), HR(R.a|p2), HR(R.a|p1,p2,p3), and HR(R.a), the set of candidate statistics for S includes {HR(R.a|p1)} and {HR(R.a|p2)}. HR(R.a) does not qualify since its query expression is not maximal, and HR(R.a|p1,p2,p3) does not qualify since it contains the extra predicate p3.
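
These three rules translate directly into a filter over an available catalog of SITs. A Python sketch follows, using the four statistics of the example above; representing each statistic as an (A, Q′) pair is an assumption of the sketch:

def candidate_stats(attr_P: set, Q: frozenset, sits):
    # Filter available SITs H(A|Q') per the three rules: A covers
    # attr(P); Q' is a subset of Q; and Q' is maximal among survivors.
    usable = [(A, Qp) for (A, Qp) in sits
              if attr_P <= A and Qp <= Q]
    return [(A, Qp) for (A, Qp) in usable
            if not any(Qp < Qpp for (_, Qpp) in usable)]

# For S = Sel_R(R.a<5 | p1,p2) only H(R.a|p1) and H(R.a|p2) survive:
# H(R.a) is not maximal, and H(R.a|p1,p2,p3) references the extra p3.
sits = [({"R.a"}, frozenset({"p1"})),
        ({"R.a"}, frozenset({"p2"})),
        ({"R.a"}, frozenset({"p1", "p2", "p3"})),
        ({"R.a"}, frozenset())]
cands = candidate_stats({"R.a"}, frozenset({"p1", "p2"}), sits)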


In many cases a predicate set P is composed of both filter and join predicates, e.g., P = {T.a<10, R.x=S.y, S.b>5}. To find SelR(P|Q) in this case, several observations about histograms are used. If H1 = HR(x,X|Q) and H2 = HR(y,Y|Q) are both SITs, the join H1 ⋈x=y H2 returns not only the value Sel(x=y|Q) for the join, but also a new histogram Hj = HR(x,X,Y | x=y,Q). Therefore Hj can be used to estimate the remaining predicates involving attributes x(=y), X, and Y. As an example, to find Sel(R.a<5, R.x=S.y | Q) given histograms H1 = H(R.x,R.a|Q) and H2 = H(S.y|Q), the join H1 ⋈R.x=S.y H2 returns the scalar selectivity value s1 = Sel(R.x=S.y|Q) and also H3 = H(R.a | R.x=S.y, Q). The selectivity Sel(R.x=S.y, R.a<5 | Q) is then conceptually obtained by the following atomic decomposition: s1·s2 = Sel(R.a<5 | R.x=S.y, Q) · Sel(R.x=S.y | Q), where s2 is estimated using H3.


As the example shows, Sel(P|Q) can be approximated by obtaining SITs covering all attributes in P, joining the SITs, and estimating the remaining range predicates in P. In general, the set of candidate statistics to approximate Sel(P|Q) is conceptually obtained as follows. 1) All join predicates in P are transformed into pairs of wildcard selection predicates, yielding P′. For instance, predicate R.x=S.y is replaced by the pair (R.x=?, S.y=?), and therefore Sel(R.x=S.y, T.a<10, S.b>5 | Q) results in SelR(R.x=?, S.y=?, T.a<10, S.b>5 | Q). 2) Because the join predicates in P were replaced with filter predicates in P′, the resulting selectivity value becomes separable. Applying the separable decomposition property yields SelR1(P′1|Q1) · ... · SelRk(P′k|Qk), where no SelRi(P′i|Qi) is separable. 3) Each SelRi(P′i|Qi) contains only filter predicates in P′i, so each candidate set of statistics can be found independently. To approximate the original selectivity value with the candidate sets of statistics obtained in this way, all Hi are joined by the attributes mentioned in the wildcard predicates and the actual range predicates are estimated as in the previous example.


Once the set of candidate statistics to approximate a given SelR(P|Q) is obtained, the one expected to result in the most accurate estimation for SelR(P|Q) must be selected, i.e., the statistic H that minimizes the value of error(H, SelR(P|Q)).


In getSelectivity, error(H, S) returns the estimated level of accuracy of approximating selectivity S using statistic H. There are two requirements for the implementation of error(H, S). First, it must be efficient, since error(H, S) is called in the inner loop of getSelectivity. Very accurate but inefficient error functions are not useful, since the overall optimization time would increase, making the exploitation of SITs a less attractive alternative. For instance, this requirement rules out techniques that examine the actual data tuples to obtain exact error values.


The second requirement concerns the availability of information to calculate error values. At first sight, it is tempting to reformulate error as a meta-estimation technique: in order to estimate the error between two data distributions (actual selectivity values versus SIT-approximated selectivity values), additional statistics, or meta-statistics, could be maintained over the difference of such distributions. Estimating error(H, S) would then be equivalent to approximating range queries over these meta-statistics. However, this approach is flawed, since if such meta-statistics existed, they could be combined with the original statistic to obtain more accurate results in the first place. As an example, suppose H = HR(R.a|p1) is used to approximate S = SelR(R.a<10 | p1,p2). If a meta-statistic M were available to estimate the values error(H, SelR(c1≤R.a≤c2 | p1,p2)), then H and M could be combined to obtain a new statistic that directly approximates SelR(R.a<10 | p1,p2).


Therefore, error values must be estimated using efficient and coarse mechanisms. Existing information, such as system catalogs or characteristics of the input query, can be used, but not additional information created specifically for that purpose.


Application Ser. No. 10/191,822 introduced an error function, nInd, that is simple and intuitive and uses the fact that the independence assumption is the main source of errors during selectivity estimation. The overall error of a decomposition S = SelR1(P1|Q1) · ... · SelRn(Pn|Qn), when approximated, respectively, using HR1(A1|Q′1), ..., HRn(An|Q′n) (Q′i ⊆ Qi), is defined as the total number of predicate independence assumptions made during the approximation, normalized by the maximum number of independence assumptions in the decomposition (to obtain a value between 0 and 1). In symbols, this error function is as follows:
nInd({SelRi(Pi|Qi), HRi(Ai|Q′i)}) = ( Σi |Pi| · |Qi − Q′i| ) / ( Σi |Pi| · |Qi| )


Each term in the numerator represents the fact that Pi and Qi−Q′i are assumed independent with respect to Q′i, and therefore the number of predicate independence assumptions is |Pi|·|Qi−Q′i|. In turn, each term in the denominator represents the maximum number of independence assumptions, obtained when Q′i = Ø, i.e., |Pi|·|Qi|. As a very simple example, consider S = SelR(R.a<10, R.b>50) and the decomposition S = SelR(R.a<10 | R.b>50) · SelR(R.b>50). If base-table histograms H(R.a) and H(R.b) are used, the error using nInd is
( 1·(1−0) + 1·(0−0) ) / ( 1·1 + 1·0 ) = 1/1 = 1,

i.e., one independence assumption out of a maximum of one (between R.a<10 and R.b>50). nInd is clearly a syntactic definition that can be computed very efficiently.
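
nInd reduces to a ratio of weighted counts and can be implemented in a few lines. A Python sketch reproducing the example above (the triple-based input format is an assumption of the sketch):

def n_ind(terms):
    # nInd error: `terms` is a list of (|Pi|, |Qi|, |Qi - Q'i|) triples,
    # one per factor of the decomposition.
    num = sum(p * q_minus for (p, _, q_minus) in terms)
    den = sum(p * q for (p, q, _) in terms)
    return num / den if den else 0.0

# Sel(R.a<10 | R.b>50) * Sel(R.b>50) estimated with base histograms
# H(R.a), H(R.b): one assumption out of a maximum of one.
assert n_ind([(1, 1, 1), (1, 0, 0)]) == 1.0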


While nInd is a very simple metric, many alternative SITs are often given the same score, and nInd must break ties arbitrarily. This behavior is problematic when there are two or more available SITs to approximate a selectivity value that all receive the same "syntactic" nInd score even though the actual benefit of using each of them is drastically different, as illustrated in the following example.


Consider R ⋈R.s=S.s (σS.a<10(S)) ⋈S.t=T.t T, with both joins being foreign-key joins, and the following factor of a decomposition: S1 = Sel{R,S,T}(S.a<10 | R⋈S, S⋈T). If the only candidate SITs to approximate S1 are H1 = H{R,S}(S.a | R⋈S) and H2 = H{S,T}(S.a | S⋈T), then using the error function nInd each statistic would have a score of ½, meaning that in general each alternative will be chosen at random 50% of the time. However, in this particular case H1 will always be more helpful than H2. In fact, since S ⋈S.t=T.t T is a foreign-key join, the distribution of attribute S.a over the result of S ⋈S.t=T.t T is exactly the same as the distribution of S.a over base table S. Therefore, S ⋈S.t=T.t T is actually independent of S.a<10, and H2 provides no benefit over the base histogram H(S.a).


An alternative error function, Diff, is defined as follows. A single value, diffH, between 0 and 1 is assigned to each available SIT H = H(R.a|Q). In particular, diffH = 0 when the distribution of R.a on the base table R is exactly the same as that on the result of executing query expression Q. On the other hand, diffH = 1 when such distributions are very different (note that in general there are multiple possible distributions for which diffH = 1, but only one for which diffH = 0). Using diff values, the Diff error function generalizes nInd by providing a less syntactic notion of independence. In particular, the overall error value of a decomposition S = SelR1(P1|Q1) · ... · SelRn(Pn|Qn), when approximated using H1,...,Hn, respectively, is given by:
Diff({SelRi(Pi|Qi), Hi}) = ( Σi |Pi| · |Qi| · (1 − diffHi) ) / ( Σi |Pi| · |Qi| )


The intuition behind the expression above is that the value |Qi|·(1 − diffHi) in the numerator represents a "semantic" number of independence assumptions made when approximating Si with Hi, and replaces the syntactic value |Qi − Q′i| of nInd. In the previous example, diffH2 = 0, so H2 effectively contributes the same as a base-table histogram H(S.a), and in that case the error value is 1 (the maximum possible value). In contrast, for H1 = H(S.a | R⋈S), the more the distributions of S.a on S and on the result of executing R⋈S differ, the more likely it is that H1 encodes the dependencies between S.a and {R⋈S, S⋈T}, and therefore the lower the overall error value.


For H = HR(a|Q), with R′ denoting σQ(R) (the result of evaluating Q over R), diffH can be defined as:
diffH = 1/2 · Σx∈dom(a) ( f(R,x)/|R| − f(R′,x)/|R′| )²

where f(R,x) is the frequency of value x in R (diffH is thus the squared deviation of frequencies between the base-table distribution and the distribution over the result of executing H's query expression). It can be shown that 0 ≤ diffH ≤ 1, and that diffH verifies the properties stated above. Values of diff are calculated just once and stored with each histogram, so there is no overhead at runtime. diffHR(a|Q) can be calculated when HR(a|Q) is created, but that might impose a certain overhead on the query processor to obtain the frequencies f(R,x). Instead, diffH is approximated by carefully manipulating both H and the corresponding base-table histogram (which, if it does not exist, can be obtained efficiently using sampling). The procedure is similar to calculating the join of two histograms.
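
Given value frequencies for the base table and for the result of the generating query expression, diffH is a one-line computation. A Python sketch (the frequency dictionaries are illustrative):

def diff_h(base_freqs: dict, sit_freqs: dict) -> float:
    # diff_H = 1/2 * sum over the attribute's domain of the squared
    # difference between base-table and query-result relative
    # frequencies: 0 means identical distributions, values approach 1
    # as they diverge.
    n_base = sum(base_freqs.values())
    n_sit = sum(sit_freqs.values())
    domain = set(base_freqs) | set(sit_freqs)
    return 0.5 * sum((base_freqs.get(x, 0) / n_base -
                      sit_freqs.get(x, 0) / n_sit) ** 2
                     for x in domain)

# Identical distributions give 0; two disjoint single-value
# distributions give 0.5 * (1 + 1) = 1.
assert diff_h({1: 10, 2: 10}, {1: 5, 2: 5}) == 0.0
assert diff_h({1: 10}, {2: 10}) == 1.0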


In essence, Diff is a heuristic ranking function and has some natural limitations. For instance, it uses a single number (diffH) to summarize the amount of divergence between two distributions, and it does not take into account possible cancellation of errors among predicates. However, the additional information used by Diff makes it more robust and accurate than nInd with almost no overhead.


Referring again to the query of FIG. 3, the value of Sel{R,S,T}(σa, σb, ⋈RS, ⋈RT) is to be estimated, where σa and σb represent the filter predicates over S.a and T.b, respectively, and ⋈RS and ⋈RT represent the foreign-key join predicates. Using nInd for errors, getSelectivity returns the decomposition S1 = Sel{R,S,T}(σa | σb, ⋈RS, ⋈RT) · Sel{R,S,T}(σb | ⋈RS, ⋈RT) · Sel{R,S,T}(⋈RS | ⋈RT) · Sel{R,T}(⋈RT), using, respectively, statistics H{R,S}(S.a | ⋈RS), H{R,T}(T.b | ⋈RT), {HR(R.s), HS(S.s)}, and {HR(R.t), HT(T.t)}. Therefore, both available SITs are exploited simultaneously, producing a much more accurate cardinality estimate for the original query than any alternative produced by previous techniques.


Integration with an Optimizer


The algorithm getSelectivity can be integrated with rule-based optimizers. For q = σp1 ∧ ... ∧ pk(RX), getSelectivity(R, {p1,...,pk}) returns the most accurate selectivity estimation for both q and all its sub-queries, i.e., SelR(P) for all P ⊆ {p1,...,pk}. A simple approach to incorporating getSelectivity into an existing rule-based optimizer is to execute getSelectivity before optimization starts, and then use the resulting memoization table to answer selectivity requests over arbitrary sub-queries. This approach follows a pattern similar to those used by prior art frameworks to enumerate candidate sub-plans, in which a first step exhaustively generates all possible equivalent expressions and then, in a second phase, the actual search and pruning is performed. It was later established that this separation is not useful, since only a fraction of the candidate sub-plans generated during exploration is actually considered during optimization. Instead, newer frameworks interleave an exploration-on-demand strategy with the search and pruning phase.


Cascades is a state-of-the-art rule-based optimization framework. During the optimization of an input query, a Cascades based optimizer keeps track of many alternative sub-plans that could be used to evaluate the query. Sub-plans are grouped together into equivalence classes, and each equivalence class is stored as a separate node in a memoization table (also called a memo). Thus, each node in the memo contains a list of entries representing the logically equivalent alternatives explored so far. Each entry has the form [op, {input1,...,inputn}, {parameter1,...,parameterk}], where op is a logical operator, such as join, inputi is a pointer to some other node (another class of equivalent sub-queries), and parameterj is a parameter for operator op.



FIG. 5 illustrates a memo that corresponds to an intermediate state while optimizing the query SELECT * FROM R, S WHERE R.x = S.y AND R.a < 10 AND S.b > 5. The node at the top of the figure groups together all query plans equivalent to (σR.a<10(R)) ⋈R.x=S.y (σS.b>5(S)) that have already been explored. The first entry in that node is [SELECT, {R ⋈R.x=S.y (σS.b>5(S))}, {R.a<10}], that is, a filter operator, with parameter R.a<10, applied to the node that groups all equivalent expressions for the sub-query R ⋈R.x=S.y (σS.b>5(S)). Analogously, the second entry corresponds to a join operator applied to two other nodes.


During optimization, each node in the memo is populated by applying transformation rules to the set of explored alternatives. Rules consist of antecedent and consequent patterns, and optional applicability conditions. The application of a given transformation rule is a complex procedure that involves: (i) finding all bindings in the memo, (ii) evaluating rule preconditions, (iii) firing the rule, i.e., replacing the antecedent pattern with the consequent pattern, and (iv) integrating the resulting expression (if it is new) into the memo table. As a simple example, the first entry in the node at the top of FIG. 5 could have been obtained from the second entry by applying the following transformation rule: T1 ⋈ (σp(T2)) → σp(T1 ⋈ T2), which pulls selections up above join predicates (T1 and T2 function as placeholders for arbitrary sub-queries).


The algorithm getSelectivity can be integrated with a Cascades based optimizer. If the optimizer restricts the set of available statistics, e.g., handles only uni-dimensional SITs, then getSelectivity can be implemented more efficiently without missing the most accurate decomposition. For uni-dimensional SITs, it can be shown that no atomic decomposition SelR(P) = SelR(P′|Q) · SelR(Q) with |P′| > 1 will have a non-empty candidate set of statistics, and therefore none will be useful. In this case, line 10 in getSelectivity can be changed to:

10  for each P′ ⊆ P, Q = P − P′ such that |P′| ≤ 1 do

without missing any decomposition. Using this optimization, the complexity of getSelectivity is reduced from O(3^n) to O(Σi C(n,i)·i) = O(n·2^(n−1)), and the most accurate selectivity estimations will still be returned. As a side note, this is the same reduction in complexity as is obtained when considering only linear join trees during optimization as opposed to bushy join trees.


The search space of decompositions can be further pruned so that getSelectivity can be integrated with a Cascades based optimizer by coupling its execution with the optimizer's own search strategy. This pruning technique is then guided by the optimizer's own heuristics, and therefore might prevent getSelectivity from finding the most accurate estimation for some selectivity values. However, the advantage is that the overhead imposed on an existing optimizer is very small while the overall increase in quality can be substantial.


As explained, for an input SPJ query q = σp1 ∧ ... ∧ pk(RX), each node in the memoization table of a Cascades based optimizer groups all alternative representations of a sub-query of q. Therefore the estimated selectivity of the sub-query represented by n, i.e., SelR(P) for P ⊆ {p1,...,pk}, can be associated with each node n in the memo. Each entry in n can be associated with a particular decomposition of the sub-query represented by n.


The node at the top of FIG. 5 represents all equivalent representations of (σR.a<10(R)) ⋈R.x=S.y (σS.b>5(S)). The second entry in that node (the join operator) can be associated with the following decomposition: Sel{R,S}(R.x=S.y | R.a<10, S.b>5) · Sel{R,S}(R.a<10, S.b>5). The first factor of this decomposition is approximated using available statistics as already explained. In turn, the second factor is separable and can be simplified as Sel{R}(R.a<10) · Sel{S}(S.b>5). The estimated selectivity of each factor of the separable decomposition is obtained by looking in the corresponding memo nodes (the inputs of the join entry being processed). Finally, the estimations are multiplied together and then by the first factor of the atomic decomposition, Sel{R,S}(R.x=S.y | R.a<10, S.b>5), to obtain a new estimation for Sel{R,S}(R.x=S.y, R.a<10, S.b>5).


Each entry ε in a memo node n divides the set of predicates P represented by n into two groups: (i) the parameters of ε, denoted Pε, and (ii) the predicates in the set of inputs to ε, denoted Qε = P − Pε. The entry ε in n is then associated with the decomposition SelR(P) = SelR(Pε|Qε) · SelR(Qε), where SelR(Qε) is separable into SelR1(Q1) · ... · SelRk(Qk), and each SelRi(Qi) is associated with the i-th input of ε.


In summary, the set of decompositions in line 10 of getSelectivity is restricted to exactly those induced by the optimizer's search strategy. Each time a transformation rule is applied that results in a new entry ε in the node associated with SelR(P), the decomposition S = SelR(Pε|Qε) · SelR(Qε) is obtained. If S has the smallest error found so far for the current node, SelR(P) is updated using the new approximation. Therefore, the overhead imposed on a traditional Cascades based optimizer by incorporating getSelectivity results from obtaining, for each new entry ε, the most accurate statistic that approximates SelR(Pε|Qε).


So far in this description all input queries have been conjunctive SPJ queries. Disjunctive SPJ queries can also be handled by the discussed techniques. For that purpose, the identity σp1 ∨ p2(RX) = RX − σ¬p1 ∧ ¬p2(RX) is used, and the disjunctive query is translated using a de Morgan transformation to selectivity values as SelR(p1 ∨ p2) = 1 − SelR(¬p1, ¬p2). The algorithm then proceeds as before, with the equality above used whenever applicable. For example, Sel{R,S,T}(R.a<5 ∨ (S.b>10 ∧ T.c=5)) is rewritten as 1 − Sel{R,S,T}(R.a≥5, (S.b≤10 ∨ T.c≠5)). The second term is separable and simplifies to Sel{R}(R.a≥5) · Sel{S,T}(S.b≤10 ∨ T.c≠5). The second factor can be transformed again to 1 − Sel{S,T}(S.b>10, T.c=5), which is again separable, and so on.
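
The identity can be sanity-checked numerically. The Python sketch below, over illustrative toy rows and predicates, verifies that the selectivity of p1 ∨ p2 equals 1 minus the selectivity of the conjunction of the negated predicates:

rows = [{"a": i % 10, "b": i % 4, "c": i % 6} for i in range(60)]
p1 = lambda r: r["a"] < 5
p2 = lambda r: r["b"] > 1 and r["c"] == 5

def sel_conj(preds):
    # Selectivity of a conjunction of predicates over the toy rows.
    return sum(all(p(r) for p in preds) for r in rows) / len(rows)

# Sel(p1 OR p2) computed directly, and via 1 - Sel(NOT p1, NOT p2):
direct = sum(p1(r) or p2(r) for r in rows) / len(rows)
via_negation = 1 - sel_conj([lambda r: not p1(r), lambda r: not p2(r)])
assert abs(direct - via_negation) < 1e-12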


The techniques discussed can also be extended to handle SPJ queries with GROUP BY clauses, as in the following query:


SELECT b1, . . . ,bn


FROM R1, . . . ,Rn


WHERE p1 AND . . . AND pj


GROUP BY a1, . . . ,ak


each bi is either included in {a1,...,ak} or is an aggregate over columns of RX. The cardinality of q is equal to the number of groups in the output, i.e., the number of distinct values (a1,...,ak) in σp1 ∧ ... ∧ pj(RX), and is obtained by multiplying |RX| by the selectivity of the query below:


SELECT DISTINCT a1, . . . ,ak


FROM R1, . . . ,Rn


WHERE p1 AND . . . AND pj


Thus, to approximate selectivity values of SPJ queries with Group-By clauses, the selectivity values for SPJ queries must be estimated with set semantics, i.e., taking duplicate values into account. The definition of conditional selectivity can be extended to handle distinct values as described next. If P is a set of predicates and A is a set of attributes, tables(P/A) is defined as the set of tables referenced either in A or in P, and attr(P/A) is defined as the attributes either in A or referenced in P. Let ℛ = {R1, . . . ,Rn} be a set of tables, let P and Q be sets of predicates over ℛ× = R1 × . . . × Rn, and let A and B be sets of attributes over ℛ such that attr(P/A) ⊆ B. The definition of conditional selectivity is extended as:

Selℛ(P/A | Q/B) = |πA*(σP(πB*(σQ(ℛ×))))| / |πB*(σQ(ℛ×))|


where πA*(·) is a version of the projection operator that eliminates duplicate values.
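
To illustrate the extended definition, the following sketch evaluates Selℛ(P/A | Q/B) exactly over a toy in-memory relation, with Python dictionaries standing in for tuples of ℛ× and lambdas for predicates. It is a direct transcription of the formula above, not an estimation technique, and all names and data values are invented for illustration.

    from itertools import product

    def distinct_project(rows, attrs):
        # The duplicate-eliminating projection operator pi* above.
        return {tuple(r[a] for a in attrs) for r in rows}

    def cond_selectivity(rows, P, A, Q, B):
        # Sel(P/A | Q/B) = |pi*_A(sigma_P(pi*_B(sigma_Q(Rx))))| / |pi*_B(sigma_Q(Rx))|
        after_q = [r for r in rows if all(q(r) for q in Q)]               # sigma_Q
        b_rows = [dict(zip(B, t)) for t in distinct_project(after_q, B)]  # pi*_B
        after_p = [r for r in b_rows if all(p(r) for p in P)]             # sigma_P (needs attr(P/A) in B)
        return len(distinct_project(after_p, A)) / len(b_rows)            # pi*_A over pi*_B

    # Toy Cartesian product R x S and a worked call:
    R = [{'R.a': v} for v in (1, 2, 3)]
    S = [{'S.b': v} for v in (10, 20)]
    rows = [{**r, **s} for r, s in product(R, S)]
    sel = cond_selectivity(rows,
                           P=[lambda t: t['R.a'] < 3], A=['R.a'],
                           Q=[lambda t: t['S.b'] > 5], B=['R.a', 'S.b'])
    print(sel)  # 2 distinct R.a values over 6 distinct (R.a, S.b) pairs -> 1/3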


The notation 𝒮 = Selℛ(P/A | Q/B) is simplified, when possible, as follows. If B contains all attributes in ℛ, /B is omitted from 𝒮. Similarly, if A = B, then /A is omitted from 𝒮. Finally, if B contains all attributes in ℛ and Q = ∅, the selectivity is rewritten as 𝒮 = Selℛ(P/A). The value Selℛ(P/A) is then the number of distinct A-values for tuples in σP(ℛ×) divided by |ℛ×|. Therefore, for a generic SPJ query with a group-by clause, the quantity Selℛ(p1, . . . ,pj / a1, . . . ,ak) is to be estimated.


The atomic decomposition definition can be extended as follows. Let ℛ be a set of tables, P a set of predicates over ℛ, and A a set of attributes in ℛ. Then:

Selℛ(P/A) = Selℛ(P1/A | P2/B) · Selℛ(P2/B), where P1 and P2 partition P and attr(P1/A) ⊆ B.


This generalized atomic decomposition can be integrated with a rule-based optimizer that implements coalescing grouping transformations for queries with group-by clauses. Coalescing grouping is an example of a push-down transformation, which typically allows the optimizer to perform early aggregation. In general, such transformations increase the space of alternative execution plans that are considered during optimization. The coalescing grouping transformation shown in FIG. 6 is associated with the following instance of the atomic decomposition property: Selℛ(⋈/A) = Selℛ(∅/A | ⋈/B) · Selℛ(⋈/B). For the general case, the ⋈ in the equality is replaced with the corresponding set of predicates.


For SPJ queries, the atomic and separable decompositions alone can cover all transformations in a rule-based optimizer. In general, the situation is more complex for queries with group-by clauses. The separable decomposition property can be extended similarly to the atomic property. In some cases, rule-based transformations require the operators to satisfy semantic properties, such as the invariant grouping transformation shown in FIG. 7, which requires that the join predicate be defined over a foreign key of R1 and the primary key of R2. In such cases, specific decompositions must be introduced that mimic such transformations. Using atomic decomposition, it is obtained that Selℛ(⋈/A) = Selℛ(⋈ | ∅/A) · Selℛ(∅/A). However, if the invariant grouping transformation can be applied, the decomposition Selℛ(⋈/A) = Selℛ(⋈) · Selℛ(∅/A) holds as well. For that reason, the decomposition Selℛ(⋈/A) = Selℛ(⋈) · Selℛ(∅/A′) is used, which can be easily integrated with a rule-based optimizer. This transformation is not valid for arbitrary values of P and A, but holds whenever the invariant grouping transformation can be applied.


In the context of SITs as histograms, traditional histogram techniques can be exploited provided that they record not only the frequency but also the number of distinct values per bucket. Referring again to FIG. 6, Selℛ(⋈/A) = Selℛ(∅/A | ⋈/B) · Selℛ(⋈/B). A histogram H(A | ⋈/B) can be used to approximate Selℛ(∅/A | ⋈/B). In general, to approximate Selℛ(∅/A | Q/B), candidate SITs of the form H(A | Q′/B), where Q′ ⊆ Q, are used.
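
As an illustration of what such histograms must record, the sketch below defines a bucket carrying both a tuple frequency and a distinct-value count, and uses the latter to approximate Selℛ(∅/A | ⋈/B) as the ratio of distinct A-values to tuples of the underlying expression (taking B to cover all attributes). The bucket contents are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Bucket:
        lo: float
        hi: float
        freq: int       # tuples with A-value in [lo, hi)
        distinct: int   # distinct A-values in [lo, hi)

    def sel_distinct(hist):
        # Distinct A-values per tuple of the underlying (joined) expression,
        # i.e., an approximation of Sel(empty/A | join/B) when B covers all attributes.
        return sum(b.distinct for b in hist) / sum(b.freq for b in hist)

    H = [Bucket(0, 10, 500, 10), Bucket(10, 50, 1500, 35)]
    print(sel_distinct(H))   # 45 distinct values over 2000 tuples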


SITs can be further extended to handle queries with complex filter conditions as well as queries with HAVING clauses. The following query asks for orders that were shipped fewer than 5 days after they were placed: SELECT * FROM orders WHERE ship-date − place-date < 5. A good cardinality estimate for this query cannot be obtained by just using one-dimensional base-table histograms over columns ship-date and place-date, because single-column histograms fail to model the strong correlation that exists between the ship and place dates of any particular order. A multidimensional histogram over both ship-date and place-date might help in this case, but only marginally. In fact, most of the tuples in the two-dimensional space ship-date × place-date will be very close to the diagonal ship-date = place-date, because most orders are shipped a few days after they are placed. Therefore, most of the tuples in orders will be clustered in very small sub-regions of the rectangular histogram buckets. The uniformity assumption inside buckets would then be largely inaccurate and result in estimates that are much smaller than the actual cardinality values.


The scope of SIT(A|Q) can be extended to obtain better cardinality estimates for queries with complex filter expressions. Specifically, A is allowed to be a column expression over Q, i.e., a function that takes as inputs other columns accessible in the SELECT clause of Q and returns a scalar value. For instance, a SIT that can be used to accurately estimate the cardinality of the query above is H = SIT(diff-date | Q), where the generating query Q is defined as: SELECT ship-date − place-date AS diff-date FROM orders. Each bucket in H with range [xL . . . xR] and frequency f specifies that f orders were shipped between xL and xR days after they were placed. Thus, the cardinality of the query above can be estimated accurately with a range query (−∞ . . . 5) over H.
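
Probing H with that range query might look like the sketch below. The bucket boundaries and frequencies are invented, and uniformity inside buckets is assumed, so this is only a schematic illustration of the lookup.

    # Invented buckets for H = SIT(diff-date | Q): ((lo, hi), frequency)
    # means that `frequency` orders were shipped between lo and hi days
    # after being placed.
    H = [((0, 2), 4000), ((2, 5), 3500), ((5, 10), 2000), ((10, 30), 500)]

    def estimate_upto(hist, x):
        # Estimated tuples with diff-date below x (uniformity per bucket).
        total = 0.0
        for (lo, hi), freq in hist:
            if x >= hi:
                total += freq
            elif x > lo:
                total += freq * (x - lo) / (hi - lo)
        return total

    print(estimate_upto(H, 5))   # estimate for WHERE ship-date - place-date < 5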


This idea can also be used to specify SITs that help estimate the cardinality of queries with group-by and HAVING clauses. The following query:

    • SELECT A, sum (C)
    • FROM R
    • GROUP BY A
    • HAVING avg(B)<10


      conceptually groups all tuples in R by their A values, then computes the average value of B in each group, and finally reports only the groups with an average value smaller than 10. The cardinality of this query can be estimated using H2 = SIT(avgB | Q2), where the generating query Q2 is defined as:
    • SELECT avg(B) as avgB
    • FROM R
    • GROUP BY A


      In this case, H2 is a histogram in which each bucket with range [xL . . . xR] and frequency f specifies that f groups of tuples from R (grouped by A values) have an average value of B between xL and xR. Therefore, the cardinality of the original query above can be estimated with a range query, with range (−∞ . . . 10), over H2.
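
The following toy computation shows what H2 summarizes. The data values are invented, and the count is computed exactly rather than from histogram buckets, purely to make the reduction visible; a SIT(avgB | Q2) histogram would answer the same question with a range query over avgB.

    from collections import defaultdict
    from statistics import mean

    # Toy table R with columns (A, B); values invented for illustration.
    R = [('x', 4), ('x', 8), ('y', 20), ('y', 30), ('z', 6)]

    # Generating query Q2: SELECT avg(B) AS avgB FROM R GROUP BY A
    groups = defaultdict(list)
    for a, b in R:
        groups[a].append(b)
    avg_b = [mean(bs) for bs in groups.values()]   # one avgB value per group

    # Cardinality of the HAVING query = number of groups with avg(B) < 10.
    print(sum(1 for v in avg_b if v < 10))         # -> 2 (groups 'x' and 'z')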


It can be seen from the foregoing description that using conditional selectivity as a framework for manipulating query plans to leverage statistical information on intermediate query results can result in more efficient query plans. Although the present invention has been described with a degree of particularity, it is the intent that the invention include all modifications and alterations from the disclosed design falling within the spirit or scope of the appended claims.

Claims
  • 1. A method for approximating a number of tuples returned by a database query that comprises a set of predicates that each reference a set of database tables, the method comprising the steps of: a) expressing the query as a query selectivity; b) determining if the query is separable and if so separating the query selectivity to form a product of query selectivity factors; c) if the query is not separable, decomposing the query selectivity to form a product that comprises a conditional selectivity expression; d) recursively performing steps b)-f) to determine a selectivity value for each query selectivity factor; e) matching any conditional selectivity expression with stored statistics to obtain statistics that can estimate the selectivity of the conditional selectivity expressions and using the statistics to obtain an estimated selectivity value; and f) combining the selectivity values obtained in step d) and the estimated selectivity values obtained in step e) to estimate the selectivity of the query.
  • 2. The method of claim 1 comprising the step of multiplying the estimated selectivity by a Cartesian product of the tables referenced by the predicates to obtain a cardinality of the query.
  • 3. The method of claim 1 wherein the step of separating the query selectivity is performed by separating the predicates that reference different sets of database tables to form a product of query selectivity factors that reference different sets of database tables.
  • 4. The method of claim 1 wherein the product formed in step c) further comprises a query selectivity factor and wherein steps b)-f) are recursively performed to determine a selectivity value for the query selectivity factor in step c).
  • 5. The method of claim 1 wherein steps b)-f) are recursively performed until a non-separable query selectivity that can only be decomposed into a single conditional selectivity expression results.
  • 6. The method of claim 1 comprising the step of storing the estimated selectivity of the query obtained in step f) in memory.
  • 7. The method of claim 6 comprising the step of first determining whether an estimated selectivity is stored for a query and returning that value to approximate the number of tuples returned by the query.
  • 8. The method of claim 1 comprising the step of associating an error with the estimated selectivity value that is based on an accuracy with which the statistic matched with the conditional selectivity expression can estimate the selectivity of the conditional selectivity expression.
  • 9. The method of claim 8 comprising the step of combining the error associated with each conditional selectivity expression to obtain an estimated error for the selectivity estimation for the query.
  • 10. The method of claim 1 wherein the stored statistics comprise histograms on results of previously executed query expressions.
  • 11. The method of claim 1 wherein the step of matching the conditional selectivity expressions with stored statistics is performed by: compiling a set of candidate statistics that can be used to estimate the selectivity of the conditional selectivity expression; and selecting candidate statistics to estimate the selectivity of the conditional selectivity expression based on a selection criteria.
  • 12. The method of claim 11 wherein the selection criteria for a candidate statistic is determined by computing a number of independence assumptions that are made when the candidate is used to estimate the selectivity of the conditional selectivity expression and the selection criteria is to select the candidate that results in the least number of independence assumptions.
  • 13. The method of claim 11 wherein the selection criteria for a candidate statistic is determined by comparing the candidate statistic with a base statistic over the same column as the candidate statistic and assigning a difference value to the candidate statistic based on a level of difference between the candidate statistic and the base statistic.
  • 14. The method of claim 11 wherein the step of compiling a set of candidate statistics is performed by including statistics that are on results of queries having the same tables referenced by the conditional selectivity expression or a subset of the tables referenced by the conditional selectivity expression and the same predicates over the tables referenced in the conditional selectivity expression or a subset of the predicates over the tables referenced in the conditional selectivity expressions.
  • 15. The method of claim 1 wherein the steps of decomposing the query selectivity and matching the conditional selectivity expressions are repeated to generate alternative products and wherein one of those products is selected to estimate the selectivity of the query.
  • 16. The method of claim 15 wherein the step of decomposing the query is done by exhausting every alternative way of decomposing the query.
  • 17. The method of claim 15 wherein the step of decomposing the query selectivity to form products of conditional selectivity expressions is performed based on an optimizer search strategy.
  • 18. The method of claim 1 wherein the query is disjunctive and comprising the step of transforming the disjunctive predicates into conjunctive predicates by performing a De Morgan transformation on the disjunctive query.
  • 19. The method of claim 1 wherein the query comprises a GROUP BY predicate over a grouping column and wherein the query is transformed prior to performance of the method steps to return a number of distinct values in the grouping column.
  • 20. The method of claim 19 wherein the step of decomposing the query selectivity is performed by considering decompositions that are induced by coalescing grouping.
  • 21. The method of claim 19 wherein the step of decomposing the query selectivity is performed by considering decompositions that are induced by invariant grouping.
  • 22. The method of claim 1 wherein the stored statistics comprise histograms built over computed columns in a query result.
  • 23. For use with a database system, a computer readable medium having computer executable instructions stored thereon for performing method steps to approximate a number of tuples returned by a database query that comprises a set of predicates that each reference a set of database tables, the method comprising the steps of: a) expressing the query as a query selectivity; b) determining if the query is separable and if so separating the query selectivity to form a product of query selectivity factors; c) if the query is not separable, decomposing the query selectivity to form a product that comprises a conditional selectivity expression; d) recursively performing steps b)-f) to determine a selectivity value for each query selectivity factor; e) matching any conditional selectivity expression with stored statistics to obtain statistics that can estimate the selectivity of the conditional selectivity expressions and using the statistics to obtain an estimated selectivity value; and f) combining the selectivity values obtained in step d) and the estimated selectivity values obtained in step e) to estimate the selectivity of the query.
  • 24. The computer readable medium of claim 23 comprising the step of multiplying the estimated selectivity by a Cartesian product of the tables referenced by the predicates to obtain a cardinality of the query.
  • 25. The computer readable medium of claim 23 wherein the step of separating the query selectivity is performed by separating the predicates that reference different sets of database tables to form a product of query selectivity factors that reference different sets of database tables.
  • 26. The computer readable medium of claim 23 wherein the product formed in step c) further comprises a query selectivity factor and wherein steps b)-f) are recursively performed to determine a selectivity value for the query selectivity factor in step c).
  • 27. The computer readable medium of claim 23 wherein steps b)-f) are recursively performed until a non-separable query selectivity that can only be decomposed into a single conditional selectivity expression results.
  • 28. The computer readable medium of claim 23 comprising the step of storing the estimated selectivity of the query obtained in step f) in memory.
  • 29. The computer readable medium of claim 28 comprising the step of first determining whether an estimated selectivity is stored for a query and returning that value to approximate the number of tuples returned by the query.
  • 30. The computer readable medium of claim 23 comprising the step of associating an error with the estimated selectivity value that is based on an accuracy with which the statistic matched with the conditional selectivity expression can estimate the selectivity of the conditional selectivity expression.
  • 31. The computer readable medium of claim 30 comprising the step of combining the error associated with each conditional selectivity expression to obtain an estimated error for the selectivity estimation for the query.
  • 32. The computer readable medium of claim 23 wherein the stored statistics comprise histograms on results of previously executed query expressions.
  • 33. The computer readable medium of claim 23 wherein the step of matching the conditional selectivity expressions with stored statistics is performed by: compiling a set of candidate statistics that can be used to estimate the selectivity of the conditional selectivity expression; and selecting candidate statistics to estimate the selectivity of the conditional selectivity expression based on a selection criteria.
  • 34. The computer readable medium of claim 33 wherein the selection criteria for a candidate statistic is determined by computing a number of independence assumptions that are made when the candidate is used to estimate the selectivity of the conditional selectivity expression and the selection criteria is to select the candidate that results in the least number of independence assumptions.
  • 35. The computer readable medium of claim 33 wherein the selection criteria for a candidate statistic is determined by comparing the candidate statistic with a base statistic over the same column as the candidate statistic and assigning a difference value to the candidate statistic based on a level of difference between the candidate statistic and the base statistic.
  • 36. The computer readable medium of claim 33 wherein the step of compiling a set of candidate statistics is performed by including statistics that are on results of queries having the same tables referenced by the conditional selectivity expression or a subset of the tables referenced by the conditional selectivity expression and the same predicates over the tables referenced in the conditional selectivity expression or a subset of the predicates over the tables referenced in the conditional selectivity expressions.
  • 37. The computer readable medium of claim 23 wherein the steps of decomposing the query selectivity and matching the conditional selectivity expressions are repeated to generate alternative products and wherein one of those products is selected to estimate the selectivity of the query.
  • 38. The computer readable medium of claim 37 wherein the step of decomposing the query is done by exhausting every alternative way of decomposing the query.
  • 39. The computer readable medium of claim 37 wherein the step of decomposing the query selectivity to form products of conditional selectivity expressions is performed based on an optimizer search strategy.
  • 40. The computer readable medium of claim 23 wherein the query is disjunctive and comprising the step of transforming the disjunctive predicates into conjunctive predicates by performing a De Morgan transformation on the disjunctive query.
  • 41. The computer readable medium of claim 23 wherein the query comprises a GROUP BY predicate over a grouping column and wherein the query is transformed prior to performance of the method steps to return a number of distinct values in the grouping column.
  • 42. The computer readable medium of claim 41 wherein the step of decomposing the query selectivity is performed by considering decompositions that are induced by coalescing grouping.
  • 43. The computer readable medium of claim 41 wherein the step of decomposing the query selectivity is performed by considering decompositions that are induced by invariant grouping.
  • 44. The computer readable medium of claim 23 wherein the stored statistics comprise histograms built over computed columns in a query result.
  • 45. An apparatus for approximating a number of tuples returned by a database query that comprises a set of predicates that each reference a set of database tables comprising: a) means for expressing the query as a query selectivity; b) means for determining if the query is separable; c) means for separating the query selectivity to form a product of query selectivity factors if the query is separable; d) means for decomposing the query selectivity to form a product that comprises a conditional selectivity expression if the query is not separable; e) means for recursively invoking the means of b)-g) to determine a selectivity value for each query selectivity factor; f) means for matching any conditional selectivity expression with stored statistics to obtain statistics that can estimate the selectivity of the conditional selectivity expressions and means for using the statistics to obtain an estimated selectivity value; and g) means for combining the selectivity values obtained in e) and the estimated selectivity values obtained in f) to estimate the selectivity of the query.
  • 46. A method for approximating a number of tuples returned by a database query that comprises a set of predicates that each reference a set of database tables, the method comprising the steps of: a) expressing the query as a query selectivity; b) determining if the query is separable and if so separating the query selectivity by separating the predicates that reference different sets of database tables to form a product of query selectivity factors that reference different sets of database tables; c) if the query is not separable, repeatedly decomposing the query selectivity to form a product that comprises a conditional selectivity expression to generate alternative products and wherein one of those products is selected to estimate the selectivity of the query; d) recursively performing steps b)-f) to determine a selectivity value for each query selectivity factor; e) matching any conditional selectivity expression with stored statistics to obtain statistics that can estimate the selectivity of the conditional selectivity expressions by: i) compiling a set of candidate statistics that can be used to estimate the selectivity of the conditional selectivity expression; ii) selecting candidate statistics to estimate the selectivity of the conditional selectivity expression based on a selection criteria; and iii) using the statistics to obtain an estimated selectivity value; and f) combining the selectivity values obtained in step d) and the estimated selectivity values obtained in step e) to estimate the selectivity of the query.
  • 47. The method of claim 46 wherein the selection criteria for a candidate statistic is determined by computing a number of independence assumptions that are made when the candidate is used to estimate the selectivity of the conditional selectivity expression and the selection criteria is to select the candidate that results in the least number of independence assumptions.
  • 48. The method of claim 46 wherein the selection criteria for a candidate statistic is determined by comparing the candidate statistic with a base statistic over the same column as the candidate statistic and assigning a difference value to the candidate statistic based on a level of difference between the candidate statistic and the base statistic.
  • 49. The method of claim 46 wherein the step of compiling a set of candidate statistics is performed by including statistics that are on results of queries having the same tables referenced by the conditional selectivity expression or a subset of the tables referenced by the conditional selectivity expression and the same predicates over the tables referenced in the conditional selectivity expression or a subset of the predicates over the tables referenced in the conditional selectivity expressions.
  • 50. The method of claim 46 wherein the product formed in step c) further comprises a query selectivity factor and wherein steps b)-f) are recursively performed to determine a selectivity value for the query selectivity factor in step c).
  • 51. The method of claim 46 wherein steps b)-f) are recursively performed until a non-separable query selectivity that can only be decomposed into a single conditional selectivity expression results.
  • 52. The method of claim 46 comprising the step of associating an error with the estimated selectivity value that is based on an accuracy with which the statistic matched with the conditional selectivity expression can estimate its selectivity.
  • 53. The method of claim 52 comprising the step of combining the error associated with each conditional selectivity expression to obtain an estimated error for the selectivity estimation for the query.
  • 54. A computer readable medium having computer executable instructions stored thereon for approximating a number of tuples returned by a database query that comprises a set of predicates that each reference a set of database tables, the method comprising the steps of: a) expressing the query as a query selectivity; b) determining if the query is separable and if so separating the query selectivity by separating the predicates that reference different sets of database tables to form a product of query selectivity factors that reference different sets of database tables; c) if the query is not separable, repeatedly decomposing the query selectivity to form a product that comprises a conditional selectivity expression to generate alternative products and wherein one of those products is selected to estimate the selectivity of the query; d) recursively performing steps b)-f) to determine a selectivity value for each query selectivity factor; e) matching any conditional selectivity expression with stored statistics to obtain statistics that can estimate the selectivity of the conditional selectivity expressions by: i) compiling a set of candidate statistics that can be used to estimate the selectivity of the conditional selectivity expression; ii) selecting candidate statistics to estimate the selectivity of the conditional selectivity expression based on a selection criteria; and iii) using the statistics to obtain an estimated selectivity value; and f) combining the selectivity values obtained in step d) and the estimated selectivity values obtained in step e) to estimate the selectivity of the query.
  • 55. The computer readable medium of claim 54 wherein the selection criteria for a candidate statistic is determined by computing a number of independence assumptions that are made when the candidate is used to estimate the selectivity of the conditional selectivity expression and the selection criteria is to select the candidate that results in the least number of independence assumptions.
  • 56. The computer readable medium of claim 54 wherein the selection criteria for a candidate statistic is determined by comparing the candidate statistic with a base statistic over the same column as the candidate statistic and assigning a difference value to the candidate statistic based on a level of difference between the candidate statistic and the base statistic.
  • 57. The computer readable medium of claim 54 wherein the step of compiling a set of candidate statistics is performed by including statistics that are on results of queries having the same tables referenced by the conditional selectivity expression or a subset of the tables referenced by the conditional selectivity expression and the same predicates over the tables referenced in the conditional selectivity expression or a subset of the predicates over the tables referenced in the conditional selectivity expressions.
  • 58. The computer readable medium of claim 54 wherein the product formed in step c) further comprises a query selectivity factor and wherein steps b)-f) are recursively performed to determine a selectivity value for the query selectivity factor in step c).
  • 59. The computer readable medium of claim 54 wherein steps b)-f) are recursively performed until a non-separable query selectivity that can only be decomposed into a single conditional selectivity expression results.
  • 60. The computer readable medium of claim 54 comprising the step of associating an error with the estimated selectivity value that is based on an accuracy with which the statistic matched with the conditional selectivity expression can estimate its selectivity.
  • 61. The computer readable medium of claim 60 comprising the step of combining the error associated with each conditional selectivity expression to obtain an estimated error for the selectivity estimation for the query.