Join tuple assembly by partial specializations

Information

  • Patent Grant
  • 8296289
  • Patent Number
    8,296,289
  • Date Filed
    Tuesday, May 18, 2010
  • Date Issued
    Tuesday, October 23, 2012
Abstract
Various embodiments of systems and methods for join tuple assembly by partial specializations are described herein. The join tuple assembly by partial specializations is a phase of the method for join query evaluation by semi-join reduction. By using partial specializations of the non-join part of the WHERE clause of a join query and matching sets, the join tuple assembly is organized in a manner that all computations are necessary, none are repeated, and failure to complete a partial join tuple to a full tuple is detected as early as possible. The method can be applied to inner and outer joins, and to arbitrary join graphs and non-join conditions in the WHERE clause. It can also be used outside the context of semi-join reductions.
Description
FIELD

Embodiments of the invention generally relate to the software arts, and, more specifically, to methods and systems for join query evaluation by semi-join reduction.


BACKGROUND

In the world of commercial computation, a major part of all computation is devoted to join evaluation. The cost of evaluating joins is high with respect to both memory consumption and processing time. A common technique for reducing the amount of data is the use of semi-joins. A join (e.g., an SQL join) combines two or more tables in a database, producing a new one that can be saved as a table or used as an intermediate result of more complex computations. The join combines the fields from the two tables by using values that are common to each of them. A semi-join is a binary operator on two relations. If these relations are R and S, the result of the semi-join of R with S is the set of all rows in R for which there is a row in S with an equal value on their common attribute. A relation is a data structure that consists of a heading (an unordered set of attributes as columns in a table) and a body (an unordered set of rows that share the same type). In computer science, a row represents an ordered list of attribute values. An n-tuple is a sequence (or an ordered list) of "n" elements, where "n" is a positive integer.


A semi-join between two tables consists of rows from the first table where one or more matches are found in the second table. If there are two relations R and S, the difference between the semi-join of R with S and the join between R and S is: the semi-join is a subset of R alone, whereas the join is a subset of the product R×S. As a subset, the semi-join contains every row of R at most once. Even if S contains two matches for a row in R, only one copy of the row in R is retained. Conceptually, if J is the join between R and S, the semi-join is the projection of J to R.
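As a minimal illustration of the distinction above, the following sketch computes a semi-join for rows represented as Python dictionaries with a single common attribute; the helper name and data layout are assumptions made for this example and are not taken from the patent.

def semi_join(r_rows, s_rows, attr):
    """Rows of R that have at least one join partner in S on `attr`.
    Each row of R appears at most once, even if S contains several matches."""
    s_values = {s[attr] for s in s_rows}
    return [r for r in r_rows if r[attr] in s_values]

# Example: R is reduced to the rows whose "a" value also occurs in S.
R = [{"a": 1, "x": 10}, {"a": 2, "x": 20}, {"a": 3, "x": 30}]
S = [{"a": 2, "y": 5}, {"a": 2, "y": 6}]
print(semi_join(R, S, "a"))   # [{'a': 2, 'x': 20}] -- one copy only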


A join query is typically processed in the following way: first, semi-join reductions of the sizes of the joining relations are performed; then, the reduced relations are assembled to compute the join, and finally from every tuple in the join the attributes referenced in the expressions in the SELECT clause are projected, the expressions are evaluated and the results are returned to the user.


SUMMARY

Various embodiments of systems and methods for join tuple assembly by partial specializations are described herein. In an embodiment, the method includes receiving a join query, a materialization graph representing a join part of the join query, and a plurality of matching sets. The method further includes configuring a tuple construction counter, whose value indicates progress toward the overall length of the join tuple being constructed, and an iterator that traverses the elements of a matching set from the plurality of matching sets. While the tuple construction counter value is a positive integer, a partial specialization of an operator tree is computed, wherein the operator tree represents a non-join part of the join query. If the computed partial specialization satisfies the non-join part of the join query and the tuple construction counter value is less than the overall length of the join tuple, a subset of the plurality of matching sets is recomputed. Further, if no empty matching set is encountered during recomputation, the tuple construction counter value is increased.


In an embodiment, the system includes a database storage unit for storing one or more of a plurality of matching sets derived from semi-join reduction of a plurality of relations, a join query, and a materialization graph representing a join part of the join query. Further, the system includes a processor in communication with the database storage unit that executes instructions including: configuring a tuple construction counter, whose value indicates progress toward the overall length of the join tuple being constructed, and an iterator that traverses the elements of a matching set from the plurality of matching sets. While the tuple construction counter value is a positive integer, a partial specialization of an operator tree is computed, wherein the operator tree represents a non-join part of the join query. If the computed partial specialization satisfies the non-join part of the join query and the tuple construction counter value is less than the overall length of the join tuple, a subset of the plurality of matching sets is recomputed. Further, if no empty matching set is encountered during recomputation, the tuple construction counter value is increased.


These and other benefits and features of embodiments of the invention will be apparent upon consideration of the following detailed description of preferred embodiments thereof, presented in connection with the following drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The claims set forth the embodiments of the invention with particularity. The invention is illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. The embodiments of the invention, together with its advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings.



FIG. 1 is a block diagram illustrating a number of exemplary relations.



FIG. 2 is a block diagram illustrating a join graph representing a first part of join query 140.



FIG. 3 is a block diagram illustrating an operator tree representing a second part of join query 140.



FIG. 4 is a block diagram illustrating reduced relations R1, R2, R3, R4, R5, R6, and R7 as part of the query evaluation method.



FIG. 5 is a block diagram illustrating a materialization graph according to the join tuple assembly phase.



FIG. 6 is a block diagram illustrating a part of a materialization graph according to an embodiment.



FIG. 7 is a flow diagram of an embodiment of a method for join tuple assembly by partial specializations.



FIGS. 8A and 8B are flow diagrams illustrating an example of the method for query evaluation including the join tuple assembly algorithm by partial specializations.



FIG. 9 is a block diagram illustrating an exemplary computer system 900.





DETAILED DESCRIPTION

Embodiments of techniques for join tuple assembly by partial specializations are described herein. In the following description, numerous specific details are set forth to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.


Reference throughout this specification to “one embodiment”, “this embodiment” and similar phrases, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of these phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.



FIG. 1 is a block diagram illustrating a number of exemplary relations. FIGS. 1-5 present an example of a method for query evaluation with the purpose of illustrating the basic steps of this method and providing general knowledge in this area without showing specific details. FIG. 1 includes the following exemplary relations: R1 105, R2 110, R3 115, R4 120, R5 125, R6 130, and R7 135. All relations include a set of attributes heading their columns (e.g., a, b, c, etc.) and a set of row identifiers heading the rows of the tables. All attributes in the exemplary relations are of integer type. The column in a relation headed by the relation's name Rn (in the example, n = 1 . . . 7) always contains the row identifiers. The row identifiers are consecutive integers that are unique within their relation. The function of a row identifier is to identify a row within its table. The relations R1 105, R2 110, R3 115, R4 120, R5 125, R6 130, and R7 135 represent coded data in the form of integer values.


A method of query evaluation by semi-join reduction is applied to the following query using the relations R1 105, R2 110, R3 115, R4 120, R5 125, R6 130, and R7 135:

TABLE 1
Join query (140)

SELECT  R1.z + R2.z + R3.z + R4.z + R5.z + R6.z + R7.z
FROM    R1, R2, R3, R4, R5, R6, R7
WHERE   R1.a = R2.a AND
        R1.b = R3.b AND
        R1.c = R4.c AND
        R1.d = R5.d AND
        R1.e = R6.e AND
        R2.f = R4.f AND
        R2.g = R5.g AND
        R2.h = R7.h AND
        R3.i = R4.i AND
        R3.j = R6.j AND
        R3.k = R7.k AND
        R4.l = R7.l AND
        (R1.z)² + (R2.z)² + (R3.z)² + (R4.z)² +
        (R5.z)² + (R6.z)² + (R7.z)² <= 1000 AND
        ( (R1.z * R2.z * R3.z * R4.z >= 1000) OR
          (R1.z * R2.z * R3.z * R4.z = 0) )

Table 1 shows an example of a join query 140 based on the relations R1 105, R2 110, R3 115, R4 120, R5 125, R6 130, and R7 135. The SELECT clause defines what data is to be retrieved. In the example, there is only one expression, but this expression references attributes from all seven tables. For simplicity, all these attributes are named "z". The FROM clause defines which relations are necessary to evaluate the join query 140. The WHERE clause includes one or more conditions that have to be fulfilled. Some of these conditions (in the example, all conditions except the last one) are join conditions, making the query a join query. Join query 140 will join relations R1 105, R2 110, R3 115, R4 120, R5 125, R6 130, and R7 135 using the corresponding columns specified in the WHERE clause. For example, the first condition of the WHERE clause is R1.a=R2.a, which means that columns with attribute "a" from relations R1 105 and R2 110 will be joined. This means that in every 7-tuple (r1, r2, r3, r4, r5, r6, r7) ∈ R1×R2×R3×R4×R5×R6×R7 satisfying the complete WHERE clause, the values in the "a" columns of R1 and R2 must be equal. The same holds, analogously, for the remaining join conditions of the WHERE clause.


All joins in join query 140 are inner joins. An inner join is a common join operation used in applications and represents the default join type. The inner join creates a new result table by combining column values of two tables (A and B) based upon a join predicate. However, the method can also be applied to left outer joins, right outer joins, and full outer joins. An outer join does not require each record in the two joined tables to have a matching record. The joined table retains each record, even if no other matching record exists. The result of a left outer join (or simply “left join”) of table A with table B always contains all records of the “left” table (A), even if the join condition does not find any matching record in the “right” table (B). A right outer join (or “right join”) closely resembles a left outer join, except with the treatment of the tables reversed. A full outer join combines the results of both left and right outer joins.



FIG. 2 is a block diagram illustrating a join graph representing a first part of join query 140. Join graph (JG) 210 represents the join part of the WHERE clause of join query 140. Join graph 210 includes all relations R1 105, R2 110, R3 115, R4 120, R5 125, R6 130, and R7 135 as vertices of the graph. Join graph 210 also includes an edge between two of the relations exactly when there is a join condition joining them (in the join part of the WHERE clause of join query 140). Since all join conditions are simple equality conditions on equally named attributes, the respective join condition can be identified by labeling the respective edge with the common attribute name. For example, since the join condition between relations R1 105 and R2 110 is based on the attribute "a" (R1.a=R2.a), the edge between R1 and R2 is labeled "a".
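A sketch of one possible in-memory representation of this join graph follows; the list of join conditions is read directly off the WHERE clause of Table 1, while the data structures and names (join_conditions, join_graph) are assumptions made only for illustration.

# Each join condition R_m.attr = R_n.attr becomes an edge labeled with attr.
join_conditions = [                     # (left relation, right relation, attribute)
    ("R1", "R2", "a"), ("R1", "R3", "b"), ("R1", "R4", "c"), ("R1", "R5", "d"),
    ("R1", "R6", "e"), ("R2", "R4", "f"), ("R2", "R5", "g"), ("R2", "R7", "h"),
    ("R3", "R4", "i"), ("R3", "R6", "j"), ("R3", "R7", "k"), ("R4", "R7", "l"),
]

join_graph = {}                         # vertex -> list of (neighbour, attribute)
for left, right, attr in join_conditions:
    join_graph.setdefault(left, []).append((right, attr))
    join_graph.setdefault(right, []).append((left, attr))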



FIG. 3 is a block diagram illustrating an operator tree representing a second part of join query 140. The WHERE clause of the join query 140 may also contain additional conditions that are not join conditions. In the example, there is only one such condition:

Γ = (R1.z)² + (R2.z)² + (R3.z)² + (R4.z)² + (R5.z)² + (R6.z)² + (R7.z)² <= 1000
AND
((R1.z * R2.z * R3.z * R4.z >= 1000) OR (R1.z * R2.z * R3.z * R4.z = 0))

Operator tree 310 represents the non-join conditions (Γ) graphically.


The method of query evaluation by semi-join reduction applied to the join query 140 should result in a set of join tuples. Every join tuple is a 7-tuple (r1, r2, r3, r4, r5, r6, r7) where r1 is a row in R1 105, r2 in R2 110, r3 in R3 115, r4 in R4 120, r5 in R5 125, r6 in R6 130, and r7 in R7 135. Every such 7-tuple has to satisfy both the join conditions, assembled in the join graph 210, and the non-join conditions, assembled in the operator tree 310.



FIG. 4 is a block diagram illustrating the reduced relations R1, R2, R3, R4, R5, R6, and R7 as part of the query evaluation method. The first step of the method of query evaluation by semi-join reduction consists of recursively eliminating, from all relations of the join, all rows for which there is a join condition under which they have no join partner. Eliminating one row can result in other rows losing some or all of their join partners. Following the join conditions of the join part of the WHERE clause of the join query 140 and thus recursively eliminating the rows with no join partner, the relations R1 105, R2 110, R3 115, R4 120, R5 125, R6 130, and R7 135 are reduced to relations R1 405, R2 410, R3 415, R4 420, R5 425, R6 430, and R7 435.
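The following sketch shows one straightforward way to carry out this recursive elimination as a fixpoint of semi-joins, assuming relations are given as a dict mapping relation names to lists of row dictionaries and reusing the join_conditions list from the earlier sketch; it is an illustrative reading of the step, not the patent's implementation.

def reduce_relations(relations, join_conditions):
    """Repeatedly drop rows that lack a join partner under some join condition,
    until no relation shrinks any further (semi-join reduction)."""
    changed = True
    while changed:
        changed = False
        for left, right, attr in join_conditions:
            for a, b in ((left, right), (right, left)):
                partner_values = {row[attr] for row in relations[b]}
                kept = [row for row in relations[a] if row[attr] in partner_values]
                if len(kept) != len(relations[a]):
                    relations[a] = kept
                    changed = True   # a shrunken relation may trigger further eliminations
    return relations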


The second step of the method of query evaluation by semi-join reduction is join tuple assembly. Its purpose is to construct all 7-tuples (r1, r2, r3, r4, r5, r6, r7) ∈ R1×R2×R3×R4×R5×R6×R7 satisfying the complete WHERE clause of the SELECT statement, or at least the needed number of 7-tuples, if the query was a first-k query. A first-k query is a query where the size of the result is limited by the user; for example, the user may specify: "return only the first 20 rows of the result". For the join tuple assembly phase, a materialization graph (MG) is constructed. The MG will be used to construct the join tuples satisfying the complete WHERE clause.



FIG. 5 is a block diagram illustrating a materialization graph according to the join tuple assembly phase. For constructing the MG 510, first the set of vertices of the MG 510 is determined. This set is always a subset of the vertices of the JG 210; it contains all relations referenced in the SELECT clause and the operator tree Γ. According to the example, all nodes of the join graph 210 are also nodes of the MG 510. The next step of the join tuple assembly is to turn this set into a sequence, representing the order of materialization. One arbitrary possibility for join tuple construction is to start with a row r3 in the reduced relation R3, giving a 1-tuple (r3); to prolong it by a row r6 in the reduced relation R6 satisfying the join between R3 and R6 (r3.j=r6.j), giving a 2-tuple (r3, r6); to prolong it by a row r7 in the reduced relation R7 satisfying the join between R3 and R7 (r3.k=r7.k), giving a 3-tuple (r3, r6, r7); and so on. Another arbitrary possibility would be to start with an r4 ∈ R4, prolong by a suitable r2 ∈ R2, prolong by a suitable r7 ∈ R7, where suitable means satisfying the two simultaneous conditions r4.l=r7.l and r2.h=r7.h, and so on.


In general, the choice of a good materialization sequence is an optimization issue. For simplicity, the natural order R1, R2, R3, R4, R5, R6, R7 is used in the example. The sequence chosen is represented in the materialization graph 510 by relabeling its vertices by their position in this sequence. For example, R1 is relabeled as 1, R2 is relabeled as 2, and so on. If the materialization sequence were chosen to be R5, R2, R4, R7, R1, R6, R3, then the vertices would be relabeled in the following way: R1 as 5, R2 as 2, R3 as 7, and so on. Additionally, the edges in MG 510 are given a direction. The edges become arrows, pointing from the lower-numbered tail to the higher-numbered head. For example, arrow "a" points from 1 to 2, where 1 is the tail of the arrow and 2 is the head of the arrow.


The query evaluation by semi-join reduction is complete when all join tuples are assembled, meaning that all 7-tuples (r1, r2, r3, r4, r5, r6, r7) ∈ R1×R2×R3×R4×R5×R6×R7 satisfying the complete WHERE clause of the SELECT statement are known. For example, the 7-tuple (r1=3, r2=5, r3=2, r4=4, r5=3, r6=1, r7=1), or just (3, 5, 2, 4, 3, 1, 1), satisfies the WHERE clause of join query 140. This can be verified in the following way: first, the join condition r1.a=r2.a is checked for the rows r1=3 and r2=5. Looking into the reduced relations R1 405 and R2 410: since r1=3, the row identifier for R1 405 is "3", which corresponds to the second row in table 405. Looking at the second row of table 405 and at the "a" attribute, it shows that r1.a=3. Again, looking at the reduced relations R1 405 and R2 410: since r2=5, the row identifier for R2 410 is "5", which corresponds to the second row in table 410. Looking at the second row of table 410 and at the "a" attribute, it shows that r2.a=3. Therefore, r1.a=3=r2.a. The following list shows the verification of all join conditions of the WHERE clause on the 7-tuple given above (they are also re-checked programmatically in the sketch after the list).

    • r1.a = 3 = r2.a,
    • r1.b = 6 = r3.b,
    • r1.c = 9 = r4.c,
    • r1.d = 12 = r5.d,
    • r1.e = 15 = r6.e,
    • r2.f = 14 = r4.f,
    • r2.g = 9 = r5.g,
    • r2.h = 10 = r7.h,
    • r3.i = 16 = r4.i,
    • r3.j = 15 = r6.j,
    • r3.k = 1 = r7.k,
    • r4.l = 20 = r7.l
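As referenced above, a short self-contained check of these twelve equalities follows; the attribute values are exactly those quoted in the list, and the dictionary layout is an assumption made for illustration.

# Attribute values of the rows (3, 5, 2, 4, 3, 1, 1) in the reduced relations of FIG. 4,
# restricted to the attributes that occur in join conditions.
row_values = {
    "R1": {"a": 3, "b": 6, "c": 9, "d": 12, "e": 15},
    "R2": {"a": 3, "f": 14, "g": 9, "h": 10},
    "R3": {"b": 6, "i": 16, "j": 15, "k": 1},
    "R4": {"c": 9, "f": 14, "i": 16, "l": 20},
    "R5": {"d": 12, "g": 9},
    "R6": {"e": 15, "j": 15},
    "R7": {"h": 10, "k": 1, "l": 20},
}

join_conditions = [("R1", "R2", "a"), ("R1", "R3", "b"), ("R1", "R4", "c"),
                   ("R1", "R5", "d"), ("R1", "R6", "e"), ("R2", "R4", "f"),
                   ("R2", "R5", "g"), ("R2", "R7", "h"), ("R3", "R4", "i"),
                   ("R3", "R6", "j"), ("R3", "R7", "k"), ("R4", "R7", "l")]

assert all(row_values[m][attr] == row_values[n][attr] for m, n, attr in join_conditions)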


In addition to satisfying the join part of the WHERE clause, the 7-tuple should also satisfy the remaining non-join condition:

Γ = (R1.z)² + (R2.z)² + (R3.z)² + (R4.z)² + (R5.z)² + (R6.z)² + (R7.z)² <= 1000
AND
((R1.z * R2.z * R3.z * R4.z >= 1000) OR (R1.z * R2.z * R3.z * R4.z = 0))

For the 7-tuple (3, 5, 2, 4, 3, 1, 1), it can be checked that: r1.z=10, r2.z=0, r3.z=12, r4.z=12, r5.z=4, r6.z=20, r7.z=14. Then:

10² + 0² + 12² + 12² + 4² + 20² + 14² = 1000
AND
10 * 0 * 12 * 12 = 0

The first condition in Γ is satisfied (since 1000 is <= 1000) and the second condition is also satisfied (by the OR part, since 0 is equal to 0). Thus, the 7-tuple satisfies the complete WHERE clause. After this verification, it is still not evident how many tuples there are in total satisfying the WHERE clause, but for the current example query there is at least one. In general, there could be no such tuple at all, or any number of them up to the product of the cardinalities of the base relations.


Let J be the number of join tuples of the current example. If only the base relations in FIG. 1 are known, only the basic estimation 0 <= J <= 5^7 = 78,125 is possible, since there are 7 relations, each having 5 rows. When the reduced relations of FIG. 4 are known, this estimation can be improved to 0 <= J <= 2^6 * 4 = 256, since now 6 relations have cardinality 2 and 1 relation has cardinality 4. Taking into account that with (3, 5, 2, 4, 3, 1, 1) at least one join tuple was just presented, the latter estimation can still be improved to 1 <= J <= 256.


When all join tuples are constructed, their attributes referenced in the SELECT clause are fetched and the final answer is computed. For example, corresponding to the 7-tuple (3, 5, 2, 4, 3, 1, 1), these are just the z-values from which their sum is computed: 10+0+12+12+4+20+14=72. “72” is therefore one element of the result returned by the query.
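The arithmetic above can be replayed directly from the z-values; the following snippet is only a restatement of the example, with variable names chosen for illustration.

# z-values of the 7-tuple (3, 5, 2, 4, 3, 1, 1) as read from the reduced relations.
z = {"R1": 10, "R2": 0, "R3": 12, "R4": 12, "R5": 4, "R6": 20, "R7": 14}

sum_of_squares = sum(v * v for v in z.values())        # 1000, so <= 1000 holds
product = z["R1"] * z["R2"] * z["R3"] * z["R4"]        # 0, so the OR branch holds

gamma = sum_of_squares <= 1000 and (product >= 1000 or product == 0)   # True
select_value = sum(z.values())                                         # 72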


As mentioned above, FIGS. 1-5 present an example of the method for query evaluation by semi-join reduction. The query evaluation method includes several steps, among which is the join tuple assembly. The standard techniques first construct all tuples satisfying the join part of the WHERE clause, and then select from these tuples those that additionally satisfy the non-join conditions. This construction is expensive with respect to time and memory consumption. Depending on the data and the topology of the join graph (e.g., cyclic joins), certain obstacles can occur: 1) a perfect reduction of the tables to the projections of the join, by semi-joins alone, may not be achievable; and 2) flattening the join graph in the reduction phase into a tree by duplicating vertices leads to practical inefficiencies when afterwards, in the assembly phase, one has to secure the identity of the join tuple entries at the positions of the duplicated nodes. In these cases, and also in the presence of non-join conditions that can only be evaluated on the already evaluated join, it is common in practice that the phase of join tuple assembly is the one consuming most of the computation time and/or the phase with the highest memory consumption.


Embodiments of techniques for a join tuple assembly algorithm by partial specializations are described herein. For simplicity, the algorithm for join tuple assembly is described using parts of the example introduced with FIGS. 1-5. In an embodiment, the algorithm is applied to a join query (e.g., join query 140). The WHERE clause is divided into a join part and a non-join part. The join part can be visualized as a join graph (e.g., JG 210) and the non-join part can be visualized as an operator tree (e.g., operator tree 310). The join tuples to be constructed should also satisfy the condition represented by the operator tree (i.e., Γ). In the operator tree, the inner nodes are Boolean operators and the leaves are certain terms. Every term references some subset of the base relations R1, R2, . . . , R7 of the join.


A materialization graph G=(V, A), where "V" is a set of vertices and "A" is a set of arrows, has been built that in general corresponds to a subgraph of the join graph. In the example, MG 510 corresponds to the full join graph. In general, the materialization graph G is a directed graph on the vertex set V={1, 2, 3, . . . , N}, where N ∈ ℕ. In general, every vertex n represents one base relation Rm of the join graph (e.g., JG 210). In the materialization graph G, the relations are renamed as in MG 510. In the current example, R1 is 1, R2 is 2, and so on. Here, every arrow a ∈ A, written a: m → n, represents a join condition between the relations Rm and Rn, inherited from the join graph. The notation a: m → n specifies that there is an arrow a pointing from Rm to Rn. In the materialization graph G, there are no parallel arrows. This means that there is at most one arrow between any two relations. In addition, the arrows are ascending. This means that for every arrow a: m → n in A, with m, n ∈ V, m is less than n (m<n). As mentioned before, m specifies the tail of the arrow and n specifies the head of the arrow. Further, for every vertex n ∈ V, there is at least one directed path in G from 1 to n.


In an embodiment, the underlying join graph part (including a set of vertices V and a set of edges E, where the set of edges E is obtained from the set of arrows A by dropping the directions) may contain undirected cycles, but not loops (cycles of length 1). Also, every full subgraph G′=G ∩ {1, 2, . . . , n}, 1<=n<=N, of G has the same properties as G. A subgraph G′ of a graph G is called full if all edges in G connecting vertices in G′ are also edges in G′. Similarly to G, G′ is a directed acyclic graph with ascending arrows, it has exactly one source, and for all vertices v ∈ G′, that is, for 1<=v<=n, there is at least one directed path in G′ from 1 to v.


In an embodiment, for every arrow a ∈ A, a head h and a tail t are defined. If an arrow a is a: m → n, then h(a)=n and t(a)=m, which means that the head of the arrow a is n and the tail of the arrow a is m. This definition can be extended to arbitrary non-empty paths in G. A path of length 1 in G is either an arrow a ∈ A or an inverse arrow a−1. For example, if a: m → n, then a−1: n → m, so h(a−1)=m and t(a−1)=n. This means that a−1 is the arrow obtained from a by reversing the orientation. A path p of length r ∈ ℕ in G is a sequence of the type p=a1 a2 . . . ar such that every ai, where 1<=i<=r, is a path of length 1, and such that these paths are composable, which means: for 1<=i<r: h(ai)=t(ai+1). Head and tail of the path p are: h(p)=h(ar) and t(p)=t(a1).



FIG. 6 is a block diagram illustrating a part of a materialization graph according to an embodiment. Graph part 610 includes a set of arrows: a 620, b 630, and c 640. Arrow a 620 has head n and tail m; arrow b 630 has head n and tail k; and arrow c 640 also has head n and tail l. Let it also be assumed that m<k<l<n. In an embodiment, every arrow has at most one successor. A successor of an arrow is another arrow that has the same head as the first arrow, but the next higher tail. For example, in FIG. 6, a 620 and b 630 have a successor, while c 640 has no successor. The successor of a 620 is b 630, and the successor of b 630 is c 640. c 640 is not the successor of a 620, since b 630 is between them. This can be written as S(a)={b}, S(b)={c}, and S(c)=Ø.


In an embodiment, an arrow has at most one predecessor. A predecessor of an arrow is another arrow that has the same head as the first arrow, but the next lower tail. In the same example, c 640 has a predecessor, namely b 630, and b 630 also has a predecessor, namely a 620, whereas a 620 has no predecessor. a 620 is not the predecessor of c 640 since b 630 is between them. This can be written as P(c)={b}, P(b)={a} and P(a)=Ø.
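The successor and predecessor relations can be computed mechanically from the arrow list of a materialization graph. The sketch below does this for the arrows of MG 510 (natural order R1 . . . R7, so arrow "a" runs from 1 to 2, "f" from 2 to 4, and so on, as read off the join conditions); the data structure and function name are illustrative assumptions.

arrows = {                      # arrow name -> (tail, head), as in MG 510
    "a": (1, 2), "b": (1, 3), "c": (1, 4), "d": (1, 5), "e": (1, 6), "f": (2, 4),
    "g": (2, 5), "h": (2, 7), "i": (3, 4), "j": (3, 6), "k": (3, 7), "l": (4, 7),
}

def successors_and_predecessors(arrows):
    succ = {name: set() for name in arrows}
    pred = {name: set() for name in arrows}
    by_head = {}
    for name, (tail, head) in arrows.items():
        by_head.setdefault(head, []).append((tail, name))
    for group in by_head.values():
        group.sort()                                   # ascending tail
        for (_, first), (_, second) in zip(group, group[1:]):
            succ[first].add(second)                    # same head, next higher tail
            pred[second].add(first)
    return succ, pred

succ, pred = successors_and_predecessors(arrows)
# e.g. succ["c"] == {"f"}, succ["f"] == {"i"}, succ["i"] == set(), pred["l"] == {"k"}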


In an embodiment, matching sets derived from the reduction of the full relations R1 105, R2 110, R3 115, R4 120, R5 125, R6 130, and R7 135 are defined as part of the algorithm for join tuple assembly by partial specializations. For every vertex n ∈ V, the matching set Mn,0 ⊆ Rn is the reduction result of the full table Rn. Additionally, for every arrow a ∈ A, a: m → n, there will be a matching set Ma ⊆ Rn. Further, for every arrow a ∈ A, a: m → n, that has a successor S(a)={b}, b: k → n, there will be a matching set Mab−1 ⊆ Rk. The matching sets (i.e., Mn,0, Ma, Mab−1) are different as objects, meaning that their definitions will differ, but some of them may contain identical elements.


In an embodiment, the collection of all matching sets is defined as M. M contains exactly |V| sets of the form Mn,0, exactly |A| sets of the form Ma, and at most |A| sets of the form Mab−1. Further, M contains at least one set of the form Mab−1 exactly when the materialization graph G contains unoriented cycles.


The definition of the matching sets also includes defining the head and tail functions on M (so far, these are defined only for paths in G). In an embodiment, M ∈ M is a matching set of the form Mn,0, Ma, or Mab−1. If M=Mn,0 for some vertex n ∈ V, then h(Mn,0)=n and t(Mn,0)=0. If M=Mc for some path c=a (or c=ab−1) in G, then h(Mc)=h(c) and t(Mc)=t(c). These definitions lead to the following two consequences: 1) for every matching set M there is M ⊆ Rn, where n=h(M); and 2) for any n ∈ V there are matching sets M ∈ M with t(M)=n exactly when n is not a sink (a vertex of a directed graph with no outgoing arrows) in G.


In an embodiment, the above described head and tail functions on M are used to define an ordering on M. First, the tail function is applied: for all M, M′ ∈ M: t(M) < t(M′) ⇒ M before M′. This means that if the tail of matching set M is lower than the tail of matching set M′, then M is ordered before M′. Having ordered the matching sets with unequal tails, the matching sets of equal tail remain to be ordered. This is first done for the case of two matching sets that both have tail "0": for all 1<=m<n<=N, Mm,0 is ordered before Mn,0.


Then, two matching sets having a common tail t are ordered, where 1<=t<=N. For this purpose, let a1: t → n1, a2: t → n2, . . . , as: t → ns be all arrows starting at tail t and ending at heads n1, n2, . . . , ns, where n1<n2< . . . <ns. The corresponding matching sets Ma1, . . . , Mas are ordered in the following way: Ma1 before Ma2 before Ma3 . . . before Mas. Additionally, if there are matching sets of the type Mab−1 (that is, if arrow a has a successor arrow b), their position is set in the following way: Mab−1 immediately after Ma.
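Put together, these rules can be turned into a small enumeration routine. The sketch below reproduces the resulting order for MG 510, given the arrows dictionary from the earlier sketch; the names and output format are assumptions, and the result can be compared against Table 4 below.

def ordered_matching_sets(arrows, n_vertices):
    """Enumerate matching-set names in the order defined above."""
    # successor = arrow with the same head and the next higher tail
    by_head = {}
    for name, (tail, head) in arrows.items():
        by_head.setdefault(head, []).append((tail, name))
    succ = {}
    for group in by_head.values():
        group.sort()
        for (_, first), (_, second) in zip(group, group[1:]):
            succ[first] = second

    order = ["M%d,0" % n for n in range(1, n_vertices + 1)]     # reduction results, by head
    for t in range(1, n_vertices):                              # then by ascending tail
        heads = sorted((head, name) for name, (tail, head) in arrows.items() if tail == t)
        for _, a in heads:                                      # equal tail: ascending head
            order.append("M" + a)
            if a in succ:                                       # M_{ab^-1} right after M_a
                order.append("M" + a + succ[a] + "^-1")
    return order

# ordered_matching_sets(arrows, 7) yields M1,0 ... M7,0, Ma, Mb, Mc, Mcf^-1, Md,
# Mdg^-1, Me, Mej^-1, Mf, Mfi^-1, Mg, Mh, Mhk^-1, Mi, Mj, Mk, Mkl^-1, Ml.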


In another embodiment, the set M of all matching sets is subdivided using the head function: for every vertex n ∈ V, let Mn={M ∈ M: h(M)=n} ⊆ M. Then, M=M1 ∪ M2 ∪ . . . ∪ MN. This means that Mn is the subset of matching sets from the collection M that have head n, and M is their disjoint union. Since M is already ordered, every subset Mn of M is also ordered. Thus, it remains to make this ordering of Mn explicit: for all M, M′ ∈ Mn: t(M) < t(M′) ⇒ M is placed before M′. Since the head of the matching sets M and M′ is the same (i.e., n), the tails of the matching sets are compared. If they are different, the matching set with the lower tail (in this case, matching set M) is placed first in the ordering. Mn always contains the matching set Mn,0 (where h(Mn,0)=n and t(Mn,0)=0), which is placed first in Mn.


The ordering of Mn is to be made explicit for matching sets of equal tails. This is done for all matching sets M ∈ Mn having t(M)=t, where t is a fixed value with 1<=t<N. These matching sets can be obtained as follows: there is an arrow a ∈ A, a: t → k, for some vertex k ∈ V, where k>=n. If k=n, then M=Ma. If k>n, then the successor of arrow a is some arrow b ∈ A, b: n → k, and M=Mab−1. These sets M are ordered by increasing k. For example, if a: t → k leads to a matching set M (i.e., M=Ma or M=Mab−1) and a′: t → k′ leads to a different matching set M′ (M≠M′), then k≠k′. Further, if k<k′, then matching set M is placed before matching set M′. If, in some embodiments, the materialization graph G has a direct arrow a: t → n, then the matching set Ma ∈ Mn is placed first among all matching sets M ∈ Mn with tail t(M)=t.


The ordering in Mn can be written as: Mn={Mn,0, Mn,1, Mn,2, . . . , Mn,r}, r ∈ ℕ0, where Mn,0 is placed before Mn,1, which is placed before Mn,2, and so on up to Mn,r. This expression provides the matching sets M, which have the form M=Mc for some path c=a (or c=ab−1), with an additional name M=Mn,i, where i>0 represents their position in the ordered collection Mn. These matching sets can be referred to as "higher matching sets". The first matching set, Mn,0, is the matching set already defined above as the reduction result of relation Rn. Mn,0 is always placed first in Mn. The ordered collection Mn has some properties including: 1) Mn,0 ∈ Mn for all n ∈ V; and 2) |Mn|>1 for all n>1.


In an embodiment, the matching sets Mn,i form a descending chain (for any fixed n ∈ V) of the type: Rn ⊇ Mn,0 ⊇ Mn,1 ⊇ Mn,2 ⊇ . . . ⊇ Mn,r, where r ∈ ℕ0. The descending order of this chain comes from the fact that their members satisfy successively more conditions that are necessary to qualify as the next element in a join tuple construction process. Elements for the construction process will only be taken from the last, smallest set Mn,r, but the intermediate chain members Mn,i also represent valuable information. This information can be reused without being recomputed and accelerates the entire tuple construction process.


In an embodiment, every matching set defined as part of the algorithm for join tuple assembly by partial specializations depends on some t-tuple (r1, r2, . . . , rt) ∈ R1×R2× . . . ×Rt, for 0<=t<N. This means that the tuple (r1, r2, . . . , rt) is needed for M to be defined. The dependency of the matching sets on tuples can be described abstractly in the following way: when M depends on (r1, r2, . . . , rt), this could be any tuple in R1×R2× . . . ×Rt. When executing the join tuple algorithm, the tuples (r1, r2, . . . , rt) that occur in this context will no longer be arbitrary; they occur during the algorithm as valid join tuples for some subgraph of G that are in the process of being completed to a full N-tuple (r1, r2, . . . , rN) ∈ R1×R2× . . . ×RN satisfying the complete WHERE clause.


The matching sets are defined recursively in the order of M. The induction basis is formed by the reduction results M1,0, . . . , MN,0, which are already defined and ordered first in M. In the ordering of M, let M be the first matching set that is yet undefined. In an embodiment, the first case considered is that M=Ma for some arrow a ∈ A, a: t → n. At the same time, there is the alternative naming of the same set as M=Mn,i for some i>0, hence M=Mn,i ⊆ Rn. The matching set Mn,i−1 (Mn,i−1 ⊆ Rn) is ordered before Mn,i, so it has already been defined. This can be used to define M: M=Ma=Mn,i=Mn,i−1 ∩ a(rt). The set a(rt) is the subset of Rn that matches the given element rt of the (N−1)-tuple (r1, r2, . . . , rN−1) ∈ R1×R2× . . . ×RN−1 with respect to the join condition a: a(rt)={x ∈ Rn: a(rt, x)==True} ⊆ Rn. Hence, M can be defined as: M=Ma=Mn,i={x ∈ Mn,i−1: a(rt, x)==True} ⊆ Mn,i−1 ⊆ Rn. This means that the process sifts through the matching set Mn,i−1 and selects those elements that match rt under a.


In another embodiment, the first yet undefined matching set M has the form M=Mab−1 for some arrow a ∈ A, a: t → n, as above, and an arrow b, b: k → n, where S(a)={b}. Similarly, there is an alternative naming of the same set as M=Mk,i for some i>0, and hence M=Mk,i ⊆ Rk. In the ordering of the collection of matching sets M, matching set Ma is placed before Mab−1. Therefore, both matching sets Ma and Mk,i−1 are already defined. This can be used to define M: M=Mab−1=Mk,i=Mk,i−1 ∩ b−1(Ma). The set b−1(Ma) is the subset of Rk that matches at least one element of Ma with respect to the join condition b: b−1(Ma)={x ∈ Rk: there is a row y ∈ Ma such that b(x, y)==True} ⊆ Rk. Again, this means that the process sifts through Mk,i−1 using the join condition b.
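A compact sketch of these two sifting steps follows, assuming rows are dictionaries and every join condition is an equality on a single named attribute; the helper names are illustrative, not the patent's notation.

def sift_forward(m_prev, r_t, attr):
    """M_a = M_{n,i-1} ∩ a(r_t): keep the rows of M_{n,i-1} that match r_t on attr."""
    return [x for x in m_prev if x[attr] == r_t[attr]]

def sift_backward(m_prev, m_a, attr):
    """M_{ab^-1} = M_{k,i-1} ∩ b^-1(M_a): keep the rows of M_{k,i-1} that match
    at least one row of M_a on attr (the join condition b)."""
    partner_values = {y[attr] for y in m_a}
    return [x for x in m_prev if x[attr] in partner_values]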


In an embodiment, let Γ be an operator tree, such as operator tree 310, let 1<=n<=N, and let rn ∈ Rn be given. Γ(rn), the specialization of Γ by rn, is an operator tree obtained from Γ as follows: into any term in Γ (all terms are at the leaf positions of the tree) that references the relation Rn, the value rn is substituted for the variable Rn. Thus, all terms are specialized, resulting in new terms not referencing Rn. Some of the new terms may even become constant, that is, identical to True or False. Further, the specialization includes obtaining all constant terms from the operator tree that are not in a root position. Then, the Boolean values of the constant terms are propagated upwards through the nodes of the operator tree Γ. This leads to further operator tree nodes becoming constant. Γ(rn) is the result of this specialization process of Γ.


In an embodiment, Γ is the operator tree (e.g., operator tree 310) representing those conditions of a join query that are not join conditions (and cannot be evaluated before the join has been evaluated). For any N-tuple (r1, r2, . . . , rN) ∈ R1×R2× . . . ×RN, a sequence of successive partial specializations of Γ may be defined as follows: let Γ0=Γ, Γ1=Γ0(r1), Γ2=Γ1(r2), . . . , ΓN=ΓN−1(rN). In this case, ΓN, the full specialization of Γ, is constant, i.e., either True or False. In an embodiment, if some earlier specialization Γn has already become constant, then all specializations that follow Γn are identical to it and are constant.
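The following is a minimal sketch of such an operator tree with a specialize step, assuming leaf terms carry the set of relations they still reference and a callable that can be evaluated once all of them are bound; the class names and data layout are assumptions for illustration only, and for brevity the sketch specializes the tree in place rather than building a copy.

class Term:
    """A leaf: a condition over some relations, e.g. R1.z * R2.z >= 1000."""
    def __init__(self, refs, fn):
        self.refs = set(refs)          # relation indices still unbound
        self.fn = fn                   # fn(bound) -> bool once refs is empty
        self.bound = {}                # relation index -> substituted row

    def specialize(self, n, row):
        if n in self.refs:
            self.refs.discard(n)
            self.bound[n] = row
        # a term with no open references degenerates into a Boolean constant
        return self.fn(self.bound) if not self.refs else self

class Node:
    """An inner node: AND / OR over subtrees or Boolean constants."""
    def __init__(self, op, children):
        self.op, self.children = op, children

    def specialize(self, n, row):
        kids = [c if isinstance(c, bool) else c.specialize(n, row)
                for c in self.children]
        if self.op == "AND":
            if any(k is False for k in kids):
                return False                       # constant propagated upwards
            kids = [k for k in kids if k is not True]
            return True if not kids else Node("AND", kids)
        if any(k is True for k in kids):
            return True
        kids = [k for k in kids if k is not False]
        return False if not kids else Node("OR", kids)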


The above definitions introduce matching sets of the form Mab−1 only in the case where there are two arrows a, b ∈ A and the successor of arrow a is arrow b: S(a)={b}. If arrow a has tail m and head n, a: m → n, and arrow b has tail k and also head n, b: k → n, then h(ab−1)=k>m=t(ab−1). If the matching sets Mab−1 were introduced for all arrows a, b ∈ A with h(a)=h(b) and t(a)<t(b), so that h(ab−1)=t(b)>t(a)=t(ab−1), then the computation of the sets Mab−1 with b ∉ S(a) would be redundant.


If an (N−1)-tuple (r1, r2, . . . , rN−1) ∈ R1×R2× . . . ×RN−1 is provided and a matching set M ∈ M with 0<=t=t(M)<N has been defined as above, then: 1) M does not depend on rt+1, . . . , rN−1; and 2) if t>0, then M depends on rt.


In an embodiment, memory consumption can be greatly reduced by not explicitly storing all matching sets Mn,i of the descending chain Rn ⊇ Mn,0 ⊇ Mn,1 ⊇ Mn,2 ⊇ . . . ⊇ Mn,r, where r ∈ ℕ0, pertaining to some fixed vertex n ∈ V. It is enough to store Mn,0 and, together with every element x (x ∈ Mn,0), an integer i (0<=i<=r), whose interpretation is: 1) x ∈ Mn,i, x ∉ Mn,i+1, if 0<=i<r; and 2) x ∈ Mn,r, if i=r.
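One possible realization of this space optimization is sketched below: the chain is represented by the rows of Mn,0 plus one level index per row; the class and method names are illustrative assumptions, not the patent's data structures.

class MatchingChain:
    """Represents M_{n,0} ⊇ M_{n,1} ⊇ ... ⊇ M_{n,r} without materializing each set."""
    def __init__(self, m_n0):
        self.rows = list(m_n0)
        self.level = [0] * len(self.rows)    # row k belongs to M_{n,0} .. M_{n,level[k]}

    def refine(self, i, keep):
        """Derive M_{n,i} from M_{n,i-1}: rows of level i-1 that satisfy `keep` move up."""
        for k, row in enumerate(self.rows):
            if self.level[k] == i - 1 and keep(row):
                self.level[k] = i

    def members(self, i):
        """The set M_{n,i}, reconstructed on demand."""
        return [row for k, row in enumerate(self.rows) if self.level[k] >= i]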


Additionally, the amount of matching set computation can be reduced further. Let M ∈ M be a matching set and t=t(M), so that M depends at most on r1, r2, . . . , rt. If M has been first computed for some t-tuple (r1, r2, . . . , rt) and is later needed for a different t-tuple (r1′, r2′, . . . , rt′), then the attribute values of the rows that entered the computation are compared between the two t-tuples, and it is checked whether these attribute values are the same on (r1′, r2′, . . . , rt′) as they were on (r1, r2, . . . , rt). If they agree, M does not need to be recomputed, although (r1, r2, . . . , rt) changed. This leads to one more decrease in recomputation, since repeated attribute values are quite common in practice.
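This reuse can be expressed as a cache keyed by exactly the attribute values a matching set depends on; the sketch below is a generic memoization helper written only for illustration, with all names assumed.

_matching_set_cache = {}

def cached_matching_set(set_name, key_attrs, prefix_rows, compute):
    """key_attrs: (tuple position, attribute) pairs whose values the set depends on.
    compute: zero-argument callable that actually builds the matching set."""
    key = (set_name, tuple(prefix_rows[pos][attr] for pos, attr in key_attrs))
    if key not in _matching_set_cache:
        _matching_set_cache[key] = compute()     # computed once per distinct key
    return _matching_set_cache[key]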



FIG. 7 is a flow diagram illustrating an embodiment of a method for join tuple assembly by partial specializations. Algorithm 700 is applied to a join query, such as join query 140, including join and non-join conditions in the WHERE clause. The non-join conditions are presented as an operator tree, such as Γ 310. In addition, a materialization graph G is built (e.g., materialization graph 510) and the matching sets M1,0, . . . , MN,0 of the reduction results of the full relations are provided. The materialization graph G is a subgraph of the join graph such as join graph 210. At decision block 705, the algorithm checks if one of the reduction results Mn,0 is empty or if the operator tree Γ equals the constant expression False. If the result from decision block 705 is “YES”, then the algorithm stops at block 710 as the join is empty. If the result from decision block 705 is “NO”, then the algorithm continues at block 715.


At block 715, a defined number of initializations are performed. In an embodiment, the initializations include the following elements: 1) a tuple construction counter t; 2) an iterator it1; and 3) a trivial partial specialization Γ0 of the operator tree Γ. The tuple construction counter t indicates how far the algorithm 700 has proceeded in building one more join tuple. For example, if the relations are seven in number, then 7-tuples (r1, r2, . . . , r7) are to be generated and the counter t indicates how far the construction of the current tuple has gone. For example, if t=3, this means that a 2-tuple (r1, r2) has been identified and constructed that can be completed to a 7-tuple (r1, r2, . . . , r7), all conditions on r1 and r2 have been checked, and the algorithm will now search for a suitable r3. Initially t is 1 (t=1). An iterator is an object that allows the algorithm 700 to traverse all elements of a matching set; for example, iterator it1 traverses M1,0. More iterators are used later in the algorithm: it2 traverses M2,3, it3 traverses M3,4, it4 traverses M4,4, it5 traverses M5,2, it6 traverses M6,2, and it7 traverses M7,3. In general, every iterator itn traverses the matching set ordered last in Mn.


At block 720, the algorithm 700 checks if the value of the counter t is a positive integer, i.e., t>0. If the result from decision block 720 is "NO", i.e., t=0, then the algorithm 700 stops at block 725, meaning that all tuples satisfying the complete WHERE clause of the join query have been produced. If the result from decision block 720 is "YES", then the algorithm continues at block 730. At block 730, the iterator itt is accessed. This iterator has been initialized either in block 715 or in an earlier traversal of block 770. The row rt is the row that itt currently points to; afterwards, itt is made to point to the next row in the matching set itt traverses, if there is one: rt=*itt; itt++. When the iterator advances, the algorithm moves to the next row of the matching set. For example, if t=1, then it1 first points to 2, since here M1,0={2, 3}. At block 730, the partial specialization Γt=Γt−1(rt) is computed from Γt−1. This means that the algorithm is at a specific row (rt) of the table, and the attribute values in this row are taken to compute the partial specialization Γt (the attribute values of the reduction results are given as input to the algorithm). The computation of Γt−1(rt) is performed by taking the attribute values from the specified row rt and substituting these values into Γt−1.


At decision block 735, the algorithm 700 checks if the computed partial specialization Γt is False, that is, whether the operator tree Γt has degenerated into the simple constant expression False. If the result from decision block 735 is "YES", then the algorithm continues at block 740. At block 740, while the counter t is a positive integer (t>0) and the current iterator (itt) points to the end of the matching set it traverses, hence to no row, the value of the counter t is decreased by one or more counts. In an embodiment, if, for the original value of t, itt points to some row of its matching set, then t remains unchanged. In another embodiment, t can be decreased by more than 1. For example, if t is decreased from 5 to 3, this means that in the process of constructing the next join tuple, a 4-tuple (r1, r2, r3, r4) has been constructed which so far satisfies all necessary conditions, and a suitable r5 has to be found. However, this may fail. Thus, a better r4 has to be found. But this may also fail, so a better r3 will be needed in that case. Therefore, the algorithm resumes work on the current 2-tuple (r1, r2), searching for a suitable r3 as part of a 3-tuple (r1, r2, r3) that can be completed to a full N-tuple (r1, r2, r3, . . . , rN) satisfying the WHERE clause. The algorithm 700 then returns to decision block 720. If the result from decision block 735 is "NO", then the algorithm continues at block 750. At decision block 750, the algorithm 700 checks whether the value of the tuple construction counter t is equal to N, the needed length for a tuple to be completed, which also equals the number of vertices in the materialization graph. For example, N=7 is the number of relations from which a row is materialized, and thus the length of every valid tuple is 7.


If the result from decision block 750 is “YES”, then the algorithm continues at block 755. At block 755, the tuple construction counter t is equal to the needed length (t==N), i.e., another N-tuple (r1, r2, . . . , rN) is obtained satisfying the complete WHERE clause. In an embodiment, when a valid N-tuple (r1, r2, . . . , rN) is found, it can be used immediately during the algorithm. This could be the choice if, for example, the join result is used as input by another operation on a second computer. For example, sending the join tuple to the second computer, letting it begin its work immediately instead of waiting for the first computer to complete. In an alternative embodiment, all found N-tuples (r1, r2, . . . , rN) can be first collected and stored in a storage unit for later use.


If the result from decision block 750 is “NO”, then the algorithm continues at block 760. At this point, the tuple constructed so far is shorter than a full N-tuple and the algorithm 700 prepares for finding the rows that are still needed to complete the full tuple. At block 760, all matching sets that depend on rt are recomputed. These matching sets are identified by the condition that their tail equals t. At decision block 765, the algorithm checks whether any one of the just recomputed matching sets turned empty. If the result from decision block 765 is “YES”, then the algorithm continues at block 740. This means that if there is an empty matching set, then no full join tuple can be constructed from the current shorter tuple. At block 740, while counter t is a positive integer (t>0) and the current iterator (itt) points to the end of the matching set it traverses, hence to no row, the value of the tuple construction counter t is decreased. The algorithm returns to block 720 and tries to find a new row rt, with the decreased value of t.


If the result from decision block 765 is “NO”, then the algorithm continues at block 770. This means that there are no empty matching sets. At block 770, the value of the counter t is increased by “1” (as part of the loop), indicating that the current tuple, which is to be prolonged to a full N-tuple, has successfully been prolonged by 1. The algorithm is returned to block 720 where the algorithm searches for a row rt, with the increased value of t. The algorithm 700 terminates when t=0, indicating that all join tuples have been produced.


Algorithm 700 can be presented with the following code as well:

TABLE 2
Join Tuple Assembly Algorithm by Partial Specializations (700)

if Γ == False or Mn,0 == Ø for any 1 <= n <= N : Stop, the join is empty.
t = 1 ; it1 = M1,0.begin( ) ; Γ0 = Γ;
while (t > 0) {  // loop invariants: 1) (r1, r2, . . . , rt−1) is a join tuple for G ∩ {1, 2, . . . , t−1},
                 //                  2) all M ∈ M with t(M) < t are known and not empty,
                 //                  3) itt points to some row in the matching set it enumerates, and
                 //                  4) Γt−1 != False.
  rt = *itt ; itt++ ;  // (r1, r2, . . . , rt) is a valid join tuple for G ∩ {1, 2, . . . , t}
  use rt to compute the next partial specialization: Γt = Γt−1(rt);
  incr = (Γt != False);  // t is incremented ⇔ rt prolongs (r1, r2, . . . , rt−1)
  if (incr and t < N) {  // recompute all M ∈ M with t(M) = t
    for every arrow a ∈ A, a: t → n, with t(a) = t (in ascending order of n) {
      compute Ma = Mn,i = Mn,i−1 ∩ a(rt);
      incr = (Ma != Ø);
      if (S(a) == {b}) {  // b: k → n
        compute Mab−1 = Mk,i = Mk,i−1 ∩ b−1(Ma);
        incr = (Mab−1 != Ø);
      }
    }
  }
  if (incr) {
    t++ ;  // for the new t, let Mt,r be the matching set ordered last in Mt
    itt = Mt,r.begin( ) ;  // since Mt,r ≠ Ø, itt points to some row
  }
  else {
    if (t == N and ΓN == True ): (r1, r2, . . . , rN), the next join tuple, has been produced
    // try next rt, at the same t if possible, otherwise decrease t:
    while (t > 0) and itt does not point to any row: t-- ;
  }
}
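For readers who prefer running code, here is a much-simplified sketch of the same backtracking skeleton in Python. It is an assumption-laden illustration, not the patented algorithm: it recomputes candidate rows directly from the join conditions instead of maintaining the ordered matching sets Mn,i, and it evaluates the non-join predicate only on complete tuples rather than through partial specializations.

def assemble_join_tuples(relations, join_conds, non_join_pred):
    """relations: lists of row dicts, already in materialization order.
    join_conds: (m, n, attr) triples with m < n, meaning R_m.attr == R_n.attr.
    non_join_pred: callable over the full tuple of rows, returns a bool."""
    N = len(relations)
    chosen = []

    def candidates(n):
        # rows of relation n that join with every already-chosen partner
        conds = [(m, attr) for (m, k, attr) in join_conds if k == n]
        return [row for row in relations[n]
                if all(chosen[m][attr] == row[attr] for m, attr in conds)]

    def extend(n):
        if n == N:
            if non_join_pred(tuple(chosen)):
                yield tuple(chosen)
            return
        for row in candidates(n):          # an empty candidate set means early failure
            chosen.append(row)
            yield from extend(n + 1)
            chosen.pop()

    yield from extend(0)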









The algorithm 700 accelerates the process of join tuple assembly in the method of semi-join reduction such that the method itself becomes faster and, when compared to other implementations, also decreases memory consumption. The join tuple assembly process is accelerated by algorithm 700 by using decision steps at several places (e.g., decision blocks 735 and 765) that check as early as possible whether to stop the current computation, once it is clear that the current tuple cannot be completed. For example, at decision block 735, if the operator tree Γt, depending on (r1, r2, . . . , rt), is identical to False, then there is no chance that the last operator tree ΓN could ever become True. Instead, the algorithm returns to block 720 to find another rt that does not make the operator tree Γt identical to False. Similarly, at block 765, if an empty matching set is encountered during recomputation of a subset of the matching sets, then the algorithm again returns to block 720, because it is already known that some position of the join tuple to be filled later will have no eligible candidates. These are the earliest possible detections of future failures, using only computations that cannot be avoided anyway, hence causing no extra cost. In this way, algorithm 700 avoids redundant iterations in the loop and avoids performing the same computation twice, which accelerates the join tuple assembly.



FIGS. 8A and 8B are flow diagrams of the method for query evaluation including the join tuple assembly algorithm by partial specializations. Algorithm 700 can be applied to the example of the method for query evaluation provided with FIGS. 1-5 to better illustrate how it works. Method 801 of FIG. 8A presents an overview of the join tuple assembly algorithm, including its data-independent preparatory steps. Method 802 gives the details left implicit in step 830 of method 801; those computations are data-dependent. Having the materialization graph (e.g., MG 510), method 801 begins with some calculations depending only on the topology of MG 510. At block 805, the successors of all arrows are calculated: S(a)=Ø; S(b)=Ø; S(c)={f}; S(d)={g}; S(e)={j}; S(f)={i}; S(g)=Ø; S(h)={k}; S(i)=Ø; S(j)=Ø; S(k)={l}; S(l)=Ø. At block 810, the matching sets with their head and tail functions are computed from MG 510:

TABLE 3
Matching Sets with Their Head and Tail Values

h(M1,0) = 1, t(M1,0) = 0
h(M2,0) = 2, t(M2,0) = 0
h(M3,0) = 3, t(M3,0) = 0
h(M4,0) = 4, t(M4,0) = 0
h(M5,0) = 5, t(M5,0) = 0
h(M6,0) = 6, t(M6,0) = 0
h(M7,0) = 7, t(M7,0) = 0
h(Ma) = 2, t(Ma) = 1
h(Mb) = 3, t(Mb) = 1
h(Mc) = 4, t(Mc) = 1
h(Mcf−1) = 2, t(Mcf−1) = 1
h(Md) = 5, t(Md) = 1
h(Mdg−1) = 2, t(Mdg−1) = 1
h(Me) = 6, t(Me) = 1
h(Mej−1) = 3, t(Mej−1) = 1
h(Mf) = 4, t(Mf) = 2
h(Mfi−1) = 3, t(Mfi−1) = 2
h(Mg) = 5, t(Mg) = 2
h(Mh) = 7, t(Mh) = 2
h(Mhk−1) = 3, t(Mhk−1) = 2
h(Mi) = 4, t(Mi) = 3
h(Mj) = 6, t(Mj) = 3
h(Mk) = 7, t(Mk) = 3
h(Mkl−1) = 4, t(Mkl−1) = 3
h(Ml) = 7, t(Ml) = 4

Then, at block 815, the collection of all matching sets M is ordered using the computed head and tail functions. The ordering of the matching sets follows the definitions given above. First, the matching sets that are not of the type Mab−1 are ordered; these are the reduction results Mn,0 and the sets of type Ma. For these matching sets, the order is: first by ascending tail function, and then, for equal values of the tail function, by ascending head function. These indications are sufficient since, for these types of sets, no two different sets have the same head and tail. Then, every matching set Mab−1 is placed immediately after its corresponding set Ma. According to the example, the resulting ordering is:

TABLE 4
Ordered Collection of all Matching Sets

M = {M1,0, M2,0, M3,0, M4,0, M5,0, M6,0, M7,0, Ma, Mb, Mc, Mcf−1, Md,
Mdg−1, Me, Mej−1, Mf, Mfi−1, Mg, Mh, Mhk−1, Mi, Mj, Mk, Mkl−1, Ml}

At block 820, the collection of all matching sets M is divided into a number of sub-collections containing matching sets with equal head function:

TABLE 5
Sub-Collections of all Matching Sets by Equal Head

M1 = {M1,0}
M2 = {M2,0, Ma, Mcf−1, Mdg−1}
M3 = {M3,0, Mb, Mej−1, Mfi−1, Mhk−1}
M4 = {M4,0, Mc, Mf, Mi, Mkl−1}
M5 = {M5,0, Md, Mg}
M6 = {M6,0, Me, Mj}
M7 = {M7,0, Mh, Mk, Ml}

These sub-collections can be renumbered to represent the order of a given matching set in a given sub-collection. This is:

TABLE 6
Renumbered Matching Sets

M1 = {M1,0}
M2 = {M2,0, M2,1, M2,2, M2,3}
M3 = {M3,0, M3,1, M3,2, M3,3, M3,4}
M4 = {M4,0, M4,1, M4,2, M4,3, M4,4}
M5 = {M5,0, M5,1, M5,2}
M6 = {M6,0, M6,1, M6,2}
M7 = {M7,0, M7,1, M7,2, M7,3}

where: M2,1 = Ma, M2,2 = Mcf−1, M2,3 = Mdg−1
M3,1 = Mb, M3,2 = Mej−1, M3,3 = Mfi−1, M3,4 = Mhk−1
M4,1 = Mc, M4,2 = Mf, M4,3 = Mi, M4,4 = Mkl−1
M5,1 = Md, M5,2 = Mg, M6,1 = Me, M6,2 = Mj
M7,1 = Mh, M7,2 = Mk, M7,3 = Ml

At block 825, the matching sets Mn,i with i>0 are recursively defined using the reduction results Mn,0 as the induction basis. As set up in the example of FIGS. 1-5, it is assumed that a 6-tuple (r1, r2, r3, r4, r5, r6) ∈ R1×R2×R3×R4×R5×R6 is given. The example of FIGS. 1-5 involves 7-tuples (r1, r2, r3, r4, r5, r6, r7) ∈ R1×R2×R3×R4×R5×R6×R7, but since the last vertex is a sink in the materialization graph 510, reflecting a general property of all materialization graphs, the last component (i.e., r7) will never be used. The defined matching sets depend on appropriate parts of this 6-tuple. Simultaneously, it is verified that the extent of dependency of the matching sets on the 6-tuple (r1, r2, r3, r4, r5, r6) is correctly reproduced by the tail function, as described above.


The definition of the higher matching sets begins by setting Ma=M2,1=M2,0 ∩ a(r1). The predicate a is a(x, y)=(x.a=y.a) for x ∈ R1, y ∈ R2, and the set a(r1), depending on the first element r1 of the 6-tuple, is a(r1)={x ∈ R2: x.a=r1.a}, so that the matching set Ma has been defined: Ma=M2,1={x ∈ M2,0: x.a=r1.a}. Hence, Ma=M2,1 depends on r1, but not on r2, r3, r4, r5, r6. Since t(Ma)=1, this agrees with the following general observation: if M ∈ M is a matching set with 0<=t=t(M)<N, then: 1) M does not depend on rt+1, . . . , rN−1; and 2) if t>0, then M depends on rt. Analogously, matching sets Mb and Mc can be defined: Mb=M3,1=M3,0 ∩ b(r1) and Mc=M4,1=M4,0 ∩ c(r1).


The next matching set in the ordering of M is Mcf−1=M2,2. The definition of Mcf−1 uses the previously defined Mc=M4,1 and Ma=M2,1: Mcf−1=M2,2=M2,1 ∩ f−1(Mc). The predicate f in this example is f(x, y)=(x.f=y.f) for x ∈ R2, y ∈ R4. An explicit description of the set f−1(Mc) is: f−1(Mc)={x ∈ R2: x.f=y.f for some y ∈ Mc}. Therefore, the matching set Mcf−1 can be defined explicitly as: Mcf−1=M2,2={x ∈ M2,1: x.f=y.f for some y ∈ Mc}. Since Mc=M4,1 and Ma=M2,1 both depend on r1, but not on r2, r3, r4, r5, r6, the same is valid for Mcf−1=M2,2, in accordance with t(Mcf−1)=1.


Analogously, the following matching sets can be defined, as all of them have a tail equal to 1 and depend on r1, but not on r2, r3, r4, r5, r6:

TABLE 7
Matching Sets M with tail t(M) = 1

Md = M5,1 = M5,0 ∩ d(r1)
Mdg−1 = M2,3 = M2,2 ∩ g−1(Md)
Me = M6,1 = M6,0 ∩ e(r1)
Mej−1 = M3,2 = M3,1 ∩ j−1(Me)

Further, in the ordering of M the matching sets with tail t(M)=2 are defined. These matching sets depend on r1, r2, but not on r3, r4, r5, r6:

TABLE 8
Matching Sets M with tail t(M) = 2

Mf = M4,2 = M4,1 ∩ f(r2)
Mfi−1 = M3,3 = M3,2 ∩ i−1(Mf)
Mg = M5,2 = M5,1 ∩ g(r2)
Mh = M7,1 = M7,0 ∩ h(r2)
Mhk−1 = M3,4 = M3,3 ∩ k−1(Mh)

Further, in the ordering of M the matching sets with tail t(M)=3 are defined. These matching sets depend on r1, r2, r3, but not on r4, r5, r6:

TABLE 9
Matching Sets M with tail t(M) = 3

Mi = M4,3 = M4,2 ∩ i(r3)
Mj = M6,2 = M6,1 ∩ j(r3)
Mk = M7,2 = M7,1 ∩ k(r3)
Mkl−1 = M4,4 = M4,3 ∩ l−1(Mk)

Finally, in the ordering of M the matching set with tail t(M)=4 is defined; in this example there is only one such matching set: Ml=M7,3=M7,2 ∩ l(r4). The matching set Ml depends on r1, r2, r3, r4, but not on r5, r6. In this example of the method for join query evaluation, there are no matching sets with tail t(M)=5, 6, 7, since these vertices are sinks in the materialization graph 510. That is why no matching set depends on r5 or r6. However, it should be noted that for other materialization graphs on 7 vertices, matching sets with tail t(M)=5, 6 may exist.


At block 830, the join tuple assembly algorithm is initiated to construct all join tuples satisfying the complete WHERE clause; each of them is a 7-tuple (r1, r2, r3, r4, r5, r6, r7) ∈ R1×R2×R3×R4×R5×R6×R7. At the outset, their number is unknown; both options are possible: there might be no join tuples at all, or there might be up to |R1|*|R2|*|R3|*|R4|*|R5|*|R6|*|R7| join tuples. Their number also has to be determined by the algorithm.



FIG. 8B and method 802 continue the example of method 801 by presenting the join tuple assembly algorithm 700 in detail. The example is based on the example of FIGS. 1-5. The reduction results (e.g., reduced relations R1 405, R2 410, R3 415, R4 420, R5 425, R6 430, and R7 435) are taken as an input for the algorithm 700. Therefore, having the reduced relations in hand, the following matching sets with reduction results are obtained: M1,0={2, 3}, M2,0={4, 5}, M3,0={2, 3}, M4,0={3, 4}, M5,0={2, 3}, M6,0={1, 2, 4, 5}, and M7,0={1, 2}. It should be noted that reduction means repeatedly eliminating from the relations all rows that have no join partner.


At block 835, the non-join conditions of the WHERE clause are checked. Since M1,0={2, 3}≠Ø, . . . , M7,0={1, 2}≠Ø and

Γ=(R1.z)^2+(R2.z)^2+(R3.z)^2+(R4.z)^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=1000
AND
((R1.z*R2.z*R3.z*R4.z>=1000) OR (R1.z*R2.z*R3.z*R4.z=0))

is not identical to the constant expression False, the algorithm is initialized by setting the counter t=1, the iterator it1 pointing to 2 ε M1,0={2, 3}, and Γ0=Γ as the first member of the sequence of successive partial specializations to come. At this point of the algorithm, a 0-tuple has been constructed. The algorithm 700 has to select a suitable r1 ε M1,0 for which it might be possible to complete the tuple to a full 7-tuple (r1, r2, r3, r4, r5, r6, r7) ε R1×R2×R3×R4×R5×R6×R7 satisfying the complete WHERE clause.


At block 840, the algorithm 700 checks if the tuple construction counter is a positive integer. Since in the current example the tuple construction counter is a positive integer, the loop of algorithm 700 is entered. Executing rt=*itt; itt++ from block 730 of algorithm 700 leads to r1=2 and it1 pointing to 3 ε M1,0={2, 3}. At block 845, the first partial specialization is computed. Using the attribute value r1.z=9, the partial specialization Γ1=Γ0(r1) of Γ0 can be computed by substituting the attribute value into Γ0:

Γ1=9^2+(R2.z)^2+(R3.z)^2+(R4.z)^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=1000 AND
((9*R2.z*R3.z*R4.z>=1000) OR (9*R2.z*R3.z*R4.z=0))

This expression can be simplified to:

Γ1=(R2.z)^2+(R3.z)^2+(R4.z)^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=919 AND
((R2.z*R3.z*R4.z>=111.111) OR (R2.z*R3.z*R4.z=0))

At block 850, the computed partial specialization is checked for whether it is identical to the constant expression False. In the current example, this is not the case: since Γ1 still contains variables, e.g., R2.z, it is not constant, so it is neither False nor True. If the computed partial specialization were False, the algorithm would continue at block 855. At block 855, the tuple construction counter t may remain unchanged if it1 still points to some row, i.e., to 3 ε M1,0, or the tuple counter may be decreased by one or more counts if the iterator it1 points to no row. The algorithm would then continue at block 840, where the same procedure just performed for the value r1=2 would be performed for the value r1=3.


At block 858, the tuple construction counter is checked to determine whether its value is equal to the needed length of any join tuple. Since currently t=1 and the needed length is 7, a full 7-tuple has not yet been constructed and the method continues at block 860. Otherwise, the method would continue at block 859, where the constructed full tuple is stored or used directly in some functionality. At block 860, all matching sets with t(M)=1 are computed for the first time (and recomputed on each subsequent execution of the loop of algorithm 700), that is, those matching sets that depend on r1 only. According to MG 510, all arrows with tail( )=1, ordered by head( ) ascending, are:




[embedded image: the arrows of MG 510 with tail( )=1, shown in ascending order of head( )]



First, Ma=M2,1 is computed: Ma=M2,1={x ε M2,0: x.a=r1.a}. Since r1.a=2, Ma=M2,1={x ε M2,0: x.a=2}. Since M2,0={4, 5} and the row x ε R2 with row ID=4 has x.a=2, whereas the row y ε R2 with row ID=5 has y.a=3, then Ma=M2,1={4}. The interpretation of the matching set Ma is: only row ID=4 ε M2,0 matches r1=2 under the join condition a ε MG and thus is the only row with any chance of completing r1=2 to a full 7-tuple (r1, r2, r3, r4, r5, r6, r7) ε R1×R2×R3×R4×R5×R6×R7 satisfying the complete WHERE clause.
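
As a usage sketch of the hypothetical helpers and type aliases introduced earlier, this computation of Ma could look as follows with the concrete values of this step (M2,0={4, 5}, r1.a=2, and a-values 2 and 3 for rows 4 and 5 of R2); the map holding the attribute values is an assumption of the sketch.

    #include <map>

    int main() {
        // Attribute a of R2 for the reduced rows: row 4 has a=2, row 5 has a=3.
        std::map<RowId, AttrValue> aOfR2{{4, 2}, {5, 3}};
        MatchingSet m20{4, 5};                      // M2,0 = {4, 5}
        MatchingSet ma = intersectWithPredicate(
            m20, [&](RowId x) { return aOfR2.at(x); }, /* r1.a = */ 2);
        // ma now holds {4}, i.e. Ma = M2,1 = {4} as derived in the text.
        return 0;
    }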


Analogously, the matching sets Mb and Mc are computed. Since Mb=M3,1={x ε M3,0: x.b=r1.b} and Mc=M4,1={x ε M4,0: x.c=r1.c} and r1.b=4 and r1.c=6, then Mb=M3,1={x ε M3,0: x.b=4}={3} and Mc=M4,1={x ε M4,0: x.c=6}={3}.


Further, the matching set Mcf−1=M2,2 is computed. As defined above, Mcf−1=M2,2={x ε M2,1: x.f=y.f for some y ε Mc}. Since Mc={3} in this case is a singleton (a set with exactly one element), the set {y.f: y ε Mc}={12} is also a singleton. Therefore, Mcf−1=M2,2={x ε M2,1: x.f=12} has to be evaluated. Since only row ID=4 in reduction table R2 410 satisfies this condition, then Mcf−1=M2,2={4}.


Analogously, the matching sets Md, Mdg−1, Me, and Mej−1 are computed: Md=M5,1={x ε M5,0: x.d=r1.d}={x ε M5,0: x.d=8}={2}; Mdg−1=M2,3={x ε M2,2: x.g=y.g for some y ε Md}={x ε M2,2: x.g=9}={4}; Me=M6,1={x ε M6,0: x.e=r1.e}={x ε M6,0: x.e=10}={2, 4}; and Mej−1=M3,2={x ε M3,1: x.j=y.j for some y ε Me}. Although Me={2, 4} is a 2-element set, these two rows happen to have identical j attributes and thus {y.j: y ε Me}={12} and Mej−1=M3,2={x ε M3,1: x.j=12}={3}. The last computed matching set Mej−1 can be interpreted as follows: for r1=2, only the subset {3}=Mej−1=M3,2 ⊆ M3,0={2, 3} matches r1 under the join condition b ε MG (since M3,2 ⊆ M3,1=Mb) and matches, under join condition j ε MG, some row in M6,0, which under join condition e ε MG, matches r1. Thus, only the elements of Mej−1=M3,2 satisfy the stated necessary conditions for r1 to be completed to a full 7-tuple (r1, r2, r3, r4, r5, r6, r7) ε R1×R2×R3×R4×R5×R6×R7 satisfying the complete WHERE clause.


At this point of the computation, it is possible that the 1-tuple (r1=2) can be completed to a full 7-tuple (r1, r2, r3, r4, r5, r6, r7), because: 1) the first partial specialization Γ1 is not False, thus the complete specialization Γ7 could become True; and 2) all matching sets depending exactly on r1, namely Ma, Mb, Mc, Mcf−1, Md, Mdg−1, Me, and Mej−1, were computed and proven to be non-empty sets, thus r1 has a consistent choice of join partners in all joins it participates in.


At block 865, the tuple construction counter t is increased by “1”. This leads to t=2, which means that a 2-tuple (r1, r2) is going to be constructed from the 1-tuple (r1=2). Considering the following descending chain of matching sets: R2 ⊇ M2,0 ⊇ M2,1 ⊇ M2,2 ⊇ M2,3, the iterator it2 points to row ID=4 ε M2,3={4}. Method 802 returns to block 840, where algorithm 700 returns to the initial step of the loop with r2=4 and it2 pointing to the end of M2,3. Using the attribute value r2.z=5 (the only attribute of R2 referenced in Γ1), the next partial specialization Γ2=Γ1(r2) is computed, as in block 845. Substituting the attribute value r2.z=5 into Γ1 leads to:

Γ2=5^2+(R3.z)^2+(R4.z)^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=919 AND
((5*R3.z*R4.z>=111.111) OR (5*R3.z*R4.z=0))

This expression can be simplified to:

Γ2=(R3.z)^2+(R4.z)^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=894 AND
((R3.z*R4.z>=22.222) OR (R3.z*R4.z=0))


As Γ2 is not False, all matching sets with tail t(M)=2 (those matching sets that depend exactly on r2) are computed:









TABLE 10
Computed Matching Sets Depending on (r1 = 2, r2 = 4)

Mf = M4,2 = {x ε M4,1: x.f = r2.f} = {x ε M4,1: x.f = 12} = {3}
Mfi−1 = M3,3 = {x ε M3,2: x.i = y.i for some y ε Mf} = {x ε M3,2: x.i = 14} = {3}
Mg = M5,2 = {x ε M5,1: x.g = r2.g} = {x ε M5,1: x.g = 9} = {2}
Mh = M7,1 = {x ε M7,0: x.h = r2.h} = {x ε M7,0: x.h = 5} = {2}
Mhk−1 = M3,4 = {x ε M3,3: x.k = y.k for some y ε Mh} = {x ε M3,3: x.k = 2} = {3}









Similarly, the 2-tuple (r1, r2) could possibly be completed to a full 7-tuple (r1, r2, r3, r4, r5, r6, r7), for the same reasons: 1) the second partial specialization Γ2 is not False, thus the complete specialization Γ7 could become True; and 2) all matching sets depending exactly on r2 were computed and proven to be non-empty sets. Therefore, the tuple construction counter t is increased again to t=3, as in block 865. The method 802 continues the loop considering the descending chain R3 ⊇ M3,0 ⊇ M3,1 ⊇ M3,2 ⊇ M3,3 ⊇ M3,4 with t=3, gets r3=3 and it3 pointing to the end of M3,4={3}. Using the attribute value r3.z=14, the next partial specialization Γ3=Γ2(r3) is computed:

Γ3=14^2+(R4.z)^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=894 AND
((14*R4.z>=22.222) OR (14*R4.z=0))

This expression can be simplified to:

Γ3=(R4.z)^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=698 AND
((R4.z>=1.5873) OR (R4.z=0))


As Γ3 is not False, all matching sets with tail t(M)=3 (those matching sets that depend exactly on r3) are computed:









TABLE 11
Computed Matching Sets Depending on (r1 = 2, r2 = 4, r3 = 3)

Mi = M4,3 = {x ε M4,2: x.i = r3.i} = {x ε M4,2: x.i = 14} = {3}
Mj = M6,2 = {x ε M6,1: x.j = r3.j} = {x ε M6,1: x.j = 12} = {2, 4}
Mk = M7,2 = {x ε M7,1: x.k = r3.k} = {x ε M7,1: x.k = 2} = {2}
Mkl−1 = M4,4 = {x ε M4,3: x.l = y.l for some y ε Mk} = {x ε M4,3: x.l = 15} = {3}









Similarly, the tuple construction counter is increased once more to t=4. Considering the descending chain R4 ⊇ M4,0 ⊇ M4,1 ⊇ M4,2 ⊇ M4,3 ⊇ M4,4, on the next entry into the loop, r4=3 is set with it4 pointing to the end of M4,4={3}. Using the attribute value r4.z=10, the next partial specialization Γ4=Γ3(r4) is computed:

Γ4=10^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=698 AND
((10>=1.5873) OR (10=0))

This expression can be simplified to:

Γ4=(R5.z)^2+(R6.z)^2+(R7.z)^2<=598 AND
(True OR False)


The second part of the expression becomes True, since one sub-part of it is True. Therefore: Γ4=(R5.z)^2+(R6.z)^2+(R7.z)^2<=598 AND True. The AND node contains no branch that is False, therefore it does not become False. In this case all True branches of the operator tree can be removed: Γ4=(R5.z)^2+(R6.z)^2+(R7.z)^2<=598 AND. Now, the AND node contains exactly one branch and can therefore be replaced by this branch, decreasing the height of Γ4 and making it a tree of one node: Γ4=(R5.z)^2+(R6.z)^2+(R7.z)^2<=598.
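
A minimal sketch of these simplification rules on an operator tree, assuming a deliberately simplified node representation that is not the actual data structure of the method, might look as follows:

    #include <memory>
    #include <vector>

    // Illustrative operator-tree node: a Boolean constant, a still
    // variable-dependent ("open") condition, or an AND over child branches.
    struct Node {
        enum Kind { TrueConst, FalseConst, Open, And } kind;
        std::vector<std::shared_ptr<Node>> children;   // used only for And nodes
    };
    using NodePtr = std::shared_ptr<Node>;

    static NodePtr makeNode(Node::Kind k) { return std::make_shared<Node>(Node{k, {}}); }

    // Simplify an AND node after some of its branches have been specialized to
    // constants: any False branch makes the node False, True branches are
    // removed, and an AND left with a single branch is replaced by that branch.
    NodePtr simplifyAnd(const std::vector<NodePtr>& branches) {
        std::vector<NodePtr> remaining;
        for (const NodePtr& b : branches) {
            if (b->kind == Node::FalseConst) return makeNode(Node::FalseConst);
            if (b->kind == Node::TrueConst) continue;          // drop True branches
            remaining.push_back(b);
        }
        if (remaining.empty()) return makeNode(Node::TrueConst);
        if (remaining.size() == 1) return remaining.front();   // collapse single branch
        NodePtr andNode = makeNode(Node::And);
        andNode->children = remaining;
        return andNode;
    }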


Since Γ4 is still not constant, and hence different from False, all matching sets with tail t(M)=4 are computed (those matching sets that depend exactly on r1=2, r2=4, r3=3, r4=3). There is only one such set:

Ml=M7,3=M7,2∩l(r4)={x ε M7,2: x.l=r4.l}={x ε M7,2: x.l=15}={2}


Further, the tuple construction counter is increased to t=5. Considering the matching set chain R5 ⊇ M5,0 ⊇ M5,1 ⊇ M5,2, r5=2, and the iterator it5 points to the end of M5,2={2}. Using the attribute value r5.z=12, the next partial specialization Γ5=Γ4(r5) is computed: Γ5=12^2+(R6.z)^2+(R7.z)^2<=598, which simplified leads to: Γ5=(R6.z)^2+(R7.z)^2<=454. Since there are no matching sets to compute, steps 760 and 765 of the algorithm 700 are skipped and the algorithm proceeds with increasing the counter t to t=6 (as in step 770). Considering the descending chain R6 ⊇ M6,0 ⊇ M6,1 ⊇ M6,2, on the next entry into the loop, r6=2 and it6 points to row ID=2 ε M6,2={2, 4}. Using the attribute value r6.z=25, the next partial specialization Γ6=Γ5(r6) is computed: Γ6=25^2+(R7.z)^2<=454, which simplified leads to: Γ6=(R7.z)^2<=−171. All attribute values of the relations are integers, hence real numbers, and the square of a real number can never be negative. Thus, the partial specialization Γ6 is False.


At this point of the example, one of the partially specialized operator trees Γt is False. The tuple construction counter remains unchanged at t=6, since at block 740 it6 points to 4 ε M6,2={2, 4}. Therefore, the constructed tuple (r1=2, r2=4, r3=3, r4=3, r5=2) also remains unchanged and the computed matching sets with tail t(M)<=5 are not recomputed. At block 730, r6=4 and it6 pointing to the end of M6,2 are set. At this point, the algorithm has to compute Γ6 for r6=4. However, the attribute value is the same for both rows of the table: again, r6.z=25. Thus, the recomputation of Γ6 becomes unnecessary as the partial specialization Γ6 is still False. At this point, there is no other r6 available (meaning no other rows in the matching set M6,2={2, 4}). The current iterator positions are: it6 points to the end of M6,2={2, 4}; it5 points to the end of M5,2={2}; it4 points to the end of M4,4={3}; it3 points to the end of M3,4={3}; it2 points to the end of M2,3={4}; it1 points to 3 ε M1,0={2, 3}. Therefore, the first iterator in this sequence pointing to any row within its matching set is it1.
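
The observation that Γ6 did not have to be recomputed for the second row could be exploited, for example, by remembering the attribute value last substituted at each level; the following sketch assumes that only one attribute per relation is referenced in Γ, as is the case in this example, and it describes an optional optimization rather than a step required by the described method.

    #include <optional>

    // Illustrative only: remember, per tuple position t, the attribute value
    // last substituted into Γt-1 and the outcome, and skip recomputing Γt when
    // the next row carries the same value (as with r6.z = 25 for both rows of
    // M6,2 above).
    struct SpecializationMemo {
        std::optional<long> lastValue;   // last attribute value substituted at this level
        bool lastWasFalse = false;       // whether the resulting Γt was False

        // Returns true if Γt has to be recomputed for attrValue; otherwise the
        // cached outcome lastWasFalse can be reused directly.
        bool needsRecompute(long attrValue) {
            if (lastValue.has_value() && *lastValue == attrValue) return false;
            lastValue = attrValue;
            return true;
        }
    };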


These iterator positions mean that the 6-tuple (r1=2, r2=4, r3=3, r4=3, r5=2, r6=4) cannot be completed to any 7-tuple satisfying the complete WHERE clause. Additionally, when (r1=2, r2=4, r3=3, r4=3, r5=2) are fixed, there is no choice of a next r6. Therefore, even the 5-tuple (r1=2, r2=4, r3=3, r4=3, r5=2) cannot be completed to a 7-tuple. Again, when (r1=2, r2=4, r3=3, r4=3) are fixed, there is no choice of a next r5. Therefore, even the 4-tuple (r1=2, r2=4, r3=3, r4=3) cannot be completed. This continues until it becomes evident that even the 1-tuple (r1=2) cannot be completed. Thus, a new r1 is needed and, since in the current scenario, in contrast to the scenarios before, a new r1 exists, the algorithm decides to continue and not to stop, as join tuples satisfying the WHERE clause could still exist. Algorithm 700 reflects this situation, in step 740, by decreasing the tuple construction counter from t=6 to t=1.


Method 802 is then returned to block 840, where the loop is entered again. At this point, r1=3 is set and the iterator it1 points to the end of M1,0. The last partial specialization used was Γt−1=Γ0. From this, using the attribute value r1.z=10, the next partial specialization Γ1=Γ0(r1) is computed:

Γ1=10^2+(R2.z)^2+(R3.z)^2+(R4.z)^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=1000
AND
((10*R2.z*R3.z*R4.z>=1000) OR (10*R2.z*R3.z*R4.z=0))

This expression can be simplified to:

Γ1=(R2.z)^2+(R3.z)^2+(R4.z)^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=900 AND
((R2.z*R3.z*R4.z>=100) OR (R2.z*R3.z*R4.z=0))


Since Γ1 is not the constant expression False and t=1 has not yet reached the required length N=7, the matching sets with tail t(M)=1, i.e., those matching sets that depend exactly on r1=3, are recomputed. It should be noted that this cannot be avoided, since r1 changed and its attribute values changed as well. At the same time, all matching sets with tail less than t are not recomputed. In the specific example, where t dropped from t=6 down to t=1, this saving could go unnoticed, since only the reduction results Mn,0, which have a tail( ) of 0, are spared from recomputation. If, however, t had only dropped from t=6 to t=5, which indeed is the typical case, all matching sets of tails 0, 1, 2, 3, or 4 (that is, all matching sets) would remain valid:









TABLE 12
Computed Matching Sets Depending on (r1 = 3)

Ma = M2,1 = {x ε M2,0: x.a = r1.a} = {x ε M2,0: x.a = 3} = {5}
Mb = M3,1 = {x ε M3,0: x.b = r1.b} = {x ε M3,0: x.b = 6} = {2}
Mc = M4,1 = {x ε M4,0: x.c = r1.c} = {x ε M4,0: x.c = 9} = {4}
Mcf−1 = M2,2 = {x ε M2,1: x.f = y.f for some y ε Mc} = {x ε M2,1: x.f = 14} = {5}
Md = M5,1 = {x ε M5,0: x.d = r1.d} = {x ε M5,0: x.d = 12} = {3}
Mdg−1 = M2,3 = {x ε M2,2: x.g = y.g for some y ε Md} = {x ε M2,2: x.g = 9} = {5}
Me = M6,1 = {x ε M6,0: x.e = r1.e} = {x ε M6,0: x.e = 15} = {1, 5}
Mej−1 = M3,2 = {x ε M3,1: x.j = y.j for some y ε Me} = {x ε M3,1: x.j = 15} = {2}
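
As a sketch of the reuse rule described before TABLE 12 (recompute only the matching sets whose tail equals the current counter value, keep those with a smaller tail), one might track per matching set whether its stored rows are still valid; the MatchingSetInfo record below is an assumption made for illustration only.

    #include <vector>

    struct MatchingSetInfo {
        int tail;      // tail(M): the largest tuple index the matching set depends on
        bool valid;    // whether the currently stored rows may still be reused
    };

    // When the loop is re-entered at level t with a new rt, only the matching
    // sets whose tail equals t depend on the changed rt and must be recomputed.
    void invalidateForNewRow(std::vector<MatchingSetInfo>& sets, int t) {
        for (MatchingSetInfo& m : sets) {
            if (m.tail == t) m.valid = false;   // depends exactly on the changed rt
            // m.tail < t: prefix (r1, ..., rt-1) is unchanged, keep the set as-is
            // m.tail > t: recomputed later anyway, when the counter reaches its tail level
        }
    }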









Following algorithm 700, the tuple construction counter is increased to t=2; r2=5, and the iterator it2 points to the end of M2,3={5}. Using the attribute value r2.z=0, the next partial specialization Γ2=Γ1(r2) is computed, as in block 845. Substituting the attribute value r2.z=0 into Γ1 leads to:

Γ2=0^2+(R3.z)^2+(R4.z)^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=900 AND
((0*R3.z*R4.z>=100) OR (0*R3.z*R4.z=0))

As the second part of the Boolean expression is True, the height of the operator tree Γ2 decreases and Γ2 becomes a one-node tree:

Γ2=(R3.z)^2+(R4.z)^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=900


Since Γ2 is not the constant expression False, the matching sets with tail t(M)=2 are:









TABLE 13
Computed Matching Sets Depending on (r1 = 3, r2 = 5)

Mf = M4,2 = {x ε M4,1: x.f = r2.f} = {x ε M4,1: x.f = 14} = {4}
Mfi−1 = M3,3 = {x ε M3,2: x.i = y.i for some y ε Mf} = {x ε M3,2: x.i = 16} = {2}
Mg = M5,2 = {x ε M5,1: x.g = r2.g} = {x ε M5,1: x.g = 9} = {3}
Mh = M7,1 = {x ε M7,0: x.h = r2.h} = {x ε M7,0: x.h = 10} = {1}
Mhk−1 = M3,4 = {x ε M3,3: x.k = y.k for some y ε Mh} = {x ε M3,3: x.k = 1} = {2}









Analogously, the tuple construction counter is increased to t=3; r3=2, and the iterator it3 points to the end of M3,4={2}. Using the attribute value r3.z=12, the next partial specialization Γ3=Γ2(r3) is computed, as in block 845. Substituting the attribute value r3.z=12 into Γ2 leads to:

Γ3=12^2+(R4.z)^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=900

This expression can be simplified to:

Γ3=(R4.z)^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=756


As Γ3 is not identical to False, all matching sets with tail t(M)=3 are computed:









TABLE 14
Computed Matching Sets Depending on (r1 = 3, r2 = 5, r3 = 2)

Mi = M4,3 = {x ε M4,2: x.i = r3.i} = {x ε M4,2: x.i = 16} = {4}
Mj = M6,2 = {x ε M6,1: x.j = r3.j} = {x ε M6,1: x.j = 15} = {1, 5}
Mk = M7,2 = {x ε M7,1: x.k = r3.k} = {x ε M7,1: x.k = 1} = {1}
Mkl−1 = M4,4 = {x ε M4,3: x.l = y.l for some y ε Mk} = {x ε M4,3: x.l = 20} = {4}









Further, the tuple construction counter is increased to t=4; r4=4, and the iterator it4 points to the end of M4,4={4}. Using the attribute value r4.z=12, the next partial specialization Γ4=Γ3(r4) is computed, as in block 845. Substituting the attribute value r4.z=12 into Γ3 leads to:

Γ4=12^2+(R5.z)^2+(R6.z)^2+(R7.z)^2<=756

This expression can be simplified to:

Γ4=(R5.z)^2+(R6.z)^2+(R7.z)^2<=612

As Γ4 is not False, all matching sets with tail t(M)=4 are computed. There is only one such set:

Ml=M7,3={x ε M7,2: x.l=r4.l}={x ε M7,2: x.l=20}={1}.


Further, the tuple construction counter is increased to t=5, r5=3, and the iterator it5 points to the end of M5,2={3}. Using the attribute value r5.z=4, the next partial specialization Γ5=Γ4(r5) is computed, as in block 845. Substituting the attribute value r5.z=4 into Γ4 leads to:

Γ5=4^2+(R6.z)^2+(R7.z)^2<=612

This expression can be simplified to:

Γ5=(R6.z)^2+(R7.z)^2<=596


Since Γ5 is not False and there are no matching sets to compute, step 860 of method 802 is skipped and the algorithm proceeds with increasing the counter t, as in step 865. Thus, the tuple construction counter is increased to t=6, r6=1, and the iterator it6 points to 5 ε M6,2={1, 5}. Using the attribute value r6.z=20, the next partial specialization Γ6=Γ5(r6) is computed. Substituting the attribute value r6.z=20 into Γ5 leads to:

Γ6=20^2+(R7.z)^2<=596

This expression can be simplified to:

Γ6=(R7.z)^2<=196


Since Γ6 is not False and there are no matching sets to compute, step 860 of method 802 is skipped and the algorithm proceeds with increasing the counter t, as in step 865. Thus, the tuple construction counter is increased to t=7, r7=1, and the iterator it7 points to the end of M7,3={1}. Using the attribute value r7.z=14, the next partial specialization Γ7=Γ6(r7) is computed. Substituting the attribute value r7.z=14 into Γ6 leads to: Γ7=14^2<=196, which makes: Γ7=True. According to block 750 of algorithm 700, the value of the counter t is equal to the number N of nodes in the materialization graph MG 510. The first full 7-tuple (r1=3, r2=5, r3=2, r4=4, r5=3, r6=1, r7=1) ε R1×R2×R3×R4×R5×R6×R7 satisfying the complete WHERE clause has been constructed and can be stored or used directly, as in block 859.


However, there may be more than one full 7-tuple satisfying the complete WHERE clause of the join query. The algorithm 700 continues to search for any other possible tuples. At block 855, the tuple construction counter is decreased to t=6, so that all matching sets with tail t(M)<=5 can be reused (which are all matching sets). Thus, the tuple construction counter is t=6; r6=5; and the iterator it6 points to the end of M6,2. Using the attribute value r6.z=21, the next partial specialization Γ6 is computed from the old partial specialization Γ5=(R6.z)^2+(R7.z)^2<=596, which is also reused. Substituting the attribute value r6.z=21 into Γ5 leads to: Γ6=21^2+(R7.z)^2<=596, which is: Γ6=(R7.z)^2<=155.


Similarly, the partial specialization Γ6 is not False and, since there are no matching sets to compute, the algorithm proceeds with increasing the counter t, as in step 865. Again, the tuple construction counter is increased to t=7 and the iterator it7 points to 1 ε M7,3={1}. It should be noted that the matching set M7,3 is also reused without being recomputed: since t(M7,3)=4, the matching set M7,3 depends exactly on (r1=3, r2=5, r3=2, r4=4), which did not change. The tuple construction counter is t=7; r7=1; and the iterator it7 points to the end of M7,3. Using the attribute value r7.z=14, the partial specialization Γ7=Γ6(r7) is computed. Substituting the attribute value r7.z=14 into Γ6 leads to: Γ7=14^2<=155, which makes: Γ7=False. This means that the algorithm reached t=7 for the second time, but Γ7=False implies that the second full 7-tuple (r1=3, r2=5, r3=2, r4=4, r5=3, r6=5, r7=1), although it satisfies all join conditions, does not satisfy the complete WHERE clause. Since the current iterator positions are: it7 pointing to the end of M7,3; it6 pointing to the end of M6,2; it5 pointing to the end of M5,2; it4 pointing to the end of M4,4; it3 pointing to the end of M3,4; it2 pointing to the end of M2,3; and it1 pointing to the end of M1,0, no more 7-tuples can be created. In view of these iterator positions, the algorithm decreases the counter from t=7 all the way down to t=0, causing the algorithm to finally terminate. At this point, the already found 7-tuple (r1=3, r2=5, r3=2, r4=4, r5=3, r6=1, r7=1) is in fact the only 7-tuple satisfying the complete WHERE clause.


It should be noted that in the example describing methods 801 and 802, in most cases the general relation Mn,i ⊆ Mn,i−1 was satisfied with equality and that no matching set was ever found to be empty. This was due to the fact that the previous reduction of the full relations Rn to the reduction results Mn,0 was "perfect". This means that the reduction results Mn,0 were as small as possible: every Mn,0 equaled the projection of the solution of the join conditions alone to Rn. However, a perfect reduction is not possible for all join graphs and all distributions of data. In the current example, the failure of a partial tuple to complete to a full tuple was only caused by some partially specialized operator tree becoming False. However, in other embodiments this could also be caused by some matching set becoming empty.


The algorithm for join tuple assembly by partial specializations (700) handles the assembly phase of the method for join query evaluation by semi-join reduction. It improves that phase with respect to both time and space consumption. The algorithm is applicable to outer joins as well as to inner joins, since the differences in the evaluation procedure derived from the join type occur only in the reduction phase, which is before the join tuple assembly phase. Further, the algorithm does not require flattening the join graph into a tree in the reduction phase, since it shows how to handle cycles in the assembly phase. In an embodiment, the algorithm is suitable for first-k queries and is applicable to pure join queries (Γ=True). Moreover, algorithm 700 is not limited to the method of semi-join reduction: it can be used as a general method for evaluating any join query, as previous reductions accelerate the method but are not a prerequisite; the algorithm stays correct if in place of Mn,0 any set S with Mn,0 ⊆ S ⊆ Rn is taken. Finally, algorithm 700 is well suited to distributed computation. The row IDs in M1,0 can be distributed to distinct processors for possible completion to a join tuple; if |M1,0| is too small for this distribution, the pairs of M1,0×M2,0 can be distributed, and so on.
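
As a sketch of the distribution idea mentioned above, the row IDs in M1,0 might, for example, be partitioned round-robin across workers, each of which would then run the assembly loop restricted to its own share of candidate r1 values; the fixed worker count is an assumption of this sketch.

    #include <cstddef>
    #include <vector>

    using RowId = int;

    // Illustrative round-robin partitioning of the row IDs in M1,0; assumes
    // workerCount > 0. An analogous split could be applied to pairs from
    // M1,0×M2,0 when M1,0 itself is too small.
    std::vector<std::vector<RowId>> partitionFirstLevel(const std::vector<RowId>& m10,
                                                        std::size_t workerCount) {
        std::vector<std::vector<RowId>> shares(workerCount);
        for (std::size_t i = 0; i < m10.size(); ++i)
            shares[i % workerCount].push_back(m10[i]);
        return shares;
    }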


Some embodiments of the invention may include the above-described methods being written as one or more software components. These components, and the functionality associated with each, may be used by client, server, distributed, or peer computer systems. These components may be written in one or more programming languages, such as functional, declarative, procedural, object-oriented, or lower level languages, and the like. They may be linked to other components via various application programming interfaces and then compiled into one complete application for a server or a client. Alternatively, the components may be implemented in server and client applications. Further, these components may be linked together via various distributed programming protocols. Some example embodiments of the invention may include remote procedure calls being used to implement one or more of these components across a distributed programming environment. For example, a logic level may reside on a first computer system that is remotely located from a second computer system containing an interface level (e.g., a graphical user interface). These first and second computer systems can be configured in a server-client, peer-to-peer, or some other configuration. The clients can vary in complexity from mobile and handheld devices, to thin clients, and on to thick clients or even other servers.


The above-illustrated software components are tangibly stored on a computer readable storage medium as instructions. The term “computer readable storage medium” should be taken to include a single medium or multiple media that stores one or more sets of instructions. The term “computer readable storage medium” should be taken to include any physical article that is capable of undergoing a set of physical changes to physically store, encode, or otherwise carry a set of instructions for execution by a computer system which causes the computer system to perform any of the methods or process steps described, represented, or illustrated herein. Examples of computer readable storage media include, but are not limited to: magnetic media, such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs, DVDs and holographic devices; magneto-optical media; and hardware devices that are specially configured to store and execute, such as application-specific integrated circuits (“ASICs”), programmable logic devices (“PLDs”) and ROM and RAM devices. Examples of computer readable instructions include machine code, such as produced by a compiler, and files containing higher-level code that are executed by a computer using an interpreter. For example, an embodiment of the invention may be implemented using Java, C++, or other object-oriented programming language and development tools. Another embodiment of the invention may be implemented in hard-wired circuitry in place of, or in combination with machine readable software instructions.



FIG. 9 is a block diagram illustrating an exemplary computer system 900. The computer system 900 includes a processor 905 that executes software instructions or code stored on a computer readable storage medium 955 to perform the above-illustrated methods of the invention. The computer system 900 includes a media reader 940 to read the instructions from the computer readable storage medium 955 and store the instructions in storage 910 or in random access memory (RAM) 915. The storage 910 provides a large space for keeping static data where at least some instructions could be stored for later execution. The stored instructions may be further compiled to generate other representations of the instructions and dynamically stored in the RAM 915. The processor 905 reads instructions from the RAM 915 and performs actions as instructed. According to one embodiment of the invention, the computer system 900 further includes an output device 925 (e.g., a display) to provide at least some of the results of the execution as output including, but not limited to, visual information to users and an input device 930 to provide a user or another device with means for entering data and/or otherwise interacting with the computer system 900. Each of these output 925 and input devices 930 could be joined by one or more additional peripherals to further expand the capabilities of the computer system 900. A network communicator 935 may be provided to connect the computer system 900 to a network 950 and in turn to other devices connected to the network 950 including other clients, servers, data stores, and interfaces, for instance. The modules of the computer system 900 are interconnected via a bus 945. Computer system 900 includes a data source interface 920 to access data source 960. The data source 960 can be accessed via one or more abstraction layers implemented in hardware or software. For example, the data source 960 may be accessed via network 950. In some embodiments, the data source 960 may be accessed via an abstraction layer, such as a semantic layer.


A data source 960 is an information resource. Data sources include sources of data that enable data storage and retrieval. Data sources may include databases, such as, relational, transactional, hierarchical, multi-dimensional (e.g., OLAP), object oriented databases, and the like. Further data sources include tabular data (e.g., spreadsheets, delimited text files), data tagged with a markup language (e.g., XML data), transactional data, unstructured data (e.g., text files, screen scrapings), hierarchical data (e.g., data in a file system, XML data), files, a plurality of reports, and any other data source accessible through an established protocol, such as, Open DataBase Connectivity (ODBC), produced by an underlying software system (e.g., ERP system), and the like. Data sources may also include a data source where the data is not tangibly stored or otherwise ephemeral such as data streams, broadcast data, and the like. These data sources can include associated data foundations, semantic layers, management systems, security systems and so on.


In the above description, numerous specific details are set forth to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, techniques, etc. In other instances, well-known operations or structures are not shown or described in detail to avoid obscuring aspects of the invention.


Although the processes illustrated and described herein include series of steps, it will be appreciated that the different embodiments of the present invention are not limited by the illustrated ordering of steps, as some steps may occur in different orders and some concurrently with other steps, apart from that shown and described herein. In addition, not all illustrated steps may be required to implement a methodology in accordance with the present invention. Moreover, it will be appreciated that the processes may be implemented in association with the apparatus and systems illustrated and described herein as well as in association with other systems not illustrated.


The above descriptions and illustrations of embodiments of the invention, including what is described in the Abstract, are not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. These modifications can be made to the invention in light of the above detailed description. Rather, the scope of the invention is to be determined by the following claims, which are to be interpreted in accordance with established doctrines of claim construction.

Claims
  • 1. An article of manufacture including a non-transitory computer readable storage medium to tangibly store instructions, which when executed by a computer, cause the computer to: receive a join query, a materialization graph representing a join part of the join query, and a plurality of matching sets;configure a tuple construction counter, which value indicates progression in an overall length of constructing a join tuple and an iterator that traverses a matching set from the plurality of matching sets;upon determining that the value of the tuple construction counter is a positive integer: (a) compute a partial specialization of an operator tree, wherein the operator tree represents a non-join part of the join query;(b) when the computed partial specialization satisfies the non-join part of the join query and the value of the tuple construction counter is less than the overall length of the join tuple to be constructed, recompute a subset of the plurality of matching sets;(c) when no empty matching set is encountered during recomputation, increase the value of the tuple construction counter by one count;(d) when the value of the tuple construction counter is equal to the overall length of the join tuple, identify a first join tuple satisfying the join part and the non-join part of the join query; and(e) when the computed partial specialization does not satisfy the non-join part of the join query or an empty matching set is encountered, decrease the value of the tuple construction counter by one or more counts; andcontinue steps (a) to (e) until the value of the tuple construction counter is equal to zero, wherein all join tuples satisfying the join part and the non-join part of the join query are identified when the value of the tuple construction counter is equal to zero.
  • 2. The article of manufacture of claim 1, wherein the first join tuple is used directly in a functionality or stored in a storage unit.
  • 3. The article of manufacture of claim 1, wherein the subset of the plurality of matching sets is recomputed with a tail function equal to the value of the tuple construction counter.
  • 4. The article of manufacture of claim 1, wherein the subset of the plurality of matching sets represents reduction results derived from semi-join reduction of a plurality of relations.
  • 5. The article of manufacture of claim 1, wherein the join part and the non-join part of the join query represent a plurality of Boolean conditions.
  • 6. A computer-implemented method comprising: receiving a join query, a materialization graph representing a join part of the join query, and a plurality of matching sets;configuring, using a computing processor, a tuple construction counter, which value indicates progression in an overall length of constructing a join tuple and an iterator that traverses through elements in a matching set from the plurality of matching sets;upon determining, using the computing processor, that the value of the tuple construction counter is a positive integer: (a) computing a partial specialization of an operator tree, wherein the operator tree represents a non-join part of the join query;(b) when the computed partial specialization satisfies the non-join part of the join query and the value of the tuple construction counter is less than the overall length of the join tuple to be constructed, recomputing a subset of the plurality of matching sets;(c) when no empty matching set is encountered during recomputation, increasing the value of the tuple construction counter by one count;(d) when the value of the tuple construction counter is equal to the overall length of the join tuple, identifying a first join tuple satisfying the join part and the non-join part of the join query; and(e) when the computed partial specialization does not satisfy the non-join part of the join query or an empty matching set is encountered, decreasing the value of the tuple construction counter; andcontinuing steps (a) to (e) until the value of the tuple construction counter is equal to zero, wherein all join tuples satisfying the join part and the non-join part of the join query are identified when the value of the tuple construction counter is equal to zero.
  • 7. The method of claim 6, wherein the first join tuple is used directly in a functionality or stored in a storage unit.
  • 8. The method of claim 6, wherein the subset of the plurality of matching sets is recomputed with a tail function equal to the value of the tuple construction counter.
  • 9. The method of claim 6, wherein the subset of the plurality of matching sets represent reduction results derived from semi-join reduction of a plurality of relations.
  • 10. The method of claim 6, wherein the join part and the non-join part of the join query represent a plurality of Boolean conditions.
  • 11. A computing system comprising: a database storage unit for storing one or more of a plurality of matching sets derived from semi-join reduction of a plurality of relations, a join query, and a materialization graph representing a join part of the join query;a processor in communication with the database storage unit that executes instructions including: configuring a tuple construction counter, which value indicates progression in an overall length of constructing a join tuple and an iterator that traverses through elements in a matching set from the plurality of matching sets;upon determining that the value of the tuple construction counter is a positive integer: (a) computing a partial specialization of an operator tree, wherein the operator tree represents a non-join part of the join query;(b) when the computed partial specialization satisfies the non-join part of the join query and the value of the tuple construction counter is less than the overall length of the join tuple to be constructed, recomputing a subset of the plurality of matching sets;(c) when no empty matching set is encountered during recomputation, increasing the value of the tuple construction counter by one count;(d) when the value of the tuple construction counter is equal to the overall length of the join tuple, identifying a first join tuple satisfying the join part and the non-join part of the join query; and(e) when the computed partial specialization does not satisfy the non-join part of the join query or an empty matching set is encountered, decreasing the value of the tuple construction counter; andcontinuing steps (a) to (e) until the value of the tuple construction counter is equal to zero, wherein all join tuples satisfying the join part and the non-join part of the join query are identified when the value of the tuple construction counter is equal to zero.
  • 12. The computing system of claim 11, wherein the first join tuple is used directly in a functionality or stored in a storage unit.
  • 13. The computing system of claim 11, wherein the subset of the plurality of matching sets is recomputed with a tail function equal to the value of the tuple construction counter.
  • 14. The computing system of claim 11, wherein the subset of the plurality of matching sets represent reduction results derived from semi-join reduction of a plurality of relations.
  • 15. The computing system of claim 11, wherein the join part and the non-join part of the join query represent a plurality of Boolean conditions.
US Referenced Citations (13)
Number Name Date Kind
5345585 Iyer et al. Sep 1994 A
6439783 Antoshenkov Aug 2002 B1
6496819 Bello et al. Dec 2002 B1
8086598 Lamb et al. Dec 2011 B1
20080027904 Hill et al. Jan 2008 A1
20080033914 Cherniack et al. Feb 2008 A1
20080189239 Bawa et al. Aug 2008 A1
20080294615 Furuya et al. Nov 2008 A1
20090070313 Beyer et al. Mar 2009 A1
20100306189 Kim et al. Dec 2010 A1
20110119245 Sargeant et al. May 2011 A1
20110131199 Simon et al. Jun 2011 A1
20110161310 Tang et al. Jun 2011 A1
Related Publications (1)
Number Date Country
20110289069 A1 Nov 2011 US