A database is a collection of stored data that is logically related and that is accessible by one or more users or applications. A popular type of database is the relational database management system (RDBMS), which includes relational tables, also referred to as relations, made up of rows and columns (also referred to as tuples and attributes). Each row represents an occurrence of an entity defined by a table, with an entity being a person, place, thing, or other object about which the table contains information.
One of the goals of a database management system is to optimize the performance of queries for access and manipulation of data stored in the database. Given a target environment, an optimal query plan is selected, with the optimal query plan being the one with the lowest cost (e.g., response time) as determined by an optimizer. The response time is the amount of time it takes to complete the execution of a query on a given system.
In massively parallel processing (MPP) systems, dealing with data skew in parallel joins is critical to the performance of many applications. As is understood, a join comprises a structured query language (SQL) operation that combines records from two or more tables. Contemporary parallel database systems provide for the distribution of data to different parallel processing units, e.g., Access Module Processors (AMPs), by utilizing hash redistribution mechanisms. As referred to herein, hash redistribution comprises generating a hash value of, for example, column or index values of a table and redistributing the corresponding rows to processing modules based on the hash values. Then, when joining two relations, e.g., relations “R” and “S”, by join conditions such as R.a=S.b, rows in both tables with the same join column values need to be relocated to the same processing unit in order to evaluate the join condition. To achieve this, contemporary systems typically implement one of two options.
Assume R and S are partitioned across various processing units and that neither R.a nor S.b is the primary index, i.e., the column whose values are hashed to distribute the base table rows to the processing units. The MPP optimizer may hash redistribute rows of R on R.a and hash redistribute rows of S on S.b. By using the same hash function, rows with the same join column values are ensured to be redistributed to the same processing unit. The optimizer will then choose the best join method in the local processing unit, e.g., based on collected statistics or other criteria. Such a parallel join mechanism is referred to herein as redistribution.
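For illustration, the hash redistribution step may be sketched as follows in Python. The four-unit configuration, the particular hash function, and the dictionary row format are assumptions made for the sketch, not details of any actual system.

    import hashlib

    NUM_UNITS = 4  # illustrative number of parallel processing units

    def unit_for_value(join_value):
        """Map a join-column value to a processing unit via a stable hash."""
        digest = hashlib.md5(repr(join_value).encode()).hexdigest()
        return int(digest, 16) % NUM_UNITS

    def hash_redistribute(rows, join_column):
        """Bucket rows by the hash of their join-column value.

        Applying the same hash function to R.a and S.b sends rows with
        equal join values to the same unit, so each unit can evaluate
        the join condition R.a = S.b locally.
        """
        buckets = {unit: [] for unit in range(NUM_UNITS)}
        for row in rows:
            buckets[unit_for_value(row[join_column])].append(row)
        return buckets

    # Rows of R hashed on column "a", rows of S hashed on column "b":
    r_buckets = hash_redistribute([{"a": 7, "x": 1}, {"a": 9, "x": 2}], "a")
    s_buckets = hash_redistribute([{"b": 7, "y": 3}, {"b": 9, "y": 4}], "b")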
Redistribution is typically efficient when the rows are sufficiently evenly distributed among the processing units. However, consider the case where there is highly skewed data in column R.a and/or S.b. In this situation, one processing unit will carry an excessive load with respect to the other processing units involved in the join operation; such a unit is referred to herein as a hot processing unit. Consequently, system performance is degraded, and the skew may result in an "out of spool space" error on the hot processing unit, which may cause, for example, queries to abort after hours of operation in large data warehouses.
Alternatively, the optimizer may choose to duplicate the rows of one relation among the processing units. For example, assume the relation R is much larger than the relation S. In such a situation, the rows of R may be maintained locally at each processing unit where R resides, and the rows of S are duplicated among each of the processing units. Such a mechanism is referred to as table duplication. By this mechanism, rows with the same join column values will be located at the same processing unit, thereby allowing completion of the parallel join operation. However, efficient performance utilizing a duplication mechanism requires that one relation be sufficiently small to allow for duplication on all the parallel units.
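A corresponding sketch of the duplication alternative follows; the nested-loop matching below merely stands in for whichever local join method the optimizer would actually select.

    def duplication_join(r_rows_per_unit, s_rows):
        """Duplication-based parallel join sketch.

        Rows of the larger relation R stay on the units where they
        already reside; every unit receives a complete copy of S.
        """
        results = []
        for local_r in r_rows_per_unit:  # one list of R rows per unit
            # With all of S present locally, every match for the local
            # R rows can be found without moving R at all.
            for r_row in local_r:
                for s_row in s_rows:
                    if r_row["a"] == s_row["b"]:
                        results.append((r_row, s_row))
        return results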
Further, partial redistribution, partial duplication (PRPD) mechanisms have been proposed for an optimizer to use when joining large tables where data skew is known and the skewed data values are also known. However, PRPD routines are disadvantageous when one relation is not significantly larger than the other relation.
Disclosed embodiments provide a system, method, and computer readable medium for optimizing join operations in a parallel processing system. A respective set of rows of a first table and a second table involved in a join operation are distributed to each of a plurality of processing modules. The join operation comprises a join on a first column of the first table and a second column of the second table. Each of the plurality of processing modules redistributes at least a portion of the rows of the first table distributed thereto substantially equally among the other processing modules and duplicates at least a portion of the rows of the second table distributed thereto among the plurality of processing modules. The disclosed optimization mechanisms provide for reduced spool space requirements for execution of the parallel join operation.
Aspects of the present disclosure are best understood from the following detailed description when read with the accompanying figures.
It is to be understood that the following disclosure provides many different embodiments or examples for implementing different features of various embodiments. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting.
As shown, the database system 100 includes one or more processing modules 105_1 . . . 105_Y that manage the storage and retrieval of data in data-storage facilities 110_1 . . . 110_Y. Each of the processing modules may host one or more processing units, such as one or more AMPs. Each of the processing modules 105_1 . . . 105_Y manages a portion of a database that is stored in a corresponding one of the data-storage facilities 110_1 . . . 110_Y. Each of the data-storage facilities 110_1 . . . 110_Y includes one or more disk drives or other storage media.
The system stores data in one or more tables in the data-storage facilities 110_1 . . . 110_Y. The rows 115_1 . . . 115_Z of the tables are stored across multiple data-storage facilities 110_1 . . . 110_Y to ensure that the system workload is distributed evenly across the processing modules 105_1 . . . 105_Y. A parsing engine 120 organizes the storage of data and the distribution of table rows 115_1 . . . 115_Z among the processing modules 105_1 . . . 105_Y and accesses the processing modules 105_1 . . . 105_Y via an interconnect 130. The parsing engine 120 also coordinates the retrieval of data from the data-storage facilities 110_1 . . . 110_Y in response to queries received from a user, such as one using a client computer system 135 connected to the database system 100 through a network connection 125. The parsing engine 120, on receiving an incoming database query, applies an optimizer component 122 to the query to assess the best plan for execution of the query. Selecting the optimal query-execution plan includes, among other things, identifying which of the processing modules 105_1 . . . 105_Y are involved in executing the query and which database tables are involved in the query, as well as choosing which data-manipulation techniques will serve best in satisfying the conditions of the query. Database statistics are used in making these assessments during construction of the query-execution plan. For example, database statistics may be used by the optimizer to determine data demographics, such as attribute minimum and maximum values and data ranges of the database. The database system typically receives queries in a standard format, such as the Structured Query Language (SQL) put forth by the American National Standards Institute (ANSI).
SELECT * FROM TableR, TableS WHERE TableR.a = TableS.b
In the present example, column a elements of TableR are designated 231_1-231_24 and column b elements of TableS are designated 232_1-232_24. Assume that the rows of TableR and TableS are distributed among AMPs 210_1-210_9 via a hash of the primary indexes of TableR and TableS and that the primary indexes include neither column a of TableR nor column b of TableS. In this situation, the rows may be redistributed by hashing the join columns and redistributing the rows based on the hash values such that rows from TableR and TableS that match on the join columns TableR.a and TableS.b are redistributed to the same AMPs.
Redistribution of tables for execution of a parallel join is efficient if data skew is small. However, numerous problems may be encountered in the event that there is data skew in a column on which a join is to be performed, as discussed hereinabove.
Table duplication provides an alternative mechanism for executing a parallel join. Suppose relation R is much larger than relation S. In this instance, all rows of the larger table, R, are maintained locally at each AMP to which the table R rows are originally allocated, while all rows of the smaller table, S, are duplicated among all processing modules. By this mechanism, rows with the same join column value will be located at the same processing module to complete the join. However, the duplication mechanism is inefficient when one table is not significantly larger than the other table.
For illustrative purposes, assume a join of relations R and S that are partitioned among four processing modules is to be performed with the join condition R.a=S.b. Further assume that R and S each have 24 million rows of equal size, and that R and S are not evenly distributed among the four processing modules, as depicted by the diagrammatic representation of an exemplary relation distribution 400 of FIG. 4.
For illustrative purposes, assume each AMP has a spool limit of 32 million rows and that rows of R and S comprise a common row size. Note that spool limits are typically specified in bytes or another data size; the examples herein of spool limits expressed as a number of rows are chosen only to facilitate an understanding of the disclosed embodiments. Suppose there is data skew in the relations R and S such that, if redistribution is chosen for the join operation, a certain AMP will receive more than 32 million rows as a result of the redistribution. In this instance, an out of spool space error would occur. Even if the spool size limit were sufficient, the hot AMP would require a large amount of time to process its share, while other AMPs might finish much earlier and sit idle while waiting for the hot AMP to complete processing. As a result, the duplication mechanism must be chosen. However, because R and S each have 24M rows in total, there is no difference whether R or S is chosen for duplication. If S is chosen for duplication, the AMP demographics 500 depicted in the diagrammatic representation of FIG. 5 result.
The problems of such a duplication mechanism are numerous. First, each of the AMPs 410_1 and 410_3 that now holds 34M rows exceeds the 32M-row spool space limit and thus runs out of spool space. Second, even if a larger spool space were allocated for the AMPs, AMPs 410_2 and 410_4 may finish processing much earlier than AMPs 410_1 and 410_3, thus resulting in inefficient utilization of the parallel resources.
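The arithmetic behind this example can be checked with a short sketch. The per-AMP placement of R's rows below is hypothetical, chosen only to be consistent with the 24M totals and the 34M hot-AMP figure given in the text, since the exact distribution comes from a figure not reproduced here.

    SPOOL_LIMIT = 32_000_000            # rows, per the example
    S_DUPLICATED = 24_000_000           # every AMP receives all of S
    # Hypothetical skewed placement of R's 24M rows across four AMPs:
    R_LOCAL = [10_000_000, 2_000_000, 10_000_000, 2_000_000]

    for amp, r_rows in enumerate(R_LOCAL, start=1):
        total = r_rows + S_DUPLICATED   # local R rows plus full copy of S
        verdict = "exceeds" if total > SPOOL_LIMIT else "within"
        print(f"AMP {amp}: {total:,} rows ({verdict} the spool limit)")
    # AMPs 1 and 3 end up with 34M rows, over the 32M limit, while
    # AMPs 2 and 4 hold 26M rows and would finish far earlier.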
In general, duplication is inefficient when one table is not significantly larger than the other table involved in a join operation. In accordance with disclosed embodiments, mechanisms are provided for optimization of parallel join operations involving two large skewed relations of equal or close sizes.
In an embodiment, a mechanism for optimizing a join operation in a parallel processing system is provided by redistributing one relation substantially equally among a plurality of processing modules. Consider the relation R and S demographics depicted in FIG. 4.
Assume two relations R and S are to be joined by the condition R.a=S.b, that relations R and S are of similar sizes, and that relations R and S are distributed to m AMPs or other processing modules. Further assume that the row sizes of R and S are RS_R and RS_S, respectively, and that the spool size limit for each AMP is S_p, of which j space is reserved for join result spools. The traditional duplication method is unsuitable when:
$\mathrm{Max}_n(R) \cdot RS_R + |S| \cdot RS_S > S_p - j$  (eq. 1)
where Max_n(R) is the maximum number of rows of R on any particular AMP and |S| is the total number of rows of the relation S. In the above example, the traditional duplication method is unsuitable because the sum of the maximum size of the rows of relation R redistributed to one of the AMPs (Max_n(R) * RS_R) and the size of the duplicated relation S (|S| * RS_S) exceeds the spool space usable for receipt of redistributed and duplicated rows, that is, S_p - j.
In such a situation, the above implementation may be utilized whereby one relation is redistributed substantially equally and the other relation is duplicated if the following condition exists:
$\frac{|R|}{m} \cdot RS_R + |S| \cdot RS_S < S_p - j$  (eq. 2)
That is, if the sum of the size (|R|/m * RS_R) of the relation R redistributed substantially equally among m processing modules and the complete size (|S| * RS_S) of the relation S is less than the spool space usable for storage of redistributed and duplicated rows, then the described mechanism of redistributing one relation substantially equally among the m processing modules and duplicating the other relation may be applied.
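The two conditions translate directly into code. The following predicates are an illustrative rendering of eqs. 1 and 2, with parameter names following the text:

    def duplication_unsuitable(max_r_rows_on_amp, s_rows, rs_r, rs_s, sp, j):
        """Eq. 1: traditional duplication overflows the usable spool when
        Max_n(R)*RS_R + |S|*RS_S > Sp - j."""
        return max_r_rows_on_amp * rs_r + s_rows * rs_s > sp - j

    def equal_redistribution_applicable(r_rows, s_rows, m, rs_r, rs_s, sp, j):
        """Eq. 2: redistributing R substantially equally over m modules and
        duplicating S fits when |R|/m*RS_R + |S|*RS_S < Sp - j."""
        return (r_rows / m) * rs_r + s_rows * rs_s < sp - j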
To provide close equality in the redistribution of relation R among the m AMPs, a random composite partition key (a, k) may be selected where k is a random number, and the relation R rows may be redistributed on the hash value of the key. Alternatively, rows of relation R may be redistributed on a key (a, C) where C comprises some other column(s) of R and the key (a, C) can assure good redistribution when hashed. In still another embodiment, the composite key (a, C) may be value partitioned if its values are substantially uniform.
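A sketch of the first of these options follows: hashing the composite key (a, k), where k is random, scatters even a single heavily skewed value of a across all units. This is only permissible because the other relation is duplicated in full, so matching rows no longer need to be co-located by join value.

    import hashlib
    import random

    NUM_UNITS = 4  # illustrative

    def unit_for_key(key):
        digest = hashlib.md5(repr(key).encode()).hexdigest()
        return int(digest, 16) % NUM_UNITS

    def redistribute_substantially_equally(rows, join_column):
        """Spread rows across units regardless of join-column skew by
        hashing the composite key (a, k) with a random component k."""
        buckets = {unit: [] for unit in range(NUM_UNITS)}
        for row in rows:
            k = random.randrange(1_000_000)  # random component of the key
            buckets[unit_for_key((row[join_column], k))].append(row)
        return buckets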
After duplication of the relation S and substantially equal redistribution of the relation R rows, R and S are locally joined on each AMP in parallel among the m AMPs and the parallel join may then be completed (step 710). The optimized join routine cycle may then end (step 712).
In accordance with another embodiment, an optimized join mechanism is provided that accommodates tighter spool space restrictions. Again consider a join of two relations R and S with the demographics depicted in FIG. 4.
In accordance with an embodiment, each relation R and S on each AMP is divided into two subsets (designated R1, R2, S1, and S2). Assume the relations R and S are divided into two subsets with the demographics 800 depicted in FIG. 8.
After dividing the relations R and S into subsets, the rows of the subsets R1 of the respective AMPs 410_1-410_4 are redistributed substantially equally among all of the AMPs 410_1-410_4, and all rows of the subsets R2 of the respective AMPs 410_1-410_4 are duplicated among all the AMPs. To achieve a substantially equal redistribution of the subset R1 rows among the AMPs, the same options as those described above are available. In a similar manner, the rows of the subsets S1 are redistributed substantially equally among the AMPs 410_1-410_4, and the rows of the subsets S2 are duplicated among the AMPs. On receipt of rows of a subset R1 redistributed to a particular AMP, the receiving AMP stores the redistributed rows in a spool (illustratively designated SpoolRredis) allocated for receipt of rows of the relation R redistributed thereto. In a similar manner, on receipt of rows of the subset R2 that have been duplicated to an AMP, the AMP stores the received duplicated rows of the relation R in a spool (illustratively designated SpoolRdup) allocated for duplicated rows of the relation R. Likewise, redistributed rows of the subset S1 that are received by an AMP are stored in a spool (illustratively designated SpoolSredis) allocated for receipt of relation S redistributed rows, and rows of the subset S2 that are received by an AMP as a result of the duplication of the subset S2 are stored in a spool (SpoolSdup) allocated therefor. In the present example, the demographics 900 depicted in FIG. 9 result.
After redistribution of the rows of the subsets R1 and S1 and duplication of the rows of the subsets R2 and S2, local joins are performed on each AMP. Specifically, the rows of the spool SpoolRredis are joined with the rows of the spool SpoolSdup at each AMP, and the rows of the spool SpoolRdup are joined with the rows of the spool SpoolSredis at each AMP. Further, one AMP is selected, and the rows of the spool SpoolRdup are joined with the rows of SpoolSdup at the selected AMP only, so that the same matches are not produced at every AMP. At that point, all rows from the spools SpoolRdup and SpoolSdup have been utilized for the operation, and the SpoolRdup and SpoolSdup rows are then removed from the spool space. The one remaining combination, the rows of SpoolRredis with the rows of SpoolSredis, is completed as described below. The exemplary cardinality 1000 depicted in FIG. 10 results from these local joins.
Each AMP, on receipt of rows of the subsets R1 and S1 redistributed thereto, places the redistributed rows into a respective spool SpoolRredis and SpoolSredis (step 1118). Likewise, each AMP, on receipt of rows of the subsets R2 and S2 duplicated thereto, places the duplicated rows into a respective spool SpoolRdup and SpoolSdup (step 1120).
When all rows have been redistributed or duplicated among the AMPs, each AMP then joins the rows of the AMP's respective spools SpoolRredis and SpoolSdup (step 1122) and joins the rows of the AMP's respective spools SpoolSredis and SpoolRdup (step 1124). One AMP is then selected for joining the rows of that AMP's spools SpoolRdup and SpoolSdup (step 1126). The rows in each AMP's respective spools SpoolRdup and SpoolSdup are then removed (step 1128). An evaluation may then be made to determine whether the remaining join of the redistributed spools may be completed according to a traditional redistribution or duplication routine (step 1130). If the allowed spool space is still insufficient, the routine may return to partition the rows of R at each AMP into two subsets according to step 1106. If the spool space is sufficient for a traditional redistribution or duplication parallel join operation, a traditional redistribution or duplication of the rows is performed, followed by local joins and completion of the parallel join (step 1132). The optimized join routine cycle may then end (step 1134).
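The local join phase of steps 1122 through 1128 may be sketched as follows. The spool names mirror those in the text; the list-of-dictionaries AMP structure and the equality predicate on R.a = S.b are assumptions of the sketch.

    def join(r_rows, s_rows):
        """Stand-in for the local join method chosen by the optimizer."""
        return [(r, s) for r in r_rows for s in s_rows if r["a"] == s["b"]]

    def local_join_phase(amps, selected_amp=0):
        """Joins performed on each AMP after the subset redistribution
        and duplication of R1, R2, S1, and S2 (steps 1122-1128)."""
        results = []
        for i, amp in enumerate(amps):
            # Redistributed R1 rows meet the complete copy of S2 ...
            results += join(amp["spool_r_redis"], amp["spool_s_dup"])
            # ... and the complete copy of R2 meets redistributed S1 rows.
            results += join(amp["spool_r_dup"], amp["spool_s_redis"])
            if i == selected_amp:
                # Duplicated-with-duplicated is joined once, on a single
                # AMP, so the same matches are not produced m times.
                results += join(amp["spool_r_dup"], amp["spool_s_dup"])
        for amp in amps:
            # R2 and S2 are now fully consumed; release their spool space.
            amp["spool_r_dup"] = amp["spool_s_dup"] = []
        # SpoolRredis joined with SpoolSredis remains outstanding and is
        # completed by a traditional pass or by recursion (steps 1130-1132).
        return results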
The partitioning subroutine is invoked (step 1202), and an idealized row count, R_c, of rows of table R per AMP after redistribution and duplication of table R is calculated (step 1204). For example, R_c may be calculated as follows:
$R_c = \frac{|R| \cdot RS_R}{|R| \cdot RS_R + |S| \cdot RS_S} \cdot \frac{S_p}{RS_R}$  (eq. 3)
where |R| is the total number of rows in the table R, RS_R is the size of a table R row, e.g., in bytes, |S| is the total number of rows of the table S, RS_S is the size of a table S row, and S_p comprises the spool size limit on each AMP. In the present example, assume S_p comprises the spool space that may be used for rows of R and S, i.e., that S_p does not include spool space necessary for local join results. Thus, R_c is calculated as the proportion of the table R size relative to the total size of both tables R and S, multiplied by the allowable spool space and divided by the table R row size.
Next, the partitioning subroutine calculates the total number of rows of R, R_r, to be redistributed (step 1206) and the total number of rows of R, R_d, to be duplicated (step 1208). The two subsets together account for every row of R:
$R_r + R_d = |R|$  (eq. 4)
Thus, the idealized row count of R per AMP may be defined as follows:
$R_c = \frac{R_r}{m} + R_d$  (eq. 5)
where m is the number of AMPs.
Accordingly, equations 3-5 may be solved to yield the following:
$R_r = \frac{m}{m-1} \left( |R| - \frac{|R| \cdot S_p}{|R| \cdot RS_R + |S| \cdot RS_S} \right)$  (eq. 6)
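The algebra behind eq. 6 takes only a few lines: substituting eq. 4 into eq. 5 and solving for R_r gives

\begin{align*}
R_c &= \frac{R_r}{m} + (|R| - R_r) = |R| - \frac{m-1}{m} R_r, \\
R_r &= \frac{m}{m-1}\left(|R| - R_c\right),
\end{align*}

and, because eq. 3 simplifies to $R_c = |R| \cdot S_p / (|R| \cdot RS_R + |S| \cdot RS_S)$ once the factor RS_R cancels, substituting for R_c yields eq. 6.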
The idealized number of rows of R to be duplicated may be expressed simply as:
$R_d = |R| - R_r$  (eq. 7)
The partitioning subroutine may then calculate the number of rows of R, R_r(i), to be redistributed by each AMP(i) (step 1210) and the number of rows of R, R_d(i), to be duplicated by each AMP(i) (step 1212), and the partitioning subroutine cycle may then end (step 1214). For example, R_r(i) and R_d(i) for a particular AMP(i) may be calculated as follows:
$R_r(i) = \frac{R_a}{|R|} \cdot R_r$  (eq. 8)

$R_d(i) = \frac{R_a}{|R|} \cdot R_d$  (eq. 9)

where R_a is the number of rows of R residing on AMP(i), so that each AMP contributes to the redistributed and duplicated subsets in proportion to its local share of R.
Equations 6 and 8 may then be solved for R_r(i) as follows:
$R_r(i) = \frac{m}{m-1} \cdot R_a \cdot \left( 1 - \frac{S_p}{|R| \cdot RS_R + |S| \cdot RS_S} \right)$  (eq. 10)
Likewise, equations 7 and 9 may be solved for R_d(i) as follows:
$R_d(i) = \frac{1}{m-1} \cdot R_a \cdot \left( \frac{m \cdot S_p}{|R| \cdot RS_R + |S| \cdot RS_S} - 1 \right)$  (eq. 11)
In this manner, the number of rows of the relation R to be redistributed, i.e., the number of rows of relation R allocated to the subset R1, is calculated according to equation 10 for each AMP(i), and the number of rows to be duplicated, i.e., the number of rows of relation R allocated to the subset R2, is calculated according to equation 11 for each AMP(i). It should be understood that the partitioning subroutine of FIG. 12, while described with reference to the relation R, may be applied in a similar manner to divide the relation S into the subsets S1 and S2.
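Equations 10 and 11 reduce to a short per-AMP computation. The function below is an illustrative sketch; following the discussion of eq. 3, Sp is taken to be the spool space usable for rows of R and S.

    def partition_r_on_amp(r_rows, s_rows, rs_r, rs_s, sp, m, r_a):
        """Split this AMP's R rows into a redistributed subset R1 and a
        duplicated subset R2 per eqs. 10 and 11.

        r_rows, s_rows -- total row counts |R| and |S|
        rs_r, rs_s     -- row sizes RS_R and RS_S, e.g., in bytes
        sp             -- per-AMP spool space usable for rows of R and S
        m              -- number of AMPs
        r_a            -- rows of R currently residing on this AMP (R_a)
        """
        total_size = r_rows * rs_r + s_rows * rs_s
        rr_i = m / (m - 1) * r_a * (1 - sp / total_size)        # eq. 10
        rd_i = 1 / (m - 1) * r_a * (m * sp / total_size - 1)    # eq. 11
        return round(rr_i), round(rd_i)

    # Running example: 24M rows each of unit row size, Sp of 32M
    # row-equivalents, four AMPs, and an AMP holding 10M rows of R:
    print(partition_r_on_amp(24_000_000, 24_000_000, 1, 1, 32_000_000, 4,
                             10_000_000))
    # -> roughly 4.44M rows to redistribute and 5.56M rows to duplicate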
As described, a method, computer-readable medium, and system that facilitate duplication optimization for parallel join operations are provided. A respective set of rows of a first table and a second table involved in a join operation are distributed to each of a plurality of processing modules. The join operation comprises a join on a first column of the first table and a second column of the second table. Each of the plurality of processing modules redistributes at least a portion of the rows of the first table distributed thereto substantially equally among the other processing modules and duplicates at least a portion of the rows of the second table distributed thereto among the plurality of processing modules. The disclosed optimization mechanisms provide for reduced spool space requirements for execution of the parallel join operation.
The flowcharts of FIGS. 7, 11, and 12 depict process serialization to facilitate an understanding of disclosed embodiments and are not necessarily indicative of the serialization of the operations being performed.
The illustrative block diagrams and flowcharts depict process steps or blocks that may represent modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps in the process. Although the particular examples illustrate specific process steps or procedures, many alternative implementations are possible and may be made by simple design choice. Some process steps may be executed in different order from the specific description herein based on, for example, considerations of function, purpose, conformance to standard, legacy structure, user interface design, and the like.
Aspects of the disclosed embodiments may be implemented in software, hardware, firmware, or a combination thereof. The various elements of the system, either individually or in combination, may be implemented as a computer program product tangibly embodied in a machine-readable storage device for execution by a processing unit. Various steps of embodiments may be performed by a computer processor executing a program tangibly embodied on a computer-readable medium to perform functions by operating on input and generating output. The computer-readable medium may be, for example, a memory, a transportable medium such as a compact disk, a floppy disk, or a diskette, such that a computer program embodying aspects of the disclosed embodiments can be loaded onto a computer. The computer program is not limited to any particular embodiment, and may, for example, be implemented in an operating system, application program, foreground or background process, or any combination thereof, executing on a single processor or multiple processors. Additionally, various steps of embodiments may provide one or more data structures generated, produced, received, or otherwise implemented on a computer-readable medium, such as a memory.
Although disclosed embodiments have been illustrated in the accompanying drawings and described in the foregoing description, it will be understood that embodiments are not limited to the disclosed examples, but are capable of numerous rearrangements, modifications, and substitutions without departing from the disclosed embodiments as set forth and defined by the following claims. For example, the capabilities of the disclosed embodiments can be performed fully and/or partially by one or more of the blocks, modules, processors or memories. Also, these capabilities may be performed in the current manner or in a distributed manner and on, or via, any device able to provide and/or receive information. Still further, although depicted in a particular manner, a greater or lesser number of modules and connections can be utilized with the present disclosure in order to accomplish embodiments, to provide additional known features to present embodiments, and/or to make disclosed embodiments more efficient. Also, the information sent between various modules can be sent between the modules via at least one of a data network, an Internet Protocol network, a wireless source, and a wired source and via a plurality of protocols.