The present invention relates generally to database systems, and, in particular, to a system and method for creating a distributed transaction manager supporting repeatable read isolation level in a massively parallel processing database.
A massively parallel processing (MPP) database is a database in which a large number of processors perform a set of computations in parallel. In an MPP system, a program is processed by multiple processors in a coordinated manner, with each processor working on a different part of the program and/or different data. The compute resources of an MPP system are distributed and run on different physical/virtual nodes. An MPP database system can be based on a shared-nothing (SN) or shared-disk (SD) architecture, with the tables of the databases partitioned and distributed to different processing nodes. For database queries, the tasks of each query are divided and assigned to the processing nodes according to the data distribution and an optimized execution plan. The processing entities in each processing node manage only their portion of the data. However, the processing entities may communicate with one another to exchange necessary information during execution.
A transaction in an MPP database might update or select data on one or more networked computer systems. A transaction is a logical grouping of a set of actions, including queries such as selecting, updating, inserting, and deleting data. A transaction system that spans multiple nodes needs global knowledge of the currently active transactions. Such information is typically referred to as a transaction “snapshot”. This can be achieved by creating a centralized component that tracks snapshots globally for all the nodes. However, a centralized component presents issues such as a single point of failure (SPOF) and limited scalability. An improved method for handling snapshots in an MPP database is needed.
In accordance with an embodiment, a method implemented by a first node for transaction processing between processing nodes in a cluster of a massively parallel processing (MPP) database system includes identifying, before starting a transaction, a second node involved in the transaction, and requesting, from the second node, a snapshot of current transactions at the second node. The method further includes receiving, from the second node, the snapshot of current transactions at the second node, and combining, into a reconciled snapshot, the received snapshot of transactions from the second node with current transactions at the first node. The reconciled snapshot is then transmitted from the first node to the second node. The first node then starts the transaction using the reconciled snapshot.
In accordance with another embodiment, a method implemented by a first node for transaction processing between processing nodes in a cluster of an MPP database system includes receiving a request for a snapshot of current transactions at the first node. The request is received from a second node of the MPP system upon identifying the first node to be involved in the transaction and before starting the transaction at the second node. The method further includes sending, to the second node, the snapshot of current transactions at the first node, and receiving, from the second node, a reconciled snapshot combining the snapshot of current transactions at the first node and the second node. A branch transaction is then started at the first node, triggered by the transaction at the second node. The first node performs the branch transaction in accordance with the reconciled snapshot. Upon ending the branch transaction, the first node prepares the branch transaction for a commit command from the second node, and performs a two phase commit (2PC) protocol with the second node.
In accordance with another embodiment, a cluster node for transaction processing in an MPP database includes at least one processor and a non-transitory computer readable storage medium storing programming for execution by the at least one processor. The programming includes instructions to identify, before starting a transaction, a second cluster node involved in the transaction, and request, from the second cluster node, a snapshot of current transactions at the second cluster node. The programming further includes instructions to receive, from the second cluster node, the snapshot of current transactions at the second cluster node, and combine, into a reconciled snapshot, the received snapshot of current transactions from the second cluster node with current transactions at the cluster node. The cluster node is further configured to transmit the reconciled snapshot to the second cluster node, and start the transaction using the reconciled snapshot.
In accordance with yet another embodiment, a cluster node for participating in transaction processing in an MPP database includes at least one processor and a non-transitory computer readable storage medium storing programming for execution by the at least one processor. The programming includes instructions to receive a request for a snapshot of current transactions at the cluster node. The request is received from a second cluster node upon identifying the cluster node to be involved in the transaction and before starting the transaction at the second cluster node. The programming includes further instructions to send, to the second cluster node, the snapshot of current transactions at the cluster node, and receive, from the second cluster node, a reconciled snapshot combining the snapshot of current transactions at the cluster node and the second cluster node. The cluster node is further configured to start a branch transaction triggered by the transaction at the second cluster node, and perform the branch transaction in accordance with the reconciled snapshot. Upon ending the branch transaction, the cluster node prepares the branch transaction for a commit command from the second cluster node, and performs a two phase commit (2PC) protocol between the cluster node and the second cluster node.
The foregoing has outlined rather broadly the features of an embodiment of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of embodiments of the invention will be described hereinafter, which form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and specific embodiments disclosed may be readily utilized as a basis for modifying or designing other structures or processes for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims.
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
Corresponding numerals and symbols in the different figures generally refer to corresponding parts unless otherwise indicated. The figures are drawn to clearly illustrate the relevant aspects of the embodiments and are not necessarily drawn to scale.
It should be understood at the outset that although an illustrative implementation of one or more embodiments is provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
Transactions form the foundation for the atomicity, consistency, isolation, and durability (ACID) properties of database systems, and a transaction can execute under one of multiple isolation levels. ACID properties ensure that database transactions are reliably processed. Atomicity requires that if one part of a transaction fails, the entire transaction fails, and the database remains unchanged. Consistency ensures that a transaction transitions the database from one valid state to another valid state. Isolation ensures that the result of concurrent execution of transactions is the same as if the transactions were performed in some serial order. Further, durability requires that once a transaction has been committed, all changes made by the transaction remain durable and permanent, and the transaction remains committed even if the transient states of the processor nodes are lost, for example as a result of a power outage or crash.
To maintain ACID properties, the intermediate states between the steps of a transaction should not be visible to other concurrent transactions. For atomicity, if a failure occurs that prevents the transaction from completing, then none of its steps affect the database, ensuring that all transactions see consistent data. In a single-node, non-distributed database system, there is one database management instance whose transaction manager ensures the ACID properties by implementing strong strict two-phase locking (SS2PL) or snapshots.
Metadata information about the data and the system is used to create a snapshot. Each row is tagged with the ID of the transaction that modified it. A snapshot is a list of the currently active transactions on the system. Using the snapshot, the transaction manager determines the visibility of data before executing any action. If a row's transaction ID pertains to any of the transactions in the snapshot list, the row's data should not be visible, since that transaction is still active and the intermediate states of its actions should not be seen by other transactions.
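As an illustration, this visibility test can be sketched in a few lines of Python. The function and variable names below (is_visible, row_txn_id, and so on) are hypothetical and chosen only for this example; this is a minimal sketch of the check described above, not a complete multi-version visibility implementation.

```python
def is_visible(row_txn_id, snapshot, current_txn_id):
    """Decide whether a row version is visible to the current transaction.

    row_txn_id     -- ID of the transaction that last modified the row
    snapshot       -- set of transaction IDs active when the snapshot
                      was taken
    current_txn_id -- ID of the transaction performing the read
    """
    # A transaction always sees its own modifications.
    if row_txn_id == current_txn_id:
        return True
    # Rows written by a transaction that is still active in the snapshot
    # are intermediate state and must remain invisible.
    if row_txn_id in snapshot:
        return False
    # Otherwise the writing transaction had already completed when the
    # snapshot was taken, so its effects are visible.
    return True

# Example: transactions 122 and 130 are active, so their rows are hidden,
# while a row written by a completed transaction 90 is seen.
assert is_visible(90, {122, 130}, 200) is True
assert is_visible(122, {122, 130}, 200) is False
```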
A distributed transaction is a transaction that performs an operation on two or more networked computer systems. In an example, a user may start a transaction on first node 102 and access data locally. If the transaction needs to access data on a remote node, such as second node 104, a distributed transaction capability may be used to handle the transaction globally. In the case where a centralized component maintains the state of all transactions, and thus maintains a global snapshot of the system, every transaction in the system may get a snapshot either at the beginning of the transaction or for each statement within the transaction, depending on the isolation level of the transaction. Any transaction in the system transmits a request for a snapshot to the centralized component, which provides snapshots to the individual nodes of the system. However, such a centralized component raises the issues of a single point of failure (SPOF) and limited scalability. The centralized component represents a SPOF because, if this component fails for any reason, it can stop the entire system from working. This is undesirable in any system with a goal of high availability or reliability. Further, the centralized component would limit scalability. Thus, a centralized transaction manager may be a potential bottleneck for the scale-out of the cluster and may jeopardize the high availability of the cluster.
Embodiments are provided herein to resolve such issues in handling snapshots of the system. Instead of relying on a centralized component, the embodiments provide a distributed transaction manager supporting the repeatable read isolation level in MPP database systems. The new model is a distributed model, in which every node involved in a transaction plays a role, without using one centralized component for this purpose. The model uses a method for keeping the snapshot information local to each of the nodes or processing units, thus providing a distributed implementation. In addition to supporting the repeatable read isolation level, the embodiments below also provide a read-committed isolation level. The read-committed isolation level can be supported according to algorithms described in U.S. Provisional application Ser. No. 13/798,344 filed on Mar. 13, 2013 by Tejeswar Mupparti et al. and entitled “System and Method for Performing a Transaction in a Massively Parallel Processing Database,” which is hereby incorporated herein by reference as if reproduced in its entirety.
Although data may be scattered across the system, the distribution is transparent to the user. For a transaction originating at one node, if non-local data is needed, the node transparently opens branches of the same transaction on remote nodes. Additionally, atomicity and durability may be satisfied by using an implicit two phase commit (2PC) protocol, ensuring that, although data is modified and accessed across multiple nodes, all units of work are logically tied to one unit. In 2PC, a global transaction ID is assigned by the transaction manager (TM) to each resource manager (RM). In an example, the node where the parent transaction originated becomes the TM, and the branch transaction nodes become the RMs. Any node may be a transaction manager or a resource manager, depending on the particular transaction. The TM coordinates the decision to commit or rollback with each RM. Further, a local transaction ID is assigned by each RM. The TM adds the node name as a suffix to the parent transaction ID to obtain the global transaction ID for all branches of the transaction, ensuring that the global transaction ID is unique. For example, if a transaction is started on first node 102, first node 102 becomes the TM. Data accessed non-locally, residing on a remote node, may be executed under a new remote transaction. These new remote transactions are branches of the same parent transaction. When the client issues an explicit commit, the TM coordinates a 2PC protocol with the RMs to commit or roll back all the branches of the parent transaction.
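A minimal Python sketch of the global transaction ID construction may clarify this. The nodeID:local_transaction_number layout follows the global/master format used in the examples below (e.g., 5:6364); the function name is hypothetical.

```python
def make_global_txn_id(node_id, local_txn_id):
    """Combine the originating node's ID with the parent transaction's
    local ID; the node component makes the result unique cluster-wide."""
    return f"{node_id}:{local_txn_id}"

# The node where the parent transaction starts becomes the TM.  Every
# branch opened on a remote node carries the same global ID, paired
# with that node's own local ID, e.g. <5:6364, 8876> on node N8.
assert make_global_txn_id(5, 6364) == "5:6364"
```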
Additionally, to ensure isolation consistency for the transaction, a parent transaction first identifies all the nodes required for running the transaction. Subsequently, at the start time of the transaction, the parent transaction collects the snapshot information from all the remote nodes that are involved in the transaction. All of these snapshots are reconciled to eliminate any inconsistencies, and a new snapshot is constructed. This newly constructed snapshot is transmitted back to the participant nodes of the transaction and is used by all the nodes to execute the statements of the transaction. This ensures that all the systems involved in the transaction see the same consistent view of the data and adhere to the REPEATABLE READ isolation level. The model may also be extended to the SERIALIZABLE isolation level.
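The following Python sketch illustrates this collect-reconcile-broadcast sequence under stated assumptions: the Node class and its local_snapshot/receive_snapshot helpers are hypothetical stand-ins for the system's actual node interface and RPC layer.

```python
class Node:
    """Minimal stand-in for a cluster node (an assumption of this sketch)."""
    def __init__(self, active_txns):
        self.active = set(active_txns)   # active txn IDs, global format
        self.snapshot = None             # reconciled snapshot, once received

    def local_snapshot(self):
        return set(self.active)

    def receive_snapshot(self, snapshot):
        self.snapshot = set(snapshot)

def start_distributed_transaction(tm_node, participants):
    """Collect local snapshots from every participant, reconcile them,
    and broadcast the result before any statement executes, so that all
    nodes share one consistent view of the active transactions."""
    snapshots = [tm_node.local_snapshot()]
    snapshots += [node.local_snapshot() for node in participants]
    # Reconcile: a transaction active anywhere is treated as active
    # everywhere, which eliminates cross-node inconsistencies.
    reconciled = set().union(*snapshots)
    for node in participants:
        node.receive_snapshot(reconciled)
    return reconciled
```

A simple union suffices in this sketch because a transaction that any participant still considers active must remain invisible on every node.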
On the other hand, if the operation type is determined to be a read, step 118 determines whether the operation is local to a node. If the operation is local to the node, the read operation is executed in step 120, and the system returns to step 114. If the read operation is remote or occurs both remotely and locally, then it is determined in step 122 whether the remote node is already a part of the branch transaction. A branch transaction at the remote node or RM is a transaction started by a parent transaction at an originating node or TM in order to process data at the remote node for the parent transaction. If the remote node is already part of the branch transaction, the branch transaction is executed in step 124, and the system returns to step 114. However, if the remote node is not already part of the branch transaction, the parent transaction's reconciled snapshot is sent to the remote node in step 125. Next, in step 126, the read command is executed using the snapshot received from the parent transaction. The remote node does not directly use the reconciled snapshot received from the master node. Instead, the remote node first translates it by transforming the master transaction IDs in the reconciled snapshot into local transaction IDs for the remote node, as described below. The system then returns to step 114.
Similarly, if the operation type is determined to be a write operation, step 132 determines whether the operation is local to a node. If the operation is local to the node, the write command is executed in step 120, and the system returns to step 114. However, if the operation is remote or both local and remote, the system goes to step 134, where it determines whether the remote node is already part of the branch transaction. If the remote node is already part of the branch transaction, the branch transaction is executed in step 124, and the system returns to step 114. However, if the remote node is not part of the branch transaction, the parent transaction's reconciled snapshot is sent to the remote node in step 125. Next, in step 136, a new branch transaction is started with the snapshot received from the parent transaction. Then, the new branch transaction is executed in step 138, and the system returns to step 114, where the next statement is obtained. The system continues to process new statements until a commit or rollback is performed.
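Taken together, the read and write paths above amount to a per-statement dispatch loop. The Python sketch below mirrors that flow; the helper functions (nodes_for, execute_locally, and so on) and the fields of the txn object are hypothetical placeholders for the system's actual interfaces, and the step numbers in the comments refer to the figure.

```python
def run_statement(stmt, txn):
    """Dispatch one statement of a parent transaction to the nodes
    that hold the data it touches."""
    for node in nodes_for(stmt):
        if node is txn.origin_node:
            execute_locally(stmt, txn)                    # step 120
        elif node in txn.branch_nodes:
            execute_branch(stmt, txn, node)               # step 124
        else:
            # First contact with this node: ship the parent's
            # reconciled snapshot before anything runs there.
            send_snapshot(txn.reconciled_snapshot, node)  # step 125
            if stmt.is_write:
                open_branch(txn, node)                    # step 136
                txn.branch_nodes.add(node)
                execute_branch(stmt, txn, node)           # step 138
            else:
                # Reads run under the received snapshot, translated
                # to the remote node's local transaction IDs.
                execute_remote_read(stmt, node)           # step 126
```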
Next, a commit command is issued explicitly by the client. First node N1 recognizes itself as the TM, and the commit operation is automatically transformed into a two phase commit (2PC) protocol by first node N1. When a branch transaction is opened, the global ID is transmitted to the other nodes along with the request to create the branch transaction. Now, the transactions txn1 and txn2 are prepared in the first phase of 2PC using the global ID Tnode-n1-200. Finally, the responses are combined, and the second phase, the commit, is issued by first node N1.
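A minimal sketch of this implicit 2PC sequence follows, under the assumption of a hypothetical resource-manager interface exposing prepare/commit/rollback calls keyed by the global transaction ID.

```python
def commit_distributed(branches, global_id):
    """Two-phase commit driven by the TM (the originating node).
    `branches` is a list of resource-manager handles, one per branch."""
    # Phase 1: every RM prepares its branch under the shared global ID
    # and votes on the outcome.
    votes = [rm.prepare(global_id) for rm in branches]

    # Phase 2: commit everywhere only if every branch prepared
    # successfully; otherwise roll every branch back.
    if all(votes):
        for rm in branches:
            rm.commit(global_id)
        return True
    for rm in branches:
        rm.rollback(global_id)
    return False
```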
In the method 500, a transaction is explicitly started on first node N5 by a client connection. Thus, first node N5 is the TM and is assigned a local transaction ID, for instance 6364. The automatically generated global transaction ID is 5:6364, which is created by combining the node number “5” with the local transaction ID 6364. At step 2-001, node N5 computes a reconciled snapshot for all the other nodes, N8 and N12 in this example. The reconciled snapshot is sent back to the other nodes (N8 and N12) and is subsequently used to perform the individual transactions at each of the three nodes. At step 2-002, a Write(N5) command, which is a local write operation on node N5, is executed in the context of <5:6364, 6364>. At step 2-003, Write(N8) is a remote operation performed on a second node N8. Accordingly, an implicit transaction is opened on node N8, and the local transaction manager of node N8 assigns it a local transaction ID of 8876. This new transaction is a branch of the parent transaction, and it obtains a master transaction ID from the parent transaction. In this example, the master transaction ID is 5:6364. Hence, the remote operation is executed in the context of <5:6364, 8876>.
At step 2-004, the operation Write(N12) is a remote transaction performed on a third node N12. Thus, a new branch transaction is opened on node N12, which obtains the same master transaction ID, 5:6364, as the parent transaction. This master transaction ID, also referred to herein as a global transaction ID, forms a pair with the local transaction ID 4387 of node N12. The operation Write(N12) is thus executed in the context of <5:6364, 4387>. At step 2-005, a commit operation deploys an implicit 2PC protocol to commit on all three nodes (N5, N8, and N12). The parent transaction 6364 is committed on node N5, branch transaction 8876 is committed on node N8, and branch transaction 4387 is committed on node N12. Although the parent and its branches execute on individual nodes as individual transactions, by assigning all transactions a pair of IDs, where the master or global transaction ID is common to all the transaction pairs, the transactions are identified as parts of the same global transaction.
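For illustration, the ID pairs from this example can be tabulated in a short Python snippet; the dictionary layout is hypothetical bookkeeping, not the system's actual data structure.

```python
# <global, local> transaction ID pairs from the example above.  Each
# branch shares the master ID 5:6364 but keeps its own local ID.
branches = {
    "N5":  ("5:6364", 6364),   # parent transaction on the TM
    "N8":  ("5:6364", 8876),   # branch opened by Write(N8)
    "N12": ("5:6364", 4387),   # branch opened by Write(N12)
}
# The common master ID lets the commit at step 2-005 identify every
# branch of the same global transaction.
assert len({gid for gid, _ in branches.values()}) == 1
```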
In a distributed environment, a single statement of a transaction may be executed on one node, for example “select col1 from table where col1=data-on-local-node.” Alternatively, a single statement may be executed on more than one node, for example “select col1 from table where TRUE.”
At step 3-001, a transaction txn1 having a local transaction ID of 100 is started on first node N1. Step 3-002 analyzes the statements in the transaction to find all the nodes required for the transaction. This can be achieved using the names of the database objects (e.g., tables and/or partitions) used in the statements. In another scheme, internally maintained metadata catalogs are consulted to learn the nodes where the corresponding database objects exist. For example, the catalog may have information such as: table T1 exists on node N1 only, table T2 exists on node N2 only, and table T3 exists on both N1 and N2. In some cases, the predicates used in the statement queries are used to find the nodes, as shown in the sketch below. For example, assume table T1 is partitioned into two parts based on whether a particular column's value is even or odd. A query such as SELECT * FROM T1 WHERE COL=5 would need to run only on node N2, as the value of column ‘col’ is odd. On the other hand, if the query is SELECT * FROM T1 WHERE COL>5, then the query analyzer may recognize that both nodes N1 and N2 are needed for this transaction.
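A toy Python sketch of this predicate-based pruning follows, under the assumed even/odd partitioning of T1 (even values on N1, odd values on N2); the function name and the None convention for range predicates are inventions of this example.

```python
def nodes_for_t1(equality_value=None):
    """Return the nodes that must run a query against T1.

    equality_value -- the constant in a COL = <value> predicate, or
                      None for predicates (such as COL > 5) that may
                      match rows in either partition.
    """
    if equality_value is None:
        return ["N1", "N2"]          # both partitions may hold matches
    # Even values live on N1, odd values on N2 (assumed partitioning).
    return ["N1"] if equality_value % 2 == 0 else ["N2"]

# SELECT * FROM T1 WHERE COL = 5  -> only N2 holds odd values.
assert nodes_for_t1(5) == ["N2"]
# SELECT * FROM T1 WHERE COL > 5  -> the range spans both partitions.
assert nodes_for_t1() == ["N1", "N2"]
```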
Once the list of potentially participating nodes for the transaction is found, step 3-003 computes the global snapshot with which the transaction statements should be executed on the corresponding nodes under the REPEATABLE READ isolation level. This snapshot is a list of all active transactions on all participating nodes, represented in the global/master format, which is nodeID:local_transaction_number. Node N1 gets its local snapshot <S1> = {122, 130}. The transactions with IDs 122 and 130 are considered currently running on node N1, and any data modified by these transactions should not be seen by the transaction txn1. Similarly, node N1 requests node N2 to send its local snapshot, and receives <S2> = {372}. Then a reconciled snapshot is computed, which dictates the list of active transactions for this transaction across all participating nodes. Details of computing the reconciled global snapshot are explained below. Further, each node transforms the global snapshot to its local format when a local transaction is opened on the respective node. At step 3-004, the Read(A) operation runs with the locally computed reconciled snapshot to ensure the REPEATABLE READ isolation. At the next step, 3-005, the Write(B) operation initiates a remote transaction txn2 (ID 400) on node N2 and forwards the query statement and the reconciled snapshot to node N2. The transaction txn2 ensures the REPEATABLE READ isolation for statements run on node N2. Finally, a commit operation deploys an implicit 2PC protocol to commit on both nodes N1 and N2.
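Using the numbers from this example, the reconciliation at step 3-003 reduces, in the simplest case, to merging the participants' active lists expressed in the global format; the Python fragment below is a minimal sketch of that step.

```python
# Active lists reported by the participants, in nodeID:local format.
s1 = {"1:122", "1:130"}   # <S1>: transactions 122 and 130 active on N1
s2 = {"2:372"}            # <S2>: transaction 372 active on N2

# Reconciled snapshot for txn1: a transaction active on any participant
# stays invisible on every participant.
reconciled = s1 | s2
assert reconciled == {"1:122", "1:130", "2:372"}
```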
To eliminate such inconsistencies and handle these types of scenarios, a snapshot reconciliation method can be implemented.
At step 5-007, the master node N1 transmits the reconciled snapshot list to the participating nodes, N2 and N3. The reconciled snapshot may be piggybacked on the first query sent to each participating node. At step 5-008, all the participating nodes receive the reconciled snapshot list in master ID format and then convert it into local format. This means that, for every master transaction ID in the reconciled list, a corresponding local transaction ID is retrieved, e.g., as described in method 500. The conversion of the reconciled snapshot from the global to the local format involves an adjustment step to eliminate inconsistencies. In this adjustment step, the participating nodes take the intersection of the reconciled snapshot with the snapshot sent to the TM in step 5-004. For any transaction that was not part of the intersection, two possibilities exist: either the current node never participated in the transaction, or the node participated in the transaction but it is still seen as active on other nodes. If the current node never participated in the transaction, the transaction ID can be ignored. However, if the node participated in the transaction, its local transaction ID is further included as a part of the newly constructed snapshot, ensuring that if one node is not seeing the effects of a transaction, then none of the nodes will see them. At step 5-009, the TM transmits the query to all the participating nodes. At step 5-010, the participating nodes execute the query using the newly constructed snapshot of step 5-008. Finally, a commit operation deploys an implicit 2PC protocol to commit on nodes N1, N2, and N3.
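The global-to-local conversion with this adjustment step might be sketched as follows in Python. The global_to_local mapping, which records the local IDs of branches this node participated in, is a hypothetical helper; this is a sketch of the logic described above, not the system's actual implementation.

```python
def to_local_snapshot(reconciled, sent_snapshot, global_to_local):
    """Translate a reconciled snapshot (master-ID format) into a node's
    local snapshot, applying the adjustment step of 5-008.

    reconciled      -- reconciled snapshot received from the TM
    sent_snapshot   -- snapshot this node sent to the TM at step 5-004
    global_to_local -- map from master IDs to this node's local IDs for
                       transactions the node participated in
    """
    # Transactions in the intersection: the node reported them itself.
    local = {global_to_local[gid] for gid in reconciled & sent_snapshot}
    # Adjustment: transactions outside the intersection.
    for gid in reconciled - sent_snapshot:
        if gid in global_to_local:
            # The node participated, but other nodes still see the
            # transaction as active; include it so that no node observes
            # effects that remain invisible elsewhere.
            local.add(global_to_local[gid])
        # Otherwise this node never participated, and the ID is ignored.
    return local

# Example: the node reported "1:122" (local ID 501); it also participated
# in "3:640" (local ID 517), and never took part in "2:372".
assert to_local_snapshot(
    {"1:122", "2:372", "3:640"}, {"1:122"}, {"1:122": 501, "3:640": 517}
) == {501, 517}
```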
The bus may be one or more of any type of several bus architectures, including a memory bus or memory controller, a peripheral bus, a video bus, or the like. CPU 274 may comprise any type of electronic data processor. Memory 276 may comprise any type of system memory such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), a combination thereof, or the like. In an embodiment, the memory may include ROM for use at boot-up, and DRAM for program and data storage for use while executing programs.
Mass storage device 278 may comprise any type of storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus. Mass storage device 278 may comprise, for example, one or more of a solid state drive, hard disk drive, a magnetic disk drive, an optical disk drive, or the like.
Video adapter 280 and I/O interface 288 provide interfaces to couple external input and output devices to the processing unit. As illustrated, examples of input and output devices include the display coupled to the video adapter and the mouse/keyboard/printer coupled to the I/O interface. Other devices may be coupled to the processing unit, and additional or fewer interface cards may be utilized. For example, a serial interface card (not pictured) may be used to provide a serial interface for a printer.
The processing unit also includes one or more network interfaces 284, which may comprise wired links, such as an Ethernet cable or the like, and/or wireless links to access nodes or different networks. Network interface 284 allows the processing unit to communicate with remote units via the networks. For example, the network interface may provide wireless communication via one or more transmitters/transmit antennas and one or more receivers/receive antennas. In an embodiment, the processing unit is coupled to a local-area network or a wide-area network for data processing and communications with remote devices, such as other processing units, the Internet, remote storage facilities, or the like.
While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.