Transaction processing systems are computer hardware and software systems that support concurrent execution of multiple transaction programs while ensuring preservation of the so-called ACID properties: Atomicity, Consistency, Isolation and Durability. A transaction program is a specification of operations that are applied against application state, including the order in which the operations must be applied and any concurrency controls that must be exercised for the transaction to execute correctly. The most common concurrency control operation is locking, whereby the process corresponding to the transaction program acquires either a shared or an exclusive lock on the data it reads or writes. A transaction is a collection of operations on the physical and abstract application state, usually represented in a database; it represents the execution of a transaction program. Operations include reading and writing of shared state.
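Purely as an illustration of these terms, the following C sketch shows a trivial transaction program that transfers a balance between two accounts under exclusive locks; the primitive names (begin_transaction, lock_exclusive, and so on) and their stub bodies are hypothetical and do not denote any particular product interface.

```c
/* Hypothetical transaction primitives with trivial stub bodies; illustrative only. */
static void begin_transaction(void) { }
static void lock_exclusive(const char *item) { (void)item; }   /* concurrency control: exclusive lock */
static long read_balance(const char *account) { (void)account; return 0; }
static void write_balance(const char *account, long value) { (void)account; (void)value; }
static void end_transaction(void) { }   /* commit point: the changes take effect all-or-none */

/* A transaction program: a specification of the operations, the order in
 * which they are applied, and the concurrency controls (here, exclusive
 * locks) needed for the transaction to execute correctly. */
static void transfer(const char *from, const char *to, long amount)
{
    begin_transaction();
    lock_exclusive(from);
    lock_exclusive(to);
    write_balance(from, read_balance(from) - amount);
    write_balance(to, read_balance(to) + amount);
    end_transaction();
}
```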
With respect to the ACID properties, atomicity refers to transactions that exhibit an all-or-none behavior, in that a transaction either executes completely or not at all. A transaction that completes is said to have committed; one that is abandoned during execution is said to have been aborted; one that has begun execution but has neither committed nor aborted is said to be in-flight.
Consistency refers to successful completion of a transaction that leaves the application state consistent vis-à-vis any integrity constraints that have been specified.
Isolation, also known as serializability, guarantees that every correct concurrent execution of a stream of transactions corresponds to some total ordering on the transactions that constitute the stream. In that sense, with respect to an executed transaction, the effects of every other transaction in the stream are the same as if it had executed either strictly before or strictly after it. The degree to which the execution of concurrent transactions is constrained creates different levels of isolation in transaction processing systems. The present document is chiefly concerned with transaction processing systems that exhibit the strongest forms of isolation, in which the updates made by a transaction are never lost and repeated read operations within a transaction produce the same result.
Durability refers to the property that, once a transaction has committed, its changes to application state survive failures affecting the transaction processing system.
One challenge with transaction processing systems pertains to reducing the time that is needed for transactions to commit. Accordingly, this invention arose out of concerns associated with providing systems and methods that reduce the time that is needed for transactions to commit.
Overview
Various embodiments described herein utilize non-disk persistent memory in connection with transaction processing systems. By using non-disk persistent memory to commit transactions, the time associated with committing transactions can be reduced, thus lessening the demand for resources inside the transaction processing system and increasing transaction processing throughput. Various embodiments provide a unified buffering scheme that utilizes non-disk persistent memory both in checkpointing processes and write-aside buffering processes.
Exemplary General Transaction Processing System
Database writer 102 is configured to mutate data stored on data volumes (i.e., disks or collections of disks) as it carries out operations specified by the transaction programs. How the database writer preserves the ACID properties other than durability is not particularly relevant to the discussion here. For durability, the database writer ensures that the changes it makes to the database are recorded on durable media. In on-line transaction processing, the data affected by these changes tend to be randomly distributed on data volumes. Since random access to disk drives is highly inefficient, these changes are not written to disk right away. Instead, the database writer 102 sends its changes to the log writer 106, described below, which makes the changes durable in time for transaction commitment.
Transaction monitor 104 keeps track of transactions as they enter and leave the system. The transaction monitor keeps track of the database writer 102 mutating the database on behalf of a transaction, and ensures that any data volume changes related to that transaction sent by the database writer 102 to the log writer 106 are flushed to permanent media before the transaction is committed. The transaction monitor 104 also notates transaction states (e.g., commit or abort) in a transaction log.
Log writer 106 maintains a database audit trail, which explicitly records the changes made to the database by each transaction, and implicitly records the serial order in which the transactions committed. Once again focusing on the durability property, before a transaction can commit, the changes made by the transaction must be recorded on durable media. The log writer 106 enforces this constraint; it receives from the database writer 102 audit records describing state changes, and coordinates its recording actions with the rest of the transaction commitment infrastructure described below.
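As a purely illustrative aid, the following C sketch shows one plausible shape for the audit records the log writer receives; the field names and the fixed-size after-image are assumptions made for the sketch, not the actual audit trail format.

```c
#include <stdint.h>

/* Hypothetical layout of an audit record describing one state change
 * made by a transaction; the real audit trail format is not specified
 * in this document. */
typedef struct {
    uint64_t transaction_id;    /* transaction that made the change */
    uint64_t volume_id;         /* data volume that was modified */
    uint64_t byte_offset;       /* location of the change on that volume */
    uint32_t length;            /* number of bytes changed */
    uint8_t  after_image[256];  /* new value of the changed bytes */
} audit_record;

/* Records are appended in arrival order; the position of a transaction's
 * commit record in the trail implicitly fixes the serial order in which
 * transactions committed. */
```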
As will be appreciated by the skilled artisan, one or more of the above entities may be realized using multiple processes or threads for the purposes of scalability or fault tolerance. For instance, the database could be partitioned across multiple disk ‘volumes,’ each volume being managed by one or more database writer entities. Likewise, the task of writing the audit trail might be partitioned across multiple log writers, each dedicated to recording the changes made by a certain subset of database writers. In order to ensure continuous operation of the transaction processing system, each database writer might be realized using a set of two or more redundant entities, which synchronize state on a continuing basis, such that if one entity in the set should fail the surviving entities from the set may ‘take over’ without interrupting the processing of the transaction stream.
In this example, each of the entities described above is implemented using a pair of processes. Thus, each process pair includes a primary process (labeled “pri”) and a backup process (labeled “bak”). In this example, before communicating with any other component of the transaction processing system, each primary process checkpoints the relevant parts of its state to the backup process; in the event that the primary process fails, the backup process can take over quickly. The takeover interval is fairly short (lasting from a few milliseconds to a few seconds), during which in-flight transactions are aborted and possibly restarted. The processes and libraries that realize the elements of the transaction processing architecture in this example are described below.
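By way of illustration only, the following C sketch expresses the checkpoint-before-communicate discipline just described; the function names and message-passing stubs are assumptions made for the sketch and do not correspond to actual NonStop interfaces.

```c
#include <stdbool.h>
#include <stddef.h>

/* Stubs standing in for the interprocess messaging primitives. */
static bool checkpoint_to_backup(const void *state, size_t len)
{ (void)state; (void)len; return true; }
static bool send_message(int peer, const void *msg, size_t len)
{ (void)peer; (void)msg; (void)len; return true; }

/* Primary-process discipline: checkpoint the relevant state to the backup
 * before communicating with any other component, so the backup can take
 * over quickly if the primary fails. */
static bool checkpoint_then_send(int peer,
                                 const void *state, size_t state_len,
                                 const void *msg, size_t msg_len)
{
    if (!checkpoint_to_backup(state, state_len))
        return false;                 /* do not communicate on checkpoint failure */
    return send_message(peer, msg, msg_len);
}
```

Because the checkpoint always precedes the outgoing message, the backup never remains unaware of a communication the primary has already made.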
In this example, database writers 202 are labeled “DP2” (for “Disk Process 2”) and log writers 206 are labeled “ADP” (for “Audit Disk Process”). The transaction monitors 204 are implemented using a distributed collection of processes and system libraries, called TMF (for “Transaction Monitoring Facility”). The transaction monitors 204 are labeled “TMP” (for “Transaction Monitor Process”) and coordinate the start and commit of transactions. TMF uses an operating system facility called TMFlib (TMF library) at each CPU in the cluster. This library allows the DP2 processes to register with TMF as they start and end their work with respect to any given transaction. The TMFlib instances communicate among themselves and with the TMP process pair to coordinate transaction commitment, as described below in the section entitled “Committing Transactions”.
Committing Transactions
With reference to
As the database writer 202 modifies the database state, the primary DP2 process 210 first checkpoints the state changes with its backup process 212 and then propagates a record of the state changes out to the log writers 206 and, in particular, the ADP primary process 218. The ADP primary process buffers state changes in memory until either a threshold on the amount of buffered audit data is exceeded (resulting in a so-called courtesy write) or, sooner, a flush is forced by a message from the transaction monitor process (TMP) 204 (resulting in a so-called forced write). Just like DP2, the ADP primary process 218 checkpoints the state changes with its backup process 220 before issuing any disk operations or messages.
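A minimal C sketch of the ADP buffering behavior just described is shown below, assuming an illustrative buffer size and courtesy-write threshold; the names and the make_buffered_audit_durable stub are hypothetical.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative buffer size and courtesy-write threshold. */
#define AUDIT_BUF_SIZE     (64 * 1024)
#define COURTESY_THRESHOLD (48 * 1024)

static unsigned char audit_buf[AUDIT_BUF_SIZE];
static size_t audit_len;

/* Stub for the checkpoint-then-make-durable path of the ADP. */
static void make_buffered_audit_durable(void) { audit_len = 0; }

/* Accumulate audit from database writers; a courtesy write happens when
 * the threshold is exceeded. Assumes len <= AUDIT_BUF_SIZE. */
static void adp_receive_audit(const void *data, size_t len)
{
    if (len > AUDIT_BUF_SIZE - audit_len)   /* no room: make current contents durable */
        make_buffered_audit_durable();
    memcpy(audit_buf + audit_len, data, len);
    audit_len += len;
    if (audit_len >= COURTESY_THRESHOLD)    /* courtesy write */
        make_buffered_audit_durable();
}

/* Forced write: a flush message from the TMP requires the buffered audit
 * to be durable before the transaction can commit. */
static void adp_forced_flush(void)
{
    if (audit_len > 0)
        make_buffered_audit_durable();
}
```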
At a subsequent instant, the client encounters an End_Transaction or similar operation, which indicates the end of a particular transaction. Before the transaction monitor process 204 can commit the transaction, it needs to ensure, as explained above, that the database state changes sent to the log writers 206 on behalf of the transaction have been made durable. To accomplish this, the TMP sends a Phase 2 Flush message for that particular transaction to each log writer or ADP in the system, and then waits for the reply from each ADP confirming that the requisite state changes have been written from non-durable system buffers out to disk drives. Once the transaction monitor 204 has received all of the reply messages, it sends a transaction commit record to the specially designated Master ADP. Once the Master ADP acknowledges that it has written the commit record to durable media, the TMP notifies the client that the transaction has committed.
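The commit sequence just described can be summarized in the following C sketch; the message-passing calls are stubs standing in for TMF's actual interfaces, which are not shown here.

```c
#include <stdbool.h>

/* Stubs for the messages described above; real TMF interfaces are not shown. */
static bool send_phase2_flush_and_wait(int adp) { (void)adp; return true; }
static bool master_adp_write_commit_record(long tx_id) { (void)tx_id; return true; }
static void notify_client_committed(long tx_id) { (void)tx_id; }

/* TMP commit sequence: flush every ADP, have the Master ADP durably record
 * the commit, then tell the client the transaction has committed. */
static bool tmp_commit_transaction(long tx_id, int num_adps)
{
    for (int adp = 0; adp < num_adps; adp++)
        if (!send_phase2_flush_and_wait(adp))
            return false;                    /* cannot commit yet */
    if (!master_adp_write_commit_record(tx_id))
        return false;
    notify_client_committed(tx_id);
    return true;
}
```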
From the foregoing, it is evident that waiting for the state changes and commit records to flush to disk accounts for a good portion of the delay in committing TMF transactions, as will be appreciated by the skilled artisan. In addition, checkpointing by the database writers, transaction monitors and log writers adds to the overhead of committing transactions. Since disks are rotating mechanical media, disk latencies are not improving as rapidly as processor and memory speeds. Moreover, reliable checkpointing via message passing entails not just data transfer overheads but also the overhead of fully synchronizing a pair of processes. Due to these factors, TMF transaction commit times often range from a few milliseconds to several seconds.
While some applications can tolerate high response times, many others cannot. High TMF transaction response times also have a secondary adverse effect on transaction processing throughput: an average transaction stays in the system longer, creating greater demand for resources inside the transaction processing system and thereby indirectly limiting throughput under finite resources. Examples of resources that may become oversubscribed if transactions stay in the system for too long are database locks, sockets and other connection resources.
Persistent Memory in General
In accordance with the embodiments described herein, non-disk persistent memory is employed in connection with a transaction processing system to reduce transaction response times in general, and transaction commit times and process-pair checkpointing times in particular. In the various embodiments described below, the non-disk persistent memory can be used both as a write-aside buffer for disk writes and as a buffer for process-state checkpoints.
Persistent memory is an architectural concept as will be appreciated by the skilled artisan. In accordance with the described embodiments, there are many possible implementations of non-disk persistent memory that can be utilized. As such, it is not the intent of this document to be limited to one particular embodiment of non-disk persistent memory.
To assist the reader in appreciating architectural principles associated with non-disk persistent memory, the following discussion describes characteristics that non-disk persistent memory systems can have in order to facilitate their use in the inventive transaction processing systems. Throughout this discussion, a few non-limiting examples of non-disk persistent memory systems are provided. It is to be appreciated and understood that the principles described in this document can be employed in other non-disk persistent memory architectures without departing from the spirit and scope of the claimed subject matter.
Non-disk persistent memory, as defined in this document, should exhibit the following properties: durability, connectivity, and access.
Durability means that the contents of non-disk persistent memory are retained without refresh and survive the loss of system power. Non-disk persistent memory should additionally provide durable, self-consistent metadata in order to ensure continued access to the data stored on it after power loss or soft failures.
With respect to connectivity, consider the following. Non-disk persistent memory may attach to memory controllers available with commercial chipsets. While special purpose memory controllers may eventually be designed to exploit the durability of non-disk persistent memory in a unique fashion, their existence is not required. In those cases where direct connectivity to a CPU's memory controller is not desirable (perhaps due to fault-tolerance implications, packaging considerations, physical slot limitations, or electrical load limits), first-level I/O attachment of non-disk persistent memory is permitted. For instance, non-disk persistent memory may be attached to PCI and other first-level I/O interconnects, such as PCI Express, RDMA over IP, InfiniBand, Virtual Interface over Fibre Channel (FC-VI) or ServerNet. Such interconnects support both memory mapping and memory-semantic access. This embodiment of non-disk persistent memory is referred to as a Communication-link Attached Persistent Memory Unit (CPMU).
Storage connectivity (e.g., SCSI)—or indeed any other second-level I/O connectivity—is not desirable for persistent memory due to performance considerations identified below.
With respect to access, consider the following. Non-disk persistent memory may be accessed from user programs like ordinary virtual memory, albeit at specially designated process virtual addresses, using the CPU's memory instructions (Load and Store). On certain system area networks (i.e., SANs) that support memory-semantic operations, non-disk persistent memory may be implemented as a network resource that is accessed using remote DMA (RDMA) or similar semantics. For example,
In addition to RDMA data movement operations, CPMU 310 can be configured to respond to various management commands. In a write operation initiated by processor node 302, for example, once data have been successfully stored in the CPMU, they are durable and will survive a power outage or processor node 302 failure. In particular, memory contents will be maintained as long as the CPMU continues to function correctly, even after the power has been disconnected for an extended period of time, or the operating system on processor node 302 has been rebooted. In this example, processor node 302 is a computer system consisting of at least one central processing unit (CPU) and memory wherein the CPU is configured to run an operating system. Processor node 302 is additionally configured to run application software such as databases. Processor node 302 uses SAN 306 to communicate with other processor nodes 302 as well as with devices such as CPMU 310 and I/O controllers (not shown).
In one implementation of this example, an RDMA-enabled SAN is a network capable of performing byte-level memory operations such as copy operations either between an initiator processor node 302 and a target processor node 302, or between an initiator processor node 302 and a device 310, without notifying the CPU of target processor node 302. In this case, SAN 306 is configured to perform virtual-to-physical address translation in order to enable the mapping of contiguous network virtual address spaces onto discontiguous physical address spaces. This type of address translation allows for dynamic management of CPMU 310. Commercially available SANs 306 with RDMA capability include, but are not limited to ServerNet, RDMA over IP, InfiniBand, and all SANs compliant with Virtual Interface Architecture.
Processor nodes 302 are generally attached to a SAN 306 through the NI 304; however, many variations are possible. More generally, a processor node need only be connected to an apparatus for communicating read and write operations. For example, in another implementation of this example, processor nodes 302 are various CPUs on a motherboard and, instead of using a SAN, an Input/Output bus is used, for example, a PCI bus. It is noted that the present teachings can be scaled up or down to accommodate larger or smaller implementations as needed.
Network interface (NI) 308 is communicatively coupled to CPMU 310 to allow for access to the non-disk persistent memory contained within CPMU 310. Many technologies are available for the various components of
Where SAN 306 is used, memory should be fast enough for RDMA access. In this way, RDMA read and write operations are made possible over SAN 306. Where another type of communication apparatus is used, the access speed of the memory used should likewise be fast enough to accommodate the communication apparatus. It should be noted that persistence is provided only to the extent that the non-disk persistent memory in use can hold data. For example, in many applications, non-disk persistent memory may be required to store data regardless of the amount of time power is lost; whereas in another application, non-disk persistent memory may only be required to retain data for a few minutes or hours.
In conjunction with this approach, memory management functionality is provided for creating single or multiple independent, indirectly-addressed memory regions. Moreover, CPMU meta-data is provided for memory recovery after loss of power or processor failure. Such meta-data includes, for example, the contents and the layout of the protected memory regions within a CPMU. In this way, the CPMU stores the data and the manner of using the data. When the need arises, the CPMU can then allow for recovery from a power or system failure.
In
In
Accordingly, the backup task (i.e., data transfer from non-disk volatile memory 502 to non-volatile secondary memory store 508) can be performed by software running on CPU 504. The included NI 506 may be used by software running on CPU 504 to initiate RDMA requests or to send messages to other entities on SAN 306. Here again, CPU 504 receives management commands from the network through NI 506 and carries out the requested management operation.
Any embodiment of the CPMU, such as CPMU 400 or 500, has to be managed for the purposes of persistent memory allocation and sharing. In this example, CPMU management is carried out by a persistent memory manager (PMM). The PMM can be located within the CPMU or outside the CPMU, such as on one of the previously described processor nodes 302. When a processor node 302 needs to allocate or de-allocate non-disk persistent memory in the CPMU 310, or when it needs to start or stop using an existing region of non-disk persistent memory therein, the processor node should first communicate with the PMM to perform the requested management tasks. Note that because CPMU 310 memory contents are durable (just like disk drives), the meta-data related to non-disk persistent memory regions within that CPMU must also be durable, maintained consistent with those regions, and preferably stored within the CPMU itself (just like file system meta-data on disk drives). The PMM must therefore perform management tasks in a manner that will always keep the meta-data of CPMU 310 consistent with the contents of its non-disk persistent memory. Thus, data stored in CPMU 310 can be meaningfully retrieved using the stored meta-data even after a possible loss of power, system shutdown or other failure impacting one or more of the PMM, CPMU 310 and processor nodes 302. Upon a need for recovery, the system 300 using CPMU 310 is thus able to recover and resume its operation from the memory state in which a power failure or operating system crash occurred.
In those systems 300 where it is not feasible to use LOAD and STORE memory instructions of processing node 302 to directly or indirectly initiate RDMA data transfers over SAN 306, the reading and writing of CPMU 310 contents will require applications running on processing nodes 302 to initiate RDMA using an application programming interface, or API.
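The following C sketch suggests what such an API might look like; the names, signatures, and the malloc-backed stub implementation are purely illustrative assumptions. A real implementation would register the buffers and issue RDMA operations to the CPMU over SAN 306 rather than copying to heap memory.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical persistent memory access API; illustrative stub only. */
typedef struct {
    uint8_t *base;   /* stand-in for the mapped or RDMA-registered region */
    size_t   size;
} pm_region;

static pm_region *pm_attach(size_t size)
{
    pm_region *r = malloc(sizeof *r);
    if (!r)
        return NULL;
    r->base = calloc(1, size);      /* stand-in for the CPMU region */
    if (!r->base) {
        free(r);
        return NULL;
    }
    r->size = size;
    return r;
}

static int pm_write(pm_region *r, uint64_t offset, const void *buf, size_t len)
{
    if (offset + len > r->size)
        return -1;
    memcpy(r->base + offset, buf, len);   /* real version: RDMA write plus completion wait */
    return 0;
}

static int pm_read(pm_region *r, uint64_t offset, void *buf, size_t len)
{
    if (offset + len > r->size)
        return -1;
    memcpy(buf, r->base + offset, len);   /* real version: RDMA read */
    return 0;
}
```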
As should be apparent, one of the reasons non-disk persistent memory is appealing is that it supports finer-grain (meaning smaller in size of access) read and write operations against durably stored data than do disk drives. That fine grain applies to both access size (how many bytes are read or written) and access alignment (the offset within a non-disk persistent memory region of the first byte that is read or written). Data structures within a non-disk persistent memory region can be aligned freely, thereby permitting more efficient use of capacity than with disks. Another benefit relative to block-oriented disk storage and FLASH memories is that it is not necessary to read a large block of data before a small piece of it can be modified and written back; instead, a write operation can simply modify just those bytes that need to be altered.

The raw speed of non-disk persistent memory is also appealing: access latency is an order of magnitude better than that of disk drives. The relative ease of use of non-disk persistent memory compared to disk drives is significant as well, because pointer-rich data structures can be stored into non-disk persistent memory without having to first convert all pointers into relative byte addresses at the time of writing and then reconvert relative byte addresses back into pointers at the time of reading. These so-called marshalling and unmarshalling overheads can be quite significant for complex data structures. All of the above factors enable application programmers not only to speed up the access and manipulation of the data structures they already make persistent, but also to consider making persistent certain data structures they would not have considered making persistent with slower storage devices, such as disk drives and FLASH memory.

The greater the degree of persistence in an information processing system, the easier and faster it is to recover from failures, because less information is lost. Faster recovery implies greater system availability. The net benefit of the described embodiments is therefore not merely improved performance but also greater availability. In mission-critical transaction processing systems, where there is a high cost associated with lack of system availability, the availability benefits of the described embodiments are in fact likely to be of even greater value than the performance benefits. Moreover, new or improved database features, such as in-memory operation, can become possible through the use of the described embodiments. Applications other than databases can also exploit the improved performance and availability of system 300 to deliver new customer capabilities. While too numerous to list here, many of those applications will be apparent to one skilled in the art. One such application is described next.
Using Non-Disk Persistent Memory to Reduce Transaction Commit Times
With respect to the transaction processing system 100 of
In one transaction processing embodiment, persistent memory is used to speed up transaction commit. Specifically with respect to
As an example, consider
In accordance with the described embodiment, log writer 606 comprises a primary audit disk process 608 and a backup audit disk process 610. A pair of non-disk persistent memory units is provided and comprises a primary non-disk persistent memory unit 612 (also referred to as a “CPMU”) and a mirror non-disk persistent memory unit 614 (also referred to as a “CPMU”). A primary audit log disk 616 and a mirror audit log disk 618 are provided for purposes which will become apparent below.
In the illustrated and described embodiment, data is written to both the primary non-disk persistent memory unit 612 and the mirror non-disk persistent memory unit 614. In some embodiments, data can be written to the primary and mirror units concurrently. Alternately, in some embodiments, data need not be written to the primary and mirror units concurrently. If the system is fully functional, in some embodiments, information is read from either the primary non-disk persistent memory unit 612 or mirror non-disk persistent memory unit 614. If only one of the non-disk persistent memory units (612, 614) should fail, then data will be read from the surviving non-disk persistent memory unit. Once a failed primary non-disk persistent memory unit is ready to be put back into service, its contents may be restored from the surviving non-disk persistent memory unit.
In the illustrated and described embodiment, one region per audit trail is allocated within each of the non-disk persistent memory units 612, 614, and the audit disk process pair 608, 610 maintains a write-aside buffer (designated as “WAB”) within each of those regions. Although any suitable write-aside buffer configuration can be used, in the present example the write-aside buffer is configured as a circular buffer, as will be appreciated by the skilled artisan. As the primary audit disk process 608 receives the set of changes from the database writer 602, it uses the non-disk persistent memory units 612, 614 to commit those changes very quickly. Specifically, in the illustrated example, when ADP 608 receives the set of changes, it adds the information into the WAB of CPMU 612 at that WAB's tail address and then advances the WAB's tail address to point past the end of the most recently written information. It then repeats the operation with the WAB of CPMU 614. Skilled artisans will be able to vary the degree of concurrency between the write operations to CPMUs 612, 614.
In accordance with the described embodiment, write operations to a non-disk persistent memory region are suspended, and the WAB is marked full, if completing a requested write operation would advance the tail pointer past a head address in the WAB. Algorithms for advancing the WAB's head and tail address resemble the textbook approach to implementing circular queue data structures, except that the head address and the tail address of a WAB are both also stored and updated within the same non-disk persistent memory region that contains the WAB's circular buffer, as will be appreciated by the skilled artisan.
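The following C sketch illustrates the append path of such a WAB, assuming the region is addressable through a mapped structure; the layout, the names, and the convention of reserving one byte to distinguish a full buffer from an empty one are assumptions made for the sketch.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative layout of a WAB region in non-disk persistent memory: the
 * head and tail offsets live in the same region as the circular data area. */
typedef struct {
    uint64_t head;      /* offset of oldest byte not yet written back to disk */
    uint64_t tail;      /* offset at which the next audit bytes are appended */
    uint64_t capacity;  /* size of the data area in bytes */
    uint8_t  data[];    /* circular data area */
} wab_region;

static uint64_t wab_free_bytes(const wab_region *w)
{
    uint64_t used = (w->tail + w->capacity - w->head) % w->capacity;
    return w->capacity - used - 1;   /* one byte reserved to distinguish full from empty */
}

/* Append audit bytes at the tail; returns false (WAB full) if completing
 * the write would advance the tail past the head. */
static bool wab_append(wab_region *w, const void *buf, uint64_t len)
{
    const uint8_t *src = buf;
    if (len > wab_free_bytes(w))
        return false;                /* caller suspends WAB use and waits on disk I/O */
    for (uint64_t i = 0; i < len; i++)
        w->data[(w->tail + i) % w->capacity] = src[i];
    /* The data bytes are assumed to be persistent before the new tail is
     * published, so a recovered region never points past valid data. */
    w->tail = (w->tail + len) % w->capacity;
    return true;
}
```

The same append would then be repeated against the mirror CPMU's region, as described above.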
With cost-effective non-disk persistent memory, slower disk I/O may be completely eliminated from the transaction commitment process, as log volumes are realized entirely using the higher-speed persistent memory devices instead of disks. However, presently and in the near future, it is likely that disk capacities will continue to greatly exceed non-disk persistent memory capacities, and that the cost per byte of disk storage will continue to be substantially lower than the cost per byte of non-disk persistent memory. In that case, relatively smaller-capacity non-disk persistent memory units will be used to implement WABs for relatively larger-capacity disk drives. In such arrangements, and in similar designs, synchronously written audit information will be lazily and asynchronously written to the disks. A variety of techniques for selecting which information to retain in non-disk persistent memory and which to flush to disk will be apparent to one skilled in the art of memory hierarchy design. For instance, in accordance with the fuzzy control point scheme of transaction recovery, two ‘control points’ worth of log information may be retained in persistent memory so that, upon a system crash, the usual recovery process of undoing and then redoing in-flight transactions may be applied entirely out of persistent memory.
Moreover, with such cost-constrained non-disk persistent memory technology just described, ADP 608 can continue using disk write operations, but it will not need to wait for those disk operations to complete before it can allow TMP 604 to commit transactions. Instead, ADP 608 will eagerly and synchronously write all audit information received from database writers 602 to non-disk persistent memory devices 612, 614, albeit exercising the option to combine the information from multiple messages received within a suitably selected interval of time. Since the disk operations are no longer being waited on as described above, ADP 608 can now write more data per disk operation, thereby incurring fewer I/O-related overheads by doing fewer total operations for the same quantity of audit trail data. This improves disk throughput from audit disks 616, 618, as well as improves CPU utilization for the CPUs that run ADPs 608. Therefore, when the ADP 608 receives a request from TMP 604 to flush its audit trail, it flushes any pending changes to the WAB in the CPMUs 612, 614. It also buffers that information so that the information can be written to the audit log disks 616, 618 in a so-called lazy fashion. Once a predetermined condition is met, e.g. a certain threshold on buffered information is exceeded, or at a maximum fixed time interval, ADPs 608 issue disk write operations regardless of whether a flush message has been received from TMP 604. However, unlike in the conventional case, transactions in the inventive system may commit before the audit information has been written to disks 616, 618 but after it has been written to CPMUs 612, 614.
When lazily issued sequential write operations to audit disks 616, 618 complete, some of the information that had been stored in the CPMU WAB becomes eligible to be overwritten. The head addresses in the appropriate regions of CPMUs 612, 614 are then advanced past the last byte that was successfully written back. If the non-disk persistent memory unit had been marked “full” before the disk I/O completion was received by ADP 608, it is then marked “not full.” ADP 608 can then once again resume the use of the WAB. Whenever ADP 608 suspends its use of WAB, it reverts back to waiting for outstanding disk I/Os to log volumes 616, 618 before committing transactions. In such a circumstance, the size of write I/O operations will usually be set to a value (e.g. 4 KB to 128 KB, depending on the amount of audit collected between commit records) smaller than the one that yields optimal disk throughput (e.g. 128 KB to 1 MB) in order to limit the impact of disk I/O latency on transaction response time. With non-disk persistent memory, transactions do not wait for disk I/O to complete, so the ADP 608 can wait until there is more audit data buffered for writing. It is therefore able to use larger disk write I/O sizes (e.g. 512 KB) to get near-optimal throughput to log volumes 616, 618 and to substantially reduce the disk I/O overheads on the CPU that runs ADP 608.
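Continuing the WAB sketch above, the following C fragment illustrates the reclamation step that runs when a lazy disk write completes; the parameterization is an assumption for the sketch.

```c
#include <stdbool.h>
#include <stdint.h>

/* When a lazily issued disk write completes, the head offset stored in the
 * persistent memory region is advanced past the bytes now safely on disk,
 * and a previously full WAB becomes usable again. */
static void wab_reclaim(uint64_t *head, uint64_t capacity,
                        uint64_t bytes_now_on_disk, bool *wab_full)
{
    *head = (*head + bytes_now_on_disk) % capacity;
    *wab_full = false;   /* ADP may resume appending rather than waiting on disk I/O */
}
```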
Similar design modifications can be used to create a write-aside buffer using non-disk persistent memory for any other application whose performance is adversely impacted by waiting for disk write operations.
Other variations on the design will be apparent to the skilled artisan. For instance, instead of writing to the primary non-disk and mirror non-disk persistent memory units serially, one after the other, applications may choose to write to them concurrently.
Exemplary Method
Step 700 receives data associated with transaction-induced state changes. Such data can describe database state changes that are caused as a result of the transaction. In the illustrated and described embodiment, this data is received from a database writer component such as those described above. Step 702 writes the data to non-disk persistent memory. As noted, any suitable non-disk persistent memory architecture can be utilized. For example, in the example of
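Expressed as a single routine, and with hypothetical stub names, the method just described might be sketched in C as follows.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stubs for the two steps. */
static bool write_to_nondisk_persistent_memory(const void *data, size_t len)
{ (void)data; (void)len; return true; }
static void treat_changes_as_durable(void) { }

/* Step 700: receive data describing transaction-induced state changes.
 * Step 702: write that data to non-disk persistent memory, after which the
 * changes can be treated as durable without waiting for disk I/O. */
static void exemplary_method(const void *state_change_data, size_t len)
{
    if (write_to_nondisk_persistent_memory(state_change_data, len))   /* step 702 */
        treat_changes_as_durable();
}
```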
Using Non-Disk Persistent Memory for Log Writer Checkpointing
Consider now
In the system of
In this specific example, the same persistent memory units that are employed as the write-aside buffer are also employed for log writer checkpointing. Thus, the very same circular buffer in persistent memory can be used for both purposes. In addition, advantages are achieved in that the ADP backup process does not need to read the persistent memory checkpoint information unless there is a failure of the ADP primary process.
As will be appreciated by the skilled artisan, this approach saves processing overhead and reduces data copying. Specifically, in the past, checkpointing to the ADP backup process required messages to be sent with the state changes to the ADP backup process, with such changes being subsequently written to the audit disk log 816, 818. With the above approach, the ADP backup process is taken out of the checkpointing loop which, in turn, saves processing overhead.
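As a purely illustrative sketch of the takeover path, the following C function shows how a backup ADP could determine, from the head and tail offsets stored in the persistent memory region, how much audit data the failed primary had accepted but not yet written to the audit log disks; the offset convention follows the earlier WAB sketch.

```c
#include <stdint.h>

/* On takeover, the backup ADP reads the head and tail offsets stored in the
 * persistent memory region to determine how much audit data the failed
 * primary had accepted but not yet written to the audit log disks. */
static uint64_t wab_unflushed_bytes(uint64_t head, uint64_t tail, uint64_t capacity)
{
    return (tail + capacity - head) % capacity;
}
```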
Using Non-Disk Persistent Memory for all Checkpointing
In accordance with one embodiment, non-disk persistent memory can be used for committing transactions as described above, and for both database writer checkpointing and log writer checkpointing. In this approach, the transaction commit process is streamlined by eliminating two steps in the data flow—those of checkpointing to the DP2 backup process and ADP backup process respectively.
As an example, consider
In this embodiment, instead of the database writer sending audit information or state changes to the log writer 906, the audit information or state changes are written into persistent memory 912, 914. Doing so effectively eliminates checkpointing the audit information to the DP2 backup process. Thus, the DP2 backup process (not specifically illustrated) is only required to read this information from persistent memory in the event of a failure of the DP2 primary process.
Continuing, once the audit information is written into persistent memory 912, 914, the log writer 906 can read the audit information and buffer it in memory in preparation for committing the information to disk. Once this phase of the process is complete, the transaction monitor 904 can initiate the next phase of the commit process, which is to have the information committed to disk. To do this, the transaction monitor writes the commit record to persistent memory 912, 914, and log writer 906 reads the commit record from the persistent memory 912, 914 and buffers it in memory for writing to disk. Advantageously, each of the backup processes for the database writer (DP2) and the log writer (ADP) need only read the persistent memory 912, 914 in the event of a failure of their respective primary process.
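A minimal C sketch of this unified flow is given below; the function names are hypothetical stand-ins for the persistent memory writes by the database writer and transaction monitor and for the log writer's lazy drain to disk.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the persistent memory writes and the lazy drain. */
static bool pm_append_audit(const void *audit, size_t len)
{ (void)audit; (void)len; return true; }
static bool pm_append_commit_record(long tx_id)
{ (void)tx_id; return true; }
static void log_writer_schedule_lazy_disk_write(void) { }

/* Unified flow: the database writer places audit directly in persistent
 * memory, the transaction monitor places the commit record there, and the
 * log writer drains both to disk lazily, decoupled from the commit path. */
static bool commit_via_persistent_memory(long tx_id,
                                         const void *audit, size_t audit_len)
{
    if (!pm_append_audit(audit, audit_len))      /* database writer's write */
        return false;
    if (!pm_append_commit_record(tx_id))         /* transaction monitor's write */
        return false;
    log_writer_schedule_lazy_disk_write();       /* asynchronous drain to disk */
    return true;                                 /* transaction is durably committed */
}
```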
In accordance with this embodiment, the log writer 906 can now lazily write the audit information and commit record to disk in an asynchronous fashion. This process effectively decouples the log writer from the commit process.
Advantageously, by virtue of using persistent memory in the checkpointing process, as well as the transaction commitment process as described above in connection with
Exemplary Computer System
In one embodiment, the above-described systems can be practiced on a computer system 1000 such as the one shown in
Referring to
Various embodiments described above utilize non-disk persistent memory in connection with transaction processing systems. By using non-disk persistent memory to commit transactions, the time associated with committing transactions can be reduced. Thus, the demand for resources inside the transaction processing system can be reduced, which can increase the throughput for transaction processing systems.
Although the invention has been described in language specific to structural features and/or methodological steps, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or steps described. Rather, the specific features and steps are disclosed as preferred forms of implementing the claimed invention.
This application is a continuation-in-part of and claims priority to U.S. patent application Ser. No. 10/797,258, filed Mar. 9, 2004, incorporated by reference herein. This application is also related to U.S. patent application Ser. Nos. 10/351,194 and 10/737,374, the disclosures of which are incorporated by reference herein.
| | Number | Date | Country |
|---|---|---|---|
| Parent | 10/797,258 | Mar. 2004 | US |
| Child | 10/864,267 | Jun. 2004 | US |