Transaction monitor process with pre-arranged modules for a multiprocessor system

Information

  • Patent Grant
  • Patent Number: 5,576,945
  • Date Filed: Monday, January 23, 1995
  • Date Issued: Tuesday, November 19, 1996
Abstract
A multiple processor system includes at least one process pair each including a plurality of modules. The modules perform functions related to multiple independent threads, and are arranged in a predetermined order such that higher modules are dependent upon lower modules, and lower modules are independent from higher modules. Each process pair is initially unaware of the number and order of the modules. The order of the modules is related to dependency and interdependency between the modules so that there is a portion of higher modules and a portion of lower modules. Multiple independent threads process the modules to cause activities in the portions of higher modules to take place before activities in the portions of lower modules.
Description

COPYRIGHT NOTICE
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the xerographic reproduction by anyone of the patent document or the patent disclosure in exactly the form it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION
The present invention is directed to data processing systems, and more particularly to parallel processing which coordinates major system state change activities among multiple modules.
On-line transaction processing ("OLTP") has found a variety of commercial applications in today's industry such as, for example, assisting with financial transactions (e.g., coordinating information from bank ATMs), tracking data for the New York Stock Exchange, tracking billings for telephone companies, and tracking parts for manufacturing (e.g., automobile parts). Many of the commercial applications available for OLTP require elaborate protection of the integrity of user data along with continuous availability of the OLTP applications to end users. For example, ATMs for banks must have excellent integrity (i.e., make few, if any, errors), and ATMs must be available to users for extended periods of time. ATM users would not tolerate mistakes associated with their transactions (e.g., a $500.00 deposit not being credited to a user's account). Moreover, ATMs are often preferably available to users 24 hours a day, seven days a week.
To achieve continuous availability in OLTP applications, users must be able to rely on supporting system software. Parallel processing (processing which utilizes process pairs) assists in allowing OLTP to quickly handle numerous individual transactions or small tasks which are distributed among multiple processors. A process pair involves two processes with the same set of instructions to execute. At any one time, one of the two processes (the primary) performs all of the useful work and occasionally sends messages to the other process (the backup) in order to update the backup's state. The backup process remains ready to assume the workload if the primary process fails.
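The following minimal Python sketch illustrates this primary/backup arrangement. It is not taken from the patent or its appendix; the names (PrimaryProcess, BackupProcess, receive_checkpoint, take_over) are hypothetical. The primary performs the useful work and sends state-update messages so that the backup stays current enough to assume the workload.

    # Minimal sketch of a process pair: the primary does the work and
    # periodically sends state-update ("checkpoint") messages to the backup,
    # which stands ready to take over.  Names are illustrative only.

    class BackupProcess:
        def __init__(self):
            self.state = {}               # last state checkpointed by the primary

        def receive_checkpoint(self, state):
            self.state = dict(state)      # stay current so takeover is possible

        def take_over(self):
            print("backup assuming workload with state:", self.state)


    class PrimaryProcess:
        def __init__(self, backup):
            self.backup = backup
            self.state = {"balance": 0}

        def do_work(self, amount):
            self.state["balance"] += amount               # the useful work
            self.backup.receive_checkpoint(self.state)    # keep the backup current


    if __name__ == "__main__":
        backup = BackupProcess()
        primary = PrimaryProcess(backup)
        primary.do_work(500)
        # if the primary fails, the backup already holds the latest state
        backup.take_over()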
Other parallel processing applications include maintaining and accessing large databases for record-keeping and decision-making operations, or serving as media servers that provide an accessible store of information to many users. Parallel processing's particular advantage resides in its ability to handle large amounts of diverse data, for example in decision-making operations which may require searches of information scattered among a number of storage devices. Furthermore, a parallel processor media server application could be used in an interactive service environment such as "movies-on-demand," which calls upon the parallel processor to provide a vast number of customers with access to a large reservoir of motion pictures kept on retrievable memory (e.g., disk storage devices). This latter application may well require the parallel processor to simultaneously service multiple requests by locating, selecting, and retrieving the requested motion pictures, and then forwarding the selections to the requesting customers.
While parallel processing increases OLTP capabilities, a technique is needed for coordinating major system state changes among multiple modules within process pairs.
SUMMARY OF THE INVENTION
In the preferred embodiment, a Transaction Monitoring Facility (TMF) provides transaction management and protects the integrity of user data. The programmatic construct called a "transaction" is an explicitly delimited operation, or set of related operations, that changes the content of a database from one consistent state to another. The database operations within a transaction are treated as a single unit.
In the preferred embodiment, a multithreaded process is utilized to provide a process in which there are multiple independent loci of control called "threads". Each thread has a different task to perform which furthers the larger objective of the process. Furthermore, a multithreaded process pair is a process pair in which each process may be (and usually is) multithreaded. Within TMF, the transaction monitor process (TMP) is a multithreaded process pair. The TMP uses one thread to track each transaction. Threads are also used for other purposes (e.g., one thread could represent each person using an ATM in a full banking system). In a busy system, approximately 500 to 1,000 threads may be active at one time. Multiple modules act upon each thread such that the desired changes are performed. Each module may contain one or more threads of activity that operate as largely independent subprocesses within the overall process infrastructure.
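A brief, hedged Python sketch of the thread-per-transaction idea follows; it is not the TMP's implementation, and the function and variable names are assumptions made for illustration.

    # Illustrative sketch: a multithreaded monitor that dedicates one thread
    # to tracking each transaction.
    import threading

    def track_transaction(txn_id):
        # in a real monitor this thread would follow the transaction through
        # its states (active, committed, aborted, and so on)
        print(f"thread tracking transaction {txn_id}")

    if __name__ == "__main__":
        threads = [threading.Thread(target=track_transaction, args=(i,))
                   for i in range(5)]          # a busy system might run 500-1,000
        for t in threads:
            t.start()
        for t in threads:
            t.join()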
In the preferred embodiment, the invention provides a means for controlling operations within a multi-threaded process pair which includes multiple modules having varying degrees of interdependence. For example, the present invention ensures that operations (or events) such as (1) subsystem startup and shutdown, (2) process pair take over, (3) give ownership, etc. can be accomplished without deadlock and race conditions, and with a minimum of interaction between the various modules.
Deadlock occurs when two or more threads each cannot proceed without a resource or result held by another, so the system cannot make progress. In a race condition, two threads that should have been synchronized operate on the same data at the same time, so the transaction does not complete properly. Race conditions often lead to system failure. Thus, both deadlock and race conditions are undesirable in a system which requires excellent integrity and continuous availability.
The system involved has several definite characteristics. First, the full set of potential modules is not initially known by the controlling entity.
Second, the modules affected by a given event must change state at approximately the same time and in a predetermined, specific order. In the preferred embodiment, state changes must have a definite beginning, before which no affected module has changed state, and a definite end, after which all affected modules have changed state.
Third, the modules involved have varying degrees of interdependence. In the preferred embodiment, one module depends on the services of another, and state changes must be made in a specific order so that any required dependency is satisfied.
Finally, the present invention is not limited to transaction management. In particular, process pairs and modules can be used in many applications which do not utilize a transaction manager/monitor. For example, a communication manager used for the Internet utilizes the present invention without the assistance of a transaction monitor/manager.
A further understanding of the nature and advantages of the invention will become apparent by reference to the remaining portions of the specification and drawings.





BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a simplified representation of a processor system with 2 to 16 processors;
FIG. 2 illustrates an arrangement of modules within the TMP;
FIG. 3 illustrates the TMP-primary and TMP-backup configurations;
FIG. 4 is a module state transition diagram; and
FIG. 5 is a module dependency graph.





DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS
Turning now to the figures, and for the moment principally FIG. 1, illustrated in simplified form is a processor system with 2 to 16 processors, designated generally with the reference numeral 10. As shown, the processor system 10 comprises a plurality of processors 12, 14, . . . (CPUs). Each CPU 12, 14, . . . contains its own RAM 22, 24, . . . The 2 to 16 CPUs are connected by two high speed buses 26 and 28, and each CPU 12, 14, . . . has its own input/output line 30, 32, . . . (I/O). I/O lines 30, 32, . . . connect CPUs 12, 14, . . . to disk controller 34 and communication controller 36.
Disk controller 34 translates between protocols from the I/O buses 30, 32, . . . to the actual electrical signals which are needed by the disks. In the preferred embodiment, one disk controller is provided for eight disks. Similarly, communication controller 36 translates between protocols from the I/O buses 30, 32, . . . to external communication lines and executes some of the communication protocol. Different types of communication controllers are used for handling different communication lines.
This processor system arrangement 10 uses a process pair scheme. A process pair runs on two CPUs: one is the primary CPU, and the other is a standby CPU which is utilized only if the primary CPU becomes inoperable. The standby CPU is continually updated so that, if an error occurs and the primary CPU cannot perform a transaction, the standby CPU will have access to all of the required information and thus will be able to complete the transaction independently.
This processor system 10 has many other features which apply to every transaction. In the preferred embodiment, for example, for each request that takes place in the system, a response must be sent to the transaction manager. In addition, if any mistake or failure takes place, system 10 returns to its prior state as though the transaction never started. Thus, part of a transaction or request may be undone if needed.
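A minimal Python sketch of this "all or nothing" property follows, assuming a simple in-memory database; it is an illustration rather than the patented mechanism, and run_transaction, deposit, and failing_step are hypothetical names.

    # Hypothetical sketch of the undo behavior described above: if any
    # operation in a transaction fails, the prior state is restored as
    # though the transaction never started.

    def run_transaction(database, operations):
        before = dict(database)          # remember the prior consistent state
        try:
            for op in operations:
                op(database)             # apply each operation in the unit
        except Exception:
            database.clear()
            database.update(before)      # undo: return to the prior state
            return False
        return True

    def deposit(d):
        d["account"] += 500

    def failing_step(d):
        raise RuntimeError("simulated failure")

    if __name__ == "__main__":
        db = {"account": 100}
        ok = run_transaction(db, [deposit, failing_step])
        print(ok, db)   # False {'account': 100} -- the deposit was undone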
Now turning to FIG. 2, in the preferred embodiment TMP 50 is a TMF monitoring process (or transaction monitor), and it acts as one member of a TMP process pair for the entire system 10. TMP 50 allows system 10 to have multiple independent activities which are not dependent on the number or order of modules. FIG. 2 illustrates an arrangement of modules within TMP 50. These modules can be, for example, transaction management 52, data volume management 54, writing audit record 56, and audit trail management 58. Additionally, these modules can be deleted or changed, or new modules can be added.
While many independent activities are occurring in the modules of TMP 50, an audit trail keeps track of all the processes (changes, undos, etc.) as they are completed. This audit trail can be considered as a data journal or log. In the preferred embodiment, each module carries out a separately defined function. For example, transaction management 52 controls the state of the transaction. The state of a certain transaction depends on whether that transaction is, for example, (1) still active, (2) committed, (3) complete, (4) aborted or (5) aborting. Data volume management 54 controls the state of the individual disks. A disk state changes when a disk fails, is removed, is added, etc. Writing audit record 56 writes the records in the audit trail. Audit trail management 58 places the records in the audit trail, tracks the audit trail position and coordinates the layout of the audit trail. If more than one disk volume is needed for the audit trail, audit trail management 58 organizes the disks such that available disk space is used efficiently.
The order of modules 52, 54, 56, and 58, (such that transaction management 52 is on top, data volume management 54 is second from the top, writing audit record 56 is second from the bottom, and audit trail management 58 is on the bottom) is significant because this order facilitates the tracking of the state of activities. This allows system 10 to coordinate activities such that certain activities occur before or after other activities, as needed, automatically. While modules 52, 54, 56, and 58 are set forth in TMP 50, as stated above, these modules can be deleted and/or other modules (such as audit dumping, etc.) can be added as desired. This is particularly useful during development and expansion.
TMP 50 works in a process pair for the entire system 10. As shown in FIG. 3, TMP 50 is a primary TMP and TMP 60 is a backup TMP. In the preferred embodiment, both TMP primary 50 and TMP backup 60 contain the same number and type of modules. For example, these modules could be primary transaction management 52, backup transaction management 62, primary data volume management 54, backup data volume management 64, primary audit write 56, backup audit write 66, primary audit trail 58, and backup audit trail 68. The primary modules 52, 54, 56 and 58 continually update backup modules 62, 64, 66, and 68, respectively. When system 10 is initially started, a particular sequence for the turning on of modules occurs. First, audit trail 58 in TMP primary 50 is started. Second, audit trail 68 in TMP backup 60 is started. Third, audit write 56 is started, and so on, such that transaction management 62 in TMP backup 60 is the last module to be started. This is considered a "bottom up" startup. Similarly, a shutdown is "top down". Thus, in a shutdown, transaction management 62 in TMP backup 60 is the first module to be shut down and audit trail 58 in TMP primary 50 is the last module to be shut down.
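The ordering just described can be pictured with a short Python sketch. This is a simplified reading of the sequence above, not the patent's code: the module names come from FIG. 2, and the interleaving of primary and backup at each level is an assumption drawn from the startup sequence described in the text.

    # Illustrative sketch: modules are kept in a single ordered list,
    # highest first.  Startup is "bottom up" (lowest level first, primary
    # then backup at each level); shutdown is the exact reverse, "top down".

    LEVELS = [
        "transaction management",   # highest: depends on everything below it
        "data volume management",
        "audit write",
        "audit trail management",   # lowest: depends on nothing
    ]

    def startup_order(levels):
        order = []
        for name in reversed(levels):          # bottom up
            order.append(("primary", name))
            order.append(("backup", name))
        return order

    def shutdown_order(levels):
        return list(reversed(startup_order(levels)))   # top down

    if __name__ == "__main__":
        for role, name in startup_order(LEVELS):
            print("start", role, name)         # primary audit trail comes up first
        for role, name in shutdown_order(LEVELS):
            print("stop", role, name)          # backup transaction management stops first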
If system 10 needs to utilize TMP backup 60 (e.g., a mistake or failure has occurred), then TMP backup 60 begins to act as the primary and TMP primary 50 acts as a backup. In order to accomplish this, both TMP 50 and TMP 60 must be reconfigured. Reconfiguration for TMP primary 50 (from primary to backup) occurs in "top down" fashion and reconfiguration for TMP backup 60 (from backup to primary) occurs in a "bottom up" fashion.
Due to the transition sequencing shown in FIG. 3, if a certain module is "on" then all lower modules are "on", and if a certain module is primary, then all lower modules are primary. For example, if audit write 56 is "on" then audit trail 58 is also "on". In addition, a module depends on all the modules which are lower within its TMP, and each module is independent of the modules which are higher within its TMP. For example, data volume management 54 depends on both writing audit 56 and audit trail management 58, but data volume management 54 is independent of transaction management 52. Because of the "top down" and "bottom up" approach used for starting and stopping modules, one can determine that if a certain module is, for example, stopped, then all modules above it are also stopped. Conversely, if a certain module is not stopped, then all modules it depends on are started. Similarly, if a module is not a backup, then all modules it depends on are primary.
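These properties can be expressed as a simple invariant over the ordered module list. The Python sketch below is a hypothetical check, assuming a per-module state of "stopped", "backup", or "primary"; it is one way to encode the rule that each module must be at least as far along as every module above it.

    # Hypothetical invariant check: because modules start "bottom up" and
    # stop "top down", any module that is on implies every lower module is
    # on, and any primary module implies every lower module is primary.

    def check_invariants(states):
        # states: list of (name, state) ordered highest module first
        rank = {"stopped": 0, "backup": 1, "primary": 2}
        for (upper, s1), (lower, s2) in zip(states, states[1:]):
            assert rank[s2] >= rank[s1], f"{lower} must be at least as far along as {upper}"

    if __name__ == "__main__":
        check_invariants([
            ("transaction management", "stopped"),
            ("data volume management", "backup"),
            ("audit write", "primary"),
            ("audit trail management", "primary"),
        ])
        print("invariants hold")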
FIG. 4 provides a module state transition diagram. The four states involved are stop primary 70, start primary 72, stop backup 74, and start backup 76. During the transition in which modules are changed from primary to backup or vice versa, ownership is given by one TMP while the other TMP takes over, as shown in transition states 78, 80, 82, and 84. Similarly, when one TMP is started or stopped, a transition takes place as shown in transition states 86, 88, 90 and 92.
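One plausible encoding of this state machine is sketched below in Python. It does not reproduce the exact transitions of FIG. 4, which is not shown here; the event names ("start", "stop", "give ownership", "take over") and the transition table are assumptions based on the description above.

    # Hypothetical encoding of a module state machine with four stable
    # states; transitions occur when a module is started or stopped and
    # when ownership is given or taken over.  Names are illustrative.

    TRANSITIONS = {
        ("stop primary",  "start"):          "start primary",
        ("start primary", "stop"):           "stop primary",
        ("stop backup",   "start"):          "start backup",
        ("start backup",  "stop"):           "stop backup",
        ("start primary", "give ownership"): "start backup",
        ("start backup",  "take over"):      "start primary",
    }

    def next_state(state, event):
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"no transition from {state!r} on {event!r}")

    if __name__ == "__main__":
        s = "stop primary"
        for event in ("start", "give ownership", "take over"):
            s = next_state(s, event)
            print(event, "->", s)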
Many module service requests come from outside the process in the form of messages to the process. Since each module's ownership transition happens at slightly different times, the module to which a request is directed independently makes the decision of whether it can perform the request at the time that it is received. If a module decides that it cannot handle a request because it is not primary, then it calls a centralized procedure to dispose of that request (e.g., "TmpControl HandleBackupMsg"). This procedure performs the following routine in the preferred embodiment: (1) if no transition is going on in any module in that process pair then a reply is sent to the originator of the message, indicating that the originator must resend the message to the other member of the process pair; and (2) if a transition is in progress, then the request is placed in a holding list, and after the entire process completes the current transition, either the request is sent back to the module that services it (if the process is now primary), or a reply is sent to the message's originator indicating that the request must be re-sent to the other member of the process pair (if the process is now backup). Therefore, during the changeover time, all requests are saved/held and dealt with after the changeover is complete.
As with all transactions, after the held requests are examined, either (1) a reply is sent, or (2) the request is accepted, a transaction is done if needed, and a reply confirming acceptance/action is sent. Thus, the system waits for the transition between primary and backup to fully occur before acting on requests. Additionally, before any module changes from primary to backup, that module would need to deal with all of its "current" threads/requests before making the changeover.
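A hedged Python sketch of this disposal routine follows. It is loosely modeled on the "TmpControl HandleBackupMsg" procedure named above, but the structure, helper functions, and request format are assumptions made for illustration.

    # Sketch of the centralized disposal routine: if no transition is in
    # progress the originator is told to resend to the other member of the
    # process pair; otherwise the request is held and, once the transition
    # completes, either delivered to its module or bounced back.

    held_requests = []

    def handle_backup_msg(request, transition_in_progress):
        if not transition_in_progress:
            reply(request, "resend to other member of the process pair")
        else:
            held_requests.append(request)      # save it until the changeover ends

    def finish_transition(now_primary):
        # called once the whole process has completed the current transition
        while held_requests:
            request = held_requests.pop(0)
            if now_primary:
                deliver_to_module(request)     # the module can now service it
            else:
                reply(request, "resend to other member of the process pair")

    def reply(request, text):
        print(f"reply to {request['origin']}: {text}")

    def deliver_to_module(request):
        print(f"delivering {request['kind']} request to {request['module']}")

    if __name__ == "__main__":
        req = {"origin": "client-1", "module": "data volume management",
               "kind": "state change"}
        handle_backup_msg(req, transition_in_progress=True)
        finish_transition(now_primary=True)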
In the preferred embodiment, the following events lead to coordinated state changes among the modules of the TMP process: (1) process pair events (e.g., switch ownership, takeover ownership, reload backup and backup down), and (2) subsystem events (e.g., start and stop). Reload backup occurs when a backup TMP comes back on-line and needs to be updated continually by the primary TMP. Backup down occurs when a backup TMP is down and the primary TMP is told to stop updating the ineffective backup TMP.
In summary, the sequencing of the modules results in the following properties: (1) if any module is started, starting OR stopped, then every module it depends on is started; and (2) if any module is primary, taking over OR giving ownership, then every module it depends on is primary. These properties allow any module, during its own transition(s), to safely depend on the services from other modules in order to complete its own transition.
FIG. 5 provides a module dependency graph. As illustrated in this graph, both transaction management 52 and data volume management 54 are dependent on audit write 56, and audit write 56 is dependent on audit trail management 58. Thus, no cycles of dependency exist. Dependency between modules is determined by going up or down the sequence of modules as shown in both FIG. 2 and FIG. 3, and described more fully above.
This linear dependency allows for simpler addition and deletion of modules. The code does not need to know which modules are being used and how they depend on each other, because the full set and order of modules is not initially known by the controlling entity. During development and expansion, modules can easily be added or deleted by placing them in the sequence of modules within TMP primary 50 and TMP backup 60. For example, audit trail initialization occurs automatically after a startup call is made to TMP control. Such a call may be to a start procedure, a stop procedure, a takeover procedure, etc. When a call is made to TMP control, TMP control knows only that it is starting, stopping, etc.; it is unaware of which modules are available and how they are arranged until after the "top down" or "bottom up" procedure through the available modules has taken place.
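The registration idea can be sketched briefly in Python. This is an assumed, simplified model of "TMP control walks whatever modules happen to be registered"; register and tmp_control are hypothetical names, not the patent's procedures.

    # Hypothetical sketch: modules register themselves in dependency order,
    # and the control procedure simply walks whatever sequence exists
    # ("bottom up" for start/take over, "top down" for stop/give ownership).

    registered = []                       # highest module first

    def register(name):
        registered.append(name)           # development can add or drop modules freely

    def tmp_control(operation):
        bottom_up = operation in ("start", "take over")
        sequence = reversed(registered) if bottom_up else registered
        for name in sequence:
            print(operation, name)

    if __name__ == "__main__":
        for m in ("transaction management", "data volume management",
                  "audit write", "audit trail management"):
            register(m)
        tmp_control("start")              # audit trail management comes up first
        tmp_control("stop")               # transaction management shuts down first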
As stated in the Summary of the Invention, the present invention is not limited to TMF or the TMP. The present disclosure discusses TMF and the TMP only as the preferred embodiment of the present invention. Thus, the invention can be applied to any system which utilizes process pairs and modules (as defined above). Moreover, this invention is not limited to a particular number of process pairs. While this invention utilizes hundreds of process pairs at any one time in the preferred embodiment, anywhere from one to one thousand or more process pairs can be used to practice the present invention.
An example of the source code used in the preferred embodiment to implement the above described functionality is included in the attached Appendix.
While a full and complete disclosure of the invention has been provided herein above, it will be obvious to those skilled in the art that various modifications and changes may be made.
Claims
  • 1. A multiple processor system, comprising:
  • at least one process pair; and
  • a plurality of modules located within at least one of said process pair, said modules performing functions related to multiple independent threads, said modules arranged in a predetermined order, said predetermined order causing higher modules to be dependent on lower modules and lower modules to be independent from higher modules;
  • whereby said process pair is initially unaware of the number and order of said modules.
  • 2. The multiple processor system of claim 1, wherein a redundant bus interconnects said process pair.
  • 3. The multiple processor system of claim 1, wherein said plurality of modules include a transaction management module, said transaction management module controlling the state of each transaction.
  • 4. The multiple processor system of claim 1, wherein said plurality of modules include a data volume management module, said data volume management module controlling the state of disks.
  • 5. The multiple processor system of claim 1, wherein said plurality of modules include a writing audit record module, said writing audit record module capable of writing records in an audit trail.
  • 6. The multiple processor system of claim 1, wherein said plurality of modules include an audit trail management module, said audit trail management module capable of placing records in an audit trail, tracking the position of said audit trail, and coordinating the layout of said audit trail.
  • 7. A method of arranging modules in a multiple processor system, comprising the steps of:
  • placing said modules in a predetermined order, said order related to dependency and interdependency between said modules, said order causing said modules to include a portion of higher modules and a portion of lower modules;
  • processing multiple independent threads in said modules, said processing causing activities in said portion of higher modules to take place before activities in said portion of lower modules;
  • starting said modules in a sequence related to said order of said modules; and
  • stopping said modules in a sequence related to said order of said modules.
  • 8. The method of claim 7, wherein said portion of higher modules depend on said portion of lower modules, and said portion of lower modules are independent from said portion of higher modules.
  • 9. The method of claim 7, wherein said system is initially oblivious to the number of said modules and to said order of said modules, and wherein said system is oblivious to when said modules are added and to when said modules are deleted.
  • 10. The method of claim 7, wherein said modules are located within at least two transaction monitors, and wherein said transaction monitors include at least one primary transaction monitor and at least one backup transaction monitor.
  • 11. The method of claim 7, further comprising:
  • changing the state of said modules in a sequence related to said order of said modules, said changing causing only one of said modules to change state at any one period in time.
US Referenced Citations (10)
Number Name Date Kind
4228496 Katzman et al. Oct 1980
4365295 Katzman et al. Dec 1982
4484275 Katzman et al. Nov 1984
4590555 Bourrez May 1986
4663706 Allen et al. May 1987
4667287 Allen et al. May 1987
4672535 Katzman et al. Jun 1987
4817091 Katzman et al. Mar 1989
5297281 Emma et al. Mar 1994
5379428 Belo Jan 1995
Non-Patent Literature Citations (2)
Entry
Pradeep K. Dubey, Arvind Krishna, and Michael J. Flynn, Analytical Modeling of Multithreaded Pipeline Performance, Proceedings of the 27th Hawaii International Conference on System Sciences, Vol. 1: Architecture, pp. 361-367, Jan. 1994.
Herbert H. J. Hum, Kevin B. Theobald, and Guang R. Gao, Building Multithreaded Architectures with Off-the-Shelf Microprocessors, Proceedings of the Eighth International Parallel Processing Symposium (Cat. No. 94TH0652-8), pp. 288-294, 1994.