The present invention relates generally to the field of transaction processing. More specifically, the present invention relates to the executing of a transaction task within a transaction processing system employing a multiprocessor (MP) architecture.
Historically, transaction processing systems, such as, for example, Automatic Call Distributors (ACDs), have employed multiple processors and multiple operating systems for managing various tasks, including call routing, within such transaction processing systems. For example, in an exemplary ACD, a single processor and a single operating system may be dedicated to servicing non-critical tasks, such as historical and real-time reporting, database administration and system maintenance tasks. A further single processor and a single operating system within the ACD may then be dedicated to servicing real-time, critical tasks, such as ring-no-answer timing and other central office signaling tasks. Accordingly, different operating systems may be utilized for servicing the respective non-critical tasks and the real-time, critical tasks. For example, the reporting, administration and maintenance tasks may be performed by a multipurpose operating system such as Unix. On the other hand, the real-time, critical tasks may be performed by a real-time operating system such as the VxWorks operating system developed by Wind River Systems, Inc. of Alameda, Calif., the pSOS operating system or the Lynx operating system. By restricting the execution of particular tasks to a particular processor and a particular operating system, however, a transaction processing system may be unable to respond to peak performance demands in certain situations.
According to the present invention, there is provided a method of executing a transaction task within a transaction processing system. Responsive to an event, a workflow associated with the event is identified. A transaction task, which at least partially executes the workflow, is distributed to an available thread within a pool of threads operating within a multiprocessor system.
Other features of the present invention will be apparent from the accompanying drawings and from the detailed description which follows.
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
A method and apparatus for executing a transaction task within a multiprocessor (MP) transaction processing system are described. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details.
For the purposes of the present specification, the term “workflow” shall be taken to mean a sequence of steps that are performed to, at least partially, process a transaction. Further, the term “task” shall be taken to mean a process, method, operation or step that implements the performance of a workflow sequence. A task may furthermore execute a series of “sub-tasks”.
The term “thread” shall be taken to refer to any entity to which an operating system allocates processor time within a computer system. Optionally, a thread may execute any part of an application's code, including a part currently being executed by another thread instance. All threads of a process may share a virtual address space, global variables, and operating system resources of the process. A process may include one or more threads that run in the context of the process. A “process” may be an application that may include a private virtual address space, code, data and other operating system resources (e.g., files, pipes and synchronization objects that are visible to the process).
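By way of a non-limiting illustration (and not part of any particular embodiment described herein), the following C++ fragment shows two threads of a single process executing the same code and sharing a global variable, consistent with the definitions above; the names used are illustrative assumptions only.

```cpp
// Illustration only: two threads of one process run the same routine and
// share the process's global state and virtual address space.
#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int> shared_counter{0};   // global variable visible to all threads

void worker(int increments) {
    for (int i = 0; i < increments; ++i)
        ++shared_counter;             // both threads update the same memory
}

int main() {
    std::thread t1(worker, 1000);
    std::thread t2(worker, 1000);     // executes the same code as t1, concurrently
    t1.join();
    t2.join();
    std::cout << "counter = " << shared_counter << "\n";   // prints 2000
    return 0;
}
```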
The transaction processing subsystem 31 may employ a Symmetric Multiprocessing (SMP) architecture and include a bank of processors 32 that share a memory 34 and an input/output (I/O) subsystem 36. The bank of processors 32 may include between two (2) and thirty-two (32) processors, each of which may be an Intel Pentium® Pro or Pentium® II processor manufactured by Intel Corp. of Santa Clara, Calif., or a SPARC microprocessor manufactured by Sun Microelectronics of Mountain View, Calif. The processors 32, the shared I/O subsystem 36 and the shared memory 34 are all controlled by a single executing SMP-enabled operating system 38 that resides in the shared memory 34. Examples of an SMP-enabled operating system 38 include the Windows NT® operating system developed by Microsoft Corp. of Redmond, Wash., the OS/2 operating system developed by IBM Corp., or a variant of the Unix operating system, such as the Solaris® operating system. The shared memory 34 is furthermore shown to host both non-critical and critical real-time applications, such as a reporting application 40, an administrative application 42 and a signaling application 44.
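Purely as an illustrative sketch, a pool of worker threads may be sized to the number of processors reported by the SMP-enabled operating system 38. The C++ below assumes the standard std::thread::hardware_concurrency() query and clamps the result to the two-to-thirty-two processor range mentioned above; it is not drawn from any actual embodiment.

```cpp
// Sketch: size a pool of worker threads to the processor count reported by
// the operating system, clamped to the 2-32 range described in the text.
#include <algorithm>
#include <thread>

unsigned pool_size() {
    unsigned n = std::thread::hardware_concurrency();   // may return 0 if unknown
    if (n == 0)
        n = 2;                                           // conservative fallback
    return std::clamp(n, 2u, 32u);
}
```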
In a further embodiment of the present invention, the transaction processing system 30 may comprise a clustered SMP system, in which case a number of SMP systems, such as that illustrated as 31, may be included within the transaction processing system 30.
Each of the call center sites 52 and 54 is equipped to receive transaction requests (e.g., calls, e-mails or network requests) over a variety of media, and to process and facilitate transactions between, for example, a source and a human (or software) agent responsive to such transaction requests. To this end, each of the call center sites 52 and 54 is shown to include a number of transaction processing systems, namely an e-mail server 64, a web server 66, an Interactive Voice Response (IVR) workflow server 68, an ACD 70, a Computer Telephony Integration (CTI) server 72 and a workflow server 74. Each call center site 52 and 54 is also shown to include a telephony device server 76, an agent access server 78 and agent desktop clients 80. The ACD 70 is also shown to include call processing functionality 83, whereby telephone calls (e.g., both switched and voice-over-IP calls) may be processed and routed to an available agent teleset (not shown).
Each of the call center sites 52 and 54 also includes a number of administrative clients 82, whereby a call center site administrator may configure and edit workflow definitions that define workflows that are executed by various workflow servers within the respective call center sites.
The call center site 90 includes a number of agent stations 98, each of which may include a teleset 100, via which a human agent may respond to transaction requests received via any of the media servers, and a collection of agent desktop applications 102 for facilitating transaction processing over, for example, the Internet utilizing e-mail or the World Wide Web (WWW). For example, the agent desktop applications 102 may include an e-mail client, a browser client, a web collaboration client and a video conferencing client. These agent desktop applications may be highly integrated, or may be stand-alone applications. Alternatively, each agent station 98 may comprise a software agent, which is able to respond to transaction requests, and process a transaction to completion, in an automated fashion. In one embodiment, the above-described transaction request is associated with a transaction event and a transaction task, the transaction task being responsive to the transaction request.
The present invention will be described below within the context of a workflow router, which includes a workflow server engine. It will be appreciated that the teachings of the present invention may be applied to any one of the workflow servers, workflow server engines, or call processing functions illustrated in
Each of the event subsystems 126 generates events by calling an event generator routine provided by the execution server 122. Each of the event subsystems 126 furthermore has a unique subsystem identifier. In one embodiment, the event subsystems may furthermore be classified as being either (1) administrative event subsystems or (2) schedule event subsystems, providing respective administrative and schedule tasks to the task queue 128.
An exemplary event 146, that may be generated by any one of the event subsystems, is illustrated in
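Although the precise layout of event 146 is defined by the figures, a hypothetical representation might resemble the following C++ structure, in which the subsystem identifier and event identifier correspond to items 144 and 142 referenced further below; the payload field and all type choices are assumptions made only for illustration.

```cpp
// Hypothetical shape of an event such as event 146. Field names and types
// are illustrative assumptions, not the actual definition.
#include <cstdint>
#include <string>

struct Event {
    std::uint32_t subsystem_id;   // identifies the originating event subsystem (cf. 144)
    std::uint32_t event_id;       // identifies the kind of event (cf. 142)
    std::string   payload;        // subsystem-specific data (assumed)
};
```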
Exemplary event subsystems 126 include an administrative event subsystem that collects TCP/IP messages that control the execution server 122. These messages typically originate from administrative clients 82, such as a workflow builder 132 or an administration console 134, and include a command to be executed by the execution server 122 at the instruction of the relevant clients. Such commands may include commands directing the execution server 122 (1) to start, stop, suspend, resume or step a workflow, (2) to modify a task being executed within the execution server 122, (3) to modify the number of worker threads included within a pool of such threads, (4) to add or remove an event-workflow binding, or (5) to suspend, resume or shut down the execution server 122.
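For illustration only, the command set enumerated above could be modelled as follows; the identifier names are assumptions and are not those used by the execution server 122.

```cpp
// Illustrative enumeration of the administrative commands listed above.
enum class AdminCommand {
    StartWorkflow, StopWorkflow, SuspendWorkflow, ResumeWorkflow, StepWorkflow,
    ModifyExecutingTask,
    ModifyWorkerThreadCount,
    AddEventWorkflowBinding, RemoveEventWorkflowBinding,
    SuspendServer, ResumeServer, ShutdownServer
};
```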
An exemplary telephony event subsystem 150 collects messages from, for example, a CTI server 72 regarding telephone calls received at that server. An exemplary schedule event subsystem 152 propagates tasks to the task queue 128 according to a schedule specified by, for example, the administrative console 134. The events generated by the schedule event subsystem 152 may be for any subsystem identifier and event identifier, and may also comprise command events. An exemplary pre-call routing subsystem 154 services pre-routing queries from the PSTN 77, and interacts with pre-routing clients using TCP/IP connection-oriented sockets to provide an interface between such pre-routing clients and the pre-call routing subsystem 154. Other event subsystems may include a web event subsystem 156 and an e-mail event subsystem 158.
The workflow execution server 122 includes a single task queue 128 to manage tasks received from the task dispatchers 200. As noted above, each of the event subsystems 126 generates events that are translated into tasks dispatched to the task queue 128. The tasks are prioritized within the task queue 128 by task priority logic 230, each task being assigned a default priority of zero (0). The task queue 128 utilizes Adaptive Communication Environment (ACE) synchronization methods to ensure that multiple event subsystems 126 may properly share the task queue 128. ACE is a freely available C++ framework, and provides abstractions for sockets, queues and high-level components. ACE is distributed by Douglas Schmidt at Washington University, and further details regarding ACE can be found at: http://www.cs.wustl.edu/~schmidt/ACE.html.
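The sketch below conveys the idea of a shared, prioritized task queue using standard C++ primitives (std::mutex, std::condition_variable and std::priority_queue) in place of the ACE synchronization methods actually employed; it is an illustration under those substitutions, not the server's implementation.

```cpp
// Sketch of a shared, prioritized task queue. The real server relies on ACE
// synchronization; std::mutex and std::condition_variable are substituted
// here purely for illustration.
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>
#include <vector>

struct Task { /* a unit of workflow execution; see later sketches */ };

struct QueuedTask {
    int priority = 0;                 // default priority of zero
    std::shared_ptr<Task> task;
};

struct ByPriority {
    bool operator()(const QueuedTask& a, const QueuedTask& b) const {
        return a.priority < b.priority;   // larger value = higher priority
    }
};

class TaskQueue {
public:
    void push(QueuedTask t) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(t));
        }
        not_empty_.notify_one();          // wake one waiting worker thread
    }

    QueuedTask pop() {                    // blocks until a task is available
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !queue_.empty(); });
        QueuedTask t = queue_.top();
        queue_.pop();
        return t;
    }

private:
    std::mutex mutex_;
    std::condition_variable not_empty_;
    std::priority_queue<QueuedTask, std::vector<QueuedTask>, ByPriority> queue_;
};
```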
Each task dispatcher 200 furthermore uses ACE notification methods to dispatch tasks effectively to the task queue 128. Specifically, a task dispatcher 200 may examine an event header to determine how to handle the relevant event. If the event is identified as being a workflow event, the task dispatcher 200 matches the event to an associated workflow utilizing the event-workflow binding information 214 stored in the database server 124. The task dispatcher 200 utilizes the subsystem identifier 144 and the event identifier 142 of a relevant event to identify an associated workflow. More than one event-workflow binding may be located. If a matching workflow (or set of workflows) is identified, the workflow (or set of workflows) is instantiated by the task dispatcher 200 to create a task object (or multiple task objects) to execute the workflow(s). These task objects are dispatched to the task queue 128. It should thus be noted that an event may have multiple tasks associated therewith.
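A simplified sketch of this matching step follows. The multimap-based binding table keyed on (subsystem identifier, event identifier), and the Workflow and Task types, are assumptions introduced only to illustrate that a single event may yield several task objects.

```cpp
// Sketch: match an event against the event-workflow binding information and
// create one task object per matching workflow. Types and the map-based
// binding table are illustrative assumptions.
#include <cstdint>
#include <map>
#include <memory>
#include <utility>
#include <vector>

struct Workflow { /* parsed workflow definition */ };
struct Task     { std::shared_ptr<Workflow> workflow; /* per-task execution state */ };

using BindingKey   = std::pair<std::uint32_t, std::uint32_t>;   // (subsystem id, event id)
using BindingTable = std::multimap<BindingKey, std::shared_ptr<Workflow>>;

std::vector<std::shared_ptr<Task>> dispatch_workflow_event(const BindingTable& bindings,
                                                           std::uint32_t subsystem_id,
                                                           std::uint32_t event_id) {
    std::vector<std::shared_ptr<Task>> tasks;
    auto range = bindings.equal_range({subsystem_id, event_id});
    for (auto it = range.first; it != range.second; ++it) {
        // one task object per matching workflow, so a single event may
        // give rise to multiple tasks
        tasks.push_back(std::make_shared<Task>(Task{it->second}));
    }
    return tasks;    // the caller places these on the task queue 128
}
```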
In addition to workflow events that are mapped to workflows using the event-workflow binding information 214 in the manner described above, further event types exist that may conveniently be classified as “task” events and events that may be classified as “command” events. A task event 146 is distinguished by a valid task identifier (not shown) in its event header that identifies an associated task. The task dispatcher 200 dispatches a task event to the task specified and identified by the task identifier. Task events send events to an executing task and do not invoke, create or start new tasks. A command event 146 is dispatched by the task dispatcher 200 to a command interpreter (not shown) to execute an included command. A command event may be handled by the pool of worker threads 202, or may alternatively be destined for a particular subsystem.
Within the task queue 128, each task 250 has a unique task identifier 252 associated therewith.
The pool of worker threads 202 is responsible for executing the tasks 250 queued within the task queue 128. As each worker thread becomes available, a scheduler 204 identifies the highest-priority task from the task queue 128, and feeds that task to the available worker thread, which executes a single step of the relevant task. Further details regarding the execution of tasks by the pool of threads, where the pool of threads is executed on a multiprocessor platform, are provided below.
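The following C++ sketch illustrates such a worker loop under a few assumptions: the scheduler is represented as a callable that blocks until it can return the highest-priority queued task, and each task exposes a step() callable that runs exactly one step and reports whether further steps remain. None of these names or interfaces is taken from the actual server.

```cpp
// Sketch of a pool of worker threads, each repeatedly taking the
// highest-priority task (as selected by the scheduler) and executing a
// single step of it. Names and interfaces are illustrative assumptions.
#include <functional>
#include <memory>
#include <thread>
#include <utility>
#include <vector>

struct Task {
    std::function<bool()> step;      // runs one step; returns true while steps remain
};

struct QueuedTask {
    int priority = 0;
    std::shared_ptr<Task> task;
};

void worker_loop(const std::function<QueuedTask()>& next_task,     // blocks until a task exists
                 const std::function<void(QueuedTask)>& requeue) {
    for (;;) {
        QueuedTask qt = next_task();           // highest-priority task from the queue
        bool more = qt.task->step();           // execute exactly one step
        if (more)
            requeue(std::move(qt));            // back to the queue; any thread may continue it
        // otherwise the task has completed and is destroyed with qt
    }
}

void start_pool(unsigned n_threads,
                std::function<QueuedTask()> next_task,
                std::function<void(QueuedTask)> requeue) {
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n_threads; ++i)
        pool.emplace_back(worker_loop, next_task, requeue);
    for (auto& t : pool)
        t.join();                              // in practice, the pool runs until shutdown
}
```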
In an alternative embodiment of the present invention, an algorithm implemented within a scheduler associated with the task queue 128 may intelligently determine a “BestMatch” between an available thread and the tasks that are queued within the task queue 128. This “BestMatch” determination may be based on any number of parameters, such as a dynamically assigned priority or processor affinity.
In identifying a task to be assigned to an available worker thread, the scheduler 204 may identify a “real-time” priority associated with a task. Specifically, a task identified as having a “real-time” priority will be regarded as having the highest priority, and will be assigned to an available thread ahead of any other tasks not having a “real-time” priority. In one embodiment of the present invention, specific threads may be members of a “real-time” process priority class, and a task having a “real-time” priority will be assigned to such threads by the scheduler.
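The ordering rule implied above may be sketched as follows; the flag and field names are assumptions, and the comparison is written as a “less-than” predicate suitable for a max-style priority queue.

```cpp
// Illustrative ordering only: a task flagged "real-time" always outranks
// tasks that are not, with the numeric priority breaking ties.
struct PrioritizedTask {
    bool real_time = false;   // belongs to the "real-time" priority class
    int  priority  = 0;       // ordinary priority, default zero
};

// Returns true if `a` should be scheduled after `b` (i.e. a "less-than"
// predicate for a max-style priority queue).
inline bool scheduled_after(const PrioritizedTask& a, const PrioritizedTask& b) {
    if (a.real_time != b.real_time)
        return b.real_time;            // the real-time task comes first
    return a.priority < b.priority;    // then the higher numeric priority
}
```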
As illustrated in
Further, the dispatcher within the kernel of the operating system may recognize “real-time” priorities attributed to certain threads within the pool of threads 202. Such threads may be dispatched to processors ahead of threads having non-“real-time” priorities. This is especially applicable in a real-time operating system that guarantees interrupt latency or otherwise provides threads with a guaranteed execution time.
At decision box 358, a determination is made as to whether the task is a “command” task for command execution. If so, the relevant command is executed at step 360, whereafter the task is destroyed at step 362. Alternatively, should the task not be a command task, a determination is made at decision box 364 as to whether the task is a workflow task. If so, the next step of the relevant task is executed at step 366. Pending task notifications, indicated at 368, cause available exception handlers to set the next step. At decision box 370, a determination is made as to whether the step executed at step 366 was the last step of the task. If not, the method 350 proceeds to make a further determination at decision box 376 as to whether execution should continue on the same thread. If so, the method 350 loops back to step 366, and the next consecutive step of the relevant task is executed. Following a negative determination at decision box 374, the task is returned to, and again queued within, the task queue 128.
If the last step of the task has been executed, as recognized at decision box 370, the task is destroyed at step 362. The method 350 then terminates at step 372.
After all actions or steps associated with a task have been completed, the thread retrieves the next available task from the task queue 128 for execution.
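The per-task control flow described above may be summarized by the following sketch. Task is modelled here as an abstract interface whose member functions mirror the decisions in the text; they are assumptions, not the server's actual API.

```cpp
// Sketch of the per-task processing loop described above (cf. method 350).
enum class Outcome { Destroyed, Requeued };

class Task {
public:
    virtual ~Task() = default;
    virtual bool is_command() const = 0;           // is this a "command" task?
    virtual void run_command() = 0;
    virtual bool execute_next_step() = 0;          // returns true after the last step
    virtual bool keep_on_this_thread() const = 0;  // continue on the same thread?
};

Outcome process(Task& task) {
    if (task.is_command()) {                    // execute the command, then destroy
        task.run_command();
        return Outcome::Destroyed;
    }
    for (;;) {                                  // workflow task: one step at a time
        bool last = task.execute_next_step();
        if (last)
            return Outcome::Destroyed;          // last step executed (cf. box 370, step 362)
        if (!task.keep_on_this_thread())
            return Outcome::Requeued;           // returned to the task queue 128
        // otherwise execute the next consecutive step on this same thread
    }
}
```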
Accordingly, it will be appreciated that a task, which at least partially implements a workflow, is executed by any one of the worker threads within the pool of threads 202 that is available, or becomes available. Each of the worker threads within the pool 202 may execute on a designated processor within a bank of processors 32, such as that illustrated in
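One way an operating system can realize execution of a worker thread on a designated processor is via a processor-affinity call; the sketch below uses the Linux-specific pthread_setaffinity_np() (on Windows, SetThreadAffinityMask() serves a similar purpose). Neither call appears in the text above; this is simply an illustrative assumption about how the arrangement might be effected.

```cpp
// Sketch only: pin a worker thread to a designated processor.
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <thread>

// Restrict `worker` to run only on processor `cpu`; returns true on success.
bool pin_to_processor(std::thread& worker, int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(worker.native_handle(),
                                  sizeof(cpu_set_t), &set) == 0;
}
```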
Accordingly, a method and apparatus for executing a transaction task within a transaction processing system employing a multiprocessor architecture have been described. Although the present invention has been described with reference to specific exemplary embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.