Pipeline systems and method for transferring data in a network environment

Information

  • Patent Grant
  • Patent Number
    8,326,915
  • Date Filed
    Friday, June 10, 2011
  • Date Issued
    Tuesday, December 4, 2012
Abstract
A communications system having a data transfer pipeline apparatus for transferring data in a sequence of N stages from an origination device to a destination device. The apparatus comprises dedicated memory having buffers dedicated for carrying data and a master control for registering and controlling processes associated with the apparatus for participation in the N stage data transfer sequence. The processes include a first stage process for initiating the data transfer and a last Nth stage process for completing data transfer. The first stage process allocates a buffer from a predetermined number of buffers available within the memory for collection, processing, and sending of the data from the origination device to a next stage process. The Nth stage process receives a buffer allocated to the first stage process from the (N−1)th stage and frees the buffer upon processing completion to permit reallocation of the buffer.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


The invention relates to data transfer mechanisms, and in particular, to a software-based, high speed data pipe for providing high speed and reliable data transfer between computers.


2. Description of the Related Art


It is fairly obvious that data, in the process of being archived or transferred from one location to another, will pass through various phases where different operations such as compression, network transfer, and storage will be performed on it. There are essentially two approaches that can be taken when implementing such a transfer mechanism. One would be to split the archival process into sub-tasks, each of which would perform a specific function (e.g., compression). This would then require copying of data between the sub-tasks, which could prove processor intensive. The other approach would be to minimize copies and have a monolithic program perform all of the archival functions. The downside to this would be loss of parallelism. A third alternative would of course be to use threads to do these tasks and use thread-signaling protocols; however, this would not be entirely practical since threads are not fully supported on many computing platforms.


Accordingly, it is highly desirable to obtain a high-speed data transfer mechanism implemented in software and developed for the needs of high speed and reliable data transfer between computers.


It is an object of the invention to disclose the implementation of the DataPipe in accordance with CommVault System's Vault98 backup and recovery product. While developing the DataPipe, it is assumed that data, as it moves from archiving source (backup client) to archiving destination (backup server as opposed to media), may undergo transformation or examination at various stages in between. This may be to accommodate various actions such as data compression, indexing, object wrapping, etc., that need to be performed on data being archived. Another assumption is that the data may be transmitted over the network to remote machines or transferred to locally attached media for archival.


Both the sending and the receiving computers execute software referred to herein as the DataPipe. Although the DataPipe transfer mechanism to be described herein is operative as a key component of backup and recovery software product schemes, the DataPipe is not restricted to that use. It is a general purpose data transfer mechanism implemented in software that is capable of moving data over a network between a sending and a receiving computer at very high speeds and in a manner that allows full utilization of one or more network paths and the full utilization of network bandwidth. A DataPipe can also be used to move data from one storage device to another within a single computer without the use of a network. Thus, the DataPipe concept is not confined to implementation only in networked systems, but is operable to transfer data in non-networked computers as well.


SUMMARY OF THE INVENTION

It is an object of the invention to provide in a communications system having an origination storage device and a destination storage device, a data transfer pipeline apparatus for transferring data in a sequence of N stages, where N is a positive integer greater than 1, from said origination device to said destination device, comprising: dedicated memory means having a predetermined number of buffers dedicated for carrying data associated with the transfer of data from said origination storage device to said destination device; and master control means for registering and controlling processes associated with said data transfer apparatus for participation in the N stage data transfer sequence, wherein said processes include at least a first stage process for initiating said data transfer and a last Nth stage process for completing data transfer, wherein said first stage process is operative to allocate a buffer from said predetermined number of buffers available within said dedicated memory means for collection, processing, and sending of said data from said origination device to a next stage process; and wherein said last Nth stage process is operative to receive a buffer allocated to said first stage process from the (N−1)th stage process in the data transfer sequence and to free said buffer upon processing completion and storage in the destination device to permit reallocation of said buffer, said master control means further including monitor means for monitoring the number of buffers from said predetermined number of buffers allocated or assigned to particular processes in said pipeline, wherein said monitor means is operative to prevent allocation of further buffers to a particular process when the number of buffers currently allocated to that process exceeds a predetermined threshold.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be better understood with reference to the following drawings, in which:



FIG. 1 is a block diagram of the data pipe architecture in accordance with the present invention.



FIG. 2A is a schematic of the data pipe transfer process on a single computer according to an embodiment of the invention.



FIG. 2B is a schematic of the data pipe transfer process on multiple computers according to another embodiment of the invention.



FIG. 2C is a schematic of the data pipe transfer buffer allocation process from a buffer pool stored in the shared memory according to an embodiment of the invention.



FIG. 2D is a schematic illustrating the controlling relationship of the master monitor process to the various attached processes according to an embodiment of the invention.



FIGS. 3A-3C illustrate various messages transferred between application processes and the master monitor process according to an embodiment of the invention.



FIGS. 4A and 4B illustrate schematics of the module attachment process to shared memory space in accordance with the present invention.



FIGS. 5A and 5B depict flow diagrams of the operation of the sequencer and resequencer processes according to the present invention.



FIG. 6 depicts an exemplary data transfer flow among various processing stages within the pipeline according to the present invention.



FIG. 7 illustrates a data pipe transfer process on multiple computers having processes with multiple instantiations according to an embodiment of the present invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Before embarking on a detailed discussion of the data transfer mechanism of the present invention, the following should be understood. The objective of the DataPipe according to the present invention is to move data as quickly as possible from point A to point B (which may be on the same or different computers within a network) while performing a variety of operations (compression, encryption, content analysis, etc.) on the data. In order to meet this objective, parallel processing must be fully exploited, network bandwidth must be fully utilized, and CPU cycles must be minimized. The DataPipe must be efficiently implemented on a wide variety of computer systems such that heterogeneous systems on a network can use a DataPipe to transfer data to each other.


A DataPipe comprises a named set of tasks executing within one or more computers that cooperate with each other to transfer and process data in a pipelined manner. Within a DataPipe, a pipeline concept is used to improve performance of data transfer across multiple computers in a network. However, within a DataPipe, any stage within the pipeline may have multiple instances, thus greatly increasing the scalability and performance of the basic pipeline concept.


The DataPipe mechanism processes data by dividing its processing into logical tasks that can be performed in parallel. It then sequences those tasks in the order in which they are to act on the data. For example, a head task may extract data from a database, a second task may encrypt it, a third may compress it, a fourth may send it out over the network, a fifth may receive it from the network, and a sixth may write it to a tape. The latter two tasks may reside on a different computer than the others, for example.
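By way of a non-limiting illustration (not part of the original disclosure), the division into ordered, parallelizable tasks can be pictured as an ordered table of stage descriptors; the names, signatures, and instance counts below are hypothetical and serve only to make the decomposition concrete:

```c
#include <stddef.h>

/* Hypothetical stage descriptor: each stage names a task, the routine that
 * transforms one buffer of data, and how many parallel instances run. */
typedef int (*stage_fn)(void *buf, size_t len);

struct stage_desc {
    const char *name;   /* task name, e.g. "compress"                 */
    stage_fn    fn;     /* routine applied to each buffer              */
    int         ninst;  /* number of parallel instances of the task    */
};

/* Example ordering: extract -> encrypt -> compress -> network send.
 * The order of entries fixes the order in which tasks act on the data. */
extern int extract_from_db(void *, size_t);
extern int encrypt_buf(void *, size_t);
extern int compress_buf(void *, size_t);
extern int net_send(void *, size_t);

static const struct stage_desc pipeline[] = {
    { "extract",  extract_from_db, 1 },
    { "encrypt",  encrypt_buf,     2 },
    { "compress", compress_buf,    2 },
    { "netsend",  net_send,        4 },
};
```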


All of the tasks that comprise a single DataPipe on a given computer have access to a segment of shared memory that is divided into a number of buffers. A small set of buffer manipulation primitives is used to allocate, free, and transfer buffers between tasks.


Semaphores (or other OS specific mutual exclusion or signaling primitives) are used to coordinate access to buffers between tasks on a given computer. Special tasks, called network agents, send and receive data across network connections using standard network protocols. These agents enable a DataPipe to connect across multiple computer systems. A single DataPipe can therefore reside on more than one computer and could reside on computers of different types.


Each task may be implemented as a separate thread, process, or as a procedure depending on the capabilities of the computing system on which the DataPipe is implemented.


The data exchange paradigm called the DataPipe has been fashioned to provide solutions to the problems associated with and encountered in prior art data transfer systems. The salient features of this method are as follows:

    • 1. Split the whole task of processing the data into logical sub-tasks and sequence them according to the order in which they are supposed to act on the data stream.
    • 2. Use dedicated process/threads to perform network transfer.
    • 3. Make all the dedicated tasks share a single large shared memory segment.
    • 4. Split the shared memory segment into small buffers so that this single buffer space can be shared among various execution threads at various stages of tasks.
    • 5. Use semaphores (or other OS specific mutual exclusion or signaling primitives) to transfer control over the data segments between modules.


As mentioned previously, each task may be implemented as a separate thread, or process, or as a procedure in a monolithic process (in cases where native platforms don't support any forms of parallel execution or multiprocessing). For data transfer across a network, dedicated network readers and writers ensure communication across the net. FIG. 1 shows a steady-state picture of how the DataPipe architecture 10 is set up according to the present invention.


Referring to FIG. 1, there is shown a disk 20 residing on a computer machine 30 which houses information or data to be backed up or archived to server computer 40 via DLT device drivers 50 and 60 respectively. As one can ascertain, the DataPipe represents the end-to-end architecture which may be utilized during database backup from the disk drive 20 where the database resides to the tape or optical devices 50 and 60 at server 40. The DataPipe thus removes the network as the limiting factor in backup performance. As a result, the device pool defines the performance capabilities.


As shown in FIG. 1, the DataPipe or stream 70 is created for the transfer of data for each device in the device pool to be used simultaneously, and comprises modules 72, 74, 76, 78 and 50. Similarly, a second DataPipe 80 is shown comprised of modules 82, 84, 76, 78 and 60. Note that if additional DLT devices are used to back up data in parallel, further DataPipes would be provided. Since the concept of the DataPipe can be ascertained through explanation of one path or thread by which data is transferred, further description will focus on processing through a single DataPipe or stream 70, as shown in FIG. 1. At the head of the DataPipe is the collector component 72, which is responsible for obtaining the database information from disk 20. The data is passed down in buffers residing in dedicated shared memory through the pipeline 70, through an optional compression module 74, to the network interface modules 76. At the network interface, data is multiplexed and parallel network paths 77 obtain maximum throughput across the network. Preferably, each network path runs at a rate equal to approximately 10 base T, with the number of network paths utilized for each stream determined by the bandwidth of the network. Note that as higher performance levels are necessary, additional devices may be used simultaneously, with additional network interfaces added and utilized to further increase network throughput. On the receiving side, at the database server 40, the device pool appears local to the machine and the DataPipe architecture appears as a cloud with no constraints to performance. Network interface module 78 operates to transfer the data received across the network to device driver 50 for storage at server 40. Thus, the final task of storing or archiving the data is accomplished at DLT device module 50.


From the preceding discussion, one can ascertain that a pipeline or DataPipe 10 comprises a head task 15 that generates the data to be archived or transferred from store 50, and a tail task 40 which accomplishes the final task of storing or writing the data to store 60, including archiving or restoring the data, as shown in FIG. 2A. One or more middle modules 20, 30 may exist, which process the data by performing actions such as compression, encryption, content analysis, etc., or by allocating or not allocating new buffers while doing the processing.


A pipeline on a particular machine can be arranged to provide a feed to another different machine. A schematic diagram is illustrated in FIG. 2B. In this case, the DataPipe resides on more than one computer. This is done with the aid of network agents and control processors 50A, 50B, 60A and 60B. In such cases, the first machine 12A has a head 15 and other modules 20, 30, etc., comprise middle processes, but the tail of this pipeline on this machine is a cluster of dedicated network agents 50A which send data across to the remote machine 12B via standard network protocols. On the remote machine, a cluster of dedicated network reader agents 50B act as the head, and along with other modules such as middle (not shown) and tail 70, constitute the pipeline on that machine.


In addition to the transferring of data from one computer to another, a unique capability of the DataPipe invention is the ability to scale to enable full utilization of the bandwidth of a network, and to fully utilize the number of peripheral devices such as tape drives, or fully utilize other hardware components such as CPUs. The scalability of a DataPipe is achieved by using multiple instances of each task in the pipeline.


For example, multiple head tasks operating in parallel may gather data from a database and deposit it into buffers. Those buffers may then be processed by several parallel tasks that perform a function such as encryption. The encryption tasks in turn may feed several parallel tasks to perform compression, and several parallel tasks may perform network send operations to fully exploit network bandwidth. On the target computer, several network reader tasks may receive data, which is written to multiple tape units by several tasks. All of these tasks on both computers are part of the same DataPipe and collectively perform the job of moving data from the database to tape units. They do this job extremely efficiently by fully utilizing all available bandwidth and hardware allocated to the DataPipe while also minimizing CPU cycles by avoiding unnecessary copying of the data as it moves from one stage of the DataPipe to the next.



FIG. 2B shows the multiple computer case where a single head task (collect process) gathers data from the disk 40 and deposits it into buffers. The buffers are then processed by several parallel instantiations of compression process 20, each of which, upon completion of processing a buffer, sends the processed buffer to process 30, which performs content analysis and sends the processed buffer data to several network agent tasks 50A or instantiations, which perform the network operations to send the data over the physical network 55, where it is received and processed by corresponding network agents 50B on the remote computer 12B and sent to tail backup/restore process 70 for storage or writing to DLT drive 80.


In general, there could be N stages in a given DataPipe pipeline. At each stage of the pipeline, there could be p instances of a given module task. These N stages could all be on the local machine or could be split across two different machines in which case there are network writers and network readers (i.e. pseudo tail and head network agents) which work together to ensure continuity in the pipeline.


Referring to FIG. 2B, each DataPipe has a dedicated memory segment 85 on each machine on which the DataPipe resides. For example, a DataPipe that sends data from machine 12A to machine 12B has two dedicated memory segments, one on machine A and one on machine B. Tasks that are part of this DataPipe may allocate and free buffers within these memory segments. Of course, tasks operating on machine 12A may only allocate or free buffers within the memory segment 85 on machine A and likewise for tasks on machine B. Thus, any of these modules may allocate or free segments of a single large shared memory on each machine dedicated for the use of this particular pipeline.


Buffer Manipulation Primitives


Referring now to FIG. 2C, each task or process (15) that wishes to allocate a buffer does so from a buffer pool 75 stored in the shared memory segment 85 owned by the DataPipe using AllocBuf( ). Each task that wishes to process incoming data from the previous task executes a receive call using ReceiveBuf( ). Each task that wishes to relinquish control of a particular buffer so that the next task can operate on it performs a SendBuf( ) on that buffer to send it to the next task. Each task that wishes to destroy a buffer and return it to the buffer pool does so by executing a FreeBuf( ) on that buffer.
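As a minimal sketch only, the four primitives can be summarized by the following C-style interface; the exact signatures are assumptions inferred from the description above and are not given in the original text:

```c
/* Hypothetical signatures for the four buffer manipulation primitives
 * described above; the actual product interface is not specified here. */

struct buf;             /* opaque handle to one buffer in the shared pool */

/* Allocate a free buffer from the pipeline's shared buffer pool.
 * Only modules permitted to allocate (e.g., the head) call this. */
struct buf *AllocBuf(const char *pipeline_name);

/* Receive the next buffer queued on this module's input queue,
 * blocking until one is available. First-stage modules never call this. */
struct buf *ReceiveBuf(const char *pipeline_name, int stage);

/* Relinquish the buffer to the next stage's input queue.
 * Tail modules never call this. */
int SendBuf(struct buf *b);

/* Return the buffer to the free pool so it can be reallocated.
 * Typically called by the tail module when processing is complete. */
int FreeBuf(struct buf *b);
```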


Master_Monitor is connected to a predefined port, to enable it to communicate with its peers on other computer systems. Master_Monitor monitors the status of all DataPipes under its control at all times and is able to provide status of the DataPipe to the application software that uses the DataPipe.


To accomplish the above tasks, a master manager program called Master_Monitor executes in the preferred embodiment as a daemon on all process machines, listening on a well-known port, to serve requirements of pipeline operations. Master_Monitor functions to monitor the status of all pipelines under its control at all times and reports the status of the pipeline to all its sub-modules. As shown in FIGS. 2B and 2D, Master_Monitor includes control messaging sockets 92 open to all modules through which it can control or change the status of execution of each module. Master_Monitor 90 further includes functions which monitor the status and listings of all centrally shared resources (among various modules of the same pipeline) such as shared memory, semaphores, or any similar resource. Master_Monitor, unless otherwise requested, will initiate all modules of the pipeline either by fork( ) or thread_create( ) or a similar OS-specific thread-of-control initiation mechanism. Master_Monitor will permit initiation of a pipeline with proper authentication. The initiator process can identify itself as either a head process or a tail process, which will later attach itself to the pipeline. (An exception is made in the case of a networking module: a network process will not be allowed to attach itself as the head or tail of any pipeline.)
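The following is a rough, non-authoritative sketch of the daemon behavior described here (a well-known port accepting control connections from modules and peer monitors), assuming ordinary BSD sockets; the port number and handler structure are illustrative assumptions only:

```c
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define MONITOR_PORT 5555   /* hypothetical well-known port */

/* Minimal accept loop for a Master_Monitor-style daemon: each accepted
 * connection is a control socket from a module or a peer monitor. */
int monitor_listen(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(MONITOR_PORT);

    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(fd, 16) < 0) {
        perror("monitor_listen");
        return -1;
    }
    for (;;) {
        int ctl = accept(fd, NULL, NULL);  /* one control socket per module */
        if (ctl < 0)
            continue;
        /* a real daemon would hand ctl to a per-module handler;
         * it is closed here only to keep the sketch self-contained */
        close(ctl);
    }
}
```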


DataPipe Initiation


Referring now to FIG. 3A in conjunction with FIGS. 1 and 2A-2D, a DataPipe is created by calling Master_Monitor and passing it an Initiate_Pipe message. In this message, parameters such as the DataPipe name, DataPipe component module names, the number of parallel instances for each component, properties of each component (e.g., whether they allocate buffers or not), local and remote machines involved in the DataPipe, direction of flow, nature of the invocation program, etc., are passed to Master_Monitor. Note that the term “module” refers to a program that is executed as a task as part of an instance of a DataPipe. Each module may have more than one instance (e.g., execute as more than one task) within a DataPipe.
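Purely as an illustration, the parameters enumerated above can be collected into a structure along the following lines; the field names and sizes are hypothetical, since the actual message layout is not given in the text:

```c
#define MAX_MODULES 16

/* Hypothetical layout of the parameters carried by an Initiate_Pipe
 * message, as enumerated in the text; field names are illustrative only. */
struct initiate_pipe_msg {
    char name[64];                    /* DataPipe name                      */
    int  nmodules;                    /* number of component modules        */
    char module[MAX_MODULES][32];     /* component module names, in order   */
    int  instances[MAX_MODULES];      /* parallel instances per component   */
    int  allocates_bufs[MAX_MODULES]; /* property: does module allocate?    */
    char local_host[64];              /* local machine involved             */
    char remote_host[64];             /* remote machine involved            */
    int  direction;                   /* direction of data flow             */
    int  invoker_kind;                /* nature of the invocation program   */
};
```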


Referring now to FIG. 3B, depending upon the nature of the invocation program, it may be required that the process invoking the DataPipe identify itself to the local Master_Monitor 90A and attach itself to the DataPipe as a head or tail task. In order to operate over a network on two computers, the Master_Monitor 90 initiates a Network Controller Process 60 on the first machine, which contacts Master_Monitor 90B on the second machine where this DataPipe is to be completed using an Extend_Pipe message. All information required for establishing the second side of the DataPipe is passed along with this call so that the DataPipe is completely established across both machines.


Identification


The process responsible for initiation of the pipeline constructs a name for the pipeline using its own process id, a time stamp, and the name of the machine where the initiator process is running. This pipeline name is passed along with both the Initiate_Pipe and the Extend_Pipe messages so that the pipeline is identified with the same name on all computers on which it is operating (i.e., both the remote as well as the local machine). All shared memory segments and semaphores (reference numeral 85 of FIG. 2C) attached to a particular pipeline are name-referenced with this pipeline name and definite offsets. Hence the process of identifying a specific semaphore or shared memory segment associated with this pipeline is straightforward and accessible to all processes and bound modules (i.e., modules for which control is initiated by the Master_Monitor). Each unbound module (i.e., a module not initiated via Master_Monitor, which attaches itself after the pipeline is initiated) must identify itself to its local Master_Monitor via a SEND_IDENT message shown in FIG. 3C. This message contains the name of the pipeline the unbound module wants to attach itself to, a control socket, and a process/thread id, which Master_Monitor uses to monitor the status of this particular module.
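A minimal sketch of that naming scheme follows; the text only says the name combines the process id, a time stamp, and the machine name, so the separator and field order chosen here are assumptions:

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Build a pipeline name from the initiator's pid, a time stamp, and the
 * host name, so that the same name identifies the pipe on every machine.
 * The "pid_timestamp_host" layout is an assumption for illustration. */
int make_pipeline_name(char *out, size_t outlen)
{
    char host[64];
    int  n;

    if (gethostname(host, sizeof host) != 0)
        return -1;
    n = snprintf(out, outlen, "%ld_%ld_%s",
                 (long)getpid(), (long)time(NULL), host);
    return (n > 0 && (size_t)n < outlen) ? 0 : -1;
}
```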


Data Transfer Implementation


Allocation: Receive: Send: Free


Directing attention to FIG. 2C and FIGS. 4A and 4B, buffers are allocated using the call AllocBuf( ), from a common pool of buffers specified for the particular pipeline. The pool consists of a single large shared memory space 75 with Max Buffers number of equally sized buffers and an “rcq” structure. The “rcq” structure, illustrated in FIG. 4A, contains input and output queues for each stage of the pipeline on that particular machine. Access to shared memory is controlled using a reader-writer semaphore.
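The text does not give the actual definition of the “rcq” structure, but its described contents (equally sized buffers, per-stage input/output queues, semaphore-controlled access) suggest a layout along the following lines; everything below, including the constants, is an inferred sketch rather than the disclosed implementation:

```c
#include <semaphore.h>

#define MAX_BUFFERS 64          /* "Max Buffers": equally sized buffers     */
#define MAX_STAGES  16
#define BUF_SIZE    (256 * 1024)

/* One circular queue of buffer indices, with a counting semaphore that is
 * posted when a buffer is enqueued and waited on by the consuming stage. */
struct buf_queue {
    int   slot[MAX_BUFFERS];
    int   head, tail;
    sem_t avail;                /* counts buffers waiting in this queue     */
};

/* Inferred layout of the per-machine shared memory segment: the buffer
 * pool plus the "rcq" bookkeeping (input/output queues for each stage). */
struct rcq {
    sem_t            rw_lock;               /* reader/writer access control */
    struct buf_queue in_q[MAX_STAGES];      /* in_q[i] is also out_q[i-1]   */
    sem_t            alloc_index[MAX_STAGES]; /* per-allocator quota        */
};

struct datapipe_segment {
    struct rcq    control;
    unsigned char buffers[MAX_BUFFERS][BUF_SIZE];
};
```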


As shown in FIGS. 4A and 4B, the input queue of an ith stage module is the output queue of the (i−1)th stage module. The input queue of the first module is the output queue of the last module of the pipeline on that machine. Allocation is always performed from the input queue of the first module or process. However, to ensure that no allocation task can unfairly consume buffers, allocation of buffers to each module is limited to a threshold value of Max_Buffers/NA, where NA is the number of allocators in the pipeline on this particular machine. These parameters are stored under control of the Master_Monitor program, which determines whether any process has exceeded its allocation. This means there could be K unfreed buffers in the system allocated by a single instance of a module H, where K is Max_Buffers/NA. Further allocation by module H will be possible when a buffer allocated by H gets freed.


All FreeBuf( ) calls free their buffers into the input queue of first module. By the same rule, first stage modules are never permitted to do a ReceiveBuf( ) but are permitted to do AllocBuf( ). On the other hand, tail processes are permitted to perform only FreeBuf( ) and never permitted to do a SendBuf( ). All other modules can Receive, Allocate, Send, and Free buffers. First stage modules always perform SendBuf( ) after they execute each AllocBuf( ).


Each queue 95 is associated with a semaphore to guarantee orderly access to shared memory and which gets triggered upon actions such as AllocBuf( ), ReceiveBuf( ), SendBuf( ) and FreeBuf( ). Dedicated network agents thus map themselves across any network interface on the system, as long as data propagation is ensured. The number of network agents per pipeline is a configurable parameter, which helps this mechanism exploit maximum data transfer bandwidth available on the network over which it is operating. A single dedicated parent network thread/process monitors performance and status of all network agents on that particular machine for a particular pipeline.


Referring again to FIG. 4A, upon allocation of a buffer by AllocBuf( ) or receipt of a buffer by ReceiveBuf( ), the buffer is taken off the input queue and assigned to the module which performed the call. Upon completion of processing on this buffer, it is passed forward by means of SendBuf( ) or FreeBuf( ): the buffer is either forwarded to its destination queue or freed for reuse by FreeBuf( ). AllocBuf( ) decrements the input queue semaphore of the first module and also decrements the semaphore which is the allocator index for this particular module. Each FreeBuf( ) increments the allocator index of the module that allocated this particular buffer. Information relevant to this operation is always available along with the buffer on which the free operation is being performed.


Attachments


As the identification process is completed, all modules attach themselves to a specific shared memory space segment that is shared among modules on that machine for this particular pipeline. This shared memory segment has many data buffers, input queues for all stages on the pipeline, and their initial values. Each module identifies its own input and output queues depending on the stage that module is supposed to run at, and the initial queue (first stage) is populated with the number of data segments available for sharing on this particular pipeline. Also, all modules attach themselves to an allocator semaphore array, which controls the number of buffers allocated by a specific module that can be active in the pipeline.


Data Integrity


Integrity of the data passed along and the sequencing of data are maintained in part by a pair of special purpose modules termed sequencer and resequencer processes. FIGS. 5A and 5B provide diagrams of the operation of the sequencer and resequencer processes respectively. Referring to FIG. 5A, the sequencer process receives each buffer (module 10), reads the current sequence number stored in memory (module 20), stamps the buffer with the current sequence number (module 30), and sends the stamped buffer to the next stage for processing (module 40). The current sequence number is then incremented (module 50) and the process is repeated for each buffer received by the sequencer. The resequencer is operative to receive all input buffers and store them internally and wait for the required predecessor buffers to show up at the input queue before forwarding them all in the next sequence to the next stage of processing.


Referring now to FIG. 5B, the resequencer receives a buffer (module 10) of data and determines the sequence number associated with that buffer (module 20). The buffer is then stored in internal memory (module 30) and a determination is made as to whether all preceding sequence numbers associated with buffers have been received and stored (module 40). Until then, the resequencer waits for the required predecessor buffers to show up at the input queue. When all predecessor buffers are available, these buffers are sent (module 50) to the next processor stage. The sequencer/resequencer process pairs thus ensure proper data sequencing across a set of network reader/writer modules having multiple instantiations of a particular process. Note, however, that when there is only one instance of a module present at any particular stage, data sequencing in the right order is ensured by virtue of the queuing mechanism available with all input queues.
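The logic of FIGS. 5A and 5B can be sketched as follows; this is a simplified illustration, not the disclosed implementation, and buf_t, recv_from_prev_stage( ), send_to_next_stage( ), and the fixed reorder window are stand-ins for the ReceiveBuf( )/SendBuf( ) machinery described earlier:

```c
#include <stddef.h>

/* Simplified sequencer/resequencer sketch (FIGS. 5A and 5B). */
typedef struct { unsigned seq; void *data; } buf_t;

extern buf_t *recv_from_prev_stage(void);
extern void   send_to_next_stage(buf_t *b);

/* Sequencer: stamp each buffer with the current number, then increment. */
void sequencer(void)
{
    unsigned next = 0;
    for (;;) {
        buf_t *b = recv_from_prev_stage();
        b->seq = next++;                  /* stamp, then advance the counter */
        send_to_next_stage(b);
    }
}

/* Resequencer: hold buffers until every predecessor has arrived, then
 * release them to the next stage in sequence order. WINDOW bounds how
 * many out-of-order buffers can be outstanding in this sketch. */
#define WINDOW 64
void resequencer(void)
{
    buf_t   *held[WINDOW] = { 0 };
    unsigned expect = 0;
    for (;;) {
        buf_t *b = recv_from_prev_stage();
        held[b->seq % WINDOW] = b;        /* store out-of-order arrivals */
        while (held[expect % WINDOW] &&
               held[expect % WINDOW]->seq == expect) {
            send_to_next_stage(held[expect % WINDOW]);
            held[expect % WINDOW] = NULL;
            ++expect;
        }
    }
}
```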


Hence, in the preferred embodiment, all DataPipe transfers employing multi-instance stages via the sequencer/resequencer processes ensure that the input sequence of sequence numbers is not violated for each instance of the module. Further, the restriction that all modules of a specific multi-instance stage should be of the same type eliminates the chance of preferential behavior.


Fairness


The concept of fairness means that each task will be assured of getting the input buffers it needs to operate on without waiting longer than necessary. Fairness among the modules in a given DataPipe where no stage of the pipeline has more than one instance is automatic. As the tail task frees a buffer, it enters the free buffer pool where it may enable the head task to allocate it and begin processing. All tasks in the DataPipe operate at maximum speed, overlapping the processing done by other tasks in the preceding or following stage of the pipeline.


If a DataPipe has stages consisting of parallel instances of a task, fairness among those tasks is assured by using an allocator semaphore which counts from Max_Buffers/NA (where NA is the number of allocators for this DataPipe on this particular machine) downward to zero. All FreeBuf( )s increment this semaphore back; however, there can be only Max_Buffers/NA buffers allocated by any allocator module in this DataPipe. This ensures that all allocators get a fair share of the available total number of input buffers. If a particular process attempts to allocate more buffers than it is allowed, the Master_Monitor process prevents such allocation, causing the process to either terminate or wait until a buffer currently allocated to the process becomes freed, thereby incrementing the semaphore back up to allow the process to allocate another buffer.
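Assuming POSIX counting semaphores, the per-allocator quota described above can be sketched as follows; the structure and function names are hypothetical, and the constant Max_Buffers is represented by MAX_BUFFERS:

```c
#include <semaphore.h>

#define MAX_BUFFERS 64   /* stand-in for Max_Buffers */

/* Fairness sketch: each allocator module gets a counting semaphore
 * initialized to Max_Buffers / NA. An AllocBuf( ) must win this semaphore
 * before taking a buffer; FreeBuf( ) posts it back when a buffer this
 * module allocated is eventually freed by the tail. */
struct allocator_quota {
    sem_t remaining;     /* counts down from Max_Buffers / NA to zero */
};

int quota_init(struct allocator_quota *q, int n_allocators)
{
    /* pshared = 1: the semaphore lives in shared memory and is visible
     * to all processes of this pipeline on this machine */
    return sem_init(&q->remaining, 1, MAX_BUFFERS / n_allocators);
}

/* Called on AllocBuf( ): blocks once the module has Max_Buffers/NA
 * buffers outstanding, until one of them is freed. */
int quota_acquire(struct allocator_quota *q) { return sem_wait(&q->remaining); }

/* Called on FreeBuf( ) of a buffer this module allocated. */
int quota_release(struct allocator_quota *q) { return sem_post(&q->remaining); }
```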


Control Messages


All instances of all modules have a control socket to Master_Monitor over which control messages are exchanged. All network readers/writers have an analogous control socket to their parent network agent. The parent network agent itself has a control socket to Master_Monitor. Each module periodically checks its control socket for any messages from Master_Monitor. Critical information such as a STOP_PIPE message is passed to Master_Monitor via this mechanism.
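The periodic control-socket check described here might look like the following sketch; the one-line text protocol and the STOP_PIPE string comparison are assumptions made purely for illustration:

```c
#include <poll.h>
#include <string.h>
#include <unistd.h>

/* Sketch of a module's periodic control-socket check: poll the socket to
 * Master_Monitor (or the parent network agent) and react to a STOP_PIPE
 * message. Returns 0 to keep working, -1 to shut down. */
int check_control_socket(int ctl_fd)
{
    struct pollfd pfd = { .fd = ctl_fd, .events = POLLIN };
    char msg[64];
    ssize_t n;

    if (poll(&pfd, 1, 0) <= 0 || !(pfd.revents & POLLIN))
        return 0;                          /* nothing pending, keep working */

    n = read(ctl_fd, msg, sizeof msg - 1);
    if (n <= 0)
        return -1;                         /* monitor went away: stop       */
    msg[n] = '\0';
    if (strncmp(msg, "STOP_PIPE", 9) == 0)
        return -1;                         /* orderly shutdown requested    */
    return 0;
}
```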


Status Monitoring


Each module initiated by Master_Monitor on a given machine is monitored by either a parent network process (in the case of a network reader or writer) or by Master_Monitor itself for states of execution. In case any module is reported as having terminated abnormally, Master_Monitor identifies this exception and signals all the modules on that particular pipeline to stop. This is done by means of control messages through control sockets as described previously. Upon safely stopping all modules pertaining to this particular pipeline, it signals the remote machine's Master_Monitor to stop the remote side of this particular pipeline, and the entire pipeline is shut down safely by means of control message signaling.


Implementation


In a preferred embodiment, DataPipe is implemented on Sun Solaris or HP-UX operating systems and incorporated into Release 2.7 of CommVault System's Vault98 storage management product.



FIG. 6 is an illustrative example of the sequence of primitive commands used to set up a DataPipe. The DataPipe is then used to process data in three modules named A, B and C.


To set up the DataPipe, Master_Monitor is called and given the name of the DataPipe and the names of the modules that will use the pipe (module 10).


Master_Monitor (Initiate_Pipe(Sample_pipe,A,B,C)). Within the logic of module A, the Alloc_Buf( ) function is then called to obtain a buffer (20). The logic of module A may perform any actions it wants to fill the buffer with useful data. When it has completed its processing of the buffer (30), it calls SendBuf( ) to send the buffer to module B for processing (40). Module A then repeats its function by again calling Alloc_Buf( ) to obtain the next buffer.


The logic of module B calls ReceiveBuf( ) to obtain a buffer of data from module A (50). It then operates on the buffer by performing processing as required (60). When it is finished with the buffer it calls SendBuf( ) to send that buffer to module C (70).


Module B then repeats its function by again calling ReceiveBuf( ) to obtain the next buffer from module A.


Module C obtains a buffer of data from module B by calling ReceiveBuf( ). When it has completed its processing of the data in that buffer (90), it calls FreeBuf( ) to release the buffer (100). Like the other two modules, it loops back to receive the next buffer from module B.


The primitives used to allocate, free, send, and receive buffers are synchronized by the use of semaphores. This ensures coordination between the modules so that the receiving module does not start processing data before the sending module has finished with it. If no buffer is available, the AllocBuf or ReceiveBuf primitives will wait until one is available. All three modules operate in parallel as separate tasks. The order of processing from A to B to C is established in the initial call to Master_Monitor that established the DataPipe.
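Putting the FIG. 6 walkthrough together, the three module loops might be sketched as below, reusing the hypothetical primitive signatures from the earlier sketch; fill_buffer( ), transform( ), and write_out( ) are placeholders for whatever useful work modules A, B, and C perform:

```c
/* Sketch of the FIG. 6 sequence using hypothetical primitive signatures. */
struct buf;
struct buf *AllocBuf(const char *pipeline_name);
struct buf *ReceiveBuf(const char *pipeline_name, int stage);
int SendBuf(struct buf *b);
int FreeBuf(struct buf *b);

extern void fill_buffer(struct buf *b);    /* module A: produce data */
extern void transform(struct buf *b);      /* module B: process data */
extern void write_out(struct buf *b);      /* module C: consume data */

void module_A(void)                        /* head: Alloc -> work -> Send */
{
    for (;;) {
        struct buf *b = AllocBuf("Sample_pipe");
        fill_buffer(b);
        SendBuf(b);
    }
}

void module_B(void)                        /* middle: Receive -> work -> Send */
{
    for (;;) {
        struct buf *b = ReceiveBuf("Sample_pipe", 1);
        transform(b);
        SendBuf(b);
    }
}

void module_C(void)                        /* tail: Receive -> work -> Free */
{
    for (;;) {
        struct buf *b = ReceiveBuf("Sample_pipe", 2);
        write_out(b);
        FreeBuf(b);
    }
}
```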


Referring now to FIG. 7, there is shown another embodiment of the DataPipe apparatus as it is used within Vault98 to provide a high speed path between a “client” system containing a large database that is being backed up to the “CommServ” server and stored as archive files on a DLT drive. Everything on the collect side of the physical network is part of the client software configuration, whereas everything on the DLT drive side of the physical network is part of the server software configuration. The “collect” activities on the client prepare data to be sent over the DataPipe to the CommServ.



FIG. 7, which is similar to FIG. 2B, depicts a two-computer configuration where a header task 15, identified as a collect process, is initiated via Master_Monitor daemon 90A on the first computer. Collector 15 retrieves data from the disk and allocates the buffer from the shared memory 85A for processing the data to be transferred. Collector 15 then sends the data to the compression process 20, which functions to compress the data as it moves over the pipe. As shown in FIG. 7, multiple instantiations of compression module 20 are provided at this stage for effectively processing the data as it flows across the system. Accordingly, sequencer 17, initiated by Master_Monitor 90A, is coupled directly between collect module 15 and compressor module 20 to stamp each of the buffers with the sequence number as described previously. Re-sequencer module 23 is coupled to the output queue of the compression module 20 instantiations to properly reorder and re-sequence the buffers sent from the instantiations of module 20 to content analysis module 30. Content analysis module 30 then receives the buffers from re-sequencer 23, processes the data, and sends the buffers to sequencer 33, which again stamps the buffers and sends them to multiple instantiations of network agents 50A for processing across the physical network via a standard network protocol such as TCP/IP, FTP, ICMP, etc. Network agents 50B are instantiated by network control processor 60B in communication with remote Master_Monitor 90B to provide multiple network agent instantiations, where each agent on the remote side uniquely corresponds to and communicates with a corresponding agent on the local side. In the preferred embodiment, each network agent 50A on the local side performs a copy of the data in the buffer for transfer over the physical network to its corresponding network agent 50B on the remote side and then performs a FreeBuf( ) call to free the buffers associated with shared memory 85A for reallocation. On the remote side, the network agent 50B receives the data transferred over the network and acts as a header on the remote side to allocate each of the buffers in shared memory 85B. These buffers are then sent to re-sequencer 53, which stores buffers received in internal memory until each of the predecessor buffers is received, and then forwards them to the backup/restore process 70 via the SendBuf( ) function. The backup/restore process then functions to write the contents of each of the buffers received to DLT drive 80 and, upon completion, frees each of those buffers to permit further reallocation in the buffer pool and shared memory 85B. As one can see, this pipeline could be set up over any high speed network, such as ATM, FDDI, etc. The pipeline is capable of utilizing the entire practical bandwidth available on the physical network by means of multiple network agents. In cases where real high speed networks are available (networks which have transfer rates higher than DLT drives), multiple pipelines are set up to utilize the resources available to the fullest extent.


Salient Features


From the foregoing discussion, numerous advantages of the data pipe pipeline data transfer system using semaphore signaled shared memory to produce a general purpose, flexible data transfer mechanism are apparent. Included among these advantages are:

    • 1. Its flexible nature—the modules that are plugged into a pipeline can be easily changed based on the application.
    • 2. It allows for having multiple instances of a given module running in a given stage of the pipeline. This allows for parallelism over and beyond what the pipeline already provides.
    • 3. It provides a well-defined mechanism for startup and shutdown of a pipeline and includes housekeeping and cleanup mechanisms provided via Master_Monitor.
    • 4. It allows the application control over the amount of network bandwidth it wants to take advantage of. It is easily possible to take complete advantage of a wide-band transport mechanism simply by increasing the number of network agents.
    • 5. It provides a built-in scheme for fairness among modules. In other words, no single module can retain all the input buffers, nor can any single instance of a multi-instance stage keep the other instances from operating.
    • 6. It allows easy integration with third-party software by virtue of the fact that the DataPipe provides for any module to attach itself as an unbound end-point (head or tail).
    • 7. It allows for easy checkpointing by virtue of a tail-head socket connection.


However, it should be remembered that shared memory on a particular machine is not shared among various other machines. Thus, we are not exploiting the implicit results of a distributed shared memory, but are performing data transfer only on a demand basis, discarding all used buffers, with selective copying, for best performance on a data transfer paradigm. Thus, the invention described herein represents a real data transfer system rather than a commonly seen distributed shared memory paradigm.


While preferred embodiments of the present invention have been shown, those skilled in the art will further appreciate that the present invention may be embodied in other specific forms without departing from the spirit or central attributes thereof. All such variations and modifications are intended to be within the scope of this invention as defined by the appended claims.

Claims
  • 1. A pipeline system for providing data transfer between multiple computing devices, the pipeline system comprising: a datapipe that spans multiple computing devices, the datapipe comprising a sequence of stages for transferring data from an origination computing device to a destination computing device, wherein the datapipe is identified on the origination computing device and the destination computing device with a data pipeline identifier; one or more control modules configured to control at least a first stage of the sequence of stages on the origination computing device; a first dedicated memory comprising a first pool of buffers, wherein the one or more control modules allocate at least a first buffer from the first pool of buffers to the first stage, and wherein the first buffer is associated with the data pipeline identifier until freed by the one or more control modules; one or more of the control modules further configured to control at least a second stage of the sequence of stages on the destination computing device; and a second dedicated memory comprising a second pool of buffers, wherein the one or more control modules allocate at least a second buffer from the second pool of buffers to the second stage, and wherein the second buffer is associated with the data pipeline identifier until freed by the one or more control modules.
  • 2. The pipeline system of claim 1, wherein one of the stages in the sequence of stages comprises data compression.
  • 3. The pipeline system of claim 1, wherein the first stage comprises: an input queue for receiving or allocating at least the first buffer of the first pool of buffers; and an output queue for sending or freeing the first buffer.
  • 4. The pipeline system of claim 3, comprising an intermediate stage coupled to the first stage for stamping each buffer received from the first stage process with a sequence number prior to sending to a next stage.
  • 5. The pipeline system of claim 4, wherein a subsequent stage includes a re-sequence processor reordering a buffer sequence received according to the sequence number.
  • 6. The pipeline system of claim 1, wherein each of the first dedicated memory and the second dedicated memory further includes a plurality of semaphores each associated with a particular input/output queue for controlling access to the associated dedicated memory.
  • 7. The pipeline system of claim 1, wherein the pool of buffers in each of the first dedicated memory and the second dedicated memory comprises buffers of equal size.
  • 8. The pipeline system of claim 1, wherein a first control module is initiated via a request message from a requesting application process, the request message including a process identification and a timestamp.
  • 9. A method for transferring data in a pipeline system, the method comprising: registering and initiating a plurality of pipeline stages associated with a data transfer pipeline that spans multiple computing devices, wherein the data transfer pipeline is identified on an origination computing device and a destination computing device with a data pipeline identifier; controlling at least a first stage of the sequence of stages on the origination computing device; allocating at least a first buffer from a first pool of buffers in a first dedicated memory to the first stage of the plurality of pipeline stages, wherein the first buffer is associated with the data pipeline identifier until freed; controlling at least a second stage of the sequence of stages on the destination computing device; and allocating at least a second buffer from a second pool of buffers in a second dedicated memory to the second stage of the plurality of pipeline stages, wherein the second buffer is associated with the data pipeline identifier until freed.
  • 10. The method of claim 9, additionally comprising transferring control of multiple ones of the buffers through the use of semaphores.
  • 11. The method of claim 9, wherein the plurality of pipeline stages comprises a compression process and an encryption process.
  • 12. The method of claim 9, additionally comprising storing data from the pool of second buffers to a storage device.
  • 13. The method of claim 9, additionally comprising determining a number of buffers in the first pool of buffers to be allocated to the first stage.
  • 14. The method of claim 13, additionally comprising terminating the first stage when the number of buffers allocated to the first stage exceeds a threshold amount.
RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 12/147,066 filed Jun. 26, 2008, which is a continuation of U.S. patent application Ser. No. 10/144,683, filed May 13, 2002, now U.S. Pat. No. 7,401,154, issued Jul. 15, 2008, which is a continuation of U.S. patent application Ser. No. 09/038,440, filed Mar. 11, 1998, now U.S. Pat. No. 6,418,478, issued Jul. 9, 2002, which claims priority to U.S. Provisional Patent Application No. 60/063,831, entitled “HIGH SPEED DATA TRANSFER MECHANISM”, filed Oct. 30, 1997, each of which is hereby incorporated herein by reference in its entirety.

US Referenced Citations (296)
Number Name Date Kind
4296465 Lemak Oct 1981 A
4686620 Ng Aug 1987 A
4695943 Keeley et al. Sep 1987 A
4888689 Taylor et al. Dec 1989 A
4995035 Cole et al. Feb 1991 A
5005122 Griffin et al. Apr 1991 A
5062104 Lubarsky et al. Oct 1991 A
5093912 Dong et al. Mar 1992 A
5133065 Cheffetz et al. Jul 1992 A
5163131 Row et al. Nov 1992 A
5193154 Kitajima et al. Mar 1993 A
5212772 Masters May 1993 A
5226157 Nakano et al. Jul 1993 A
5239647 Anglin et al. Aug 1993 A
5241668 Eastridge et al. Aug 1993 A
5241670 Eastridge et al. Aug 1993 A
5247616 Berggren et al. Sep 1993 A
5276860 Fortier et al. Jan 1994 A
5276867 Kenley et al. Jan 1994 A
5287500 Stoppani, Jr. Feb 1994 A
5301351 Jippo Apr 1994 A
5311509 Heddes et al. May 1994 A
5321816 Rogan et al. Jun 1994 A
5333315 Saether et al. Jul 1994 A
5347653 Flynn et al. Sep 1994 A
5377341 Kaneko et al. Dec 1994 A
5388243 Glider et al. Feb 1995 A
5410700 Fecteau et al. Apr 1995 A
5428783 Lake Jun 1995 A
5448724 Hayashi Sep 1995 A
5465359 Allen et al. Nov 1995 A
5487160 Bemis Jan 1996 A
5491810 Allen Feb 1996 A
5495607 Pisello et al. Feb 1996 A
5504873 Martin et al. Apr 1996 A
5515502 Wood May 1996 A
5544345 Carpenter et al. Aug 1996 A
5544347 Yanai et al. Aug 1996 A
5555404 Torbjornsen et al. Sep 1996 A
5559957 Balk Sep 1996 A
5559991 Kanfi Sep 1996 A
5588117 Karp et al. Dec 1996 A
5592618 Micka et al. Jan 1997 A
5598546 Blomgren Jan 1997 A
5606359 Youden et al. Feb 1997 A
5615392 Harrison et al. Mar 1997 A
5619644 Crockett et al. Apr 1997 A
5638509 Dunphy et al. Jun 1997 A
5642496 Kanfi Jun 1997 A
5644779 Song Jul 1997 A
5651002 Van Seters et al. Jul 1997 A
5673381 Huai et al. Sep 1997 A
5675511 Prasad et al. Oct 1997 A
5680550 Kuszmaul et al. Oct 1997 A
5682513 Candelaria et al. Oct 1997 A
5687343 Fecteau et al. Nov 1997 A
5692152 Cohen et al. Nov 1997 A
5699361 Ding et al. Dec 1997 A
5719786 Nelson et al. Feb 1998 A
5729743 Squibb Mar 1998 A
5737747 Vishlitzky et al. Apr 1998 A
5751997 Kullick et al. May 1998 A
5758359 Saxon May 1998 A
5761104 Lloyd et al. Jun 1998 A
5761677 Senator et al. Jun 1998 A
5761734 Pfeffer et al. Jun 1998 A
5764972 Crouse et al. Jun 1998 A
5778395 Whiting et al. Jul 1998 A
5790828 Jost Aug 1998 A
5805920 Sprenkle et al. Sep 1998 A
5812398 Nielsen Sep 1998 A
5813008 Benson et al. Sep 1998 A
5813009 Johnson et al. Sep 1998 A
5813017 Morris Sep 1998 A
5815462 Konishi et al. Sep 1998 A
5829023 Bishop Oct 1998 A
5829046 Tzelnic et al. Oct 1998 A
5860104 Witt et al. Jan 1999 A
5875478 Blumenau Feb 1999 A
5875481 Ashton et al. Feb 1999 A
5878056 Black et al. Mar 1999 A
5887134 Ebrahim Mar 1999 A
5890159 Sealby et al. Mar 1999 A
5897643 Matsumoto Apr 1999 A
5901327 Ofek May 1999 A
5924102 Perks Jul 1999 A
5926836 Blumenau Jul 1999 A
5933104 Kimura Aug 1999 A
5936871 Pan et al. Aug 1999 A
5950205 Aviani, Jr. Sep 1999 A
5956519 Wise et al. Sep 1999 A
5958005 Thorne et al. Sep 1999 A
5970233 Liu et al. Oct 1999 A
5970255 Tran et al. Oct 1999 A
5974563 Beeler, Jr. Oct 1999 A
5987478 See et al. Nov 1999 A
5995091 Near et al. Nov 1999 A
5999629 Heer et al. Dec 1999 A
6003089 Shaffer et al. Dec 1999 A
6009274 Fletcher et al. Dec 1999 A
6012090 Chung et al. Jan 2000 A
6021415 Cannon et al. Feb 2000 A
6026414 Anglin Feb 2000 A
6041334 Cannon Mar 2000 A
6052735 Ulrich et al. Apr 2000 A
6058494 Gold et al. May 2000 A
6076148 Kedem et al. Jun 2000 A
6094416 Ying Jul 2000 A
6094684 Pallmann Jul 2000 A
6101255 Harrison et al. Aug 2000 A
6105129 Meier et al. Aug 2000 A
6105150 Noguchi et al. Aug 2000 A
6112239 Kenner et al. Aug 2000 A
6122668 Teng et al. Sep 2000 A
6131095 Low et al. Oct 2000 A
6131190 Sidwell Oct 2000 A
6137864 Yaker Oct 2000 A
6148412 Cannon et al. Nov 2000 A
6154787 Urevig et al. Nov 2000 A
6154852 Amundson et al. Nov 2000 A
6161111 Mutalik et al. Dec 2000 A
6167402 Yeager Dec 2000 A
6175829 Li et al. Jan 2001 B1
6212512 Barney et al. Apr 2001 B1
6230164 Rikieta et al. May 2001 B1
6260069 Anglin Jul 2001 B1
6269431 Dunham Jul 2001 B1
6275953 Vahalia et al. Aug 2001 B1
6292783 Rohler Sep 2001 B1
6295541 Bodnar et al. Sep 2001 B1
6301592 Aoyama et al. Oct 2001 B1
6304880 Kishi Oct 2001 B1
6324581 Xu et al. Nov 2001 B1
6328766 Long Dec 2001 B1
6330570 Crighton Dec 2001 B1
6330572 Sitka Dec 2001 B1
6330642 Carteau Dec 2001 B1
6343324 Hubis et al. Jan 2002 B1
6350199 Williams et al. Feb 2002 B1
RE37601 Eastridge et al. Mar 2002 E
6353878 Dunham Mar 2002 B1
6356801 Goodman et al. Mar 2002 B1
6374266 Shnelvar Apr 2002 B1
6374336 Peters et al. Apr 2002 B1
6381331 Kato Apr 2002 B1
6385673 DeMoney May 2002 B1
6389432 Pothapragada et al. May 2002 B1
6418478 Ignatius et al. Jul 2002 B1
6421711 Blumenau et al. Jul 2002 B1
6438586 Hass et al. Aug 2002 B1
6487561 Ofek et al. Nov 2002 B1
6487644 Huebsch et al. Nov 2002 B1
6505307 Stell et al. Jan 2003 B1
6519679 Devireddy et al. Feb 2003 B2
6538669 Lagueux, Jr. et al. Mar 2003 B1
6542909 Tamer et al. Apr 2003 B1
6542972 Ignatius et al. Apr 2003 B2
6564228 O'Connor May 2003 B1
6571310 Ottesen May 2003 B1
6577734 Etzel et al. Jun 2003 B1
6581143 Gagne et al. Jun 2003 B2
6604149 Deo et al. Aug 2003 B1
6631442 Blumenau Oct 2003 B1
6631493 Ottesen et al. Oct 2003 B2
6647396 Parnell et al. Nov 2003 B2
6654825 Clapp et al. Nov 2003 B2
6658436 Oshinsky et al. Dec 2003 B2
6658526 Nguyen et al. Dec 2003 B2
6675177 Webb Jan 2004 B1
6732124 Koseki et al. May 2004 B1
6742092 Huebsch et al. May 2004 B1
6757794 Cabrera et al. Jun 2004 B2
6763351 Subramaniam et al. Jul 2004 B1
6772332 Boebert et al. Aug 2004 B1
6785786 Gold et al. Aug 2004 B1
6789161 Blendermann et al. Sep 2004 B1
6791910 James et al. Sep 2004 B1
6859758 Prabhakaran et al. Feb 2005 B1
6871163 Hiller et al. Mar 2005 B2
6880052 Lubbers et al. Apr 2005 B2
6886020 Zahavi et al. Apr 2005 B1
6909722 Li Jun 2005 B1
6928513 Lubbers et al. Aug 2005 B2
6952758 Chron et al. Oct 2005 B2
6965968 Touboul et al. Nov 2005 B1
6968351 Butterworth Nov 2005 B2
6973553 Archibald, Jr. et al. Dec 2005 B1
6983351 Gibble et al. Jan 2006 B2
7003519 Biettron et al. Feb 2006 B1
7003641 Prahlad et al. Feb 2006 B2
7035880 Crescenti et al. Apr 2006 B1
7062761 Slavin et al. Jun 2006 B2
7069380 Ogawa et al. Jun 2006 B2
7085904 Mizuno et al. Aug 2006 B2
7103731 Gibble et al. Sep 2006 B2
7103740 Colgrove et al. Sep 2006 B1
7107298 Prahlad et al. Sep 2006 B2
7107395 Ofek et al. Sep 2006 B1
7117246 Christenson et al. Oct 2006 B2
7120757 Tsuge Oct 2006 B2
7130970 Devassy et al. Oct 2006 B2
7155465 Lee et al. Dec 2006 B2
7155633 Tuma et al. Dec 2006 B2
7159110 Douceur et al. Jan 2007 B2
7174433 Kottomtharayil et al. Feb 2007 B2
7209972 Ignatius et al. Apr 2007 B1
7246140 Therrien et al. Jul 2007 B2
7246207 Kottomtharayil et al. Jul 2007 B2
7246272 Cabezas et al. Jul 2007 B2
7269612 Devarakonda et al. Sep 2007 B2
7272606 Borthakur et al. Sep 2007 B2
7277941 Ignatius et al. Oct 2007 B2
7278142 Bandhole et al. Oct 2007 B2
7287047 Kavuri Oct 2007 B2
7287252 Bussiere et al. Oct 2007 B2
7293133 Colgrove et al. Nov 2007 B1
7298846 Bacon et al. Nov 2007 B2
7315923 Retnamma et al. Jan 2008 B2
7346623 Prahlad et al. Mar 2008 B2
7359917 Winter et al. Apr 2008 B2
7380072 Kottomtharayil et al. May 2008 B2
7398429 Shaffer et al. Jul 2008 B2
7401154 Ignatius et al. Jul 2008 B2
7409509 Devassy et al. Aug 2008 B2
7448079 Tremain Nov 2008 B2
7454569 Kavuri et al. Nov 2008 B2
7457933 Pferdekaemper et al. Nov 2008 B2
7467167 Patterson Dec 2008 B2
7472238 Gokhale Dec 2008 B1
7484054 Kottomtharayil et al. Jan 2009 B2
7490207 Amarendran Feb 2009 B2
7500053 Kavuri et al. Mar 2009 B1
7500150 Sharma et al. Mar 2009 B2
7509019 Kaku Mar 2009 B2
7519726 Palliyll et al. Apr 2009 B2
7523483 Dogan Apr 2009 B2
7529748 Wen et al. May 2009 B2
7536291 Retnamma et al. May 2009 B1
7546324 Prahlad et al. Jun 2009 B2
7546482 Blumenau et al. Jun 2009 B2
7581077 Ignatius et al. Aug 2009 B2
7596586 Gokhale et al. Sep 2009 B2
7613748 Brockway et al. Nov 2009 B2
7627598 Burke Dec 2009 B1
7627617 Kavuri et al. Dec 2009 B2
7631194 Wahlert et al. Dec 2009 B2
7685126 Patel et al. Mar 2010 B2
7765369 Prahlad et al. Jul 2010 B1
7809914 Kottomtharayil et al. Oct 2010 B2
7831553 Prahlad et al. Nov 2010 B2
7840537 Gokhale et al. Nov 2010 B2
7861050 Retnamma et al. Dec 2010 B2
8019963 Ignatius et al. Sep 2011 B2
20020029281 Zeidner et al. Mar 2002 A1
20020040405 Gold Apr 2002 A1
20020042869 Tate et al. Apr 2002 A1
20020049778 Bell et al. Apr 2002 A1
20020065967 MacWilliams et al. May 2002 A1
20020107877 Whiting et al. Aug 2002 A1
20020194340 Ebstyne et al. Dec 2002 A1
20020198983 Ullmann et al. Dec 2002 A1
20030014433 Teloh et al. Jan 2003 A1
20030016609 Rushton et al. Jan 2003 A1
20030061491 Jaskiewicz et al. Mar 2003 A1
20030066070 Houston Apr 2003 A1
20030079112 Sachs et al. Apr 2003 A1
20030169733 Gurkowski et al. Sep 2003 A1
20040073716 Boom et al. Apr 2004 A1
20040088432 Hubbard et al. May 2004 A1
20040107199 Dalrymple et al. Jun 2004 A1
20040193953 Callahan et al. Sep 2004 A1
20040210796 Largman et al. Oct 2004 A1
20050033756 Kottomtharayil et al. Feb 2005 A1
20050114477 Willging et al. May 2005 A1
20050166011 Burnett et al. Jul 2005 A1
20050172093 Jain Aug 2005 A1
20050246568 Davies Nov 2005 A1
20050256972 Cochran et al. Nov 2005 A1
20050262296 Peake Nov 2005 A1
20060005048 Osaki et al. Jan 2006 A1
20060010154 Prahlad et al. Jan 2006 A1
20060010227 Atluri Jan 2006 A1
20060044674 Martin et al. Mar 2006 A1
20060149889 Sikha Jul 2006 A1
20060224846 Amarendran et al. Oct 2006 A1
20070288536 Sen et al. Dec 2007 A1
20080059515 Fulton Mar 2008 A1
20080229037 Bunte et al. Sep 2008 A1
20080243914 Prahlad et al. Oct 2008 A1
20080243957 Prahlad et al. Oct 2008 A1
20080243958 Prahlad et al. Oct 2008 A1
20080256173 Ignatius et al. Oct 2008 A1
20090319534 Gokhale Dec 2009 A1
20090319585 Gokhale Dec 2009 A1
20100005259 Prahlad Jan 2010 A1
20100131461 Prahlad et al. May 2010 A1
Foreign Referenced Citations (19)
Number Date Country
0259912 Mar 1988 EP
0405926 Jan 1991 EP
0467546 Jan 1992 EP
0774715 May 1997 EP
0809184 Nov 1997 EP
0862304 Sep 1998 EP
0899662 Mar 1999 EP
0981090 Feb 2000 EP
1174795 Jan 2002 EP
1115064 Dec 2004 EP
2366048 Feb 2002 GB
WO 9513580 May 1995 WO
WO 9839709 Sep 1998 WO
WO 9912098 Mar 1999 WO
WO 9914692 Mar 1999 WO
WO 9917204 Apr 1999 WO
WO 0205466 Jan 2002 WO
WO 2004090788 Oct 2004 WO
WO 2005055093 Jun 2005 WO
Related Publications (1)
Number Date Country
20110238777 A1 Sep 2011 US
Provisional Applications (1)
Number Date Country
60063831 Oct 1997 US
Continuations (3)
Number Date Country
Parent 12147066 Jun 2008 US
Child 13158222 US
Parent 10144683 May 2002 US
Child 12147066 US
Parent 09038440 Mar 1998 US
Child 10144683 US