A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosures, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
Backup operations for client data on a storage network are often performed on streams of data that are managed by subclients and sent to a backup drive or media device. Typically, only one subclient can perform a backup on a given stream at any given time; that is, the concurrency limit for the number of backups that can go to a stream at any given time is one. Indirectly, this means that only one backup can be sent to a given medium or drive at any point.
This limitation has a major drawback. With tape speeds increasing and the gap between disk speed and tape speed widening, tape throughput is throttled by the slower disks. This becomes a major issue in a large enterprise with many clients that have slow, underperforming disks holding large amounts of data that must be backed up within a fixed backup window. The only way the backup window can be met is by backing up each of these clients to a different piece of media in a different drive, which increases hardware costs. It can also create a “shoe shining” effect, in which the tape is driven back and forth because drive capacity is under-utilized at certain times.
Tape capacity is also growing, and data from multiple clients can actually fit on a single piece of media, especially if the backup being performed is an incremental backup. Scattering data across many pieces of media is a tape-handling nightmare for backup administrators.
In accordance with embodiments of the invention, a method is provided for performing a backup operation on a plurality of data streams containing data to be backed up. In one embodiment, the method involves combining the data streams into a single stream of one or more data chunks, including by writing data from more than one of the data streams into at least one data chunk. The combining may be done by multiplexing the data streams. The method further involves transmitting the one or more data chunks over a transport channel to a backup medium and storing the one or more data chunks on the backup medium.
Data from the data streams may be written into a data chunk until the data chunk reaches a predetermined size, or until a configurable time interval has elapsed, or otherwise in accordance with a storage policy as disclosed in some of the pending applications referenced above, and as discussed herein.
During a restore operation or during an operation to create an auxiliary backup copy, the data chunk is retrieved from the backup medium and data from the separate data streams are separated from the data chunk. All data streams written into a data chunk may be separated from each other into separate data stream portions. When the data streams have been multiplexed, separating involves demultiplexing the data streams written into the data chunk. The separated data streams may be restored to a client or further stored as auxiliary copies of the data streams.
In some embodiments, the data streams contain data from a plurality of archive files. Combining the data streams thus may involve writing data from more than one archive file into at least one data chunk, and may further involve writing data from a single archive file into more than one data chunk. In these embodiments, a plurality of tag headers are inserted into the data chunk, each tag header describing data written in the data chunk from a corresponding archive file. Data may be written into a data chunk until the end of an archive file has been reached. When the data chunk is retrieved from the backup medium, the data from at least one of the archive files is separated from the data chunk, or all the archive files may be separated into separate archive file portions, using the tag headers when necessary to identify and describe the separate archive file portions. The archive file portions may then be restored to a client or may be stored on an auxiliary storage device, which may in turn be accessed during a restore operation of a given archive file requested by a client.
In accordance with some embodiments, the invention provides a system for performing a backup operation on a plurality of data streams containing data to be backed up. The system includes one or more receivers for receiving the data streams, a multiplexer for combining the data streams into a combined data stream, a data writer for writing portions of the combined data stream into one or more data chunks, and one or more backup media for storing the one or more data chunks. The system may further include a transport channel for transporting the data chunks from the data writer to the backup media.
In accordance with further aspects of embodiments of the present invention, a data structure is provided for a data chunk stored on a memory device. The data chunk data structure is used by a computer system to back up data and includes a plurality of portions of data from different archive files, written into the data chunk from multiplexed data streams containing the archive files, and a plurality of tag headers, each describing one of the archive file portions written into the data chunk.
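By way of illustration only, the following is a minimal C++ sketch of how such a data chunk structure might be represented in memory. The type and field names (TagHeader, ArchiveFilePortion, DataChunk, and so on) are assumptions for this sketch and are not the patented on-media format; only the notion of per-portion tag headers identifying archive file portions is drawn from the description above.

```cpp
// Illustrative sketch only; names and layout are assumptions. It models a chunk
// holding interleaved portions of several archive files, each portion preceded
// by a tag header describing it.
#include <cstdint>
#include <vector>

struct TagHeader {
    uint64_t archiveFileId;   // identifies which archive file this portion belongs to
    uint64_t relativeOffset;  // offset of this portion within its archive file
    uint32_t dataSize;        // number of payload bytes that follow this header
};

struct ArchiveFilePortion {
    TagHeader header;
    std::vector<uint8_t> payload;  // data written from the multiplexed stream
};

struct DataChunk {
    uint64_t chunkId;                          // unique chunk identifier
    std::vector<ArchiveFilePortion> portions;  // portions from different archive files
};
```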
The present invention includes methods and systems operating in conjunction with a modular storage system to enable computers on a network to share storage devices on a physical and logical level. An exemplary modular storage system is the GALAXY backup and retrieval system and QINETIX storage management system available from CommVault Systems of New Jersey. The modular architecture underlying this system is described in the above-referenced patent applications, each of which is incorporated herein.
Preferred embodiments of the invention are now described with reference to the drawings. An embodiment of the system of the present invention is shown in
A client 85 can be any networked client 85 and preferably includes at least one attached information store 90. The information store 90 may be any memory device or local data storage device known in the art, such as a hard drive, CD-ROM drive, tape drive, random access memory (RAM), or other types of magnetic, optical, digital and/or analog local storage. In some embodiments of the invention, the client 85 includes at least one data agent 95, which is a software module that is generally responsible for performing storage operations on data of a client 85 stored in information store 90 or other memory location. Storage operations include, but are not limited to, creation, storage, retrieval, migration, deletion, and tracking of primary or production volume data, secondary volume data, primary copies, secondary copies, auxiliary copies, snapshot copies, backup copies, incremental copies, differential copies, synthetic copies, hierarchical storage management (HSM) copies, archive copies, information lifecycle management (ILM) copies, and other types of copies and versions of electronic data. In some embodiments of the invention, the system provides at least one, and typically a plurality of, data agents 95 for each client, each data agent 95 being intended to back up, migrate, and recover data associated with a different application. For example, a client 85 may have different individual data agents 95 designed to handle Microsoft Exchange data, LOTUS NOTES data, MICROSOFT WINDOWS file system data, MICROSOFT ACTIVE DIRECTORY Objects data, and other types of data known in the art.
The storage manager 100 is generally a software module or application that coordinates and controls the system; for example, the storage manager 100 manages and controls storage operations performed by the system. The storage manager 100 communicates with all components of the system, including client 85, data agent 95, media agent 105, and storage devices 115, to initiate and manage storage operations. The storage manager 100 preferably has an index 107, further described herein, for storing data related to storage operations. In general, the storage manager 100 communicates with storage devices 115 via a media agent 105. In some embodiments, the storage manager 100 communicates directly with the storage devices 115.
The system includes one or more media agents 105. The media agent 105 is generally a software module that conducts data, as directed by the storage manager 100, between the client 85 and one or more storage devices 115, for example, a tape library, a hard drive, a magnetic media storage device, an optical media storage device, or other storage device. The media agent 105 is communicatively coupled with and controls the storage device 115. For example, the media agent 105 might instruct a storage device 115 to perform a storage operation, e.g., to archive, migrate, or restore application-specific data. The media agent 105 generally communicates with the storage device 115 via a local bus such as a SCSI adapter.
Each media agent 105 maintains an index cache 110 which stores index data that the system generates during storage operations as further described herein. For example, storage operations for Microsoft Exchange data generate index data. Media management index data includes, for example, information regarding the location of the stored data on a particular media, information regarding the content of the information stored such as file names, sizes, creation dates, formats, application types, and other file-related criteria, information regarding one or more clients associated with the information stored, information regarding one or more storage policies, storage criteria, or storage preferences associated with the information stored, compression information, retention-related information, encryption-related information, stream-related information, and other types of information. Index data thus provides the system with an efficient mechanism for performing storage operations including locating user files for recovery operations and for managing and tracking stored data.
The system generally maintains two copies of the media management index data regarding particular stored data. A first copy is generally stored with the data copied to a storage device 115. Thus, a tape may contain the stored data as well as index information related to the stored data. In the event of a system restore, the index information stored with the stored data can be used to rebuild a media agent index 110 or other index useful in performing storage operations. In addition, the media agent 105 that controls the storage operation also generally writes an additional copy of the index data to its index cache 110. The data in the media agent index cache 110 is generally stored on faster media, such as magnetic media, and is thus readily available to the system for use in storage operations and other activities without having to be first retrieved from the storage device 115.
The storage manager 100 also maintains an index cache 107. Storage manager index data is used to indicate, track, and associate logical relationships and associations between components of the system, user preferences, management tasks, and other useful data. For example, the storage manager 100 might use its index cache 107 to track logical associations between media agent 105 and storage devices 115. The storage manager 100 may also use its index cache 107 to track the status of storage operations to be performed, storage patterns associated with the system components such as media use, storage growth, network bandwidth, service level agreement (SLA) compliance levels, data protection levels, storage policy information, storage criteria associated with user preferences, retention criteria, storage operation preferences, and other storage-related information.
A storage policy is generally a data structure or other information which includes a set of preferences and other storage criteria for performing a storage operation. The preferences and storage criteria may include, but are not limited to: a storage location, relationships between system components, network pathway to utilize, retention policies, data characteristics, compression or encryption requirements, preferred system components to utilize in a storage operation, and other criteria relating to a storage operation. A storage policy may be stored to a storage manager index, to archive media as metadata for use in restore operations or other storage operations, or to other locations or components of the system.
Index caches 107 and 110 typically reside on their corresponding storage component's hard disk or other fixed storage device. For example, the jobs agent 102 of a storage manager 100 may retrieve storage manager index 107 data regarding a storage policy and storage operation to be performed or scheduled for a particular client 85. The jobs agent 102, either directly or via another system module, communicates with the data agent 95 at the client 85 regarding the storage operation. In some embodiments, the jobs agent 102 also retrieves from the index cache 107 a storage policy associated with the client 85 and uses information from the storage policy to communicate to the data agent 95 one or more media agents 105 associated with performing storage operations for that particular client 85, as well as other information regarding the storage operation to be performed, such as retention criteria, encryption criteria, streaming criteria, etc. The data agent 95 then packages or otherwise manipulates the client information stored in the client information store 90 in accordance with the storage policy information and/or according to a user preference, and communicates this client data to the appropriate media agent(s) 105 for processing. The media agent(s) 105 store the data according to storage preferences associated with the storage policy, including storing the generated index data with the stored data, as well as storing a copy of the generated index data in the media agent index cache 110.
In some embodiments, components of the system may reside and execute on the same computer. In some embodiments, a client component such as a data agent 95, a media agent 105, or a storage manager 100 coordinates and directs local archiving, migration, and retrieval application functions as further described in U.S. patent application Ser. No. 09/610,738, now U.S. Pat. No. 7,035,880, issued Apr. 25, 2006. These client components can function independently or together with other similar client components.
Data and other information is transported throughout the system via buffers and network pathways including, among others, a high-speed data transfer mechanism, such as the CommVault DATAPIPE, as further described in U.S. Pat. No. 6,418,478 and U.S. patent application Ser. No. 09/495,751, now U.S. Pat. No. 7,209,972, issued Apr. 24, 2007, each of which is hereby incorporated herein by reference in its entirety. Self-describing tag headers are disclosed in these applications, wherein data is transferred between a flexible grouping of data transport modules, each supporting a separate function and leveraging buffers in a shared memory space. Thus, a data transport module receives a chunk of data and decodes how the data should be processed according to information contained in the chunk's header and, in some embodiments, the chunk's trailer. U.S. Pat. No. 6,418,478 and U.S. patent application Ser. No. 09/495,751, now U.S. Pat. No. 7,209,972, issued Apr. 24, 2007, generally address “logical data” transported via Transmission Control Protocol/Internet Protocol (TCP/IP); however, embodiments of the invention herein are also contemplated which are directed to transporting, multiplexing, encrypting, and generally processing block level data as disclosed, for example, in U.S. patent application Ser. No. 10/803,542, filed Mar. 18, 2004, titled METHOD AND SYSTEM FOR TRANSFERRING DATA IN A STORAGE OPERATION, now abandoned, which is hereby incorporated herein by reference in its entirety.
As discussed, these applications generally disclose systems and methods of processing logical data. Thus, for example, contiguous blocks of data from a file might be written on a first volume as blocks 1, 2, 3, 4, 5, etc. The operating system of the host associated with the first volume would assist in packaging the data, adding additional OS-specific information to the chunks. Thus, when transported and stored on a second volume, the blocks might be written to the second volume in a non-contiguous order, such as blocks 2, 1, 5, 3, 4. On a restore storage operation, the blocks could (due to the OS-specific information and other information) be restored to the first volume in contiguous order, but there was no control over how the blocks were laid out or written to the second volume. Incremental block-level backups of file data were therefore extremely difficult, if not impossible, in such a system, since there was no discernible relationship between how blocks were written on the first volume and how they were written on the second volume.
Thus, in some embodiments, the system supports transport and incremental backups (and other storage operations) of block level data via TCP/IP (and other transport protocols) over a local area network (LAN), wide area network (WAN), storage area network (SAN), etc. Additional data is added to the multi-tag header discussed in the applications referenced above which communicates how each block was written on the first volume. Thus, for example, a header might contain a file map of how the blocks were written on the first volume, and the map could be used to write the blocks in similar order on the second volume. In other embodiments, each chunk header might contain a pointer or other similar data structure indicating the chunk's position relative to other chunks in the file. Thus, when a file block or other block changed on the first volume, the system could identify and update the corresponding copy of the block located on the second volume and effectively perform an incremental backup or other storage operation.
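As a rough illustration of carrying block-placement information in a chunk header, the sketch below shows a header extended with a file map recording how blocks were laid out on the first volume. The names and layout here are hypothetical, not the actual multi-tag header format described in the referenced applications.

```cpp
// Hypothetical sketch of a chunk header carrying block-placement metadata so a
// receiving volume can lay blocks out in the same order as the source volume.
#include <cstdint>
#include <vector>

struct BlockMapEntry {
    uint64_t logicalBlockNumber;  // block number as written on the first volume
    uint64_t offsetInChunk;       // where that block's data sits inside this chunk
};

struct BlockLevelChunkHeader {
    uint64_t chunkId;
    uint64_t previousChunkId;            // pointer-like link to the prior chunk of the file
    std::vector<BlockMapEntry> fileMap;  // map used to write blocks in source order
};
```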
In the system, for example as in the CommVault GALAXY system, archives are grouped by storage policy. Many clients/subclients can point to the same storage policy. Each storage policy has a primary copy and zero or more secondary copies. Each copy has one or more streams related to the number of drives in a drive pool.
The system uses tape media to maximum capacity and throughput by multiplexing data from several clients onto the same media at the same time. The system allows a stream to be reserved more than once by different clients and allows multiple data movers to write to the same piece of media.
During backup or other storage operations, data is transferred from a data agent to a media agent over a “Data pipeline” as further described herein and in U.S. Pat. No. 6,418,478 and U.S. patent application Ser. No. 09/495,751, now U.S. Pat. No. 7,209,972, issued Apr. 24, 2007. One or more transport processes or modules, such as the DsBackup module in the CommVault GALAXY system, form the tail end of the pipeline on the media agent. For example, in the GALAXY system, the DataMover process running as part of DsBackup is responsible for writing data to the media. For data multiplexing, many such DataMovers belonging to different pipelines have to write to the same piece of media. This can be achieved by splitting the DataMover pipeline process into multiple components, including a data receiver, a data writer, and other modules as necessary.
Backup streams 125 are fed into the transmit pipeline 130. For example, in some embodiments, a backup process, such as the DsBackup process in the CommVault GALAXY system, packages file data and other data into chunks and communicates the chunks via the backup streams 125. Thus, the transmit pipeline 130 or tail end of the pipeline copies the data received in pipeline buffers from the backup process via the backup data streams 125. A data receiver 135 processes the data received from each backup stream 125. In some embodiments, there is one data receiver 135 per backup stream 125; thus, in the case of multiple backup streams 125, the system might contain multiple data receiver modules 135.
A multiplexing module 140 combines the data received by the receiver module(s) 135 into a single stream of chunks as further described herein. Thus, the multiplexing module 140 may combine data from multiple archive files into a single chunk. Additional modules 145 perform other operations on the chunks of data to be transported such as encryption, compression, etc. as further described herein, in U.S. Pat. No. 6,418,478; U.S. patent application Ser. No. 09/495,751, now U.S. Pat. No. 7,209,972, issued Apr. 24, 2007; and U.S. patent application Ser. No. 10/990,284, now U.S. Pat. No. 7,277,941, issued Oct. 2, 2007.
The data writer module 150 communicates the chunks of data from the transmit pipeline 130 over a transport channel 155 to the receive pipeline 160. The transport channel may comprise a buffer, a bus, a fiber optic channel, a LAN, a SAN, a WAN, a wireless communication medium, or other transport methods known in the art. There is generally one data writer 150 per media (not shown) that receives data from multiple data receivers 135 and writes data to the media. The data writer process 150 is generally invoked when the first pipeline is established to use a given media and generally remains running until all the pipelines backing up to this media are finished. The data writer 150 writes the data to media or to the receive pipeline 160 and closes a chunk when the chunk size is reached, the chunk size being a design parameter set to allow only chunks of a certain size to be transmitted over the datapipe. In some embodiments, the data writer 150 also updates the Archive Manager tables with the chunk information. A multiplexed chunk thus will contain data from many archive files.
In some embodiments, the transmit pipeline receives data directly from the system's data agents and writes multiplexed data to the media directly without an intervening receive pipeline 160. Thus, in some embodiments, a single pipeline is also contemplated. In embodiments that include both a transmit pipeline 130 and a receive pipeline 160, the receive pipeline 160 processes data received from the transmit pipeline 130 for storage to media, etc. A second data receiver 165 processes data received from the data writer 150 and additional modules 170 which may include encryption, decryption, compression, decompression modules, etc. further process the data before it is written to the storage media by a final data writer module (not shown).
In some embodiments, data multiplexing is a property of a storage policy. Any storage policy with data multiplexing enabled allows backups for multiple subclients to run simultaneously to the same media. In some embodiments, a resource manager process on the storage manager allows multiple volume reservations for media belonging to storage policies with data multiplexing enabled.
During a restore storage operation, the process is essentially reversed. Data is retrieved from the storage media and passed back through the pipeline to the original volume. Thus, during a restore, a data reader module (e.g., a data receiver directed to also retrieve data from storage) identifies the data by looking into the tag header of each retrieved chunk. Any offset into the chunk is a relative offset; i.e., when restoring data from a given archive file, data buffers encountered from a different archive file are not counted in the offset calculation and are discarded. Each volume block of data may contain data from different archive files. The tag header also contains the archive file id. In addition, all stored offsets are relative offsets within an archive file and do not depend on the actual physical location on the tape or other storage media.
A more detailed description of data multiplexing according to embodiments of the invention is now described:
A single backup is made up of one or more archive files. An archive file is made up of the smallest restorable components, called “chunks”. Previously, a chunk always belonged to only one archive file. With data multiplexing, a chunk interleaves pipeline buffers from different pipelines. A tag header written for each buffer of data uniquely identifies the data with its archive file. The tag header contains the archive file id (serial ID) from the database corresponding to the archive file being backed up.
In some embodiments, for example in the CommVault GALAXY system, one or more modules in the pipeline, such as the DsBackup module, package or otherwise retrieve data from a primary volume to be backed up and from the pipeline, and send the data to the DataMover or receive pipeline. DsBackup also initializes indexes and updates the index cache every time it receives a file header from a client. The DataMover is responsible for organizing the data received from DsBackup into chunks, starting a new chunk when the size of the current chunk reaches the predetermined value, updating the archive manager tables with information about the chunks and their location on the tape, and handling end-of-media conditions and media reservations. The DataMover uses the MediaFileSystem object, for example the I/O system API calls of a media agent or other system component, to write data onto the tape and read data from the tape. MediaFileSystem has a write buffer, and data is written onto the tape when this write buffer is filled.
With the new data multiplexing model of the DataMover, the previous DataMover modules and their functionality undergo changes.
Referring now to
Each Data Receiver writes a tag portion immediately by calling the Data Writer's Write() method. The Data Writer has an internal buffer, which is the same size as the selected block size. When this buffer is full, the buffer is locked and emptied to the media. While this write operation to the media is in progress, a second buffer is ready to accept data from the Data Receiver. The thread that calls write on the Data Writer returns from the function call when the media I/O is complete; meanwhile, the second buffer fills. These double buffers are guarded with appropriate semaphores to ensure proper concurrent access.
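A condensed sketch of this double-buffering scheme follows. It is not the actual Data Writer implementation; the class name, buffer handling, and use of a standard mutex in place of the semaphores mentioned above are assumptions made for illustration.

```cpp
// Sketch of a double-buffered writer: callers fill one buffer while the other
// is flushed to media. A single mutex stands in for the semaphores in the text.
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <vector>

class DoubleBufferedWriter {
public:
    explicit DoubleBufferedWriter(size_t blockSize) : blockSize_(blockSize) {}

    // Blocking write: appends to the active buffer and, when the buffer reaches
    // the selected block size, swaps buffers and flushes the full one to media.
    void Write(const uint8_t* data, size_t size) {
        std::vector<uint8_t> full;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            std::vector<uint8_t>& buf = buffers_[active_];
            buf.insert(buf.end(), data, data + size);
            if (buf.size() < blockSize_) return;
            active_ ^= 1;         // the second buffer now accepts new data
            full.swap(buf);       // take ownership of the filled buffer
        }
        WriteBlockToMedia(full);  // media I/O proceeds outside the lock
    }

private:
    void WriteBlockToMedia(const std::vector<uint8_t>& block) {
        (void)block;              // placeholder for the tape/device write call
    }

    size_t blockSize_;
    std::vector<uint8_t> buffers_[2];
    int active_ = 0;
    std::mutex mutex_;
};
```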
The Write operation is a blocking call and returns after completing the write. The Data Writer Write API takes the archive file id as a parameter, and once the write is completed, the physical offsets are updated accordingly in a list maintained by the Data Writer object. When the size of the chunk exceeds the predetermined size, the chunk is automatically closed by writing a file mark, updating the archive manager tables in the storage manager or at the media agent, and updating the physical offsets in the list or index maintained by the Data Writer object to track multiplexed storage files. The Data Writer object is responsible for handling the end-of-media condition; the Data Receiver does not generally require any knowledge of it.
As previously discussed, generally only the Data Writer object knows when a chunk is closed, but there are conditions where a close-chunk operation may be needed because of a CLOSE ARCHIVE FILE message sent by the client. This means that the system may need to close the chunk even though the chunk has not reached the predetermined size. When a CLOSE ARCHIVE FILE message is received from the client, DsBackup calls Data Receiver Close, which in turn calls Data Writer Close. This close waits a predetermined amount of time for the chunk to close on its own, since other clients may still be pumping data into the chunk. If the chunk is not closed after the predetermined time, the chunk is closed forcefully by writing a file mark and updating the appropriate index cache. The only side effect is that the chunk may not be as big as the predetermined size because the chunk close was forced. The predetermined wait time can be made configurable or can be made a variable parameter depending on the client type. With this new model, a tag header may be split and span two data buffers on the tape; this is generally addressed during the restore of data.
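The chunk-closing behavior just described can be summarized with the following sketch. The class and member names (ChunkCloser, forceCloseTimeout, and so on) are illustrative assumptions rather than the actual module interfaces; only the close-on-size and forced-close-after-timeout rules come from the description above.

```cpp
// Sketch of the chunk close policy: a chunk closes when it reaches its target
// size, or is force-closed after a configurable wait once a client has asked to
// close its archive file, even if the size threshold was never reached.
#include <chrono>
#include <cstdint>

class ChunkCloser {
public:
    ChunkCloser(uint64_t targetChunkSize, std::chrono::seconds forceCloseTimeout)
        : targetChunkSize_(targetChunkSize), forceCloseTimeout_(forceCloseTimeout) {}

    // Called after every buffer written into the chunk.
    bool ShouldCloseAfterWrite(uint64_t currentChunkSize) const {
        return currentChunkSize >= targetChunkSize_;
    }

    // Called when a CLOSE ARCHIVE FILE request arrives from a client; records the
    // time of the first such request so the forced-close timer can be evaluated.
    void OnCloseArchiveFileRequest() {
        if (!closeRequested_) {
            closeRequested_ = true;
            firstCloseRequest_ = std::chrono::steady_clock::now();
        }
    }

    // Force the close (write a file mark, update the index cache) if the chunk
    // has not closed on its own within the configured wait after the first request.
    bool ShouldForceClose() const {
        if (!closeRequested_) return false;
        return std::chrono::steady_clock::now() - firstCloseRequest_ >= forceCloseTimeout_;
    }

private:
    uint64_t targetChunkSize_;
    std::chrono::seconds forceCloseTimeout_;
    bool closeRequested_ = false;
    std::chrono::steady_clock::time_point firstCloseRequest_;
};
```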
The following cases illustrate exemplary backup scenarios and considerations according to embodiments of the invention:
1. Initialization of DataWriter: During initialization, the active media for the media group is mounted. This method returns success only if the media is mounted correctly. If the media is already mounted, this method simply returns success along with the volume id of the mounted media. This may be required for logging information for the Data Receiver in some embodiments.
2. CreateArchiveFile: In this method, an archive file header is written onto the media. This uses a special tag header that identifies the data in the tag portion as an archive file header.
3. WriteToMedia: This method returns information to the upper layer indicating whether or not the write is successful. The method returns information such as end of chunk, various media errors, media full, etc. These conditions are indicated only through the return value of this method.
4. CloseArchiveFile: This method closes the archive file by writing an archive file trailer to the media. This again uses a specialized tag header that identifies the data as an archive file trailer. CloseArchiveFile does not return immediately; there is a configurable time interval during which writing to the current chunk continues. The current chunk is closed when all the archive files in this chunk are finished or when the above timeout interval, measured from the first archive file close request received, expires, whichever comes first.
There is generally no need for any callback methods from the Data Writer to the Data Receiver. All communication from the Data Writer to the Data Receiver should be through the return values of the called functions.
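These four operations, and the rule that status flows back only through return values, might be captured by an interface along the lines of the sketch below. The enum values and signatures are assumptions made for illustration; only the method names and the no-callback convention come from the cases above.

```cpp
// Sketch of a Data Writer interface whose methods report conditions such as end
// of chunk or media full solely through return values, with no callbacks into
// the Data Receiver.
#include <cstddef>
#include <cstdint>

enum class WriteStatus { Ok, EndOfChunk, MediaFull, MediaError };

class IDataWriter {
public:
    virtual ~IDataWriter() = default;

    // Mounts the active media for the media group; returns the mounted volume id
    // (useful for Data Receiver logging) or a negative value on failure.
    virtual int64_t Initialize() = 0;

    // Writes an archive file header to the media using a special tag header.
    virtual WriteStatus CreateArchiveFile(uint64_t archiveFileId) = 0;

    // Writes a buffer for the given archive file; the return value is the only
    // channel for reporting end-of-chunk, media-full, and error conditions.
    virtual WriteStatus WriteToMedia(uint64_t archiveFileId,
                                     const uint8_t* data, size_t size) = 0;

    // Writes an archive file trailer; the current chunk closes when all archive
    // files in it finish or when the configurable timeout expires.
    virtual WriteStatus CloseArchiveFile(uint64_t archiveFileId) = 0;
};
```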
Restores of multiplexed data are often less complicated, since restores are generally not multiplexed the way backups are. The aim during a restore is to seek to the requested offsets and restore without examining the tag headers in all of the data. A Data Reader object is instantiated during the restore; the parameters for this object remain the same as for the current DataMover object. The client opens the required archive file by specifying the archive file id and then sends the seek offset. The Data Reader object queries the archive manager to determine the chunk number that needs to be opened and the volume that should be mounted to seek to the given offset. Once the media is mounted, it is positioned to the correct file marker in order to open the chunk. Once the chunk header is read and discarded, data is read block by block, with the block size the same as that used during the write. Every time a block of data is read, the tag headers are examined to determine whether the block contains data of the archive file being sought. This is done by traversing the buffer that was read in and looking through the tag headers. If a tag portion contains another archive file's data, that tag portion is discarded and the next header is read. If the tag portion contains data of the archive file being searched for, a check is done to see whether the tag portion contains the offset being sought. If it does not, the tag portion is skipped but the physical offset calculations are incremented appropriately. Once the correct block containing the offset is reached, the data buffer pointer is positioned properly and success is returned to the caller.
Once the seek is successful, a data reader/retriever module in the pipeline, such as the FsRestoreHead module in the GALAXY system, requests a read with a read size equal to the size of the tag header. The process looks into the tag header to determine the size of the data to be read and then requests a read of that size. The restore proceeds in this fashion. The Data Reader has a buffer at least equal to the size of one pipeline header, as it may need to buffer data during a read request. The Data Reader also handles the case of tag headers that span two data blocks.
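The seek-and-skip logic described in the two preceding paragraphs can be sketched roughly as follows. The structure layout, function name, and parameters are assumptions for this sketch and are not the actual Data Reader API; the skipping rule and the relative-offset accounting are taken from the description above.

```cpp
// Sketch of seeking to a requested offset within one archive file of a
// multiplexed chunk: tag portions of other archive files are skipped, and only
// bytes of the target archive file count toward the relative offset.
#include <cstdint>
#include <vector>

struct TagHeader {
    uint64_t archiveFileId;  // which archive file this tag portion belongs to
    uint32_t dataSize;       // payload bytes following the header
};

struct TagPortion {
    TagHeader header;
    std::vector<uint8_t> payload;
};

// Scans the tag portions of a block (already read from media) and returns a
// pointer to the byte at 'seekOffset' of 'targetArchiveFileId', or nullptr if
// the offset lies beyond the data present in this block.
const uint8_t* SeekWithinBlock(const std::vector<TagPortion>& block,
                               uint64_t targetArchiveFileId,
                               uint64_t seekOffset,
                               uint64_t& bytesSeenSoFar) {
    for (const TagPortion& portion : block) {
        if (portion.header.archiveFileId != targetArchiveFileId) {
            continue;  // another archive file's data: discard, do not count it
        }
        if (seekOffset < bytesSeenSoFar + portion.payload.size()) {
            return portion.payload.data() + (seekOffset - bytesSeenSoFar);
        }
        bytesSeenSoFar += portion.payload.size();  // skip, but advance the count
    }
    return nullptr;  // offset not in this block; keep reading the next block
}
```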
Metadata is also written onto the tape (or other media) during backup to help troubleshoot problems and to enable disaster recovery programs, such as CommVault's DR-restore program, which retrieves data from backups. During backup, every time a chunk is closed, a file marker is written. After this, a data block is constructed containing information such as the list of archive file ids whose data is contained in the recently closed chunk, along with their physical offsets and sizes within the chunk. A file marker follows this data block, and generally no database update is made. To facilitate the disaster recovery tool functionality, the system also indicates which of the archive file ids were closed in the current chunk.
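A rough sketch of this per-chunk metadata block is shown below. The structure and field names are assumptions chosen to mirror the description (archive file ids, their offsets and sizes within the chunk, and which archive files were closed in the chunk); the actual on-media encoding is not reproduced here.

```cpp
// Sketch of the metadata block written between file markers after a chunk is
// closed, used for troubleshooting and disaster recovery rather than database
// lookups.
#include <cstdint>
#include <vector>

struct ChunkArchiveFileEntry {
    uint64_t archiveFileId;   // archive file whose data appears in the closed chunk
    uint64_t physicalOffset;  // where that archive file's data starts in the chunk
    uint64_t sizeInChunk;     // how many of its bytes the chunk contains
    bool     closedInChunk;   // true if this archive file was closed in this chunk
};

struct ChunkTrailerMetadata {
    uint64_t chunkId;
    std::vector<ChunkArchiveFileEntry> entries;  // one entry per archive file in the chunk
};
```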
The data format on Media changes with Data Interleaving/Multiplexing. An exemplary current data format and related data structures used prior to multiplexing according to embodiments of the invention is shown in
An exemplary media format to support data multiplexing according to embodiments of the invention is shown in
When data multiplexing is enabled, other elements of the previous system also change in some embodiments, as further described below. For example, Auxiliary Copy previously copied data chunk by chunk within an archive file, on the assumption that the data within a chunk belongs to the same archive file. This is no longer true with data multiplexing. In embodiments where data multiplexing is supported, Auxiliary Copy allows two copy mechanisms: simple copy (copy whole chunks for all or part of the archive files) and de-multiplexed copy (archive file by archive file; only if the source is magnetic).
In a simple copy, Auxiliary Copy creates a list of archive files that need to be copied and copies them chunk by chunk and volume by volume. Data from different archive files will be copied at the same time to the secondary copy. This is faster, but the resultant copy will have the data still interleaved, as in the original copy.
In a de-multiplexed copy, Auxiliary Copy copies data archive file by archive file. As a result, the system may go over the same set of media for each archive file, discarding data encountered from other archive files. This approach is slow and inefficient, but the secondary copy has contiguous data for each archive file.
The system uses flags and other signaling mechanisms, for example the deMultiplexDataOnCopy flag on the ArchGroupCopy object, to dictate the choice of copy mechanism. The Archive Manager passes down a list of archive files to be copied to the secondary copy if the copy is set up for a simple copy. If de-multiplexing is supported on the copy, AuxCopyMgr passes down a single archive file to be copied.
Auxiliary Copy first creates all the headers for all archive files being copied and then starts the copy. A set of messages is sent over the pipeline to create these headers; in turn, DsBackup calls DmReceiver create, which adds archive file information to the dmreceiverinfo structure maintained in the DmWriter. In some embodiments, Auxiliary Copy also supports client-based copies, in which archive files belonging to a set of clients are copied. In other embodiments, a synthetic full backup combines archive files backed up from a single subclient and creates a full backup from all the incremental changes since the last full backup. The new archive file created as part of a synthetic full backup can be multiplexed with other backups.
Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described herein. Software and other modules may reside on servers, workstations, personal computers, computerized tablets, personal digital assistants (PDAs), and other devices suitable for the purposes described herein. Software and other modules may be accessible via local memory, via a network, via a browser or other application in an ASP context, or via other means suitable for the purposes described herein. Data structures described herein may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein. User interface elements described herein may comprise elements from graphical user interfaces, command line interfaces, and other interfaces suitable for the purposes described herein. Screenshots presented and described herein can be displayed differently as known in the art to input, access, change, manipulate, modify, alter, and work with information.
While the invention has been described and illustrated in connection with preferred embodiments, many variations and modifications, as will be evident to those skilled in this art, may be made without departing from the spirit and scope of the invention, and the invention is thus not to be limited to the precise details of methodology or construction set forth above, as such variations and modifications are intended to be included within the scope of the invention.
Appendix A describes data structures, software modules, and other elements of the system according to embodiments of the invention, such as in the CommVault GALAXY system.
ArchChunkTable
ArchChunkMapping table
Every chunk has a unique 64-bit id used as a counter to track chunks and perform other functions
Multiple archive files may be part of a single chunk
The ArchChunkMapping table determines the archive files contained in each chunk
During restores, DmReceiver queries the archive manager and gets the information for the required chunk by providing the archive file id and the physical offsets.
All data contained in a chunk belongs to the same copy.
An integer value in the ArchGroupCopy table defines the multiplexing factor, and determines how many clients can back up to this copy at the same time. This factor is applicable for all streams within the copy.
The deMultiplexDataOnCopy flag indicates whether Auxiliary Copy should de-multiplex data when creating a secondary copy. The flag on the secondary copy is the one taken into consideration.
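Expressed as in-memory records, the relationships above might look like the following hedged sketch; the actual tables are relational database tables whose full schemas are not reproduced in this text, and only the field names mentioned above are taken from the description.

```cpp
// Sketch of the chunk-tracking records described above: a 64-bit chunk id, a
// mapping of archive files to chunks, and per-copy multiplexing settings.
// Field names follow the text where given; everything else is illustrative.
#include <cstdint>

struct ArchChunkRecord {
    uint64_t chunkId;        // unique 64-bit id used as a counter to track chunks
    uint64_t copyId;         // all data in a chunk belongs to the same copy
};

struct ArchChunkMappingRecord {
    uint64_t chunkId;        // chunk containing part of the archive file
    uint64_t archiveFileId;  // multiple archive files may map to one chunk
    uint64_t physicalOffset; // offset of this archive file's data in the chunk
    uint64_t size;           // size of this archive file's data in the chunk
};

struct ArchGroupCopyRecord {
    uint64_t copyId;
    int      howManyCanMultiplex;    // how many clients may back up to this copy at once
    bool     deMultiplexDataOnCopy;  // whether Auxiliary Copy should de-multiplex
};
```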
1.1.1 Creation of a Chunk
DmWriter maintains a list of archive files that were written as part of the chunk
When the chunk is closed, DmWriter makes an AMDS call to close the chunk and passes along the list of archive files that made up the chunk
Archive Manager creates the necessary entries in the ArchChunkMapping table and the archChunk table
1.1.2 APIs that Change in Archive Manager and AMDS
(1) ArchiveManagerDS::getAfileInfo(GetAfileInfoArgs_t * args, GetAfileInfoRet_t * ret)
Following structures defined in source/include/ArMgr/AmdsArgs.h and CVABasic.h change:
(2) int ArchiveManagerDS::closeChunk(CloseChunkArgsDS_t * args, CloseChunkRetDS_t * ret)
Following structures defined in source/include/ArMgr/AmdsArgs.h and CVABasic.h change:
1.2 Resource Manager
In previous versions of the system that did not support multiplexing, a single volume (defined by the MMS2Volume table) could be reserved only once; the same held true for drive reservations. This behavior changes to support data multiplexing. A given volume can be reserved multiple times by different jobs for writes. The number of times a volume can be reserved for writes is determined by an index value, such as the howManyCanMultiplex value set in the ArchGroupCopy table. The same stream can also now be reserved multiple times due to this change.
The Resource Manager allows up to “howManyCanMultiplex” reservations on a volume, stream, and drive when the reservation type is “WRITE”
Jobs running as part of a copy that supports multiplexing generally cannot be interrupted
These jobs can, however, be suspended or killed
The Mark media full option is supported once per media group.
Stream availability is no longer based on the “inuse” flag but on the “howManyReservations” field. If this value equals the “howManyCanMultiplex” value for the ArchGroupCopy, then that stream generally cannot be reserved.
Resource Manager will reserve a specific drive.
The selection of the drive is based on the volumeid and the mediaid set in the drive table. If the requested media is already cache mounted in a drive, then that drive is reserved.
Resource Manager disallows any reservation of a client that wants to participate in data multiplexing if the client has not been upgraded to support multiplexing.
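The stream and volume reservation rule described above reduces to a simple comparison, sketched here with assumed structure and function names; only the howManyReservations and howManyCanMultiplex fields come from the description.

```cpp
// Sketch of the write-reservation check: a stream (or volume or drive) may be
// reserved again only while its reservation count is below the copy's
// howManyCanMultiplex value.
struct StreamState {
    int howManyReservations;  // current number of write reservations on the stream
};

struct CopySettings {
    int howManyCanMultiplex;  // multiplexing factor from the ArchGroupCopy table
};

bool CanReserveForWrite(const StreamState& stream, const CopySettings& copy) {
    return stream.howManyReservations < copy.howManyCanMultiplex;
}
```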
1.3 Media Manager
In previous versions of the system that did not support multiplexing, Media Manager mounted a particular volume into any available drive and set the drive id in the reservation tables. This changes, as the reservation is now made for a specific drive. When the mount request is received, Media Manager determines the drive that is reserved for this job and mounts the media into that drive. If the media is cache mounted in a different drive, then the reservation is switched to that drive, provided that drive is not already reserved.
1.4 DSBackup
With data multiplexing, data from different clients will belong to the same chunk, and hence each data block in the chunk has to be uniquely identified in order to perform a restore. This is achieved by storing the archive file id in the tag header that is associated with every data block. The archive file id uniquely identifies the data block, and the database can be used to determine the client to which the data belongs. The structure of an exemplary tag_header is given below.
The field validity_bits has been renamed archive_file_id to store the archive file id, as shown below.
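The actual tag_header declaration is not reproduced in this text; the following is only a hypothetical sketch. The single detail taken from the description is that the former validity_bits field now carries the archive file id; the remaining fields are illustrative placeholders.

```cpp
// Hypothetical sketch only: not the actual tag_header layout. The one change
// reflected from the description is that the former validity_bits field now
// stores the archive file id; other fields are placeholders.
#include <cstdint>

struct tag_header {
    uint32_t tag_magic;        // placeholder: marks the start of a tag header
    uint32_t block_size;       // placeholder: size of the data that follows
    uint64_t relative_offset;  // placeholder: offset within the archive file
    uint64_t archive_file_id;  // formerly validity_bits; identifies the data block's archive file
};
```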
The archive file id is filled in by the pipe layer of the client during backup. The tag header is written onto the media in the same format without any modification. During restore, the Data Reader reads the tag header to find the archive file id and, in turn, determine whether the associated data is required for the current restore.
1.5 DataMover (Windows Implementation)
The DataMover is responsible for writing the data transferred over the pipeline to the media. With data multiplexing, the DataMover is split into two components, a Data Receiver (DmReceiver) and a Data Writer (DmWriter). DsBackup invokes an instance of the DmReceiver object. The DmReceiver object internally checks for the existence of a DmWriter for the requested MediaGroupId. If the DmWriter is not present, a new instance of the DmWriter is created and cached in a DmWriter map. This map is maintained in the CVD context of the media agent and is accessible to all DmReceivers. The DmWriter maintains an internal buffer, corresponding to a volume block of data, per DmReceiver. The volume block size is determined from the media type being used. A Write on the DmReceiver calls the DmWriter write. The DmWriter copies the pipeline buffers internally to align them to the volume block in the ReceiverInfo structure.
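The DmReceiver/DmWriter relationship can be sketched as follows. Apart from the DmWriter name, the MediaGroupId key, and the per-receiver volume-block buffering, the class and member names are assumptions made for illustration rather than the actual implementation.

```cpp
// Sketch of a DmWriter cache keyed by media group id: each DmReceiver looks up
// (or lazily creates) the single DmWriter for its media, and the DmWriter keeps
// one volume-block-sized staging buffer per receiver.
#include <cstddef>
#include <cstdint>
#include <map>
#include <memory>
#include <vector>

class DmWriter {
public:
    explicit DmWriter(size_t volumeBlockSize) : volumeBlockSize_(volumeBlockSize) {}

    // Copies pipeline buffers into the receiver's staging buffer, aligning them
    // to the volume block size before they are written to media.
    void Write(int receiverId, const uint8_t* data, size_t size) {
        std::vector<uint8_t>& buf = receiverBuffers_[receiverId];
        buf.insert(buf.end(), data, data + size);
        while (buf.size() >= volumeBlockSize_) {
            // flush one full volume block to media here (omitted in this sketch)
            buf.erase(buf.begin(),
                      buf.begin() + static_cast<std::ptrdiff_t>(volumeBlockSize_));
        }
    }

private:
    size_t volumeBlockSize_;
    std::map<int, std::vector<uint8_t>> receiverBuffers_;  // one buffer per DmReceiver
};

class DmWriterCache {
public:
    // Returns the DmWriter for the media group, creating and caching it if needed.
    std::shared_ptr<DmWriter> GetOrCreate(uint64_t mediaGroupId, size_t volumeBlockSize) {
        auto it = writers_.find(mediaGroupId);
        if (it != writers_.end()) return it->second;
        auto writer = std::make_shared<DmWriter>(volumeBlockSize);
        writers_[mediaGroupId] = writer;
        return writer;
    }

private:
    std::map<uint64_t, std::shared_ptr<DmWriter>> writers_;  // keyed by MediaGroupId
};
```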
The DataMoverBase class is the class that implements the functionality of the Data Writer. Since this class is used for both backup and restore, it was given the generic name “DataMoverBase”.
In the above classes, the Data Receiver is a thin layer that in many cases calls the Data Writer methods directly.
This application is a continuation of U.S. patent application Ser. No. 11/954,176, filed Dec. 11, 2007, which is a continuation of U.S. patent application Ser. No. 10/990,357, filed Nov. 15, 2004, which claims the benefit of U.S. Provisional Patent Application No. 60/519,526, titled SYSTEM AND METHOD FOR PERFORMING PIPELINED STORAGE OPERATIONS IN A STORAGE NETWORK, filed Nov. 13, 2003, each of which is hereby incorporated herein by reference in its entirety. This application is also related to the following patents and pending applications, each of which is hereby incorporated herein by reference in its entirety: U.S. patent application Ser. No. 10/990,284, titled SYSTEM AND METHOD FOR PROVIDING ENCRYPTION IN PIPELINED STORAGE OPERATIONS IN A STORAGE NETWORK, filed Nov. 15, 2004, now U.S. Pat. No. 7,277,941, issued Oct. 2, 2007; U.S. Pat. No. 6,418,478, titled PIPELINED HIGH SPEED DATA TRANSFER MECHANISM, issued Jul. 9, 2002; U.S. patent application Ser. No. 09/495,751, titled HIGH SPEED TRANSFER MECHANISM, filed Feb. 1, 2000, now U.S. Pat. No. 7,209,972, issued Apr. 24, 2007; U.S. patent application Ser. No. 09/610,738, titled MODULAR BACKUP AND RETRIEVAL SYSTEM USED IN CONJUNCTION WITH A STORAGE AREA NETWORK, filed Jul. 6, 2000, now U.S. Pat. No. 7,035,880, issued Apr. 25, 2006; U.S. patent application Ser. No. 09/774,268, titled LOGICAL VIEW AND ACCESS TO PHYSICAL STORAGE IN MODULAR DATA AND STORAGE MANAGEMENT SYSTEM, filed Jan. 30, 2001, now U.S. Pat. No. 6,542,972, issued Apr. 1, 2003; U.S. patent application Ser. No. 10/658,095, titled DYNAMIC STORAGE DEVICE POOLING IN A COMPUTER SYSTEM, filed Sep. 9, 2003, now U.S. Pat. No. 7,130,970, issued Oct. 31, 2006; and U.S. Provisional Patent Application No. 60/460,234, titled SYSTEM AND METHOD FOR PERFORMING STORAGE OPERATIONS IN A COMPUTER NETWORK, filed Apr. 3, 2003.
Number | Name | Date | Kind |
---|---|---|---|
4296465 | Lemak | Oct 1981 | A |
4686620 | Ng | Aug 1987 | A |
4695943 | Keeley et al. | Sep 1987 | A |
4888689 | Taylor et al. | Dec 1989 | A |
4995035 | Cole et al. | Feb 1991 | A |
5005122 | Griffin et al. | Apr 1991 | A |
5062104 | Lubarsky et al. | Oct 1991 | A |
5093912 | Dong et al. | Mar 1992 | A |
5133065 | Cheffetz et al. | Jul 1992 | A |
5163131 | Row et al. | Nov 1992 | A |
5193154 | Kitajima et al. | Mar 1993 | A |
5212772 | Masters | May 1993 | A |
5226157 | Nakano et al. | Jul 1993 | A |
5239647 | Anglin et al. | Aug 1993 | A |
5241668 | Eastridge et al. | Aug 1993 | A |
5241670 | Eastridge et al. | Aug 1993 | A |
5247616 | Berggren et al. | Sep 1993 | A |
5276860 | Fortier et al. | Jan 1994 | A |
5276867 | Kenley et al. | Jan 1994 | A |
5287500 | Stoppani, Jr. | Feb 1994 | A |
5301351 | Jippo | Apr 1994 | A |
5311509 | Heddes et al. | May 1994 | A |
5321816 | Rogan et al. | Jun 1994 | A |
5333315 | Saether et al. | Jul 1994 | A |
5347653 | Flynn et al. | Sep 1994 | A |
5377341 | Kaneko et al. | Dec 1994 | A |
5388243 | Glider et al. | Feb 1995 | A |
5410700 | Fecteau et al. | Apr 1995 | A |
5428783 | Lake | Jun 1995 | A |
5448724 | Hayashi et al. | Sep 1995 | A |
5465359 | Allen et al. | Nov 1995 | A |
5487160 | Bemis | Jan 1996 | A |
5491810 | Allen | Feb 1996 | A |
5495607 | Pisello et al. | Feb 1996 | A |
5504873 | Martin et al. | Apr 1996 | A |
5515502 | Wood | May 1996 | A |
5544345 | Carpenter et al. | Aug 1996 | A |
5544347 | Yanai et al. | Aug 1996 | A |
5555404 | Torbjornsen et al. | Sep 1996 | A |
5559957 | Balk | Sep 1996 | A |
5559991 | Kanfi | Sep 1996 | A |
5588117 | Karp et al. | Dec 1996 | A |
5592618 | Micka et al. | Jan 1997 | A |
5598546 | Blomgren | Jan 1997 | A |
5606359 | Youden et al. | Feb 1997 | A |
5615392 | Harrison et al. | Mar 1997 | A |
5619644 | Crockett et al. | Apr 1997 | A |
5638509 | Dunphy et al. | Jun 1997 | A |
5642496 | Kanfi | Jun 1997 | A |
5644779 | Song | Jul 1997 | A |
5651002 | Van Seters et al. | Jul 1997 | A |
5673381 | Huai et al. | Sep 1997 | A |
5675511 | Prasad et al. | Oct 1997 | A |
5680550 | Kuszmaul et al. | Oct 1997 | A |
5682513 | Candelaria et al. | Oct 1997 | A |
5687343 | Fecteau et al. | Nov 1997 | A |
5692152 | Cohen et al. | Nov 1997 | A |
5699361 | Ding et al. | Dec 1997 | A |
5719786 | Nelson et al. | Feb 1998 | A |
5729743 | Squibb | Mar 1998 | A |
5737747 | Vishlitzky et al. | Apr 1998 | A |
5751997 | Kullick et al. | May 1998 | A |
5758359 | Saxon | May 1998 | A |
5761104 | Lloyd et al. | Jun 1998 | A |
5761677 | Senator et al. | Jun 1998 | A |
5761734 | Pfeffer et al. | Jun 1998 | A |
5764972 | Crouse et al. | Jun 1998 | A |
5778395 | Whiting et al. | Jul 1998 | A |
5790828 | Jost | Aug 1998 | A |
5805920 | Sprenkle et al. | Sep 1998 | A |
5812398 | Nielsen | Sep 1998 | A |
5813008 | Benson et al. | Sep 1998 | A |
5813009 | Johnson et al. | Sep 1998 | A |
5813017 | Morris | Sep 1998 | A |
5815462 | Konishi et al. | Sep 1998 | A |
5829023 | Bishop | Oct 1998 | A |
5829046 | Tzelnic et al. | Oct 1998 | A |
5860104 | Witt et al. | Jan 1999 | A |
5875478 | Blumenau | Feb 1999 | A |
5875481 | Ashton et al. | Feb 1999 | A |
5878056 | Black et al. | Mar 1999 | A |
5887134 | Ebrahim | Mar 1999 | A |
5890159 | Sealby et al. | Mar 1999 | A |
5897643 | Matsumoto | Apr 1999 | A |
5901327 | Ofek | May 1999 | A |
5924102 | Perks | Jul 1999 | A |
5926836 | Blumenau | Jul 1999 | A |
5933104 | Kimura | Aug 1999 | A |
5936871 | Pan et al. | Aug 1999 | A |
5950205 | Aviani, Jr. | Sep 1999 | A |
5956519 | Wise et al. | Sep 1999 | A |
5958005 | Thorne et al. | Sep 1999 | A |
5970233 | Lie et al. | Oct 1999 | A |
5970255 | Tran et al. | Oct 1999 | A |
5974563 | Beeler, Jr. | Oct 1999 | A |
5987478 | See et al. | Nov 1999 | A |
5995091 | Near et al. | Nov 1999 | A |
5999629 | Heer et al. | Dec 1999 | A |
6003089 | Shaffer et al. | Dec 1999 | A |
6009274 | Fletcher et al. | Dec 1999 | A |
6012090 | Chung et al. | Jan 2000 | A |
6021415 | Cannon et al. | Feb 2000 | A |
6026414 | Anglin | Feb 2000 | A |
6041334 | Cannon | Mar 2000 | A |
6052735 | Ulrich et al. | Apr 2000 | A |
6058494 | Gold et al. | May 2000 | A |
6076148 | Kedem et al. | Jun 2000 | A |
6094416 | Ying | Jul 2000 | A |
6094684 | Pallmann | Jul 2000 | A |
6101255 | Harrison et al. | Aug 2000 | A |
6105129 | Meier et al. | Aug 2000 | A |
6105150 | Noguchi et al. | Aug 2000 | A |
6112239 | Kenner et al. | Aug 2000 | A |
6122668 | Teng et al. | Sep 2000 | A |
6131095 | Low et al. | Oct 2000 | A |
6131190 | Sidwell | Oct 2000 | A |
6137864 | Yaker | Oct 2000 | A |
6148412 | Cannon et al. | Nov 2000 | A |
6154787 | Urevig et al. | Nov 2000 | A |
6154852 | Amundson et al. | Nov 2000 | A |
6161111 | Mutalik et al. | Dec 2000 | A |
6167402 | Yeager | Dec 2000 | A |
6175829 | Li et al. | Jan 2001 | B1 |
6212512 | Barney et al. | Apr 2001 | B1 |
6230164 | Rekieta et al. | May 2001 | B1 |
6260069 | Anglin | Jul 2001 | B1 |
6269431 | Dunham | Jul 2001 | B1 |
6275953 | Vahalia et al. | Aug 2001 | B1 |
6292783 | Rohler | Sep 2001 | B1 |
6295541 | Bodnar et al. | Sep 2001 | B1 |
6301592 | Aoyama et al. | Oct 2001 | B1 |
6304880 | Kishi | Oct 2001 | B1 |
6324581 | Xu et al. | Nov 2001 | B1 |
6328766 | Long | Dec 2001 | B1 |
6330570 | Crighton | Dec 2001 | B1 |
6330572 | Sitka | Dec 2001 | B1 |
6330642 | Carteau | Dec 2001 | B1 |
6343324 | Hubis et al. | Jan 2002 | B1 |
6350199 | Williams et al. | Feb 2002 | B1 |
RE37601 | Eastridge et al. | Mar 2002 | E |
6353878 | Dunham | Mar 2002 | B1 |
6356801 | Goodman et al. | Mar 2002 | B1 |
6374266 | Shnelvar | Apr 2002 | B1 |
6374336 | Peters et al. | Apr 2002 | B1 |
6381331 | Kato | Apr 2002 | B1 |
6385673 | DeMoney | May 2002 | B1 |
6389432 | Pothapragada et al. | May 2002 | B1 |
6418478 | Ignatius et al. | Jul 2002 | B1 |
6421711 | Blumenau et al. | Jul 2002 | B1 |
6438586 | Hass et al. | Aug 2002 | B1 |
6487561 | Ofek et al. | Nov 2002 | B1 |
6487644 | Huebsch et al. | Nov 2002 | B1 |
6505307 | Stell et al. | Jan 2003 | B1 |
6519679 | Devireddy et al. | Feb 2003 | B2 |
6538669 | Lagueux, Jr. et al. | Mar 2003 | B1 |
6542909 | Tamer et al. | Apr 2003 | B1 |
6542972 | Ignatius et al. | Apr 2003 | B2 |
6564228 | O'Connor | May 2003 | B1 |
6571310 | Ottesen | May 2003 | B1 |
6577734 | Etzel et al. | Jun 2003 | B1 |
6581143 | Gagne et al. | Jun 2003 | B2 |
6604149 | Deo et al. | Aug 2003 | B1 |
6631442 | Blumenau | Oct 2003 | B1 |
6631493 | Ottesen et al. | Oct 2003 | B2 |
6647396 | Parnell et al. | Nov 2003 | B2 |
6654825 | Clapp et al. | Nov 2003 | B2 |
6658436 | Oshinsky et al. | Dec 2003 | B2 |
6658526 | Nguyen et al. | Dec 2003 | B2 |
6675177 | Webb | Jan 2004 | B1 |
6732124 | Koseki et al. | May 2004 | B1 |
6742092 | Huebsch et al. | May 2004 | B1 |
6757794 | Cabrera et al. | Jun 2004 | B2 |
6763351 | Subramaniam et al. | Jul 2004 | B1 |
6772332 | Boebert et al. | Aug 2004 | B1 |
6785786 | Gold et al. | Aug 2004 | B1 |
6789161 | Blendermann et al. | Sep 2004 | B1 |
6791910 | James et al. | Sep 2004 | B1 |
6859758 | Prabhakaran et al. | Feb 2005 | B1 |
6871163 | Hiller et al. | Mar 2005 | B2 |
6880052 | Lubbers et al. | Apr 2005 | B2 |
6886020 | Zahavi et al. | Apr 2005 | B1 |
6909722 | Li | Jun 2005 | B1 |
6928513 | Lubbers et al. | Aug 2005 | B2 |
6952758 | Chron et al. | Oct 2005 | B2 |
6965968 | Touboul et al. | Nov 2005 | B1 |
6968351 | Butterworth | Nov 2005 | B2 |
6973553 | Archibald, Jr. et al. | Dec 2005 | B1 |
6983351 | Gibble et al. | Jan 2006 | B2 |
7003519 | Biettron et al. | Feb 2006 | B1 |
7003641 | Prahlad et al. | Feb 2006 | B2 |
7035880 | Crescenti et al. | Apr 2006 | B1 |
7062761 | Slavin et al. | Jun 2006 | B2 |
7069380 | Ogawa et al. | Jun 2006 | B2 |
7085904 | Mizuno et al. | Aug 2006 | B2 |
7103731 | Gibble et al. | Sep 2006 | B2 |
7103740 | Colgrove et al. | Sep 2006 | B1 |
7107298 | Prahlad et al. | Sep 2006 | B2 |
7107395 | Ofek et al. | Sep 2006 | B1 |
7117246 | Christenson et al. | Oct 2006 | B2 |
7120757 | Tsuge | Oct 2006 | B2 |
7130970 | Devassy et al. | Oct 2006 | B2 |
7155465 | Lee et al. | Dec 2006 | B2 |
7155633 | Tuma et al. | Dec 2006 | B2 |
7159110 | Douceur et al. | Jan 2007 | B2 |
7174433 | Kottomtharayil et al. | Feb 2007 | B2 |
7209972 | Ignatius et al. | Apr 2007 | B1 |
7246140 | Therrien et al. | Jul 2007 | B2 |
7246207 | Kottomtharayil et al. | Jul 2007 | B2 |
7246272 | Cabezas et al. | Jul 2007 | B2 |
7269612 | Devarakonda et al. | Sep 2007 | B2 |
7277941 | Ignatius et al. | Oct 2007 | B2 |
7278142 | Bandhole et al. | Oct 2007 | B2 |
7287047 | Kavuri | Oct 2007 | B2 |
7287252 | Bussiere et al. | Oct 2007 | B2 |
7293133 | Colgrove et al. | Nov 2007 | B1 |
7298846 | Bacon et al. | Nov 2007 | B2 |
7315923 | Retnamma et al. | Jan 2008 | B2 |
7346623 | Prahlad et al. | Mar 2008 | B2 |
7359917 | Winter et al. | Apr 2008 | B2 |
7380072 | Kottomtharayil et al. | May 2008 | B2 |
7398429 | Shaffer et al. | Jul 2008 | B2 |
7401154 | Ignatius et al. | Jul 2008 | B2 |
7409509 | Devassy et al. | Aug 2008 | B2 |
7448079 | Tremain | Nov 2008 | B2 |
7454569 | Kavuri et al. | Nov 2008 | B2 |
7457933 | Pferdekaemper et al. | Nov 2008 | B2 |
7467167 | Patterson | Dec 2008 | B2 |
7472238 | Gokhale | Dec 2008 | B1 |
7484054 | Kottomtharayil et al. | Jan 2009 | B2 |
7490207 | Amarendran et al. | Feb 2009 | B2 |
7500053 | Kavuri et al. | Mar 2009 | B1 |
7500150 | Sharma et al. | Mar 2009 | B2 |
7509019 | Kaku | Mar 2009 | B2 |
7519726 | Palliyil et al. | Apr 2009 | B2 |
7529748 | Wen et al. | May 2009 | B2 |
7536291 | Vijayan Retnamma et al. | May 2009 | B1 |
7546324 | Prahlad et al. | Jun 2009 | B2 |
7546482 | Blumenau et al. | Jun 2009 | B2 |
7581077 | Ignatius et al. | Aug 2009 | B2 |
7596586 | Gokhale et al. | Sep 2009 | B2 |
7613748 | Brockway et al. | Nov 2009 | B2 |
7627598 | Burke | Dec 2009 | B1 |
7627617 | Kavuri et al. | Dec 2009 | B2 |
7631194 | Wahlert et al. | Dec 2009 | B2 |
7685126 | Patel et al. | Mar 2010 | B2 |
7765369 | Prahlad et al. | Jul 2010 | B1 |
7809914 | Kottomtharayil et al. | Oct 2010 | B2 |
7831553 | Prahlad et al. | Nov 2010 | B2 |
7840537 | Gokhale et al. | Nov 2010 | B2 |
7861050 | Retnamma et al. | Dec 2010 | B2 |
8019963 | Ignatius et al. | Sep 2011 | B2 |
20020029281 | Zeidner et al. | Mar 2002 | A1 |
20020040405 | Gold | Apr 2002 | A1 |
20020042869 | Tate et al. | Apr 2002 | A1 |
20020042882 | Dervan et al. | Apr 2002 | A1 |
20020049778 | Bell et al. | Apr 2002 | A1 |
20020065967 | MacWilliams et al. | May 2002 | A1 |
20020069369 | Tremain | Jun 2002 | A1 |
20020107877 | Whiting et al. | Aug 2002 | A1 |
20020194340 | Ebstyne et al. | Dec 2002 | A1 |
20020198983 | Ullmann et al. | Dec 2002 | A1 |
20030014433 | Teloh et al. | Jan 2003 | A1 |
20030016609 | Rushton et al. | Jan 2003 | A1 |
20030061491 | Jaskiewicz et al. | Mar 2003 | A1 |
20030066070 | Houston | Apr 2003 | A1 |
20030079112 | Sachs et al. | Apr 2003 | A1 |
20030169733 | Gurkowski et al. | Sep 2003 | A1 |
20040073716 | Boom et al. | Apr 2004 | A1 |
20040088432 | Hubbard et al. | May 2004 | A1 |
20040107199 | Dairymple et al. | Jun 2004 | A1 |
20040193953 | Callahan et al. | Sep 2004 | A1 |
20040210796 | Largman et al. | Oct 2004 | A1 |
20040230829 | Dogan et al. | Nov 2004 | A1 |
20050033756 | Kottomtharayil et al. | Feb 2005 | A1 |
20050114406 | Borthakur et al. | May 2005 | A1 |
20050114477 | Willging et al. | May 2005 | A1 |
20050166011 | Burnett et al. | Jul 2005 | A1 |
20050172093 | Jain | Aug 2005 | A1 |
20050246510 | Retnamma et al. | Nov 2005 | A1 |
20050246568 | Davies | Nov 2005 | A1 |
20050256972 | Cochran et al. | Nov 2005 | A1 |
20050262296 | Peake | Nov 2005 | A1 |
20060005048 | Osaki et al. | Jan 2006 | A1 |
20060010154 | Prahlad et al. | Jan 2006 | A1 |
20060010227 | Atluri | Jan 2006 | A1 |
20060044674 | Martin et al. | Mar 2006 | A1 |
20060149889 | Sikha | Jul 2006 | A1 |
20060224846 | Amarendran et al. | Oct 2006 | A1 |
20070288536 | Sen et al. | Dec 2007 | A1 |
20080059515 | Fulton | Mar 2008 | A1 |
20080229037 | Bunte et al. | Sep 2008 | A1 |
20080243914 | Prahlad et al. | Oct 2008 | A1 |
20080243957 | Prahlad et al. | Oct 2008 | A1 |
20080243958 | Prahlad et al. | Oct 2008 | A1 |
20080256173 | Ignatius et al. | Oct 2008 | A1 |
20090319534 | Gokhale | Dec 2009 | A1 |
20090319585 | Gokhale | Dec 2009 | A1 |
20100005259 | Prahlad | Jan 2010 | A1 |
20100131461 | Prahlad et al. | May 2010 | A1 |
Number | Date | Country |
---|---|---|
0259912 | Mar 1988 | EP |
0405926 | Jan 1991 | EP |
0467546 | Jan 1992 | EP |
0774715 | May 1997 | EP |
0809184 | Nov 1997 | EP |
0862304 | Sep 1998 | EP |
0899662 | Mar 1999 | EP |
0981090 | Feb 2000 | EP |
1174795 | Jan 2002 | EP |
1115064 | Dec 2004 | EP |
2366048 | Feb 2002 | GB |
WO 9513580 | May 1995 | WO |
WO 9839707 | Sep 1998 | WO |
WO 9839709 | Sep 1998 | WO |
WO 9912098 | Mar 1999 | WO |
WO 9914692 | Mar 1999 | WO |
WO 9917204 | Apr 1999 | WO |
WO 0205466 | Jan 2002 | WO |
WO 2004090788 | Oct 2004 | WO |
WO 2005055093 | Jun 2005 | WO |
Number | Date | Country | |
---|---|---|---|
20110087851 A1 | Apr 2011 | US |
Number | Date | Country | |
---|---|---|---|
60519526 | Nov 2003 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11954176 | Dec 2007 | US |
Child | 12969389 | US | |
Parent | 10990357 | Nov 2004 | US |
Child | 11954176 | US |