Systems and methods for resynchronizing information

Abstract
Methods and systems for synchronizing data files in a storage network between a first and a second storage device are provided. The method includes storing first data files associated with the first storage device to a storage medium, whereby the first data files include first data records. The storage medium may then be transferred to the second storage device. The first data files from the storage medium may be loaded onto the second storage device. Second data records from the first storage device may be received, and the first and second data records are compared. The first data files at the second storage device may be updated based on the comparison of the first and second data records.
Description
BACKGROUND OF THE INVENTION

The invention disclosed herein relates generally to performing data transfer operations in a data storage system. More particularly, the present invention relates to facilitating data synchronization between a source and destination device in a storage operation system.


Performing data synchronization is an important task in any system that processes and manages data. Synchronization is particularly important when a data volume residing in one location in a system is to be replicated and maintained on another part of the system. Replicated data volumes may be used, for example, for backup repositories, data stores, or in synchronous networks which may utilize multiple workstations requiring identical data storage.


File replication may include continually capturing write activity on a source computer and transmitting this write activity from the source computer to a destination or target computer in real time or near real time. A first step in existing file replication systems, as illustrated in FIG. 1, is a synchronization process to ensure that the source data 22 at a source storage device and the destination data 24 at a destination storage device are the same. That is, before a destination computer 28 may begin storing write activity associated with the source data 22 at a source computer 26, the system 20 must first ensure that the previously written source data 22 is stored at the destination computer 28.


Problems in existing synchronization processes may occur as a result of low or insufficient bandwidth in a network connection 30 over which the source and destination computers 26, 28 communicate. Insufficient bandwidth over the connection 30 ultimately causes bottlenecks and network congestion. For example, if the rate of change of data at the source computer 26 is greater than the bandwidth available on the network connection 30, data replication may never complete, since data at the source computer 26 will continue to change faster than it can be updated at the destination computer 28. Attempts to synchronize the source and destination computers 26, 28 may therefore continue indefinitely without success, and one set of data will always lag behind the other.


Additional synchronization problems may arise due to hardware failure. If either the source computer 26 or the destination computer 28 were to fail, become unavailable, or have a failure of one of its storage components, application data may still be generated without the system 20 being able to replicate the data to the other storage device. Neither computer 26 nor 28 possesses a means of tracking data changes during such a failure. Other possible sources of disruption of replication operations in existing systems include disrupted storage paths, broken communication links, and exceeding the storage capacity of a storage device.


Additionally, some existing synchronization systems maintain continuity across multiple storage volumes using a wholesale copy routine. Such a routine entails periodically copying most or all of the contents of a storage volume across the network to replace all the previous replication data. A storage policy or network administrator may control the operations and determine the frequency of the storage operation. Copying the entire contents of a storage volume across a network to a replication storage volume may be inefficient and can overload the network between the source computer 26 and the destination computer 28. Copying the entire volume across the network connection 30 between the two computers causes the connection 30 to become congested and unavailable for other operations or other resources, which may lead to hardware or software operation failure, over-utilization of storage and network resources, and lost information. A replication operation as described above may also lack the capability to encrypt or otherwise secure data transmitted across the network connection 30. A replication operation that takes place over a public network, such as the Internet, or over a publicly accessible wide area network (“WAN”), can subject the data to corruption or theft.


SUMMARY OF THE INVENTION

In accordance with some aspects of the present invention, a method of synchronizing data files with a storage operation between a first and a second storage device is provided. The method may include storing first data files associated with the first storage device to a storage medium, whereby the first data files include first data records. The storage medium may then be transferred to the second storage device. The first data files from the storage medium may be stored on the second storage device. Second data records from the first storage device may be received, and the first and second data records may be compared. The first data files at the second storage device may be updated based on the comparison of the first and second data records.


In accordance with other embodiments of the present invention, a method of synchronizing data after an interruption of data transfer between a first and a second storage device is provided. The method may include detecting an interruption in the data transfer between the first and the second storage device, and comparing first logged data records in a first data log associated with the first storage device with second logged records in a second data log associated with the second storage device. Updated data files from the first storage device may then be sent to the second storage device based on a comparison of the first and the second logged records.


One embodiment of the present invention includes a method of synchronizing data between a first and second storage device. The method may include identifying a first set of data on a first storage device for replication and capturing that set of data in a first log entry. Changes to the first set of data may be determined and recorded as a second set of data in a suitable log or data structure for recording such data. Next, the first and second sets of data may be transmitted to the second storage device and any changes replicated in the second storage device.


Another embodiment of the present invention includes a method of synchronizing data after an interruption of data transfer between a first and a second storage device. When an interruption in the data transfer between the first and the second storage device is detected, the first logged data records in a first data log associated with the first storage device are compared with second logged records in a second data log associated with the second storage device. Updated data files from the first storage device are then sent to the second storage device based on comparing the first and the second logged records.


In yet another embodiment, a method of replicating data on an electronic storage system network is presented. A set of data, including a record identifier, is stored on a first storage device and copied to an intermediary storage device. The set of data from the intermediary storage device may then be transferred to a third storage device. The record identifier of the set of data on the third storage device may then be compared to the record identifier of the set of data on the first storage device. The set of data on the third storage device is updated upon detection of non-identical record identifiers, wherein the updated data files are transmitted across the storage network.


In another embodiment, a system for replicating data on an electronic storage network is presented. The system includes a first and second storage device, a first log, for tracking changes to data stored on the first storage device, and a replication manager module. The replication manager module transmits updated data from the first log to the second storage device.


In another embodiment, a computer-readable medium having stored thereon a plurality of sequences of instructions is presented. When executed by one or more processors the sequences cause an electronic device to store changes to data on a first storage device in a first log including record identifiers. Updated data is transmitted from the first log to a second log on a second storage device where the record identifier of the data from the first log is compared to the record identifier of the data from the second log. The second storage device is updated with the updated data upon detecting a difference in the record identifiers.


In another embodiment, a computer-readable medium having stored thereon a plurality of sequences of instructions is presented. When executed by one or more processors the sequences cause an electronic device to detect a failure event in a data replication operation between first and second storage devices. Updates of a first set of data are stored in the first storage device. A second set of data detailing the updates to the first set of data is logged. The second set of data also includes a record identifier which is compared to a record identifier of the second storage device. The updates to the first set of data, identified by the second set of data, are replicated on the second storage device.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention is illustrated in the figures of the accompanying drawings which are meant to be exemplary and not limiting, in which like references are intended to refer to like or corresponding parts, and in which:



FIG. 1 is a block diagram of a prior art system;



FIG. 2 is a block diagram of a system for performing storage operations on electronic data in a computer network according to an embodiment of the invention;



FIG. 3A is a block diagram of storage operation system components utilized during synchronization operations according to an embodiment of the invention;



FIG. 3B is an exemplary data format associated with logged data entries according to an embodiment of the invention;



FIG. 4A is a block diagram of storage operation system components utilized during synchronization operations in accordance with another embodiment of the invention.



FIG. 4B is an exemplary data format associated with logged data record entries according to an embodiment of the invention;



FIG. 5 is a flowchart illustrating some of the steps involved in replication according to an embodiment of the invention;



FIG. 6 is a flowchart illustrating some of the steps involved in replication according to an embodiment of the invention; and



FIG. 7 is a flowchart illustrating some of the steps involved in replication according to another embodiment of the invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms. Therefore, specific functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention in any appropriately detailed embodiment.


With reference to FIGS. 2-7, exemplary aspects of embodiments and features of the present invention are presented. Turning now to FIG. 2, a block diagram of a storage operation cell 50 that may perform storage operations on electronic data in a computer network in accordance with an embodiment of the present invention is illustrated. As shown, storage operation cell 50 may generally include a storage manager 100, a data agent 95, a media agent 105, a storage device 115, and may include certain other components such as a client computer 85, a data or information store 90, databases 110, 111, a jobs agent 120, an interface module 125, a management agent 130, and a resynchronization agent 133. Such a system and elements thereof are exemplary of a modular storage management system such as the CommVault QiNetix™ system, and also the CommVault GALAXY™ backup system, available from CommVault Systems, Inc. of Oceanport, N.J., and further described in U.S. Pat. No. 7,035,880, which is incorporated herein by reference in its entirety.


A storage operation cell, such as cell 50, may generally include combinations of hardware and software components associated with performing storage operations on electronic data. Exemplary storage operation cells according to embodiments of the invention may include, as further described herein, CommCells as embodied in the QNet storage management system and the QiNetix storage management system by CommVault Systems of Oceanport, N.J. According to some embodiments of the invention, storage operation cell 50 may be related to backup cells and provide some or all of the functionality of backup cells as described in application Ser. No. 10/877,831, which is hereby incorporated by reference in its entirety.


Storage operations performed by storage operation cell 50 may include creating, storing, retrieving, and migrating primary data copies and secondary data copies (which may include, for example, snapshot copies, backup copies, HSM (Hierarchical Storage Management) copies, archive copies, and other types of copies of electronic data). Storage operation cell 50 may also provide one or more integrated management consoles for users or system processes to interface with in order to perform certain storage operations on electronic data as further described herein. Such integrated management consoles may be displayed at a central control facility or several similar consoles distributed throughout multiple network locations to provide global or geographically specific network data storage information. The use of integrated management consoles may provide a unified view of the data operations across the network.


A unified view of the data operations collected across the entire storage network may provide an advantage in managing the network. The unified view may present the system or system administrator with a broad view of the utilized resources of the network. Presenting such data at one centralized management console may allow for more complete and efficient administration of the available resources of the network. The storage manager 100, either via a preconfigured policy or via a manual operation from a system administrator, can reallocate resources to run the network more efficiently. Data paths from storage operation cells may be re-routed to avoid congested areas of the network by taking advantage of underutilized data paths or operation cells. Additionally, should a storage operation cell reach or exceed a database size maximum or storage device capacity maximum, or fail outright, several routes of redundancy may be triggered to ensure the data arrives at its intended location. A unified view may provide the manager with a collective status of the entire network, allowing the system to adapt and reallocate its many resources for faster and more efficient utilization.


In some embodiments, storage operations may be performed according to a storage policy. A storage policy generally may be a data structure or other information source that includes a set of preferences and other storage criteria for performing a storage operation and/or other functions that relate to storage operations. The preferences and storage criteria may include, but are not limited to, a storage location, relationships between system components, the network pathway to utilize, retention policies, data characteristics, compression or encryption requirements, preferred system components to utilize in a storage operation, and other criteria relating to a storage operation. For example, a storage policy may indicate that certain data is to be stored in a specific storage device, retained for a specified period of time before being aged to another tier of secondary storage, copied to secondary storage using a specified number of streams, etc. In one embodiment, a storage policy may be stored in a storage manager database 111. Alternatively, certain data may be stored to archive media as metadata for use in restore operations or other storage operations. In other embodiments, the data may be stored to other locations or components of the system.
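By way of illustration only, such a storage policy's preferences might be represented as a simple data structure along the following lines. This is a minimal sketch; the field names and values are hypothetical and do not reflect an actual schema of the system.

```python
# Minimal sketch of a storage policy as a plain data structure.
# All field names and values here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    name: str
    storage_location: str   # target storage device identifier
    network_pathway: str    # preferred route for the transfer
    retention_days: int     # age data to the next tier after this period
    num_streams: int = 1    # parallel copy streams for secondary copies
    encrypt: bool = False
    compress: bool = False

# Example: e-mail data retained for 90 days and copied with two streams.
email_policy = StoragePolicy(
    name="email-subclient",
    storage_location="storage_device_115",
    network_pathway="lan-primary",
    retention_days=90,
    num_streams=2,
    encrypt=True,
)
```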


A schedule policy specifies when and how often to perform storage operations and may also specify performing certain storage operations (e.g., replicating certain data) on sub-clients of data, including how to handle those sub-clients. A sub-client may represent static or dynamic associations of portions of data of a volume; sub-clients are generally mutually exclusive. Thus, a portion of data may be given a label, and the association is stored as a static entity in an index, database, or other storage location used by the system. Sub-clients may also be used as an effective administrative scheme for organizing data according to data type, department within the enterprise, storage preferences, etc. For example, an administrator may find it preferable to separate e-mail data from financial data using two different sub-clients having different storage preferences, retention criteria, etc.


Storage operation cells may contain not only physical devices, but also may represent logical concepts, organizations, and hierarchies. For example, a first storage operation cell 50 may be configured to perform HSM operations, such as data backup or other types of data migration, and may include a variety of physical components including a storage manager 100 (or management agent 130), a media agent 105, a client component 85, and other components as described herein. A second storage operation cell may contain the same or similar physical components, however, it may be configured to perform storage resource management (“SRM”) operations, such as monitoring a primary data copy or performing other known SRM operations.


In one embodiment, a data agent 95 may be a software module or part of a software module that is generally responsible for archiving, migrating, and recovering data of client computer 85 stored in an information store 90 or other memory location. Each computer 85 may have at least one data agent 95 and a resynchronization agent 133. Storage operation cell 50 may also support computers 85 having multiple clients (e.g., each computer may have multiple applications, with each application considered as either a client or sub-client).


In some embodiments, the data agents 95 may be distributed between computer 85 and the storage manager 100 (and any other intermediate components (not explicitly shown)) or may be deployed from a remote location or its functions approximated by a remote process that performs some or all of the functions of the data agent 95. The data agent 95 may also generate metadata associated with the data that it is generally responsible for replicating, archiving, migrating, and recovering from client computer 85. This metadata may be appended or embedded within the client data as it is transferred to a backup or secondary storage location, such as a replication storage device, under the direction of storage manager 100.


One embodiment may also include multiple data agents 95, each of which may be used to backup, migrate, and recover data associated with a different application. For example, different individual data agents 95 may be designed to handle MICROSOFT EXCHANGE® data, MICROSOFT SHAREPOINT data or other collaborative project and document management data, LOTUS NOTES® data, MICROSOFT WINDOWS 2000® file system data, MICROSOFT Active Directory Objects data, and other types of data known in the art. Alternatively, one or more generic data agents 95 may be used to handle and process multiple data types rather than using the specialized data agents described above.


In an embodiment utilizing a computer 85 having two or more types of data, one data agent 95 may be used for each data type to archive, migrate, and restore the client computer 85 data. For example, to backup, migrate, and restore all of the data on a MICROSOFT EXCHANGE 2000® server, the computer 85 may use one MICROSOFT EXCHANGE 2000® Mailbox data agent to backup the EXCHANGE 2000® mailboxes, one MICROSOFT EXCHANGE 2000® Database data agent to backup the EXCHANGE 2000® databases, one MICROSOFT EXCHANGE 2000® Public Folder data agent to backup the EXCHANGE 2000® Public Folders, and one MICROSOFT WINDOWS 2000® File System data agent to backup the file system of the computer 85. These data agents 95 would be treated as four separate data agents 95 by the system even though they reside on the same computer 85.
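As a purely illustrative sketch, the selection of a specialized data agent by data type might resemble the following. The registry, the agent names as strings, and the generic fallback (anticipating the generic agents described next) are all assumptions made for illustration, not the system's actual interfaces.

```python
# Hypothetical dispatch from a data type to the data agent that handles it.
AGENT_REGISTRY = {
    "exchange_mailbox":  "MICROSOFT EXCHANGE 2000 Mailbox data agent",
    "exchange_database": "MICROSOFT EXCHANGE 2000 Database data agent",
    "exchange_public":   "MICROSOFT EXCHANGE 2000 Public Folder data agent",
    "windows_fs":        "MICROSOFT WINDOWS 2000 File System data agent",
}

def agent_for(data_type: str) -> str:
    """Return the agent for a data type, falling back to a generic agent."""
    return AGENT_REGISTRY.get(data_type, "generic data agent")

print(agent_for("exchange_mailbox"))  # the specialized Mailbox agent
print(agent_for("oracle"))            # handled by a generic data agent
```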


In an alternative embodiment, one or more generic data agents 95 may be used, each of which may be capable of handling two or more data types. For example, one generic data agent 95 may be used to back up, migrate and restore MICROSOFT EXCHANGE 2000® Mailbox data and MICROSOFT EXCHANGE 2000® Database data while another generic data agent may handle MICROSOFT EXCHANGE 2000® Public Folder data and MICROSOFT WINDOWS 2000® File System data.


While the illustrative embodiments described herein detail data agents implemented, specifically or generically, for Microsoft applications, one skilled in the art will recognize that other application types (e.g., Oracle data, SQL data, Lotus Notes data, etc.) may be implemented without deviating from the scope of the present invention.


Resynchronization agent 133 may initiate and manage system backups, migrations, and data recovery. Although resynchronization agent 133 is shown as being part of each client computer 85, it may exist within the storage operation cell 50 as a separate module or may be integrated with or part of a data agent (not shown). In other embodiments, resynchronization agent 133 may be resident on a separate host. As a separate module, resynchronization agent 133 may communicate with all or some of the software modules in storage operation cell 50. For example, resynchronization agent 133 may communicate with storage manager 100, other data agents 95, media agents 105, and/or storage devices 115.


In one embodiment, the storage manager 100 may include a software module (not shown) or other application that may coordinate and control storage operations performed by storage operation cell 50. The storage manager 100 may communicate with the elements of storage operation cell 50 including computers 85, data agents 95, media agents 105, and storage devices 115.


In one embodiment, the storage manager 100 may include a jobs agent 120 that monitors the status of some or all storage operations previously performed, currently being performed, or scheduled to be performed by the storage operation cell 50. The jobs agent 120 may be linked with an interface module 125 (typically a software module or application). The interface module 125 may include information processing and display software, such as a graphical user interface (“GUI”), an application program interface (“API”), or other interactive interface through which users and system processes can retrieve information about the status of storage operations. Through the interface module 125, users may optionally issue instructions to various storage operation cells 50 regarding performance of the storage operations as described and contemplated by embodiments of the present invention. For example, a user may modify a schedule concerning the number of pending snapshot copies or other types of copies scheduled as needed to suit particular needs or requirements. As another example, a user may utilize the GUI to view the status of pending storage operations in some or all of the storage operation cells in a given network or to monitor the status of certain components in a particular storage operation cell (e.g., the amount of storage capacity left in a particular storage device). As a further example, the interface module 125 may display the cost metrics associated with a particular type of data storage and may allow a user to determine the overall and target cost metrics associated with a particular data type. This determination may also be done for specific storage operation cells 50 or any other storage operation as predefined or user-defined (discussed in more detail below).


One embodiment of the storage manager 100 may also include a management agent 130 that is typically implemented as a software module or application program. The management agent 130 may provide an interface that allows various management components in other storage operation cells 50 to communicate with one another. For example, one embodiment of a network configuration may include multiple cells adjacent to one another or otherwise logically related in a WAN or LAN configuration (not explicitly shown). With this arrangement, each cell 50 may be connected to the other through each respective management agent 130. This allows each cell 50 to send and receive certain pertinent information from other cells 50 including status information, routing information, information regarding capacity and utilization, etc. These communication paths may also be used to convey information and instructions regarding storage operations.


In an illustrative embodiment, the management agent 130 in the first storage operation cell 50 may communicate with a management agent 130 in a second storage operation cell regarding the status of storage operations in the second storage operation cell. Another illustrative example may include a first management agent 130 in a first storage operation cell 50 that may communicate with a second management agent in a second storage operation cell to control the storage manager (and other components) of the second storage operation cell via the first management agent 130 contained in the storage manager 100 of the first storage operation cell.


Another illustrative example may include the management agent 130 in the first storage operation cell 50 communicating directly with and controlling the components in the second storage management cell 50, bypassing the storage manager 100 in the second storage management cell. In an alternative embodiment, the storage operation cells may also be organized hierarchically such that hierarchically superior cells control or pass information to hierarchically subordinate cells or vice versa.


The storage manager 100 may also maintain, in an embodiment, an index cache, a database, or other data structure 111. The data stored in the database 111 may be used to indicate logical associations between components of the system, user preferences, management tasks, Storage Resource Management (SRM) data, Hierarchical Storage Management (HSM) data or other useful data. The SRM data may, for example, include information that relates to monitoring the health and status of the primary copies of data (e.g., live or production line copies). HSM data may, for example, be related to information associated with migrating and storing secondary data copies including archival volumes to various storage devices in the storage system. As further described herein, some of this information may be stored in a media agent database 110 or other local data store. For example, the storage manager 100 may use data from the database 111 to track logical associations between the media agents 105 and the storage devices 115.


From the client computer 85, resynchronization agent 133 may maintain and manage the synchronization of data both within the storage operation cell 50, and between the storage operation cell 50 and other storage operation cells. For example, resynchronization agent 133 may initiate and manage a data synchronization operation between data store 90 and one or more of storage devices 115. Resynchronization agent 133 may also initiate and manage a storage operation between two data stores 90 and associated storage devices, each in a separate storage operation cell implemented as primary storage. Alternatively, resynchronization agent 133 may be implemented as a separate software module that communicates with the client 85 for maintaining and managing resynchronization operations.


In one embodiment, a media agent 105 may be implemented as a software module that conveys data, as directed by the storage manager 100, between computer 85 and one or more storage devices 115 such as a tape library, a magnetic media storage device, an optical media storage device, or any other suitable storage device. Media agents 105 may be linked with and control a storage device 115 associated with a particular media agent. In some embodiments, a media agent 105 may be considered to be associated with a particular storage device 115 if that media agent 105 is capable of routing and storing data to that particular storage device 115.


In operation, a media agent 105 associated with a particular storage device 115 may instruct the storage device to use a robotic arm or other retrieval means to load or eject a certain storage media, and to subsequently archive, migrate, or restore data to or from that media. The media agents 105 may communicate with the storage device 115 via a suitable communications path such as a SCSI (Small Computer System Interface) connection, a Fibre Channel link, a wireless communications link, or other network connections known in the art such as a WAN or LAN. Storage device 115 may also be linked to a media agent 105 via a Storage Area Network (“SAN”).


Each media agent 105 may maintain an index cache, a database, or other data structure 110 which may store index data generated during backup, migration, and restore and other storage operations as described herein. For example, performing storage operations on MICROSOFT EXCHANGE® data may generate index data. Such index data provides the media agent 105 or other external device with a fast and efficient mechanism for locating the data stored or backed up. In some embodiments, storage manager database 111 may store data associating a computer 85 with a particular media agent 105 or storage device 115 as specified in a storage policy. The media agent database 110 may indicate where, specifically, the computer data is stored in the storage device 115, what specific files were stored, and other information associated with storage of the computer data. In some embodiments, such index data may be stored along with the data backed up in the storage device 115, with an additional copy of the index data written to the index cache 110. The data in the database 110 is thus readily available for use in storage operations and other activities without having to be first retrieved from the storage device 115.


In some embodiments, certain components may reside and execute on the same computer. For example, a client computer 85 including a data agent 95, a media agent 105, or a storage manager 100 may coordinate and direct local archiving, migration, and retrieval application functions as further described in U.S. Pat. No. 7,035,880. Thus, client computer 85 can function independently or together with other similar client computers 85.



FIG. 3A illustrates a block diagram of a system 200 of storage operation system components that may be utilized during synchronization operations on electronic data in a computer network in accordance with an embodiment of the present invention. The system 200 may comprise CLIENT 1 and CLIENT 2 for, among other things, replicating data. CLIENT 1 may include a replication manager 210, a memory device 215, a log filter driver 220, a log 225, a file system 230, and a link to a storage device 235. Similarly, CLIENT 2 may include a replication manager 245, a memory device 250, a log filter driver 255, a log 260, a file system 265, and a storage device. Additional logs 261 may also reside on CLIENT 2 in some embodiments.


In one embodiment, replication manager 210 may be included in resynchronization agent 133 (FIG. 2). Replication manager 210, in one embodiment, may manage and coordinate the replication and transfer of data files between storage device 235 and a replication volume. As previously described in relation to FIG. 2, resynchronization agent 133 may be included in client computer 85. In such an embodiment, replication manager 210 may reside within resynchronization agent 133 (FIG. 2) in a client computer. In other embodiments, the replication manager 210 may be part of a computer operating system (OS). In such embodiments, for example, client computer 85 (FIG. 2) may communicate and coordinate the data replication processes with the OS.


In the exemplary embodiment of FIG. 3A, the replication process between CLIENT 1 and CLIENT 2 in system architecture 200 may occur, for example, during a data write operation in which storage data may be transferred from a memory device 215 to a log filter driver 220. Log filter driver 220 may, among other things, filter or select, from the data received from the memory device 215, specific application data or other data that may be parsed as part of the replication process. For example, ORACLE data, SQL data, or MICROSOFT EXCHANGE data may be selected by the log filter driver 220. The log filter driver 220 may, among other things, be a specific application or module that resides on the input/output (“I/O”) stack between the memory device 215 and the storage device 235. Once write data passes through the memory device 215 towards the file system 230, the write data is intercepted and processed by the log filter driver 220. As the write data is intercepted by the log filter driver 220, it is also received by the file system 230. The file system 230 may be responsible for managing the allocation of storage space on the storage device 235. Therefore, the file system 230 may facilitate storing the write data to the storage device 235 associated with CLIENT 1, as pictured in the sketch below.
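The following is a conceptual sketch, in plain Python rather than an actual I/O-stack driver, of the filter's dual role: every write passes through to the file system, while selected application writes are also captured in a log for replication. The selector predicate, the stand-in file system, and all names are assumptions for illustration.

```python
# Conceptual sketch of log filter driver 220's position in the write path.
storage_235 = {}   # stands in for the storage device behind file system 230
log_225 = []       # replication log fed by the filter

def fs_write(path, offset, payload):
    """Stand-in for file system 230 writing to storage device 235."""
    data = bytearray(storage_235.get(path, b""))
    end = offset + len(payload)
    if end > len(data):
        data.extend(b"\x00" * (end - len(data)))
    data[offset:end] = payload
    storage_235[path] = bytes(data)

def is_replicated(path):
    """Selector: replicate only certain application data, e.g. SQL files."""
    return path.endswith(".sql")

def filtered_write(path, offset, payload):
    """The write always reaches the file system; selected writes are
    also appended to the log so they can later be replicated."""
    fs_write(path, offset, payload)
    if is_replicated(path):
        log_225.append({"file": path, "offset": offset, "payload": payload})

filtered_write("orders.sql", 0, b"INSERT ...")
filtered_write("scratch.tmp", 0, b"ignored")
print(len(log_225))  # 1: only the selected application data was logged
```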


In order to replicate the filtered write data that is received from the memory device 215, the log filter driver 220 may send filtered write data to the log 225. The log 225 may include metadata in addition to write data, whereby the write data entries in log 225 may include a data format 300, such as that illustrated in FIG. 3B. Metadata may include information, or data, about the data stored on the system. Metadata, while generally not including the substantive operational data of the network, is useful in the administration, security, maintenance, and accessibility of operational data. Examples of metadata include file size, edit times, edit dates, locations on storage devices, version numbers, encryption codes, restrictions on access or use, and tags of information that may include an identifier for editors. These are merely examples of common usages of metadata. Any form of data that describes or contains attributes or parameters of other data may be considered metadata.


As illustrated in FIG. 3B, the data format of the logged write data entries in the log 225 may include, for example, a file identifier field(s) 302, an offset 304, a payload region 306, and a timestamp 309. Identifier 302 may include information associated with the write data (e.g., file name, path, size, computer device associations, user information, etc.). Timestamp field 309 may include a timestamp referring to the time associated with its log entry, and in some embodiments may include an indicator, which may be unique, such as a USN (Update Sequence Number).


Offset 304 may indicate the distance from the beginning of the file to the position of the payload data. For example, as indicated by the illustrative example 308, the offset may indicate the distance of the payload 310 from the beginning of the file 312. Thus, using the offset 314 (e.g., offset=n), only the payload 310 (e.g., payload n) that requires replicating is sent from the storage device 235 (FIG. 3A) to the replication volume storage device, thereby replicating only the portion of the data that has changed. The replicated data may be sent over the network, for example via the communication link 275 (FIG. 3A), to another client, CLIENT 2.
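A minimal sketch of this offset-based partial replication follows, assuming the FIG. 3B fields map onto a simple record and that the destination applies each payload in place; the function and field names are illustrative only.

```python
# Sketch of the FIG. 3B entry format and of applying only the changed
# payload at the destination. Field names follow the figure; the apply
# step is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class LogEntry:
    file_id: str      # field 302: file name, path, size, etc.
    offset: int       # field 304: distance from the start of the file
    payload: bytes    # field 306: only the changed bytes
    timestamp: float  # field 309: time of the log entry

def apply_entry(replica: bytearray, entry: LogEntry) -> None:
    """Overwrite only the changed region of the replicated file."""
    end = entry.offset + len(entry.payload)
    if end > len(replica):
        replica.extend(b"\x00" * (end - len(replica)))
    replica[entry.offset:end] = entry.payload

replica = bytearray(b"hello world")
apply_entry(replica, LogEntry("f1", offset=6, payload=b"there",
                              timestamp=0.0))
print(replica)  # bytearray(b'hello there'): only 5 bytes crossed the wire
```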


As indicated in FIG. 3A, at CLIENT 2, write data associated with the log 225 of CLIENT 1 may be received by the log 260 of CLIENT 2 via the communication link 275. The write data may then be received by the file system 265 of CLIENT 2 prior to being stored on the replication volume at the storage device (the replication volume).


Referring to FIG. 3A, changes captured by the filter driver 220 on CLIENT 1 may later be used to replicate the write data entries utilizing the log 225 if, for example, a communication failure occurs between CLIENT 1 and CLIENT 2 due to a network problem associated with communication link 275. If the failure is of limited duration, the log 225 will not be overwritten by additional data being logged. Therefore, provided that during a network failure the log 225 has enough storage capacity to store the recent entries associated with the write data, the log 225 may be able to successfully send the recent write data entries to a replication volume upon restoration of communication.


The write data entries in the log 225 of CLIENT 1 may accumulate over time. Replication manager 210 of CLIENT 1 may periodically direct the write data entries of the log 225 to be sent to a storage device having the replication volume. During a network failure, however, the storage capacity of the log 225 may be exceeded as a result of recent logged entries associated with the write data. Upon such an occurrence, the log filter driver 220 may begin to overwrite the oldest entries associated with the write data. Replication of the write data associated with the overwritten entries may not be possible. Thus, in the present embodiment, a full synchronization of data files between the storage device 235 and a replication volume may be necessary to ensure that the data volume in the storage device 235 associated with CLIENT 1 is replicated at the replication volume.
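The overflow behavior just described can be pictured as a fixed-capacity log that silently drops its oldest entries. The following sketch, with an arbitrary illustrative capacity, shows why the dropped writes can no longer be replayed.

```python
# Sketch of log overflow during an outage: oldest entries are overwritten.
from collections import deque

log_225 = deque(maxlen=4)            # illustrative capacity of 4 entries

for n in range(6):                   # six writes arrive during an outage
    log_225.append({"entry": n})

print(list(log_225))
# [{'entry': 2}, {'entry': 3}, {'entry': 4}, {'entry': 5}]
# Entries 0 and 1 were overwritten and can no longer be replayed,
# which is what forces a full synchronization in this embodiment.
```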


In one embodiment, the storage manager 100 (FIG. 2) may monitor and control the network resources utilized in the replication operations. Through a defined storage policy, or interactive interfacing with a system administrator, the storage manager 100 may reallocate network resources (e.g., storage operation paths, storage devices utilized, etc.). Reallocating the resources of the network may alleviate the concentrated traffic and bottlenecks created by these types of situations in replication operations.



FIG. 4A illustrates a block diagram 280 of storage operation system components that may be utilized during synchronization operations on electronic data in a computer network in accordance with another embodiment of the present invention. System 280 is similar to system 200 (FIG. 3A) and uses like reference numbers to designate generally like components. As shown, system 280 may include CLIENT 1 and CLIENT 2 for, among other things, replicating data. CLIENT 1 may include a replication manager 210, a memory device 215, a log filter driver 220, one or more log files 225, a change journal filter 240, a change journal 241, a file system 230, and a storage device 235. Similarly, CLIENT 2 may include a replication manager 245, a memory device 250, one or more log files 260, 261, and a file system 265. The one or more log files 260, 261 may be utilized for different application types, such as SQL data, MICROSOFT EXCHANGE data, etc.


In one embodiment, the replication manager 210 may be included in the resynchronization agent 133 (FIG. 2). The replication manager 210, in one embodiment, may manage and coordinate the replication of data files between storage device 235 and a replication volume. As previously described in relation to FIG. 2, resynchronization agent 133 may be included in client computer 85. In such an embodiment, the replication manager 210 may reside within resynchronization agent 133 in a client computer. In other embodiments, replication manager 210 may be part of a computer operating system (OS). In such embodiments, for example, the client computer 85 (FIG. 2) may communicate and coordinate the data replication processes with the OS.


In the exemplary embodiment of FIG. 4A, the replication process between CLIENT 1 and CLIENT 2 in the system architecture 280 may occur, for example, during a data write operation in which storage data may be transferred from the memory device 215 of CLIENT 1 to a storage device 235 via the file system 230. The write data from the memory device 215, however, may be intercepted by the log filter driver 220. As previously described, the log filter driver 220 may, among other things, trap, filter, or select intercepted application data received from memory 215. For example, ORACLE data, SQL data, or MICROSOFT EXCHANGE data may be selected by the log filter driver 220. Once the write data passes through and is captured by the log filter driver 220, the write data may be received by the change journal filter driver 240.


Change journal filter driver 240 may also create data records that reflect changes made to the data files (e.g., write activity associated with new file creation, existing file updates, file deletion, etc.) stored on the storage device 235. These data records, once selected by the change journal filter driver 240, may be stored as records in the change journal 241. The replication manager 210 may then utilize these change journal 241 record entries during replication operations if access to the log file 225 entries, which may have ordinarily facilitated the replication process as further described herein, is unavailable (e.g., corrupted, deleted, or overwritten entries). Write data may then be received at the file system 230 from the change journal filter driver 240, whereby the file system 230 may be responsible for managing the allocation of storage space and storage operations on the storage device 235, and copying/transferring data to the storage device 235.


In order to replicate the filtered write data that is received from the memory device 215, the log filter driver 220 may send write data filtered by the log filter driver 220 to the log 225. The log 225 may include metadata in addition to write data payloads, whereby the write data entries in the log 225 may include the data format 300, previously described and illustrated in relation to FIG. 3B.


As previously described in relation to the embodiments of FIGS. 3A and 3B, the present invention provides for replication operations during both normal operation and failure occurrences between CLIENT 1 and CLIENT 2 due to network problems (e.g., failure in communication link 275). In one embodiment, the filter driver 220 captures changes in the write data that may later be used to replicate write data entries utilizing the log 225, provided the failure is of limited duration and the log 225 does not get overwritten. Therefore, provided that during a network failure the log 225 has enough storage capacity to store the recent entries associated with the write data, the log filter driver 220 may be able to successfully send the recent write data entries to the replication volume upon restoration of communication.


The write data entries in the log 225 of CLIENT 1 may accumulate over time. The replication manager 210 of CLIENT 1 may periodically direct the write data entries of the log 225 to be sent to the replication volume. During a network failure, however, the storage capacity of the log 225 may be exceeded as a result of recent logged entries associated with the write data, causing the oldest entries to be overwritten. Replication of write data associated with the overwritten entries may not be possible. Thus, under these conditions, the change journal 241 entries captured by the change journal filter driver 240 may enable the replication of write data without the need for a full synchronization of data files between the storage device 235 and a replication volume. As previously described, full synchronization may require a transfer of the entire storage volume stored at the storage device 235 linked to CLIENT 1 to the replication volume of CLIENT 2. The present embodiment is advantageous, as full synchronization operations may place a heavy burden on network resources, especially considering the large data volume that may reside on the storage device 235. In addition to the large data transfer requirement during this operation, other data transfer activities within the storage operation system may also create further network bottlenecks.


With the implementation of the change journal filter driver 240 and the change journal 241, the requirement for a full synchronization may be obviated. The changed data entries in change journal 241 may allow the replication manager to selectively update the replicated data instead of requiring a full synchronization that would occupy valuable network resources better suited for other operations.



FIG. 4B illustrates some of the data fields 400 associated with entries within the change journal log 241 according to an embodiment of the invention. The data fields 400 may include, for example, a record identifier 402 such as an Update Sequence Number (USN), metadata 404, and a data object identifier 406 such as a File Reference Number (FRN). The data object identifier 406 may include additional information associated with the write data (e.g., file name, path, size, etc.). Each record logged or entered in change journal 241 via change journal filter driver 240 may have a unique record identifier number that may be located in the record identifier field 402. For example, this identifier may be a 64-bit identifier such as a USN number used in the MICROSOFT Windows® OS change journal system. Each of the records that are created and entered into the change journal 241 is assigned such a record identifier. In one embodiment, each newly created record reflecting a change to the data of the client is assigned a sequentially incremented identifier. For example, the assigned identifier (e.g., USN) associated with the most recent change to a file on the storage device may be the numerically greatest record identifier with respect to all previously created records, thereby indicating the most recent change. The metadata field 404 may include, among other things, a time stamp of the record, information associated with the sort of changes that have occurred to a file or directory (e.g., a Reason member), etc. In some embodiments, an FRN associated with the data object identifier 406 may include a 64-bit ID that uniquely identifies any file or directory on a storage volume such as that of the storage device 235 (FIG. 4A).
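A hedged sketch of such a record and of the sequential USN assignment follows; the field names track FIG. 4B, while the counter and helper function are assumptions for illustration.

```python
# Sketch of a change journal record per FIG. 4B, with sequential USNs.
from dataclasses import dataclass

@dataclass
class ChangeJournalRecord:
    usn: int        # field 402: sequentially incremented record identifier
    metadata: dict  # field 404: timestamp, reason for the change, etc.
    frn: int        # field 406: uniquely identifies the file or directory

_next_usn = 0  # illustrative counter standing in for the OS-assigned USN

def record_change(journal: list, frn: int, reason: str) -> None:
    """Append a record carrying the next sequential USN."""
    global _next_usn
    _next_usn += 1
    journal.append(ChangeJournalRecord(
        usn=_next_usn,
        metadata={"reason": reason},
        frn=frn,
    ))

journal_241: list = []
record_change(journal_241, frn=0x1A, reason="write")
record_change(journal_241, frn=0x1B, reason="delete")
print(journal_241[-1].usn)  # 2: the most recent change has the greatest USN
```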


In accordance with an embodiment of the invention, as further described herein, the record identifier fields 402 (FIG. 4B) of each logged record entered in change journal 241 may be utilized to resynchronize replication operations in conjunction with replication managers 210, 245 and one or more of the log files 260, 261. Based on the recorded entries in change journal 241, the replication manager 210 of CLIENT 1 may coordinate the transfer of files that are to be replicated with replication manager 245 of CLIENT 2. This may be accomplished as follows. Change journal 241 logs all changes and assigns a USN or FRN to each log entry in log 242 (FIG. 4A). Each log entry may include a timestamp indicating its recordation in log 242. Periodically, replication manager 210 may send the most recent USN copied to log 242 to the destination. Next, change journal 241 may be queried for changes since the last USN copied, which indicates the difference between the log at the source and the log at the destination, and only those log entries are replicated. This may be thought of as “resynchronizing” CLIENT 1 and CLIENT 2.
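Under the stated assumption that USNs increase monotonically, the resynchronization query described above might look like the following sketch; the journal layout and function name are illustrative.

```python
# Sketch of querying the change journal for entries newer than the last
# USN the destination has received, so only the difference is replicated.
def entries_since(journal: list, last_usn_at_destination: int) -> list:
    """Return journal entries the destination has not yet seen."""
    return [rec for rec in journal if rec["usn"] > last_usn_at_destination]

journal_241 = [{"usn": n, "frn": 0x10 + n} for n in range(1, 6)]
to_replicate = entries_since(journal_241, last_usn_at_destination=2)
print([rec["usn"] for rec in to_replicate])  # [3, 4, 5]
```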


Once the transfer of files has been coordinated by replication managers 210, 245, the designated files may be sent over communication link 275 to the one or more log files 260, 261. The files received are then forwarded from the one or more log files 260, 261 to the replication volume.



FIG. 5 is a flowchart 500 illustrating some of the steps involved in a replication process in a storage operation system under substantially normal operating conditions according to an embodiment of the invention. The replication process of FIG. 5 may be described with reference to system architecture 280 illustrated in FIG. 4A to facilitate comprehension. However, it will be understood that this merely represents one possible embodiment of the invention, which should not be construed as limited to this exemplary architecture.


As shown, at step 502, it may be determined whether any write data (e.g., application specific data) is available for transfer to the storage device 235 of a first client, whereby the write data may require replication at the replication volume of a second client. If the write data (e.g., application data) requiring replication exists, it may be captured by the log filter driver 220 and logged in the log 225 (step 504). Additionally, through the use of another data volume filter driver, such as a MICROSOFT Change Journal filter driver, records identifying any changes to files or directories (e.g., change journal records) on the storage device 235 of the first client may be captured and stored in the change journal 241 (step 506).


In some embodiments, under the direction of the replication manager 210, the write data stored and maintained in the log 225 may be periodically (e.g., every 5 minutes) sent via a communications link 275, to the replication volume of the second client. In an alternative embodiment, under the direction of the replication manager 210, the write data stored in the log 225 may be sent via the communications link 275, to the replication volume when the quantity of data stored in the log 225 exceeds a given threshold. For example, when write data stored to the log 225 reaches a five megabyte (MB) capacity, all write data entries in the log 225 may be replicated to the second client.
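Both flush triggers mentioned above can be expressed as a single predicate. In this sketch, the five-minute interval and five-megabyte threshold come from the examples in the text, while the predicate itself is an illustrative assumption.

```python
# Sketch of the two flush triggers for sending log 225 to the replica.
FLUSH_INTERVAL_SECONDS = 5 * 60          # "every 5 minutes"
FLUSH_THRESHOLD_BYTES = 5 * 1024 * 1024  # "five megabyte (MB) capacity"

def should_flush(log_size_bytes: int, seconds_since_flush: float) -> bool:
    return (log_size_bytes >= FLUSH_THRESHOLD_BYTES
            or seconds_since_flush >= FLUSH_INTERVAL_SECONDS)

print(should_flush(6 * 1024 * 1024, 30.0))  # True: size threshold reached
print(should_flush(1024, 10.0))             # False: neither trigger fired
```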


Also, in some embodiments, under the direction of the replication manager 210, record identifiers (e.g., USN numbers) stored in the change journal 241 may also be periodically (e.g., every 5 minutes) sent via the communications link 275 to the replication manager 245 of the second client. The replication manager 245 may store these record identifiers in a log file at CLIENT 2, or at another memory index, or data structure (step 508). In other embodiments, under the direction of the replication manager 210, each record written to the change journal 241 may be directly sent via the communications link 275 to the replication manager 245.


At step 510, the record identifiers (e.g., USN numbers) sent via the communications link 275 and stored in the log file 260 may be compared with existing record identifiers. Based on a comparison between the greatest numerical value of a record identifier received at the log 260 and other record identifiers, replication data may be identified and replicated to the data volume of the second client.



FIG. 6 is a flowchart 600 illustrating some of the steps involved in a replication resynchronization process in a storage operation system according to an embodiment of the invention. The replication process of FIG. 6 may be described with reference to system architecture 280 illustrated in FIG. 4A to facilitate comprehension. However, it will be understood that this merely represents one possible embodiment of the invention, which should not be construed as limited to this exemplary architecture.


At step 604, if a communication failure affecting replication, or another event such as log file corruption, power failure, or loss of network connectivity, is detected and subsequently resolved, the most recent record identifier field (e.g., USN number) in the destination log may be accessed and compared with the last record identifier received from the change journal log 241. The replication managers 210, 245 may coordinate and manage the comparison of these record identifier fields, which may include, in one embodiment, comparing identifier values such as USNs used in the MICROSOFT change journal (step 606).


As previously described, write operations or other activities (e.g., file deletions) associated with each file are logged in change journal records having unique identification numbers (i.e., record identifiers) such as a USN number. At step 606, an identification number (e.g., USN number) associated with the last record identifier field stored at the change journal 241 may be compared with an identification number (e.g., USN number) associated with the most recent record identifier stored in the log 260 upon resolution of the communication failure or other event. If it is determined that these identification numbers (e.g., USN numbers) are not the same (step 608), this may indicate that additional file activities (e.g., data write to file operations) occurred at the source location (i.e., CLIENT 1) during the failure. These changes may not have been replicated to the second client due to the failure. For example, this may be determined by the last record identifier field's USN number from the change journal 241 at the source having a larger numerical value than the USN number associated with the most recent record identifier field accessed from the log 260. In one embodiment, this may occur as a result of the log filter driver 220 not capturing an event (e.g., a data write operation) or overwriting an event. This may, therefore, lead to a record identifier such as a USN number not being sent to the log file 260 associated with the replication data volume of the second client.


Since USN numbers are assigned sequentially, in an embodiment, the numerical comparison between the last record identifier field's USN number stored at the log 260 and the most recent record identifier field's USN number accessed from the change journal 241 may be used to identify any files that may not have been replicated at the replication volume of the second client (step 610). For example, if the last record identifier field's USN number (i.e., at log 241) is "5" and the most recently sent record identifier field's USN number (i.e., at log 260) is "2," it may be determined that the data objects associated with USN numbers "3, 4, and 5" have not yet been replicated to the second client. Once these data files have been identified (e.g., by data object identifiers such as FRNs in the change journal entries) (step 610), they may be copied from the storage device 235 of the first client and sent over the communication link 275 to the second client (step 612). Thus, the data volumes associated with storage device 235 and the replication volume may be brought back into sync without the need for resending (or re-copying) all the data files between the two storage devices.
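The steps 606 through 612 comparison can be sketched as follows, using the USN values from the example above; the journal layout and the FRN-to-file mapping are illustrative assumptions.

```python
# Sketch of identifying files (by FRN) whose changes postdate the
# destination's most recent USN, i.e., changes not yet replicated.
def unreplicated_files(source_journal: list, destination_usn: int) -> set:
    """Return the FRNs of data objects changed after the destination's
    most recent USN."""
    return {rec["frn"] for rec in source_journal
            if rec["usn"] > destination_usn}

source_journal_241 = [
    {"usn": 1, "frn": 0xA}, {"usn": 2, "frn": 0xB},
    {"usn": 3, "frn": 0xC}, {"usn": 4, "frn": 0xA},
    {"usn": 5, "frn": 0xD},
]
# The destination last saw USN 2, so entries 3, 4, and 5 are outstanding.
print(sorted(unreplicated_files(source_journal_241, destination_usn=2)))
# [10, 12, 13]: the files behind FRNs 0xA, 0xC, and 0xD are re-sent.
```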


In the exemplary embodiments discussed above, a communication failure may generate an overflow in the log 225, which in turn may cause a loss of logged entries. As previously described, these lost entries inhibit the replication process upon restoration of communication. Other failures may also lead to a loss of logged entries in log 225. For example, these failures may include, but are not limited to, corrupted entries in log 225 and/or the inadvertent deletion or loss of entries in log 225.



FIG. 7 is a flowchart 700 illustrating a replication process in a storage operation system according to another embodiment of the invention. The replication process of FIG. 7 may also be described with reference to system architecture 280 illustrated in FIG. 4A to facilitate comprehension. However, it will be understood that this merely represents one possible embodiment of the invention, which should not be construed as limited to this exemplary architecture.


The replication process 700 may, in one embodiment, be based on ensuring that electronic data files at a source storage device are synchronized with electronic data files at a destination or target storage device without the need to perform full synchronization operations over the storage operation network.


At step 702, the data files stored on a first storage device 235, together with the record identifiers associated with the data records at the first storage device logged in change journal 241, may undergo a data transfer. Examples of such data transfers include, but are not limited to, a block-level copy to a first destination storage medium/media, such as magnetic media storage, tape media storage, optical media storage, or any other storage means having sufficient retention and storage capacity.
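By way of illustration only, the seeding of step 702 might be sketched as follows (Python; the on-medium layout, file names, and helper seed_medium are assumptions made for illustration):

    import json
    import pathlib
    import shutil

    def seed_medium(source_files, journal, medium_root):
        # Step 702 (sketch): copy the data files and a snapshot of the change
        # journal records onto a transportable destination medium. The highest
        # USN in the snapshot marks the point up to which the medium is current.
        root = pathlib.Path(medium_root)
        (root / "data").mkdir(parents=True, exist_ok=True)
        for f in source_files:
            shutil.copy2(f, root / "data" / pathlib.Path(f).name)
        snapshot = [{"usn": r.usn, "frn": r.frn, "path": r.path} for r in journal]
        (root / "journal_snapshot.json").write_text(json.dumps(snapshot))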


At step 704, the first destination medium/media, holding the data from the first storage device, may be transferred (e.g., by vehicle) to a second destination storage device of the second client in FIG. 4A. At step 706, the data stored on the first destination medium/media may be loaded onto the second destination storage device.


Between the time the data from the first storage device 235 and journal log 241 is copied onto the first destination medium/media and the time the first destination medium/media arrives at the second destination storage device (e.g., a storage device of the second client (not shown)), the data files at the first storage device 235 may have undergone changes. For example, one or more existing data files may have been modified (e.g., by a data write operation), deleted, or augmented at the first storage device 235. In order to ensure that an up-to-date replication of the data files is copied to the destination storage device, particularly in light of such changes, a synchronization of the data files residing on the first storage device 235 with those on the destination storage device may be required.


At step 708, the record identifiers, such as the USN numbers, associated with each data record logged within the change journal 241 are compared with the record identifiers associated with the data loaded onto the second destination storage device. This comparison is performed because, during the period in which the first storage device 235 data files and record identifiers were copied to the first destination medium/media and transferred to the second destination storage device, the data files at the first storage device 235 may have undergone changes (e.g., modifications, writes, deletions, etc.). Based on these changes, additional data record entries (e.g., change journal entries) may have been created in change journal 241.
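By way of illustration only, the comparison of step 708 may be sketched as follows (hypothetical helper, reusing the record layout from the earlier sketches):

    def new_records_since_seed(journal, seeded_usns):
        # Step 708: any change journal record whose USN is absent from the set
        # of record identifiers loaded onto the destination reflects a change
        # made after the destination medium was written.
        seeded = set(seeded_usns)
        return [rec for rec in journal if rec.usn not in seeded]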


At step 710, the process determines whether data files at the first storage device 235 have changed relative to their copies stored at the destination storage device. As previously described (step 708), this is achieved by comparing the record identifiers (e.g., USN numbers) associated with each data record logged within the change journal 241 with the record identifiers associated with the data loaded onto the second destination storage device. If the USN numbers are the same, it may be determined at step 712 that no synchronization of data is required, as the data files at the first storage device 235 have not changed since being copied to the second destination storage device. If, however, it is determined at step 710 that the USN numbers logged within the change journal 241 are not the same as the USN numbers loaded onto the second destination storage device, the data files associated with the USN numbers that were not loaded onto the second destination storage device may be sent via a communication pathway from the first storage device 235 to the second destination storage device. Thus, the data files associated with the first storage device 235 (source location) are synchronized with the data files at the second destination storage device (target location).
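By way of illustration only, the decision of steps 710 and 712 might be sketched as follows (hypothetical helper building on new_records_since_seed above; 'send' again abstracts the communication pathway):

    def catch_up_after_seed(journal, seeded_usns, send):
        pending = new_records_since_seed(journal, seeded_usns)
        if not pending:
            return 0  # step 712: data unchanged, no transfer required
        # Steps 710-712: transfer only the files referenced by USNs that were
        # never loaded onto the second destination storage device.
        for path in sorted({rec.path for rec in pending}):
            send(path)
        return len(pending)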


Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described herein. Software and other modules may reside on servers, workstations, personal computers, computerized tablets, PDAs, and other devices suitable for the purposes described herein. Software and other modules may be accessible via local memory, via a network, via a browser or other application in an ASP context, or via other means suitable for the purposes described herein. Data structures described herein may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein. User interface elements described herein may comprise elements from graphical user interfaces, command line interfaces, and other interfaces suitable for the purposes described herein. Screenshots presented and described herein can be displayed differently as known in the art to input, access, change, manipulate, modify, alter, and work with information.


While the invention has been described and illustrated in connection with preferred embodiments, many variations and modifications, as will be evident to those skilled in this art, may be made without departing from the spirit and scope of the invention. The invention is thus not to be limited to the precise details of methodology or construction set forth above, as such variations and modifications are intended to be included within its scope.

Claims
  • 1. A system of synchronizing data between a first and second storage device comprising: a first log stored in computer readable memory, the first log captures changes to first data stored on a first storage device, the first log comprising at least a first set of changes to first data stored on at least a first storage device; a first replication manager comprising computer hardware that transmits over a network at least a portion of the first set of changes captured in the first log to a second log stored on one or more second storage devices, wherein transmitting the portion of the first set of changes creates a second set of changes in the second log; a second replication manager comprising computer hardware that replicates the first set of changes to the first data, by performing the second set of changes in the second log to create second data stored on the one or more second storage devices, wherein the second data is a replication of the first data stored on the first storage device; after detection of a network communication error, at least one of the first and second replication managers compare the second set of changes in the second log with the first set of changes to the first data to determine whether there is missing change data in the second set of changes; and at least one of the first and second replication managers update the second data on the one or more second storage devices upon detection of the missing change data, wherein the update of the second data comprises copying at least a portion of the first data stored on the first storage device to the one or more second storage devices.
  • 2. The system of claim 1 wherein the record identifiers are associated with change journal entries.
  • 3. The system of claim 1 wherein the record identifiers comprise at least an update sequence number.
  • 4. The system of claim 1 wherein the record identifiers comprise at least a file reference number.
  • 5. A method of replicating data comprising: storing in a first log, changes to first data stored on one or more first storage devices, the first log further comprises record identifiers that are associated with the changes to the data files; copying at least a portion of a first set of changes and a first set of record identifiers in the first log to a second log stored on one or more second storage devices to create a second set of changes and a second set of record identifiers in the second log; replicating with computer hardware, the portion of the first set of changes to the first data, by performing the second set of changes in the second log to create second data stored on the one or more second storage devices, wherein the second data is a replication of the first data stored on the one or more first storage devices; comparing with computer hardware, the second set of record identifiers in the second log with the first set of record identifiers in the first log to determine whether there is missing change data in the second set of changes; and updating with computer hardware, the second data on the one or more second storage devices upon detection of the missing change data, wherein updating the second data comprises copying at least a portion of the first data stored on the one or more first storage devices to the one or more second storage devices.
  • 6. The method of claim 5 wherein copying the portion of the first set of changes and the first set of record identifiers in the first log to the second log occurs upon reaching a threshold capacity of the first log.
  • 7. The method of claim 5 further comprising associating the record identifiers with change journal entries.
  • 8. The method of claim 7 further comprising comparing at least one of the record identifiers associated with the change journal entries to a record identifier of a previously stored change journal entry on the second storage device, wherein non-identical identifiers signify non-synchronized data.
  • 9. The method of claim 5 wherein the record identifiers comprise at least an update sequence number.
  • 10. The method of claim 5 wherein the record identifiers comprise at least a file reference number.
  • 11. The method of claim 5 wherein the record identifiers comprise at least one of (i) a time stamp, and (ii) information about a type of change.
  • 12. The method of claim 5 further comprising resynchronizing the replication based on missing record identifiers.
  • 13. A data replication system comprising: a first log stored in computer accessible memory, the first log records changes to first data stored on one or more first storage devices, the first log further comprising record identifiers that are associated with the changes to the data files; a first replication manager comprising computer hardware that copies at least a portion of a first set of changes and a first set of record identifiers in the first log to a second log stored on one or more second storage devices to create a second set of changes and a second set of record identifiers in the second log; a second replication manager comprising computer hardware that replicates at least a portion of the first set of changes to the first data, by performing the second set of changes in the second log to create second data stored on the one or more second storage devices, wherein the second data is a replication of the first data stored on the one or more first storage devices; at least one of the first and second replication managers compares the second set of record identifiers in the second log with the first set of record identifiers in the first log to determine whether there is missing change data in the second set of changes; and at least one of the first and second replication managers updates the second data on the one or more second storage devices upon detection of the missing change data, wherein updating the second data comprises copying at least a portion of the first data stored on the one or more first storage devices to the one or more second storage devices.
  • 14. The data replication system of claim 13 wherein the computer hardware copies the portion of the first set of changes and the first set of record identifiers in the first log to the second log upon reaching a threshold capacity of the first log.
  • 15. The data replication system of claim 13 wherein the record identifiers are associated with change journal entries.
  • 16. The data replication system of claim 15 further comprising comparing at least one of the record identifiers associated with the change journal entries to a record identifier of a previously stored change journal entry on the second storage device, wherein non-identical identifiers signify non-synchronized data.
  • 17. The data replication system of claim 13 wherein the record identifiers comprise at least an update sequence number.
  • 18. The data replication system of claim 13 wherein the record identifiers comprise at least a file reference number.
  • 19. The data replication system of claim 13 wherein the record identifiers comprise at least a file reference number.
  • 20. The data replication system of claim 13 wherein the record identifiers comprise at least one of (i) a time stamp, and (ii) information about a type of change.
INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS

This application is a continuation of U.S. application Ser. No. 11/640,024, filed Dec. 15, 2006, which claims priority to provisional application No. 60/752,201 filed Dec. 19, 2005. The entirety of each of the foregoing applications is hereby incorporated by reference. This application is related to the following patents and pending applications, each of which is hereby incorporated herein by reference in its entirety: Application Ser. No. 60/752,203 titled “Systems and Methods for Classifying and Transferring Information in a Storage Network” filed Dec. 19, 2005; Application Ser. No. 60/752,198 titled “Systems and Methods for Granular Resource Management in a Storage Network” filed Dec. 19, 2005; Application Ser. No. 11/313,224 titled “Systems and Methods for Performing Multi-Path Storage Operations” filed Dec. 19, 2005; Application Ser. No. 60/752,196 titled “System and Method for Migrating Components in a Hierarchical Storage Network” filed Dec. 19, 2005; Application Ser. No. 60/752,202 titled “Systems and Methods for Unified Reconstruction of Data in a Storage Network” filed Dec. 19, 2005; Application Ser. No. 60/752,197 titled “Systems and Methods for Hierarchical Client Group Management” filed Dec. 19, 2005.

Related Publications (1)
Number Date Country
20140164327 A1 Jun 2014 US
Provisional Applications (1)
Number Date Country
60752201 Dec 2005 US
Continuations (1)
Number Date Country
Parent 11640024 Dec 2006 US
Child 14181359 US