As is known in the art, computer data is vital to today's organizations, and a significant part of protection against disasters is focused on data protection. As solid-state memory has advanced to the point where the cost of memory has become a relatively insignificant factor, organizations can afford to operate with systems that store and process terabytes of data.
Conventional data protection systems include tape backup drives, for storing organizational data on a periodic basis. Such systems suffer from several drawbacks. First, they require a system shutdown during backup, since the data being backed up cannot be used during the backup operation. Second, they limit the points in time to which the organization can recover. For example, if data is backed up on a daily basis, there may be several hours of lost data in the event of a disaster. Third, the data recovery process itself takes a relatively long time.
Another conventional data protection system uses data replication, by creating a copy of the organization's data on a secondary backup storage system, and updating the backup as changes occur. The backup storage system may be situated in the same physical location as the production storage system, or in a physically remote location. Data replication systems generally operate either at the application level, at the file system level, or at the data block level.
Current data protection systems try to provide continuous data protection, which enables the organization to roll back to any specified point in time within a recent history. Continuous data protection systems aim to satisfy two conflicting objectives, as best as possible; namely, (i) minimize the down time, in which the organization's data is unavailable, during a recovery, and (ii) enable recovery as close as possible to any specified point in time within a recent history.
Continuous data protection typically uses a technology referred to as “journaling”, whereby a log is kept of changes made to the backup storage. During a recovery, the journal entries serve as successive “undo” information, enabling rollback of the backup storage to previous points in time.
The present invention provides systems and methods for efficient data access and recovery by enabling access to data that was in a storage system at an earlier point in time, while simultaneously performing a storage rollback. Such access is uninterrupted when the rollback is completed, thus minimizing system down time during a recovery operation.
In one aspect of the invention, a method comprises accessing data from a previous point in time, including receiving data stored in a storage system of addressable memory, the storage system including a plurality of addresses, receiving a journal history of write transactions for the storage, each write transaction including (i) a plurality of designated memory addresses, (ii) a corresponding plurality of current data to write in the designated memory addresses for storage, and (iii) a time indicator, generating a data structure that represents a virtual interface to the storage at a specified point in time, based on the write transactions in the journal history having a time subsequent to the specified point in time, and enabling a user to process data from the specified point in time, via the virtual interface and the journal history.
In another aspect of the invention, a data access system comprises a data recovery system, including a storage system of addressable memory, the storage system including data stored at a plurality of addresses, a journal history of write transactions for the storage system, each write transaction including (i) a plurality of designated memory addresses, (ii) a corresponding plurality of current data to write in the designated memory addresses for storage, and (iii) a time indicator, a data protector, including a data protector memory, a journal processor for generating a data structure, stored within the data protector memory, which represents a virtual interface to the storage at a specified point in time, based on the write transactions in the journal having a date and time subsequent to the specified point in time, a storage manager for rolling back the storage to the data that was stored therein at the specified point in time, based on the journal of write transactions, while a user is using the virtual interface to the storage, and a data protector switcher for switching from the virtual storage interface via the journal history, over to a direct interface to the rolled back storage, after completion of the rolling back, thereby enabling the user to continue data processing without interruption, and a host application driver for enabling a user to process data from the specified point in time, via the virtual interface and the journal history.
In a further aspect of the invention, a computer-readable storage medium comprises program code for causing at least one computing device to receive data stored in a storage system of addressable memory, the storage system including a plurality of addresses, to receive a journal history of write transactions for the storage, each write transaction including (i) a plurality of designated memory addresses, (ii) a corresponding plurality of current data to write in the designated memory addresses for storage, and (iii) a time indicator, to generate a data structure that represents a virtual interface to the storage at a specified point in time, based on the write transactions in the journal history having a time subsequent to the specified point in time, and to enable a user to process data from the specified point in time, via the virtual interface and the journal history.
Another aspect of the invention comprises a method for data access, including for a host device designated as Device A within a host computer, the host device corresponding to a first logical storage unit designated as LUN A, where LUN A is configured to access data directly from a physical storage system, assigning a second logical storage unit, designated as LUN X, wherein LUN X is configured to access the storage system indirectly via a data protection computer, providing the data protection computer access to a data structure that is able to recover data that was stored in the storage system at an earlier point in time, T1, and in response to a request from the host computer for data that was stored in the storage system at time T1, switching Device A to get its data from LUN X instead of from LUN A.
In yet another aspect of the invention, a data access system comprises a physical storage system, a data protection computer for accessing a data structure that is able to recover data that was stored in the storage system at an earlier point in time, T1; and a host computer connected with said physical storage system, including a host device designated as Device A and corresponding to a first logical storage unit designated as LUN A, where LUN A is configured to access data directly from the storage system, and a host driver (i) for assigning a second logical storage unit, designated as LUN X, to Device A, wherein LUN X is configured to access the storage system indirectly via the data protection computer, and (ii) for switching Device A to get its data from LUN X instead of from LUN A, in response to a request from the host computer for data that was stored in said storage system at time T1.
In a still further aspect of the invention, a data access system comprises a physical storage system including a controller for exposing at least one logical storage unit, a data protection computer for accessing a data structure that is able to recover data that was stored in the storage system at an earlier point in time, T1, and a host computer connected with the storage system, comprising a host device designated as Device A and corresponding to a first logical storage unit designated as LUN A, where LUN A is configured to access data directly from the storage system, wherein the storage system controller is operable to assign a second logical storage unit, designated as LUN X, to Device A, wherein LUN X is configured to access the storage system indirectly via said data protection computer, and wherein the host computer is operable to switch Device A to get its data from LUN X instead of from LUN A, in response to a request from the host computer for data that was stored in the storage system at time T1.
In another aspect of the invention, a data access system comprises a physical storage system, a data protection computer for accessing a data structure that is able to recover data that was stored in the storage system at an earlier point in time, T1, and a host computer connected with the physical storage system, comprising a host device designated as Device A and corresponding to a first logical storage unit designated as LUN A, where LUN A is configured to access data directly from the storage system, wherein the data protection computer is operable to assign a second logical storage unit, designated as LUN X, to Device A, wherein LUN X is configured to access the storage system indirectly via said data protection computer, and wherein the host computer is operable to switch Device A to get its data from LUN X instead of from LUN A, in response to a request from the host computer for data that was stored in said storage system at time T1.
In a further aspect of the invention, a computer-readable storage medium comprises program code for causing a host computer that comprises a host device designated as Device A and corresponding to a first logical storage unit designated as LUN A, where LUN A is configured to access data directly from a storage system to assign a second logical storage unit, designated as LUN X, wherein LUN X is configured to access the storage system indirectly via a data protection computer, to provide the data protection computer access to a data structure that is able to recover data that was stored in the storage system at an earlier point in time, T1, and to switch Device A to get its data from LUN X instead of from LUN A, in response to a request from the host computer for data that was stored in the storage system at time T1.
The present invention will be more fully understood and appreciated from the following detailed description, taken in conjunction with the drawings in which:
The source and target sides are connected via a wide area network (WAN) 180. Each host computer and its corresponding storage system are coupled through a storage area network (SAN) that includes network switches, such as fiber channel switches. The communication links between each host computer and its corresponding storage system may be any medium suitable for data transfer, such as fiber communication channel links.
Host computers 110 and 130 may each be implemented as one computer, or as a plurality of computers, or as a network of distributed computers. Generally, a host computer runs one or more applications, such as database applications and e-mail servers.
Each storage system 120 and 140 includes one or more physical storage devices, such as single disks or redundant arrays of inexpensive disks (RAID). Storage system 140 generally includes a copy of storage system 120, as well as additional data.
In the course of continuous operation, host computer 110 issues I/O requests (write/read operations) to storage system 120 using, for example, small computer system interface (SCSI) commands. Such requests are generally transmitted to storage system 120 with an address that includes a specific device identifier, an offset within the device, and a data size. Offsets are generally granularized to 512 byte blocks. The average size of a write operation issued by the host computer may be, for example, 10 kilobytes (KB); i.e., 20 blocks. For an I/O rate of 50 megabytes (MB) per second, this corresponds to approximately 5,000 write transactions per second.
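By way of a non-limiting illustration, the short Python calculation below reproduces this arithmetic using the example figures above; the constants are example values only, not requirements of the present invention.

```python
BLOCK_SIZE = 512                  # bytes per block, per the 512 byte granularity above
avg_write_size = 10 * 1024        # example average write size: 10 KB
io_rate = 50 * 1024 * 1024        # example aggregate I/O rate: 50 MB per second

blocks_per_write = avg_write_size // BLOCK_SIZE    # 20 blocks per write
writes_per_second = io_rate // avg_write_size      # 5,120, i.e., approximately 5,000 per second
print(blocks_per_write, writes_per_second)
```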
In accordance with an exemplary embodiment of the present invention, a replica of every write operation issued by host computer 110 to storage system 120 is transmitted to a source-side data protection appliance (DPA) 160. In one embodiment, DPA 160, and its counterpart at the target side DPA 170, include their own internal memories and computing processors. In the architecture illustrated in
In accordance with an exemplary embodiment of the present invention, DPA 160 and DPA 170 are “initiators”; i.e., the DPAs can issue I/O requests using, for example, SCSI commands, to storage devices of their respective storage systems. Specifically, the DPAs may issue I/O requests to one or more storage devices of their respective storage systems, referred to as “journal volumes”. The DPAs are also programmed with the necessary functionality to act as a “target”; i.e., to reply to I/O requests, such as SCSI commands, issued by other initiators, such as their respective host computer.
DPA 160 sends write transactions over a wide area network 180 to a second DPA 170 at the target side, for incorporation within target storage system 140. DPA 160 may send its write transactions to DPA 170 using a variety of modes of transmission, including inter alia (i) a synchronous mode, (ii) an asynchronous mode, and (iii) a snapshot mode. In synchronous mode, DPA 160 sends each write transaction to DPA 170, receives back an acknowledgement, and in turn sends an acknowledgement back to host computer 110. Host computer 110 waits until receipt of such acknowledgement before issuing further write transactions. In asynchronous mode, DPA 160 sends an acknowledgement to host computer 110 upon receipt of each write transaction, before receiving an acknowledgement back from DPA 170. In snapshot mode, DPA 160 receives several write transactions and combines them into an aggregate “snapshot” of all write activity performed in the multiple write transactions, and sends such snapshots to DPA 170, for incorporation in target storage system 140.
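As a non-limiting illustration of the ordering of acknowledgements in these three modes, the following Python sketch uses hypothetical helpers (send_to_target, wait_for_target_ack, ack_host); these names are illustrative assumptions and do not appear in the present disclosure.

```python
# Sketch of the three transmission modes described above; the callable
# arguments are hypothetical placeholders for the actual DPA transport.

def replicate_synchronous(write, send_to_target, wait_for_target_ack, ack_host):
    send_to_target(write)        # forward the write transaction to the target-side DPA
    wait_for_target_ack(write)   # block until the target DPA acknowledges
    ack_host(write)              # only then release the host to issue further writes

def replicate_asynchronous(write, send_to_target, ack_host):
    ack_host(write)              # acknowledge the host immediately upon receipt
    send_to_target(write)        # the target DPA acknowledgement arrives later

def replicate_snapshot(pending_writes, send_to_target, ack_host):
    snapshot = {}
    for write in pending_writes:             # later writes to the same address win
        ack_host(write)
        snapshot[write["address"]] = write["data"]
    send_to_target(snapshot)                 # one aggregate update covering all addresses
```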
For the sake of clarity, the ensuing discussion assumes that information is transmitted at a write-by-write granularity. During normal operations, the direction of replicate data flow goes from source side to target side. Generally, during data recovery the direction of replicate data flow is reversed, with the target side behaving as if it were the source side, and vice versa. To this end, the target side also includes a switch 190, making the target side symmetric with the source side.
In accordance with an exemplary embodiment of the present invention, DPA 160 is operative to send write transactions from the source side to the target side. DPA 170 is operative to maintain a journal history of write transactions, as described in detail hereinbelow. Journal histories may be stored in a journal volume. Such journal volume may include one or more physical storage device units, or it may be a part of a storage system. The size of the journal volume determines the size of a journal history that can be stored. A possible size for a journal volume is 500 GB. Since the source side has the capability to act as a target side, a journal volume is also defined at the source side.
The system shown in
It will be appreciated that in practice the architecture may vary from one organization to another. Thus, although the target side is illustrated as being remote from the source side in
Write transactions are transmitted from source side DPA 160 to target side DPA 170. As shown in
In practice each of the four streams holds a plurality of write transaction data. As write transactions are received dynamically by target DPA 170, they are recorded at the end of the DO stream and the end of the DO METADATA stream, prior to committing the transaction. During transaction application, when the various write transactions are applied to the storage system, prior to writing the new DO data into addresses within the storage system, the older data currently located in such addresses is recorded into the UNDO stream.
By recording old data, a journal entry can be used to “undo” a write transaction. To undo a transaction, old data is read from the UNDO stream for writing into addresses within the storage system. Prior to writing the UNDO data into these addresses, the newer data residing in such addresses is recorded in the DO stream.
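A minimal Python sketch of this four-stream journaling scheme is presented below, under simplifying assumptions: the streams are plain lists rather than segmented journal volumes, and the storage volume is a dictionary keyed by block address. The class and method names are illustrative only and are not taken from the present disclosure.

```python
class Journal:
    """Sketch of DO/DO METADATA and UNDO/UNDO METADATA streams."""

    def __init__(self, storage):
        self.storage = storage                    # block address -> block data
        self.do_data, self.do_meta = [], []
        self.undo_data, self.undo_meta = [], []

    def record(self, address, size, data, time):
        """Enter a received write transaction at the end of the DO streams."""
        self.do_data.append(data)
        self.do_meta.append({"address": address, "size": size, "time": time})

    def apply_oldest(self):
        """Apply the oldest DO entry to storage, saving the old data to UNDO."""
        meta = self.do_meta.pop(0)
        new_data = self.do_data.pop(0)
        old_data = [self.storage.get(meta["address"] + i) for i in range(meta["size"])]
        self.undo_data.append(old_data)
        self.undo_meta.append(meta)
        for i, block in enumerate(new_data):
            self.storage[meta["address"] + i] = block

    def undo_latest(self):
        """Undo the most recently applied transaction, saving current data to DO."""
        meta = self.undo_meta.pop()
        old_data = self.undo_data.pop()
        current = [self.storage.get(meta["address"] + i) for i in range(meta["size"])]
        self.do_data.append(current)
        self.do_meta.append(meta)
        for i, block in enumerate(old_data):
            self.storage[meta["address"] + i] = block
```

In this sketch, record corresponds to receiving a write transaction, apply_oldest to the transaction application described above, and undo_latest to the rollback step.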
More specifically, in accordance with an exemplary embodiment of the present invention, journal history 200 is stored within a specific storage volume, or striped over several volumes, referred to collectively as a “journal volume”. Journal history 200 may have its own partition within a volume.
The journal volume can be partitioned into segments with a pre-defined size, such as 1 MB segments, with each segment identified by a counter. The collection of such segments forms a segment pool for the four journaling streams described hereinabove. Each such stream is structured as an ordered list of segments, into which the stream data is written, and includes two pointers—a beginning pointer that points to the first segment in the list and an end pointer that points to the last segment in the list.
According to a write direction for each stream, write transaction data is appended to the stream either at the end, for a forward direction, or at the beginning, for a backward direction. As each write transaction is received by DPA 170, its size is checked to determine if it can fit within available segments. If not, then one or more segments are chosen from the segment pool and appended to the stream's ordered list of segments.
Thereafter the DO data is written into the DO stream, and the pointer to the appropriate first or last segment is updated. Freeing of segments in the ordered list is performed by simply changing the beginning or the end pointer. Freed segments are returned to the segment pool for re-use.
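The following Python sketch illustrates, under stated assumptions, how a single journaling stream might draw fixed-size segments from a shared pool and release them by pointer manipulation. The twenty-block segment size and all names are illustrative choices matching the worked example below, rather than the 1 MB segments mentioned above.

```python
SEGMENT_BLOCKS = 20   # illustrative segment size, in blocks

class SegmentedStream:
    """Sketch of one stream kept as an ordered list of pooled segments."""

    def __init__(self, pool):
        self.pool = pool          # shared list of free segment ids
        self.segments = []        # ordered list of segment ids in this stream
        self.end_offset = 0       # block offset of the end pointer in the last segment

    def append(self, num_blocks):
        """Advance the end pointer by num_blocks, taking segments from the pool as needed."""
        while num_blocks > 0:
            if not self.segments or self.end_offset == SEGMENT_BLOCKS:
                self.segments.append(self.pool.pop(0))
                self.end_offset = 0
            take = min(num_blocks, SEGMENT_BLOCKS - self.end_offset)
            self.end_offset += take
            num_blocks -= take

    def free_first_segment(self):
        """Return the first (fully consumed) segment to the pool for re-use."""
        self.pool.append(self.segments.pop(0))
```

For instance, appending a 15 block transaction and then a 20 block transaction consumes two segments from the pool and leaves the end pointer at block 15 of the second segment in the list, mirroring the DO stream in the example that follows.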
When a write transaction is received, journaling is thus advanced as follows.
Conversely, during a rollback to undo a write transaction, the above operations are reversed, as follows:
The following example, in conjunction with
Three write transactions are received, as indicated in TABLE I.
The following discussion describes four stages of journaling and data storage; namely, Stage #1: enter the three write transactions as journal entries in the journal volume; Stage #2: apply the first write transaction to the storage system; Stage #3: apply the second write transaction to the storage system; and Stage #4: roll back the second write transaction, to recover data from an earlier point in time.
The write transaction with ID=1 is written to the first 15 blocks of Segment #1. The metadata corresponding to this transaction is written to the first block of Segment #2. The second write transaction with ID=2 is written to the last 5 blocks of Segment #1 and the first 15 blocks of Segment #3. The metadata corresponding to this transaction is written to the second block of Segment #2. The third write transaction with ID=3 is written to the last 5 blocks of Segment #3 and the first 15 blocks of Segment #4. The metadata corresponding to this transaction is written to the third block of Segment #2.
Thus at stage #1, the DO stream in memory includes a list of segments 1, 3, 4; and a beginning pointer to offset=0 in Segment #1 and an end pointer to offset=15 in Segment #4. The DO METADATA stream in memory includes a list of one segment, namely Segment #2; and a beginning pointer to offset=0 in Segment #2 and an end pointer to offset=3 in Segment #2. The UNDO stream and the UNDO METADATA stream are empty. The journal history and the four streams at the end of stage #1 are illustrated in
At stage #2 the write transaction with ID=1 is applied to the storage system. New data to be written is read from the journal volume at the offset and length indicated in the DO METADATA; namely, 15 blocks of data located in blocks 0-14 of journal volume Segment #1. Correspondingly, old data is read from the storage data volume at the offset and length indicated in the UNDO METADATA; namely, 15 blocks of data located in blocks 57-71 of Data Volume #1. The old data is then written into the UNDO stream in the journal volume, and the associated metadata is written into the UNDO METADATA stream in the journal volume. Specifically, for this example, the UNDO data is written into the first 15 blocks of Segment #5, and the UNDO METADATA is written into the first block of Segment #6. The beginning pointer of the UNDO data stream is set to offset=0 in Segment #5, and the end pointer is set to offset=15 in Segment #5. Similarly, the beginning pointer of the UNDO METADATA stream is set to offset=0 in Segment #6, and the end pointer is set to offset=1 in Segment #6.
At this point, the new data that was read from blocks 0-14 of journal volume Segment #1 is written to blocks 57-71 of Data Volume #1. The beginning pointer for the DO stream is moved forward to block 15 of journal volume Segment #1, and the beginning pointer for the DO METADATA stream is moved forward to block 1 of journal volume Segment #2. The journal history and the four streams at the end of stage #2 are illustrated in
At stage #3 the write transaction with ID=2 is applied to the storage system. As above, 20 blocks of new data are read from blocks 15-19 of journal volume Segment #1 and from blocks 0-14 of journal volume Segment #3. Similarly, 20 blocks of old data are read from blocks 87-106 of Data Volume #1. The old data is written to the UNDO stream in the last 5 blocks of journal volume Segment #5 and the first 15 blocks of journal volume Segment #7. The associated metadata is written to the UNDO METADATA stream in the second block of Segment #6. The list of segments in the UNDO stream includes Segment #5 and Segment #7. The end pointer of the UNDO stream is moved to block 15 of Segment #7, and the end pointer of the UNDO METADATA stream is moved to block 2 of Segment #6.
Finally, the new data from blocks 15-19 of journal volume Segment #1 and blocks 0-14 of journal volume Segment #3 is written into blocks 87-106 of Data Volume #1. The beginning pointer for the DO stream is moved forward to block 15 of journal volume Segment #3, and the beginning pointer for the DO METADATA stream is moved forward to block 2 of journal volume Segment #2. Segment #1 is freed from the DO stream, for recycling within the segment pool, and the list of segments for the DO stream is changed to Segment #3 and Segment #4. The journal history and the four streams at the end of stage #3 are illustrated in
At stage #4 a rollback to time 10:00:00.00 is performed. I.e., the write transaction with ID=2 is to be undone. The last entry is read from the UNDO METADATA stream, the location of the end of the UNDO METADATA stream being determined by its end pointer. I.e., the metadata before block 2 of journal volume Segment #6 is read, indicating two areas each of 20 blocks; namely, (a) the last 5 blocks of journal volume Segment #5 and the first 15 blocks of journal volume Segment #7, and (b) blocks 87-106 of Data Volume #1. Area (a) is part of the UNDO stream.
The 20 blocks of data from area (b) are read from Data Volume #1 and written to the beginning of the DO stream. As the beginning pointer of the DO stream is set to offset=15 of journal volume Segment #3, 5 blocks are written at the end of Segment #3, and the remaining 15 blocks are written to Segment #8. The end pointer for the DO stream is set to block 15 of Segment #8. The list of segments for the DO stream is changed to Segment #3, Segment #4 and Segment #8. The metadata associated with the 20 blocks from area (b) is written to block 3 of Segment #2, and the end pointer of the DO METADATA stream is advanced to block 4 of Segment #2.
The 20 blocks of data in area (a) of the journal volume are then written to area (b) of the data volume. Finally, Segment #7 is freed for recycling in the segment pool, the UNDO stream ending pointer is moved back to Segment #5 of the journal volume, block 15, and the UNDO METADATA stream ending pointer is moved back to Segment #6 of the journal volume, block 1. The journal history and the four streams at the end of stage #4 are illustrated in
Thus it may be appreciated that journal history 200 is used to roll back storage system 140 to the state that it was in at a previous point in time. Journal history 200 is also used to selectively access data from storage 140 at such previous point in time, without necessarily performing a rollback. Selective access is useful for correcting one or more files that are currently corrupt, or for simply accessing old data.
Reference is now made to
DPA drivers can be installed on both source side host computer 110 and target side host computer 130. During normal operation, the DPA driver on source side host computer 110 acts as a “splitter”, to intercept SCSI I/O commands in its data path, to replicate these commands, and to send a copy to the DPA. Alternatively, a DPA driver may reside within a switch, such as switch 150.
Journal history 200 from
The present invention provides efficient ways of using journal history 200 to access data that was stored in dynamically changing storage system 140 at a specified point in time. As described more fully with respect to
While the host computer is accessing and processing old data that was stored in storage system 140, new data is being generated through new write transactions. To manage such new write transactions, journal generator 310 generates an auxiliary journal history, dedicated to tracking target side data processing that operates on old data.
Reference is now made to
Reference is now made to
At step 530 the method generates a data structure for a virtual interface to the storage at the state it was in at a specified earlier point in time. In one embodiment of the present invention, the data structure generated at step 530 is a binary tree, and the data stored in the nodes of the binary tree includes sequential intervals of memory addresses.
Specifically, reference is now made to
The first transaction, with ID=1, writes DATA_A into interval A of memory locations shown in
In an exemplary embodiment of the present invention, the journal entries in TABLE II are processed in reverse chronological order; i.e., from ID=4 to ID=1. Such order corresponds to a last-in-first-out order, since the journal entries were written in order from ID=1 to ID=4. As shown in
At time T2 the interval D1-D5 is broken down into intervals D1-D3, D4 and D5, and two additional nodes are appended to the binary tree. Finally, at time T2, the interval D1-D3 is broken down into intervals D1, D2 and D3, and two additional nodes are appended to the binary tree, thereby generating the rightmost binary tree shown at the bottom of
The binary tree structure thus generated provides, at a time such as T4, indirect access to the data that was in storage system 140 at an earlier time T1. For a given memory address, the binary tree is traversed to find an interval containing the given address. If such interval is located in a node of the binary tree, then the node also provides the location in journal history where the data can be extracted. Otherwise, if such interval is not located, then the data can be extracted from the latest state of the storage at time T4.
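By way of non-limiting illustration, the following Python sketch shows such a point-in-time lookup. For brevity, a sorted list of non-overlapping (start, end, journal_location) intervals stands in for the binary tree, and read_journal and read_current_storage are hypothetical helpers that are not part of the present disclosure.

```python
import bisect

class VirtualImage:
    """Sketch of indirect access to the data that was in storage at time T1."""

    def __init__(self, intervals, read_journal, read_current_storage):
        self.intervals = sorted(intervals)            # non-overlapping, keyed by start address
        self.starts = [start for start, _, _ in self.intervals]
        self.read_journal = read_journal
        self.read_current_storage = read_current_storage

    def read_block(self, address):
        """Return the data this address held at the earlier time T1."""
        i = bisect.bisect_right(self.starts, address) - 1
        if i >= 0:
            start, end, journal_location = self.intervals[i]
            if start <= address <= end:
                # The address was overwritten after T1; the journal holds the
                # old data recorded for it.
                return self.read_journal(journal_location, address - start)
        # The address was not written after T1; current storage still holds T1 data.
        return self.read_current_storage(address)
```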
A disadvantage of the binary tree data structure is that storage of the binary tree requires a large amount of memory within DPA 170, and may exceed the DPA memory capacity. In a second embodiment of the present invention, which generally requires less DPA memory, the data structure generated at step 530 includes one or more sorted lists, each list storing data from write transactions in journal history 200, as described in detail hereinbelow.
Reference is now made to
In accordance with an exemplary embodiment of the present invention, an instant recovery request with a specified point in time triggers generation of ordered lists, as follows. The UNDO METADATA stream is parsed and binned appropriately according to data volume location. For each bin, a binary tree structure of non-overlapping intervals located within the bin, and ordered by beginning storage address, is generated as described hereinabove with respect to
The various corresponding sub-bins of each bin are grouped together into ordered lists, designated by J=1, J=2, etc., as illustrated in
The data within the bins may require a lot of memory for storage. To this end, the ordered lists themselves are stored within storage system 140, arranged, for example, as binary tree structures; and a filtered sub-list is stored in memory of DPA 170, the filtered sub-list including only every Mth entry from the full list. For example, if M=1000, then each 1,000th entry in a full list is stored in the sub-list. Alternatively, the filtered sub-list may include only one entry from each GB of storage locations.
The sorted lists and sub-lists thus generated provide a virtual interface to the data that was stored in storage system 140 at time T1. Given a specific memory address, the appropriate sub-bin is readily identified. The entries of the corresponding sub-list are searched to identify two bounding addresses, one below and one above the specific memory address. The two entries in the sub-list can include pointers to positions in the full lists that they correspond to and, using these pointers, a search is made of the full list between the two pointers. For example, suppose the specified memory address is 24 G+178 M+223 K+66. Then the relevant sub-list is J=1. Suppose further that the entries 24 G+13 M and 32 G+879 M are located in the sub-list for J=1 at locations corresponding to locations 122,001 and 123,000 in the full list for J=1. Then the full sorted list can be searched over the 1,000 entries between 122,001 and 123,000 to locate an entry that contains the specified memory address 24 G+178 M+223 K+66. If such an entry is located, then the UNDO data from the corresponding write transaction is the sought after data. Otherwise, if such an entry is not located, then the data currently in storage system 140 is the sought after data.
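A minimal Python sketch of this two-level search appears below. It assumes full_list is sorted by beginning storage address and stored as (start_address, length, undo_location) tuples, with the filtered sub_list keeping every Mth entry together with its position in full_list; all names are illustrative assumptions.

```python
import bisect

M = 1000   # illustrative filtering factor, as in the example above

def build_sublist(full_list):
    """Keep every Mth entry of the full list, paired with its full-list position."""
    return [(full_list[i][0], i) for i in range(0, len(full_list), M)]

def lookup(address, full_list, sub_list):
    """Locate UNDO data covering the address, or None if current storage holds T1 data."""
    # Find the bounding sub-list entries around the address.
    starts = [start for start, _ in sub_list]
    k = bisect.bisect_right(starts, address) - 1
    lo = sub_list[k][1] if k >= 0 else 0
    hi = sub_list[k + 1][1] if k + 1 < len(sub_list) else len(full_list)
    # Search only the slice of the full list between the two pointers.
    for start, length, undo_location in full_list[lo:hi]:
        if start <= address < start + length:
            return undo_location
    return None
```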
It may be appreciated that the advantage of combining sub-bins in a cyclical arrangement, as illustrated in
Occurrence of “hot spots” poses an additional sorting challenge. Specifically, when a large amount of data is allocated to a single bin, as is the case with “hot spots”, the RAM within the DPA may not be large enough to load and sort all of the data in the bin. According to an exemplary embodiment of the present invention, the disk is used to sort such data. Instead of loading all of the data within a bin into RAM, the bin is divided into sub-areas according to time. For example, the bin may be divided into sub-areas corresponding to one million write transactions each. Data from each sub-area is loaded into RAM, sorted, and written back into the bin. Such sorting of a plurality of sub-areas of the bin generates a corresponding plurality of sub-lists, the sub-lists themselves being ordered according to time. Finally, the sub-lists are merged into an aggregate sorted list, using any well-known merge algorithm.
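The following Python sketch illustrates this disk-assisted sort under assumptions noted in the comments; read_subarea and write_subarea are hypothetical helpers that move one time-bounded sub-area of the bin (e.g., one million write transactions) between disk and RAM, and are not taken from the present disclosure.

```python
import heapq

def sort_hot_bin(num_subareas, read_subarea, write_subarea):
    """Sort a 'hot spot' bin that is too large to sort in RAM at once."""
    # Pass 1: sort each time-bounded sub-area independently in RAM and write it back.
    for i in range(num_subareas):
        entries = read_subarea(i)      # entries are (start_address, ...) tuples
        entries.sort()                 # small enough to sort within DPA RAM
        write_subarea(i, entries)
    # Pass 2: merge the per-sub-area sorted runs into one aggregate sorted list.
    # In practice the merged output would be streamed back to disk rather than
    # materialized in memory.
    runs = [read_subarea(i) for i in range(num_subareas)]
    return list(heapq.merge(*runs))
```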
The first and second embodiments, illustrated in
It may be appreciated that the data structures shown in
In an exemplary embodiment of the present invention, the data structures shown in
Referring back to
In general, a storage area network (SAN) includes one or more devices, referred to as “nodes”. A node in a SAN may be an “initiator” or a “target”, or both. An initiator is a device that is able to initiate requests to one or more other devices; and a target is a device that is able to reply to requests, such as SCSI commands, sent by an initiator. Typically, physical storage systems, like systems 120 and 140 of
A physical storage system may store data in a variety of physical devices, such as disks, or arrays of disks. A physical storage system typically includes a controller, which has its own one or more processors and memory, and which manages storage of data. In order to enable initiators to send requests to a physical storage system, the controller exposes one or more logical units to which commands are issued. A logical unit is identified by a logical unit number (LUN). Generally, a host computer operating system creates a device for each LUN.
Reference is now made to
In an exemplary embodiment of the present invention, host computer 805 is an initiator that sends requests to storage system 810 using the small computer system interface (SCSI). Host computer 805 has access to a device 825, designated as Device A, which is associated with a logical unit number (LUN) 825, designated as LUN A. In accordance with an exemplary embodiment of the present invention, DPA 815 functions as a target; i.e., it can reply to SCSI commands sent to it by initiators in the SAN. DPA 815 is also an initiator, and may send SCSI commands to storage system 810.
During normal production, host computer 805 issues read and write requests, such as SCSI I/O requests, to LUN A. Data protection agent 820 is able to intercept SCSI commands issued by host computer 805 through Device A. In accordance with an exemplary embodiment of the present invention, data protection agent 820 may act on a SCSI command issued to Device A in one of the following ways:
The communication between data protection agent 820 and DPA 815 may be any protocol suitable for data transfer in a SAN, such as fiber channel, or SCSI over fiber channel. The communication may be direct, or through a LUN exposed by DPA 815, referred to as a “communication LUN”. In an exemplary embodiment of the present invention, data protection agent 820 communicates with DPA 815 by sending SCSI commands over fiber channel.
In an exemplary embodiment of the present invention, data protection agent 820 is a driver located in host computer 805. It may be appreciated that data protection agent 820 may also be located on a fiber channel switch, or in any other device situated in a data path between host computer 805 and storage system 810.
In an exemplary embodiment of the present invention, data protection agent 820 classifies SCSI requests in two broad categories; namely, “SCSI personality” queries, and SCSI I/O requests. SCSI I/O requests are typically read and write commands. SCSI personality queries are SCSI commands sent to inquire about a device. Typically, when host computer 805 detects a fiber channel SCSI device, it sends these personality queries in order to identify the device. Examples of such queries include:
In accordance with an exemplary embodiment of the present invention, source side data protection agent 820 is configured to act as a splitter. Specifically, data protection agent 820 routes SCSI personality requests directly to LUN A, and replicates SCSI I/O requests. A replicated SCSI I/O request is sent to DPA 815 and, after receiving an acknowledgement from DPA 815, data protection agent 820 sends the SCSI I/O request to LUN A exposed by storage system 810. Only after receiving a second acknowledgement from storage system 810 will host computer 805 initiate another I/O request. The I/O request path from data protection agent 820 to storage system 810 is indicated by wider arrows in
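By way of non-limiting illustration, the splitter behaviour described above may be sketched in Python as follows; the helper names (is_personality_query, send_to_dpa, send_to_lun_a, wait_for_ack) are illustrative assumptions rather than an actual agent interface.

```python
def handle_scsi_command(cmd, is_personality_query, send_to_dpa,
                        send_to_lun_a, wait_for_ack):
    """Sketch of the source-side splitter logic of data protection agent 820."""
    if is_personality_query(cmd):
        # Identification queries go straight to LUN A.
        return send_to_lun_a(cmd)
    # I/O requests are replicated: first to the DPA, then to LUN A.
    send_to_dpa(cmd)
    wait_for_ack("dpa")         # first acknowledgement, from DPA 815
    send_to_lun_a(cmd)
    wait_for_ack("storage")     # second acknowledgement, from storage system 810
    # Only now is the host free to initiate another I/O request.
```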
When DPA 815 receives a SCSI write request from data protection agent 820, DPA 815 transmits I/O information to a corresponding DPA on the target side, for journaling.
Reference is now made to
Host computer 835 includes a host device 865, designated as Device B, having a corresponding LUN 870, designated as LUN B, which is a target LUN.
In accordance with an exemplary embodiment of the present invention, LUN B is a copy of source side LUN A of
During a recovery rollback, write transactions in journal history 200 are undone, so as to restore storage system 840 to the state it was in at an earlier time T1. Generally, it takes a long time to perform a full rollback of storage system 840. In the meantime, DPA 845 generates a virtual image of storage system 840, and then instructs data protection agent 850 to redirect SCSI I/O commands to DPA 845. SCSI personality commands are still directed to LUN B.
Thus while such rollback is occurring, DPA 845 has indirect access to the time T1 data via the data structures illustrated in
In addition to redirecting SCSI read requests, data protection agent 850 also redirects SCSI write requests to DPA 845. As host computer 835 processes the rolled back data via DPA 845, an auxiliary journal history is maintained, for recording write transactions applied to the time T1 data, as illustrated in
After restoring storage system 840 to its earlier state at time T1, and after applying the write transactions maintained in the auxiliary journal history, LUN B may start receiving I/O requests directly. Using the capabilities of data protection agent 850, as described hereinabove with respect to driver 820 of
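A minimal Python sketch of this switch-over is given below; the helper names (rollback_complete, apply_auxiliary_journal, redirect_device) are illustrative assumptions and do not correspond to an actual DPA or agent interface.

```python
def finish_recovery(dpa, agent, auxiliary_journal):
    """Sketch of switching from the virtual T1 interface back to direct access."""
    if not dpa.rollback_complete():
        return False                          # keep serving I/O through the virtual image
    # Fold the writes made against the T1 image back into the restored storage.
    dpa.apply_auxiliary_journal(auxiliary_journal)
    # Switch Device B from the DPA-backed virtual interface to direct access to
    # LUN B; the host continues its data processing without interruption.
    agent.redirect_device("Device B", target="LUN B")
    return True
```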
It may be appreciated that the architecture of
Reference is now made to
DPA 845 can also include an appropriate data structure for accessing time T1 data, and an appropriate data structure for accessing time T2 data during a rollback and recovery process. As such, host computer 835 is able to access time T1 data using Device B, and to access time T2 data using Device C. Data protection agent 850 handles Device B and Device C in the same way.
The embodiment in
As shown in
Host computer 835 recognizes DPA LUN 890 and generates a corresponding Device C, designated as 885. Since LUN 890 is a DPA LUN, data protection agent 850 routes all SCSI requests from Device C to DPA 845, including SCSI personality requests, as it is not necessary to handle SCSI personality requests differently than SCSI I/O requests in this configuration.
Unlike the configuration of
In yet another embodiment, LUN C may be generated by storage system 840 in response to a recovery process, as illustrated in
In yet another alternative embodiment, LUN C may be generated by data protection agent 850 in response to a trigger for data recovery. Reference is now made to
Thus it may be appreciated that the additional LUN, corresponding to time T2, may be generated by a controller for storage system 840 as in
Having read the above disclosure, it will be appreciated by those skilled in the art that the present invention can be used to provide access to historical data within a wide variety of systems. Although the invention has been described with reference to a data recovery system that includes source side and target side storage systems, the present invention applies to general data management systems that may require “go back” access to data that was processed at an earlier point in time.
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made to the specific exemplary embodiments without departing from the broader spirit and scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
The present application claims the benefit of U.S. Provisional Patent Application No. 60/753,263, filed on Dec. 22, 2005, which is incorporated herein by reference.