This application relates to and claims priority from Japanese Patent Application Nos. 2004-122431, filed on Apr. 19, 2004, and 2003-183734, filed Jun. 27, 2003. The entire disclosures of all of the above-identified applications are hereby incorporated by reference.
1. Field of the Invention
The present invention relates to a storage system, and in particular to copying of data among plural storage systems.
2. Description of the Related Art
In recent years, a technique has grown in importance in which, in order to allow a data processing system to continue providing services to customers even if a failure occurs in the storage system used for those services (hereinafter referred to as the first storage system), other storage systems are set up separately from the first storage system, and copies of data in the first storage system are stored in those other storage systems. (A storage system a relatively short distance apart from the first storage system is referred to as a second storage system, and a storage system a longer distance apart from the second storage system is referred to as a third storage system.) As techniques for copying information stored in the first storage system to the second and the third storage systems, there are the techniques disclosed in U.S. Pat. No. 6,209,002 and JP-A-2003-122509.
U.S. Pat. No. 6,209,002 discloses a technique in which the second storage system holds two copies of data corresponding to the copy object data in the first storage system, and the third storage system holds one of those copies.
JP-A-2003-122509 discloses a technique in which the second storage system holds only one copy of data corresponding to the copy object data in the first storage system, and the third storage system can obtain the copied data without requiring a redundant logical volume for carrying out remote copy as described in U.S. Pat. No. 6,209,002.
As described above, in the conventional techniques, the second storage system is provided between the first storage system and the third storage system, which is located a long distance apart from the first storage system. Long-distance remote copy is thereby realized while data loss is prevented, since a copy of the data in the first storage system is obtained in the third storage system.
However, some users may require a remote copy system in which the cost of system operation is taken into account while the failure resistance of data is increased through long-distance copying. For example, it may suffice that a copy of the data in the first storage system is held only in a storage system located a long distance apart from the first storage system.
In order to keep, in preparation for a failure, a complete copy of the data in the first storage system in the third storage system, which is located a long distance apart from the first storage system, and taking into account the influence on the performance of the first storage system, it is necessary to arrange the second storage system between the first storage system and the third storage system and transfer the data from the first storage system to the third storage system through this second storage system. In such a case, it is desirable to minimize the logical volume used in the second storage system.
However, when it is attempted to remotely copy data from the second storage system to the third storage system located a long distance apart from it, the second storage system is required to have a volume (copied volume) of the same capacity as the volume of the first storage system. This volume grows as the capacity of the volume of the first storage system increases.
It is needless to mention that, even if the technique disclosed in JP-A-2003-122509 is applied, the second storage system inevitably has a volume with the same capacity as the copy object volume in the first storage system.
The present invention has been devised in view of such problems, and it is an object of the present invention to minimize or eliminate use of a volume in a second storage system for copying data when the data is copied from a first site to a third site. In addition, it is another object of the present invention to increase availability of a volume such that plural host apparatuses can set an area of the volume as an object of writing.
In order to attain the above-mentioned objects, a form of the present invention has a constitution described below.
A remote copy system includes: a first storage system that sends and receives data to and from a first information processing apparatus; a second storage system that is connected to a second information processing apparatus and to the first storage system and receives data from the first storage system; and a third storage system that is connected to the second storage system and receives data from the second storage system. In the remote copy system, the first storage system has a first storage area in which data from an information processing apparatus is written. The second storage system has a logical address for storing a copy of the data but no allocated storage area, and has a second storage area in which the data and update information thereof are written; the data sent from the first storage system is written in the second storage area as the data and the update information. The third storage system has a third storage area in which data read out from the second storage area in the second storage system and update information concerning the data are stored; the data and the update information stored in the second storage area are read out by the third storage system. Further, the storage area corresponding to the unallocated logical address in the second storage system has a structure that can be used for transmission and reception of data to and from the second information processing apparatus.
According to the present invention, copy object data can be copied to the third storage system without requiring the second storage system to hold a complete copy of the copy object data in the first storage system. Consequently, the volume capacity in the second storage system can be reduced. In addition, an actual volume, which need not be assigned, can be used for another application. Further, a specific area of a volume can be used by plural host apparatuses. A host apparatus in this context means an information processing apparatus that issues instructions for writing data in and reading data from the specific area of the volume. When writing of data in the volume of the second storage system is executed by a write command issued from the first storage system, the first storage system is a host apparatus for the second storage system. It is needless to mention that an information processing apparatus such as a server can also be a host apparatus for a storage system.
In the accompanying drawings:
Embodiments of the present invention will be hereinafter described with reference to the accompanying drawings.
A storage system 15 is connected to the first storage system 10 via a connection line 220. (According to circumstances, this storage system 15 will be hereinafter referred to as a second storage system, and a data processing system including at least this second storage system will be hereinafter referred to as a second site or an intermediate site.)
A storage system 20 is connected to the storage system 15 serving as the second storage system via a connection line 240. (According to circumstances, this storage system 20 will be hereinafter referred to as a third storage system, and a data processing system including at least this third storage system 20 will be hereinafter referred to as a third site.)
The connection lines 210, 220, and 240 may be directly connected lines such as fiber cables or may be connection via a wide-area network such as the Internet.
The storage system 10 in the first site retains a logical volume 110 (ORG1) and a logical volume 120 (ORG2). In this embodiment, it is assumed that an original data to be a copy object is stored in the logical volume 110 (ORG1).
The storage system 15 in the second site retains a copy of the logical volume 110 (ORG1) as a logical volume 150 (Data1). The storage system 20 in the third site retains a logical volume 200 (Data2) in which copied data is stored.
Here, a capacity and a physical storage position (physical address) of a logical volume, which are defined in the storage systems 10, 15, and 20, can be designated using maintenance terminals (not shown) such as computers connected to the respective storage systems or host computers 5, 6, and 7, respectively.
In the following description, in order to facilitate distinction between copy object data and copied data, a logical volume, in which the copy object data is accumulated, will be referred to as a primary logical volume, and a logical volume, in which the copied data is accumulated, will be referred to as a secondary logical volume. The primary logical volume and the secondary logical volume forming a pair will be referred to as a pair. A relation between the primary logical volume and the secondary logical volume, states of the primary logical volume and the secondary logical volume, and the like are saved as a pair setting information table 500 in shared memories (SMs) 70 in the respective storage systems to be described later.
First, an example of a hardware configuration of the storage system 10 shown in
The first storage system 10 has plural channel adapters for connecting the first storage system 10 to the host computer 5. These channel adapters 50 are connected to the host computer 5 and the second storage system 15 via the connection line 210.
The channel adapters 50 are connected to caches 60 via a connection unit 55, analyze a command received from a host apparatus, and control reading-out and writing of data, which is desired by the host computer 5, in the caches 60. The logical volume 110 (ORG1) and the logical volume 120 (ORG2) are arranged over plural HDDs 100.
Note that it is also possible to set the logical volume numbers so as to be uniquely defined within each storage system and specified in conjunction with identifiers of the storage systems. "Not used" in a volume state indicates that a logical volume is set but is not used yet. "Primary" indicates that a logical volume is in a state in which the logical volume can operate normally as the primary volume of the pair volume described above. "Normal" indicates that a logical volume is not set as a pair with another logical volume but is in a normal state. "Secondary" indicates that a logical volume is a secondary volume and can operate normally. Volume state information indicating a state of a pair will be described later.
This example shown in
A column of a physical address in
The storage system 10 is described above as a representative storage system. However, the other storage systems 15 and 20 shown in
Next, an operation for reflecting data update, which is applied to the primary logical volume 110 (ORG1) in the storage system 10 in the first site, in the logical volume 200 (Data2) of the storage system 20 in the third site via the storage system 15 in the second site (intermediate site) will be explained with reference to
Here, first, journal data will be explained. In order to facilitate explanation, a logical volume of an update source, in which data is updated, will be referred to as a source logical volume to distinguish it from the other logical volumes, and a volume that retains a copy of the source logical volume will be referred to as a copy logical volume.
When data update is applied to a certain source logical volume, the journal data consists of at least the updated data itself and update information indicating to which position of the source logical volume the update was applied (e.g., a logical address in the source logical volume).
In other words, as long as the journal data is retained when data in the source logical volume is updated, the source logical volume can be reproduced from the journal data.
On the premise that there is a copy logical volume having the same data image as the source logical volume at a certain point in time, as long as the journal data is retained every time data in the source logical volume after that point is updated, it is possible to reproduce the data image of the source logical volume at or after the certain point in time in the copy logical volume.
If the journal data is used, the data image of the source logical volume can be reproduced in the copy logical volume without requiring the same capacity as the source logical volume. A volume in which the journal data is retained will be hereinafter referred to as a journal logical volume.
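By way of illustration only, the relation between journal data and volume reproduction described above might be sketched as follows. This is a simplified model with hypothetical names, not the implementation claimed in this specification; it assumes a copy logical volume that already shares a data image with the source logical volume at some point in time.

```python
# Minimal model of journal data: each entry records the updated data
# itself and the position (logical address) in the source logical
# volume to which the update was applied, plus an update number
# giving the order of updates.
from dataclasses import dataclass

@dataclass
class JournalEntry:
    update_number: int   # monotonically increasing order of updates
    address: int         # logical address in the source logical volume
    data: bytes          # the updated data itself

def apply_journal(copy_volume: bytearray, entries: list[JournalEntry]) -> None:
    """Reproduce the source logical volume's data image in the copy
    logical volume by replaying journal entries in update-number order."""
    for e in sorted(entries, key=lambda e: e.update_number):
        copy_volume[e.address:e.address + len(e.data)] = e.data
```

As the specification notes, the journal logical volume that retains such entries need not have the same capacity as the source logical volume; it only needs room for the updates retained between reproductions.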
Data update will be further explained with reference to
The journal logical volume is used in a state in which it is divided into a storage area 9000 (update information area), in which the update information 620 is stored, and a storage area 9100 (write data area), in which write data is stored. Update information is stored in the update information area 9000 in order of update (in order of update number) from the top of the area; when the update information reaches the end of the update information area 9000, storage continues from the top of the update information area 9000. Similarly, write data is stored in the write data area 9100 from the top of the area; when the write data reaches the end of the write data area 9100, storage continues from the top of the write data area 9100. It is needless to mention that update work must be applied to the logical volume of the copy destination on the basis of the information in the journal logical volume before the data exceeds the capacity reserved for the journal logical volume. The ratio of the update information area 9000 to the write data area 9100 may be a fixed value or may be set by the maintenance terminal or the host computer 5.
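The wrap-around use of each area might be sketched as a simple ring buffer, as below. This is an illustrative model with hypothetical names; real area management in the storage system is more involved (in particular, old records must have been opened before they can be overwritten, a check elided here).

```python
class JournalArea:
    """Simplified model of one journal area (the update information
    area 9000 or the write data area 9100): records are stored from
    the top of the area, and on reaching the end, storage wraps
    around to the top again."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.next_offset = 0   # where the next record will be placed

    def store(self, size: int) -> int:
        # Wrap to the top when the record would run past the end.
        if self.next_offset + size > self.capacity:
            self.next_offset = 0
        offset = self.next_offset
        self.next_offset += size
        return offset
```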
In
On the other hand, when data update is applied to the logical volume 150 (Data1), the storage system 15 in the second site saves journal data in the logical volume 151 (JNL1) (hereinafter referred to as a journal volume according to circumstances) (arrow 260 shown in
The journal data, which is accumulated in the logical volume 151 (JNL1) for accumulation of journal data in the second storage system 15, is asynchronously transferred to the logical volume 201 (JNL2) for journal accumulation in the third storage system 20 located a long distance apart from the second storage system 15 via the connection line 240 (arrow 270 shown in
The data in the journal volume in the second storage system 15 may instead be read out by the third storage system 20 and accumulated in the logical volume 201 (JNL2) in the storage system 20 (hereinafter referred to as the PULL system).
This PULL system will be explained specifically. Upon receiving an instruction to read journal data (hereinafter referred to as journal read instruction) from the third storage system 20, the second storage system 15 reads out journal data from the journal logical volume 151 (JNL1) and sends the journal data to the third storage system 20.
Thereafter, the third storage system 20 reads out the journal data from the journal logical volume (JNL2) 201 according to restore processing 350 to be described later and updates the data in the logical volume 200 (Data2). This completes the processing for reflecting the data update, which is carried out for the primary logical volume 110 (ORG1) in the storage system 10 in the first site, in the secondary logical volume 200 (Data2) in the storage system 20 in the third site.
By saving the journal data in the journal volume 201, it is also possible, for example, not to perform data update for the secondary logical volume 200 (Data2) when the journal data is received. That is, when the load of the storage system 20 is high, a copy of the primary logical volume 110 (ORG1) need not be created in the secondary logical volume 200 (Data2) using the journal data (restore processing 350); the data in the secondary logical volume 200 (Data2) can instead be updated a short time later, when the load of the storage system 20 is low.
As described above, the logical volume 151 (JNL1) in the second storage system 15 shown in
Next, setting for an entire data center system will be explained specifically. This setting is adopted in performing an operation for reflecting the data update for the logical volume 110 (ORG1) in the storage system 10 in the second storage system 15 in the intermediate site and the third storage system 20 in the third site.
In order to establish a data center system consisting of plural sites as shown in
In the example of
A flowchart in
In
Moreover, the user designates information indicating a data copy object and information indicating a data copy destination and sends a pair registration instruction to the first and the second storage systems 10 and 15 using the maintenance terminals or the host computers 5 and 6 connected to the respective storage systems (step 910). More specifically, the user sets a pair relation between the logical volume 110 (ORG1) and the logical volume 150 (Data1) in
When the logical volume 110 (ORG1) and the logical volume 150 (Data1) are set as a pair, write processing applied to the primary logical volume serves as a trigger for various kinds of processing with respect to the secondary logical volume, according to the status of the pair. The status of the pair includes, for example, a suspend state, a pair state, and an initial copy state. When the status of the pair is the pair state, data written in the primary logical volume is also written in the secondary logical volume. When the status of the pair is the suspend state, data written in the primary logical volume is not reflected in the secondary logical volume, and the difference between the primary logical volume and the secondary logical volume is retained in the first storage system 10 using a bitmap.
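The difference retention in the suspend state might be sketched as follows. This is an illustrative model under the assumption of one bit per fixed-size block; the names and granularity are hypothetical, not taken from the specification.

```python
class PairDifferenceBitmap:
    """Sketch of difference tracking while a pair is in the suspend
    state: each bit covers one block of the primary logical volume and
    is set when that block is updated, so that only differing blocks
    need be copied to the secondary logical volume at
    resynchronization."""
    def __init__(self, num_blocks: int):
        self.bits = [False] * num_blocks

    def record_write(self, block: int) -> None:
        self.bits[block] = True

    def blocks_to_resync(self) -> list[int]:
        return [i for i, b in enumerate(self.bits) if b]
```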
As described above, setting information for the journal group and setting information for this pair are accumulated in the shared memories (SMs) 70 shown in
In the next step 915, the user designates the logical volume 150 (Data1) and the logical volume 200 (Data2) to form a pair and performs initial copy. This is for giving the identical data image to the logical volume 150 (Data1) and the logical volume 200 (Data2) as in the processing in step 910.
A row of a pair number 2 in
When the data image of the logical volume 110 (ORG1) in the first storage system is copied to the logical volumes 150 (Data1) and 200 (Data2) in the storage systems 15 and 20, copy programs in the storage systems 15 and 20 inform the maintenance terminal or the host computer 5 of the end of the copy. After this initialization processing, accurate restore processing (recovery) for data in the storage system 20 becomes possible.
Next, an operation of the storage system in an embodiment of the storage system of the present invention will be explained in detail with reference to
First, the first storage system 10 receives a data write instruction from the host computer 5 via the connection line 210 (arrow 250 in
An arrow 1100 shown in
Upon receiving the data write instruction for writing data in the logical volume 150 (Data1) from the first storage system, the channel adapter 50 retains the write data and update information in the cache memory 60. The write data in the cache 60 is written in the logical volume 150 (Data1) by the disk adapter 80 at timing different from timing for writing data in the cache 60 (arrow 1110 in
Similarly, the update information (including at least an updated address) recorded in the cache 60 is written in an update information area of the logical volume 151 (JNL1), and the write data is further accumulated in a write data area of the logical volume 151 (JNL1) (arrow 1120 in
On the other hand, a channel adapter 51, which is connected to the third storage system 20 via the connection line 240, receives a read instruction for the logical volume 151 (JNL1) from the storage system 20. This point will be described later with reference to
Upon receiving an access instruction from the first storage system 10, the microprocessor mounted in the channel adapter 50 in
If the received access instruction is not a write instruction but a journal read instruction from the third storage system 20, the channel adapter 50 performs journal read reception processing to be described later (steps 1215 and 1220).
If the access instruction is a write instruction in step 1210, the channel adapter 50 checks a volume state of the logical volume 150 (Data1) (step 1240).
As shown in
If the volume state of the logical volume 150 (Data1) is not normal in step 1240, since access to the logical volume 150 (Data1) is impossible, the channel adapter 50 informs the host computer 5 of abnormality and ends the processing (step 1230).
If the volume state of the logical volume 150 (Data1) is normal in step 1240, the channel adapter 50 reserves the cache memory 60 and receives data (step 1250). More specifically, the channel adapter 50 informs the first storage system 10 that the channel adapter 50 is prepared for receiving data. Thereafter, the first storage system 10 sends write data to the second storage system 15. The channel adapter 50 in the second storage system 15 receives the write data and saves the write data in the prepared cache memory 60 (step 1250, arrow 1100 in
Next, the channel adapter 50 checks whether the logical volume 150 (Data1) is a logical volume having a journal group with reference to the journal group setting information table 550 (see
Here,
If the logical volume 150 (Data1) is a logical volume having a journal group, the channel adapter 50 applies journal creation processing to this volume and the journal logical volume 151 (JNL1) forming the journal group (step 1265). Thereafter, at arbitrary timing, the disk adapter 80 writes data in the logical volume 150 (Data1) and the logical volume 151 (JNL1) that are defined on the HDD (step 1280, arrows 1130 and 1140 in
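The flow of the access instruction reception processing just described (steps 1210 through 1280) might be summarized, purely as a sketch with hypothetical names, as follows:

```python
def receive_access_instruction(instr: dict, volume: dict, has_journal_group: bool) -> str:
    """Sketch of the instruction reception flow in the second storage
    system: dispatch journal read instructions, reject writes to
    abnormal volumes, and create a journal when the write destination
    belongs to a journal group."""
    if instr["type"] == "journal_read":
        return "journal_read_reception"      # steps 1215 and 1220
    if instr["type"] != "write":
        return "other_processing"
    if volume["state"] != "normal":
        return "report_abnormality"          # step 1230
    # step 1250: reserve the cache memory and receive the write data (elided)
    if has_journal_group:
        return "create_journal"              # step 1265
    return "write_only"
```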
As described above, the journal is created in the second storage system 15, and the journal data is sequentially stored in the journal volume 151 (JNL1). The journal data is sent to the journal volume 201 (JNL2) in the third storage system 20, triggered by a fixed factor. One method for sending the journal data is the PUSH system described above; the PULL system is another method. The PULL system will be explained with reference to
The channel adapter 51 in the second storage system 15 receives an access instruction from the third storage system 20 (arrow 1410 in
If the journal group state is “normal” in step 1510, the channel adapter 51 checks a state of a journal logical volume (step 1520).
If the volume state of the journal logical volume is not “normal”, for example, if the volume state of the journal logical volume is “failure” in step 1520, the channel adapter 51 changes the journal group state shown in
In step 1530, the channel adapter 51 checks whether journal data, which has not been sent, is present. If journal data, which has not been sent, is present, the channel adapter 51 sends the journal data to the third storage system 20 (step 1550). If all journal data have been sent to the storage system 20, the channel adapter 51 informs the third storage system 20 of “absence of journal data” (step 1560). Thereafter, the channel adapter 51 opens an area in which the journal data was present (step 1570).
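The handling in steps 1530 through 1570 might be sketched as below. This is an illustrative model with hypothetical names; note that, as described later, the area for journal data sent at the previous journal read instruction is opened at the time of the next one.

```python
def handle_journal_read(journal_queue: list, sent_entries: list):
    """Sketch of journal read reception processing: open the areas for
    journal data sent at the previous journal read instruction, then
    either send unsent journal data or report its absence."""
    freed = list(sent_entries)
    sent_entries.clear()               # step 1570: open prior areas
    if journal_queue:
        entry = journal_queue.pop(0)   # step 1550: send oldest unsent data
        sent_entries.append(entry)
        return ("journal", entry, freed)
    return ("absence_of_journal_data", None, freed)   # step 1560
```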
Processing in the case in which journal data, which has not been sent, is present will be explained more in detail with reference to
In read/write processing of the disk adapter 81, the disk adapter 81 reads the update information and the write data from the logical volume 151 (JNL1) that is a logical area formed in a distributed manner on the HDD 100, saves the update information and the write data in the cache memory 60, and informs the channel adapter 51 of the same (arrows 1430 and 1450 in
The channel adapter 51 is informed that the reading of the write data and the update information into the cache memory 60 has ended, sends the update information and the write data from the cache memory 60 to the third storage system 20, and then opens the cache memory 60 that retains journal data (arrow 1460 in
The channel adapter 51 opens the storage area for the journal data that was sent to the third storage system 20 at the time of the processing of the last journal read instruction (step 1570).
Note that, in the journal read reception processing described above, the second storage system 15 sends the journal data to the third storage system 20 one by one. However, the second storage system 15 may send plural journal data to the storage system 20 simultaneously.
The number of journal data to be sent at one journal read instruction may be designated in a journal read instruction by the third storage system 20 or may be designated in the second storage system 15 or the third storage system 20 by a user, for example, when a journal group is registered.
Moreover, the number of journal data, which is sent at one journal read instruction, may be changed dynamically according to transfer ability, load, or the like of the connection line 240 for the second storage system 15 and the third storage system 20. In addition, a transfer amount of journal data may be designated taking into account a size of write data of journal data rather than the number of journal data.
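Selecting how much journal data to send for one journal read instruction, bounded either by a number of entries or by a transfer amount that accounts for write data size, might be sketched as follows (hypothetical names; the actual bounds could be fixed, user-designated, or adjusted dynamically as described above):

```python
def select_batch(journal: list, max_entries: int, max_bytes: int) -> list:
    """Sketch of choosing the journal data to transfer for one journal
    read instruction: stop at either a designated entry count or a
    byte budget based on the size of each entry's write data."""
    batch, total = [], 0
    for e in journal:
        if len(batch) >= max_entries or total + len(e["data"]) > max_bytes:
            break
        batch.append(e)
        total += len(e["data"])
    return batch
```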
In the journal read instruction reception processing described above, journal data is read into the cache memory 60 from the HDD 100. However, when journal data is present in the cache memory 60, the processing is unnecessary.
The processing for opening a storage area for journal data in the journal read instruction reception processing is performed at the time of processing for the next journal read instruction. However, the storage area may be opened immediately after sending journal data to the third storage system 20. In addition, it is also possible that the third storage system 20 sets an update number, which may be opened, in a journal read instruction, and the second storage system 15 opens a storage area for journal data in accordance with an instruction of the third storage system 20.
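The last variation, in which the third storage system designates in a journal read instruction the update number up to which journal data may be opened, might be sketched as follows (an illustrative model with hypothetical names):

```python
def open_through(journal: list, openable_update_number: int) -> list:
    """Sketch of opening journal storage areas in accordance with an
    instruction of the third storage system: free every entry whose
    update number is at or below the designated number."""
    journal[:] = [e for e in journal
                  if e["update_number"] > openable_update_number]
    return journal
```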
The third storage system 20 having received the journal data stores the received journal data in the journal volume 201 (JNL2). Thereafter, the storage system 20 performs journal restore.
The third storage system 20 executes a journal restore program to restore data in the logical volume 200 (Data2) from the journal volume 201 (JNL2). Note that an area, in which the restored journal data was stored, is purged (opened) and used for storage of new journal data.
Next, this journal restore processing will be explained in detail.
An operation in which a channel adapter 53 in the third storage system 20 updates data using journal data will be explained with reference to
In step 2010 in
If the restore object journal data is present in step 2010, the channel adapter 53 applies the following processing to oldest (smallest) journal data. The channel adapter 53 only has to continuously give update numbers to the journal data and apply the restore processing to update information of journal data having an oldest (smallest) update number. The channel adapter 53 reserves the cache memory 60 (arrow 1910 in
More specifically, the disk adapter 83 in the third storage system 20 reads update information from the HDD 100, in which the update information is stored, according to read/write processing 340, saves the update information in the cache memory 60, and informs the channel adapter 53 of the update information.
Similarly, the disk adapter 83 in the third storage system 20 acquires write data on the basis of the read update information (step 1930) and issues an instruction to read the write data into an area of the cache memory 60 corresponding to a part of the logical volume 200 (Data2) that should be updated (step 2020, arrow 1940 in
Then, the disk adapter 83 writes the write data from the secondary logical volume cache area into the secondary logical volume 200 (Data2) asynchronously to the restore processing (arrow 1950 in
In the restore processing described above, journal data is read into the cache memory 60 from the HDD 100. However, when the journal data is present in the cache memory 60, the processing is unnecessary.
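One pass of the restore processing described above might be sketched as follows. This is an illustrative model with hypothetical names; the cache handling and asynchronous destaging by the disk adapter 83 are elided.

```python
def restore_step(jnl2: list, data2: dict) -> bool:
    """Sketch of one pass of restore processing: pick the journal entry
    with the oldest (smallest) update number, apply its write data to
    the secondary logical volume at the address given by the update
    information, and purge (open) the restored entry."""
    if not jnl2:
        return False                       # no restore object journal data
    oldest = min(jnl2, key=lambda e: e["update_number"])
    data2[oldest["address"]] = oldest["data"]   # update Data2
    jnl2.remove(oldest)                    # open the area for reuse
    return True
```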
Next, a second embodiment of the present invention will be explained.
First, the flowchart shown in
Next, the user designates information indicating a data copy object and information indicating a data copy destination and performs pair setting using the maintenance terminals or the host computers 5, 6, and 7 connected to the respective storage systems (step 3100). More specifically, the user sets a pair relation between the logical volume 110 (ORG1) and the logical volume 200 (Data2) in
In this step 3100, the user designates the logical volume 110 (ORG1) and the logical volume 200 (Data2) to form a pair and performs initial copy. This is for giving an identical data image to the logical volume 110 (ORG1) and the logical volume 200 (Data2). Then, the pair is deleted after the initial copy processing ends (step 3200).
Next, the user sets a pair relation between the logical volume 110 (ORG1) and the logical volume 150 (Data1) in the first storage system 10 and the second storage system 15 (step 3300).
The user registers the logical volume 150 (Data1) and the logical volume 151 (JNL1) as a journal group (step 3400).
The above is the procedure for the initial setting in the second embodiment. After this initialization processing, accurate restore processing (recovery) for data in the storage system 20 becomes possible.
Next,
In the first embodiment, that is, when the logical volume 150 (Data1) in the second storage system 15 has an entity, in this instruction reception processing 310, the processor analyzes the write command, stores the write data in the cache memory area corresponding to the write destination in the designated logical volume, and accumulates the update information in the cache memory area corresponding to the journal volume 151 (JNL1), in which the update information is written. The disk adapter 80 writes the data in the cache memory to the corresponding logical volume area according to circumstances.
On the other hand, in the second embodiment, first, the second storage system 15 judges whether the logical volume 150 (Data1) in the second storage system 15 designated as a write destination is a logical volume, which should be treated as one having an entity, with reference to the pair setting information table 510 shown in
The access instruction reception processing will be further explained with reference to
Next, the channel adapter 54 judges whether the volume for which the write instruction has been received is a normal volume (step 9240). If the volume state is not normal, the channel adapter 54 reports the abnormality via the maintenance terminal to the host apparatus that has issued the instruction and ends the processing (step 9230). Next, the channel adapter 54 judges whether the logical volume that is the write destination is a virtual volume using the pair setting information table 510 in
If the logical volume is not a virtual volume, the channel adapter 54 receives data in a cache area corresponding to the logical volume (step 9260) and informs the host apparatus of the end of the data reception (step 9270). Next, the channel adapter 54 judges whether the logical volume is a logical volume having a journal group (step 9280). If the logical volume is a logical volume having a journal group, the channel adapter 54 performs journal creation processing (step 9265).
In this way, since the pair setting information table 510 also includes virtualization information indicating whether the secondary logical volume is virtualized, actual writing of data to the secondary logical volume can be controlled. This makes it possible to define the secondary logical volume as a remote copy destination without allocating substantial storage capacity to it.
Next, a third embodiment of the present invention will be explained. In the third embodiment, a constitution for making this virtualized secondary logical volume available for other applications will be explained.
In the third embodiment, the logical volume 150 (Data1) is further connected to the host computer 6 via the channel adapter 57. Then, the third embodiment is particularly characterized by making it possible to write data from the host computer 6 to the logical volume 150 (Data1).
Next, it will be explained how configuration information in the shared memory 70 for making it possible to use the logical volume 150 (Data1) in the host computer 6 is held. The configuration information includes, in addition to the above-mentioned tables (
Upon receiving an access request (a read/write request for data), a processor in each channel adapter in the second storage system 15 judges whether the apparatus connected to that channel adapter is a host apparatus or a channel adapter of another storage system, with reference to the connection information table 5000 in
If it is judged that the apparatus connected to the channel adapter is not another storage system (or a channel adapter in another storage system), the channel adapter executes write processing for writing the data to the logical volume set as the write object. The channel adapter performs this processing by writing the data to a cache area corresponding to that logical volume; a disk adapter then writes the data to the logical volume defined on the HDD 100 asynchronously with the writing to the cache area. In this way, the storage system judges whether data for which an I/O (access request) is received may be written to the designated logical volume.
From the virtualization information alone, the storage system can only judge whether a logical volume is virtualized; it cannot judge whether the data may actually be written to the volume. Thus, the storage system identifies data from a host apparatus that may actually be written according to which channel adapter receives the data. Consequently, a logical volume that is virtualized can still be used by another host apparatus.
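The adapter-based admission judgment described above can be sketched as follows. The table contents and names are hypothetical examples; only the decision rule follows the text: data arriving through a host-connected adapter may be written, while data arriving over a remote-copy path must not be written to a virtualized volume.

```python
# Sketch of judging whether received data may actually be written,
# based on which channel adapter received it. Table contents and
# names are illustrative assumptions.

# Which kind of apparatus each channel adapter is connected to
# (cf. the connection information table 5000).
CONNECTION_TABLE = {
    "CHA56": "storage_system",  # connected to the first storage system 10
    "CHA57": "host",            # connected to the host computer 6
}

# Virtualization information (cf. the pair setting information table 510).
VIRTUALIZED = {"Data1": True}

def may_write(adapter, volume):
    """Return True if data received on this adapter may actually
    be written to the designated logical volume."""
    if CONNECTION_TABLE[adapter] == "host":
        # Host writes are allowed even to a virtualized volume.
        return True
    # Remote-copy data must not be written to a virtualized volume,
    # since that volume has no entity here.
    return not VIRTUALIZED.get(volume, False)

print(may_write("CHA57", "Data1"))  # prints: True
print(may_write("CHA56", "Data1"))  # prints: False
```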
Note that, as another method, when an identifier indicating remote copy data is present in a data set transferred in remote copy, writing of data in a virtualized volume may be restricted only in the case of remote copy using the identifier.
In the present invention, a case in which it is effective to virtualize a volume is explained with remote copy as an example. However, it is also possible to virtualize a logical volume set as an object of a function other than the remote copy, for example, an E-COPY command, which is a standard command of SCSI.
Note that it is needless to mention that, in
Next, a fourth embodiment of the present invention will be explained.
There is a connection information setting section 4400 in an area 4700, which indicates to which storage system or host apparatus each channel adapter in each storage system is connected. This connection information setting section 4400 makes it possible to set the connection relation between each channel adapter and a storage system or host apparatus. Note that the connection destination of a channel adapter may be a channel adapter of another storage system or a host apparatus.
An example of the screen of the connection information setting section 4400 indicates that the channel adapters 56, 57, and 58 are connected to the first storage system 10, the host computer 5, and the third storage system 20, respectively.
Moreover, as shown in
As described above, the user chooses not to virtualize the logical volume 150 (Data1) in the second storage system 15 when the user attaches importance to safety and failure resistance, and chooses to virtualize the logical volume 150 (Data1) when the user wishes to utilize the volume capacity in the second storage system 15 as much as possible. This makes it possible to establish a system according to purpose and cost. Note that the procedure for copying data from the first storage system 10 to the third storage system 20 after virtualization is as explained in the second embodiment.
Next, as a fifth embodiment of the present invention, a case will be explained in which, when a failure has occurred in the first storage system 10, a job is continued in the third storage system 20 located a long distance apart from the first storage system 10 (failover).
As shown in
Thus, in order to resume the job in the third storage system 20, first, the data that has not reached the third storage system 20 is reflected in the logical volume 200 (Data2). In the second and third embodiments, and in the fourth embodiment when the user has chosen to virtualize the logical volume, the second storage system 15 does not hold the logical volume 150 (Data1), but journal data is present in the journal volume 151 (JNL1). Thus, the journal data is sent to the third storage system 20, and the data that has not yet arrived is reflected in the logical volume 200 (Data2) according to the restore processing 350 shown in
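A minimal sketch of this restore step follows: journal entries not yet reflected in the logical volume 200 (Data2) are applied in sequence-number order. The entry format (sequence number, offset, data) and function name are assumptions for illustration, not the specification's own structures.

```python
# Illustrative sketch of restore processing: apply journal entries
# newer than the last reflected sequence number, in order.
# Entry format (seq, offset, data) is a hypothetical simplification.

def restore_from_journal(journal_entries, volume, applied_seq):
    """Apply entries with sequence numbers above applied_seq,
    in ascending order; return the newest applied sequence number."""
    for seq, offset, data in sorted(journal_entries):
        if seq <= applied_seq:
            continue  # this update is already reflected in the volume
        volume[offset] = data  # reflect the update
        applied_seq = seq
    return applied_seq

data2 = {0: b"old"}
jnl1 = [(2, 8, b"B"), (1, 0, b"A")]  # entries may be held out of order
last = restore_from_journal(jnl1, data2, applied_seq=0)
print(last)      # prints: 2
print(data2[0])  # prints: b'A'
```

Applying entries strictly in sequence order is what preserves the write order of the original host updates during failover.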
As a result, resistance against a failure can be kept while virtualizing the logical volume 150 (Data1) in the second storage system to reduce a volume capacity.
In addition, as a sixth embodiment, as shown in
Consequently, a copy of a copy source logical volume in the first storage system 10 can be created in the logical volume assigned to the second storage system 15 anew. Thereafter, the second storage system 15 can receive an instruction from the host computer 6.
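The idea of the sixth embodiment can be sketched in a few lines: a volume is newly assigned an entity in the second storage system 15, an initial copy of the copy source volume in the first storage system 10 is created there, and the volume can then accept host instructions. The function and variable names below are illustrative assumptions.

```python
# Hedged sketch: assign a volume anew and create a copy of the
# copy source volume in it. Names are hypothetical.

def assign_and_copy(source_volume):
    new_volume = {}  # the newly assigned volume in the second storage system
    # Initial copy: replicate every block of the copy source.
    for offset, data in source_volume.items():
        new_volume[offset] = data
    return new_volume

source = {0: b"a", 8: b"b"}
data1 = assign_and_copy(source)
print(data1 == source)  # prints: True
```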
The present invention has been explained specifically on the basis of the embodiments. However, it is needless to mention that the present invention is not limited by the embodiments, and various modifications are possible within a range not departing from the scope of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
2003-183734 | Jun 2003 | JP | national |
2004-122431 | Apr 2004 | JP | national |
This application is a continuation application of U.S. Ser. No. 12/570,461, filed Sep. 30, 2009, which is a continuation-in-part of U.S. Ser. No. 12/467,155, filed May 15, 2009 (now U.S. Pat. No. 8,028,139), which is a continuation application of U.S. Ser. No. 11/585,747, filed Oct. 23, 2006 (now U.S. Pat. No. 7,640,411), which is a continuation application of U.S. Ser. No. 10/871,341, filed Jun. 18, 2004 (now U.S. Pat. No. 7,130,976); and is a continuation-in-part of U.S. Ser. No. 11/715,481, filed Mar. 8, 2007, which is a continuation application of U.S. Ser. No. 10/992,432, filed Nov. 17, 2004 (now U.S. Pat. No. 7,725,445), which is a continuation application of U.S. Ser. No. 10/650,338, filed Aug. 27, 2003 (now U.S. Pat. No. 7,152,079).
Number | Name | Date | Kind |
---|---|---|---|
5155845 | Beal et al. | Oct 1992 | A |
5170480 | Mohan et al. | Dec 1992 | A |
5307481 | Shimazaki et al. | Apr 1994 | A |
5379418 | Shimazaki et al. | Jan 1995 | A |
5459857 | Ludlam et al. | Oct 1995 | A |
5544347 | Yanai et al. | Aug 1996 | A |
5555371 | Duyanovich et al. | Sep 1996 | A |
5592618 | Micka et al. | Jan 1997 | A |
5720029 | Kern et al. | Feb 1998 | A |
5734818 | Kern et al. | Mar 1998 | A |
5742792 | Yanai et al. | Apr 1998 | A |
5799323 | Mosher et al. | Aug 1998 | A |
5835953 | Ohran | Nov 1998 | A |
5901327 | Ofek | May 1999 | A |
5933653 | Ofek | Aug 1999 | A |
5974563 | Beeler | Oct 1999 | A |
5995980 | Olson et al. | Nov 1999 | A |
6044444 | Ofek | Mar 2000 | A |
6052758 | Crockett et al. | Apr 2000 | A |
6092066 | Ofek | Jul 2000 | A |
6098079 | Howard | Aug 2000 | A |
6101497 | Ofek | Aug 2000 | A |
6148383 | Micka et al. | Nov 2000 | A |
6157991 | Arnon | Dec 2000 | A |
6173377 | Yanai et al. | Jan 2001 | B1 |
6178427 | Parker | Jan 2001 | B1 |
6209002 | Gagne et al. | Mar 2001 | B1 |
6237008 | Beal et al. | May 2001 | B1 |
6282610 | Bergsten | Aug 2001 | B1 |
6308283 | Galipeau | Oct 2001 | B1 |
6324654 | Wahl et al. | Nov 2001 | B1 |
6360306 | Bergsten | Mar 2002 | B1 |
6363462 | Bergsten | Mar 2002 | B1 |
6393538 | Murayama | May 2002 | B2 |
6397307 | Ohran | May 2002 | B2 |
6408370 | Yamamoto et al. | Jun 2002 | B2 |
6442706 | Wahl et al. | Aug 2002 | B1 |
6446176 | West et al. | Sep 2002 | B1 |
6460055 | Midgley et al. | Oct 2002 | B1 |
6463501 | Kern et al. | Oct 2002 | B1 |
6467034 | Yanaka | Oct 2002 | B1 |
6477627 | Ofek | Nov 2002 | B1 |
6487645 | Clark et al. | Nov 2002 | B1 |
6496908 | Kamvysselis et al. | Dec 2002 | B1 |
6526487 | Ohran et al. | Feb 2003 | B2 |
6535967 | Milillo et al. | Mar 2003 | B1 |
6598134 | Ofek et al. | Jul 2003 | B2 |
6622152 | Sinn et al. | Sep 2003 | B1 |
6625623 | Midgley et al. | Sep 2003 | B1 |
6654752 | Ofek | Nov 2003 | B2 |
6662197 | LeCrone et al. | Dec 2003 | B1 |
6732125 | Autrey et al. | May 2004 | B1 |
6754792 | Nakamura et al. | Jun 2004 | B2 |
6804676 | Bains | Oct 2004 | B1 |
6859824 | Yamamoto et al. | Feb 2005 | B1 |
6883122 | Maple et al. | Apr 2005 | B2 |
6915315 | Autrey et al. | Jul 2005 | B2 |
6941322 | Bills et al. | Sep 2005 | B2 |
6959369 | Ashton et al. | Oct 2005 | B1 |
6968349 | Owen et al. | Nov 2005 | B2 |
6981008 | Tabuchi et al. | Dec 2005 | B2 |
7134044 | Day et al. | Nov 2006 | B2 |
20010007102 | Gagne et al. | Jul 2001 | A1 |
20010029570 | Yamamoto et al. | Oct 2001 | A1 |
20010042222 | Kedem et al. | Nov 2001 | A1 |
20020016827 | McCabe et al. | Feb 2002 | A1 |
20020103980 | Crockette et al. | Aug 2002 | A1 |
20020133511 | Hostetter et al. | Sep 2002 | A1 |
20020133681 | McBrearty et al. | Sep 2002 | A1 |
20020143888 | Lisiecki et al. | Oct 2002 | A1 |
20030014432 | Teloh et al. | Jan 2003 | A1 |
20030014433 | Teloh et al. | Jan 2003 | A1 |
20030051111 | Nakano et al. | Mar 2003 | A1 |
20030074378 | Midgley et al. | Apr 2003 | A1 |
20030074600 | Tamatsu | Apr 2003 | A1 |
20030084075 | Balogh et al. | May 2003 | A1 |
20030115433 | Kodama | Jun 2003 | A1 |
20030126107 | Yamagami | Jul 2003 | A1 |
20030188114 | Lubber et al. | Oct 2003 | A1 |
20030188233 | Lubbers et al. | Oct 2003 | A1 |
20030204479 | Bills et al. | Oct 2003 | A1 |
20030217031 | Owen et al. | Nov 2003 | A1 |
20030220935 | Vivian et al. | Nov 2003 | A1 |
20030229764 | Ohno et al. | Dec 2003 | A1 |
20040015469 | Beier et al. | Jan 2004 | A1 |
20040024808 | Taguchi et al. | Feb 2004 | A1 |
20040024975 | Morishita et al. | Feb 2004 | A1 |
20040030703 | Bourbonnais et al. | Feb 2004 | A1 |
20040059738 | Tarbell | Mar 2004 | A1 |
20040148443 | Achiwa | Jul 2004 | A1 |
20040153719 | Achiwa et al. | Aug 2004 | A1 |
20040172509 | Takeda et al. | Sep 2004 | A1 |
20040172510 | Nagashima et al. | Sep 2004 | A1 |
20040193795 | Takeda et al. | Sep 2004 | A1 |
20040215878 | Takata et al. | Oct 2004 | A1 |
20040230756 | Achiwa et al. | Nov 2004 | A1 |
20040230859 | Cochrane et al. | Nov 2004 | A1 |
20040250031 | Ji et al. | Dec 2004 | A1 |
20050038968 | Iwamura et al. | Feb 2005 | A1 |
20050050115 | Kekre | Mar 2005 | A1 |
20050052921 | Butterworth et al. | Mar 2005 | A1 |
20050071710 | Micka et al. | Mar 2005 | A1 |
20050081091 | Bartfai et al. | Apr 2005 | A1 |
20050114410 | Fujibayashi | May 2005 | A1 |
20050154845 | Shackelford et al. | Jul 2005 | A1 |
20050223267 | Fujibayashi | Oct 2005 | A1 |
20050235121 | Ito et al. | Oct 2005 | A1 |
Number | Date | Country |
---|---|---|
0902370 | Mar 1999 | EP |
1217523 | Jun 2002 | EP |
1283469 | Feb 2003 | EP |
1591899 | Nov 2005 | EP |
1647891 | Apr 2006 | EP |
62-274448 | Nov 1987 | JP |
02-037418 | Feb 1990 | JP |
07-191811 | Jul 1995 | JP |
07-244597 | Sep 1995 | JP |
11-306058 | Nov 1999 | JP |
2000-181634 | Jun 2000 | JP |
2001-282628 | Oct 2001 | JP |
2002-189570 | Jul 2002 | JP |
2002-281065 | Sep 2002 | JP |
2002-542526 | Dec 2002 | JP |
2004-511854 | Apr 2004 | JP |
0049500 | Aug 2000 | WO |
0217056 | Feb 2002 | WO |
0221273 | Mar 2002 | WO |
0231696 | Apr 2002 | WO |
2004017194 | Feb 2004 | WO |
2008028803 | Mar 2008 | WO |
Entry |
---|
Deutsches Patent- und Markenamt office action for patent application DE10-2004-064076, Mar. 23, 2009. |
Deutsches Patent- und Markenamt office action for patent application DE10-2004-056216, Oct. 19, 2006. |
Deutsches Patent- und Markenamt office action for patent application DE10-2004-056216, Jun. 2, 2006. |
“Replication Guide and Reference V7: Document No. SC26-9920-00”, IBM DB2 Guide, 2000, 455 pages. |
“Replication Guide and Reference V8: Document No. SC27-1121-01”, IBM DB2 Guide, 2003, 789 pages. |
“IBM DB2 RepliData for z/OS, Version 3.1”, IBM DB2 Guide, Mar. 2003, 2 pages. |
IBM DB2 DataPropagator for z/OS, Version 8.1, IBM DB2 Guide, 2002, 4 pages. |
JP 2006-161778, Office Action (English translation), Sep. 10, 2007, 5 pages. |
Number | Date | Country | |
---|---|---|---|
20120290787 A1 | Nov 2012 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12570461 | Sep 2009 | US |
Child | 13535630 | US | |
Parent | 11585747 | Oct 2006 | US |
Child | 12467155 | US | |
Parent | 10871341 | Jun 2004 | US |
Child | 11585747 | US | |
Parent | 10992432 | Nov 2004 | US |
Child | 11715481 | US | |
Parent | 10650338 | Aug 2003 | US |
Child | 10992432 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12467155 | May 2009 | US |
Child | 12570461 | US | |
Parent | 11715481 | Mar 2007 | US |
Child | 10871341 | US |