The present invention is related to Japanese patent application No. 2003-400513 filed in Japan on Nov. 28, 2003, and Japanese patent application No. 2003-325082 filed in Japan on Sep. 17, 2003, which are incorporated herein by reference.
1. Field of the Invention
The present invention relates to a method for controlling a storage device system, a storage device system, and a storage device.
2. Related Background Art
Disaster recovery in information processing systems is attracting attention. As a technology to realize such disaster recovery, a technology in which a copy of data stored in a storage device that is installed in a primary site is also managed by a storage device that is installed in a remote site located away from the primary site is known. By using the data stored in the storage device installed at the remote site when the primary site is hit by a disaster, processings that are performed at the primary site can be continued at the remote site.
For data transfer from the primary site to the remote site, a method in which data is exchanged between an information processing device at the primary site and an information processing device at the remote site is known. The information processing device at the primary site transfers a copy of data that is written in the storage device at the primary site to the information processing device at the remote site. The information processing device at the remote site that has received the copy of data sends a request to write the data in the storage device at the remote site.
When data is stored as a backup by the method described above, a substantially large amount of data flow occurs on the network between the information processing devices. This causes a variety of problems such as an increased interface processing load on the information processing devices, delays in other data transmissions to be conducted between the information processing devices, and the like. Also, the method described above requires software for controlling data backup to be installed in each of the information processing devices. For this reason, management work such as upgrading the software needs to be performed on all of the information processing devices that execute data backup processings, which increases the management cost.
The present invention has been made in view of the problems described above, and relates to a storage device system, a storage device and a method for controlling a storage device system.
In accordance with an embodiment of the present invention, there is provided a method for controlling a storage device system that includes: at least one information processing device, a first storage device equipped with a first storage volume, and a second storage device equipped with a second storage volume, wherein the information processing device and the first storage device are communicatively connected to one another, the first storage device and the second storage device are communicatively connected to one another, the information processing device is equipped with a first write request section that requests to write data in the first storage device according to a first communications protocol, and the first storage device is equipped with a second write request section that requests to write data in the second storage device according to a second communications protocol. The method comprises: a step in which the information processing device sets a first instruction to be executed at the second storage device as first data; a step in which the information processing device sends a request to write the first data in the first storage volume to the first write request section; a step in which, when the first data written in the first storage volume is an instruction to the second storage device, the first storage device sends a request to write the first data in the second storage volume to the second write request section; and a step in which the second storage device executes the first instruction that is set as the first data written in the second storage volume.
It is noted that the information processing device may be, for example, a personal computer, a work station or a mainframe computer. The storage device may be, for example, a disk array device or a semiconductor storage device. The storage volume may be a storage resource that includes a physical volume that is a physical storage region provided by a disk drive, and a logical volume that is a storage region logically set on the physical volume. Also, the communications protocol may be, for example, a WRITE command stipulated by a SCSI (Small Computer System Interface) standard. As a result, without adding new commands to the operating system, the information processing device can make the second storage device execute the first instruction.
Here, for example, when the first instruction is an instruction to read data of the first storage device, the second storage device can hold a copy of the data of the first storage device according to an instruction from the information processing device. Therefore, the present method can reduce the amount of data communicated between information processing devices in data backup management. Also, software for controlling data backup does not have to be installed on all of the information processing devices that perform data backup, which lowers the management costs.
Other features and advantages of the invention will be apparent from the following detailed description, taken in conjunction with the accompanying drawings that illustrate, by way of example, various features of embodiments of the invention.
The information processing device 11 and the first storage device 10 are communicatively connected to each other via a first network 50. The first network 50 may be, for example, a LAN (Local Area Network), a SAN (Storage Area Network), an iSCSI (Internet Small Computer System Interface), an ESCON (Enterprise Systems Connection)®, or a FICON (Fibre Connection)®.
The first storage device 10 and the second storage device 20 are communicatively connected to each other via a second network 60. The second network 60 may be, for example, Gigabit Ethernet®, an ATM (Asynchronous Transfer Mode) network, or a public telephone line.
[Information Processing Device]
The information processing device 11 may be a computer that is equipped with a CPU (Central Processing Unit), memories, and other devices. The information processing device 11 may be a personal computer, a work station or a mainframe computer. The information processing device 11 may be composed of a plurality of computers that are mutually connected. An operating system is operating on the information processing device 11, and application software is operating on the operating system.
[Storage Device]
The cache memory 205 is used to temporarily store data that is exchanged mainly between the channel control section 201 and the disk control sections 203. For example, when a data input/output command which the channel control section 201 receives from the information processing device 11 is a write command, the channel control section 201 writes the write data received from the information processing device 11 in the cache memory 205. Also, an appropriate one of the disk control sections 203 reads the data written in the cache memory 205, and writes the same in the memory devices 208.
The disk control section 203 reads a data I/O request that has been written in the shared memory 204 by the channel control section 201, and executes data writing processing or data reading processing with respect to the memory devices 208 according to a command set in the data I/O request (for example, a command according to a SCSI standard). The disk control section 203 writes in the cache memory 205 data that has been read out from the memory devices 208. Also, the disk control section 203 transmits to the channel control section 201 notifications, such as, for example, a data write completion notification and a data read completion notification. The disk control section 203 may be equipped with a function to control the memory devices 208 with RAID levels (for example, 0, 1, 5) stipulated in the so-called RAID (Redundant Array of Inexpensive Disks) method.
The memory devices 208 may be, for example, hard disk devices. The memory devices 208 may be provided integrally with the disk array device, or separately as independent devices. Storage regions provided by the memory devices 208 at each site are managed in units of logical volumes 209, which are volumes that are logically set on the storage regions. Data can be written in or read from the memory devices 208 by designating LUNs (Logical Unit Numbers), which are identifiers appended to the corresponding logical volumes 209. Also, the logical volumes 209 are managed in units of a predetermined data amount such as units of 512 Kb, such that input and output of data are conducted in this predetermined unit. Each of the units is called a logical block, and each of the logical blocks is appended with a logical block address (hereafter referred to as an "LBA") that indicates positional information of the logical block.
The management terminal 207 may be a computer for maintaining and managing the disk array device and the memory devices 208. Changes in the software and parameters to be executed by the channel control section 201 and the disk control section 203 can be conducted by giving instructions from the management terminal 207. The management terminal 207 can be in a form that is built in the disk array device, or can be provided independently from the disk array device.
The remote communications interface 202 is a communications interface (i.e., a channel extender) that is used for data transfer to another storage device. A copy of data is transferred in a remote copy operation, to be described below, through this remote communications interface 202. The remote communications interface 202 converts the interface of the channel control section 201 (for example, an interface such as an ESCON® interface or a FICON® interface) to the communications method of the second network 60, whereby data transfer with the other storage device can be realized.
Besides the structure described above, the disk array device may have a structure that functions as a NAS (Network Attached Storage) configured to accept data input/output requests through designating file names from the information processing device 11 according to a relevant protocol such as a NFS (Network File System).
The shared memory 204 can be accessed from both of the channel control section 201 and the disk control section 203. The shared memory 204 is used for delivering data input/output request commands, as well as for storing management information for the storage devices 10 and 20, and the memory devices 208. In the present embodiment, the shared memory 204 stores a LUN map information table 301 shown in
[Virtual Volume]
As described above, the logical volumes 209 are storage regions that are logically set on the physical volumes. Also, by using "virtual volumes" as the logical volumes 209, the storage device 10 or 20 on which a logical volume 209 is set can be different from the storage device that is equipped with the physical volume correlated with that logical volume 209.
To realize this function, the first storage device 10 stores a LUN map information table 301 shown in
Each entry at “LUN” describes a LUN for each of the logical volumes. When a logical volume 209 is a virtual volume, a storage device that is equipped with the logical volume 209 correlated with the virtual volume is set at “Target.” Furthermore, a LUN of the logical volume 209 correlated with the virtual volume is set at “Mapping LUN.” In other words, when there is a description at “Mapping LUN,” it means that the corresponding logical volume is a virtual volume.
Details of the LUN map information table 301 may be registered, for example, by an operator through the management terminal 207 that is connected to the first storage device 10.
The first storage device 10 uses the LUN map information table 301 described above and provides the second logical volume 40 of the second storage device 20 to the information processing device 11 by a mechanism to be described below as if the second logical volume 40 were the first logical volume 30 of the storage device 10. In other words, the information processing device 11 can make data input/output requests, which are to be issued to the logical volume 209 of the second storage device 20, to the first storage device 10.
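The following is a minimal illustrative sketch, in Python, of how a LUN map information table of this kind might be represented and consulted. The class name LunMapEntry, the dictionary layout, the function resolve, and the example LUN values 30 and 40 are assumptions made for illustration and are not taken from the embodiment.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class LunMapEntry:
    """One row of a LUN map information table (hypothetical representation).

    target and mapping_lun are set only when the local LUN is a virtual
    volume, i.e. when the correlated physical volume resides in another
    storage device.
    """
    target: Optional[str] = None        # storage device holding the real volume
    mapping_lun: Optional[int] = None   # LUN of the real volume on that device

# Example table for the first storage device 10: local LUN 30 is a virtual
# volume mapped to LUN 40 of the second storage device 20.
lun_map_table: Dict[int, LunMapEntry] = {
    30: LunMapEntry(target="storage-device-20", mapping_lun=40),
    31: LunMapEntry(),  # ordinary (non-virtual) logical volume
}

def resolve(lun: int) -> Tuple[str, int]:
    """Return (device, lun) where a request for `lun` should actually be served."""
    entry = lun_map_table.get(lun, LunMapEntry())
    if entry.mapping_lun is not None:   # a Mapping LUN entry marks a virtual volume
        return entry.target, entry.mapping_lun
    return "local", lun

if __name__ == "__main__":
    print(resolve(30))   # ('storage-device-20', 40) -> serve via the second device
    print(resolve(31))   # ('local', 31)             -> serve from the local volume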
Processings by the storage device system, which take place when a data input/output request transmitted from the information processing device 11 is a data write request, will be described with reference to
The information processing device 11 is equipped with a first write request section 401 that writes data in the first storage device 10 according to a first communications protocol. Upon receiving a data write request from the first write request section 401 (S401), the first storage device 10 writes in the cache memory 205 the data to be written that has been received with the data write request.
A data transfer section 402 of the first storage device 10 refers to the LUN map information table 301, and confirms as to whether or not a mapping LUN is set for a first logical volume 30 that is set in the data write request. If a second logical volume 40 is set as the mapping LUN, the data transfer section 402 transfers to a second write request section 403 a request to write the data in the second logical volume 40 according to a second communications protocol. In this embodiment, the second write request section 403 makes data write requests to the second storage device 20 according to the second communications protocol. The second storage device 20 receives the data write request from the second write request section 403, and writes the data in the second logical volume 40 (S402).
It is noted that the first communications protocol and the second communications protocol are, for example, WRITE commands stipulated by a SCSI standard. Accordingly, the data write interfaces at the first storage device 10 and the second storage device 20 do not need to be changed.
The write processing has been so far described. It is noted however that a read processing to read data from a logical volume is also performed in a manner similar to the write processing except that data is transferred in an opposite direction with respect to the data transfer direction in the write processing.
As described above, in the storage device system in accordance with the present embodiment, the information processing device 11 accesses the second logical volume 40 as if the second logical volume 40 were a logical volume on the first storage device 10.
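A minimal sketch of the write-forwarding behavior described above, assuming a simple in-memory model: the dictionaries standing in for the cache memory, the second device's volume, and the mapping table, as well as the function names, are hypothetical and only illustrate the flow of S401 and S402.

```python
from typing import Dict, Tuple

# Hypothetical in-memory stand-ins; the real devices exchange SCSI WRITE
# commands over the first and second networks.
cache_memory: Dict[int, bytes] = {}
second_device_volumes: Dict[int, bytes] = {}

# Mapping LUN entries of the first storage device: local LUN 30 is a virtual
# volume whose data actually resides in LUN 40 of the second storage device.
mapping_lun_table: Dict[int, Tuple[str, int]] = {30: ("storage-device-20", 40)}

def second_write_request(lun: int, data: bytes) -> None:
    """Stand-in for the second write request section 403 (second protocol)."""
    second_device_volumes[lun] = data

def handle_first_write_request(lun: int, data: bytes) -> str:
    """Sketch of how the first storage device might serve a write from the host."""
    cache_memory[lun] = data                  # S401: store the write data in the cache
    mapped = mapping_lun_table.get(lun)       # consult the LUN map information table
    if mapped is not None:                    # virtual volume -> transfer the write
        device, real_lun = mapped
        second_write_request(real_lun, data)  # S402: write into the second volume
        return f"forwarded to {device}, LUN {real_lun}"
    return f"written locally to LUN {lun}"

if __name__ == "__main__":
    print(handle_first_write_request(30, b"payload"))
```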
[Command Device]
Each of the storage devices 10 and 20 is equipped with a “command device” for controlling special commands. The command device is used to convey commands from the information processing device 11 to the storage devices 10 and 20, and the storage devices 10 and 20 can execute commands that are stored in the command devices. What makes the special commands different from ordinary commands is that the command devices are the logical volumes 209. Functions of the command device will be described below.
Details of the command device management table 501 may be registered, for example, by an operator through the management terminal 207 that is connected to each of the storage devices 10 and 20.
The command device management table 501 of each of the storage devices 10 and 20 can also register command devices of other storage devices (that may be similar to the storage device 10 or 20). When the command devices of the other storage devices are registered, the LUNs of the virtual volumes that correspond to the LUNs of the command devices of the other storage devices are registered at the "Command Device LUN" entries.
An outline of a process flow to execute a command using a command device will be described with reference to
The first storage device 10 is equipped with a command execution section 703. The command execution section 703 is equipped with a pair management section 704, a copy forming section 705, a restore section 706, a journal storing section 707, a journal acquisition section 708 and a journal stop section 709, which control pairs of the logical volumes 209 to be described below.
The command execution section 703 refers to a command device management table 501, and obtains a LUN of a command device that corresponds to the first storage device 10 (S701). The command execution section 703 refers to the command device (S702) and, if data in the form of the command device interface 601 exists, executes a command designated by a process number indicated in the data.
Referring to flow charts in
Upon receiving the write request, the storage device 10 writes the first data in the command device at the designated LUN.
It is noted that command devices are logical devices that are defined on storage areas of a plurality of storage devices, like the logical volumes 209, and write requests to the command devices are transmitted based on the same communications protocol as that for write requests transmitted to the logical volumes 209.
The storage device 10 refers to the command device management table 501 to specify the LUNs of the command devices that the storage device 10 itself should refer to, and monitors whether or not the command devices have data written therein (S901). When the first data is found written in any of the command devices under observation, the storage device 10 executes the command designated by the process number in the first data (S902). Having completed the execution of the command, the storage device 10 confirms whether edited data of the first data is present or absent (S903). When edited data is absent, the storage device 10 deletes the first data from the command device (S906). When edited data is present, the storage device 10 sets the data outputted as a result of execution of the command as the edited data (S904).
The information processing device 11 confirms whether edited data for the command is present or absent (S803), and transmits to the storage device 10 a read request to read the edited data of the first data when the edited data is present (S804). Upon receiving the edited data from the storage device 10 (Yes at S805), the information processing device 11 completes the processing. It is noted that the read request is transmitted based on the same communications protocol as that for read requests to the logical volumes 209 other than the command device.
When the edited data exists, after receiving the read request for the edited data from the information processing device 11 (S905), the storage device 10 deletes the first data from the command device (S906).
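A minimal sketch of this command-device handshake, assuming an in-memory model in which a command device holds at most one piece of first data at a time. The LUN 64, the process number 1, and the helper names are illustrative assumptions; the step numbers in the comments refer to the flow just described.

```python
from typing import Dict, Optional

# Hypothetical model: a command device holds the "first data" (a process
# number plus an edited-data flag), keyed by the command device's LUN.
command_devices: Dict[int, Optional[dict]] = {64: None}   # LUN 64 is the command device
edited_data_store: Dict[int, bytes] = {}

def execute_process(process_number: int) -> Optional[bytes]:
    """Stand-in for executing the command designated by the process number."""
    if process_number == 1:                        # e.g. report a pair state
        return b"pair state: PAIR"
    return None

def write_first_data(lun: int, process_number: int, wants_edited_data: bool) -> None:
    """Host side: write the first data into the command device (ordinary WRITE)."""
    command_devices[lun] = {"process": process_number, "edited": wants_edited_data}

def monitor_command_devices() -> None:
    """Storage side: one polling pass over the command devices (S901-S906)."""
    for lun, data in command_devices.items():
        if data is None:
            continue                               # nothing written yet
        result = execute_process(data["process"])  # S902: execute the command
        if not data["edited"]:                     # S903/S906: no edited data -> delete
            command_devices[lun] = None
        else:                                      # S904: keep edited data for a later READ
            edited_data_store[lun] = result or b""

def read_edited_data(lun: int) -> bytes:
    """Host side: read the edited data, after which the first data is deleted (S804, S905, S906)."""
    result = edited_data_store.pop(lun)
    command_devices[lun] = None
    return result

if __name__ == "__main__":
    write_first_data(64, process_number=1, wants_edited_data=True)   # S801/S802
    monitor_command_devices()
    print(read_edited_data(64))                                      # b'pair state: PAIR'
```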
In this manner, by using the same read and write requests that the information processing device 11 uses for reading and writing data from and to ordinary logical volumes of the storage device 10, the information processing device 11 can transfer commands to the storage device 10.
Also, by using the virtual volumes, the information processing device 11 can transfer commands to the second storage device 20 through the first storage device 10, such that the second storage device 20 can execute the commands.
It is noted that, when the information processing device 11 requests the storage devices 10 and 20 to execute a "pair formation," "journal acquisition," "acquisition of processing state of journal," "restore" or "swap" processing to be described below, the information processing device 11 uses the virtual volumes and command devices.
[Pair Formation]
Next, a description will be made as to a method for storing a copy of data in the logical volume 209 of the first storage device 10 in the logical volume 209 of the second storage device 20.
Any one of appropriate methods for assigning the logical volumes 209 for storing the journals can be used. For example, the user himself/herself may designate those of the logical volumes 209 to be used as the journals, or the information processing device 11 may select appropriate unused ones of the logical volumes 209.
Referring to
Also, the journal storing section 707 of the first storage device 10 starts a processing to obtain a copy of the data written in the primary volume and its positional information in the primary journal. The correlation between the primary volume and the primary journal is described hereunder with reference to
Also, by using a method similar to the above, a copy of data in the logical volume 209 of the second storage device 20 can be stored in the logical volume 209 of the first storage device 10 by an instruction from the information processing device 11.
As a result, without performing data communications between plural information processing devices, and without adding new commands to the operating system of the information processing device 11, data stored in a storage device at a primary site can be stored as a backup in a storage device at a remote site. Also, in accordance with the present embodiment, a storage device at a remote site transmits a read request to a storage device at a primary site to thereby perform a copy forming processing. By this, the processing load on the storage device at the primary site during the copy forming processing is alleviated. In other words, in a method in which a storage device at a primary site writes data in a storage device at a remote site, the storage device at the primary site needs to write the data in the storage device at the remote site after it confirms that the storage device at the remote site is ready for forming a pair. For this reason, the processing load on the storage device at the primary site becomes heavier, which would affect the overall performance of the primary site that is performing other primary processings. In contrast, in accordance with the present embodiment, since the storage device at the primary site only has to send data in response to a read request from the storage device at the remote site, the processing load at the storage device at the primary site can be alleviated.
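A minimal sketch of the pull-style copy forming described above, in which the auxiliary side issues read requests to the primary side block by block rather than the primary side pushing data. The block count, the dictionary model of a volume, and the function names are illustrative assumptions only.

```python
from typing import Dict

BLOCK_COUNT = 4   # illustrative number of logical blocks in the paired volume

# Primary volume on the first storage device, addressed by LBA.
primary_volume: Dict[int, bytes] = {lba: f"block-{lba}".encode() for lba in range(BLOCK_COUNT)}
# Auxiliary volume on the second storage device, initially empty.
auxiliary_volume: Dict[int, bytes] = {}

def primary_read(lba: int) -> bytes:
    """Primary side only has to answer READ requests; it does not push data."""
    return primary_volume[lba]

def copy_forming() -> None:
    """Auxiliary side pulls the initial copy block by block."""
    for lba in range(BLOCK_COUNT):
        auxiliary_volume[lba] = primary_read(lba)

if __name__ == "__main__":
    copy_forming()
    print(auxiliary_volume == primary_volume)   # True: the initial copy is complete
```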
[Restoration]
Even after the copy forming processing is performed, the first storage device 10 accepts write requests from the information processing device 11, and updates the data in the primary volumes. For this reason, the data in the primary volumes becomes inconsistent with the data in the auxiliary volumes. As described above, the primary journal stores journal data for executions performed even after the copy forming processing took place. In this respect, the second storage device 20 copies data stored in the primary journal into the auxiliary journal, and writes the data stored in the auxiliary journal into the auxiliary volumes, such that updates of the data on the primary volumes can be likewise performed on the auxiliary volumes.
Here, a processing to copy data stored in the primary journal into the auxiliary journal by the second storage device 20 is referred to as a “journal acquisition” processing, and a processing to write journal data stored in the auxiliary journal into the auxiliary volume is referred to as a “restoration” processing.
Next, referring to
The journal data region 1202 of the auxiliary journal is composed of a restoration completed region 1521 that stores journal data that have already been used for restoration in the auxiliary volumes, a restore in-progress region 1522 that stores journal data that are designated for restoration, a read completed region 1523 that stores journal data that have been read but are not yet designated for restoration, and a read in-progress region 1524 that stores journal data that are being read from the primary journal in response to a journal acquisition instruction.
Each of the storage devices 10 and 20 stores journal data in the journal data region 1202 from the head LBA to the end LBA in a chronological order as the journal data is created. When the journal data reaches the end LBA, each of the storage devices 10 and 20 returns to the head LBA again, and stores journal data from there. In other words, the storage devices 10 and 20 use the journal data regions cyclically between the head LBA and the end LBA.
The first storage device 10 that is equipped with the primary journal stores a journal-out LBA 1511 which is a head LBA of the journal storage completed regions 1502, 1503 and 1504, and a journal-in LBA 1512 which is a head LBA of the purge completed region 1501. When the journal-out LBA and the journal-in LBA are equal to each other, it means that journal data is not stored in the primary journal.
The second storage device 20 that is equipped with the auxiliary journal stores a restoration completed LBA 1531 which is the highest LBA of the restoration completed region 1521, a to-be restored LBA 1532 which is the highest LBA of the restore in-progress region 1522, a read completed LBA 1533 which is the highest LBA of the read completed region 1523, and a to-be read LBA 1534 which is the highest LBA of the read in-progress region 1524.
In other words, when the restoration completed LBA 1531 and the to-be restored LBA 1532 are equal to each other, it means that a restoration processing instructed by the information processing device 11 has been completed. Also, when the read completed LBA 1533 and the to-be read LBA 1534 are equal to each other, it means that a journal acquisition processing instructed by the information processing device 11 has been completed.
The information processing device 11 can transmit to the first storage device 10 and the second storage device 20 a request to obtain the processing state of journal. Each of the storage devices 10 and 20 confirms the states of LBAs that indicate the boundaries of the regions described above, and responds to the request.
Also, since the storage devices 10 and 20 use the journal data regions cyclically as described above, regions that become unnecessary need to be released. The processing to release a region is called a "purge" processing. Each of the storage devices 10 and 20 can perform a purge processing by changing the addresses of the LBAs that indicate the boundaries of the regions. The first storage device 10 can purge the journal storage completed region 1502, among the journal storage completed regions 1502, 1503 and 1504 of the primary journal, for which the second storage device 20 has completed acquiring the journal data into the auxiliary journal. In this case, the first storage device 10 changes the journal-out LBA 1511 to the head LBA of the journal storage completed region 1503, such that the journal storage completed region 1502 becomes the purge completed region 1501. The second storage device 20 treats the restoration completed region 1521 of the auxiliary journal as a region that is purged, and stores the journal data obtained in response to the journal acquisition instruction in the restoration completed region 1521.
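A minimal sketch of the cyclic use of a journal data region on the primary side, assuming one journal entry per LBA for simplicity. The class name PrimaryJournal, the slot dictionary, and the example LBAs are hypothetical; only the journal-in/journal-out pointer behavior and the purge step mirror the description above.

```python
class PrimaryJournal:
    """Minimal sketch of the primary journal's cyclic data region.

    journal_in marks where the next journal entry will be stored; journal_out
    marks the oldest entry not yet acquired by the auxiliary side.  Equal
    pointers mean the journal is empty.  (Simplification: one entry per LBA.)
    """

    def __init__(self, head_lba: int, end_lba: int) -> None:
        self.head_lba = head_lba
        self.end_lba = end_lba
        self.journal_in = head_lba    # where new journal data is stored next
        self.journal_out = head_lba   # start of the journal storage completed region
        self.slots: dict = {}

    def _advance(self, lba: int) -> int:
        """Wrap around from the end LBA back to the head LBA (cyclic use)."""
        return self.head_lba if lba == self.end_lba else lba + 1

    def store(self, journal_data: bytes) -> None:
        if self._advance(self.journal_in) == self.journal_out:
            raise RuntimeError("journal data region is full; a purge is needed")
        self.slots[self.journal_in] = journal_data
        self.journal_in = self._advance(self.journal_in)

    def purge(self, up_to_lba: int) -> None:
        """Release entries already acquired into the auxiliary journal."""
        while self.journal_out != up_to_lba:
            self.slots.pop(self.journal_out, None)
            self.journal_out = self._advance(self.journal_out)

    def is_empty(self) -> bool:
        return self.journal_in == self.journal_out

if __name__ == "__main__":
    journal = PrimaryJournal(head_lba=0, end_lba=7)
    journal.store(b"update-1")
    journal.store(b"update-2")
    journal.purge(up_to_lba=1)      # the auxiliary side acquired the first entry
    print(journal.is_empty())       # False: one entry still awaits acquisition
```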
Referring to a flowchart in
By the processings described above, updated data in a storage device at a primary site can be reflected on a storage device at a remote site without performing data communications between multiple information processing devices, and without adding new commands to the operating system of the information processing devices. It is noted that, with an instruction from the information processing device 11 that is communicatively connected to a storage device at a remote site, the storage device at the remote site can obtain journal data from a storage device at a primary site and restore the data.
[Swap]
Let us assume that a primary volume of the first storage device 10 and an auxiliary volume of the second storage device 20 form a pair by an instruction from an information processing device 11 (hereafter referred to as a "first information processing device") that is communicatively connected to the first storage device 10. In this case, if a failure occurs in the first information processing device, an information processing device 11 (hereafter referred to as a "second information processing device") that is communicatively connected to the second storage device 20 can continue the processings that have been performed by the first information processing device, using the auxiliary volume of the pair. In this instance, the second information processing device switches the relation between the primary volume and the auxiliary volume. In other words, a pair is formed with the logical volume 209 of the second storage device 20 being a primary volume and the logical volume 209 of the first storage device 10 being an auxiliary volume. Such a processing to switch the pair relation is called a "swap" processing.
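A minimal sketch of the effect of a swap on a pair record: the roles of the primary and auxiliary volumes are exchanged. The Pair dataclass, its field names, and the example device names are hypothetical; the pair management details of the embodiment are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Pair:
    """Hypothetical pair record: which device/volume is primary and which is auxiliary."""
    primary_device: str
    primary_lun: int
    auxiliary_device: str
    auxiliary_lun: int

def swap(pair: Pair) -> Pair:
    """Switch the pair relation so that the former auxiliary volume becomes the primary."""
    return Pair(primary_device=pair.auxiliary_device,
                primary_lun=pair.auxiliary_lun,
                auxiliary_device=pair.primary_device,
                auxiliary_lun=pair.primary_lun)

if __name__ == "__main__":
    before = Pair("storage-device-10", 30, "storage-device-20", 40)
    after = swap(before)
    print(after.primary_device, after.auxiliary_device)   # storage-device-20 storage-device-10
```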
Referring to
The swap processing performed by the second information processing device and the storage devices 10 and 20 will be described in detail with reference to flowcharts in
Let us consider as an example an information processing system that is composed of a primary site and a remote site. The primary site is equipped with a first information processing device and a first storage device 10, and the remote site is equipped with a second information processing device and a second storage device 20. When a failure occurs in the first information processing device, the second information processing device uses the second storage device 20 to continue primary processings performed at the primary site. The second information processing device may instruct the first storage device 10 and the second storage device 20 to execute the swap instruction described above, such that the second storage device 20 is used for the primary processings, and data on the second storage device 20 can be stored as a backup in the first storage device 10. Furthermore, since the data on the second storage device 20 is stored as a backup in the first storage device 10, the execution of the primary processings can be quickly switched to the primary site, when the first information processing device is recovered from the failure.
Also, since the pair swap instructions from the information processing device 11 to the storage devices 10 and 20 are provided using read/write commands with which the information processing device 11 is equipped, there is no need to add new commands to the operating system on the information processing device 11.
[Three-site Structure with Command Device System]
The storage device system described above transfers commands between two storage devices using virtual volumes and command devices. Furthermore, a virtual volume structure can be set up among three or more storage devices so that commands can be transferred among them.
A logical volume 2201 of the first storage device 10 is a virtual volume of a logical volume 2202 of the second storage device 20, and the logical volume 2202 of the second storage device 20 is in turn a virtual volume of a logical volume 2203 of the third storage device 25. The logical volume 2203 of the third storage device 25 is a command device. By connecting the virtual volumes in this manner, commands can be transferred from the information processing device 11 to the third storage device 25.
Accordingly, by using the single information processing device 11 and three storage devices 10, 20 and 25, a storage device system that can handle disaster recovery can be structured.
The first storage device 10, the second storage device 20 and the third storage device 25 are equipped with logical volumes 2301, 2302 and 2303, respectively. In this example, the logical volume 2301 of the first storage device 10 and the logical volume 2302 of the second storage device 20 form a synchronous pair, with the logical volume 2301 of the first storage device 10 as the primary logical volume.
The following is a description of the synchronous pair.
Upon receiving a data write request from the information processing device 11 to write data in the logical volume 2301 of the first storage device 10, which is a primary volume, the first storage device 10 refers to the synchronous pair management table 2401, and sends the data write request to the second storage device 20 to write the data in the logical volume 2302 of the second storage device 20, which is an auxiliary volume corresponding to the primary volume. Upon receiving a data write completion notification for the data write request from the second storage device 20, the first storage device 10 sends the data write completion notification to the information processing device 11. In other words, when the write completion notification is sent to the information processing device 11, the same data have been written in the primary volume and the auxiliary volume. Accordingly, by using the synchronous pair, data can be backed up in the second storage device 20, without losing the data of the first storage device 10. However, in making a synchronous pair, if there is a great distance between the first storage device 10 and the second storage device 20, transmission of a write completion notification to the information processing device 11 may be delayed, and the processing by the information processing device 11 may be affected.
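A minimal sketch of the synchronous write ordering described above: the completion notification to the host is held back until the auxiliary copy is confirmed, so both volumes hold the same data when the host sees the completion. The dictionary volumes and function names are illustrative assumptions.

```python
from typing import Dict

primary_volume_2301: Dict[int, bytes] = {}     # on the first storage device 10
auxiliary_volume_2302: Dict[int, bytes] = {}   # on the second storage device 20

def auxiliary_write(lba: int, data: bytes) -> str:
    """Second storage device: write the data and return a completion notification."""
    auxiliary_volume_2302[lba] = data
    return "write complete"

def synchronous_write(lba: int, data: bytes) -> str:
    """First storage device: completion is reported to the host only after the
    auxiliary volume has acknowledged the forwarded write."""
    primary_volume_2301[lba] = data
    ack = auxiliary_write(lba, data)   # forwarded per the synchronous pair management table
    assert ack == "write complete"
    return "write complete"            # only now notify the information processing device

if __name__ == "__main__":
    print(synchronous_write(0, b"payload"))
    print(primary_volume_2301 == auxiliary_volume_2302)   # True
```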
Accordingly, a synchronous pair is formed when the first storage device 10 and the second storage device 20 are located within a short distance of each other, and a pair using the journals described above (hereafter referred to as an "asynchronous pair") is formed between the second storage device 20 and the third storage device 25. The information processing device 11 sends to the second storage device 20 and the third storage device 25 a command to form a pair with the logical volume of the second storage device 20 as a primary volume and the logical volume of the third storage device 25 as an auxiliary volume, using the virtual volumes and command devices. After the pair is formed, the information processing device 11 sends a command to obtain and restore a journal in the pair to the third storage device 25, by using the virtual volumes and the command devices. Accordingly, data of the first storage device 10 can be backed up in the third storage device 25, which may be installed at a great distance from the first storage device 10. Also, neither the third storage device 25 nor the second storage device 20 that is located intermediate between the first and third storage devices 10 and 25 requires an information processing device. It is noted that an information processing device for backup may be connected to the third storage device 25, and processings may be continued using data in the third storage device 25 when the first storage device 10 fails.
A data backup method in which commands are transferred among storage devices by using virtual volumes and command devices is described above. In the description above, the information processing device 11 sends commands to the storage devices 10, 20 and 25. However, each of the storage devices 10, 20 and 25 may be equipped with a command setting section 701 and a command transmission section 702. For example, in the structure indicated in
[Method for Designating Transfer Destination Address]
Next, a method for designating the address of a storage device that is a transfer destination at the time of transferring a command will be described.
An outline of processings to be executed when the information processing device 11 transfers a command to the second storage device 20 will be described with reference to
The data generation section 2701 of the information processing device 11 refers to the path information management table 2501, obtains the path to the second storage device 20, and recognizes that the second storage device 20 is connected via the first storage device 10. The data generation section 2701 generates data 2710 in the command interface 2601 format, in which the address of the second storage device 20 is set as a transfer destination address and a first command is set as a process number. The data generation section 2701 notifies the data transfer section 2702 to transfer the data 2710 to the first storage device 10. The data transfer section 2702 transfers the data 2710 to the first storage device 10.
The command analysis section 2703 of the first storage device 10, upon receiving the data 2710, notifies the data transfer section 2704 of the first storage device 10 to transfer the data 2710 to the second storage device 20 that is set as the transfer destination address. The data transfer section 2704 of the first storage device 10 generates data 2711 by deleting the transfer destination parameter from the data 2710, and transfers the data 2711 to the second storage device 20. The command analysis section 2703 of the second storage device 20, upon receiving the data 2711, obtains the first command from the command parameter, as the data 2711 does not contain a transfer destination parameter. The command analysis section 2703 of the second storage device 20 notifies the command execution section 2705 of the second storage device 20 to execute the first command. The command execution section 2705 of the second storage device 20, upon receiving the notification from the command analysis section 2703, executes the first command.
When the first command causes an output result, the command execution section 2705 of the second storage device 20 stores the output result as edited data 2712. In this example, the same identification number is appended to all of the data 2710, 2711 and 2712. When the first command causes an output result, the information processing device 11 can obtain the output result of the first command by designating the identification number at the first storage device 10 and sending a request to send the edited data 2712.
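The following sketch illustrates the peel-and-forward handling of transfer destination addresses: the host sends the command interface data to the first device on the path, and each device that finds a remaining transfer destination address deletes the head address and forwards the data to it, until a device with no remaining address executes the command. The same loop generalizes to three or more devices, as in the three-site case described in the next subsection. The class and field names, device addresses, and process number are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class CommandInterfaceData:
    """Hypothetical representation of the command interface 2601 data."""
    identification_number: int
    transfer_destinations: List[str]   # remaining transfer destination addresses
    process_number: int                # the command to execute at the final device
    edited_data: Optional[bytes] = None

class StorageDevice:
    def __init__(self, address: str, devices: Dict[str, "StorageDevice"]) -> None:
        self.address = address
        self.devices = devices         # address -> device, stands in for the networks
        self.executed: List[int] = []

    def receive(self, data: CommandInterfaceData) -> None:
        if data.transfer_destinations:
            # Obtain the head transfer destination address, delete it, and forward.
            next_hop = data.transfer_destinations.pop(0)
            self.devices[next_hop].receive(data)
        else:
            # No transfer destination parameter left: execute the command here.
            self.executed.append(data.process_number)

if __name__ == "__main__":
    devices: Dict[str, StorageDevice] = {}
    for addr in ("dev-10", "dev-20", "dev-25"):
        devices[addr] = StorageDevice(addr, devices)
    # The host addresses the third device via the second; the first hop is the
    # device the host sends to directly (dev-10).
    data = CommandInterfaceData(identification_number=1,
                                transfer_destinations=["dev-20", "dev-25"],
                                process_number=7)
    devices["dev-10"].receive(data)
    print(devices["dev-25"].executed)   # [7]: the command reached the third device
```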
[Method for Designating Transfer Destination Address in Three-site Structure]
The method for transferring a command between two storage devices by designating a transfer destination address has been described above. Further, a command can be transferred among three or more storage devices by designating multiple transfer destination addresses.
A process flow in which the information processing device 11 transfers a first command to the third storage device 25 is described below. The data generation section 2701 of the information processing device 11 refers to the path information management table 2501 and recognizes that the third storage device 25 is connected via the first storage device 10 and the second storage device 20. The data generation section 2701 generates data 2801 in the command interface 2601 format in which the second storage device 20 and the third storage device 25 are set as transfer destination addresses, and a first command is set as a process number. The data generation section 2701 notifies the data transfer section 2702 to transfer the generated data to the first storage device 10 that is at the head address in the transfer path. The data transfer section 2702, upon receiving the notification, sends the data 2801 to the storage device 10.
The command analysis section 2703 of the first storage device 10, upon receiving the data 2801, obtains the address of the second storage device 20, which is set at the head of the transfer destination parameter, and notifies the data transfer section 2704 of the first storage device 10 to send the data to the second storage device 20. The data transfer section 2704, upon receiving the notification, generates data 2802 by deleting the transfer destination address of the second storage device 20 from the data 2801, and sends the data 2802 to the second storage device 20.
The second storage device 20 obtains the address of the third storage device 25, which is set at the head of the transfer destination parameter, just as the first storage device 10 does, generates data 2803 by deleting the transfer destination address of the third storage device 25 from the transfer destination parameter, and sends the data 2803 to the third storage device 25.
The command analysis section 2703 of the third storage device 25, upon receiving the data 2803, obtains the process number of the first command from the control parameter as the data 2803 does not contain a transfer destination address, and notifies the command execution section 2705 of the third storage device 25 to execute the first command. The command execution section 2705 of the third storage device 25, upon receiving the notification, executes the first command.
In this manner, by setting multiple transfer destination addresses in the transfer destination parameter, a command can be transferred among three or more storage devices.
[Processings of Method for Designating Transfer Destination Address]
First, the information processing device 11 generates an identification number and sets the identification number in a command interface 2601 (S3001). The identification number can be any number that can uniquely identify the command interface 2601 generated. The information processing device 11 refers to the path information management table 2501, and obtains a path for the storage device to which a command is to be transferred (S3002). The information processing device 11 sets one or multiple transfer destination addresses at the transfer destination parameter based on the path information obtained (S3003). Next, the information processing device 11 sets a process number of the command to be executed by the storage device at the transfer destination and a presence/absence of edited data of the command (S3004). Then, the information processing device 11 sends the data of the command interface 2601 to the storage device at the head address in the transfer path (S3005). The information processing device 11 confirms whether the command results in edited data (S3006), and ends the processing when no edited data exists. When edited data exists, the information processing device 11 designates the identification number and sends an edited data send request to the storage device at the head address in the transfer path (S3007). The information processing device 11 waits until it receives the edited data (S3008), and ends the processing upon receiving the edited data.
Each of the storage devices 10, 20 and 25 may perform the following processings. Upon receiving the data of the command interface (S3101), each storage device stores the data in a cache memory (S3102). The storage device confirms whether the data contains a transfer destination parameter (S3103). If the transfer destination parameter exists, the storage device obtains a transfer destination address at the head of the transfer destination parameter (S3104). Next, the storage device deletes the transfer destination address at the head of the transfer destination parameter from the data (S3105), and sends the data to the transfer destination address previously obtained (S3106). The storage device confirms whether the command results in edited data (S3107). If edited data exists, the storage device executes a processing to obtain the edited data (S3108), deletes the data from the cache memory (S3109), and ends the processing. If no edited data exists, the storage device does not perform the processing to obtain edited data, deletes the data from the cache memory (S3109), and ends the processing.
When a transfer destination parameter is not set in the received command interface 2601, the storage device (which may be 10, 20 or 25) obtains a process number of the control parameter (S3110), and executes a command designated by the process number (S3111). The storage device (10, 20, 25) confirms whether the command results in edited data (S3112). If edited data exists, the storage device (10, 20, 25) generates edited data set with the identification number set in the command interface 2601 (S3113), executes a process to obtain edited data (S3108), deletes the data from the cache memory (S3109), and ends the processing. If edited data does not exist, the storage device (10, 20, 25) deletes the data from the cache memory (S3109) and ends the processing.
In the process to obtain edited data, the storage device (10, 20, 25) waits until it receives an edited data obtaining request (S3201). Upon receiving the request to obtain edited data, the storage device refers to the command interface 2601 at the identification number that is sent in the edited data obtaining request (S3202). The storage device (10, 20, 25) confirms whether the command interface 2601 referred to contains a transfer destination parameter (S3203). If the transfer destination parameter exists, it is not this storage device that executed the command; therefore, the storage device (10, 20, 25) obtains the transfer destination address at the head of the transfer destination parameter (S3204), and sends the edited data obtaining request to the obtained transfer destination address (S3205). The storage device (10, 20, 25) waits until it receives the edited data (S3206) and, when it receives the edited data, sends the received edited data to the transmission source of the edited data obtaining request (S3207). When no transfer destination parameter exists, it is this storage device that executed the command; the storage device (10, 20, 25) therefore obtains the edited data of the identification number (S3208), and sends the obtained edited data to the transmission source of the edited data obtaining request (S3209). By this, the edited data is sent back to the information processing device 11 along the path in which the command was transferred.
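A minimal sketch of how the edited data might be relayed back along the transfer path (S3201-S3209), assuming each device remembers, per identification number, either the next hop it forwarded the command to or the edited data it produced locally. The class layout, field names, and device addresses are illustrative assumptions.

```python
from typing import Dict

class StorageDevice:
    """Minimal model: per identification number, a device records either the next
    hop it forwarded the command to or the edited data it produced itself."""

    def __init__(self, address: str, devices: Dict[str, "StorageDevice"]) -> None:
        self.address = address
        self.devices = devices
        self.forwarded_to: Dict[int, str] = {}    # ident number -> next transfer destination
        self.edited_data: Dict[int, bytes] = {}   # ident number -> locally produced result

    def obtain_edited_data(self, ident: int) -> bytes:
        next_hop = self.forwarded_to.get(ident)
        if next_hop is not None:
            # S3204/S3205/S3207: this device only relayed the command, so it relays
            # the obtaining request and passes the answer back toward the source.
            return self.devices[next_hop].obtain_edited_data(ident)
        # S3208/S3209: this device executed the command itself.
        return self.edited_data[ident]

if __name__ == "__main__":
    devices: Dict[str, StorageDevice] = {}
    for addr in ("dev-10", "dev-20", "dev-25"):
        devices[addr] = StorageDevice(addr, devices)
    # A command with identification number 1 was transferred dev-10 -> dev-20 -> dev-25.
    devices["dev-10"].forwarded_to[1] = "dev-20"
    devices["dev-20"].forwarded_to[1] = "dev-25"
    devices["dev-25"].edited_data[1] = b"pair state: PAIR"
    # The information processing device asks the head of the path for the edited data.
    print(devices["dev-10"].obtain_edited_data(1))   # b'pair state: PAIR'
```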
In the processing indicated in
More specifically, the storage device (10, 20, 25) confirms whether the last transfer destination address is its own address (S3303); it determines that it is in the transfer path if the last transfer destination address is not its own address, and determines that it should execute the command if the last transfer destination address is its own address. When the storage device (10, 20, 25) is in the transfer path, the storage device confirms whether the transfer destination addresses include its own address (S3304); if its own address is included, the storage device obtains the transfer destination address next to its own address (S3305); and if its own address is not included, the storage device obtains the head transfer destination address (S3306). Then, the storage device (10, 20, 25) sends the data of the command interface 2601 to the obtained transfer destination address (S3307). When the storage device (10, 20, 25) is to execute a command, the storage device (10, 20, 25) obtains a process number of the control parameter (S3311), and executes the command designated by the process number (S3312). The other processings are basically the same as those of the processings indicated in
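A minimal sketch of this variant, in which the transfer destination parameter is left intact and each device locates itself in it: the last address executes, every other device forwards to the address after its own (or to the head address if its own address is not listed). The function signature, the callback style, and the device addresses are illustrative assumptions.

```python
from typing import Callable, List

def handle(data_path: List[str], process_number: int, own_address: str,
           send: Callable[[str, List[str], int], None],
           execute: Callable[[int], None]) -> None:
    """One device's handling when transfer destination addresses are not deleted.

    data_path is the full, unmodified list of transfer destination addresses;
    the device whose address is last in the list executes the command (S3303),
    and every other device forwards the unchanged data (S3304-S3307).
    """
    if data_path[-1] == own_address:           # last address is our own -> execute
        execute(process_number)                # S3311/S3312
        return
    if own_address in data_path:               # we are somewhere inside the path
        next_hop = data_path[data_path.index(own_address) + 1]   # S3305
    else:                                      # e.g. the device the host sent to first
        next_hop = data_path[0]                # S3306: head transfer destination address
    send(next_hop, data_path, process_number)  # S3307: forward the unchanged data

if __name__ == "__main__":
    log: List[str] = []
    path = ["dev-20", "dev-25"]                # set by the information processing device
    for own in ("dev-10", "dev-20", "dev-25"):
        handle(path, 7, own,
               send=lambda hop, p, proc: log.append(f"forward to {hop}"),
               execute=lambda proc: log.append(f"execute {proc}"))
    print(log)   # ['forward to dev-20', 'forward to dev-25', 'execute 7']
```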
[Determination of Shortest Transfer Path]
Next, a description will be made as to a process in which the storage device (10, 20, 25) determines the shortest transfer path and sends a command interface 2601 in the transfer path determined.
The first storage device 10, upon receiving the data 3501 of the command interface from the information processing device 11, refers to the connection information management table 3601, and obtains the address of the third storage device 25 which is set at the last of the transfer destination parameter. The first storage device 10 judges that data can be directly sent to the third storage device 25 because the address of the third storage device 25 is stored in the connection information management table 3601. The first storage device 10 generates data 3502 by deleting the transfer destination addresses of the third storage device 25 and the second storage device 20, and directly sends the data 3502 to the third storage device 25 without passing it through the second storage device 20.
The storage device (10, 20, 25) obtains the number of transfer destination addresses set in the transfer destination parameter, and sets the obtained number as a variable “N” (S3704). Then, the storage device (10, 20, 25) obtains the N-th transfer destination address (S3705), and confirms whether the obtained transfer destination address is stored in the connection information management table 3601 (S3706). When the obtained transfer destination address is not stored in the connection information management table 3601, the number N is reduced by 1 (S3707), and the storage device (10, 20, 25) repeats the processings to obtain the N-th transfer destination address and to confirm whether the obtained transfer destination address is stored in the connection information management table 3601 (S3705, S3706). As a result, the storage device (10, 20, 25) can obtain the shortest transfer path. Then, the storage device (10, 20, 25) deletes the transfer destination addresses before the N-th transfer destination address from the data of the command interface 2601 (S3708), and sends the data to the N-th transfer destination address (S3709). The other processings are the same as those indicated in
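A minimal sketch of the shortest-path selection (S3704-S3709): scan the transfer destination addresses from the last one backwards and pick the farthest one found in the connection information table. Following the preceding example, the chosen address itself is also removed from the remaining list before sending. The function name, list-based representation, and device addresses are illustrative assumptions.

```python
from typing import List, Tuple

def choose_shortest_hop(transfer_destinations: List[str],
                        connection_table: List[str]) -> Tuple[str, List[str]]:
    """Return (next_hop, remaining_destinations) for the farthest directly
    reachable transfer destination.

    connection_table lists the addresses this device can send data to directly.
    """
    n = len(transfer_destinations)                  # S3704
    while n > 0:
        candidate = transfer_destinations[n - 1]    # S3705: the N-th (1-based) address
        if candidate in connection_table:           # S3706: directly connected?
            # S3708: drop every address up to and including the chosen one.
            return candidate, transfer_destinations[n:]
        n -= 1                                      # S3707
    raise RuntimeError("no directly reachable transfer destination")

if __name__ == "__main__":
    # The command is addressed to dev-25 via dev-20, but this device is also
    # directly connected to dev-25, so dev-20 can be skipped.
    hop, remaining = choose_shortest_hop(["dev-20", "dev-25"],
                                         connection_table=["dev-20", "dev-25"])
    print(hop, remaining)    # dev-25 []
```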
In this manner, the storage device (10, 20, 25) determines the shortest transfer path and transfers a command along the shortest transfer path, such that the data transfer amount among the storage devices can be reduced, and the transfer time for transferring commands can be shortened.
[Execution of Command at Each of the Storage Devices]
Next, a description will be made as to a method in which the information processing device 11 sends commands to a plurality of storage devices 10, 20 and 25 with a single instruction.
The first storage device 10, upon receiving the data 3901, obtains the transfer destination address of the first transfer destination parameter, generates data 3902 by deleting the head transfer destination parameter from the data 3901, and sends the data 3902 to the second storage device 20. The second storage device 20, upon receiving the data 3902, executes the first command that is set in the first control parameter. Then the second storage device 20 deletes the head control parameter in the data 3902, and obtains the transfer destination address of the next transfer destination parameter. The second storage device 20 generates data 3903 by deleting the head transfer destination parameter from the data 3902, and sends the data 3903 to the third storage device 25. The third storage device 25, upon receiving the data 3903, executes the second command set in the head control parameter.
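A minimal sketch of this flow, assuming the command interface data is modeled as a simple list of interleaved ("transfer", address) and ("control", process number) entries: each device executes any control parameters at the head of the data, then deletes the next transfer destination parameter and forwards the rest. The tuple encoding, class name, and device addresses are illustrative assumptions.

```python
from typing import Dict, List, Tuple

Parameter = Tuple[str, object]   # ("transfer", address) or ("control", process number)

class StorageDevice:
    def __init__(self, address: str, devices: Dict[str, "StorageDevice"]) -> None:
        self.address = address
        self.devices = devices
        self.executed: List[object] = []

    def receive(self, params: List[Parameter]) -> None:
        # Execute every control parameter at the head of the data, deleting each.
        while params and params[0][0] == "control":
            self.executed.append(params.pop(0)[1])
        # If a transfer destination parameter remains, delete it and forward the rest.
        if params and params[0][0] == "transfer":
            next_hop = params.pop(0)[1]
            self.devices[next_hop].receive(params)

if __name__ == "__main__":
    devices: Dict[str, StorageDevice] = {}
    for addr in ("dev-10", "dev-20", "dev-25"):
        devices[addr] = StorageDevice(addr, devices)
    # Single instruction from the host: dev-20 is to execute command 1 and
    # dev-25 is to execute command 2.
    data = [("transfer", "dev-20"), ("control", 1), ("transfer", "dev-25"), ("control", 2)]
    devices["dev-10"].receive(data)
    print(devices["dev-20"].executed, devices["dev-25"].executed)   # [1] [2]
```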
In this manner, by sending commands to a plurality of storage devices 10, 20 and 25 with a single instruction, the information processing device 11 can, for example, in the case of forming asynchronous pairs among the storage devices, simultaneously send pair forming commands to the storage devices that provide the primary volumes and to the storage devices that provide the auxiliary volumes of the asynchronous pairs. In other words, pairs can be readily formed without communicating among plural information processing devices.
In the embodiment described above, commands are transferred through designating transfer destination addresses, and the information processing device 11 sends the commands to the storage devices 10, 20 and 25. However, each of the storage devices 10, 20 and 25 may itself be equipped with a data generation section 2701 and a data transfer section 2702.
By using the method of transferring commands through designating transfer destination addresses in the manner described above, commands can be executed by each of the storage devices 10, 20 and 25 that may not be connected to an information processing device, just as in the storage device system that transfers commands using virtual volumes and command devices. By this, data backup and pair operations can be conducted without communicating data among multiple information processing devices. Also, when commands are transferred through designating transfer destination addresses, dedicated logical volumes such as command devices do not need to be provided, and therefore data areas to be allocated to users are not reduced. Also, in the case of command devices, there is a possibility that the performance of the storage devices 10, 20 and 25 may be lowered because data input/output requests may concentrate on specific logical volumes. In contrast, when transferring commands through designating transfer destination addresses, cache memories are used, such that a reduction in the performance of the storage devices 10, 20 and 25 can be prevented.
While the description above refers to particular embodiments of the present invention, it will be understood that these embodiments are presented for ready understanding of the present invention and many modifications may be made without departing from the spirit thereof. The accompanying claims are intended to cover such modifications as would fall within the true scope and spirit of the present invention. For example, in the present embodiments, commands that are transferred are commands relating to pairs of logical volumes. However, commands to be transferred are not limited to commands relating to pairs, but may be any commands that are executable by the storage devices 10, 20 and 25.
The presently disclosed embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims, rather than the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2003-325082 | Sep 2003 | JP | national |
| 2003-400513 | Nov 2003 | JP | national |
This is a continuation application of U.S. Ser. No. 11/050,927, filed Feb. 4, 2005 (now U.S. Pat. No. 7,200,727), which is a continuation application of U.S. Ser. No. 11/010,216, filed Dec. 10, 2004 (now U.S. Pat. No. 7,080,202), which is a continuation application of U.S. Ser. No. 10/820,629, filed Apr. 8, 2004 (now U.S. Pat. No. 7,203,806).
| Number | Date | Country |
|---|---|---|
| 20070150680 A1 | Jun 2007 | US |

| Relation | Number | Date | Country |
|---|---|---|---|
| Parent | 11050927 | Feb 2005 | US |
| Child | 11707038 | | US |
| Parent | 11010216 | Dec 2004 | US |
| Child | 11050927 | | US |
| Parent | 10820629 | Apr 2004 | US |
| Child | 11010216 | | US |