This application relates to and claims priority from Japanese Patent Application No. 2004-321015, filed on Nov. 4, 2004, the entire disclosure of which is incorporated herein by reference.
1. Field of the Invention
The present invention relates to a storage system having a plurality of partitions containing storage devices, and to a management method for the storage system.
2. Background of the Invention
Conventionally, in a disk array where data are mirrored in two LUs so that snapshots can be acquired at a later time for use as a backup, a method has been proposed for constituting the respective Array Groups having a storage area of original data and a storage area to be provided as a snapshot as disk structures of nD+1P having mutually different n, such that the respective Array Groups can adopt mutually flexible constitutions. With this method, provided are a mirror source LU as the storage area on a plurality of disk drives constituted with nD+1P; a mirror destination LU as the storage area on a plurality of disk drives constituted with mD+1P; an n-RAID control sub program for performing RAID control of nD+1P; an m-RAID control sub program for performing RAID control of mD+1P; and an LU mirroring sub program which duplicates data written from a host computer to both the mirror source LU and the mirror destination LU. Incidentally, m and n are integers of 2 or greater, and m and n are different values (for example, cf. Japanese Patent Laid-Open Publication No. 2002-259062).
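The duplication performed by the LU mirroring sub program described above can be sketched in Python as follows. This is a minimal illustration of the idea, not the implementation of the cited publication; all class, function, and field names are assumptions.

```python
# Minimal sketch of LU mirroring across two array groups with
# different nD+1P constitutions. All names are illustrative.

class LU:
    """A logical unit backed by an nD+1P array group (n data disks + 1 parity)."""
    def __init__(self, n_data_disks):
        self.n_data_disks = n_data_disks  # the "n" in nD+1P (must be >= 2)
        self.blocks = {}                  # block number -> data

    def write(self, block_no, data):
        self.blocks[block_no] = data

def lu_mirroring_write(source, destination, block_no, data):
    # The LU mirroring sub program duplicates each host write to both
    # the mirror source LU and the mirror destination LU, even though
    # the two array groups use different layouts (n != m).
    source.write(block_no, data)
    destination.write(block_no, data)

src = LU(n_data_disks=3)   # e.g. a 3D+1P array group
dst = LU(n_data_disks=7)   # e.g. a 7D+1P array group (m != n)
lu_mirroring_write(src, dst, 0, b"host data")
```

Because the duplication happens at the LU level, each array group remains free to run its own n-RAID or m-RAID control underneath.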
Meanwhile, with respect to storage systems, technology referred to as SLPR (Storage Logical Partition) is known. SLPR applies, to storage systems, the logical partition (LPR) technology of mainframe computers, which virtually partitions a single mainframe computer and enables the use of such single computer as though a plurality of computers exists. In other words, SLPR is a system of logically partitioning ports and LDEVs (logical volumes) inside the storage system to make a single storage system seem as though there are a plurality of storage systems to the users (i.e., SLPR managers), and an SLPR manager is only able to view or operate the SLPR ports and LDEVs which he owns.
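The per-manager visibility just described can be sketched as follows, assuming a simple table mapping each SLPR to its ports and LDEVs. The table contents and function name are illustrative assumptions, not part of any actual product interface.

```python
# Hypothetical SLPR table: each partition owns a disjoint set of
# ports and LDEVs within the single storage system.
SLPR_TABLE = {
    "SLPR1": {"ports": [31, 32], "ldevs": [11, 12, 13, 14]},
    "SLPR2": {"ports": [33, 34], "ldevs": [15, 16, 17, 18]},
}

def visible_resources(role, slpr=None):
    """Return the ports/LDEVs a manager may view or operate."""
    if role == "subsystem":
        return dict(SLPR_TABLE)      # the subsystem manager sees every partition
    return {slpr: SLPR_TABLE[slpr]}  # an SLPR manager sees only his own
```

To each SLPR manager, the returned slice looks like a complete, independent storage system.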
In a storage system employing this kind of SLPR technology, there are cases when a secondary volume of the ShadowImage, which is snapshot technology employing so-called split-resynchronization of mirrored volumes, stores important data even when such secondary volume is not assigned to a host computer. Nevertheless, as a result of the secondary volume not being assigned to the host computer, the SLPR manager may misunderstand that secondary volumes of the ShadowImage are volumes not being used, and inadvertently delete the data stored in such secondary volumes. Further, when the foregoing manager wants a new volume so that he can store new data, there is also a problem in that the secondary volume tends to be assigned to a host computer although the volume contains important data.
Preventing the secondary volume from being assigned to a host computer for storing new data can be realized by making the secondary volume a read-only volume, or by providing a setting such that it will not be subject to I/O; that is, so that it will not be accessed for the reading or writing of data. Nevertheless, the foregoing method is not able to prevent the foregoing manager from misunderstanding that the secondary volume of the ShadowImage is a volume not being used, and inadvertently deleting the volume itself or the data stored in such secondary volume.
Accordingly, an object of the present invention is to overcome inconveniences such as a manager erroneously deleting data in volumes employed as the secondary volumes of mirrored volumes, or deleting the secondary volumes themselves, in a storage system logically partitioned and formed with a plurality of partitions containing ports and volumes.
The storage system management device according to the first perspective of the present invention comprises: a first setting unit for setting first partitions containing active volumes among the plurality of partitions of a storage system formed by logically partitioning the storage system; a second setting unit for setting a second partition containing secondary volumes and candidates of secondary volumes capable of forming (ShadowImage) pairs with primary volumes, with the active volumes as the primary volumes, among the plurality of partitions; a volume information acquisition unit for acquiring information pertaining to volumes contained in the plurality of partitions; a determination unit for determining whether the volume is a candidate of a secondary volume contained in the second partition from the information of the volume acquired by the volume information acquisition unit; and a pair creation unit for extracting a volume capable of making the volume determined by the determination unit as being contained in the second partition become the secondary volume among the volumes contained in the first partitions, and creating a pair with the volume as the primary volume and the determined volume as the secondary volume thereof.
In a preferable embodiment pertaining to the first perspective of the present invention, an access inhibition unit for inhibiting any I/O access to candidates of the secondary volumes contained in the second partition is further provided.
In another embodiment, none of the volumes contained in the first partitions are used as the secondary volumes.
Further, in another embodiment, a manager judgement unit for judging whether the manager of the storage system is a higher-level manager capable of managing all of the respective partitions, or a lower-level manager capable of managing only a specific partition among the respective partitions is further provided.
Moreover, in another embodiment, when the manager judgement unit judges the manager of the storage system to be the higher-level manager or the manager of the second partition, the pair creation unit entrusts him with the selection, from the second partition, of the volumes to be the secondary volumes of the ShadowImage pair volumes.
Further, in another embodiment, the extraction of the volume to be the primary volume from the first partitions is conducted by the higher-level manager.
Moreover, in another embodiment, when the manager judgement unit judges the manager of the storage system to be the lower-level manager, the pair creation unit automatically conducts the selection, from the second partition, of the volume to be the secondary volume of the ShadowImage pair volume.
Further, in another embodiment, only the manager of the second partition performs the processing of assigning the volume made to be the secondary volume contained in the second partition to a host computer which read/write data to/from the volume, and the processing of canceling such assignment (i.e., removing the access path).
The storage system management device according to the second perspective of the present invention comprises: a first setting unit for setting first partitions containing active volumes among the plurality of partitions of a storage system formed by logically partitioning the storage system; a second setting unit for setting a second partition for accommodating, as the secondary volumes, volumes forming (ShadowImage) pairs with primary volumes, with the active volumes as the primary volumes, among the plurality of partitions;
a volume information acquisition unit for acquiring information pertaining to volumes contained in the plurality of partitions excluding the second partition; a judgement unit for judging whether there is a volume forming a pair with an active volume among the volumes contained in the plurality of partitions excluding the second partition from the information of the volume acquired by the volume information acquisition unit; and a volume transfer unit for transferring a volume judged by the judgement unit to be forming a pair as the secondary volume to the second partition when the active volume is made to be the primary volume.
The storage system management method according to the third perspective of the present invention comprises: a first step of setting first partitions containing active volumes among the plurality of partitions of a storage system formed by logically partitioning the storage system; a second step of setting a second partition containing secondary volumes and candidates of secondary volumes capable of forming a pair with primary volumes, with the active volumes as the primary volumes, among the plurality of partitions; a third step of acquiring information pertaining to a volume contained in the plurality of partitions; a fourth step of judging whether the volume is a candidate of the secondary volume contained in the second partition from the information of the volume acquired in the third step; and a fifth step of extracting a volume capable of making the volume judged in the fourth step as being contained in the second partition become the secondary volume among the volumes contained in the first partitions, and creating a pair with the volume as the primary volume and the judged volume as the secondary volume thereof.
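The five steps above can be sketched as follows. The dict-based volume records are assumptions for illustration, as is the size/RAID-level matching rule used to decide whether a first-partition volume can serve as the primary (the embodiments later describe such matching for automatic secondary selection).

```python
# Hedged sketch of the third-perspective management method.
# A pair number of -1 marks an LDEV that is not part of a pair.

def create_pairs(partitions, second_partition_id):
    """Pair each unpaired candidate in the second partition with a
    matching unpaired volume from one of the first partitions."""
    pairs = []
    for vol in partitions[second_partition_id]:  # steps 3-4: candidates
        if vol["pair_no"] != -1:
            continue                             # already a secondary
        # Step 5: extract a volume from the first partitions that can
        # make this candidate become its secondary (same size and RAID).
        primary = next(
            (p for part_id, vols in partitions.items()
               if part_id != second_partition_id
               for p in vols
               if p["pair_no"] == -1
               and p["size_gb"] == vol["size_gb"]
               and p["raid"] == vol["raid"]),
            None)
        if primary is not None:
            primary["pair_no"] = vol["pair_no"] = len(pairs)
            primary["role"], vol["role"] = "primary", "secondary"
            pairs.append((primary["ldev"], vol["ldev"]))
    return pairs
```

Candidates in the second partition that no first-partition volume can match are simply left unpaired.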
Embodiments of the present invention are now explained in detail with reference to the drawings.
With the storage system depicted in
As shown in
In SLPR1, port 31 is a port for receiving I/Os from user A's host computer 51; and port 32 is a port for receiving I/Os from user A's host computer 52. Next, in SLPR2, port 33 is a port for receiving I/Os from user B's host computer 53; and port 34 is a port for receiving I/Os from user B's host computer 54. Further, in SLPR3, port 35 is a port for receiving I/Os from user C's host computer 55.
An SVP (Service Processor) 20 is connected to the storage system 10, and the SVP 20 is connected to the management computer 40 via the LAN (Local Area Network) 30. The SVP 20 is a PC (personal computer) for performing the maintenance and management operations of the storage system 10; that is, it is a maintenance terminal (the SVP is hereinafter referred to as a "maintenance terminal"). The maintenance terminal 20 is able to manage all LDEVs (11 to 110) and all ports 31 to 35 (within the storage system 10) when the manager operating the maintenance terminal 20 logs in as the manager of the storage system; in other words, as the subsystem manager.
Meanwhile, for example, if the manager of user A logs in as the partition manager of SLPR1 (i.e., SLPR manager who is a manager operating the maintenance terminal 20 as with the foregoing subsystem manager), the maintenance terminal 20 is only able to manage the ports 31, 32 contained in SLPR1, and the LDEV11 to 14 contained in SLPR1. Further, for example, if the manager of user B logs in as the partition manager of SLPR2 (i.e., SLPR manager), the maintenance terminal 20 is only able to manage the ports 33, 34 contained in SLPR2, and the LDEV15 to 18 contained in SLPR2. Moreover, for example, if the manager of user C logs in as the partition manager of SLPR3 (i.e., SLPR manager), the maintenance terminal 20 is only able to manage the port 35 contained in SLPR3, and the LDEV19 to 110 contained in SLPR3.
The management computer 40 is a terminal such as a PC loaded with storage management software, and this storage management software operates in the management computer 40.
As explained in
The subsystem manager is a person (operator) who manages the storage system 10 by operating the maintenance terminal (20), and is able to manage the LDEVs (11 to 110) and ports (31 to 35) contained in all partitions (SLPR1, SLPR2, and SLPR3) constituting the storage system (10). The subsystem manager is also able to set the partitions (SLPR1 to SLPR3) in the storage system 10.
As with the subsystem manager described above, the SLPR manager is also a manager (operator) who operates the maintenance terminal (20). Nevertheless, the SLPR manager, unlike the subsystem manager, is only able to view and manage the LDEVs and ports (e.g., LDEV11 to 14 and ports 31, 32) contained in the partition that he personally manages (e.g., SLPR1 if such SLPR manager is the manager of SLPR1), and is not able to view or manage the other LDEVs or ports.
The administrator is a manager (operator) who operates the storage management software 50 loaded in the management computer 40 by operating the management computer 40.
In
Incidentally, the SLPR manager or subsystem manager, by logging onto the maintenance terminal 20, operates the maintenance terminal 20 for managing the respectively corresponding (i.e., his) SLPR (one among SLPR1 to 3), or the storage system 10.
As shown in
In
Although each host computer 611 to 61n is connected to the storage system 65 via the SAN 63, as the communication network for connecting each host computer 611 to 61n and the storage system 65, in addition to the SAN 63, for instance, a LAN, Internet, dedicated line, or public (telephone) line may be suitably used according to the situation. In the present embodiment, since a Fibre Channel SAN (63) is used as the communication network, each host computer 611 to 61n requests the input and output of data to the DKC 71 with a block, which is a fixed-size (e.g., 512 bytes each) data management unit of the storage area provided by a plurality of physical disks, as the unit, according to the Fibre Channel protocol.
Incidentally, when a LAN is to be used as the communication network, data communication via the LAN, for instance, will be conducted according to TCP/IP (Transmission Control Protocol/Internet Protocol). Each host computer 611 to 61n designates a file name and requests the input and output of data, in units of files, to the DKC 71 (of the storage system 65). Further, the foregoing adaptor (not shown) is, for example, a host bus adaptor (HBA) when the SAN is used as the communication network as in the present embodiment, and is, for example, a LAN-compliant network card (NIC; Network Interface Card) when the LAN is used as the communication network. Moreover, the foregoing data communication can also be conducted via the iSCSI protocol.
In DKC 71, each CHA 771 to 77n is for conducting data transfer with each host computer 611 to 61n, and has one or more communication ports (description thereof is omitted in
Each disk adapter (DKA) 931 to 93n is for exchanging data between DKC71 and the physical disks 951 to 95n via the Fibre Channel 73, and has one or more Fibre Channel ports (description thereof is omitted in
Each DKA 931 to 93n also reads data from a target address of the target volume located in the physical disks 951 to 95n via the Fibre Channel 73 based on the request (read command) from a host computer 611 to 61n, and stores the data to the cache memory 81 via the crossbar switch 79. The CHA 771 to 77n then reads the data from the cache memory 81 through the crossbar switch 79, and transmits it to the host computer 611 to 61n which issued the read request. Incidentally, when each DKA 931 to 93n is to read or write data with the volumes placed in the physical disks 951 to 95n via the Fibre Channel 73, the logical address is converted into a physical address. Further, when each DKA 931 to 93n performs data access to a RAID volume dispersed in the physical disks 951 to 95n, the logical-to-physical address conversion is performed according to the RAID configuration.
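The logical-to-physical address conversion for an nD+1P RAID volume can be illustrated as follows. The rotating-parity layout shown here is one common RAID5 arrangement and is an assumption for illustration; the patent does not specify the mapping.

```python
# Hedged sketch: map a logical block number to (physical disk index,
# block number on that disk) in an nD+1P group with rotating parity.

def raid5_map(logical_block, n_data):
    disks = n_data + 1                # n data disks + 1 parity disk
    stripe = logical_block // n_data  # which stripe row holds the block
    offset = logical_block % n_data   # position within the stripe
    parity_disk = stripe % disks      # parity rotates stripe by stripe
    # Skip over the parity disk when laying out the data blocks.
    disk = offset if offset < parity_disk else offset + 1
    return disk, stripe
```

For a 3D+1P group, for example, logical block 0 lands on disk 1 of stripe 0 (disk 0 holds that stripe's parity), while logical block 3 starts stripe 1 on disk 0.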
The cache memory (CM) 81 temporarily stores the data provided from each CHA 771 to 77n via the crossbar switch 79, wherein each CHA 771 to 77n received such data from each host computer 611 to 61n. Together with this, the CM 81 temporarily stores data provided from each DKA 931 to 93n via the crossbar switch 79, wherein each DKA 931 to 93n read such data from each volume (physical disk) 951 to 95n via the Fibre Channel 73. Incidentally, instead of the CM 81, one or a plurality of the volumes located in high-performance physical disks 951 to 95n may be used as a cache disk.
The shared memory (SM) 83 is connected, via the shared bus 87, to each CHA 771 to 77n, each DKA 931 to 93n and the bridge 85. Control information and the like is stored in the SM 83, and, in addition to various tables such as the mapping table being stored therein, it can be used as work area.
The bridge 85 is placed between and connects the internal LAN 91 and the shared bus 87, and is required when the maintenance terminal 89 accesses the SM 83 via the internal LAN 91 and shared bus 87.
The crossbar switch 79 is for mutually connecting each CHA 771 to 77n, each DKA 931 to 93n, and the CM 81, and the crossbar switch 79, for example, may be constituted as a high-speed bus such as an ultra-fast crossbar switch for performing data transmission pursuant to a high-speed switching operation.
The maintenance terminal 89, as described above, is connected to the bridge 85 via the internal LAN 91, and connected to the management computer 69 via the LAN 67, respectively.
As a volume, for example, in addition to physical disks such as hard disks or flexible disks, various devices such as magnetic tapes, semiconductor memory, and optical disks may be used. Several LDEVs; that is, logical volumes (or logical devices) are formed from the plurality of physical disks.
The management computer 69 is a terminal such as a PC for running the storage management software 50 described above.
CHA 771 is constituted as a single unit board having one or a plurality of circuit boards, and, as shown in
The host I/F 107 has a dual port Fibre Channel chip which contains a SCSI (Small Computer System Interface) protocol controller, as well as two FC ports. The host I/F 107 functions as a communication interface for communicating with each host computer (611 to 61n). The host I/F 107, for example, receives I/O requests transmitted from the host computer (611 to 61n) and controls the transmission and reception of data according to the Fibre Channel protocol.
The memory controller 105, under the control of the CPU 101, communicates with the DMA 109 and host I/F 107. In other words, the memory controller 105 receives read requests of data stored in the physical disks 951 to 95n, or write requests to the physical disks 951 to 95n, from the host computers (611 to 61n) via the port of the host I/F 107. It further exchanges data and commands with the DKA 931 to 93n, CM 81, SM 83, and maintenance terminal 89.
The DMA 109 is for performing DMA transfer between the host I/F 107 and CM (81) via the crossbar switch 79, and the DMA 109 executes the transfer of the data transmitted from the host computers (611 to 61n) shown in
In addition to the memory 103 being provided with firmware for the CPU 101 of the CHA 771, the memory 103 is used as the work area for the CPU 101.
The CPU 101 controls the respective components of the CHA 771.
The DKA 931, as shown in
The disk I/F 119 has a single port Fibre Channel chip which has a SCSI protocol controller. The disk I/F 119 functions as a communication interface for communicating with the physical disks.
The DMA 117 performs the DMA transfer between the disk I/F 119 and CM 81 via the crossbar switch 79 based on the command provided from the CPU 113 via the memory controller 111. The DMA 117 also functions as the communication interface between the CHA 771 to 77n and the cache memory 81.
The memory controller 111, under the control of the CPU 113, communicates with the DMA 117 and disk I/F 119.
In addition to the memory 115 being provided with firmware for the CPU 113 of the DKA 931, the memory 115 is used as the work area for the CPU 113.
The CPU 113 controls the respective components of the DKA 931.
The maintenance terminal 89, as described above, is for accessing the various management tables on the SM 83 via the internal LAN 91, bridge 85, and shared bus 87, and, for example, is a PC running an OS such as Microsoft's Windows (registered trademark). The maintenance terminal 89, as shown in
The memory 123 stores the OS and other programs and non-volatile fixed data required for the maintenance terminal 89 to perform maintenance and management operations to the storage system 65. The memory 123 outputs the foregoing fixed data to the CPU 121 according to the data read out request from the CPU 121. Incidentally, reproductions of the various management tables stored in the SM 83 may also be stored in the memory 123. In this case, (the CPU 121 of) the maintenance terminal 89 does not have to access the SM 83 each time it is necessary to refer to the various management tables.
Connected to the interface unit 125 are an internal LAN 91, an (external) LAN 67, an input device 129 such as a keyboard or a mouse, an output device 131 such as a display, and a local disk 127. The input device 129 is directly operated by a manager (of the maintenance terminal 89) (i.e., a subsystem manager or SLPR manager) when such manager is to perform the maintenance or management operation of the storage system 65 via the maintenance terminal 89. When the reproduction of the various management tables is not stored in the memory 123, the interface unit 125, under the control of the CPU 121, accesses the SM 83 via the internal LAN 91, bridge 85, and shared bus 87, and refers to the various management tables stored in the SM 83. The interface unit 125, under the control of the CPU 121, receives the management commands issued by the management computer 69 to the maintenance terminal 89 and transmitted via the (external) LAN 67.
The CPU 121 controls the respective components of the maintenance terminal 89.
Incidentally, the local disk 127 is an auxiliary storage medium in the maintenance terminal 89.
The port partition table shown in
The LDEV partition table shown in
The LDEV management table shown in
In the example shown in
Next, the LDEV in which the LDEV number is 1 (i.e., LDEV1) has a size (memory capacity) of 0 GB; in other words, the LDEV does not exist. Therefore, information on the LDEV (in which the LDEV number is 1) regarding the RAID level, physical disk number, top block number, pair number, and pair role will be meaningless.
Next, when the pair number and pair role in the LDEV in which the LDEV number is N (i.e., LDEVN) are both −1, this implies that the LDEV (in which the LDEV number is N) does not constitute a local replication pair.
Incidentally, in each LDEV in which the LDEV number is 0, 1, . . . , n, . . . , N, the pair role “primary” means the LDEV constitutes the primary LDEV, the pair role “secondary” means it constitutes the secondary LDEV, and the pair role “−1” means it does not constitute a pair, respectively.
The storage manager management table shown in
As evident upon comparing
The storage manager management table (A) has information required for the storage management software 50 to issue a command to the maintenance terminal 20.
The pair management table shown in
In this pair management table, contained as the type of pair status in addition to sync, pair and split shown in
Incidentally, a differential bitmap is a bitmap for representing the differential between the data stored in the primary LDEV and the data stored in the secondary LDEV. In the differential bitmap, one logical block in the LDEV is represented with 1 bit; when a given logical block in the primary LDEV and the corresponding logical block in the secondary LDEV coincide, this is represented as "0", and when they do not coincide, this is represented as "1".
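A minimal sketch of such a differential bitmap follows, using one bit per logical block as described above; a Python bytearray stands in for the bitmap, and the class and method names are illustrative assumptions.

```python
class DifferentialBitmap:
    """1 bit per logical block: 0 = primary and secondary blocks
    coincide, 1 = they differ (as described in the text above)."""

    def __init__(self, n_blocks):
        self.bits = bytearray((n_blocks + 7) // 8)  # all blocks start coincident

    def mark_dirty(self, block):
        # The primary was written; this block no longer matches the secondary.
        self.bits[block // 8] |= 1 << (block % 8)

    def mark_clean(self, block):
        # The block was copied to the secondary; the two coincide again.
        self.bits[block // 8] &= 0xFF ^ (1 << (block % 8))

    def is_dirty(self, block):
        return bool(self.bits[block // 8] & (1 << (block % 8)))
```

During resynchronization of a split pair, only the blocks marked "1" need to be copied.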
The administrator management table shown in
The storage management software (50) in the management computer (40) has the SLPR management table for secondary LDEV shown in
As evident from the foregoing description, a command issued by the management computer 40 is transmitted from the management computer 40 to the maintenance terminal 20. And then, a response as a result of the command is transmitted from the maintenance terminal 20 to the management computer 40.
For example, the command transmitted from the management computer 40 to the maintenance terminal 20 via the LAN 30 may be an LDEV information request command. Attached to this LDEV information request command ("GetLdevInfo") are the user ID of the subsystem manager or the user ID of the SLPR manager as the user ID, the password to be used upon logging onto the maintenance terminal (20) as the password, and information on the SLPR number corresponding to the desired LDEV information, respectively. Incidentally, when LDEV information pertaining to all the SLPRs in the storage system is desired, the SLPR number will be designated as "all".
Meanwhile, in the response to be transmitted from the maintenance terminal 20 to the management computer 40 in response to the LDEV information request command, all LDEV information contained in the SLPR (SLPR number) that the management computer 40 designated in the LDEV information request command is provided in a format according to that of the LDEV management table shown in
As the foregoing response, a series of information pertaining to a specific LDEV, such as the LDEV number, size (of LDEV) (memory capacity), RAID level, physical disk number, physical disk number, physical disk number, . . . , top block number, pair number and pair role, is transmitted from the maintenance terminal 20 to the management computer 40 for the number of LDEVs designated in the LDEV information request command. Incidentally, since the number of the physical disk numbers listed will change depending on the RAID level (e.g., RAID5 (3D+1P)), the number of physical disk numbers to be listed can be sought by checking the RAID level.
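The check described in the last sentence can be sketched as follows, assuming the RAID level string carries an "nD+1P" suffix such as "RAID5(3D+1P)"; the string format is an assumption for illustration.

```python
import re

def disks_for_raid_level(level):
    """Derive how many physical disk numbers a response will list,
    e.g. "RAID5(3D+1P)" -> 3 data disks + 1 parity disk = 4."""
    m = re.search(r"(\d+)D\+(\d+)P", level)
    if not m:
        raise ValueError("unrecognized RAID level: " + level)
    return int(m.group(1)) + int(m.group(2))
```

A response parser would call this once per LDEV record to know how many physical disk numbers to consume before the top block number.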
In the present embodiment, in addition to the foregoing LDEV information request command, the local replication pair generation command is also used.
Attached to this local replication pair generation command (“CreatePair”) are user ID information, password information, primary LDEV number information, secondary LDEV number information, and so on. As the response to this local replication pair generation command, there are “Succeeded” and “Failed”.
In
As a result of this check, when it is judged as being "all" (Yes in step S143), all SLPR entered in the storage manager management table (or storage manager management table (A)) are made to be the management target. Here, if the storage manager is a subsystem manager, all SLPR in the storage system (65) will become the management target (step S144).
Next, the CPU 121 of the maintenance terminal 89 refers to all SLPR entered in the LDEV partition table shown in
As a result of checking the designated SLPR number attached to the LDEV information request command, if it is judged as not being “all” (No in step S143), the SLPR number designated with the LDEV information request command is checked to see whether it is to be managed by a manager (designated manager) designated in the storage manager management table (or storage manager management table (A)) (step S149). As a result of this check, when it is judged that the designated SLPR number is to be managed by the designated user (Yes in step S149), the SLPR pertaining to the designated SLPR number information is transferred to the processing routine shown in step S145 as the SLPR of the management target (step S150). Incidentally, when the designated user is a subsystem manager, the routine proceeds to the processing routine shown in step S150.
When it is judged that the LDEV currently subject to checking does not belong to the SLPR which is a management target from the LDEV number information held by the LDEV partition table and the independent SLPR information entered in the table corresponding to each LDEV number information (No in step S146), the routine immediately proceeds to the processing routine shown in step S148.
The processing routine from step S145 to step S147 is repeated up to the end of the LDEV partition table (No in step S148), and, when it is judged that the routine reached the end, the series of LDEV information request command processing steps will end. Incidentally, when it is judged as No at all steps of step S141, step S142 and step S149, (the CPU 121 of) the maintenance terminal 89 transmits Failed as the response to the management computer 69 (step S151), and the series of LDEV information request command processing steps will end.
In
Next, the CPU 121 of the maintenance terminal 89, as a result of accessing the SM 83 (of the DKC 71), refers to all SLPR entered in the LDEV partition table shown in
When it is judged that the storage manager is a subsystem manager from the user ID attached to the LDEV information request command (Yes in step S162), the routine immediately proceeds to the routine processing shown in step S165. Unlike the SLPR manager, the subsystem manager is not subject to any restrictions for pairing the primary LDEV and secondary LDEV across different SLPR; that is, across SLPR boundaries.
Incidentally, when it is judged as No at all steps of step S161, step S163, and step S164, (the CPU 121) of the maintenance terminal 89 transmits Failed as the response to the management computer 69 (step S167), and the series of local replication pair creation command processing steps will end.
In
Next, the administrator designates the user ID and password in the SLPR management table for secondary LDEV shown in
Next, the administrator lists all LDEV information regarding all SLPR managed by the storage manager corresponding to the administrator acquired in step S171, and, for example, displays this on a display (not shown) of the management computer 69. Here, the LDEV information contained in the SLPR for secondary LDEV acquired in step S172 is not displayed (step S173).
Next, the administrator, for example, refers to the pair management table stored in the SM 83 (of the DKC 71) shown in
With the processing routine shown in step S175 through step S178 explained below, only the LDEV contained in the SLPR registered as the SLPR for secondary LDEV in the SLPR management table for secondary LDEV shown in
Next, the administrator checks to see whether the storage manager is an SLPR manager for secondary LDEV, or a subsystem manager, or a storage manager (i.e., SLPR manager) other than the above. This check is conducted by the administrator referring to the storage manager management table shown in
Next, the administrator selects the secondary LDEV from the foregoing list (step S177), issues a local replication pair generation command with the user ID and password (registered in the storage manager management table shown in
Meanwhile, when it is judged that the storage manager is neither the SLPR manager for secondary LDEV nor the subsystem manager (No in step S175), the administrator selects, as the secondary LDEV, the first item which matches the primary LDEV in size and RAID level from the information acquired in step S171 and step S172 (step S178), and the routine proceeds to the processing routine shown in step S179.
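The automatic selection in step S178 can be sketched as follows; the record layout is an assumption for illustration.

```python
# Pick, as the secondary LDEV, the first candidate whose size and
# RAID level match the primary (step S178). Field names are illustrative.

def select_secondary(primary, candidates):
    for ldev in candidates:
        if (ldev["size_gb"] == primary["size_gb"]
                and ldev["raid"] == primary["raid"]):
            return ldev
    return None  # no matching LDEV; the pair cannot be created
```

Taking the first match keeps the selection deterministic without requiring any input from the lower-level manager.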
In
Next, the maintenance terminal 89 transmits Succeeded to the management computer 69 as a response to the LDEV transfer command (step S185), and the series of LDEV transfer command processing steps will end.
Incidentally, when it is judged as No in step S181 and step S182, the maintenance terminal 89 transmits Failed to the management computer 69 as a response to the LDEV transfer command (step S186), and the series of LDEV transfer command processing steps will end.
In
Next, the administrator lists all LDEV information acquired in step S191 regarding all SLPR managed by the administrator, and transmits this from the management computer 69 to the maintenance terminal 89. Then, the maintenance terminal 89 which received the foregoing list displays such list on the display (not shown) of the maintenance terminal 89 (step S192). When this list is displayed on the display (not shown) of the maintenance terminal 89, the user designated (in the storage manager management table/storage manager management table (A)) refers to the pair management table stored in the SM 83 (of the DKC 71) shown in
When this list is displayed on the display (not shown) of the maintenance terminal 89, the designated user selects the secondary LDEV from the displayed list (step S195). And, it issues a local replication pair generation command with the user ID and password (registered in the storage manager management table shown in
In
Next, the administrator checks to see whether there is an LDEV whose pair role is "secondary" by referring to the LDEV management table stored in the SM 83 (of the DKC 71) from all LDEV information that the maintenance terminal 89 acquired in step S202 (step S203). As a result of this check, when it is judged that there is such a secondary LDEV (Yes in step S203), the administrator will perform processing to move the secondary LDEV to the SLPR exclusive to secondary LDEVs.
In other words, the administrator, with the MoveLdevSlpr command, designates the user ID and password in the user management table (storage manager management table/storage manager management table (A)), designates the SLPR exclusive to secondary LDEVs as the SLPR number, and notifies these designated contents to the maintenance terminal 89. As a result, the maintenance terminal 89 performs processing for moving the secondary LDEV to the SLPR exclusive to secondary LDEVs. Incidentally, when there is a plurality of secondary LDEVs, one command (MoveLdevSlpr) is issued for each secondary LDEV (step S204).
The processing routine shown from step S201 through step S204 is continued until there is no longer an unchecked SLPR from the SLPR other than the SLPR for secondary LDEV (No in step S205). And, when it is judged that there is no longer an unchecked SLPR (Yes in step S205), the series of secondary LDEV transfer processing steps will end.
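The transfer processing of steps S201 through S205 can be sketched as follows. An in-memory table stands in for the LDEV partition table, and one list move stands in for one per-LDEV transfer command; all names are assumptions.

```python
def transfer_secondaries(slpr_ldevs, secondary_slpr):
    """Scan every SLPR other than the SLPR for secondary LDEVs and move
    each LDEV whose pair role is "secondary" into that partition."""
    moved = []
    for slpr, ldevs in slpr_ldevs.items():
        if slpr == secondary_slpr:
            continue  # the destination partition itself is not scanned
        for ldev in list(ldevs):
            if ldev["pair_role"] == "secondary":
                ldevs.remove(ldev)                       # one transfer issued
                slpr_ldevs[secondary_slpr].append(ldev)  # per secondary LDEV
                moved.append(ldev["ldev"])
    return moved
```

After the scan, every secondary LDEV in the storage system resides in the SLPR exclusive to secondary LDEVs, matching the state described in the text.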
As a result of the secondary LDEV transfer processing shown in
Although the preferred embodiments of the present invention have been described above, these are merely exemplifications for explaining the present invention, and are not intended to limit the scope of the present invention to such embodiments. The present invention can be implemented in other various modes.
Number | Date | Country | Kind
2004-321015 | Nov 2004 | JP | national