This application is a continuing application, filed under 35 U.S.C. §111(a), of International Application PCT/JP2004/014730, filed Oct. 6, 2004.
1. Field of the Invention
This invention relates to a system environment setting support program, a system environment setting support method, and a system environment setting support apparatus for supporting the setting of an environment for a network system and, more particularly, to a system environment setting support program, a system environment setting support method, and a system environment setting support apparatus for setting an environment for a network system including a disk array system.
2. Description of the Related Art
With the progress of computerization of business transactions in enterprises, the amount of data handled by these enterprises has been increasing. In addition, there are cases where common data is used in plural pieces of business. This has complicated data management. Accordingly, centralized management of data has recently been exercised at predetermined places on networks. A technique used for such centralized management of data is, for example, a storage area network (SAN). With a SAN, a storage device for storing a large amount of data is prepared and a server and the storage device are connected via a fiber channel.
By the way, if centralized management of data is exercised by using a storage device, the speed at which the storage device is accessed has a great influence on the processing efficiency of the entire system. Furthermore, if a failure occurs in the storage device in which data is stored and the data cannot be taken out, all business operations are stopped.
Therefore, if centralized management of data is exercised by the SAN, not only a technique for increasing the speed at which the storage device is accessed but also a technique for increasing the reliability of the storage device is used. For example, a redundant array of independent disks (RAID) is used both as a technique for increasing the speed at which the storage device is accessed and as a technique for increasing the reliability of the storage device. A set of storage devices to which the RAID technique can be applied is referred to as a disk array.
With the RAID, various techniques are combined and used, such as striping, by which writing to and reading from a plurality of drives are performed in a distributed manner, and mirroring, by which the same piece of data is written to a plurality of drives.
With the RAID, however, various techniques are combined for controlling a plurality of drives, so the setting of an operation environment is complicated. Accordingly, a technique for automatically configuring a RAID controller has been devised (see, for example, Japanese Unexamined Patent Publication No. 2000-20245).
In addition, a technique for easing configuration change operations by visually representing the configuration of a disk array subsystem has been devised (see, for example, Japanese Unexamined Patent Publication No. 9-292954).
With a disk array connected to the SAN, however, expert knowledge of the entire network, including the SAN and the RAID, is necessary for performing proper setting. A user who lacks such knowledge cannot perform the setting.
To be concrete, many setting operations must be performed on a disk array connected to the SAN, a fiber channel (FC) switch, and a server so that the server can access a logical volume in the disk array. Advanced knowledge is needed for performing consistent setting on these components, which relate to one another, and the cost of obtaining such knowledge is high. In addition, this setting is complicated, so mistakes tend to occur. Furthermore, to maintain reliability, the configuration must be designed with redundancy and performance taken into consideration. This requires great skill.
These factors contribute to the difficulty of changing the configuration of a storage system or extending a storage system. That is to say, the following problems exist.
The configuration of the storage system cannot be designed easily.
To design a configuration with redundancy and performance taken into consideration, extensive knowledge of specifications for the disk array and the SAN is required. Therefore, a design cannot be prepared easily.
The procedure and operations for setting a designed configuration on the actual components are complicated.
Many setting operations must be performed on the disk array, the FC switch, and the server, so mistakes tend to occur. In addition, a mechanism for detecting setting mistakes is not fully established.
The present invention was made under the background circumstances described above. An object of the present invention is to provide a system environment setting support program, a system environment setting support method, and a system environment setting support apparatus by which setting in which the reliability of and the load on a disk array are taken into consideration can easily be performed.
To accomplish the above object, the present invention provides a computer-readable medium storing a system environment setting support program supporting the setting of an environment in which a server accesses a disk array. This program makes a computer perform the steps in which: a logical volume requirement acquisition section acquires a logical volume requirement in which a degree of reliability required, a degree of a load, and an amount of data required are designated; an applied technique determination section determines an applied technique for improving at least one of reliability and processing speed according to the logical volume requirement; and a logical volume setting section outputs, to the server and the disk array, instructions to generate a logical volume which is made up of a plurality of disk drives and which is used for realizing the determined applied technique and instructions to set an environment in which the server accesses the logical volume by using the applied technique.
The above and other objects, features and advantages of the present invention will become apparent from the following description when taken in conjunction with the accompanying drawings which illustrate preferred embodiments of the present invention by way of example.
An embodiment of the present invention will now be described with reference to the drawings.
An overview of the present invention applied to an embodiment will be given first and the concrete contents of the embodiment will then be described.
The logical volume requirement acquisition section 1a acquires the logical volume requirement 5 in which the degree of reliability required, the degree of a load (necessity of high-speed data access), and the amount of data required are designated. The logical volume requirement 5 is set by, for example, operation input from a user.
If a logical volume is used as a database regarding, for example, the contents of the settlement of electronic commerce, then high reliability is required. Conversely, if a logical volume is used for storing data, such as data stored in an electronic bulletin board for employees of a company, the loss of which has little influence on business, then low reliability is justifiable.
The degree of a load on a logical volume used for storing, for example, information to be retrieved by a search engine is set to a high value because the speed at which the logical volume is accessed has a great influence on processing speed.
The applied technique determination section 1b determines an applied technique for improving at least one of reliability and processing speed according to the logical volume requirement 5. For example, if high reliability is required, then the applied technique determination section 1b determines that the mirroring technique or the like suitable for improving reliability is applied. If the degree of a load is high, then the applied technique determination section 1b determines that the striping technique or the like suitable for improving processing speed is applied.
The logical volume setting section 1c outputs, to the server 3 and the disk array 4, instructions to generate a logical volume which is made up of a plurality of disk drives and which is used for realizing the determined applied technique and instructions to set an environment in which the server 3 accesses the logical volume by using the applied technique.
With the computer 1 which executes the above system environment setting support program, the logical volume requirement acquisition section 1a acquires the logical volume requirement 5. The applied technique determination section 1b then determines an applied technique for improving at least one of reliability and processing speed according to the logical volume requirement 5. The logical volume setting section 1c then outputs, to the server 3 and the disk array 4, the instructions to generate the logical volume which is made up of the plurality of disk drives and which is used for realizing the determined applied technique and the instructions to set the environment in which the server 3 accesses the logical volume by using the applied technique.
This makes it easy to build a logical volume in a disk array and to set an environment in which a server accesses the logical volume built. As a result, even a user who is not familiar with a technique for increasing data access speed or reliability by the use of a disk array can set an environment for a system to which these techniques are applied.
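As a rough illustration of how the three sections described above cooperate, the following Python sketch shows the flow from a logical volume requirement to the output of setting instructions. The class and function names, the thresholds, and the disk array and server interfaces are assumptions made only for this example and are not part of the embodiment.

    # Minimal sketch only; names and interfaces are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class LogicalVolumeRequirement:        # corresponds to the logical volume requirement 5
        reliability: str                   # degree of reliability required ("high" or "low")
        load: str                          # degree of a load ("high" or "low")
        capacity_gb: int                   # amount of data required

    def determine_applied_technique(req):  # role of the applied technique determination section 1b
        techniques = []
        if req.reliability == "high":
            techniques.append("mirroring")   # technique suitable for improving reliability
        if req.load == "high":
            techniques.append("striping")    # technique suitable for improving processing speed
        return techniques

    def set_logical_volume(req, techniques, server, disk_array):  # role of the setting section 1c
        # Hypothetical calls standing in for the instructions output to the
        # disk array and the server.
        disk_array.create_logical_volume(req.capacity_gb, techniques)
        server.configure_access_path(disk_array, techniques)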
The function of automatically setting an environment in which such a storage device is accessed is referred to as a storage provisioning function.
By the way, a system to which the present invention is applied is, for example, the SAN. Furthermore, the RAID can be used as a technique for increasing data access speed or reliability by the use of a disk array. If the storage provisioning function is applied to disk units (disk array) for which the RAID can be used, it is possible to make a server or the disk array perform a logical volume setting process or a resource provisioning process by a remote operation from a management server which manages a network system.
When the management server performs the function of automatically building a logical volume in the disk array, the management server performs the following process. The logical volume built is placed in a storage pool and put into a state in which it can be used in the system.
When a user sets a logical volume, he/she need only input the degree of reliability required, the degree of a load, and the amount of data required. The management server determines logical volume structure which satisfies the logical volume requirement inputted and automatically sets a logical volume in the disk array. The logical volume generated is managed as a volume included in the storage pool.
In addition, the management server uses the concepts of “domain” and “group” for performing the resource provisioning function by which a logical volume included in the pool is assigned to a server by one action.
With the resource provisioning function, the uniformity of the physical connection of the SAN is guaranteed by a server domain. To be concrete, the management server performs the following process.
With the resource provisioning function, servers used for carrying out the same business are managed as a server group and a logical volume is assigned to a storage group connected to the server group. To be concrete, the management server performs the following process.
The management server having the above processing functions makes it possible to apply the present invention to the SAN. An embodiment in which the present invention is applied to the SAN will now be described in detail.
The management server 100 is connected to a client 210, the servers 310 and 320, the FC switches 410 and 420, and the disk arrays 430 and 440 via a network 10. The client 210 is a terminal unit used for sending the contents of operation input by a manager to the management server 100 and receiving and displaying the result of a process performed by the management server 100.
The servers 310 and 320 are connected to the FC switches 410 and 420 by fiber channels. The FC switches 410 and 420 are connected to the disk arrays 430 and 440 by fiber channels. A transmission path from each of the servers 310 and 320 to each of the disk arrays 430 and 440 is multiplexed. For example, transmission paths from the server 310 to the disk array 430 include a transmission path via the FC switch 410 and a transmission path via the FC switch 420.
Names are given to the servers 310 and 320, the FC switches 410 and 420, and the disk arrays 430 and 440 to uniquely identify them on the system. The name of the server 310 is “server A”. The name of the server 320 is “server B”. The name of the FC switch 410 is “FC switch a”. The name of the FC switch 420 is “FC switch b”. The name of the disk array 430 is “disk array α”. The name of the disk array 440 is “disk array β”.
The hardware configuration of each component will now be described with reference to
The RAM 102 temporarily stores at least part of an operating system (OS) or an application program executed by the CPU 101. The RAM 102 also stores various pieces of data which the CPU 101 needs to perform a process. The HDD 103 stores the OS and application programs.
A monitor 11 is connected to the graphics processing unit 104. In accordance with instructions from the CPU 101, the graphics processing unit 104 displays an image on a screen of the monitor 11. A keyboard 12 and a mouse 13 are connected to the input interface 105. The input interface 105 sends a signal sent from the keyboard 12 or the mouse 13 to the CPU 101 via the bus 107.
The communication interface 106 is connected to a network 10. The communication interface 106 exchanges data with another computer via the network 10.
By adopting the above-mentioned hardware configuration, the processing function of this embodiment can be realized. The client 210 can also be realized by adopting the same hardware configuration that is shown in
The RAM 312 temporarily stores at least part of an OS or an application program executed by the CPU 311. The RAM 312 also stores various pieces of data which the CPU 311 needs to perform a process. The HDD 313 stores the OS and application programs.
A monitor 14 is connected to the graphics processing unit 314. In accordance with instructions from the CPU 311, the graphics processing unit 314 displays an image on a screen of the monitor 14. A keyboard 15 and a mouse 16 are connected to the input interface 315. The input interface 315 sends a signal sent from the keyboard 15 or the mouse 16 to the CPU 311 via the bus 319.
The communication interface 316 is connected to the network 10. The communication interface 316 exchanges data with another computer via the network 10.
The HBA 317 is connected to the FC switch 410 by a fiber channel. The HBA 317 communicates with the disk arrays 430 and 440 via the FC switch 410. The HBA 318 is connected to the FC switch 420 by a fiber channel. The HBA 318 communicates with the disk arrays 430 and 440 via the FC switch 420.
In
The control section 411 and each of the ports 412 through 419 are connected (not shown). The control section 411 controls the whole of the FC switch 410. The control section 411 is connected to the management server 100 and can set an environment in which the FC switch 410 operates in accordance with instructions from the management server 100. Each of the ports 412 through 419 is a communication port for connecting a communication line (such as an optical fiber) for a fiber channel.
Similarly, the FC switch 420 includes a control section 421 and ports 422 through 429. A port number is assigned to each of the ports 422 through 429. The port number of the port 422 is “0”. The port number of the port 423 is “1”. The port number of the port 424 is “2”. The port number of the port 425 is “3”. The port number of the port 426 is “4”. The port number of the port 427 is “5”. The port number of the port 428 is “6”. The port number of the port 429 is “7”.
The control section 421 and each of the ports 422 through 429 are connected (not shown). The control section 421 controls the whole of the FC switch 420. The control section 421 is connected to the management server 100 and can set an environment in which the FC switch 420 operates in accordance with instructions from the management server 100. Each of the ports 422 through 429 is a communication port for connecting a communication line (such as an optical fiber) for a fiber channel.
The disk array 430 includes a management module 431, channel adapters (CA) 432 and 433, and disk drives 434 through 437. The management module 431 and each of the CAs 432 and 433 are connected (not shown) and the management module 431 and each of the disk drives 434 through 437 are connected (not shown). In addition, each of the CAs 432 and 433 and each of the disk drives 434 through 437 are connected. The CAs 432 and 433 are connected to the FC switches 410 and 420 respectively.
The management module 431 manages the whole of the disk array 430. The management module 431 is connected to the management server 100 and sets an environment for each of the CAs 432 and 433 and the disk drives 434 through 437 in response to a request from the management server 100.
The disk array 440 includes a management module 441, CAs 442 and 443, and disk drives 444 through 447. The management module 441 and each of the CAs 442 and 443 are connected (not shown) and the management module 441 and each of the disk drives 444 through 447 are connected (not shown). In addition, each of the CAs 442 and 443 and each of the disk drives 444 through 447 are connected. The CAs 442 and 443 are connected to the FC switches 410 and 420 respectively.
The management module 441 manages the whole of the disk array 440. The management module 441 is connected to the management server 100 and sets an environment for each of the CAs 442 and 443 and the disk drives 444 through 447 in response to a request from the management server 100.
The management server 100 automatically sets environments in the system in which the above hardware configuration is adopted.
The physical connection information 110 stores information indicative of physical connection relationships between the servers 310 and 320, the FC switches 410 and 420, and the disk arrays 430 and 440. Types of the RAID corresponding to the degree of reliability required or the degree of a load are defined in the RAID determination table 120. Correspondences for determining a disk drive to be used according to the type of the RAID and the amount of data required are defined in the drive determination table 130. The RAID group structure information 140 stores information regarding RAID group structure. The logical volume structure information 150 stores information regarding logical volume structure determined by the volume structure determination section 170.
The physical connection information acquisition section 160 acquires information regarding a physical connection with another component from each of the servers 310 and 320, the FC switches 410 and 420, and the disk arrays 430 and 440. The physical connection information acquisition section 160 stores the information it acquired in the physical connection information 110.
The volume structure determination section 170 accepts a logical volume requirement 21 from the client 210 and determines logical volume structure corresponding to the logical volume requirement 21. At this time the volume structure determination section 170 refers to the physical connection information 110, the RAID determination table 120, and the drive determination table 130. When the volume structure determination section 170 determines the logical volume structure, the volume structure determination section 170 stores the contents of the determination in the RAID group structure information 140 and the logical volume structure information 150.
When the volume structure determination section 170 stores new information in the logical volume structure information 150, the logical volume setting section 180 determines the contents of setting performed on the servers 310 and 320, the FC switches 410 and 420, and the disk arrays 430 and 440 in accordance with the new information. The logical volume setting section 180 then gives the servers 310 and 320, the FC switches 410 and 420, and the disk arrays 430 and 440 instructions to set an environment in accordance with the contents of setting.
The servers 310 and 320, the FC switches 410 and 420, and the disk arrays 430 and 440 set logical volumes in accordance with the instructions from the logical volume setting section 180. With the servers 310 and 320, for example, an agent sets a logical volume.
The agent 310a included in the server 310 sets environments for the HBAs 317 and 318, the volume management section 310b, and the like included in the server 310 in accordance with instructions from the management server 100. Similarly, the agent 320a included in the server 320 sets environments for the HBAs 327 and 328, the volume management section 320b, and the like included in the server 320 in accordance with instructions from the management server 100.
The volume management section 310b has a volume management function for mirroring a disk unit which can be accessed from the server 310 and for protecting property from an unforeseen situation, such as a disk failure. Similarly, the volume management section 320b has a volume management function for mirroring a disk unit which can be accessed from the server 320 and for protecting property from an unforeseen situation, such as a disk failure.
Names are given to the HBAs 317 and 318 included in the server 310 so that they can uniquely be identified in the server 310. The name of the HBA 317 is “HBA0” and the name of the HBA 318 is “HBA1”. Similarly, names are given to the HBAs 327 and 328 included in the server 320 so that they can uniquely be identified in the server 320. The name of the HBA 327 is “HBA0” and the name of the HBA 328 is “HBA1”.
By the way, a RAID group is formed of one or more disk drives. One or more logical volumes are built on the disk drives which belong to the RAID group.
The plurality of logical volumes built are grouped. Connections to the servers 310 and 320 are managed according to groups. In this case, a logical volume group is referred to as an affinity group. An access path from each of the servers 310 and 320 to a disk drive is recognized as a path via an affinity group logically set.
The HBA 317 of the server 310 is connected to the port 412 of the FC switch 410 the port number of which is “0”. The HBA 318 of the server 310 is connected to the port 422 of the FC switch 420 the port number of which is “0”. The HBA 327 of the server 320 is connected to the port 415 of the FC switch 410 the port number of which is “3”. The HBA 328 of the server 320 is connected to the port 425 of the FC switch 420 the port number of which is “3”.
The port 416 of the FC switch 410 the port number of which is “4” is connected to the CA 432 of the disk array 430. The port 419 of the FC switch 410 the port number of which is “7” is connected to the CA 442 of the disk array 440.
The port 426 of the FC switch 420 the port number of which is “4” is connected to the CA 433 of the disk array 430. The port 429 of the FC switch 420 the port number of which is “7” is connected to the CA 443 of the disk array 440.
Logical volumes 451 through 454 included in the disk array 430 are classified into affinity groups 450a and 450b. The CAs 432 and 433 of the disk array 430 are connected to the logical volumes 451 through 454 via the affinity groups 450a and 450b logically set.
Logical volumes 461 through 464 included in the disk array 440 are classified into affinity groups 460a and 460b. The CAs 442 and 443 of the disk array 440 are connected to the logical volumes 461 through 464 via the affinity groups 460a and 460b logically set.
The physical connection information acquisition section 160 included in the management server 100 acquires the connection relationships shown in
The server connection information 31 and 32 are connection information acquired from the servers 310 and 320 respectively. Identification numbers of the HBAs included in the server 310 are set in the server connection information 31 and identification numbers of the HBAs included in the server 320 are set in the server connection information 32. In this embodiment, world wide port names (WWPN) are used as the identification numbers.
The server connection information 31 is connection information regarding the server 310 having the name “server A”. The HBA 317 of the server 310 has the name “HBA0” and the WWPN “AAAABBBBCCCC0000”. The HBA 318 of the server 310 has the name “HBA1” and the WWPN “AAAABBBBCCCC0001”.
The server connection information 32 is connection information regarding the server 320 having the name “server B”. The HBA 327 of the server 320 has the name “HBA0” and the WWPN “DDDDEEEEFFFF0000”. The HBA 328 of the server 320 has the name “HBA1” and the WWPN “DDDDEEEEFFFF0001”.
The disk array connection information 33 and 34 are connection information acquired from the disk arrays 430 and 440 respectively. Identification numbers of the CAs included in the disk array 430 are set in the disk array connection information 33 and identification numbers of the CAs included in the disk array 440 are set in the disk array connection information 34. In this embodiment, WWPNs are used as the identification numbers.
The disk array connection information 33 is connection information regarding the disk array 430 having the name “disk array α”. The CA 432 of the disk array 430 has the name “CA0” and the WWPN “1122334455667700”. The CA 433 of the disk array 430 has the name “CA1” and the WWPN “1122334455667701”.
The disk array connection information 34 is connection information regarding the disk array 440 having the name “disk array β”. The CA 442 of the disk array 440 has the name “CA0” and the WWPN “7766554433221100”. The CA 443 of the disk array 440 has the name “CA1” and the WWPN “7766554433221101”.
The FC switch connection information 35 and 36 is connection information acquired from the FC switches 410 and 420 respectively. An identification number of a device to which each port included in the FC switch 410 is connected is set in the FC switch connection information 35 and an identification number of a device to which each port included in the FC switch 420 is connected is set in the FC switch connection information 36.
The FC switch connection information 35 is connection information regarding the FC switch 410 having the name “FC switch a”. The port 412 of the FC switch 410 the port number of which is “0” is connected to a device having the WWPN “AAAABBBBCCCC0000”. This shows that the port 412 is connected to the HBA 317 which has the name “HBA0” and which is included in the server 310 having the name “server A”. Similarly, WWPNs in the FC switch connection information 35 set for the ports 415, 416, and 419 the port numbers of which are “3,” “4,” and “7,” respectively, indicate devices to which the ports 415, 416, and 419 are connected.
The FC switch connection information 36 is connection information regarding the FC switch 420 having the name “FC switch b”. The port 422 of the FC switch 420 the port number of which is “0” is connected to a device having the WWPN “AAAABBBBCCCC0001”. This shows that the port 422 is connected to the HBA 318 which has the name “HBA1” and which is included in the server 310 having the name “server A”. Similarly, WWPNs in the FC switch connection information 36 set for the ports 425, 426, and 429 the port numbers of which are “3,” “4,” and “7,” respectively, indicate devices to which the ports 425, 426, and 429 are connected.
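Purely as an illustration of how the connection information described above can be cross-referenced, the following sketch represents the server, disk array, and FC switch connection information as simple tables keyed by WWPN; the data layout is an assumption, while the values are taken from the example above.

    # Illustrative layout only; WWPN values are those of the example above.
    server_connections = {
        "server A": {"HBA0": "AAAABBBBCCCC0000", "HBA1": "AAAABBBBCCCC0001"},
        "server B": {"HBA0": "DDDDEEEEFFFF0000", "HBA1": "DDDDEEEEFFFF0001"},
    }
    disk_array_connections = {
        "disk array α": {"CA0": "1122334455667700", "CA1": "1122334455667701"},
        "disk array β": {"CA0": "7766554433221100", "CA1": "7766554433221101"},
    }
    fc_switch_connections = {
        "FC switch a": {0: "AAAABBBBCCCC0000", 3: "DDDDEEEEFFFF0000",
                        4: "1122334455667700", 7: "7766554433221100"},
        "FC switch b": {0: "AAAABBBBCCCC0001", 3: "DDDDEEEEFFFF0001",
                        4: "1122334455667701", 7: "7766554433221101"},
    }

    def endpoint_of(wwpn):
        """Return the (device, adapter) pair that owns the given WWPN."""
        for device, adapters in {**server_connections, **disk_array_connections}.items():
            for adapter, value in adapters.items():
                if value == wwpn:
                    return device, adapter
        return None

    # Which device is attached to port 4 of "FC switch a"?
    print(endpoint_of(fc_switch_connections["FC switch a"][4]))  # ('disk array α', 'CA0')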
The volume structure determination section 170 refers to the above physical connection information 110 and determines volume structure corresponding to a logical volume requirement designated by a user. At this time the volume structure determination section 170 refers to the RAID determination table 120 and the drive determination table 130.
Parameters inputted by a user and a drawn value are registered in the RAID determination table 120 and are associated with one another.
Of the parameters inputted by the user (information included in the logical volume requirement 21), the degree of reliability required and the degree of a load are used in the RAID determination table 120.
A drawn value is indicated by a RAID level. RAID0 through RAID5 exist as RAID levels.
RAID0 is a technique for improving read/write speed by dividing data into plural pieces and storing the plural pieces of data in a plurality of hard disks in a disk array (striping). RAID1 is a technique for increasing data security by recording the same data on two hard disks (mirroring). RAID2 is the technique of using not only hard disks for recording but also one or more hard disks for correcting an error. RAID3 is the technique of using a hard disk in a disk array for recording parity for the purpose of correcting an error. The unit of data division in RAID4 is larger than that of data division in RAID3. With RAID5, a drive onto which parity information is written is not determined and data and the parity information are dispersed on many drives in a disk array. That is to say, RAID5 is a technique for increasing resistance to faults.
In this embodiment, either a combination of RAID0 and RAID1 (RAID0+1) or RAID5 is applied as a RAID level. When RAID0+1 and RAID5 are compared, RAID0+1 is usually more advantageous than RAID5 from the viewpoint of performance. Compared with RAID5, however, RAID0+1 needs a larger number of disk units for making up a RAID. This leads to higher costs. Therefore, a RAID level is adopted in the following way in the example shown in
By referring to the above RAID determination table 120, an optimum RAID level can be determined according to a combination of the degree of reliability required and the degree of a load. Even if both of the degree of reliability required and the degree of a load are low, some RAID technique is applied in the example shown in
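By way of illustration, the lookup against the RAID determination table 120 might be sketched as follows; the concrete mapping below is an assumption chosen only to show the mechanism (RAID0+1 when performance matters, RAID5 otherwise), since the actual table contents are defined in the figure referred to above.

    # Assumed table contents for illustration; the real mapping is given by
    # the RAID determination table 120 in the figure.
    RAID_DETERMINATION_TABLE = {
        # (degree of reliability required, degree of a load) -> RAID level
        ("high", "high"): "RAID0+1",
        ("high", "low"):  "RAID5",
        ("low",  "high"): "RAID0+1",
        ("low",  "low"):  "RAID5",   # some RAID technique is applied even in this case
    }

    def determine_raid_level(reliability, load):
        return RAID_DETERMINATION_TABLE[(reliability, load)]

    print(determine_raid_level("high", "low"))   # RAID5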
The range of the total value (in gigabytes) of the amount of data required which is included in the logical volume requirement 21 is set as the parameter inputted by the user. The ranges of 0 to 99 GB, 100 to 499 GB, and 500 to 999 GB are set in the example shown in
The type name of a disk array which corresponds to the total value of the amount of data required is set as the drawn value. In the example shown in
The performance and the like of a disk array having each type name are set as the characteristic information for the drawn unit. To be concrete, the performance of disk drives included in a disk array (in descending order of data access speed), the number of disk drives included in a RAID group (in the case of RAID0+1), the number of disk drives included in a RAID group (in the case of RAID5), and the maximum number of RAID groups are set.
When a type name of a disk array is selected, a disk array that can hold more data than the amount of data required should be used. However, to reduce extra costs, it is desirable that the minimum necessary disk array be selected.
The performance of a disk drive is indicated by its memory capacity (GB) and its revolution speed (Krpm). If two RAID groups having the same memory capacity are formed, the RAID group whose memory capacity per disk drive is smaller usually needs more disk drives. If there are many disk drives, they can be accessed in parallel and access speed can be increased. If the revolution speed of a disk drive is high, a high access speed is obtained.
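A sketch of the lookup against the drive determination table 130 is given below. Only the capacity ranges and the type name “RAID-Model2” for the 100 to 499 GB row appear in the text; the remaining type names and all characteristic values are placeholders assumed for illustration.

    # Capacity ranges are from the text; type names other than "RAID-Model2"
    # and all characteristic values are assumed placeholders.
    DRIVE_DETERMINATION_TABLE = [
        # (min GB, max GB, disk array type name, characteristic information)
        (0,   99,  "RAID-Model1", {"drive_type": "36GB/15Krpm",
                                   "drives_raid01": 4, "drives_raid5": 4, "max_groups": 2}),
        (100, 499, "RAID-Model2", {"drive_type": "36GB/15Krpm",
                                   "drives_raid01": 8, "drives_raid5": 5, "max_groups": 4}),
        (500, 999, "RAID-Model3", {"drive_type": "73GB/15Krpm",
                                   "drives_raid01": 8, "drives_raid5": 5, "max_groups": 8}),
    ]

    def determine_disk_array(total_required_gb):
        for low, high, type_name, characteristics in DRIVE_DETERMINATION_TABLE:
            if low <= total_required_gb <= high:
                return type_name, characteristics
        raise ValueError("total amount of data exceeds the table in this sketch")

    # Example from the text: a total of 300 GB falls in the second row.
    print(determine_disk_array(300)[0])   # RAID-Model2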
A logical volume is generated by referring to the above data.
[Step S11] The volume structure determination section 170 acquires a requirement (logical volume requirement 21) for generating a logical volume needed in a system from the client 210.
For example, a user designates logical volumes needed in the system by an input device, such as a keyboard or a mouse, of the client 210 and inputs a requirement for generating each logical volume. The degree of reliability required, the degree of a load, and the amount of data required for each logical volume are designated in each logical volume requirement 21. The logical volume requirement 21 is sent from the client 210 to the management server 100 and is inputted to the volume structure determination section 170.
[Step S12] The volume structure determination section 170 performs a logical volume automatic structure determination process. The process will later be described in detail.
[Step S13] The logical volume setting section 180 performs a process so that the structure of the logical volume in a disk array will be reflected in each component included in the system. That is to say, the logical volume setting section 180 registers the logical volume generated in a storage pool.
After that, the logical volume setting section 180 performs its resource provisioning function. By doing so, the logical volume is transferred from the storage pool to a storage group and is assigned as an object to be accessed by a server (see
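A very small sketch of this two-step flow (registration in the storage pool, then assignment to a storage group by the resource provisioning function) follows; the volume names and group name are hypothetical.

    # Illustrative only; volume and group names are hypothetical.
    storage_pool = ["LV0001", "LV0002", "LV1100", "LV1101"]   # unused logical volumes
    storage_groups = {"business-A": []}

    def provision(volume, group):
        """Transfer a logical volume from the storage pool to a storage group in one action."""
        storage_pool.remove(volume)
        storage_groups[group].append(volume)
        # ...followed by the setting processes described later (affinity group,
        # access path, multipath, cluster resource, and mirror volume setting).

    provision("LV0001", "business-A")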
[Step S21] When the volume structure determination section 170 accepts the logical volume requirement 21, the volume structure determination section 170 refers to the RAID determination table 120 and determines a RAID level to be applied. That is to say, the volume structure determination section 170 finds an optimum (or recommended) RAID level on the basis of the degree of reliability required and the degree of a load set for each piece of data.
To be concrete, the volume structure determination section 170 acquires a combination of the degree of reliability required and the degree of a load from the logical volume requirement 21 and retrieves a record which meets the degree of reliability required and the degree of a load from the RAID determination table 120. The volume structure determination section 170 then takes a RAID level out of a RAID level column of the record detected.
If there are a plurality of logical volumes to be generated, then the RAID level determination process is repeated for each of the logical volumes.
[Step S22] The volume structure determination section 170 refers to the drive determination table 130 and determines the type name of a disk array to be applied. The concrete procedure for a volume determination process will now be described with reference to
The volume structure determination section 170 retrieves a row which matches the total value of capacity inputted by the user from the Total Value Of Amount Of Data Required column of the drive determination table 130. For example, if the user inputs a data capacity of 300 GB, then the record in the second row is retrieved.
The volume structure determination section 170 then acquires a disk array type name described in the appropriate record. For example, if the record in the second row is retrieved on the basis of the total value of the amount of data required, then the type name “RAID-Model2” is acquired.
[Step S23] The volume structure determination section 170 refers to the drive determination table 130 and acquires characteristic information corresponding to the disk array type name. To be concrete, the volume structure determination section 170 acquires the performance of disk drives, the number of disk drives included in a RAID group, and the maximum number of RAID groups included in the record retrieved in step S22. When the volume structure determination section 170 acquires the performance of disk drives, the volume structure determination section 170 acquires the type of a disk drive the data access speed of which is the highest.
The information acquired is used for generating a RAID group and assigning logical volumes.
[Step S24] The volume structure determination section 170 determines logical volume structure corresponding to the logical volume requirement 21 inputted by the user.
[Step S25] The volume structure determination section 170 generates a RAID group and assigns logical volumes. A RAID group is a set of logical volumes. The RAID levels of logical volumes included in the same RAID group must be the same. Therefore, the volume structure determination section 170 selects logical volumes to which the same RAID level is applied from among the logical volumes generated in step S23 and generates a RAID group.
In this case, the volume structure determination section 170 uses disk drives of the disk drive type acquired in step S23 for generating the RAID group. In addition, the volume structure determination section 170 selects disk drives the number of which corresponds to the “number of disk drives included in a RAID group” acquired in step S23 for generating the RAID group. Furthermore, the number of RAID groups generated by the volume structure determination section 170 must be smaller than or equal to the “maximum number of RAID groups” acquired in step S23.
The capacity of one RAID group can be calculated from these pieces of information in the following way.
capacity of a RAID5 group = (capacity of the disk drive type) × (number of disk drives included in the RAID group − 1)
capacity of a RAID0+1 group = (capacity of the disk drive type) × (number of disk drives included in the RAID group) / 2
The volume structure determination section 170 assigns logical volumes to RAID groups so as to make the capacity smaller than or equal to the values calculated in the above way. In this case, the volume structure determination section 170 can indicate assignment relationships by lines and display the assignment relationships on a screen of the client 210.
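The capacity calculation above and the assignment of logical volumes to RAID groups can be sketched as follows. The first-fit strategy shown here is an assumption made for illustration; the embodiment only requires that the capacity assigned to a RAID group not exceed the value calculated above.

    # Illustrative sketch of RAID group capacity and logical volume assignment.
    def raid_group_capacity(raid_level, drive_capacity_gb, drives_per_group):
        if raid_level == "RAID5":
            return drive_capacity_gb * (drives_per_group - 1)
        if raid_level == "RAID0+1":
            return drive_capacity_gb * drives_per_group / 2
        raise ValueError(raid_level)

    def assign_volumes(volume_sizes_gb, raid_level, drive_capacity_gb,
                       drives_per_group, max_groups):
        """First-fit assignment (an assumed strategy) of logical volumes to RAID groups."""
        capacity = raid_group_capacity(raid_level, drive_capacity_gb, drives_per_group)
        groups = []          # each entry holds the volume sizes placed in one RAID group
        for size in volume_sizes_gb:
            for group in groups:
                if sum(group) + size <= capacity:
                    group.append(size)
                    break
            else:
                if len(groups) >= max_groups:
                    raise RuntimeError("maximum number of RAID groups exceeded")
                groups.append([size])
        return groups

    # Example: three 100 GB volumes, RAID5, 73 GB drives, 5 drives per group.
    print(assign_volumes([100, 100, 100], "RAID5", 73, 5, 4))   # [[100, 100], [100]]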
[Step S26] The volume structure determination section 170 creates a command file which contains instructions given to each component according to the assignment of the logical volumes to the RAID group determined in step S25. After that, step S13 shown in
The RAID group structure information 140 and the logical volume structure information 150 are created in this way according to the logical volume requirement 21 inputted by the user.
The result of logical volume building is displayed on the screen of the client 210. An example of inputting a logical volume requirement and an example of a result display screen corresponding to the contents inputted will now be described.
The management server 100 then returns the result of generating the logical volumes to the client 210. SAN structure obtained as a processing result is displayed on the screen of the client 210.
The logical volume requirements inputted on the input screen 211 are displayed in the logical volume requirement display areas 212a, 212b, and 212c. The logical volumes (and disk drives which make up RAID groups including the logical volumes) generated are displayed in the logical volume display areas 212d and 212e.
The logical volume requirement display areas 212a, 212b, and 212c and the logical volume display areas 212d and 212e are connected by lines indicative of assignment relationship.
A user can check the result of generating the RAID groups and how the logical volumes are assigned by referring to the result display screen 212. After the user checks contents displayed on the result display screen 212, the volume structure determination section 170 included in the management server 100 creates a command file used for generating the logical volumes. The logical volume setting section 180 then sends the command file to the disk arrays 430 and 440. As a result, the RAID groups and the logical volumes are generated in the disk arrays 430 and 440, as displayed on the result display screen 212.
The logical volumes generated are registered in a storage pool as unused volumes. If a logical volume is added to a server, the logical volume is selected from the storage pool. Various setting processes are then performed automatically so that the servers 310 and 320 can access the logical volume.
[Step S31] The logical volume setting section 180 included in the management server 100 sets affinity groups first. To be concrete, the logical volume setting section 180 gives the disk arrays 430 and 440 instructions to generate affinity groups.
[Step S32] The logical volume setting section 180 sets access paths from the servers 310 and 320 to the affinity groups generated in step S31. The number of access paths from the server 310 or 320 to each affinity group is two. The logical volume setting section 180 gives the servers 310 and 320 and the FC switches 410 and 420 instructions to set access paths.
[Step S33] The logical volume setting section 180 sets multipaths. A multipath setting process is performed so that applications can recognize two access paths as one access path. The logical volume setting section 180 gives the servers 310 and 320 instructions to set multipaths.
[Step S34] The logical volume setting section 180 sets cluster resources. If the servers 310 and 320 form a cluster, resources shared on the cluster are defined for the servers 310 and 320 by performing a cluster resource setting process. The logical volume setting section 180 gives the servers 310 and 320 instructions to set cluster resources.
[Step S35] The logical volume setting section 180 performs a mirror volume setting process. The logical volume setting section 180 gives the servers 310 and 320 instructions to set a mirror volume.
In this case, the disk arrays 430 and 440 have generated the affinity groups 450a and 450b and the affinity groups 460a and 460b, respectively, in accordance with instructions from the logical volume setting section 180. The logical volumes 451 and 452 are added as logical volumes under the affinity groups 450a and 450b generated and the logical volumes 463 and 464 are added as logical volumes under the affinity groups 460a and 460b.
The logical volume setting section 180 gives each component instructions to perform setting. As a result, two access paths are set from each of the servers 310 and 320 to each of the affinity groups 450a, 450b, 460a, and 460b generated. The servers 310 and 320 can access the logical volumes via the access paths set. The servers 310 and 320 identify the logical volumes by their logical unit numbers (LUN).
In addition, the logical volume setting section 180 gives the servers 310 and 320 instructions to perform setting. As a result, multipath setting, cluster resource setting, and mirror volume setting are performed. With the multipath setting, a LUN given to each access path to the same logical volume is associated with one multipath instance number (mplb0, mplb1, mplb2, or mplb3). With the cluster resource setting, multipath instances mplb0, mplb1, mplb2, and mplb3 generated by the servers 310 and 320, which are cluster nodes, are registered as cluster resources. With the mirror volume setting, a plurality of multipath instance numbers are associated with each of mirror volume numbers (M0 and M1). Mirroring is performed by a volume which can be accessed by a plurality of multipath instance numbers associated with one mirror volume number.
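The chain of settings summarized above (LUNs grouped into multipath instances, multipath instances registered as cluster resources, and multipath instances paired into mirror volumes) can be sketched as follows. The data layout and the exact LUN-to-instance mapping are assumptions for illustration; the instance and mirror volume numbers follow the example in this embodiment.

    # Illustrative layout; the LUN-to-instance mapping is assumed.
    # Two access paths that expose the same LUN are grouped into one multipath instance.
    multipath_instances = {
        "mplb0": {"lun": 0,  "disk_array": "disk array α"},
        "mplb1": {"lun": 1,  "disk_array": "disk array α"},
        "mplb2": {"lun": 12, "disk_array": "disk array β"},
        "mplb3": {"lun": 13, "disk_array": "disk array β"},
    }

    # Multipath instances are registered as cluster resources on both cluster nodes.
    cluster_resources = list(multipath_instances)

    # Each mirror volume pairs multipath instances that belong to different disk arrays.
    mirror_volumes = {
        "M0": ["mplb0", "mplb2"],
        "M1": ["mplb1", "mplb3"],
    }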
The contents of the setting will now be described concretely with the addition of logical volumes as an example. As shown in
The affinity group structure data 181a is structure data regarding the affinity group 450a to which the affinity group number “Affinity Group 0” is given. The affinity group 450a is generated for access from the server 310. The logical volumes 451 and 452 are registered in the affinity group structure data 181a so that the server 310 can access them. The LUNs of the logical volumes 451 and 452 are “0” and “1” respectively.
The affinity group structure data 181b is structure data regarding the affinity group 450b to which the affinity group number “Affinity Group 1” is given. The affinity group 450b is generated for access from the server 320. The contents of the affinity group structure data 181b are the same as those of the affinity group structure data 181a.
The affinity group structure data 181c is structure data regarding the affinity group 460a to which the affinity group number “Affinity Group 10” is given. The affinity group 460a is generated for access from the server 310. The logical volumes 463 and 464 are registered in the affinity group structure data 181c so that the server 310 can access them. The LUNs of the logical volumes 463 and 464 are “12” and “13” respectively.
The affinity group structure data 181d is structure data regarding the affinity group 460b to which the affinity group number “Affinity Group 11” is given. The affinity group 460b is generated for access from the server 320. The contents of the affinity group structure data 181d are the same as those of the affinity group structure data 181c.
When the logical volumes are added from the storage pool 61 to the storage group 62, affinity group structure data corresponding to each of the servers 310 and 320 is created in this way.
The affinity group structure data 181a and 181b is defined for the affinity groups included in the disk array 430. Therefore, the affinity group structure data 181a and 181b is sent to the disk array 430 and the affinity groups are set in the disk array 430.
The affinity group structure data 181c and 181d is defined for the affinity groups included in the disk array 440. Therefore, the affinity group structure data 181c and 181d is sent to the disk array 440 and the affinity groups are set in the disk array 440.
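Expressed as simple records, the four pieces of affinity group structure data described above might look as follows; the layout is an assumption for illustration, while the group numbers, LUNs, and logical volumes follow the text.

    # Illustrative representation of the affinity group structure data 181a through 181d.
    affinity_group_structure = {
        "Affinity Group 0":  {"disk_array": "disk array α", "accessing_server": "server A",
                              "volumes": {0: "logical volume 451", 1: "logical volume 452"}},
        "Affinity Group 1":  {"disk_array": "disk array α", "accessing_server": "server B",
                              "volumes": {0: "logical volume 451", 1: "logical volume 452"}},
        "Affinity Group 10": {"disk_array": "disk array β", "accessing_server": "server A",
                              "volumes": {12: "logical volume 463", 13: "logical volume 464"}},
        "Affinity Group 11": {"disk_array": "disk array β", "accessing_server": "server B",
                              "volumes": {12: "logical volume 463", 13: "logical volume 464"}},
    }

    # Data for groups 0 and 1 is sent to disk array α, and data for groups 10
    # and 11 to disk array β, where the corresponding affinity groups are set.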
When the affinity groups are set in the disk arrays 430 and 440, the access paths from the servers 310 and 320 to the affinity groups are set. At this time two paths which differ in route are set by a redundant path setting function. When the access paths are set, devices indicated by LUNs are set at each HBA port included in the servers 310 and 320. As a result, the servers 310 and 320 can access the affinity groups via the HBAs.
After that, the multipath setting process is performed. Multipath structure data used for managing multipaths is created in the multipath setting process.
In the example shown in
The multipath structure data 181e is sent to the servers 310 and 320. Each of the servers 310 and 320 then executes a multipath creation command in accordance with the multipath structure data 181e to create a multipath instance.
Each of the servers 310 and 320 then performs the cluster resource setting to register the multipath instance as a cluster resource.
Next, the management server 100 creates mirror volume structure data.
The management server 100 sends the above mirror volume structure data 181f to the servers 310 and 320 and gives the servers 310 and 320 instructions to perform the mirror volume setting. As a result, the servers 310 and 320 perform the mirror volume setting in accordance with the mirror volume structure data 181f.
In this example, the multipath instance numbers “mplb0” and “mplb1” indicate volumes included in the disk array 430. The multipath instance numbers “mplb2” and “mplb3” indicate volumes included in the disk array 440. Therefore, when multipath instances the multipath instance numbers of which are “mplb0” and “mplb2” are set as constituent disks corresponding to the mirror volume number “M0,” the mirror volume setting is performed by using the different disk arrays. Similarly, when multipath instances the multipath instance numbers of which are “mplb1” and “mplb3” are set as constituent disks corresponding to the mirror volume number “M1,” the mirror volume setting is performed by using the different disk arrays.
As has been described in the foregoing, the management server 100 determines optimum logical volume structure according to a logical volume requirement inputted by a user, and an environment in which a logical volume generated in accordance with the contents of the determination is accessed can automatically be built. As a result, when the user inputs only the logical volume requirement, the logical volume structure is automatically determined and the logical volume is set in a disk array. Therefore, even if the user is not familiar with specifications for each device, the user can generate a logical volume which satisfies redundancy and performance requirements.
In addition, the number of logical volumes can be increased or decreased by one action. That is to say, there is no need to individually perform setting operation on each device. This prevents a setting mistake. Furthermore, setting operation is simplified, so setting work can be performed promptly.
The above functions can be realized with a computer. In this case, a program in which the contents of the functions the management server should have are described is provided. By executing this program on the computer, the above functions are realized on the computer. This program can be recorded on a computer-readable record medium. A computer-readable record medium can be a magnetic recording device, an optical disk, a magneto-optical recording medium, a semiconductor memory, or the like. A magnetic recording device can be a hard disk drive (HDD), a flexible disk (FD), a magnetic tape, or the like. An optical disk can be a digital versatile disk (DVD), a digital versatile disk random access memory (DVD-RAM), a compact disk read only memory (CD-ROM), a compact disk recordable (CD-R)/rewritable (CD-RW), or the like. A magneto-optical recording medium can be a magneto-optical disk (MO) or the like.
To place the program on the market, portable record media, such as DVDs or CD-ROMs, on which it is recorded are sold. Alternatively, the program is stored in advance on a hard disk in a server computer and is transferred from the server computer to another computer via a network.
When the computer executes this program, it will store the program, which is recorded on a portable record medium or which is transferred from the server computer, on, for example, its hard disk. Then the computer reads the program from its hard disk and performs processes in compliance with the program. The computer can also read the program directly from a portable record medium and perform processes in compliance with the program. Furthermore, each time the program is transferred from the server computer, the computer can perform processes in turn in compliance with the program it receives.
In the present invention, an applied technique for improving at least one of reliability and processing speed is determined according to the degree of reliability required, the degree of a load, and the amount of data required and instructions to generate a logical volume and instructions to set an access environment are given. As a result, only by designating the degree of reliability required, the degree of a load, and the amount of data required, a user can generate an appropriate logical volume and build an environment in which the logical volume is accessed.
The foregoing is considered as illustrative only of the principles of the present invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and applications shown and described, and accordingly, all suitable modifications and equivalents may be regarded as falling within the scope of the invention in the appended claims and their equivalents.
Related Application Data
Parent application: PCT/JP04/14730, Oct. 2004, US
Child application: 11731012, Mar. 2007, US