This application relates to and claims priority from Japanese Patent Application No. 2010-61203, filed on Mar. 17, 2010, the entire disclosure of which is incorporated herein by reference.
The present invention relates to a management apparatus and management method and, for example, can be suitably applied to a management apparatus for managing power savings of storage apparatuses.
Conventionally, a power saving technology for storage apparatuses that has been proposed is a technology for measuring the load of a storage apparatus and for controlling a power source of a controller for controlling access to a disk device according to the measurement result (refer to Japanese Published Unexamined Application No. 2007-102409, for example).
Another storage apparatus power saving technology is a ‘MAID (Massive Array of Idle Disks) function’ for stopping the drive rotation of a hard disk device with no I/O (Input/Output) for a fixed period of time. The ‘MAID function’ allows the power consumption of the storage apparatus to be reduced because the time during which I/O (access) to a logical volume defined in the storage apparatus is stopped can be matched with the time during which the hard disk device group (such as a RAID (Redundant Array of Inexpensive Disks) group or an HDP pool) providing that logical volume is stopped.
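Purely as an illustrative sketch (not part of any cited specification), the idle-stop behaviour of such a MAID function can be modelled as follows; the class name, the 600-second idle limit, and the polling-based `tick` method are all hypothetical choices:

```python
import time

IDLE_LIMIT_SECONDS = 600  # hypothetical fixed no-I/O period before spin-down


class MaidDiskGroup:
    """Minimal model of a MAID-style disk group that spins down when idle."""

    def __init__(self):
        self.spinning = True
        self.last_io = time.monotonic()

    def on_io(self):
        # Any read/write wakes the group and resets the idle timer.
        if not self.spinning:
            self.spinning = True
        self.last_io = time.monotonic()

    def tick(self, now=None):
        # Called periodically: stop drive rotation once the idle limit passes.
        now = time.monotonic() if now is None else now
        if self.spinning and now - self.last_io >= IDLE_LIMIT_SECONDS:
            self.spinning = False
```

In a real controller the equivalent logic would run in firmware per array group; the sketch only shows the timer-reset-on-access idea.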
Furthermore, conventionally, if the time for stopping inputs and outputs (I/O) with respect to resources of the same type (such as logical devices, file systems, or logical volumes of a host server) is made to coincide with the time when a hard disk device group is stopped, the I/O from the host server is then concentrated on another hard disk device group. However, no consideration has hitherto been given to the I/O counts of such resources of the same type.
Thus, with a conventional storage apparatus power saving method, the storage apparatus response performance drops in time zones where access is concentrated, which poses a risk to business operations.
The present invention was conceived in view of the above and proposes a highly reliable management apparatus and management method for providing power savings for a storage apparatus while preventing a drop in response performance.
In order to solve the above problem, the present invention provides a management apparatus for managing storage apparatuses which are equipped with a plurality of memory apparatus groups each configured from one or more memory apparatuses of the same type, which provide a storage area supplied by the memory apparatus groups to a host apparatus and which, when the storage area supplied by the memory apparatus groups is not accessed by the host apparatus for a predetermined period, stop operation of each of the memory apparatuses configuring the memory apparatus groups, the management apparatus comprising: an information collection unit for collecting information indicating a number of accesses, in each predetermined time zone, to each of a plurality of resources of the same type each having a periodic time zone in which the number of accesses by the host apparatus is zero, a response time in each of the time zones of the resources to an application installed on the host apparatus, and an association between the application and the resources; a grouping unit for grouping, among the resources, resources with overlapping time zones for which the number of accesses by the application is zero into the same group; a mapping unit for mapping each of the groups to the memory apparatus groups respectively; a migration execution unit for controlling the storage apparatuses to migrate data between memory apparatus groups where necessary on the basis of the result of the mapping of the groups to the memory apparatus groups by the mapping unit; and a reference value calculation unit for configuring, for each of the memory apparatus groups, a maximum value for the number of accesses by the application to the resources mapped to the memory apparatus group as a reference value of the memory apparatus group, on the basis of the number of accesses in each time zone by the application to each of the resources collected by the information collection unit and the response time in each time zone of the resources to the application, wherein, if the number of accesses in each of the time zones of the group of resources mapped to a memory apparatus group exceeds the reference value of that memory apparatus group, the mapping unit divides the group into a plurality of groups and maps each of the plurality of groups to the memory apparatus groups.
Furthermore, the present invention provides a management method for managing storage apparatuses which are equipped with a plurality of memory apparatus groups each configured from one or more memory apparatuses of the same type, which provide a storage area supplied by the memory apparatus groups to a host apparatus and which, when the storage area supplied by the memory apparatus groups is not accessed by the host apparatus for a predetermined period, stop operation of each of the memory apparatuses configuring the memory apparatus groups, the management method comprising: a first step of collecting information indicating a number of accesses, in each predetermined time zone, to each of a plurality of resources of the same type each having a periodic time zone in which the number of accesses by the host apparatus is zero, a response time in each of the time zones of the resources to an application installed on the host apparatus, and an association between the application and the resources; a second step of configuring, for each of the memory apparatus groups, a maximum value for the number of accesses by the application to the resources mapped to the memory apparatus group as a reference value of the memory apparatus group, on the basis of the collected number of accesses in each time zone by the application to each of the resources and the response time in each time zone of the resources to the application, and of grouping, among the resources, resources with overlapping time zones for which the number of accesses by the application is zero into the same group; a third step of mapping each of the groups to the memory apparatus groups respectively; and a fourth step of controlling the storage apparatuses to migrate data between memory apparatus groups where necessary on the basis of the result of the mapping of the groups to the memory apparatus groups, wherein, in the third step, if the number of accesses in each of the time zones of the group of resources mapped to a memory apparatus group exceeds the reference value of that memory apparatus group, the group is divided into a plurality of groups and each of the plurality of groups is mapped to the memory apparatus groups.
The present invention allows grouping to be performed in consideration of the number of times resources of the same type are accessed, and resources can therefore be grouped to the extent that no bottleneck arises in response performance.
Thus, a highly reliable management apparatus and management method for providing power savings for a storage apparatus while preventing a drop in response performance can be realized.
An embodiment of the present invention is now explained in detail with reference to the attached drawings.
In
The business operation system comprises, as hardware, one or more host servers 101, one or more SAN switches 102, one or more storage apparatuses 103, and a LAN (Local Area Network) 104, and comprises, as software, one or more business operation software 120 installed on the host server 101, and one or more database management software 121 similarly installed on the host server 101.
The host server 101 comprises a CPU (Central Processing Unit) 110, a memory 111, a hard disk device 112, and a network device 113.
The CPU 110 is a processor that executes a variety of software programs stored in the hard disk device 112 by reading these programs to the memory 111. In the following description, processing executed by the software programs thus read to the memory 111 is executed by the CPU 110 that actually executes these software programs.
The memory 111 is configured from a semiconductor memory such as a DRAM (Dynamic Random Access Memory) or the like, for example. The memory 111 stores various types of software that are executed by the CPU 110, and various types of information that the CPU 110 refers to. Specifically, the memory 111 stores software programs such as an OS (Operating System) 122, an application monitoring agent 123, a database performance/configuration information collection agent 124, and a host monitoring agent 125.
The hard disk device 112 is used to store various types of software and various types of information and so on. Note that a semiconductor memory such as flash memory or an optical disk device or the like, for example, may be adopted in place of the hard disk device 112.
The network device 113 is used by the host server 101 to communicate with the performance monitoring server 106 via the LAN 104 and to communicate with the storage apparatuses 103 via the SAN switches 102. The network device 113 comprises ports 114 that serve as communication cable connection terminals. In the case of this embodiment, the inputting and outputting of data from the host server 101 to the storage apparatuses 103 is performed in accordance with the Fibre Channel (FC) protocol but may also be performed using a different protocol. Furthermore, for the communications between the host server 101 and the storage apparatuses 103, the LAN 104 may also be used instead of the network device 113 and the SAN switches 102.
The SAN switches 102 comprise one or more host ports 130 and storage ports 131 respectively, and the data access route between the host server 101 and the storage apparatuses 103 is configured by switching the coupling between the host ports 130 and the storage ports 131.
The storage apparatuses 103 have a built-in MAID function and are configured comprising one or more ports 140, a control unit 141, and a plurality of memory apparatuses 142, respectively.
The ports 140 are used to communicate with the host server 101 or the performance/configuration information collection servers 107 via the SAN switches 102.
The memory apparatuses 142 are configured from high-cost disks such as SSDs (Solid State Drives) and SAS (Serial Attached SCSI) disks, and low-cost disks such as SATA (Serial AT Attachment) disks, for example. Note that, in addition to or instead of SSDs, SAS disks, and SATA disks, SCSI (Small Computer System Interface) disks or optical disk devices and so on, for example, may also be adopted as the memory apparatuses 142.
One or more array groups 144 are formed by one or more memory apparatuses 142 of the same type (SSDs, SAS disks, SATA disks, or the like), and one or more logical volumes 145 are formed in a storage area provided by one array group 144. Furthermore, data from the host server 101 is read and written from and to the logical volumes 145. The relationships between the memory apparatuses 142, the array groups 144, and the logical volumes 145 will be described subsequently (refer to
The control unit 141 is configured comprising hardware resources such as a processor and memory, and controls the operation of the storage apparatuses 103. For example, the control unit 141 controls the reading and writing of data with respect to the memory apparatuses 142 in accordance with I/O requests sent from the host server 101.
In addition, the control unit 141 monitors the status of access by the host server 101 to each logical volume 145. If there is no access for a predetermined period to any of the logical volumes 145 provided by a certain array group 144, the control unit 141 sets that array group 144 (that is, each memory apparatus 142 configuring the array group 144) to a stopped operation status; if there is access by the host server 101 to a logical volume 145 provided by the array group 144, the control unit 141 starts up the array group 144 (that is, each memory apparatus 142 that configures the array group 144).
Furthermore, the control unit 141 comprises a migration execution unit 143 as software. The migration execution unit 143 executes migration processing for migrating data between array groups 144 (described subsequently) by controlling the corresponding memory apparatuses 142 upon receiving a migration command from the performance monitoring server 106.
The business operation software 120 and the database management software 121 are application software that provides a business operation logic function of the business operation system. The business operation software 120 and the database management software 121 execute inputting and outputting of data with respect to the storage apparatuses 103 where necessary. Note that, in the following description, the business operation software 120 will be suitably referred to as an “application.”
Access to the data in the storage apparatuses 103 by the business operation software 120 and the database management software 121 takes place via the OS 122, the network device 113, the ports 114, the SAN switches 102, and the ports 140 of the storage apparatuses 103.
The OS 122 is basic software of the host server 101 and provides a storage area that serves as the input/output destination of data for the business operation software 120 and the database management software 121 in units referred to as files. The files managed by the OS 122 are associated, in units of a certain group (hereinafter called a file system), with the logical volumes 145 by a mount operation. The files in a file system are in many cases managed using a tree structure.
Meanwhile, the storage management system comprises, as hardware, a storage management client 105, a performance monitoring server 106, and one or more performance/configuration information collection servers 107 and, as software, storage management software 154 installed on the performance monitoring server 106, a switch monitoring agent 164 and a storage monitoring agent 165 which are installed on the performance/configuration information collection servers 107, and an application monitoring agent 123, a database performance/configuration information collection agent 124 and a host monitoring agent 125 which are installed on the host server 101.
The storage management client 105 is an apparatus for providing a user interface function of the storage management software 154. The storage management client 105 comprises at least an input device for receiving inputs from the user and a display device (not shown) for displaying information to the user. The display device is configured from a CRT (Cathode Ray Tube) or a liquid-crystal display device and so on, for example. A configuration example of a GUI (Graphical User Interface) screen that is displayed on the display device will be described subsequently (
The performance monitoring server 106 comprises a CPU 150, a memory 151, a hard disk device 152, and a network device 153.
The CPU 150 is a processor that executes the software programs stored in the hard disk device 152 by reading these programs to the memory 151. In the following description, the processing executed by the software program read to the memory 151 is executed by the CPU 150 that actually executes the software program.
The memory 151 is configured from a semiconductor memory such as DRAM, for example. The memory 151 stores software programs that are read from the hard disk device 152 and executed by the CPU 150, and information that the CPU 150 refers to, and so forth. Specifically, the memory 151 stores at least the storage management software 154.
The hard disk device 152 is used to store various types of software and information and so on. Note that a semiconductor memory such as flash memory or an optical disk device or the like, for example, may be adopted in place of the hard disk device 152.
The network device 153 is used to allow the performance monitoring server 106 to communicate with the storage management client 105, the performance/configuration information collection servers 107 and the host server 101 and so forth via the LAN 104.
The performance/configuration information collection servers 107 comprise a CPU 160, a memory 161, a hard disk device 162, and a network device 163.
The CPU 160 is a processor that executes the software programs stored in the hard disk device 162 by reading these programs to the memory 161. In the following description, the processing that is executed by the software programs read to the memory 161 is executed by the CPU 160 that actually executes these software programs.
The memory 161 is configured from semiconductor memory such as DRAM, for example. The memory 161 stores software programs that are read from the hard disk device 162 and executed by the CPU 160 as well as data that the CPU 160 refers to, and so forth. Specifically, the memory 161 stores at least either the switch monitoring agent 164 or the storage monitoring agent 165.
The hard disk device 162 is used to store various types of software and data and so forth. Note that a semiconductor memory such as flash memory or an optical disk device or the like, for example, may also be used in place of the hard disk device 162.
The network device 163 is used to allow the performance/configuration information collection servers 107 to communicate, via the LAN 104, with the performance monitoring server 106, and with the SAN switches 102 and storage apparatuses 103 that are the monitoring targets of the switch monitoring agent 164 and the storage monitoring agent 165 installed on the performance/configuration information collection servers 107.
The storage management software 154 is software that provides a function for collecting and monitoring SAN configuration information, performance information and application information. In order to acquire the configuration information, performance information and application information from the hardware and software that form the SAN environment, the storage management software 154 employs dedicated agent software for each.
The switch monitoring agent 164 is software for collecting performance information and configuration information that are required from the SAN switches 102 via the network device 163 and the LAN 104. In
The storage monitoring agent 165 is software for collecting the required performance information and configuration information from the storage apparatuses 103 by way of a port 166 of the network device 163 and the SAN switch 102. In
The application monitoring agent 123 is software for collecting various performance information and configuration information relating to the business operation software 120, and the database performance/configuration information collection agent 124 is software for collecting various performance information and configuration information relating to the database management software 121. Furthermore, the host monitoring agent 125 is software for collecting required information relating to the performance and configuration of the host server 101.
Furthermore, in
The host monitoring agent 125 and the application monitoring agent 123, which are installed on the host server 101, and the storage monitoring agent 165, which is installed on the performance/configuration information collection server 107, are started up with predetermined timing (at regular intervals using a timer in accordance with scheduling settings, for example) or in response to a request from the storage management software 154, and collect the required performance information and/or configuration information from the monitoring targets under their control.
The agent information collection unit 201 of the storage management software 154 is also started up with predetermined timing (at regular intervals in accordance with scheduling settings, for example) and collects performance information and configuration information of monitoring targets from the host monitoring agent 125, the application monitoring agent 123 and the storage monitoring agent 165 in the SAN environment. Furthermore, the agent information collection unit 201 stores the collected information in the resource performance information table group 202 and the resource configuration information table group 203.
Here, “resource” is a generic term for the hardware configuring the SAN environment (the storage apparatuses, the host server, and so on) together with its physical and logical components (array groups, logical volumes, and the like), and for the programs executed on that hardware (business operation software, database management software, file management systems, volume management software, and the like) together with their logical components (file systems, logical devices, and the like).
The resource performance information table group 202 may be broadly divided into tables for managing, as is, information collected by the agent information collection unit 201 from the storage monitoring agent 165, the host monitoring agent 125 and the application monitoring agent 123, and tables for managing information that is obtained by processing the information collected by the agent information collection unit 201. The resource performance information table group 202 is configured from tables subsequently described in
The resource configuration information table group 203 may likewise be broadly divided into tables for managing, as is, information collected by the agent information collection unit 201 from the storage monitoring agent 165, the host monitoring agent 125, and the application monitoring agent 123, and tables for managing information obtained by processing the information collected by the agent information collection unit 201. The resource configuration information table group 203 is configured from tables that will be subsequently described in
Meanwhile, the resource grouping unit 204 of the storage management software 154 groups logical volumes 145 with overlapping time zones for which the I/O count is ‘0’ into the same group on the basis of the average I/O count in each predetermined time zone (for example, time zones of 10 minutes each) of each logical volume 145 (
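To illustrate this grouping idea with a hedged sketch (this is not the actual implementation of the resource grouping unit 204; the function names and the representation of per-time-zone average I/O counts as plain lists are assumptions), logical volumes whose zero-I/O time zones overlap could be gathered as follows:

```python
def zero_io_zones(io_counts):
    """Return the set of time-zone indices where the average I/O count is 0."""
    return {i for i, n in enumerate(io_counts) if n == 0}


def group_volumes(volume_io):
    """Group volumes whose zero-I/O time zones overlap.

    volume_io: dict mapping a volume id to a list of average I/O counts,
    one entry per fixed time zone (e.g. per 10 minutes). Hypothetical
    simplification: a volume joins the first group whose shared idle
    zones are non-empty, narrowing that group's idle window.
    """
    groups = []  # each group: [shared zero-I/O zone set, list of volume ids]
    for vol, counts in volume_io.items():
        zones = zero_io_zones(counts)
        for group in groups:
            if group[0] & zones:  # overlapping idle zones -> same group
                group[0].intersection_update(zones)
                group[1].append(vol)
                break
        else:
            groups.append([zones, [vol]])
    return [members for _, members in groups]
```

For example, volumes that are all idle in time zone 0 end up together, while a volume idle only in a different zone forms its own group.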
Furthermore, the array group reference value calculation unit 207 of the storage management software 154 acquires, respectively for each array group 144 (
The array group reference value calculation unit 207 determines, for each array group 144, the I/O count of the time zone with the largest value in the total value of the I/O count for each time zone of each logical volume 145 associated with the array group 144, as a reference value of the array group 144 (referred to hereinbelow as the array group reference value), and stores the array group reference value of each array group 144 thus determined in the reference value storage table 208.
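The calculation just described can be sketched as follows (an illustrative simplification, not the unit 207 itself; representing each volume's per-time-zone I/O counts as a list is an assumption):

```python
def array_group_reference(volume_io_by_zone):
    """Reference value of one array group: the largest per-time-zone total
    of the I/O counts of the logical volumes associated with that group.

    volume_io_by_zone: one list of per-time-zone I/O counts per volume,
    all lists covering the same time zones.
    """
    # Sum the counts of all volumes zone by zone, then take the peak zone.
    totals = [sum(zone_counts) for zone_counts in zip(*volume_io_by_zone)]
    return max(totals)
```

With two volumes whose counts are [10, 0, 5] and [2, 8, 5], the zone totals are [12, 8, 10], so the reference value is 12.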
The group/array group mapping unit 206 of the storage management software 154 calculates, for each group of logical volumes 145 and on the basis of the resource grouping information table group 205, the total value for the defined capacity of each logical volume 145 belonging to the group. Furthermore, the group/array group mapping unit 206 performs mapping of array groups 144 to groups of logical volumes 145 on the basis of the actual capacity of each array group 144 obtained on the basis of the calculated value and the resource configuration information table group 203.
In addition, the group/array group mapping unit 206 divides a group of logical volumes 145 if, as a result of mapping the groups to the array groups 144 on the basis of the actual capacity of the array groups 144 as mentioned earlier, a forecast value for the I/O count of a certain time zone of that group, forecast on the basis of the average I/O count for each time zone of the logical volumes 145, exceeds the array group reference value of the array group 144 associated with that group. Furthermore, the group/array group mapping unit 206 performs group integration if the number of groups of logical volumes 145 exceeds the number of array groups 144.
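The division step above can be sketched as follows. This is a hedged illustration only: it assumes, unlike the fuller per-time-zone forecast in the text, that each volume contributes a single peak I/O count and that peaks add linearly, and the greedy first-fit split is a hypothetical strategy rather than the mapping unit 206's actual one:

```python
def divide_group(volumes, reference):
    """Split a group of volumes into sub-groups so that each sub-group's
    combined peak I/O count stays at or below the array group reference.

    volumes: list of (volume id, peak I/O count) pairs.
    A single volume whose peak alone exceeds the reference still forms
    its own sub-group, since a volume cannot be split further.
    """
    subgroups, current, load = [], [], 0
    for vol, peak in sorted(volumes, key=lambda v: -v[1]):  # largest first
        if current and load + peak > reference:
            subgroups.append(current)  # close the full sub-group
            current, load = [], 0
        current.append(vol)
        load += peak
    if current:
        subgroups.append(current)
    return subgroups
```

For instance, peaks of 6, 5 and 4 against a reference of 10 yield the sub-groups [6] and [5, 4].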
The reduction rate calculation unit 210 of the storage management software 154 calculates the current power consumption amount per year of all the memory apparatuses 142 on the basis of the current operating status of each array group 144, calculates the power consumption amount per year of all the memory apparatuses 142 after there has been a change to the group configuration of the logical volumes 145 as a result of this grouping and to the mapping of the array groups 144 to each group (referred to hereinafter as a configuration change), and stores the calculation results in the reduction rate storage table 211 respectively. Furthermore, the reduction rate calculation unit 210 calculates the power consumption reduction amount and the power consumption reduction rate per year of the memory apparatuses 142 overall on the basis of the power consumption amount per year before and after this configuration change, and stores the calculation results in the reduction rate storage table 211.
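As a rough, hedged sketch of this before/after comparison (not the reduction rate calculation unit 210 itself; the per-hour reference values in watts and the "duty" fraction expressing how much of the year a group actually spins are hypothetical simplifications):

```python
HOURS_PER_YEAR = 8760


def yearly_power(disk_groups):
    """Yearly power consumption from per-hour consumption reference values.

    disk_groups: list of (reference value in watts, duty) pairs, where duty
    is the fraction of the year the group is actually operating.
    """
    return sum(watts * HOURS_PER_YEAR * duty for watts, duty in disk_groups)


def reduction(before, after):
    """Reduction amount and reduction rate of yearly power consumption."""
    amount = before - after
    return amount, amount / before
```

If a configuration change lets a 10 W group stay stopped half the year instead of running continuously, the yearly consumption drops from 87,600 to 43,800 watt-hours, a reduction rate of 50%.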
Meanwhile, the grouping display unit 212 of the storage management software 154 displays information such as the grouping result and the resulting power consumption reduction amount on the storage management client 105 on the basis of the grouping result storage table 209, the reduction rate storage table 211, and the subsequently described response time threshold table 216.
In addition, if the operating mode is preset to “automatic,” the migration control unit 213 of the storage management software 154 detects the difference between a new configuration following a configuration change that is identified from the grouping result storage table 209 and a pre-existing configuration prior to the configuration change. Furthermore, by controlling the migration execution unit 143 of the storage apparatus 103 on the basis of this detection result, the migration control unit 213 migrates data stored in a corresponding logical volume 145 to another logical volume 145 in order to construct this new configuration.
In addition, if the operating mode has been set to “manual,” by controlling the migration execution unit 143 of the storage apparatus 103 in accordance with a migration command corresponding to a user operation notified via the grouping display unit 212, the migration control unit 213 migrates data stored in the migration target logical volume 145 from a source array group 144 to a destination array group 144 in order to construct this new configuration. Furthermore, by controlling the migration execution unit 143, the migration control unit 213 reroutes the I/O path to the migrated logical volume 145 between array groups 144 in the storage apparatus 103 in accordance with the new configuration.
Meanwhile, the power consumption reference value configuration unit 215 of the storage management software 154 stores, in the power consumption reference value storage table 214, an estimate value (referred to hereinafter as the power consumption reference value) for the power consumption amount per hour for each type of memory apparatus 142 (SSD, SATA, SAS, and the like) collected by the agent information collection unit 201 or set by the user via the storage management client 105. The power consumption reference values per hour for each type of memory apparatus stored in the power consumption reference value storage table 214 are used by the reduction rate calculation unit 210 when calculating the power consumption amount per year before and after grouping as mentioned earlier.
In addition, the response time threshold configuration unit 217 of the storage management software 154 stores, in the response time threshold table 216, the maximum value (referred to as the response time threshold hereinbelow) allowed as the response time from the corresponding logical volume 145 for each application installed on the host server 101, as set by the user via the storage management client 105. The response time threshold for each application stored in the response time threshold table 216 is used when the array group reference value calculation unit 207 calculates the array group reference value for each array group 144 as mentioned earlier.
The SAN environment hardware shown in
“Host server A” to “host server C” correspond to the host servers 101 in
The applications 307 to 309 known as “AP A” to “AP C” run on the host server 301 called the “host server A,” the applications 310, 311 called “AP D” and “AP E” run on the host server 302 known as “host server B,” and the applications 312 to 314, known as “AP F” to “AP H,” run on the host server 303 called “host server C.” These applications 307 to 314 correspond to the business operation software 120 in
Furthermore, file systems 315, 316, known as “FS A” and “FS B” and device files 322, 323, known as “DF A” and “DF B” are defined on the host server 301 known as “host server A”; file systems 317, 318, known as “FS C” and “FS D” and device file 324 known as “DF C” are defined on the host server 302 called “host server B”; and file systems 319 to 321, known as “FS E” to “FS G” and device files 325 to 327, known as “DF D” to “DF F” are defined for the host server 303 called “host server C.”
Running on these host servers 301 to 303 are the application monitoring agent 123 (
The line linking the file systems 315 and 316 called “FS A” and “FS B” to the device file 322 known as “DF A” indicates an association whereby the I/O load on these file systems 315, 316 is a read or write of the device file 322.
The narrow line representing “no change” in
Note that although omitted from
The array groups 339 to 343 are logical disk drives each configured from one or more memory apparatuses 344 to 358 of the same type by means of the functions of the control unit 141 (
Furthermore, logical volumes 331 to 338 are logical disk drives formed as a result of the function of the control unit 141 of the storage apparatuses 305, 306 dividing up the array groups 339 to 342 within the same apparatuses according to a designated size. These logical volumes 331 to 338 correspond to the logical volumes 145 in
Each of the device files 322 to 327 of the host servers 301 to 303 is allocated to any of the logical volumes 331 to 338 of the storage apparatuses 305, 306 respectively. The configuration information representing the correspondence relationship between the device files 322 to 327 and the logical volumes 331 to 338 is collected by the host monitoring agent 125 (
As described hereinabove, by correlating the association information between resources, from the applications 307 to 314 via the file systems 315 to 321 and the device files 322 to 327 down to the logical volumes 331 to 338, a so-called I/O route is obtained.
For example, when the application 314 known as “AP H” issues an I/O request to the file system 321 known as “FS G,” the file system 321 is secured in the device file 327 known as “DF F” and the device file 327 is allocated to the logical volume 338 known as “LDEV H” and the logical volume 338 is allocated to the array group 343 known as “AG E.” In this case, the load of the I/O generated by the application 314 known as “AP H” arrives at the corresponding memory apparatuses 356 to 358 from the file system 321 known as “FS G” via a route that passes through the device file 327 known as “DF F,” the logical volume 338 known as “LDEV H,” and the array group 343 known as “AG E.”
A configuration example of a GUI screen displayed on the display device of the storage management client 105 (
The grouping result display screen 401 is configured from a grouping result list 402 and a migration execution button 403. Furthermore, the grouping result list 402 is configured from a grouping configuration display area 410, an array group configuration display area 411, a power consumption display area 412, and a migration execution configuration area 413.
The grouping configuration display area 410 is configured from a group identifier field 420, a host server identifier field 421, a device file identifier field 422, a file system identifier field 423, an application field 424, a storage apparatus identifier field 425, and a logical volume identifier field 426. Furthermore, the application field 424 is configured from an identifier field 427, a response time threshold field 428, and a maximum response time field 429.
Furthermore, the group identifier field 420 displays an identifier (group identifier) that is assigned to each group of logical volumes 145 (
In addition, the identifier field 427 of the application field 424, the response time threshold field 428 and the maximum response time field 429 display, in association with the logical volume identifier stored in the logical volume identifier field 426, the identifier (application identifier) of the application (business operation software 120) that uses the logical volume 145 to which this logical volume identifier has been assigned, a response time threshold that is configured for the application, and a maximum response time (described subsequently), respectively.
Furthermore, the host server identifier field 421 displays the identifier (host server identifier) of the host server 101 in which the corresponding application is installed, and the device file identifier field 422 and the file system identifier field 423 display the identifier of the device file (device file identifier) and the identifier of the file system (file system identifier) respectively which are associated with the corresponding application.
Meanwhile, the array group configuration display area 411 is configured from an identifier field 430 and a memory apparatus type field 431. Further, the identifier field 430 displays the identifier (array group identifier) of the array group 144 (
Furthermore, the power consumption display area 412 is configured from a reduction amount field 432 and a reduction rate field 433. The reduction amount field 432 displays, in kilowatt (‘kW’) units, the reduction amount of power consumption per year that is expected as a result of the configuration change forming the corresponding group of logical volumes 145, and the reduction rate field 433 displays the reduction rate of power consumption per year that is expected as a result of that configuration change.
Furthermore, the migration execution configuration area 413 displays two radio buttons 434 for opting whether (“YES”) or not (“NO”) to change the configuration of the corresponding group displayed in the grouping result list 402 at that time. However, the two radio buttons 434 are displayed as invalid for those groups which, as a result of the grouping, do not require the execution of a configuration change. In addition, when a configuration change to a certain group will also affect another group, the two radio buttons 434 are displayed for opting whether or not to execute the configuration change for these groups as a whole (for example, the two groups “A” and “C” in
The migration execution button 403 is a button for executing, if the operating mode of the storage management software 154 is “manual,” a change in configuration according to the processing results of the grouping processing that is triggered by an execution command from the user.
Thus, if the user desires to change the configuration of a group displayed in the grouping result list 402 to the configuration displayed on the grouping result display screen 401 at the time, the user selects the radio button 434 corresponding to “YES” in the migration execution configuration area 413 for that group (the user clicks on the radio button 434 so that a black circle is displayed); if the user does not desire this configuration change, the user selects the radio button 434 corresponding to “NO” instead. Then, by clicking the migration execution button 403, the user is able to change the configuration of each desired group (in other words, each group for which the radio button 434 corresponding to “YES” is selected in the corresponding migration execution configuration area 413) to the configuration displayed in the grouping result list 402 at the time.
Meanwhile,
The power consumption reference value configuration unit 502 is configured from a memory apparatus type field 510 and a power consumption reference value field 511. Furthermore, the memory apparatus type field 510 displays all the types of memory apparatuses 142 (
Thus, after using the power consumption reference value configuration screen 501 to input forecast values for the power consumption per hour pertaining to the memory apparatus types that respectively correspond to the power consumption reference value input fields 512 of the power consumption reference value fields 511 in the power consumption reference value configuration unit 502, the user is able to configure these numerical values as power consumption reference values for the corresponding types of memory apparatuses 142 by clicking the OK button 503. Furthermore, by clicking the Cancel button 504 on the power consumption reference value configuration screen 501, the user is able to close the power consumption reference value configuration screen 501 without updating the power consumption reference values of each of the memory apparatus types.
Meanwhile,
The response time threshold configuration unit 602 is configured from an application identifier field 610 and a response time threshold configuration field 611. Furthermore, the application identifier field 610 stores the identifiers of applications (application identifiers) installed on any host server 101. The response time threshold configuration field 611 displays a response time threshold input field 612 with which the user configures the response time threshold for the corresponding application.
Thus, after using the response time threshold configuration screen 601 to input the desired numerical values in the response time threshold input field 612 of each response time threshold configuration field 611 of the response time threshold configuration unit 602, the user is able to configure these numerical values as response time thresholds for the corresponding applications by clicking the OK button 603. In addition, by clicking the Cancel button 604 on the response time threshold configuration screen 601, the user is able to close the response time threshold configuration screen 601 without updating the response time thresholds of each of the applications.
An example of the configuration of the resource performance information table group 202 (
The resource performance information table group 202 is configured from the application performance information table 700 (
The application performance information table 700 is a table that is used to hold and manage information relating to the performance of each application (business operation software 120) collected by the agent information collection unit 201 (
Furthermore, the date and time field 701 stores the date and time zone (time zone every 10 minutes in the example of
Accordingly,
Furthermore, the logical volume performance information table 800 is a table that is used to hold and manage information relating to the performance of each logical volume 145 created in the storage apparatus 103 and collected by the storage monitoring agent 165 (
Furthermore, the date and time field 801 stores the date and time zone when the information was collected (time zones for every 10 minutes in the example of
Therefore,
The array group performance information table 900 is a table that is used to hold and manage information relating to the performance of each array group 144 that is defined in the storage apparatus 103 and collected by the storage monitoring agent 165 (
In addition, the date and time field 901 stores a date and time zone when the information was collected (time zones for every 10 minutes in the example of
Hence,
An example of the configuration of the resource configuration information table group 203 (
The resource configuration information table group 203 is configured from the device file/file system-logical volume association table 1000 (
The device file/file system-logical volume association table 1000 is a table that is used to manage associations between the device files and file systems of the host servers 101, and the logical volumes 145 defined in the storage apparatuses 103 and, as shown in
Furthermore, the device file identifier field 1002, the file system identifier field 1003, and the logical volume identifier field 1005 store the identifiers of the device files, file systems, and logical volumes 145 that are associated with the aforementioned fields respectively (linked by lines in
Therefore,
In addition, the device file/file system/application association table 1100 is a table that is used to manage associations between device files, file systems, and applications (business operation software 120) in the host servers 101 and, as shown in
Furthermore, the device file identifier field 1102, the file system identifier field 1103, and the application identifier field 1104 store the respective identifiers of the device files, file systems, and applications associated with the aforementioned fields respectively (linked by lines in
Hence,
In addition, the logical volume/array group association table 1200 is a table that is used to manage associations between the logical volumes 145 and array groups 144 defined in the storage apparatuses 103 and, as shown in
Further, the logical volume identifier field 1202 stores the logical volume identifiers of each of the logical volumes 145 created in any of the storage apparatuses 103 respectively, and the array group identifier field 1204 stores the array group identifiers of the array groups 144 with which the corresponding logical volumes 145 are associated. Furthermore, the storage apparatus identifier field 1201 stores the storage apparatus identifiers of the storage apparatuses 103 in which the logical volumes 145 have been defined, and the logical volume defined capacity field 1203 stores the defined capacity of the corresponding logical volumes 145.
Hence,
The array group configuration information table 1300 is a table that is used to manage the array groups 144 defined in the storage apparatus 103 and, as shown in
Furthermore, the array group identifier field 1301 stores the array group identifiers that are assigned to each of the array groups 144 defined in any of the storage apparatuses 103 respectively, and the memory apparatus type field 1302 stores the types of the memory apparatuses 142 configuring the array groups 144.
Furthermore, the memory apparatus count field 1303 stores the numbers of memory apparatuses 142 that belong to the corresponding array groups 144 and the array group actual capacity field 1304 stores the actual overall capacity of the corresponding array groups 144. Additionally, the array group power consumption amount field 1305 stores the overall power consumption amounts of the corresponding array groups 144, which are calculated as will be described subsequently.
Hence,
Meanwhile, the resource grouping information table group 205 (
The resource grouping configuration table 1400 is a table that is used to hold and manage information related to the configuration of each group of logical volumes 145 formed by the resource grouping unit 204 (
Furthermore, the logical volume identifier field 1401 stores the respective logical volume identifiers of each of the logical volumes 145 defined in any of the storage apparatuses 103, and the group identifier field 1402 stores the group identifiers of the groups to which the corresponding logical volumes 145 belong.
Hence,
Furthermore, the resource grouping performance information table 1500 is a table that is used to hold and manage performance information for each group of logical volumes 145 formed by the resource grouping unit 204 (
Furthermore, the group identifier field 1501 stores the respective group identifiers of each group of logical volumes 145, and the date and time field 1502 stores the date and time zone when the information was collected (time zones for every 10 minutes in the example of
The average I/O count forecast value field 1503 stores a forecast value for the average value of the I/O counts in the corresponding group for every predetermined time (every minute in the example of
Hence,
Furthermore, the logical volume identifier field 1601 and the group identifier field 1602 store the same information as the logical volume identifier field 1401 and the group identifier field 1402 respectively in the resource grouping configuration table 1400 (
Hence,
Additionally, the array group identifier field 1701 stores the respective array group identifiers of each of the array groups 144 defined in any of the storage apparatuses 103, and the array group reference value field 1702 stores the maximum value of the average I/O count forecast values (referred to as the array group reference values hereinbelow) that are stored in the corresponding average I/O count forecast value field 903 in the array group performance information table 900 (
Hence,
Furthermore, the array group identifier field 1801 stores the respective array group identifiers of each of the array groups 144 defined in any of the storage apparatuses 103. The pre-change ELECTRIC ENERGY field 1802 stores the power consumption amounts per year of the corresponding array groups 144 prior to the configuration change, while the post-change ELECTRIC ENERGY field 1803 stores the power consumption amounts per year of the array groups 144 following the configuration change.
In addition, the power consumption reduction amount field 1804 stores the power consumption reduction amount per year resulting from a configuration change, and the power consumption reduction rate field 1805 stores the power consumption reduction rate per year resulting from the configuration change.
Hence, in the example of
Meanwhile,
Furthermore, the memory apparatus type field 1901 stores the types of all the memory apparatuses mounted in the storage apparatuses 103 such as “SAS,” “SATA,” and “SSD,” and the power consumption reference value field 1902 stores values configured by the user as the power consumption reference value per hour for the corresponding memory apparatus type.
Hence, the example in
Meanwhile,
Furthermore, the application identifier field 2001 stores the application identifiers of each of the applications installed on the host servers 101, and the response time threshold field 2002 stores values set by the user as the response time thresholds for the corresponding applications.
Hence, the example of
The processing content of various processes executed by each of the program modules in the storage management software 154 will be described next with reference to
In other words, in the storage management software 154, the response time threshold for each application configured by the user using the response time threshold configuration screen 601 (
Thereafter, the power consumption reference value for each memory apparatus type configured by the user using the power consumption reference value configuration screen 501 (
The agent information collection unit 201 (
Subsequently, the resource grouping unit 204 (
Thereafter, the array group reference value calculation unit 207 (
Subsequently, the group/array group mapping unit 206 (
Subsequently, the reduction rate calculation unit 210 (
Thereafter, the grouping display unit 212 (
Subsequently, in the case of manual mode, the migration control unit 213 (
The storage management software 154 then terminates the power saving processing.
Here,
The agent information collection unit 201 starts the agent information collection processing shown in
Thereafter, the agent information collection unit 201 derives associations between the host servers 101, the applications, the file systems, the device files, the storage apparatuses 103, the logical volumes 145, and the array groups 144 on the basis of the configuration information of the storage apparatuses 103, the host servers 101, and the applications collected in step SP10, and stores the derived result in the device file/file system-logical volume association table 1000 (
The agent information collection unit 201 then acquires the defined capacity of each logical volume 145 and the actual amount for each of the array groups 144 respectively from the configuration information of the storage apparatuses 103, the host servers 101 and the applications collected in step SP10. The agent information collection unit 201 subsequently stores the defined capacity of each logical volume 145 thus acquired in the logical volume/array group association table 1200, and stores the acquired actual capacity for each array group 144 in the array group configuration information table 1300 (SP12).
The agent information collection unit 201 then calculates, for each application and based on the application performance information collected from the application monitoring agent 123, the respective maximum response time average values, which are the average values of the maximum response times for every predetermined time (one minute) in each predetermined time zone (time zones every 10 minutes), and stores the calculation result in the application performance information table 700 of the resource performance information table group 202 (
In addition, the agent information collection unit 201 calculates, for each logical volume 145 and on the basis of the performance information of the logical volumes 145 collected from the storage monitoring agent 165, the average I/O count in each case, which is the average value of the I/O counts of the logical volumes 145 for each predetermined time (one minute) in predetermined time zones (time zones for every 10 minutes), and stores the calculated result in the logical volume performance information table 800 (
Thereafter, the agent information collection unit 201 calculates, for each array group 144, an average I/O count forecast value, namely the total of the average I/O counts, in each predetermined time zone (time zones for every 10 minutes), of each of the logical volumes 145 associated with that array group 144. This calculation is based on the average I/O count for each predetermined time (one minute) and for each logical volume 145 stored in the logical volume performance information table 800 in step SP14, and on the information representing the associations between each of the logical volumes 145 and the array groups 144 stored in the logical volume/array group association table 1200 in step SP11. The agent information collection unit 201 stores the calculation result in the array group performance information table 900 (SP15).
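As a rough sketch of this aggregation, the forecast value per array group and time zone is simply the total of the average I/O counts of the logical volumes associated with that array group. The per-volume figures and the volume-to-array-group mapping below are invented for illustration.

```python
from collections import defaultdict

# Invented sample data: (logical volume, time zone) -> average I/O count per minute.
ldev_avg_io = {
    ("LDEV A", "09:00"): 120, ("LDEV B", "09:00"): 80,
    ("LDEV A", "09:10"): 60,  ("LDEV B", "09:10"): 0,
}
# Invented association between logical volumes and array groups.
ldev_to_ag = {"LDEV A": "AG 1", "LDEV B": "AG 1"}

def forecast_per_array_group(avg_io, mapping):
    """Total each volume's average I/O count into its array group, per time zone."""
    totals = defaultdict(int)
    for (ldev, zone), count in avg_io.items():
        totals[(mapping[ldev], zone)] += count
    return dict(totals)

print(forecast_per_array_group(ldev_avg_io, ldev_to_ag))
# {('AG 1', '09:00'): 200, ('AG 1', '09:10'): 60}
```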
The agent information collection unit 201 then terminates the agent information collection processing.
Meanwhile,
The resource grouping unit 204 is started up at regular intervals by scheduling settings, for example. At startup, the resource grouping unit 204 starts the resource grouping processing shown in
The resource grouping unit 204 then selects one of the unprocessed logical volumes 145 for which the processing (described subsequently) of steps SP22 to SP39 has not yet been executed or which has not yet been allocated to any group (SP21).
The resource grouping unit 204 then determines whether the time zones for which the average I/O count is “0” are the same every day for the logical volume 145 selected in step SP21 (SP22). When this determination yields an affirmative result, the resource grouping unit 204 determines whether other logical volumes 145 exist whose time zones with an average I/O count of “0” are the same (SP23).
The resource grouping unit 204 proceeds to step SP40 when this determination yields a negative result. However, when the determination yields an affirmative result, the resource grouping unit 204 determines whether or not there is one year's worth or more of performance information for the logical volumes 145 stored in the logical volume performance information table 800 (
When this determination yields a negative result, the resource grouping unit 204 then proceeds to step SP27. However, when this determination yields an affirmative result, the resource grouping unit 204 extracts, for the logical volume 145 selected in step SP21 and all the other logical volumes 145 detected in step SP23, all time zones for which the average I/O count is “0” on the basis of the performance information for the logical volumes 145 for the preceding week stored in the logical volume performance information table 800 (SP25).
The resource grouping unit 204 subsequently determines, based on the processing result of step SP25 and for the logical volume 145 selected in step SP21 and all the other logical volumes 145 detected in step SP23, whether or not, on a month by month basis, there are days with different time zones for which the average I/O count is “0” for one to several days or so (SP26).
When this determination yields a negative result, the resource grouping unit 204 subsequently configures the logical volume 145 selected in step SP21 and all the other logical volumes 145 extracted in step SP23 in the same group (SP27). Specifically, the resource grouping unit 204 stores the same unique group identifier in each of the group identifier fields 1402 (
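The core of steps SP21 to SP27 — placing logical volumes whose zero-I/O time zones coincide into the same group under a shared, unique group identifier — can be sketched as follows. The volume names and time zones are invented, and this sketch omits the tolerance for small day-to-day differences handled in steps SP26 and SP28.

```python
from itertools import count

# Invented sample data: each logical volume's set of time zones with an
# average I/O count of "0".
zero_zones = {
    "LDEV A": frozenset({"22:00", "23:00"}),
    "LDEV B": frozenset({"22:00", "23:00"}),
    "LDEV C": frozenset({"03:00"}),
}

def group_by_zero_zones(zero_zones):
    """Assign the same group identifier to volumes with identical zero-I/O zones."""
    ids = count(1)
    zone_to_group = {}   # zero-zone set -> group identifier
    result = {}          # logical volume -> group identifier
    for ldev, zones in zero_zones.items():
        if zones not in zone_to_group:
            zone_to_group[zones] = f"Group {next(ids)}"
        result[ldev] = zone_to_group[zones]
    return result

print(group_by_zero_zones(zero_zones))
# {'LDEV A': 'Group 1', 'LDEV B': 'Group 1', 'LDEV C': 'Group 2'}
```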
However, when the determination of step SP26 yields an affirmative result, the resource grouping unit 204 determines, for the logical volume 145 selected in step SP21 and all the other logical volumes 145 extracted in step SP23, whether or not on a quarterly basis there are days with different time zones for which the average I/O count is “0” for one to several days or so (SP28).
When the determination of step SP28 subsequently yields a negative result, the resource grouping unit 204 configures, similarly to step SP27, the logical volume 145 selected in step SP21 and all the logical volumes 145 extracted in step SP23 into the same group (SP29).
Furthermore, when the determination of step SP28 yields an affirmative result, the resource grouping unit 204 configures, similarly to step SP27, the logical volume 145 selected in step SP21 and all the other logical volumes 145 extracted in step SP23 into the same group (SP30).
However, when the determination of step SP22 yields a negative result, the resource grouping unit 204 determines, for the logical volume 145 selected in step SP21, whether or not the time zones for which the average I/O count is “0” are the same from week to week (SP31). Specifically, in step SP31 the resource grouping unit 204 determines whether or not there are overlapping time zones for which the average I/O count is “0” when the same day of the week is compared from week to week; for example, the time zones with an average I/O count of “0” may differ between Sunday and Monday, yet overlap when Sundays are compared from week to week. When this determination yields a negative result, the resource grouping unit 204 then proceeds to step SP40.
However, when the determination of step SP31 yields an affirmative result, the resource grouping unit 204 then executes the steps SP32 to SP39 in the same way as steps SP23 to SP30.
When the execution of the processing of steps SP36, SP38, or SP39 is eventually complete, the resource grouping unit 204 proceeds to step SP40 and determines whether or not the execution of the processing of steps SP22 to SP39 is complete for all the logical volumes 145 defined in any of the storage apparatuses 103 (SP40).
The resource grouping unit 204 returns to step SP21 when this determination yields a negative result, and then repeats the processing of steps SP21 to SP40.
When an affirmative result is eventually obtained in step SP40 as a result of the execution of the processing of steps SP22 to SP39 being complete for all the logical volumes 145, the resource grouping unit 204 terminates the resource grouping processing.
When the resource performance information table group 202 and/or the resource configuration information table group 203 is updated by the agent information collection unit 201, the array group reference value calculation unit 207 starts the array group reference value calculation processing shown in
The array group reference value calculation unit 207 then extracts, for each application, the days and time zones for which the maximum response time does not exceed the response time threshold of the application from the application performance information table 700 (
The array group reference value calculation unit 207 then detects, for each application, the date and time for which the average I/O forecast value of the application stored in the array group performance information table 900 (
In addition, the array group reference value calculation unit 207 stores the array group reference value for each array group 144 determined in step SP42 in the reference value storage table 208 (FIG. 17) (SP53), and then terminates the array group reference value calculation processing.
In other words, once started up, the group/array group mapping unit 206 starts the group/array group mapping processing shown in
The group/array group mapping unit 206 then refers to the array group configuration information table 1300 (
The group/array group mapping unit 206 refers to the resource grouping performance information table 1500 (
The group/array group mapping unit 206 proceeds to step SP64 when this determination yields a negative result. When an affirmative result is obtained, however, the group/array group mapping unit 206 performs group division on each group of logical volumes 145 detected in step SP62 whose average I/O count forecast value is larger than the array group reference value of the allocated array group 144: focusing on the time zone with the largest I/O count, it divides the logical volumes 145 configuring the group into two or more groups so as to balance the I/O counts for that time zone (SP63).
Thereafter, the group/array group mapping unit 206 determines whether or not the number of groups of logical volumes 145 is greater than the number of array groups 144 (SP64). The group/array group mapping unit 206 subsequently returns to step SP60 when this determination yields a negative result, and then repeats the processing of steps SP60 to SP64.
When an affirmative result is eventually obtained in step SP64 as a result of the number of groups of logical volumes 145 exceeding the number of array groups 144, the group/array group mapping unit 206 selects the group or logical volume 145 with the greatest capacity from among the groups of logical volumes 145 to which an array group 144 has not yet been allocated and the logical volumes 145 which do not belong to any group and to which an array group 144 has not yet been allocated (SP65).
The group/array group mapping unit 206 then refers to the array group configuration information table 1300 and searches, among the array groups 144 already allocated to either a group or logical volume 145, array groups 144 for which the difference between the actual capacity of the array group 144 and the total capacity of the group and/or logical volume 145 allocated to that array group 144 is greater than the capacity of the group or logical volume 145 selected in step SP65 (SP66).
The group/array group mapping unit 206 then extracts, from among the array groups 144 retrieved in step SP66, those array groups 144 for which the array group reference value is not exceeded by the total of the average I/O count forecast values of the groups and so on already allocated to the array group 144 (groups and/or logical volumes 145 that do not belong to any group) plus the average I/O count forecast value or average I/O count of the group or logical volume 145 selected in step SP65 (SP67).
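The checks in steps SP66 and SP67 can be sketched as a simple filter: an already-allocated array group is a candidate when its remaining actual capacity can hold the selected group, and when adding the selected group's average I/O count forecast value does not exceed the array group reference value. The field names and sample figures below are invented for illustration.

```python
# Invented sample state for two array groups.
array_groups = [
    {"id": "AG 1", "actual_capacity": 1000, "allocated_capacity": 700,
     "allocated_io_forecast": 300, "reference_value": 500},
    {"id": "AG 2", "actual_capacity": 1000, "allocated_capacity": 400,
     "allocated_io_forecast": 450, "reference_value": 500},
]

def candidate_array_groups(groups, selected_capacity, selected_io_forecast):
    """Return array groups passing both the capacity (SP66) and I/O (SP67) checks."""
    return [g["id"] for g in groups
            if g["actual_capacity"] - g["allocated_capacity"] > selected_capacity
            and g["allocated_io_forecast"] + selected_io_forecast <= g["reference_value"]]

print(candidate_array_groups(array_groups, 200, 100))
# ['AG 1']  (AG 2 fails the reference-value check: 450 + 100 > 500)
```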
In addition, the group/array group mapping unit 206 selects a relevant group or the like from among the groups and so on already allocated to the array groups 144 extracted in step SP67, and links this group or the like to the group or the like selected in step SP65 as a single group (SP68).
The group/array group mapping unit 206 then determines whether or not there is a group to which an array group 144 has not yet been allocated or a logical volume 145 to which an array group has not been allocated and that does not belong to any group (SP69).
The group/array group mapping unit 206 returns to step SP65 when this determination yields a negative result and then repeats the processing of steps SP65 to SP69.
The group/array group mapping unit 206 terminates the group/array group mapping processing when before long an affirmative result is obtained in step SP69 as a result of completion of the allocation of array groups to all groups and all the logical volumes 145 that do not belong to any group.
Note that, in step SP68, the following method can be adopted as the method for selecting the group or the like to be linked to the group or the like selected in step SP65 from among the groups and so on already allocated to the array groups 144 extracted in step SP67. In the following description, the group or the like selected in step SP65 will be referred to as “group A” and the one or plurality of groups or the like already allocated to the array groups 144 extracted in step SP67 will be referred to as “groups B.”
Foremost, groups whose time zones with an I/O count of “0” overlap substantially with those of group A are extracted from groups B.
Specifically, each time zone (time zones for every 10 minutes, for example) is divided into a plurality of sections of equal length and, for each section, a determination is made of whether there is I/O with respect to groups A and B in that section. Thereafter, “1” is allocated to a section if there is I/O with respect to both groups A and B, or if there is no I/O with respect to either group A or B, and “0” is allocated to a section if there is I/O with respect to only one of groups A and B. The number n of sections to which “1” is allocated is divided by the total number N of sections. The closer a group B's division result (n/N) is to 1, the greater the overlap between its time zones with an I/O count of “0” and those of group A.
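This section-agreement measure can be sketched as follows, assuming each group's I/O is given as a per-section list of I/O amounts (the sample amounts are invented): a section scores “1” when groups A and B agree on whether I/O occurs, and n/N close to 1 indicates a large overlap of zero-I/O time zones.

```python
def overlap_ratio(a_io, b_io):
    """n/N: fraction of sections where A and B agree on whether I/O occurs.

    a_io, b_io: per-section I/O amounts for groups A and B (same length).
    """
    agree = sum(1 for x, y in zip(a_io, b_io) if (x > 0) == (y > 0))
    return agree / len(a_io)

# Invented sample data: six sections of one time zone.
a = [5, 0, 0, 3, 0, 0]
b = [2, 0, 0, 0, 0, 1]
print(round(overlap_ratio(a, b), 3))  # 0.667 (4 of 6 sections agree)
```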
Thereafter, from among the groups B for which the overlap with group A of time zones with an I/O count of “0” is large (no less than a threshold, for example), a group or the like is selected such that the average I/O counts in the respective time zones are balanced when the I/O with respect to group A and the I/O with respect to groups B are combined.
Specifically, if we let the I/O amount of group A in each section in a certain time zone be Xi (i=1, 2, . . . N) and let the I/O amount of group B in each section in the same time zone be Yi (i=1, 2, . . . N), a correlation between a data string {Xi} and a data string {Yi} is found, and a group or the like for which a correlation K is closest to “−1” (negative correlation) is selected from among groups B for which there is a large overlap with group A of time zones with an I/O count of “0.” This is because the closer the correlation K is to “−1,” the smaller the I/O of the groups B in the sections with a large group A I/O; hence, adding together the I/O amounts of groups A and B in the respective sections has the effect of balancing the totals of the overall I/O amounts across the whole time zone. Here, the sections for which the I/O amounts of groups A and B are both “0” are excluded from the data string targets for calculating this correlation.
In other words, when the grouping result storage table 209 is updated by the group/array group mapping unit 206, the reduction rate calculation unit 210 starts the reduction rate calculation processing, first referring to the array group performance information table 900 (
The reduction rate calculation unit 210 then acquires information indicating the association between array groups 144 and logical volumes 145 after the configuration change from the grouping result storage table 209, and calculates an operating time forecast value per year for each array group 144 after the configuration change (SP71).
The reduction rate calculation unit 210 then calculates the difference in the operating time forecast values before and after the configuration change for each array group 144 (SP72). The reduction rate calculation unit 210 also uses the difference in operating time forecast values before and after the configuration change for each array group 144 obtained in step SP72 to calculate, for each of the array groups 144, the power consumption reduction rate before and after the configuration change, and stores the calculation results in the reduction rate storage table 211 (SP73).
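The per-group calculation in steps SP72 and SP73 can be illustrated as follows (a sketch under the assumption that the reduction rate is the drop in forecast operating time relative to the pre-change forecast; the text does not give the exact formula, and the function names are illustrative):

```python
def operating_time_difference(hours_before, hours_after):
    """SP72: difference in the yearly operating-time forecast values
    before and after the configuration change for one array group."""
    return hours_before - hours_after

def reduction_rate(hours_before, hours_after):
    """SP73: power consumption reduction rate, assumed here to be the
    operating-time difference relative to the pre-change forecast."""
    return operating_time_difference(hours_before, hours_after) / hours_before
```
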
The reduction rate calculation unit 210 then acquires, for each array group 144, the type and number of memory apparatuses 142 configuring the array group 144, and obtains the power consumption amount per hour of each array group 144 (SP74).
The reduction rate calculation unit 210 then calculates, for each array group 144, the power consumption amount before and after the configuration change and the power consumption reduction amount, based on the operating time forecast value per year of each array group 144 before the configuration change obtained in step SP70, the operating time forecast value per year of each array group 144 after the configuration change obtained in step SP71, and the power consumption amount per hour of each array group 144 obtained in step SP74, and stores the calculation results in the reduction rate storage table 211 (FIG. 18) (SP75). The reduction rate calculation unit 210 then ends the reduction rate calculation processing.
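Step SP75 can be sketched like this (hypothetical; it assumes consumption scales linearly with forecast operating time at the fixed per-hour figure obtained in step SP74):

```python
def power_amounts(hours_before, hours_after, kwh_per_hour):
    """Yearly power consumption of one array group before and after the
    configuration change, plus the amount saved (SP75)."""
    before = hours_before * kwh_per_hour
    after = hours_after * kwh_per_hour
    return before, after, before - after
```
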
The grouping display unit 212 starts grouping display processing shown in
However, when the determination of step SP80 yields a negative result, the grouping display unit 212 derives, for each group following the configuration change, the device files, file systems, applications, logical volumes 145, and array groups 144 associated with the groups on the basis of the device file/file system-logical volume association table 1000 (
The grouping display unit 212 then refers to the application performance information table 700 (
The grouping display unit 212 then refers to the array group configuration information table 1300 (
In addition, the grouping display unit 212 refers to the response time threshold table 216 (
The grouping display unit 212 then refers to the reduction rate storage table 211 (
The grouping display unit 212 subsequently generates screen data of the grouping result display screen 401 (
The grouping display unit 212 then waits for the migration execution button 403 to be clicked and, when the button is clicked, starts up the migration control unit 213.
Once started up by the grouping display unit 212, the migration control unit 213 starts the migration control processing, first determining whether all the groups are subject to the configuration change (SP90).
When this determination yields an affirmative result, the migration control unit 213 refers to the logical volume/array group association table 1200 and derives the difference in configurations before and after the configuration change for every group (SP91).
However, when the determination of step SP90 yields a negative result, the migration control unit 213 acquires the group identifier of each group subject to the configuration change from the grouping display unit 212 (SP92), and derives, in each case, the difference in configurations before and after the configuration change for each group subject to the configuration change on the basis of the acquired group identifier and the logical volume/array group association table 1200 (SP93).
The migration control unit 213 then controls the storage apparatuses 103 so that, on the basis of the difference in configurations before and after the configuration change derived in step SP91 or step SP93 for each group subject to the configuration change, the storage apparatuses 103 migrate the data stored in the corresponding array groups 144 before the configuration change to the corresponding array groups 144 after the configuration change (SP94).
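The configuration difference that drives this migration step can be pictured as a simple mapping diff (a hypothetical sketch; the actual association table 1200 and the commands issued to the storage apparatuses 103 are implementation specific):

```python
def migration_plan(mapping_before, mapping_after):
    """Logical volumes whose array-group assignment changes, together with
    their source and destination array groups -- the SP91/SP93 difference
    that the migration of step SP94 acts upon."""
    return {vol: (mapping_before[vol], group)
            for vol, group in mapping_after.items()
            if mapping_before.get(vol) != group}
```
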
The migration control unit 213 then ends the migration control processing.
As described earlier, in the computer system 100 according to this embodiment, when the logical volumes 145 defined in the storage apparatuses 103 are grouped, the I/O count of each logical volume 145 is also taken into account, so these resources can be grouped without creating a bottleneck that degrades response performance. A reliable computer system that enables power savings for storage apparatuses while preventing a drop in response performance can accordingly be realized.
Furthermore, in the computer system 100 of this embodiment, the amount of power that can be saved by a new grouping (configuration change) of the logical volumes 145 is presented to the user on the grouping result display screen 401, so the user can weigh the trade-off with power consumption before changing the configuration of the devices coupled to an application with an undesirable load.
Note that although the embodiment hereinabove described a case where, in step SP61 of the group/array group mapping processing mentioned earlier with respect to
Moreover, although the foregoing embodiment described a case where any of the groups are continually divided without paying attention to the I/O count of the group of logical volumes 145 until the number of groups of logical volumes 145 exceeds the number of array groups in the group/array group mapping processing mentioned earlier with respect to
Moreover, the foregoing embodiment described a case where there is no synergy between the application response time and the array group reference value of the array group 144, but the present invention is not limited to such an arrangement. The array group reference value of the corresponding array group 144 may be lowered and mapping of the logical volumes 145 and array groups 144 may be performed once again if the application response time exceeds the response time threshold configured by the user for the application, for example.
Moreover, although the foregoing embodiment described a case where the logical volumes 145 are used as the resources of the same type for which time zones with zero access by the host apparatus occur periodically, and the array groups 144 serve as the memory apparatus groups, the present invention is not limited to such an arrangement. Resources of the same type other than the logical volumes 145 may also be adopted and the memory apparatus group may be an entity other than the array group 144; hence, the present invention has a wide range of possible applications.
The present invention is widely applicable to a management apparatus for managing power savings for storage apparatuses in a computer system that comprises host servers and storage apparatuses.
Number | Date | Country | Kind |
---|---|---|---|
2010-061203 | Mar 2010 | JP | national |