According to recent trends, there has been a tremendous increase in the amount of computer equipment making up information systems, such as data centers and the like. The equipment typically included in such information systems, including switches and storage systems, may be added to the information systems on an on-demand basis or on short notice, while older equipment in the information systems is often removed and used for other purposes or discarded. Thus, the equipment in an information system typically has a limited lifecycle. However, as the amount of equipment in these information systems increases, it has become more difficult for administrators to manage the multitude of equipment while also meeting the requirements of the various users of the information systems, even when conventional management technologies such as a CMDB (configuration management database) are employed for managing the equipment in the information systems.
Furthermore, it has become advantageous for the resources of information systems to be implemented in a logically-partitioned manner for the users and applications using the information systems, so that the equipment in the information systems can be used more efficiently. For example, ideally, a logically-partitioned information system is able to provide independent name spaces or ID spaces for users and/or applications. Thus, names or identifiers (IDs), storage areas, data, objects, processes, functions and resources in a logical partition should be able to be handled and managed independently of each other, regardless of the usage or operations taking place in other partitions in the information system. For example, a logical partition in the information system should provide its services, and the engaged quality of those services, regardless of the services provided to the other partitions. Additionally, a partition should be able to dynamically provide or modify services, the engaged quality of the services, and the engaged conditions of resources, regardless of other changes taking place in the information system's configuration. However, conventional management technologies used in information systems are unable to provide effective partitioning in a dynamic and flexible cloud of computing resources in which various equipment and resources can be replaced with no noticeable effect on the users of the numerous partitions in the information system.
Related art includes U.S. Pat. No. 7,222,172 to Arakawa et al., entitled “Storage System Having Virtualized Resource”; US Pat. Appl. Pub. No. 2006/0010169, to M. Kitamura, entitled “Hierarchical Storage Management System”; and IBM Redbook entitled “IBM System Storage DS8000 Series: Architecture and Implementation”, fourth edition, April 2008, the entire disclosures of which are incorporated herein by reference.
Exemplary embodiments of the invention provide for partitioned storage resources and services in an information system, whereby the information system can be used in a partitioned manner by users and applications regardless of changes that take place in the configuration of the information system, such as addition, removal or replacement of equipment. These and other features and advantages of the present invention will become apparent to those of ordinary skill in the art in view of the following detailed description of the preferred embodiments.
The accompanying drawings, in conjunction with the general description given above, and the detailed description of the preferred embodiments given below, serve to illustrate and explain the principles of the preferred embodiments of the best mode of the invention presently contemplated.
In the following detailed description of the invention, reference is made to the accompanying drawings which form a part of the disclosure, and in which are shown by way of illustration, and not of limitation, exemplary embodiments by which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. Further, it should be noted that while the detailed description provides various exemplary embodiments, as described below and as illustrated in the drawings, the present invention is not limited to the embodiments described and illustrated herein, but can extend to other embodiments, as would be known or as would become known to those skilled in the art. Reference in the specification to “one embodiment”, “this embodiment”, or “these embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention, and the appearances of these phrases in various places in the specification are not necessarily all referring to the same embodiment. Additionally, in the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that these specific details may not all be needed to practice the present invention. In other circumstances, well-known structures, materials, circuits, processes and interfaces have not been described in detail, and/or may be illustrated in block diagram form, so as to not unnecessarily obscure the present invention.
Furthermore, some portions of the detailed description that follow are presented in terms of algorithms and symbolic representations of operations within a computer. These algorithmic descriptions and symbolic representations are the means used by those skilled in the data processing arts to most effectively convey the essence of their innovations to others skilled in the art. An algorithm is a series of defined steps leading to a desired end state or result. In the present invention, the steps carried out require physical manipulations of tangible quantities for achieving a tangible result. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals or instructions capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, instructions, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing”, “computing”, “calculating”, “determining”, “displaying”, or the like, can include the actions and processes of a computer system or other information processing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other information storage, transmission or display devices.
The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs. Such computer programs may be stored in a computer-readable storage medium, such as, but not limited to optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of media suitable for storing electronic information, and may be accessible locally, via a network or other communications media, or both. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs and modules in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform desired method steps. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein. The instructions of the programming language(s) may be executed by one or more processing devices, e.g., central processing units (CPUs), processors, or controllers.
Exemplary embodiments of the invention, as will be described in greater detail below, provide apparatuses, methods and computer programs for partitioning resources in an information system. In some embodiments, the disclosed information system comprises host computers, switches, storage servers, storage systems and management computers. For example, the management computers manage and assign resources provided by the storage servers, storage systems and switches to each partition. The resources of the information system include functions provided by the storage systems and switches. By using the assigned resources, a partition provides storage services to the host computers and the applications running on the host computers. When a configuration change takes place in the information system, such as addition or removal of equipment, the management computers perform reassignment of partition resources, manage migration of services and/or data, and otherwise maintain the allotted partitions for the users and/or applications. In addition to managing the services and the quality of the services in the partitions, the engaged conditions regarding the resources are also maintained. Moreover, these principles of the partition are maintained when a partition is relocated within the information system.
Exemplary System Configuration
According to exemplary embodiments,
As illustrated in
As illustrated in
Storage area service 701 may be provided according to exemplary embodiments of the invention for providing an independent name space or ID space to a partition 101 regardless of the existence of other partitions 101. The availability of the name or ID of a storage area, data, objects, processes, functions and resources of a partition can be handled according to exemplary embodiments of the invention regardless of the usage of other partitions. Also, clients, such as hosts 500, applications, users, or the like, are able to specify the quality of the storage area within the partition 101, such as guaranteed performance, or the like. For example, clients of this service can specify attributes such as the location of a storage area 800, for example, disk type (e.g., FC disk, SAS disk, SATA disk, Flash memory, etc.), actual location (e.g., at a separated area from a failure event perspective, at a separate storage system from the original data, or at a remote site, etc.) and methods used for ensuring reliability or availability (e.g., mirroring (RAID1), RAID5, distributed storing of data, and the like).
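As an illustration of the attribute specification described above, the following is a minimal Python sketch of a storage-area request; the class and field names are hypothetical assumptions made for this example and are not part of the disclosed embodiments.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StorageAreaRequest:
    """Hypothetical request to storage area service 701 within one partition."""
    partition_id: str            # name/ID space is private to the partition
    capacity_gb: int
    disk_type: str               # e.g., "FC", "SAS", "SATA", "Flash"
    actual_location: str         # e.g., "local", "separate_system", "remote_site"
    protection: str              # e.g., "RAID1", "RAID5", "distributed"
    guaranteed_iops: Optional[int] = None   # optional quality (performance) guarantee

# Example: a client asks for a mirrored Flash storage area held at a remote site.
request = StorageAreaRequest(partition_id="partition-A", capacity_gb=500,
                             disk_type="Flash", actual_location="remote_site",
                             protection="RAID1", guaranteed_iops=10000)
```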
Backup service 702 may be provided according to exemplary embodiments of the invention for enabling backup and/or recovery operations and backup data for data stored in the partition 101. For example, backup service 702 may provide various backup and recovery methods, such as full backup, incremental backup, differential backup, backup to disk, backup to tape, local replication, remote replication, and CDP (continuous data protection). Backup service 702 may use storage area service 701 or replication service 703 for some of these features. Using backup service 702, clients are able to recall backup data to recover data, since this service manages information regarding the backup data (i.e., cataloging). Clients of backup service 702 can specify a schedule for carrying out backup operations. In addition, clients of backup service 702 can specify attributes regarding the location at which the backup data is stored, such as disk type (e.g., FC disk, SAS disk, SATA disk, Flash memory, or the like), actual location (e.g., at a separated area from a failure event perspective, at a separate storage system from the original data, or at a remote site, etc.) and methods for ensuring reliability and availability (e.g., mirroring (RAID1), RAID5, distributed storing of data, or the like).
Replication service 703 may be provided according to exemplary embodiments of the invention for managing and enabling replication of data for various purposes, such as achieving availability, increasing performance (load balancing) and data mining or OLAP (Online Analytical Processing). Replication service 703 provides various replication methods such as file (or object) replication and volume replication; full replication and logical replication (e.g., copy-on-write or snapshot); and local replication and remote replication. Replication service 703 may use storage area service 701 for carrying out some features. Using replication service 703, clients are able to recall replica data since this service manages the information regarding replicas. In addition, clients of replication service 703 can specify attributes regarding the location at which the replicated data is stored, such as disk type (e.g., FC disk, SAS disk, SATA disk, Flash memory, or the like), actual location (e.g., at a separated area from a failure event perspective, at a separate storage system from original data or at a remote site, etc.) and methods for reliability and availability (e.g., mirroring (RAID1), RAID5, distributed storing of data, or the like).
Disaster recovery service 704 may be provided according to exemplary embodiments of the invention for managing and providing disaster recovery operations and replication for disaster recovery capability. Disaster recovery service 704 provides various methods, such as server-based copying, storage-system-based copying and switch-based copying; synchronous remote copying and asynchronous remote copying; file (or object) copying and volume copying; and conventional remote copy and remote CDP (continuous data protection). Disaster recovery service 704 may use storage area service 701 or replication service 703 for carrying out some features. Using disaster recovery service 704, clients can use secondary data, since this service manages information regarding copy relations and secondary data. Clients of disaster recovery service 704 can also specify the expected (or required) performance of the secondary system. In addition, clients of disaster recovery service 704 can specify attributes regarding the location at which the replicated data is stored, such as disk type (e.g., FC disk, SAS disk, SATA disk, Flash memory, or the like) and methods for reliability and availability (e.g., mirroring (RAID1), RAID5, distributed storing of data, and the like).
Archive service 705 may be provided according to exemplary embodiments of the invention for managing and providing archiving operations and archive data for the data stored in a partition 101. Archive service 705 may use storage area service 701 for some features. Clients of archive service 705 can specify a schedule (conditions) for archiving data, retention policies regarding protection against modification of archived data, and the like. Clients can also order shredding of the data after expiration of the retention period. Furthermore, clients of archive service 705 can specify the metadata stored for each item of archived data. In addition, clients of archive service 705 can specify various attributes, such as the location at which the archived data is stored, for example, disk type (e.g., FC disk, SAS disk, SATA disk, Flash memory, and the like), actual location (e.g., at a separated area from a failure event perspective, at a separate storage system from the original data, or at a remote site, etc.) and methods for reliability and availability (e.g., mirroring (RAID1), RAID5, distributed storing of data, and the like).
Search/analyze service 706 may be provided according to exemplary embodiments of the invention for enabling search capabilities for searching data stored in a partition 101, especially, for example, archived data and backup data. For example, by specifying a search key such as a keyword or value stored in metadata, clients of search/analyze service 706 can obtain the results of a search regarding the data in one of partitions 101. Search/analyze service 706 also provides results of analysis regarding the data stored in the partition 101 by specifying conditions and/or logic used to analyze the data.
Access control service 707 may be provided according to embodiments of the invention for enabling access control capabilities for maintaining security regarding the data stored in one of partitions 101. To realize this, various technologies such as authorization, LUN masking, zoning, FC-SP (Fibre Channel-Security Protocol), VLAN (virtual LAN) and IP security can be used. In other words, such functions available in the various equipment can be used for access control service 707. Access control service 707 may be performed based on user management for the information system.
Life cycle management service 708 may be provided according to embodiments of the invention for enabling management and storing of data and information in appropriate areas according to a life cycle of the data or information. This includes management methods known as HSM (hierarchical storage management) or tier management. Clients of life cycle management service 708 can specify a schedule (or conditions) for initiating migration of data to other areas (tiers) according to a life cycle of the data. In addition, clients of life cycle management service 708 can specify attributes regarding a new location of the migrated data, such as disk type (e.g., FC disk, SAS disk, SATA disk, Flash memory, and the like) and methods for reliability and availability of the data (e.g., mirroring (RAID1), RAID5, distributed storing of data, and the like).
Furthermore, a portal service 700 may be provided in each partition 101 for providing a common interface to all the other services 701-708 in the partition 101. Also, in addition to the services discussed above, other sorts of services, such as an encryption service and a data streaming service, can be provided in some cases, if desired. Additionally, with regard to replication service 703 and disaster recovery service 704, the storage area selected to store replicated data or secondary data may be automatically selected from storage areas 800 which have the same storage attributes (e.g., disk type and reliability/availability) as the original (primary) data.
Configuration of Storage Server Computer
Service requester information 541 records the requester of each service provided by the storage server 510.
Service resource information 542 maintains information about the resources assigned to provide each service. The process to assign and manage resources is described later.
Service status information 543 maintains the status of the services, such as the availability of the services.
In addition to an OS 544, memory 540 stores the following modules/programs, and processor 511 performs various processes regarding the storage server computer 510 by executing these modules/programs.
Storage area server 551 is a program used to provide storage area service 701. According to a request from a requester (e.g., an application 501), storage area server 551 supplies a storage area 800 from the storage areas 800 provided by storage system 102.
Backup server 552 is a program used to provide backup service 702. In addition to a storage area 800, backup server 552 may use functions provided by storage system 102, such as local replication, continuous data protection, remote replication, deduplication and compression.
Replication server 553 is a program used to provide replication service 703. In addition to a storage area 800, replication server 553 may use functions provided by storage system 102, such as local replication, logical snapshot and remote replication.
Disaster recovery server 554 is a program used to provide disaster recovery service 704. In addition to a storage area 800, disaster recovery server 554 may use functions provided by storage system 102, such as remote replication.
Archive server 555 is a program used to provide archive service 705. In addition to a storage area 800, archive server 555 may use functions provided by storage system 102, such as local replication, logical replication, remote replication, WORM (write once, read many protection) with retention management, shredding, deduplication, compression and integrity checking by using hash values or comparison with replicas.
Search/analyze server 556 is a program used to provide search/analyze service 706. This program works as a so-called search engine, with index management. Search/analyze server 556 may support an external search engine.
Access control server 557 is a program used to provide access control service 707. Access control server 557 may use functions provided by storage system 102, such as access control including LUN masking and authentication.
Life cycle management server 558 is a program used to provide life cycle management service 708. In addition to a storage area 800, life cycle management server 558 may use functions provided by storage system 102, such as transparent data relocation (migration).
File access program 559 is a program used to provide means to access files (objects) stored in storage system 102 via file access protocols such as NFS (network file system protocol) and CIFS (common Internet file system protocol). That is, file access program 559 recognizes file access commands and processes them.
Block access program 560 is a program used to provide means to access data stored in storage system 102 via block access protocols such as FC (Fibre Channel), iSCSI (internet SCSI) and FCoE (Fibre Channel over Ethernet).
Information migration program 545 is used to migrate the above-discussed information 541-543 from one storage server 510 to another storage server 510. A related process is described below.
Storage server 510 also has QoS control capability for each service to realize the partition 101 described above. In addition to this QoS control capability, such as guaranteeing performance, storage server computer 510 may use functions provided by storage system 102 such as QoS control and cache control.
Configuration of Management Computer
Partition information 531 maintains information regarding whether a partition 101 corresponding to a given partition ID exists or not, the types of services provided in each partition 101, and the engaged conditions regarding each service, such as the disk type being used, the actual location of data, and the methods implemented for reliability/availability, as mentioned above.
Service information 532 maintains information regarding resources required to provide each service and resources required to satisfy a particular condition, such as disk type, actual location and methods for reliability/availability.
Resource information 533 maintains information regarding a list of resources currently existing in storage platform 100. The resources listed include storage servers 510 (i.e., each server and its computing resources), storage areas and functions available in storage system 102. Resource information 533 also maintains the status and availability condition (e.g., free, used, failed) of each listed resource.
Asset configuration information 534 maintains information regarding a list of assets (i.e., equipment) currently present in storage platform 100. The assets include storage server computers 510, management computer 520, storage systems 102 and switches 910. Asset configuration information 534 also maintains the status and availability condition (e.g., free, used, failed) of each listed asset and the configuration of the equipment. By using this configuration information, management computer 520 can detect a change in the configuration of storage platform 100.
Assignment information 535 maintains information about the resources assigned to each service in each partition 101. In other words, assignment information 535 includes a mapping between resources and each service in each partition 101.
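To make the relationships among these tables concrete, the following is a minimal Python sketch of information 531, 533 and 535; the structures and field names are illustrative assumptions, not the disclosed data formats.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Resource:                   # one entry of resource information 533
    resource_id: str
    kind: str                     # e.g., "storage_server", "storage_area", "function"
    status: str = "free"          # "free", "used" or "failed"

@dataclass
class Partition:                  # one entry of partition information 531
    partition_id: str
    services: List[str] = field(default_factory=list)         # e.g., ["backup"]
    conditions: Dict[str, str] = field(default_factory=dict)  # disk type, location, ...

@dataclass
class ManagementTables:
    partitions: Dict[str, Partition] = field(default_factory=dict)   # 531
    resources: Dict[str, Resource] = field(default_factory=dict)     # 533
    # assignment information 535: (partition ID, service) -> assigned resource IDs
    assignments: Dict[Tuple[str, str], List[str]] = field(default_factory=dict)
```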
In addition to OS 536, memory 530 stores the following programs/modules, and processor 521 performs various processes on the management computer 520 by executing these programs/modules.
Clustering program 537 is a module used by management computer 520 to achieve clustering with another management computer 520. In other words, with clustering program 537, the management computer 520 can transfer the above-described information to the other management computer 520, and the other management computer 520 can take over processes from the former management computer 520 when a configuration change is carried out in the information system that includes replacing the former management computer 520.
Partition manager 538 is a program that performs processes for generating and deleting a partition 101. The details of these processes are described additionally below with reference to
Resource manager 539 is a program that manages resource information 533 and asset information 534. Resource manager module 539 detects or handles a change of the resources in storage platform 100. The details of this module are described below with reference to
Assignment manager 540 is a program that manages assignment of resources to each service in each partition 101. Therefore, assignment manager program 540 manages assignment information 535 which specifies which resources of storage platform 100 are assigned to which partitions 101. The details of this module are described below with reference to
Furthermore, while the programs, data structures and functions of the management computer are described in these embodiments as being implemented on a physically separate management computer 520, in other embodiments some or all of these programs, data structures and/or functions may be implemented using other computers or processing devices. Thus a management module may incorporate the functionality of the management computer 520 and be executed by one or more of the processing devices in the information system. For example, one or more of storage server computers 510 may implement the functionality of management computer 520, as described above and below, such as through installation of a management module or program on one or more of storage server computers 510, or the like. Additionally or alternatively, one or more of storage systems 102 may implement some or all of the functionality of management computer 520, as described above and below. Other alternative configurations will also be apparent to those of skill in the art in light of the disclosure provided herein, and the invention is not limited to any particular physical or logical configuration for management computer 520.
Configuration of Storage System
Function management information 201 maintains information regarding processes of each function of storage system 102. Examples of function management information 201 include: the target area/data of each function; conditions for copy processing; a source and destination of copying (i.e., copy relation or copy pair); retention periods regarding WORM data; and mappings between logical (or virtual) storage areas and physical storage areas.
Function resource information 202 maintains records of resources to be used for carrying out each function.
Function status information 203 maintains the status of each function, such as the availability of the functions.
Main processor 111 of storage system controller 110 provides functions or features by executing the following programs/modules stored in memory 200 of storage system controller 110.
Volume management function 211 creates and manages the volumes in storage system 102. More details regarding this function may be garnered from U.S. Pat. No. 7,222,172, which was incorporated herein by reference above, by referring to “Definition of volumes” and “Volume management”, such as with regards to the description of FIG. 3 in U.S. Pat. No. 7,222,172.
Local replication function 212 manages replication and snapshots in storage system 102. More details regarding this function may be garnered from U.S. Pat. No. 7,222,172, which was incorporated herein by reference above, by referring to “snapshots”, such as with regards to the description of FIG. 3 in U.S. Pat. No. 7,222,172.
Logical snapshot function 213 is a function or module that provides a logical snapshot without an actual secondary storage area. One example of a method used to achieve this is maintaining old data by copy-on-write. The management method for these snapshots is almost the same as that for the local replication mentioned above.
Remote replication function 214 provides for remote replication from storage system 102. More details regarding this function may be garnered from U.S. Pat. No. 7,222,172, which was incorporated herein by reference above, by referring to “remote replication”, such as with regards to the description of FIG. 3 in U.S. Pat. No. 7,222,172.
WORM function 215 is a function or module that provides write protection (prohibition against modification of the data) based on a predetermined retention period. After the retention period has expired, shredding of the data may be performed.
QoS control function 216 manages the quality of service in storage system 102. More details regarding this function may be garnered from U.S. Pat. No. 7,222,172, which was incorporated herein by reference above, by referring to “port control”, such as with regards to the description of FIG. 3 in U.S. Pat. No. 7,222,172. In addition, QoS control function 216 can control QoS (performance) of each component (resource) such as main processor 111, cache 300, internal switch 112, disk controller 400, and network interfaces in the storage system 102.
Access control function 217 provides LUN masking, authentication, and so on. Protocol standard specifications such as FC-SP and IPSec are also available. More details regarding access control function 217 may be garnered from U.S. Pat. No. 7,222,172, which was incorporated herein by reference above, by referring to “security control”, such as with regards to the description of FIG. 3 in U.S. Pat. No. 7,222,172.
Data relocation function 218 provides for relocation of data. More details regarding this function may be garnered from U.S. Pat. No. 7,222,172, which was incorporated herein by reference above, by referring to “volume relocation”, such as with regards to the description of FIG. 3 in U.S. Pat. No. 7,222,172. In addition, as disclosed in US Pat. Appl. Pub. No. 2006/0010169, to M. Kitamura, which was incorporated by reference herein above, relocation with finer units (e.g. extent, page, segment, block, etc.) can be performed in any similar manner, and relocation of files is also available.
Thin provisioning function 219 is a function or module that realizes use of storage areas on an on-demand basis by assigning storage areas from a common storage area pool only when a storage area is actually needed for use.
Deduplication function 220 is a function or module that reduces wasted space in a storage system by avoiding the storage of redundant data. Typically, as is known in the art, the deduplication function detects duplication of contents through comparison of the contents or their hash values. That is, deduplication function 220 realizes a reduction in consumption of actual storage area by locating and deleting redundant data.
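As a simplified sketch of the hash-based detection described above (not the actual implementation of function 220), the following Python fragment stores each unique content block once and maps later duplicates onto the existing copy; a production implementation would also verify content on hash collisions.

```python
import hashlib

class DedupStore:
    """Toy deduplicating store: one physical copy per unique content block."""
    def __init__(self):
        self.blocks = {}          # content hash -> stored block
        self.refs = {}            # logical address -> content hash

    def write(self, address: int, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:     # store new content only once
            self.blocks[digest] = data
        self.refs[address] = digest       # redundant data becomes a reference

    def read(self, address: int) -> bytes:
        return self.blocks[self.refs[address]]

store = DedupStore()
store.write(0, b"same payload")
store.write(1, b"same payload")           # duplicate: no additional storage consumed
assert len(store.blocks) == 1
```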
Compression function 221 is a function or module that realizes a reduction in consumption of actual storage capacity by compression/decompression of the data stored in a storage system.
Cache control function 222 is a function or module that controls the use of cache 300. More details regarding this function may be garnered from U.S. Pat. No. 7,222,172, which was incorporated herein by reference above, by referring to “cache control”, such as with regards to the description of FIG. 3 in U.S. Pat. No. 7,222,172. In addition, cache control function 222 also provides separated use of cache 300 for different users.
Read process program 223 is a function or module that performs the processes necessary for read access.
Write process program 224 is a function or module that performs the processes necessary for a write access.
Other types of functions and processes regarding data stored in the storage system 102 may also be performed in addition to those discussed above. Function management information 201 and function resource information 202 maintain information regarding these functions as mentioned above.
Moreover, memory 200 has the following programs, in addition to OS 204.
Information migration program 205 is a module used to migrate the above-described information 201-203 from one storage system 102 to another storage system 102.
Data migration program 206 is a module used to migrate data stored in storage system 102 from the storage system 102 to another storage system 102. An exemplary process using these programs is described below with respect to
Process to Generate a Partition
At step 1001, management computer 520 receives a request from a user or an application 501 to generate a partition 101.
At step 1002, management computer 520 updates partition information 531.
At step 1003, management computer 520 determines the services to be provided. For example, a default set of services can be predetermined. As another example of this method, the above request received in step 1001 can include information to specify the services to be provided.
At step 1004, management computer 520 refers to service information 532 to determine any resources necessary for providing the services identified in step 1003.
At step 1005, management computer 520 refers to resource information 533 to locate the resources identified in step 1004.
At step 1006, management computer 520 selects the resources to be used to provide the services.
At step 1007, management computer 520 updates the resource information 533 to obtain the resources selected in step 1006.
At step 1008, management computer 520 updates assignment information 535 to show that the selected resources have been assigned to the newly-generated partition.
At step 1009, management computer 520 reports the completion of generating the new partition 101 to the requester.
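The following is a minimal Python sketch of the partition-generation flow of steps 1001-1009; the plain-dictionary table layout is an assumption made for illustration and does not represent the actual format of information 531, 533 and 535.

```python
def generate_partition(tables, partition_id, requested_services=None):
    """Sketch of steps 1001-1009 performed by management computer 520."""
    services = requested_services or ["storage_area"]    # step 1003: default service set
    tables["partitions"][partition_id] = {"services": services}      # step 1002
    for service in services:
        # Steps 1004-1006: consult service/resource information and pick a free resource.
        free = [rid for rid, r in tables["resources"].items() if r["status"] == "free"]
        if not free:
            raise RuntimeError("no free resources for service " + service)
        chosen = free[0]
        tables["resources"][chosen]["status"] = "used"   # step 1007: obtain the resource
        tables["assignments"][(partition_id, service)] = [chosen]    # step 1008
    return "completed"                                   # step 1009: report to requester

tables = {"partitions": {}, "assignments": {},
          "resources": {"res-1": {"status": "free"}, "res-2": {"status": "free"}}}
generate_partition(tables, "partition-A", ["storage_area", "backup"])
```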
Process for Deletion of a Partition
At step 1101, management computer 520 receives a request from a user or an application 501 to delete a particular partition 101.
At step 1102, management computer 520 instructs the related storage server(s) to stop services in the partition 101.
At step 1103, management computer 520 updates assignment information 535 to release the resources for the partition 101.
At step 1104, management computer 520 updates resource information 533 to release the resources for the partition 101.
At step 1105, management computer 520 updates partition information 531 to delete the specified partition 101 from the partition information 531.
At step 1106, management computer 520 reports the completion of deletion of the partition 101 to the requester.
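A corresponding sketch of the deletion flow of steps 1101-1106, under the same illustrative table layout as the generation sketch above:

```python
def delete_partition(tables, partition_id):
    """Sketch of steps 1101-1106; stopping the services (step 1102) is elided."""
    for key in [k for k in tables["assignments"] if k[0] == partition_id]:
        for rid in tables["assignments"].pop(key):       # step 1103: release assignment
            tables["resources"][rid]["status"] = "free"  # step 1104: release the resource
    del tables["partitions"][partition_id]               # step 1105: remove the entry
    return "completed"                                   # step 1106: report to requester
```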
Process to Provide a Service
At step 1201, a storage server computer 510 assigned to the targeted partition 101 receives a request from a user or an application 501 for a service. This request can specify certain conditions regarding the requested service, such as attributes regarding the location for storage of the data, for example, disk type (e.g., FC disk, SAS disk, SATA disk, Flash memory), actual location (e.g., at a separated area from a failure event perspective, at a separate storage system from the original data, or at a remote site, etc.), reliability/availability, backup cycle/scheme and/or a retention period.
At step 1202, the storage server 510 updates service requester information 541 to record the requester.
At step 1203, the service program (i.e., the server module of the requested service) in the storage server 510 requests management computer 520 to assign resources to provide the service while also satisfying the specified conditions.
At step 1204, management computer 520 refers to service information 532 to determine the necessary resources.
At step 1205, management computer 520 selects the resources necessary to provide the requested service and satisfy any specified conditions.
At step 1206, management computer 520 refers to resource information 533 to seek the resources.
At step 1207, management computer 520 determines the resources required to provide the requested service under the specified conditions.
At step 1208, when management computer 520 is able to locate the necessary resources, the process proceeds to step 1209. On the other hand, when management computer 520 is not able to locate the necessary resources, the process proceeds to step 1215.
At step 1209, management computer 520 updates the resource information 533 to obtain the identified resources.
At step 1210, management computer 520 updates assignment information 535 for the identified resources.
At step 1211, management computer 520 informs the storage server 510 of the assigned resources.
At step 1212, with the information received from management computer 520, the server module that requested assignment of the resources (i.e., one of the server modules 551-558) updates the service resource information 542.
At step 1213, the server module provides the requested service according to any requested conditions.
At step 1214, the server module updates the service status information 543.
At step 1215, management computer 520 informs the storage server 510 of failure in obtaining the necessary resources.
At step 1216, storage server 510 reports to the requester that the requested service is not available.
At step 1217, the storage server 510 updates the service requester information 541.
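A condensed Python sketch of this provisioning flow follows, with the success and failure branches of step 1208; checking only the disk type is a simplifying assumption, as is the table layout.

```python
def provide_service(tables, partition_id, service, conditions):
    """Sketch of steps 1201-1217 split between storage server and management computer."""
    # Step 1202: recording the requester in information 541 is elided here.
    # Steps 1203-1207: seek free resources satisfying the specified conditions.
    match = [rid for rid, r in tables["resources"].items()
             if r["status"] == "free"
             and r.get("disk_type") == conditions.get("disk_type")]
    if match:                                            # step 1208: resources located
        rid = match[0]
        tables["resources"][rid]["status"] = "used"      # step 1209: obtain the resource
        tables["assignments"][(partition_id, service)] = [rid]       # step 1210
        return {"status": "providing", "resource": rid}  # steps 1211-1214
    return {"status": "unavailable"}                     # steps 1215-1217: failure path

tables = {"partitions": {}, "assignments": {},
          "resources": {"res-1": {"status": "free", "disk_type": "SAS"}}}
provide_service(tables, "partition-A", "storage_area", {"disk_type": "SAS"})
```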
Accordingly, it may be seen that through the above process, the requested service is provided. This process can also operate to modify services that are already being supplied, as well as the conditions regarding such services. A method similar to that described next can be used when a release of resources is needed for the modification of the conditions regarding services already being supplied.
Process to Stop a Service
At step 1301, storage server 510 assigned to the targeted partition 101 receives a request to stop a service from a user or an application 501.
At step 1302, storage server 510 refers to service requester information 541.
At step 1303, the targeted service program (i.e., one of the server modules 551-558 of the requested service) updates the service status information 543.
At step 1304, the targeted server stops the service.
At step 1305, the targeted server updates the service resource information 542.
At step 1306, the targeted server sends a request to management computer 520 to release the resources for the service.
At step 1307, management computer 520 updates assignment information 535.
At step 1308, management computer 520 updates resource information 533 to release the resources.
At step 1309, management computer 520 reports completion of release of the resources to the affected storage server 510.
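A matching sketch of the stop-service flow of steps 1301-1309, again under the illustrative table layout used above:

```python
def stop_service(tables, partition_id, service):
    """Sketch of steps 1301-1309; updating information 541/543 (steps 1302-1305) is elided."""
    released = tables["assignments"].pop((partition_id, service), [])  # steps 1306-1307
    for rid in released:
        tables["resources"][rid]["status"] = "free"      # step 1308: release the resources
    return "released"                                    # step 1309: report completion
```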
Changing Configuration of Storage Platform
As discussed above, one of the characteristics of a partition 101 of embodiments of the invention is that the partition 101 provides services, engaged quality of the services, and engaged conditions about resources, regardless of any changes that might take place in the configuration of the information system. In other words, a change in the configuration of the storage platform 100 does not affect the partitions 101 that are in existence on the storage platform 100.
At step 1401, by using asset information 534, management computer 520 detects the addition and/or deletion of equipment such as switch 910, storage server 510 and storage system 102. Conventional methods for CMDB can be used for the detection.
At step 1402, management computer 520 investigates the resources and functions of the equipment in the storage platform 100.
At step 1403, management computer 520 updates resource information 533 for addition of resources, and makes the added resources available.
At step 1404, management computer 520 searches for any partition(s) related to affected resources.
At step 1405, when management computer 520 finds any partition(s) related to affected resources, the process proceeds to step 1406. If not, the process proceeds to step 1407.
At step 1406, management computer 520 directs migration of services, information and/or data of any affected partitions. The detailed processes are described later with respect to
At step 1407, management computer 520 updates resource information 533 for deletion of the resources.
At step 1408, management computer 520 updates asset information 534, and the process ends.
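The following sketch illustrates this detection-and-update flow of steps 1401-1408; equipment discovery itself (for which conventional CMDB methods can be used) is represented by a plain set of asset names, and the resource naming is an assumption made for the example.

```python
def handle_configuration_change(tables, discovered_assets):
    """Sketch of steps 1401-1408 performed by management computer 520."""
    recorded = set(tables["assets"])
    added = discovered_assets - recorded                 # step 1401: detect additions
    removed = recorded - discovered_assets               # step 1401: detect deletions
    for asset in added:
        # Steps 1402-1403: register the new equipment's resources as available.
        tables["resources"]["res-of-" + asset] = {"status": "free"}
    for asset in removed:
        rid = "res-of-" + asset
        # Steps 1404-1406: partitions using resources of the removed equipment
        # are migrated (sketched in the following sections).
        for key, rids in tables["assignments"].items():
            if rid in rids:
                print("migrate", key, "away from", asset)
        tables["resources"].pop(rid, None)               # step 1407: delete the resources
    tables["assets"] = sorted(discovered_assets)         # step 1408: update asset info

tables = {"assets": ["server-1"], "assignments": {}, "resources": {}}
handle_configuration_change(tables, {"server-1", "server-2"})
```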
Configuration Change Relating to Storage Server
At step 1501, management computer 520 determines services to be migrated.
At step 1502, management computer 520 refers to service information 532 to determine the current resources.
At step 1503, management computer 520 identifies the resources being used to achieve the services and the specified conditions to be migrated.
At step 1504, management computer 520 refers to resource information 533 to seek comparable resources.
At step 1505, management computer 520 identifies, at the destination of the migration, the resources necessary to achieve the services and the conditions to be migrated. In making the identification, the type of resources (including functions) at the destination can differ from that at the source, so long as the resources are equivalents. For example, to realize access control service 707 under the same conditions, functions for iSCSI can be used as an alternative to FC methods when the source uses the FC protocol and the destination uses the iSCSI protocol.
At step 1506, when management computer 520 is able to identify sufficient available resources at the migration destination, the process proceeds to step 1507. If not, the process proceeds to step 1525.
At step 1507, management computer 520 updates the resource information 533 to obtain the resources.
At step 1508, management computer 520 updates the assignment information 535.
At step 1509, management computer 520 instructs the source storage server 510 (i.e., the source of migration) and the target storage server 510 (i.e. the target of migration) to migrate the services between them.
At step 1510, management computer 520 informs the destination storage server 510 of the assigned resources.
At step 1511, the destination storage server 510 receives information about services to be migrated from the source storage server 510. That is, migration of the aforesaid information regarding services of the source storage server 510 is performed.
At step 1512, with the received information from management computer 520 and the source storage server 510, server modules on the destination storage server 510 update service resource information 542.
At step 1513, the server modules on the destination storage server 510 start to provide the service according to any specified condition.
At step 1514, the server modules on the destination storage server 510 update service status information 543.
At step 1515, the destination storage server 510 reports initiation of providing services to the source storage server 510.
At step 1516, the source storage server 510 performs a process to change the target of service requests issued by the user (e.g., an application on host 500) to the destination storage server 510. To achieve this, conventional methods and processes may be used, such as the use of multipath software, updating the information of a name server, and redirection of the requests.
At step 1517, server modules on the source storage server 510 update service status information 543.
At step 1518, server modules on the source storage server 510 stop the services.
At step 1519, the server modules on the source storage server 510 update the service resource information 542.
At step 1520, the source storage server 510 sends a request to management computer 520 to release the resources for the service.
At step 1521, management computer 520 updates assignment information 535.
At step 1522, management computer 520 updates resource information 533 to release the resources.
At step 1523, management computer 520 reports completion of the release of the resources to the source storage server 510.
At step 1524, management computer 520 logs the result of migration.
At step 1525, management computer 520 logs failure of obtaining the necessary resources.
At step 1526, management computer 520 logs that the migration of the services could not be achieved.
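As a high-level sketch of steps 1501-1526 under the same illustrative table layout, the following condenses the migration into a single function; the equivalence table mirrors the note at step 1505 that, for example, iSCSI functions may substitute for FC functions at the destination.

```python
EQUIVALENT = {"FC": {"FC", "iSCSI"}, "iSCSI": {"iSCSI", "FC"}}   # illustrative assumption

def migrate_service(tables, key, destination_server):
    """Sketch of steps 1501-1526 for migrating one (partition, service) pair."""
    source_rids = tables["assignments"][key]             # steps 1502-1503: current resources
    wanted = tables["resources"][source_rids[0]].get("protocol", "FC")
    candidates = [rid for rid, r in tables["resources"].items()
                  if r["status"] == "free" and r.get("server") == destination_server
                  and r.get("protocol") in EQUIVALENT.get(wanted, {wanted})]
    if not candidates:                                   # step 1506: failure branch
        return "failed: no equivalent resources"         # steps 1525-1526: log failure
    dest_rid = candidates[0]
    tables["resources"][dest_rid]["status"] = "used"     # steps 1507-1508: obtain resources
    # Steps 1509-1516: service information moves to the destination server and
    # client requests are redirected (elided in this sketch).
    for rid in source_rids:                              # steps 1517-1523: release the source
        tables["resources"][rid]["status"] = "free"
    tables["assignments"][key] = [dest_rid]
    return "completed"                                   # step 1524: log the result
```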
With the above process, for a configuration change relating to one of storage server computers 510, partitions 101 in storage platform 100 are maintained with their services and any specified conditions regarding the services. In addition to logging success or failure of the process, management computer 520 may report the success or failure to a user and/or administrator of the configuration change. Furthermore, the above migration may be performed from one source storage server 510 to plural destination storage servers 510.
Configuration Change Relating to Storage Systems
At step 1601, management computer 520 determines any data to be migrated.
At step 1602, management computer 520 refers to assignment information 535 to identify the necessary resources currently being used.
At step 1603, management computer 520 determines what resources (including functions) will be necessary for storing and handling the data to be migrated.
At step 1604, management computer 520 refers to resource information 533 to seek availability of the resources determined in step 1603.
At step 1605, management computer 520 determines which resources will store and handle the data at the destination of the migration. In making the determination, the type of resources (including functions) at the destination can differ from that at the source, so long as the resources are equivalents. For example, to realize access control service 707 under the same conditions, functions for iSCSI can be used as an alternative to FC when the source uses the FC protocol and the destination uses the iSCSI protocol.
At step 1606, when management computer 520 is able to locate the necessary resources at a suitable destination, the process proceeds to step 1607. If not, the process proceeds to step 1619.
At step 1607, management computer 520 updates the resource information 533 to obtain the necessary resources at the destination storage system 102.
At step 1608, management computer 520 updates assignment information 535 to reflect selection of the resources at the destination storage system 102.
At step 1609, management computer 520 instructs storage server computer(s) 510 that use(s) the resources in the source storage system 102 (i.e., the source of the migration) to conduct the migration of the data to the destination storage system 102.
At step 1610, management computer 520 informs the storage server computer(s) 510 of the newly assigned resources.
At step 1611, the storage server computer 510 instructs the source storage system 102 and the destination storage system 102 (i.e., the destination of the migration) to migrate the data between them according to the information regarding the assigned resources.
At step 1612, the source storage system 102 and the destination storage system 102 relocate the data according to the information regarding the assigned resources. To achieve this, remote replication function 214 of storage system 102 can be used as a conventional method. As another conventional method, storage server 510 may perform relocation processing including copying of the data from the source storage system 102 to the destination storage system 102.
At step 1613, servers on the storage server 510 update service resource information 542 to use the resources (including functions) in the destination storage system 102.
At step 1614, the storage server 510 sends a request to management computer 520 to release the resources in the source storage system 102.
At step 1615, management computer 520 updates assignment information 535 with regards to the released resources.
At step 1616, management computer 520 updates resource information 533 to release the resources of the source storage system 102.
At step 1617, management computer 520 reports completion of the release of the resources of the source storage system 102 to the storage server computer 510.
At step 1618, management computer 520 logs the result of migration.
At step 1619, management computer 520 logs failure to obtain the necessary resources at any destination storage system 102.
At step 1620, management computer 520 logs that the migration of the services could not be achieved.
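A similarly condensed sketch of the data-migration flow of steps 1601-1620 follows; the actual relocation of step 1612 would use, for example, remote replication function 214 or copying by the storage server, and is represented here by a plain dictionary assignment.

```python
def migrate_data(tables, key, destination_system):
    """Sketch of steps 1601-1620 for relocating one service's data."""
    source_rids = tables["assignments"][key]             # step 1602: resources in use
    candidates = [rid for rid, r in tables["resources"].items()
                  if r["status"] == "free" and r.get("system") == destination_system]
    if not candidates:                                   # step 1606: failure branch
        return "failed: no destination resources"        # steps 1619-1620: log failure
    dest_rid = candidates[0]
    tables["resources"][dest_rid]["status"] = "used"     # steps 1607-1608: obtain resources
    # Steps 1609-1612: the storage server directs the source and destination
    # storage systems to relocate the data between them.
    data = tables.setdefault("data", {})
    data[dest_rid] = data.pop(source_rids[0], b"")
    for rid in source_rids:                              # steps 1613-1617: release the source
        tables["resources"][rid]["status"] = "free"
    tables["assignments"][key] = [dest_rid]
    return "completed"                                   # step 1618: log the result
```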
With the above process, for a configuration change relating to storage system 102, partitions 101 in storage platform 100 are maintained in keeping with their specified services and any specified conditions regarding the services. In addition to logging the success or failure of the process, management computer 520 may report the success or failure to a user and/or administrator managing the configuration change. The above migration may be performed from one source storage system 102 to plural destination storage systems 102, from plural source storage systems 102 to a single destination storage system 102, or from plural source storage systems 102 to plural destination storage systems 102. Execution of the process for a configuration change relating to a storage server 510, described above, and the process for a configuration change relating to a storage system 102 may be arranged by management computer 520 so as to avoid or resolve any conflicts that may arise between the processes executed for the configuration changes. That is, exclusive execution of the configuration changes may be enforced by management computer 520.
Configuration Change Relating to Management Computer
As discussed above, through use of clustering program 537, management computer 520 includes clustering capability. By using this clustering capability, plural management computers 520 are able to make up a cluster, and can maintain their functions and processes through coupling and decoupling within the cluster, even if addition or deletion of a management computer 520 occurs.
Configuration Change Relating to Switches
As a conventional method, switches 910 can be arranged in a redundant configuration to maintain paths and other functions for addition or deletion of switches in a network. Accordingly, when a switch is added or removed from the storage platform 100, redundancy of the switches ensures that operation of individual partitions is not affected.
Relocation of Partitions
With the systems and the processes in the exemplary embodiments described above, a storage platform 100 can establish durable partitions 101 that are unified across storage systems and storage server computers. The unified durable partitions provide independent name spaces, and are able to maintain specified services and conditions regardless of operations taking place in other partitions, and regardless of configuration changes in the information system. The durable partitions of the exemplary embodiments are unified across both storage systems and storage server computers, and also may include switches in the information system and, in some embodiments, processes in the host computers. Furthermore, in addition to being durable and resistant to configuration changes, partitions 101 can have mobility within the storage platform 100 for various purposes, such as improved performance, load balancing, heat balancing, and power consumption considerations. The management computer is able to manage and assign resources and functions provided by the storage server computers and storage systems to each partition. By using the assigned resources, a partition is able to provide storage and other services to the host computers and the applications on the host computers. When a configuration change occurs, such as addition or deletion of equipment in the information system, the management computer performs reassignment of resources, manages migration of services and/or data, and otherwise maintains the functionality of the partition for the user or application. Also, in addition to the services and the quality of the services, the engaged conditions regarding resources are also maintained. Moreover, when a partition is relocated to another part of the information system, the above principles of the partition are maintained in the same manner.
Of course, the system configurations illustrated in
In the description, numerous details are set forth for purposes of explanation in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that not all of these specific details are required in order to practice the present invention. It is also noted that the invention may be described as a process, which is usually depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged.
As is known in the art, the operations described above can be performed by hardware, software, or some combination of software and hardware. Various aspects of embodiments of the invention may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out embodiments of the invention. Furthermore, some embodiments of the invention may be performed solely in hardware, whereas other embodiments may be performed solely in software. Moreover, the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways. When performed by software, the methods may be executed by a processor, such as a general purpose computer, based on instructions stored on a computer-readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.
From the foregoing, it will be apparent that the invention provides methods, apparatuses and programs stored on computer readable media for implementing durable partitions in an information system. Additionally, while specific embodiments have been illustrated and described in this specification, those of ordinary skill in the art appreciate that any arrangement that is calculated to achieve the same purpose may be substituted for the specific embodiments disclosed. This disclosure is intended to cover any and all adaptations or variations of the present invention, and it is to be understood that the terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with the established doctrines of claim interpretation, along with the full range of equivalents to which such claims are entitled.