1. Technical Field
The present invention relates generally to storage area networks, and more particularly, to a method for implementing security management in a storage area network by controlling access to network resources.
2. Statement of the Problem
A storage area network (SAN) is a dedicated, centrally managed, information infrastructure, which enables interconnection of compute nodes and storage nodes. A storage area network facilitates universal access and sharing of storage resources. SANs are presently being integrated into distributed network environments using Fibre Channel technology (described below). Typically, a SAN utilizes a block-oriented protocol for providing storage to compute nodes, while general purpose networks (GPNs), including local area networks (LANs), wide area networks (WANs) and the Internet, typically implement file-oriented protocols. Storage area networks also differ from general purpose networks in that SANs carry large amounts of data with low latency, and historically have lacked a mechanism for implementing security across the network.
Storage area networks presently typically provide an ‘everyone (on the network) is trusted’ security model because, prior to the availability of Fibre Channel, SANs had a distance limitation on the order of tens of meters. Compute node operating system (O/S) behavior in existing storage area networks has therefore been shaped by this distance constraint, i.e., there has been relatively little storage resource sharing among different compute nodes, and each compute node often has dedicated data storage.
Compute nodes on SANs are often also server nodes of a GPN. In these networked systems, the SAN is often implemented with separate, high-speed, network hardware from that of the GPN so as to offload the data from the GPN, thereby increasing GPN and effective CPU performance. Such separation is often desirable because effective compute node CPU performance is often limited by the available bandwidth between compute and storage nodes, and because the bandwidth required between compute and storage nodes often far exceeds all other network traffic affecting the same compute nodes.
Development of storage area networks has been motivated by the need to manage and share the dramatically increasing volume of business data, and to mitigate its effect on GPN performance. Using Fibre Channel connections, SANs can provide high-speed compute node to/from storage node, and storage node to storage node, communications at distances that allow remote workstation and server compute nodes to easily access large shared data storage pools.
Using SAN technology, management of storage systems can be more easily centralized than with alternative technologies, and data backup is facilitated. Both factors act to increase overall system efficiency. The large distances allowed by Fibre Channel SAN technology make it easier to deploy remote disaster recovery sites than with prior technology.
A Fibre Channel SAN can be local, or can now be extended over large geographic distances. The SAN can be viewed as an extension to the storage bus concept that enables storage devices and servers to be interconnected using similar elements as in local area networks (LANs) and wide area networks (WANs): routers, hubs, switches and gateways.
Fibre Channel is presently considered to be the architecture on which most future SAN implementations will be built. Fibre Channel is a technology standard that allows data to be transferred from one network node to another at very high speeds. This standard is backed by a consortium of industry vendors and has been accredited by the American National Standards Institute (ANSI). The word Fibre in Fibre Channel is spelled differently than “fiber” to indicate that the interconnections between nodes are not necessarily based on fiber optics, but can also use copper cables. Fibre Channel is, in essence, a high performance serial link supporting its own, as well as higher level, protocols such as FDDI, SCSI, HIPPI and IPI. SAN configurations may incorporate the point-to-point, arbitrated loop, and switched fabric topologies defined by the Fibre Channel standard.
Data integrity is an important issue in storage area network technology, since multiple compute nodes employing diverse types of operating systems could coexist within the SAN, and some operating systems do not gracefully share access to the same storage devices with other operating systems. Some operating systems do not even gracefully share access to storage devices among multiple compute nodes even if each node runs the same O/S. Because of this, conflicts can occur that can have damaging results. These conflicts may include file and record lock conflicts, overwrites of home blocks on previously initialized disks, reservations taken out on disks which a compute node should not have access to, improper reformatting, overwriting of files, or other maloperation.
Many current SAN implementations rely on limiting access to the physical wiring for security. As SANs become larger and more geographically dispersed, a security scheme is required that provides SAN-wide security in order to prevent conflicts across the entire network.
One security mechanism presently being implemented is a partitioning approach called ‘zoning’, which effectively partitions the SAN at the ‘wire’ level. Various levels of ‘zoning’ may be used to restrict any-to-any access by limiting compute node attachment to specific storage nodes. Zoning is often implemented in Fibre Channel switches, such as those available from Brocade Communications Systems, Inc. These switches can be programmed to filter Fibre Channel frames according to their source and destination identifiers, thereby restricting SAN communications to those among authorized nodes and node pairs. Multiple such switches may be incorporated into a switched fabric that appears to each node as a larger, potentially geographically dispersed, switch.
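The frame filtering performed by such switches can be pictured, in simplified form, as a lookup of permitted source/destination identifier pairs. The following Python sketch is purely illustrative; the `zone_pairs` set and `forward_frame` function are hypothetical and do not model any particular vendor's switch.

```python
# Simplified illustration of switch-level zoning: a frame is forwarded only
# if its (source identifier, destination identifier) pair belongs to a
# configured zone. Conceptual sketch only; names are hypothetical.

zone_pairs = {
    (0x010100, 0x020200),  # host A <-> storage node X
    (0x010200, 0x020200),  # host B <-> storage node X
}

def forward_frame(s_id: int, d_id: int) -> bool:
    """Pass a frame only if its two endpoints are zoned together."""
    return (s_id, d_id) in zone_pairs or (d_id, s_id) in zone_pairs

assert forward_frame(0x010100, 0x020200)       # zoned pair: forwarded
assert not forward_frame(0x030300, 0x020200)   # un-zoned host: dropped
```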
It is known that storage nodes of a SAN could be RAID (Redundant Array of Independent Disks) controllers. RAID controllers are also known as array controllers since they are typically operable to present storage to a SAN with or without operating in redundancy modes. The RAID controllers could be configured to serve multiple logical units of storage. Each logical unit may represent a physical disk or tape drive, or be formed from part of one, all of one, or a combination of several, disk drives with or without redundancy. Each logical unit is a storage resource intended for use by a set of one or more compute nodes, where the sets of intended compute nodes for each logical unit may differ. Zoning as currently implemented typically restricts communications on a node basis, not a logical unit basis.
Resource providers of a SAN include storage nodes as well as any other node configured to provide resources to the SAN. Similarly, resource users of a SAN include compute nodes as well as any other node configured to use resources available on the SAN. For example, but not by way of limitation, a storage node having a data backup device and a disk device can be simultaneously both a resource provider—providing disk LUNs to the SAN—and a resource user—accessing disk resources of other storage nodes to backup data. Similarly, a storage node having disk devices could be a resource provider—providing disk LUNs to the SAN—and a resource user—transmitting data changes to a second resource provider to maintain a mirrored dataset.
Therefore, security and access control needs to be improved to guarantee data integrity by preventing conflicts. It is also desirable that the security and access controls be capable of management from a single network management point. The Fibre Channel specification does not include a specific mechanism for managing security-related issues, and there is presently no commonly available solution to the above-described problems of providing secure access to shared SAN resources.
Solution to the Problem
The present invention overcomes the aforementioned problems of the prior art and achieves an advance in the field by providing a method for controlling and managing resources on storage area networks (SANs). The method is applicable to a wide range of storage area networks, including large scale storage area networks.
The present system provides a table-driven mechanism whereby resources at the LUN level on the SAN can be allocated to specific resource users. Unless authorized to do so, a compute node or other resource user on the SAN is not allowed to access a particular resource.
Two tables are maintained at each resource provider to control storage resources. These include a table for ‘approved’ storage users (identified by port WWN and node WWN) and the approved resources (typically logical units, or LUNs) to which they have been granted access, and one for ‘not-yet-approved’ storage users (identified by port WWN and node WWN) that can see, but have not been granted resource access approval by, the resource provider. The table of approved storage resource users may be stored in non-volatile memory of each storage RAID controller so that only initial setup of the table is required. If the table of approved hosts/resources is stored in non-volatile memory, the information therein is available after any event that would require a resource user to re-poll for resources, such as a re-boot operation. The table of ‘not-yet-approved’ resource users is provided to allow a system administrator to copy information therefrom to the ‘approved’ table rather than having to enter the information by hand. This ‘not-yet-approved’ table may, but need not, be stored in volatile memory, which is often less expensive than non-volatile memory.
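A minimal sketch of this two-table mechanism, written in Python purely for illustration, appears below. The names `ResourceProvider`, `approved`, and `not_yet_approved` are hypothetical; in an actual resource provider the ‘approved’ table would reside in non-volatile memory and the ‘not-yet-approved’ table may reside in volatile memory.

```python
from typing import Dict, Set, Tuple

WwnPair = Tuple[str, str]   # (port WWN, node WWN) identifies a resource user

class ResourceProvider:
    """Illustrative holder of the two access-control tables."""

    def __init__(self) -> None:
        # 'Approved' table: resource user -> LUNs it may access
        # (held in non-volatile memory in an actual controller).
        self.approved: Dict[WwnPair, Set[int]] = {}
        # 'Not-yet-approved' table: users that can see this provider but
        # have not been granted access (may be held in volatile memory).
        self.not_yet_approved: Set[WwnPair] = set()

    def record_unapproved(self, wwns: WwnPair) -> None:
        """Park an unknown resource user where the administrator can find it."""
        if wwns not in self.approved:
            self.not_yet_approved.add(wwns)

    def approve(self, wwns: WwnPair, lun: int) -> None:
        """Administrator action: grant one LUN to a resource user."""
        self.not_yet_approved.discard(wwns)
        self.approved.setdefault(wwns, set()).add(lun)
```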
To facilitate operation of the present system, a SAN management interface is made available to a system manager. Use of the SAN management interface is password protected using known network protocols and standards. At this management interface, the system manager creates or adds to the ‘approved’ table of each resource provider, matching resource users with access to specific LUNs. Once permission has been granted, information maintained in the ‘approved’ table facilitates subsequent access by the specified resource user or users.
An embodiment of the present system 100 includes one or more hosts 103/105 coupled through a switch or switched fabric 104 to one or more RAID controllers 101 acting as resource providers, together with a SAN management station 106 coupled to the network. In the exemplary embodiment, the hosts 103/105 are the resource users of the SAN.
Each RAID controller 101 is connected to a disk array 102 containing a plurality of disk storage devices. Each RAID controller 101 contains volatile memory (e.g., RAM) 13 and non-volatile memory 14. Volatile memory 13 in each controller 101 contains a table 15 (the ‘not-yet-approved entity’ table) in which are stored identifiers of SAN hosts 103/105 that presently do not have any ‘approved’ resources. Non-volatile memory 14 in each controller 101 contains an ‘approved entity’ table 16 in which are stored:
(1) SAN entities with ‘approved’ resources,
(2) a list of those resources, and
(3) other host 103/105 configuration information.
The function of each of these tables 15/16 is described in detail below. The term “resources”, as used herein, refers to data storage devices such as magnetic or optical disk and tape drives, and may for some devices refer more specifically to data storage areas, such as logical units (LUNs) that may exist on disk storage devices in a disk array 102.
At step 210, a host 103/105(*) logs in to the controller 101 and presents its node WWN (World-Wide Name, a unique identifier) and port WWN to the discovered RAID controller 101. In doing so, the host 103/105(*) provides the source identifier (S_ID) that will identify communications from that host. RAID controller 101 or other resource provider examines the ‘approved entity’ table 16 in the controller's non-volatile memory 14, at step 215, to determine whether resources 102(*) are available to the requesting host 103/105(*). The verification made at step 215 includes verifying that the node WWN and port WWN supplied by host 103/105 match corresponding entries in the ‘approved entity’ table 16, described in detail below. Alternatively, this verification may include checking only the node WWN or only the port WWN, or verifying other host identification information previously established and stored in table 16.
If authorized, the host may then send commands to the resource. For every command sent, the host provides a source identification (S_ID) that identifies frames associated with that host.
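The log-in verification of steps 210-215, together with the recording of the host's S_ID, might look like the following sketch. This is illustrative Python, not controller firmware; `approved_table`, `not_yet_approved`, and `sessions` are hypothetical stand-ins for the structures described herein.

```python
from typing import Dict, Optional, Set, Tuple

WwnPair = Tuple[str, str]   # (port WWN, node WWN)

# Hypothetical 'approved entity' table: WWN pair -> LUNs granted to that host.
approved_table: Dict[WwnPair, Set[int]] = {
    ("port-wwn-A", "node-wwn-A"): {0, 1, 4},
}
not_yet_approved: Set[WwnPair] = set()
sessions: Dict[int, WwnPair] = {}   # S_ID -> logged-in host

def handle_login(port_wwn: str, node_wwn: str, s_id: int) -> Optional[Set[int]]:
    """Step 215: verify both WWNs against the 'approved entity' table."""
    wwns = (port_wwn, node_wwn)
    if wwns in approved_table:
        sessions[s_id] = wwns          # subsequent commands carry this S_ID
        return approved_table[wwns]    # LUNs to be reported to the host
    not_yet_approved.add(wwns)         # parked for the administrator (step 225)
    return None                        # no resources are provided
```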
Exemplary contents of the ‘approved entity’ table 16 are illustrated in the accompanying drawing figures and include the following fields:
301 Host port WWN;
302 Host Node WWN;
303 Host symbolic name;
304 Unit offset: the host-relative address of the presented LUN resource;
305 Pending unit attentions: information about the state of the storage that is held until the host issues a command to a given device;
307 S_ID: an identifier on commands indicating the source host application. While not necessarily unique throughout the world to the host like the WWN, the S_ID is typically unique to the host within the SAN;
308 Persistent Reserve Information: a SCSI standard host support mechanism that allows hosts to manage locking and resource allocation dynamically;
309 LUN access map: indicates which LUNs this host (port WWN and node WWN) has access to; and
310 Host Mode: the controller tailors specific behaviors to what the host is looking for; for example, this field might indicate that this is an NT host, or an IBM host.
Fields 301-310 (listed above) are entered into ‘approved entity’ table 16 in step 245, described below. In an exemplary embodiment of the present system, ‘approved entity’ table 16 may contain up to 256 entries, each of which is slightly larger than 4K bytes, primarily due to the size of Persistent Reserve Information field 308, which is itself a 4K byte length field. In this embodiment, approximately 1 MB of non-volatile memory is required to contain 256 entries. It is anticipated that other embodiments may permit the table to contain other maximum numbers of entries, including numbers of entries greater than 256.
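Purely for illustration, one entry of the ‘approved entity’ table 16 might be represented as in the following Python sketch. The field names mirror fields 301-310 above; the concrete types are assumptions, except that field 308 is shown as the 4K-byte area described in the text.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class ApprovedEntityEntry:
    """Illustrative layout of one 'approved entity' table entry."""
    host_port_wwn: str                   # 301
    host_node_wwn: str                   # 302
    host_symbolic_name: str              # 303
    unit_offset: int                     # 304: host-relative LUN address
    pending_unit_attentions: List[str] = field(default_factory=list)  # 305
    s_id: int = 0                        # 307: source ID on host commands
    persistent_reserve_info: bytes = bytes(4096)    # 308: 4K-byte SCSI area
    lun_access_map: Set[int] = field(default_factory=set)             # 309
    host_mode: str = ""                  # 310: e.g. NT or IBM host behavior

# Each entry is slightly larger than 4K bytes, dominated by field 308, so a
# 256-entry table occupies roughly 1 MB of non-volatile memory.
```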
When a host 103/105 presents its node WWN and port WWN to the resource provider or RAID controller 101, the resource provider checks its ‘approved entity’ table 16 in the controller's non-volatile memory. If, at step 215, the host port WWN and node WWN are listed in table 16, then at step 220, resource provider 101 returns the LUN access map 309 for the entry to the host. LUN access map 309 indicates which resources 102 managed by resource provider (RAID controller) 101 are to be made accessible to a given host 103/105, and therefore, only a single entry in table 16 is required for access to multiple resources by a single host.
The LUN access map 309 is provided to the host through a SCSI “Report LUNs” command embedded in Fibre Channel frames. The response to this command is formatted according to the applicable SCSI standard.
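As an illustration only, a response restricted to the LUNs in a host's LUN access map 309 might be assembled as sketched below. The byte layout is a simplified rendering of the general REPORT LUNS convention (a 4-byte length, 4 reserved bytes, then 8-byte LUN descriptors) rather than something taken from this description, and the function name and LUN encoding are assumptions valid only for small LUN numbers.

```python
import struct
from typing import Iterable

def build_report_luns_payload(lun_access_map: Iterable[int]) -> bytes:
    """Assemble a simplified REPORT LUNS style payload containing only the
    LUNs granted to the requesting host (assumes LUN numbers below 256)."""
    descriptors = b"".join(
        struct.pack(">BB6x", 0x00, lun) for lun in sorted(lun_access_map)
    )
    header = struct.pack(">I4x", len(descriptors))  # LUN list length + reserved
    return header + descriptors

# A host granted LUNs 0, 1 and 4 receives exactly three 8-byte descriptors.
payload = build_report_luns_payload({0, 1, 4})
assert len(payload) == 8 + 3 * 8
```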
If, at step 215, it is determined that the requested connection is not stored in the ‘approved entity’ table 16, then at step 225, the host port WWN and host node WWN from the log-in request are placed in the ‘not-yet-approved entity’ table 15 and no resources are provided.
It is anticipated that the presently described embodiment may be operable with or without zoning implemented in the switch or switched fabric 104. In the event that zoning is not implemented, the ‘not-yet-approved’ table of each resource provider will tend to acquire entries corresponding to all resource users, or hosts, in the SAN. In the event that zoning is implemented, the ‘not-yet-approved’ table of each resource provider will tend to acquire entries corresponding to all resource users, or hosts, in the SAN that have been zoned such that they can communicate with that resource provider.
In order to allocate resources to a host 103/105 (as identified by port WWN and node WWN), a system manager or SAN administrator logs in 240 to a selected resource provider, such as RAID controller 101(1), from the SAN management station 106.
A list of the hosts known to the selected resource provider, drawn from both the ‘approved entity’ and ‘not-yet-approved entity’ tables, is then presented to the administrator together with a list of the available resources.
Once the administrator has selected a host, the administrator then selects, from the list of available resources, the resource or resources to be made available to, or removed from availability to, the selected host. As the administrator makes selections indicating desired changes, a record of the desired changes is made in memory of the management station 106.
Management station 106 then conveys the record of the desired changes to RAID controller 101(1) in a secure manner. RAID controller 101(1), in turn, performs the following actions to update the ‘approved entity’ table 16 as per the desired updates. For example, updating the ‘approved entity’ table 16 to allow access to an additional resource may involve the following steps, illustrated in the sketch after the list:
(1) if the designated port WWN and node WWN combination appears in the ‘not-yet-approved entity’ table 15, then move the corresponding entry to the ‘approved entity’ table 16 and add the designated LUN to its LUN access map 309; or
(2) if the designated port WWN and node WWN combination already appears in the ‘approved entity’ table 16, then add the designated LUN to the LUN access map 309 of the existing entry.
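A sketch of these two steps in Python follows; the dictionary and set shown are hypothetical stand-ins for tables 16 and 15, and the sketch is not asserted to be the controller's actual implementation.

```python
from typing import Dict, Set, Tuple

WwnPair = Tuple[str, str]   # (port WWN, node WWN)

def add_association(approved: Dict[WwnPair, Set[int]],
                    not_yet_approved: Set[WwnPair],
                    wwns: WwnPair, lun: int) -> None:
    """Grant `wwns` access to `lun` per steps (1) and (2) above."""
    if wwns in not_yet_approved:
        not_yet_approved.discard(wwns)          # step (1): promote the entry
    approved.setdefault(wwns, set()).add(lun)   # extend the LUN access map
```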
At step 240, the SAN administrator, using management station 106, logs in to a selected RAID controller 101(1).
At step 245, the management station 106 requests the current ‘approved entity’ table 16, as well as the ‘not-yet-approved entity’ table 15 and a resource list, which are returned by the selected RAID controller 101(1). At the management station 106, the SAN administrator is able to view the list of current resources together with the contents of both tables, and to designate an association, or de-association, between a host and a resource.
Upon receiving the association, the RAID controller 101 checks 260 for the host entry in its ‘not-yet-approved entity’ table 15. If the entry is found there, the designated host is not yet present in the ‘approved entity’ table 16, and it is added to that table by moving 285 the associated entry from the ‘not-yet-approved entity’ table 15 to the ‘approved entity’ table 16; if the entry is not found there, a check is performed to determine whether the designated host is already present in the ‘approved entity’ table 16.
If the designated host was already present in the ‘approved entity’ table 16, a check is made 265 to determine whether the designated resource is already associated with the host in the ‘approved entity’ table. If so, a further check 267 is made to verify that the administrator intends to de-associate the resource, and a warning is issued if not. De-association, or release of resources, is performed by deleting 270 the resource entry from the LUN map of the host entry. If 275 there are no more active LUNs in the LUN map for that host entry, the host entry is moved 280 from the ‘approved entity’ table to the ‘not-yet-approved entity’ table.
When adding an association between a host and a resource, the resource identification is added 290 to the LUN map of the host entry in the ‘approved entity’ table 16.
The selected resource provider 101 saves the altered ‘approved entity’ table 16 in non-volatile memory 14 of the selected RAID controller 101(1), and the altered ‘not-yet-approved entity’ table 15 in volatile memory 13. Finally, the RAID controller 101(1) notifies 295 the indicated host that the RAID controller may have changed its resources.
At step 295, when a change in resources is presented to a given host 103/105(*) by a RAID controller 101(*), host discovery of SAN resources is initiated, so that hosts 103/105 may be made aware of the available resources 102 on the storage area network.
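The release path of steps 260-295 can be sketched as follows; this is again illustrative Python with hypothetical names, and the persistence and host-notification actions are indicated only as comments.

```python
from typing import Dict, Set, Tuple

WwnPair = Tuple[str, str]   # (port WWN, node WWN)

def remove_association(approved: Dict[WwnPair, Set[int]],
                       not_yet_approved: Set[WwnPair],
                       wwns: WwnPair, lun: int) -> None:
    """Release one resource from a host's LUN access map."""
    luns = approved.get(wwns)
    if luns is None:
        return                       # host has no approved resources
    luns.discard(lun)                # step 270: delete the LUN from the map
    if not luns:                     # step 275: no active LUNs remain
        del approved[wwns]           # step 280: move the entry back to the
        not_yet_approved.add(wwns)   #           'not-yet-approved' table
    # An actual controller would now save the altered tables (the 'approved'
    # table in non-volatile memory) and notify the host, step 295, so that it
    # rediscovers its available resources.
```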
Once one or more hosts have discovered and been granted access to resources, subsequent frames attempting to read or write those resources are filtered to eliminate frames originating from hosts that have not logged in to the resources. For example, any frame attempting to read or write a resource with a source identification (S_ID) not associated with a host in the ‘approved entity’ table will be rejected.
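The frame-level check can be sketched as below; `sessions` maps each logged-in host's S_ID to its WWN pair, and both names, like the rest of the sketch, are hypothetical.

```python
from typing import Dict, Set, Tuple

WwnPair = Tuple[str, str]   # (port WWN, node WWN)

def accept_command(s_id: int, lun: int,
                   sessions: Dict[int, WwnPair],
                   approved: Dict[WwnPair, Set[int]]) -> bool:
    """Admit a read/write command only if its S_ID belongs to a logged-in
    host whose LUN access map includes the addressed LUN."""
    wwns = sessions.get(s_id)
    if wwns is None:
        return False                          # unknown source: reject frame
    return lun in approved.get(wwns, set())   # must hold access to this LUN
```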
A system for selectively presenting logical units (LUNs) to host computing systems, disclosed in co-pending U.S. Patent Application entitled “System and Method for Selectively Presenting Logical Storage Units to Multiple Host Operating Systems in a Networked Computing System”, Ser. No. 09/312,944, filed May 17, 1999 (the '944 patent application), is incorporated herein by reference. The system disclosed therein includes a RAID controller or other resource provider 101 for controlling and coordinating the operations of persistent storage devices 102; a memory 14 accessible by the RAID controller; and a configuration table (not shown in the present figures).
While preferred embodiments of the present invention have been shown in the drawings and described above, it will be apparent to one skilled in the art that various embodiments of the present invention are possible. For example, the specific configuration of tables 15 and 16 as well as the particular entities in the storage area network described above should not be construed as limited to the specific embodiments described herein. Modification may be made to these and other specific elements of the invention without departing from its spirit and scope as expressed in the following claims.
This application is a continuation of U.S. patent application Ser. No. 09/761,938, entitled System for Controlling Access to Resources in a Storage Area Controller, filed Jan. 17, 2001 now U.S. Pat. No. 6,968,463, the disclosure of which is hereby incorporated by reference in its entirety.
Relation | Number | Date | Country
---|---|---|---
Parent | 09761938 | Jan 2001 | US
Child | 11144108 | | US