The present invention relates generally to the field of storage systems, and particularly to ways of presenting virtual arrays.
Today's enterprise data centers store ever-larger amounts of business-critical data that must be immediately and continuously available. Ever-larger and more complex storage systems are used to store this data, and many different hosts and applications access data on these storage systems. In order to provide security and prevent data corruption, it is often necessary to ensure that the applications and hosts have exclusive access to particular areas of storage in the system.
One mechanism for partitioning storage systems employs the concept of “virtual arrays”. Accordingly, software is provided within a storage array to logically partition the array into separate storage groups. These groups are accessible only to hosts that have been granted access by the storage array. Other hosts cannot access a storage group to which they have not been granted access. Unfortunately, the current methods for partitioning storage arrays into virtual arrays are highly complex and expensive, and operate only at the storage array level. It is desirable to provide a simpler, inexpensive means of presenting virtual arrays to host systems, and to provide a way of centralizing array partitioning from another part of the system—for example, the fabric.
In accordance with the principles of the invention, a storage array includes a plurality of groups of logical units of storage. At least one physical port is coupled to the groups, and the groups are coupled to a switch through the at least one physical port. Each group is assigned a unique virtual port ID for each physical port to which it is coupled. Further in accordance with the invention, the virtual port IDs are assignable by the switch. The virtual port IDs are used by hosts coupled to the switch to exchange data with the groups to which the virtual port IDs are assigned.
The invention is particularly applicable in a Fibre Channel fabric environment. Each group of logical units thereby has its own unique virtual port ID through which it can be addressed by a host, so the groups appear to the host as separate virtual arrays. In accordance with a further aspect of the invention, the switch includes host-facing ports, and each host is coupled to the switch through a host-facing port. A zoning table in the switch associates each virtual port ID with a host-facing port. Each group communicates only with hosts coupled to host-facing ports associated with the virtual port ID assigned to the group. Each group thus appears to a host as a separate physical storage array.
In order to facilitate a fuller understanding of the present invention, reference is now made to the appended drawings. These drawings should not be construed as limiting the present invention, but are intended to be exemplary only.
In the arrangement shown in the appended drawings, hosts 12 are coupled through a Fibre Channel switch 14 to a storage array 16 containing disks 18.
The switch 14 includes switch ports 20. Host-facing switch ports are labeled 20h, and array-facing switch ports are labeled 20a. Host ports 22 on the hosts are coupled via Fibre Channel links 24 to host-facing switch ports 20h on the switch 14. Physical array ports 26 on the array 16 are coupled via Fibre Channel links 24 to array-facing switch ports 20a on the switch 14. The disks 18 within the array 16 are organized into logical units (“LUNs”) 30. “LUN”, originally a SCSI (small computer system interface) term, is now commonly used to describe a logical unit of physical storage space. The LUNs are exported by the array ports 26 for access by the hosts 12 via the Fibre Channel links 24 and the switch 14. As shown herein, each disk is configured as a separate LUN, though it is understood that a LUN can encompass part of a disk, parts of multiple disks, or multiple complete disks. The arrangement shown is chosen for convenience of description.
In a Fibre Channel system such as that shown, each Fibre Channel device, including the host ports 22 and array ports 26, has both a name and an address: the name, known as a world wide name, is unique to the device, while the address, referred to as an ID, is used to route data to and from the device through the fabric.
In a Fabric topology, the switch 14 assigns IDs to the host ports 22 and array ports 26 during initialization. IDs as described in the Fibre Channel specification are actually 24-bit quantities containing several fields; for simplicity of description, the port IDs are represented herein by short labels.
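The 24-bit port ID defined by the Fibre Channel standards is conventionally described as three 8-bit fields (domain, area, and port). The following sketch is illustrative only and is not part of the disclosed system; it simply shows how such an ID might be packed and unpacked, using the conventional field names rather than anything recited above.

```python
# Illustrative only: pack/unpack a 24-bit Fibre Channel port ID into the
# three 8-bit fields (domain, area, port) conventionally used to describe it.

def pack_port_id(domain: int, area: int, port: int) -> int:
    """Combine three 8-bit fields into one 24-bit port ID."""
    for field in (domain, area, port):
        if not 0 <= field <= 0xFF:
            raise ValueError("each field must fit in 8 bits")
    return (domain << 16) | (area << 8) | port

def unpack_port_id(port_id: int) -> tuple:
    """Split a 24-bit port ID back into (domain, area, port)."""
    return (port_id >> 16) & 0xFF, (port_id >> 8) & 0xFF, port_id & 0xFF

if __name__ == "__main__":
    pid = pack_port_id(0x01, 0x02, 0x03)
    print(hex(pid), unpack_port_id(pid))   # 0x10203 (1, 2, 3)
```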
The Fibre Channel switch 14 includes a name server database 40. The name server database 40 is used by the switch 14 to assign IDs to host ports 22 and array ports 26 during initialization. The name server database 40 includes a name server table 42 that is used by the switch to resolve IDs to names. The general process by which port IDs are assigned in accordance with the Fibre Channel T11 standard is shown in the appended drawings.
An example of the name server table 42 is shown in the appended drawings; the table associates each port name with the port ID assigned by the switch 14, and its contents are provided by the switch 14 to the hosts 12.
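As a rough illustration of the role the name server table 42 plays, the sketch below models it as a simple mapping from port names to assigned port IDs. This is a minimal sketch under assumed names: the class, its methods, and the string port names are hypothetical and are not drawn from the description above.

```python
# A minimal sketch of a name server table that records the ID assigned to
# each named port and can report its contents for distribution to hosts.

class NameServerTable:
    def __init__(self):
        self._id_by_name = {}       # port name (world wide name) -> port ID

    def register(self, port_name: str, port_id: str) -> None:
        """Record the ID the switch assigned to a named port."""
        self._id_by_name[port_name] = port_id

    def entries(self):
        """Return (port name, port ID) pairs, e.g. for sending to hosts."""
        return list(self._id_by_name.items())

# Hypothetical usage: two physical array ports registered with IDs 0 and 1.
table = NameServerTable()
table.register("array_port_a_wwn", "0")
table.register("array_port_b_wwn", "1")
print(table.entries())
```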
Now that the hosts have IDs with which to access the ports, they can learn what LUNs are available. LUN names and numbers are managed at the array level. Each host 12 sends a query to each array port 26 ID in turn, requesting a list of available LUN numbers. Once the LUN numbers for a given array port ID are known, the host is able to query each LUN 30 by using a combination of the port ID and LUN number. The host 12 then queries each LUN 30 for its corresponding LUN name. Once the host has gathered all this information, it builds a directory LUN table 70 that relates LUN names, port IDs, and LUN numbers. A representation of such a LUN table 70 is shown in the appended drawings.
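The query-and-build sequence just described might be sketched as follows. The helper functions `list_lun_numbers` and `get_lun_name` are placeholders standing in for the port and LUN queries the text describes; they are assumptions for illustration, not an API defined by the disclosure.

```python
# Sketch of how a host might assemble its directory LUN table 70: query each
# array port ID for its LUN numbers, then query each LUN for its name, and
# record (LUN name, port ID, LUN number) rows.

def list_lun_numbers(port_id):
    """Placeholder for the 'list available LUN numbers' query to a port."""
    raise NotImplementedError

def get_lun_name(port_id, lun_number):
    """Placeholder for the per-LUN name query."""
    raise NotImplementedError

def build_lun_table(array_port_ids):
    lun_table = []                              # rows of the directory table
    for port_id in array_port_ids:
        for lun_number in list_lun_numbers(port_id):
            lun_name = get_lun_name(port_id, lun_number)
            lun_table.append({"lun_name": lun_name,
                              "port_id": port_id,
                              "lun_number": lun_number})
    return lun_table
```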
During operation, hosts 12 refer to LUNs 30 by their LUN numbers. In order to access a LUN 30, a port 22 of a host 12 sends a message whose Fibre Channel address includes the array port ID and the LUN number. The switch 14 parses the port ID portion of the address in order to forward the message to the identified array port 26. The array 16 then uses the LUN number portion of the address to access the proper LUN 30 within the array 16. So, for example, if host 12a needs to access LUN #L71, the port 22 of host 12a sends a message to an address that includes the port ID 1 and the LUN number L71. The switch 14 sees the port ID 1 and sends the message to the array port 26 with ID 1. The array sees that the message is directed to LUN #L71 and performs the appropriate operation on LUN #L71.
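The division of labor in that example, with the switch forwarding on the port ID portion of the address and the array selecting the LUN by number, can be pictured with the toy routine below. The address format shown (a bare (port ID, LUN number, payload) tuple) and all names are simplifications assumed for illustration, not the actual Fibre Channel frame layout.

```python
# Toy model of the addressing described above: the switch forwards a message
# using only the port ID, and the array then selects the LUN by number.

def switch_forward(message, array_ports):
    """Switch behavior: look only at the port ID and hand off to that port."""
    port_id, lun_number, payload = message
    return array_ports[port_id].handle(lun_number, payload)

class ArrayPort:
    def __init__(self, luns):
        self.luns = luns                     # LUN number -> handler callable

    def handle(self, lun_number, payload):
        """Array behavior: use the LUN number to reach the proper LUN."""
        return self.luns[lun_number](payload)

# Hypothetical use: host 12a addresses LUN L71 behind the port with ID 1.
ports = {"1": ArrayPort({"L71": lambda payload: f"performed {payload} on L71"})}
print(switch_forward(("1", "L71", "read"), ports))
```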
Note that, in accordance with the prior art arrangement described above, access is controlled at the level of the physical array port 26: every LUN 30 exported through a given array port 26 is visible to any host 12 that can address that port.
It is often desirable to separate a storage array into several distinctly accessible sub-arrays, or “virtual arrays”. Each host or application has access to one virtual array but does not have access to the other virtual arrays within the storage array. For example, it may be desirable to arrange the LUNs numbered L00-L12 as a first virtual array accessible only to the host 12a, and the LUNs numbered L20-L32 as a second virtual array accessible only to the host 12b. Such an arrangement can provide security against data corruption and can provide ease of management for host applications. But, in the prior art example described above, the LUNs exported through a given physical array port 26 cannot be partitioned in this way, because hosts are granted access per port rather than per LUN.
In accordance with the principles of the invention, the storage array and the fabric are employed to present virtual arrays to the hosts. The LUNs in a storage array 16 are arranged into several storage groups. The term “storage group” can have different meanings in different contexts, so for clarity, a “storage group” as used herein is simply a group of LUNs. Virtual port IDs are established over each physical port on the array. Each storage group has assigned to it at least one virtual port ID, used by the hosts to access it, so each storage group is separately accessible via at least one unique virtual port ID. A host 12 can access only the LUNs 30 in a storage group whose virtual port ID the switch 14 allows it to reach. As will be seen, the provision of unique virtual port IDs for each storage group allows zoning to be applied by the switch 14 such that each host 12 has access only to designated storage groups. The storage groups thus appear as individual virtual arrays to the hosts 12, and will therefore be referred to herein as “presented virtual arrays”.
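One way to picture a presented virtual array is simply as a group of LUNs bound to one or more virtual port IDs, as in the sketch below. The class and field names, and the particular grouping shown, are assumptions for illustration only, loosely following the L00-L12 / L20-L32 example given earlier.

```python
# Sketch: a presented virtual array is a group of LUNs plus the virtual
# port ID(s) through which hosts are allowed to address it.

from dataclasses import dataclass, field

@dataclass
class PresentedVirtualArray:
    name: str                                   # label only, e.g. "210a"
    lun_numbers: list                           # LUNs belonging to the group
    virtual_port_ids: list = field(default_factory=list)

# Hypothetical example based on the earlier L00-L12 / L20-L32 split.
group_a = PresentedVirtualArray("210a", [f"L{n:02d}" for n in range(0, 13)], ["v0"])
group_b = PresentedVirtualArray("210b", [f"L{n:02d}" for n in range(20, 33)], ["v1"])
print(group_a.virtual_port_ids, group_b.lun_numbers[:3])
```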
In the arrangement shown in the appended drawings, the LUNs 30 of a storage array 16a are arranged into storage groups that form presented virtual arrays 210a-e, and each presented virtual array is assigned at least one of the virtual port IDs v0-v5 over the physical array ports 26.
In accordance with one implementation of the virtual port IDs of the invention, the virtual port IDs are assigned by the modified switch 14a. The ANSI T11 standard that currently defines virtual ports used by hosts is extended to support storage arrays. The process by which virtual port IDs are provided by the switch 14a is shown in the appended drawings; it begins after the physical array ports 26 have received their port IDs during initialization, as previously described.
Then, if virtual port IDs are needed by the ports 26 of the array 16a (step 232), the array controller 44a sends an “FDISC” command to the switch 14a (step 234). The switch 14a receives the FDISC command (step 236) and responds by sending a virtual port ID to the array controller 44a (step 238). The array controller 44a receives the virtual port ID from the switch 14a (step 240). The switch 14a and array controller 44a then perform the registration process to add the virtual port ID to the name server table 42a, as will be described (steps 242, 244). The FDISC command and response are repeated for each virtual port ID required for each physical port (steps 232-244).
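A rough sketch of the request loop in steps 232-244 follows. The `send_fdisc` and `register_with_name_server` functions are placeholders standing in for the FDISC exchange and registration steps the text describes; these names, like the loop structure itself, are assumptions for illustration rather than the disclosed implementation.

```python
# Sketch of the virtual-port-ID acquisition loop (steps 232-244): for each
# physical array port, the array controller issues one FDISC per virtual
# port ID it needs and registers each ID it receives with the name server.

def send_fdisc(physical_port):
    """Placeholder for the FDISC exchange with the switch; returns a new ID."""
    raise NotImplementedError

def register_with_name_server(physical_port, virtual_port_id):
    """Placeholder for the registration steps (242, 244)."""
    raise NotImplementedError

def acquire_virtual_port_ids(physical_ports, ids_needed_per_port):
    assigned = {}                             # physical port -> [virtual IDs]
    for port in physical_ports:
        assigned[port] = []
        for _ in range(ids_needed_per_port[port]):
            virtual_id = send_fdisc(port)             # steps 234-240
            register_with_name_server(port, virtual_id)
            assigned[port].append(virtual_id)
    return assigned
```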
Now the switch 14a can build the name server table 42a in a manner similar to that previously described with respect to the name server table 42, except that the name server table 42a associates multiple virtual port IDs with each physical port name. An example of such a name server table 42a is shown in the appended drawings.
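Continuing the earlier name-server sketch, the only structural change is that one physical port name may now map to several virtual port IDs. The particular port names and the split of v0-v5 between two ports below are hypothetical, shown only to make the shape of the table concrete.

```python
# Sketch of the extended table 42a: each physical port name maps to a list
# of virtual port IDs rather than to a single ID. Contents are illustrative.

name_server_table_42a = {
    "array_port_a_wwn": ["v0", "v1", "v2"],
    "array_port_b_wwn": ["v3", "v4", "v5"],
}
```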
Now that the hosts 12 have the virtual port IDs v0-v5, they can build their directory LUN tables in a manner similar to that previously described with regard to the LUN table 70, using the virtual port IDs in place of physical port IDs.
In accordance with one advantage of the invention, storage array “zoning” can be implemented at the fabric switch in order to physically separate the presented virtual arrays for access only by certain hosts. Fibre Channel switches are able to implement zoning, whereby access between host ports and array ports is specified. But zoning can be implemented only at the port level; it cannot be implemented at the LUN level. In the prior art arrangement described above, therefore, zoning cannot be used to restrict different hosts to different subsets of the LUNs 30 exported through a single physical array port 26.
But in accordance with this further aspect of the invention, since each presented virtual array 210a-e is associated with its own unique virtual port ID v0-v5, the switch 14a can differentiate between the presented virtual arrays 210a-e based upon their virtual port IDs. The switch 14a can be programmed, through its zoning process, to allow or disallow access to each virtual port address from each host-facing switch port. Host access to the presented virtual arrays 210a-e can thus be physically separated, enhancing security, data integrity, and ease of storage management for host applications.
Referring now to the appended drawings, the switch 14a includes a zoning table 43a. The zoning table 43a associates each virtual port ID v0-v5 with the host-facing switch ports 20h through which that virtual port ID may be accessed.
Now, when the switch 14a updates the hosts 12 with the contents of the name server table 42a, it uses the zoning table 43a to filter the presentation of the name server table 42a information to the hosts 12: each host 12 is presented only the entries for the virtual port IDs that the zoning table 43a associates with the host-facing port 20h to which that host is coupled.
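That filtering step might look like the sketch below: the zoning table associates each virtual port ID with the host-facing switch ports allowed to see it, and the switch returns to each host only the matching name-server entries. All table contents and identifiers here are hypothetical, chosen only to illustrate the mechanism.

```python
# Sketch of zoning-based filtering: a host coupled to a given host-facing
# switch port receives only the name-server entries for virtual port IDs
# that the zoning table associates with that port.

zoning_table_43a = {          # virtual port ID -> allowed host-facing ports
    "v0": {"20h_1"},
    "v1": {"20h_1"},
    "v2": {"20h_2"},
}

name_server_entries = [       # (physical port name, virtual port ID) pairs
    ("array_port_a_wwn", "v0"),
    ("array_port_a_wwn", "v1"),
    ("array_port_b_wwn", "v2"),
]

def entries_for_host(host_facing_port):
    """Return only the name-server entries this host is zoned to see."""
    return [(name, vid) for name, vid in name_server_entries
            if host_facing_port in zoning_table_43a.get(vid, set())]

print(entries_for_host("20h_1"))   # this host sees only v0 and v1
```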
Now each host has access only to the LUNs 30 on the virtual array ports allowed by the zoning table 43a in the switch 14a, rather than to all of the LUNs 30 on a physical array port 26. The invention thus provides a very simple and efficient means of presenting virtual arrays to the hosts, without requiring complex array-level software.
The present invention is not to be limited in scope by the specific embodiments described herein. Various modifications of the present invention, in addition to those described herein, will be apparent to those of ordinary skill in the art from the foregoing description and accompanying drawings. For example, the disclosed controllers can be implemented in hardware, software, or both. All such modifications are intended to fall within the scope of the invention. Further, although aspects of the present invention have been described herein in the context of a particular implementation in a particular environment for a particular purpose, those of ordinary skill in the art will recognize that its usefulness is not limited thereto and that the present invention can be beneficially implemented in any number of environments for any number of purposes.