Storage area network interconnect server

Information

  • Patent Grant
  • 7526527
  • Date Filed
    Monday, March 31, 2003
  • Date Issued
    Tuesday, April 28, 2009
Abstract
Systems and methods for interconnecting a SAN with hosts on a remote network are disclosed. The systems and methods receive a set of device identifiers for a set of physical storage devices on the SAN. The device identifiers are mapped to a set of virtual device identifiers. Virtual devices having the virtual device identifiers are created on the remote network. The virtual devices correspond with the physical devices. Commands and responses are mapped and communicated between the virtual devices and the corresponding physical devices.
Description
FIELD

The present invention relates generally to storage area networks, and more particularly to a server for interconnecting a storage area network to a host system.


RELATED FILES

This invention is related to application Ser. No. 10/128,656, filed Apr. 22, 2002, now U.S. Pat. No. 7,165,258 issued Jan. 16, 2007, entitled “SCSI-BASED STORAGE AREA NETWORK”, application Ser. No. 10/131,793, filed Apr. 22, 2002, now U.S. Pat. No. 7,281,062, issued Oct. 9, 2007 entitled “VIRTUAL SCSI BUS FOR SCSI-BASED STORAGE AREA NETWORK”, provisional application Ser. No. 60/374,921, filed Apr. 22, 2002, entitled “INTERNET PROTOCOL CONNECTED STORAGE AREA NETWORK”, application Ser. No. 09/862,648, filed May 22, 2001 entitled “DETERMINING A REMOTE DEVICE NAME” and application Ser. No. 10/356,073, filed Jan. 31, 2003, entitled “STORAGE ROUTER WITH INTEGRATED SCSI SWITCH”, all of the above of which are hereby incorporated by reference.


BACKGROUND

The use of Storage Area Networks (SANs) continues to grow. Generally described, a SAN is a specialized network of storage devices that are connected to each other and to a server or cluster of servers that acts as an access point to the SAN. SANs typically use special switches, most often Fibre Channel based switches, as a mechanism to connect the storage devices.


A SAN provides many advantages to users requiring large amounts of storage. First, a SAN helps to isolate storage activity from a general purpose network. For example, a SAN can provide data to users on the general purpose network at the same time it is being backed up for archival purposes. The data traffic associated with the backup does not compete for bandwidth on the general purpose network; it typically stays on the specialized network.


An additional advantage is that a SAN can be reconfigured, i.e. storage can be added or removed, without disturbing hosts on the general purpose network.


Recently the iSCSI protocol has provided a means for computers on a TCP/IP based network to take advantage of SAN technology without the need to purchase and install expensive Fibre Channel interfaces and software for each host desiring access to the SAN. The iSCSI protocol has provided increased flexibility in the location of SANs with respect to the hosts that access the SAN, because the SAN and the host need only share a TCP/IP based network in order to communicate.


Unfortunately, this same flexibility is not available to hosts on a Fibre Channel based network. In order for a Fibre Channel based host to access a SAN, the SAN must typically be collocated on the same network as the host, resulting in less flexibility in locating storage resources.


In view of the above, there is a need in the art for the present invention.


SUMMARY

The above-mentioned shortcomings, disadvantages and problems are addressed by the present invention, which will be understood by reading and studying the following specification.


Systems and methods interconnect a SAN with hosts on a remote network. The systems and methods receive a set of device identifiers for a set of physical storage devices on the SAN. The device identifiers are mapped to a set of virtual device identifiers. Virtual devices having the virtual device identifiers are created on the remote network. The virtual devices correspond with the physical devices. Commands and responses are mapped and communicated between the virtual devices and the corresponding physical devices.


The present invention describes systems, methods, and computer-readable media of varying scope. In addition to the aspects and advantages of the present invention described in this summary, further aspects and advantages of the invention will become apparent by reference to the drawings and by reading the detailed description that follows.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of a hardware and operating environment in which different embodiments of the invention can be practiced;



FIG. 2 is a block diagram of the major hardware components of an iSCSI Interconnect router according to an embodiment of the invention;



FIGS. 3A-3E are flowcharts illustrating methods of interconnecting a SAN to a host according to embodiments of the invention;





DETAILED DESCRIPTION

In the following detailed description of exemplary embodiments of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific exemplary embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical and other changes may be made without departing from the scope of the present invention.


Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.


In the Figures, the same reference number is used throughout to refer to an identical component which appears in multiple Figures. Signals and connections may be referred to by the same reference number or label, and the actual meaning will be clear from its use in the context of the description.


The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.


Operating Environment

Some embodiments of the invention operate in an environment of systems and methods that provide a means for Fibre Channel based Storage Area Networks (SANs) to be accessed from a host on a remote network. FIG. 1 is a block diagram describing the major components of such an environment. In some embodiments of the invention, a SAN interconnect environment 100 includes one or more storage routers 110 connected through storage network 130 to one or more SCSI devices 140. In the exemplary embodiment shown in FIG. 1, two storage routers 110.1 and 110.2 on two storage networks 130.1 and 130.2 are shown. However, the invention is not limited to any particular number of storage routers or storage area networks.


Each storage router 110 includes an iSCSI interface 104, one or more SCSI routers 105 and a SCSI interface 106. iSCSI interface 104 receives encapsulated SCSI packets from IP network 129, extracts the SCSI packet, and sends it to SCSI router 105. SCSI router 105 determines the appropriate target and sends the SCSI packet to SCSI interface 106. SCSI interface 106 modifies the SCSI packet to conform to its network protocol (e.g., Fibre Channel, parallel SCSI, or iSCSI) and places the modified SCSI packet onto storage network 130. The SCSI packet is then delivered to its designated SCSI device 140. Conversely, SCSI data received from storage network 130 by SCSI interface 106 is sent to SCSI router 105, which determines an appropriate destination and sends the SCSI packet to iSCSI interface 104 for encapsulation in a TCP/IP packet.
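The packet flow described above can be sketched informally as follows. This is an illustrative model only, not the patented implementation; the dictionary-based packet and table representations are invented for clarity.

```python
# Illustrative sketch of the storage router's outbound packet flow:
# iSCSI interface 104 strips the TCP/IP encapsulation, SCSI router 105
# selects the target, and SCSI interface 106 re-frames the command for
# the storage network protocol. Data shapes here are hypothetical.

def route_to_storage(encapsulated_packet, routing_table):
    """Forward one encapsulated SCSI command toward its physical target."""
    scsi_packet = encapsulated_packet["payload"]        # iSCSI interface 104
    target = routing_table[scsi_packet["target_name"]]  # SCSI router 105
    # SCSI interface 106: adapt the packet to the storage-network protocol.
    return {"transport": target["protocol"],
            "dest": target["port"],
            "payload": scsi_packet}

# Hypothetical routing entry and command.
routing_table = {"disk0": {"protocol": "fibre-channel", "port": "fc1"}}
frame = route_to_storage(
    {"payload": {"target_name": "disk0", "cdb": b"\x28"}}, routing_table)
```

The reverse (storage-to-IP) direction would simply invert the last step, handing the SCSI payload back to the iSCSI interface for TCP/IP encapsulation.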


In one embodiment, storage router 110 provides IPv4 router functionality between a Gigabit Ethernet and a Fibre Channel interface. In one such embodiment, static routes are supported. In addition, storage router 110 supports a configurable MTU size for each interface, and has the ability to reassemble and refragment IP packets based on the MTU of the destination interface.


In one embodiment, storage router 110 acts as a gateway, converting SCSI protocol between Fibre Channel and TCP/IP. Storage router 110 is configured in such an embodiment to present Fibre Channel devices as iSCSI targets, providing the ability for clients on the IP network to directly access storage devices.


In one embodiment, SCSI routing occurs in storage router 110 through the mapping of physical storage devices to iSCSI targets. An iSCSI target (also called a logical target) is an arbitrary name for a group of physical storage devices. Mappings from an iSCSI target to multiple physical devices can be established using configuration programs on storage router 110. An iSCSI target always contains at least one Logical Unit Number (LUN). Each LUN on an iSCSI target is mapped to a single LUN on a physical storage target.
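The logical-target mapping just described can be pictured as a small table. The target and device names below are invented for illustration; the only constraint taken from the text is that each logical LUN maps to exactly one LUN on one physical target.

```python
# Sketch of an iSCSI logical-target map: an arbitrary target name groups
# physical devices, and each logical LUN resolves to a single
# (physical target, physical LUN) pair. All names are hypothetical.

iscsi_targets = {
    "webserver-storage": {       # arbitrary logical target name
        0: ("fc-disk-a", 0),     # logical LUN -> (physical target, LUN)
        1: ("fc-disk-b", 2),
    },
}

def resolve_lun(target_name: str, lun: int):
    """Translate a logical (target, LUN) pair to its physical pair."""
    return iscsi_targets[target_name][lun]
```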


Environment 100 also includes an iSCSI SAN interconnect router 111, also referred to as ISI router 111, communicably coupled to one or more storage routers 110 through IP network 129. ISI router 111 is also communicably coupled to one or more Fibre Channel hosts 150 through Fibre Channel network 160. Fibre Channel network 160 may also include one or more physical storage devices 142.


ISI router 111, like storage router 110, includes an iSCSI interface 104 and SCSI interface 106. In some embodiments, SCSI interface 106 is a Fibre Channel interface connected to a Fibre Channel network 160. In some embodiments, ISI router 111 includes a Fibre Channel server 120. Fibre Channel server 120 is typically configured with the address of each storage router 110. Fibre Channel server 120 learns the SCSI devices 140 coupled through storage network 130 to each configured storage router 110, and presents these devices as if they were present on Fibre Channel network 160. Thus, Fibre Channel server 120 maintains a set of one or more virtual storage devices 144 for Fibre Channel network 160, each virtual storage device 144 corresponding to a physical SCSI device 140. Fibre Channel hosts 150 can access the virtual storage devices 144 as if they were physically present on network 160. The ISI router 111 effectively serves as a bridge between the hosts on Fibre Channel network 160 and the devices on storage area network 130, using the iSCSI protocol to communicate between the ISI router 111 and storage routers 110. Further details on the operation of the system are described below in the methods section.



FIG. 2 is a block diagram providing further details of the major hardware components comprising storage router 110 and ISI router 111. In some embodiments, a storage router 110 or ISI router 111 includes a router portion 210 and a switch portion 220 on a common motherboard 200. The motherboard is powered by a power supply (not shown) and cooled by a common cooling system, such as a fan (also not shown).


Router portion 210, which in the exemplary embodiment complies with draft 08 and later versions of the iSCSI protocol and incorporates commercially available router technology, such as the 5420 and 5428 Storage Routers from Cisco Systems, Inc. of San Jose, Calif., includes Gigabit Ethernet (GE) ports 211.1 and 211.2, console port 212, management port 213, high-availability (HA) port 214, bridge-and-buffer module 215, interface software 216, router processor 217, and router-to-switch interface 218.


GE ports 211.1 and 211.2 couple the storage router to an IP network for access by one or more servers or other computers, such as servers or iSCSI hosts (in FIG. 1). In some embodiments, GE ports 211.1 and 211.2 have respective MAC addresses, which are determined according to a base MAC address for the storage router plus 31 minus the respective port number. Two or more Gigabit Ethernet interfaces may be available. In some embodiments, one or more of the Gigabit Ethernet interfaces may provide internal support for maintaining Virtual Local Area Networks (VLANs). Each SCSI router typically supports a single IP address. The SCSI router IP address may be tied to any network (or VLAN) on either GE interface. Generally at least one SCSI router instance is created for each GE interface.


Console port 212 couples to a local control console (not shown). In the exemplary embodiment, this port takes the form of an RS-232 interface.


Management port 213 provides a connection for managing and/or configuring storage router 110. In the exemplary embodiment, this port takes the form of a 10/100 Ethernet port and may be assigned the base MAC address for the router-switch.


HA port 214 provides a physical connection for high-availability communication with another router-switch, such as storage router 110 in FIG. 1. In the exemplary embodiment, this port takes the form of a 10/100 Ethernet port, and is assigned the base MAC address plus 1.


Bridge-and-buffer module 215, which is coupled to GE ports 211.1 and 211.2, provides router services that are compliant with draft 08 and later versions of the iSCSI protocol. In the exemplary embodiment, module 215 incorporates a Peripheral Component Interconnect (PCI) bridge, such as the GT64260 from Marvell Technology Group, Ltd. of Sunnyvale, Calif. Module 215 also includes a 64-megabyte flash file system, a 1-megabyte boot flash, and a 256-megabyte non-volatile flash memory (not shown separately). Configuration memory 230 may be part of the flash file system, the boot flash or the non-volatile flash memory, or it may be a separate non-volatile flash memory. In addition, in alternative embodiments, configuration memory 230 may be part of a hard disk, CD-ROM, DVD-ROM or other persistent memory (not shown). The invention is not limited to any particular type of memory for configuration memory 230.


In addition to data and other software used for conventional router operations, module 215 includes router-switch interface software 216. Router-switch software 216 performs iSCSI routing between servers and the storage devices. The software includes an integrated router-switch command line interface module CLI and a web-based graphical-user-interface module (GUI) for operation, configuration and administration, maintenance, and support of the router-switch 110. Both the command-line interface and the graphical user interface are accessible from a terminal via one or both of the ports 213 and 214. Additionally, to facilitate management activities, interface software 216 includes an SNMP router-management agent AGT and an MIB router handler HD. (SNMP denotes the Simple Network Management Protocol, and MIB denotes Management Information Base (MIB)). The agent and handler cooperate with counterparts in switch portion 220 (as detailed below) to provide integrated management and control of router and switching functions in router-switch 200.


Router processor 217, in the exemplary embodiment, is implemented as a 533-MHz MPC7410 PowerPC from Motorola, Inc. of Schaumburg, Ill. This processor includes a 1-megabyte local L2 cache (not shown separately). In the exemplary embodiment, router processor 217 runs a version of the VxWorks operating system from Wind River Systems, Inc. of Alameda, Calif. To support this operating system, the exemplary embodiment also provides means for isolating file allocation tables from other high-use memory areas (such as areas where log and configuration files are written).


Coupled to router processor 217, as well as to bridge-and-buffer module 215, is router-to-switch (RTS) interface 218. RTS interface 218 includes N/NL switch-interface ports 218.1 and 218.2 and management-interface port 218.3, where the port type of N or NL is determined by negotiation. N type ports may act as Fibre Channel point-to-point ports, while NL type ports may negotiate as a loop.


Switch-interface ports 218.1 and 218.2 are internal Fibre Channel (FC) interfaces through which the router portion conducts I/O operations with the switch portion. When a mapping to a FC storage device is created, the router-switch software automatically selects one of the switch-interface ports to use when accessing the target device. The internal interfaces are selected at random and evenly on a per-LUN (logical unit number) basis, allowing the router-switch to load-balance between the two FC interfaces. The operational status of these internal FC interfaces is monitored by each active SCSI Router application running on the switch-router. The failure of either of these two interfaces is considered a unit failure, and if the switch-router is part of a cluster, all active SCSI Router applications will fail over to another switch-router in the cluster. Other embodiments allow operations to continue with the remaining switch-interface port. Still other embodiments include more than two switch-interface ports.


In the exemplary embodiment, the N/NL switch-interface ports can each use up to 32 World Wide Port Names (WWPNs). The WWPNs for port 218.1 are computed as 28+virtual port+base MAC address, and the WWPNs for port 218.2 are computed as 29+virtual port+base MAC address. Additionally, switch-interface ports 218.1 and 218.2 are hidden from the user. One exception is the WWPN of each internal port. The internal WWPNs are called “initiator” WWPNs. Users who set up access control by WWPN on their FC devices set up the device to allow access to both initiator WWPNs.
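One plausible reading of the WWPN formulas above (an assumption on my part; the text does not spell out the arithmetic) is that each virtual port's WWPN is the base MAC address plus a per-interface constant (28 for port 218.1, 29 for port 218.2) plus the virtual port index, all treated as integers:

```python
# Hypothetical interpretation of the WWPN derivation described above.
# The base MAC value and the additive reading of "28 + virtual port +
# base MAC address" are assumptions, not taken from the patent text.

PORT_218_1_CONSTANT = 28
PORT_218_2_CONSTANT = 29

def wwpn(base_mac: int, interface_constant: int, virtual_port: int) -> int:
    """Derive one of up to 32 WWPNs for a switch-interface port."""
    return base_mac + interface_constant + virtual_port

BASE_MAC = 0x0000C0FFEE000000  # invented base MAC, widened to 64 bits
```

Under this reading, the two internal interfaces always yield distinct WWPNs for the same virtual port, since their constants differ by one.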


Switch-interface port 218.3 is used to exchange configuration data and obtain operational information from switch portion 220 through its management-interface port 224. In the exemplary embodiment, switch-interface port 218.3 is a 10/100 Ethernet port. In the exemplary embodiment, this exchange occurs under the control of a Switch Management Language (SML) Application Program Interface (API) that is part of interface software 216. One example of a suitable API is available from QLogic Corporation of Aliso Viejo, Calif. Ports 218.1, 218.2, and 218.3 are coupled respectively to FC interface ports 221.1 and 221.2 and interface port 224 of switch portion 220.


Switch portion 220, which in the exemplary embodiment incorporates commercially available technology and supports multiple protocols including IP and SCSI, additionally includes internal FC interface ports 221.1 and 221.2, an FC switch 222, external FC ports (or interfaces) 223.1-223.8, a management interface port 224, and a switch processor module 225.


FC interface ports 221.1 and 221.2 are coupled respectively to ports 218.1 and 218.2 of the router-to-switch interface via internal optical fiber links, thereby forming internal FC links. In the exemplary embodiment, each FC interface supports auto-negotiation as either an F or FL port.


FC switch 222, in the exemplary embodiment, incorporates a SANbox2-16 FC switch from QLogic Corporation. This SANbox2 switch includes QLogic's Itasca switch ASIC (application-specific integrated circuit.) Among other things, this switch supports Extended Link Service (ELS) frames that contain manufacturer information.


FC ports 223.1-223.8, which adhere to one or more FC standards or other desirable communications protocols, can be connected as point-to-point links, in a loop or to a switch. For flow control, the exemplary embodiment implements a Fibre Channel standard that uses a look-ahead, sliding-window scheme, which provides a guaranteed delivery capability. In this scheme, the ports output data in frames that are limited to 2148 bytes in length, with each frame having a header and a checksum. A set of related frames for one operation is called a sequence.


Moreover, the FC ports are auto-discovering and self-configuring and provide 2-Gbps full-duplex, auto-detection for compatibility with 1-Gbps devices. For each external FC port, the exemplary embodiment also supports: Arbitrated Loop (AL) Fairness; Interface enable/disable; Linkspeed settable to 1 Gbps, 2 Gbps, or Auto; Multi-Frame Sequence bundling; Private (Translated) Loop mode.


Switch processor module 225 operates the FC switch and includes a switch processor (or controller) 225.1, and associated memory that includes a switch management agent 225.2, and a switch MIB handler 225.3. In the exemplary embodiment, switch processor 225.1 includes an Intel Pentium processor and runs a Linux operating system. Additionally, module 225 has its own software image, initialization process, configuration commands, command-line interface, and graphical user interface (not shown). (In the exemplary embodiment, this command-line interface and graphical user interface are not exposed to the end user.) A copy of the switch software image for the switch portion is maintained as a tar file 226 in bridge-and-buffer module 215 of router portion 210.


Further details on the operation of the above-described system, including high availability embodiments, can be found in application Ser. No. 10/128,656, entitled “SCSI-BASED STORAGE AREA NETWORK”, application Ser. No. 10/131,793, entitled “VIRTUAL SCSI BUS FOR SCSI-BASED STORAGE AREA NETWORK”, and provisional application Ser. No. 60/374,921, entitled “INTERNET PROTOCOL CONNECTED STORAGE AREA NETWORK”, all of which have been previously incorporated by reference.


SAN Interconnection Method


FIGS. 3A-3E are flowcharts illustrating methods according to embodiments of the invention for interconnecting a SAN with hosts on a remote network. The methods to be performed by the operating environment constitute computer programs made up of computer-executable instructions. Describing the methods by reference to a flowchart enables one skilled in the art to develop such programs, including such instructions to carry out the methods on suitable computers (the processor or processors of the computer executing the instructions from computer-readable media). The methods illustrated in FIGS. 3A-3E are inclusive of acts that may be taken by an operating environment executing an exemplary embodiment of the invention.


The method begins when a system such as ISI router 111 receives a set of device identifiers representing physical storage devices on a SAN (block 305). In some embodiments, the device identifiers comprise a set of SCSI World Wide Port Names (WWPNs). In some embodiments, these device identifiers will be received in response to a discovery request issued to a storage router connected to the SAN. In some embodiments, the system must log in to the storage router using iSCSI login procedures prior to issuing the request. In some embodiments, the login process and device discovery take place as described in U.S. patent application Ser. No. 09/862,648 entitled “Determining a Remote Device Name”, which has been previously incorporated by reference.


Next, the system maps the WWPNs for the physical devices to a corresponding set of WWPNs for virtual devices (block 310). Typically the NAA (Network Address Authority) value of the WWPN for the physical device will be 2, indicating a globally unique WWPN. In some embodiments, the mapping step comprises creating a WWPN for the virtual device by changing the NAA to a value of 3, indicating that the WWPN is a locally assigned address. In alternative embodiments of the invention, a WWPN for the virtual device may be constructed using the MAC address of the ISI router system and an NAA value of 3.
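The NAA rewrite described above can be sketched as a bit operation, assuming (per the Fibre Channel name-identifier format) that the NAA value occupies the top four bits of a 64-bit WWPN. The example WWPN value is invented.

```python
# Sketch of mapping a physical WWPN to a virtual WWPN by changing the
# NAA field from 2 (globally unique) to 3 (locally assigned). Assumes
# the NAA value is the high nibble of a 64-bit WWPN.

def to_virtual_wwpn(physical_wwpn: int) -> int:
    """Rewrite a globally unique (NAA 2) WWPN as locally assigned (NAA 3)."""
    assert physical_wwpn >> 60 == 0x2, "expected a globally unique WWPN"
    # Clear the top nibble, then set it to 3.
    return (physical_wwpn & 0x0FFFFFFFFFFFFFFF) | (0x3 << 60)

virtual = to_virtual_wwpn(0x2100000011223344)  # hypothetical physical WWPN
```

The alternative embodiment in the text, building the virtual WWPN from the ISI router's own MAC address with NAA 3, would replace the lower bits rather than preserve them.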


In some embodiments, a World Wide Node Name (WWNN) for the virtual devices is formed comprising the MAC address of the ISI router with a NAA value of 1.


Next, the system presents a set of virtual devices with their corresponding WWPNs on a second network, i.e., a network remote from the SAN (block 315). The virtual devices appear to hosts on the second network as if they were physically present on the second network. The ISI router system responds to commands issued by hosts on the second network for the virtual devices on behalf of the physical devices. Upon receiving such a command (block 320), the system determines the physical device corresponding to the virtual device (block 325) and sends the command to the physical device (block 330). The system maintains data associating the command with the host that issued the command so that it may send any responses to the command to the correct host.
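The command/response bookkeeping described above (blocks 320-345) can be sketched as follows. This is an illustrative model; the class name, command tags, and device names are invented.

```python
# Sketch of the ISI router's virtual/physical mapping and the pending-
# command table that routes responses back to the issuing host.

class IsiRouter:
    def __init__(self, virtual_to_physical):
        self.v2p = dict(virtual_to_physical)
        self.p2v = {p: v for v, p in self.v2p.items()}
        self.pending = {}  # command tag -> issuing host

    def host_command(self, host, virtual_dev, tag, cdb):
        physical = self.v2p[virtual_dev]   # block 325: find physical device
        self.pending[tag] = host           # remember who issued the command
        return physical, cdb               # block 330: forward to the device

    def device_response(self, physical_dev, tag, data):
        virtual = self.p2v[physical_dev]   # block 340: find virtual device
        host = self.pending.pop(tag)       # block 345: route back to host
        return host, virtual, data

router = IsiRouter({"vdisk0": "pdisk0"})
```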


Upon receiving a response from a physical device (block 335), the system determines the corresponding virtual device (block 340), and then determines which host issued the command to that virtual device. The response is then sent to the host (block 345).



FIG. 3B illustrates a method according to an embodiment of the invention for updating virtual device information. The method begins by discovering iSCSI targets on physical devices on a SAN (block 352). This may result in the discovery of new targets, or the elimination of targets, as the SAN is reconfigured or as devices on the SAN power up or down. The current set of targets is then mapped to corresponding Fibre Channel targets on the virtual devices presented by the system (block 354). The system then waits for a predetermined time (block 356) and returns to block 352 to rediscover targets. In some embodiments of the invention, the predetermined time is sixty seconds; however, the invention is not limited to any particular time period. By rediscovering periodically, some embodiments of the invention avoid presenting stale targets for the virtual devices.
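The core of the rediscovery loop in FIG. 3B is a comparison between the freshly discovered target set and the set currently presented. A minimal sketch (target names invented; only the sixty-second period comes from the text):

```python
# Sketch of periodic target rediscovery: diff the discovered targets
# against those currently presented to decide what to add or retire.

REDISCOVERY_PERIOD_S = 60  # the period given for some embodiments

def refresh_targets(presented: set, discovered: set):
    """Return (new targets to present, stale targets to remove)."""
    return discovered - presented, presented - discovered

to_add, to_remove = refresh_targets({"t1", "t2"}, {"t2", "t3"})
```

In a running system this diff would be applied to the virtual-device map every `REDISCOVERY_PERIOD_S` seconds.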



FIG. 3C illustrates a method according to an embodiment of the invention for processing the opening of a target on a virtual device. The method begins by receiving a request to open a target on a virtual device from a host on the same network as the virtual device (block 360). In some embodiments, the system does not immediately establish an iSCSI connection to a target on the corresponding physical device, but rather waits until a command is received for the target on the virtual device (block 362). The system then attempts to open a connection to the target on the physical device (block 364). If the iSCSI connection attempt is successful, the command is sent to the physical target through the iSCSI connection (block 366). A successful connection attempt in some embodiments is one in which a TCP connection can be established and a successful iSCSI login completed.


Otherwise, if the connection cannot be established within a predetermined time, the system returns to block 364 to attempt another connection. By waiting until a command is actually received, the system conserves iSCSI connections and does not open one until it is actually needed. Additionally, in some embodiments, there is one iSCSI connection per remote host/target combination. This has the effect in some embodiments of preserving the host identity through the ISI router.
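The lazy, per-host/target connection strategy just described can be sketched as follows. The class and the `dial` callback are invented; the one-connection-per-host/target-pair rule comes from the text.

```python
# Sketch of on-demand iSCSI connection opening: the connection for a
# (host, target) pair is established only when the first command for
# that pair actually arrives, and is then reused.

class LazyConnections:
    def __init__(self, dial):
        self.dial = dial   # callback that opens an iSCSI connection
        self.conns = {}    # (host, target) -> open connection

    def send(self, host, target, command):
        key = (host, target)
        if key not in self.conns:              # block 364: connect on demand
            self.conns[key] = self.dial(host, target)
        return self.conns[key], command        # block 366: send the command

pool = LazyConnections(lambda h, t: f"conn-{h}-{t}")
```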



FIG. 3D illustrates a method for maintaining a connection once it has been opened. The method begins by opening an iSCSI connection for virtual-device-to-physical-device data flow (block 370). After the connection is established, the system checks to see if a command is being processed through the connection (block 372). If a command is outstanding, the system waits a predetermined time before checking again.


If a command is not currently being processed through the connection, the system sends a “no operation” command to the target. In one embodiment, an iSCSI NOP (no operation) command may be sent to the target on the physical device associated with the connection (block 374). In some embodiments, the response bit is set in the command to indicate that the device should respond. The system then waits a predetermined time to receive a response to the NOP command (block 376). In some embodiments of the invention, the system waits up to sixty seconds for a response. If a response is received (block 378), the system returns to block 372 and checks again at a later time.


If no response is received (block 378), a check is made to determine whether a maximum waiting time has been reached (block 380). In some embodiments, the maximum waiting time is measured from when the first NOP command is sent after detecting that no commands are outstanding. If the maximum wait time has not yet been reached, the system returns to block 374 and resends a NOP command to the target on the physical device. Otherwise, if the maximum wait time has been reached, the system closes the connection (block 382). In some embodiments of the invention, the maximum waiting time is 180 seconds; however, the invention is not limited to any particular value for the maximum waiting time. The method illustrated in FIG. 3D allows the system of some embodiments of the invention to prune dead connections faster than standard network timeouts would otherwise allow.
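The keep-alive timing in FIG. 3D can be sketched with the example values the text gives (60-second NOP wait, 180-second maximum). The probe callback is an invented abstraction over sending one NOP and awaiting its reply.

```python
# Sketch of the idle-connection keep-alive logic: probe with NOPs,
# waiting up to NOP_TIMEOUT_S per probe, and give up (close the
# connection) after MAX_WAIT_S without any reply.

NOP_TIMEOUT_S = 60   # example value from the text
MAX_WAIT_S = 180     # example value from the text

def probe_idle_connection(send_nop_and_wait) -> bool:
    """Return True if the connection is alive, False if it should close.

    send_nop_and_wait() sends one NOP (response bit set) and returns
    True if a reply arrived within NOP_TIMEOUT_S.
    """
    waited = 0
    while waited < MAX_WAIT_S:
        if send_nop_and_wait():
            return True              # block 378: response received
        waited += NOP_TIMEOUT_S      # block 380: not at max wait; retry
    return False                     # block 382: close the connection
```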



FIG. 3E illustrates a method for closing a connection in accordance with an embodiment of the invention. The method begins when an iSCSI connection is closed on the system (block 390). The closure may be related to any of a number of reasons. For example, the network connection between an ISI router and a storage router may be dropped due to network failure, a physical device may have powered down, or a storage router or SAN may have powered down. The invention is not limited to any particular reason for connection closure.


After the iSCSI connection has closed, the system checks to determine if any commands were outstanding on the closed connection (block 392). If not, the system may proceed to block 360 (FIG. 3C) to await a new command to cause a new iSCSI connection to be opened. The host using the target on the virtual device will typically not even notice that the connection was closed. Additionally, there is no need for the host to perform a Fibre Channel login for the target on the virtual device.


Otherwise, if a command was outstanding, the system returns the command to the host that issued it (block 394). In some embodiments, the command is returned with a unit check and sense record to indicate that the command should be retried.


This section has described the various software methods in a system that interconnects a SAN with hosts on a remote network. As those of skill in the art will appreciate, the software can be written in any of a number of programming languages known in the art, including but not limited to C/C++, Visual Basic, Smalltalk, Pascal, Ada and similar programming languages. The invention is not limited to any particular programming language for implementation.


CONCLUSION

Systems and methods for interconnecting a SAN with hosts on a remote network are described. The systems and methods form a bridge from hosts on a remote network to storage devices on a SAN. Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement that is calculated to achieve the same purpose may be substituted for the specific embodiments shown. For example, the present invention has been described in the context of a storage router network device. The systems and methods of the invention apply equally well to other types of network devices having a plurality of internal and external network interfaces. This application is intended to cover any adaptations or variations of the present invention.


The terminology used in this application is meant to include all of these environments. It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. Therefore, it is manifestly intended that this invention be limited only by the following claims and equivalents thereof.

Claims
  • 1. A method for providing access to a storage area network, the method comprising: accessing a storage router connected to a set of physical storage devices on the storage area network, the storage router including a plurality of SCSI routers assigned a plurality of IP addresses; issuing a discovery request to the storage router to determine the presence of the set of physical devices on the storage area network; receiving a set of device identifiers for the set of physical storage devices on an iSCSI interconnect router; mapping the set of identifiers to a set of virtual device identifiers; presenting to a host on a second network a set of virtual storage devices corresponding to the set of physical storage devices and identified by the set of virtual device identifiers; waiting a predetermined period of time to determine if changes in the set of physical devices exist; updating the set of virtual storage devices in accordance with the changes.
  • 2. The method of claim 1, wherein the predetermined time is sixty seconds.
  • 3. The method of claim 1, wherein determining if changes exist includes determining changes in iSCSI targets on the physical devices.
  • 4. A method for interconnecting a storage device on a storage area network to a host on a remote network, the method comprising: accessing a storage router connected to a set of physical storage devices on the storage area network, the storage router including a plurality of SCSI routers assigned a plurality of IP addresses; receiving from the host a request to open a target on a virtual device presented on the remote network, said virtual device corresponding to a physical device on the storage area network and coupled to the storage router; receiving a command from the host for the target on the virtual device; opening an iSCSI connection to a target on the physical device coupled to the storage router using one of the plurality of SCSI routers; if the connection succeeds, then issuing the command to the target on the physical device, otherwise waiting a predetermined time and repeating the attempt to open an iSCSI connection to the target on the physical device.
  • 5. A method for maintaining a connection between a physical storage device on a storage area network and a virtual storage device presented on a remote network, the method comprising: accessing a storage router connected to a set of physical storage devices on the storage area network, the storage router including a plurality of SCSI routers assigned a plurality of IP addresses, wherein at least one of the plurality of SCSI routers manages one or more iSCSI connections to one or more targets on the physical storage device; mapping a virtual storage device to a physical storage device on a storage area network connected to the storage router through one of the plurality of SCSI routers; determining if a command is currently being processed by the physical storage device; if no command is currently being processed, then: sending a no-op command to the physical storage device, waiting up to a predetermined time to determine if a response is received from the physical storage device, and if no response is received, then determining if a maximum waiting time has elapsed and if so closing the connection.
  • 6. The method of claim 5, wherein the predetermined time is sixty seconds.
  • 7. The method of claim 5, wherein the maximum waiting time is 180 seconds.
  • 8. The method of claim 5, wherein the command is a SCSI command.
  • 9. A tangible computer-readable storage medium having stored thereon computer executable instructions for performing a method for providing access to a storage area network, the method comprising: accessing a storage router connected to a set of physical storage devices on the storage area network, the storage router including a plurality of SCSI routers assigned a plurality of IP addresses; issuing a discovery request to the storage router to determine the presence of the set of physical devices on the storage area network; receiving a set of device identifiers for the set of physical storage devices on an iSCSI interconnect router; mapping the set of identifiers to a set of virtual device identifiers; presenting to a host on a second network a set of virtual storage devices corresponding to the set of physical storage devices and identified by the set of virtual device identifiers; waiting a predetermined period of time to determine if changes in the set of physical devices exist; updating the set of virtual storage devices in accordance with the changes.
  • 10. The tangible computer-readable storage medium of claim 9, wherein the predetermined time is sixty seconds.
  • 11. The tangible computer-readable storage medium of claim 9, wherein determining if changes exist includes determining changes in iSCSI targets on the physical devices.
  • 12. A tangible computer-readable storage medium having stored thereon computer executable instructions for performing a method for interconnecting a storage device on a storage area network to a host on a remote network, the method comprising: accessing a storage router connected to a set of physical storage devices on the storage area network, the storage router including a plurality of SCSI routers assigned a plurality of IP addresses; receiving from the host a request to open a target on a virtual device presented on the remote network, said virtual device corresponding to a physical device on the storage area network and coupled to the storage router; receiving a command from the host for the target on the virtual device; opening an iSCSI connection to a target on the physical device coupled to the storage router using one of the plurality of SCSI routers; if the connection succeeds, then issuing the command to the target on the physical device, otherwise waiting a predetermined time and repeating the attempt to open an iSCSI connection to the target on the physical device.
  • 13. A tangible computer-readable storage medium having stored thereon computer executable instructions for performing a method for maintaining a connection between a physical storage device on a storage area network and a virtual storage device presented on a remote network, the method comprising: accessing a storage router connected to a set of physical storage devices on the storage area network, the storage router including a plurality of SCSI routers assigned a plurality of IP addresses, wherein at least one of the plurality of SCSI routers manages one or more iSCSI connections to one or more targets on the physical storage device; mapping a virtual storage device to a physical storage device on a storage area network connected to the storage router through one of the plurality of SCSI routers; determining if a command is currently being processed by the physical storage device; if no command is currently being processed, then: sending a no-op command to the physical storage device, waiting up to a predetermined time to determine if a response is received from the physical storage device, and if no response is received, then determining if a maximum waiting time has elapsed and if so closing the connection.
  • 14. The tangible computer-readable storage medium of claim 13, wherein the predetermined time is sixty seconds.
  • 15. The tangible computer-readable storage medium of claim 13, wherein the maximum waiting time is 180 seconds.
  • 16. The tangible computer-readable storage medium of claim 13, wherein the command is a SCSI command.
  • 17. A computerized system comprising: means for accessing a storage router connected to a set of physical storage devices on the storage area network, the storage router including a plurality of SCSI routers assigned a plurality of IP addresses; means for issuing a discovery request to the storage router to determine the presence of the set of physical devices on the storage area network; means for receiving a set of device identifiers for the set of physical storage devices on an iSCSI interconnect router; means for mapping the set of identifiers to a set of virtual device identifiers; means for presenting to a host on a second network a set of virtual storage devices corresponding to the set of physical storage devices and identified by the set of virtual device identifiers; waiting a predetermined period of time to determine if changes in the set of physical devices exist; means for updating the set of virtual storage devices in accordance with the changes.