A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the drawings, the software descriptions/examples, and data as described below: Copyright © 2001-2002, Cisco Systems, Inc., All Rights Reserved.
This invention relates generally to data storage, and more particularly to a system and method for making SCSI-based devices accessible across a network.
As electronic business (ebusiness) grows, so does the need for better ways to share and manage large amounts of data. The amount of data storage required by today's ebusinesses is staggering. A good example of this is mail.com, which grew to 60 terabytes of storage in just 45 days.
Today almost all client access to large-scale storage is accomplished by sending requests through general-purpose servers that connect an IP network (e.g., LAN or WAN) to the storage network (e.g., a Storage Area Network (SAN)). Storage Area Networks provide access to large amounts of data storage.
SANs, however, are complex systems. A recent Enterprise Management Associates (EMA) study of 187 IT professionals found that only 20% of customers had installed SANs by the end of 1999, and 46% of the respondents in that survey said they had no plans to install a SAN. The top four reasons for delaying or for deciding not to install a SAN were: high implementation costs, lack of qualified staff, technology immaturity, and lack of standards. Furthermore, although SANs typically are very good at connecting native storage resources, they are distance-limited and have no knowledge of IP and its priorities.
Often, customers outsource their storage to a storage service provider (SSP), who manages their storage needs for a predetermined fee. A typical application would use a distributed Fibre Channel (FC) network to connect an IP network to FC devices located at either a local or a remote site. In this example, the SSP provides the entire storage infrastructure on the customer's premises. While FC has numerous advantages, it lacks network management tools and is significantly higher priced than comparable Ethernet products. Most importantly, due to lack of network security, the SSP must create a separate Storage Area Network for each customer at the SSP to separate the data of multiple customers.
For the reasons stated above, and for other reasons stated below which will become apparent to those skilled in the art upon reading and understanding the present specification, there is a need in the art for a system and method for accessing SANs over an IP network in a more integrated fashion.
In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
Some portions of the detailed descriptions which follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the actions and processes of a computer system, or similar computing device, to manipulate and transform data. Unless specifically stated otherwise, the data being manipulated is stored as physical (e.g., electronic) representations within computer system registers and memories, or within other information storage, transmission or display devices. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
A SCSI-based storage system is shown in
As is shown in
Similarly, as is shown in the embodiment in
In one embodiment, a driver in each server 127, 128 is used to encapsulate SCSI commands into one or more IP packets. Such an embodiment is shown in
The iSCSI protocol aims to be fully compliant with the requirements laid out in the SCSI Architecture Model-2 (SAM-2) document. The iSCSI protocol is a mapping of the SCSI remote procedure invocation model (see the SAM document) over the TCP protocol. SCSI commands are carried by iSCSI requests, and SCSI responses and status are carried by iSCSI responses. iSCSI also uses this request-response mechanism for its own protocol mechanisms.
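To make the encapsulation concrete, the following Python sketch shows how a driver might wrap a SCSI command descriptor block (CDB) in a request frame for transport over a TCP connection. This is a minimal illustration under assumed field sizes; the actual iSCSI PDU layout (basic header segment, digests, sequence numbers) is defined by the iSCSI specification, and the function names here are hypothetical.

```python
import struct

def encapsulate_scsi_command(target_name: str, lun: int, cdb: bytes) -> bytes:
    """Wrap a SCSI CDB in a toy request frame for transport over TCP.

    Illustrative only: field widths and ordering are assumptions, not the
    byte-exact iSCSI basic header segment.
    """
    name = target_name.encode("ascii")
    # Toy header: name length (2 bytes), LUN (4 bytes), CDB length (2 bytes).
    header = struct.pack("!HIH", len(name), lun, len(cdb))
    return header + name + cdb

def decapsulate_scsi_command(frame: bytes):
    """Inverse of encapsulate_scsi_command: recover (target, lun, cdb)."""
    name_len, lun, cdb_len = struct.unpack("!HIH", frame[:8])
    name = frame[8:8 + name_len].decode("ascii")
    cdb = frame[8 + name_len:8 + name_len + cdb_len]
    return name, lun, cdb

# Example: a 6-byte INQUIRY CDB addressed to a logical target, LUN 9.
frame = encapsulate_scsi_command("Database", 9, bytes([0x12, 0, 0, 0, 36, 0]))
assert decapsulate_scsi_command(frame) == ("Database", 9, bytes([0x12, 0, 0, 0, 36, 0]))
```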
Returning to
One embodiment of storage router 110 is shown in
In the embodiment shown in
In the embodiment shown in
In one such embodiment, processor 170 is implemented as a PowerPC 750 microprocessor 171 running at 500 MHz and having 512 KB of local L2 cache 172. Microprocessor 171 connects through bus 176 to a 64-bit, 66-MHz PCI bridge 173 that controls 128 MB to 1 GB of SDRAM 174. Bridge 173 also controls interfaces 148, 158 and 168 and a PCI bus 177.
In the embodiment shown in
In one embodiment, a 32 MB FLASH-type non-volatile storage 175 is provided to store the software that is loaded into processor 170.
The storage router 110 software provides SCSI routing between servers and the storage devices. In one embodiment, the software includes a command line interface (CLI) and web-based graphical user interface (GUI) for operation, configuration and administration, maintenance, and support tasks of storage router 110 from a terminal connected to one or both of the management ports 158 and/or 168.
Another embodiment of a SCSI-based storage system 100 is shown in
In one embodiment, computers 127-128 formulate storage commands as if to their own iSCSI devices (with target and LUN addresses (or names)). The commands are placed in IP packets that are passed over IP network 129 (for example, a GbE network) and are received by iSCSI interface 104, which strips off the TCP/IP headers. SCSI router 105 then maps the logical iSCSI targets or target/LUN combinations to the SCSI addresses used on storage network 139. Interface 106, which in some embodiments is a Fibre Channel interface, and in other embodiments is a parallel SCSI interface (or even another iSCSI interface), then packages the commands and/or data (for example, adding FCP and FC headers for information going to an FC network 139) and sends them to one of the storage devices 140.
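The mapping step in this flow can be pictured with a short Python sketch. The table contents, function name, and returned structure below are hypothetical stand-ins for the router's internal state, shown only to illustrate the lookup that SCSI router 105 performs between header stripping and FC repackaging.

```python
# Hypothetical mapping table: (logical target, iSCSI LUN) -> (physical address, LUN).
ISCSI_TO_PHYSICAL = {
    ("Database", 9): ("Loop ID 070", 12),
}

def route_scsi_request(logical_target: str, iscsi_lun: int, cdb: bytes) -> dict:
    """Map a logical iSCSI target/LUN to its physical address and LUN.

    The TCP/IP headers are assumed to have been stripped already; a real
    router would next add FCP/FC headers and send on the FC interface.
    """
    try:
        physical_addr, physical_lun = ISCSI_TO_PHYSICAL[(logical_target, iscsi_lun)]
    except KeyError:
        raise LookupError(f"no mapping for {logical_target}/LUN {iscsi_lun}")
    return {"address": physical_addr, "lun": physical_lun, "cdb": cdb}

# A READ(10) command to logical target "Database", LUN 9 is redirected
# to LUN 12 behind Loop ID 070.
routed = route_scsi_request("Database", 9, bytes([0x28, 0, 0, 0, 0, 0, 0, 0, 1, 0]))
assert (routed["address"], routed["lun"]) == ("Loop ID 070", 12)
```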
In some embodiments, each server 127, 128 that requires IP access to storage 140 via the storage router 110 must have an iSCSI driver, such as the Cisco Storage Networking iSCSI driver, installed. One such embodiment is shown in
As noted above, one disadvantage of systems for accessing SANs over IP networks is the lack of security. In contrast, security in system 100 takes advantage of the many mechanisms available for security services in IP networks. With existing SAN security, SSPs often have to allocate separate storage resources to each customer. In addition, the SSP has to worry about the segregation and privacy of the customer's data as it crosses the SSP's shared fiber optic infrastructure. Concepts like virtual private networks, encryption, authentication, and access control do not exist in SANs. All of these concepts, however, are present in IP networks. By encapsulating SCSI over IP, the years of development of security in IP networks becomes instantly available to storage networks and to the storage service providers, allowing them to ensure access control to storage and the privacy of data on their shared infrastructure.
As noted above, today almost all client access to storage is accomplished by sending the requests through general-purpose servers that connect the IP networks (LAN, WAN, etc.) to the storage networks (SAN). With storage router 110 and a SCSI/IP driver in the client, the general-purpose server is unnecessary. Eliminating this server allows for the rapid growth of storage service providers, companies who want to provide storage access across the Internet, and large enterprise customers who want to allocate storage resources by application, by department or by division.
In one embodiment, storage router 110 provides IPv4 router functionality between a single Gigabit Ethernet and a Fibre Channel interface. In one such embodiment, static routes are supported. In addition, storage router 110 supports a configurable MTU size for each interface, and has the ability to reassemble and refragment IP packets based on the MTU of the destination interface.
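The refragmentation behavior can be sketched as below. The sketch assumes a fixed 20-byte IPv4 header and shows only the sizing arithmetic; real IPv4 fragmentation also copies header fields and sets fragment offsets and the more-fragments flag.

```python
def refragment(payload: bytes, dest_mtu: int, header_len: int = 20) -> list:
    """Split a reassembled IP payload to fit the destination interface MTU.

    Fragment data sizes are rounded down to 8-byte units, as IPv4 requires.
    """
    max_data = (dest_mtu - header_len) // 8 * 8
    if max_data <= 0:
        raise ValueError("MTU too small for an IP header")
    return [payload[i:i + max_data] for i in range(0, len(payload), max_data)]

# A 4000-byte payload forwarded onto an interface with a 1500-byte MTU
# leaves as three fragments.
fragments = refragment(b"\x00" * 4000, dest_mtu=1500)
assert [len(f) for f in fragments] == [1480, 1480, 1040]
```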
In one embodiment, storage router 110 acts as a gateway, converting SCSI protocol between Fibre Channel and TCP/IP. Storage router 110 is configured in such an embodiment to present Fibre Channel devices as iSCSI targets, providing the ability for clients on the IP network to directly access storage devices.
The SCSI Router
In one embodiment, SCSI routing occurs in the Storage Router 110 through the mapping of physical storage devices to iSCSI targets. An iSCSI target (also called logical target) is an arbitrary name for a group of physical storage devices. You can map an iSCSI target to multiple physical devices. An iSCSI target always contains at least one Logical Unit Number (LUN). Each LUN on an iSCSI target is mapped to a single LUN on a physical storage target.
In one such embodiment, you can choose either of two types of storage mapping: target-and-LUN mapping or target-only mapping. Target-and-LUN mapping maps an iSCSI target and LUN combination to a physical storage target and LUN combination. Target-only mapping maps an iSCSI target to a physical storage target and its LUNs.
With target-and-LUN mapping, an iSCSI target name and iSCSI LUN number are specified and mapped to the physical storage address of one LUN. This mapping can take the form of a Loop ID+LUN combination, a WWPN+LUN combination, or a WWNN. If the LUN is available, it is made available as an iSCSI LUN and numbered with the iSCSI LUN number specified.
For example, if an iSCSI target and iSCSI LUN specified as Database, LUN 9 were mapped to the physical storage address, Loop ID 070, LUN 12, then LUN 12 of the device identified as Loop ID 070 would be available as one iSCSI LUN. An iSCSI driver would see the iSCSI target named Database, with one iSCSI LUN identified as LUN 9. The iSCSI LUN would appear as one storage device to a server. (See Table 1 below.)
With target-only mapping, an iSCSI target name is specified and mapped to the physical storage address of a storage controller only. This mapping can take the form of either a Loop ID or a WWPN. Any LUNs that are available in the storage controller are made available as iSCSI LUNs and are numbered the same as the LUNs in the storage controller.
For example, if an iSCSI target specified as Webserver2000 were mapped to the physical storage address Loop ID 050, and LUNs 1 through 3 were available in that controller, those LUNs would become available as three iSCSI LUNs. An iSCSI driver would see the iSCSI target named Webserver2000 as a controller with three iSCSI LUNs identified as LUN 1, LUN 2, and LUN 3. Each iSCSI LUN would appear as a separate storage device to a server. (See Table 2 below.)
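The two mapping types can be summarized in a short Python sketch that mirrors the Database and Webserver2000 examples above. The table layouts and the resolve function are hypothetical illustrations of the lookup, not the router's actual data structures.

```python
# Target-and-LUN mapping: one iSCSI target/LUN pair -> one physical LUN.
TARGET_AND_LUN = {
    ("Database", 9): ("Loop ID 070", 12),
}
# Target-only mapping: an iSCSI target -> a physical storage controller.
TARGET_ONLY = {
    "Webserver2000": "Loop ID 050",
}
# LUNs reported by each controller (used for target-only pass-through).
CONTROLLER_LUNS = {"Loop ID 050": [1, 2, 3]}

def resolve(iscsi_target: str, iscsi_lun: int):
    """Resolve an iSCSI target/LUN to a (physical address, physical LUN) pair."""
    if (iscsi_target, iscsi_lun) in TARGET_AND_LUN:
        return TARGET_AND_LUN[(iscsi_target, iscsi_lun)]
    if iscsi_target in TARGET_ONLY:
        controller = TARGET_ONLY[iscsi_target]
        if iscsi_lun in CONTROLLER_LUNS[controller]:
            # Target-only: LUN numbers pass through unchanged.
            return controller, iscsi_lun
    raise LookupError(f"{iscsi_target}/LUN {iscsi_lun} is not mapped")

assert resolve("Database", 9) == ("Loop ID 070", 12)      # Table 1 example
assert resolve("Webserver2000", 2) == ("Loop ID 050", 2)  # Table 2 example
```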
Access for SCSI routing is controlled in computers 127, 128 and in storage router 110. In computer 127, for instance, the IP address of each storage router 110 with which computer 127 is to transport SCSI requests and responses is configured in the iSCSI driver. In storage router 110, an access list identifies which computers 127, 128 can access storage devices attached to it.
Once the access is configured in computers 127, 128 and in storage router 110, and once the storage mapping is configured in storage router 110, storage router 110 routes SCSI requests and responses between servers 127, 128 and the mapped storage devices 140. The concept of storage mapping and access control is illustrated in
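A minimal sketch of the access check made by storage router 110 is shown below; the IP addresses and the shape of the access list are hypothetical.

```python
# Hypothetical access list: which server IP may reach which iSCSI targets.
ACCESS_LIST = {
    "10.1.0.27": {"Database"},        # e.g., computer 127
    "10.1.0.28": {"Webserver2000"},   # e.g., computer 128
}

def check_access(source_ip: str, iscsi_target: str) -> bool:
    """Return True if the requesting server may address the logical target."""
    return iscsi_target in ACCESS_LIST.get(source_ip, set())

assert check_access("10.1.0.27", "Database")
assert not check_access("10.1.0.28", "Database")  # request would be refused
```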
In
The system 100 illustrated in
In the example shown in
An example of iSCSI routing according to the present invention is illustrated in
Creating SCSI routing services consists of creating and naming a base set of SCSI routing services. Table 5 illustrates one method of creating SCSI routing services.
In one embodiment, it is possible to define up to four instances on a single storage router 110 or across a cluster of routers 110.
Configuring a server interface consists of identifying which SCSI routing service instances to add to the server interface, identifying the server interface name, and assigning an IP address to the server interface. Table 6 illustrates one method of configuring a server interface for an instance of SCSI routing services.
Configuring a device interface consists of specifying which SCSI routing service instances to add to the device interface and the device interface name and topology. Table 7 illustrates one method of configuring a device interface for an instance of SCSI routing services.
Once the device interface is added, the SCSI routing service instance becomes active.
Configuring iSCSI targets 140 consists of specifying the SCSI routing services to which the iSCSI target is to be added, specifying an iSCSI target, and mapping the iSCSI target to a physical storage device 140. When adding an iSCSI target, you can specify the physical storage device 140 either by physical storage address or by an index number assigned to the device. Some representative addressing modes are shown in
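The configuration sequence of Tables 5 through 7 can be summarized in the following sketch. The class and method names are hypothetical; they merely mirror the order of the tasks: create and name the service, add a server interface, add a device interface (at which point the instance becomes active), and map iSCSI targets.

```python
class ScsiRoutingInstance:
    """Sketch of one SCSI routing service instance being configured."""

    def __init__(self, name: str):
        self.name = name
        self.server_interface = None   # (interface name, IP address)
        self.device_interface = None   # (interface name, topology)
        self.targets = {}              # iSCSI target -> physical mapping
        self.active = False

    def add_server_interface(self, ifname: str, ip: str):
        self.server_interface = (ifname, ip)

    def add_device_interface(self, ifname: str, topology: str):
        self.device_interface = (ifname, topology)
        self.active = True             # instance becomes active at this step

    def add_target(self, iscsi_target: str, physical_address: str):
        self.targets[iscsi_target] = physical_address

# Hypothetical interface names and address, for illustration only.
inst = ScsiRoutingInstance("svc1")
inst.add_server_interface("ge1", "10.1.0.45")
inst.add_device_interface("fc1", "loop")
inst.add_target("Webserver2000", "Loop ID 050")
assert inst.active
```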
High Availability Applications
One can configure a plurality of storage routers 110 in a cluster 300 to allow the storage routers 110 to back each other up in case of failure. A storage router cluster 300 includes, in some embodiments, two configured storage routers 110 connected as follows:
In one embodiment, storage routers 110 within a cluster 300 continually exchange HA information to propagate configuration data to each other and to detect failures in the cluster. In one such embodiment (such as is shown in
In one embodiment, each cluster 300 supports up to four active SCSI routing service instances. In one such embodiment, at any given time, a SCSI routing service instance can run on only one storage router 110 in a cluster 300. The SCSI routing service instance continues running on the storage router 110 where it was started until it is explicitly stopped or failed over to another storage router 110 in the cluster 300, or automatically fails over to another storage router 110 because an interface is unavailable or another software or hardware problem occurs.
In one embodiment, each storage router 110 in cluster 300 can run up to four SCSI routing service instances. For example, if one storage router is already running two SCSI routing service instances, it is eligible to run up to two additional SCSI routing service instances.
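The two limits (four instances per cluster and four per router) can be captured in a small sketch; the router names and the cluster representation are hypothetical.

```python
MAX_INSTANCES = 4   # per cluster and per storage router, as described above

def eligible_routers(cluster):
    """Return the routers that may accept another SCSI routing service instance.

    `cluster` maps a router name to the list of instances it currently runs;
    an instance runs on exactly one router at a time.
    """
    total = sum(len(instances) for instances in cluster.values())
    if total >= MAX_INSTANCES:
        return []                     # the cluster is already at capacity
    return [r for r, instances in cluster.items() if len(instances) < MAX_INSTANCES]

# Two instances run cluster-wide, so up to two more may be started, and
# either router may host them.
cluster = {"routerA": ["svc1", "svc2"], "routerB": []}
assert eligible_routers(cluster) == ["routerA", "routerB"]
```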
One example of configuring management parameters within router 110 is given in Table 8. In the example provided in Table 8, configuring management parameters includes tasks such as setting the system name, IP address and mask, gateway, and DNS servers.
One example of configuring network management access within router 110 is given in Table 9. In the example provided in Table 9, configuring network management access consists of tasks for SNMP.
When the storage router 110 is part of a storage router cluster 300, you will need to configure the high availability (HA) interface. In one embodiment, Table 10 can be used to configure the HA interface parameters.
In one embodiment, completing step 4 in Table 10 will cause the storage router 110 to reboot.
In one embodiment, one of the storage routers 110 operates in master mode and another operates in slave mode within cluster 300. In one such embodiment, each router 110 is able to handle multiple application instances. Each router 110 has at least one state machine in the Null State at all times, and that state machine is waiting to discover new application instances within the other nodes of the network. This state machine is referred to as an “idle state machine,” indicating that it is idling until a new application instance is discovered. Such an approach is described in application Ser. No. 10/122,401, filed Apr. 11, 2002, entitled “METHOD AND APPARATUS FOR SUPPORTING COMMUNICATIONS BETWEEN NODES OPERATING IN A MASTER-SLAVE CONFIGURATION”, which is a continuation of application Ser. No. 09/949,182, filed Sep. 7, 2001, entitled “METHOD AND APPARATUS FOR SUPPORTING COMMUNICATIONS BETWEEN NODES OPERATING IN A MASTER-SLAVE CONFIGURATION”, the description of which is incorporated herein by reference.
In one such embodiment, each of the storage routers 110 exchanges heartbeat information. Such an approach is described in application Ser. No. 10/094,552, filed Mar. 7, 2002, entitled “METHOD AND APPARATUS FOR EXCHANGING HEARTBEAT MESSAGES AND CONFIGURATION INFORMATION BETWEEN NODES OPERATING IN A MASTER-SLAVE CONFIGURATION”.
The inclusion of the idle state machine in this embodiment provides an advantage over previous approaches. Previous approaches assume that only one type of application instance exists within the node and within the other networked nodes (i.e., a time synchronization application). Accordingly, those approaches promptly enter either the master state or the slave state upon initiation of the application, and only one master or slave state machine is maintained by a router 110 at any one time. Such an approach, therefore, is incapable of managing multiple application instances on the nodes, or of listening for new application instances on the network.
In contrast, the approach described above always has one or more state machines in the Null State, and so it can provide a new state machine whenever a new application instance is started in router 110 or is discovered in another router 110 through the receipt of a MasterAck or Heartbeat message from that other router 110.
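The idle-state-machine behavior can be sketched as follows. The class layout and message handling are simplified assumptions; the cited applications define the actual protocol, including the MasterAck and Heartbeat exchanges.

```python
from enum import Enum, auto

class State(Enum):
    NULL = auto()     # idling, listening for new application instances
    MASTER = auto()
    SLAVE = auto()

class Node:
    """One cluster node keeping an idle state machine at all times."""

    def __init__(self):
        # One state machine per known application instance, plus one idle
        # machine that always remains in the Null State.
        self.machines = {"<idle>": State.NULL}

    def on_instance(self, app: str, master_answered: bool):
        """A new instance was started locally or discovered via a peer message."""
        # The idle machine adopts the instance: become slave if a master
        # already answered (MasterAck/Heartbeat seen), else claim mastership.
        self.machines[app] = State.SLAVE if master_answered else State.MASTER
        # Spawn a fresh idle machine so the node keeps listening.
        self.machines["<idle>"] = State.NULL

node = Node()
node.on_instance("scsi-routing-svc1", master_answered=True)
assert node.machines["scsi-routing-svc1"] is State.SLAVE
assert node.machines["<idle>"] is State.NULL   # still listening for more
```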
In addition, high-availability is enhanced in storage router 110 by providing multiple pathways between storage routers 110 (such as is shown in networks 302 and 306 in
Application Ser. No. 10/131,275, filed even date herewith, entitled “METHOD AND APPARATUS FOR CONFIGURING NODES AS MASTERS OR SLAVES” and application Ser. No. 10/131,274, filed even date herewith, entitled “METHOD AND APPARATUS FOR TERMINATING APPLICATIONS IN A HIGH-AVAILABILITY NETWORK”, also contain information relevant to configuring storage routers 110 within a high availability cluster 300. Their descriptions are incorporated herein by reference.
In one embodiment, respective sessions are created between a respective host (from among hosts 127 through 128) and a particular iSCSI target (from among targets 310 through 311). SCSI routing occurs in storage router 110 through the mapping between physical storage devices (or LUNs located on physical devices) and iSCSI targets (310-311). An iSCSI target (e.g., 310, also called logical target 310) is an arbitrary name or value for a group of one or more physical storage devices. One can map a single iSCSI target to multiple physical devices. An iSCSI target always includes or contains at least one Logical Unit Number (LUN). Each LUN on an iSCSI target is mapped to a single LUN on a physical storage target.
In one embodiment, SCSI router 105 includes one or more instances 114, one for each iSCSI target 310-311. Each instance 114 uses the respective mapping 318 to convert the iSCSI address to the physical address used to access a particular LUN 141-142. In some embodiments, a configuration manager application 320 uses one or more access lists 322 to control access to particular LUNs, i.e., to check that the particular source computer 127-128 has authorization to access the particular LUN 141-142 on one particular target 140.
The storage network 149, in some embodiments, is implemented as a fibre-channel loop 148 as shown in
In one embodiment, one can choose between two types of storage mapping: target-and-LUN mapping 314 or target-only mapping 312. As described above, target-and-LUN mapping 314 maps an iSCSI-target-and-LUN combination to a physical storage target-and-LUN combination. Target-only mapping maps an iSCSI target to a physical storage target and its associated LUNs.
In one embodiment, SCSI router 105 includes two or more virtual SCSI routers 114. Each virtual SCSI router 114 is associated with one or more IP sessions. Such an embodiment is described in “VIRTUAL SCSI BUS FOR SCSI-BASED STORAGE AREA NETWORK”, U.S. patent application Ser. No. 10/131,793, filed herewith, the description of which is incorporated herein by reference.
In one embodiment, each interface 104 performs TCP connection checking on iSCSI traffic. TCP connection checking is described in “METHOD AND APPARATUS FOR ASSOCIATING AN IP ADDRESS AND INTERFACE TO A SCSI ROUTING INSTANCE”, U.S. patent application Ser. No. 10/131,789, now U.S. Pat. No. 6,895,461, issued on May 17, 2005, filed herewith, the description of which is incorporated herein by reference.
As noted above, SCSI routing occurs in the Storage Router 110 through the mapping of physical storage devices to iSCSI targets. An iSCSI target (also called a logical target) is an arbitrary name for a group of physical storage devices. You can map an iSCSI target to multiple physical devices. An iSCSI target always contains at least one Logical Unit Number (LUN). Each LUN on an iSCSI target is mapped to a single LUN on a physical storage target.
Configuration module 320 operates to configure various aspects of storage router 110, including the mappings described above. In addition, configuration module 320 may be used to configure communications with storage network 139 and IP network 129.
In some embodiments, the configuration data may be supplied through a command interpreter. Such a command interpreter is described in “SYSTEM AND METHOD FOR CONFIGURING FIBRE-CHANNEL DEVICES”, U.S. patent application Ser. No. 10/128,655, filed herewith, now U.S. Pat. No. 7,200,610, issued on Apr. 3, 2007, the description of which is incorporated herein by reference.
In one embodiment, the command interpreter is command line based. However, the invention is not limited to any particular form of command interpreter, and in alternative embodiments of the invention, the command interpreter may include a graphical user interface.
Database 318 includes information regarding devices on the storage area network 139. Database 322 includes one or more access lists as described above. In one embodiment, databases 318 and 322 are in-memory databases comprising one or more structures containing device data. For example, databases 318 and 322 may comprise a table, an array, a linked list of entries, or any combination thereof. Additionally, databases 318 and 322 may comprise one or more files on a file system. Furthermore, either of databases 318 and 322 may comprise a relational database management system. The invention is not limited to any particular database type or combination of database types. Databases 318 and 322 may exist as two or more databases. In one embodiment, databases 318 and 322 are combined in a single database.
Port database 210 comprises a set of fields providing information about ports in a network, including storage area networks. In some embodiments, port database 210 includes one or more entries 212 having a set of fields. In some embodiments, the fields in port database 210 include a port index, a port WWPN, and a LUN list. The port index uniquely identifies an entry in port database 210. In some embodiments, the port index can be inferred by the position of the entry in the table, and need not be physically present. The port WWPN field contains data specifying the WWPN for the port. The LUN list field contains data that identifies the LUNs associated with the port. In some embodiments, the LUN list field is a link (i.e., a pointer) to a linked list of LUN database entries. However, the invention is not limited to any particular representation for the LUN list field, and in alternative embodiments the LUN list field may be a table or array of LUN list entries.
LUN database 220 comprises a set of fields that provide information about LUNs in a network. Typically the LUNs will be associated with a port. In some embodiments, the LUN database comprises a linked list of entries 222. In some embodiments, the fields in LUN database 220 include a LUN field, a WWNN field, and a next LUN link. The LUN field contains data identifying the LUN. The WWNN field contains the WWNN associated with the LUN. The next LUN link comprises data identifying the next LUN in a list of LUNs.
Some embodiments of the invention include an alternative path database 202. Alternative path database 202 comprises one or more entries 204 that define paths to targets available in a storage network. In some embodiments, the fields in an entry 204 include a target ID, a primary WWPN, and a secondary WWPN. The target ID identifies a particular target in a storage area network. The primary WWPN field contains data identifying the primary WWPN, that is, the WWPN that the system will attempt to use first when communicating with the target. The secondary WWPN contains data identifying the secondary WWPN for the target. The system will use the secondary WWPN to communicate with the target if the primary WWPN is not available.
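The three databases described above can be pictured with the following sketch; the field names follow the description, while the Python types themselves are of course illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LunEntry:                      # one node in the linked list of LUNs
    lun: int
    wwnn: str                        # WWNN associated with this LUN
    next_lun: Optional["LunEntry"] = None

@dataclass
class PortEntry:                     # one entry 212 in port database 210
    port_index: int                  # may also be implied by table position
    wwpn: str                        # world wide port name
    lun_list: Optional[LunEntry] = None

@dataclass
class AltPathEntry:                  # one entry 204 in alternative path database 202
    target_id: int
    primary_wwpn: str                # tried first when reaching the target
    secondary_wwpn: str              # used if the primary is unavailable

# A port with two LUNs chained as a linked list.
lun1 = LunEntry(lun=1, wwnn="20:00:00:11:22:33:44:55")
lun0 = LunEntry(lun=0, wwnn="20:00:00:11:22:33:44:55", next_lun=lun1)
port = PortEntry(port_index=0, wwpn="21:00:00:11:22:33:44:55", lun_list=lun0)
```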
In some embodiments, a discovery process is used to provide data for some portions of database 318. The discovery process comprises logic to determine the devices 140 that are communicably coupled to a storage network 139. Several different events may trigger the discovery process. For example, the discovery process may execute when the system is initialized, when the system is reset, when a new device is added to the storage network, or when a device on the storage network changes state. The discovery logic may be executed in firmware, or it may be executed in software, for example, in a device driver. As those of skill in the art will appreciate, the discovery process will differ depending on the type of storage network 139 coupled to storage router 110.
An exemplary discovery process for a fibre-channel based storage network used in some embodiments of the invention will now be described. In some embodiments, discovery comprises two main steps, port discovery and device discovery. Port discovery determines the target and/or initiator ports on the fibre-channel, and device discovery determines the LUNs (Logical Unit Numbers) on each target port.
As is known in the art, fibre-channel networks may exist in a number of different network topologies. Examples of such network topologies include private loops, public loops, or fabrics. The port discovery process in different embodiments of the invention may vary according to the network topology.
In loop-based topologies, such as private or public loops, the discovery process in some embodiments of the invention acquires a loop map. The loop map is typically created during low-level loop initialization. In some embodiments, the loop map comprises an ALPA (Arbitrated Loop Physical Address) map. For each port in the loop map, the discovery process populates various fields of the port database. In some embodiments, these fields include the world wide port name (WWPN), the ALPA/loop id, and the port role (e.g. target and/or initiator). If the loop is a private loop, the port discovery process is generally complete when each port in the loop map has been processed. If the loop is a public loop, port discovery continues with the discovery of devices connected to the fabric.
In fabric-based topologies, the discovery process communicates with a fabric directory server (also referred to as a name server) and obtains a list of all devices known to the fabric switch. In some embodiments, a series of “Get All Next” (GA_NXT) extended link service commands are issued to the storage network to obtain the list. The directory server responds with the port identifier (portId) and WWPN for the port. This data may then be used to populate various fields of the port database 210.
In some embodiments, after port discovery has discovered ports on the storage network, device discovery identifies devices on each port. In some embodiments, for each port found during port discovery that is a target device, a “Report LUNs” SCSI command is issued to LUN 0 on the port. If the device supports the command, the device returns a list of LUNs on the port. If the device does not support the command, the discovery process of some embodiments builds a local list of LUNs comprising LUN 0 to LUN 255.
For each LUN in the list, the discovery process issues one or more SCSI inquiry commands. These commands and the returned data include the following:
The data returned by the above-described commands is then used to populate corresponding fields in the LUN database 220.
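Device discovery as described above can be sketched as follows. The two callables stand in for whatever transport actually issues the SCSI commands; they, and the dictionary they return, are hypothetical.

```python
def discover_luns(port, issue_report_luns, issue_inquiry):
    """Find the LUNs behind one target port and gather their inquiry data."""
    luns = issue_report_luns(port, lun=0)        # "Report LUNs" sent to LUN 0
    if luns is None:
        # Device does not support the command: probe a default local list.
        luns = list(range(256))                  # LUN 0 through LUN 255
    entries = []
    for lun in luns:
        data = issue_inquiry(port, lun)          # SCSI inquiry per LUN
        if data is not None:
            entries.append({"lun": lun, "inquiry": data})
    return entries                               # used to fill LUN database 220

# Fake transport for illustration: one target that reports LUNs 0-2.
report = lambda port, lun: [0, 1, 2]
inquiry = lambda port, lun: {"vendor": "ACME"}
assert len(discover_luns("wwpn-1", report, inquiry)) == 3
```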
It should be noted that while the exemplary environment has been described in terms of a storage router, the present invention may be implemented in any type of network element, including IP routers, switches, hubs and/or gateways.
Applications
Applications of computer system 100 will be discussed next. For instance, by using system 100, a Storage Service Provider (SSP) is able to immediately deploy new storage services at lower costs. Moving storage over the IP infrastructure also allows the SSP to offer customers secure (encrypted) access to storage at price points not possible with today's storage products.
As noted above, customers outsource their storage to a storage service provider (SSP), who manages their storage needs for a predetermined fee. A typical application would use a distributed Fibre Channel (FC) network to connect an IP network to FC devices located at either a local or a remote site. In this example, the SSP provides the entire storage infrastructure on the customer's premises. While Fibre Channel has numerous advantages, it lacks network management tools and is significantly higher priced than comparable Ethernet products. Most importantly, due to lack of network security, the SSP must create a separate Storage Area Network (SAN) for each customer at the SSP to separate the data of multiple customers.
In contrast, system 100 (as illustrated in
In another application, the Application/Internet Service Provider (ASP/ISP) is able to centralize Web server storage using system 100. Centralization using system 100 dramatically lowers the cost of storage for Web servers and provides a means of backing up real-time data over IP.
Finally, enterprise customers gain significant cost savings in deploying storage over IP by leveraging their installed IP infrastructure. As storage becomes universally accessible using IP, local applications will also be able to be shared globally, greatly simplifying the task of managing storage. Mirroring and off-site backup of data over the IP infrastructure is expected to be an important application.
Systems, methods and apparatus to integrate IP network routing and SCSI data storage have been described. Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement which is calculated to achieve the same purpose may be substituted for the specific embodiments shown. This application is intended to cover any adaptations or variations of the present invention. For example, although described in procedural terms, one of ordinary skill in the art will appreciate that the invention can be implemented in an object-oriented design environment or any other design environment that provides the required relationships.
In the above discussion and in the attached appendices, the term “computer” is defined to include any digital or analog data processing unit. Examples include any personal computer, workstation, set top box, mainframe, server, supercomputer, laptop or personal digital assistant capable of embodying the inventions described herein.
Examples of articles comprising computer readable media are floppy disks, hard drives, CD-ROM or DVD media or any other read-write or read-only memory device.
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiment shown. This application is intended to cover any adaptations or variations of the present invention. Therefore, it is intended that this invention be limited only by the claims and the equivalents thereof.
This application is a continuation of U.S. application Ser. No. 10/128,656, filed Apr. 22, 2002, now U.S. Pat. No. 7,165,258, issued on Jan. 16, 2007, entitled “SCSI-BASED STORAGE AREA NETWORK HAVING A SCSI ROUTER THAT ROUTES TRAFFIC BETWEEN SCSI AND IP NETWORKS”, which is related to the following co-pending, commonly assigned U.S. patent applications: application Ser. No. 10/122,401, filed Apr. 11, 2002, entitled “METHOD AND APPARATUS FOR SUPPORTING COMMUNICATIONS BETWEEN NODES OPERATING IN A MASTER-SLAVE CONFIGURATION”, which is a continuation of application Ser. No. 09/949,182, filed Sep. 7, 2001, entitled “METHOD AND APPARATUS FOR SUPPORTING COMMUNICATIONS BETWEEN NODES OPERATING IN A MASTER-SLAVE CONFIGURATION”; application Ser. No. 10/094,552, filed Mar. 7, 2002, entitled “METHOD AND APPARATUS FOR EXCHANGING HEARTBEAT MESSAGES AND CONFIGURATION INFORMATION BETWEEN NODES OPERATING IN A MASTER-SLAVE CONFIGURATION”; application Ser. No. 10/131,275, filed even date herewith, entitled “METHOD AND APPARATUS FOR CONFIGURING NODES AS MASTERS OR SLAVES”; application Ser. No. 10/131,274, filed even date herewith, entitled “METHOD AND APPARATUS FOR TERMINATING APPLICATIONS IN A HIGH-AVAILABILITY NETWORK”; application Ser. No. 10/131,793, filed even date herewith, entitled “VIRTUAL SCSI BUS FOR SCSI-BASED STORAGE AREA NETWORK”; application Ser. No. 10/131,782, filed even date herewith, entitled “VIRTUAL MAC ADDRESS SYSTEM AND METHOD”; application Ser. No. 10/128,655, filed even date herewith, entitled “SYSTEM AND METHOD FOR CONFIGURING FIBRE-CHANNEL DEVICES”; application Ser. No. 10/131,789, filed even date herewith, now U.S. Pat. No. 6,895,461, issued on May 17, 2005, entitled “METHOD AND APPARATUS FOR ASSOCIATING AN IP ADDRESS AND INTERFACE TO A SCSI ROUTING INSTANCE”; application Ser. No. 10/128,657, filed even date herewith, entitled “METHOD AND APPARATUS FOR EXCHANGING CONFIGURATION INFORMATION BETWEEN NODES OPERATING IN A MASTER-SLAVE CONFIGURATION”; and application Ser. No. 10/128,993, filed even date herewith, now U.S. Pat. No. 7,188,194, issued on Mar. 6, 2007, entitled “SESSION-BASED TARGET/LUN MAPPING FOR A STORAGE AREA NETWORK AND ASSOCIATED METHOD”, all of which are hereby incorporated by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
4495617 | Ampulski et al. | Jan 1985 | A |
5390326 | Shah | Feb 1995 | A |
5461608 | Yoshiyama | Oct 1995 | A |
5473599 | Li et al. | Dec 1995 | A |
5535395 | Tipley et al. | Jul 1996 | A |
5544077 | Hershey | Aug 1996 | A |
5579491 | Jeffries et al. | Nov 1996 | A |
5600828 | Johnson et al. | Feb 1997 | A |
5666486 | Alfieri et al. | Sep 1997 | A |
5684957 | Kondo et al. | Nov 1997 | A |
5732206 | Mendel | Mar 1998 | A |
5812821 | Sugi et al. | Sep 1998 | A |
5870571 | Duburcq et al. | Feb 1999 | A |
5909544 | Anderson et al. | Jun 1999 | A |
5951683 | Yuuki et al. | Sep 1999 | A |
5991813 | Zarrow | Nov 1999 | A |
5996024 | Blumenau | Nov 1999 | A |
5996027 | Volk et al. | Nov 1999 | A |
6006224 | McComb et al. | Dec 1999 | A |
6006259 | Adelman et al. | Dec 1999 | A |
6009476 | Flory et al. | Dec 1999 | A |
6018765 | Durana et al. | Jan 2000 | A |
6041381 | Hoese | Mar 2000 | A |
6078957 | Adelman et al. | Jun 2000 | A |
6108300 | Coile et al. | Aug 2000 | A |
6108699 | Moiin | Aug 2000 | A |
6131119 | Fukui | Oct 2000 | A |
6134673 | Chrabaszcz | Oct 2000 | A |
6145019 | Firooz et al. | Nov 2000 | A |
6151684 | Alexander et al. | Nov 2000 | A |
6163855 | Shrivastava et al. | Dec 2000 | A |
6178445 | Dawkins et al. | Jan 2001 | B1 |
6185620 | Weber et al. | Feb 2001 | B1 |
6195687 | Greaves et al. | Feb 2001 | B1 |
6195760 | Chung et al. | Feb 2001 | B1 |
6209023 | Dimitroff et al. | Mar 2001 | B1 |
6219771 | Kikuchi et al. | Apr 2001 | B1 |
6260158 | Purcell et al. | Jul 2001 | B1 |
6268924 | Koppolu et al. | Jul 2001 | B1 |
6269396 | Shah et al. | Jul 2001 | B1 |
6314526 | Arendt et al. | Nov 2001 | B1 |
6327622 | Jindal et al. | Dec 2001 | B1 |
6343320 | Fairchild et al. | Jan 2002 | B1 |
6363416 | Naeimi et al. | Mar 2002 | B1 |
6378025 | Getty | Apr 2002 | B1 |
6393583 | Meth et al. | May 2002 | B1 |
6400730 | Latif et al. | Jun 2002 | B1 |
6449652 | Blumenau et al. | Sep 2002 | B1 |
6470382 | Wang et al. | Oct 2002 | B1 |
6470397 | Shah et al. | Oct 2002 | B1 |
6473803 | Stern et al. | Oct 2002 | B1 |
6480901 | Weber et al. | Nov 2002 | B1 |
6484245 | Sanada et al. | Nov 2002 | B1 |
6574755 | Seon | Jun 2003 | B1 |
6591310 | Johnson | Jul 2003 | B1 |
6597956 | Aziz et al. | Jul 2003 | B1 |
6606690 | Padovano | Aug 2003 | B2 |
6640278 | Nolan et al. | Oct 2003 | B1 |
6654830 | Taylor et al. | Nov 2003 | B1 |
6658459 | Kwan et al. | Dec 2003 | B1 |
6678721 | Bell | Jan 2004 | B1 |
6683883 | Czeiger et al. | Jan 2004 | B1 |
6691244 | Kampe et al. | Feb 2004 | B1 |
6697924 | Swank | Feb 2004 | B2 |
6701449 | Davis et al. | Mar 2004 | B1 |
6718361 | Basani et al. | Apr 2004 | B1 |
6721907 | Earl | Apr 2004 | B2 |
6724757 | Zadikian et al. | Apr 2004 | B1 |
6732170 | Miyake et al. | May 2004 | B2 |
6748550 | McBrearty et al. | Jun 2004 | B2 |
6757291 | Hu | Jun 2004 | B1 |
6760783 | Berry | Jul 2004 | B1 |
6763195 | Willebrand et al. | Jul 2004 | B1 |
6763419 | Hoese et al. | Jul 2004 | B2 |
6771663 | Jha | Aug 2004 | B1 |
6771673 | Baum et al. | Aug 2004 | B1 |
6779016 | Aziz et al. | Aug 2004 | B1 |
6785742 | Teow et al. | Aug 2004 | B1 |
6799316 | Aguilar et al. | Sep 2004 | B1 |
6807581 | Starr et al. | Oct 2004 | B1 |
6823418 | Langendorf et al. | Nov 2004 | B2 |
6839752 | Miller et al. | Jan 2005 | B1 |
6848007 | Reynolds et al. | Jan 2005 | B1 |
6856591 | Ma et al. | Feb 2005 | B1 |
6859462 | Mahoney et al. | Feb 2005 | B1 |
6877044 | Lo et al. | Apr 2005 | B2 |
6886171 | MacLeod | Apr 2005 | B2 |
6895461 | Thompson | May 2005 | B1 |
6920491 | Kim | Jul 2005 | B2 |
6938092 | Burns | Aug 2005 | B2 |
6941340 | Kim et al. | Sep 2005 | B2 |
6944785 | Gadir et al. | Sep 2005 | B2 |
6977927 | Bates et al. | Dec 2005 | B1 |
6985490 | Czeiger et al. | Jan 2006 | B2 |
7020696 | Perry et al. | Mar 2006 | B1 |
7043727 | Bennett et al. | May 2006 | B2 |
7107395 | Ofek et al. | Sep 2006 | B1 |
7139811 | Lev Ran et al. | Nov 2006 | B2 |
7146233 | Aziz et al. | Dec 2006 | B2 |
7171453 | Iwami | Jan 2007 | B2 |
7185062 | Lolayekar et al. | Feb 2007 | B2 |
7197561 | Lovy et al. | Mar 2007 | B1 |
7203746 | Harrop | Apr 2007 | B1 |
7231430 | Brownell et al. | Jun 2007 | B2 |
7281062 | Kuik et al. | Oct 2007 | B1 |
7353259 | Bakke et al. | Apr 2008 | B1 |
20020010750 | Baretzki | Jan 2002 | A1 |
20020042693 | Kampe et al. | Apr 2002 | A1 |
20020049845 | Sreenivasan et al. | Apr 2002 | A1 |
20020055978 | Joon-Bo et al. | May 2002 | A1 |
20020059392 | Ellis | May 2002 | A1 |
20020065864 | Hartsell et al. | May 2002 | A1 |
20020065872 | Genske et al. | May 2002 | A1 |
20020103889 | Markson et al. | Aug 2002 | A1 |
20020103943 | Lo et al. | Aug 2002 | A1 |
20020116460 | Treister et al. | Aug 2002 | A1 |
20020126680 | Inagaki et al. | Sep 2002 | A1 |
20020156612 | Schulter et al. | Oct 2002 | A1 |
20020188657 | Traversat et al. | Dec 2002 | A1 |
20020188711 | Meyer et al. | Dec 2002 | A1 |
20020194428 | Green | Dec 2002 | A1 |
20030005068 | Nickel et al. | Jan 2003 | A1 |
20030014462 | Bennett et al. | Jan 2003 | A1 |
20030018813 | Antes et al. | Jan 2003 | A1 |
20030018927 | Gadir et al. | Jan 2003 | A1 |
20030058870 | Mizrachi et al. | Mar 2003 | A1 |
20030084209 | Chadalapaka | May 2003 | A1 |
20030097607 | Bessire | May 2003 | A1 |
20030145074 | Penick | Jul 2003 | A1 |
20030145116 | Moroney et al. | Jul 2003 | A1 |
20030182422 | Bradshaw et al. | Sep 2003 | A1 |
20030182455 | Hetzler et al. | Sep 2003 | A1 |
20030204580 | Baldwin et al. | Oct 2003 | A1 |
20030208579 | Brady et al. | Nov 2003 | A1 |
20030210686 | Terrell et al. | Nov 2003 | A1 |
20030212898 | Steele et al. | Nov 2003 | A1 |
20030229690 | Kitani et al. | Dec 2003 | A1 |
20040010545 | Pandya | Jan 2004 | A1 |
20040024778 | Cheo | Feb 2004 | A1 |
20040030766 | Witkowski | Feb 2004 | A1 |
20040064553 | Kjellberg | Apr 2004 | A1 |
20040141468 | Christensen | Jul 2004 | A1 |
20040233910 | Chen et al. | Nov 2004 | A1 |
20050055418 | Blanc et al. | Mar 2005 | A1 |
20050063313 | Nanavati et al. | Mar 2005 | A1 |
20050268151 | Hunt et al. | Dec 2005 | A1 |
Number | Date | Country | |
---|---|---|---|
20070112931 A1 | May 2007 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10128656 | Apr 2002 | US |
Child | 11622436 | US |