1. Field of the Invention
The invention relates to a method and apparatus for receiving and transmitting data in a network, and more particularly, to a method for receiving and transmitting data between separate Fibre Channel fabrics.
2. Description of the Related Art
As computing power has increased over the years, the need for high-performance storage capacity has also increased. To this end, storage area networks (SANs) have been developed. Basically, a SAN is an interconnection network between a series of hosts or servers and a series of storage devices. The interconnection network is very high performance, allowing each of the servers to access each of the desired storage units without significant performance penalties. The use of SANs allows more optimal use of the available storage capacity than would otherwise be the case if the storage were directly attached to particular hosts.
The preferred interconnection network for SANs is Fibre Channel. This is a high-speed link system defined by a series of ANSI standards. In a Fibre Channel network, a series of fabrics, or inter-switch connections, is developed. Hosts are connected to the fabric, as are the storage units. The interconnected switches in the fabric then provide a path or route between the host and the storage unit. Thus the development of SANs has allowed very large increases in cost-effective storage capacity.
However, there are certain problems when developing networks using Fibre Channel switches. One problem is that there can only be 239 distinct domains in a Fibre Channel fabric. Further, there are many conditions under which the fabric will segment, or break into two fabrics, so that communication between devices on the two fabrics is not possible. For example, segmentation can be caused when certain parameters associated with particular switches are not set to the proper values. As the number of switches in the fabric grows larger, the chances of segmentation increase even further. In fact, in many cases it is not possible to maintain all of the desired switches in a single fabric. This hinders configuration of the network, because certain devices will not be allowed to access other devices in the unconnected fabric. Therefore, it is desirable to have a way to connect two fabrics so that devices can talk across the two fabrics without requiring that the fabrics be merged, while allowing the combination of the two fabrics to have a total of more than 239 domains.
Methods and devices according to the present invention provide an interfabric link between two separate Fibre Channel fabrics so that devices in one fabric can communicate with devices in the other fabric without requiring the merger of the two fabrics. Alternatively, two fabrics with a combined total of more than 239 domains can be created and devices can still communicate. An interfabric switch according to the present invention is connected and linked to each of the two separate fabrics. The interfabric switch then performs a conversion or translation of device addresses in each fabric so that they are accessible to the other fabric. For example, if a host is connected to fabric A and a storage unit is connected to fabric B, the interfabric switch according to the present invention provides an effective address in fabric A for the storage unit and an effective address in fabric B for the host. The interfabric switch then allows a link to be developed between the fabrics and transfers the data packets with the translated addresses over this link. Thus the host and storage unit can communicate as though they were in the same fabric, even though the two devices are in separate and distinct fabrics.
This translation is preferably done using public to private and then private to public loop address translations. Using this technique the address translation can be done at full wire speed, after initial setup, so that performance of the network is not hindered by the interfabric switch.
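As a rough illustration of this translation scheme (a sketch with assumed names, not the patented implementation), the mapping can be pictured as a pair of lookup tables relating full 24-bit fabric addresses to 8-bit private loop AL_PAs:

```python
# A minimal sketch of the idea: a 24-bit Fibre Channel address is
# domain | area | AL_PA, one byte each, while a private loop device
# is addressed by its 8-bit AL_PA alone.

class LoopTranslator:
    """Maps full fabric addresses to phantom private-loop AL_PAs and back."""

    def __init__(self) -> None:
        self.to_private: dict[int, int] = {}  # fabric address -> AL_PA
        self.to_public: dict[int, int] = {}   # AL_PA -> fabric address

    def add_phantom(self, fabric_addr: int, alpa: int) -> None:
        # e.g. add_phantom(0x010101, 0x02): the device at fabric address
        # 010101 appears on the private loop as phantom AL_PA 02.
        self.to_private[fabric_addr] = alpa
        self.to_public[alpa] = fabric_addr

    def outbound(self, fabric_addr: int) -> int:
        """Translate toward the loop (public to private)."""
        return self.to_private[fabric_addr]

    def inbound(self, alpa: int) -> int:
        """Translate toward the fabric (private to public)."""
        return self.to_public[alpa]
```

Because each entry is a straight table lookup in both directions, the per-frame substitution can proceed without recomputation once the tables are set up, which is consistent with the full wire speed operation described above.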
Two particular embodiments of the loop translation are illustrated. In a first embodiment the external ports of the interfabric switch are configured as E_ports. A series of internal ports in each interfabric switch are joined together, with the interfabric switch then having a series of virtual or logical switches. In the preferred embodiment connections from each of the two particular fabrics are provided to a different virtual switch. The internal ports forming the virtual switches are then interconnected using private loops. The use of the private loop in the internal connection is enabled by the presence of translation logic which converts fabric addresses to loop addresses and back so that loop and fabric devices can communicate. Because each port can do this translation and the private loop addressing does not include domain or area information, the change in addresses between the fabrics is simplified.
In a second embodiment the external ports are configured as NL_ports and the connections between the virtual switches are E_ports. Thus the private to public and public to private translations are done at the external ports rather than the internal ports as in the prior embodiment. The virtual switches in the interfabric switch match domains with their external counterparts so that the virtual switches effectively form their own fabric, connected to the other fabrics by the private loops.
The switch 102 includes various tables to perform the public to private and private to public address conversions. The switch 102 develops a phantom private loop address for the public device and a phantom public address for the private device, and maps the addresses between the public and private spaces. This mapping is shown in more detail in the accompanying figures.
It is also required that the host 100 appear as an addressable device to the storage unit 104 on the private loop 114. This is done by having the port 116 pick an open loop port address, in this example 02, and assign it to a phantom device representing the host 100. Thus the storage unit 104 addresses the phantom of the host 100 using an address of 02 on the loop 114. The FL_port 116 intercepts a communication addressed to 02 and converts that address to 010101, the full fabric address of the host 100. This address, and the similarly converted storage unit 104 address of 0105EF, are substituted in the packet, which is then provided to the port 110 for transmission to the host 100. When a communication is received from the host 100 at port 116, the address 0105EF is translated to the loop address EF and the address 010101 is translated to the loop address 02, so that the storage unit 104 can properly receive the frame.
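Using the example addresses above, a minimal sketch of the lookups at port 116 might look as follows (plain dictionaries stand in for the translation tables; all names are illustrative only):

```python
# Tables at FL_port 116; the example addresses come from the text above.
to_fabric = {0x02: 0x010101}    # phantom loop AL_PA 02 -> host 100 at 010101
to_loop   = {0x010101: 0x02}    # ...and the reverse direction

# Storage 104 (loop AL_PA EF, public address 0105EF) sends to loop 02;
# port 116 substitutes the host's full fabric address before forwarding.
d_id = to_fabric[0x02]          # 0x010101
s_id = 0x0105EF                 # domain 01, port 05, AL_PA EF

# A reply from host 100 is translated back toward the private loop.
assert to_loop[0x010101] == 0x02
```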
A detailed addressing example is shown in the accompanying figure.
The interfabric switch 120, as before, includes virtual switches 134 and 136. In the illustrated embodiment, the virtual switch 134 is assigned domain 5 in fabric 122, while virtual switch 136 becomes domain 6 in fabric 124. Virtual switches 134 and 136 are connected by ports 220 and 222, respectively, which are configured as private loop ports so that a loop 224 results. As described above, the workstation 206 and the tape unit 204 must then have phantom addresses on the private loop 224. In the illustrated embodiment, the address 04 is provided to the tape unit 204 and the address 02 is provided to the workstation 206.
Thus the workstation 206 addresses the tape drive 204 by providing a destination address of 050104, which is a full public loop address. The domain 05 indicates the virtual switch 134, and the port value 01 indicates the virtual port in the virtual switch 134, which in actuality is physical port 3 (208). The 04 is the phantom public address of the tape unit 204 as provided by the private loop translation. This address of 050104 is converted by the virtual loop port 220 to a loop address of 04. This loop address of 04 is in turn translated by the virtual loop port 222 to an address of 0205EF, the actual address of the tape unit 204 in fabric 124. This address arises because the tape unit 204 is connected to port 5 (218) of the switch 200, which is domain 2, and the tape unit 204 is preferably a public loop device with an actual loop address of EF, resulting in an address of 0205EF. For the tape unit 204 to address the workstation 206, an address of 060102 is used. This arises because the virtual switch 136 is in domain 6 and physical port 4 (212) is virtual port 1, indicating 060100 in fabric 124. Then, as the loop address of the workstation 206 on the virtual private loop 224 is 02, the workstation presents itself to the tape unit 204 as the public address 060102. This address of 060102 is converted by the virtual loop port 222 into a loop address of 02. Packets transmitted from the virtual loop port 222 to the virtual loop port 220 are then converted from this loop address of 02 to the desired address of 010101 for the workstation 206. A similar flow occurs for packets from the workstation 206 to the tape unit 204.
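The two-stage translation in this example can be checked with a small sketch, assuming one lookup table per virtual loop port (values taken from the example above; the table names are illustrative):

```python
# Fabric 122 side (virtual loop port 220) and fabric 124 side (virtual
# loop port 222); loop 224 carries only 8-bit AL_PAs between them.
port220_to_loop   = {0x050104: 0x04, 0x010101: 0x02}
port220_to_fabric = {0x02: 0x010101}
port222_to_loop   = {0x060102: 0x02}
port222_to_fabric = {0x04: 0x0205EF}

# Workstation 206 -> tape 204: 050104 shrinks to AL_PA 04 on loop 224,
# then expands to the tape's real fabric-124 address 0205EF.
assert port222_to_fabric[port220_to_loop[0x050104]] == 0x0205EF

# Tape 204 -> workstation 206: 060102 -> AL_PA 02 -> 010101.
assert port220_to_fabric[port222_to_loop[0x060102]] == 0x010101
```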
An alternative version of this illustration is shown in the accompanying figure.
The drawings and explanations of the preceding figures present a logical view of the interfabric switch 120; one physical implementation using interconnected miniswitches is described below.
To provide the necessary interconnections to represent the virtual switches in the interfabric switch 120, the miniswitches 414, 422, 428, and 434 are interconnected. Thus, four ports on miniswitch 414 are connected to four ports on miniswitch 422, four ports on miniswitch 414 are connected to four ports on miniswitch 434 and four ports on miniswitch 414 are connected to four ports on miniswitch 428. Similarly, four ports on miniswitch 422 are connected to four ports on miniswitch 428 and four ports on miniswitch 422 are connected to four ports on miniswitch 434. Finally, four ports on miniswitch 428 are connected to four ports on miniswitch 434. Thus, this provides a full direct interconnect among the four miniswitches 414, 422, 428, and 434. The various ports connected between the various miniswitches are configured to be private loop ports so that the miniswitches 414, 422, 428, and 434 provide the private to public translations as previously described. The external ports for the interfabric switch 120 are configured as E_ports in the miniswitches 414, 422, 428, and 434. It is also noted that each of the groups of four ports is preferably obtained from a quad in each of the miniswitches.
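For illustration, the full mesh described above can be counted out as follows (a sketch under the stated assumption of four ports per miniswitch pair):

```python
# Each pair of the four miniswitches is joined by four private-loop
# ports; six pairs exist, so each miniswitch devotes 12 ports to the mesh.
from itertools import combinations

miniswitches = [414, 422, 428, 434]            # reference numerals from the text
links = {pair: 4 for pair in combinations(miniswitches, 2)}

assert len(links) == 6                         # six point-to-point links
assert sum(links.values()) == 24               # 24 port-pairs, 48 ports total
ports_per_switch = 4 * (len(miniswitches) - 1) # 12 mesh ports per miniswitch
```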
A slightly different view of this arrangement, and details coordinating with the figures described above, are shown in further accompanying figures.
In previous embodiments it has been seen that the external ports of the interfabric switch 120 are configured in an E_port mode. In an alternative embodiment, shown in the accompanying figures, the external ports of an interfabric switch 700 are instead configured as NL_ports.
The NL_ports of the interfabric switch 700 are configured as having two addresses, one public and one private. The interfabric switch 700 uses the public address to log into the connected fabric and learn the address of the connected FL_port, which it then configures as its own address. The FL_port will also detect the private address by probing as described in U.S. Pat. No. 6,353,612, which is hereby incorporated by reference. The NL_port will then create a public-private translation for the private device. The FL_port will also develop a phantom address in the connected device, which the interfabric switch 700 will determine. This is done for each fabric, so the interfabric switch 700 ends up knowing all the device public-private address translations and has addresses for the connected ports in different domains.
The interfabric switch 700 then assigns public addresses for each of the phantom devices connected to each port based on the port address. The interfabric switch 700 then effectively separates the ports into virtual switches as described above, with the domain of each virtual switch defined by the public port address. The virtual switches thus effectively form their own fabric separated from the other fabrics by the loops. The virtual switches are connected by E_ports so no address translations are necessary and the public addresses of the phantom devices are used.
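One way to picture what the interfabric switch 700 accumulates for each attached fabric is a small record like the following (the field names are assumptions for illustration, not terminology from the document):

```python
from dataclasses import dataclass, field

@dataclass
class FabricAttachment:
    public_addr: int     # address used to log into the attached fabric
    private_addr: int    # private AL_PA detected by the FL_port's probing
    flport_addr: int     # learned address of the connected FL_port
    # private AL_PA -> phantom public address, one entry per device
    translations: dict[int, int] = field(default_factory=dict)
```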
In this mode the public to private translations occur between the interfabric switch 700 and the switches 702 and 704 instead of internal to the interfabric switch 700. The address mappings are shown in detail in the accompanying figure.
In a third variation, shown in the accompanying figures, two interfabric switches 802 and 804 are connected directly by an interfabric link (IFL) through ports configured as I_ports.
A host 130 is shown connected to port 1, an F_port, of interfabric switch 802, which is illustrated as being domain 3. A storage device 132 is similarly connected to port 1, an F_port, of interfabric switch 804, which is illustrated as being domain 4. Thus, the address of the host 130 is 030100 and that of the storage device 132 is 040100. The interfabric switch 804 presents the storage device 132 as a phantom storage device 132′ with a private address of 6. The interfabric switch 802 presents the host 130 as a phantom host 130′ with a private address of 5. The interfabric switch 804 translates this private address 5 to a public address of 040205, indicating connection to domain 4, port 2, device 5. Similarly, the interfabric switch 802 translates the private address 6 to a public address of 030206, indicating domain 3, port 2, device 6. Thus addressing by the various devices occurs as in the prior examples.
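The translated public addresses in this example compose a domain byte, a port (area) byte, and a device byte; a short sketch confirms the arithmetic (the helper name is illustrative):

```python
def public_addr(domain: int, port: int, device: int) -> int:
    # Compose a 24-bit Fibre Channel address: domain | area (port) | AL_PA.
    return (domain << 16) | (port << 8) | device

assert public_addr(0x04, 0x02, 0x05) == 0x040205  # phantom host 130'
assert public_addr(0x03, 0x02, 0x06) == 0x030206  # phantom storage 132'
```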
The I_ports must be defined as such at switch setup or initialization for proper operation. Further, messaging must occur between each of the interfabric switches 802 and 804 to confirm that they are connected through I_ports by an IFL. Additionally, each I_port will have to keep track of all allocated private addresses to prevent duplication. Ports not defined as I_ports would be initialized according to normal protocols. The interfabric switches 802 and 804 would then operate as normal switches, routing frames between ports as usual.
In this embodiment a V_FOS is not required, as there are no virtual switches, but the export/import list 518, address translation manager 512 and phantom ALPA manager 514 are still needed. This embodiment does have the possible disadvantage that it may be less clear for an administrator to use, as it will be more difficult to determine which ports are the I_ports, whereas in the prior embodiments all the ports perform the necessary functions.
In all of the above examples of interfabric switches, most fabric events must be suppressed so that they do not cross between the fabrics. Basically, the only messages passed are RSCNs for devices imported into the other fabric, as those devices come on line or go off line in their original fabric, and various SW_ILS frames as the switches initiate operations.
Additionally, certain frames must be captured for handling by the processor on each switch. One example is a PLOGI frame, so that the import and export tables can be checked and the SID or DID in the header changed if necessary. A second example is the various SW_ILS frames that include SID and DID values in their payload, so that the payload values can be changed. This trapping is done in a normal manner, such as the hardware trapping described in Ser. No. 10/123,996.
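A hedged sketch of the header rewrite applied to trapped frames such as PLOGI follows (the frame layout and helper names are assumptions; real SW_ILS handling would additionally rewrite the addresses carried in the payload):

```python
from dataclasses import dataclass

@dataclass
class FrameHeader:
    d_id: int   # 24-bit destination address (DID)
    s_id: int   # 24-bit source address (SID)

def rewrite_header(hdr: FrameHeader, table: dict[int, int]) -> FrameHeader:
    # Substitute any address found in the import/export translation table;
    # addresses without an entry pass through unchanged.
    return FrameHeader(d_id=table.get(hdr.d_id, hdr.d_id),
                       s_id=table.get(hdr.s_id, hdr.s_id))

# Example: a trapped PLOGI from host 010101 to imported address 050104.
plogi = rewrite_header(FrameHeader(d_id=0x050104, s_id=0x010101),
                       {0x050104: 0x0205EF})
assert plogi.d_id == 0x0205EF
```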
While illustrative embodiments of the invention have been illustrated and described, it will be appreciated that various changes can be made therein without departing from the spirit and scope of the invention.
This application is a continuation of U.S. application Ser. No. 10/356,392 filed Jan. 31, 2003, now U.S. Pat. No. 8,081,642, entitled “Method and Apparatus for Routing Between Fibre Channel Fabrics,” which is hereby incorporated by reference as though fully set forth herein.
Number | Name | Date | Kind |
---|---|---|---|
5363367 | Kobayashi et al. | Nov 1994 | A |
5400325 | Chatwani et al. | Mar 1995 | A |
6269404 | Hart et al. | Jul 2001 | B1 |
6285276 | Nedele et al. | Sep 2001 | B1 |
6339842 | Fernandez et al. | Jan 2002 | B1 |
6401128 | Stai et al. | Jun 2002 | B1 |
6470007 | Berman | Oct 2002 | B1 |
6529963 | Fredin et al. | Mar 2003 | B1 |
6532212 | Soloway et al. | Mar 2003 | B1 |
6608819 | Mitchem et al. | Aug 2003 | B1 |
6763417 | Paul et al. | Jul 2004 | B2 |
6834311 | Rao | Dec 2004 | B2 |
6879593 | Kunze et al. | Apr 2005 | B1 |
6941260 | Emberty et al. | Sep 2005 | B2 |
6985490 | Czeiger et al. | Jan 2006 | B2 |
7068651 | Schmidt et al. | Jun 2006 | B2 |
7103704 | Chatterjee | Sep 2006 | B2 |
7103711 | Valdevit | Sep 2006 | B2 |
7107347 | Cohen | Sep 2006 | B1 |
7120728 | Krakirian et al. | Oct 2006 | B2 |
7130303 | Hadzic | Oct 2006 | B2 |
7206288 | Cometto et al. | Apr 2007 | B2 |
7206314 | Liao et al. | Apr 2007 | B2 |
7236496 | Chung et al. | Jun 2007 | B2 |
7287116 | Iwami et al. | Oct 2007 | B2 |
7292567 | Terrell et al. | Nov 2007 | B2 |
7305069 | Day | Dec 2007 | B1 |
7340167 | McGlaughlin | Mar 2008 | B2 |
7385982 | Warden et al. | Jun 2008 | B2 |
7533256 | Walter et al. | May 2009 | B2 |
7542676 | McGlaughlin | Jun 2009 | B2 |
7577134 | Gopal Gowda et al. | Aug 2009 | B2 |
7606167 | DeSanti et al. | Oct 2009 | B1 |
7616637 | Lee et al. | Nov 2009 | B1 |
7936769 | Chung et al. | May 2011 | B2 |
8055794 | Shanbhag et al. | Nov 2011 | B2 |
8081642 | Del Signore et al. | Dec 2011 | B2 |
8135858 | Shanbhag et al. | Mar 2012 | B2 |
20020010790 | Ellis et al. | Jan 2002 | A1 |
20020013848 | Rene Salle | Jan 2002 | A1 |
20020023184 | Paul | Feb 2002 | A1 |
20020101859 | Maclean | Aug 2002 | A1 |
20020110125 | Banks et al. | Aug 2002 | A1 |
20020114328 | Miyamoto et al. | Aug 2002 | A1 |
20020116564 | Paul et al. | Aug 2002 | A1 |
20020141424 | Gasbarro et al. | Oct 2002 | A1 |
20020161567 | Emberty et al. | Oct 2002 | A1 |
20020163910 | Wisner et al. | Nov 2002 | A1 |
20020165978 | Chui | Nov 2002 | A1 |
20020188711 | Meyer et al. | Dec 2002 | A1 |
20020191602 | Woodring et al. | Dec 2002 | A1 |
20020191649 | Woodring | Dec 2002 | A1 |
20030012204 | Czeiger et al. | Jan 2003 | A1 |
20030030866 | Yoo | Feb 2003 | A1 |
20030037127 | Shah et al. | Feb 2003 | A1 |
20030058853 | Gorbatov et al. | Mar 2003 | A1 |
20030061220 | Ibrahim et al. | Mar 2003 | A1 |
20030076788 | Grabauskas et al. | Apr 2003 | A1 |
20030084219 | Yao et al. | May 2003 | A1 |
20030095549 | Berman | May 2003 | A1 |
20030118047 | Collette et al. | Jun 2003 | A1 |
20030118053 | Edsall et al. | Jun 2003 | A1 |
20030135385 | Karpoff | Jul 2003 | A1 |
20030158971 | Renganarayanan et al. | Aug 2003 | A1 |
20030189930 | Terrell et al. | Oct 2003 | A1 |
20030189935 | Warden et al. | Oct 2003 | A1 |
20030189936 | Terrell et al. | Oct 2003 | A1 |
20030210685 | Foster et al. | Nov 2003 | A1 |
20040013092 | Betker et al. | Jan 2004 | A1 |
20040013125 | Betker et al. | Jan 2004 | A1 |
20040024905 | Liao et al. | Feb 2004 | A1 |
20040024911 | Chung et al. | Feb 2004 | A1 |
20040081186 | Warren et al. | Apr 2004 | A1 |
20040085972 | Warren et al. | May 2004 | A1 |
20040146254 | Morrison | Jul 2004 | A1 |
20040151174 | Del Signore et al. | Aug 2004 | A1 |
20050018673 | Dropps et al. | Jan 2005 | A1 |
20050025075 | Dutt et al. | Feb 2005 | A1 |
20050036499 | Dutt et al. | Feb 2005 | A1 |
20050044354 | Hagerman | Feb 2005 | A1 |
20050073956 | Moores et al. | Apr 2005 | A1 |
20050169311 | Millet et al. | Aug 2005 | A1 |
20050198523 | Shanbhag et al. | Sep 2005 | A1 |
20050232285 | Terrell et al. | Oct 2005 | A1 |
20060023707 | Makishima et al. | Feb 2006 | A1 |
20060023708 | Snively et al. | Feb 2006 | A1 |
20060023725 | Makishima et al. | Feb 2006 | A1 |
20060023726 | Chung et al. | Feb 2006 | A1 |
20060023751 | Wilson et al. | Feb 2006 | A1 |
20060034302 | Peterson | Feb 2006 | A1 |
20060092932 | Ghosh et al. | May 2006 | A1 |
20060203725 | Paul et al. | Sep 2006 | A1 |
20070002883 | Edsall et al. | Jan 2007 | A1 |
20070058619 | Gopal Gowda et al. | Mar 2007 | A1 |
20070091903 | Atkinson | Apr 2007 | A1 |
20080028096 | Henderson et al. | Jan 2008 | A1 |
20090290589 | Green | Nov 2009 | A1 |
20120044934 | Del Signore et al. | Feb 2012 | A1 |
Entry |
---|
“Fibre Channel Methodologies for Interconnects (FC-MI) Rev 1.8;” NCITS working draft proposed Technical Report; Sep. 28, 2001. |
“Fibre Channel Methodologies for Interconnects—2 (FC-MI-2) Rev 2.60;” INCITS working draft proposed Technical Report; Jun. 7, 2005. |
American National Standard for Information Technology; “Fibre Channel—Fabric Generic Requirements (FC-FG)” Secretariat: Information Technology Industry Council; Approved Dec. 4, 1996: American National Standards Institute, Inc. |
“Fibre Channel Switch Fabric (FC-SW) Rev 3.3;” NCITS working draft proposed American National Standard for Information Technology; Oct. 21, 1997. |
“Fibre Channel Switch Fabric—2 (FC-SW-2) Rev 5.3;” NCITS working draft proposed American National Standard for Information Technology; Jun. 26, 2001 (pp. begin to 32 & 69-94). |
“Fibre Channel Switch Fabric—3 (FC-SW-3) Rev 6.6;” NCITS working draft proposed American National Standard for Information Technology; Dec. 16, 2003 (pp. begin to 30 & 91-116). |
“Fibre Channel Physical and Signaling Interface (FC-PH) Rev 4.3;” working draft proposed American National Standard for Information Systems; Jun. 1, 1994 (pp. begin to 32). |
IP Storage Working Group, IETF, “draft-monia-ips-ifcparch-00.txt”, Internet Draft Standard, Nov. 2000, pp. 1-18. |
INCITS Working Draft Proposed American National Standard for Information Technology, “Fibre Channel Link Services (FC-LS) Rev. 1.5.1” Sep. 6, 2006 Secretariat: Information Technology Industry Council. |
IP Storage Working Group, IETF, “draft-monia-ips-ifcp-01.txt”, Internet Draft Standard, Jan. 2001, pp. 1-48. |
IP Storage Working Group, IETF, “draft-chau-fcip-ifcp-encap-00.txt”, Internet Draft Standard, Feb. 2001, pp. 1-8. |
IP Storage Working Group, IETF, “draft-monia-ips-ifcpenc-00.txt”, Internet Draft Standard, Feb. 2001, pp. 1-7. |
IP Storage Working Group, IETF, “draft-tseng-ifcpmib-00.txt”, Internet Draft Standard, Aug. 2001, pp. 1-22. |
IP Storage Working Group, IETF, “draft-monia-ips-ifcplcc-00.txt”, Internet Draft Standard, Apr. 2002, pp. 1-44. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-wglcc-01.txt”, Internet Draft Standard, May 2002, pp. 1-45. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-01.txt”, Internet Draft Standard, Feb. 2001, pp. 1-55. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-02.txt”, Internet Draft Standard, May 2001, pp. 1-68. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-03.txt”, Internet Draft Standard, Jul. 2001, pp. 1-67. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-04.txt”, Internet Draft Standard, Aug. 2001, pp. 1-84. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-05.txt”, Internet Draft Standard, Sep. 2001, pp. 1-91. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-06.txt”, Internet Draft Standard, Oct. 2001, pp. 1-91. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-07.txt”, Internet Draft Standard, Nov. 2001, pp. 1-90. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-08.txt”, Internet Draft Standard, Jan. 2002, pp. 1-98. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-09.txt”, Internet Draft Standard, Jan. 2002, pp. 1-97. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-10.txt”, Internet Draft Standard, Feb. 2002, pp. 1-98. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-11.txt”, Internet Draft Standard, May 2002, pp. 1-103. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-12.txt”, Internet Draft Standard, Jun. 2002, pp. 1-104. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-14.txt”, Internet Draft Standard, Dec. 2002, pp. 1-104. |
NCITS Working draft proposed American National Standard for Information Technology; “Fibre Channel—Framing and Signaling (FC-FS);” Feb. 8, 2002; Secretariat: Information Technology Industry Council. |
NCITS Working draft proposed American National Standard for Information Systems; “Fibre Channel—Backbone (FC-BB-2);” Mar. 6, 2002; Secretariat: Information Technology Industry Council. |
NCITS Working draft proposed American National Standard for Information Technology; “Fibre Channel—Generic Services—4 (FC-GS-4);” Sep. 19, 2001; Secretariat: Information Technology Industry Council. |
Pendry et al.; “InfiniBand Architecture: Bridge Over Troubled Waters;” Apr. 27, 2000: Illuminata, Inc.; Nashua, New Hampshire. |
IP Storage Working Group, IETF, “draft-ietf-ips-ifcp-13.txt”, Internet Draft Standard, Aug. 2002, pp. 1-104. |
Weber et al.; “Fibre Channel (FC) Frame Encapsulation;” Dec. 2003; The Internet Society. |
“Fibre Channel Backbone (FC-BB);” Rev 4.7; NCITS working draft proposed American National Standard for Information Systems; Jun. 8, 2000. |
“Fibre Channel Backbone (FC-BB-2);” Rev 6.0; INCITS working draft proposed American National Standard for Information Systems; Feb. 4, 2003. |
Travostino et al.; “IFCP—A Protocol for Internet Fibre Channel Networking;” Dec. 2002. |
Rajagopal et al.; “IP and ARP over Fibre Channel;” Network Working Group Request for Comments 2625; Gadzoox Networks; Jun. 1999. |
Fabric Extension Study Group, Draft Minutes, T11/04-129v0, Feb. 3, 2004. |
Pelissier et al., FR_Header Definition, T11/04-241v0, Apr. 2004. |
Frame Routing Extended Header (FR_Header), T11/05-214v0 Revision 3, Mar. 15, 2005. |
Inter-Fabric Routing Extended Header (IFR_Header), T11/05-214v1 Revision 5d, May 20, 2005. |
Pelissier et al., Inter-Fabric Routing (T11/04-520v0), Jul. 30, 2004. |
Desanti et al., Inter-Fabric Routing, T11/04-408v0, Jun. 2004. |
Transport Fabric Model Rev 1.3, T11/05-075v1, Apr. 1, 2005. |
Pelissier et al., Inter-Fabric Routing Solutions, 05-232v1, Apr. 5, 2005. |
Fabric Routing Types, T11/05-099v0, Feb. 1, 2005. |
Newman et al., Seagate Fibre Channel Interface Product manual, Mar. 21, 1997, Seagate. |
Elsbernd et al., Fabric Expansion Study Group Draft minutes 03-691-v0, Oct. 7, 2003. |
“Fibre Channel—Framing and Signaling—2 (FC-FS-2) Rev. 0.10;” INCITS working draft proposed American National Standard for Information Technology; Apr. 1, 2004; pp. i-11. |
I2O architecture Specification, Ver. 1.5, Mar. 1997. |
“Fibre Channel Physical and Signaling Interface (FC-PH) Rev 4.3”, Jun. 1, 1994, p. front cover to I-9. |
“Fibre Channel Fabric Loop Attachment (FC-FLA) Rev. 2.7”, Aug. 21, 1997, p. front cover to 122. |
“Fibre Channel—Switch Fabric—2 (FC-SW-2) Rev. 5.3”, Jun. 26, 2001, p. front cover to 186. |
“Fibre Channel Private Loop SCSI Direct Attach (FC-PLDA) Rev. 2.1”, Sep. 22, 1997, p. front cover to 1-4. |
Number | Date | Country | |
---|---|---|---|
20120044933 A1 | Feb 2012 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10356392 | Jan 2003 | US |
Child | 13284778 | US |