The present disclosure generally relates to Storage Area Networks (SANs) and, more particularly, to providing a login proxy to Fibre Channel (FC) storage arrays for FC servers.
Storage Area Networks (SANs) reliably store large amounts of data for an organization. Clusters of storage devices, e.g., FC storage arrays, in one location are called SAN islands and communicate using the FC Protocol. Users accessing a SAN typically reside on an Ethernet based Local Area Network (LAN) at another location that may be coupled to an FC server cluster for communication with the FC storage array. To mediate communication between the FC server cluster and the FC storage array, an FC switch network (also called “switched fabric”) is employed.
Recent advances have led to virtualization resulting in the creation of Virtual SANs (VSANs) and Virtual LANs (VLANs). VSANs and VLANs remove the physical boundaries of networks and allow a more functional approach. For example, an engineering department VLAN can be associated with an engineering department VSAN, or an accounting department VLAN can be associated with an accounting department VSAN, regardless of the location of network devices in the VLAN or storage devices in the VSAN. In a virtualized server environment typically there are multiple virtual machines (VMs) running on each physical server in the FC server cluster that are capable of migrating from server to server.
The physical servers are typically grouped in clusters. Each physical server in a cluster needs to have access to the same set of storage ports so that when a VM moves from one physical server to another physical server, the VM still has access to the same set of applications and data in the storage device. Due to this requirement, whenever a new physical server is added to the cluster, the access permissions in the storage device need to be modified to allow the new server to access it. This creates operational challenges due to the coordination needed between the server administrators and the storage administrators for change management.
Overview
Techniques are provided herein for receiving, at a proxy device in a network, a login request from a source device to access a destination device. The source device does not have direct access permission to the destination device. A response to the login request is sent that is configured to appear to the source device to have been sent from the destination device. The proxy device logs into the destination device on behalf of the source device to obtain access to the destination device. Thereafter, the proxy device receives first network traffic frames associated with a service flow between the source device and the destination device from the source device that are destined for the destination device. Information is overwritten within the first network traffic frames such that the first network traffic frames appear to originate from the proxy device when transmitted to the destination device. The first network traffic frames are transmitted from the proxy device to the destination device. Similar operations are performed for frames sent from the destination device to the source device. At the proxy device, second network traffic frames are received from the destination device that are destined for the source device. Information within the second network traffic frames is overwritten such that the second network traffic frames appear to originate from the destination device when transmitted to the source device. The second network traffic frames are transmitted from the proxy device to the source device.
While the terms “source device” and “destination device” are used herein, these are meant only for explanatory purposes. Data is sent in both directions between these two devices. The source device may be viewed as a first device and the destination device viewed as a second device. In the specific examples described herein, the first device is an FC server and the second device is an FC storage array.
Referring first to
The FCID may be separated into three bytes in a Domain.Area.Port notation that may be used, e.g., in a frame header to identify source ports of a source device and destination ports of a destination device. The domain is always associated with the respective switch. In this example, communications between FC physical servers 110(1) and 110(2) and switch 130(1) use FCID 20.1.1 for FC server 110(1) and FCID 20.2.3 for server 110(2), where “20” is the domain for switch 130(1). Thus, all connections to switch 130(1) will use a 20.x.y FCID. Switch 130(2) has a domain of 30 and switch 130(3) has a domain of 10. FCIDs with arbitrary areas and ports are assigned for communications on the various paths shown in
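The Domain.Area.Port notation above is simply a dotted rendering of the three bytes of a 24-bit FCID. The following sketch (Python is used here purely for illustration; the helper names are not part of any FC standard or product) shows how such an address can be packed and unpacked:

```python
def fcid_to_str(fcid):
    """Render a 24-bit FCID in Domain.Area.Port notation."""
    return f"{(fcid >> 16) & 0xFF}.{(fcid >> 8) & 0xFF}.{fcid & 0xFF}"

def str_to_fcid(s):
    """Parse Domain.Area.Port notation back into a 24-bit integer."""
    domain, area, port = (int(x) for x in s.split("."))
    return (domain << 16) | (area << 8) | port

# FCID 20.1.1 for FC server 110(1): the domain byte (20) identifies switch 130(1).
fcid = str_to_fcid("20.1.1")
assert fcid == 0x140101
assert (fcid >> 16) & 0xFF == 20          # domain byte
assert fcid_to_str(fcid) == "20.1.1"
```

Because the domain byte is fixed per switch, a simple mask on the top byte is enough to tell which switch a port is attached to.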
One or more VMs may be running on each of the FC physical servers 110(1)-110(m). Individual VMs may migrate from server to server. As such, each of the FC servers 110(1)-110(m) needs to access the same set of storage ports so that the VMs can access the same applications and data that they operate on in the FC storage arrays 140(1) and 140(2) as they migrate from one FC server to another.
When a new FC server is added to the FC server cluster, the new FC server needs the same access permissions to the FC storage array(s) as other FC servers in the same cluster, e.g., FC servers 110(1)-110(m), so that the VMs can retain access to the applications and data as they migrate to the new FC server. The process of adding a new FC server necessitates the coordination of two network administrators, one for the FC server cluster and one for the FC storage array, in order to set up the required access permissions on both systems.
For example, when a new physical server is deployed in a virtualization server cluster, the server administrator needs to ask the storage administrator to add the new server in the access control list of the storage array. This may be done in the form of a Media Access Control (MAC)-based access control, Port World Wide Name (PWWN)-based zoning, or using Logical Unit Number (LUN) masking in the storage array. LUNs provide a way to logically divide storage space, e.g., hard drive or optical drive volumes. For FC storage arrays, the LUN masking configuration has to be modified to allow the new server to access a selected set of LUNs. All the servers in a virtualization cluster are typically zoned with the same set of storage ports and they are given access to the same set of LUNs. Thus, when a new server is added, the operation tasks described above have to be performed. In addition, the increasing demand for virtualization places a greater demand on the storage array because the ports in the storage array have a limit on the number of servers that can be simultaneously logged in.
However, by assigning access permissions to the proxy server 150, the proxy server 150 may proxy for any newly added FC servers by handling the login procedures and translating or overwriting identification information, e.g., a source or destination FCID and an originator exchange identifier (OXID) that is carried in every FC frame, in the network traffic headers such that the FC storage array is unaware of newly added FC servers. Any frames sent back to the FC server will echo the OXID. Thus, OXIDs can be used by the various devices to track service flows, i.e., the device can match response messages to read or write request messages.
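The OXID-echo behavior described above is what lets a device pair responses with outstanding requests. A minimal sketch of that matching (illustrative only; the exchange descriptions and OXID values are made up for the example):

```python
# Each originated exchange is keyed by its OXID; the responder echoes the
# OXID back, so the originator can match the response to the request.
pending = {}  # oxid -> description of the outstanding exchange

def send_request(oxid, description):
    pending[oxid] = description

def handle_response(oxid):
    # Pop the matching entry: the echoed OXID identifies the service flow.
    return pending.pop(oxid)

send_request(0xA001, "READ request to storage array")
send_request(0xA002, "WRITE request to storage array")
assert handle_response(0xA001) == "READ request to storage array"
assert 0xA001 not in pending and 0xA002 in pending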
The proxy server also reduces the number of server logins to the storage array by representing more than one physical server. In other words, the proxy server 150 can be thought of as an instantiated virtual server that may represent some or all of the physical FC servers 110(1)-110(m). As shown in
Referring to
The data processing device 210 is, for example, a microprocessor, a microcontroller, a system on a chip (SOC), or other fixed or programmable logic. The data processing device 210 is also referred to herein simply as a processor. The memory 230 may be any form of random access memory (RAM) or other data storage block that stores data used for the techniques described herein. The memory 230 may be separate or part of the processor 210. Instructions for performing the process logic 300 may be stored in the memory 230 for execution by the processor 210 such that, when executed by the processor, they cause the processor to perform the operations described herein in connection with
The functions of the processor 210 may be implemented by a processor or computer readable tangible medium encoded with instructions or by logic encoded in one or more tangible media (e.g., embedded logic such as an application specific integrated circuit (ASIC), digital signal processor (DSP) instructions, software that is executed by a processor, etc.), wherein the memory 230 stores data used for the computations or functions described herein (and/or to store software or processor instructions that are executed to carry out the computations or functions described herein). Thus, functions of the process logic 300 may be implemented with fixed logic or programmable logic (e.g., software or computer instructions executed by a processor or field programmable gate array (FPGA)).
Hardware logic 240 may be used to implement fast address and OXID rewrites/overwrites in hardware, e.g., at an ASIC level, in the FC frames without involving the switch Central Processing Unit (CPU), e.g., processor 210, or a separate processor associated with one of the network interfaces 220. The hardware logic 240 may be coupled to processor 210 or be implemented as part of processor 210.
Referring to
At 330, the proxy device logs into the destination device on behalf of the source device to obtain access to the destination device. The destination device is configured to allow access by the proxy server. In one example, the proxy server can perform read and write access to storage associated with the destination device, e.g., an FC storage array. At 340, first network traffic frames are received from the source device that are destined for the destination device. At 350, information within the first network traffic frames is overwritten such that the first network traffic frames appear to originate from the proxy device when transmitted to the destination device. An example of how the proxy device overwrites information within frames is described in connection with
Turning to
Referring to
An FC frame 510 is transmitted, as shown with a solid line arrow, from FC server 110(1) with an FCID of 20.1.1 intended for FC storage array 140(1) with an FCID of 10.1.1. FC frame 510 may be generated by FC server 110(1) or by VMs running on FC server 110(1) that use the same PWWN as the FC server 110(1). The FC frame 510 has a source FCID of 20.1.1, a destination FCID of 10.1.1, and an FC server generated OXID of Xa. The proxy server 150 is configured to receive all traffic from the FC server cluster intended for storage array 140(1) based on the destination FCID contained in the frame 510. That is, the switches in the network 120 redirect traffic addressed to the storage array 140(1) to the proxy server 150.
Redirection may be accomplished in a number of ways. In one example, the traffic may be redirected at the ports of the storage array, e.g., at the ports of storage arrays 140(1) and 140(2). In another example, the traffic is redirected at the ports of the FC servers, e.g., FC servers 110(1)-110(m). Redirection may be accomplished with an access control rule in an Access Control List (ACL), e.g., an ACL with redirect option is placed at the server ports in FC servers 110(1)-110(m) in the ingress direction such that any traffic from that server to the storage array port is redirected to the proxy device.
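The ingress redirect rule described above can be sketched as a simple forwarding predicate (Python as pseudocode; the FCID values match the example topology, and the function name is illustrative, not any real switch API):

```python
# Traffic from a server port addressed to a storage-array FCID is steered
# to the proxy; traffic from the proxy itself passes through unmodified.
STORAGE_FCIDS = {"10.1.1"}   # ports of storage array 140(1)
PROXY_FCID = "10.1.2"        # proxy server 150

def next_hop(frame):
    if frame["dst"] in STORAGE_FCIDS and frame["src"] != PROXY_FCID:
        return PROXY_FCID    # ACL redirect: divert to the proxy device
    return frame["dst"]      # normal forwarding

# A server frame to the storage array is redirected to the proxy...
assert next_hop({"src": "20.1.1", "dst": "10.1.1"}) == "10.1.2"
# ...while the proxy's own rewritten frame reaches the storage port.
assert next_hop({"src": "10.1.2", "dst": "10.1.1"}) == "10.1.1"
```

The exemption for the proxy's own FCID is what prevents the rewritten frames from looping back to the proxy.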
Thus, the proxy server 150 intercepts FC frame 510 with a destination FCID of 10.1.1. The proxy server 150 overwrites the source FCID 20.1.1 of the frame 510 with its FCID of 10.1.2 and overwrites the OXID with a proxy server generated OXID (proxy OXID) of Xb to produce FC frame 520. The proxy OXID is configured to uniquely identify a source port of the source device. The proxy server 150 may maintain its own pool of proxy OXIDs to avoid the possibility of multiple flows from the same or different FC servers having the same OXID, which could be the case if the proxy server 150 uses the OXID in the FC frames received from the FC servers. The proxy server 150 transmits FC frame 520 to the storage array 140(1).
An FC frame 530 is transmitted, as shown with a dashed line arrow, from FC server 110(m) with an FCID of 30.2.1 to FC storage array 140(1) with an FCID of 10.1.1. The FC frame 530 has a source FCID of 30.2.1, a destination FCID of 10.1.1 and an FC server generated OXID of Xa. Although the OXID of Xa is the same OXID as that used by FC server 110(1) for FC frame 510, this does not present an issue because the frames' FCIDs are unique and the service flows are thereby distinguishable by the proxy server 150. The network 120 redirects the FC frame 530 to the proxy server 150. The proxy server 150 overwrites the source FCID of 30.2.1 with its FCID of 10.1.2 and overwrites the FC server generated OXID Xa with a proxy server generated OXID of Xc to produce FC frame 540. FC frame 540 is transmitted to the storage array 140(1). The proxy server 150 maintains a response queue for frames 510 and 530 and will use the queue to match responses received from the FC storage array 140(1), i.e., the proxy server 150 generates information comprising the source OXID and the proxy OXID to map network traffic frames for the service flow. Also maintained in the queue are placeholders for responses that are due from the storage array 140(1) for FC frames 520 and 540. In other words, the response queue allows the proxy server 150 to track FC exchanges, in exchange sequence order, for FC service flows to and from the FC servers, and to and from the storage arrays.
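The server-to-storage rewrite just described can be sketched as follows (Python as pseudocode; the class, frame fields, and OXID pool values are illustrative, not from any real FC stack). Note how the proxy draws proxy OXIDs from its own pool so that two servers using the same OXID, as frames 510 and 530 do, remain distinguishable:

```python
import itertools

class ProxyForwardPath:
    """Sketch of the server-to-storage FCID/OXID overwrite."""

    def __init__(self, proxy_fcid):
        self.proxy_fcid = proxy_fcid
        self._oxid_pool = itertools.count(0xB000)  # proxy-owned OXID pool
        # Response queue: proxy OXID -> (original server FCID, server OXID)
        self.pending = {}

    def rewrite_to_storage(self, frame):
        proxy_oxid = next(self._oxid_pool)
        self.pending[proxy_oxid] = (frame["src"], frame["oxid"])
        # Overwrite source FCID and OXID; destination is left unchanged.
        return {"src": self.proxy_fcid, "dst": frame["dst"], "oxid": proxy_oxid}

proxy = ProxyForwardPath("10.1.2")
f1 = proxy.rewrite_to_storage({"src": "20.1.1", "dst": "10.1.1", "oxid": 0xA})
f2 = proxy.rewrite_to_storage({"src": "30.2.1", "dst": "10.1.1", "oxid": 0xA})
# Both servers used OXID 0xA, yet the rewritten flows carry distinct OXIDs
# and the pending map remembers which server each flow belongs to.
assert f1["src"] == f2["src"] == "10.1.2"
assert f1["oxid"] != f2["oxid"]
assert proxy.pending[f1["oxid"]] == ("20.1.1", 0xA)
```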
The FC storage array 140(1) responds to FC frame 520 with FC frame 550, as shown with a solid line arrow, to proxy server 150. The response frame has a source FCID of 10.1.1 for storage array 140(1), a destination FCID of 10.1.2 for proxy server 150, and the proxy server generated OXID of Xb. The proxy server 150 will use the Xb OXID to associate FC frame 550 with FC frame 520 using the pending response queue for storage array 140(1) in order to send FC frame 550 to FC server 110(1) as a response to FC frame 510. The proxy server 150 overwrites the destination FCID of 10.1.2 with an FCID of 20.1.1 for the FC server 110(1), and overwrites the OXID Xb contained in FC frame 550 with the original FC server (source device) generated OXID of Xa to produce FC frame 560. The FC frame 560 is transmitted to the FC server 110(1) and is a response to FC frame 510. The FC server 110(1) is completely unaware that a proxy server was involved in the frame exchange.
Similarly, the FC storage array 140(1) responds to FC frame 540 with FC frame 570, as shown with a dashed line arrow, to proxy server 150. The response frame has a source FCID of 10.1.1 for storage array 140(1), a destination FCID of 10.1.2 for proxy server 150, and the proxy server generated OXID of Xc. The proxy server 150 will use the OXID Xc contained in frame 570 to associate it with FC frame 530 using the pending response queue for storage array 140(1) in order to send FC frame 570 to FC server 110(m) as a response to FC frame 540. The proxy server 150 overwrites the destination FCID of 10.1.2 with an FCID of 30.2.1 for the FC server 110(m), and overwrites the OXID Xc contained in FC frame 570 with the original FC server generated OXID of Xa to produce FC frame 580. The FC frame 580 is transmitted to the FC server 110(m) and is a response to FC frame 530. The FC server 110(m) is completely unaware that a proxy server was involved in the frame exchange. Likewise, the storage array 140(1) is completely unaware that the physical FC servers 110(1) and 110(m) are being proxied by FC proxy server 150. Any LUN masking and access control at the storage array 140(1) may be performed based on the PWWN of the proxy server 150.
As can be seen from the above example, the proxy server multiplexes traffic from servers 110(1) and 110(m) into a single service flow with source FCID 10.1.2 for proxy server 150 and a destination FCID of 10.1.1 for storage array 140(1). The same proxy server can proxy for multiple physical servers as well as multiple storage arrays. Only the proxy server 150 needs to be logged into FC storage array 140(1) for FC servers 110(1)-110(m) to communicate with the storage array. The unique OXIDs within the FC frames identify the service flows to the FC storage array 140(1) and to the proxy server 150 when they are echoed by the FC storage array 140(1) in response frames sent back to the proxy server 150. The proxy server 150 then uses the unique OXIDs to demultiplex the FC frames back into the individual service flows to FC servers 110(1) and 110(m), respectively. The proxy server 150 maintains information to map service flows between the source devices and the destination devices. Example types of information are shown in
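The demultiplexing step on the return path can be sketched in the same style (Python as pseudocode; the pending map below is pre-populated with illustrative values standing in for the Xb/Xc entries the proxy recorded on the forward path):

```python
# Response queue: each proxy-generated OXID maps back to the originating
# server's FCID and original OXID (values illustrative).
pending = {0xB0: ("20.1.1", 0xA),   # recorded for frame 520 (server 110(1))
           0xB1: ("30.2.1", 0xA)}   # recorded for frame 540 (server 110(m))

def rewrite_to_server(frame):
    # The storage array echoed the proxy OXID; pop the matching entry and
    # restore the server's FCID and original OXID before forwarding.
    server_fcid, server_oxid = pending.pop(frame["oxid"])
    return {"src": frame["src"], "dst": server_fcid, "oxid": server_oxid}

resp = rewrite_to_server({"src": "10.1.1", "dst": "10.1.2", "oxid": 0xB1})
assert resp == {"src": "10.1.1", "dst": "30.2.1", "oxid": 0xA}
```

The echoed proxy OXID alone is enough to demultiplex the single proxied flow back into per-server service flows; the server sees its own OXID and never learns a proxy was involved.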
As mentioned above, one possible implementation of the proxy functionality is at a switch with a switch port that is connected to the storage array. Frames sent from the physical servers to the storage array can be captured on ingress and the source FCID and OXID can be rewritten at the egress switch port connected to the storage array. Similarly, frames from the storage array to the proxy server can be captured at the ingress switch port where the storage array is connected, and the destination FCID and OXID can be rewritten before the frames are forwarded to the respective physical servers, as described above. The actual rewriting of the frames and OXID pool management may be implemented in an intelligent line card device with a programmable processor (e.g., Cisco Systems' SSN-16 line card with an Octeon datapath processor or certain Brocade ServerIron modules) or via an ASIC.
In an intelligent line card, the FC traffic is received and processed by one or more network processors. Software running on these network processors can be customized to implement the server proxy functionality. ACLs at the real physical server ports would be programmed to redirect traffic destined for the storage array ports to the intelligent line card. Alternatively, an ACL can be placed at the egress ports to the storage arrays such that the traffic to the storage ports from the physical servers is redirected to the intelligent line card while allowing traffic from the proxy server to the storage arrays to reach the storage arrays' ports.
Example tables of information that are stored by the proxy device to map service flows are shown in
When a new FC server is added to the server cluster, no changes to the storage arrays are required. The network, e.g., network 120 (
Proxy server logic may be implemented at the ASIC level with OXID pool management functionality in the ASIC. Rewrites of the FCIDs are deterministic, e.g., using ACLs. For an OXID rewrite, the ASIC maintains a table of the original OXID, the original server FCID or logical name, and the new OXID as shown in
Techniques are provided herein for receiving, at a proxy device in a network, a login request from a source device to access a destination device. The source device does not have direct access to the destination device. A response to the login request is sent that is configured to appear to the source device to have been sent from the destination device. The proxy device logs into the destination device on behalf of the source device to obtain access to the destination device. The proxy device receives first network traffic frames associated with a service flow between the source device and the destination device from the source device that are destined for the destination device. Information is overwritten within the first network traffic frames such that the first network traffic frames appear to originate from the proxy device when transmitted to the destination device. The first network traffic frames are transmitted from the proxy device to the destination device.
Techniques are provided herein for performing similar operations on frames sent from the destination device to the source device. At the proxy device, second network traffic frames are received from the destination device that are destined for the source device. Information within the second network traffic frames is overwritten such that the second network traffic frames appear to originate from the destination device when transmitted to the source device. The second network traffic frames are transmitted from the proxy device to the source device.
In summary, the techniques described herein vastly reduce the operational steps for provisioning a new server to an existing server cluster by eliminating administrator tasks on the storage arrays, and they reduce the number of storage array logins through the proxy server's ability to multiplex traffic from multiple servers to the storage array.
The above description is intended by way of example only.