Remote shared server peripherals over an ethernet network for resource virtualization

Information

  • Patent Grant
  • 10880235
  • Patent Number
    10,880,235
  • Date Filed
    Tuesday, April 3, 2018
  • Date Issued
    Tuesday, December 29, 2020
  • Field of Search
    • US
    • 709/226
    • 709/250
    • 709/220
    • 709/223
    • CPC
    • G06F9/45558
    • G06F2009/45579
    • H04L49/351
  • International Classifications
    • H04L12/931
  • Disclaimer
    This patent is subject to a terminal disclaimer.
  • Term Extension
    256 days
Abstract
Provided is a novel approach for connecting servers to peripherals, such as NICs, HBAs, and SAS/SATA controllers. Also provided are methods of arranging peripherals within one or more I/O directors, which are connected to the servers over an Ethernet network. Such an arrangement allows sharing the same resource among multiple servers.
Description
BACKGROUND

A server or computing system generally includes one or more processors, memory, and peripheral components and peripheral interfaces. Examples of peripheral components include cryptographic accelerators, graphics accelerators, and extensible markup language (XML) accelerators. Examples of peripheral interfaces include network interface cards (NICs), serial ATA (SATA) adapters, serial attached SCSI (SAS) adapters, RAID adapters, and Fibre Channel and iSCSI host bus adapters (HBAs). Processors, memory, and peripherals are often connected using one or more buses and bus bridges. To provide fault-tolerance, individual servers are often configured with redundant resources.


Since resources, such as peripheral components and peripheral interfaces, are assigned on a per-server basis, other servers typically do not have access to these resources. In order to provide adequate resources for each server, resources are typically over-provisioned. For example, more hardware acceleration is provided than is typically needed, and more network interface capacity is allocated than is typically used, simply to handle worst-case or expected worst-case scenarios. This over-provisioning results in overall waste and low utilization. Resource assignment on a per-server basis also limits the ability to reconstruct or reconfigure a resource environment.


A more efficient and flexible approach is to provide remote peripherals which can be shared among servers while maintaining quality-of-service guarantees and providing the ability to change dynamically the assignment of peripherals to servers. Such shared remote peripherals are referred to as virtualized resources.


Ethernet is a commonly deployed server networking technology and it may be used for communication between servers and their remote peripherals. However, the high reliability, performance, and quality-of-service guarantees needed for communication with remote peripherals are lacking for known Ethernet applications. Consequently, techniques and mechanisms are needed to provide efficient and reliable data transfer between servers and remote peripherals over Ethernet, along with quality of service and methods to discover and manage the remote peripherals.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention may best be understood by reference to the following description taken in conjunction with the accompanying drawings, which are illustrative of specific embodiments of the present invention.



FIG. 1 is a diagrammatic representation showing multiple servers having virtualized resources interconnected with an I/O director using an Ethernet fabric.



FIG. 2 is a diagrammatic representation showing an Input/Output Director (IOD) platform.



FIG. 3 is a diagrammatic representation showing an example of a Virtual Network Interface Card (vNIC) module.



FIG. 4 is a diagrammatic representation showing an example of a Virtual Host Bus Adapter (vHBA) module.



FIGS. 5A and 5B are diagrammatic representations showing two examples of protocol stacks.



FIGS. 6A and 6B are diagrammatic representations showing two examples of device driver stacks.



FIG. 7 is a diagrammatic representation showing an example of a vHBA write flow protocol that may be used over an Ethernet fabric for writing data to storage devices.



FIG. 8 is a diagrammatic representation showing an example of a vHBA read flow protocol that may be used over an Ethernet fabric for reading data from storage devices.



FIG. 9 is a diagrammatic representation showing one example of a management protocol that may be used over an Ethernet fabric for discovering an IOD and establishing communication between servers and the IOD as well as its I/O modules.



FIG. 10 illustrates a technique for virtualizing I/O resources in accordance with certain embodiments.





DETAILED DESCRIPTION

Reference will now be made in detail to some specific examples of the invention including the best modes contemplated for carrying out the invention. Examples of these specific embodiments are illustrated in the accompanying drawings. While the invention is described in conjunction with these specific embodiments, it will be understood that it is not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims.


For example, the techniques of the present invention will be described in the context of lossy and lossless Ethernet and Fibre Channel Storage Area Networks (SANs). However, it should be noted that the techniques of the present invention can be applied to a variety of different standards and variations of Ethernet and SANs. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In other instances, well-known operations have not been described in detail in order not to obscure unnecessarily the present invention.


Furthermore, techniques and mechanisms of the present invention will sometimes be described in singular form for clarity. However, it should be noted that some embodiments can include multiple iterations of a technique or multiple instantiations of a mechanism unless noted otherwise. For example, a processor is used in a variety of contexts. However, it will be appreciated that multiple processors can also be used while remaining within the scope of the present invention unless otherwise noted.


A server or computing system generally includes one or more processors, memory, as well as other peripheral components and peripheral interfaces, such as HBAs, hardware accelerators, NICs, graphics accelerators, disks, etc. Applications running on servers can access storage within a SAN using resources such as HBAs, while other networks are accessed using NICs.


Servers using conventional internal dedicated I/O controllers are typically over-provisioned with NICs and HBAs for several reasons. One reason is a need to provide sufficient capacity for the occasional peak loads incurred by the server. Another reason is the need to connect the server to multiple networks, each requiring its own I/O adapter port. Furthermore, there may be a need to provide dedicated bandwidth to different applications. For example, servers are increasingly virtualized to run multiple operating systems in different virtual machines on the same server at the same time. These reasons may lead to multiple I/O controllers being installed on virtualized servers. Over-provisioning resources increases the cost and complexity of servers (e.g., servers with many bus slots to accommodate various adapters), increases the number of edge switch ports needed to connect to the adapters, and leads to extensive cabling.


Consequently, the techniques and mechanisms described here provide I/O resources such as NICs, HBAs, and other peripheral interfaces in one or more I/O director devices connected to servers over an Ethernet network. Individual servers no longer each require numerous HBAs and NICs, but instead can share HBAs and NICs provided at an I/O director. The individual servers are connected to the I/O director over an Ethernet network, which is used as an I/O fabric. In this configuration, I/O resources can now be shared across the entire server pool rather than being dedicated to individual servers. Quality of service of I/O for individual servers is provided by the I/O director. As a result, fewer I/O resources are required across multiple servers leading to less complex and less expensive systems. Furthermore, such configurations tend to be more flexible and easier to manage since I/O resources can be assigned dynamically without having to change the physical configurations of the servers (e.g., install or remove I/O adapters).


The techniques and mechanisms of the present invention provide virtual HBAs and virtual NICs that a server can access as though physical HBAs and NICs were included in the server and connected to its I/O bus. In certain embodiments, the actual HBAs and NICs are included in a remote shared I/O module within an I/O director connected to the server over an Ethernet network. I/O buses provide reliable ordered delivery with flow control, which is important for communication with some I/O devices. For example, if some of the traffic between the server and an HBA is lost or delivered out-of-order, storage corruption may occur. Similarly, communication between the server and its vNICs and vHBAs provided by the I/O director must have the same properties of reliability, in-order delivery, and flow control. The techniques and mechanisms of various embodiments address this requirement for both standard (lossy) and lossless Ethernet.


Connecting servers to I/O directors using Ethernet allows for widespread adoption of I/O directors with minimal investment and no disruption. For example, Ethernet cards are readily available on numerous existing servers, and existing servers can be connected to I/O directors using these existing Ethernet interface cards. No additional interfaces or interface cards need to be configured or installed on individual servers, so servers can be used as-is without a need to disrupt their activity, open them, and install new cards. Using these existing Ethernet cards, servers can access and share HBAs, NICs and other peripheral interfaces and components included at an I/O director over an Ethernet network.



FIG. 1 is a diagrammatic representation showing multiple servers 101-103 having virtualized resources interconnected with an IOD 107 using an Ethernet I/O fabric 105. According to various embodiments, the servers 101-103 include NICs in order to connect to the Ethernet I/O fabric 105, processors, and memory. The IOD 107 may include an Ethernet switch in order to connect to the servers 101-103 over the Ethernet fabric. The Ethernet switch may be a lossless Ethernet switch or a lossy Ethernet switch. It may be a stand-alone entity, or provided with the IOD 107. Additional details of the IOD 107 are described in the context of FIGS. 2-4 below.


According to various embodiments, NICs, HBAs and other server I/O resources can be offloaded to the IOD 107. The NICs and HBAs are maintained in a shared and virtualized manner on the IOD 107, which provides links to various external switches. By using the IOD 107, the number of resources and links can be significantly reduced, thus increasing operational efficiencies. Further, the network illustrated in FIG. 1 provides certain degrees of flexibility. For example, the number and types of I/O resources allocated to servers is flexible. Connectivity of servers to external Ethernet and Fibre Channel networks becomes flexible as well, e.g., servers can be connected to different physical networks at different times as needed.


Lossy Ethernet refers to an Ethernet technology in which data packets may be lost or delivered out of order by the Ethernet fabric. Lossless Ethernet refers to an enhanced Ethernet technology, which is also known as Data Center Ethernet (DCE), Converged Enhanced Ethernet (CEE), or Data Center Bridging (DCB). Lossless Ethernet provides guaranteed packet delivery without drops. It also provides ordered packet delivery, flow control for eight classes of traffic, and some other enhancements, such as congestion management.


According to various embodiments, the IOD 107 can use lossy and/or lossless Ethernet to manage data transfer between the servers 101-103 and the IOD. While lossless Ethernet may be particularly suitable, other fabrics may be used. For example, a transport protocol used by the IOD 107 may have capabilities for re-transmission, re-ordering of out-of-order packets, back-off upon packet loss for dealing with the lack of flow control, and other capabilities for addressing problems that may arise when using lossy Ethernet. Certain transport protocols may be used on both lossless and lossy Ethernet and furthermore optimized for each Ethernet type. Overall, the transport protocols and IOD configurations of the present invention can be implemented over a lossy or lossless Ethernet fabric.


A number of different transport protocols can be used for communication between the servers and the remote virtual I/O devices provided by the I/O modules within the IOD 107. One example of such a transport protocol is the Internet Wide Area RDMA Protocol (iWARP) which runs on top of TCP. Another example is the Reliable Connection (RC) protocol which is generally defined by the InfiniBand architecture but can be used over an Ethernet fabric as well. iWARP may be used for lossy Ethernet since it makes use of TCP, which provides back-off upon packet loss as well as retransmissions and handling of out-of-order packets. RC may be used for lossless Ethernet since it does not include the complexity and overhead of congestion control (slow start and back-off), which is not necessary at the transport protocol level for lossless Ethernet.
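For illustration only, the following Python sketch captures the fabric-dependent transport choice just described (iWARP over TCP for lossy Ethernet, RC for lossless Ethernet); the class and field names are hypothetical and not part of the patent.

```python
# Minimal sketch of the fabric-dependent transport choice described above.
# The selection rule mirrors the text; the class names and fields are
# illustrative only.
from dataclasses import dataclass
from enum import Enum


class FabricType(Enum):
    LOSSY_ETHERNET = "lossy"
    LOSSLESS_ETHERNET = "lossless"


@dataclass
class TransportProfile:
    name: str
    reliable_delivery: bool      # retransmission / guaranteed delivery
    handles_reordering: bool     # re-ordering of out-of-order packets
    congestion_backoff: bool     # slow start / back-off on packet loss


def select_transport(fabric: FabricType) -> TransportProfile:
    """Pick a transport suited to the Ethernet fabric, per the discussion above."""
    if fabric is FabricType.LOSSY_ETHERNET:
        # iWARP rides on TCP, which supplies retransmission, re-ordering,
        # and back-off upon packet loss.
        return TransportProfile("iWARP/TCP", True, True, True)
    # On lossless Ethernet the fabric itself is flow controlled, so RC's
    # lighter-weight reliability (no slow start / back-off) is sufficient.
    return TransportProfile("RC", True, True, False)


if __name__ == "__main__":
    for fabric in FabricType:
        print(fabric.value, "->", select_transport(fabric).name)
```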



FIG. 2 is a diagrammatic representation showing one example of an IOD 107. I/O modules are connected to multiple servers through an internal Ethernet fabric 201 and, in certain embodiments, an external Ethernet fabric 105. These Ethernet fabrics can be lossy or lossless. An IOD 107 may contain one or more I/O modules of the same type or a combination of I/O modules of different types, such as vHBA I/O module 205 and vNIC I/O module 203. Virtual I/O devices, such as vNICs 213 and vHBAs 215, are implemented in the respective I/O modules. Virtual I/O devices can be assigned to servers dynamically as needed. Multiple servers may have virtual I/O resources on the same I/O modules or on different I/O modules. The I/O modules are responsible for enforcing quality-of-service guarantees, such as committed and peak data rates, for the virtual I/O devices. The IOD 107 may also include a management module 207, which is responsible for tasks such as control, configuration, and logging.


Multiple IOD devices may exist on the same Ethernet network, and they may be managed individually or from a single central management station. Servers may have virtual I/O devices on a single IOD or on multiple IODs. Also, multiple IODs may be used in redundant pairs for fault-tolerance. The failover itself may be implemented at the I/O module level, the IOD level, or within the virtual I/O device drivers on the server.


According to various embodiments, the IOD 107 can provide flexible termination points for the I/O resources assigned to servers. The IOD 107 may be connected to multiple Ethernet networks 109 and/or Fibre Channel networks 108. Connection of a server to one or more networks is performed by assigning the server's vNICs and/or vHBAs to these networks. Therefore, servers can be connected to different physical networks without a need for re-cabling or any other physical intervention.


Similarly, I/O resources can be moved from one server to another. For example, a vNIC 213 which was originally assigned to one server (e.g., server 101 in FIG. 1) may be reassigned to another server (e.g., server 102 in FIG. 1). In certain embodiments, vNICs retain their MAC address on the Ethernet network. In another example, a vHBA can be moved between servers while maintaining the same world-wide name (WWN) on the Fibre Channel network. This allows changing the I/O identity of a server such that it may assume the identity of another server. This functionality may be useful when exchanging servers, e.g., during server failover or bringing down a server for maintenance. For example, if a server fails or needs to be removed, the server identity can be moved to another operational server. If the server is booted from a remotely stored boot image on the SAN or the network, the entire server identity (including both its I/O identity and the image that it runs) can be migrated between servers. I/O migration is also useful in cases of virtual machine migration where there may be a need to migrate the I/O resources used by the virtual machine along with the virtual machine itself.
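The following sketch, with entirely hypothetical class names, illustrates the kind of bookkeeping involved in moving virtual I/O identities between servers while the vNIC MAC address and vHBA WWN stay bound to the virtual devices:

```python
# Illustrative-only sketch of reassigning virtual I/O identities between
# servers; the MAC (vNIC) and WWN (vHBA) travel with the virtual devices,
# as described above. All names and values are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class VirtualNIC:
    name: str
    mac: str          # stays constant across reassignment


@dataclass
class VirtualHBA:
    name: str
    wwn: str          # world-wide name stays constant across reassignment


@dataclass
class Server:
    name: str
    vnics: List[VirtualNIC] = field(default_factory=list)
    vhbas: List[VirtualHBA] = field(default_factory=list)


def migrate_io_identity(src: Server, dst: Server) -> None:
    """Move all virtual I/O devices (and hence the I/O identity) from src to dst."""
    dst.vnics.extend(src.vnics)
    dst.vhbas.extend(src.vhbas)
    src.vnics.clear()
    src.vhbas.clear()


if __name__ == "__main__":
    failed = Server("server101",
                    [VirtualNIC("vnic1", "02:00:00:00:00:01")],
                    [VirtualHBA("vhba1", "50:01:43:80:12:34:56:78")])
    spare = Server("server102")
    migrate_io_identity(failed, spare)
    # The spare server now presents the failed server's MAC and WWN.
    print(spare.vnics[0].mac, spare.vhbas[0].wwn)
```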


Virtual machines running on the servers may be assigned their own dedicated virtual I/O devices on the IOD 107. Since the I/O modules within the IOD 107 are capable of enforcing quality-of-service guarantees, this provides a way to divide I/O capacity between virtual machines, and make sure that a virtual machine gets no more than a predetermined share of I/O resources, thus preventing one virtual machine from limiting the I/O resources available to other virtual machines.


IODs may be used to offload the network switching from virtual machine hypervisors. Typically, virtual machines communicate over a virtual switch, which is implemented within the hypervisor software on the server. External switching provided at IOD 107 may be used to enhance control or security. In certain embodiments, each virtual machine can be assigned its own dedicated vNIC 213 on IOD 107, and in this case all switching is performed externally.


The internal Ethernet I/O fabric 201 of an IOD may serve a dual purpose. First, it is an Ethernet switch, which provides communication between the servers and the I/O modules, such as vHBA module 205 and vNIC module 203. Second, as a switch, the internal Ethernet I/O fabric can provide direct communication between the servers. This communication may consist of standard TCP or UDP traffic. Furthermore, RC, iWARP, and other similar transport protocols, which are utilized for providing reliable high-performance communication between the servers and the I/O modules, can be used for server-to-server communication. This allows using high-performance communication protocols and libraries, such as Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), and Message Passing Interface (MPI), for server-to-server communication while using the vNIC and vHBA capabilities of the IOD at the same time.


The target channel adapter (TCA) is a device which connects one or more I/O modules (e.g., vHBA module 205, vNIC module 203) of the IOD 107 to the Ethernet I/O fabric, such as Internal Ethernet Fabric 201. In certain embodiments, each I/O module contains a TCA as shown in FIGS. 3 and 4. A TCA can be a discrete device, or its functionality can be integrated into another device of the I/O module. A TCA may recognize and terminate various transport protocols (iWARP, RC, etc.).


In certain embodiments, when a server transmits a data packet to an I/O module, the corresponding TCA removes the link and transport protocol headers (e.g., Ethernet link headers, iWARP/TCP/IP, RC, or other transport headers) from the packet and then forwards the packet with an internal header to the next stage of the I/O module, such as the vNIC network processor or the vHBA virtualization logic, which are further described below in the context of FIGS. 3 and 4. When an I/O module sends a data packet to a server, the TCA is responsible for adding the appropriate link and transport protocol headers similar to the ones described above. An internal I/O module protocol (e.g., SPI-4, which is one example of the System Packet Interface protocols; other examples may be used in certain embodiments) may be implemented between the TCA and the network processor or virtualization logic.
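A minimal sketch of this header handling is shown below, assuming placeholder header sizes and an invented two-field internal header; the patent does not define these formats.

```python
# Hypothetical sketch of the TCA behavior described above: on the
# server-to-I/O-module path the link/transport headers are stripped and an
# internal header is prepended; on the reverse path the headers are restored.
# Header sizes and field layouts are placeholders, not the patent's format.
import struct

ETH_HDR_LEN = 14            # assumed Ethernet link header length
TRANSPORT_HDR_LEN = 28      # placeholder length for the RC/iWARP transport header
INTERNAL_HDR_FMT = "!HB"    # placeholder internal header: virtual device id + opcode


def tca_ingress(frame: bytes, vdev_id: int, opcode: int) -> bytes:
    """Server -> I/O module: strip link/transport headers, add internal header."""
    payload = frame[ETH_HDR_LEN + TRANSPORT_HDR_LEN:]
    internal_hdr = struct.pack(INTERNAL_HDR_FMT, vdev_id, opcode)
    return internal_hdr + payload


def tca_egress(internal_pkt: bytes, eth_hdr: bytes, transport_hdr: bytes) -> bytes:
    """I/O module -> server: strip internal header, add link/transport headers."""
    payload = internal_pkt[struct.calcsize(INTERNAL_HDR_FMT):]
    return eth_hdr + transport_hdr + payload


if __name__ == "__main__":
    frame = bytes(ETH_HDR_LEN) + bytes(TRANSPORT_HDR_LEN) + b"IOCB payload"
    inner = tca_ingress(frame, vdev_id=5, opcode=1)
    assert inner.endswith(b"IOCB payload")
    outer = tca_egress(inner, bytes(ETH_HDR_LEN), bytes(TRANSPORT_HDR_LEN))
    print(len(inner), len(outer))
```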



FIG. 3 is a diagrammatic representation showing an example of a vNIC I/O module 203. In addition to TCA 301 and the Ethernet physical layer (PHY) component 315, which provides physical connectivity to the Ethernet network 109, the module 203 may include a buffer manager 305, a queue manager 307, classifier logic 309, vNIC-to-vNIC switching logic 311, and learning logic 313. These elements may be implemented in a network processor 303 or in hardware, such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). The network processor 303 may also include the TCA functionality within the same device.


According to various embodiments, the classifier logic 309 includes header parsing and lookup logic configured to identify information, such as packet destination, priority, and TCP port. The classifier logic 309 can be used to filter incoming data or apply traffic engineering policies. In some instances, the classifier logic 309 can be used to block packets in order to implement a firewall. In certain embodiments, the buffer manager 305 manages data in memory. In the same or other embodiments, the queue manager 307 manages packet queues and performs traffic engineering tasks, such as traffic “policing” (i.e., enforcing committed and peak data rates available for each vNIC), shaping, and prioritizing based on results from the classifier logic 309 and configuration information. The queue manager 307 may also perform load-balancing operations by distributing incoming traffic across multiple vNICs.
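The patent does not specify how committed and peak rates are enforced; a two-rate token-bucket policer, as sketched below, is one conventional way such per-vNIC policing could be implemented.

```python
# Illustrative two-rate token-bucket policer for per-vNIC committed/peak
# rate enforcement. This is a conventional technique, not the patent's
# stated algorithm; all names and parameters are assumptions.
import time


class TwoRatePolicer:
    def __init__(self, committed_bps: float, peak_bps: float, burst_bytes: float):
        self.cir = committed_bps / 8.0   # committed rate, bytes/s
        self.pir = peak_bps / 8.0        # peak rate, bytes/s
        self.burst = burst_bytes
        self.c_tokens = burst_bytes      # committed-rate bucket
        self.p_tokens = burst_bytes      # peak-rate bucket
        self.last = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        elapsed = now - self.last
        self.last = now
        self.c_tokens = min(self.burst, self.c_tokens + elapsed * self.cir)
        self.p_tokens = min(self.burst, self.p_tokens + elapsed * self.pir)

    def classify(self, pkt_len: int) -> str:
        """Return 'conform', 'exceed' (above CIR, below PIR), or 'violate'."""
        self._refill()
        if self.p_tokens < pkt_len:
            return "violate"             # above peak rate: drop or mark down
        self.p_tokens -= pkt_len
        if self.c_tokens < pkt_len:
            return "exceed"              # above committed rate: deprioritize
        self.c_tokens -= pkt_len
        return "conform"


if __name__ == "__main__":
    policer = TwoRatePolicer(committed_bps=1e9, peak_bps=2e9, burst_bytes=64_000)
    print([policer.classify(1500) for _ in range(5)])
```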


Multiple vNICs may terminate on the same vNIC I/O module Ethernet port. Typically, different vNICs have distinct MAC addresses that are visible on the Ethernet network. As a result, services that rely on MAC addresses, such as Dynamic Host Configuration Protocol (DHCP), are not disrupted.


The vNIC-to-vNIC switching logic 311 performs packet forwarding between vNICs terminating on the same Ethernet port. It maintains a table of corresponding vNICs and MAC addresses and performs packet forwarding based on MAC addresses using a process similar to the one used in regular Ethernet switches. For example, if vNIC_1 is linked to address MAC_1, and a data packet having MAC_1 as its destination address is received on vNIC_2, which terminates on the same Ethernet port as vNIC_1, then the vNIC-to-vNIC switching logic 311 forwards this packet to vNIC_1. This functionality enables using an IOD with external switches that do not forward packets to the same link that they came from; in this case, the switching is performed within the I/O modules themselves.
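As an illustration of this forwarding decision, the sketch below (hypothetical names) delivers a packet locally when the destination MAC belongs to another vNIC on the same port and otherwise sends it out the external port:

```python
# Minimal sketch of the vNIC-to-vNIC switching decision described above.
# Names and the table layout are illustrative only.
from typing import Dict


class VnicSwitch:
    def __init__(self) -> None:
        self.mac_to_vnic: Dict[str, str] = {}   # MAC -> vNIC on this Ethernet port

    def register(self, vnic: str, mac: str) -> None:
        self.mac_to_vnic[mac.lower()] = vnic

    def forward(self, src_vnic: str, dst_mac: str) -> str:
        local = self.mac_to_vnic.get(dst_mac.lower())
        if local is not None and local != src_vnic:
            return f"deliver locally to {local}"
        return "transmit on external Ethernet port"


if __name__ == "__main__":
    sw = VnicSwitch()
    sw.register("vNIC_1", "02:00:00:00:00:01")
    sw.register("vNIC_2", "02:00:00:00:00:02")
    # A packet received on vNIC_2 destined for vNIC_1's MAC is switched locally.
    print(sw.forward("vNIC_2", "02:00:00:00:00:01"))
    print(sw.forward("vNIC_1", "02:00:00:00:00:99"))
```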


In certain embodiments, the vNIC I/O module 203 also has learning logic 313, which is used to establish a mapping of vNICs created by virtualization software (on the servers) to vNICs of the IOD 107. When a server is virtualized and one or more virtual machines are created on the server, each virtual machine can be associated with one or more vNICs, which are implemented by the server virtualization software. These vNICs are also referred to as Virtual Machine vNICs or simply VM vNICs. Each VM vNIC has a MAC address, which is assigned by the virtualization software. One or more VM vNICs may be bridged to a single vNIC of the IOD 107 using a software virtual switch, which is implemented by the virtualization software. In these embodiments, the traffic of multiple VM vNICs may appear on the same vNIC of the IOD 107, and this traffic may consist of packets with different source MAC addresses for the different VM vNICs. The vNIC I/O module 203 needs to establish a mapping between VM vNIC MAC addresses and the corresponding vNICs of the IOD 107. This mapping enables directing incoming traffic to the correct vNIC of the IOD 107. For example, if a packet with destination MAC address MAC_1 arrives at the I/O module Ethernet port, and MAC_1 is the address of VM vNIC_1, then the I/O module needs to know which vNIC of the IOD 107 should receive this packet. In certain embodiments, a lookup is performed in a mapping table to establish this IOD vNIC to VM vNIC correspondence.


The mapping table may be populated by the learning logic 313 as packets arrive from the servers. In certain embodiments, the learning logic examines the source MAC addresses of the packets arriving on the different vNICs of the IOD 107 and populates the mapping table according to the observed source MAC addresses. For example, if a packet with source MAC address MAC_1 arrives on vNIC_5 of the IOD 107, then the learning logic 313 may insert an entry in the mapping table specifying that MAC_1 belongs to vNIC_5. Later, when a packet with destination address MAC_1 arrives from the network, the I/O module knows from the mapping table that the packet should be directed to vNIC_5.
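A minimal sketch of this learning behavior, with an assumed in-memory table, is shown below:

```python
# Sketch of the learning logic described above: the source MAC of each packet
# arriving from a server on a given IOD vNIC is recorded, so that later packets
# arriving from the network with that destination MAC can be directed to the
# right vNIC. Structure and names are illustrative only.
from typing import Dict, Optional


class LearningLogic:
    def __init__(self) -> None:
        self.mac_to_iod_vnic: Dict[str, str] = {}

    def learn_from_server_packet(self, iod_vnic: str, src_mac: str) -> None:
        """Called on the server-to-network path (e.g., MAC_1 seen on vNIC_5)."""
        self.mac_to_iod_vnic[src_mac.lower()] = iod_vnic

    def lookup_for_network_packet(self, dst_mac: str) -> Optional[str]:
        """Called on the network-to-server path to pick the IOD vNIC."""
        return self.mac_to_iod_vnic.get(dst_mac.lower())


if __name__ == "__main__":
    learner = LearningLogic()
    learner.learn_from_server_packet("vNIC_5", "02:aa:bb:cc:dd:01")   # VM vNIC's MAC
    print(learner.lookup_for_network_packet("02:AA:BB:CC:DD:01"))     # -> vNIC_5
```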


In certain embodiments, data transfer between servers and their assigned vNICs is flow controlled per individual vNIC. The flow control may be provided by a transport protocol used for communication between servers and their remote I/O resources. When standard internal NICs are overwhelmed with transmitted traffic, a transmit queue becomes filled to capacity and the driver or application issuing the packets determines that no additional packets can be sent. Therefore, in certain embodiments, the flow control is achieved all the way to the application generating the traffic. This approach may be more desirable than dropping packets that cannot be transmitted. vNICs of the IOD 107 may be configured to provide similar functionality. Since a reliable transport protocol is used between the servers and the IOD 107, the vNIC driver on the server can queue packets until they are consumed by the remote vNIC I/O module. If the queue is full, the driver may notify the sender that it has run out of transmit buffer space in the same fashion that a local NIC driver performs this task.



FIG. 4 is a diagrammatic representation showing an example of a vHBA I/O module 205. In addition to the TCA 301, the module 205 may include a buffer manager 403, a queue manager 405, and a Fibre Channel HBA device 407 to be virtualized. These elements may be implemented in a network processor or in hardware, such as an FPGA or ASIC, which may also include the TCA functionality within the same device.


According to various embodiments, the server sends an I/O control block (IOCB) containing a command (e.g. a SCSI command) as well as various I/O control information, such as buffer information for data to be read or written. This IOCB propagates to the HBA according to the flow protocols described below. The two basic commands are the ones for reading data from and writing data to a target storage device.
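For illustration, the sketch below models an IOCB as a command plus a scatter-gather list of buffer descriptors; the patent does not define a concrete IOCB layout, so all fields here are assumptions.

```python
# Illustrative-only model of an I/O control block (IOCB): a command plus
# buffer descriptors for the data to be read or written. The field set is
# an assumption, not a format defined by the patent.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class ScsiOp(Enum):
    READ = "SCSI Read"
    WRITE = "SCSI Write"


@dataclass
class BufferSegment:
    address: int     # server memory address
    length: int      # bytes


@dataclass
class IOCB:
    opcode: ScsiOp
    lun: int
    lba: int                                                       # target block address
    segments: List[BufferSegment] = field(default_factory=list)    # scatter-gather list

    def total_length(self) -> int:
        return sum(seg.length for seg in self.segments)


if __name__ == "__main__":
    write = IOCB(ScsiOp.WRITE, lun=0, lba=2048,
                 segments=[BufferSegment(0x10000000, 4096),
                           BufferSegment(0x10002000, 4096)])
    print(write.opcode.value, write.total_length(), "bytes")
```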


The vHBA I/O module 205 may provide N_Port ID virtualization (NPIV) functionality. NPIV enables multiple Fibre Channel initiators to share a single physical port. For example, each vHBA can be viewed as a separate initiator on the port. In this case, each vHBA that terminates on the port appears with its own world-wide name (WWN) on the Fibre Channel fabric. This approach makes management of vHBAs similar to other HBAs, including functions like Fibre Channel zoning configuration.


In certain embodiments, the vHBA buffer manager 403 is responsible for managing buffering of data when it is transferred from the servers to the Fibre Channel HBA 407, and vice versa. The queue manager 405 may be used to enforce quality-of-service properties on the data transfer. For example, the queue manager 405 may modulate the transfer of data to and from the servers per vHBA to comply with the committed and peak bandwidth configurations for each vHBA. In certain embodiments, data transfers are initiated by the vHBA I/O module 205 using RDMA Read operations for reading data from server memory and RDMA Write operations for writing data to server memory, which is described in more detail below. Servers typically do not initiate data transfers. Instead, the servers are configured to send commands. As such, quality-of-service guarantees may be provided at the granularity of individual vHBAs, which is not available in other conventional approaches, such as encapsulation of Fibre Channel over Ethernet (FCoE). FCoE does not provide throttling of an individual flow of HBA traffic since there are no FCoE or Ethernet flow control mechanisms which operate at the granularity of individual HBAs. FCoE only enables flow control of an entire link or an entire traffic class, which is an inherent limitation of FCoE.


In certain embodiments, a vHBA is configured to boot a server from an image stored in a storage device on the Fibre Channel network. For example, software residing on flash memory of the server, such as the expansion memory on the Ethernet NIC of the server, may be used for this purpose. When a server boots, it may execute the software residing in this memory. This software, in turn, discovers a boot vHBA, which is assigned to the server on an IOD, and proceeds to boot the server from a storage device, which is assigned to the server as its boot device. The assignment of servers to boot devices can be configured through the IOD management system. Such functionality enables changing the server's purpose, thus achieving the decoupling of both the I/O profile and the boot image from the server. In other words, the server's entire identity can be changed dynamically, which includes both its I/O connectivity and its operating system.


It should be understood that Fibre Channel is just one example of a storage connectivity technology that can be used for the described systems and methods. Other storage connectivity technologies include Internet Small Computer System Interface (iSCSI), Serial ATA (SATA), and Serial Attached SCSI (SAS).



FIG. 5A illustrates one example of a protocol stack 500, which can be used for virtual I/O data transfers between servers and their remote I/O devices. In certain embodiments, a virtual I/O protocol 509, such as vNIC and vHBA, runs on top of the iWARP protocol 507, which provides a remote direct memory access (RDMA) capability. This enables an I/O module within an IOD to place data directly into a memory buffer on the server. iWARP 507 uses the services of the TCP/IP protocol 505 to achieve reliable transmission and back-off in cases of packet loss. Support for slow start and back-off as provided by TCP/IP 505 enables using lossy Ethernet fabrics for connecting an IOD 107 to servers 101-103. TCP/IP 505 can also be used for lossless Ethernet fabrics.



FIG. 5B shows another example of a protocol stack 510, in which the virtual I/O protocol runs on top of the Reliable Connection (RC) protocol 511. The RC protocol 511 as defined by the InfiniBand architecture may run on Ethernet in certain embodiments. The RC protocol 511 provides RDMA. However, unlike iWARP in the example described above, the RC protocol 511 may not be positioned on top of TCP since the RC protocol 511 already provides reliable transmission. In certain embodiments, the RC protocol 511 does not provide back-off and slow start for handling congestion conditions, which may make it more applicable for lossless Ethernet fabrics, which are flow controlled. Implementation of the RC protocol 511 may be less complex than a combination of the iWARP 507 and TCP 505 protocols illustrated in FIG. 5A, particularly when a hardware-based TCA is used. Also, the RC protocol 511 does not incur certain overheads, which may be associated with TCP protocols, such as the increased latency resulting from slow start.



FIGS. 6A and 6B are diagrammatic representations of virtual I/O server driver stack examples using a hardware-based implementation of the transport protocol (in FIG. 6A) and a software-based implementation of the transport protocol (in FIG. 6B). From the perspective of the operating system running on the server, storage devices on the Fibre Channel network connected to the IOD vHBA module appear as block devices. In certain embodiments, this may resemble a case where the Fibre Channel network is connected to a local HBA device rather than to a remote vHBA device. Similarly, vNICs may appear as network devices that resemble local physical NICs. Virtual I/O device drivers may perform similar to physical I/O device drivers from the perspective of the upper OS layers. The difference between virtual I/O devices and physical I/O devices may be in their communication with the hardware that they control. In the case of physical I/O devices, the driver may control a local device over a local bus, such as PCI Express. In the case of virtual I/O devices provided by an IOD, the driver communicates with a remote I/O module over a reliable transport protocol running over a lossless or lossy Ethernet fabric.



FIG. 6A shows a driver stack example where a server contains a NIC 601 with hardware support for the reliable transport protocol, such as RC, iWARP, or similar. This approach may deliver high levels of performance since it can provide hardware segmentation and reassembly between packets and larger messages. In certain embodiments, a hardware-based implementation of the transport protocol can also provide zero-copy transfers during receiving and transmitting leading to low CPU overhead.



FIG. 6B shows a driver stack example where a server contains a NIC 611 without hardware support for the reliable transport protocol. Instead, this support is provided in a software layer 613. This approach may lead to higher overhead and lower performance than the hardware approach of FIG. 6A. However, a software-based implementation of the transport protocol may provide wider applicability for the IOD device since hardware reconfiguration may not be needed (e.g., opening servers and installing specialized NIC devices, or replacing servers with ones that have such NIC devices pre-installed). Thus, this approach may be used where the cost of hardware reconfiguration outweighs the lower performance.


Description of the elements (601-620) illustrated in FIGS. 6A and 6B is provided throughout the detailed description.


A NIC driver typically includes a packet transmit path and a packet receive path. The packet transmit path is activated whenever the upper level software passes a packet to the driver. The packet receive path is activated when the NIC receives a packet from the network, and it needs to forward the packet to the upper layers of the network stack.


In certain embodiments, a vNIC driver implements the transmit and receive paths. Packets to be transmitted may be queued in a transmit queue. The packets are sent to the remote vNIC I/O module using the reliable send operation (such as RC Send) of the transport protocol. The vNIC I/O module will then send the packet to the external Ethernet network. Once the send is complete, the packet is de-queued from the transmit queue. Since the transport protocol is reliable, the completion of the send operation signifies that the vNIC I/O module acknowledged that the packet was received. For the vNIC receive path, the driver uses the receive operation (such as RC Receive) of the transport protocol. The receive operation is asynchronous. When the vNIC I/O module receives a packet from the external Ethernet network, and the packet needs to be sent to the server, the I/O module performs a send operation, which results in a completion of a receive operation on the server. The driver is notified of the completion, and it then processes the new packet by forwarding it to the network stack.
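The sketch below illustrates the transmit-path behavior just described, assuming a reliable RC-Send-like primitive; a full queue produces back-pressure rather than packet drops. It is illustrative only, not driver code from the patent.

```python
# Rough sketch of the vNIC driver transmit path described above. Packets stay
# queued until the transport-level send completion arrives; a full queue
# pushes back on the sender. All names are hypothetical.
from collections import deque
from typing import Deque


class VnicTxPath:
    def __init__(self, depth: int) -> None:
        self.queue: Deque[bytes] = deque()
        self.depth = depth

    def transmit(self, packet: bytes) -> bool:
        """Return False ('queue full') so the stack can throttle the application."""
        if len(self.queue) >= self.depth:
            return False                       # back-pressure instead of dropping
        self.queue.append(packet)
        self._reliable_send(packet)            # e.g., an RC Send to the vNIC I/O module
        return True

    def on_send_completion(self) -> None:
        """Completion means the remote vNIC I/O module acknowledged the packet."""
        self.queue.popleft()

    def _reliable_send(self, packet: bytes) -> None:
        pass                                   # placeholder for the transport operation


if __name__ == "__main__":
    tx = VnicTxPath(depth=2)
    print(tx.transmit(b"pkt1"), tx.transmit(b"pkt2"), tx.transmit(b"pkt3"))
    tx.on_send_completion()
    print(tx.transmit(b"pkt3"))
```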



FIG. 7 is a diagrammatic representation showing an example of a write flow where a server is performing a write operation to a device on the Fibre Channel network connected to a vHBA I/O module. The process involves one of the servers 101-103 (e.g., server 101), a target channel adapter 301, virtualization logic 401, and an HBA 407. The Fibre Channel target device in this example may be a disk drive.


The write flow starts with a server 101 sending an I/O control block (IOCB) to the TCA 301 (arrow 701) according to certain embodiments. For example, an IOCB may be sent by an RC Send command with one or more IOCBs. A wide variety of IOCB formats are available. In many embodiments, an IOCB includes a buffer memory address, and a buffer length. Furthermore, it may include a write command, such as a SCSI Write. Multiple buffer address and length values may be provided in the event that the buffer is fragmented and needs to be represented as a scatter-gather list. Furthermore, a queue of the vHBA I/O module may be configured to store 32, 64, 128, or any other number of outstanding commands at one time. Once the IOCB reaches the target channel adapter 301, the adapter may reply with an acknowledgement and pass the command to the virtualization logic 401 for processing using an internal protocol (e.g., with Ethernet headers removed).


According to various embodiments, the virtualization logic 401 then requests the data to be written from the server memory, for example, by sending an RDMA Read Request 703 back to the server 101. The server 101 replies and initiates a data transfer associated with RDMA Read responses 705 in FIG. 7. When the first RDMA Read response reaches the virtualization logic 401, the logic updates a pointer of the corresponding HBA 407. The updated pointer indicates that there is a new request in the IOCB queue. The HBA 407 proceeds with requesting an IOCB read 709 from the virtualization logic 401. The IOCB data 711 is forwarded to the HBA, which triggers a request for disk write data 713 from the HBA 407. The data is then transferred (“written”) from the virtualization logic 401 memory to the HBA 407 as disk write data 715. The sequence of requesting and transferring disk write data 713 and 715 may be repeated. Finally, when all data is transferred, the HBA 407 sends a completion message, such as a response IOCB 717, to the virtualization logic 401, which is then forwarded back to the server 101 as Send Response IOCB 719 using, for example, an RC Send operation. The server 101 may reply back to the target channel adapter 301 with an acknowledgement. Finally, the virtualization logic 401 updates the pointer to indicate that the response queue entry can be reused 721.
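The following compressed trace, for illustration only, walks through the write-flow exchange using the reference numerals from the description above; it prints the steps rather than implementing the protocol.

```python
# A compressed, illustrative trace of the vHBA write flow of FIG. 7. The step
# labels reuse the reference numerals from the text; these functions only
# print the exchange and do not implement the actual protocol.
def vhba_write_flow(data: bytes) -> None:
    print("701: server sends IOCB (e.g., SCSI Write plus buffer list) via RC Send")
    print("703: virtualization logic issues an RDMA Read Request for the buffer")
    print(f"705: server returns {len(data)} bytes in RDMA Read responses")
    print("     virtualization logic updates the HBA pointer (new IOCB queued)")
    print("709/711: HBA fetches the IOCB from virtualization-logic memory")
    print("713/715: HBA requests and receives disk write data (repeated as needed)")
    print("717: HBA posts a response IOCB (completion) to the virtualization logic")
    print("719: completion is forwarded to the server via RC Send")
    print("721: response queue entry is marked reusable")


if __name__ == "__main__":
    vhba_write_flow(b"\x00" * 8192)
```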


In general, the write flow may be considered as a combination of two protocols. The first protocol is one between the servers 101-103 and the virtualization logic 401, which includes the target channel adapter 301. The second protocol is between the virtualization logic 401 and the HBA 407.



FIG. 8 is a diagrammatic representation showing one example of a read flow where a server is performing a read operation from a device on the Fibre Channel network connected to a vHBA I/O module. Similarly to the write flow described above, the process involves one of the servers 101-103 (e.g., server 101), a target channel adapter 301, virtualization logic 401, and one of the HBAs 407.


According to various embodiments, the read flow starts with the server 101 sending an I/O control block (IOCB) using an RC Send operation to the TCA 301. In certain embodiments, an IOCB includes a buffer memory address and a buffer length. In addition, it may include a read command, such as a SCSI Read. The buffer information specifies the memory area on the server where the read data should be placed. Once the IOCB reaches the target channel adapter 301, the adapter may reply with an acknowledgement and pass the command to the virtualization logic 401 for processing.


The virtualization logic 401 then updates the pointer of the HBA 407 to indicate a new IOCB on the request queue. The HBA 407 requests the IOCB from the virtualization logic 401 by sending an IOCB request command 805. The IOCB is then forwarded 807 to the HBA 407. The data read from the disk is then transferred from the HBA 407 to the memory of the virtualization logic 401 in a series of transfers 809. The virtualization logic 401 fetches the data from the memory and sends it to the server as RDMA Write commands 811. The server may respond with an acknowledgement after receiving the last data packet. Once all data has been read, the HBA 407 sends a completion message, shown as Response IOCB 813, to the virtualization logic 401. This response is then forwarded to the server 101. Finally, the virtualization logic 401 updates the response queue index 817, so that the response queue entry can be reused.


In addition to the RC protocol referenced above, any other RDMA protocol applicable over Ethernet fabrics, such as iWARP, may be used.


In addition to the buffered approaches described above for the vHBA write and read flows, a cut-through approach may be implemented in certain embodiments. With a cut-through approach, RDMA Read data arriving at the virtualization logic 401 from the server 101 is sent immediately to the HBA 407 without buffering. Similarly, data arriving at the virtualization logic 401 from the HBA 407 is sent immediately to the server 101 by RDMA Write without buffering.
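The difference between the buffered and cut-through approaches can be sketched as follows, with the callables standing in for the real RDMA and HBA transfer machinery:

```python
# Sketch contrasting the buffered and cut-through data paths described above
# for the vHBA virtualization logic. Names are illustrative; the callables
# stand in for the real RDMA/HBA transfer machinery.
from typing import Callable, List


def buffered_path(chunks: List[bytes], to_hba: Callable[[bytes], None]) -> None:
    """Stage all RDMA Read data in virtualization-logic memory, then forward."""
    staging = bytearray()
    for chunk in chunks:
        staging.extend(chunk)        # buffer first...
    to_hba(bytes(staging))           # ...then hand the whole payload to the HBA


def cut_through_path(chunks: List[bytes], to_hba: Callable[[bytes], None]) -> None:
    """Forward each arriving chunk immediately, without staging."""
    for chunk in chunks:
        to_hba(chunk)


if __name__ == "__main__":
    chunks = [b"a" * 4096, b"b" * 4096]
    buffered_path(chunks, lambda d: print("HBA got", len(d), "bytes"))
    cut_through_path(chunks, lambda d: print("HBA got", len(d), "bytes"))
```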



FIG. 9 is a diagrammatic representation showing one example of a management protocol over an Ethernet fabric. This protocol may be used to enable servers to discover their remote I/O resources on the Ethernet I/O fabric and to establish communication with the relevant IODs and I/O module TCAs. The virtual I/O driver software running on the servers 101-103 allows the servers to transmit multicast discovery packets that initiate the management (or discovery) process described herein. The management module 207 in each of the IODs 107 executes a session manager 901 and a discovery service 903. The ultimate targets for communication are one or more TCAs, such as TCA 301 shown in FIG. 9, since they represent the I/O module end point.


According to certain embodiments, the multicast discovery packets are sent (arrow 907) to a pre-established (and known to the server) multicast address, i.e., that of the discovery service 903, using Ethernet layer 2 multicast. All IODs on the same Ethernet network are configured to listen to multicast discovery packets, which are sent to the address known to the server. The discovery packet may contain server information (e.g., name, OS version, MAC address, firmware version, and other information). Any IOD that receives this packet creates a server object within the information model with the attributes contained in the discovery packet. If a server profile is present for this physical server 101 on the IOD, the discovery service 903 responds to the server 101 with a unicast packet that contains information about the IOD (arrow 909). The server 101 then uses the information contained in the unicast packet to establish a connection with the session manager 901 of the IOD 107 over a reliable communication channel (arrow 911). Once the session has been established, the session manager 901 uploads to the server information on the virtual I/O resources, such as vNICs and vHBAs, allocated to the server (arrow 913) and information on how to reach these resources.
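A hypothetical sketch of this discovery handshake is shown below; the message fields, addresses, and profile structure are assumptions used only to make the sequence concrete.

```python
# Hypothetical sketch of the discovery handshake of FIG. 9: the server
# multicasts a discovery packet, the IOD's discovery service answers with a
# unicast packet if a server profile exists, and the session manager then
# uploads the list of assigned virtual I/O devices. All fields are assumptions.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class DiscoveryPacket:          # multicast, server -> all IODs (arrow 907)
    server_name: str
    mac: str
    os_version: str


@dataclass
class DiscoveryReply:           # unicast, IOD -> server (arrow 909)
    iod_name: str
    session_manager_addr: str


class IodDiscoveryService:
    def __init__(self, iod_name: str, profiles: Dict[str, List[str]]) -> None:
        self.iod_name = iod_name
        self.profiles = profiles            # server name -> assigned virtual devices

    def handle_discovery(self, pkt: DiscoveryPacket) -> Optional[DiscoveryReply]:
        # Every IOD records a server object; only IODs holding a profile reply.
        if pkt.server_name not in self.profiles:
            return None
        return DiscoveryReply(self.iod_name, "iod-mgmt:10.0.0.1")   # placeholder address

    def upload_resources(self, server_name: str) -> List[str]:
        # Performed by the session manager once the reliable connection is up (arrow 913).
        return self.profiles[server_name]


if __name__ == "__main__":
    iod = IodDiscoveryService("IOD-1", {"server101": ["vNIC_1", "vHBA_1"]})
    reply = iod.handle_discovery(DiscoveryPacket("server101", "02:00:00:00:00:01", "linux"))
    if reply is not None:
        print(reply.iod_name, "->", iod.upload_resources("server101"))
```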



FIG. 9 also illustrates server 101 using the information obtained from the IOD to establish a connection with a target channel adapter 301 (arrow 915), and transfer data (arrow 919) in accordance with the read and write protocols described above. The server obtains the address of TCA 301 from the session manager during the uploading of the information about virtual I/O devices assigned to the server (arrow 913). TCA 301 resides on an I/O module which includes virtual I/O devices which are assigned to the server.



FIG. 10 illustrates a technique for virtualizing I/O resources in accordance with certain embodiments. The process 1000 may start with connecting multiple Ethernet network interfaces of the IOD to multiple servers (block 1002). The connection may be performed using a lossy Ethernet fabric or a lossless Ethernet fabric. According to various embodiments, the servers include NICs in order to connect to the Ethernet network interfaces using an Ethernet I/O fabric. The IOD may include an Ethernet switch in order to connect to the servers over the Ethernet fabric. The Ethernet switch may be a lossless Ethernet switch or a lossy Ethernet switch.


An IOD may contain one or more I/O modules of the same type or a combination of I/O modules of different types, such as a vHBA I/O module and a vNIC I/O module. Virtual I/O devices, such as vNICs and vHBAs, are implemented in the respective I/O modules. The process 1000 may continue with servers being associated with vNICs and/or vHBAs of the IOD (block 1004). Multiple servers may have virtual I/O resources on the same I/O modules or on different I/O modules. The I/O modules may be responsible for enforcing quality-of-service guarantees, such as committed and peak data rates, for the virtual I/O devices.


According to various embodiments, the IOD can provide flexible termination points for the I/O resources assigned to servers. The IOD may be connected to multiple Ethernet networks and/or Fibre Channel networks. In certain embodiments, the process 1000 includes operation 1006 for connecting multiple output ports of the IOD to multiple external devices. Connection of a server to one or more networks is performed by assigning the server's vNICs and/or vHBAs to these networks. Therefore, servers can be connected to different physical networks without a need for re-cabling or any other physical intervention.


The process 1000 may include operation 1008 for mapping vNICs and vHBAs to the output ports. Various embodiments of this operation are described above in the context of management protocol, learning, and other processes.
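For illustration, the sketch below strings the four operations of process 1000 together; the helper functions are placeholders, and the round-robin port mapping is an arbitrary stand-in for the mapping mechanisms described earlier.

```python
# High-level sketch of the process of FIG. 10 (blocks 1002-1008). The helpers
# are placeholders standing in for the operations described above; nothing
# here is taken from an actual IOD implementation.
from typing import Dict, List


def connect_ethernet_interfaces(servers: List[str]) -> None:                 # block 1002
    print("connected", len(servers), "servers over the Ethernet I/O fabric")


def associate_virtual_devices(servers: List[str]) -> Dict[str, List[str]]:   # block 1004
    return {s: [f"vNIC_{i}", f"vHBA_{i}"] for i, s in enumerate(servers, start=1)}


def connect_output_ports(ports: List[str]) -> None:                          # block 1006
    print("connected output ports:", ", ".join(ports))


def map_devices_to_ports(assignments: Dict[str, List[str]],
                         ports: List[str]) -> Dict[str, str]:                # block 1008
    # Arbitrary round-robin placeholder for the learning/management-driven mapping.
    devices = [d for devs in assignments.values() for d in devs]
    return {dev: ports[i % len(ports)] for i, dev in enumerate(devices)}


if __name__ == "__main__":
    servers = ["server101", "server102", "server103"]
    ports = ["eth-uplink-0", "fc-uplink-0"]
    connect_ethernet_interfaces(servers)
    assignments = associate_virtual_devices(servers)
    connect_output_ports(ports)
    print(map_devices_to_ports(assignments, ports))
```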


Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. It should be noted that there are many alternative ways of implementing the processes, systems and apparatus of the present invention. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein.

Claims
  • 1. A method to facilitate virtualizing I/O resources, the method comprising: communicatively connecting a plurality of Ethernet network interfaces to a plurality of servers via a first Ethernet network, each server of the plurality of servers comprising one or more processors and memory; communicatively connecting a first plurality of output ports of an I/O director to a first plurality of external devices over a second Ethernet network; mapping with a network processor a plurality of virtual network interface cards with the first plurality of output ports; maintaining a plurality of media access control (MAC) addresses for the plurality of servers with the network processor; and facilitating, with the I/O director, sharing of one or more common, shared Network Interface Controllers (NICs) and Host Bus Adapters (HBAs) to the plurality of servers at least in part by: associating the plurality of servers with a plurality of virtual network interface cards executed by the I/O director; communicating with each of the plurality of servers using a management protocol over the first Ethernet network; processing multicast discovery packets transmitted from virtual network interface card drivers of the plurality of servers according to the management protocol to facilitate discovery of remote I/O resources of the second Ethernet network by the plurality of servers, the multicast discovery packets comprising server identifiers and corresponding server attributes; creating server objects based at least in part on the multicast discovery packets; determining whether the I/O director retains corresponding server profiles; responding to the plurality of servers with unicast packets containing first information facilitating establishment of connections with a session manager of the I/O director; establishing with learning logic a mapping of the plurality of virtual network interface cards to the plurality of MAC addresses, at least in part by storing mapping specifications as data packets arrive from one or more servers of the plurality of servers; and in response to a connection being established with one server of the plurality of servers using the first information of the unicast packets, uploading, to the one server of the plurality of servers, second information on one or more of the plurality of virtual network interface cards and/or a plurality of virtual host bus adapters allocated to the one server.
  • 2. The method to facilitate virtualizing I/O resources of claim 1, where each server of the plurality of servers: establishes communication with the I/O director using the management protocol; and discovers the remote I/O resources on the second Ethernet network using the management protocol; wherein the mapping of the plurality of virtual network interface cards with the first plurality of output ports is performed dynamically.
  • 3. The method to facilitate virtualizing I/O resources of claim 1, further comprising: communicatively connecting a second plurality of output ports to a second plurality of external devices over a first Fibre Channel network, wherein the plurality of servers is associated with the plurality of virtual host bus adapters; and mapping the plurality of virtual host bus adapters with the second plurality of output ports, wherein the mapping of the plurality of virtual host bus adapters with the second plurality of output ports is performed dynamically.
  • 4. The method to facilitate virtualizing I/O resources of claim 1, further comprising: communicatively connecting a second plurality of output ports to a second plurality of external devices, wherein the plurality of servers is associated with a plurality of adapters selected from a group consisting of serial ATA (SATA) adapters, serial attached SCSI (SAS) adapters, RAID adapters, Fibre Channel host bus adapters, and iSCSI host bus adapters; and mapping the plurality of adapters with the second plurality of output ports.
  • 5. The method to facilitate virtualizing I/O resources of claim 1, further comprising: facilitating, via the I/O director, communication among servers within the plurality of servers, wherein the communication is performed using one or more protocols selected from a group consisting of Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), and Message Passing Interface (MPI).
  • 6. The method to facilitate virtualizing I/O resources of claim 1, further comprising: removing transport protocol headers from the data packets received from the plurality of servers; assigning internal headers to the data packets; and sending the data packets with the internal headers to the one or more of the plurality of virtual network interface cards.
  • 7. The method to facilitate virtualizing I/O resources of claim 1, wherein the storing the mapping specifications comprises populating a mapping table as the data packets arrive from the one or more servers of the plurality of servers, and the learning logic is configured to: examine the data packets received from the one or more servers of the plurality of servers on the one or more of the plurality of virtual network interface cards; and identify from the data packets one or more MAC addresses of the plurality of MAC addresses; wherein the establishing the mapping comprises mapping the identified one or more MAC addresses to the one or more of the plurality of virtual network interface cards.
  • 8. One or more non-transitory, machine-readable media having machine-readable instructions thereon, which, when executed by one or more processing devices, cause the one or more processing devices to: cause communicatively connecting of a plurality of Ethernet network interfaces to a plurality of servers via a first Ethernet network, each server of the plurality of servers comprising one or more processors and memory; cause communicatively connecting of a first plurality of output ports of an I/O director to a first plurality of external devices over a second Ethernet network; map a plurality of virtual network interface cards with the first plurality of output ports; maintain a plurality of media access control (MAC) addresses for the plurality of servers; and facilitate, with the I/O director, sharing of one or more common, shared Network Interface Controllers (NICs) and Host Bus Adapters (HBAs) to the plurality of servers at least in part by: associate the plurality of servers with the plurality of virtual network interface cards executed by the I/O director; causing communicating with each of the plurality of servers using a management protocol over the first Ethernet network; processing multicast discovery packets transmitted from virtual network interface card drivers of the plurality of servers according to the management protocol to facilitate discovery of remote I/O resources of the second Ethernet network by the plurality of servers, the multicast discovery packets comprising server identifiers and corresponding server attributes; creating server objects based at least in part on the multicast discovery packets; determining whether the I/O director retains corresponding server profiles; causing responding to the plurality of servers with unicast packets containing first information facilitating establishment of connections with a session manager of the I/O director; establishing with learning logic a mapping of the plurality of virtual network interface cards to the plurality of MAC addresses, at least in part by storing mapping specifications as data packets arrive from one or more servers of the plurality of servers; and in response to a connection being established with one server of the plurality of servers using the first information of the unicast packets, causing uploading, to the one server of the plurality of servers, of second information on one or more of the plurality of virtual network interface cards and/or a plurality of virtual host bus adapters allocated to the one server.
  • 9. The one or more non-transitory, machine-readable media of claim 8, where each server of the plurality of servers: establishes communication with the I/O director using the management protocol; and discovers the remote I/O resources on the second Ethernet network using the management protocol; wherein the mapping of the plurality of virtual network interface cards with the first plurality of output ports is performed dynamically.
  • 10. The one or more non-transitory, machine-readable media of claim 8, the one or more processing devices further to: cause communicatively connecting of a second plurality of output ports to a second plurality of external devices over a first Fibre Channel network, wherein the plurality of servers is associated with the plurality of virtual host bus adapters; and map the plurality of virtual host bus adapters with the second plurality of output ports, wherein the mapping of the plurality of virtual host bus adapters with the second plurality of output ports is performed dynamically.
  • 11. The one or more non-transitory, machine-readable media of claim 8, the one or more processing devices further to: cause communicatively connecting of a second plurality of output ports to a second plurality of external devices, wherein the plurality of servers is associated with a plurality of adapters selected from a group consisting of serial ATA (SATA) adapters, serial attached SCSI (SAS) adapters, RAID adapters, Fibre Channel host bus adapters, and iSCSI host bus adapters; and map the plurality of adapters with the second plurality of output ports.
  • 12. The one or more non-transitory, machine-readable media of claim 8, the one or more processing devices further to: facilitate, via the I/O director, communication among servers within the plurality of servers, wherein the communication is performed using one or more protocols selected from a group consisting of Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), and Message Passing Interface (MPI).
  • 13. The one or more non-transitory, machine-readable media of claim 8, the one or more processing devices further to: remove transport protocol headers from the data packets received from the plurality of servers;assign internal headers to the data packets; andcause sending of the data packets with the internal headers to the one or more of the plurality of virtual network interface cards.
  • 14. The one or more non-transitory, machine-readable media of claim 8, wherein the storing the mapping specifications comprises populating a mapping table as the data packets arrive from the one or more servers of the plurality of servers, and wherein the learning logic is configured to:
examine the data packets received from the one or more servers of the plurality of servers on the one or more of the plurality of virtual network interface cards; and
identify from the data packets one or more MAC addresses of the plurality of MAC addresses;
wherein the establishing the mapping comprises mapping the identified one or more MAC addresses to the one or more of the plurality of virtual network interface cards.
  • 15. A device to facilitate virtualizing I/O resources, the device comprising:
an I/O director communicatively connectable with a first Ethernet network and a second Ethernet network, the I/O director comprising:
a plurality of Ethernet network interfaces communicatively connectable to a plurality of servers over the first Ethernet network;
a first plurality of output ports communicatively connectable to a first plurality of external devices over the second Ethernet network; and
a network processor operable to map a plurality of virtual network interface cards with the first plurality of output ports, wherein the network processor is further operable to maintain a plurality of media access control (MAC) addresses for the plurality of servers;
the I/O director to facilitate sharing of one or more common, shared Network Interface Controllers (NICs) and Host Bus Adapters (HBAs) to the plurality of servers at least in part by:
associating the plurality of servers with the plurality of virtual network interface cards executed by the I/O director;
communicating with each of the plurality of servers using a management protocol over the first Ethernet network;
processing multicast discovery packets transmitted from virtual network interface card drivers of the plurality of servers according to the management protocol to facilitate discovery of remote I/O resources of the second Ethernet network by the plurality of servers, the multicast discovery packets comprising server identifiers and corresponding server attributes;
creating server objects based at least in part on the multicast discovery packets;
determining whether the I/O director retains corresponding server profiles;
responding to the plurality of servers with unicast packets containing first information facilitating establishment of connections with a session manager of the I/O director;
establishing with learning logic a mapping of the plurality of virtual network interface cards to the plurality of MAC addresses, at least in part by storing mapping specifications as data packets arrive from one or more servers of the plurality of servers; and
in response to a connection being established with one server of the plurality of servers using the first information of the unicast packets, uploading, to the one server of the plurality of servers, second information on one or more of the plurality of virtual network interface cards and/or a plurality of virtual host bus adapters allocated to the one server.
  • 16. The device to facilitate virtualizing I/O resources of claim 15, where each server of the plurality of servers:
establishes communication with the I/O director using the management protocol; and
discovers the remote I/O resources on the second Ethernet network using the management protocol;
wherein the mapping of the plurality of virtual network interface cards with the first plurality of output ports is performed dynamically.
  • 17. The device to facilitate virtualizing I/O resources of claim 15, the I/O director further to:
communicatively connect a second plurality of output ports to a second plurality of external devices over a first Fibre Channel network, wherein the plurality of servers is associated with the plurality of virtual host bus adapters; and
map the plurality of virtual host bus adapters with the second plurality of output ports, wherein the mapping of the plurality of virtual host bus adapters with the second plurality of output ports is performed dynamically.
  • 18. The device to facilitate virtualizing I/O resources of claim 15, the I/O director further to:
communicatively connect a second plurality of output ports to a second plurality of external devices, wherein the plurality of servers is associated with a plurality of adapters selected from a group consisting of serial ATA (SATA) adapters, serial attached SCSI (SAS) adapters, RAID adapters, Fibre Channel host bus adapters, and iSCSI host bus adapters; and
map the plurality of adapters with the second plurality of output ports.
  • 19. The device to facilitate virtualizing I/O resources of claim 15, the I/O director further to: facilitate communication among servers within the plurality of servers, wherein the communication is performed using one or more protocols selected from a group consisting of Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), and Message Passing Interface (MPI).
  • 20. The device to facilitate virtualizing I/O resources of claim 15, the I/O director further to:
remove transport protocol headers from the data packets received from the plurality of servers;
assign internal headers to the data packets; and
cause sending of the data packets with the internal headers to the one or more of the plurality of virtual network interface cards.
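
The discovery exchange recited in claims 8 and 15 (multicast discovery packets carrying server identifiers and attributes, creation of server objects, a check for retained server profiles, a unicast reply pointing to the session manager, and a later upload of allocated vNIC/vHBA information) can be pictured with a minimal sketch. The sketch below is illustrative only; the class and field names (DiscoveryPacket, ServerObject, IODirector, and so on) are hypothetical and are not drawn from the patent or any particular product.

```python
# Minimal sketch (hypothetical names) of the management-protocol discovery
# exchange described in claims 8 and 15.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class DiscoveryPacket:
    """Multicast discovery packet sent by a server's vNIC driver."""
    server_id: str
    attributes: Dict[str, str]


@dataclass
class ServerObject:
    """Server object the I/O director creates from a discovery packet."""
    server_id: str
    attributes: Dict[str, str]
    profile: Optional[dict] = None      # retained server profile, if any


@dataclass
class UnicastReply:
    """Unicast response carrying session-manager connection details."""
    server_id: str
    session_manager_addr: str
    session_manager_port: int


class IODirector:
    def __init__(self, session_manager_addr: str, session_manager_port: int):
        self.session_manager_addr = session_manager_addr
        self.session_manager_port = session_manager_port
        self.servers: Dict[str, ServerObject] = {}
        self.profiles: Dict[str, dict] = {}      # retained server profiles
        self.allocations: Dict[str, dict] = {}   # per-server vNIC/vHBA grants

    def handle_discovery(self, pkt: DiscoveryPacket) -> UnicastReply:
        # Create a server object, attach any retained profile, reply unicast.
        obj = ServerObject(pkt.server_id, pkt.attributes,
                           profile=self.profiles.get(pkt.server_id))
        self.servers[pkt.server_id] = obj
        return UnicastReply(pkt.server_id,
                            self.session_manager_addr,
                            self.session_manager_port)

    def on_session_established(self, server_id: str) -> dict:
        # Upload the vNIC/vHBA allocation once the session is up.
        return self.allocations.get(server_id, {"vnics": [], "vhbas": []})


if __name__ == "__main__":
    director = IODirector("10.0.0.1", 6000)
    director.allocations["srv-01"] = {"vnics": ["vnic0"], "vhbas": ["vhba0"]}

    reply = director.handle_discovery(
        DiscoveryPacket("srv-01", {"os": "linux", "ports": "2"}))
    print("session manager at", reply.session_manager_addr,
          reply.session_manager_port)
    print("allocated resources:", director.on_session_established("srv-01"))
```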
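
Claims 8, 14, and 15 describe learning logic that establishes a mapping of virtual network interface cards to MAC addresses by populating a mapping table as data packets arrive from the servers. A minimal sketch of that idea, assuming plain Ethernet frames and hypothetical names (MacLearningTable, learn, vnic_for_mac), might look like this:

```python
# Minimal sketch of MAC learning keyed by vNIC; not the patented implementation.
from collections import defaultdict
from typing import Dict, Optional, Set


class MacLearningTable:
    """Populates a vNIC -> MAC mapping table from frames seen on each vNIC."""

    def __init__(self) -> None:
        self.table: Dict[str, Set[str]] = defaultdict(set)

    def learn(self, vnic_id: str, frame: bytes) -> str:
        # Source MAC occupies bytes 6-11 of an Ethernet header.
        src_mac = ":".join(f"{b:02x}" for b in frame[6:12])
        self.table[vnic_id].add(src_mac)
        return src_mac

    def vnic_for_mac(self, mac: str) -> Optional[str]:
        # Reverse lookup used when forwarding traffic back toward a server.
        for vnic_id, macs in self.table.items():
            if mac in macs:
                return vnic_id
        return None


if __name__ == "__main__":
    table = MacLearningTable()
    # Destination MAC (6 bytes) + source MAC (6 bytes) + EtherType (2 bytes).
    frame = bytes.fromhex("ffffffffffff" "0a1b2c3d4e5f" "0800") + b"payload"
    learned = table.learn("vnic0", frame)
    print("learned", learned, "->", table.vnic_for_mac(learned))
```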
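
Claims 13 and 20 recite removing transport protocol headers from server data packets, assigning internal headers, and sending the result toward the target virtual network interface card. The fragment below is a rough sketch of that translation under the assumption that the transport header is a 14-byte Ethernet header and that the internal header is a hypothetical 3-byte (vNIC id, flags) tag; neither format is specified by the patent.

```python
# Rough sketch of header translation: strip the transport header, prepend an
# assumed internal header, and enqueue the packet for the target vNIC.
import struct
from collections import defaultdict
from typing import DefaultDict, List

ETH_HEADER_LEN = 14                  # dst MAC + src MAC + EtherType
INTERNAL_HDR = struct.Struct("!HB")  # hypothetical: vNIC id (16 bits) + flags

vnic_queues: DefaultDict[int, List[bytes]] = defaultdict(list)


def forward_to_vnic(packet: bytes, vnic_id: int, flags: int = 0) -> None:
    """Strip the transport header, assign an internal header, and enqueue."""
    payload = packet[ETH_HEADER_LEN:]             # remove Ethernet header
    internal = INTERNAL_HDR.pack(vnic_id, flags)  # assign internal header
    vnic_queues[vnic_id].append(internal + payload)


if __name__ == "__main__":
    eth_packet = bytes(14) + b"ip-payload"        # placeholder Ethernet frame
    forward_to_vnic(eth_packet, vnic_id=3)
    vnic_id, flags = INTERNAL_HDR.unpack_from(vnic_queues[3][0])
    print("vNIC", vnic_id, "flags", flags, "payload",
          vnic_queues[3][0][INTERNAL_HDR.size:])
```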
CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation application of U.S. application Ser. No. 12/544,744, filed on Aug. 20, 2009, titled “REMOTE SHARED SERVER PERIPHERALS OVER AN ETHERNET NETWORK FOR RESOURCE VIRTUALIZATION,” which is herein incorporated by reference in its entirety for all purposes.

US Referenced Citations (278)
Number Name Date Kind
5621913 Tuttle et al. Apr 1997 A
5754948 Metze May 1998 A
5815675 Steele et al. Sep 1998 A
5898815 Bluhm et al. Apr 1999 A
6003112 Tetrick Dec 1999 A
6069895 Ayandeh May 2000 A
6145028 Shank et al. Nov 2000 A
6157955 Narad et al. Dec 2000 A
6247086 Allingham Jun 2001 B1
6253334 Amdahl et al. Jun 2001 B1
6282647 Leung et al. Aug 2001 B1
6308282 Huang et al. Oct 2001 B1
6314525 Mahalingham et al. Nov 2001 B1
6331983 Haggerty et al. Dec 2001 B1
6343324 Hubis et al. Jan 2002 B1
6377992 Plaza et al. Apr 2002 B1
6393483 Latif et al. May 2002 B1
6401117 Narad et al. Jun 2002 B1
6418494 Shatas et al. Jul 2002 B1
6430191 Klausmeier et al. Aug 2002 B1
6466993 Bonola Oct 2002 B1
6470397 Shah et al. Oct 2002 B1
6578128 Arsenault et al. Jun 2003 B1
6594329 Susnow Jul 2003 B1
6628608 Lau et al. Sep 2003 B1
6708297 Bassel Mar 2004 B1
6725388 Susnow Apr 2004 B1
6757725 Frantz et al. Jun 2004 B1
6779064 McGowen et al. Aug 2004 B2
6804257 Benayoun et al. Oct 2004 B1
6807581 Starr et al. Oct 2004 B1
6823458 Lee et al. Nov 2004 B1
6898670 Nahum May 2005 B2
6931511 Weybrew et al. Aug 2005 B1
6937574 Delaney et al. Aug 2005 B1
6963946 Dwork et al. Nov 2005 B1
6970921 Wang et al. Nov 2005 B1
7011845 Kozbor et al. Mar 2006 B2
7046668 Pettey et al. May 2006 B2
7093265 Jantz et al. Aug 2006 B1
7096308 Main et al. Aug 2006 B2
7103064 Pettey et al. Sep 2006 B2
7103888 Clayton et al. Sep 2006 B1
7111084 Tan et al. Sep 2006 B2
7120728 Krakirian et al. Oct 2006 B2
7127445 Mogi et al. Oct 2006 B2
7143227 Maine Nov 2006 B2
7159046 Mulla et al. Jan 2007 B2
7171434 Ibrahim et al. Jan 2007 B2
7171495 Matters et al. Jan 2007 B2
7181211 Phan-Anh Feb 2007 B1
7203842 Kean Apr 2007 B2
7209439 Rawlins et al. Apr 2007 B2
7213246 van Rietschote et al. May 2007 B1
7219183 Pettey et al. May 2007 B2
7240098 Mansee Jul 2007 B1
7260661 Bury et al. Aug 2007 B2
7269168 Roy et al. Sep 2007 B2
7281030 Davis Oct 2007 B1
7281077 Woodral Oct 2007 B2
7281169 Golasky et al. Oct 2007 B2
7307948 Infante et al. Dec 2007 B2
7308551 Arndt et al. Dec 2007 B2
7334178 Aulagnier Feb 2008 B1
7345689 Janus et al. Mar 2008 B2
7346716 Bogin et al. Mar 2008 B2
7360017 Higaki et al. Apr 2008 B2
7366842 Acocella et al. Apr 2008 B1
7386637 Arndt et al. Jun 2008 B2
7395352 Lam et al. Jul 2008 B1
7412536 Oliver et al. Aug 2008 B2
7421710 Qi et al. Sep 2008 B2
7424529 Hubis Sep 2008 B2
7433300 Bennett et al. Oct 2008 B1
7457897 Lee et al. Nov 2008 B1
7457906 Pettey et al. Nov 2008 B2
7478173 Delco Jan 2009 B1
7493416 Pettey Feb 2009 B2
7502884 Shah et al. Mar 2009 B1
7509436 Rissmeyer Mar 2009 B1
7516252 Krithivas Apr 2009 B2
7602774 Sundaresan et al. Oct 2009 B1
7606260 Oguchi et al. Oct 2009 B2
7609723 Munguia Oct 2009 B2
7634650 Shah et al. Dec 2009 B1
7669000 Sharma et al. Feb 2010 B2
7711789 Jnagal et al. May 2010 B1
7733890 Droux et al. Jun 2010 B1
7782869 Chitlur Srinivasa Aug 2010 B1
7783788 Quinn et al. Aug 2010 B1
7792923 Kim Sep 2010 B2
7793298 Billau et al. Sep 2010 B2
7821973 McGee et al. Oct 2010 B2
7836332 Hara et al. Nov 2010 B2
7843907 Abou-Emara et al. Nov 2010 B1
7849153 Kim Dec 2010 B2
7865626 Hubis Jan 2011 B2
7870225 Kim Jan 2011 B2
7899928 Naik et al. Mar 2011 B1
7933993 Skinner Apr 2011 B1
7937447 Cohen et al. May 2011 B1
7941814 Okcu et al. May 2011 B1
8041875 Shah et al. Oct 2011 B1
8180872 Marinelli et al. May 2012 B1
8180949 Shah et al. May 2012 B1
8185664 Lok et al. May 2012 B1
8195854 Sihare Jun 2012 B1
8200871 Rangan et al. Jun 2012 B2
8218538 Chidambaram et al. Jul 2012 B1
8228820 Gopal Gowda et al. Jul 2012 B2
8261068 Raizen et al. Sep 2012 B1
8285907 Chappell et al. Oct 2012 B2
8287044 Chen et al. Oct 2012 B2
8291148 Shah et al. Oct 2012 B1
8392645 Miyoshi Mar 2013 B2
8397092 Karnowski Mar 2013 B2
8443119 Limaye et al. May 2013 B1
8458306 Sripathi Jun 2013 B1
8677023 Venkataraghavan et al. Mar 2014 B2
8892706 Dalal Nov 2014 B1
9083550 Cohen et al. Jan 2015 B2
9064058 Daniel Jun 2015 B2
9264384 Sundaresan et al. Feb 2016 B1
9331963 Krishnamurthi et al. May 2016 B2
20010032280 Osakada et al. Oct 2001 A1
20010037406 Philbrick et al. Nov 2001 A1
20020023151 Iwatani Feb 2002 A1
20020065984 Thompson et al. May 2002 A1
20020069245 Kim Jun 2002 A1
20020152327 Kagan et al. Oct 2002 A1
20030007505 Noda et al. Jan 2003 A1
20030028716 Sved Feb 2003 A1
20030037177 Sutton et al. Feb 2003 A1
20030051076 Webber Mar 2003 A1
20030081612 Goetzinger et al. May 2003 A1
20030093501 Carlson et al. May 2003 A1
20030099254 Richter May 2003 A1
20030110364 Tang et al. Jun 2003 A1
20030118053 Edsall et al. Jun 2003 A1
20030126315 Tan et al. Jul 2003 A1
20030126320 Liu et al. Jul 2003 A1
20030126344 Hodapp, Jr. Jul 2003 A1
20030131182 Kumar et al. Jul 2003 A1
20030165140 Tang et al. Sep 2003 A1
20030172149 Edsall et al. Sep 2003 A1
20030200315 Goldenberg et al. Oct 2003 A1
20030200363 Futral et al. Nov 2003 A1
20030208614 Wilkes Nov 2003 A1
20030212755 Shatas et al. Nov 2003 A1
20030226018 Tardo et al. Dec 2003 A1
20030229645 Mogi et al. Dec 2003 A1
20040003140 Rimmer Jan 2004 A1
20040003141 Matters et al. Jan 2004 A1
20040003154 Harris et al. Jan 2004 A1
20040008713 Knight et al. Jan 2004 A1
20040025166 Adlung et al. Feb 2004 A1
20040028063 Roy et al. Feb 2004 A1
20040030857 Krakirian et al. Feb 2004 A1
20040034718 Goldenberg et al. Feb 2004 A1
20040054776 Klotz et al. Mar 2004 A1
20040057441 Li et al. Mar 2004 A1
20040064590 Starr et al. Apr 2004 A1
20040078632 Infante et al. Apr 2004 A1
20040081145 Harrekilde-Petersen et al. Apr 2004 A1
20040107300 Padmanabhan et al. Jun 2004 A1
20040123013 Clayton et al. Jun 2004 A1
20040139237 Rangan et al. Jul 2004 A1
20040151188 Maveli et al. Aug 2004 A1
20040160970 Dally et al. Aug 2004 A1
20040172494 Pettey et al. Sep 2004 A1
20040179529 Pettey et al. Sep 2004 A1
20040210623 Hydrie et al. Oct 2004 A1
20040218579 An Nov 2004 A1
20040225719 Kisley et al. Nov 2004 A1
20040225764 Pooni et al. Nov 2004 A1
20040233933 Munguia Nov 2004 A1
20040236877 Burton Nov 2004 A1
20050010688 Murakami et al. Jan 2005 A1
20050033878 Pangal et al. Feb 2005 A1
20050039063 Hsu et al. Feb 2005 A1
20050044301 Vasilevsky et al. Feb 2005 A1
20050050191 Hubis Mar 2005 A1
20050058085 Shapiro et al. Mar 2005 A1
20050066045 Johnson et al. Mar 2005 A1
20050080923 Elzur Apr 2005 A1
20050080982 Vasilevsky et al. Apr 2005 A1
20050091441 Qi et al. Apr 2005 A1
20050108407 Johnson et al. May 2005 A1
20050111483 Cripe et al. May 2005 A1
20050114569 Bogin et al. May 2005 A1
20050114595 Karr et al. May 2005 A1
20050120160 Plouffe et al. Jun 2005 A1
20050141425 Foulds Jun 2005 A1
20050160251 Zur et al. Jul 2005 A1
20050182853 Lewites et al. Aug 2005 A1
20050188239 Golasky et al. Aug 2005 A1
20050198410 Kagan et al. Sep 2005 A1
20050198523 Shanbhag et al. Sep 2005 A1
20050232285 Terrell et al. Oct 2005 A1
20050238035 Riley Oct 2005 A1
20050240621 Robertson et al. Oct 2005 A1
20050240932 Billau et al. Oct 2005 A1
20050262269 Pike Nov 2005 A1
20060004983 Tsao et al. Jan 2006 A1
20060007937 Sharma Jan 2006 A1
20060010287 Kim Jan 2006 A1
20060013240 Ma et al. Jan 2006 A1
20060045089 Bacher Mar 2006 A1
20060045098 Krause Mar 2006 A1
20060050693 Bury et al. Mar 2006 A1
20060059400 Clark et al. Mar 2006 A1
20060092928 Pike et al. May 2006 A1
20061012969 Kagan et al. Jun 2006
20060168286 Makhervaks et al. Jul 2006 A1
20060168306 Makhervaks et al. Jul 2006 A1
20060136570 Pandya Aug 2006 A1
20060179178 King Aug 2006 A1
20060182034 Klinker et al. Aug 2006 A1
20060184711 Pettey et al. Aug 2006 A1
20060193327 Arndt et al. Aug 2006 A1
20060200584 Bhat Sep 2006 A1
20060212608 Arndt et al. Sep 2006 A1
20060224843 Rao et al. Oct 2006 A1
20060233168 Lewites et al. Oct 2006 A1
20060242332 Johnsen et al. Oct 2006 A1
20060253619 Torudbakken et al. Nov 2006 A1
20060282591 Krithivas Dec 2006 A1
20060292292 Brightman et al. Dec 2006 A1
20070050520 Riley Mar 2007 A1
20070067435 Landis et al. Mar 2007 A1
20070101173 Fung May 2007 A1
20070112574 Greene May 2007 A1
20070112963 Dykes et al. May 2007 A1
20070130295 Rastogi et al. Jun 2007 A1
20070220170 Abjanic et al. Sep 2007 A1
20070286233 Latif et al. Dec 2007 A1
20080025217 Gusat et al. Jan 2008 A1
20080082696 Bestler Apr 2008 A1
20080159260 Vobbilisetty et al. Jul 2008 A1
20080192648 Galles Aug 2008 A1
20080205409 McGee et al. Aug 2008 A1
20080225877 Yoshida Sep 2008 A1
20080270726 Elnozahy et al. Oct 2008 A1
20080301692 Billau et al. Dec 2008 A1
20080307150 Stewart et al. Dec 2008 A1
20090070422 Kashyap et al. Mar 2009 A1
20090141728 Brown et al. Jun 2009 A1
20090296579 Dharwadkar Dec 2009 A1
20090307388 Tchapda Dec 2009 A1
20100010899 Lambert Jan 2010 A1
20100088432 Itoh Apr 2010 A1
20100138602 Kim Jun 2010 A1
20100195549 Aragon et al. Aug 2010 A1
20100232450 Maveli et al. Sep 2010 A1
20100293552 Allen et al. Nov 2010 A1
20110153715 Oshins et al. Jun 2011 A1
20110154318 Oshins et al. Jun 2011 A1
20120072564 Johnson Mar 2012 A1
20120079143 Krishnamurthi et al. Mar 2012 A1
20120144006 Wakamatsu et al. Jun 2012 A1
20120158647 Yadappanavar et al. Jun 2012 A1
20120163376 Shukla et al. Jun 2012 A1
20120163391 Shukla et al. Jun 2012 A1
20120166575 Ogawa et al. Jun 2012 A1
20120167080 Vilayannur et al. Jun 2012 A1
20120209905 Haugh et al. Aug 2012 A1
20120239789 Ando et al. Sep 2012 A1
20120304168 Raj Seeniraj et al. Nov 2012 A1
20130031200 Gulati et al. Jan 2013 A1
20130080610 Ando Mar 2013 A1
20130117421 Wimmer May 2013 A1
20130138758 Cohen et al. May 2013 A1
20130145072 Venkataraghavan et al. Jun 2013 A1
20130159637 Forgette et al. Jun 2013 A1
20130179532 Tameshige et al. Jul 2013 A1
20130201988 Zhou et al. Aug 2013 A1
20140122675 Cohen et al. May 2014 A1
20150134854 Tchapda May 2015 A1
Foreign Referenced Citations (6)
Number Date Country
1703016 Nov 2005 CN
104823409 Aug 2015 CN
2912805 Sep 2015 EP
2912805 Jun 2016 EP
2912805 May 2017 EP
2014070445 May 2014 WO
Non-Patent Literature Citations (106)
Entry
Bhatt, Ajay V. “Creating a Third Generation I/O Interconnect”, Intel® Developer Network for PCI Express* Architecture, www.express-lane.org, pp. 1-11.
Balakrishnan et al., Improving TCP/IP Performance over Wireless Networks, Proc. 1st ACM Int'l Conf. on Mobile Computing and Networking (Mobicom), Nov. 1995, 10 pages.
Figueiredo et al., “Resource Virtualization Renaissance”, May 2005, IEEE Computer Society, pp. 28-31.
HTTP Persistent Connection Establishment, Management and Termination, section of ‘The TCP/IP Guide’ version 3.0, Sep. 20, 2005, 2 pages.
Kesavan et al., Active CoordinaTion (ACT)—Toward Effectively Managing Virtualized Multicore Clouds, IEEE, 2008.
Liu et al., High Performance RDMA-Based MPI Implementation over InfiniBand, ICS'03, San Francisco, ACM, Jun. 23-26, 2003, 10 pages.
Marshall (Xsigo Systems Launches Company and I/O Virtualization Product, Sep. 15, 2007, vmblog.com, “http://vmblog.com/archive/2007/09/15/xsigo-systems-launches-company-and-i-o-virtualization-product.aspx” accessed on Mar. 24, 2014).
Poulton, Xsigo—Try it out, I dare you, Nov. 16, 2009.
Ranadive et al., IBMon: Monitoring VMM-Bypass Capable InfiniBand Devices using Memory Introspection, ACM, 2009.
Spanbauer, Wired or Wireless, Choose Your Network, PCWorld, Sep. 30, 2003, 9 pages.
TCP Segment Retransmission Timers and the Retransmission Queue, section of ‘The TCP/IP Guide’ version 3.0, Sep. 20, 2005, 3 pages.
TCP Window Size Adjustment and Flow Control, section of ‘The TCP/IP Guide’ version 3.0, Sep. 20, 2005, 2 pages.
Wikipedia's article on ‘Infiniband’, Aug. 2010.
Wong et al., Effective Generation of Test Sequences for Structural Testing of Concurrent Programs, IEEE International Conference of Complex Computer Systems (ICECCS'05), 2005.
Xu et al., Performance Virtualization for Large-Scale Storage Systems, IEEE, 2003, 10 pages.
U.S. Appl. No. 11/083,258, Non-Final Office Action dated Jul. 11, 2008, 12 pages.
U.S. Appl. No. 11/083,258, Final Office Action dated Feb. 2, 2009, 13 pages.
U.S. Appl. No. 11/083,258, Non-Final Office Action dated Nov. 12, 2009, 13 pages.
U.S. Appl. No. 11/083,258, Final Office Action dated Jun. 10, 2010, 15 pages.
U.S. Appl. No. 11/083,258, Non-Final Office Action dated Mar. 28, 2011, 14 pages.
U.S. Appl. No. 11/083,258, Non-Final Office Action dated Apr. 25, 2012, 30 pages.
U.S. Appl. No. 11/083,258, Final Office Action dated Oct. 26, 2012, 30 pages.
U.S. Appl. No. 11/083,258, Advisory Action dated Jan. 24, 2013, 3 pages.
U.S. Appl. No. 11/083,258, Final Office Action dated Apr. 18, 2014, 37 pages.
U.S. Appl. No. 11/083,258, Non-Final Office Action dated Sep. 10, 2014, 34 pages.
U.S. Appl. No. 11/083,258, Final Office Action dated Mar. 19, 2015, 37 pages.
U.S. Appl. No. 11/083,258, Notice of Allowance dated Oct. 5, 2015, 8 pages.
U.S. Appl. No. 11/086,117, Non-Final Office Action dated Jul. 22, 2008, 13 pages.
U.S. Appl. No. 11/086,117, Final Office Action dated Dec. 23, 2008, 11 pages.
U.S. Appl. No. 11/086,117, Non-Final Office Action dated May 6, 2009, 12 pages.
U.S. Appl. No. 11/086,117, Non-Final Office Action dated Jul. 22, 2010, 24 pages.
U.S. Appl. No. 11/086,117, Notice of Allowance dated Dec. 27, 2010, 15 pages.
U.S. Appl. No. 11/145,698, Non-Final Office Action dated Mar. 31, 2009, 22 pages.
U.S. Appl. No. 11/145,698, Final Office Action dated Aug. 18, 2009, 22 pages.
U.S. Appl. No. 11/145,698, Non-Final Office Action dated Mar. 16, 2011, 24 pages.
U.S. Appl. No. 11/145,698, Final Office Action dated Jul. 6, 2011, 26 pages.
U.S. Appl. No. 11/145,698, Non-Final Office Action dated May 9, 2013, 13 pages.
U.S. Appl. No. 11/145,698, Notice of Allowance, dated Oct. 24, 2013, 15 pages.
U.S. Appl. No. 11/179,085, Non-Final Office Action dated May 31, 2007, 14 pages.
U.S. Appl. No. 11/179,085, Response to Non-final Office Action filed Aug. 10, 2007, 8 pages.
U.S. Appl. No. 11/179,085, Final Office Action dated Oct. 30, 2007, 13 pages.
U.S. Appl. No. 11/179,085, Pre Appeal Brief Request mailed on Jan. 24, 2008, 6 pages.
U.S. Appl. No. 11/179,085, Preliminary Amendment dated May 27, 2008, 9 pages.
U.S. Appl. No. 11/179,085, Notice of Allowance dated Aug. 11, 2008, 4 pages.
U.S. Appl. No. 11/179,437, Non-Final Office Action dated May 8, 2008, 11 pages.
U.S. Appl. No. 11/179,437, Final Office Action dated Jan. 8, 2009, 13 pages.
U.S. Appl. No. 11/179,437, Notice of Allowance dated Jun. 1, 2009, 8 pages.
U.S. Appl. No. 11/184,306, Non-Final Office Action dated Apr. 10, 2009, 5 pages.
U.S. Appl. No. 11/184,306, Notice of Allowance dated Aug. 10, 2009, 4 pages.
U.S. Appl. No. 11/200,761, Office Action dated Mar. 12, 2009.
U.S. Appl. No. 11/200,761, Final Office Action dated Aug. 13, 2009.
U.S. Appl. No. 11/200,761, Non-Final Office Action dated Jan. 20, 2010, 22 pages.
U.S. Appl. No. 11/200,761, Final Office Action dated Jul. 9, 2010, 22 pages.
U.S. Appl. No. 11/200,761, Advisory Action dated Aug. 31, 2010, 3 pages.
U.S. Appl. No. 11/200,761, Non-Final Office Action dated Aug. 31, 2012, 21 pages.
U.S. Appl. No. 11/200,761, Office Action dated Feb. 7, 2013, 22 pages.
U.S. Appl. No. 11/200,761, Advisory Action dated Apr. 19, 2013, 3 pages.
U.S. Appl. No. 11/200,761, Non-Final Office Action dated Jun. 11, 2013, 21 pages.
U.S. Appl. No. 11/200,761 Final Office Action dated Jan. 9, 2014, 23 pages.
U.S. Appl. No. 11/200,761, Non-Final Office Action dated Mar. 11, 2015, 24 pages.
U.S. Appl. No. 11/200,761, Final Office Action dated Oct. 22, 2015, 22 pages.
U.S. Appl. No. 11/200,761, Non-Final Office Action dated Jul. 27, 2016, 21 pages.
U.S. Appl. No. 11/200,761, Final Office Action dated Feb. 9, 2017, 7 pages.
U.S. Appl. No. 11/200,761, Notice of Allowance dated Jun. 23, 2017, all pages.
U.S. Appl. No. 11/222,590, Non-Final Office Action dated Mar. 21, 2007, 6 pages.
U.S. Appl. No. 11/222,590, Notice of Allowance dated Sep. 18, 2007, 5 pages.
U.S. Appl. No. 12/250,842, Non-Final Office Action dated Aug. 10, 2010, 9 pages.
U.S. Appl. No. 12/250,842, Response to Non-Final Office Action filed Nov. 19, 2010, 8 pages.
U.S. Appl. No. 12/250,842, Notice of Allowance dated Feb. 18, 2011, 5 pages.
U.S. Appl. No. 12/250,842, Notice of Allowance dated Jun. 10, 2011, 5 pages.
U.S. Appl. No. 12/544,744, Non-Final Office action dated Jun. 6, 2012, 25 pages.
U.S. Appl. No. 12/544,744, Final Office Action dated Feb. 27, 2013, 27 pages.
U.S. Appl. No. 12/544,744, Non-Final Office action dated Apr. 4, 2014, 29 pages.
U.S. Appl. No. 12/544,744, Final Office Action dated Nov. 7, 2014, 31 pages.
U.S. Appl. No. 12/544,744, Non-Final Office action dated Sep. 24, 2015, 28 pages.
U.S. Appl. No. 12/544,744, Final Office Action dated Jun. 1, 2016, 33 pages.
U.S. Appl. No. 12/544,744, Non-Final Office action dated Jan. 30, 2017, 32 pages.
U.S. Appl. No. 12/544,744, Final Office Action dated Aug. 25, 2017, 33 pages.
U.S. Appl. No. 12/544,744, Notice of Allowance, dated Jan. 3, 2018, 8 pages.
U.S. Appl. No. 12/862,977, Non-Final Office Action dated Mar. 1, 2012, 8 pages.
U.S. Appl. No. 12/862,977, Non-Final Office Action dated Aug. 29, 2012, 9 pages.
U.S. Appl. No. 12/862,977, Notice of Allowance dated Feb. 7, 2013, 11 pages.
U.S. Appl. No. 12/890,498, Non-Final Office Action dated Nov. 13, 2011, 10 pages.
U.S. Appl. No. 12/890,498, Final Office Action dated Feb. 7, 2012, 9 Pages.
U.S. Appl. No. 12/890,498, Advisory Action dated Apr. 16, 2012, 4 pages.
U.S. Appl. No. 12/890,498, Non-Final Office Action dated May 21, 2013, 22 pages.
U.S. Appl. No. 12/890,498, Final Office Action dated Nov. 19, 2014, 21 pages.
U.S. Appl. No. 12/890,498, Advisory Action dated Jan. 27, 2015, 3 pages.
U.S. Appl. No. 12/890,498, Non-Final Office Action dated Mar. 5, 2015, 24 pages.
U.S. Appl. No. 12/890,498, Final Office Action dated Jun. 17, 2015, 24 pages.
U.S. Appl. No. 12/890,498, Advisory Action dated Aug. 25, 2015, 3 pages.
U.S. Appl. No. 12/890,498, Notice of Allowance dated Dec. 30, 2015, all pages.
U.S. Appl. No. 12/890,498, Supplemental Notice of Allowability, dated Jan. 14, 2016, 2 pages.
U.S. Appl. No. 13/229,587, Non-Final Office Action dated Oct. 6, 2011, 4 pages.
U.S. Appl. No. 13/229,587, Response to Non-Final Office Action filed Jan. 4, 2012, 4 pages.
U.S. Appl. No. 13/229,587, Notice of Allowance dated Jan. 19, 2012, 5 pages.
U.S. Appl. No. 13/445,570, Notice of Allowance dated Jun. 20, 2012, 5 pages.
U.S. Appl. No. 13/663,405, Non-Final Office Action dated Nov. 21, 2014, 19 pages.
U.S. Appl. No. 13/663,405, Notice of Allowance dated Mar. 12, 2015, 13 pages.
Chinese Application No. 201380063351.0, Office Action dated May 2, 2017, 16 pages (7 pages for the original document and 9 pages for the English translation).
Chinese Application No. 201380063351.0, Office Action dated Dec. 11, 2017, (3 pages of the original document and 9 pages for the English translation).
Chinese Application No. 201380063351.0, Notice of Decision to Grant dated Mar. 6, 2018, (no English Version Available), 2 pages.
European Application No. 13850840.3, Extended European Search Report dated May 3, 2016, 6 pages.
European Application No. 13850840.3, Notice of Decision to Grant dated Dec. 15, 2016, all pages.
International Application No. PCT/US2013/065008, International Preliminary Report on Patentability dated May 14, 2015, 6 pages.
International Application No. PCT/US2013/065008, International Search Report and Written Opinion dated Apr. 16, 2014, 17 pages.
Related Publications (1)
Number Date Country
20180227248 A1 Aug 2018 US
Continuations (1)
Number Date Country
Parent 12544744 Aug 2009 US
Child 15944391 US