Network virtualization over InfiniBand

Information

  • Patent Grant
  • Patent Number
    9,083,550
  • Date Filed
    Monday, October 29, 2012
  • Date Issued
    Tuesday, July 14, 2015
Abstract
Mechanisms are provided to allow servers connected over an InfiniBand fabric to communicate using multiple private virtual interconnects (PVIs). In particular embodiments, the PVIs appear as virtual Ethernet networks to users on individual servers and virtual machines running on the individual servers. Each PVI is represented on the server by a virtual network interface card (VNIC) and each PVI is mapped to its own InfiniBand multicast group. Data can be transmitted on PVIs as Ethernet packets fully encapsulated, including the layer 2 header, within InfiniBand messages. Broadcast and multicast frames are propagated using InfiniBand.
Description
TECHNICAL FIELD

The present disclosure relates to network virtualization over InfiniBand.


DESCRIPTION OF RELATED ART

InfiniBand provides a robust, scalable, and fail-safe architecture for connecting nodes such as servers, appliances, and disk arrays. InfiniBand is often used in high performance server clusters and datacenters. In one particular application, InfiniBand is used to connect servers to an input/output (I/O) director that provides efficient virtualized, shared, and fault tolerant I/O resources such as host bus adapters (HBAs) and network interface cards (NICs) to the servers.


However, mechanisms for isolating or separating communications on an InfiniBand fabric are limited. Furthermore, other mechanisms such as Internet Protocol (IP) over InfiniBand (IB) do not easily allow for efficient virtualization. Consequently, techniques and mechanisms are provided to enhance communications over InfiniBand and allow for network virtualization over InfiniBand.





BRIEF DESCRIPTION OF THE DRAWINGS

The disclosure may best be understood by reference to the following description taken in conjunction with the accompanying drawings, which illustrate particular example embodiments.



FIG. 1 illustrates one example of a system with servers connected to an I/O director.



FIG. 2 illustrates one example of a system having multiple servers and multiple private virtual interconnects (PVIs) over InfiniBand.



FIG. 3 illustrates one example of a technique for creating a PVI.



FIG. 4 illustrates one example of a forwarding table.



FIG. 5 illustrates one example of a technique for sending data.



FIG. 6 provides one example of a system that can be used to implement one or more mechanisms.





DESCRIPTION OF PARTICULAR EMBODIMENTS

Reference will now be made in detail to some specific examples of the invention including the best modes contemplated by the inventors for carrying out the invention. Examples of these specific embodiments are illustrated in the accompanying drawings. While the invention is described in conjunction with these specific embodiments, it will be understood that it is not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims.


For example, the techniques and mechanisms of the present invention will be described in the context of InfiniBand and an input/output (I/O) director. However, it should be noted that the techniques and mechanisms of the present invention apply to InfiniBand variations and other types of networks as well as architectures that do not include an I/O director. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. Particular example embodiments of the present invention may be implemented without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present invention.


Various techniques and mechanisms of the present invention will sometimes be described in singular form for clarity. However, it should be noted that some embodiments include multiple iterations of a technique or multiple instantiations of a mechanism unless noted otherwise. For example, a system uses a processor in a variety of contexts. However, it will be appreciated that a system can use multiple processors while remaining within the scope of the present invention unless otherwise noted. Furthermore, the techniques and mechanisms of the present invention will sometimes describe a connection between two entities. It should be noted that a connection between two entities does not necessarily mean a direct, unimpeded connection, as a variety of other entities may reside between the two entities. For example, a processor may be connected to memory, but it will be appreciated that a variety of bridges and controllers may reside between the processor and memory. Consequently, a connection does not necessarily mean a direct, unimpeded connection unless otherwise noted.


OVERVIEW

Mechanisms are provided to allow servers connected over an InfiniBand fabric to communicate using multiple private virtual interconnects (PVIs). In particular embodiments, the PVIs appear as virtual Ethernet networks to users on individual servers and virtual machines running on the individual servers. Each PVI is represented on the server by a virtual network interface card (VNIC) and each PVI is mapped to its own InfiniBand multicast group. Data can be transmitted on PVIs as Ethernet packets fully encapsulated including the layer 2 header within InfiniBand messages. Broadcast and multicast frames are propagated using InfiniBand.


Example Embodiments

InfiniBand is a switched fabric that provides high bandwidth, low latency, quality of service, and failover capabilities. InfiniBand provides point-to-point bidirectional serial links to connect servers, disk arrays, appliances, etc. InfiniBand offers unicast, multicast, and broadcast support and is often used in cloud computing clusters and data centers.


In particular embodiments, the servers are connected over an InfiniBand fabric to an I/O director. The I/O director provides shared and virtualized I/O resources to the servers. The common approach for providing I/O connectivity to servers and other hosts is to provide I/O controllers within the servers themselves. I/O controllers include Ethernet network interface cards (NICs), Fibre Channel, iSCSI and SAS host bus adapters (HBAs), etc. The I/O controllers are then connected to external devices using cables. External devices include switches, storage devices, display devices, and others. Cabling quickly becomes hard to manage in data centers with a significant number of servers, networks, and storage devices.


In some implementations, I/O controllers are offloaded onto an external shared system referred to herein as an I/O director. The I/O director includes actual I/O resources connected to external devices such as switches and storage. The hosts are connected to the I/O director over InfiniBand, but the number of cables required to provide redundancy and fault tolerance is much lower than the number of cables required when each host has its own I/O resources. In many cases, deploying an I/O director reduces the number of I/O cables per server from half a dozen or a dozen to one or two cables. A VNIC driver is provided for communication with the VNIC I/O modules at the I/O director and for providing network device services on the server which correspond to those provided by local physical NICs. The end result is that servers have connectivity to any number of different data and storage networks using virtual I/O devices.


Although servers can efficiently communicate with external entities on external networks using virtualized I/O resources, communications with other servers on the same InfiniBand fabric are not necessarily efficient, because they are still required to pass through the I/O module at the I/O director. Having local communications pass through the I/O module at the I/O director is inefficient and introduces significant bandwidth, latency, and throughput limitations. Furthermore, if virtual networks are desired, one port at the I/O module is required for each separate virtual network. This can be problematic in systems that require thousands of virtual networks.


Consequently, the techniques of the present invention provide mechanisms for implementing virtual networks in an InfiniBand fabric. According to various embodiments, servers are connected over an InfiniBand fabric using virtual NICs (VNICs) that encapsulate Ethernet packets including layer 2 headers in InfiniBand messages. Servers and virtual machines can communicate as though the servers and virtual machines are connected using an Ethernet architecture. Different VNICs are provided for each virtual network. According to various embodiments, each virtual network is referred to herein as a private virtual interconnect (PVI). Each PVI provides logically isolated communications. A server may be a member of any number of PVIs.
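For illustration only, the following sketch shows the encapsulation idea in Python. The 4-byte PVI tag, the field layout, and the helper names are assumptions made for the sketch and are not the actual wire format used by the VNIC drivers.

    # Illustrative sketch only: the PVI tag and layout are assumed, not specified.
    import struct

    ETH_HEADER_LEN = 14  # destination MAC (6) + source MAC (6) + EtherType (2)

    def encapsulate_frame(pvi_id: int, eth_frame: bytes) -> bytes:
        """Wrap a complete Ethernet frame, layer 2 header included, in an
        InfiniBand message payload tagged with the PVI it belongs to."""
        if len(eth_frame) < ETH_HEADER_LEN:
            raise ValueError("not a valid Ethernet frame")
        # Unlike IP over InfiniBand, nothing is stripped: the destination MAC,
        # source MAC, and EtherType all travel inside the InfiniBand message.
        return struct.pack("!I", pvi_id) + eth_frame

    def decapsulate_message(ib_payload: bytes) -> tuple[int, bytes]:
        """Recover the PVI identifier and the original Ethernet frame."""
        (pvi_id,) = struct.unpack("!I", ib_payload[:4])
        return pvi_id, ib_payload[4:]

The point of the sketch is simply that the receiving VNIC driver recovers the unmodified layer 2 frame, so servers and virtual machines see what appears to be an ordinary Ethernet network.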


According to various embodiments, an administrator uses a management system to assign PVIs to servers. Each PVI is represented on the server by a VNIC which is used to access the PVI. Each PVI is mapped to its own InfiniBand multicast group which serves as its broadcast domain. PVI unicast frames are encapsulated in their entirety within InfiniBand reliable connected (RC) and unreliable datagram (UD) protocol messages. By contrast, a mechanism such as IP over InfiniBand does not include layer 2 headers in encapsulation.


Broadcast and multicast frames are propagated using InfiniBand multicast operations. In particular embodiments, mechanisms are provided for learning mappings between layer 2 addresses used within the PVI and their corresponding InfiniBand end points. Failover in case of link or switch failure is supported.


According to various embodiments, a very large number of isolated virtual networks can be created and scaled in a manner that allows high performance server-to-server communication. The mechanism is scalable, easy to manage, and provides significant benefits for a variety of applications. In particular embodiments, all intelligence can be maintained within VNIC drivers at individual servers. No centralized controller is required. Discovery can be performed autonomously with existing InfiniBand messages. Users on servers and virtual machines have access to their own networks that appear to them as Ethernet networks.



FIG. 1 illustrates one example of a system that includes multiple servers connected using an InfiniBand fabric to an I/O director. In particular embodiments, multiple servers 101, 103, 105, 107, and 109 are linked through an interconnect 131 such as an InfiniBand fabric. According to various embodiments, the servers 101, 103, 105, 107, and 109 communicate using Ethernet packets encapsulated in InfiniBand messages. VNICs 111, 115, 119, 123, and 127 are provided for servers 101, 103, 105, 107, and 109 respectively. According to various embodiments, VNICs 111, 115, 119, 123, and 127 are virtual network interface cards that appear to users at individual servers to be actual network interface cards.


To communicate with entities on an external network 161, servers 101, 103, 105, 107, and 109 use VNICs 111, 115, 119, 123, and 127, respectively, to communicate with an I/O director 151 over the InfiniBand fabric. According to various embodiments, the I/O director 151 includes I/O ports 141. I/O ports 141 include VNICs that provide the servers 101, 103, 105, 107, and 109 with virtualized I/O resources. According to various embodiments, the I/O director includes a target channel adapter (TCA) for actual communications on the InfiniBand fabric. A TCA can be a discrete device, or its functionality can be integrated into another device of the I/O module. A TCA may recognize and terminate various transport protocols (iWARP, RC, etc.).


According to various embodiments, the TCA removes the link and transport protocol headers from the packet when a server transmits a data packet to the I/O ports 141. The TCA then forwards the packet with an internal header to a network processor in the I/O director 151.


According to various embodiments, the network processor may include VNIC-to-VNIC switching logic. The VNIC-to-VNIC switching logic performs packet forwarding between VNICs terminating on the same Ethernet port. It maintains a table of corresponding VNICs and MAC addresses and performs packet forwarding based on MAC addresses. For example, if VNIC1 is linked to address MAC1, and a data packet having MAC1 as its destination address is received on VNIC2, which terminates on the same Ethernet port as VNIC1, then the VNIC-to-VNIC switching logic forwards this packet to VNIC1. This functionality allows use of an I/O director with external switches that do not forward packets back out the link on which they arrived; in such cases, the switching is performed within the I/O modules themselves.
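A minimal sketch of such MAC-based VNIC-to-VNIC switching follows; the class and method names are assumptions, not the I/O director's actual implementation.

    class VnicToVnicSwitch:
        """Switches packets between VNICs that terminate on the same
        Ethernet port, keyed by learned MAC addresses (sketch only)."""

        def __init__(self):
            self.mac_to_vnic = {}          # e.g. {"MAC1": "VNIC1"}

        def learn(self, mac: str, vnic: str):
            self.mac_to_vnic[mac] = vnic   # table of corresponding VNICs and MACs

        def forward(self, src_vnic: str, dst_mac: str):
            """Return the local VNIC that should receive the packet, or None
            if it should be sent out the external Ethernet port instead."""
            dst_vnic = self.mac_to_vnic.get(dst_mac)
            if dst_vnic is not None and dst_vnic != src_vnic:
                return dst_vnic            # switched locally within the I/O module
            return None                    # not local: hand off to the external switch

With MAC1 learned against VNIC1, a packet arriving on VNIC2 with destination MAC1 is handed straight to VNIC1 and never leaves the I/O module.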


According to various embodiments, the VNIC I/O module also has learning logic, which is used to establish a mapping of VNICs created by virtualization software (on the servers) to VNICs of the I/O director. When a server is virtualized and one or more virtual machines are created on the server, each virtual machine can be associated with one or more VNICs, which are implemented by the server virtualization software. These VNICs are also referred to as virtual machine VNICs or simply VM VNICs. According to various embodiments, each VM VNIC has a MAC address, which is assigned by the virtualization software. One or more VM VNICs may be bridged to a single VNIC of the I/O director using a software virtual switch, which is implemented by the virtualization software. In particular embodiments, the traffic of multiple VM VNICs may appear on the same VNIC of the I/O director, and this traffic may include packets with different source MAC addresses for the different VM VNICs. According to various embodiments, the VNIC I/O module establishes a mapping between a VM VNIC MAC address and a corresponding VNIC of the I/O director. This mapping enables directing incoming traffic to the correct VNIC of the I/O director. For example, if a packet with destination MAC address MAC1 arrives at the I/O module Ethernet port, and MAC1 is the address of VM VNIC1, then the I/O module needs to know which VNIC of the I/O director should receive this packet. In certain embodiments, a lookup is performed in a mapping table to establish this I/O director VNIC to VM VNIC correspondence.
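A sketch of that mapping table follows; the dictionary-based lookup and the function names are assumptions used only to make the correspondence concrete.

    # Sketch: VM VNIC MAC address -> I/O director VNIC (names are assumptions).
    vm_mac_to_director_vnic: dict[str, str] = {}

    def learn_mapping(vm_mac: str, director_vnic: str) -> None:
        """Record that traffic from the VM VNIC with this MAC address
        appears on the given VNIC of the I/O director."""
        vm_mac_to_director_vnic[vm_mac] = director_vnic

    def director_vnic_for(dst_mac: str):
        """For a packet arriving at the I/O module Ethernet port, look up
        which I/O director VNIC should receive it (None if not yet learned)."""
        return vm_mac_to_director_vnic.get(dst_mac)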


By using VNICs 111, 115, 119, 123, and 127 to communicate with an I/O director 151, communication with the external network 161 can be performed efficiently using shared and virtualized I/O resources. However, even communications between servers 101, 103, 105, 107, and 109 that are not destined for any external network 161 have to go through the I/O director 151. Requiring all inter-server communications to go through the I/O director 151 is inefficient and introduces an artificial bottleneck into the system. Furthermore, the number of virtual networks that can be created in an InfiniBand fabric is limited by the number of ports in an I/O module of the I/O director 151. For example, creating 1500 virtual networks in an InfiniBand fabric would require 1500 ports.



FIG. 2 illustrates one example of a system that includes multiple servers connected over multiple virtual networks. An InfiniBand fabric includes multiple servers 201, 203, 205, 207, and 209. According to various embodiments, server 201 is assigned VNIC1 211 and VNIC2 213. Server 203 is assigned VNIC1 215 and VNIC2 217. Server 205 is assigned VNIC1 219, VNIC2 221, and VNIC3 223. In particular embodiments, server 207 is a load balancer or other appliance assigned VNIC2 225 and VNIC3 227. Server 209 is assigned VNIC2 229 and VNIC3 231.


According to various embodiments, servers 201, 203, and 205, assigned VNIC1 211, 215, and 219, respectively, are members of private virtual interconnect (PVI) 241. Servers 201, 203, 205, 207, and 209, assigned VNIC2 213, 217, 221, 225, and 229, respectively, are members of PVI 243. Servers 205, 207, and 209, assigned VNIC3 223, 227, and 231, respectively, are members of PVI 245. According to various embodiments, communications on PVIs 241, 243, and 245 are transmitted as Ethernet packets, including layer 2 headers, encapsulated in InfiniBand reliable connected (RC) and unreliable datagram (UD) protocol messages. According to various embodiments, a PVI can be created when an administrator directs a server to create a new VNIC corresponding to a virtual network identifier such as a net_ID. According to various embodiments, the net_ID is translated to a multicast group identifier by performing minor bit modification. Based on the multicast group identifier, a multicast group join operation is propagated to the subnet manager.


If the server is the first member of the multicast group corresponding to a virtual network, the subnet manager creates a multicast group, adds the port to the multicast group using the multicast group ID corresponding to the net_ID, and programs all switches on the path to add the new port. If the server is not the first member, the subnet manager adds the port to the multicast group and programs all switches on the path to add the new port. A driver then creates the VNIC on the server. It should be noted that InfiniBand elements, such as queue pairs necessary for communication, may also be created at this point (e.g., for UD communications) or later on (e.g., for RC communications). A queue pair may include a send queue and a receive queue created in tandem and identified by a queue pair number.
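The description does not spell out the exact bit modification, so the following is a hypothetical illustration: it assumes the net_ID is placed in the low-order bytes of a fixed InfiniBand multicast GID prefix.

    # Hypothetical illustration of the net_ID to multicast-GID translation;
    # the 12-byte prefix and field positions are assumptions.
    IB_MGID_PREFIX = bytes.fromhex("ff12601b") + bytes(8)

    def net_id_to_mgid(net_id: int) -> bytes:
        """Derive a 16-byte InfiniBand multicast GID from a PVI net_ID by
        appending the net_ID to a fixed multicast prefix."""
        return IB_MGID_PREFIX + net_id.to_bytes(4, "big")

    def mgid_to_net_id(mgid: bytes) -> int:
        """Recover the net_ID from a GID produced by net_id_to_mgid."""
        return int.from_bytes(mgid[-4:], "big")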



FIG. 3 illustrates one example of a mechanism for creating one or more private virtual interconnects (PVIs) in an InfiniBand network connecting multiple servers and/or appliances such as load balancers and security systems. The servers may or may not be connected to an I/O director that provides shared and virtualized I/O resources to the servers. According to various embodiments, an instruction is received at 301 to include a server in a virtual network. At 303, a net_ID corresponding to the virtual network is identified. At 305, the net_ID is translated to a multicast group ID using minor bit modification. Based on the multicast group identifier, a multicast group join operation is propagated to the subnet manager. If the server is the first member of the multicast group corresponding to the virtual network, the subnet manager creates a multicast group, adds the port to the multicast group using the multicast group ID corresponding to the net_ID, and programs all switches on the path to add the new port at 309. If the server is not the first member, the subnet manager adds the port to the multicast group and programs all switches on the path to add the new port at 311. According to various embodiments, a driver then creates the new VNIC on the server at 315.
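For illustration, the FIG. 3 flow can be sketched as follows, reusing the net_id_to_mgid helper sketched earlier; the subnet_manager and driver calls are placeholders, not a real InfiniBand management API.

    def create_pvi_membership(server, net_id, subnet_manager):
        """Sketch of the FIG. 3 flow; all calls below are placeholders."""
        mgid = net_id_to_mgid(net_id)                  # steps 303-305
        group = subnet_manager.find_group(mgid)
        if group is None:
            # step 309: first member, so create the group, add the port, and
            # program all switches on the path to add the new port
            group = subnet_manager.create_group(mgid)
            subnet_manager.add_port(group, server.port, program_switches=True)
        else:
            # step 311: group exists, so only add this server's port
            subnet_manager.add_port(group, server.port, program_switches=True)
        # step 315: the driver creates the VNIC that represents the PVI
        return server.driver.create_vnic(net_id, mgid)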



FIG. 4 illustrates one example of a forwarding table used for transmitting data in an InfiniBand network that supports multiple virtual networks. A private virtual interconnect driver forwarding table 401 is provided on a per-VNIC-driver basis. The forwarding table 401 includes a destination address 411, a VLAN identifier 413, destination InfiniBand address information 415, and destination queue pair information 417. According to various embodiments, the destination InfiniBand address information 415 may be a destination InfiniBand address vector. In particular embodiments, the destination address 411 and VLAN ID 413 pair is used to identify unique forwarding table entries. The destination InfiniBand address information 415 and destination queue pair 417 are used to forward data using standard InfiniBand UD and RC mechanisms.
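A sketch of such a per-VNIC-driver forwarding table follows; the Python types and field names are assumptions mirroring elements 411 through 417.

    from dataclasses import dataclass

    @dataclass
    class ForwardingEntry:
        ib_address_vector: bytes   # destination InfiniBand address information (415)
        dest_queue_pair: int       # destination queue pair (417)

    class PviForwardingTable:
        """One table per VNIC driver, keyed by the (destination address 411,
        VLAN ID 413) pair (sketch only)."""

        def __init__(self):
            self._entries: dict[tuple[str, int], ForwardingEntry] = {}

        def add(self, dest_mac: str, vlan_id: int, entry: ForwardingEntry) -> None:
            self._entries[(dest_mac, vlan_id)] = entry

        def lookup(self, dest_mac: str, vlan_id: int):
            return self._entries.get((dest_mac, vlan_id))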



FIG. 5 illustrates one example of a technique for sending data. According to various embodiments, a VNIC driver receives data from a network stack at 501. The data may be Ethernet data that the VNIC driver encapsulates in an InfiniBand message at 503. It is determined whether the data corresponds to a broadcast packet, a multicast packet, or a unicast packet at 505. If the data corresponds to a broadcast packet, a multicast encapsulated packet is transmitted on the PVI queue pair at 507. If the data corresponds to a multicast packet, an InfiniBand multicast group is identified at 509. In some instances, the multicast packet can be treated as a broadcast packet and transmitted to all members of the PVI multicast group. In other instances, an InfiniBand multicast group created for each IP multicast group is used for multicast operations. The multicast packet can then be transmitted using the IB multicast group at 511.


If the data corresponds to a unicast packet, the destination address is looked up in a forwarding table specific to that VNIC driver at 513. The destination address and a VLAN ID are used to identify a unique entry in the forwarding table at 515. Conventional InfiniBand forwarding mechanisms are then used to transmit UD and RC packets.
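Putting the pieces together, the send path of FIG. 5 might look like the sketch below. It reuses the encapsulation and forwarding-table sketches above; the vnic object, its send_* helpers, and the flood-on-unknown-destination behavior are assumptions.

    def transmit(vnic, eth_frame: bytes) -> None:
        """Sketch of the FIG. 5 send path (steps 501-515); helpers are assumed."""
        payload = encapsulate_frame(vnic.pvi_id, eth_frame)          # step 503
        dst_mac = eth_frame[0:6]
        if dst_mac == b"\xff" * 6:                                   # broadcast, step 507
            vnic.send_multicast(vnic.pvi_multicast_group, payload)
        elif dst_mac[0] & 0x01:                                      # multicast, steps 509-511
            # either treat it as broadcast on the PVI group, or use a dedicated
            # IB multicast group created for this IP multicast group
            group = vnic.multicast_groups.get(dst_mac, vnic.pvi_multicast_group)
            vnic.send_multicast(group, payload)
        else:                                                        # unicast, steps 513-515
            entry = vnic.forwarding_table.lookup(dst_mac.hex(), vnic.vlan_id)
            if entry is None:
                # assumption: unknown destinations are flooded on the PVI group
                vnic.send_multicast(vnic.pvi_multicast_group, payload)
            else:
                vnic.send_unicast(entry.ib_address_vector,
                                  entry.dest_queue_pair, payload)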


When a destination server receives an InfiniBand message, the InfiniBand message encapsulation is removed to extract the Ethernet data. Information from the InfiniBand message can be used to populate a forwarding table at the destination server. This information may include the destination queue pair and destination address.
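The receive side can be sketched in the same style; the attributes read from the InfiniBand message (payload, source address vector, source queue pair) are assumptions about what the transport makes available to the driver.

    def receive(vnic, ib_message) -> None:
        """Sketch of the receive path: decapsulate, learn, deliver."""
        pvi_id, eth_frame = decapsulate_message(ib_message.payload)
        src_mac = eth_frame[6:12].hex()
        # learn the InfiniBand endpoint behind the sender's MAC so that later
        # unicast replies can be sent directly over UD or RC
        vnic.forwarding_table.add(
            src_mac, vnic.vlan_id,
            ForwardingEntry(ib_message.source_address_vector,
                            ib_message.source_queue_pair))
        vnic.deliver_to_network_stack(eth_frame)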


According to various embodiments, the various mechanisms can be implemented in hardware, firmware, and/or software. FIG. 6 provides one example of a system that can be used to implement one or more mechanisms. For example, the system shown in FIG. 6 may be used to implement a server or an I/O director.


According to particular example embodiments, a system 600 suitable for implementing particular embodiments of the present invention includes a processor 601, a memory 603, an interface 611, and a bus 615 (e.g., a PCI bus). When acting under the control of appropriate software or firmware, the processor 601 is responsible for tasks such as data modification. Various specially configured devices can also be used in place of or in addition to the processor 601. The complete implementation can also be done in custom hardware. The interface 611 is typically configured to send and receive data packets or data segments over a network. Particular examples of interfaces the device supports include host bus adapter (HBA) interfaces, Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like.


In addition, various very high-speed interfaces may be provided such as fast Ethernet interfaces, 1/10/40/100 Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces, Host Channel Adapter, and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control communications-intensive tasks.


According to particular example embodiments, the system 600 uses memory 603 to store data, algorithms, and program instructions. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store received data and process received data.


Because such information and program instructions may be employed to implement the systems/methods described herein, the present invention relates to tangible, machine readable media that include program instructions, state information, etc. for performing various operations described herein. Examples of machine-readable media include, but are not limited to, magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks and DVDs; magneto-optical media such as optical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory devices (ROM) and random access memory (RAM). Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter.


Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. Therefore, the present embodiments are to be considered as illustrative and not restrictive and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.

Claims
  • 1. A method for creating a virtual network, the method comprising: converting a virtual network identifier to an InfiniBand multicast group identifier at a first server; sending an InfiniBand multicast message over an InfiniBand fabric, the InfiniBand fabric including the first server, a second server, and a third server, wherein communications between the first server, the second server, and the third server comprise Ethernet packets encapsulated for transmission over the InfiniBand fabric, wherein a network entity receives the InfiniBand multicast message and determines whether the first server is a first member of a multicast group corresponding to a virtual network and adds a port to a multicast group; creating a virtual network interface card (VNIC) corresponding to the virtual network identifier for each of the first server, second server, and third server; and creating a Private Virtual Interconnect (PVI) between two or more of the first server, second server, or third server using the VNIC, the PVI comprising a virtual Ethernet network corresponding to the virtual network identifier, wherein the two or more of the first, second, and third servers of the virtual Ethernet network communicate via the PVI using Ethernet packets encapsulated within InfiniBand messages.
  • 2. The method of claim 1, wherein the InfiniBand multicast message is associated with an InfiniBand multicast join operation.
  • 3. The method of claim 1, wherein the network entity is a subnet manager.
  • 4. The method of claim 3, wherein if the first server is the first member of the multicast group, the subnet manager creates the multicast group.
  • 5. The method of claim 1, wherein the multicast group corresponds to the virtual network.
  • 6. The method of claim 1, wherein converting the virtual network identifier to the InfiniBand multicast group comprises performing bit modification.
  • 7. The method of claim 1, wherein the VNIC is created by a driver on the first server.
  • 8. The method of claim 1, wherein the second server is a network appliance.
  • 9. A system comprising: a processor; and a memory coupled with and readable by the processor and storing therein a set of instructions which, when executed by the processor, causes the processor to create a virtual network by: converting a virtual network identifier to an InfiniBand multicast group identifier at a first server; sending an InfiniBand multicast message over an InfiniBand fabric, the InfiniBand fabric including the first server, a second server, and a third server, wherein communications between the first server, the second server, and the third server comprise Ethernet packets encapsulated for transmission over the InfiniBand fabric, wherein a network entity receives the InfiniBand multicast message and determines whether the first server is a first member of a multicast group corresponding to a virtual network and adds a port to a multicast group; creating a virtual network interface card (VNIC) corresponding to the virtual network identifier for each of the first server, second server, and third server; and creating a Private Virtual Interconnect (PVI) between two or more of the first server, second server, or third server using the VNIC, the PVI comprising a virtual Ethernet network corresponding to the virtual network identifier, wherein the two or more of the first, second, and third servers of the virtual Ethernet network communicate via the PVI using Ethernet packets encapsulated within InfiniBand messages.
  • 10. The system of claim 9, wherein the InfiniBand multicast message is associated with an InfiniBand multicast join operation.
  • 11. The system of claim 9, wherein the network entity is a subnet manager.
  • 12. The system of claim 11, wherein if the first server is the first member of the multicast group, the subnet manager creates the multicast group.
  • 13. The system of claim 9, wherein the multicast group corresponds to a virtual network.
  • 14. The system of claim 9, wherein converting the virtual network identifier to the InfiniBand multicast group comprises performing bit modification.
  • 15. The system of claim 9, wherein the VNIC is created by a driver on the first server.
  • 16. The system of claim 9, wherein the second server is a network appliance.
  • 17. A non-transitory computer readable medium comprising a set of instructions stored therein which, when executed by a processor, causes the processor to create a virtual network by: converting a virtual network identifier to an InfiniBand multicast group identifier at a first server; sending an InfiniBand multicast message over an InfiniBand fabric, the InfiniBand fabric including the first server, a second server, and a third server, wherein communications between the first server, the second server, and the third server comprise Ethernet packets encapsulated for transmission over the InfiniBand fabric, wherein a network entity receives the InfiniBand multicast message and determines whether the first server is a first member of a multicast group corresponding to a virtual network and adds a port to a multicast group; creating a virtual network interface card (VNIC) corresponding to the virtual network identifier for each of the first server, second server, and third server; and creating a Private Virtual Interconnect (PVI) between two or more of the first server, second server, or third server using the VNIC, the PVI comprising a virtual Ethernet network corresponding to the virtual network identifier, wherein the two or more of the first, second, and third servers of the virtual Ethernet network communicate via the PVI using Ethernet packets encapsulated within InfiniBand messages.
  • 18. The non-transitory computer readable medium of claim 17, wherein the InfiniBand multicast message is associated with an InfiniBand multicast join operation.
Related Publications (1)
Number Date Country
20140122675 A1 May 2014 US