Resource virtualization mechanism including virtual host bus adapters

Information

  • Patent Grant
  • Patent Number
    9,264,384
  • Date Filed
    Wednesday, March 16, 2005
  • Date Issued
    Tuesday, February 16, 2016
Abstract
Methods and apparatus are provided for virtualizing resources such as host bus adapters connected to a storage area network. Resources are offloaded from individual servers onto a resource virtualization switch. Servers are connected to the resource virtualization switch using an I/O bus connection. Servers are assigned resources such as virtual host bus adapters and share access to physical host bus adapters included in the resource virtualization switch. Redundancy can be provided using multipathing mechanisms.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


The present invention relates to resource virtualization. In one example, the present invention relates to methods and apparatus for efficiently virtualizing, allocating, and managing resources used to connect servers to storage area networks.


2. Description of Related Art


Conventional servers connect to storage area networks, such as fibre channel fabric storage area networks, using host bus adapters (HBAs). In many implementations, multiple HBAs are included in each server to provide redundancy and load sharing. Each HBA is connected to a fibre channel switch port. If many servers are connected to a storage area network, a large number of HBAs and fibre channel switch ports are required, even though many of those HBAs and switch ports remain underutilized.


Some virtualization work has been done to provide shared access to a fibre channel storage area network within a particular server. Multiple operating systems included on a server may have limited shared access to a fibre channel storage area network. N-port virtualization in fibre channel allows for multiple initiators in a single HBA within a single server. Some solutions have allowed sharing of connectivity using gateway techniques through Ethernet. However, these solutions suffer from high latency and low bandwidth and are often unsuitable for typical storage area network and data center applications.


However, techniques and mechanisms for sharing resources such as HBAs and sharing connectivity to fibre channel storage area networks are limited. In many instances, conventional mechanisms still lead to underutilization and resource inflexibility. Network administration issues also remain complicated with the need for a large number of HBAs and switch ports. Consequently, it is desirable to provide methods and apparatus for more efficiently connecting servers to fibre channel storage area networks.


SUMMARY OF THE INVENTION

Methods and apparatus are provided for virtualizing resources such as host bus adapters connected to a storage area network. Resources are offloaded from individual servers onto a resource virtualization switch. Servers are connected to the resource virtualization switch using an I/O bus connection. Servers are assigned resources such as virtual host bus adapters and share access to physical host bus adapters included in the resource virtualization switch. Redundancy can be provided using multipathing mechanisms.


In one embodiment, a resource virtualization switch coupled to a storage area network is provided. The resource virtualization switch includes multiple port adapters, an I/O bus switch, and a resource virtualization switch platform. The port adapters are connected to a storage area network having storage area network ports associated with storage area network switches. The I/O bus switch is connected to multiple servers, including at least a first server and a second server. The resource virtualization switch platform is operable to map communications from the first server and the second server onto a single port adapter.


In another embodiment, a technique for transmitting data is provided. Data is received over an I/O bus connection from multiple servers including at least a first server and a second server. Data received from the first server and the second server is associated with a first port adapter at a resource virtualization switch. The first port adapter is connected to a storage area network switch. Data is transmitted to the storage area network switch using the first port adapter.


A further understanding of the nature and advantages of the present invention may be realized by reference to the remaining portions of the specification and the drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention may best be understood by reference to the following description taken in conjunction with the accompanying drawings, which are illustrative of specific embodiments of the present invention.



FIG. 1 is a diagrammatic representation showing a typical server configuration.



FIG. 2 is a diagrammatic representation showing multiple servers having virtualized resources.



FIG. 3 is a diagrammatic representation depicting separate servers and associated address spaces.



FIG. 4 is a diagrammatic representation depicting a layer model using a virtual device driver.



FIG. 5 is a diagrammatic representation showing one example of a virtual host bus adapter (VHBA) driver.



FIG. 6 is a diagrammatic representation showing one example of a VHBA coupled to one or more HBAs.



FIG. 7 is a diagrammatic representation showing a resource virtualization switch platform.



FIG. 8 is a diagrammatic representation showing multipathing and a VHBA.



FIG. 9 is a flow process diagram showing a technique for initializing HBAs.



FIG. 10 is a flow process diagram showing a technique for receiving frames using HBAs.





DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS

Reference will now be made in detail to some specific examples of the invention including the best modes contemplated by the inventors for carrying out the invention. Examples of these specific embodiments are illustrated in the accompanying drawings. While the invention is described in conjunction with these specific embodiments, it will be understood that it is not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims.


For example, the techniques of the present invention will be described in the context of Peripheral Component Interconnect (PCI) Express and fibre channel. However, it should be noted that the techniques of the present invention can be applied to a variety of different standards and variations of PCI Express and fibre channel. For example, storage area networks may be implemented using fibre channel, but storage area networks can also be implemented using other protocols such as Internet Small Computer Systems Interface (iSCSI). Although fibre channel based storage area network terms will be used, it should be recognized that the techniques of the present invention should not be limited to fibre channel.


In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present invention.


Furthermore, techniques and mechanisms of the present invention will sometimes be described in singular form for clarity. However, it should be noted that some embodiments can include multiple iterations of a technique or multiple instantiations of a mechanism unless noted otherwise. For example, a processor is used in a variety of contexts. However, it will be appreciated that multiple processors can also be used while remaining within the scope of the present invention unless otherwise noted.


A server or computing system generally includes one or more processors, memory, as well as other peripheral components and peripheral interfaces such as host bus adapters (HBA), hardware accelerators, network interface cards (NIC), graphics accelerators, disks, etc. To increase processing power, servers are often aggregated as blades in a rack or as servers on a server farm or data center and interconnected using various network backbones or backplanes. In some examples, each server includes an HBA configured to allow communication over a storage area network. The fibre channel fabric can be used to implement a storage area network having storage resources such as disk arrays and tape devices. The storage area network also typically includes storage area network switches that allow routing of traffic between various storage resources. To provide fault-tolerance, individual servers are often configured with redundant resources.


For example, a server may include multiple HBAs to allow for continued operation in the event of adapter failure. Each server may also have multiple CPUs or multiple network cards to provide fault tolerance. However, providing redundant resources in each server in a server rack or server farm can be expensive. A server farm including 40 individual systems and 40 adapters would typically require an additional 40 adapters for redundancy, one on each particular system. Redundancy can conventionally be provided only in a rigid and inflexible manner. Having a large number of adapters also requires a large number of switch ports, leading to inefficient and expensive deployment.


Because resources such as peripheral components and peripheral interfaces are assigned on a per server or a per processor basis, other servers do not typically have access to these resources. In order to provide adequate resources for each server, resources are typically over-provisioned. That is, more bandwidth is provided than is typically needed. For example, HBAs are typically arranged to provide 1G, 2G, or 4G of bandwidth. However, typical servers rarely use that amount. More network interface bandwidth is allocated than is typically used simply to handle worst-case or expected worst-case scenarios.


Resources are over-provisioned resulting in overall waste and low utilization. Resource assignment on a per server or a per processor basis also limits the ability to reconstruct or reconfigure a resource environment. For example, a system administrator may want to dynamically allocate unused HBA resources to other servers needing bandwidth. Conventional HBAs are also not hot pluggable, resulting in longer downtimes during server administrative operations such as upgrades.


Having a number of disparate servers also increases the complexity associated with individual system management. The servers would typically have to be individually administered without the benefit of centralized administration. Oftentimes, servers would be equipped with graphics cards and I/O subsystems to allow for system administrator access.


Conventional architectures create resource usage inefficiency, server management inefficiency, and reconfiguration inflexibility, along with a number of other drawbacks. Consequently, the techniques of the present invention provide for resources virtualization. According to various embodiments, each server no longer has access to a physical peripheral component or a physical peripheral interface such as an HBA, but instead has access to logical or virtual resources.


In some embodiments, resources such as HBAs are removed from individual servers and aggregated at a resource virtualization server or resource virtualization switch. In one example, the resource virtualization switch creates an on-demand provisioned and traffic engineered data center by seamlessly integrating with existing hardware and software infrastructure. The resource virtualization switch receives requests from individual servers over a bus interface such as PCI Express and determines what resources to provide to handle individual requests. For example, a first server may request to transmit data over a storage area network. The request is routed to the resource virtualization switch, which then determines how to handle the request. In one example, the request is forwarded to the HBA corresponding to the first server.
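As a rough illustration of this request routing, the following C sketch shows how a request arriving over the I/O bus might be dispatched to the physical HBA assigned to the originating server. The structure and function names are assumptions made for the example, not the patent's implementation.

    #include <stdio.h>

    /* Hypothetical illustration: dispatch a server's I/O request to the
     * physical HBA that the resource virtualization switch has assigned
     * to that server. */

    #define MAX_SERVERS 64

    struct io_request {
        int server_id;      /* which server issued the request over the I/O bus */
        const char *payload;
    };

    /* server_id -> physical HBA index, populated when servers are provisioned */
    static int hba_for_server[MAX_SERVERS];

    static void hba_transmit(int hba_index, const struct io_request *req)
    {
        printf("HBA %d transmits request from server %d: %s\n",
               hba_index, req->server_id, req->payload);
    }

    static void rv_switch_dispatch(const struct io_request *req)
    {
        int hba_index = hba_for_server[req->server_id];
        hba_transmit(hba_index, req);
    }

    int main(void)
    {
        hba_for_server[0] = 1;           /* first server mapped to HBA 1 */
        struct io_request req = { 0, "write block 42" };
        rv_switch_dispatch(&req);
        return 0;
    }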


Access to resources such as I/O and hardware acceleration resources remains at the bus level. Any mechanism allowing interconnection of components in a computer system is referred to herein as a bus. Examples of buses include PCI, PCI Express, VESA Local Bus (VLB), PCMCIA, and AGP. For example, master components (e.g. processors) initiate transactions such as read and write transactions over buses with slave components (e.g. memory) that respond to the read and write requests. Buses in a server are typically associated with a memory space to allow for use of the read and write transactions. Any device having one or more processors that are able to access a shared memory address space is referred to herein as a server, computer, or computing system.


In one example, a server includes multiple processors that can all access a shared virtual or physical memory space. Although each processor may own separate cache lines, each processor has access to memory lines in the memory address space. A server or computing system generally includes one or more processors, memory, as well as other peripheral components and peripheral interfaces such as host bus adapters (HBAs), hardware accelerators, network interface cards (NIC), graphics accelerators, disks, etc. A processor can communicate with a variety of entities including a storage area network.


According to various embodiments, HBAs are included in a resource virtualization switch connected to multiple servers using a bus interface such as PCI Express. The bus interface provides a low latency, high bandwidth connection between the multiple servers and the storage HBA in the resource virtualization switch. The resource virtualization switch aggregates several server memories into a unified memory or an aggregated memory address view to a storage area network controller and this enables the sharing of a physical storage HBA among several servers.


In one embodiment, buffers associated with the resource virtualization switch are provided to hide PCI Express latency while extending and adapting storage HBA access patterns to the PCI Express fabric. The SCSI layer of the multiple servers, including target discovery, is completely decoupled from the storage HBA. Targets discovered from the storage HBA are controlled and discovered by the resource virtualization switch. This enables multiplexing several SCSI initiators from different servers onto a single storage HBA. According to various embodiments, the resource virtualization switch allows on-the-fly addition, deletion, and adjustment of virtual HBA bandwidth allocated to each server. For example, a single 4G HBA can be split into 2G, 1G, and 1G and allocated to three separate servers.
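The 4G split into 2G, 1G, and 1G can be pictured as a simple admission check when virtual HBA shares are carved out of a physical HBA. The structures and limits below are assumptions chosen for illustration only.

    #include <stdio.h>

    /* Hypothetical sketch of splitting one physical HBA's bandwidth among
     * virtual HBAs, as in the 4G = 2G + 1G + 1G example. */

    #define MAX_VHBAS 8

    struct physical_hba {
        unsigned capacity_mbps;               /* e.g. 4000 for a 4G HBA  */
        unsigned allocated_mbps;              /* sum of vHBA allocations */
        unsigned vhba_share_mbps[MAX_VHBAS];
        int      vhba_count;
    };

    /* Returns the vHBA index on success, -1 if the HBA is oversubscribed. */
    static int vhba_allocate(struct physical_hba *hba, unsigned share_mbps)
    {
        if (hba->vhba_count == MAX_VHBAS ||
            hba->allocated_mbps + share_mbps > hba->capacity_mbps)
            return -1;
        hba->vhba_share_mbps[hba->vhba_count] = share_mbps;
        hba->allocated_mbps += share_mbps;
        return hba->vhba_count++;
    }

    int main(void)
    {
        struct physical_hba hba = { .capacity_mbps = 4000 };
        printf("server A -> vHBA %d (2G)\n", vhba_allocate(&hba, 2000));
        printf("server B -> vHBA %d (1G)\n", vhba_allocate(&hba, 1000));
        printf("server C -> vHBA %d (1G)\n", vhba_allocate(&hba, 1000));
        printf("server D -> vHBA %d (rejected: HBA full)\n", vhba_allocate(&hba, 1000));
        return 0;
    }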


An administrator can provision and partition resources at the resource virtualization switch based on particular needs and requirements. Quality of service (QoS) and traffic engineering schemes can be implemented at the bus level. In a conventional architecture, QoS and traffic engineering are available only at the network level and not at the bus level. Traffic associated with particular devices or servers can be given priority or guaranteed bandwidth. The total amount of resources can be decreased while increasing resource utilization. The resource virtualization mechanism can be introduced into existing server racks and farms with little disruption to system operation.



FIG. 1 is a diagrammatic representation showing a conventional implementation for connecting servers to a storage area network. According to various embodiments, the storage area network is implemented using a fibre channel fabric. Server 101 includes a processor 103, memory 105, and HBA 107. The processor 103 communicates with other components and interfaces in the system using an I/O bus and associated I/O controllers. In typical implementations, communications between components and interfaces in server 101 occur over an I/O bus such as PCI. Server 111 includes processors 113 and 117, memory 115, and HBA 119. Communication within server 111 similarly occurs over one or more I/O buses. Server 121 includes a processor 123, memory 125, and HBA 129. In order to allow connection with a storage area network through a switch 141, HBAs 107, 119, and 129 are provided. In one example, a processor 103 is configured to drive HBA 107 to initiate conventional fibre channel fabric login (flogi) and port login (plogi) processes to connect to a switch 141. Similarly, processors 113 and 117, and processor 123 are configured to drive HBAs 119 and 129 to initiate the flogi and plogi protocols. During the login processes, parameters and other information may be exchanged with the storage area network and other storage area network connected ports.


The various HBAs 107, 119, and 129 are also assigned port world wide names (pwwns) and fibre channel identifiers (fc_ids). Each HBA encapsulates data into fibre channel frames for transmission to the fibre channel switch 141. Encapsulation may involve adding appropriate fibre channel headers and addresses. Each HBA is also configured to remove fibre channel headers and addresses and provide data to an associated processor over a system bus when fibre channel frames are received from the fabric.


To provide for reliability, servers 101, 111, and 121 may include multiple HBAs to allow effective switchover in the event one HBA fails. Furthermore, many servers may have redundant lines physically connecting the various HBAs to the fibre channel switch 141. Multiple fibre channel switch ports are also required. The resource allocation and system management inefficiencies are magnified by the physical complexities of routing redundant lines. Although only HBAs are noted, each server 101, 111, and 121 may also include network interface cards and hardware accelerators.



FIG. 2 is a diagrammatic representation showing separate servers connected to a resource virtualization switch 251. Server 201 includes processor 203 and memory 205. Server 211 includes processors 213 and 217 and memory 215. Server 221 includes only processor 223 and memory 225. Components and peripherals in each server 201, 211, and 221 are connected using one or more I/O buses. According to various embodiments, the I/O bus is extended to allow interconnection with other servers and external entities through an I/O bus interconnect such as an I/O bus switch 241. In one example, server 201 no longer uses addresses such as port world wide names (pwwns) associated with an HBA or media access control (MAC) addresses associated with a NIC to communicate with other servers and external networks, but each server is instead configured to communicate with a resource virtualization switch 251 using an I/O bus switch 241.


An I/O bus switch 241 may be a standalone entity, integrated within a particular server, or provided with a resource virtualization switch 251. According to various embodiments, components such as HBA 253, NIC 255, and hardware accelerator 257 can be offloaded from servers 201, 211, and 221 onto a resource virtualization switch 251. The resources including NIC 243 and HBA 245 are maintained in a shared and virtualized manner on a resource virtualization switch 251. Links can be provided between the resource virtualization switch and external switches such as network switch 261. According to various embodiments, the resource virtualization switch 251 includes control logic that drives an HBA 253 to initiate flogi and plogi processes independently from servers 201, 211, and 221. In some instances, flogi and plogi processes may be implemented by a resource virtualization switch 251 control plane even before any servers 201, 211, and 221 are connected to the resource virtualization switch.


According to various embodiments, a series of servers is connected to the resource virtualization switch using a PCI Express bus architecture. In some cases, a PCI Express bridge is used to increase compatibility with some existing systems. However, a PCI Express bridge is not necessarily needed. By using a resource virtualization switch, the number of resources and links can be significantly reduced while increasing allocation efficiency.



FIG. 3 is a diagrammatic representation showing separate servers each associated with a memory address space. According to various embodiments, server 301 includes a memory address space 303 with kernel memory 305 and application memory 307. The memory address space 303 may be a physical memory address space or a virtual memory address space. Server 301 may include one or more processors with access to the memory address space. Server 311 includes a memory address space 313 with kernel memory 315 and application memory 317. The memory address space 313 may be a physical memory address space or a virtual memory address space. Server 311 may include one or more processors with access to the memory address space. Server 321 includes a memory address space 323 with kernel memory 325 and application memory 327. The memory address space 323 may be a physical memory address space or a virtual memory address space. Server 321 may include one or more processors with access to the memory address space.


According to various embodiments, the separate servers 301, 311, and 321 are connected to a resource virtualization switch using an I/O bus. In one embodiment, an I/O bus interconnect 351 such as an I/O bus switch is used to connect the separate servers to external entities such as a storage area network. The I/O bus interconnect 351 is associated with logic that allows aggregation of the memory address spaces 303, 313, and 323. Any logical address space that includes the memory address spaces of multiple computer systems or servers is referred to herein as an aggregated memory address space. In one embodiment, an aggregated memory address space is managed by an I/O bus switch or by a resource virtualization switch.
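One way to picture an aggregated memory address space is as a set of per-server address windows inside a single logical space, so that the address of a bus transaction identifies the originating server. The C sketch below uses hypothetical window sizes and server identifiers; it is an illustration of the idea, not the patent's mechanism.

    #include <stdio.h>

    /* Hypothetical aggregated memory address space: each server's exported
     * memory region is a window in one logical space, so a transaction's
     * address identifies the originating server. */

    struct addr_window {
        unsigned long long base;
        unsigned long long size;
        int server_id;
    };

    static const struct addr_window windows[] = {
        { 0x00000000ULL, 0x40000000ULL, 301 },  /* server 301: 1 GB window */
        { 0x40000000ULL, 0x40000000ULL, 311 },  /* server 311 */
        { 0x80000000ULL, 0x40000000ULL, 321 },  /* server 321 */
    };

    static int server_for_address(unsigned long long addr)
    {
        for (unsigned i = 0; i < sizeof(windows) / sizeof(windows[0]); i++)
            if (addr >= windows[i].base && addr < windows[i].base + windows[i].size)
                return windows[i].server_id;
        return -1;
    }

    int main(void)
    {
        printf("address 0x48000000 belongs to server %d\n",
               server_for_address(0x48000000ULL));
        return 0;
    }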


When a transaction occurs in a memory address space 313, the resource virtualization switch can identify the transaction as a server 311 transaction. The memory address space regions can be used to classify traffic. For example, data received from a server 311 in memory address space 313 can be assigned a particular fibre channel exchange identifier (OX_ID) for transmission onto a storage area network. A fibre channel exchange identifier is one conventional fibre channel parameter that can be used to distinguish traffic. When a reply to the transmission is received from the storage area network, the exchange identifier is used to determine which server the resource virtualization switch forwards the reply to. In one example, a table listing servers, memory address spaces, and fibre channel exchange identifiers is maintained by a resource virtualization switch. When a server writes a data block to a resource virtualization switch, an exchange identifier is assigned to the fibre channel frames used to transmit that data block. Reply messages with the same exchange identifier can then be appropriately forwarded to the originating server. It will be recognized that a variety of parameters other than exchange identifiers can be used to classify traffic.
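In simplified form, the table of servers and exchange identifiers described above might look like the following C sketch. The names, fixed table size, and identifier assignment scheme are assumptions made for the example.

    #include <stdio.h>

    /* Minimal sketch of an exchange-identifier-to-server table used to
     * return replies to the originating server. */

    #define MAX_EXCHANGES 256

    struct exchange_entry {
        int      in_use;
        unsigned ox_id;       /* originator exchange identifier */
        int      server_id;   /* server that originated the exchange */
    };

    static struct exchange_entry table[MAX_EXCHANGES];
    static unsigned next_ox_id = 1;

    /* Called when a server writes a data block to the switch: pick an OX_ID
     * and remember which server it belongs to. */
    static unsigned exchange_open(int server_id)
    {
        for (int i = 0; i < MAX_EXCHANGES; i++) {
            if (!table[i].in_use) {
                table[i].in_use = 1;
                table[i].ox_id = next_ox_id++;
                table[i].server_id = server_id;
                return table[i].ox_id;
            }
        }
        return 0; /* no free exchange slot */
    }

    /* Called when a reply frame arrives from the SAN: map its OX_ID back
     * to the server the reply should be forwarded to. */
    static int exchange_lookup(unsigned ox_id)
    {
        for (int i = 0; i < MAX_EXCHANGES; i++)
            if (table[i].in_use && table[i].ox_id == ox_id)
                return table[i].server_id;
        return -1;
    }

    int main(void)
    {
        unsigned ox = exchange_open(311);   /* server 311 starts an exchange */
        printf("reply with OX_ID %u goes to server %d\n", ox, exchange_lookup(ox));
        return 0;
    }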


It should also be noted that each server 301, 311, and 321 may be embodied in separate computer cases. In other examples, each server may be embodied in a card, a blade, or even a single integrated circuit (IC) device or portion of an IC device. Techniques for performing interconnection can be implemented on one or more application specific integrated circuits (ASICs) and/or programmable logic devices (PLDs). The entire interconnection mechanism can be provided on a server, a card, a chip, or on a processor itself.



FIG. 4 is a diagrammatic representation showing one example of a software architecture using the resource virtualization switch of the present invention where a virtualized HBA is used for communication with a storage area network. A user level 411 includes application 401. The user level 411 is coupled to a kernel level 415 file system 421. Various transport layer protocols are provided, such as a SCSI high level protocol 431, a SCSI mid level protocol 441, and a SCSI low level protocol 451. In conventional implementations, a SCSI low level protocol is associated with an HBA driver that operates an HBA. However, the techniques of the present invention contemplate replacing the conventional HBA device driver with a modified device driver or a virtual device driver. Any device driver configured to drive a resource virtualization switch is referred to herein as a modified or virtual device driver. The modified or virtual device driver 451 is configured to allow kernel access to a virtual peripheral. The kernel continues to operate as though it has access to a peripheral such as an HBA included in the server. That is, the kernel may continue to operate as though the HBA can be accessed directly over the bus without using a resource virtualization switch.


However, the virtual device driver supplied is actually driving access to an I/O bus switch 461 and an associated resource virtualization switch. The I/O bus switch 461 and associated resource virtualization switch can then perform processing to determine how to handle the request to access a particular resource such as an HBA. In some examples, the resource virtualization switch can apply traffic shaping or prioritization schemes to various requests, or assign flows to particular HBAs with predetermined bandwidth.



FIG. 5 is a diagrammatic representation showing one example of a virtual HBA (VHBA) driver. Any mechanism operating a device that allows the mapping of multiple servers over an I/O bus to a single HBA device is referred to herein as a VHBA driver. When a conventional HBA card or device is connected to a computer system over a bus, a number of SCSI parameters 513 are configured for that HBA. A VHBA driver 511 keeps the same set of SCSI parameters 513 to allow a VHBA driver to operate in conventional systems. In one example, a processor in a server uses the same set of parameters and formats used for an HBA driver to operate a VHBA driver. According to various embodiments, both an HBA and a VHBA driver 511 use the same SCSI parameters 513. A scsi-reset-delay integer specifies the recovery time in milliseconds for a reset delay by either a SCSI bus or SCSI device. A scsi-options property is an integer specifying a number of options through individually defined bits, including the following:


SCSI_OPTIONS_DR—indicates whether the VHBA should grant disconnect privileges to a target device.


SCSI_OPTIONS_LINK—indicates whether the VHBA should enable linked commands.


SCSI_OPTIONS_SYNC—indicates whether the VHBA driver should negotiate synchronous data transfer and whether the driver should reject any attempt to negotiate synchronous data transfer initiated by a target.


SCSI_OPTIONS_PARITY—indicates whether the VHBA driver should run the SCSI bus with parity.


SCSI_OPTIONS_FAST—indicates whether the VHBA should operate the bus in FAST SCSI mode.


SCSI_OPTIONS_WIDE—indicates whether the VHBA should operate the bus in WIDE SCSI mode.


According to various embodiments, the VHBA adapter parameters 515 include SCSI parameters 513. Adapter parameters may include disconnect, link, synchronization, and parity. Adapter parameters allow communication with a resource virtualization switch. In one embodiment, adapter parameters also include rate, transfer rate, bus number, and slot number.
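To make the parameter discussion concrete, the following C sketch models the scsi-options bitmask and a few of the adapter parameters mentioned above. The bit positions and field names are assumptions chosen for the example rather than values taken from any particular driver.

    #include <stdio.h>

    /* Illustrative only: scsi-options is an integer whose individual bits
     * enable features; the bit positions here are assumed for the example. */

    #define SCSI_OPTIONS_DR      (1u << 0)  /* grant disconnect privileges      */
    #define SCSI_OPTIONS_LINK    (1u << 1)  /* enable linked commands           */
    #define SCSI_OPTIONS_SYNC    (1u << 2)  /* negotiate synchronous transfers  */
    #define SCSI_OPTIONS_PARITY  (1u << 3)  /* run the SCSI bus with parity     */
    #define SCSI_OPTIONS_FAST    (1u << 4)  /* operate the bus in FAST mode     */
    #define SCSI_OPTIONS_WIDE    (1u << 5)  /* operate the bus in WIDE mode     */

    struct vhba_params {
        unsigned scsi_options;      /* bitmask built from the flags above   */
        unsigned scsi_reset_delay;  /* recovery time in milliseconds        */
        unsigned transfer_rate;     /* adapter parameter: link rate (Mb/s)  */
        unsigned bus_number;
        unsigned slot_number;
    };

    int main(void)
    {
        struct vhba_params p = {
            .scsi_options     = SCSI_OPTIONS_SYNC | SCSI_OPTIONS_PARITY |
                                SCSI_OPTIONS_WIDE,
            .scsi_reset_delay = 3000,
            .transfer_rate    = 2000,
            .bus_number       = 0,
            .slot_number      = 4,
        };
        printf("parity enabled: %s\n",
               (p.scsi_options & SCSI_OPTIONS_PARITY) ? "yes" : "no");
        return 0;
    }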



FIG. 6 is a diagrammatic representation showing multiple VHBAs. According to various embodiments, servers 601, 603, 605, and 607 are connected to VHBAs 621, 623, 625, and 627, respectively, through an I/O bus switch. Virtual HBAs 621 and 623 are included in a VHBA chip coupled to HBA 631, and VHBAs 625 and 627 are included in a VHBA chip coupled to HBA 633. In one example, server 601 communicates with multiple entities in a storage area network 651 coupled to HBA 631. Any sequence of data transmissions between a source and destination in a storage area network is referred to herein as an exchange. A server 601 may be involved in multiple exchanges.


An exchange may include a set of one or more non-concurrent related sequences passing between a pair of fibre channel ports. In one embodiment, an exchange represents a conversation such as a SCSI task. Exchanges may be bidirectional and may be short or long lived. In some examples, the parties to an exchange are identified by an Originator Exchange_Identifier (OX_ID) and a Responder Exchange_Identifier (RX_ID).


The multiple exchanges from a particular server 601 are mapped to VHBA 621. According to various embodiments, each VHBA is a logical entity mapped to a particular server. Multiple VHBAs can be included in a single device. In one embodiment, a single chip includes 4 VHBAs and logic for mapping OX_IDs to particular servers. Traffic from multiple VHBAs is aggregated onto a single HBA 631. In one example, HBA 631 is a conventional HBA available from Qlogic Corporation of Aliso Viejo, Calif. or Adaptec Inc. of Milpitas, Calif. To the storage area network, HBAs 631 and 633 appear as though each is included in an individual server.


According to various embodiments, when a data sequence is received from a server 601 at a VHBA 621, the exchange identifier associated with the data sequence is mapped to server 601 and maintained in a database associated with VHBA 621. The HBA 631 then forwards the data in a fibre channel frame to a storage area network with the exchange identifier or some other parameter that can be used by the resource virtualization switch 641 to identify the originating server when a response is received from the storage area network.



FIG. 7 is a diagrammatic representation showing one example of a resource virtualization switch. Although the techniques of the present invention do not require a resource virtualization switch, a resource virtualization switch can be used to increase system functionality and efficiency. An I/O bus switch 721 is connected to multiple computer systems using I/O buses. Port adapters 731-739 are associated with multiple resources such as NICs, HBAs, sATAs, hardware accelerators, etc. The server platform 711 manages interaction between the servers connected to the I/O bus switch 721 and various resources associated with the port adapters 731-739.


The server platform 711 is associated with memory 719 and a processor subsystem 713, a power subsystem 715, and a storage subsystem 717. In some embodiments, the server platform 711 includes tables with information mapping various servers connected through the I/O bus switch 721 and various port adapter resources. The processor subsystem 713 is configured to manage port adapter resources as though the port adapters were included in individual servers. In one example, the processor subsystem 713 is configured to initiate fabric login and port login processes for HBA cards associated with a storage area network. According to various embodiments, the I/O bus switch 721 supports flexible virtual channel configuration, high availability, and dynamic port configurations. Examples of I/O bus switches include the PCI Express switch PEX 8532 available from PLX Technology, Inc. of Sunnyvale, Calif. and the PCI Express switch PES-48G available from MC Semiconductor of Agoura Hills, Calif.


The server platform 711 includes a VHBA device 741 that may be associated with one or more VHBAs mapped to particular servers. In one embodiment, the VHBA device 741 is a VHBA chip having a PCI Express interface coupled to the I/O bus switch 721 and a port adapter interface. In other examples, the VHBA chip may include an HBA port adapter and interface directly with a storage area network instead of interfacing with a conventional HBA. The VHBA chip includes classifier logic 747, a queue manager 745, and a buffer manager 743. According to various embodiments, the classifier logic 747 identifies information such as a frame's destination server and priority. The data can then be buffered in memory by buffer manager 743 and a descriptor for the data is then posted by the queue manager 745. In one embodiment, one or more queues are provided for each connected server. Additional queues may be provided to handle traffic having different levels of priority. Read, write, and control queues can also be provided. In one example, a descriptor includes parameters such as a pointer to the data in memory, a length, a source port, a multicast count, and an exchange identifier.
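A simplified view of the descriptor and per-server queue described above is sketched below in C. The field layout, queue depth, and helper names are assumptions made for illustration, not the VHBA chip's actual format.

    #include <stdio.h>

    /* Sketch of a descriptor and a per-server receive queue: the classifier
     * identifies the destination server, the buffer manager stores the data,
     * and the queue manager posts a descriptor referencing it. */

    #define QUEUE_DEPTH 16

    struct descriptor {
        void    *data;          /* pointer to the buffered frame payload  */
        unsigned length;        /* payload length in bytes                */
        unsigned source_port;   /* HBA port the frame arrived on          */
        unsigned multicast_cnt; /* number of destinations, 1 for unicast  */
        unsigned exchange_id;   /* OX_ID used to classify the frame       */
    };

    struct server_queue {
        struct descriptor ring[QUEUE_DEPTH];
        unsigned head, tail;
    };

    static int queue_post(struct server_queue *q, const struct descriptor *d)
    {
        unsigned next = (q->tail + 1) % QUEUE_DEPTH;
        if (next == q->head)
            return -1;                 /* queue full */
        q->ring[q->tail] = *d;
        q->tail = next;
        return 0;
    }

    int main(void)
    {
        static char payload[2048];
        static struct server_queue q;
        struct descriptor d = {
            .data = payload, .length = sizeof(payload),
            .source_port = 0, .multicast_cnt = 1, .exchange_id = 0x1234,
        };
        printf("descriptor posted: %d\n", queue_post(&q, &d));
        return 0;
    }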


Each individual server may also include descriptor queues. As will be appreciated, the servers connected to the I/O bus switch 721, including the resource virtualization switch, arbitrate for access to the I/O bus. When access is obtained, data can be read from memory associated with one of the servers based on the information provided in the descriptor queues.


Redundancy mechanisms are also provided to allow continued operation in the event that an HBA or other resource fails or a resource virtualization switch itself fails. FIG. 8 is a diagrammatic representation showing one technique for providing redundancy. Multipathing is a conventional mechanism for creating interface groups that allow standby or simultaneous operation of devices. In one example, a server includes multiple HBA cards, each associated with a device driver. One card may be active and the other standby, or the HBA cards may be used simultaneously to allow load balancing. However, requiring multiple HBA cards in conventional implementations can lead to device underutilization.


The techniques and mechanisms of the present invention contemplate providing multipathing using VHBAs. In one embodiment, multiple VHBA device drivers 811 and 813 are configured on a server 801. Multiple VHBA device drivers 815 and 817 are configured on server 803. The VHBA device drivers are associated with different HBAs and possibly different resource virtualization switches. In one embodiment, a server 801 includes an active VHBA driver 811 associated with resource virtualization switch 823. If the HBA in resource virtualization switch 823 fails, or the resource virtualization switch 823 itself associated with the I/O Bus switch 821 fails, the standby VHBA driver 813 associated with I/O bus switch 831 and resource virtualization switch 833 can take over operation. Switchover can occur after a period of inactivity or after failure to receive heartbeat indicators. Existing multipathing mechanisms can be used to provide for HBA redundancy and failover capabilities by using VHBA device drivers and resource virtualization switches.
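The active/standby switchover described above can be pictured as a small heartbeat-driven state machine. The threshold, names, and structure below are assumptions for illustration, not the patent's mechanism.

    #include <stdio.h>

    /* Hedged sketch of VHBA multipathing: one active and one standby path,
     * failing over when heartbeats from the active path stop. */

    #define HEARTBEAT_MISS_LIMIT 3

    struct vhba_path {
        const char *name;        /* e.g. which resource virtualization switch */
        int missed_heartbeats;
    };

    struct multipath_group {
        struct vhba_path *active;
        struct vhba_path *standby;
    };

    /* Called periodically; 'alive' reports whether a heartbeat arrived from
     * the active path during the last interval. */
    static void multipath_tick(struct multipath_group *g, int alive)
    {
        if (alive) {
            g->active->missed_heartbeats = 0;
            return;
        }
        if (++g->active->missed_heartbeats >= HEARTBEAT_MISS_LIMIT) {
            struct vhba_path *tmp = g->active;   /* switch over to standby */
            g->active = g->standby;
            g->standby = tmp;
            g->active->missed_heartbeats = 0;
            printf("failover: now using %s\n", g->active->name);
        }
    }

    int main(void)
    {
        struct vhba_path a = { "rv-switch-823", 0 }, b = { "rv-switch-833", 0 };
        struct multipath_group g = { &a, &b };
        for (int i = 0; i < 4; i++)
            multipath_tick(&g, 0);     /* simulate loss of heartbeats */
        return 0;
    }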



FIG. 9 is a flow process diagram showing one technique for initializing HBAs at a resource virtualization switch. At 901, the control processor initiates port and fabric login processes for multiple HBAs included in a resource virtualization switch. Various name server operations can also be initiated to allow recognition of the HBAs by the storage area network. In some examples, the HBAs each have port world wide names and fibre channel identifiers that allow other entities in a storage area network to communicate with the HBAs. According to various embodiments, entities in a storage area network see individual HBAs as associated with individual servers. At 903, information is received from servers over an I/O bus such as a PCI Express bus. At 905, servers identified over the I/O bus are mapped to individual VHBAs. According to various embodiments, multiple VHBAs are included in VHBA chips. At 907, one or more VHBAs are mapped to individual HBAs. For example, four VHBAs mapped to four servers connected over an I/O bus are configured to share a single physical HBA.
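The initialization sequence of FIG. 9 can be outlined, under assumed helper names and a fixed two-VHBAs-per-HBA mapping, in the short C sketch below.

    #include <stdio.h>

    /* Illustrative outline of the initialization flow: log the physical HBAs
     * into the fabric, map discovered servers to VHBAs, and map groups of
     * VHBAs onto shared physical HBAs. */

    #define NUM_HBAS    2
    #define NUM_SERVERS 4

    static void hba_fabric_login(int hba)  { printf("HBA %d: flogi/plogi done\n", hba); }

    int main(void)
    {
        int vhba_for_server[NUM_SERVERS];
        int hba_for_vhba[NUM_SERVERS];

        /* 901: control processor initiates fabric and port login per HBA */
        for (int h = 0; h < NUM_HBAS; h++)
            hba_fabric_login(h);

        /* 903/905: servers identified over the I/O bus are each given a VHBA */
        for (int s = 0; s < NUM_SERVERS; s++)
            vhba_for_server[s] = s;

        /* 907: VHBAs share physical HBAs, e.g. two VHBAs per HBA here */
        for (int s = 0; s < NUM_SERVERS; s++) {
            hba_for_vhba[vhba_for_server[s]] = s / 2;
            printf("server %d -> VHBA %d -> HBA %d\n",
                   s, vhba_for_server[s], hba_for_vhba[vhba_for_server[s]]);
        }
        return 0;
    }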


At 911, a resource virtualization switch receives data from individual servers over the I/O bus. According to various embodiments, data is received after a resource virtualization switch obtains access to the I/O bus and reads a descriptor referencing the data to be transferred. In some examples, an exchange identifier is determined at 913. An exchange identifier may specify a particular conversation between a source server and a destination entity in the storage area network. At 915, an exchange identifier to server mapping is maintained. In typical instances, the exchange identifier is maintained to allow return traffic to be routed to the appropriate server.



FIG. 10 is a flow process diagram showing a technique for receiving frames from a storage area network. At 1001, a frame is received at an HBA. According to various embodiments, the frame includes a port world wide name, a fibre channel identifier, and exchange information. At 1003, the frame is classified based on the exchange identifier. According to various embodiments, the frame is classified using VHBA classifier logic. The frame may also be classified based on priority or other parameters. At 1005, the destination server is determined based on the exchange identifier. In some examples, information mapping exchange identifiers to corresponding servers is maintained by the resource virtualization switch. At 1007, the frame may be buffered. At 1009, a descriptor referencing data included in the frame is posted. In one embodiment, the descriptor is posted in a queue associated with the destination server. In other examples, the descriptor is posted in a queue associated with the priority of the data and the destination server. When the destination server is able to obtain access to the I/O bus, the descriptor and the referenced data are read into the memory of the destination server.
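The receive path of FIG. 10 can be tied together in a compact C sketch; the lookup and posting helpers are hypothetical stand-ins for the classifier, buffer manager, and queue manager described earlier.

    #include <stdio.h>

    /* Compact sketch of the receive path: classify the frame by exchange
     * identifier, find the destination server, and post a descriptor to
     * that server's queue. */

    struct fc_frame { unsigned ox_id; unsigned length; const void *payload; };

    static int  lookup_server(unsigned ox_id)        { return ox_id == 0x1234 ? 311 : -1; }
    static void post_descriptor(int server, const struct fc_frame *f)
    {
        printf("posted %u-byte frame (OX_ID 0x%x) to server %d's queue\n",
               f->length, f->ox_id, server);
    }

    static void hba_receive(const struct fc_frame *f)
    {
        int server = lookup_server(f->ox_id);   /* 1003/1005: classify, resolve */
        if (server < 0)
            return;                              /* no owner: drop the frame     */
        post_descriptor(server, f);              /* 1007/1009: buffer and post   */
    }

    int main(void)
    {
        struct fc_frame f = { 0x1234, 512, "data" };
        hba_receive(&f);
        return 0;
    }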


In addition, although exemplary techniques and devices are described, the above-described embodiments may be implemented in a variety of manners. For instance, instructions and data for implementing the above-described invention may be stored on fixed or portable storage media. Hardware used to implement various techniques may be embodied as racks, cards, integrated circuit devices, or portions of semiconductor chips. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.

Claims
  • 1. A resource virtualization switch coupled to a storage area network, the resource virtualization switch comprising: a processor; and a memory coupled with and readable by the processor and having stored therein a set of instructions which, when executed by the processor, causes the processor to implement: a plurality of port adapters including at least a first port adapter and a second port adapter, the plurality of port adapters connected to a storage area network, the storage area network including a plurality of storage area network ports associated with storage area network switches; an I/O bus switch providing an interconnection between a plurality of servers and between the plurality of servers and the storage area network, the I/O bus switch directly connected to a PCI Express bus of each of the plurality of servers without use of an Host Bus Adapter (HBA) between the I/O bus switch and the PCI Express bus of each of the plurality of servers, the plurality of servers including at least a first server and a second server; and a resource virtualization switch platform operable to map communications from the first server and the second server onto the first port adapter, wherein the resource virtualization switch platform comprises a virtual host bus adapter device including a plurality of virtual host bus adapters, wherein the virtual host bus adapter device assigns a virtual host bus adapter to each of the plurality of servers coupled to the I/O bus switch and aggregates traffic from the plurality of virtual host bus adapters onto the first port adapter.
  • 2. The resource virtualization switch of claim 1, wherein the port adapters are Host Bus Adapters (HBAs).
  • 3. The resource virtualization switch of claim 2, wherein the first server includes a first virtual HBA driver.
  • 4. The resource virtualization switch of claim 3, wherein the first server further includes a second virtual HBA driver.
  • 5. The resource virtualization switch of claim 4, wherein the first and second virtual HBA drivers are used for redundancy and load sharing.
  • 6. The resource virtualization switch of claim 4, wherein the first and second virtual HBA drivers are used for multipathing.
  • 7. The resource virtualization switch of claim 6, wherein the first and second virtual HBA drivers are coupled to different resource virtualization switches.
  • 8. The resource virtualization switch of claim 3, wherein the first virtual HBA driver comprises a Small Computer Systems Interface (SCSI) and a resource virtualization switch interface.
  • 9. The resource virtualization switch of claim 1, wherein the first port adapter is coupled to a first storage area network port and the second port adapter is coupled to a second storage area network port.
  • 10. The resource virtualization switch of claim 1, wherein the I/O bus switch is a PCI Express switch.
  • 11. The resource virtualization switch of claim 10, wherein the communications from the first server and second server are mapped onto the first port adapter dynamically.
  • 12. The resource virtualization switch of claim 11, wherein communications from the first port adapter received from the storage area network are transmitted to the first server or the second server based on fibre channel frame information.
  • 13. The resource virtualization switch of claim 12, wherein communications from the first port adapter received from the storage area network are transmitted to the first server or the second server based on fibre channel exchange identifiers.
  • 14. The method of claim 1, wherein the resource virtualization switch is capable of implementing quality of service on a bus level.
  • 15. A method for transmitting data, comprising: receiving data from a plurality of servers including at least a first server and a second server, the data received by an I/O bus switch of a resource virtualization switch over a PCI Express bus of each of the plurality of servers, the I/O bus switch providing an interconnection between the plurality of servers and between the plurality of servers and a storage area network, the I/O bus switch directly connected to the PCI Express bus of each of the plurality of servers without use of an Host Bus Adapter (HBA) between the I/O bus switch and the PCI Express bus of each of the plurality of servers; associating data received from the first server and the second server with a first port adapter at the resource virtualization switch, the first port adapter connected to the storage area network switch, wherein the resource virtualization switch comprises a virtual host bus adapter device including a plurality of virtual host bus adapters, wherein the virtual host bus adapter device assigns a virtual host bus adapter to each of the plurality of servers coupled to the I/O bus connection and aggregates data from the plurality of virtual host bus adapters onto the first port adapter; and transmitting the data to the storage area network switch using the first port adapter.
  • 16. The method of claim 15, wherein the resource virtualization switch includes a plurality of port adapters including the first port adapter and a second port adapter.
  • 17. The method of claim 16, wherein the plurality of port adapters are Host Bus Adapters (HBAs).
  • 18. The method of claim 17, wherein the first server includes a first virtual HBA driver.
  • 19. The method of claim 18, wherein the first server further includes a second virtual HBA driver.
  • 20. The method of claim 19, wherein the first and second virtual HBA drivers are used for redundancy and load sharing.
  • 21. The method of claim 19, wherein the first and second virtual HBA drivers are used for multipathing.
  • 22. The method of claim 21, wherein the first and second virtual HBA drivers are coupled to different resource virtualization switches.
  • 23. The method of claim 18, wherein the first virtual HBA driver comprises a Small Computer Systems Interface (SCSI) and a resource virtualization switch interface.
  • 24. The method of claim 15, wherein the first port adapter is coupled to a first fibre channel port and a second port adapter is coupled to a second fibre channel port.
  • 25. The method of claim 15, wherein the I/O bus switch is a PCI Express switch.
  • 26. The method of claim 25, wherein the communications from the first server and second server are mapped onto the first port adapter dynamically.
  • 27. The method of claim 26, wherein communications from the first port adapter received from the storage area network are transmitted to the first server or the second server based on fibre channel frame information.
  • 28. The method of claim 27, wherein communications from the first port adapter received from the storage area network are transmitted to the first server or the second server based on fibre channel exchange identifiers.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority from U.S. Provisional Patent Application No. 60/590,450 titled METHODS AND APPARATUS FOR RESOURCE VIRTUALIZATION, filed on Jul. 22, 2004 by Shreyas Shah, Subramanian Vinod, R. K. Anand, and Ashok Krishnamurthi, the entirety of which is incorporated by reference for all purposes.

Provisional Applications (1)
Number Date Country
60590450 Jul 2004 US