Connected input/output hub management

Information

  • Patent Grant
  • Patent Number
    8,683,108
  • Date Filed
    Wednesday, June 23, 2010
  • Date Issued
    Tuesday, March 25, 2014
Abstract
A method for implementing connected input/output (I/O) hub configuration and management includes configuring a first I/O hub in wrap mode with a second I/O hub. The hubs are communicatively coupled via a wrap cable. The method further includes generating data traffic on a computing subsystem that includes the hubs. Generating traffic includes: converting, via the first hub, a request to implement a transaction into an I/O device-readable (IODR) request packet and transmitting the request packet over the wrap cable; converting, via the second hub, the IODR request packet into a system readable request and transmitting the request over a system bus; receiving, at the second hub, a response to the system readable request; converting, via the second hub, the response to an IODR response packet, and transmitting the response packet over the wrap cable; and converting, via the first hub, the IODR response packet into a system readable response packet, and transmitting the response packet over the system bus.
Description
BACKGROUND

This invention relates generally to input/output (I/O) hub interfacing within a computing environment, and more particularly to connected I/O hub management.


During the manufacture of systems composed of multiple frames or subsystems (e.g., central electronics complex (CEC) and input/output (I/O) frames), it is beneficial to be able to test the functionality and external connectivity of the system's main computing subsystem (e.g., CEC frame) in a stand-alone configuration without being connected to other subsystems (e.g., I/O frames). Advantages of a stand-alone test include reduced hardware cost and, with fewer components involved, improved fault isolation.


Most of the capabilities of a CEC frame (e.g., CPU, memory, and nest capabilities) can be tested in a stand-alone configuration, but much of the functionality in the I/O hub chip (which is also a part of the CEC frame) requires capabilities typically provided only by components in an I/O frame or subsystem. A nest component of a CEC frame includes various chips, such as system control chips, cache chips, and memory storage controller chips.


One comprehensive solution for testing the I/O hub of a CEC frame is to abandon the philosophy of a stand-alone configuration and attach an I/O frame; however, this is also the most expensive solution. Many early-life failures of I/O-related components in a CEC frame can be exposed without requiring the functionality of a full I/O frame. Also, the increased number of components involved in the system test reduces the isolation capabilities of I/O-hub-focused tests.


Another solution is the connection of special test vehicles to the CEC frame I/O ports. These units are typically much less expensive than a full I/O frame and are designed to provide specific test functionality. This can be an effective method for testing I/O functionality of a CEC during (semi) stand-alone testing. However, similar drawbacks as those described above regarding a full I/O frame connection also apply to using a test vehicle; that is, there is expense in acquiring and potentially developing these test vehicles. Also, when attempting to isolate a fault, the test vehicles themselves need to be considered as a possible source of failures.


SUMMARY

An exemplary embodiment includes a method for implementing connected input/output (I/O) hub configuration and management. The method includes configuring a first I/O hub in wrap mode with a second I/O hub. The first and second I/O hubs are communicatively coupled to one another via a wrap cable. The method also includes generating data traffic on a main computing subsystem that includes the first and second I/O hubs. Generating the data traffic includes: converting, by the first I/O hub, a request to implement a transaction to an I/O device-readable request packet and transmitting the I/O device-readable request packet over the wrap cable; converting, by the second I/O hub, the I/O device-readable request packet into a system readable request and transmitting the system readable request over a system bus, the system readable request addressed to a system memory; receiving, by the second I/O hub, a response to the system readable request from the system memory, converting the response to an I/O device-readable response packet, and transmitting the I/O device-readable response packet over the wrap cable; and converting, by the first I/O hub, the I/O device-readable response packet into a system readable response packet, and transmitting the system readable response packet over the system bus to a processor that initiated the request to implement a transaction.


A further exemplary embodiment includes a system for implementing connected input/output (I/O) hub configuration and management. The system includes a first I/O hub, a second I/O hub communicatively coupled to the first I/O hub via a wrap cable, and a processor in communication with the first and second I/O hubs. The processor and the first and second I/O hubs implement a method. The method includes configuring the first I/O hub in wrap mode with the second I/O hub, and generating data traffic on a main computing subsystem that includes the processor and the first and second I/O hubs. Generating the data traffic includes: converting, by the first I/O hub, a request to implement a transaction to an I/O device-readable request packet and transmitting the I/O device-readable request packet over the wrap cable; converting, by the second I/O hub, the I/O device-readable request packet into a system readable request and transmitting the system readable request over a system bus, the system readable request addressed to a system memory of the main computing subsystem; receiving, by the second I/O hub, a response to the system readable request from the system memory, converting the response to an I/O device-readable response packet, and transmitting the I/O device-readable response packet over the wrap cable; and converting, by the first I/O hub, the I/O device-readable response packet into a system readable response packet, and transmitting the system readable response packet over the system bus to a processor that initiated the request to implement a transaction.


An additional exemplary embodiment includes a computer program product for implementing connected input/output (I/O) hub configuration and management. The computer program product includes a non-transitory storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes configuring a first I/O hub in wrap mode with a second I/O hub. The first and second I/O hubs are communicatively coupled to one another via a wrap cable. The method also includes generating data traffic on a main computing subsystem that includes the first and second I/O hubs. Generating the data traffic includes: converting, via the first I/O hub, a request to implement a transaction to an I/O device-readable request packet and transmitting the I/O device-readable request packet over the wrap cable; converting, via the second I/O hub, the I/O device-readable request packet into a system readable request and transmitting the system readable request over a system bus, the system readable request addressed to a system memory; in response to receiving, at the second I/O hub, a response to the system readable request from the system memory, converting, via the second I/O hub, the response to an I/O device-readable response packet, and transmitting the I/O device-readable response packet over the wrap cable; and converting, via the first I/O hub, the I/O device-readable response packet into a system readable response packet, and transmitting the system readable response packet over the system bus to a processor that initiated the request to implement a transaction.


Additional features and advantages are realized through the techniques of the present invention. Other embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed invention. For a better understanding of the invention with advantages and features, refer to the description and to the drawings.





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

Referring now to the drawings wherein like elements are numbered alike in the several FIGURES:



FIG. 1 depicts a block diagram of a computer system that may be implemented in an exemplary embodiment;



FIG. 2 is a flow diagram describing a process for configuring and testing I/O functionality of an I/O hub-enabled subsystem in an exemplary embodiment;



FIG. 3 depicts an I/O definition file in an exemplary embodiment; and



FIG. 4 is a computer program product that may be implemented in an exemplary embodiment.





DETAILED DESCRIPTION

Exemplary embodiments of the present invention provide a means for controlling I/O hub ports such that when they are cabled together (e.g., using a wrap plug and/or wrap cable), a program may easily generate high-level transport layer traffic on the I/O hub ports' cabling, thereby providing the ability to perform I/O functionality testing without the need for an I/O frame interconnect. This transport layer traffic tests a broad scope of functionality within I/O hub chips and, as an additional benefit, provides a means of communication between logical partitions. A wrap plug connects a given port's outputs and inputs together, and a wrap cable connects the inputs and outputs of separate ports together.


In an exemplary embodiment, an I/O hub is configured to enable outgoing packets to be interpreted as incoming packets. Thus, when I/O ports of an I/O hub-enabled subsystem (e.g., CEC frame) are connected using a wrap plug or wrap cable, a broad scope of I/O functionality can be tested during a CEC stand-alone test. Using this wrap capability during manufacturing testing eliminates the cost of the required test fixtures (i.e., a test I/O frame or test vehicles) and minimizes the number of sources for a failure, thus improving fault isolation.
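The wrap concept described above can be illustrated with a minimal sketch. The class and function names below are our own (they do not appear in the patent), and the dictionaries stand in for real packets: a wrap cable simply delivers one port's outgoing packets into the other port's inbound path.

```python
# Minimal sketch of the wrap concept: two I/O hub ports joined so that
# one port's outgoing packets become the other port's incoming packets.
# All names and packet layouts here are illustrative, not the patented design.

class HubPort:
    def __init__(self, name):
        self.name = name
        self.peer = None          # the port at the other end of the wrap cable
        self.inbox = []           # packets received on this port

    def transmit(self, packet):
        # With a wrap cable attached, outgoing traffic is delivered
        # directly to the connected port's inbox.
        self.peer.inbox.append(packet)

def connect_wrap_cable(port_a, port_b):
    port_a.peer, port_b.peer = port_b, port_a

a, b = HubPort("113A"), HubPort("113B")
connect_wrap_cable(a, b)
a.transmit({"type": "memory-read"})
print(b.inbox)   # the packet sent by port A arrives at port B
```

A wrap plug would be the degenerate case in which a single port's `peer` is itself.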


In an exemplary embodiment, a system is configured with wrap cables on the downbound (i.e., endpoint-connected) ports of an I/O switch as part of the I/O frame manufacturing test. The wrap configuration may also be applied in the field to assist with problem diagnostics and fault isolation for the subsystems (e.g., CEC frame and I/O frame).


Additionally, the exemplary embodiments provide a low-overhead method for using a communication link established over the wrap cable as a channel between logical partitions of a system.


Turning now to FIG. 1, a computer system 100 that may be implemented by an exemplary embodiment of the present invention will now be described. In an embodiment, the computer system 100 is a System z® server offered by International Business Machines Corporation (IBM®). System z is based on the z/Architecture® offered by IBM. Details regarding the z/Architecture are described in an IBM publication entitled, “z/Architecture Principles of Operation,” IBM Publication No. SA22-7832-07, February 2009, which is hereby incorporated herein by reference in its entirety.


In many I/O subsystem infrastructures, one or more I/O hubs interface between a system memory nest and an I/O interconnection network (e.g., I/O frame). For large systems, the I/O infrastructure might be split across one or more frames physically separate from a main computing subsystem (e.g., CEC frame), connected via high-speed cabling.


As described in an exemplary embodiment herein, I/O hub port controls and testing processes are implemented using Peripheral Component Interconnect Express (PCIe) protocols. PCIe is a component level interconnect standard that defines a bi-directional communication protocol for transactions between I/O adapters and host systems. PCIe communications are encapsulated in packets according to the PCIe standard for transmission on a PCIe bus. Transactions originating at I/O adapters and ending at host systems are referred to as upbound transactions. Transactions originating at host systems and terminating at I/O adapters are referred to as downbound transactions. The PCIe topology is based on point-to-point unidirectional links that are paired (e.g., one upbound link, one downbound link) to form the PCIe bus. The PCIe standard is maintained and published by the Peripheral Component Interconnect Special Interest Group (PCI-SIG).
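The upbound/downbound terminology above can be captured in a few lines. This sketch uses names of our own choosing (the enum and function are not part of the PCIe specification) and models only the direction classification, not packet encapsulation.

```python
# Illustrative sketch of the transaction-direction terminology:
# upbound = adapter -> host, downbound = host -> adapter.
# The enum and function names are ours, not taken from the PCIe standard.

from enum import Enum

class Direction(Enum):
    UPBOUND = "adapter-to-host"
    DOWNBOUND = "host-to-adapter"

def classify(origin, destination):
    if origin == "adapter" and destination == "host":
        return Direction.UPBOUND
    if origin == "host" and destination == "adapter":
        return Direction.DOWNBOUND
    raise ValueError("other routes are not modeled in this sketch")

print(classify("adapter", "host"))   # Direction.UPBOUND
```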


In an exemplary embodiment, computer system 100 includes one or more central processing units (CPUs) 102 coupled to a system memory 104 via nest 106. The nest 106 may include elements such as a memory controller, cache chips, and system controller, to name a few. The system memory 104, CPU(s) 102 and nest 106 collectively form a host 124, as depicted in FIG. 1.


In an embodiment, a host bus 120 connects one or more I/O hubs (e.g., I/O hubs 112A and 112B) to the nest 106. The host 124, host bus 120 and I/O hubs 112A/112B collectively form a main computing subsystem 150 of the computer system 100. In the System z architecture, this subsystem 150 is referred to as a central electronics complex (CEC).


As shown in FIG. 1, the computer system 100 may be developed for implementation with an I/O infrastructure (shown in FIG. 1 as I/O subsystem or frame 160). The subsystem 150 communicates with the I/O subsystem 160 over I/O communication connections (or I/O bus) 126. The I/O subsystem 160 may include one or more I/O switches 114 that direct communications (data traffic) between I/O devices (e.g., I/O devices 110A and 110B) and I/O hubs 112A and 112B.


When the subsystem 150 is operably coupled to the I/O subsystem 160, a variety of transactions may be generated and executed among the subsystems 150 and 160. For example, system memory 104 may be accessed when a CPU 102 or an I/O device 110A/110B issues a memory request (read or write) that includes an address used to access the system memory 104. The address included in the memory request is typically not directly usable to access system memory 104, and therefore, it requires translation to an address that is directly usable in accessing system memory 104. In an embodiment, the address is translated via an address translation mechanism (ATM), which may be implemented via the CPU 102 (not shown). In an embodiment, the ATM translates the address from a virtual address to a real or absolute address using, for example, dynamic address translation (DAT). The memory request, including the translated address, is received by the nest 106. In an embodiment, the nest 106 maintains consistency in the system memory 104 and arbitrates between requests for access to the system memory 104 from hardware and software components including, but not limited to, the CPUs 102 and the I/O devices 110A/110B.
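The translation step above can be sketched as follows. A real DAT walk uses multi-level region, segment, and page tables; here a single dictionary keyed by page number stands in for that machinery, and all values are made-up examples.

```python
# Hedged sketch of the translation step: a memory request carries a
# virtual address that must be mapped to a real address before the nest
# can service it. A dictionary replaces the multi-level DAT tables.

PAGE_SIZE = 4096
page_table = {0x10: 0x80, 0x11: 0x81}   # virtual page -> real page (example values)

def translate(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    real_page = page_table[page]         # a KeyError models a translation fault
    return real_page * PAGE_SIZE + offset

print(hex(translate(0x10_004)))   # -> 0x80004
```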


When the subsystem 150 is operably coupled to the subsystem 160, the I/O devices 110A/110B perform one or more I/O functions (e.g., functions 111A/111B). A memory request that is sent from one of the CPUs 102 the to the I/O devices 110A/110B (i.e., a downbound memory request) is first routed to an I/O hub (e.g., one of hubs 112A/112B), which is connected to an I/O bus 126. The memory request is then sent from the I/O bus 126 to one of the I/O devices 110A/110B via one or more I/O switches 114. The I/O bus 126 and I/O switches 114 may communicate in a standard PCIe format as is known in the art.


In an exemplary embodiment, the nest 106 includes one or more state machines, or logic (not shown) for interpreting and transmitting I/O operations, bi-directionally, between the host system 124 and the I/O devices 110A and 110B. The I/O hubs 112A and 112B may also include a root complex (not shown) that receives requests and completions from one of the I/O switches 114 (e.g., when not in wrap mode). Memory requests include an I/O address that may need to be translated, and thus, the I/O hub 112A/112B provides the address to an address translation and protection (ATP) unit (not shown). The ATP unit may be used to translate, if needed, the I/O address into an address directly usable to access system memory 104.


In an exemplary embodiment, the subsystem 150 is not communicatively coupled to the I/O subsystem 160, as shown by the dotted lines representing the I/O bus 126. Rather, a wrap cable 170 is connected to I/O ports 113A and 113B of respective I/O hubs 112A and 112B. In an alternative exemplary embodiment, a wrap plug (not shown) may be coupled to two ports of a single I/O hub.


As shown in FIG. 1, an I/O definition file 140 is stored in system memory 104, along with an I/O control table 142. The I/O definition file 140 contains information about the I/O configuration of the subsystem, such as operating system data, switch data, device data, processor data, channel path data, control unit data, channel subsystem data, and logical partitions. The I/O definition file 140 may be configured by a user of the computer system 100 using appropriate operating system tools. For example, in the System z architecture, I/O definitions may be user-configured via Hardware Configuration Definition (HCD). A user of the computer system 100 may configure one or more I/O hubs 112A and 112B in wrap mode via the I/O definition file 140, such that the system operating system will not attempt to discover the I/O topology of I/O devices associated with the I/O hub during system start up. The I/O definition file 140 is described further in FIG. 3.


Turning now to FIG. 2, a flow diagram describing a process for configuring and testing I/O functionality of an I/O hub-enabled subsystem 150 will now be described in an exemplary embodiment.


Between connected I/O hubs (root complexes), the port 113 behavior is nearly the same as that of an I/O endpoint connected to a root complex. However, a root complex processes incoming packets slightly differently than an I/O endpoint does; a valid root complex design has the root complex rejecting incoming configuration request packets as unsupported requests. Because of this special behavior, the traffic transmitted from a root complex, while it is connected to another root complex, needs to be controlled. In the following description, when a root complex or I/O hub is connected to another root complex (I/O hub), it is said to be in wrap mode.


An established method of discovering the I/O subsystem 160 topology connected to an I/O hub root complex is to perform a “bus-walk” procedure, issuing configuration-request probe packets out of the root complex. However, when the root complex is in wrap mode, issuing configuration requests will result in a response (from the connected root complex) of “unsupported request.” In an exemplary embodiment, an I/O hub (e.g., I/O hub 112A/112B) is explicitly designated in step 202 of FIG. 2 as being in wrap mode in the user-constructed I/O definition file 140 (see FIG. 3). With this designation, the system firmware will not attempt to discover the I/O topology attached to the root complex.


Each I/O device slot in a system frame corresponds to a physical channel ID (PCHID) and, ordinarily, an I/O definition file only describes the contents or planned contents of PCHIDs in the I/O frame and specifies the logical partitions authorized to use the corresponding I/O device. In the case of an I/O hub in wrap mode, in an exemplary embodiment, the PCHID in the subsystem 150 of a slot containing the corresponding I/O hub device is specified in the I/O definition file 140 as a root complex in wrap mode. As shown in FIG. 3, each entry in the file 140 corresponds to a PCHID value (a physical slot location for an I/O device or I/O hub card). In each entry is a designation of the device type and the logical partition authorization. The PCHID values corresponding to the I/O hubs having the wrap cable 170 (e.g., I/O hubs 112A and 112B) have a device type of “PCIe function” and a special designation that they are “wrap mode” functions.
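The entry layout just described can be sketched as a mapping from PCHID to device type, wrap-mode designation, and partition authorizations. The field names and PCHID values below are illustrative assumptions, not the actual file format.

```python
# Sketch of the I/O definition file entries described above: each PCHID
# (physical slot) maps to a device type and authorized logical partitions,
# with wrap-mode hubs carrying a special designation. All names/values
# here are made-up examples, not the real HCD format.

io_definition_file = {
    0x120: {"device_type": "PCIe function", "wrap_mode": True,
            "partitions": ["LP1"]},
    0x121: {"device_type": "PCIe function", "wrap_mode": True,
            "partitions": ["LP2"]},
    0x200: {"device_type": "PCIe function", "wrap_mode": False,
            "partitions": ["LP1", "LP2"]},
}

# The wrap-mode designation identifies the slots whose hubs must not be
# subjected to topology discovery.
wrap_pchids = [p for p, e in io_definition_file.items() if e["wrap_mode"]]
print(sorted(hex(p) for p in wrap_pchids))
```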


During system start up or initial machine load (IML), the system firmware parses the I/O definition file 140 and builds an internal I/O control table 142. The I/O control table 142 contains information such as, but not limited to, device type, routing path, and logical partition authorizations. In a PCIe-implemented embodiment, each entry in this table 142 corresponds to a PCIe function (e.g., I/O function 111A/111B). Also, each I/O hub 112A/112B (root complex) in wrap mode is represented in this table 142 as an I/O function with a special wrap-mode designation. After parsing the I/O definition file 140, the system firmware discovers the physical topology attached to each of the non-wrap-mode I/O hubs 112A/112B (root complexes), e.g., using a bus-walk procedure, and further refines the internal I/O control tables 142.
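The IML-time flow above can be sketched in a few lines: parse the definition entries into a control table, then run the bus-walk only for hubs that are not in wrap mode. Function and field names are our own, not the firmware's.

```python
# Sketch of the IML flow: build a control table from definition entries,
# then discover topology only for non-wrap-mode hubs. A wrap-mode root
# complex would answer a configuration probe with "unsupported request",
# so the bus-walk is skipped for those entries. Names are illustrative.

def build_control_table(definition_file):
    table = []
    for pchid, entry in sorted(definition_file.items()):
        table.append({
            "pchid": pchid,
            "device_type": entry["device_type"],
            "wrap_mode": entry.get("wrap_mode", False),
            "partitions": list(entry["partitions"]),
        })
    return table

def discover_topology(control_table):
    # Returns the PCHIDs that will actually be bus-walked.
    return [e["pchid"] for e in control_table if not e["wrap_mode"]]

defs = {
    0x120: {"device_type": "PCIe function", "wrap_mode": True, "partitions": ["LP1"]},
    0x200: {"device_type": "PCIe function", "wrap_mode": False, "partitions": ["LP1"]},
}
table = build_control_table(defs)
print([hex(p) for p in discover_topology(table)])   # only the non-wrap hub
```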


Because the wrap-mode I/O hubs 112A/112B are internally represented as I/O functions 111A/111B, respectively, programs are able to generate traffic over the wrap cable 170 using methods similar to those used to communicate with an I/O endpoint.


After system boot, or IML, the program discovers and enables the two wrap-mode functions: A and B. An exemplary implementation of function discovery, designation and enablement is described in U.S. patent application Ser. Nos. 12/821,184 and 12/821,185, filed concurrently with the instant application and which are incorporated by reference herein in their entireties. Enablement may include registering functions A and B with the address translation and protection (ATP) facility in the I/O hubs 112A and 112B, respectively. In step 204, a request to implement a transaction is received at an I/O hub targeted for receiving the request. For example, assuming the request is targeted to function handle A (i.e., function 111A), the CPU 102 interprets the request and forwards it to the nest 106, targeting the I/O hub (I/O hub 112A) corresponding to function A 111A. The targeted I/O hub 112A receives the request to implement the transaction and, at step 206, converts the request into an I/O device-readable request packet (e.g., a PCIe request packet).


At step 208, the I/O hub 112A transmits the I/O device-readable request over the wrap cable 170 via port 113A and the request is received by the I/O hub 112B via the port 113B. At step 210, the second I/O hub 112B converts the I/O device-readable request into a system-readable request (e.g., readable by the system memory 104), possibly using the ATP capabilities registered for function B during function enablement. The second I/O hub 112B transmits the system-readable request over the host bus 120 to the nest 106 at step 212 for execution.


Depending upon the type of transaction request implemented, the processes described above in FIG. 2 may be adjusted to accommodate requirements imposed by the specified transaction. For example, if the transaction request is a LOAD-REQUEST-RESPONSE, the CPU 102 interpretation step described above may involve interpreting the LOAD-type instruction and issuing a system memory load request into the nest 106. In addition, the converting step 206 may include converting the system memory load request into an I/O memory read request packet. Also, the converting performed in step 210 may include converting the I/O memory read request packet into a system memory load request. Once the system memory 104 has executed the request in response to step 212, the system memory 104 generates a system memory load response, targeting the I/O hub associated with function B 111B (i.e., I/O hub 112B). The I/O hub 112B receives the system memory load response, converts it into an I/O device memory read response packet and transmits it on the wrap cable 170. I/O hub 112A receives the response packet, converts it into a system memory response packet and submits it into the nest 106, targeting the CPU 102 that issued the original request. The CPU 102 completes the execution of the LOAD instruction by providing to the program the response data returned in the response packet.
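The LOAD round trip just described can be sketched end to end, with the four packet conversions modeled as dictionary rewrites. All structure here is illustrative; real hardware operates on PCIe transaction layer packets, not Python dictionaries.

```python
# End-to-end sketch of the LOAD round trip (steps 204-212 above):
# request converted at hub A, carried over the wrap cable, converted at
# hub B, serviced against system memory, then converted back on the
# response path. All names and formats are illustrative assumptions.

system_memory = {0x1000: 0xCAFE}

def hub_a_convert_request(load_request):        # step 206
    return {"pcie": "mem-read", "addr": load_request["addr"]}

def hub_b_convert_request(pcie_packet):         # step 210
    return {"system": "load", "addr": pcie_packet["addr"]}

def hub_b_convert_response(data):               # response path at hub B
    return {"pcie": "mem-read-completion", "data": data}

def hub_a_convert_response(pcie_response):      # response path at hub A
    return {"system": "load-response", "data": pcie_response["data"]}

def wrap_load(addr):
    req = hub_a_convert_request({"op": "load", "addr": addr})
    sys_req = hub_b_convert_request(req)        # crossed the wrap cable
    data = system_memory[sys_req["addr"]]       # nest services the request
    resp = hub_b_convert_response(data)         # crosses the cable again
    return hub_a_convert_response(resp)["data"] # back to the issuing CPU

print(hex(wrap_load(0x1000)))   # -> 0xcafe
```

A STORE-type round trip would follow the same shape with a memory-write packet and no returned data.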


Traffic composed of store requests is generated in a similar manner, initiated by the program issuing a STORE-type instruction. Also, symmetric wrap-cable traffic can be generated simultaneously: LOAD- and STORE-type instructions can be targeted to function B.


In addition to converting PCIe memory transactions into system memory transactions, the I/O hub (root complex) is responsible for converting I/O message signaled interrupt (MSI) packets into system interrupts. MSI messages provide a means for logically generating an interrupt as an alternative to pin-based interrupts. A PCIe MSI is represented by a PCIe memory write operation by a function to a specially registered address range. To create a PCIe-sourced system interrupt using our invention, the program simply issues a STORE-type instruction to the special MSI address range, targeting a wrap-mode function.
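The MSI path above can be sketched as a memory-write handler that diverts stores landing in the registered MSI address range into interrupts. The range bounds and vector encoding below are made-up examples, not values from the PCIe or z/Architecture specifications.

```python
# Sketch of the MSI path: a memory write whose address falls in the
# registered MSI range is turned into a system interrupt rather than an
# ordinary store. Range bounds and vector math are illustrative.

MSI_BASE, MSI_LIMIT = 0xFEE0_0000, 0xFEE1_0000
interrupts = []
memory = {}

def hub_memory_write(addr, value):
    if MSI_BASE <= addr < MSI_LIMIT:
        interrupts.append({"vector": value & 0xFF})   # raise a system interrupt
    else:
        memory[addr] = value                          # ordinary memory store

hub_memory_write(0xFEE0_0040, 0x31)   # a STORE into the MSI range
hub_memory_write(0x2000, 0x55)        # an ordinary store
print(interrupts, memory)
```

In wrap mode, the STORE targeting the wrap-mode function arrives at the peer root complex as such a memory write, producing a PCIe-sourced interrupt without any endpoint hardware.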


As indicated above, the exemplary embodiments provide a convenient and effective means for a program to generate PCIe endpoint-like traffic out of and into a root complex using system memory requests. The load/store traffic generated on the wrap cable exercises most of the corresponding packet processing logic in the I/O hub. That processing includes, but is not limited to, address translation, DMA traffic measurement, and error handling. The I/O hub portion of a CEC frame and the corresponding system firmware can be put through a thorough test without attaching anything more than wrap cables to the I/O ports.


In addition, exemplary embodiments provide a method of low-latency, high-bandwidth communication between logical partitions. Each wrap-mode root complex functions as a gateway into a partition's memory space, and can control the incoming traffic from a wrap-mode function in the same way that it controls and filters incoming traffic from a standard PCIe endpoint function. For example, the I/O definition file may assign ownership of function A to partition 1 and ownership of function B to partition 2. Each partition registers its function with the ATP facility of the corresponding I/O hub, putting in place the appropriate filters with regard to valid destination addresses for incoming packets. A communication channel between the partitions is established because partition 1 is able to access a portion of partition 2's memory space through function A, and partition 2 is able to access a portion of partition 1's memory space through function B.
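The inter-partition channel above can be sketched with an ATP-style filter: each partition registers a memory window for its wrap-mode function, and incoming writes outside that window are rejected. Class names, window bounds, and addresses are all illustrative.

```python
# Sketch of the inter-partition channel: each partition exposes a window
# of its memory through a wrap-mode function, and an ATP-style filter
# rejects traffic outside the registered window. Names are illustrative.

class Partition:
    def __init__(self, name, window):
        self.name = name
        self.window = window          # (base, limit) open to the peer partition
        self.memory = {}

    def incoming_write(self, addr, value):
        base, limit = self.window
        if not (base <= addr < limit):
            raise PermissionError("address outside registered window")
        self.memory[addr] = value

lp1 = Partition("LP1", window=(0x8000, 0x9000))
lp2 = Partition("LP2", window=(0x4000, 0x5000))

# Partition 1 writes into partition 2's window over the wrap cable.
lp2.incoming_write(0x4010, 0xAB)
print(lp2.memory)
```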


One or more of the above-referenced components and/or functionalities may be implemented via, e.g., the systems and processes described in U.S. patent application Ser. No. 12/821,181; U.S. patent application Ser. No. 12/821,182; U.S. patent application Ser. No. 12/821,191; and U.S. patent application Ser. No. 12/821,194, each of which is filed concurrently with the instant application and each of which is incorporated by reference herein in its entirety.


Technical effects and benefits include an effective means for a program to generate I/O endpoint-like traffic into an I/O hub using system memory requests. The load/store traffic that can be generated on a wrap cable can exercise most of the corresponding packet processing logic in the I/O hub. The I/O hub portion of a main computing subsystem and the corresponding system firmware can be put through a thorough test without attaching anything more than wrap cables to the I/O ports.


The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.


As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.


Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.


A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.


Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.


Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).


Aspects of the present invention are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.


The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


As described above, embodiments can be embodied in the form of computer-implemented processes and apparatuses for practicing those processes. In exemplary embodiments, the invention is embodied in computer program code executed by one or more network elements. Embodiments include a computer program product 400 on a computer usable medium 402 with computer program code logic 404 containing instructions embodied in tangible media as an article of manufacture. Exemplary articles of manufacture for computer usable medium 402 may include floppy diskettes, CD-ROMs, hard drives, universal serial bus (USB) flash drives, or any other computer-readable storage medium, wherein, when the computer program code logic 404 is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. Embodiments include computer program code logic 404, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code logic 404 is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. When implemented on a general-purpose microprocessor, the computer program code logic 404 segments configure the microprocessor to create specific logic circuits.


The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims
  • 1. A computer program product for implementing connected input/output (I/O) hub configuration and management, the computer program product comprising: a non-transitory storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method, the method comprising:configuring a first I/O hub in wrap mode with a second I/O hub, the first and second I/O hubs communicatively coupled to one another via a wrap cable; andgenerating data traffic on a main computing subsystem that includes the first and second I/O hubs, comprising:converting, via the first I/O hub, a request to implement a transaction to an I/O device-readable request packet and transmitting the I/O device-readable request packet over the wrap cable;converting, via the second I/O hub, the I/O device-readable request packet into a system readable request and transmitting the system readable request over a system bus, the system readable request addressed to a system memory;in response to receiving, at the second I/O hub, a response to the system readable request from the system memory, converting, via the second I/O hub, the response to an I/O device-readable response packet, and transmitting the I/O device-readable response packet over the wrap cable; andconverting, via the first I/O hub, the I/O device-readable response packet into a system readable response packet, and transmitting the system readable response packet over the system bus to a processor that initiated the request to implement a transaction.
  • 2. The computer program product of claim 1, wherein the request to implement a transaction is a load-request-response request.
  • 3. The computer program product of claim 1, wherein the request to implement a transaction is a store-request request.
  • 4. The computer program product of claim 1, wherein the request to implement a transaction is a message signaled interrupt request.
  • 5. The computer program product of claim 1, wherein the first and second I/O hubs are PCIe-enabled hubs.
  • 6. The computer program product of claim 1, wherein configuring the first I/O hub in wrap mode with the second I/O hub includes: designating the first I/O hub as being in wrap mode in a user-constructed I/O definition file accessible to a host system that is communicatively coupled to the first I/O hub; anddesignating the second I/O hub as being in wrap mode in a user-constructed I/O definition file accessible to a host system that is communicatively coupled to the second I/O hub.
  • 7. The computer program product of claim 6, wherein each entry in the user-constructed I/O definition file corresponds to physical slot locations for corresponding I/O hub cards that include the first and second I/O hubs, and also a physical slot location for I/O devices.
  • 8. A system for implementing connected input/output (I/O) hub configuration and management, the system comprising: a first I/O hub;a second I/O hub communicatively coupled to the first I/O hub via a wrap cable;a processor in communication with the first and second I/O hubs, at least one of the processor, the first I/O hub, and the second I/O hub including logic for implementing a method, the method comprising:configuring the first I/O hub in wrap mode with the second I/O hub; andgenerating data traffic on a main computing subsystem that includes the processor and the first and second I/O hubs, comprising:converting, by the first I/O hub, a request to implement a transaction to an I/O device-readable request packet and transmitting the I/O device-readable request packet over the wrap cable;converting, by the second I/O hub, the I/O device-readable request packet into a system readable request and transmitting the system readable request over a system bus, the system readable request addressed to a system memory of the main computing subsystem;receiving, by the second I/O hub, a response to the system readable request from the system memory, converting the response to an I/O device-readable response packet, and transmitting the I/O device-readable response packet over the wrap cable; andconverting, by the first I/O hub, the I/O device-readable response packet into a system readable response packet, and transmitting the system readable response packet over the system bus to a processor that initiated the request to implement a transaction.
  • 9. The system of claim 8, wherein the request to implement a transaction is a load-request-response request.
  • 10. The system of claim 8, wherein the request to implement a transaction is a store-request request.
  • 11. The system of claim 8, wherein the request to implement a transaction is a message signaled interrupt request.
  • 12. The system of claim 8, wherein the first and second I/O hubs are PCIe-enabled hubs.
  • 13. The system of claim 8, wherein configuring the first I/O hub in wrap mode with the second I/O hub includes: designating the first I/O hub as being in wrap mode in a user-constructed I/O definition file accessible to a host system that is communicatively coupled to the first I/O hub; anddesignating the second I/O hub as being in wrap mode in a user-constructed I/O definition file accessible to a host system that is communicatively coupled to the second I/O hub.
  • 14. The system of claim 13, wherein each entry in the user-constructed I/O definition file corresponds to physical slot locations for corresponding I/O hub cards that include the first and second I/O hubs, and also a physical slot location for I/O devices.
  • 15. A method for implementing connected input/output (I/O) hub configuration and management, the method comprising: configuring a first I/O hub in wrap mode with a second I/O hub, the first and second I/O hubs communicatively coupled to one another via a wrap cable; andgenerating data traffic on a main computing subsystem that includes the first and second I/O hubs, comprising:converting, by the first I/O hub, a request to implement a transaction to an I/O device-readable request packet and transmitting the I/O device-readable request packet over the wrap cable;converting, by the second I/O hub, the I/O device-readable request packet into a system readable request and transmitting the system readable request over a system bus, the system readable request addressed to a system memory;receiving, by the second I/O hub, a response to the system readable request from the system memory, converting the response to an I/O device-readable response packet, and transmitting the I/O device-readable response packet over the wrap cable; andconverting, by the first I/O hub, the I/O device-readable response packet into a system readable response packet, and transmitting the system readable response packet over the system bus to a processor that initiated the request to implement a transaction.
  • 16. The method of claim 15, wherein the request to implement a transaction is a load-request-response request.
  • 17. The method of claim 15, wherein the request to implement a transaction is a store-request request.
  • 18. The method of claim 15, wherein the request to implement a transaction is a message signaled interrupt request.
  • 19. The method of claim 15, wherein the first and second I/O hubs are PCIe-enabled hubs.
  • 20. The method of claim 15, wherein configuring the first I/O hub in wrap mode with the second I/O hub includes: designating the first I/O hub as being in wrap mode in a user-constructed I/O definition file accessible to a host system that is communicatively coupled to the first I/O hub; anddesignating the second I/O hub as being in wrap mode in a user-constructed I/O definition file accessible to a host system that is communicatively coupled to the second I/O hub.
  • 21. The method of claim 20, wherein each entry in the user-constructed I/O definition file corresponds to physical slot locations for corresponding I/O hub cards that include the first and second I/O hubs, and also a physical slot location for I/O devices.
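The wrap-mode round trip recited in the claims (request converted to an I/O device-readable packet by the first hub, converted back to a system readable request by the second hub, serviced by system memory, and returned along the reverse path) can be illustrated with a small simulation. This is a hypothetical sketch, not the patented implementation: all class and function names are illustrative, and a real hub would translate between a system bus protocol and PCIe transaction-layer packets rather than simply wrapping Python objects.

```python
from dataclasses import dataclass

@dataclass
class SystemRequest:
    op: str        # "load", "store", or "response"
    address: int
    data: int = 0

@dataclass
class IoPacket:
    """Stands in for an I/O device-readable packet on the wrap cable."""
    payload: SystemRequest

class Hub:
    """Illustrative I/O hub that converts between bus and cable formats."""
    def to_io_packet(self, req: SystemRequest) -> IoPacket:
        # system bus -> wrap cable direction
        return IoPacket(payload=req)

    def to_system_request(self, pkt: IoPacket) -> SystemRequest:
        # wrap cable -> system bus direction
        return pkt.payload

def generate_wrap_traffic(hub_a: Hub, hub_b: Hub,
                          memory: dict, req: SystemRequest) -> SystemRequest:
    """Round trip: processor -> hub A -> wrap cable -> hub B -> memory, and back."""
    pkt = hub_a.to_io_packet(req)              # first hub: request -> I/O packet
    sys_req = hub_b.to_system_request(pkt)     # second hub: packet -> system request
    if sys_req.op == "load":
        result = memory.get(sys_req.address, 0)
    else:                                      # store: memory services the write
        memory[sys_req.address] = sys_req.data
        result = sys_req.data
    # second hub: memory response -> I/O packet back over the wrap cable
    resp_pkt = hub_b.to_io_packet(SystemRequest("response", sys_req.address, result))
    # first hub: I/O packet -> system readable response for the initiating processor
    return hub_a.to_system_request(resp_pkt)

memory = {0x100: 42}
hub_a, hub_b = Hub(), Hub()
resp = generate_wrap_traffic(hub_a, hub_b, memory, SystemRequest("load", 0x100))
print(resp.data)  # 42
```

The sketch makes the symmetry of the scheme visible: each hub performs the same two conversions, so a single hub design can sit at either end of the wrap cable during stand-alone testing of the main computing subsystem.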
US Referenced Citations (225)
Number Name Date Kind
5170472 Cwiakala et al. Dec 1992 A
5282274 Liu Jan 1994 A
5430856 Kinoshita Jul 1995 A
5438575 Bertrand Aug 1995 A
5465332 Deloye et al. Nov 1995 A
5465355 Cook et al. Nov 1995 A
5535352 Bridges et al. Jul 1996 A
5551013 Beausoleil et al. Aug 1996 A
5568365 Hahn et al. Oct 1996 A
5574873 Davidian Nov 1996 A
5600805 Fredericks et al. Feb 1997 A
5617554 Alpert et al. Apr 1997 A
5742785 Stone et al. Apr 1998 A
5761448 Adamson et al. Jun 1998 A
5790825 Traut Aug 1998 A
5815647 Buckland et al. Sep 1998 A
5838960 Harriman, Jr. Nov 1998 A
5870598 White et al. Feb 1999 A
5960213 Wilson Sep 1999 A
6009261 Scalzi et al. Dec 1999 A
6023736 Lambeth et al. Feb 2000 A
6067595 Lindenstruth May 2000 A
6205530 Kang Mar 2001 B1
6308255 Gorishek, IV et al. Oct 2001 B1
6330656 Bealkowski et al. Dec 2001 B1
6349380 Shahidzadeh et al. Feb 2002 B1
6362942 Drapkin et al. Mar 2002 B2
6408347 Smith et al. Jun 2002 B1
6456498 Larson et al. Sep 2002 B1
6463582 Lethin et al. Oct 2002 B1
6519645 Markos et al. Feb 2003 B2
6523140 Arndt et al. Feb 2003 B1
6529978 Eide et al. Mar 2003 B1
6544311 Walton et al. Apr 2003 B1
6578191 Boehme et al. Jun 2003 B1
6615305 Olesen et al. Sep 2003 B1
6625169 Tofano Sep 2003 B1
6643727 Arndt et al. Nov 2003 B1
6654818 Thurber Nov 2003 B1
6658599 Linam et al. Dec 2003 B1
6704831 Avery Mar 2004 B1
6721813 Owen et al. Apr 2004 B2
6721839 Bauman et al. Apr 2004 B1
6772264 Dayan et al. Aug 2004 B1
6816590 Pike et al. Nov 2004 B2
6845469 Hicks et al. Jan 2005 B2
6901537 Dawkins et al. May 2005 B2
6907510 Bennett et al. Jun 2005 B2
6927975 Crippen et al. Aug 2005 B2
6950438 Owen et al. Sep 2005 B1
6963940 Glassen et al. Nov 2005 B1
6970992 Gurumoorthy et al. Nov 2005 B2
6973510 Arndt et al. Dec 2005 B2
6978338 Wang et al. Dec 2005 B2
6996638 Brice, Jr. et al. Feb 2006 B2
7003615 Chui et al. Feb 2006 B2
7042734 Hensley et al. May 2006 B2
7062594 Sardella et al. Jun 2006 B1
7065598 Connor et al. Jun 2006 B2
7093155 Aoki Aug 2006 B2
7103808 Kitamorn et al. Sep 2006 B2
7107384 Chen et al. Sep 2006 B1
7107495 Kitamorn et al. Sep 2006 B2
7127599 Brice, Jr. et al. Oct 2006 B2
7130938 Brice, Jr. et al. Oct 2006 B2
7139940 Arbeitman et al. Nov 2006 B2
7174550 Brice, Jr. et al. Feb 2007 B2
7177961 Brice, Jr. et al. Feb 2007 B2
7200704 Njoku et al. Apr 2007 B2
7206946 Sakakibara et al. Apr 2007 B2
7209994 Klaiber et al. Apr 2007 B1
7260664 Arndt et al. Aug 2007 B2
7277968 Brice, Jr. et al. Oct 2007 B2
7296120 Corrigan et al. Nov 2007 B2
7302692 Bae et al. Nov 2007 B2
7334107 Schoinas et al. Feb 2008 B2
7340582 Madukkarumukumana et al. Mar 2008 B2
7370224 Jaswa et al. May 2008 B1
7380041 Belmar et al. May 2008 B2
7398343 Marmash et al. Jul 2008 B1
7412488 Jha et al. Aug 2008 B2
7418572 Hepkin Aug 2008 B2
7420931 Nanda et al. Sep 2008 B2
7444493 Schoinas et al. Oct 2008 B2
7447934 Dasari et al. Nov 2008 B2
7454548 Belmar et al. Nov 2008 B2
7457900 Panesar Nov 2008 B2
7475183 Traut et al. Jan 2009 B2
7480303 Ngai Jan 2009 B1
7493425 Arndt et al. Feb 2009 B2
7496707 Freking et al. Feb 2009 B2
7506087 Ho et al. Mar 2009 B2
7519647 Carlough et al. Apr 2009 B2
7525957 Scherer et al. Apr 2009 B2
7529860 Freimuth et al. May 2009 B2
7530071 Billau et al. May 2009 B2
7546386 Arndt et al. Jun 2009 B2
7546406 Armstrong et al. Jun 2009 B2
7546487 Marisetty et al. Jun 2009 B2
7549090 Bailey et al. Jun 2009 B2
7552298 Bestler Jun 2009 B2
7562366 Pope et al. Jul 2009 B2
7565463 Johnsen et al. Jul 2009 B2
7567567 Muller et al. Jul 2009 B2
7587531 Brice, Jr. et al. Sep 2009 B2
7594144 Brandyberry et al. Sep 2009 B2
7600053 Carlson et al. Oct 2009 B2
7606965 Njoku et al. Oct 2009 B2
7613847 Kjos et al. Nov 2009 B2
7617340 Gregg Nov 2009 B2
7617345 Clark et al. Nov 2009 B2
7624235 Wadhawan et al. Nov 2009 B2
7627723 Buck et al. Dec 2009 B1
7631097 Moch et al. Dec 2009 B2
7660912 Gregg Feb 2010 B2
7664991 Gunda et al. Feb 2010 B1
7676617 Kloeppner Mar 2010 B2
7729316 Uhlik Jun 2010 B2
7836254 Gregg et al. Nov 2010 B2
7975076 Moriki et al. Jul 2011 B2
8140917 Suetsugu et al. Mar 2012 B2
8510592 Chan Aug 2013 B1
20030056155 Austen et al. Mar 2003 A1
20030058618 Soetemans et al. Mar 2003 A1
20030198180 Cambron Oct 2003 A1
20040088604 Bland et al. May 2004 A1
20040117534 Parry et al. Jun 2004 A1
20040133819 Krishnamurthy et al. Jul 2004 A1
20040136130 Wimmer et al. Jul 2004 A1
20040199700 Clayton Oct 2004 A1
20050033895 Lueck et al. Feb 2005 A1
20050071472 Arndt et al. Mar 2005 A1
20050091438 Chatterjee Apr 2005 A1
20050144533 LeVangia et al. Jun 2005 A1
20050146855 Brehm et al. Jul 2005 A1
20050182862 Ritz et al. Aug 2005 A1
20050286187 Liu et al. Dec 2005 A1
20050289271 Martinez et al. Dec 2005 A1
20060053339 Miller et al. Mar 2006 A1
20060067069 Heard et al. Mar 2006 A1
20060085573 Pike et al. Apr 2006 A1
20060087813 Becker et al. Apr 2006 A1
20060087814 Brandon et al. Apr 2006 A1
20060146461 Jones et al. Jul 2006 A1
20060195644 Arndt et al. Aug 2006 A1
20060230208 Gregg et al. Oct 2006 A1
20060253619 Torudbakken et al. Nov 2006 A1
20060288130 Madukkarumukumana et al. Dec 2006 A1
20070008663 Nakashima et al. Jan 2007 A1
20070069585 Bosco et al. Mar 2007 A1
20070073955 Murray et al. Mar 2007 A1
20070115230 Tajiri et al. May 2007 A1
20070136554 Biran et al. Jun 2007 A1
20070168636 Hummel et al. Jul 2007 A1
20070183393 Body et al. Aug 2007 A1
20070186074 Bradford et al. Aug 2007 A1
20070211430 Bechtolsheim Sep 2007 A1
20070226386 Sharp et al. Sep 2007 A1
20070234018 Feiste Oct 2007 A1
20070239925 Koishi Oct 2007 A1
20070245041 Hua et al. Oct 2007 A1
20070262891 Woodral et al. Nov 2007 A1
20070271559 Easton et al. Nov 2007 A1
20070274039 Hamlin Nov 2007 A1
20080043405 Lee et al. Feb 2008 A1
20080065796 Lee et al. Mar 2008 A1
20080069141 Bonaguro et al. Mar 2008 A1
20080077817 Brundridge et al. Mar 2008 A1
20080091851 Sierra Apr 2008 A1
20080091868 Mizrachi et al. Apr 2008 A1
20080091915 Moertl et al. Apr 2008 A1
20080114906 Hummel et al. May 2008 A1
20080126648 Brownlow et al. May 2008 A1
20080126652 Vembu et al. May 2008 A1
20080148295 Freimuth et al. Jun 2008 A1
20080162865 Koufaty et al. Jul 2008 A1
20080168208 Gregg Jul 2008 A1
20080189577 Arndt et al. Aug 2008 A1
20080192431 Bechtolsheim Aug 2008 A1
20080209114 Chow et al. Aug 2008 A1
20080222406 Tabuchi Sep 2008 A1
20080235425 Belmar et al. Sep 2008 A1
20080239687 Leigh et al. Oct 2008 A1
20080239945 Gregg Oct 2008 A1
20080259555 Bechtolsheim et al. Oct 2008 A1
20080270853 Chagoly et al. Oct 2008 A1
20090037682 Armstrong et al. Feb 2009 A1
20090070760 Khatri et al. Mar 2009 A1
20090125666 Freking et al. May 2009 A1
20090144462 Arndt et al. Jun 2009 A1
20090144731 Brown et al. Jun 2009 A1
20090182966 Greiner et al. Jul 2009 A1
20090182969 Norgaard et al. Jul 2009 A1
20090210646 Bauman et al. Aug 2009 A1
20090222814 Astrand Sep 2009 A1
20090234987 Lee et al. Sep 2009 A1
20090240849 Corneli et al. Sep 2009 A1
20090249039 Hook et al. Oct 2009 A1
20090276551 Brown et al. Nov 2009 A1
20090276773 Brown et al. Nov 2009 A1
20090276774 Kinoshita Nov 2009 A1
20090328035 Ganguly Dec 2009 A1
20100005234 Ganga et al. Jan 2010 A1
20100005531 Largman et al. Jan 2010 A1
20100027559 Lin et al. Feb 2010 A1
20100042999 Dorai et al. Feb 2010 A1
20100169674 Kazama et al. Jul 2010 A1
20100205608 Nemirovsky et al. Aug 2010 A1
20100287209 Hauser Nov 2010 A1
20110029696 Uehara Feb 2011 A1
20110029734 Pope et al. Feb 2011 A1
20110131359 Pettey et al. Jun 2011 A1
20110219161 Deshpande et al. Sep 2011 A1
20110258352 Williams et al. Oct 2011 A1
20110265134 Jaggi et al. Oct 2011 A1
20110317351 Pizzolato et al. Dec 2011 A1
20110317743 DeCusatis et al. Dec 2011 A1
20110320602 Carlson et al. Dec 2011 A1
20110320666 Gregg et al. Dec 2011 A1
20110320674 Gregg et al. Dec 2011 A1
20110320703 Craddock et al. Dec 2011 A1
20110320796 DeCusatis et al. Dec 2011 A1
20110320861 Bayer et al. Dec 2011 A1
20110320887 Craddock et al. Dec 2011 A1
20110320892 Check et al. Dec 2011 A1
Foreign Referenced Citations (13)
Number Date Country
1885096 Dec 2006 CN
101196615 Jun 2008 CN
101571631 Nov 2009 CN
102193239 Sep 2011 CN
57191826 Nov 1982 JP
5981724 May 1984 JP
6279557 Apr 1987 JP
0553973 Mar 1993 JP
2007087082 Apr 2007 JP
2007241526 Sep 2007 JP
2010134627 Jun 2010 JP
WO9600940 Nov 1996 WO
2009027189 Mar 2009 WO
Non-Patent Literature Citations (56)
Entry
Dolphin Interconnect Solutions; MySQL Acceleration Solutions; Solid State Storage; Embedded and HPC Solutions; “DXH510 PCI Express Host Adapter”; www.dolphinics.com/products/pent-dxseries-dsh510.html downloaded Jun. 10, 2010.
J. Regula, “Using Non-transparent Bridging in PCI Express Systems”, PLX Technology, Inc., pp. 1-31, Jun. 1, 2004.
Jack Regula; “Ethernet Tunneling through PCI Express Inter-Processor Communication, Low Latency Storage IO Source”; www.wwpi.com; publisher: Computer Technology Review; Jan. 19, 2009.
Robert F. Kern, “IBM System z & DS8000 Technology Synergy”, IBM ATS Americas Disk Storage; Jul. 21, 2009, pp. 1-25.
Szwed et al.; “Managing Connected PCI Express Root Complexes”; Dated: Dec. 23, 2009—6 pages.
Final Office Action mail date Jun. 15, 2011 for U.S. Appl. No. 12/821,221.
U.S. Appl. No. 12/821,124, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,181, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,182, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,185, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,191, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,648, filed Jun. 23, 2010.
Z/Architecture Principles of Operation, Feb. 2009; pp. 1-1344, IBM Corporation.
Huang, Wei et al., “A Case for High Performance Computing with Virtual Machines,” ISC '06, Jun. 3 28 30, Carins, Queensland, Australia, pp. 125-134, Jun. 3, 2006.
Swift, Micael M. et al., “Improving the Reliability of Commodity Operating Systems,” ACM Transactions on Computer Systems, vol. 23, No. 1, Feb. 2005, pp. 77-110.
Non-final office Action received for U.S. Appl. No. 12/821,239 dated Nov. 8, 2012.
Non-final Office Action dated Sep. 26, 2012 for U.S. Appl. No. 12/821,243.
Final Office Action dated Sep. 13, 2012 for U.S. Appl. No. 12/821,256.
Final Office Action received Oct. 10, 2012 for U.S. Appl. No. 12/821,221.
Non-final Office Action received Oct. 11, 2012 for U.S. Appl. No. 12/821,247.
Notice of Allowance dated Sep. 19, 2012 for U.S. Appl. No. 12/821,224.
U.S. Appl. No. 12/821,221, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,222, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,224, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,226, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,239, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,242, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,243, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,245, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,247, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,250, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,256, filed Jun. 23, 2010.
U.S. Appl. No. 12/821,271, filed Jun. 23, 2010.
Baumann, Andrew, et al., “The Multikernel: A New OS Architecture for Scalable Multicore Systems,” Oct. 2009, SOSP'09, Oct. 11-14, 2009, Big Sky, Montana, USA, pp. 29-43.
Crawford et al. “Accelerating Computing with the Cell Broadband Engine Processor”; CF'08, May 5-7, 2008; Ischia, Italy; Copyright 2008 ACM 978-1-60558-077.
Darren Abramson et al.; “Intel Virtualization Technology for Directed I/O”; Intel Technology Journal, vol. 10, Issue 3, Aug. 10, 2006; pp. 1-16.
“Intel (registered trademark) Itanium (registered trademark) Architecture Software Developer's Manual,” vol. 2, Rev. 2.2, Jan. 2006.
“z/VM: General Information Manual,” IBM Publication No. GC24-5991-05, May 2003.
“DMA Engines Bring Mulicast to PCI Express Systems,” http://electronicdesign.com, Aug. 13, 2009, 3 pages.
“I/O Virtualization and AMD's IOMMU,” AMD Developer Central, http://developer.amd.com/documentation/articles/pages.892006101.aspx, Aug. 9, 2006.
“IBM Enhances the IBM eServer zSeries 990 Family of Servers,” Hardware Announcement, Oct. 7, 2003, pp. 1-11.
Internet Article, “Large Page Support in the Linux Kernel,” http://lwn.net/Articles/69691 <retrieved on Jan. 26, 2010>.
K. Vaidyanathan et al.; “Exploiting RDMA Operations for Providing Efficient Fine-Grained Resource Monitoring in Cluster-Based Servers”; Jun. 2006; pp. -10; Downloaded: Apr. 13, 2010 at 18:53:46 UTC from IEEE Xplore. 1-4244-0328-6/06.
Mysore, Shashidhar et al., “Understanding and Visualizing Full Systems with Data Flow Tomography” SPOLOS '08, Mar. 1-5, 2008, Seattle, Washington, USA, pp. 211-221.
Narayanan Ganapathy et al.; Papers-USENIX Annual Teleconference (No. 98); Entitled: “General Purpose Operating System Support for Multiple Page Sizes” 1998; pp. 1-9.
Non-Final Office Action mail date Jan. 10, 2011.
Paulsen, Erik; “Local Memory Coaxes Top Speed from SCSI Masters”; Electronic Design, v. 41, (Apr. 15, 1993) p. 75-6+.
Talluri et al., “A New Page Table for 64-bit Address Spaces,” ACM SIGOPS Operating Systems Review, vol. 29, Issue 5 (Dec. 1995), pp. 194-200.
VTdHowTo—Xen Wiki; Downloaded—Apr. 16, 2010; pp. 1-5; http://wiki.xensource.com/xenwiki/VTdHowTo.
Winwood, Simon, et al., “Multiple Page Size Support in the Linux Kernel”, Proceedings of Ottawa Linux Symposium, 2002.
Xu, Min et al., “Towards a VMM-based Usage Control Framework for OS Kernel Integrity Protection,” SACMAT '07, Jun. 20-22, 2007, Sophia Antipolis, France, pp. 71-80.
z/VM: Running Guest Operating Systems, IBM Publication No. SC24-5997-02, Oct. 2001.
International Search Report, PCT/CN2013/070828, mailed Apr. 25, 2013, 13 pages.
ISR, Information Materials for IDS, dated Apr. 8, 2013, from JPO OA dated Mar. 26, 2013, 2 pages.
Related Publications (1)
Number Date Country
20110320670 A1 Dec 2011 US