The present invention relates generally to the computing industry and, more specifically, to systems, methods, computer program products, and apparatuses for extending peripheral component interconnect express (PCIe) fabrics.
Peripheral component interconnect express (PCIe) is a high-speed serial computer expansion bus standard widely used to attach various hardware devices (e.g., storage devices, network cards, sound cards, and the like) to a host central processing unit (CPU). Because host CPU memory configurations may be vendor-specific, PCIe provides an input/output (I/O) standard for connecting various devices to the CPU. PCIe allows for a variety of improvements over older bus standards (e.g., PCI and PCI-X). For example, PCIe generally allows for higher maximum system bus throughput, lower I/O pin count, smaller bus footprint, native hot-plug functionality, and other advantages.
An issue with the PCIe bus standard is that each PCIe fabric is limited by a finite amount of resources. For example, each PCIe fabric's 32-bit address memory space may not exceed 4 GB in size, and each fabric may only have a maximum of 256 bus numbers. Because PCIe operates on point-to-point serial connections, these limitations directly cap the maximum number of nodes (i.e., devices) that may be attached to a PCIe fabric. That is, bus numbers for various devices may not overlap, and each attached device requires a set of unique bus numbers to function. Various bus numbers in a PCIe fabric may be reserved for particular uses (e.g., as internal bus numbers of PCIe switches, hot-plug functionality, or the like), further limiting the number of available bus numbers.
Furthermore, a fault occurring at any component attached to a PCIe fabric may impact any other downstream or upstream components attached to the faulty component. As the number of components and software drivers attached to the PCIe fabric increases, fault handling becomes more difficult and the propagation of any faults may lead to a system-wide crash.
These and other problems are generally solved or circumvented, and technical advantages are generally achieved, by preferred embodiments of the present invention, which provide an extended peripheral component interconnect express fabric.
In accordance with one example embodiment, a peripheral component interconnect express topology includes a host PCIe fabric comprising a host root complex. The host PCIe fabric includes a first set of bus numbers and a first memory mapped input/output (MMIO) space on a host central processing unit (CPU). Further, an extended PCIe fabric is provided, which includes a root complex endpoint (RCEP) as part of an endpoint of the host PCIe fabric. The extended PCIe fabric also includes a second set of bus numbers and a second MMIO space separate from the first set of bus numbers and the first MMIO space, respectively.
In accordance with another example embodiment, a peripheral component interconnect express (PCIe) topology includes an extended PCIe fabric. The extended PCIe fabric includes a root complex endpoint (RCEP). The RCEP is configured to be part of an endpoint of a first level PCIe fabric. In addition, the extended PCIe fabric comprises a memory mapped input/output (MMIO) space and a set of bus numbers.
In accordance with yet another example embodiment, a method for connecting peripheral devices includes providing a root complex endpoint (RCEP) hosting an extended peripheral component interconnect express (PCIe) fabric as part of an endpoint of a host PCIe fabric. The extended PCIe fabric has a first MMIO space that is separate from a second MMIO space of the host PCIe fabric. The method further comprises mapping the first MMIO space to the second MMIO space.
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawing, in which:
Example embodiments covering various aspects of the encompassed innovation are discussed in greater detail below. It should be appreciated, however, that the present invention provides many applicable unique and novel concepts that can be embodied in a wide variety of specific contexts. Accordingly, the specific embodiments discussed herein are merely illustrative of specific ways to make, use, and implement various aspects of the present invention, and do not necessarily limit the scope thereof unless otherwise claimed.
The following various exemplary embodiments are described in a specific context, namely a peripheral component interconnect express (PCIe) bus standard fabric. As will be appreciated, however, such example embodiments may also extend to other fabrics (e.g., upside-down tree topologies with resource restrictions).
As described herein, a root complex is a hardware structure serving as a bridge between a PCIe fabric and a host central processing unit (CPU). The root complex may be integrated as part of the CPU.
Also as used herein, MMIO space may include a portion of memory addressable using 32-bit addresses, which is generally limited to the first 4 GB of MMIO space. The MMIO space may further include a portion of memory addressable using 64-bit addresses, which may be mapped to MMIO space above the first 4 GB. Various example embodiments described herein include one or more additional root complex hardware structures as part of the endpoints of the host PCIe fabric. By adding root complex functionalities to endpoints (referred to as root complex endpoints (RCEPs)), additional PCIe fabrics may be connected to form extended PCIe fabrics that are not limited to the finite resources of the host PCIe fabric.
Switch 104 may include internal buses that allow multiple devices to be connected to a single root port 103 while still maintaining a point-to-point serial connection used by the PCIe standard.
For example, endpoints 114 and 116 may be electrically connected to RCEP 106 through switch 112 and root port 110. As noted above, endpoints 114 and 116 may be almost any type of peripheral device, including storage devices, networking devices, sound cards, video cards, and the like. As in conventional PCIe fabrics, endpoints 114 and 116 may simply terminate extended PCIe fabric 118. Alternatively, and in accordance with exemplary embodiments, endpoints 114 and/or 116 may include another RCEP having its own set of bus numbers and MMIO space, thus forming another extended PCIe fabric. Accordingly, RCEPs essentially add gateway functionality to a PCIe endpoint and therefore, theoretically, allow a virtually limitless number of nodes to be attached to a host root complex.
In accordance with exemplary embodiments, RCEP 106's MMIO space may include a portion addressable using 32-bit addresses (referred to as 32-bit memory space) and a portion addressable using 64-bit addresses (referred to as 64-bit memory space). In accordance with such embodiments, RCEP 106's PCIe configuration space, 32-bit memory space, and 64-bit memory space may be mapped to the 64-bit MMIO space of host PCIe fabric 100 (i.e., the portion of PCIe fabric 100's MMIO space that is addressable using 64-bit addresses). Thus, in accordance with such embodiments, RCEP 106 may be accessed from the MMIO space of host PCIe fabric 100. The mapping and enumeration of RCEP 106's extended fabric may be done using endpoint drivers associated with RCEP 106, as explained in greater detail below.
In accordance with other exemplary embodiments, RCEP 106 may also include fault handling mechanisms that resolve any faults occurring in its downstream devices (e.g., endpoints 114 and 116). Therefore, in such embodiments, faults may be contained by RCEP 106 and not propagate upstream to PCIe fabric 100, and RCEP 106 may act as a fault boundary. Further, RCEP 106 may generate an error interrupt to notify host root complex 102 of any faults. In such embodiments, these error interrupts may be used as a reporting mechanism, and any PCIe faults occurring in RCEP 106's downstream devices may be handled by RCEP 106 and not passed upstream to host root complex 102. The specific details regarding error interrupt reporting and fault handling may be implementation specific and vary between computing platforms/root complexes. For example, current PCIe standards leave the details of how a root complex handles faults open to vendor-specific implementations of computing platforms/root complexes. Therefore, the implementation details of RCEP 106's fault handling and error reporting mechanisms may similarly vary depending on the applicable computing platform/root complex configuration.
Mapping and accessing PCIe configuration space for extended fabric 118 may be done using any suitable configuration.
In accordance with exemplary embodiments, all (or any portion) of a device's functions connected to PCIe fabric 100 or extended fabric 118 may be mapped to their respective fabric's own dedicated 256 MB of configuration space. Such space may be addressable, for example, by knowing the 8-bit PCI bus, 5-bit device, and 3-bit function numbers for a particular device function. This type of function addressing may be referred to herein as bus/device/function (BDF) addressing, which allows for a total of 256 bus numbers, 32 devices, and 8 functions for each PCIe fabric. Generally, in such embodiments, the device number may be set to 0 in accordance with PCIe bus standard fabrics due to PCIe's serial point-to-point connection structure. Further, in accordance with such embodiments, each device function may be entitled to 4 KB of configuration registers.
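The 8/5/3-bit split described above can be sketched as a small helper that packs and unpacks a 16-bit BDF identifier. This is a minimal illustration, assuming the conventional PCIe field layout (bus in bits 15:8, device in bits 7:3, function in bits 2:0); the function names are illustrative only.

```c
#include <stdint.h>

/* Pack an 8-bit bus, 5-bit device, and 3-bit function number into the
 * 16-bit BDF identifier used for configuration addressing. */
static inline uint16_t bdf_pack(uint8_t bus, uint8_t dev, uint8_t fn)
{
    return (uint16_t)((bus << 8) | ((dev & 0x1F) << 3) | (fn & 0x07));
}

/* Unpack the individual fields again. */
static inline uint8_t bdf_bus(uint16_t bdf) { return (uint8_t)(bdf >> 8); }
static inline uint8_t bdf_dev(uint16_t bdf) { return (bdf >> 3) & 0x1F; }
static inline uint8_t bdf_fn(uint16_t bdf)  { return bdf & 0x07; }
```

With 4 KB of configuration registers per function, the 256 × 32 × 8 possible BDF values account exactly for the 256 MB of configuration space per fabric mentioned above.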
As shown, PCIe configuration space for the first level PCIe fabric (e.g., fabric 100) may occupy 256 MB of address space 208 in portion of space 202. In such embodiments, PCIe configuration space for the extended PCIe (e.g., extended fabric 118) may be mapped to 256 MB of address space 210 in 64-bit MMIO space 206. Further, any MMIO transactions in address spaces 208 or 210 may be treated as PCIe configuration access transactions for either PCIe host fabric 100 or extended fabric 118, respectively, by their corresponding root complex.
In such embodiments, the addresses of extended PCIe configuration space 210 may start at base value 212. Thus, the configuration space registers of a PCIe device function located at bus number B, device number D, and function number F may start at, for example, Base+((B×256+D×8+F)×4 KB). Alternatively, other suitable configurations for addressing device space registers are also contemplated, and the description of BDF addressing here is used for illustrative purposes only.
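The base-plus-offset computation above can be sketched as follows. This is a sketch of the ECAM-style layout implied by the BDF scheme (4 KB of registers per function); the helper name `cfg_reg_base` is an assumption, not a name used by the embodiments.

```c
#include <stdint.h>

/* Start address of the 4 KB configuration-register block for the
 * function at (bus B, device D, function F), relative to the fabric's
 * 256 MB configuration-space base.
 * Offset = ((B*32 + D)*8 + F) * 4 KB = (B*256 + D*8 + F) * 4 KB. */
static inline uint64_t cfg_reg_base(uint64_t base,
                                    uint8_t bus, uint8_t dev, uint8_t fn)
{
    uint64_t offset = (((uint64_t)bus * 32 + dev) * 8 + fn) * 4096;
    return base + offset;
}
```

The highest function (bus 255, device 31, function 7) lands at the last 4 KB of the 256 MB window, confirming the space is exactly covered.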
Mapping and accessing 32-bit memory space for extended PCIe fabric 118 may be done using any suitable configuration.
Under current PCIe standards, the maximum size for a 32-bit address space is 4 GB. Furthermore, in accordance with current PCIe standards, on certain computing platforms (e.g., x86 platforms) the 32-bit memory for first-level PCIe fabric 100 may be shared with its PCIe configuration space and, thus, may be only 256 MB in size.
In accordance with exemplary embodiments, extended PCIe fabric 118 may have its own 32-bit memory space separate from the physical address space 200 of the host CPU.
Mapping and accessing 64-bit memory space for extended PCIe fabric 118 may be done using any suitable configuration.
In such embodiments, extended PCIe fabric 118 may have its own 64-bit memory space separate from the physical address space 200 of the host CPU.
In certain exemplary embodiments, addressing the device register bank on the extended fabric 118 may be done using format 416. For example, format 416 may be used if the base physical address is size aligned to extended fabric 118's memory space configuration. Using format 416, RCEP 106 may strip the upper bits (e.g., bits 63 to p) of format 410 to form a 64-bit address for extended fabric 118. In other exemplary embodiments, format 418 may be used if the base physical address is not size aligned to extended fabric 118's memory space. In such embodiments, in order to compensate for the non-alignment of the physical address, an offset 420 may be added to a 64-bit system base address 422. Moreover, if the base address is at least 4 GB aligned (i.e., the lower 32 bits are 0), the size adjustment need only be performed for the high 32 bits of the base address.
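The two translation cases above can be sketched as follows, where `p` is the number of low-order address bits spanned by the extended fabric's memory window. Both helper names are hypothetical, and the non-aligned case is a sketch under the assumption that the host address is rebased by subtracting the host-side window base and adding the extended fabric's own base.

```c
#include <stdint.h>

/* Size-aligned case (format 416): the host window base is aligned to
 * the window size (2^p bytes), so the RCEP can strip bits 63..p and
 * keep only the low p bits as the extended-fabric address. */
static inline uint64_t xlate_aligned(uint64_t host_addr, unsigned p)
{
    return host_addr & ((1ULL << p) - 1);
}

/* Non-aligned case (format 418): compensate for the non-alignment by
 * rebasing: subtract the host-side window base, add the extended
 * fabric's base address. */
static inline uint64_t xlate_rebased(uint64_t host_addr,
                                     uint64_t host_base, uint64_t ext_base)
{
    return host_addr - host_base + ext_base;
}
```

If the host base happens to be 4 GB aligned, the subtraction only ever changes the high 32 bits, matching the optimization noted above.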
In an exemplary embodiment, extended fabric 118's PCIe configuration space 210, 32-bit memory space 302, and 64-bit memory space 402 may overlap in the host CPU. In such embodiments, RCEP 106 may request a common mapping window large enough to accommodate all desired address ranges (e.g., spaces 210, 302, and 402) from the host CPU, and RCEP 106 may then divide the common mapping window as necessary into various desired address ranges.
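The carve-up of a single common mapping window can be sketched as below. The sub-window sizes are illustrative only (256 MB of configuration space, a 4 GB 32-bit window, and the remainder for 64-bit memory), and the struct and field names are assumptions.

```c
#include <stdint.h>

struct rcep_windows {
    uint64_t cfg_base;    /* extended configuration space (256 MB) */
    uint64_t mem32_base;  /* extended fabric's 32-bit memory space */
    uint64_t mem64_base;  /* extended fabric's 64-bit memory space */
};

/* Divide one common mapping window (granted by the host CPU) into the
 * three address ranges the extended fabric needs, laid out back to
 * back.  Sizes here are illustrative. */
static void rcep_divide_window(uint64_t win_base, struct rcep_windows *w)
{
    w->cfg_base   = win_base;                      /* +0             */
    w->mem32_base = win_base + (256ULL << 20);     /* after 256 MB   */
    w->mem64_base = win_base + (256ULL << 20)      /* after the 4 GB */
                             + (4ULL << 30);       /* 32-bit window  */
}
```

Requesting one large window and subdividing it keeps the host-side setup to a single allocation, as the embodiment describes.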
In accordance with exemplary embodiments, extended fabric 118 may support device interrupts, which may be handled using any suitable method. For example, extended fabric 118 may use a message signaled interrupt (MSI) configuration. In such embodiments, MSIs originating from devices connected to extended fabric 118 (e.g., endpoints 114 and 116) may be delivered to applicable root ports (e.g., root port 110) in accordance with the PCIe bus standard. Furthermore, root port 110 of RCEP 106 may have a pre-assigned address window for MSIs. In such embodiments, when a memory write address matches the pre-assigned MSI address window, the transaction may be recognized as an interrupt. Moreover, root port 110 may collect all the MSIs originating from its downstream fabrics and deposit them into a queue (where the queue may be located in the host CPU's memory in physical address space 200). Root port 110 may then signal a separate interrupt, which may also be an MSI, to its upstream root port (e.g., host root port 103). Host root complex 102 may then trigger an appropriate software handler in accordance with the received interrupt. An interrupt handler of RCEP 106's root port 110 may then examine the MSI queue in main memory, determine the originating device (e.g., endpoint 114 or 116), and dispatch the appropriate interrupt handler of the device driver. Of course, other schemes for handling device interrupts are contemplated herein; and thus, any specific implementation described herein is used for illustrative purposes only—unless otherwise explicitly claimed.
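The queue-and-drain flow above can be pictured with the sketch below. The ring layout, the `msi_entry` fields, and the function names are all assumptions for illustration, not a format required by the embodiments.

```c
#include <stdint.h>

/* Hypothetical in-memory MSI queue: the RCEP's root port is the
 * producer, the host-side interrupt handler is the consumer. */
struct msi_entry {
    uint16_t requester_id;  /* BDF of the originating endpoint */
    uint16_t vector;        /* MSI data payload                */
};

#define MSI_QUEUE_LEN 64

struct msi_queue {
    struct msi_entry ring[MSI_QUEUE_LEN];
    unsigned head, tail;
};

/* Producer side: root port deposits a collected downstream MSI. */
static void msi_push(struct msi_queue *q, uint16_t rid, uint16_t vec)
{
    q->ring[q->tail].requester_id = rid;
    q->ring[q->tail].vector = vec;
    q->tail = (q->tail + 1) % MSI_QUEUE_LEN;
}

/* Consumer side: host handler pops the next entry to determine the
 * originating device; returns 0 when the queue is empty. */
static int msi_pop(struct msi_queue *q, struct msi_entry *out)
{
    if (q->head == q->tail)
        return 0;
    *out = q->ring[q->head];
    q->head = (q->head + 1) % MSI_QUEUE_LEN;
    return 1;
}
```

After popping an entry, the host-side handler would use `requester_id` to dispatch the matching device driver's interrupt handler, per the description above.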
In accordance with other exemplary embodiments, extended fabric 118 may also support direct memory access (DMA), which may be handled using any suitable method. In such embodiments, DMA transactions may include read requests, read completions, and write requests. PCIe packets may carry a system physical address or an IO (input/output) virtual address translated by an IOMMU (input/output memory management unit). Moreover, the PCIe requester IDs may be per fabric. Thus, the requester IDs may be replaced with RCEP 106's ID when a request crosses a PCIe fabric boundary and goes upstream to host PCIe fabric 100. That is, on extended fabric 118, the requester ID may be the ID of the endpoint device (e.g., endpoint 114 or 116). As the request gets forwarded upstream to root complex 102, the requester ID may be replaced with the ID of RCEP 106.
In such embodiments, DMA writes refer to moving data from a device (e.g., endpoint 114 or 116) to the host CPU's memory. RCEP 106 may replace the device ID with RCEP 106's ID when the request is passed upstream by RCEP 106 to root complex 102 and the host CPU. Furthermore, DMA reads refer to moving data from the host CPU's memory to the device. In such embodiments, RCEP 106 may utilize a hardware scoreboard to track all read requests by assigning transaction tags (e.g., as part of the request packets) for transactions to fabric 100. These transaction tags may be linked to RCEP 106's scoreboard entries and may be used to record the requester IDs of read request packets originating on extended fabric 118. Completion data received by RCEP 106 from root complex 102 may carry the same transaction tag as the corresponding read request in accordance with such embodiments. Thus, in such embodiments, transaction tags may be used to match against scoreboard entries to determine the appropriate device ID used on extended fabric 118. Of course, other schemes for handling DMA requests are contemplated herein; and thus, any specific implementation described herein is used for illustrative purposes only—unless otherwise explicitly claimed.
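The tag-based scoreboard can be sketched as a small table that maps the tag placed in an upstream read request back to the original extended-fabric requester ID. The structure, tag count, and names are illustrative assumptions only.

```c
#include <stdint.h>

#define SB_TAGS  32    /* illustrative number of outstanding read tags */
#define TAG_NONE 0xFF

struct scoreboard {
    uint16_t requester_id[SB_TAGS]; /* extended-fabric BDF per tag */
    uint8_t  in_use[SB_TAGS];
};

/* Allocate a transaction tag for an upstream read request, recording
 * the original requester ID from the extended fabric; returns
 * TAG_NONE when no tag is free. */
static uint8_t sb_alloc(struct scoreboard *sb, uint16_t ext_rid)
{
    for (uint8_t t = 0; t < SB_TAGS; t++) {
        if (!sb->in_use[t]) {
            sb->in_use[t] = 1;
            sb->requester_id[t] = ext_rid;
            return t;
        }
    }
    return TAG_NONE;
}

/* On completion, match the tag against the scoreboard entry, free the
 * tag, and recover the extended-fabric ID so the completion can be
 * routed back to the right endpoint. */
static uint16_t sb_complete(struct scoreboard *sb, uint8_t tag)
{
    sb->in_use[tag] = 0;
    return sb->requester_id[tag];
}
```

The DMA-write direction needs no scoreboard at all, since writes carry no completion: only the requester ID substitution at the fabric boundary is required.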
Therefore, using the various PCIe configuration access, memory access, DMA, and interrupt mechanisms described in exemplary embodiments of the above paragraphs, RCEP devices may be used to host extended PCIe fabrics and connect additional devices (e.g., additional RCEP devices and/or peripheral devices) to a host root complex. RCEP devices may implement logic similar to that of a typical PCIe root complex. Each extended PCIe fabric may have its own MMIO space and set of bus numbers. Therefore, the total number of devices that may be connected to a host CPU may not be limited to the number of available bus numbers of the host PCIe fabric. In such embodiments, the MMIO space of each extended fabric may be mapped to the 64-bit MMIO space of its parent fabric (e.g., the parent fabric for extended fabric 118 is first level fabric 100) for ease of access. Furthermore, fabric enumeration of the extended fabrics may be achieved through the RCEP endpoint device driver. In such embodiments, each RCEP may handle faults originating on the applicable extended root complex fabric. Thus, fault isolation may be achieved so that downstream fabric faults may be intercepted at the fabric boundary of an extended PCIe fabric and not propagated upstream.
In accordance with exemplary embodiments, almost any peripheral device (e.g., sound cards, video cards, network cards, memory cards, and the like) may be connected to the extended fabric without changing its driver software. In such embodiments, the extended PCIe fabric interacts with peripheral devices in the same manner as a first level PCIe fabric. Moreover, the host CPU's software (e.g., an operating system) and RCEP drivers may be modified and/or created to set up register mapping, DMA address mapping, implement interrupt handlers through the RCEP, and other similar functions. Therefore, peripheral devices need not be notified that they are connected to an extended PCIe fabric instead of a first level PCIe fabric. Thus, extended PCIe fabrics and RCEPs may be compatible with existing peripheral device drivers.
While this invention has been described with reference to illustrative exemplary embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative exemplary embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is therefore intended that the appended claims encompass any such modifications or embodiments.
This application is a continuation of U.S. patent application Ser. No. 16/539,707, filed on Aug. 13, 2019, which is a continuation of U.S. patent application Ser. No. 16/010,199, filed on Jun. 15, 2018, which is a continuation of U.S. patent application Ser. No. 14/822,685, filed on Aug. 10, 2015, which is a continuation of U.S. patent application Ser. No. 13/931,640, filed Jun. 28, 2013, entitled “System and Method for Extended Peripheral Component Interconnect Express Fabrics”. All of the aforementioned patent applications are hereby incorporated by reference in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
6983339 | Rabe | Jan 2006 | B1 |
7334071 | Onufryk et al. | Feb 2008 | B2 |
7506084 | Moertl et al. | Mar 2009 | B2 |
7836129 | Freimuth et al. | Nov 2010 | B2 |
8806098 | Mandapuram et al. | Aug 2014 | B1 |
9135200 | Shao | Sep 2015 | B2 |
9405688 | Nagarajan et al. | Aug 2016 | B2 |
20070005858 | Shah | Jan 2007 | A1 |
20070130407 | Olson et al. | Jun 2007 | A1 |
20070147359 | Congdon et al. | Jun 2007 | A1 |
20080092148 | Moertl et al. | Apr 2008 | A1 |
20080147959 | Freimuth et al. | Jun 2008 | A1 |
20080209099 | Kloeppner et al. | Aug 2008 | A1 |
20090063894 | Billau et al. | Mar 2009 | A1 |
20090248947 | Malwankar | Oct 2009 | A1 |
20090276551 | Brown et al. | Nov 2009 | A1 |
20100146222 | Cox et al. | Jun 2010 | A1 |
20100161872 | Daniel | Jun 2010 | A1 |
20100165874 | Brown et al. | Jul 2010 | A1 |
20110016235 | Brinkmann et al. | Jan 2011 | A1 |
20110064089 | Hidaka et al. | Mar 2011 | A1 |
20110131362 | Klinglesmith | Jun 2011 | A1 |
20110202703 | Bacher et al. | Aug 2011 | A1 |
20110225341 | Satoh et al. | Sep 2011 | A1 |
20110225389 | Grisenthwaite | Sep 2011 | A1 |
20120030387 | Harriman | Feb 2012 | A1 |
20120144230 | Buckland et al. | Jun 2012 | A1 |
20120166690 | Regula | Jun 2012 | A1 |
20130054867 | Nishita | Feb 2013 | A1 |
20130198432 | Ning | Aug 2013 | A1 |
20140075079 | Tsai | Mar 2014 | A1 |
20140115223 | Guddeti et al. | Apr 2014 | A1 |
20140223061 | Yap | Aug 2014 | A1 |
20140244888 | Kallickal | Aug 2014 | A1 |
20140372741 | Gardiner et al. | Dec 2014 | A1 |
20160137617 | Sanfilippo et al. | May 2016 | A1 |
Number | Date | Country |
---|---|---|
101052013 | Oct 2007 | CN |
101165665 | Apr 2008 | CN |
101206629 | Feb 2012 | CN |
101751371 | Aug 2012 | CN |
102870381 | Jan 2013 | CN |
103095463 | May 2013 | CN |
103678165 | Mar 2014 | CN |
2007087083 | Apr 2007 | JP |
2008181389 | Aug 2008 | JP |
2010520541 | Jun 2010 | JP |
2011199419 | Oct 2011 | JP |
2011227539 | Nov 2011 | JP |
2012128717 | Jul 2012 | JP |
2013045236 | Mar 2013 | JP |
2013054414 | Mar 2013 | JP |
2013088879 | May 2013 | JP |
2013196593 | Sep 2013 | JP |
2016522236 | Jul 2016 | JP |
20090117885 | Nov 2009 | KR |
20100080360 | Jul 2010 | KR |
2011037576 | Mar 2011 | WO |
Entry |
---|
Charlie Demerjian, Intel shows off Rack Scale Architecture and Rack Disaggregation plans. Apr. 9, 2013, SemiAccurate on Target Technology News, 8 pages. |
Ryuji Naito, Thorough Explanation: If You Make it, You Will Understand PCI Express, Interface vol. 36 No. 7, Japan, CQ Publishing Co., Ltd., Jul. 1, 2010, 20 pages. |
Shao, Wesley. Extending PCI Express Fabrics. PCI-SIG Developers Conference Asia-Pacific Tour 2013. Oct. 22, 2013, 22 pages. |
Jack Regula, Using Non-transparent Bridging in PCI Express Systems. Jun. 1, 2004, PLX Technology, Inc., 31 pages. |
Chapter 10: Interfacing an External Processor to an Altera FPGA, Embedded Design Handbook, Jul. 2011, total 14 pages. |
Lee Mohrmann et al., “Creating Multicomputer Test Systems using PCI and PCI Express”, 2009 IEEE, total 4 pages. |
Kwok Kong, “Enabling Multi-peer Support with a Standard-Based PCI Express Multi-ported Switch”, white paper, Integrated Device Technology, Inc., Jan. 27, 2006, total 13 pages. |
Kwok Kong, “Using PCI Express as the Primary System Interconnect in Multiroot Compute, Storage, Communications and Embedded”, white paper, 2008, total 13 pages. |
Chetan Kapoor, “What is PXImc?”, http://www.pxisa.org/files/resources/Articles%20and%20Application%20Notes/What%20is%20PXImc.pdf, total 8 pages. |
Number | Date | Country | |
---|---|---|---|
20210216485 A1 | Jul 2021 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16539707 | Aug 2019 | US |
Child | 17153639 | US | |
Parent | 16010199 | Jun 2018 | US |
Child | 16539707 | US | |
Parent | 14822685 | Aug 2015 | US |
Child | 16010199 | US | |
Parent | 13931640 | Jun 2013 | US |
Child | 14822685 | US |