Mainstream processor chips, in both high performance and low power segments, are increasingly integrating additional functionality such as graphics, display engines, security engines, PCIe™ ports (i.e., ports in accordance with the Peripheral Component Interconnect Express (PCI Express™ (PCIe™)) Base Specification version 2.0 (published 2007) (hereafter the PCIe™ specification)) and other PCIe™-based peripheral devices, while maintaining legacy support for devices compliant with a PCI specification such as the Peripheral Component Interconnect (PCI) Local Bus Specification, version 3.0 (published 2002) (hereafter the PCI specification).
Such designs are highly segmented due to varying requirements from the server, desktop, mobile, embedded, ultra-mobile and mobile Internet device segments. Different markets seek single-chip system-on-chip (SoC) solutions that combine at least some of processor cores, memory controllers, input/output controllers and other segment-specific acceleration elements onto a single chip. However, designs that accumulate these features are slow to emerge due to the difficulty of integrating different intellectual property (IP) blocks on a single die. This is especially so because IP blocks can have various requirements and design uniqueness, and can require many specialized wires, communication protocols and so forth to enable their incorporation into an SoC. As a result, each SoC or other advanced semiconductor device that is developed requires a great amount of design complexity and customization to incorporate different IP blocks into a single device.
One such area of design interest is arbitration. To prevent deadlocks and stalls in a system, an arbiter may be present to receive requests from multiple agents and arbitrate the requests to provide access grants to resources of the system. In some systems, arbitration is performed according to a fixed priority scheme in which each requestor is allowed a certain number of grants, to prevent a higher priority requestor from starving lower priority requestors. Grant operation typically starts from the highest priority requestor and proceeds to the lowest priority requestor. In some systems, a lower priority requestor can only receive a grant when higher priority requestors have no active requests or have exhausted their grant counts. Requestors commonly receive reloaded grant counts when no active requests are present, every requestor participating in arbitration has exhausted its grant counts, or no active request from any requestor with a valid grant count exists.
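To make this concrete, the following is a minimal sketch of one grant decision in a grant-count fixed priority arbiter. All names and counts here are illustrative assumptions, not from any particular implementation; requestor 0 is taken as the highest priority.

```python
def arbitrate(requests, grant_counts):
    """Return the index of the winning requestor, or None.

    A requestor wins only if it has an active request and a non-zero
    grant count; ties go to the highest priority (lowest index)
    requestor, modeling fixed priority order.
    """
    for i, (req, count) in enumerate(zip(requests, grant_counts)):
        if req and count > 0:
            return i
    return None

# Example: requestor 0 has exhausted its grants, requestor 1 is idle,
# so the lower priority requestor 2 receives the grant.
winner = arbitrate([True, False, True], [0, 4, 2])
```

On each grant, the winner's grant counter would be decremented (e.g., by one), as described below.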
In a conventional fixed priority arbitration scheme, bandwidth allocation may not be maintained for non-pipelined requests. These requests are received in the arbiter from a requestor that cannot, for some reason, sustain back-to-back requests. As a result, in a platform having both pipelined and non-pipelined input/output (I/O) requestors, the non-pipelined requestors can suffer significant bandwidth degradation.
In various embodiments, an adaptive bandwidth allocation enhancement may be provided for a fixed priority arbiter to enhance fairness to requestors seeking access to an arbitrated resource. As will be discussed further below, a level of hysteresis can be provided to control when grant counts associated with the requestors can be reloaded.
Embodiments can be used in many different types of systems. As examples, implementations described herein may be used in connection with semiconductor devices such as processors or other semiconductor devices that can be fabricated on a single semiconductor die. In particular implementations, the device may be a system-on-chip (SoC) or other advanced processor that includes various homogeneous and/or heterogeneous processing agents, and additional components such as networking components, e.g., routers, controllers, bridge devices, memories and so forth.
Some implementations may be used in a semiconductor device that is designed according to a given specification such as an integrated on-chip system fabric (IOSF) specification issued by a semiconductor manufacturer to provide a standardized on-die interconnect protocol for attaching intellectual property (IP) blocks within an SoC or other chip. Such IP blocks can be of varying types, including general-purpose processors such as in-order or out-of-order cores, fixed function units, graphics processors, controllers, among many others. By standardizing an interconnect protocol, a framework is thus realized for a broad use of IP agents in different types of chips. Accordingly, not only can the semiconductor manufacturer efficiently design different types of chips across a wide variety of customer segments, it can also, via the specification, enable third parties to design logic such as IP agents to be incorporated in such chips. And furthermore, by providing multiple options for many facets of the interconnect protocol, reuse of designs is efficiently accommodated. Although embodiments are described herein in connection with this IOSF specification, understand the scope of the present invention is not limited in this regard and embodiments can be used in many different types of systems.
Referring now to
As will be described further below, each of the elements shown in
The IOSF specification includes three independent interfaces that can be provided for each agent, namely a primary interface, a sideband message interface and a testability or design for test (DFx) interface. According to the IOSF specification, an agent may support any combination of these interfaces. Specifically, an agent can support 0-N primary interfaces, 0-N sideband message interfaces, and an optional DFx interface. However, according to the specification, an agent must support at least one of these three interfaces.
Fabric 20 may be a hardware element that moves data between different agents. Note that the topology of fabric 20 can be product specific. As examples, a fabric can be implemented as a bus, a hierarchical bus, a cascaded hub or so forth. Referring now to
In various implementations, primary interface 112 implements a split transaction protocol to achieve maximum concurrency. That is, this protocol provides for a request phase, a grant phase, and a command and data phase. Primary interface 112 supports three basic request types: posted, non-posted, and completions, in various embodiments. Generally, a posted transaction is a transaction which when sent by a source is considered complete by the source and the source does not receive a completion or other confirmation message regarding the transaction. One such example of a posted transaction may be a write transaction. In contrast, a non-posted transaction is not considered completed by the source until a return message is received, namely a completion. One example of a non-posted transaction is a read transaction in which the source agent requests a read of data. Accordingly, the completion message provides the requested data.
In addition, primary interface 112 supports the concept of distinct channels to provide a mechanism for independent data flows throughout the system. As will be described further, primary interface 112 may itself include a master interface that initiates transactions and a target interface that receives transactions. The primary master interface can further be sub-divided into a request interface, a command interface, and a data interface. The request interface can be used to provide control for movement of a transaction's command and data. In various embodiments, primary interface 112 may support PCI ordering rules and enumeration.
In turn, sideband interface 116 may be a standard mechanism for communicating all out-of-band information. In this way, special-purpose wires designed for a given implementation can be avoided, enhancing the ability of IP reuse across a wide variety of chips. Thus in contrast to an IP block that uses dedicated wires to handle out-of-band communications such as status, interrupt, power management, configuration shadowing, test modes and so forth, a sideband interface 116 according to the IOSF specification standardizes all out-of-band communication, promoting modularity and reducing validation requirements for IP reuse across different designs. In general, sideband interface 116 may be used to communicate low performance information, rather than for primary data transfers, which typically may be communicated via primary interface 112.
As further illustrated in
Using an IOSF specification, various types of chips can be designed having a wide variety of different functionality. Referring now to
As further seen in
As further seen in
As further seen, fabric 250 may further couple to an IP agent 255. Although only a single agent is shown for ease of illustration the
Furthermore, understand that while shown as a single die SoC implementation in
In a grant-based fixed priority arbiter (GFPA) scheme, grant counts are used to allocate bandwidth for each of different requestors when a resource such as a link coupled between multiple agents (e.g., a shared bus or other interconnect) is over-subscribed. All grant counters are loaded with default values upon reset de-assertion. Each grant issued to a given requestor causes an update to the corresponding requestor's grant counter, e.g., a decrement of 1. Eventually, grant counts are reloaded globally when no active request is pending or all active requests have consumed their respective grant counts, or by a combination of both conditions, thus triggering a new round of arbitration. In a GFPA scheme, evaluation of whether to perform a global grant count reload can occur every clock cycle.
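The global reload condition just described can be sketched as follows. `DEFAULT_COUNTS` and the function names are illustrative assumptions standing in for the per-requestor reset values, not part of any specification:

```python
DEFAULT_COUNTS = [4, 2, 1]  # illustrative bandwidth ratio 4:2:1

def should_reload(requests, grant_counts):
    """Reload when no requestor both asserts a request and still holds
    a non-zero grant count, i.e., no grantable request exists."""
    return not any(req and cnt > 0
                   for req, cnt in zip(requests, grant_counts))

def maybe_reload(requests, grant_counts):
    # Evaluated every clock cycle in a GFPA scheme.
    if should_reload(requests, grant_counts):
        return list(DEFAULT_COUNTS)  # start a new arbitration round
    return grant_counts
```

Note that this unqualified check is exactly what a non-pipelined requestor's request bubble can trigger prematurely: during the bubble no request appears active, so a reload fires even though the requestor has grants left.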
Certain requestors are incapable of sustaining back-to-back request assertions. In other words, these requestors cannot issue pipelined requests (e.g., in a first clock cycle and the next cycle). This may be due to request credit exchange round-trip delay for agents having limited request queue depth (avoiding additional gate count), or to a device-internal back-to-back request bubble, in which a request signal is de-asserted when it is granted by the arbiter. To prevent a global grant count reload from being triggered unintentionally when back-to-back request assertion is not sustainable by a given agent, embodiments can delay reload of the grant counts. Specifically, a grant count reload operation can be delayed when one or more non-pipelined requestors' grant counters have not consumed all their grant counts for a given arbitration round. In this way, bandwidth can be allocated to the non-pipelined requestor(s) per the bandwidth ratio defined by the assigned grant counts. That is, grant counter reload can be delayed when no requests are pending and the grant counter associated with at least one of the agents has a non-zero value.
In various embodiments, a global hysteresis counter may be provided in a GFPA to resolve the bandwidth allocation issue due to a request bubble (e.g., of 1-5 clocks) of non-pipelined requestors. Effectively, the global grant count reload in GFPA is delayed by the hysteresis counter until ‘accurate’ information from requestors is observed. This delay thus prevents grant counter reload for a predetermined number of clock cycles after a non-pipelined requestor has a request de-asserted (when at least one of the requestors has available grants for the arbitration round).
Upon de-assertion of any non-pipelined request, the hysteresis counter can be loaded with a configurable value (which in various embodiments may be set to greater than or equal to a number of clocks of a request bubble, e.g., of a requestor having a largest request bubble). The counter may be updated each clock cycle (e.g., it can be self-decremented by one per clock cycle until it reaches zero). And the global grant count reload is allowed to occur only when the next state of the hysteresis counter is zero. This state is equivalent to the present state (or current value) being one, with a decrement term asserted; or a present state of zero, with a reload term de-asserted. In other words, the next state of the hysteresis counter is the counter's flops input, or the value of the counter in the next clock.
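The next-state-zero condition above can be expressed as a small predicate. This is a hedged sketch with illustrative names (`hyst` is the hysteresis counter's present value, `decrement` and `load` its decrement and reload terms):

```python
def reload_allowed(hyst, decrement, load):
    """True when the next state of the hysteresis counter is zero:
    either the present value is 1 with the decrement term asserted,
    or the present value is 0 with the reload term de-asserted."""
    if load:
        return False  # counter is being (re)loaded with the hysteresis value
    if hyst == 0:
        return True   # already at zero and staying there
    return hyst == 1 and decrement
```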
To minimize the unnecessary effect of this hysteresis period (where the next state of the hysteresis counter is larger than zero), operation of the hysteresis period may further be qualified so that it is triggered only when the de-asserting non-pipelined request is from a requestor whose corresponding grant counter has a non-zero value, since requestors that have consumed their last grant count do not need the hysteresis effect. Also, when the grant counts of all non-pipelined requestors are zero, the hysteresis effect may be eliminated by resetting the hysteresis counter to zero.
In one embodiment, the priority of hysteresis counter operation, per cycle of an arbitration round, may be as follows: i. (first priority) reset to zero when grant counts of all non-pipelined requests are zero; ii. (second priority) load with hysteresis value when there is a de-assertion of any non-pipelined request with a non-zero grant count; and iii. (third priority) decrement by 1 when the counter is larger than zero.
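The three-level priority above can be sketched per clock cycle as follows; `HYST_VALUE` and all signal names are illustrative assumptions (the hysteresis value would be configured to be at least the longest request bubble, in clocks):

```python
HYST_VALUE = 4  # illustrative configurable hysteresis value

def next_hysteresis(hyst, np_grant_counts, np_request_deasserted):
    """Compute the hysteresis counter's next state for one cycle.

    np_grant_counts: grant counts of the non-pipelined requestors.
    np_request_deasserted: per-requestor flags, True in the cycle a
    non-pipelined request de-asserts.
    """
    # i. First priority: reset to zero when the grant counts of all
    #    non-pipelined requestors are zero.
    if all(c == 0 for c in np_grant_counts):
        return 0
    # ii. Second priority: load the hysteresis value on de-assertion of
    #     any non-pipelined request with a non-zero grant count.
    if any(d and c > 0
           for d, c in zip(np_request_deasserted, np_grant_counts)):
        return HYST_VALUE
    # iii. Third priority: decrement by 1 when the counter is above zero.
    return hyst - 1 if hyst > 0 else 0
```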
Thus according to various embodiments, a GFPA scheme may be able to maintain bandwidth allocation per a defined ratio despite the existence of non-pipelined requestors. By knowing a priori the clock number of request bubbles of all non-pipelined requestors, the hysteresis value can be configured to adapt to different clock numbers of request bubble in different platforms without register transfer level (RTL) changes.
Furthermore, a GFPA scheme with a hysteresis counter in accordance with an embodiment of the present invention may handle non-pipelined requestors with different clock numbers of request bubbles, provided the hysteresis value is equal to or larger than the longest number of request bubbles of the non-pipelined requestors in the platform.
In addition, the hysteresis counter that delays a global grant count reload in an arbiter does not introduce additional gate levels into the request grant path, thus preserving this timing critical path. The effect of the hysteresis period can be handled carefully: a new hysteresis period is triggered by de-assertion of any non-pipelined request with a non-zero grant count, and when all non-pipelined requestors' grant counts are zero, the hysteresis effect can be eliminated immediately by resetting the hysteresis counter to zero.
Referring now to
In various embodiments, arbiter 420 may be a fixed priority grant count arbiter to provide one or more grants to each of requestors 410 during an arbitration round or cycle. As shown in
Referring now to
Still referring to
Still referring to
In various embodiments, grant reload controller 550 may include logic to perform control of a grant reload operation such as shown in the flow diagram of
If instead at diamond 610 it is determined that a non-pipelined request has not been de-asserted, control passes to diamond 650. There, it can be determined whether the hysteresis counter is at a zero value. If so, control passes to block 660 where the global grant count reload may be allowed. If instead at diamond 650 it is determined that the hysteresis counter value is non-zero, control rather passes to block 655, where the hysteresis counter may be updated. For example, for this given clock cycle, the value of the hysteresis counter can be decremented, e.g., by one.
Note that at diamond 615 if it is determined that the grant counters of non-pipelined requestors are all at zero, control passes to block 670, where the hysteresis counter, e.g., present in the grant reload controller, is reset to a zero value. This reset to zero value may thus allow the global grant count reload to occur (block 660). More specifically with reference back to
Embodiments may be implemented in different components such as a platform controller hub (PCH) used in desktop, mobile and server platforms. The hysteresis counter with a configurable hysteresis count enables a fabric to adapt to platforms having non-pipelined I/O interfaces with different clock numbers of request bubbles while maintaining bandwidth allocation per the assigned grant counts. In this way, a chipset or other component can deliver the expected bandwidth allocation across its I/O interfaces under an over-subscribed condition.
Referring now to
While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.
Publication Number: US 20130054856 A1, published Feb. 2013.