Examples of the present disclosure generally relate to electronic circuits and, in particular, to a configurable network-on-chip (NoC) for a programmable device.
Advances in integrated circuit technology have made it possible to embed an entire system, including a processor core, a memory controller, and a bus, in a single semiconductor chip. This type of chip is commonly referred to as a system-on-chip (SoC). Other SoCs can have different components embedded therein for different applications. The SoC provides many advantages over traditional processor-based designs. It is an attractive alternative to multi-chip designs because the integration of components into a single device increases overall speed while decreasing size. The SoC is also an attractive alternative to fully customized chips, such as application specific integrated circuits (ASICs), because ASIC designs tend to have significantly longer development times and higher development costs. A configurable SoC (CSoC), which includes programmable logic, has been developed to implement a programmable semiconductor chip that can obtain the benefits of both programmable logic and an SoC.
An SoC can contain a packet network structure known as a network-on-chip (NoC) to route data packets between logic blocks in the SoC (e.g., programmable logic blocks, processors, memory, and the like). A NoC in a non-programmable SoC has an irregular topology, static route configurations, fixed quality-of-service (QoS) paths, non-programmable address mapping, non-programmable routes, and egress/ingress nodes with a fixed interface protocol, width, and frequency. It is desirable to provide a more programmable and configurable NoC within an SoC.
Techniques for providing a configurable network-on-chip (NoC) for a programmable device are described. In an example, a programmable integrated circuit (IC) includes: a processor; a plurality of endpoint circuits; and a network-on-chip (NoC) having NoC master units (NMUs), NoC slave units (NSUs), NoC packet switches (NPSs), a plurality of registers, and a NoC programming interface (NPI); wherein the processor is coupled to the NPI and is configured to program the NPSs by loading an image to the registers through the NPI, providing physical channels between the NMUs and the NSUs and data paths between the plurality of endpoint circuits.
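For illustration only, the loading of an image to registers through the NPI can be pictured as a loop of memory-mapped register writes. The following sketch is a software analogy under stated assumptions: the base address, per-switch register stride, and image-word layout are invented for the example and do not reflect any actual device's register map.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical NPI register map: base address, per-switch stride, and
 * image-word layout are illustrative assumptions only. */
#define NPI_BASE        0xF6000000u  /* assumed NPI aperture base     */
#define NPS_REG_STRIDE  0x1000u      /* assumed register span per NPS */

/* One word of a programming image destined for one NPS register. */
struct npi_image_word {
    uint16_t nps_id;   /* which NoC packet switch                */
    uint16_t reg_off;  /* register offset within that switch     */
    uint32_t value;    /* routing/QoS/address-map configuration  */
};

static inline void npi_write32(uintptr_t addr, uint32_t val)
{
    *(volatile uint32_t *)addr = val;  /* memory-mapped register write */
}

/* Program the NPSs by loading an image to the registers through the
 * NPI, establishing physical channels between NMUs and NSUs. */
void noc_load_image(const struct npi_image_word *img, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uintptr_t addr = NPI_BASE
                       + (uintptr_t)img[i].nps_id * NPS_REG_STRIDE
                       + img[i].reg_off;
        npi_write32(addr, img[i].value);
    }
}
```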
In another example, a method of programming a network-on-chip (NoC) in a programmable integrated circuit (IC) includes: receiving first programming data at a processor in the programmable IC at boot time; loading the first programming data to registers in the NoC through a NoC programming interface (NPI) to create physical channels between NoC master units (NMUs) and NoC slave units (NSUs) in the NoC; and booting the programmable IC.
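The method reads as a three-step boot-time flow. A minimal sketch of that sequencing, reusing the image-word type from the previous sketch; the helper functions here (receive_boot_image, continue_boot) are hypothetical placeholders for platform behavior the text does not specify:

```c
#include <stddef.h>

struct npi_image_word;  /* image-word layout from the previous sketch */

/* Hypothetical helpers: how the boot image arrives and how boot
 * proceeds afterward are platform details not given in the text. */
extern const struct npi_image_word *receive_boot_image(size_t *n_words);
extern void noc_load_image(const struct npi_image_word *img, size_t n);
extern void continue_boot(void);

void boot_time_noc_setup(void)
{
    size_t n;
    /* 1. Receive the first programming data at the processor at boot. */
    const struct npi_image_word *img = receive_boot_image(&n);
    /* 2. Load it to the NoC registers through the NPI, creating the
     *    physical channels between the NMUs and NSUs.                 */
    noc_load_image(img, n);
    /* 3. Boot the programmable IC.                                    */
    continue_boot();
}
```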
In another example, a method of processing a request from an endpoint circuit in a network-on-chip (NoC) of a programmable integrated circuit (IC) includes: receiving the request at a master interface of a NoC master unit (NMU) in the NoC; packetizing data of the request at the NMU; sending the packetized data to a NoC slave unit (NSU) in the NoC through one or more NoC packet switches (NPSs); de-packetizing the packetized data at the NSU; and providing the de-packetized data to a slave interface of the NSU.
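As a purely illustrative model of this packetize/route/de-packetize path, the sketch below carries an AXI-style request across the NoC. The packet and request layouts are invented for the example; the actual NoC packet format is not described here.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical packet and request formats (assumed sizes). */
typedef struct {
    uint16_t dest_id;      /* destination NSU                 */
    uint8_t  payload[32];  /* request carried across the NoC  */
} noc_packet_t;

typedef struct {
    uint64_t addr;         /* AXI-style address               */
    uint8_t  data[24];     /* AXI-style write data            */
} axi_req_t;

/* NMU side: packetize the request received on the master interface. */
static noc_packet_t nmu_packetize(uint16_t dest, const axi_req_t *req)
{
    noc_packet_t p = { .dest_id = dest };
    memcpy(p.payload, &req->addr, sizeof req->addr);
    memcpy(p.payload + sizeof req->addr, req->data, sizeof req->data);
    return p;  /* then sent hop by hop through one or more NPSs */
}

/* NSU side: de-packetize and present the request on the slave interface. */
static axi_req_t nsu_depacketize(const noc_packet_t *p)
{
    axi_req_t req;
    memcpy(&req.addr, p->payload, sizeof req.addr);
    memcpy(req.data, p->payload + sizeof req.addr, sizeof req.data);
    return req;
}
```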
These and other aspects may be understood with reference to the following detailed description.
So that the manner in which the above recited features can be understood in detail, a more particular description, briefly summarized above, may be had by reference to example implementations, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical example implementations and are therefore not to be considered limiting of the scope of this disclosure.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements of one example may be beneficially incorporated in other examples.
Various features are described hereinafter with reference to the figures. It should be noted that the figures may or may not be drawn to scale and that the elements of similar structures or functions are represented by like reference numerals throughout the figures. It should be noted that the figures are only intended to facilitate the description of the features. They are not intended as an exhaustive description of the claimed invention or as a limitation on the scope of the claimed invention. In addition, an illustrated example need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular example is not necessarily limited to that example and can be practiced in any other examples even if not so illustrated or if not so explicitly described.
The NMUs 202 are traffic ingress points. The NSUs 204 are traffic egress points. Endpoint circuits coupled to the NMUs 202 and NSUs 204 can be hardened circuits (e.g., hardened circuits 110) or circuits configured in programmable logic. A given endpoint circuit can be coupled to more than one NMU 202 or more than one NSU 204.
The network 214 includes a plurality of physical channels 306. The physical channels 306 are implemented by programming the NoC 106. Each physical channel 306 includes one or more NoC packet switches 206 and associated routing 208. An NMU 202 connects with an NSU 204 through at least one physical channel 306. A physical channel 306 can also have one or more virtual channels 308.
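The relationship between a physical channel 306 and its virtual channels 308 can be pictured as one set of switches and routing time-shared by several independently buffered streams. The sketch below is an illustrative data-structure analogy; the channel counts and buffer depths are assumptions, not device parameters.

```c
#include <stdint.h>

#define MAX_VCS   4   /* assumed virtual channels per physical channel */
#define VC_DEPTH  8   /* assumed per-VC buffer depth                   */

/* A flit: one unit of packetized traffic on a physical channel. */
typedef struct {
    uint64_t payload;
    uint8_t  vc;      /* which virtual channel the flit travels on */
    uint8_t  last;    /* nonzero on the final flit of a packet     */
} flit_t;

/* A physical channel (NPSs plus routing) shared by several virtual
 * channels, each with its own buffer so that one blocked stream does
 * not stall traffic on the other virtual channels. */
typedef struct {
    flit_t  vc_buf[MAX_VCS][VC_DEPTH];
    uint8_t count[MAX_VCS];  /* occupancy per virtual channel */
} phys_channel_t;
```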
Connections through the network 214 use a master-slave arrangement. In an example, the most basic connection over the network 214 includes a single master connected to a single slave. However, in other examples, more complex structures can be implemented.
In the example, the PS 104 includes a plurality of NMUs 202 coupled to the HNoC 404. The VNoC 402 includes both NMUs 202 and NSUs 204, which are disposed in the PL regions 102. The memory interfaces 406 include NSUs 204 coupled to the HNoC 404. Both the HNoC 404 and the VNoC 402 include NPSs 206 connected by routing 208. In the VNoC 402, the routing 208 extends vertically. In the HNoC 404, the routing extends horizontally. In each VNoC 402, each NMU 202 is coupled to an NPS 206. Likewise, each NSU 204 is coupled to an NPS 206. NPSs 206 are coupled to each other to form a matrix of switches. Some NPSs 206 in each VNoC 402 are coupled to other NPSs 206 in the HNoC 404.
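One way to make the matrix arrangement concrete is to give each NPS 206 grid coordinates, as in the brief sketch below. The row/column convention is an assumption made for illustration, not a description of the actual device layout.

```c
#include <stdbool.h>

/* Hypothetical coordinates for the switch matrix described above. */
typedef struct { int col; int row; } nps_coord_t;

/* Within a VNoC column, routing 208 extends vertically: a switch's
 * neighbors are the switches one row above and below it. */
static nps_coord_t vnoc_up(nps_coord_t c)   { c.row += 1; return c; }
static nps_coord_t vnoc_down(nps_coord_t c) { c.row -= 1; return c; }

/* Assumed convention: row 0 is where a VNoC column couples into the
 * HNoC, whose routing extends horizontally across columns. */
static bool couples_to_hnoc(nps_coord_t c) { return c.row == 0; }
```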
Although only a single HNoC 404 is shown, in other examples, the NoC 106 can include more than one HNoC 404. In addition, while two VNoCs 402 are shown, the NoC 106 can include more than two VNoCs 402. Although memory interfaces 406 are shown by way of example, it is to be understood that other hardened circuits can be used in place of, or in addition to, the memory interfaces 406.
The endpoint circuits 702 and 714 can each be a hardened circuit or a circuit configured in programmable logic. The endpoint circuit 702 functions as a master circuit and sends read/write requests to the NMU 706 through the AXI master circuit 704. In the example, the endpoint circuits 702 and 714 communicate with the NoC 106 using an Advanced eXtensible Interface (AXI) protocol. While AXI is described in the example, it is to be understood that the NoC 106 may be configured to receive communications from endpoint circuits using other types of protocols known in the art. For clarity, the NoC 106 is described herein as supporting the AXI protocol. The NMU 706 relays the request through the set of NPSs 708 to reach the destination NSU 710. The NSU 710 passes the request to the attached AXI slave circuit 712 for processing and distribution of data to the endpoint circuit 714. The AXI slave circuit 712 can send read/write responses back to the NSU 710. The NSU 710 can forward the responses to the NMU 706 through the set of NPSs 708. The NMU 706 communicates the responses to the AXI master circuit 704, which distributes the data to the endpoint circuit 702.
At step 806, the NMU 706 sends the packets for the request to the NPSs 708. Each NPS 708 performs a table lookup for a target output port based on the destination address and routing information. At step 808, the NSU 710 processes the packets of the request. In an example, the NSU 710 de-packetizes the request, performs AXI conversion, and performs asynchronous crossing and rate-matching from the NoC clock domain to the clock domain of the endpoint circuit 714. At step 810, the NSU 710 sends the request to the endpoint circuit 714 through the AXI slave circuit 712. The NSU 710 can also receive a response from the endpoint circuit 714 through the AXI slave circuit 712.
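The per-hop behavior at step 806 reduces to an indexed table lookup. A minimal sketch follows; the table depth, port count, and indexing scheme are invented for the example, since the actual NPS table format is not specified in the text.

```c
#include <stdint.h>

#define NPS_PORTS   4    /* assumed number of output ports per NPS */
#define TABLE_SIZE  64   /* assumed routing-table depth            */

/* Hypothetical per-switch routing table mapping a destination to a
 * target output port. */
typedef struct {
    uint8_t out_port[TABLE_SIZE];  /* values in [0, NPS_PORTS) */
} nps_table_t;

static uint8_t nps_lookup(const nps_table_t *t, uint16_t dest_id)
{
    /* Real hardware would also fold in the packet's routing
     * information; a plain destination index suffices here to show
     * the table-lookup step. */
    return t->out_port[dest_id % TABLE_SIZE];
}
```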
At step 812, the NSU 710 processes the response. In an example, the NSU 710 performs asynchronous crossing and rate-matching from the clock domain of the endpoint circuit 714 to the clock domain of the NoC 106. The NSU 710 also packetizes the response into a stream of packets. At step 814, the NSU 710 sends the packets through the NPSs 708. Each NPS 708 performs a table lookup for a target output port based on the destination address and routing information. At step 816, the NMU 706 processes the packets. In an example, the NMU 706 de-packetizes the response, performs AXI conversion, and performs asynchronous crossing and rate-matching from the NoC clock domain to the clock domain of the endpoint circuit 702. At step 818, the NMU 706 sends the response to the endpoint circuit 702 through the AXI master circuit 704.
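The asynchronous crossing and rate-matching performed at the NMU/NSU boundaries can be pictured, in software terms, as a buffer that decouples two sides running at different rates. The sketch below is only an analogy; real hardware uses an asynchronous FIFO with synchronized pointers, and the depth here is an assumption.

```c
#include <stdatomic.h>
#include <stdint.h>

#define DEPTH 16u  /* assumed FIFO depth */

/* Single-producer/single-consumer ring buffer: the producer and
 * consumer stand in for the two clock domains. */
typedef struct {
    uint32_t    buf[DEPTH];
    atomic_uint head;  /* advanced by the producer (one domain)   */
    atomic_uint tail;  /* advanced by the consumer (other domain) */
} cdc_fifo_t;

static int cdc_push(cdc_fifo_t *f, uint32_t flit)
{
    unsigned h = atomic_load(&f->head), t = atomic_load(&f->tail);
    if (h - t == DEPTH) return 0;     /* full: faster side stalls */
    f->buf[h % DEPTH] = flit;
    atomic_store(&f->head, h + 1);
    return 1;
}

static int cdc_pop(cdc_fifo_t *f, uint32_t *flit)
{
    unsigned h = atomic_load(&f->head), t = atomic_load(&f->tail);
    if (h == t) return 0;             /* empty: slower side waits */
    *flit = f->buf[t % DEPTH];
    atomic_store(&f->tail, t + 1);
    return 1;
}
```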
In the example, the IC die 1102 includes an NPS 1116. The NPS 1116 is coupled to the IC die 1104 through a conductor 1114 of the substrate 1110. The IC die 1104 includes an NPS 1118. The NPS 1118 is coupled to the conductor 1114 of the substrate 1110. In general, any number of switches can be coupled in this way through conductors on the substrate 1110. Thus, the NoC in the IC die 1102 is coupled to the NoC in the IC die 1104, thereby forming a large NoC that spans both IC dies 1102 and 1104. Although the conductor 1114 of the substrate 1110 is shown on a single layer, the substrate 1110 can include any number of conductive layers having conductors coupled to NoC switches on the dies 1102 and 1104.
Referring to the PS 2, each of the processing units includes one or more central processing units (CPUs) and associated circuits, such as memories, interrupt controllers, direct memory access (DMA) controllers, memory management units (MMUs), floating point units (FPUs), and the like. The interconnect 16 includes various switches, busses, communication links, and the like configured to interconnect the processing units, as well as interconnect the other components in the PS 2 to the processing units.
The OCM 14 includes one or more RAM modules, which can be distributed throughout the PS 2. For example, the OCM 14 can include battery backed RAM (BBRAM), tightly coupled memory (TCM), and the like. The memory controller 10 can include a DRAM interface for accessing external DRAM. The peripherals 8, 15 can include one or more components that provide an interface to the PS 2. For example, the peripherals 8, 15 can include a graphics processing unit (GPU), a display interface (e.g., DisplayPort, high-definition multimedia interface (HDMI) port, etc.), universal serial bus (USB) ports, Ethernet ports, universal asynchronous receiver-transmitter (UART) ports, serial peripheral interface (SPI) ports, general purpose IO (GPIO) ports, serial advanced technology attachment (SATA) ports, PCIe ports, and the like. The peripherals 15 can be coupled to the MIO 13. The peripherals 8 can be coupled to the transceivers 7. The transceivers 7 can include serializer/deserializer (SERDES) circuits, MGTs, and the like.
In some FPGAs, each programmable tile can include at least one programmable interconnect element (“INT”) 43 having connections to input and output terminals 48 of a programmable logic element within the same tile, as shown by examples included at the top of
In an example implementation, a CLB 33 can include a configurable logic element (“CLE”) 44 that can be programmed to implement user logic plus a single programmable interconnect element (“INT”) 43. A BRAM 34 can include a BRAM logic element (“BRL”) 45 in addition to one or more programmable interconnect elements. Typically, the number of interconnect elements included in a tile depends on the height of the tile. In the pictured example, a BRAM tile has the same height as five CLBs, but other numbers (e.g., four) can also be used. A DSP tile 35 can include a DSP logic element (“DSPL”) 46 in addition to an appropriate number of programmable interconnect elements. An IOB 36 can include, for example, two instances of an input/output logic element (“IOL”) 47 in addition to one instance of the programmable interconnect element 43. As will be clear to those of skill in the art, the actual I/O pads connected, for example, to the I/O logic element 47 typically are not confined to the area of the input/output logic element 47.
In the pictured example, a horizontal area near the center of the die (shown in
Some FPGAs utilizing the architecture illustrated in
Note that
While the foregoing is directed to specific examples, other and further examples may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.