This application claims priority from United Kingdom Patent Application No. 2107390.3, filed May 24, 2021, which application is incorporated herein by reference in its entirety.
The present invention relates to a module for mapping transactions between different transaction domains in a data bus, particularly but not exclusively for AXI data buses.
Many modern electronic devices include a number of buses to allow different on-chip devices to communicate with one another. Generally, these buses connect at least one ‘master’ device to at least one ‘slave’ device, and allow the master to issue commands to and/or exchange data with the slave.
One bus that is commonly used for on-chip communication is the Advanced eXtensible Interface (AXI) bus, defined within the ARM® Advanced Microcontroller Bus Architecture specifications, e.g. the ‘AXI3’, ‘AXI4’, and ‘AXI5’ specifications. This packet-switched bus provides a high performance multi-master to multi-slave communication interface.
The AXI specification outlines five channels for transactions: the Read Address channel (AR); the Read Data channel (R); the Write Address channel (AW); the Write Data channel (W); and the Write Response channel (B).
Large multi-layer buses like AXI-based buses use transaction IDs (TIDs) to uniquely identify requests and responses. The AXI protocol specification explains how an interconnect should add bits to existing TIDs originating from bus masters to prevent collisions in TID values between multiple masters on the same bus, which would otherwise make it impossible for responses to be returned to the correct master. In essence, this means a transaction ID becomes wider as it traverses the interconnect layer by layer. For example, a transaction generated by a master may have an initial 4-bit TID that, after traversing several interconnect layers, may end up being a 16-bit TID, thus requiring many more registers to store and making the logic for managing the TIDs significantly larger.
When two interconnects are able to access each other's slaves, this scheme breaks down and requires a workaround, as neither interconnect is ‘below’ the other.
Additionally, the TID widths must be sufficiently wide to allow all master-to-slave paths to be uniquely identified, and to allow for any ‘looped’ interconnect layers to be resolved, i.e. the entire bus architecture must be fully defined. In other words, all masters, slaves and their interconnection patterns need to be specified to generate a solution for TID widths per master, per slave, per bus component and per interconnect. This solution may, in general, be determined using a suitable tool.
The Applicant has appreciated that it would be advantageous to avoid these issues.
When viewed from a first aspect, embodiments of the present invention provide an electronic device comprising a module configured to transfer data bus transactions from a transaction source domain to a transaction target domain, the module comprising:
The first aspect of the invention extends to a method of transferring data bus transactions from a transaction source domain to a transaction target domain, the method comprising:
The first aspect of the present invention further extends to a non-transitory computer-readable medium comprising instructions that, when executed by a processor, cause the processor to carry out a method of transferring data bus transactions from a transaction source domain to a transaction target domain, the method comprising:
The first aspect of the present invention also extends to a computer software product comprising instructions that, when executed by a processor, cause the processor to carry out a method of transferring data bus transactions from a transaction source domain to a transaction target domain, the method comprising:
Thus it will be appreciated that embodiments of the present invention provide a mechanism for bridging transactions between the different transaction domains, which may then each have their own ID ranges and bus widths. Embodiments of the present invention provide for both mapping of the transaction ID between the domains as well as providing flow control.
Those skilled in the art will appreciate that the present invention provides transaction ID mapping that translates the transaction ID from the transaction source domain—or ‘domain A’—ID (IDa) to the transaction target domain—or ‘domain B’—ID (IDb) for the address channel. When a new transaction is initiated from the source domain A, a lookup is performed in the look-up table (Mid), i.e. an internal bookkeeping table, to see if a transaction with the same IDa is already active. If so (i.e. if a transaction with the same IDa is active, as indicated by IDa being present in Mid), the mapped IDb will be the index in Mid where IDa is located. Conversely, if a transaction with the same IDa is not active, IDb is set equal to a free index of Mid (i.e. one that does not store the ID of any active transaction). Generally, the first free index of Mid may be used, though this is not a strict requirement and any free (i.e. ‘available’) index could be selected.
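A minimal behavioural sketch in C may help to illustrate this forward mapping. The table size of 16 entries, the use of the per-index counter (introduced further below) as the indicator of whether an index is in use, and all of the identifiers (lut_entry_t, map_forward, and so on) are illustrative assumptions rather than a definitive implementation:

```c
/* Behavioural sketch of the forward mapping (IDa -> IDb); illustrative only. */
#include <stdint.h>

#define LUT_ENTRIES 16                /* number of index values = max. concurrency */

typedef struct {
    uint32_t ida;                     /* source-domain transaction ID (IDa)        */
    uint32_t counter;                 /* outstanding transactions using this IDa   */
} lut_entry_t;

static lut_entry_t lut[LUT_ENTRIES];  /* the look-up table, Mid                    */

/* Returns the index to be used as IDb, or -1 if no index is free and the
 * transaction must be held back (flow control).                                   */
static int map_forward(uint32_t ida)
{
    int free_idx = -1;

    for (int i = 0; i < LUT_ENTRIES; i++) {
        if (lut[i].counter > 0 && lut[i].ida == ida) {
            lut[i].counter++;         /* IDa already active: reuse its index       */
            return i;
        }
        if (lut[i].counter == 0 && free_idx < 0)
            free_idx = i;             /* remember the first free index             */
    }

    if (free_idx < 0)
        return -1;                    /* table full: hold the transaction          */

    lut[free_idx].ida = ida;          /* allocate the free index to this IDa       */
    lut[free_idx].counter = 1;
    return free_idx;                  /* this index value is used as IDb           */
}
```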
In other words, the bus is effectively partitioned into different domains (i.e. sections), where each domain may have its own transaction ID widths as required, and where the remapping module is placed between the partitions to remap transaction IDs as necessary. This remapping maps numeric values to other numeric values, but with the capability of mapping wide IDs to narrow IDs (and vice versa).
This solution solves the problem with looped interconnects and also removes the need to fully define the entire bus interconnect before being able to compute the required number of transaction ID bits that are needed to connect all components correctly. The resulting bus interconnect may use an absolute minimum number of transaction ID bits, thereby reducing the amount of hardware needed.
This is achieved by way of a trade-off, in that the number of concurrent transaction IDs that can be used is limited by the size of the look-up table, i.e. the number of index values that are available for assigning transaction IDs from the source domain. Those skilled in the art may readily evaluate the look-up table size vs concurrency trade-off and determine a suitable look-up table size to allow an acceptable degree of concurrent accesses from one or more masters in the source domain to one or more slaves in the target domain. The limits of the number of unique transaction IDs and the number of active transactions with an identical transaction ID are both parameterisable.
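Purely by way of an illustrative worked figure for this trade-off (a 16-entry table is an assumed example and is not limiting), the target-domain ID width is fixed by the table size rather than by the bus topology:

```latex
% Illustrative only: N is the (parameterisable) number of index values in
% the look-up table, and W_{IDb} is the target-domain transaction ID width.
W_{IDb} = \lceil \log_2 N \rceil
% e.g. N = 16 gives W_{IDb} = 4 bits, regardless of the source-domain ID
% width and of how many interconnect layers precede the module, while at
% most N distinct source-domain IDs can be outstanding at any one time.
```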
It will be appreciated that mappings may also be carried out in the reverse direction, i.e. from IDb to IDa—e.g. for a response channel (i.e. transactions flowing from B to A) which may be either a transfer on the read data channel or the write response channel. When mapping from the second transaction domain B to the first transaction domain A, it is assumed that all active transactions (i.e. all transactions that can give a response on the response channel), are registered in the look-up table. When a transfer for a transaction arrives on the return channel, IDa is the ID stored on index IDb in the look-up table Mid. Thus, in some embodiments, the mapping logic is further configured such that when a transaction is received via the second interface, the mapping logic determines the transaction source ID stored in the look-up table against the index value equal to the transaction target ID, and sends the transaction to the transaction source domain using the determined transaction source ID.
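Continuing the illustrative C sketch given above (and reusing the lut array defined there), the reverse mapping on the response channel is then a direct table read:

```c
/* Reverse mapping for the response channel (IDb -> IDa); continues the
 * sketch above. The received IDb indexes the table directly, and the IDa
 * stored at that index is used when the response is sent to domain A.    */
static uint32_t map_reverse(uint32_t idb)
{
    return lut[idb].ida;
}
```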
It will be appreciated that the principles of the present invention may apply to a number of different data buses, known in the art per se. However, in some embodiments, the data bus is an AXI bus or an AXI Lite bus.
The module outlined hereinabove may handle read transactions and/or write transactions, as appropriate. Where both read and write transactions are to be handled, separate modules may be provided for each of these types of transactions.
In other words, the module outlined hereinabove may be arranged to handle write transactions or read transactions. In a particular set of embodiments, the data bus transactions transferred by the first module comprise data bus write transactions or data bus read transactions. In some such embodiments where the first module handles data bus write transactions, the device further comprises a second module configured to transfer data bus read transactions from the transaction source domain to the transaction target domain, the second module comprising:
In some embodiments, the look-up table further comprises a counter value for each index value, wherein:
In some embodiments, entries in the look-up table may be cleared when the associated counter value reaches zero, i.e. when there are no outstanding transactions having that transaction ID. However, this is not necessary, as those entries having a counter value of zero may instead simply be treated as being empty, without the additional step of actually clearing them from the look-up table.
It should also be noted that while in some embodiments a counter value of zero indicates that the associated index in the look-up table is free, alternative approaches may be taken. For example, the look-up table may comprise a dedicated ‘active’ (or ‘in use’) field—a flag or binary value which can be enabled or disabled—which can be set and inspected. This field could be set high whenever the counter is non-zero, to indicate that the index is in use. Alternatively, the counter value could be left at one when the active field is set low, such that the rest of the row in the look-up table is not updated at all and only the active field is toggled to mark that index of the look-up table as free or in use.
When a transaction having a particular transaction ID is started and completed in the same cycle, the counter may be incremented and decremented in the same cycle. If both the forward and reverse channels carry the same transaction ID, which has been issued multiple times, the counter value will be non-zero; during that cycle it is incremented (due to the forward channel) and decremented (due to the reverse channel), so at the end of the cycle the counter value is the same as it was at the start of that cycle.
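This counter behaviour can be sketched as a single per-cycle update, again continuing the illustrative C sketch above (the started and completed arguments are assumptions of this sketch, indicating whether a transaction using this index is issued and/or resolved in the current cycle):

```c
/* Per-cycle counter update for one index of the table (illustrative). If a
 * transaction using this index both starts and completes in the same cycle,
 * the increment and decrement cancel and the counter is unchanged at the
 * end of the cycle.                                                          */
static void update_counter(int idx, int started, int completed)
{
    if (started)
        lut[idx].counter++;
    if (completed)
        lut[idx].counter--;
}
```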
The optional features outlined hereinabove in respect of various embodiments of the present invention may be combined in any suitable combination and/or permutation as appropriate. It will be appreciated that the optional features described above in respect of a particular module that handles the transfers from one transaction domain to another also apply to a second such module, where provided (i.e. in embodiments where separate modules are provided for read and write transactions). The two modules may be functionally identical or may be different from one another, as appropriate.
Certain embodiments of the invention will now be described, by way of non-limiting example only, with reference to the accompanying drawings in which:
The module 102 includes a mapping logic 108 and a look-up table (LUT) 110, which are outlined in further detail below. It will be appreciated that while
The module is configured to receive an AXI transaction request Ta from domain A 104, which in this example is the AXI transaction source domain. This transaction request Ta is received via a first interface 112 of the module 102. The transaction request Ta will, in general, have a transaction ID, denoted ‘IDa’, where IDa has an associated transaction ID width dependent on the width specified by domain A 104.
The mapping logic 108 uses the LUT 110 to translate transaction IDs between the two domains, i.e. from a transaction IDa in domain A 104 to a transaction IDb in domain B 106, and vice versa. The structure of this LUT 110 can be seen in
The LUT 110 has a number of columns: an index, the source domain transaction IDa, and a counter value, which are outlined in turn below.
The ‘Index’ column provides an identifier for each row in the LUT 110. In this particular example, the index values range from 0 to 15, i.e. they can be represented as a four-bit binary number ranging from 0b0000 (decimal ‘0’) to 0b1111 (decimal ‘15’). As outlined in further detail below, these correspond to the transaction IDb supplied with the transaction request to the second transaction domain B 106, which in this example is configured to use a four-bit transaction ID width. The number of index values available limits the concurrency of the system, i.e. how many unique transactions can be handled simultaneously. Those skilled in the art may make a suitable determination as to how much concurrency is required, and size the LUT 110 accordingly.
The ‘IDa’ column stores a source transaction domain IDa against one of these index values. In this case, the source domain A 104 may assign transaction ID values between 0 and 31, i.e. they can be represented as a five-bit binary number ranging from 0b00000 (decimal ‘0’) to 0b11111 (decimal ‘31’). It will of course be appreciated that different transaction ID widths for each of the domains may be used, and either domain A 104 or domain B 106 may have a wider transaction ID width than the other (or they may be the same, albeit with a reduced benefit from the transaction ID width conversion afforded by the present invention).
The ‘Counter’ column stores the total number of pending transactions having a given IDa. A value of ‘0’ in this column indicates that there are no active transactions having that IDa and thus that row (i.e. that index) may be overwritten, even though the entry has not actually been cleared from memory. It will be appreciated that certain logic may, alternatively, act to remove entries from the LUT 110 when their associated counter value reaches zero.
It should also be noted that using a counter value of ‘0’ to indicate a ‘free’ row is a design choice, and alternative approaches may be taken. For example, rather than the counter value being inspected, the LUT 110 could instead include a dedicated ‘active’ or ‘in use’ field which can be set and inspected (in this case, the field would be set high whenever the counter is non-zero). Alternatively, the counter value could be left at one when the active field is set low, such that the rest of the row in the LUT 110 is not updated at all and only the active field is toggled to mark that index of the LUT 110 as free or in use.
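As an illustrative sketch of this alternative (the field names are assumptions of the sketch), the table row simply carries an explicit flag that is inspected instead of the counter value:

```c
/* Variant of the table row with an explicit 'active'/'in use' flag
 * (illustrative). The flag, rather than the counter value, is inspected to
 * decide whether the row is free, so the counter itself need not be
 * cleared or decremented to zero when the row is released.                  */
typedef struct {
    uint32_t ida;      /* source-domain transaction ID (IDa)                 */
    uint32_t counter;  /* outstanding transactions; may be left non-zero     */
    uint8_t  active;   /* 1 = row in use, 0 = row free                       */
} lut_entry_flag_t;
```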
It can be seen in the ‘snapshot’ of
As illustrated in
By way of an example with reference to the LUT 110 described above: if a transaction request Ta is received having an IDa that is currently active, i.e. for which there is a match in the LUT 110, the mapping logic 108 uses the index value against which that IDa is stored as IDb and increments the counter value for that index (counter++). The AXI transaction is then passed to domain B 106 as Tb using that IDb.
If, on the other hand, a transaction request Ta is received having an IDa that isn't currently active, i.e. for which there is no match in the LUT 110, the mapping logic 108 selects an available index and stores the IDa against it, setting the counter value for that index to one (counter=1). That index value is then used as IDb when the AXI transaction is passed to domain B 106 as Tb.
Referring again to
When a transaction response Trb is received from domain B 106, the mapping logic 108 carries out a reverse translation by checking the index value equal to the IDb associated with the response Trb in the LUT 110, and the transaction response is passed to domain A as Tra using the IDa stored against that index value (i.e. the index value equal to IDb).
When a transaction is resolved, i.e. when it completes, the LUT 110 is updated to decrement the counter value for the associated index (counter--). For example, if the transaction having IDa=5 completes before any new transaction request having that same IDa value is received, the counter value for index value 4 is decremented from ‘1’ to ‘0’, thus rendering index value 4 available for use.
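A short worked example driving the illustrative C sketch built up above is given below; the particular ID values are arbitrary, and the index actually allocated from an initially empty table need not match the snapshot described earlier:

```c
/* Worked example using the sketch above; values are arbitrary. */
#include <stdio.h>

int main(void)
{
    int idb1 = map_forward(5);   /* first request with IDa = 5: a free index is
                                    allocated and its counter becomes 1          */
    int idb2 = map_forward(5);   /* second request with the same IDa: the same
                                    index is reused and the counter becomes 2    */
    printf("IDa 5 mapped to IDb %d and %d\n", idb1, idb2);

    /* a response arrives from domain B carrying IDb = idb1 */
    printf("response IDb %d maps back to IDa %u\n", idb1, map_reverse(idb1));
    update_counter(idb1, 0, 1);  /* first transaction resolved: counter 2 -> 1   */
    update_counter(idb1, 0, 1);  /* second transaction resolved: counter 1 -> 0,
                                    so the index becomes available again         */
    return 0;
}
```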
It will be appreciated that the transaction requests Ta, Tb and responses Tra, Trb outlined above may correspond to read transactions, e.g. where a master device in domain A 104 wishes to read something from the memory of a slave device in domain B 106. In such a scenario, the transaction request Ta, Tb may be for a read operation of a particular memory location using the ‘Read Address’ (AR) channel of the AXI bus; and the response Tra, Trb may be the associated data from that memory location on the ‘Read’ (R) channel of the AXI bus.
Alternatively, the transaction requests Ta, Tb and responses Tra, Trb outlined above may correspond to write transactions, e.g. where a master device in domain A 104 wishes to write something to the memory of a slave device in domain B 106. In such a scenario, the transaction request Ta, Tb may be for a write operation of a particular memory location using the ‘Write Address’ (AW) channel of the AXI bus. Those skilled in the art will appreciate that the actual data to be written is generally supplied separately over the ‘Write Data’ (W) channel of the AXI bus. The response Tra, Trb may be an acknowledgement of the write operation on the ‘Write Response’ (B) channel of the AXI bus. It will be appreciated that the forward and reverse channels may differ, depending on the particular bus protocol in use, e.g. if using the AXI3 protocol, the W channel also carries the transaction IDs.
For ease of illustration, the table update logic 116, which handles updates to the LUT 110; the table status logic 118, which checks the entries in the LUT 110; and the transaction hold logic 120, which handles transaction holds, are shown separately from the mapping logic 108. Each of these functions may be carried out by separate logic modules, or one or more of these functions may be carried out within a combined module. For example, all of these functions may be carried out within the mapping logic 108.
Thus it will be appreciated that embodiments of the present invention provide an improved arrangement in which a ‘mapper’ module translates transaction IDs between different domains, allowing for different transaction ID widths to be used between these domains. This arrangement may avoid the need to ‘grow’ a transaction ID at each interconnect level to keep track of where they come from, instead allocating one of a set of internal references (the index value) to a particular source transaction ID, and using that for all corresponding transactions having that same ID until it is no longer active. This arrangement may allow for complex loops to be handled more easily, and on an ad hoc basis, i.e. without needing to map out all master-slave paths in advance, by trading off against the maximum transaction concurrency of the system.
While specific embodiments of the present invention have been described in detail, it will be appreciated by those skilled in the art that the embodiments described in detail are not limiting on the scope of the claimed invention.