The present disclosure relates to methods and controllers, devices and circuitry for controlling memory operations and in particular to controlling reading and writing operations.
The present application claims the Paris Convention priority from United Kingdom Patent application number 2100653.1, the contents of which are hereby incorporated by reference.
New radio access technologies, such as 3GPP 5G New Radio “NR”, bring dramatic increases in throughput, such as multi-gigabit over-the-air rates. Designing hardware that is able to handle such rates can be challenging, for example for baseband System-on-Chip (SoC) designers.
In particular, recent technological developments are associated with an increased amount of storage required (e.g. for Hybrid-ARQ (HARQ) or retransmissions mechanisms to function) and increased associated transfer rates to memory, for example off-chip Double Data Rate (DDR) memory.
While these challenges are presently particularly relevant to 5G, these challenges are expected to be even more relevant to future technologies.
Accordingly, it is desirable to provide arrangements which can improve the operation of memory, in particular the management of writing and reading operations in memory.
According to a first aspect of the present disclosure, there is provided a method of controlling memory operations, the method comprising: identifying a plurality of data sets to store in memory, each data set comprising two or more bits ordered from a most significant bit to a least significant bit; storing in memory a first plurality of bits selected from the bits for the plurality of data sets, wherein the first plurality of bits is selected by selecting one or more successive first bits of each data set, including the most significant bit of each data set, wherein the stored one or more successive first bits of each data set define a stored portion of each data set; and storing in memory a further plurality of bits selected from the bits for the plurality of data sets, wherein the further plurality of bits is selected by selecting one or more successive further bits of each data set, outside the stored portion and including the most significant bit outside the stored portion of each data set. Accordingly, writing operations can be controlled in a manner which is expected, amongst other things, to assist with controlling and/or minimising the impact of congestion affecting memory operations.
According to a second aspect of the present disclosure, there is provided a method of controlling memory operations, the method comprising: identifying a plurality of data sets to read from memory, each data set comprising two or more bits ordered from a most significant bit to a least significant bit; reading from memory a first plurality of bits selected from the bits for the plurality of data sets, wherein the first plurality of bits is selected by selecting one or more successive first bits of each data set, including the most significant bit of each data set, wherein the read one or more successive first bits of each data set define a read portion of each data set; and reading from memory a further plurality of bits selected from the bits for the plurality of data sets, wherein the further plurality of bits is selected by selecting one or more successive further bits of each data set, outside the read portion and including the most significant bit outside the read portion of each data set. Accordingly, reading operations can be controlled in a manner which is expected, amongst other things, to assist with controlling and/or minimising the impact of congestion affecting memory operations.
According to a third aspect of the present disclosure, there is provided a controller for controlling memory operations, the controller being configured to: identify a plurality of data sets to store in memory, each data set comprising two or more bits ordered from a most significant bit to a least significant bit; store in memory a first plurality of bits selected from the bits for the plurality of data sets, wherein the first plurality of bits is selected by selecting one or more successive first bits of each data set, including the most significant bit of each data set, wherein the stored one or more successive first bits of each data set define a stored portion of each data set; and store in memory a further plurality of bits selected from the bits for the plurality of data sets, wherein the further plurality of bits is selected by selecting one or more successive further bits of each data set, outside the stored portion and including the most significant bit outside the stored portion of each data set.
According to a fourth aspect of the present disclosure, there is provided a controller for controlling memory operations, the controller being configured to: identify a plurality of data sets to read from memory, each data set comprising two or more bits ordered from a most significant bit to a least significant bit; read from memory a first plurality of bits selected from the bits for the plurality of data sets, wherein the first plurality of bits is selected by selecting one or more successive first bits of each data set, including the most significant bit of each data set, wherein the read one or more successive first bits of each data set define a read portion of each data set; and read from memory a further plurality of bits selected from the bits for the plurality of data sets, wherein the further plurality of bits is selected by selecting one or more successive further bits of each data set, outside the read portion and including the most significant bit outside the read portion of each data set.
According to a fifth aspect of the present disclosure, there is provided a controller system comprising a writing controller in accordance with the third aspect above and a reading controller in accordance with the fourth aspect above.
Accordingly, there have been provided methods and controllers for controlling memory operations wherein a plurality of data sets are stored in memory or read from memory by storing or reading the most significant bits of each data set first, and subsequently storing or reading the next most significant bits of each data set.
A more complete appreciation of the disclosure will be obtained by reference to the following example description when considered in connection with the accompanying drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views.
The present disclosure includes example arrangements falling within the scope of the claims and may also include example arrangements which may not necessarily fall within the scope of the claims but which are useful to understand the teachings and techniques provided herein.
Most radio access technology communication systems are expected to include two features for the wireless nodes (e.g. mobile terminal, base station, remote radio head, relay, etc.) to manage errors in transmissions: (1) an error correction mechanism and (2) a retransmission mechanism.
As the skilled person will appreciate, error correction mechanisms generally involve transmitting, alongside the bits to be communicated (which will be referred to herein as “information bits”), parity bits. The parity bits can include Cyclic Redundancy Check “CRC” bits that can be used to determine if the received transmission contained any error. The parity bits can also include Forward Error Correction “FEC” bits which can be used to recover the originally transmitted bits (e.g. the originally transmitted information bits) even when the received bits contained errors.
In cases where there were errors in the received bits (and, if FEC is available, where the errors could not be corrected), the receiver can report that the transmission was unsuccessful and retransmission mechanisms can then be used to achieve a successful transmission. In some cases, the retransmission will involve re-sending the same transmission and in other cases, the retransmission may include sending a different transmission which may be entirely different from the first transmission or which may at least partially overlap with the first transmission.
In this example, the coded bits that may be transmitted by the sender comprise 8 information bits (e.g. the actual data to be transmitted) and 16 parity bits. It will be appreciated that these values are illustrative only and that the amount of information and/or parity bits may vary greatly, as deemed appropriate based on the transmission parameters, communication standards or any other relevant factor. For example, in 5G NR it is expected that a coded block may include 25344 coded bits. The skilled person will appreciate that the teachings provided in the present disclosure apply equally to such and other cases.
In present mobile telecommunication networks and for example in 5G NR networks, a configuration where retransmission attempts may send a different selection of coded bits compared to a previous transmission is sometimes called “incremental redundancy”. Using this terminology, the first transmission is sometimes identified as “RV0” (Redundancy Version 0), the first retransmission as “RV1”, the second retransmission as “RV2” and so on. In 5G NR, a HARQ cycle with incremental redundancy can extend to up to four transmissions (e.g. RV0 to RV3) before the redundancy check (CRC test) either passes or fails. While this terminology is widely used in the present disclosure, the skilled person will appreciate that the present principles are not limited to applications in 5G NR or generally to 3GPP communications but are instead applicable to other situations.
Returning to the example of
The examples of the present disclosure have been generally provided so as to correspond to the example of
After the first transmission, the buffer for the coded bits will include what was received from the transmitted bits. In this example, this transmission will correspond to the eight (8) information bits and the four (4) parity bits transmitted. After this transmission, the buffer or memory for receiving the transmission will have some but not all of the coded bits.
Assuming an unsuccessful first transmission, the second transmission will send the 12 parity bits not yet sent. The receiver will therefore have received all coded bits either through the first or second transmission. Accordingly, the buffer or memory for the transmission will have information for each of the coded bits.
Assuming that the device was still not able to decode the coded bits even after the second transmission, a third transmission will be sent as illustrated in
The same example is illustrated in
As the skilled person will know, when the same coded bit is received more than once, the receiver can use the various received versions of the coded bit in different ways. In one example, the receiver can use the last one, can use the one deemed the “better” or “stronger” one, or can use a soft addition of what was received. Effectively, the coded bit is either 0 or 1 but it is transmitted through a physical (analogue) signal such that the receiver may associate the received transmission with a score indicating whether the coded bit is closer to 0 or 1. Any interference or other factor that may have deteriorated the transmission of a coded bit is generally not expected to have affected two transmissions of the same coded bit in a similar manner. Accordingly, by combining the scores for the coded bit from two or more transmissions, the reliability of the score is expected to increase compared to the score for any single transmission of the same coded bit. In other words, the probability of decoding is expected to increase as a result of the soft combination of two or more transmissions.
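The soft combination described above can be sketched as follows. This is an illustrative Python sketch only; the function names and the clipping behaviour are assumptions, not part of any standardised implementation.

```python
def soft_combine(scores):
    """Combine per-transmission soft scores for one coded bit.

    Each score lies in [-1, 1]: a score near -1 suggests the coded bit
    is 0, a score near +1 suggests it is 1.  Summing the scores from
    several transmissions (then clipping back to the representable
    range) averages out interference that affected only one of them.
    """
    total = sum(scores)
    return max(-1.0, min(1.0, total))

def hard_decision(score):
    # A positive combined score decodes to bit 1, a negative one to bit 0.
    return 1 if score > 0 else 0
```

For example, three noisy observations of the same coded bit such as `[0.3, -0.1, 0.6]` combine to a score close to 0.8, giving a more confident decision than any single observation.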
The present disclosure focusses on memory operations, which in the example of
The demodulator outputs log likelihood ratios (LLRs) which can be used by the LDPC decoder. From one perspective, an LLR can be seen as a score associated with a coded bit and which represents the likelihood of the coded bit being 0 or 1. The LLRs in mobile networks tend to be 8 bits long, although other lengths can equally be used. The score is conventionally represented on a scale from −1 to 1, where a score of −1 indicates a coded bit value 0 and a score of 1 indicates a coded bit value 1.
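The mapping between an N-bit LLR sample and a score on the −1 to 1 scale can be illustrated as below. This Python sketch assumes a two's-complement representation of the sample, which is one common choice but not mandated by the above; the function names are hypothetical.

```python
def llr_to_score(sample, bits=8):
    """Map an N-bit two's-complement LLR sample to a score in [-1, 1).

    The most significant bit carries the sign (the 0-vs-1 decision) and
    each following bit halves the remaining uncertainty, which is why
    the techniques of this disclosure prioritise the most significant
    bits of each sample.
    """
    half = 1 << (bits - 1)
    if sample >= half:              # interpret the top bit as a sign bit
        sample -= 1 << bits
    return sample / half

def score_to_llr(score, bits=8):
    """Inverse mapping (with saturation) back to an N-bit sample."""
    half = 1 << (bits - 1)
    value = int(round(score * half))
    value = max(-half, min(half - 1, value))
    return value & ((1 << bits) - 1)
```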
As illustrated in
As illustrated in
As illustrated in
If each set of LLRs is written at each transmission (and thus at each retransmission), then the LLRs for both RV0 and RV1 are read. Additionally, the LLRs for RV2 are written in memory. Accordingly, the transfers are expected to correspond to three times the amount of LLRs or more: two sets of LLRs are read and at least one set of LLRs (the LLRs for RV2 and/or combined LLRs) is written. In a case where only combined LLRs are read, then the transfers are expected to correspond to twice the amount of LLRs or more.
As illustrated in
It is noteworthy that in cases where the LLRs for all transmissions are combined, it may be beneficial to only store combined LLRs, thereby reducing the amount of data to transfer at each retransmission. However, it is also conceivable that in some cases not all LLRs will be combined together such that each set of LLRs will be stored separately. While the memory resources and transfers required would be increased, such an arrangement may also result in an improved decoding rate. For example, the decoder can rate or score the quality or reliability of LLRs (e.g. by looking at whether one of the (re)transmissions was corrupted by an unscheduled transmission from another terminal, at whether one of the (re)transmissions was interrupted by a low latency transmission and/or at a level of interference associated with the (re)transmissions, etc.) to assess how useful the LLRs are expected to be and/or can attempt to decode the coded bits by using different combinations of transmissions (e.g. RV0+RV2+RV3, RV0+RV1, etc.), thereby increasing the decoding attempts. In other words, a HARQ arrangement may involve storing the combined LLRs and/or each set of LLRs for one or more previous transmissions, and the skilled person can determine which option is best suited to a particular system or environment, depending for example on processing power, memory capability and/or device cost.
Under normal conditions, over the air throughput is expected to be maximised when the number of HARQ retransmissions is kept below 20% of all transmissions, namely where 80% of transmissions do not go beyond RV0. Under adverse conditions, such as burst interference, HARQ re-transmissions may extend to RV3. In this scenario transfers to and/or from the HARQ buffer may increase three-fold compared to the data amounts transferred for RV0.
Additionally, an increased data rate on the air interface, as for example provided by newer radio technologies such as 5G NR, will in turn create an increased amount of data to be transferred to or from memory.
It should also be noted that, without any additional memory optimisation, the HARQ LLR bit rate for an N-bit LLR is expected to be N times that of the over-the-air throughput. For example, for a 5G NR system capable of a 5 Gb/s throughput (measured in terms of the number of coded bits transmitted over the air) and for an 8-bit LLR (with no compression or optimisation of the LLRs), such a system may generate 40 Gb/s of HARQ LLR samples, thus yielding a peak of 120 Gb/s of HARQ LLR buffer throughput in the examples discussed above. Accommodating such a high data rate can require significant and costly memory to be used. Additionally or alternatively, this can result in an over-provisioning of memory resources in order to accommodate HARQ LLR buffer bandwidth requirements, where these peak requirements are only seldom used or needed.
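The bandwidth figures above follow from simple arithmetic, which can be sketched as follows. This is an illustrative Python sketch; the factor of three reflects the examples discussed above in which two stored sets of LLRs are read and one set is written per retransmission.

```python
def harq_buffer_bandwidth(air_rate_gbps, llr_bits=8,
                          reads_per_retx=2, writes_per_retx=1):
    """Estimate peak HARQ LLR buffer bandwidth from the air rate.

    With an N-bit LLR per coded bit, the LLR stream is N times the
    over-the-air coded-bit rate; reading two stored LLR sets and
    writing one set per retransmission then triples that rate.
    """
    llr_rate_gbps = air_rate_gbps * llr_bits
    return llr_rate_gbps * (reads_per_retx + writes_per_retx)

# 5 Gb/s over the air with 8-bit LLRs -> 40 Gb/s of LLR samples,
# and a 120 Gb/s peak at the HARQ LLR buffer.
```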
Access to DDR may be arbitrated by NoC and DDR controller functions. However, in a case where multiple functions request access simultaneously, the peak load may significantly exceed that supported by the DDR sub-system. In such a case, the requesting functions can be delayed, waiting for DDR transactions to complete, slowing down processing possibly to the point where the data-path is not able to complete critical processing operations in time. Accordingly, ensuring that the HARQ system can use memory functions in time is an important factor in designing a HARQ system. Additionally, HARQ LLR storage represents a significant part of the overall DDR memory bandwidth budget in a system such as the one illustrated in
One way to reduce the memory requirements when handling LLRs is to reduce the amount of data to be stored. With this in mind, some systems use a log-linear compression system in order to reduce the size of the stored LLR samples.
While such an arrangement can help reduce the amount of data to be stored and read to 75% of the original data amounts, with the observed or expected increase in the available data rates over the air, further improvements would be desirable, which could help reduce the reliance on adding more memory to such systems.
As the skilled person will appreciate, increasing the level of compression of a log-linear function is likely to result in a detrimental level of losses in the decompressed LLR samples. Namely, this is likely to have a greater impact on the ability to decode the coded bits using the LLRs, which is likely to reach undesirable levels. In addition, one option consists in adding more memory (e.g. more DDR) such that the read and write operations can be distributed across two or more memories, thereby reducing the likelihood of experiencing delays when there is a peak in memory operations. However, this option is associated with an increased device cost, which can be undesirable in low-cost devices.
Accordingly, it would be helpful to provide additional or alternative techniques for managing memory operations.
In the present disclosure, the terminology LLR(x,y) will refer to bit y of the LLR sample for a coded bit x, wherein y denotes the bit position within the LLR sample (y = 0 corresponding to the most significant bit). For example, in
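Under this terminology, with y = 0 denoting the most significant bit, extracting LLR(x,y) from a sample can be sketched as below (illustrative Python; the function name is hypothetical).

```python
def llr_bit(sample, y, bits=8):
    """Return LLR(x, y): bit y of an N-bit LLR sample, where y = 0 is
    the most significant bit and y = bits - 1 the least significant."""
    return (sample >> (bits - 1 - y)) & 1
```

For instance, for the 8-bit sample `0b10110000`, position y = 0 holds 1 and position y = 1 holds 0.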
A “write LLR bit re-ordering” function configured to select LLR sample bits for writing and in particular, to select an order in which to write the LLR sample bits. This function is configured using a parameter Q_Write which can be derived from a performance score or load control parameter for the memory.
The controller comprises the mirroring functions for reading management and further comprises the common function of:
According to this example, when latency and/or congestion is detected at the memory (e.g. through a delay between the read or write instructions and the read or write acknowledgement), the number of bits to be written in (or read from) memory can be reduced. Additionally, rather than merely reducing the number of bits to write in (or read from) memory, according to the techniques provided herein, the bits to write in (or read from) memory first are selected based on the most significant bits of each data set or word or LLR sample to be stored in memory.
In the example of
Accordingly, even if the writing (or reading) of a plurality of data sets or words or LLR samples is interrupted before it is completed, the most significant bits of each data set, word or LLR sample would have been written (or read) before the least significant bits for the data set, word or LLR sample. When using such techniques with data sets like LLR or other data sets that may have similar characteristics, the amount of data to be stored or accessed can be reduced while still storing or accessing the most important part of the data.
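The write ordering described above can be sketched as follows, using an illustrative Python generator (names hypothetical) that emits all the most significant bits across the samples before any less significant bit.

```python
def msb_first_write_order(samples, bits=8):
    """Yield (sample_index, bit_position, bit_value) in the order the
    bits would be written: the most significant bit of every sample
    first, then the next most significant bit of every sample, etc.

    If writing stops early, every sample still has its most
    significant bits stored.
    """
    for y in range(bits):                 # bit position, MSB first
        for x, sample in enumerate(samples):
            yield x, y, (sample >> (bits - 1 - y)) & 1
```

With two 2-bit samples `0b10` and `0b01`, the emitted order is both most significant bits first, then both least significant bits.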
It is worth noting that while such an arrangement may not bring an additional benefit in a case where data is more “random”, because an LLR is a score on a −1 to 1 scale, having the most significant bit already gives an indication of whether the score is positive or negative, even if no other bit is used. The next most significant bit will give an indication of whether the score is above or below 0.5 (or −0.5 if the score is negative), and so on. Accordingly, even if the quality of the score will be lower when only some of the bits are used, due to the nature of LLRs, using the most significant bits first still provides useful information. Accordingly, it is expected that having truncated or incomplete but still useful information to use will be beneficial, as it is likely to help avoid an overall decoding failure (that might otherwise happen due to an unmanaged congestion).
In examples where headers are used, shortened LLR samples can be marked in the header, for example in a Q(RV) value indicating how many bits were stored. Depending on how the memory operates and has been designed, this can help reduce the likelihood of subsequent reading operations attempting to access an invalid sample (e.g. a full LLR sample when only a partial LLR sample was stored). Accordingly, in some circumstances memory reading errors can be avoided or reduced in number.
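A header carrying a Q(RV) value could be sketched as below. This is a purely illustrative Python sketch; the header layout and the helper names are assumptions, not a description of any particular memory format.

```python
def make_header(q_per_rv):
    """Hypothetical header recording, for each redundancy version, how
    many most-significant bits (Q) of each LLR sample were actually
    stored, so that a later read does not attempt to fetch bits that
    were never written."""
    return {"Q": dict(q_per_rv)}

def readable_bits(header, rv, requested_bits):
    # Never read more bits than the header says were stored for this RV;
    # an RV with no entry has nothing stored at all.
    return min(requested_bits, header["Q"].get(rv, 0))
```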
Likewise,
Accordingly, for different transmissions or retransmissions, a different number of LLR bits may be stored each time (or the same number of bits may be stored, as appropriate).
Once a portion of each LLR sample comprising the most significant bits of the LLR sample has been stored, the system has effectively stored a partial LLR sample rather than the full original LLR sample. When or if the LLR sample is needed for use, the stored information corresponds to a portion of the original LLR sample rather than the full LLR sample. However, the decoder might in some cases need a full LLR sample to operate. In such situations, different methods may be used (separately or in combination) in order to complete the LLR information to reach a useable size. Techniques for “padding” a partial LLR sample, which can be used when partial LLR samples are employed for soft combining or other operations where full LLR samples are expected, are described below with reference to
The LLR sample may for example be completed by adding information to the portion of the LLR that has not been stored (the “empty portion”). This can be done by adding bits to the empty portion, such as filling the empty portion based on one or more of: all empty bits set to zero, all empty bits set to one, bits randomly set to zero or one or bits set according to a pattern, the first (most significant) bit of the empty portion set to one and all others set to zero (which can also be referred to as “rounding up”), the first bit of the empty portion set to zero and all others set to one (which can also be referred to as “rounding down”), etc. Example patterns include a pattern of “0-1-0-1-0-1- . . . ”, a pattern of “1-0-1-0-1-0- . . . ” or any other pattern deemed suitable.
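The fill strategies listed above can be sketched as follows (illustrative Python; the strategy names are hypothetical labels for the options described).

```python
def pad_partial_llr(stored_bits, total_bits, strategy="round_up"):
    """Pad a partial LLR sample (a list of its stored most significant
    bits) out to total_bits, filling the empty portion according to
    one of the strategies described above."""
    missing = total_bits - len(stored_bits)
    if missing <= 0:
        return list(stored_bits)
    if strategy == "zeros":
        fill = [0] * missing
    elif strategy == "ones":
        fill = [1] * missing
    elif strategy == "round_up":      # first empty bit set to 1, rest 0
        fill = [1] + [0] * (missing - 1)
    elif strategy == "round_down":    # first empty bit set to 0, rest 1
        fill = [0] + [1] * (missing - 1)
    elif strategy == "alternating":   # a 0-1-0-1-... pattern
        fill = [i % 2 for i in range(missing)]
    else:
        raise ValueError(strategy)
    return list(stored_bits) + fill
```

For example, a 3-bit partial sample `[1, 0, 1]` padded to 6 bits by “rounding up” becomes `[1, 0, 1, 1, 0, 0]`.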
Returning to
It is noteworthy that such padding techniques may be used in combination with other techniques, for example with a log-linear compression and decompression technique. For example, consider a case where a 6-bit LLR sample is expected to be written once compressed from 8 bits to 6 bits using a log-linear compression, and to be read before it is decompressed to 8 bits using a log-linear decompression. In such a case, the padding may be used to pad a partial LLR sample to 6 bits. Looking at the example of
The arrangement of
In other cases, the decoder may be configured to operate using partial LLR samples, wherein this is taken into account as part of the decoding process such that the partial LLR samples do not require to be complete, e.g. to be of the same size as the original LLR sample.
It is pointed out that the header bits, if provided, may be stored in any appropriate way, for example separately from the LLR sample memory resources, at the end of the LLR memory resources, in the middle of the LLR memory resources, saved together or distributed amongst the resources, and so on. The headers for the different transmissions may also be stored separately from each other. For example, a header for RV0 may be stored in a location associated with the information stored for the LLR samples for transmission RV0, and a header for RV1 may be stored in a location associated with the information stored for the LLR samples for transmission RV1.
As the most significant bits are stored first for each LLR sample, the storing of the LLR samples (or of any other type of data) can be stopped before it is completed while still retaining useful information through the most significant bits of each LLR sample (or other type of data). In such cases, the techniques disclosed herein can provide a preferable way to store data while controlling the load on the memory and in particular how the limited reading and writing bandwidth of the memory is managed.
As the skilled person will appreciate, using this type of memory operation is particularly useful with some types of data, e.g. LLR samples, but may not be as helpful with other types of data. If for example the data to be stored is not expected to be such that the most significant bits are more helpful than less significant bits (e.g. if the data encodes a random number), then this way of storing the data may not be as suitable.
For example, once the most significant bit for each LLR sample has been stored in LLR(n,0), the second most significant bit for the LLR samples can be stored in LLR(n,1), assuming that the writing operations have not been interrupted, for example due to an increased latency in memory operations.
By arranging the information to be stored by most significant bit of each LLR sample (or other type of data word), the operation of the memory can be simplified when using the techniques discussed herein. In particular, the bits to be stored in memory can be stored in memory in an order which corresponds to or is similar to that of
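This arrangement amounts to storing the samples as bit planes, which can be sketched as below (illustrative Python; names hypothetical). Note that an interrupted write leaves a consistent most-significant prefix of every sample, and that any planes never stored are treated as zero-filled when rebuilding.

```python
def to_bit_planes(samples, bits=8):
    """Re-order samples into bit planes: plane 0 holds the most
    significant bit of every sample (LLR(n,0)), plane 1 the next
    (LLR(n,1)), and so on.  Writing plane by plane means an interrupted
    write still leaves the most significant bits of every sample in
    memory."""
    return [[(s >> (bits - 1 - y)) & 1 for s in samples]
            for y in range(bits)]

def from_bit_planes(planes, bits=8):
    """Rebuild sample values from however many planes were stored;
    missing (unstored) planes are implicitly padded with zeros."""
    samples = [0] * len(planes[0])
    for y, plane in enumerate(planes):
        for x, b in enumerate(plane):
            samples[x] |= b << (bits - 1 - y)
    return samples
```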
As can be seen in
Following the same example as above, the four most significant bits for the LLR samples corresponding to the coded bits of the second transmission (first retransmission) RV1 are then stored, as illustrated in
Then, as illustrated in
In accordance with the techniques provided herein, the most significant bits of the LLR samples (or other data sets to store) are prioritised. This results in a greater density of bits received once or more in the part of the memory relating to the most significant bits, e.g. LLR(n,0) in
Additionally, the same teachings and techniques can be applied in a mirror manner for reading bits stored in memory. For example, the most significant bits of each LLR sample will be read first. Accordingly, even in an event where the reading operations are interrupted before all LLR sample bits available have been read, the system is expected to have (i) at least some information for the LLRs for each coded bit and (ii) for each LLR sample for each coded bit sent at each transmission, at least the most significant bits.
Accordingly, even if the reading operations are interrupted before they could complete, the risk of the interruption resulting in a failure to complete the overall decoding operation is reduced. As with the benefits provided by the memory writing techniques discussed herein, the mirroring reading techniques reduce the risk of memory failure at least in part as a result of the prioritisation of the reading of most significant bits first and of successively attempting to read (or write) the most significant bits for each LLR sample before moving on to less significant bits (for each LLR sample).
It will also be appreciated that while the writing and/or reading operations might be interrupted (e.g. as a result of the DDR scheduling or prioritising memory operations from one or more of the HARQ operations, the channel estimator operations, other network operations or other memory operations), the writing and/or reading operations might also be configured based on a monitoring of the memory operations.
For example, in one arrangement, the writing and/or reading operations will be configured to write or read only a portion of the LLR samples in memory which will help reduce the amount of data transfers to and/or from memory. The size of the portion to be written or read can for example depend on a monitoring or status of the memory.
In this case, it is expected that the amount of data transfers to and/or from memory can be better controlled and it is expected that a controlled partial writing/reading operations management will result in fewer errors (compared to a case where all operations attempt to complete in the hope that they can complete before an interruption is experienced, e.g. caused by operations competing to access the memory). In addition, such an operation mode, where there are reduced write or read operations in a planned fashion, is expected to cause fewer errors for the other functions using the same memory.
Table 1 below illustrates an example compression policy table which may be used by a controller and which can reduce the number of bits to be written in—or read from—memory depending on a measured congestion level. Accordingly, the controller may be configured based on two or more levels and can reduce the amount of data to be written and/or read in memory based on an expected load of the memory. In the particular example of Table 1, there are eight different congestion levels but it will be appreciated that more or fewer congestion or load levels may be used.
This table would be well suited to an arrangement where it would be expected that 6 bits would be written or read in memory when the memory's load allows it (e.g. in a case where a 6-bit compressed LLR would normally be written or read). The skilled person will appreciate that the particular values of Table 1 may thus be adjusted based on any suitable parameter, such as the size of the data that is to be written or read, based on the number of load levels, based on the severity of the load level or congestion level reflected by the levels, based on the properties of the memory (e.g. how the memory behaves when the load increases and how operation errors can affect the memory), etc.
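A compression policy in the spirit of Table 1 can be sketched as follows. The actual values of Table 1 are not reproduced here; the formula below is purely illustrative and merely respects the trends described (a higher congestion level gives fewer bits, reads are compressed at least as much as writes at the same level, and later redundancy versions tolerate more compression).

```python
FULL_WORD = 6  # bits normally written/read, e.g. a 6-bit compressed LLR

def bits_to_transfer(level, operation, rv, full_word=FULL_WORD):
    """Illustrative policy lookup: given a congestion level (1..8), an
    operation ("read" or "write") and a redundancy version (0..3),
    return how many most-significant bits to transfer.  A real system
    would use a configured table rather than a formula."""
    drop = (level - 1) // 2                      # congestion pressure
    drop += 1 if operation == "read" else 0      # reads compressed more
    drop += rv // 2                              # later RVs compressed more
    return max(1, full_word - drop)              # always keep the MSB
```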
As will be appreciated, the amount of compression to be used can depend on one or more of: a state of the memory, a latency level associated with the memory, a transmission number, etc. For example, in Table 1, the components related to the memory itself are reflected by the congestion or load levels (Level 1 to Level 8) and the amount of compression is further indexed on a combination of the type of operation (read or write) and a transmission number (RV0 to RV3 in this example).
In this example, the compression level is usually the same or higher (i.e. there is less data written or read) for the reading operations than for the writing operations when the same congestion level is experienced. This is expected to yield better results as the amount of reading that can be done will be limited by the amount of writing that had been done previously. It will however be appreciated that in some cases the same level of compression can be configured for both reading and writing operations, or more compression can be configured for writing operations compared to reading operations. This can be decided based for example on the performance of a particular memory (e.g. reading and/or writing speed), on the use of a particular memory (e.g. the type of reading or writing operations from “competing” memory users), etc.
The amount of compression to be effected for reading and/or writing operations based on the level of congestion experienced can be configured using one or more of: a control processor for the controller, a configuration file, a remote element, a message received from a remote device or the device comprising the memory, etc. In some cases, the controller may be configured with different combinations of congestion levels and compression levels and may receive an instruction to use a particular combination and/or determine to use a particular one of these combinations.
According to the configuration, and for each memory read or write operation, the compression level (e.g. a desired word-length) associated with the current estimated congestion level is determined and can be used within the adaptive compression function.
In one example, a corresponding reporting table may be used which records a count for each entry (e.g. number of operations with this configuration of reading/writing, RV level and congestion level) so that the frequency of occurrence of each entry can be measured. In turn, this information can be used for example by the controller to tune the operation of the adaptive compression and/or to report to higher layers. In some examples, the level of compression can be adjusted and/or the granularity of each level can be adjusted once more information is available on how the system is used. For example, if the records show that many operations are carried out around a particular zone or zones in the table above and that operations outside this zone or zones are less frequent, the granularity of the congestion level and/or of the compression levels around this cluster can be increased to a finer granularity. In some cases, this can also be paired with a reduced granularity outside of the cluster for operations which are found to be less frequent. The tuning of the compression policy may alternatively or additionally be done jointly with tuning of other system functions and/or for different modes of operation. For example, in some systems or operation modes, a higher performance equaliser may be configured which will need greater access to memory. In this case, the compression level(s) may be reduced which is expected to result in better decoding performance. Depending on which other functions access the memory and whether and how the memory access of any such functions is controlled, different functions may be configured or prioritised so as to control the operation of the memory.
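A policy table of this kind, together with the reporting table counting how often each entry is used, can be sketched as follows. This is a purely illustrative sketch, assuming 6-bit LLR words, four redundancy versions (RV0 to RV3) and eight congestion levels; the particular word-lengths and the names (`POLICY`, `word_length`, `usage_counts`) are hypothetical, not taken from the disclosure.

```python
# Hypothetical sketch: a compression policy indexed by operation type,
# transmission number (RV) and congestion level, plus a reporting table
# counting how often each entry is used. Word-lengths are illustrative.
from collections import Counter

# Bits kept per LLR sample for (operation, rv, congestion_level).
# Level 1 = lightly loaded (full 6-bit word), Level 8 = heavily congested.
POLICY = {
    ("write", rv, level): max(6 - level + 1, 1)
    for rv in range(4) for level in range(1, 9)
}
# Reads are compressed at least as much as the corresponding write, since
# reading is bounded by what was previously written.
POLICY.update({
    ("read", rv, level): min(POLICY[("write", rv, level)], max(6 - level, 1))
    for rv in range(4) for level in range(1, 9)
})

usage_counts = Counter()  # reporting table: frequency of each entry


def word_length(op: str, rv: int, level: int) -> int:
    """Look up the configured word-length and record the access."""
    usage_counts[(op, rv, level)] += 1
    return POLICY[(op, rv, level)]
```

The recorded counts could then be inspected to refine the granularity of the levels around frequently used entries, as described above.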
Optionally, the method may return to step S1503, for example either until the entire data sets have been written (e.g. the write operation has completed) or until a predetermined number of bits have been written for each data set—determined for example based on Table 1 above.
Owing to this arrangement and to this writing of data which does not follow the actual organisation of the data sets, the writing can be carried out in a manner which reduces the risk of errors if the writing is interrupted before completion. This is particularly useful with data where the most significant bit of each data set is of greater importance to the meaning and use of the data compared to a less significant bit in the same data set.
Likewise, the same teachings apply equally to the reading operations, as illustrated in
Optionally, the method can return to step S1603 and select further bits from the portion that has not been read yet. In some cases, the method will return to step S1603 until all of the written bits have been read (e.g. full LLR words or samples, or partial ones if the writing operation was previously truncated, in a telecommunications system) or until a stopping condition is met, for example in case a desired number of bits have been read from memory (for example derived from Table 1 above or any other suitable configuration or determination).
Owing to this arrangement and to this reading of data which, again, does not follow the actual organisation of the data sets, the reading can be carried out in a manner which reduces the risk of errors if the reading is interrupted before completion. This is particularly useful with data where the most significant bit of each data set is of greater importance to the meaning and use of the data compared to a less significant bit in the same data set.
In this example a filter (which may for example be configurable) is included to smooth latency measurements with a view to avoiding triggering compression too early—in some cases, the filter may not be included and the latency data may be provided to the controller (which may or may not apply any data processing to this data before using it to control the read or write operations, e.g. to apply filter-like processing or any other processing).
An example natural language code for the Timer can for example be:
An example natural language code for the Filter can for example be:
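A minimal sketch of how such a latency filter might smooth measurements is given below. This is an illustrative assumption only: an exponential moving average with a hypothetical configurable coefficient, not the arrangement's actual filter, which may take any suitable form.

```python
# Illustrative sketch only: a latency filter implemented as an exponential
# moving average (EMA). The coefficient is an assumed, configurable
# smoothing parameter; the real arrangement may use any suitable filter.
class LatencyFilter:
    def __init__(self, alpha: float = 0.125):
        self.alpha = alpha       # smoothing coefficient (configurable)
        self.smoothed = None     # filtered latency estimate

    def update(self, sample: float) -> float:
        """Fold a raw latency measurement into the smoothed estimate."""
        if self.smoothed is None:
            self.smoothed = sample  # first sample seeds the filter
        else:
            self.smoothed += self.alpha * (sample - self.smoothed)
        return self.smoothed


# A smoothed estimate reacts slowly to a single spike, which helps avoid
# triggering extra compression too early on a transient latency peak.
f = LatencyFilter()
for latency in (10.0, 10.0, 40.0):  # one transient spike at the end
    estimate = f.update(latency)
```

As discussed above, the filter may also be omitted, with the raw latency data provided directly to the controller.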
Accordingly, and using the example of DDR memory, when enough samples have been read to form a DDR write burst (typically 512-bits), the re-ordering memory is read out row-by-row. As a result, bits of equal significance are grouped together and the bits of most significant weight will be read before bits of less significant weight. The number of rows written can be controlled by the controller, for example using the compression parameter Q_Write, which can be dynamically updated by the controller based on DDR congestion. In some cases, the number of rows written and/or the write operation can also be interrupted by other operations competing for memory access. On completion (with or without interruption), the Q(RV) value can be included in the HARQ buffer header and for example stored in DDR memory.
By ordering the data set in a buffer memory in a transposed manner relative to the order of reading the intermediate memory for writing the data in the ultimate memory (e.g. DDR), the implementation of the techniques discussed herein can be simplified and this additional step provides an efficient implementation of such techniques. In this example, a reading row-by-row (in the intermediate memory) of the bits to be stored can be associated with a storing of the data words column-by-column (in the intermediate memory) and, likewise, a reading column-by-column (in the intermediate memory) of the bits to be stored can be associated with a storing of the data words row-by-row (in the intermediate memory).
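The transposed arrangement of the re-ordering memory can be sketched as follows, assuming 6-bit words stored column-by-column and read out row-by-row; the function names and the Q_Write handling are illustrative only.

```python
# Hypothetical sketch of the re-ordering (intermediate) memory: each N-bit
# word is stored as a column, so reading the memory row-by-row yields all
# most significant bits first. q_write (the number of rows read out) is
# assumed to come from the controller based on congestion.
def write_columns(words, n_bits):
    """Store each word MSB-first as one column of an n_bits-row memory."""
    rows = [[] for _ in range(n_bits)]
    for w in words:
        for bit_pos in range(n_bits):  # bit_pos 0 = MSB row
            rows[bit_pos].append((w >> (n_bits - 1 - bit_pos)) & 1)
    return rows


def read_rows(rows, q_write):
    """Read out only the first q_write rows (most significant bit-planes)."""
    burst = []
    for row in rows[:q_write]:
        burst.extend(row)
    return burst


words = [0b101101, 0b010010, 0b111000]  # three 6-bit samples
rows = write_columns(words, 6)
burst = read_rows(rows, q_write=2)      # keep the 2 most significant planes
```

Reading the rows in order thus naturally produces the MSB-first ordering across all words, so truncating the read-out (a smaller q_write) discards only the least significant information.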
In a system such as the one illustrated in
It will be appreciated that in some cases bits may not be available to have the complete DDR sample (e.g. if the writing was truncated or interrupted). Also, in some cases, the reading itself will be truncated or interrupted. Accordingly and as discussed above, in some cases an incomplete LLR sample may sometimes be completed by padding of the portion of the LLR that has not been read (because it was not previously stored and/or because it was not read).
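The padding of an incomplete LLR sample can be sketched as follows, assuming that only the most significant bit-planes were stored or read and that the missing low-order bits are padded with zeros (any suitable padding value may be used); names are illustrative.

```python
# Illustrative sketch: rebuild full n_bits-wide words from only the most
# significant bit-planes that were actually stored and read, padding the
# unread low-order bits with zeros.
def pad_words(bit_planes, n_bits):
    """bit_planes: list of rows, row i holding bit i (0 = MSB) of every word."""
    q = len(bit_planes)                 # planes actually available
    n_words = len(bit_planes[0])
    words = []
    for i in range(n_words):
        w = 0
        for plane in bit_planes:
            w = (w << 1) | plane[i]     # assemble the bits that were read
        w <<= n_bits - q                # pad unread low-order bits with 0
        words.append(w)
    return words


planes = [[1, 0, 1], [0, 1, 1]]         # only 2 of 6 planes were written/read
restored = pad_words(planes, 6)         # full 6-bit words, zero-padded
```

The restored words retain their most significant content, which is why an interrupted write or read still yields usable (if coarser) samples.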
While the illustration of
While the example implementations herein mainly rely on a write or read operation on the next most significant bit of the data set before carrying out the operation on the bit in the same position in the next data set (or the next most significant bit in the first data set if already at the last data set), it will be appreciated that more than one bit may be written or read from each data set or LLR sample each time. For example, the system may operate with pairs of bits and write/read two bits of each word at every loop (unless only one bit is left in a word). In other cases, a variable number of bits may be used, selected between 1 and a suitable number n.
From one perspective, the LLR samples can be viewed as data sets and in some cases, each data set is a bit word Wi having N ordered bits Wi(0) to Wi(N−1), with N≥2. In some examples, the words will be read or written by reading or writing first all of the Wi(0) for each word Wi, then all of the Wi(1), all of the Wi(2), etc. until a stopping condition is reached and/or until the operation is interrupted. In examples where more than one bit is read or written at a time, the data sets may be read or written as follows: first all Wi(0, 1), then all of Wi(2,3) etc. In another example, the data sets may be read or written as follows: Wi(0,1); then Wi(2); then Wi(3, 4, 5), etc. This may be based on a predetermined pattern or adjusted dynamically, if appropriate.
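The access order described above, including variable group sizes such as Wi(0,1); then Wi(2); then Wi(3,4,5), can be sketched as follows; the group pattern and the function name are assumed for illustration.

```python
# Hypothetical sketch: generate the order in which bits are written/read
# when groups of bit positions are taken across all words before moving
# to the next group. The group pattern (e.g. 2 bits, then 1 bit) is an
# assumed configuration and could also be adjusted dynamically.
def access_order(n_words, pattern):
    """Yield (word_index, bit_positions) in bit-plane-group order."""
    start = 0
    for size in pattern:
        positions = tuple(range(start, start + size))
        for i in range(n_words):  # visit the same group in every word first
            yield i, positions
        start += size


# For 2 words and pattern (2, 1): W0(0,1), W1(0,1), W0(2), W1(2)
order = list(access_order(2, (2, 1)))
```

With a pattern of all ones this reduces to the one-bit-at-a-time ordering, Wi(0) for all i, then Wi(1), and so on.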
The skilled person will appreciate that although some of the examples above have been illustrated with two retransmissions (up to RV2), the number of retransmissions may be more or less than two and the same teachings and techniques will apply equally to such situations. It is also noted that while the techniques of the present disclosure will find particular use in the field of telecommunications and for example with the use of 5G or New Radio (NR), these techniques are not limited to these particular fields of use.
Likewise, while it is expected that the present techniques will be used in a system using incremental redundancy, these techniques can be used in other arrangements which do not use incremental redundancy, or even which do not use redundancy. Some of the technical strengths of these techniques are particularly well suited to data such as LLR data but other types of data may share similar characteristics and may thus also be well suited for use with the techniques discussed herein.
As will be appreciated, the teachings and techniques provided herein may be applied to any suitable memory, for example a single memory provided using a single device or multiple storing devices. The memory may also be distributed across multiple devices and/or may be a virtual memory. In some illustrative examples, the memory may be provided as a Double Data Rate “DDR” memory, such as a Synchronous Dynamic Random-Access Memory “SDRAM”.
As will be appreciated, some of the example features discussed above, while useful in combination with the techniques provided herein, should not be understood as limiting the scope of the present disclosure. For example, the use of a linear logarithmic compression step is optional.
Additionally, the teachings and techniques provided herein are expected to be particularly useful with the use of DDR memory but other types of memory may be used when implementing these teachings and techniques.
The present disclosure includes example arrangements falling within the scope of the claims (and other arrangements may also be within the scope of the following claims) and may also include example arrangements that do not necessarily fall within the scope of the claims but which are nonetheless useful to understand the teachings and techniques provided herein.
Example aspects of the present disclosure are presented in the following numbered clauses:
Clause 1. A method of controlling memory operations, the method comprising:
Clause 2. The method of Clause 1, further comprising, when a stopping event is detected, stopping the step of storing a further plurality of bits before completion of the step.
Clause 3. The method of Clause 1 or 2, wherein the method further comprises
Clause 4. The method of Clause 2 or 3 wherein a stopping event is triggered by one or more of:
Clause 5. The method of any one of Clauses 2 to 4 further comprising, upon detection of a stopping event and upon detection that a first data set of the plurality of data sets has not been fully stored in memory, storing an indication that the storing of the first data set has been interrupted.
Clause 6. The method of Clause 5 wherein the indication comprises an indication of the number of bits of the first data set that have been stored in memory.
Clause 7. The method of any one of Clauses 2 to 6 further comprising
Clause 8. The method of any preceding Clause wherein selecting one or more successive first bits of each data set comprises selecting only the most significant bit of the each data set as the one or more successive first bits of each data set.
Clause 9. The method of any preceding Clause wherein selecting one or more successive further bits of each data set comprises selecting only the most significant bit outside the stored portion of the each data set as one or more successive further bits of each data set.
Clause 10. The method of any preceding Clause wherein each data set is at least one of
Clause 11. The method of any preceding Clause wherein each data set is a bit word Wi having N ordered bits Wi(0) to Wi(N−1), with N greater than or equal to 2.
Clause 12. The method of Clause 11 further comprising:
Clause 13. The method of Clause 11 or 12, further comprising:
Clause 14. The method of Clause 12 or 13 further comprising stopping the reading and storing the read bits in memory when a stopping criterion is met.
Clause 15. The method of any preceding Clause wherein the memory is a Double Data Rate “DDR” Synchronous Dynamic Random-Access Memory “SDRAM”.
Clause 16. A method of controlling memory operations, the method comprising:
Clause 17. The method of Clause 16, further comprising, when a stopping event is detected, stopping the step of reading a further plurality of bits before completion of the step.
Clause 18. The method of Clause 16 or 17, wherein the method further comprises
Clause 19. The method of Clause 17 or 18 wherein a stopping event is triggered by one or more of:
Clause 20. The method of any one of Clauses 17 to 19 further comprising, upon detection of a stopping event, associating a value with bits of the plurality of data sets that have not been read from memory to generate full data sets.
Clause 21. The method of any one of Clauses 16 to 20, wherein selecting one or more successive first bits of each data set comprises selecting only the most significant bit of the each data set as the one or more successive first bits of each data set.
Clause 22. The method of any one of Clauses 16 to 21, wherein selecting one or more successive further bits of each data set comprises selecting only the most significant bit outside the read portion of the each data set as one or more successive further bits of each data set.
Clause 23. The method of any one of Clauses 16 to 22 wherein, upon detection that an earlier step of storing the plurality of data sets had been interrupted and that the portion of the plurality of data sets stored during the earlier step has been fully read, associating a value with bits of the plurality of data sets outside the first portion to generate full data sets.
Clause 24. The method of any one of Clauses 16 to 23, wherein each data set is at least one of:
Clause 25. The method of any one of Clauses 16 to 24, wherein each data set is a bit word Wi having N ordered bits Wi(0) to Wi(N−1), with N greater than or equal to 2.
Clause 26. The method of Clause 25 further comprising:
Clause 27. The method of Clause 25 or 26 further comprising stopping the reading and storing the read bits in the re-ordering memory when a stopping criterion is met.
Clause 28. The method of any one of Clauses 16 to 27, wherein the memory is a Double Data Rate “DDR” Synchronous Dynamic Random-Access Memory “SDRAM”.
Clause 29. A controller for controlling memory operations, the controller being configured to:
Clause 30. The controller of Clause 29, wherein the controller is further configured to implement the method of any one of Clauses 2 to 15.
Clause 31. A controller for controlling memory operations, the controller being configured to:
Clause 32. The controller of Clause 31, wherein the controller is further configured to implement the method of any one of Clauses 16 to 28.
Clause 33. A controller system comprising:
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/GB2022/050027 | 1/7/2022 | WO |