Various embodiments relate to a method and system for decoding a signal at a receiver in a multiple input multiple output (MIMO) communication system.
Current communication systems require efficient utilization of the radio frequency spectrum in order to increase the achievable data rate within a given transmission bandwidth. Typically, each successive generation of communication systems aims at ultra-high throughput, seamless connectivity, and/or low latency. This can be accomplished by employing multiple transmit and receive antennas combined with signal processing and simultaneous communication with multiple users, each having multiple spatial streams or layers. This is possible with the advent of 4G, massive MIMO (mMIMO), and 5G, which results in enhanced spectral efficiency. However, the use of multiple transmitting and receiving antennas results in multi-user interference. Thus, combating the multi-user interference using an efficient implementation becomes essential. In order to reduce multi-user interference, efficient and robust multi-user MIMO (MU-MIMO) signal detection algorithms are used. MU-MIMO multiplies the capacity of a radio link by using multiple transmitting and receiving antennas to exploit multipath propagation. By exploiting the multipath propagation, MU-MIMO facilitates transmitting and receiving more than one data signal simultaneously over the same radio channel.
Further, in a communication system comprising a transmitter and a receiver, an RF modulated signal from the transmitter may reach the receiver via a number of propagation paths. The characteristics of the propagation paths typically vary over time due to a number of factors such as fading and multipath, resulting in interference. Further, to combat the interference effectively, joint receiver techniques such as maximum likelihood detection (MLD) are used to facilitate high spectral efficiency.
Furthermore, the MU-MIMO signal detection algorithms are used to combat interference at a receiver side of the communication systems. While dealing with interference, the MU-MIMO signal detection algorithms exhibit a trade-off between performance and computational complexity. Traditional MU-MIMO signal detection algorithms are linear and provide sub-optimal performance. These traditional MU-MIMO signal detection algorithms may comprise at least one of Maximal Ratio Combining (MRC), Zero Forcing (ZF), or Linear Minimum Mean Square Error (LMMSE) estimators. When the propagation paths between the transmitting and receiving antennas are linearly independent (i.e., a transmission on one path is not a linear combination of the transmissions on the other paths), the likelihood of correctly receiving a data transmission increases as the number of antennas increases. However, this adds to the computational complexity at the receiver side of the communication systems. Further, linear detectors like MRC, ZF, and LMMSE detectors do not provide a good error performance except under very specific channel conditions. The difference in the performance of these detectors from that of an optimal detector is quite significant under most channel conditions, sometimes up to several decibels (dB). Further, the optimal detector, which employs a joint decoding algorithm, is significantly more complex. Sphere decoding is one appealing technique that performs MLD; however, sphere decoding is not practical for commercial implementation of MU-MIMO communication systems, because the number of computations performed in MLD techniques is quite high.
The receivers in the MU-MIMO communication system are expected to output soft information (log-likelihood ratios) about the bits that are decoded. An exact MLD soft-output sphere decoder (depth-first approach) detects all layers from the transmitting user equipments (UEs) jointly. However, when the number of data streams is large, a single decoding phase takes a long time, which makes it impractical for commercial use. Using the MLD rule, the detector outputs the log-likelihood ratios (LLRs) which are used as inputs to the channel decoder. However, such a rule increases the complexity when the number of UEs/layers/data streams is large.
Further, a non-MLD fixed-complexity soft-output sphere decoder (depth-first approach) limits the number of computations to decode all the layers corresponding to the UEs, resulting in fixed complexity. However, this approach suffers from severe performance degradation in mMIMO systems under practical channel conditions. Additionally, there is no general rule on how to limit the number of computations for a given mMIMO configuration.
In order to combat the above discussed problems of the communication systems, a K-best sphere decoder (breadth-first approach) is currently used at the receiver side of the communication systems, as disclosed in
Thus, the disadvantage of the K-best decoder is that the value of K is fixed for each layer and needs to be sufficiently large. This results in added complexity at the receiver side of the communication system. Hence, there is a need for reduced computational complexity at the receiver side of the communication system. However, a reduced-complexity receiver which provides near-optimal error performance for all channel conditions has not been considered for MU-MIMO communication systems. Therefore, there is a need for an improved method and receiver for decoding a signal in the MU-MIMO communication system, in order to provide near-optimal error performance along with practical commercial implementation.
The present disclosure addresses the above-mentioned needs by the subject-matter covered by the independent claims. Preferred embodiments of the invention are defined in the dependent claims.
According to a first aspect of the invention, there is provided a receiver of a multiple input multiple output, MIMO, communication system. The receiver may comprise means for obtaining a signal y over a channel from a plurality of transmitters in communication with the receiver, wherein the signal y comprises data signals transmitted on a plurality of layers N, means for obtaining a concatenated matrix R, representing the channel between the plurality of transmitters and the receiver, wherein the concatenated matrix R is obtained based on an estimated channel matrix H, and means for determining an ordered list based at least on the signal y and the obtained concatenated matrix R, wherein the ordered list is a list of N-dimensional vectors and each vector is a candidate constellation point for the transmitted data signal based on a predefined metric. In one embodiment, the transmitted plurality of data signals is a vector of constellation points. In one embodiment, the means for determining the ordered list comprises a List Search Block (LSB), wherein the LSB is configured to implement a Machine Learning (ML) algorithm. The usage of the LSB improves the receiver performance in terms of block error rate (BLER) with low decoding latency and low hardware power consumption.
In one embodiment, the concatenated matrix R may be obtained by a QR decomposition of the estimated channel matrix H.
In some embodiments of the present invention, the means for determining the ordered list may further comprise means for determining an ordered list N, at a root layer of the plurality of layers N, and means for determining a plurality of ordered lists l for the other layers of the plurality of layers N, wherein each ordered list l being determined may be based on an ordered list l+1, where l=N−1, N−2, . . . 1.
In some embodiments of the present invention, the means for determining the ordered list N, at the root layer, may comprise means for obtaining one or more input parameters, wherein the one or more input parameters may comprise at least an Nth element of the signal y, a constellation size M, and a required number of surviving candidates KN, means for determining a plurality of partial symbol vectors, wherein the determined partial symbol vectors are the KN surviving candidates for the root layer, based at least on the one or more input parameters, means for determining a first smallest possible window using a trained ML model, wherein the first smallest possible window comprises a plurality of constellation points K′N for a signal y′ derived from the Nth element of the signal y, wherein the plurality of constellation points K′N may comprise the KN closest constellation points to the signal y′, means for determining partial Euclidean distances (PEDs) between the signal y′ and each of the plurality of constellation points K′N in the first smallest possible window, and means for sorting the plurality of K′N constellation points based on the determined PEDs, thereby determining the ordered list N. In one embodiment, the KN surviving candidates may refer to one or more survivors in the root layer, i.e., layer N. The usage of the pre-trained ML block eliminates the need to calculate the distance to all the M constellation points. Such a use of the pre-trained ML block facilitates the reduction in computational complexity.
In some embodiments of the present invention, the determined partial Euclidean distances associated with the plurality of constellation points K′N in the ordered list N, may be stored in a list N.
In some embodiments of the present invention, the first smallest possible window for the KN surviving candidates is represented by a class indicated by integers Ly, Ry, By, and Ty, wherein Ly represents the number of constellation points to the left of the closest constellation point in the smallest possible window, Ry represents the number of constellation points to the right of the closest constellation point in the smallest possible window, By represents the number of constellation points to the bottom of the closest constellation point in the smallest possible window, and Ty represents the number of constellation points to the top of the closest constellation point in the smallest possible window.
In some embodiments of the present invention, the means for determining the ordered lists l, for the other layers of the plurality of layers may comprise means for obtaining one or more input parameters, wherein the one or more input parameters comprise at least an lth element of the signal y, the constellation size M, and a required number of surviving candidates Kl for layer l, where l=N−1, N−2, . . . 1, means for obtaining an ordered list l and a list l from the surviving candidate in the ordered list l+1, where l=N−1, N−2, . . . 1, means for determining the closest constellation point for each surviving candidate in the ordered list l+1, means for determining a partial symbol vector for each determined closest constellation point, means for determining partial Euclidean distances (PEDs) based on the determined partial symbol vectors, means for sorting the ordered list l+1 wherein the ordered list l+1 is sorted in ascending order of determined PEDs, means for determining a smallest possible window using the trained ML model, for each element in the sorted ordered list l+1, wherein the smallest possible window comprises a plurality of constellation points Kl,i, where i=1, 2, . . . , Kl+1, means for determining partial Euclidean distances (PEDs) between a function of lth element of the signal y and each of the Kl,i constellation points, and means for sorting the plurality of Kl,i constellation points in the smallest possible window based on the determined PEDs; and thereby determining the ordered list l.
In some embodiments of the present invention, the determined partial Euclidean distances associated with the plurality of constellation points Kl,i in the ordered list l, may be stored in a list l, where l=N−1, N−2, . . . , 1.
Further, the smallest possible window for the Kl surviving candidates is represented by a class indicated by integers Ly′, Ry′, By′, and Ty′, wherein Ly′ represents a number of constellation points to the left of the closest constellation point in the smallest possible window, Ry′ represents a number of constellation points to the right of the closest constellation point in the smallest possible window, By′ represents a number of constellation points to the bottom of the closest constellation point in the smallest possible window, and Ty′ represents a number of constellation points to the top of the closest constellation point in the smallest possible window.
In some embodiments of the present invention, the receiver may comprise means for performing Log-Likelihood ratio (LLR) computation on the determined ordered list . The receiver may further comprise means for pre-processing the signal y using one or more pre-processing techniques, wherein the one or more pre-processing techniques comprise at least noise-whitening technique and QR decomposition technique.
In another embodiment, the receiver may be configured to train the ML model based at least on a training data set with input features which are functions of a one-dimensional complex-valued signal y, the constellation size M, and the number of surviving candidates KN.
In accordance with another embodiment, a multi-user multiple input multiple output, MU-MIMO, communication system comprising a plurality of transmitters, a receiver, and a MIMO channel may be disclosed. The receiver may comprise means for obtaining a signal y over a channel, from a plurality of transmitters in communication with the receiver, wherein the signal y comprises data signals transmitted on a plurality of layers N; means for obtaining a concatenated matrix R, representing the channel between the plurality of transmitters and the receiver, wherein the concatenated matrix R is obtained based on an estimated channel matrix H; and means for determining an ordered list based at least on the signal y and the obtained concatenated matrix R, wherein the ordered list is a list of N-dimensional vectors and each vector is a candidate constellation point for the transmitted data signal based on the predefined metric, wherein the means for determining the ordered list may comprise a List Search Block (LSB), wherein the LSB is configured to implement a Machine Learning (ML) algorithm. The usage of the LSB improves the receiver performance in terms of block error rate (BLER) with low decoding latency and low hardware power consumption.
Further, the concatenated matrix R may be obtained by a QR decomposition of the estimated channel matrix H.
The receiver may comprise means for determining the ordered list which further comprises means for determining an ordered list N, at a root layer of the plurality of layers N, and means for determining a plurality of ordered lists l for other layers of the plurality of layers N, wherein each ordered list l being determined may be based on the ordered list l+1, where l=N−1, N−2, . . . 1.
In some embodiments of the present invention, the means for determining the ordered list N, at the root layer, may comprise means for obtaining one or more input parameters, wherein the one or more input parameters may comprise at least an Nth element of the signal y, the constellation size M, and a required number of surviving candidates KN, means for determining a plurality of partial symbol vectors, wherein the determined partial symbol vectors are the KN surviving candidates for the root layer, based at least on the one or more input parameters, means for determining a first smallest possible window using a trained ML model, wherein the first smallest possible window comprises a plurality of constellation points K′N for a signal y′ derived from the Nth element of the signal y, wherein the plurality of constellation points K′N may comprise the KN closest constellation points to the signal y′, means for determining partial Euclidean distances (PEDs) between the signal y′ and each of the plurality of constellation points K′N in the first smallest possible window, and means for sorting the plurality of constellation points K′N based on the determined PEDs, thereby determining the ordered list N. In one embodiment, the KN surviving candidates may refer to one or more survivors in the root layer, i.e., layer N. In another embodiment, the Kl surviving candidates may refer to one or more survivors in layer l. The usage of the pre-trained ML block eliminates the need to calculate the distance to all the M constellation points. Such a use of the pre-trained ML block facilitates the reduction in computational complexity.
In some embodiments of the present invention, the determined partial Euclidean distances associated with the plurality of constellation points K′N in the ordered list N may be stored in a list N.
Further, the first smallest possible window for the KN surviving candidates is represented by a class indicated by integers Ly, Ry, By, and Ty, wherein Ly represents a number of constellation points to the left of the closest constellation point in the smallest possible window, Ry represents a number of constellation points to the right of the closest constellation point in the smallest possible window, By represents a number of constellation points to the bottom of the closest constellation point in the smallest possible window, and Ty represents a number of constellation points to the top of the closest constellation point in the smallest possible window.
In some embodiments of the present invention, the means for determining the ordered lists l, for the other layers of the plurality of layers may comprise means for obtaining one or more input parameters, wherein the one or more input parameters comprise at least an lth element of the signal y, a constellation size M, and a required number of surviving candidates Kl for layer l, where l=N−1, N−2, . . . 1, means for obtaining an ordered list l and a list l from the surviving candidate in the ordered list l+1, where l=N−1, N−2, . . . 1, means for determining a closest constellation point for each surviving candidate in the ordered list l+1, means for determining a partial symbol vector for each determined closest constellation point, means for determining partial Euclidean distances (PEDs) based on the determined partial symbol vectors, means for sorting the ordered list l+1, wherein the ordered list l+1 is sorted in ascending order of the determined PEDs, means for determining a smallest possible window, using the trained ML model, for each element in the sorted ordered list l+1, wherein the smallest possible window comprises a plurality of constellation points Kl,i, wherein i=1, 2, . . . , Kl+1, means for determining partial Euclidean distances (PEDs) between a function of the lth element of the signal y and each of the constellation points Kl,i, and means for sorting the plurality of constellation points Kl,i in the smallest possible window based on the determined PEDs; and thereby determining the ordered list l.
In some embodiments of the present invention, the determined partial Euclidean distances associated with the plurality of constellation points Kl,i in the ordered list l may be stored in a list l. Further, the smallest possible window of the plurality of Kl surviving candidates is represented by a class indicated by integers Ly′, Ry′, By′, and Ty′, wherein Ly′ represents a number of constellation points to the left of the closest constellation point in the smallest possible window, Ry′ represents a number of constellation points to the right of the closest constellation point in the smallest possible window, By′ represents a number of constellation points to the bottom of the closest constellation point in the smallest possible window, and Ty′ represents a number of constellation points to the top of the closest constellation point in the smallest possible window.
In some embodiments of the present invention, the receiver may comprise means for performing Log-Likelihood ratio (LLR) computation on the determined ordered list . The receiver may further comprise means for pre-processing the signal y using one or more pre-processing techniques, wherein the one or more pre-processing techniques comprise at least noise-whitening technique and QR decomposition technique.
In another embodiment, the receiver may be configured to train the ML model based at least on a training data set with input features which are functions of a one-dimensional complex-valued signal y, the constellation size M, and the number of surviving candidates KN.
According to a second aspect of the present invention, a method for decoding a signal y at a receiver in a multiple input multiple output, MIMO, communication system, may be disclosed.
The method may comprise obtaining the signal y over a channel from a plurality of transmitters in communication with a receiver, wherein the signal y comprises data signals transmitted on a plurality of layers N, obtaining a concatenated matrix R, representing the channel between the plurality of transmitters and the receiver, wherein the concatenated matrix R is obtained based on an estimated channel matrix H, and determining an ordered list based at least on the signal y and the obtained concatenated matrix R, wherein the ordered list is a list of N-dimensional vectors and each vector is a candidate constellation point for the transmitted data signal based on the predefined metric, wherein the determining of the ordered list may be performed by a List Search Block (LSB), wherein the LSB is configured to implement a Machine Learning (ML) algorithm. The usage of the LSB improves the receiver performance in terms of block error rate (BLER) with low decoding latency and low hardware power consumption.
In one embodiment, the method wherein the concatenated matrix R is obtained by a QR decomposition of the estimated channel matrix H.
The method may further comprise determining an ordered list N, at a root layer of the plurality of layers N, and determining a plurality of ordered lists l for other layers of the plurality of layers N, wherein each ordered list l being determined may be based on the ordered list l+1, where l=N−1, N−2, . . . 1.
The method, wherein determining the ordered list N, at the root layer, may comprise obtaining one or more input parameters, wherein the one or more input parameters may comprise at least an Nth element of the signal y, the constellation size M, and a required number of surviving candidates KN, determining a plurality of partial symbol vectors, wherein the determined partial symbol vectors are the KN surviving candidates for the root layer, based at least on the one or more input parameters, determining a first smallest possible window using a trained ML model, wherein the first smallest possible window comprises a plurality of constellation points K′N for a signal y′ derived from the Nth element of the signal y, wherein the plurality of constellation points K′N may comprise the KN closest constellation points to the signal y′, determining partial Euclidean distances (PEDs) between the signal y′ and each of the plurality of constellation points K′N in the first smallest possible window, and sorting the plurality of constellation points K′N based on the determined PEDs, thereby determining the ordered list N. In one embodiment, the KN surviving candidates may refer to one or more survivors in the root layer, i.e., layer N. The usage of the pre-trained ML block eliminates the need to calculate the distance to all the M constellation points. Such a use of the pre-trained ML block facilitates the reduction in computational complexity.
The method may further comprise storing the partial Euclidean distances (PEDs) associated with the plurality of constellation points K′N in the ordered list N in a list N.
The method, wherein determining the ordered lists l, for the other layers of the plurality of layers N, may comprise obtaining one or more input parameters, wherein the one or more input parameters comprise at least an lth element of the signal y, a constellation size M, and a required number of Kl surviving candidates for layer l, where l=N−1, N−2, . . . 1, obtaining an ordered list l and a list l, where l=N−1, N−2, . . . 1, determining a closest constellation point for each surviving candidate in the ordered list l+1, determining a partial symbol vector for each determined closest constellation point, determining partial Euclidean distances (PEDs) based on the determined partial symbol vectors, sorting the ordered list l+1, wherein the ordered list l+1 is sorted in ascending order of the determined PEDs, determining a smallest possible window, using the trained ML model, for each element in the sorted ordered list l+1, wherein the smallest possible window comprises a plurality of constellation points Kl,i, wherein i=1, 2, . . . , Kl+1, determining partial Euclidean distances (PEDs) between a function of the lth element of the signal y and each of the plurality of constellation points Kl,i, and sorting the plurality of constellation points Kl,i in the smallest possible window based on the determined PEDs; and thereby determining the ordered list l.
In some embodiments of the present invention, the determined partial Euclidean distances associated with the plurality of constellation points Kl,i in the ordered list l may be stored in a list l, where l=N−1, N−2, . . . ,1. Further, the smallest possible window for the Kl surviving candidates is represented by a class indicated by integers Ly′, Ry′, By′, and Ty′, wherein Ly′ represents a number of constellation points to the left of the closest constellation point in the smallest possible window, Ry′ represents a number of constellation points to the right of the closest constellation point in the smallest possible window, By′ represents a number of constellation points to the bottom of the closest constellation point in the smallest possible window, and Ty′ represents a number of constellation points to the top of the closest constellation point in the smallest possible window.
In one embodiment, the method may comprise performing Log-Likelihood ratio (LLR) computation on the determined ordered list . The method may further comprise pre-processing the signal y using one or more pre-processing techniques, wherein the one or more pre-processing techniques comprise at least noise-whitening technique and QR decomposition technique.
In another embodiment, the ML model may be trained based at least on a training data set with input features which are functions of a one-dimensional complex-valued signal y, the constellation size M, and the required number of surviving candidates KN.
According to a third aspect of the present invention, a non-transitory computer-readable medium may be disclosed. The non-transitory computer-readable medium may comprise instructions for causing a processor to perform functions for decoding a signal y at a receiver in a multi-user multiple input multiple output (MU-MIMO) communication system. The non-transitory computer-readable medium may comprise instructions for causing a processor to perform functions including obtaining a signal y over a channel, from a plurality of transmitters in communication with the receiver, wherein the signal y comprises data signals transmitted on a plurality of layers N, obtaining a concatenated matrix R, representing the channel between the plurality of transmitters and the receiver, wherein the concatenated matrix R is obtained based on an estimated channel matrix H, and determining an ordered list based at least on the signal y and the obtained concatenated matrix R, wherein the ordered list is a list of N-dimensional vectors and each vector is a candidate constellation point for the transmitted data signal based on a predefined metric, wherein the determining of the ordered list may be performed by a List Search Block (LSB), wherein the LSB is configured to implement a Machine Learning (ML) algorithm. The usage of the LSB in the receiver improves the receiver performance in terms of block error rate (BLER) with low decoding latency and low hardware power consumption.
In one embodiment, the concatenated matrix R may be obtained by a QR decomposition of the estimated channel matrix H.
In some embodiments of the present invention, the non-transitory computer-readable medium includes instructions for causing the processor, in determining the ordered list, to perform functions including determining an ordered list N, at a root layer of the plurality of layers, and determining a plurality of ordered lists l for other layers of the plurality of layers, wherein each ordered list l being determined may be based on the ordered list l+1, where l=N−1, N−2, . . . 1.
In one embodiment, determining the ordered list N, at the root layer, may comprise obtaining one or more input parameters, wherein the one or more input parameters may comprise at least an Nth element of the signal y, the constellation size M, and a required number of surviving candidates KN, determining a plurality of partial symbol vectors, wherein the determined partial symbol vectors are the KN surviving candidates for the root layer, based at least on the one or more input parameters, determining a first smallest possible window using a trained ML model, wherein the first smallest possible window comprises a plurality of constellation points K′N for a signal y′ derived from the Nth element of the signal y, wherein the plurality of constellation points K′N may comprise the KN closest constellation points to the signal y′, determining partial Euclidean distances (PEDs) between the signal y′ and each of the K′N constellation points in the first smallest possible window, and sorting the plurality of constellation points K′N based on the determined PEDs, thereby determining the ordered list N. In one embodiment, the KN surviving candidates may refer to one or more survivors in the root layer, i.e., layer N.
In one embodiment, the determined partial Euclidean distances associated with the plurality of constellation points K′N in the ordered list N, may be stored in a list N.
Further, the non-transitory computer-readable medium includes instructions wherein determining the ordered lists l, for the other layers of the plurality of layers N, may comprise obtaining one or more input parameters, wherein the one or more input parameters comprise at least an lth element of the signal y, a constellation size M, and a required number of surviving candidates Kl for layer l, where l=N−1, N−2, . . . 1, obtaining an ordered list l and a list l from the surviving candidate in the ordered list l+1, where l=N−1, N−2, . . . 1, determining a closest constellation point for each surviving candidate in the ordered list l+1, determining a partial symbol vector for each determined closest constellation point, determining partial Euclidean distances (PEDs) based on the determined partial symbol vectors, sorting the ordered list l+1, wherein the ordered list l+1 is sorted in ascending order of the determined PEDs, determining a smallest possible window, using the trained ML model, for each element in the sorted ordered list l+1, wherein the smallest possible window comprises a plurality of constellation points Kl,i, wherein i=1, 2, . . . , Kl+1, determining partial Euclidean distances (PEDs) between a function of the lth element of the signal y and each of the plurality of constellation points Kl,i, and sorting the plurality of constellation points Kl,i in the smallest possible window based on the determined PEDs; and thereby determining the ordered list l.
In one embodiment, the determined partial Euclidean distances associated with plurality of constellation points Kl,i in the ordered list l may be stored in a list l. Further, the smallest possible window for the Kl surviving candidates is represented by a class indicated by integers Ly′, Ry′, By′, and Ty′, wherein Ly′ represents a number of constellation points to the left of the closest constellation point in the smallest possible window, Ry′ represents a number of constellation points to the right of the closest constellation point in the smallest possible window, By′ represents a number of constellation points to the bottom of the closest constellation point in the smallest possible window, and Ty′ represents a number of constellation points to the top of the closest constellation point in the smallest possible window.
Further, the non-transitory computer-readable medium includes instructions for performing Log-Likelihood ratio (LLR) computation on the determined ordered list . Further, the non-transitory computer-readable medium includes instructions for pre-processing the signal y using one or more pre-processing techniques, wherein the one or more pre-processing techniques comprise at least noise-whitening technique and QR decomposition technique.
In another embodiment, the non-transitory computer-readable medium includes instructions for configuring the receiver to train the ML model based at least on a training data set with input features which are functions of a one-dimensional complex-valued signal y, the constellation size M, and the number of surviving candidates KN.
Altogether, the embodiments described herein provide several advantages. In particular,
To the accomplishment of the foregoing and related ends, one or more aspects comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative aspects and are indicative of but a few of the various ways in which the principles of the aspects may be employed. Other advantages and novel features will become apparent from the following detailed description when considered in conjunction with the drawings and the disclosed aspects are intended to include such aspects and their equivalents.
Further embodiments, details, advantages, and modifications of the present embodiments will become apparent from the following detailed description of the embodiments, which is to be taken in conjunction with the accompanying drawings, wherein:
Some embodiments of this disclosure, illustrating its features, will now be discussed in detail. The words “comprising,” “having,” “containing,” and “including,” and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to the listed item or items.
It should also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Although any apparatus and method similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, the apparatus and methods are now described.
Embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings in which like numerals represent like elements throughout the several figures, and in which embodiments are shown. Embodiments of the claims may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. The examples set forth herein are non-limiting examples and are merely examples among other possible examples.
An embodiment of the present disclosure and its potential advantages are understood by referring to
Each one of the UE transmitters 202 may comprise at least one transmitting antenna and the base station MIMO receiver 204 may comprise at least one receiving antenna. In one example, each one of the UE transmitters 202 may comprise multiple transmitting antennas and the base station MIMO receiver 204 may comprise multiple receiving antennas. It should be noted that the UE transmitter 202 may be referred to as and/or may include some or all of the functionality of a user equipment (UE), mobile station (MS), terminal, an access terminal, a subscriber unit, a station, etc. Examples of the UE transmitter 202 may include, but are not limited to, cellular phones, smartphones, personal digital assistants (PDAs), wireless devices, electronic automobile consoles, sensors, or laptop computers. It should be noted that the base station MIMO receiver 204 may be hereinafter referred to as a base station. In one embodiment, the base station may serve the UEs.
Further, each one of the UE transmitter 202 may communicate with the base station MIMO receiver 204, via a channel 206. In one embodiment, the channel 206 may be a wireless MIMO channel. Additionally, the channel 206 between the UE transmitter 202 and the base station MIMO receiver 204 may have a status or a state. Further, the status of the channel 206 may vary over time and may be described by one or more properties of the channel 206. It should be noted that properties of the channel 206 may, for example, comprise a channel gain, a channel phase, a signal-to-noise ratio (SNR), a received signal strength indicator (RSSI), or a transfer matrix. In one embodiment, the channel 206 may corrupt the signal being transmitted over the channel 206.
It will be apparent to one skilled in the art that the above-mentioned components of the MU-MIMO communication system 200 have been provided only for illustration purposes. The MU-MIMO communication system 200 may include a plurality of receivers as well, without departing from the scope of the disclosure.
Referring to
Further, the data stream 208 from each of the plurality of UE may undergo a channel coding 210 of data streams 208. Additionally, the plurality of user equipment (UE) UE1, UE2, . . . UEN
In one embodiment, UEk transmits Nt(k) number of modulated data streams. It should be noted that the data streams 208 may be referred to as layers. In one embodiment, at least two data streams may be received at the base station MIMO receiver 204, which may be from the same UE or different UE. Further, the data stream of UEk may be represented by sk∈QN
Based on the received data streams 208, the base station MIMO receiver 204 may pass the received signal y_r to a joint receiver and equalizer 214. Further, the joint receiver and equalizer 214 may receive a channel estimation signal from a channel estimator 216. Successively, the joint receiver and equalizer 214 may process the signal y_r and the channel estimation signal. In one embodiment, y_r may be represented as
y_r = Hs + n_int + n_AWGN,
where H is the channel matrix, n_AWGN is the additive white Gaussian noise at the base station MIMO receiver 204, n_int is the co-channel interference from other cells, and s = [s_1^T, s_2^T, . . . , s_{Nu}^T]^T is the concatenated vector of the data streams transmitted by the Nu UEs.
Further, the processed information may be sent to a channel decoder 218. It should be noted that the processed information may be referred to as soft information. In one embodiment, the soft information may be based on the received signal yr. The channel decoder 218 may classify the data related to each user equipment UE1, UE2, . . . UEN
It will be apparent to one skilled in the art that above-mentioned uplink transmission scenario in a massive MIMO communication system 200 has been provided only for illustration purposes. In one embodiment, additional impairments may be added on top of this scenario due to hardware, without departing from the scope of the disclosure.
At first, the joint receiver and equalizer 214 may receive the signal yr and a channel estimation signal Hest. In one embodiment, yr and Hest may be received by the pre-processing block 302. In one embodiment, the plurality of user equipment (UE) UE1, UE2, . . . UEN
At the pre-processing block 302, Nr is the number of received inputs at the base station MIMO receiver 204 and Nt(k) is the number of transmit antennas at UEk, k=1,2, . . . , Nu. In one embodiment, the pre-processing stage may comprise noise-whitening and QR decomposition. At first, the noise-whitening of the signal y_r may suppress the effects of unwanted interference in the signal y_r. In one embodiment, noise-whitening may whiten the interference-cum-noise associated with the signal y_r, which may require an estimation of the Interference Covariance Matrix, denoted as:
C = E{(y_r − Hs)(y_r − Hs)^H},
where C is obtained by averaging over pilot/reference symbol locations within a code block transmitted by the UEs. Further, noise whitening may comprise performing a Cholesky decomposition, denoted by C = LL^H. Thereafter, the received signal is multiplied by L^(−1) to provide an effective received signal L^(−1)y_r with whitened noise.
Thereafter, QR decomposition of the whitened channel matrix, L^(−1)H = QR, and multiplication of the whitened signal by Q^H yield the equivalent system model
y = Rs + n,   y ∈ ℂ^(N×1),   R ∈ ℂ^(N×N),   s ∈ Q^(N×1),   n ∈ ℂ^(N×1),
where n = Q^H L^(−1)(n_int + n_AWGN) is the whitened and rotated noise, R is upper triangular, and Q denotes the M-QAM constellation
Q = {a + jb | a, b = −√M+1, −√M+3, . . . , √M−3, √M−1},   j = √(−1).
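By way of illustration, the pre-processing stage described above may be sketched as follows. This is a minimal NumPy example; the function name `whiten_and_qr`, the direct use of `numpy.linalg.cholesky`, `numpy.linalg.inv`, and `numpy.linalg.qr`, and the assumption that the interference covariance C has already been estimated are all illustrative assumptions rather than the disclosed implementation:

```python
import numpy as np

def whiten_and_qr(y_r, H_est, C):
    """Return (y, R) such that y = R s + n for the whitened, rotated model."""
    # Cholesky factor of the interference-plus-noise covariance, C = L L^H
    L = np.linalg.cholesky(C)
    L_inv = np.linalg.inv(L)
    # Whiten the received signal and the channel estimate
    y_w = L_inv @ y_r
    H_w = L_inv @ H_est
    # QR decomposition of the whitened channel: H_w = Q R (R upper triangular)
    Q, R = np.linalg.qr(H_w)
    # Rotate the whitened signal: y = Q^H y_w = R s + n
    y = Q.conj().T @ y_w
    return y, R
```

In this sketch, the output pair (y, R) corresponds to the inputs of the LSB described below.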
As shown in
Further, the output of the LSB 304 may include an ordered list ℒ. The ordered list ℒ may be defined as:
ℒ = {s_i ∈ Q^(N×1), i = 1, 2, . . . , K | d(s_1) ≤ d(s_2) ≤ . . . ≤ d(s_K)},
where ∀s ∈ ℒ, ∀ś ∉ ℒ, d(s) ≤ d(ś), and d(s) ≜ ∥y − Rs∥; that is, ℒ contains the K vectors belonging to Q^(N×1) with the least distance metrics d(s), and if d(s_j) = d(s_k), s_j is placed before s_k if j < k. In one embodiment, for a received signal y = [y_1, y_2, . . . , y_N]^T, the associated symbol vector is s = [s_1, s_2, . . . , s_N]^T, and r_{i,j} denotes the (i, j)th element of R. It should be noted that R is based on a channel estimate. Further, a partial received vector may be represented by y_{i:N} ≜ [y_i, . . . , y_N]^T and a partial symbol vector may be represented by s_{i:N} ≜ [s_i, . . . , s_N]^T, where 1 ≤ i ≤ N. Furthermore, R_{i:N} denotes the submatrix of R comprising rows i to N and columns i to N, and r_{i,i:N} ≜ [r_{i,i}, r_{i,i+1}, . . . , r_{i,N}] denotes the sub-vector of row i of R consisting of the elements from columns i to N. Thus, the partial Euclidean distance (PED) at layer i is denoted by:
δ(s_{i:N}) = ∥y_{i:N} − R_{i:N} s_{i:N}∥,
which satisfies d(s) = δ(s_{1:N}) and can be computed recursively as
δ²(s_{i:N}) = δ²(s_{i+1:N}) + ∥y_i − r_{i,i} s_i − r_{i,i+1:N} s_{i+1:N}∥².
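As an illustration of the recursion above, the squared PED may be updated one layer at a time. The following sketch, with the assumed helper name `ped_squared` and 0-based array indexing, is not part of the disclosure:

```python
import numpy as np

def ped_squared(y, R, s_partial, i, prev_ped_sq):
    """Squared PED update when symbol s_i is prepended to s_{i+1:N}.

    `s_partial` holds s_{i:N}; `i` is the 1-based layer index.
    """
    s_partial = np.asarray(s_partial)
    idx = i - 1                                    # 0-based row index of layer i
    residual = y[idx] - R[idx, idx:] @ s_partial   # y_i - r_{i,i:N} s_{i:N}
    return prev_ped_sq + np.abs(residual) ** 2
```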
It should be noted that the ordered list at each layer may comprise the constellation symbols with the smallest PEDs for the target layer. Further, the layers may be classified into a root layer and remaining layers. In one embodiment, for 10 users with one transmitting antenna each, the number of layers may be 10. Thus, among the 10 layers, layer 10 may be considered as the root layer, and layers 9 to 1 may be considered as the remaining layers.
Further, for each layer l, 1 ≤ l < N, an ordered list of partial symbol vectors with the Kl smallest PEDs is determined based on the surviving partial symbol vectors from layer l+1. It should be noted that the algorithm ends upon reaching layer 1, where a final ordered list of K1=K symbol vectors is obtained as output. In one embodiment, an upper threshold for Kl may be pre-set and may be determined based on ∥r_{l,l}∥. The value of ∥r_{l,l}∥ is inversely proportional to the value of Kl for near-optimal detection of the signal y.
Such usage of the LSB 304 in a receiver improves the receiver performance in terms of block error rate (BLER) with low decoding latency and hardware power consumption. Consequently, it should be noted that system throughput may be improved.
Further, the ordered list ℒ may be transmitted to the LLR computation block 306. It should be noted that the ordered list ℒ is a list of N-dimensional vectors and each vector is a candidate constellation point for the transmitted signal based on a predefined metric. It should be noted that the predefined metric is the Euclidean norm of y − Rs, which is the difference between the signal vector y and Rs, where s is the candidate point.
The output of the LLR computation block 306 may be referred to as soft information. Further, the soft information may be fed to the channel decoder 218. In one embodiment, the channel decoder 218 may be, but is not limited to, a belief propagation decoder, a polar list decoder, a Turbo decoder, or a convolutional decoder. In one embodiment, the channel decoder 218 may decode signals corresponding to each UE. Furthermore, it should be noted that both the quality of the computed LLRs and the complexity increase with an increasing number of candidate points. In one embodiment, where each symbol in the vector s may take a value from an M-QAM constellation, there may be a total of N log₂ M bits transmitted at a time. Further, let 𝕊_j^+ denote the set of all symbol vectors s ∈ Q^(N×1) that are mapped such that the jth bit is 1, and let 𝕊_j^− denote the set of all symbol vectors s ∈ Q^(N×1) that are mapped such that the jth bit is 0. Then, the LLR for the jth bit is computed as:
L(x_j | y) = min_{s ∈ 𝕊_j^−} {∥y − Rs∥} − min_{s ∈ 𝕊_j^+} {∥y − Rs∥},   j = 1, 2, . . . , N log₂ M.
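A hedged sketch of this min-min LLR rule evaluated over the candidate list ℒ is given below. The helper `bits_of` (mapping a symbol vector to its bit labels) is an assumed function, and bit hypotheses not represented in the list are given an infinite distance here for simplicity; a practical receiver would clip such LLRs:

```python
import numpy as np

def max_log_llrs(y, R, candidates, bits_of, n_bits):
    """Per-bit LLRs from the ordered list of candidate symbol vectors."""
    dists = [np.linalg.norm(y - R @ np.asarray(s)) for s in candidates]
    labels = [bits_of(s) for s in candidates]       # each: sequence of 0/1, length n_bits
    llrs = np.zeros(n_bits)
    for j in range(n_bits):
        d0 = min((d for d, b in zip(dists, labels) if b[j] == 0), default=np.inf)
        d1 = min((d for d, b in zip(dists, labels) if b[j] == 1), default=np.inf)
        llrs[j] = d0 - d1                           # min over S_j^-  minus  min over S_j^+
    return llrs
```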
It should be noted that the disclosed list-search operation may be performed in each layer in the proposed algorithm. The list-search operation is facilitated by use of a pre-trained Machine Learning (ML) block 402. Further, for N layers, the algorithm begins at layer N and sequentially progresses to layers N−1, N−2, . . . 1. In one embodiment, the layers l, l−1, and l−2 may be considered, where l is a general layer index and may take any value from N to 3. It should be noted that the root layer may be referred to as layer N and the remaining layers may be referred to as layers l, where l ∈ {N−1, N−2, . . . , 1}, hereinafter.
At first, at layer l, Kl surviving candidates, at 404, may progress to the subsequent layer, i.e., layer l−1. Further, the algorithm may select Kl surviving candidates in each layer by calculating fewer than KlM distances, due to the presence of the pre-trained ML block 402. In one embodiment, the Kl surviving candidates may refer to the partial symbol vectors. In addition to the Kl surviving candidates in each layer l, the pre-trained ML block 402 may provide a smallest possible window of constellation points at each layer l−1.
The pre-trained ML block 402 may determine a smallest window at each layer of the proposed algorithm. It should be noted that the use of the pre-trained ML block 402 may provide a window of constellation points. Further, the size of the window is much smaller than M. In one embodiment, the pre-trained ML block 402 outputs the smallest possible window of points from an M-QAM constellation that contains the closest points to a candidate point corresponding to the signal y. In one embodiment, the pre-trained ML block 402 may output Kl−1,1 points, at layer l−1, to compute, at 406-1, along with an ordered 1st survivor from the previous layer l. Further, the pre-trained ML block 402 may output Kl−1,2 points, at layer l−1, to compute, at 406-2, along with an ordered 2nd survivor from the previous layer l. It should be noted that the process may continue till the pre-trained ML block 402 outputs Kl−1,Kl points, at layer l−1, to compute, at 406-Kl, along with an ordered Klth survivor from the previous layer l.
In one embodiment, a candidate point may be a complex point, comprising a real and an imaginary part. Further, the pre-trained ML block 402 may be trained offline to obtain the smallest window as a function of K and y. For a particular candidate point, the K closest points from an M-QAM constellation Q may be identified. In accordance with the candidate point, the closest constellation points may be determined from the M-QAM constellation Q. In one embodiment, the closest constellation points may be represented by a partial symbol vector of the closest constellation point s_CCP. In one embodiment, the pre-trained ML block 402 may be a classifier. It should be noted that, for a particular layer, the pre-trained ML block 402 may determine the smallest possible window using the parameters related to the received signal y and parameters related to the partial symbol vectors associated with the signal y. In one embodiment, for layer N, the pre-trained ML block 402 may determine the smallest possible window using the parameters related to the Nth element of the signal y and parameters related to the partial symbol vector s_N.
In one embodiment, the pre-trained ML block 402 may determine the smallest possible window based on the n input parameters. Further, the n input parameters may be functions of the signal y and functions of the partial symbol vector of the closest constellation point to y from Q, denoted by s_CCP. In one embodiment, at a layer l, for a candidate point y_l with closest constellation point s_CCP(y_l), K = 16, and 256-QAM defined as
QAM256 = {a + jb | a, b ∈ {−15, −13, −11, . . . , 11, 13, 15}},
the n input parameters may be at least among a group of parameters consisting of, but not limited to, real and imaginary functions of the signal y and s_CCP, such as |ℜ(y_l)|, |ℑ(y_l)|, |ℜ(s_CCP(y_l))|, |ℑ(s_CCP(y_l))|, sign(|ℜ(y_l)| − |ℜ(s_CCP(y_l))|), sign(|ℑ(y_l)| − |ℑ(s_CCP(y_l))|), sign(|ℜ(y_l)| − 15), sign(|ℜ(y_l)| − 13), sign(|ℑ(y_l)| − 15), sign(|ℑ(y_l)| − 13), sign(ℜ(y_l − s_CCP(y_l))), and sign(ℑ(y_l − s_CCP(y_l))).
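For illustration only, such a feature vector could be assembled as follows, assuming 256-QAM and an assumed helper `nearest_qam_point` that returns s_CCP(y_l); the function name and ordering of the features are not taken from the disclosure:

```python
import numpy as np

def classifier_features(y_l, nearest_qam_point):
    """Build the 12 example input features for one candidate point y_l (256-QAM)."""
    s_ccp = nearest_qam_point(y_l)                  # closest 256-QAM point to y_l
    re_y, im_y = abs(y_l.real), abs(y_l.imag)
    re_s, im_s = abs(s_ccp.real), abs(s_ccp.imag)
    return np.array([
        re_y, im_y, re_s, im_s,
        np.sign(re_y - re_s), np.sign(im_y - im_s),
        np.sign(re_y - 15), np.sign(re_y - 13),
        np.sign(im_y - 15), np.sign(im_y - 13),
        np.sign((y_l - s_ccp).real), np.sign((y_l - s_ccp).imag),
    ])
```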
It should be noted that the usage of pre-trained ML block 402 eliminates the need to calculate the distance to all the M constellation points. Such a use of the pre-trained ML block 402 facilitates the reduction in computation complexities.
Furthermore, the calculated distances, fewer than KlM in number, may be sorted and chosen, at 408, to obtain an ordered list corresponding to the layer l−1. The sorted Kl−1 surviving candidates may then progress to the subsequent layer l−2. Further, the process may be repeated till the surviving candidates reach the final layer. In one embodiment, the layer l−2 may be referred to as the final layer or the last layer. Finally, the K surviving candidates of the final layer may provide the required ordered list ℒ.
It should be noted that in
In one embodiment, the output of the pre-trained ML block 402 may be referred to as a tuple. Further, the tuple may be represented by (Ly, Ry, Ty, By). Further, the tuple may represent the smallest possible window 602; for example, Ry may represent the number of points in the smallest possible window 602 to the right of s_CCP (on the X-axis). For the example in
In one embodiment, the smallest possible window may be represented as the set
𝒲 = {ℜ(s_CCP) + 2m + j(ℑ(s_CCP) + 2n)},
where m ∈ {−Ly, −Ly+1, . . . , 0, 1, . . . , Ry} and n ∈ {−By, −By+1, . . . , 0, 1, . . . , Ty}. It should be noted that the real and imaginary parts of each point in 𝒲 may be subject to a maximum value of √M − 1 and a minimum value of −√M + 1.
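A minimal sketch of expanding a predicted tuple (Ly, Ry, Ty, By) into the window 𝒲, with the real and imaginary parts clipped to the constellation range ±(√M − 1), is shown below; the function name and the explicit clipping strategy are assumptions for illustration:

```python
import numpy as np

def window_points(s_ccp, Ly, Ry, Ty, By, M):
    """Enumerate the window W of QAM points around s_CCP described by the tuple."""
    lim = np.sqrt(M) - 1
    points = []
    for m in range(-Ly, Ry + 1):            # horizontal offsets (left/right)
        for n in range(-By, Ty + 1):        # vertical offsets (bottom/top)
            re = np.clip(s_ccp.real + 2 * m, -lim, lim)
            im = np.clip(s_ccp.imag + 2 * n, -lim, lim)
            points.append(complex(re, im))
    return points
```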
At first, at the root layer, i.e., layer N, the input parameters associated with the signal y may be obtained, at step 702. In one embodiment, the input parameters may comprise at least the Nth element of the signal y and the constellation size M, which may be obtained along with the required number of surviving candidates KN among the M constellation points. It should be noted that the Nth element of the signal y corresponds to the signal y at the root layer, i.e., layer N. Further, a closest constellation point s_N^CCP to the signal y′, derived from the Nth element of the signal y based at least on the input parameters, may be determined at step 704. After determining the partial symbol vector s_N^CCP, the pre-trained ML block 402 may determine a first smallest possible window of a plurality of constellation points K′N for a signal y′ that contains the KN surviving candidates, using a trained ML model, at step 706. The receiver is configured to train the ML model at least on a training data set with input features which are functions of a one-dimensional complex-valued signal y, the constellation size M, and the number of surviving candidates KN. As discussed above, the input parameters for the pre-trained ML block 402 may be real and imaginary functions of the signal y′ and of s_CCP. At the root layer, the PED may be represented by
δ(s_N) = ∥y_N − r_{N,N} s_N∥ = ∥r_{N,N}∥ ∥ý_N − s_N∥,   where ý_N ≜ y_N / r_{N,N}.
In one embodiment, the first smallest possible window may be determined by a design engineer. Based on the determined first smallest possible window, partial Euclidean distances (PEDs) between a candidate point of the signal y′ and each of the plurality of constellation points K′N in the first smallest possible window may be determined, at step 708. The PED between the candidate point of the signal y and each of the plurality of constellation points in the first smallest possible window may be represented as
δ_min ≜ ∥r_{N,N}∥ ∥ý_N − s_N^CCP∥ = ∥r_{N,N}∥ √( (ℜ(ý_N − s_N^CCP))² + (ℑ(ý_N − s_N^CCP))² ).
In one embodiment, the first smallest possible window may comprise the plurality of constellation points K′N for the signal y′ derived from the Nth element of the signal y. Further, the plurality of constellation points K′N in the first smallest possible window may be sorted and chosen based on the determined PEDs, at step 710. Based on the sorted plurality of constellation points, an ordered list ℒ_N may be computed and stored, at step 712. Further, ℒ_N may be represented by
ℒ_N ≜ {s_N^(i) ∈ Q, i = 1, 2, . . . , K_N | δ(s_N^(1)) ≤ . . . ≤ δ(s_N^(K_N))}.
In one embodiment, the determined partial Euclidean distances associated with the plurality of constellation points K′N in the ordered list ℒ_N may be stored in a list 𝒟_N. In one embodiment, 𝒟_N may store the squares of the PEDs of all the elements of the ordered list ℒ_N. Further, 𝒟_N may be represented by
𝒟_N ≜ {δ²(s_N^(i)), ∀ s_N^(i) ∈ ℒ_N}.
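The root-layer processing of steps 702 to 712 may be sketched as follows. This is an illustrative outline only; `predict_window` and `nearest_qam_point` are assumed helpers standing in for the pre-trained ML block and the closest-point search, and `window_points` is reused from the earlier sketch:

```python
import numpy as np

def root_layer_list(y, R, K_N, M, predict_window, nearest_qam_point):
    """Build the ordered list L_N and the squared-PED list D_N for layer N."""
    N = len(y) - 1                                     # 0-based index of the root layer
    y_prime = y[N] / R[N, N]                           # y'_N = y_N / r_{N,N}
    s_ccp = nearest_qam_point(y_prime)
    Ly, Ry, Ty, By = predict_window(y_prime, s_ccp)    # classifier output tuple
    window = window_points(s_ccp, Ly, Ry, Ty, By, M)   # see the earlier sketch
    peds = [abs(R[N, N]) * abs(y_prime - s) for s in window]
    order = np.argsort(peds)[:K_N]
    L_N = [np.array([window[i]]) for i in order]       # length-1 partial vectors, ascending PED
    D_N = [peds[i] ** 2 for i in order]                # squared PEDs stored alongside
    return L_N, D_N
```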
At first, at the remaining layers, i.e., layers l, the input parameters associated with the signal y may be obtained. In one embodiment, the input parameters comprise at least the lth element of the signal y and the constellation size M, which may be obtained along with the required number of surviving candidates Kl among the M constellation points, at step 802. Further, the ordered list ℒ_{l+1} and the list 𝒟_{l+1}, comprising the surviving candidates of the previous layer and their squared PEDs, may be obtained, at step 804. It should be noted that the ordered list ℒ_{l+1} and the list 𝒟_{l+1} are associated with the previous layer, i.e., the root layer in the current scenario. Further, a closest constellation point for each of the K_{l+1} surviving candidates in the ordered list ℒ_{l+1} may be determined, at step 806. In one embodiment, the K_{l+1} surviving candidates in the ordered list ℒ_{l+1} may be sent to the pre-trained ML block 402. Further, for each of the closest constellation points, a partial symbol vector may be computed, at step 808. In one embodiment, the partial symbol vector at layer l may be represented by s_{l:N}^(i), given by
s_{l:N}^(i) ≜ [s_l^CCP(s_{l+1:N}^(i)), (s_{l+1:N}^(i))^T]^T ∈ Q^((N−l+1)×1),   i = 1, 2, . . . , K_{l+1}.
Further, the squares of the PEDs, δ²(s_{l:N}^(i)) for i = 1, 2, . . . , K_{l+1}, may be computed as
δ²(s_{l:N}^(i)) = δ²(s_{l+1:N}^(i)) + ∥r_{l,l}∥² ∥ý_l(s_{l+1:N}^(i)) − s_l^CCP(s_{l+1:N}^(i))∥².
For each of the closest constellation points, partial Euclidean distances (PEDs) between a candidate point of the lth element of the signal y and each determined closest constellation point may be computed based on the partial symbol vectors, at step 810. Further, the ordered list ℒ_{l+1} may be sorted based on the determined PEDs, at step 812. The K_{l+1} sorted PEDs may be indexed by i_1, i_2, . . . , i_{K_{l+1}}.
In one embodiment, a window of size K_{l,j} << M may be computed using the pre-trained ML block 402, with ý_l(s_{l+1:N}^(i_j)) as its input, for j = 1, 2, . . . , K_{l+1}. For each constellation point s_k in this window, the squared PED
δ²([s_k, (s_{l+1:N}^(i_j))^T]^T)
may be computed.
In one embodiment, the computed plurality of constellation points in the smallest possible window may be sorted in ascending order. Based on the sorted plurality of constellation points, a window with K_{l,i} points may be determined for each element in the sorted list ℒ_{l+1}, using the output of the pre-trained ML block 402, at step 814. Further, the distances between the K_{l,i} points for each element with index i in the sorted list and the candidate point of the lth element of the signal y for the current layer may be computed, at step 816. Further, the plurality of closest constellation points K_{l,i} in the smallest possible window may be sorted and chosen based on the determined PEDs, at step 818. Based on the sorted plurality of constellation points, an ordered list ℒ_l may be computed and stored, at step 820. In one embodiment, ℒ_l may be represented as
ℒ_l ≜ {s_{l:N}^(i) ∈ Q^((N−l+1)×1), i = 1, 2, . . . , K_l | δ(s_{l:N}^(1)) ≤ . . . ≤ δ(s_{l:N}^(K_l))}.
It should be noted that a list 𝒟_l may be created to store metrics associated with the ordered list ℒ_l. In one embodiment, 𝒟_l may store the squares of the PEDs of all the elements of the ordered list ℒ_l and may be represented as
𝒟_l ≜ {δ²(s_{l:N}^(i)), ∀ s_{l:N}^(i) ∈ ℒ_l}.
Further, the above algorithm may be applied to all the remaining layers, leading to the final ordered list represented by:
ℒ = {s_i ∈ Q^(N×1), i = 1, 2, . . . , K | d(s_1) ≤ d(s_2) ≤ . . . ≤ d(s_K)}.
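For one non-root layer l, the above steps may be sketched as follows; this is again an illustrative outline only, reusing the assumed helpers `predict_window`, `nearest_qam_point`, and `window_points` from the earlier sketches:

```python
import numpy as np

def next_layer_list(y, R, l, K_l, M, survivors, ped_sq,
                    predict_window, nearest_qam_point):
    """Extend survivors s_{l+1:N} (with squared PEDs `ped_sq`) down to layer l."""
    idx = l - 1                                            # 0-based row of layer l
    candidates, cand_ped_sq = [], []
    # process the survivors in ascending order of their squared PEDs
    for s_prev, d_prev in sorted(zip(survivors, ped_sq), key=lambda t: t[1]):
        s_prev = np.asarray(s_prev)
        # interference-cancelled, scaled observation y'_l for this survivor
        y_tilde = (y[idx] - R[idx, idx + 1:] @ s_prev) / R[idx, idx]
        s_ccp = nearest_qam_point(y_tilde)
        Ly, Ry, Ty, By = predict_window(y_tilde, s_ccp)
        for s_k in window_points(s_ccp, Ly, Ry, Ty, By, M):
            # PED recursion: add |r_{l,l}|^2 |y'_l - s_k|^2 to the parent PED
            candidates.append(np.concatenate(([s_k], s_prev)))
            cand_ped_sq.append(d_prev + abs(R[idx, idx]) ** 2 * abs(y_tilde - s_k) ** 2)
    keep = np.argsort(cand_ped_sq)[:K_l]
    return [candidates[i] for i in keep], [cand_ped_sq[i] for i in keep]
```

Calling this routine for l = N−1, N−2, . . . , 1, starting from the root-layer sketch, would yield the final list of K candidate vectors and their metrics.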
In one embodiment, we consider sklearn's decision tree classifier with criterion ‘gini’, class_weight=‘balanced’, and the remaining parameters at their default values. It should be noted that the results for the classifier are tabulated in Table 1. Further, to calculate the window size of each predicted class, the predicted class may be mapped back to the tuple (Ly, Ry, Ty, By), and the window size may be calculated as (Ly+Ry+1)(Ty+By+1). Further, a misclassification occurs when the predicted (Ly, Ry, Ty, By) does not match the optimal (Ly, Ry, Ty, By), which represents the smallest window containing the 16 closest points to y_l.
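A hedged sketch of such an offline training step with scikit-learn is shown below; the placeholder random training data stand in for the feature vectors and optimal-window class labels generated offline, which are not shown here:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Placeholder training data: in practice each row of X holds the input features
# described earlier (functions of y and s_CCP) and y_cls holds the class index
# encoding the optimal (Ly, Ry, Ty, By) tuple found offline.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y_cls = rng.integers(0, 16, size=1000)

clf = DecisionTreeClassifier(criterion="gini", class_weight="balanced")
clf.fit(X, y_cls)
print(clf.predict(X[:1]))   # predicted class, to be mapped back to (Ly, Ry, Ty, By)
```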
In one embodiment, we consider MIMO communication in a 5G cellular network where a Base Station (BS) and the UEs are equipped with multiple antennas. Further, the communication takes place over the uplink (UL) channel, where the BS receives the data transmitted by the UEs. Further, the scenario considers a massive MIMO system with transmitter and receiver beamforming. Table 2 lists the simulation parameters used for the evaluation:
As shown in the graph 900B, a comparison between a Minimum Mean Square Error-Interference Rejection Combining (MMSE-IRC) receiver and the proposed algorithm is provided for Modulation and Coding Scheme (MCS)=28. A line (shown by 906) represents the MMSE-IRC receiver and a line (shown by 908) represents the proposed algorithm for MCS=28.
As shown in the graphs 900C and 900D, a comparison between the MMSE-IRC receiver and the proposed algorithm is provided for UE 1 and UE 2, respectively, with Modulation and Coding Scheme (MCS)=16. A line (shown by 910) represents the MMSE-IRC receiver and a line (shown by 912) represents the proposed algorithm for UE 1 with MCS=16. Further, a line (shown by 914) represents the MMSE-IRC receiver and a line (shown by 916) represents the proposed algorithm for UE 2 with MCS=16.
It will be apparent to one skilled in the art that the above-mentioned joint decoding of UEs scheduled in a MU-MIMO within a single cell has been provided only for illustration purposes. In one embodiment, additional impairments due to hardware may be added on top of this scenario, without departing from the scope of the disclosure. The above-mentioned joint decoding of UEs scheduled in a MU-MIMO may also be performed within multiple cells, either in the same cell site (intra-site) or across different cell sites (inter-site). In another embodiment, the joint decoding of UEs scheduled in a MU-MIMO may be performed using the antennas of one or more cells, as in Coordinated Multipoint (CoMP) operation.
The processor 1002 includes suitable logic, circuitry, and/or interfaces that are operable to execute instructions stored in the memory to perform various functions. The processor 1002 may execute an algorithm stored in the memory for a receiver of a multiple input multiple output, MIMO, communication system 100. The processor 1002 may also be configured to decode and execute any instructions received from one or more other electronic devices or server(s). The processor 1002 may include one or more general-purpose processors (e.g., INTEL® or Advanced Micro Devices® (AMD) microprocessors) and/or one or more special-purpose processors (e.g., digital signal processors or Xilinx® System On Chip (SOC) Field Programmable Gate Array (FPGA) processor). The processor 1002 may be further configured to execute one or more computer-readable program instructions, such as program instructions to carry out any of the functions described in the description.
Further, the processor 1002 may make decisions or determinations, generate frames, packets or messages for transmission, decode received frames or messages for further processing, and perform other tasks or functions described herein. The processor 1002, which may be a baseband processor, for example, may generate messages, packets, frames or other signals for transmission via wireless transceivers. It should be noted that the processor 1002 may control transmission of signals or messages over a wireless network, and may control the reception of signals or messages via a wireless network (e.g., after they are down-converted by a wireless transceiver). The processor 1002 may be (or may include), for example, hardware, programmable logic, a programmable processor that executes software or firmware, and/or any combination of these. Further, using other terminology, the processor 1002 along with the transceiver may be considered a wireless transmitter/receiver system, for example.
The memory 1004 stores a set of instructions and data. Further, the memory 1004 includes one or more instructions that are executable by the processor to perform specific operations. Some of the commonly known memory implementations include, but are not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), and magneto-optical disks, semiconductor memories, such as ROMs, Random Access Memories (RAMs), Programmable Read-Only Memories (PROMs), Erasable PROMs (EPROMs), Electrically Erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, cloud computing platforms (e.g. Microsoft Azure and Amazon Web Services, AWS), or other type of media/machine-readable medium suitable for storing electronic instructions.
It will be apparent to one skilled in the art that the above-mentioned components of the apparatus 1000 have been provided only for illustration purposes. In one embodiment, the apparatus 1000 may include an input device, an output device, and the like as well, without departing from the scope of the disclosure.
Embodiments of the present disclosure may be provided as a computer program product, which may include a computer-readable medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The computer-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), and magneto-optical disks, semiconductor memories, such as ROMs, Random Access Memories (RAMs), Programmable Read-Only Memories (PROMs), Erasable PROMs (EPROMs), Electrically Erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other type of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware). Moreover, embodiments of the present disclosure may also be downloaded as one or more computer program products, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
It should be noted that the order of the method steps described herein is not critical or fixed; the steps may be performed in a different order without departing from the scope of the disclosure. Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
While the above embodiments have been illustrated and described, many changes can be made without departing from the scope of the embodiments. For example, aspects of the subject matter disclosed herein may be adopted on alternative operating systems. Accordingly, the scope of the embodiments is not limited by the disclosure of any particular embodiment. Instead, the scope of the embodiments should be determined entirely by reference to the claims that follow.
Number | Date | Country | Kind
---|---|---|---
20215383 | Mar 2021 | FI | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2022/056816 | 3/16/2022 | WO |