Embodiments of the present invention relate generally to the field of radio frequency (RF) multiple-input-multiple-output (MIMO) systems and more particularly to implementing soft output decoding in such systems.
An N×M multiple-input-multiple-output (MIMO) system 100 is shown in FIG. 1.
The channel matrix H includes entries hij that represent the relationship between the signal transmitted from the jth transmit antenna 102 and the signal received by the ith receive antenna 104. The dimension of the transmit vector s is M×1 and the dimension of the received vector is N×1.
A MIMO decoder 106, e.g., a maximum likelihood (ML) decoder, may decode a received signal vector y to recover the transmitted signals.
A demultiplexer 108 may modulate transmitted signals onto the transmit antennas 102.
Due to the enormous rate at which information may need to be decoded, there is a great need in the art for a system and method that manages decoding of signals in MIMO systems at an acceptable complexity while maintaining performance.
Embodiments of the present invention provide performance enhancement for soft output MIMO decoding, without increasing the complexity of the search, at the cost of minor additions of memory and calculations. Embodiments of the present invention may include generating a tree-graph of bits based on: the MIMO rank of the receiver, the number of bits per layer, and the type of modulation, wherein the tree-graph comprises a root node, leaf nodes, nodes, and links or branches connecting the nodes; performing sphere decoding by determining a radius covering a subset of nodes within said tree-graph; managing, based on the sphere decoding, tables comprising metrics and counter metrics usable for log likelihood ratio (LLR) generation; predicting, based on a specified prediction scheme, counter metrics for paths in the tree-graph that comprise nodes and branches out of the determined radius; and updating the tables comprising the counter metrics with the predicted counter metrics, in a case that the predicted counter metrics are better in maximum likelihood terms than the determined counter metrics.
The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
In the following description, various aspects of the present invention will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will also be apparent to one skilled in the art that the present invention may be practiced without the specific details presented herein. Furthermore, well known features may be omitted or simplified in order not to obscure the present invention.
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
A maximum likelihood decoder (e.g. decoder 106 of FIG. 1) may be used to find the most likely transmitted vector.
System 200A may further include a computer processor 220 configured to: perform sphere decoding by determining a radius covering a subset of nodes within tree-graph 214; manage, based on the sphere decoding, tables 230 that include metrics and counter metrics usable for log likelihood ratio (LLR) generation; predict, based on a specified prediction scheme, counter metrics for paths in tree-graph 214 that may include nodes and branches out of the determined radius; and update tables 230, which may include the counter metrics, with the predicted counter metrics in a case that the predicted counter metrics are better in maximum likelihood terms than the determined counter metrics. Processor 220 may be or perform the operations of modules such as graph generator 210A and other modules. Processor 220 may be configured to perform methods as discussed herein by, for example, executing software or code stored in memory.
In some embodiments graph generator 210A may be implemented by computer processor 220. Tables 230 may be data structures that are updated (“managed”) in real-time by computer processor 220 based on incoming outputs of the sphere decoder.
For example, a tree graph for a 4×4 MIMO system may include 4 levels (0-3), where each level has 64 nodes representing the 64 possible values of each of the 4 elements of the transmit vector s (e.g. with 64-QAM modulation).
The maximum likelihood decoder may search tree graph 200 to determine the most likely solution, e.g., a node 204 representing one element in a transmit vector s.
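For instance, an exhaustive maximum likelihood search over such a tree would have to evaluate 64^4=16,777,216 candidate transmit vectors for every received vector, which motivates the pruned (sphere) search described below.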
QR decomposition may simplify the search distance computation by enabling incremental searches on the tree, as opposed to evaluating each candidate transmit vector in its entirety.
Using QR decomposition, the channel matrix H is decomposed into matrices Q and R, such that: HN×M=QN×M·RM×M. Matrix Q is unitary such that: Q−1=QH (QHQ=I) and matrix R is an M×M upper triangular matrix (e.g. having real entries along its main diagonal).
From equation (1) (the received signal y=H·s+n, where n is a noise vector), multiplying by QH gives:
ỹ = QH·y = R·s+ñ
where ñ=QH·n has the same statistics as n (e.g. the same covariance matrix E[ñ·ñH]=E[n·nH]), since Q is unitary.
Since matrix R is an upper triangular matrix, computing the squared distance (∥ỹ−R·s∥2) may be performed incrementally, one row of R (one level of the tree) at a time, starting from the last row.
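For illustration only, this pre-processing step may be sketched in Python using numpy (the function name qr_preprocess and the variable names are illustrative assumptions, not part of this description):

import numpy as np

def qr_preprocess(H, y):
    # H: (N, M) complex channel matrix; y: (N,) received vector.
    # Returns the upper-triangular R (M, M) and the rotated vector y_tilde = Q^H y,
    # so that y_tilde = R*s + n_tilde for the transmitted vector s.
    Q, R = np.linalg.qr(H, mode="reduced")   # H = Q R, with Q: (N, M), R: (M, M)
    y_tilde = Q.conj().T @ y                 # unitary rotation; noise statistics unchanged
    return R, y_tilde

Because Q has orthonormal columns, ∥ỹ−R·s∥2 equals ∥y−H·s∥2 up to a term that does not depend on s, so the search may operate on R and ỹ alone.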
To limit the number of candidate nodes and their distances to compute, a “sphere” decoder may be used, which searches tree graph 200 for a subset of nodes 204 that have an accumulated distance within a sphere of radius r (e.g. 205) centered at the received vector ỹ.
The accumulated distance over the full tree search path may be, for example:
d2(s) = ∥ỹ−R·s∥2 = Σi=0…M−1|ỹi − Σj=i…M−1 Ri,j·sj|2
The accumulated distance may be calculated in a recursive manner for a j-th level path (from a root node to a j-th level node) based on sequential PEDi's for i=j+1, …, M−1, for example, by solving each row of the matrix R, from the last row of R upwards to the first row of R, as follows:
An ith-level node, on a path from the root node 201 toward a leaf node 203, may have an accumulated distance from the root node equal to a partial Euclidean distance (PEDi). The distance increment from that ith-level node to the next (i−1)-level node is DIi−1. The partial Euclidean distance, PEDi, and the distance increment, DIi, may be computed, for example, as:
PEDi = PEDi+1 + DIi, DIi = |pyi − Ri,i·si|2
where pyi (e.g. pyi = ỹi − Σj=i+1…M−1 Ri,j·ŝj) is a function of the ith-level measurement of the current level and of the higher levels j=i+1, …, (M−1). This internal state variable, pyi, may be generated in a recursive manner, for example, as described in reference to PEDi. The accumulated distance for each sequential branch decision PEDi−1 may be the sum of the previous level i accumulated distance PEDi and the incremental distance (or branch length) from the selected node at level i, ŝi, to a selected node of the next level i−1, ŝi−1, defined as:
PEDi−1(ŝi−1, ŝi, …, ŝM−1) = PEDi(ŝi, ŝi+1, …, ŝM−1) + DIi−1
It should be noted that a candidate symbol metric, or Euclidean distance, is a non-negative quantity, as is the distance increment DI; therefore, PEDi forms a monotonic non-decreasing sequence along a path.
The full path metric is defined as:
metric = d2(s) = ∥ỹ−R·s∥2
The partial metric is defined in a similar way using:
partialMetric(i) = PEDi(ŝi, ŝi+1, …, ŝM−1).
Sphere decoder: based on the pre-processing step described above (or a similar step), MIMO detection is transformed into a tree-search problem. In order to control the search complexity, a sphere radius is introduced. Introducing the radius imposes a test of PEDi against the defined radius, so that a full or partial path with a PED higher than the radius is “pruned”: the portion of the tree below that branch is not explored, since the metrics of all the paths through it are higher than the sphere radius value.
The tree scanning strategy (possible solution) applied is depth-first tree traversal, where at each level si may be enumerated according to DIi. Each selection is verified against the sphere radius. In case of a radius check violation, all the paths beneath the tested branch are “pruned”, since their accumulated distance will be greater than the specified sphere radius.
Radius management (possible solution): the sphere radius is decreased each time a tree leaf is reached (a full path from the root to the lowest level of the tree) having the smallest metric (∥y−H·s∥2) seen so far in the search.
metricML = ∥y−H·sML∥2
sML = [s0, s1, …, sM−1]
bML = [b0, b1, …, bbps·M−1]
The ML path can also be characterized by the sequence pyML = [py0, py1, …, pyM−1].
The search radius is calculated: R=metricML+LLRmax
This way, all the paths with accumulated metric PEDi>R will be pruned.
It is noted, for simplicity, that bps (bits per symbol) is assumed to be the same for all symbols of the M transmitted spatial streams si.
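As an illustration of the hard-output search described above, the following Python sketch performs a depth-first traversal using the PED recursion and the radius management described above (the function and variable names, and the closest-candidate-first enumeration, are assumptions of this sketch rather than a required implementation):

import numpy as np

def sphere_search_hard(R, y_tilde, constellation, llr_max=0.0):
    # R: (M, M) upper-triangular matrix from the QR pre-processing.
    # y_tilde: (M,) rotated received vector (Q^H y).
    # constellation: 1-D array of candidate symbol values used at every layer.
    # Returns the ML symbol vector and its metric.
    M = R.shape[0]
    best_metric = np.inf
    best_s = None
    s = np.zeros(M, dtype=complex)

    def descend(i, ped, radius):
        nonlocal best_metric, best_s
        # Interference from the already-decided higher levels j = i+1 .. M-1.
        py_i = y_tilde[i] - R[i, i + 1:] @ s[i + 1:]
        incs = np.abs(py_i - R[i, i] * constellation) ** 2   # distance increments DI_i
        for k in np.argsort(incs):                           # enumerate closest-first
            ped_next = ped + incs[k]
            if ped_next > radius:
                break              # radius test failed; remaining candidates are farther
            s[i] = constellation[k]
            if i == 0:             # leaf reached: full path metric is available
                if ped_next < best_metric:
                    best_metric = ped_next
                    best_s = s.copy()
                    radius = best_metric + llr_max   # shrink the search radius
            else:
                radius = descend(i - 1, ped_next, radius)
        return radius

    descend(M - 1, 0.0, np.inf)
    return best_s, best_metric

For example, for QPSK layers the constellation argument could be np.array([1+1j, 1-1j, -1+1j, -1-1j])/np.sqrt(2). The soft-output decoder described next maintains per-bit metric tables during the same kind of traversal.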
Soft-Output MIMO Decoding
The ML MIMO decoder defined above scans candidate vectors to find:
sML = argmins ∥y−H·s∥2
This type of decoder is referred to as a hard-output decoder, since it produces definite values for all decoded QAM symbols, or equivalently for all decoded bits. In modern communication systems, forward error correction (FEC) techniques are used, so the required outputs of the MIMO decoder are LLRs (log likelihood ratios) per bit and not the bit values (“0” or “1”).
The LLR is defined, for example, as the log of the ratio of the bit's a posteriori probabilities: LLR(bi) = ln(P(bi=1|y)/P(bi=0|y)).
Using the MAX-LOG LLR approximation, the LLR of each detected bit may be approximated, for example, as:
LLR(bi) ≈ (1/σ2)·(min{s: bi(s)=0}∥y−H·s∥2 − min{s: bi(s)=1}∥y−H·s∥2)
where σ2 is the noise variance.
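As a simple numeric illustration (with arbitrarily chosen values): if the smallest metric among the symbols with bi=0 is 2.5, the smallest metric among the symbols with bi=1 is 1.0, and σ2=1, then LLR(bi)≈2.5−1.0=1.5, indicating that bit bi is more likely to be “1”.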
Soft output (LLR) information requires two path (symbol) metrics to be found for each bit: the metric of the closest symbol where the bit equals “1” and the metric of the closest symbol where the bit equals “0”. The sphere decoder traverses a symbol tree in search of the required path metrics.
The path metrics used to produce the LLR for each decoded bit are stored and managed by means of two metric tables. The number of entries in each table equals the total number of bits to be decoded. The first table stores the smallest metric found for a specific bit where the bit equals “1”, while the second table stores the complementary information: the smallest symbol metric where the bit equals “0”. A table update is performed each time the search reaches the level of the tree “leaves”, i.e., when the metric of a full tree path from the root of the tree to the leaf level becomes available.
The search ends when all the links or branches were either explored or pruned. The metric tables' information available upon search termination is used to produce the LLR values for each decoded bit. All the tables' entries are initialized with a search initialization value (infinite metric) at the beginning of the search. Even in a case where a final metric differs from the initialization value, it still might be sub-optimal, due to pruning that prevented reaching the optimal path.
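To illustrate how these tables may feed the LLR computation, the following Python sketch applies the max-log expression above; the table names metric_bit1/metric_bit0 and the helper name llrs_from_tables are illustrative assumptions:

import numpy as np

num_bits = 8                              # e.g. 4 layers with 2 bits per layer
metric_bit1 = np.full(num_bits, np.inf)   # smallest metric found with bit j equal to "1"
metric_bit0 = np.full(num_bits, np.inf)   # smallest metric found with bit j equal to "0"

def llrs_from_tables(metric_bit1, metric_bit0, noise_var, default_llr=0.0):
    # Max-log LLR per bit: (best metric with bit=0 - best metric with bit=1) / noise variance.
    # Entries still holding the initialization value (infinite metric) fall back to the default LLR.
    llrs = np.full(len(metric_bit1), default_llr, dtype=float)
    for j in range(len(metric_bit1)):
        if np.isfinite(metric_bit1[j]) and np.isfinite(metric_bit0[j]):
            llrs[j] = (metric_bit0[j] - metric_bit1[j]) / noise_var
    return llrs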
The main drawback of this scheme for soft decoding is the potential missing of counter-metrics. In addition to searching for sML, for each detected bit bi (i=0, …, bps·M−1) the best symbol with the value opposite to bML[i] is searched for. The metric of this symbol is called the counter-metric for bit i.
The ML metric and the ML counter-metric for bit i (with the addition of the noise variance) may be the necessary inputs to the LLR calculations.
In a case where one of the table entries still contains the search initialization value (infinite metric), a default LLR value is assigned to that bit, which is most probably not accurate enough. The only available information in this case is the ML metric and the bit value taken from the ML path.
The trade-off is between reaching the “best leaves”, which lead to optimal metrics in the LLR tables, and, on the other hand, achieving a minimal number of branch metric evaluations (tree traversal cycle count).
Embodiments of the present invention involve using the partial metric information of a pruned path and a prediction of the remaining path metric to update the LLR tables. The partial (pruned) path is “virtually augmented” to a complete symbol by adding an expected metric addition based on the best known ML metric. The estimated metric information is then used to update the metric tables' entries of the relevant bits.
Following is a proposed sequence of steps according to one embodiment:
Step 1: Upon reaching a node at level i where PEDi≧R, the path is pruned and the following partial update is performed.
Step 2: Part of the bits are determined (according to the partial path): all the bits for levels M−1 down to level i. Denote by bcurr[j] the bits of the current path; only the values at indexes (i·bps)≦j≦(M·bps−1) are valid. Counter-metrics of the following bits are candidates for update based on the prediction and the available partial metric information:
Idxdiscovered_cand = {j | (i·bps) ≦ j ≦ (M·bps−1) AND bcurr[j] ≠ bML[j]}
Counter-metrics of all the “un-discovered path” bits are also candidates for update based on prediction and available partial metric information:
Idxundiscovered_cand = {j | 0 ≦ j ≦ (i·bps−1)}
Complete set of candidates:
Idxcand = Idxdiscovered_cand ∪ Idxundiscovered_cand
Step 3: The estimated metric used for the candidate update, for all the relevant candidates in Idxcand, is:
PEDi+deltaD
and a non-limiting example for the predicted metric augmentation is:
deltaD = metricML − PEDiML, where PEDiML is the accumulated metric (PED) of the ML path at the same level i.
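A minimal Python sketch of Steps 1-3, using the same bit-index convention as above (bits i·bps through M·bps−1 are determined by the partial path) and the same illustrative table names as in the earlier sketch; the helper name partial_update is an assumption:

def partial_update(metric_bit1, metric_bit0, ped_i, level_i, bps, M,
                   curr_bits, ml_bits, ml_metric, ml_ped_i):
    # Step 1 has pruned the current path at level i with accumulated metric ped_i (>= R).
    # Step 3: virtually augment the pruned partial path by deltaD = metricML - PEDiML.
    estimated_metric = ped_i + (ml_metric - ml_ped_i)

    # Step 2: candidate bit indexes for counter-metric updates.
    discovered = [j for j in range(level_i * bps, M * bps)    # bits set by the partial path...
                  if curr_bits[j] != ml_bits[j]]              # ...that differ from the ML bits
    undiscovered = list(range(0, level_i * bps))              # bits of the un-discovered levels

    # Update only counter-metric entries, and only if the estimate improves them.
    for j in discovered + undiscovered:
        if ml_bits[j] == 1:
            metric_bit0[j] = min(metric_bit0[j], estimated_metric)
        else:
            metric_bit1[j] = min(metric_bit1[j], estimated_metric)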
Following is a non-limiting example demonstrating the basic partial update flow:
Assume a 4×4 MIMO decoding setting. Each spatial layer is modulated with QPSK: 2 bits per layer. Decoding aims to detect 8 bits, i.e., to calculate 8 LLRs by managing metric tables of 8 entries each.
In the example below:
1. hypMetric and antiHypMetric are the metric tables mentioned above. Entries 0, 1 are used for the two bits of the first layer (0), entries 2, 3 are used for the two bits of the second layer (1), etc.
2. MLbits holds the encoding of the symbol with the best metric (ML metric) found during the search. Entries are either ‘0’ or ‘1’ according to the encoding of the symbol with the lowest metric found during the search (ML symbol).
3. MLAccumMetric holds the accumulated metrics of the ML symbol. Entry 0 holds the accumulated metric from the root of the tree to layer 0; in a similar manner, entry 1 holds the accumulated metric from the root of the tree to layer 1, etc. The last entry holds the total metric of the ML symbol.
4. currPathAccumMetric holds the accumulated metrics of the current search path. Entry 0 holds the accumulated metric from the root of the tree to layer 0; in a similar manner, entry 1 holds the accumulated metric from the root of the tree to layer 1, and so on. Each time a candidate symbol is tested in a specific layer, the new accumulated metric (from the root to that layer) causes an update of the relevant entry.
5. currPathBits[8] holds the encoding of the current search path. Entries are either ‘0’ or ‘1’ according to the encoding of the symbols chosen along the search. Each time a possible symbol is tested in a specific layer, the corresponding bits are updated.
6. currLayer—the current search layer.
7. sphereRadius—the sphere radius used for search management.
8. approx_factor—a factor used to correct the metric approximation used for the “virtually augmented” path. The factor is a parameter produced by a special algorithm. The possibility of using a more complex “virtual augmentation”, i.e. a more complicated prediction scheme, is also considered.
The following notes should be taken into account:
1. hypMetric and antiHypMetric are updated each time currLayer reaches 4 or once the partial update condition is fulfilled.
2. TablesUpdate is a function used to update the hypMetric and antiHypMetric tables. The path metric and currPathBits are used to compare the stored values against the current metric.
The respective pseudo code may be as the following example; other coding may be used:
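The original listing is not reproduced here; the following Python sketch illustrates how the variables listed above may interact. It is an illustrative reconstruction under stated assumptions, not the original pseudo code: hypMetric is taken here as the table for bit value “1” and antiHypMetric as the table for bit value “0”, and the exact point at which approx_factor scales the augmentation is assumed.

import numpy as np

# Setting from the example above: 4x4 MIMO, QPSK, 2 bits per layer, 8 bits in total.
NUM_LAYERS, BPS = 4, 2
NUM_BITS = NUM_LAYERS * BPS
LLR_MAX = 0.1                                 # example LLRmax used for radius management

hypMetric = np.full(NUM_BITS, np.inf)         # smallest metric seen with bit j equal to "1"
antiHypMetric = np.full(NUM_BITS, np.inf)     # smallest metric seen with bit j equal to "0"
MLbits = np.zeros(NUM_BITS, dtype=int)        # encoding of the best (ML) symbol found so far
MLAccumMetric = np.full(NUM_LAYERS, np.inf)   # ML accumulated metric from the root to each layer
currPathAccumMetric = np.zeros(NUM_LAYERS)    # accumulated metric of the current search path
currPathBits = np.zeros(NUM_BITS, dtype=int)  # encoding of the current search path
sphereRadius = np.inf                         # sphere radius for search management
approx_factor = 1.0                           # corrects the "virtually augmented" metric approximation

def TablesUpdate(metric, bits, bit_indices):
    # Compare the stored table values against the current path metric for the given bit positions.
    for j in bit_indices:
        if bits[j] == 1:
            hypMetric[j] = min(hypMetric[j], metric)
        else:
            antiHypMetric[j] = min(antiHypMetric[j], metric)

def on_layer_extended(currLayer):
    # Decision point after currPathAccumMetric/currPathBits were updated for currLayer (0..3).
    # In this example layers are traversed 0..3 from the root, so the bits of layers
    # 0..currLayer form the "discovered" part of the path.
    # Returns True if the sub-tree below this node should be pruned.
    global sphereRadius
    ped = currPathAccumMetric[currLayer]
    if currLayer == NUM_LAYERS - 1:
        # Full path available ("currLayer reaches 4"): update the tables with the full metric.
        TablesUpdate(ped, currPathBits, range(NUM_BITS))
        if ped < MLAccumMetric[-1]:            # a new ML symbol was found
            MLbits[:] = currPathBits
            MLAccumMetric[:] = currPathAccumMetric
            sphereRadius = ped + LLR_MAX       # radius management as described above
        return True
    if ped >= sphereRadius:
        # Partial update condition fulfilled: virtually augment the pruned path (Steps 1-3)
        # and update only the counter-metric entries (opposite to the ML bit values).
        deltaD = MLAccumMetric[-1] - MLAccumMetric[currLayer]
        estimated = ped + approx_factor * deltaD
        discovered = [j for j in range((currLayer + 1) * BPS)
                      if currPathBits[j] != MLbits[j]]
        undiscovered = list(range((currLayer + 1) * BPS, NUM_BITS))
        for j in discovered + undiscovered:
            if MLbits[j] == 1:
                antiHypMetric[j] = min(antiHypMetric[j], estimated)
            else:
                hypMetric[j] = min(hypMetric[j], estimated)
        return True
    return False

In an actual traversal, on_layer_extended would be called from the depth-first enumeration of candidate symbols per layer, similar to the hard-output sketch given earlier.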
Using the proposed implementation/algorithm, at the end of the search the number of LLRs assigned default values is reduced; in addition, some of the metrics get more reliable values based on “virtually augmented” paths compared to the available explored full paths. Overall, the LLRs produced using the proposed method are more accurate. The user may configure the conditions under which the table is updated, based on the reliability of the estimation.
The improved search yields a higher-precision result (closer to pure soft output ML performance) per given cycle count. Embodiments of the present invention may also be used to reach the same precision faster, compared to a search which does not utilize the partial metric information.
The plots below are based on a full baseband LTE-A link simulator, in which sphere decoder performance with and without the data enhancement of embodiments of the present invention was evaluated. In the first chart, the performance measurement is PER (Packet Error Rate) as a function of SNR (Signal to Noise Ratio), in a 4×4 MIMO configuration over the EPAS channel model (a 3GPP standardized channel model), with a Turbo code rate of 5/6. The system's target PER is 10−1.
Line 310 represents a full soft ML solution (used for performance reference).
Line 320 represents a sphere decoder with predicted augmentation (PA) in accordance with embodiments of the present invention and LLRmax=0.1.
Line 350 represents a sphere decoder without predicted augmentation (PA) in accordance with embodiments of the present invention and LLRmax=0.1.
Line 340 represents a sphere decoder without predicted augmentation (PA) in accordance with embodiments of the present invention and LLRmax=0.15.
Line 330 represents a sphere decoder without predicted augmentation (PA) in accordance with embodiments of the present invention and LLRmax=0.2.
In order to achieve similar performance in terms of PER, a sphere decoder without the partial update mechanism of one embodiment has to work with a bigger LLRmax. Choosing a larger LLRmax may enable reaching more relevant nodes for the LLR calculation on the one hand, while making the search longer on the other hand.
In the second chart (FIG. 4), the cycle count of MIMO detection is compared for the two decoders:
Line 410 represents a sphere decoder with predicted augmentation (PA) in accordance with embodiments of the present invention and LLRmax=0.1.
Line 420 represents a sphere decoder without predicted augmentation (PA) in accordance with embodiments of the present invention and LLRmax=0.2.
In order to achieve similar performance in terms of PER, a sphere decoder without the partial update mechanism spends on average ~2× more cycles on MIMO detection. This is equivalent to 2× (twice) longer processing latency, or to doubling the hardware to meet the same throughput requirements.
Other predictions may also be used. According to some embodiments, the prediction scheme may be per tone and based on any path history that is available. Specifically, the prediction may be updated based on historical data. The historical data may include the path history and also the relationship between previously discovered paths. The prediction scheme may also be per bit and based on any path history that is available. In some other embodiments, the prediction scheme may be based on data from previous tones. Alternatively, the prediction scheme may be based on a priori data for a specified tone; such data may be, for example, the channel coding scheme being used. Furthermore, the prediction can be based on data from neighboring tones within a coherence bandwidth.
Embodiments of the invention may include an article such as a non-transitory computer or processor readable medium, or a computer or processor non-transitory storage medium, such as for example a memory, a disk drive, or a USB flash memory, encoding, including or storing instructions, e.g., computer-executable instructions, which, when executed by a processor or controller, carry out methods disclosed herein.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, method or an apparatus. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.”
The aforementioned flowchart and block diagrams illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the above description, an embodiment is an example or implementation of one or more inventions. The various appearances of “one embodiment,” “an embodiment” or “some embodiments” do not necessarily all refer to the same embodiments.
Although various features of the invention may be described in the context of a single embodiment, the features may also be provided separately or in any suitable combination. Conversely, although the invention may be described herein in the context of separate embodiments for clarity, the invention may also be implemented in a single embodiment.
Reference in the specification to “some embodiments”, “an embodiment”, “one embodiment” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions.
It is to be understood that the phraseology and terminology employed herein is not to be construed as limiting and are for descriptive purpose only.
The principles and uses of the teachings of the present invention may be better understood with reference to the accompanying description, figures and examples.
It is to be understood that the details set forth herein do not constitute a limitation to an application of the invention.
Furthermore, it is to be understood that the invention can be carried out or practiced in various ways and that the invention can be implemented in embodiments other than the ones outlined in the description above.
It is to be understood that the terms “including”, “comprising”, “consisting” and grammatical variants thereof do not preclude the addition of one or more components, features, steps, or integers or groups thereof and that the terms are to be construed as specifying components, features, steps or integers.
If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional element.
It is to be understood that, where the claims or specification refer to “a” or “an” element, such reference is not to be construed as meaning that there is only one of that element.
It is to be understood that, where the specification states that a component, feature, structure, or characteristic “may”, “might”, “can” or “could” be included, that particular component, feature, structure, or characteristic is not required to be included.
Where applicable, although state diagrams, flow diagrams or both may be used to describe embodiments, the invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described.
Methods of the present invention may be implemented by performing or completing manually, automatically, or a combination thereof, selected steps or tasks.
The term “method” may refer to manners, means, techniques and procedures for accomplishing a given task including, but not limited to, those manners, means, techniques and procedures either known to, or readily developed from known manners, means, techniques and procedures by practitioners of the art to which the invention belongs.
The descriptions, examples, methods and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only.
Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined.
The present invention may be implemented in the testing or practice with methods and materials equivalent or similar to those described herein.
While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Other possible variations, modifications, and applications are also within the scope of the invention. Accordingly, the scope of the invention should not be limited by what has thus far been described, but by the appended claims and their legal equivalents.
This application is a non-provisional patent application claiming benefit from U.S. provisional patent application Ser. No. 61/935,001 filed on Feb. 3, 2014 and incorporated herein by reference in its entirety.