METHOD AND APPARATUS FOR AN EQUALIZER BASED ON VITERBI ALGORITHM

Information

  • Patent Application
  • Publication Number
    20240106536
  • Date Filed
    August 14, 2023
  • Date Published
    March 28, 2024
Abstract
An apparatus including at least one processor configured to execute instructions and cause the apparatus to perform: obtaining, for a first possible state (s) of a received sample at the current time step (k), log-likelihood ratio, llr, values llrold,min and llrold,max of a first transmitted bit (bj), wherein the llr values llrold,min and llrold,max are respectively associated with a most likely state and a less likely state related to a received sample at the previous time step (k−1); determining, based on path metrics and branch metrics corresponding to the received sample at the current time step (k), a first parameter (Q) related to a difference between the likelihoods of the most likely state and the less likely state; and updating the magnitude of the llr value llrold,min at least based on the llr value llrold,min, the llr value llrold,max, and the first parameter, to obtain an updated llr value llrold,updated.
Description
FIELD OF THE INVENTION

Various example embodiments relate to an apparatus and a method for use by an equalizer based on the Viterbi algorithm in a receiver in a communication link.


BACKGROUND

Next-generation passive optical networks (PONs) are used to deliver broadband access. PON systems have a point-to-multi-point (P2MP) topology, in which one optical line terminal (OLT) at the network side is used to connect to a multitude (e.g., up to 64) of optical network units (ONUs) at the user side by means of an optical distribution network (ODN) or fiber plant that contains optical fibers and splitters, but no active components. It is noted that, in the present disclosure, the terms ONU and ONT (optical network terminal) may be used interchangeably.


Due to cost reasons, it is expected that next-generation passive optical networks (PON) standards will employ bandwidth-limited reception. For instance, 50G PON may be received using 25G optics. Such bandwidth limitation causes significant inter-symbol-interference (ISI), i.e., received samples do not only depend on the corresponding transmitted symbol, but are also affected by the previous and/or following symbols.


In addition, due to the increased line rate speeds, next-generation PON systems will be more strongly affected by chromatic dispersion due to the fiber propagation, leading to even more ISI.


While for lower-rate PON technologies, the impact of ISI is small and hence can easily be dealt with or even neglected, this does not hold anymore for 25G/50G PON systems due to the reasons listed above (bandwidth-limited reception and increased impact of dispersion). As a result, to recover the symbols that are corrupted by ISI and enable reliable communication, next-generation PON technologies may use an (electrical) analog-to-digital converter (ADC) at the receiver in combination with digital signal processing (DSP)-based channel equalization and forward error correcting (FEC) techniques. The channel equalizer deals with ISI and provides an interference-free (or with reduced interference) receive symbol sequence to the FEC decoder. The FEC decoder further corrects the errors in the receive sequence using redundancy introduced at the transmitter.


Channel equalization is a well-known DSP technique to mitigate ISI in digital communication systems. A near-optimal approach consists of Maximum Likelihood Sequence Estimation (MLSE) in case of a hard-output equalizer. However, the computational complexity of this approach grows exponentially with the channel dispersion time. A well-known efficient MLSE implementation utilizes the Viterbi algorithm.


An important drawback of the traditional MLSE algorithm (or Viterbi algorithm) is that its outputs are hard decisions, i.e., binary data symbol estimates. There also exist algorithm variants that output soft decisions or llrs, for example BCJR (Bahl, Cocke, Jelinek and Raviv) and the Soft-Output Viterbi Equalizer (SOVE). However, these either have a much higher complexity or a reduced soft-output performance.


SUMMARY OF THE INVENTION

Thus, an objective of the invention is to provide a soft-output MLSE or Viterbi-based algorithm that has low or even minimal complexity, without compromising on performance.


The object of the invention is achieved by the subject matter of the claims.


According to a first aspect of the present invention, there is provided an apparatus, comprising means for performing: obtaining, by an equalizer based on the Viterbi algorithm in a receiver in a communication link, for a first possible state (s) of a received sample at the current time step (k), log-likelihood ratio, llr, values llrold,min and llrold,max of a first transmitted bit (bj), wherein the llr values llrold,min and llrold,max are respectively associated with a most likely state (s′min) and a less likely state (s′max) related to a received sample at the previous time step (k−1); determining, by the equalizer, based on path metrics and branch metrics corresponding to the received sample at the current time step (k), a first parameter (Q) related to a difference between the likelihoods of the most likely state (s′min) and the less likely state (s′max); and updating, by the equalizer, the magnitude of the llr value llrold,min at least based on the llr value llrold,min, the llr value llrold,max, and the first parameter, thereby obtaining an updated llr value llrold,updated.


According to a second aspect of the present invention, there is provided a method comprising: obtaining, by an equalizer based on the Viterbi algorithm in a receiver in a communication link, for a first possible state (s) of a received sample at the current time step (k), log-likelihood ratio, llr, values llrold,min and llrold,max of a first transmitted bit (bj), wherein the llr values llrold,min and llrold,max are respectively associated with a most likely state (s′min) and a less likely state (s′max) related to a received sample at a previous time step (k−1); determining, by the equalizer, based on path metrics and branch metrics corresponding to the received sample at the current time step (k), a first parameter related to a difference between the likelihoods of the most likely state (s′min) and the less likely state (s′max); and updating, by the equalizer, the magnitude of the llr value llrold,min at least based on the llr value llrold,min, the llr value llrold,max, and the first parameter, thereby obtaining an updated llr value llrold,updated.


According to a third aspect of the present invention, there is provided a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out: obtaining, by an equalizer based on the Viterbi algorithm in a receiver in a communication link, for a first possible state (s) of a received sample at the current time step (k), log-likelihood ratio, llr, values llrold,min and llrold,max of a first transmitted bit (bj), wherein the llr values llrold,min and llrold,max are respectively associated with a most likely state (s′min) and a less likely state (s′max) related to a received sample at a previous time step (k−1); determining, by the equalizer, based on path metrics and branch metrics corresponding to the received sample at the current time step (k), a first parameter related to a difference between the likelihoods of the most likely state (s′min) and the less likely state (s′max); and updating, by the equalizer, the magnitude of the llr value llrold,min at least based on the llr value llrold,min, the llr value llrold,max, and the first parameter, thereby obtaining an updated llr value llrold,updated.


According to a fourth aspect of the invention, there is provided an apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: obtaining, by an equalizer based on the Viterbi algorithm in a receiver in a communication link, for a first possible state (s) of a received sample at the current time step (k), log-likelihood ratio, llr, values llrold,min and llrold,max of a first transmitted bit (bj), wherein the llr values llrold,min and llrold,max are respectively associated with a most likely state (s′min) and a less likely state (s′max) related to a received sample at a previous time step (k−1); determining, by the equalizer, based on path metrics and branch metrics corresponding to the received sample at the current time step (k), a first parameter related to a difference between the likelihoods of the most likely state (s′min) and the less likely state (s′max); and updating, by the equalizer, the magnitude of the llr value llrold,min at least based on the llr value llrold,min, the llr value llrold,max, and the first parameter, thereby obtaining an updated llr value llrold,updated.


According to a fifth aspect of the invention, there is provided a non-transitory computer readable medium comprising program instructions for causing an apparatus to perform at least the following: obtaining, by an equalizer based on the Viterbi algorithm in a receiver in a communication link, for a first possible state (s) of a received sample at the current time step (k), log-likelihood ratio, llr, values llrold,min and llrold,max of a first transmitted bit (bj), wherein the llr values llrold,min and llrold,max are respectively associated with a most likely state (s′min) and a less likely state (s′max) related to a received sample at a previous time step (k−1); determining, by the equalizer, based on path metrics and branch metrics corresponding to the received sample at the current time step (k), a first parameter related to a difference between the likelihoods of the most likely state (s′min) and the less likely state (s′max); and updating, by the equalizer, the magnitude of the llr value llrold,min at least based on the llr value llrold,min, the llr value llrold,max, and the first parameter, thereby obtaining an updated llr value llrold,updated.


According to a sixth aspect of the invention, there is provided a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: obtaining, by an equalizer based on the Viterbi algorithm in a receiver in a communication link, for a first possible state (s) of a received sample at the current time step (k), log-likelihood ratio, llr, values llrold,min and llrold,max of a first transmitted bit (bj), wherein the llr values llrold,min and llrold,max are respectively associated with a most likely state (s′min) and a less likely state (s′max) related to a received sample at a previous time step (k−1); determining, by the equalizer, based on path metrics and branch metrics corresponding to the received sample at the current time step (k), a first parameter related to a difference between the likelihoods of the most likely state (s′min) and the less likely state (s′max); and updating, by the equalizer, the magnitude of the llr value llrold,min at least based on the llr value llrold,min, the llr value llrold,max, and the first parameter, thereby obtaining an updated llr value llrold,updated.


According to the present invention, a structure similar to that of the Soft-Output Viterbi Equalizer (SOVE) algorithm is used, while the llrold,max of a transmitted bit (bj), associated with the less likely state (s′max) related to a received sample at a previous time step (k−1), is not simply discarded. At every step, the llr magnitude of the transmitted bit (bj) is updated based on the llr values llrold,min and llrold,max associated with both the most likely state (s′min) and the less likely state (s′max). The difference between the log-likelihoods of the most likely state (s′min) and the less likely state (s′max) is also reflected in the magnitude of the updated llr value. In this way, the same performance as the BCJR algorithm can be realized, with a complexity that scales like that of the hard-output MLSE algorithm. Compared to BCJR, which requires two passes through the trellis, various embodiments require only a single pass through the trellis.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 depicts a schematic block diagram of a network topology according to the present application;



FIG. 2 depicts a schematic block diagram of a next-generation PON communication system;



FIG. 3 depicts a schematic diagram of an example trellis of states for illustrating the Viterbi algorithm;



FIG. 4 depicts a schematic block diagram of an example implementation of the Viterbi based MLSE algorithm;



FIG. 5 depicts a schematic block diagram of memory update procedure for the Viterbi based MLSE;



FIG. 6a depicts a schematic block diagram of an example implementation of the BCJR algorithm;



FIG. 6b depicts a schematic block diagram of a path metric calculation block used in an example implementation of the BCJR algorithm;



FIG. 7 depicts a schematic block diagram of an example implementation of the SOVE algorithm;



FIG. 8 depicts a flow diagram of a method for llr updating according to various embodiments;



FIG. 9 depicts a schematic diagram showing the update procedure according to one embodiment;



FIG. 10 depicts a schematic block diagram of implementing llr updating according to one embodiment of the present invention;



FIG. 11 depicts a block diagram of an apparatus according to various embodiments.





DETAILED DESCRIPTION

Example embodiments of the present application are described herein in detail and shown by way of example in the drawings. It should be understood that, although specific embodiments are discussed herein, there is no intent to limit the scope of the invention to such embodiments. To the contrary, it should be understood that the embodiments discussed herein are for illustrative purposes, and that modified and alternative embodiments may be implemented without departing from the scope of the invention as defined in the claims. Embodiments not falling under the scope of the appended claims are to be considered merely as examples suitable for understanding the invention. The sequence of method steps is not limited to the specific embodiments; the method steps may be performed in other possible sequences. Similarly, specific structural and functional details disclosed herein are merely representative for purposes of describing the embodiments. The invention described herein, however, may be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.



FIG. 1 shows a schematic block diagram of a network topology according to the present application.


As shown in FIG. 1 in a PON, an OLT 110 at the network side is used to connect to a plurality of optical network units (ONUs) 121, 122, . . . , 123 at the user side by means of an optical distribution network (ODN) or fiber plant that contains optical fibers and splitters, but no active components.


Most PON technologies such as G-PON, E-PON, and XGS-PON are time-division multiplexing (TDM) PON technologies, in which the fiber medium is shared in time between the different ONUs. In addition, time- and wavelength-division multiplexing (TWDM) PON technologies exist, such as NG-PON2, in which multiple TDM systems at different wavelengths are stacked on the same PON system. The present invention applies to both TDM and TWDM PON systems.



FIG. 2 shows a schematic block diagram of a next-generation PON communication system.


In FIG. 2, at the transmitter TX, redundant bits are added to the data bits, so the FEC decoder at the receiver can reduce the total number of errors using this redundancy. The 50G PON standard employs low-density parity check (LDPC) codes, which are advanced FEC codes achieving higher FEC gains than the more conventional Reed Solomon codes used by older PON standards. LDPC codes also have the advantage that they can be efficiently decoded while taking soft information into account. This means they can be decoded using the log-likelihood ratios (LLRs) or probabilities of the individual bits rather than only their hard-decisioned binary values. Soft-input LDPC is expected to be used in some 50G implementations, due to its superior FEC gain.


The encoded data bits are then digitally modulated onto symbols. Typically, non-return-to-zero on-off keying (NRZ-OOK) modulation is used. The digital transmit signal x[k] is hence represented by the binary values +1 or −1, depending on the current (encoded) data bit bk. For example, a bit value bk=0 corresponds to x[k]=+1, and a bit value bk=1 corresponds to x[k]=−1. Other (higher-order) modulation formats could be used in future PON technologies, e.g., PAM4, for which x[k] ∈ [−1, −⅓, ⅓, 1], or PAM3, for which x[k] ∈ [−1, 0, 1].
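As an illustration of the bit-to-symbol mapping just described, the following sketch maps encoded bits to NRZ-OOK symbols; the PAM4 bit-pair-to-level assignment is a hypothetical (Gray-coded) choice, since the text only specifies the level set:

```python
def map_nrz_ook(bits):
    """NRZ-OOK: bit value 0 -> x[k] = +1, bit value 1 -> x[k] = -1."""
    return [1.0 if b == 0 else -1.0 for b in bits]


def map_pam4(bit_pairs):
    """PAM4: two bits per symbol onto the level set {-1, -1/3, 1/3, 1}.
    The pair-to-level assignment here is a hypothetical Gray-coded choice."""
    levels = {(0, 0): -1.0, (0, 1): -1 / 3, (1, 1): 1 / 3, (1, 0): 1.0}
    return [levels[pair] for pair in bit_pairs]
```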


The transmit signal x[k] is communicated over a dispersive and noisy channel leading to the receive samples y[k], at the output of the ADC. This dispersive and noisy channel includes the fiber propagation as well as the hardware-induced distortion effects.


At the receiver RX, the received samples are processed by a channel equalizer to mitigate the ISI impact and provide an interference-free symbol sequence to the FEC decoder. The latter further corrects the errors in the receive sequence. In case of a soft decoder, the llrs are calculated as the log of the ratio of the probability of the bit being a 0, given the received samples y, to the probability of it being a 1:

llrk = log( P(bk=0|y) / P(bk=1|y) ).

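As a minimal numerical sketch of this llr definition (the function name is illustrative):

```python
import math


def llr_bit(p_bit0_given_y, p_bit1_given_y):
    """llr_k = log( P(b_k=0 | y) / P(b_k=1 | y) ).
    Positive values favour bit 0, negative values favour bit 1,
    and the magnitude reflects the confidence of the decision."""
    return math.log(p_bit0_given_y / p_bit1_given_y)
```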
Maximum Likelihood Sequence Estimation (MLSE) is a well-known near-optimal channel equalization approach. An MLSE equalizer selects the most likely (i.e., the one with maximum likelihood or probability) symbol sequence from all possible transmit sequences, given the received samples. The computational complexity of this approach hence grows exponentially with the length of the channel dispersion time. A well-known efficient MLSE implementation utilizes the Viterbi algorithm to reduce the computational complexity.



FIG. 3 shows a schematic diagram of an example trellis of states for illustrating the Viterbi algorithm.


As illustrated in FIG. 3, the goal of the Viterbi algorithm is to calculate the most probable path through a trellis of states. Consider binary transmission (i.e., x[k]=±1) and an L-tap MLSE equalizer, where L is the expected number of consecutively transmitted symbols that affect any received sample. Consequently, the received samples with ISI may be viewed as the output of a finite state machine, and can thus be represented by a 2^(L−1)-state trellis diagram. This is illustrated in FIG. 3, in which L=3 and there are 2^2=4 states. The maximum-likelihood estimate of the transmit symbol sequence is simply the most probable path through the trellis, marked with solid arrows, given the receive sample sequence y=[y1 . . . yK]:








x* = argmin over x of [ −log P(y|x1, . . . ,xK) ],




where P(y|x1 . . . xK) is the probability that y is observed given that the sequence x=[x1, . . . , xK] is transmitted. The dashed arrows show all other possible transitions between the states.
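The state-space sizes discussed above can be sketched with a hypothetical helper that enumerates all trellis states as the possible (L−1)-symbol sub-sequences:

```python
from itertools import product


def trellis_states(L, alphabet=(+1, -1)):
    """Each trellis state fixes the last L-1 transmitted symbols, so there
    are len(alphabet) ** (L - 1) states; binary symbols by default."""
    return list(product(alphabet, repeat=L - 1))
```

For L=3 with binary symbols this yields the 2^2=4 states of FIG. 3; with a four-level (PAM4) alphabet and L=3, it yields 4^2=16 states.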


Viterbi-type algorithms provide an efficient means for performing the trellis search, by sequentially processing all receive samples and pruning unlikely paths at every step. Key is that the optimal path through the trellis is computed incrementally using the previously computed path and branch metrics. The idea is that for each possible state s at a time step k, only one of the two incoming paths will be retained (the path with the highest probability), while the other path will be discarded. For example, for state “01” at time step k+2, there are two possible incoming paths from the previous time step, namely from state “00” and state “10” at time step k+1. Only the one with the highest probability, i.e., the path from state “00”, will be retained. Generally, each state s at time step k corresponds to one of the 2^(L−1) possible values of the (L−1)-symbol transmit sub-sequence [xk−L+2, . . . ,xk], and for each such state one surviving path is retained. In addition, if the path metric PM[s,k] is defined as the minus log-probability of being in state s at time step k of the trellis (i.e., that the symbols xk−L+2, . . . ,xk corresponding to state s were transmitted), then PM[s,k] corresponds to:







PM[s,k] = −log P(y1, . . . ,yk|xk−L+2, . . . ,xk) = −log( Σ over xk−L+1 of [ P(yk|xk−L+1, . . . ,xk) · P(y1, . . . ,yk−1|xk−L+1, . . . ,xk−1) ] )

Note that the values of xk−L+2, . . . ,xk are implicitly specified by the state s (each state s corresponds to a specific set of transmitted symbols xk−L+2, . . . ,xk). The PM of state s at time step k can, in approximation, be computed recursively from the PMs of the two possible predecessor states s′ (for the two values of xk−L+1) at time index k−1 as:








PM[s,k] = min over s′ of [ −log P(yk|xk−L+1, . . . ,xk) + PM[s′,k−1] ],




where the minus log of a sum of terms has been approximated as the minimum of the minus logs of the terms (which is a commonly applied approximation in the context of probabilities). Also, the value of xk−L+1 is implicitly specified by the state s′. The transition probability, corresponding to the first term inside the argument of the min-function, is defined as the branch metric (BM). Hence, in the trellis search 2^L BMs or transition probabilities are calculated at each time step k (i.e., for each received sample yk):






BM[s,s′,k] = −log P(yk|xk−L+1, . . . ,xk),


where P(yk|xk−L+1, . . . ,xk) is the probability that yk is observed given that the current transmit sequence is xk−L+1, . . . ,xk. Note that each BM corresponds to the transition from a previous state s′ (corresponding to the sequence xk−L+1, . . . ,xk−1) to the next state s (corresponding to the sequence xk−L+2, . . . ,xk). For any state s, only the s′ states for which the last L−2 symbols of s′ are identical to the first L−2 symbols of s are considered. The PMs are thus calculated as:







PM[s,k] = min over s′ of [ BM[s,s′,k] + PM[s′,k−1] ].





For the Viterbi algorithm, the absolute values of the PMs and BMs do not really matter, but only the relative differences (e.g., the differences between the different PMs at time-step k). Hence, the practically calculated values only equal the log probabilities up to an offset (i.e., a constant factor for the probabilities).
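One add-compare-select step of this recursion can be sketched as follows; the dictionary-based data structures are illustrative assumptions, not taken from the text:

```python
def viterbi_pm_update(pm_prev, bm, predecessors):
    """One add-compare-select step:
    PM[s,k] = min over s' of ( BM[s,s',k] + PM[s',k-1] ).

    pm_prev:      dict state -> path metric at time step k-1
    bm:           dict (s, s_prev) -> branch metric at time step k
    predecessors: dict state -> list of allowed predecessor states s'
    Returns the new path metrics and the surviving predecessor per state.
    """
    pm_new, survivor = {}, {}
    for s, preds in predecessors.items():
        # Add branch metric to each predecessor PM, then compare and select.
        candidates = [(bm[(s, sp)] + pm_prev[sp], sp) for sp in preds]
        pm_new[s], survivor[s] = min(candidates)
    return pm_new, survivor
```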


The performance of the MLSE equalizer depends on the accuracy of the BM calculation unit. Many BM models can be considered to calculate the transition probabilities. The most common model assumes white additive Gaussian noise, such that







P(yk|xi) = ( 1/√(2πσi2) ) · exp( −(yk−μi)2/(2σi2) )

where μi and σi2 denote, respectively, the mean and the noise variance of the Gaussian pdf corresponding to a received sample given the transmitted sequence xi.
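Under this Gaussian model, the branch metric −log P(yk|xi) may be sketched as follows (a hypothetical helper; in practice the constant term is often dropped, since only metric differences matter for the Viterbi algorithm):

```python
import math


def gaussian_branch_metric(y_k, mu_i, sigma_i):
    """BM = -log P(y_k | x_i) under the white additive Gaussian noise model:
    0.5 * log(2*pi*sigma_i^2) + (y_k - mu_i)^2 / (2*sigma_i^2)."""
    return (0.5 * math.log(2.0 * math.pi * sigma_i ** 2)
            + (y_k - mu_i) ** 2 / (2.0 * sigma_i ** 2))
```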


More advanced BM models may compute the transition probabilities via histogram metrics measured at the receiver, or based on AI-inspired models.



FIG. 4 shows a schematic block diagram of an example implementation of the Viterbi based MLSE algorithm.



FIG. 4 illustrates a block diagram of an example implementation of the Viterbi based MLSE algorithm for L=2 (and thus 2^(L−1)=2 states). For every time step, 4 BMs, respectively for the transitions from the 2 possible states s′ (s′=0 and s′=1) at the previous time step tk−1 to the 2 possible states s (s=0 and s=1) at the current time step tk, are calculated based on the received samples. Then the 4 corresponding BMs are added to the 2 delayed surviving PMs. This leads to two pairs of temporary PMs, of which pairwise the minimum is calculated, leading to the two surviving PMs, PM0 and PM1, each related to one of the two states 0 and 1. For each state, which temporary PM was minimal also determines the most likely transmitted bit according to the surviving path in that state (PM0 being minimal implies that 0 was most likely, and PM1 that 1 was most likely). These bit values, bnew,0 and bnew,1, one for each state, are then sent to a memory unit.


The memory unit stores for each of the states the most probable bit values of the last M bits, where M is the length of the memory. At each time step, the bit values in the memory are updated, and these updates are calculated separately for each new state. Hence, it is sufficient to focus on the processing for one state, and for simplicity, the dependency on the new state 0/1 is dropped when considering parameters such as bnew.



FIG. 5 shows a schematic block diagram of memory update procedure for the Viterbi based MLSE.



FIG. 5 illustrates how the memory is processed at each time step. At the beginning of the processing, the memory is filled with hard decisions of the last M bits (obtained in the previous M steps) for each of the two previous states s′, s′=1 and s′=0. In the example shown in FIG. 5, b1 corresponds to the most recent bit value and bM to the oldest bit value. Then, focusing on the processing for the first possible state s=0 of the received sample at the current time step k (processing for state s=1 is identical), the memory of the most likely previous state s′min is copied. For example, for the first possible state s=0, bnew=0, given that the most likely state s′min related to the received sample at the previous time step (k−1) is s′=0 in this example, the memory corresponding to the state s′=0 is copied. After this, all the bits in the memory get shifted by 1 position, while bM is outputted as bout for the state s=0 and bnew is inserted at b1. In this way the memory content for the new state s=0 is obtained. Depending on which one of the possible states s, s=0 or s=1, has a minimum PM (see block ‘Min state’ in FIG. 4), the bout corresponding to the state having minimum PM will be outputted by the ‘Memory’ block in FIG. 4. FEC decoding is then performed based on the bout outputted by the ‘Memory’ block in FIG. 4.
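The per-state memory update described above resembles a register-exchange survivor memory. A sketch under assumed data structures (the dictionary layout and names are illustrative):

```python
def update_memory(memories, survivor, b_new):
    """Register-exchange style update of the survivor-bit memories (FIG. 5):
    for each new state s, copy the memory of its most likely predecessor
    s'_min, shift all bits by one position, output the oldest bit bM as
    b_out, and insert the new bit b_new at position b1.

    memories: dict state -> list [b1, ..., bM] (b1 = most recent bit)
    survivor: dict state -> most likely predecessor state s'_min
    b_new:    dict state -> newly decided bit for state s
    Returns the updated memories and the shifted-out bits per state.
    """
    new_mem, b_out = {}, {}
    for s, sp in survivor.items():
        copied = list(memories[sp])          # copy predecessor memory
        b_out[s] = copied[-1]                # oldest bit bM is output
        new_mem[s] = [b_new[s]] + copied[:-1]  # shift, insert b_new at b1
    return new_mem, b_out
```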


The advantage of this MLSE algorithm is that it has a small complexity compared to other Viterbi-based algorithms. However, it has hard outputs, and therefore suffers a performance penalty compared to soft-output equalization algorithms.



FIG. 6a shows a schematic block diagram of an example implementation of the BCJR algorithm.


The BCJR algorithm is a well-known soft-output Viterbi-based equalization algorithm, which yields very good equalization performance. As shown in FIG. 6a, it consists of a branch metric unit (BMU), two path metric units (PMUs) and a metric combiner. After calculating the branch metrics in the BMU, two sets of path metrics are calculated: forward path metrics are calculated via a forward pass of the Viterbi algorithm (in the PMU forward), and backward path metrics are calculated via a backward pass (in the PMU backward). Both sets of path metrics are then combined (i.e., added together) in the metric combiner to calculate the total path metrics. These path metrics (indicating the log of the likelihoods) can then be used to calculate the llrs.
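The metric combining step can be sketched in minus-log (min-sum) form as follows; the per-state bit partition and all names are illustrative assumptions, and a full BCJR implementation would combine metrics per transition rather than per state:

```python
def combine_metrics(pm_fwd, pm_bwd):
    """Metric combiner: total per-state metric = forward PM + backward PM
    (both are minus-log likelihoods, so adding multiplies probabilities)."""
    return {s: pm_fwd[s] + pm_bwd[s] for s in pm_fwd}


def llr_from_totals(total, states_bit0, states_bit1):
    """Min-sum llr approximation: best (smallest) total metric among the
    bit=1 states minus the best among the bit=0 states; positive favours 0."""
    return min(total[s] for s in states_bit1) - min(total[s] for s in states_bit0)
```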



FIG. 6b shows a schematic block diagram of a path metric calculation block used in an example implementation of the BCJR algorithm.


The path metric calculation block shown in FIG. 6b is somewhat similar to FIG. 4, with the difference that the memory stores all path metrics, rather than the M most recent bit values.


It is clear that the BCJR algorithm is significantly more complex than the MLSE algorithm, as it requires two passes through the trellis (i.e., two sets of path metric calculations). It also requires storing all the path metrics of the trellis in memory. So while the BCJR achieves the best performance, it does so at more than twice the complexity of the MLSE algorithm.



FIG. 7 shows a schematic block diagram of an example implementation of the SOVE algorithm.


A soft-output Viterbi-based algorithm that has a lower complexity than the BCJR is the Soft-Output Viterbi Equalizer (SOVE). In the SOVE algorithm, the llrs are calculated based only on the forward pass. A block diagram example is given in FIG. 7. In the considered example (with two states), an llr for the current bit can be calculated for each state by considering the difference between the two temporary path metrics. These llrs (llr0 and llr1) can then be stored in the memory, similar to how bnew,0 and bnew,1 are stored for the MLSE algorithm in FIG. 4. The llrs are then processed similarly as in FIG. 5, but with the bits b1, . . . ,bM replaced by the llrs llr1, . . . ,llrM.
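The per-state llr computation described here may be sketched as follows (a minimal illustration; the metric and sign conventions are assumptions):

```python
def sove_llr(pm_temp_for_bit0, pm_temp_for_bit1):
    """SOVE-style llr for the current bit at a state: the difference between
    the two temporary path metrics (minus-log likelihoods). A positive value
    favours bit 0, since a smaller metric means a more likely path."""
    return pm_temp_for_bit1 - pm_temp_for_bit0
```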


Since the SOVE algorithm relies on only a single pass, it has roughly half the complexity of the BCJR. However, the quality of the llrs is significantly worse, as it only considers the forward path metrics (i.e., only half the information), which leads to an overestimation of the llr magnitudes. This leads to a significant degradation in performance compared to the BCJR algorithm.


Other soft-output Viterbi algorithms have been proposed, such as, for instance, the Soft-Output Viterbi Algorithm (SOVA). These algorithms typically have a complexity somewhere between the SOVE and the BCJR algorithm, and, for instance, consider more states or path metric differences than the SOVE algorithm. While the performance is typically better than that of the SOVE, it still remains below that of the BCJR algorithm.



FIG. 8 shows a flow diagram of a method for llr updating according to various embodiments.


Various aspects may be implemented in a receiver in a communication link. For example, various aspects may be implemented in an OLT or an ONU. Generally, various aspects may be implemented in any receiver in a communication link affected by ISI.


Specifically, in step S810, an equalizer based on the Viterbi algorithm in a receiver in a communication link obtains, for a first possible state s of a received sample at the current time step k, log-likelihood ratio, llr, values llrold,min and llrold,max of a first transmitted bit bj, wherein the llr values llrold,min and llrold,max are respectively associated with a most likely state s′min and a less likely state s′max related to the received sample at the previous time step k−1.


For simplicity, the states of the received sample at the current time step k may be written as the current state s, and the states of the received sample at the previous time step k−1 may be written as previous state s′.


The number of possible states is related to L and the modulation scheme used. For example, given that the transmitted samples are binary modulated, there may be 2^(L−1) possible states. In another embodiment, in case of PAM4 modulation and L=3, there are 4^(L−1)=16 possible states.


For simplicity, in the following, especially in FIGS. 9 and 10, various embodiments will be elaborated using a simple example, where L=2 and the received samples are binary modulated, specifically using non-return-to-zero on-off keying (NRZ-OOK) modulation. In this case, there are only two possible current states, s=0 and s=1.


A skilled person shall understand that the modulation scheme is not limited thereto. Other modulation schemes, for example PAM4, may be used in other embodiments. A skilled person shall also understand that in other embodiments L may be higher than 2.



FIG. 9 shows a schematic diagram showing the update procedure according to one embodiment.


Reference is first made to FIG. 5. Conventionally, for Viterbi based MLSE, for each possible previous state s′, s′=1 and s′=0, there is a memory storing bit values of M previous bits. As shown in FIG. 5, hard decisions of bit values 0 or 1 are stored in respective memory.


According to various embodiments, there may be a memory storing Ilr values of M previous bits for each possible previous state s′. In the example shown in FIG. 9, there are only two states, s′=1 and s′=0. There are two memories, one for each state; each memory may have a depth of M Ilr positions, where each Ilr position takes, for example, a predetermined number of bits in the memory and stores an Ilr value of a transmitted bit associated with that state.


A skilled person shall understand that in other embodiments, there may be more than two possible states and the number of memories corresponds to the number of possible states of a sample.


For example, in case of four possible states, for example when L=3 and binary modulation is used, there may be four memories, one for each of the four possible states, and each memory may have a depth of M Ilr positions, each Ilr position storing an Ilr value of a transmitted bit.


Generally, the equalizer may obtain the Ilr values Ilrold,min, Ilrold,max of the first transmitted bit bj respectively from the mth Ilr position of a memory corresponding to the respective possible state s′ of the received sample at the previous time step k−1, wherein 1≤m≤M, M being related to the length of the memory.


Specifically, as shown by the block drawn by dashed lines in FIG. 9, bk-3 is considered as the first transmitted bit bj. The Ilr values of the first transmitted bit bj, Ilr2_old,s′0 and Ilr2_old,s′1 are respectively retrieved from the second Ilr position of the memories respectively corresponding to s′=0 and s′=1.


Given that s=0 is considered as the first possible state for the current state s, the most likely previous state s′min and the less likely previous state s′max may be determined based on various methods known to a skilled person. For example, one of s′=0 and s′=1 may be determined based on path metrics as the most likely previous state s′min, while the other one may be determined as the less likely previous state s′max. Accordingly, Ilrold,min, Ilrold,max may be selected from the obtained Ilr values of the first transmitted bit bj, Ilr2_old,s′0 and Ilr2_old,s′1. A skilled person will know various different schemes of determining Ilrold,min, Ilrold,max for a possible current state.



FIG. 10 shows a schematic block diagram of implementing Ilr updating according to one embodiment of the present invention.


Specifically, as shown in FIG. 10, the equalizer may retrieve the Ilr values Ilrold,s′0 and Ilrold,s′1 of the first transmitted bit bj respectively from the memory corresponding to each possible state s′=0 and s′=1. For simplicity, s′=0 may be denoted as s′0, and s′=1 may be denoted as s′1. Meanwhile, for each possible current state s, s=0 and s=1, the most likely previous state s′min may be determined based on the current PM values. Depending on the s′min value, one of the old Ilr values, Ilrold,s′0 or Ilrold,s′1 is selected as Ilrold,min, the other one of the old Ilr values Ilrold,s′1 or Ilrold,s′0 is selected as Ilrold,max.








llrold,min = log(Ps′min,0/Ps′min,1),

llrold,max = log(Ps′max,0/Ps′max,1)







Returning to FIG. 8, in step S820, the equalizer according to various embodiments determines, based on path metrics and branch metrics corresponding to the received sample at the current time step k, a first parameter Q related to a difference between likelihoods of the most likely state s′min and the less likely state s′max.


In one embodiment, the first parameter Q may be a magnitude of a log-likelihood ratio Ilrnew between the most likely state s′min and the less likely state s′max.
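A hedged sketch of this selection (Python), taking path metrics as accumulated negative log-likelihoods so that a smaller metric means a more likely state; the dictionary layout and function name are assumptions for illustration:

```python
def select_states_and_q(pm_prev, bm_to_s):
    """For one current state s, rank the candidate previous states.

    pm_prev:  path metric of each candidate previous state s'
    bm_to_s:  branch metric of the transition s' -> s
    Smaller metric = more likely. Returns (s'_min, s'_max, Q), where
    Q is the metric difference, i.e. the magnitude of llr_new.
    """
    # Candidate metrics for reaching state s through each previous state
    cand = {sp: pm_prev[sp] + bm_to_s[sp] for sp in pm_prev}
    s_min = min(cand, key=cand.get)   # most likely previous state
    s_max = max(cand, key=cand.get)   # less likely previous state
    return s_min, s_max, cand[s_max] - cand[s_min]
```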






Q = |llrnew| = log(Ps′min/Ps′max) = log(max(Ps′min,0, Ps′min,1)/max(Ps′max,0, Ps′max,1))








In one embodiment where the received samples are binary modulated, especially using non-return to zero on-off keying (NRZ-OOK) modulation, the first parameter may be a magnitude of an Ilr value of a second transmitted bit associated with the first possible state s, wherein the second transmitted bit distinguishes the most likely state s′min from the less likely state s′max.


For example, referring to the trellis of states in FIG. 3, assume that the first possible current state s is “00” at time step k+1. Then the two possible previous states s′ are “00” and “10” at time step k, i.e., the states having an arrow going into “00” at time step k+1. In the example shown in FIG. 3, the previous state s′=00 is the most likely state s′min and the previous state s′=10 is the less likely state s′max. There is only one bit of difference between the two previous states s′min and s′max. The first parameter Q may be a magnitude of a log-likelihood ratio Ilrnew between the most likely state s′min and the less likely state s′max. In case of binary modulation, Ilrnew equals the Ilr of the bit that distinguishes the most likely state s′min from the less likely state s′max.


Generally, at time step k, the position of the bit that distinguishes the most likely state s′min from the less likely state s′max is related to L and the modulation scheme used for the transmitted samples. For example, in case binary modulation is used and L=3, considering that at time step k, bk is the most recent bit in the current state s, the second transmitted bit that distinguishes the most likely previous state s′min from the less likely previous state s′max is bk-L+1.


Specifically, in the simple example as shown in FIG. 9 or 10, where L=2 and the received samples are binary modulated, at time step k, the bit in the current state is bk, and the bit that distinguishes the most likely previous state s′min from the less likely previous state s′max is bk-1. In the example shown in FIG. 3, at time step k, the bits in the current state are bk−1bk, and the bit that distinguishes the most likely state s′min from the less likely state s′max is bk-2.


Still referring to FIG. 8, in step S830, the equalizer according to various embodiments updates the magnitude of the Ilr value Ilrold,min at least based on the Ilr value Ilrold,min, the Ilr value Ilrold,max, and the first parameter Q, thereby obtaining an updated Ilr value Ilrold,updated.


Specifically, in one embodiment, the sign of Ilrold,min may be preserved, and the equalizer may reduce the magnitude of the Ilr value Ilrold,min at least based on the sign of the Ilr value Ilrold,min, the Ilr value Ilrold,max and the first parameter Q.


Specifically, in one embodiment, the equalizer may determine whether the Ilr value Ilrold,min and the Ilr value Ilrold,max have the same sign. Based on determining that the Ilr value Ilrold,min and the Ilr value Ilrold,max have the same sign, the magnitude of the updated Ilr value Ilrold,updated may be determined as the minimum of the magnitude of the Ilr value Ilrold,min and the sum of the first parameter and the magnitude of the Ilr value Ilrold,max. Based on determining that the Ilr value Ilrold,min and the Ilr value Ilrold,max do not have the same sign, the magnitude of the updated Ilr value Ilrold,updated may be determined as the minimum of the magnitude of the Ilr value Ilrold,min and the first parameter.
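The sign-preserving update described above can be sketched as follows (Python; a minimal illustration of the two cases, not the hardware implementation of FIG. 10):

```python
def update_llr(llr_old_min: float, llr_old_max: float, q: float) -> float:
    """Update the stored LLR of a past bit for one current state s.

    llr_old_min: LLR associated with the most likely previous state
    llr_old_max: LLR associated with the less likely previous state
    q:           first parameter Q (LLR magnitude between the states)
    The sign of llr_old_min is preserved; only its magnitude shrinks.
    """
    sign = 1.0 if llr_old_min >= 0 else -1.0
    if (llr_old_min >= 0) == (llr_old_max >= 0):
        # Same sign: magnitude capped by Q + |llr_old_max|
        magnitude = min(abs(llr_old_min), q + abs(llr_old_max))
    else:
        # Different signs: magnitude capped by Q alone
        magnitude = min(abs(llr_old_min), q)
    return sign * magnitude
```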


In the example embodiment shown in FIG. 10, dashed lines represent the operation used in the SOVE algorithm. According to the SOVE algorithm, for each current state s, only the old Ilr value of the most likely previous state s′ is considered; thus the updated Ilr value would be taken as Ilrold,updated=Ilrold,min, meaning that the Ilr values for the less likely states are discarded.


In the example embodiment shown in FIG. 10, solid lines represent the operation used according to various embodiments. While determining the updated Ilr value Ilrold,updated, the likelihood of the less likely state Ilrold,max is taken into account as well, so that the updated Ilr value Ilrold,updated for the first transmitted bit bj is calculated more accurately.


More specifically, in the example embodiment illustrated in FIG. 10, both Ilrold,min and Ilrold,max are parsed into their signs and magnitudes, and their signs are compared. Depending on this comparison, a selector selects Q or Q+|Ilrold,max|. A min-block then selects the minimum of this result and |Ilrold,min|. The resulting magnitude is then concatenated with sign(Ilrold,min) to obtain the updated Ilr value Ilrold,updated:






Ilrold,updated = sign(Ilrold,min) min(|Ilrold,min|, Q + ~xor(Ilrold,min>0, Ilrold,max>0)·|Ilrold,max|)


This is equivalent to the formula:







llrold,updated = log(max(Ps′min,0, Ps′max,0)/max(Ps′min,1, Ps′max,1))






In case Ps′min,0≥Ps′min,1, Ps′min,0 is the highest of the four likelihoods, since s′min is the more likely previous state. In this case the formula above can be simplified to:








llrold,updated = log(max(Ps′min,0, Ps′max,0)/max(Ps′min,1, Ps′max,1)) = log(Ps′min,0/max(Ps′min,1, Ps′max,1))






It also implies that Ilrold,updated and Ilrold,min are both positive, and that the sign is preserved. Depending on the relative magnitudes of the likelihoods, the magnitude of Ilrold,updated is taken as either |Ilrold,min|, Q, or Q+|Ilrold,max|. The same reasoning can be applied in case Ps′min,1>Ps′min,0.


After the updated Ilr value Ilrold,updated is obtained, in one embodiment, the equalizer may store the updated Ilr value Ilrold,updated into the (m+1)th Ilr position of the further memory corresponding to the first possible state s of the received sample at the current time step k, when 1≤m<M.


In the example shown in FIG. 9 there are two further memories respectively corresponding to each possible state at the current time step k. Similarly as explained above, each further memory may have a depth of M Ilr positions, where each Ilr position takes, for example, a predetermined number of bits in the memory and stores an Ilr value of a transmitted bit associated with that state. Specifically, referring to FIG. 9, for the first possible state s=0, the updated Ilr value Ilr2_updated,s0 of the first transmitted bit bk-3 may be stored in the third Ilr position of the further memory corresponding to the first possible state s=0, as represented by the solid arrow. Thereby, the Ilr value of a transmitted bit is shifted after update by one Ilr position in the memory.
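The shift-by-one storage can be sketched as follows (Python; the list layout with index 0 for the first Ilr position and the helper name are assumptions for illustration):

```python
def fill_further_memory(llr_new: float, updated_llrs: list) -> list:
    """Build the further memory of one current state s.

    The first Ilr position (index 0) receives the new bit's LLR; the
    updated LLR read from position m is written to position m+1, so
    every stored LLR shifts by one position towards the oldest bit.
    """
    return [llr_new] + updated_llrs

# Three updated values shift up; llr_new fills the first position
mem = fill_further_memory(0.7, [1.2, -0.4, 2.1])
```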


The above description only concentrates on updating the Ilr value of one transmitted bit for one possible state. However, since each transmitted bit may be considered independently, a skilled person shall understand that the described obtaining S810, determining S820 and updating S830 procedure may be repeated for a predetermined number M of transmitted bits.


Referring to the example embodiment shown in FIG. 9, the described obtaining S810, determining S820 and updating S830 procedure may be repeated for each of the M bits bk-2, bk-3, . . . , bk-1-M.


Furthermore, the described obtaining S810, determining S820 and updating S830 procedure may be repeated for respective possible state s of the received sample at the current time step k.


Referring to the example embodiment shown in FIG. 9, the solid arrows represent the update procedure performed for state s=0. The dashed arrow represents the update for s=1. The update procedure for the same transmitted bit may require the same set of old Ilr values of that transmitted bit; however, the update procedure may be performed independently for different possible current states. Specifically, based on the same input, Ilr2_old,s′0 and Ilr2_old,s′1, the most likely previous state s′min and the less likely previous state s′max may be determined differently for the possible current states s=0 and s=1. Then the Ilr update may be performed independently for both s=0 and s=1 similarly as described above.


As shown by the dashed arrow in FIG. 9, for current state s=1, the updated Ilr value Ilr2_updated, s1 of the first transmitted bit bk-3 may be stored in the third Ilr position of the memory corresponding to the first possible state s=1.


A skilled person may easily amend the examples shown in FIGS. 9 and 10 to adapt for other embodiments with more possible states, for example in case of higher L or PAM4 modulation. For example, as shown in the example in FIG. 3, there are four possible states, s=0, s=1, s=2 and s=3, and the equalizer may repeat the described obtaining S810, determining S820 and updating S830 procedure for respective possible state s=0, s=1, s=2 and s=3 of the received sample at the current time step.


In one embodiment where binary modulation is used and the first parameter Q is a magnitude of a Ilr value of a second transmitted bit associated with the first possible state s, wherein, the second transmitted bit distinguishes the most likely state s′min from the less likely state s′max, the equalizer may store the Ilr value of the second transmitted bit Ilrnew into the first Ilr position of the further memory corresponding to the first possible state s of the received sample at the current time step k.


Specifically, reference is made to the embodiment shown in FIG. 9: at time step k, the bit in the current state is bk, and the second transmitted bit that distinguishes the most likely state s′min from the less likely state s′max is bk-1. The Ilr values of the second transmitted bit bk-1, Ilrnew,s0 and Ilrnew,s1 respectively for s=0 and s=1, may be stored in the first Ilr position of the further memories corresponding to the respective possible current states.


In one embodiment, the equalizer may determine whether the obtained Ilr values Ilrold,min, Ilrold,max are retrieved from the last Ilr position of the memory, namely whether m=M, and whether the first possible state s is the most likely state smin related to the received sample at the current time step k. Based on determining that the first possible state s is the most likely state smin and that m=M, the equalizer may output the updated Ilr value Ilrold,updated, and FEC decoding may further be performed, for example by an FEC decoder, based on the output Ilr value Ilrold,updated.


Specifically, referring to the embodiment shown in FIG. 9, the updated Ilr value of the transmitted bit, the old Ilr values of which were obtained from the Mth Ilr position of the memory, may be stored as temporary output of the corresponding current state s. Then, one of the possible current states s=0 or s=1 may be determined as the most likely state smin related to the received sample at the current time step k, for example, based on path metrics. Specifically, the most likely state may be determined as the one with the minimal PM. A skilled person will know other schemes to determine the most likely current state smin. The temporary output corresponding to the most likely current state may be provided as output of the equalizer, and provided further to an FEC decoder. The FEC decoder may perform FEC decoding based on the Ilr value output by the equalizer.


The storage of the temporary output may be optional. For example, the equalizer may first determine which one of the possible states is the most likely state smin related to the received sample at the current time step k, and then perform the update procedure for the oldest transmitted bit considering the most likely current state smin is the current state. The updated Ilr value of the oldest transmitted bit and associated with the most likely current state smin is directly provided as output of the equalizer.


In one example, the further memories corresponding to the current states may reuse the memories corresponding to the previous states. For example, the update procedure may be performed first for the oldest one among the transmitted bits, the Ilr values of which are stored in the memories corresponding to the previous states. For example, in FIG. 9, the update is first performed for the oldest transmitted bit bk-1-M. After update, the updated Ilr value IlrM_updated,smin associated with the most likely current state smin may be output to the FEC decoder as described above. Then the update procedure may be repeated for the other transmitted bits in order of increasing reception time, bk-M, bk-M+1, . . . , bk-2. After update, the Ilr values of each transmitted bit may be shifted by one Ilr position in the memory towards the Ilr position corresponding to the oldest transmitted bit. The first Ilr position of the further memory is filled with the Ilr value of the second transmitted bit Ilrnew.


In other examples, the further memories corresponding to the possible current state s may be different from the memories corresponding to respective possible previous state s′. For example, the update procedure may be repeated in arbitrary order for all the transmitted bits and the updated Ilr values may be stored in the further memory different from the memories corresponding to respective possible previous state s′, where the old Ilr values were stored. After Ilr values of all the transmitted bits are updated, the content in the further memory may be rewritten into the memory corresponding to the respective previous state s′. The current state s becomes the previous state s′.


In some embodiments, for each possible current state s, there may be more than two possible previous states, for example in case of higher-order modulation formats, like PAM3, PAM4 or PAM8. One of the possible previous states may be determined as the most likely state, the other possible previous states may be all determined as less likely states. The equalizer may repeat the obtaining S810, determining S820 and updating S830 for respective less likely state of the received sample at a previous time step k−1.


For example, in case there are four previous states, namely s′0, s′1, s′2 and s′3 in the PAM4 example, there will be one most likely state s′min and three less likely states s′max,1, s′max,2 and s′max,3. Now, for each less likely state s′max,j a first parameter Qj can be defined, which corresponds to the ratio of the highest likelihoods of s′min and s′max,j:







Qj = log(max(Ps′min,0, Ps′min,1)/max(Ps′max,j,0, Ps′max,j,1))






The update formula then becomes:






Ilrold,updated = sign(Ilrold,min) min(|Ilrold,min|,

Q1 + ~xor(Ilrold,min>0, Ilrold,max,1>0)·|Ilrold,max,1|,

Q2 + ~xor(Ilrold,min>0, Ilrold,max,2>0)·|Ilrold,max,2|,

Q3 + ~xor(Ilrold,min>0, Ilrold,max,3>0)·|Ilrold,max,3|)


One way to implement this is to apply the processing of FIG. 10 three times serially in a row, once for each less likely state. For instance, first the processing of FIG. 10 would be applied on inputs Ilrold,min and Ilrold,max,1, leading to an intermediate result Ilrresult,1. Then, the same processing is applied on inputs Ilrresult,1 and Ilrold,max,2, leading to the intermediate result Ilrresult,2, and then again on Ilrresult,2 and Ilrold,max,3, finally leading to the output result Ilrold,updated. Alternatively, the update could be done for only one of the less likely states (e.g., for the next-to-most likely state, i.e., the one with the smallest Qj), or for two of the less likely states (e.g., for the two less likely states with the smallest Qj). This lowers the complexity, but at the price of reduced performance.
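Under the same sign and magnitude conventions as before, the multi-state update formula can be sketched directly (Python; a minimal illustration with the serial three-pass structure folded into a single minimum, function and argument names assumed for illustration):

```python
def update_llr_multistate(llr_old_min, llr_old_maxes, qs):
    """Update for one current state with several less likely previous
    states (e.g. three for PAM4). llr_old_maxes[j] and qs[j] belong to
    the j-th less likely state s'_max,j."""
    sign = 1.0 if llr_old_min >= 0 else -1.0
    candidates = [abs(llr_old_min)]
    for llr_max, q in zip(llr_old_maxes, qs):
        # ~xor of the sign bits is 1 when the signs agree, 0 otherwise
        same_sign = (llr_old_min >= 0) == (llr_max >= 0)
        candidates.append(q + (abs(llr_max) if same_sign else 0.0))
    return sign * min(candidates)
```

With empty lists this degenerates to the SOVE behaviour of keeping Ilrold,min unchanged.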


This proposed processing yields a performance close to that of the BCJR algorithm. It does so at a much lower complexity, as only a single pass through the trellis is required. Also, not all the path metrics have to be stored in memory, but only the last M sets of Ilrs, which reduces the memory requirement significantly.



FIG. 11 shows a block diagram of an apparatus according to various embodiments.


Specifically, FIG. 11 depicts the apparatus 1100 operating in accordance with an example embodiment of the invention. The apparatus 1100 may be, for example, an electronic device, or a receiver in a communication link. In one example, the apparatus 1100 may be implemented in an OLT 110, or in an ONU 121, 122, 123. The apparatus 1100 includes a processor 1110 and a memory 1160. In other examples, the apparatus 1100 may comprise multiple processors.


In the example of FIG. 11, the processor 1110 is a control unit operatively connected to read from and write to the memory 1160. The processor 1110 may also be configured to receive control signals received via an input interface and/or the processor 1110 may be configured to output control signals via an output interface. In an example embodiment, the processor 1110 may be configured to convert the received control signals into appropriate commands for controlling functionalities of the apparatus 1100.


The memory 1160 stores computer program instructions 1120 which when loaded into the processor 1110 control the operation of the apparatus 1100 as explained above. In other examples, the apparatus 1100 may comprise more than one memory 1160 or different kinds of storage devices.


Computer program instructions 1120 for enabling implementations of example embodiments of the invention or a part of such computer program instructions may be loaded onto the apparatus 1100 by the manufacturer of the apparatus 1100, by a user of the apparatus 1100, or by the apparatus 1100 itself based on a download program, or the instructions can be pushed to the apparatus 1100 by an external device. The computer program instructions may arrive at the apparatus 1100 via an electromagnetic carrier signal or be copied from a physical entity such as a computer program product, a memory device or a record medium such as a Compact Disc (CD), a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disk (DVD) or a Blu-ray disk.


According to an example embodiment, the apparatus 1100 comprises means for performing, wherein the means for performing comprises at least one processor 1110, at least one memory 1160 including computer program code 1120, the at least one memory 1160 and the computer program code 1120 configured to, with the at least one processor 1110, cause the performance of the apparatus 1100.


Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on the apparatus, a separate device or a plurality of devices. If desired, part of the software, application logic and/or hardware may reside on the apparatus, part of the software, application logic and/or hardware may reside on a separate device, and part of the software, application logic and/or hardware may reside on a plurality of devices. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a ‘computer-readable medium’ may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of a computer described and depicted in FIG. 11. A computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.


If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined.


Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.


It will be obvious to a person skilled in the art that, as the technology advances, the inventive concept can be implemented in various ways. The invention and its embodiments are not limited to the examples described above but may vary within the scope of the claims.

Claims
  • 1. An apparatus, comprising: at least one memory configured to store instructions; andat least one processor configured to execute the instructions and cause the apparatus to perform, obtaining for a first possible state (s) of a received sample at the current time step (k), log-likelihood ratio, Ilr, values Ilrold,min, Ilrold,max of a first transmitted bit (bj), wherein, the Ilr values Ilrold,min, Ilrold,max are respectively associated with a most likely state (s′min) and a less likely state (s′max) related to a received sample at the previous time step (k−1);determining based on path metrics and branch metrics corresponding to the received sample at the current time step (k); a first parameter (Q) related to a difference between likelihoods of the most likely state (s′min) and the less likely state (s′max);updating magnitude of the Ilr value Ilrold,min at least based on the Ilr value Ilrold,min, the Ilr value Ilrold,max, and the first parameter, to obtain an updated Ilr value Ilrold,updated.
  • 2. The apparatus of claim 1, wherein the updating includes, reducing the magnitude of the Ilr value Ilrold,min, at least based on the sign of the Ilr value Ilrold,min, the Ilr value Ilrold,max and the first parameter.
  • 3. The apparatus of claim 1, wherein the updating includes, determining whether the Ilr value Ilrold,min and the Ilr value Ilrold,max have the same sign;based on determining the Ilr value Ilrold,min and the Ilr value Ilrold,max have the same sign, determining the magnitude of the updated Ilr value Ilrold,updated as the minimum of the magnitude the Ilr value Ilrold,min and the sum of the first parameter (Q) and the magnitude of the Ilr value Ilrold,max;based on determining the Ilr value Ilrold,min and the Ilr value Ilrold,max do not have the same sign, determining the magnitude of the updated Ilr value Ilrold,updated as the minimum of the magnitude of the Ilr value Ilrold,min and the first parameter (Q).
  • 4. The apparatus of claim 1, wherein the apparatus is further caused to perform, repeating the obtaining, determining and updating for a predetermined number M of transmitted bits.
  • 5. The apparatus of claim 1, wherein the apparatus is further caused to perform, repeating the obtaining, determining and updating for respective possible state (s) of the received sample at the current time step (k).
  • 6. The apparatus of claim 1, wherein, the first parameter is a magnitude of a log-likelihood ratio (Ilrnew) between the most likely state (s′min) and the less likely state (s′max).
  • 7. The apparatus of claim 1, wherein the apparatus is further caused to perform, obtaining the Ilr values Ilrold,min, Ilrold,max of the first transmitted bit (bj) respectively from the mth Ilr position of a memory corresponding to respective possible state (s′) of the received sample at the previous time step (k−1); wherein, 1≤m≤M, M being related to the length of the memory.
  • 8. The apparatus of claim 1, wherein the apparatus is further caused to perform, storing the updated Ilr value Ilrold,updated into the (m+1)th Ilr position of a further memory corresponding to the first possible state (s) of the received sample at the current time step (k), when 1≤m<M.
  • 9. The apparatus of claim 7, wherein the apparatus is further caused to perform, determining whether the first possible state (s) is the most likely state (smin) related to the received sample at the current time step (k) and whether m=M; based on determining the first possible state (s) is the most likely state (smin), and m=M, outputting the updated Ilr value (Ilrold,updated) and performing FEC decoding based on the output Ilr value (Ilrold,updated).
  • 10. The apparatus of claim 1, wherein the received samples are modulated according to any one of: a binary modulation scheme, especially the non-return to zero on-off keying (NRZ-OOK) modulation, andPAM 4 modulation.
  • 11. The apparatus of claim 1, wherein the received samples are binary modulated, especially using non-return to zero on-off keying (NRZ-OOK) modulation, the first parameter is a magnitude of a Ilr value (Ilrnew) of a second transmitted bit associated with the first possible state (s), wherein, the second transmitted bit distinguishes the most likely state (s′min) from the less likely state (s′max).
  • 12. The apparatus of claim 8, wherein the apparatus is further caused to perform, storing the Ilr value (Ilrnew) of the second transmitted bit into the first Ilr position of the further memory corresponding to the first possible state (s) of the received sample at the current time step (k).
  • 13. The apparatus of claim 1, wherein the apparatus is further caused to perform, repeating the obtaining, determining and updating for respective less likely state (s′max1, s′max2, s′max3) of the received sample at a previous time step (k−1).
  • 14. A method comprising: obtaining, by an equalizer based on Viterbi algorithm in a receiver in a communication link, for a first possible state (s) of a received sample at the current time step (k), log-likelihood ratio, Ilr, values Ilrold,min, Ilrold,max of a first transmitted bit (bj), wherein, the Ilr values Ilrold,min, Ilrold,max are respectively associated with a most likely state (s′min) and a less likely state (s′max) related to a received sample at a previous time step (k−1);determining, by the equalizer, based on path metrics and branch metrics corresponding to the received sample at the current time step (k); a first parameter related to a difference between likelihoods of the most likely state (s′min) and the less likely state (s′max);updating, by the equalizer, magnitude of the Ilr value Ilrold,min at least based on the Ilr value Ilrold,min, the Ilr value Ilrold,max, and the first parameter, to obtain an updated Ilr value Ilrold,updated.
  • 15. A non-transitory computer readable medium storing instructions which, executed by a computer, cause the computer to perform obtaining for a first possible state (s) of a received sample at the current time step (k), log-likelihood ratio, Ilr, values Ilrold,min, Ilrold,max of a first transmitted bit (bj), wherein, the Ilr values Ilrold,min, Ilrold,max are respectively associated with a most likely state (s′min) and a less likely state (s′max) related to a received sample at a previous time step (k−1);determining based on path metrics and branch metrics corresponding to the received sample at the current time step (k); a first parameter related to a difference between likelihoods of the most likely state (s′min) and the less likely state (s′max);updating magnitude of the Ilr value Ilrold,min at least based on the Ilr value Ilrold,min, the Ilr value Ilrold,max, and the first parameter, to obtain an updated Ilr value Ilrold,updated.
Priority Claims (1)
Number Date Country Kind
22196395.2 Sep 2022 EP regional