The present disclosure relates to optimizing the capacity of communication channels, and addresses in particular the case of non-trivial channels, for which an optimal input distribution cannot be obtained theoretically.
An essential characteristic of a memoryless communication channel can be represented by the so-called “conditional probability distribution”, denoted pY|X(y|x), of the output Y given the input X. Some examples of well-known communication channels are given in the following list:
Once the conditional probability distribution pY|X(y|x) is accurately known, it is possible to optimize the communication system relying on
The channel conditional probability distribution pY|X(y|x) is assumed hereafter to have low fluctuations, so that learning and tracking are possible.
In general, wireless communication systems can be modelled by linear systems whose parameters are estimated by sending training signals. However, when the number of parameters to be tracked is high, a large pilot overhead is required, which reduces the system throughput. Furthermore, when the channel is more sophisticated, defining the model to be tracked is difficult or impossible. The future of wireless communication systems is to keep increasing the carrier frequency with the goal of exploiting available frequency bands. Terahertz communication (above 300 GHz) is one keyword envisioned for new-generation telecommunications (6G). However, to the best of today's knowledge, terahertz communication requires new materials and new radio equipment at the frontier between electronics and photonics. The strong non-linear and random characteristics of these new channels cannot be ignored if the best benefit is to be taken from these new frequency bands. It is therefore of interest to learn such communication channels instead of relying on inaccurate models.
In order to optimize the transmission strategy, the optimized input distribution must be known in advance at the transmitter. The received signal is used to estimate the channel conditional probability distribution, which in turn is used for computing the optimal input distribution, which is finally provided to the transmitter. The compact and accurate representation of the channel conditional probability distribution is used in order to
A problem lies, however, in jointly learning the channel conditional probability distribution and optimizing the transmission method accordingly. One of the main challenges is to learn the channel conditional probability distribution when the channel model is unknown. In order to address the widest variety of channels, an accurate description of the channel is needed, but this often leads to high complexity at the receiver. It would thus be better to find a compact and accurate representation of the channel conditional probability distribution.
The present disclosure aims to improve this situation.
To that end, it is proposed a method for optimizing a capacity of a communication channel in a communication system comprising at least a transmitter (reference 10 of
The transmitter (10) transmits messages conveyed by a signal associated with a transmission probability according to an input signal probability distribution estimated by said estimator (13).
The transmitter (10) takes at least the messages as inputs and outputs the signal to be transmitted on the channel (12), and the channel takes the transmitted signal as an input and outputs a received signal which is processed at the receiver (11) in order to decode the transmitted message.
The probability distribution, denoted pY|X(y|x), relates to an output Y, corresponding to the received signal, given an input X, corresponding to the transmitted signal, and thus represents a channel conditional probability distribution of the probability of outputting a received signal Y when the transmitted signal X is given.
More particularly, the probability distribution is estimated as an approximation of the channel conditional probability distribution pY|X(y|x) by using a functional basis of probability distributions. The channel conditional probability distribution is thus approximated, for each possible signal sent by the transmitter, by a mixture model with mixing probability distribution from said functional basis. The probability distribution for each possible transmitted signal is estimated from at least output signals received at the receiver, and by using a collapsed Gibbs sampling relying on a Dirichlet process as detailed in the presentation of embodiments below.
The implementation of the present disclosure then makes it possible to reach an accurate representation of the channel conditional probability distribution, and thereby enhances the robustness of the decoding.
The wording according to which the transmitter (10) takes “at least” the messages as inputs, refers to the fact that the transmitter can take also the estimated input signal probability distribution, and more particularly an optimized input signal probability distribution, in an optional embodiment implementing an optimizer (14). So too, the probability distribution for each possible transmitted signal can be estimated from the output signals received at the receiver and also thanks to the optimized input signal probability distribution.
In an embodiment which appears advantageous with respect to the calculations to perform, the aforesaid functional basis is chosen from the exponential family of functions. Alternatively, other possible embodiments may use known methods of numerical integration.
In an embodiment, the probability distribution estimation is based on an approximation of the channel conditional probability distribution relying on a decomposition into a basis of probability distribution functions g(y|x; θ), where θ is a set of parameters θj, said distribution functions belonging to the exponential family with the parameters being a mean and a variance, such that the approximation of pY|X(y|x) in said basis is given by $p_{Y|X}(y|x) \approx \sum_{j=1}^{N} w_j\, g(y|x;\theta_j)$, where N and the sets {θj}, {wj} are parameters to be estimated.
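By way of a purely illustrative sketch (not part of the claimed method), the mixture approximation above can be evaluated as follows with Gaussian components; the component count, weights {wj} and parameters {θj} below are hypothetical placeholders standing in for estimated values.

```python
# Minimal sketch: evaluate p_{Y|X}(y|x) ~= sum_j w_j g(y|x; theta_j)
# with Gaussian components (weights, means and variances are hypothetical).
import numpy as np
from scipy.stats import norm

def mixture_pdf(y, weights, means, variances):
    """Evaluate sum_j w_j N(y; mu_j, var_j) at the points y."""
    y = np.atleast_1d(y)
    return sum(w * norm.pdf(y, loc=m, scale=np.sqrt(v))
               for w, m, v in zip(weights, means, variances))

# Hypothetical estimate with N = 2 components:
w = [0.7, 0.3]      # mixing weights {w_j}, summing to one
mu = [0.0, 1.5]     # component means (part of {theta_j})
var = [1.0, 0.25]   # component variances (part of {theta_j})
print(mixture_pdf([0.0, 1.0], w, mu, var))
```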
In this embodiment, the probability distribution functions g(y|x; θ) being from the exponential distribution family, the prior on θ can be conjugate to the corresponding exponential distribution, with λ={λ1, λ2} denoting the hyperparameter of the conjugate prior on the parameter θ,
And the following steps can then be performed:
In this embodiment, the updated parameters {θj}, {wj} can be sent to the transmitter and/or to the receiver so as to compute said conditional probability distribution estimation. The signalization obtained at the receiver and/or at the transmitter can then be enhanced with such updated parameters. In this embodiment then, the transmitter can improve the accuracy of its knowledge of the conditional probability distribution and thereby optimize more efficiently its transmission strategy. On the receiver side, the improved accuracy of the knowledge of the conditional probability distribution can be used to improve its decoding performance.
In an embodiment, the posterior distribution $p(c_i=k \mid x_{1:n}, y_{1:n}, c_{-i}, \lambda)$ can be given by:
$p(c_i=k \mid x_{1:n}, y_{1:n}, c_{-i}, \lambda) = p(c_i=k \mid c_{-i})\, p(y_i \mid x_{1:i-1}, x_{i+1:n}, y_{1:i-1}, y_{i+1:n}, c_{-i}, c_i=k, \lambda)$, where:
and $p(y_i \mid x_{1:i-1}, x_{i+1:n}, y_{1:i-1}, y_{i+1:n}, c_{-i}, c_i=k, \lambda) = \int p(y_i \mid x_i, \theta)\, p(\theta \mid x_{1:i-1}, x_{i+1:n}, y_{1:i-1}, y_{i+1:n}, c_{-i}, c_i=k, \lambda)\, d\theta$
In this embodiment, it can be furthermore checked whether a new cluster is to be created so as to determine a best value of parameter N, providing an optimum approximation of the channel conditional probability distribution.
In this embodiment, it can be randomly chosen to create a new cluster from a probability verifying:
$p(y_i \mid x_i, P_0) = \int p(y_i \mid x_i, \theta)\, dP_0$, where P0 is the conjugate prior to p(y|x; θ)
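As an illustration of the two quantities above, the following sketch computes the unnormalized assignment probabilities for one observation, including the option of opening a new cluster. The assumptions are mine, not the disclosure's: scalar Gaussian components with known variance and a conjugate Normal prior on each component mean, so that both the per-cluster predictive and the integral against P0 have closed forms.

```python
# Minimal sketch of the collapsed Gibbs assignment step for one observation.
import numpy as np
from scipy.stats import norm

def assignment_probs(y_i, clusters, alpha, sigma2=1.0, mu0=0.0, tau2=1.0):
    """Probabilities of joining each existing cluster or opening a new one;
    `clusters` is a list of arrays holding each cluster's observations."""
    probs = []
    for members in clusters:
        n_k, ybar = len(members), np.mean(members)
        # Posterior of the cluster mean given its current members...
        post_var = 1.0 / (1.0 / tau2 + n_k / sigma2)
        post_mean = post_var * (mu0 / tau2 + n_k * ybar / sigma2)
        # ...gives the predictive p(y_i | members) in closed form.
        pred = norm.pdf(y_i, post_mean, np.sqrt(post_var + sigma2))
        probs.append(n_k * pred)              # p(c_i = k | c_-i) grows with n_k
    # New cluster: alpha times the prior predictive, i.e. the P0 integral.
    probs.append(alpha * norm.pdf(y_i, mu0, np.sqrt(tau2 + sigma2)))
    p = np.array(probs)
    return p / p.sum()

print(assignment_probs(0.4, [np.array([0.1, 0.3]), np.array([2.0])], alpha=1.0))
```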
In a first embodiment and in a second embodiment, a transmitted message is conveyed by a signal which belongs to a finite set of signals, and this set of signals is represented by a set of possible respective symbols x. In the first embodiment, the transmitted symbols x are known at the receiver in a “training mode”, while in the second embodiment they are not known.
Therefore, in the first embodiment, the method can be applied for only one predefined value of the input X, this input X being known, and the associated output Y is considered only when the input X equals said predefined value.
In a third embodiment and in a fourth embodiment, the transmitted message is conveyed by a signal which does not necessarily belong to a finite set of signals. The third embodiment corresponds to the situation where the input X is known, while the fourth embodiment corresponds to the situation where the input X is not known and the transmitted signal does not belong to a known finite set of signals.
Therefore, the second and fourth embodiments, where the input X is not known, relate to a tracking mode. In this tracking mode, an input signal probability distribution pX(x) is taken into account for the channel conditional probability distribution estimation. A value of a transmitted symbol x can be inferred with a Bayesian approach, so that each observation y contributes to an update of pY|X(y|x) with a weight corresponding to the probability of y being associated to x, from a current estimation of the conditional probability distribution.
It should be noted that in the aforesaid second embodiment, the input signal probability distribution pX(x) becomes simply a scalar (a probability value), and pY|X(y|x) becomes pY|X(y|x=ωm), with a weight corresponding to the probability of y being associated to x=ωm.
It should be noted also that, in the first and second embodiments, the output Y can be selected on the basis of the set of possible inputs X, before building the N clusters for parameter sets {θj}, {wj}. Furthermore, the input X can be known typically in the first embodiment, leading thus to a high reduction of complexity when the first embodiment is implemented.
Otherwise, when the input X is not known and does not even belong to a finite known set of symbols (or signals), typically in the fourth embodiment, a model for the non-parametric joint density can nevertheless be defined as:
$p(y,x) = \sum_{j=1}^{N} w_j\, g(y|x,\theta_j)\, p(x|\psi_j)$ (21), where:
parameters {θj}, {wj}, and {ψj} denote conditional density parameters, j being a cluster index, the parameters (θ, ψ) being jointly obtained from a base measure of a Dirichlet Process “DP” such that $(\theta, \psi) \sim DP(\alpha\, P_0^{\theta} \times P_0^{\psi})$, where α is a scaling parameter.
According to the option related to the optimization computation presented above, the set of signals being represented by a set of respective symbols, the parameters of said mixture model are optimized for defining optimized symbol positions and probabilities to be used at the transmitter, in order to optimize the capacity of the approximated channel, the optimized symbol positions and probabilities being then provided to the receiver and/or to the transmitter.
The input signal distribution optimizer (reference 14 of
The present disclosure aims also at a system comprising at least a transmitter (10), a receiver (11), a communication channel (12) between the transmitter and the receiver, and a channel conditional probability distribution estimator (13) for implementing the method above. In this system, the estimator:
The system can furthermore comprise an input signal probability distribution optimizer (14) for performing the method implementing the optimization. Here, the input signal distribution optimizer (14):
The present disclosure aims also at a computer program comprising instructions for implementing the method as presented above, when such instructions are executed by a processor. The instructions of the program can be distributed over the receiver, the transmitter, the estimator and optionally the optimizer.
More details and advantages of the invention will be understood when reading the following description of embodiments given below as examples, and will appear from the related drawings where:
It is described hereafter a context of the present disclosure where, in addition to the estimation of the channel conditional probability distribution, an optimization of the input signal probability distribution is furthermore performed according to an optional embodiment. These two steps (estimation and optimization) can then rely either on the knowledge of the input signal, or of its probability distribution, or proceed without any knowledge of the input signal (performing then a so-called blind “tracking” phase).
Thus, the input signal probability distribution is to be shared with the transmitter, so as to optimize the channel capacity. The input signal probability distribution is to be shared with the receiver, so as to optimize the receiver's performance. The input signal probability distribution is to be shared with the channel conditional probability distribution estimator, so as to improve the channel conditional probability distribution estimation performance.
By improving the input signal probability distribution according to the channel knowledge, the channel capacity is improved, which further allows improving the channel conditional probability distribution estimation, and so on. Thus, an iterative approach of jointly optimizing the channel conditional probability distribution estimation and the input signal probability distribution is preferred. Another advantage of this approach is to allow tracking changes of the input signal probability distribution, provided their variation speed is not too high.
The channel conditional probability distribution estimator can implement an approximation of the channel conditional probability distribution pY|X(y|x) by using a functional basis of probability distributions, preferably from the exponential family. Indeed, it is shown below that specific calculations are easier to manipulate with such exponential functions.
As a general approach, a transmitter uses a finite set of symbols on a communication channel characterized by a conditional probability distribution. The conditional probability distribution can be approximated, for each possible symbol sent by the transmitter, by a mixture model with mixing probability distributions from the exponential family. The conditional probability distribution for each possible transmitted symbol of the finite set of symbols is estimated from at least output symbols, using a collapsed Gibbs sampling relying on a Dirichlet process. The result is in the form of a mixture model, and the number of components is learnt directly from the observations. The exponential family makes a simpler implementation of the Gibbs sampling possible.
The conditional probability distribution estimation is preferably computed at the receiver.
In a first embodiment, called “training mode”, it is considered that the symbols x are known at the receiver.
In a second embodiment, called “tracking mode”, the symbols x are unknown at the receiver. However, it is considered that the input signal probability distribution pX(x) is known and taken into account during the conditional probability distribution estimation step.
The two embodiments can be interleaved in time according to the cases where pilot symbols are sent, where data has been correctly decoded and can be used as pilot signals for the channel estimation and approximation, or where data has not been correctly decoded.
In addition, the parameters of the mixture model as estimated by the conditional probability distribution estimator can be provided to an input distribution optimizer (according to the aforesaid optional embodiment) that defines optimized symbol positions and probabilities to be used at the transmitter for optimizing the capacity of the approximated channel. The optimized symbol positions and probabilities are then provided to the transmitter and receiver.
Referring to
A channel conditional probability distribution of the probability of outputting a given signal, when the input is fixed, is computed. The probability distribution can generally be defined on a discrete or continuous input and/or output alphabet. A continuous output alphabet is preferably considered, and the probability distribution is called a “probability density function” in this case.
The channel conditional probability distribution estimator 13 takes the received signal and the input signal or its estimated (or optionally optimized) probability distribution as inputs and outputs the channel conditional probability distribution. It is located preferably in the receiver 11, in the transmitter 10, or in an external computing device.
The input signal probability distribution optimizer 14 can then take the conditional probability distribution estimation as an input and output the optimized input signal probability distribution to the transmitter 10 and receiver 11, here. The conditional probability distribution estimation can then be used in this embodiment for computing the optimized input signal probability distribution at the input signal probability distribution optimizer 14. Furthermore, it is shown below that the optimization can be made more efficient when the conditional probability distribution estimation is approximated by a mixture of exponential distributions.
The receiver 11 takes the received signal, the optimized input signal probability distribution and the estimated channel conditional probability distribution as inputs and performs an estimation of the message conveyed in the received signal.
In this example of embodiment, the conditional probability distribution estimator 13 implements preferably the following steps as illustrated in
The input signal distribution optimizer 14 can implement in an exemplary embodiment the steps illustrated in
The transmitter 10 preferably implements the steps illustrated in
The receiver 11 can implement the steps illustrated in
The purpose of the conditional probability distribution estimator 13 is to estimate the distribution pY|X(y|x) based on an approximation of the channel probability distribution relying on a decomposition into a basis of probability distribution functions g(y|x; θ), where θ is a parameter set. The distribution functions belong to the exponential family, and the parameters are essentially the mean and variance in the scalar case, and more generally the mean vector and covariance matrix in the multivariate case. As an example, the parameter set θ may contain the mean vector and covariance matrix of a so-called “multivariate normal distribution”. In another example, it may contain the shape parameter and spread parameter of a Nakagami distribution.
In general, the considered functions g(y|x; θ) are written in the form
$g(y|x;\theta) = h(y,\theta)\exp\left(x^{T}y - a(x,\theta)\right),$
where h(y, θ) is a function of y and θ, and a(x, θ) is the moment generating function (x and y being vectors in this general case).
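As a hedged illustration (this example is mine, not taken from the disclosure), a scalar additive white Gaussian noise channel y = x + n with unit noise variance fits exactly this form:

$p(y|x) = \frac{1}{\sqrt{2\pi}}\, e^{-y^{2}/2}\, \exp\!\left(xy - \frac{x^{2}}{2}\right),$

i.e., $h(y,\theta) = \frac{1}{\sqrt{2\pi}} e^{-y^{2}/2}$ and $a(x,\theta) = x^{2}/2$, since $\exp(-(y-x)^{2}/2) = \exp(-y^{2}/2)\exp(xy - x^{2}/2)$.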
The conjugate prior for the above exponential model is of the form:
$g(\theta|\lambda) = g'(\theta)\exp\left(\lambda_1^{T}\theta - \lambda_2\, a(\theta) - b(\lambda_1,\lambda_2)\right),$
in which λ = {λ1, λ2}, g′(θ) is a scalar function of θ, the parameter λ1 has the same dimension as θ, λ2 is a scalar, and b(λ1, λ2) is the moment generating function of the prior, chosen such that the prior integrates to one.
Thus, the objective of the conditional probability distribution estimator 13 is to find the best approximation of pY|X(y|x) in the aforementioned basis:
$p_{Y|X}(y|x) \approx \sum_{j=1}^{N} w_j\, g(y|x;\theta_j)$
where N, and the sets {θj}, {wj} are parameters to be estimated. The weights wj are scalar parameters.
The function pY|X(y|x) is bivariate, with variables x and y which in general span a continuous domain.
In a first case, it is assumed that the symbols x belong to a finite alphabet Ω={ω1, . . . , ωM} of cardinality M, which is especially fixed and known. In a first embodiment according to the “training mode”, the symbols x are known at the receiver. In a second embodiment according to the “tracking mode”, the symbols x are unknown at the receiver but the input signal probability distribution pX(x) is known.
In a second case, the symbols x might belong to a finite set of signals, but this set is not especially fixed, nor known. In a third embodiment according to the “training mode”, the symbols x are known at the receiver. In a fourth embodiment according to the “tracking mode”, the symbols x are unknown.
In the first case (common to the first and second embodiments above), the conditional probability distribution is fully characterized by
where pX(x=ωm) is null everywhere except at ωm, where the value is equal to the probability that the symbol ωm is transmitted. Thus, it is sought to approximate the M functions pY|X(y|x=ωm), functions of y only, for the M possible values of x.
This can be done in parallel by using M estimators, as will be stated below.
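A minimal sketch of this routing (an illustration under my own assumptions, with a hypothetical BPSK alphabet): in training mode the transmitted symbol of each observation is known, so each received sample is simply dispatched to the estimator of its symbol, and the M estimators then run independently, possibly in parallel.

```python
# Minimal sketch: group received samples y_i by the known transmitted x_i.
import numpy as np

def route_to_estimators(x_seq, y_seq, alphabet):
    """Return one observation array per symbol of the alphabet."""
    buckets = {omega: [] for omega in alphabet}
    for x_i, y_i in zip(x_seq, y_seq):
        buckets[x_i].append(y_i)
    return {omega: np.array(ys) for omega, ys in buckets.items()}

alphabet = (-1.0, 1.0)                 # hypothetical BPSK symbols
x = [1.0, -1.0, 1.0, 1.0]              # known pilots
y = [0.9, -1.2, 1.1, 0.7]              # corresponding received samples
per_symbol = route_to_estimators(x, y, alphabet)
# per_symbol[omega] now feeds the estimator of p_{Y|X}(y | x = omega).
print(per_symbol)
```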
Thus, it is now considered the following approximation:
$p_{Y|X}(y|x=\omega_m) \approx \sum_{j=1}^{N_m} w_{m,j}\, g(y|x=\omega_m;\theta_{m,j})$ (1)
Here, each transition depends on the transmitted symbol (M possibilities, with M the cardinality of the known set of symbols in the first case, shared by the first and second embodiments).
Solving the estimation problem of equation (1) is in general difficult. Indeed, the number Nm providing the best approximation is unknown, which makes the use of deterministic methods, such as the expectation-maximization algorithm, difficult.
Several embodiments are possible for the conditional probability distribution estimation and differ according to assumptions on the knowledge of x or its statistics.
Embodiments are described below especially for the conditional probability distribution estimation performed by the estimator 13. As will be apparent below, these embodiments do not need necessarily any optimizer 14 (which is optional as indicated above).
In the first embodiment related to the “training mode”, it is assumed that the symbols x sent by the transmitter are known at the conditional probability distribution estimator 13. These symbols are used as pilot symbols in order to learn the best approximation as shown in
In this first embodiment related to the training mode, as well as in a second embodiment related to a tracking mode detailed below, the optimizer 14 is not necessary, and the estimation of pY|X(y|x) is finally needed at the receiver 11 only.
The training mode is activated at the transmitter 10 on a given time/frequency resource known at the receiver 11, which also knows the sent message or, equivalently, x. Therefore, referring to
When x is perfectly known at the receiver, solving equation (1) can be achieved by using the method based on Gibbs sampling as disclosed for example in
NEAL00: Neal, Radford M., “Markov chain sampling methods for Dirichlet process mixture models,” Journal of Computational and Graphical Statistics, 9.2 (2000): 249-265.
It provides a computationally efficient and high-performance solution, especially when the probability distribution functions g(y|x=ωm; θ) belong to the exponential family. Gibbs sampling is a randomized algorithm, i.e., it uses generated random values to perform an approximation of a statistical inference instead of relying on a deterministic approach. In general, this involves an iterative approach. Using sampling allows reducing the complexity by manipulating a finite number of elements instead of continuous functions. Using Gibbs sampling in particular allows efficient manipulation of the complicated multivariate functions representing the multivariate probability distribution used in statistical inference methods. Thus, the main principle for performing the estimation in (1) is to sample (i.e., obtain representative samples) from the posterior distribution on the parameters {θm,j}, {wm,j} knowing observation samples y. In particular, a Dirichlet process can be used as the prior probability distribution in infinite mixture models. It is a distribution over the possible parameters of a multinomial distribution and has the advantageous property of being the conjugate prior of the multinomial distribution. This property helps in simplifying some steps of the Gibbs sampling. Furthermore, it is known to lead to few dominant components. Thus, by finding these Nm dominant components, the approximation is compact and accurate.
Some adaptation is needed to apply the teaching of NEAL00 to the present context. x is assumed to belong to a finite alphabet which is known at the receiver. Each observation yi is associated to a given symbol xi out of M possible values of x, and feeds one out of M parallel estimators according to the value of xi (each estimator is then one of the three estimators described in [NEAL00]). Finally, M functions pY|X(y|x=ωm) are obtained out of the M estimators. In particular, the parameters Nm, {θm,j}, {wm,j} are the output of each estimator and characterize the M functions pY|X(y|x=ωm).
Details of an example of embodiment are given below relative to the training mode, as shown on
The conditional probability distribution estimation is based on an approximation of the channel probability distribution relying on a decomposition into a basis of probability distribution functions g(y|θm,j), where θm,j is a parameter set, in step S10. The distribution functions belong to the exponential family, and the parameters are essentially the mean and variance in the scalar case, and more generally the mean vector and covariance matrix in the multivariate case.
In step S11, the conditional probability distribution estimator can find the best approximation of pY|X(y|x=ωm) in the aforementioned basis
$p_{Y|X}(y|x=\omega_m) \approx \sum_{j=1}^{N_m} w_{m,j}\, g(y|\theta_{m,j})$
where Nm, and the sets {θm,j}, {wm,j} are parameters to be estimated in step S12.
Solving this estimation problem is in general difficult. Indeed, the number Nm providing the best approximation is unknown, which makes the use of deterministic methods, such as the expectation-maximization algorithm, difficult.
However, the use of clusters described below with reference to steps S121 to S124 of
Using sampling allows reducing the complexity by manipulating a finite number of elements instead of continuous functions. Using Gibbs sampling in particular allows managing efficiently the complicated multivariate functions representing the multivariate probability distribution used in statistical inference methods. Thus, the main principle for performing the estimation in (1) is to sample (i.e., obtain representative samples) from the posterior distribution on the parameters {θm,j}, {wm,j} knowing the observation samples y at the input of the m-th estimator, i.e., after selection of the observed symbols y associated to a transmitted symbol x=ωm as known by the receiver: for time slots where it is known that the transmitted symbol is x=ωm, the observed symbol feeds the m-th estimator.
In particular, a Dirichlet process can be used as the prior probability distribution in infinite mixture models. It is a distribution over the possible parameters of a multinomial distribution and has the property of being the conjugate prior of the multinomial distribution (this helps in simplifying some steps of the Gibbs sampling, as explained in [NEAL00]). Furthermore, it is known to lead to few dominant components. Thus, by finding these Nm dominant components, the approximation is compact and accurate.
In order to determine these components in a tractable way, Monte Carlo methods can be applied (more specifically, Markov chain Monte Carlo), allowing to draw samples from probability distributions. More specifically, Gibbs sampling provides a relevant class of algorithms to implement the present disclosure. The Gibbs sampling method in general involves a numerical integration step, which is computationally demanding. Fortunately, when the probability distribution functions g(y|x; θ) have conjugate prior distributions (which is the case for the exponential family), the integration can be performed in a closed-form expression, which significantly reduces the computation complexity. Since pY|X(y|x=ωm) is a weighted sum of g(y|x; θ) functions, its conjugate prior is also known and computed efficiently. At the output of the Gibbs sampling, the value Nm and the sets {θm,j}, {wm,j} are estimated, as shown as a general step S12 of
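The closed form alluded to above can be made concrete with a minimal sketch, under assumptions that are mine rather than the disclosure's: scalar Gaussian components with a conjugate Normal-inverse-gamma prior on (mean, variance). The posterior predictive needed inside the Gibbs step is then a Student-t, so no numerical integration is required.

```python
# Minimal sketch: closed-form posterior predictive under a
# Normal-inverse-gamma prior (hyperparameters mu0, kappa0, a0, b0 assumed).
import numpy as np
from scipy.stats import t as student_t

def posterior_predictive_pdf(y_new, data, mu0=0.0, kappa0=1.0, a0=1.0, b0=1.0):
    """Closed-form p(y_new | data): a Student-t, no numerical integration."""
    n = len(data)
    ybar = np.mean(data) if n else 0.0
    ss = np.sum((np.asarray(data) - ybar) ** 2) if n else 0.0
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * ybar) / kappa_n
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * ss + kappa0 * n * (ybar - mu0) ** 2 / (2.0 * kappa_n)
    scale = np.sqrt(b_n * (kappa_n + 1.0) / (a_n * kappa_n))
    return student_t.pdf(y_new, df=2.0 * a_n, loc=mu_n, scale=scale)

print(posterior_predictive_pdf(0.5, [0.2, 0.4, 0.1]))
```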
In the present case, the collapsed Gibbs sampling of [NEAL00] is used in a particular embodiment shown in
Then, the general step S12 can be decomposed into sub-steps S121 to S124, as follows:
$p(c_i=k \mid y_{1:n}, c_{-i}, \lambda, x_{1:n}=\omega_m) = p(c_i=k \mid c_{-i})\, p(y_i \mid y_{1:i-1}, y_{i+1:n}, c_{-i}, c_i=k, \lambda, x_{1:n}=\omega_m)$
$p(y_i \mid x_i=\omega_m, P_0) = \int p(y_i \mid x_i=\omega_m, \theta)\, dP_0,$
At the output of this procedure, the parameters Nm, {θm,j}, {wm,j} are estimated and provide the approximation of the channel conditional probability distribution pY|X(y|x=ωm) from (1). This procedure is repeated or performed in parallel for the M estimators.
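A minimal sketch of this read-out step (the summarization details below are assumed, not prescribed by the disclosure): from the cluster labels of the last Gibbs sweep, Nm is the number of occupied clusters, each weight wm,j is the fraction of samples in cluster j, and the θm,j are taken as crude per-cluster point estimates (a single-member cluster gets zero variance in this simple summary).

```python
# Minimal sketch: turn Gibbs cluster labels into (N_m, {w_mj}, {theta_mj}).
import numpy as np

def summarize_clusters(y, labels):
    """Return (N_m, weights, means, variances) from last-sweep labels."""
    y, labels = np.asarray(y), np.asarray(labels)
    ids = np.unique(labels)
    weights = np.array([(labels == k).mean() for k in ids])  # w_mj = n_j / n
    means = np.array([y[labels == k].mean() for k in ids])
    variances = np.array([y[labels == k].var() for k in ids])
    return len(ids), weights, means, variances

y = [0.1, 0.2, 2.1, 1.9, 0.0]
labels = [0, 0, 1, 1, 0]          # hypothetical last-sweep assignments c_i
N_m, w, mu, var = summarize_clusters(y, labels)
print(N_m, w, mu, var)
```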
The second embodiment is given below, implementing a “tracking mode” with statistical knowledge. This mode is efficiently used after the first embodiment. The transmitted symbols x cannot be known in this embodiment, but, as presented on
The main difference with the training mode is that the sent symbol x is unknown. However, the value of x can be inferred with a Bayesian approach, i.e., each observation y contributes to the update of pY|X(y|x=ωm) with a weight corresponding to the probability of y being associated to ωm, from the current estimation of the conditional probability distribution.
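A minimal sketch of this Bayesian weighting (an illustration under my own assumptions, with a hypothetical single-Gaussian current estimate per symbol): each observation y is converted into responsibility weights over the M symbols before updating the per-symbol estimators.

```python
# Minimal sketch: p(x = omega_m | y) from the prior p_X and the current
# channel estimate; each estimator then weights y's contribution by this.
import numpy as np
from scipy.stats import norm

def responsibilities(y, symbols, p_x, current_pdf):
    """p(x=omega_m | y) proportional to p_X(omega_m) * p_hat(y | omega_m)."""
    w = np.array([p_x[m] * current_pdf(y, omega)
                  for m, omega in enumerate(symbols)])
    return w / w.sum()

# Hypothetical current estimate: one unit-variance Gaussian per symbol.
current_pdf = lambda y, omega: norm.pdf(y, loc=omega, scale=1.0)
symbols = [-1.0, 1.0]
p_x = [0.5, 0.5]                      # known input signal distribution
print(responsibilities(0.3, symbols, p_x, current_pdf))
```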
Details are given below regarding the second embodiment implementing the aforesaid tracking mode.
A clustering technique will be used. Let the cluster allocation labels be si=j if θi belongs to the j-th cluster. Let ρn=[s1, . . . , sn]T denote the vector of cluster allocation labels. Using the prediction in the Dirichlet Process (DP) by a covariate-dependent urn scheme, one obtains:
In order to update, for each estimator, the conditional probability distribution pY|X(y|x=ωm) from the observed symbol, one can compute
where $\zeta_{0,y}(y|x_{n+1}) = \int p(y|x_{n+1},\theta)\, dP_0$ and $\zeta_{j,y}(y|x_{n+1}) = p(y|x_{n+1}, x_j^*, y_j^*)$.
The clusters can then be updated similarly as in the first embodiment. Details of the principle of the prediction in the Dirichlet Process (DP) by a covariate-dependent urn scheme are given for example in: “Improving prediction from Dirichlet process mixtures via enrichment”, Sara K. Wade, David B. Dunson, Sonia Petrone, Lorenzo Trippa, Journal of Machine Learning Research (Nov. 15, 2013).
In the third embodiment, the symbols x are not assumed to belong to a finite alphabet. Thus, the bi-variate conditional probability pY|X(y|x) must be learnt for all possible x values.
The conditional probability distribution estimation is based on an approximation of the channel probability distribution relying on a decomposition into a basis of probability distribution functions g(y|x, θj), where θj is a parameter set, in step S10. The distribution functions belong to the exponential family, and the parameters are essentially the mean and variance in the scalar case, and more generally the mean vector and covariance matrix in the multivariate case. The conditional probability distribution estimator can find the best approximation of pY|X(y|x) in the aforementioned basis
$p_{Y|X}(y|x) \approx \sum_{j=1}^{N} w_j\, g(y|x,\theta_j)$
where N, and the sets {θj}, {wj} are parameters to be estimated.
The solution relies on the same algorithm as in the first embodiment, with the following main differences: there is only one estimator instead of M; the transmitter transmits symbols x that are known at the receiver; and the performance is improved by drawing these transmitted symbols from a pseudo-random generator that generates x values on a domain of interest, for example following a Gaussian distribution. By knowing the initial parameters of the pseudo-random generator, both the transmitter and receiver can know the x symbols.
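A minimal sketch of this seed-sharing idea (the exact mechanism is not specified by the disclosure; the shared-seed constant below is hypothetical): both sides regenerate the same Gaussian training symbols from identically initialized pseudo-random generators, so the receiver knows x without extra signalling.

```python
# Minimal sketch: transmitter and receiver derive identical training
# symbols from the same pseudo-random generator seed.
import numpy as np

SHARED_SEED = 1234                    # agreed beforehand by both sides

tx_rng = np.random.default_rng(SHARED_SEED)
rx_rng = np.random.default_rng(SHARED_SEED)

x_tx = tx_rng.normal(loc=0.0, scale=1.0, size=5)  # symbols actually sent
x_rx = rx_rng.normal(loc=0.0, scale=1.0, size=5)  # receiver's reconstruction
assert np.allclose(x_tx, x_rx)        # both sides hold identical x values
```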
The estimator uses a collapsed Gibbs sampling approach as in the first embodiment. In the present case, the collapsed Gibbs sampling of NEAL00 is used in a particular embodiment shown again in the same
The general step S12 can be decomposed again into:
$p(c_i=k \mid y_{1:n}, x_{1:n}, c_{-i}, \lambda) = p(c_i=k \mid c_{-i})\, p(y_i \mid y_{1:i-1}, y_{i+1:n}, x_{1:n}, c_{-i}, c_i=k, \lambda)$
$p(y_i \mid x, P_0) = \int p(y_i \mid x, \theta)\, dP_0,$
where P0 is the conjugate prior to p(y|x; θj) (with parameter λ), which is computable since p(y|x; θj) is a weighted sum of g(y|x, θj) functions belonging to the exponential family. It is then randomly chosen to create a new cluster from this probability.
Details of the use of the Pólya urn scheme can be obtained from: “Ferguson distributions via Polya urn schemes”, D. Blackwell and J. B. MacQueen, The Annals of Statistics, vol. 1, no. 2, pp. 353-355, 1973.
In a fourth embodiment, implementing a tracking mode related to the aforesaid second case, it is of interest to consider that the input does not belong to a finite alphabet and that the sent symbols are unknown. This mode is efficiently used after the third embodiment. The model for the non-parametric joint density is defined as:
$p_{Y,X}(y,x) = \sum_{j=1}^{N} w_j\, g(y|x,\theta_j)\, p(x|\psi_j).$ (21)
The parameters {θj}, {wj}, and {ψj} denote the conditional density parameters (which can be a mean and a covariance in the case of a Gaussian density), the magnitudes (weights), and the corresponding parameters of the input distribution, respectively. Generally speaking, the parameters (θ, ψ) are jointly obtained from the base measure of a Dirichlet Process “DP”, i.e., $(\theta, \psi) \sim DP(\alpha\, P_0^{\theta} \times P_0^{\psi})$, where α is a scaling parameter. The corresponding conditional probability density can be written in the following non-parametric form
It is worth noting that, in the prediction phase, considering the fact that the placement of the particles is fixed and optimized from the training, the denominator in (22) and p(x|ψj) act as scaling factors.
A clustering technique will be used. Let the cluster allocation labels be si=j if (θi, ψi) belongs to the j-th cluster. Let ρn=[s1, . . . , sn]T denote the vector of cluster allocation labels. Using the prediction in the DP by a covariate-dependent urn scheme, one obtains:
in which nj is the number of subject indices in the j-th cluster.
It is further assumed that the set of particles is fixed in the tracking phase of the channel probability density estimation. This is a practical assumption, as the locations of the particles in the constellation are fixed in most practical applications. The notation $x_{n+1} \in x_{1:n}$ is used here to emphasize that the new signal is transmitted from the set of fixed constellation points that was used during the training. The estimated density $p(y \mid y_{1:n}, x_{1:n}, x_{n+1} \in x_{1:n})$ for the new received signal y is obtained as
$p(y \mid y_{1:n}, x_{1:n}, x_{n+1} \in x_{1:n}) = \sum_{\rho_n} \cdots$ (27)
where $\rho_n = [s_1, \ldots, s_n]^T$ denotes the vector of cluster allocation labels, with si=j if (θi, ψi) belongs to the j-th cluster. Using (24), (27) can be simplified as
where k is the number of groups in the partition ρn, and
$\zeta_{0,y}(y|x_{n+1}) = \int p(y|x_{n+1},\theta)\, dP_0^{\theta}(\theta),$ (29)
$\zeta_{j,y}(y|x_{n+1}) = \int p(y|x_{n+1},\psi)\, p(\psi|x_j^*, y_j^*)\, d\psi,$ (30)
where xj*, yj* denote the set of inputs and outputs for the j-th cluster. Consequently, the estimated conditional probability density is obtained according to equation (28).
After any of the above embodiments, an estimation of the conditional probability density function pY|X(y|x) is obtained. In the first two embodiments, it is obtained for a fixed constellation of transmitted symbols x, while for the third and fourth embodiments, it is known as a bivariate function of y and x, and in particular for any value of x.
This knowledge of pY|X(y|x) is used in the receiver in order to compute the likelihood of each received symbol y assuming a transmitted symbol x. This likelihood is required in the maximum-likelihood decoding of transmitted symbols, by selecting the symbol x maximizing the likelihood for any received symbol y. This likelihood is also a component for computing the Log-Likelihood Ratios provided at the input of a soft-input decoder, such as in:
Thus, it is of interest for the receiver to know the estimation of pY|X(y|x).
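A minimal sketch of this use at the receiver (the BPSK mapping and mixture parameters below are hypothetical illustrations): the estimated mixture supplies the likelihoods for maximum-likelihood detection and the log-likelihood ratio for a soft-input decoder.

```python
# Minimal sketch: likelihoods and LLR from the estimated mixture p_{Y|X}.
import numpy as np
from scipy.stats import norm

def mixture_likelihood(y, weights, means, variances):
    return sum(w * norm.pdf(y, m, np.sqrt(v))
               for w, m, v in zip(weights, means, variances))

# Hypothetical per-symbol mixture parameters produced by the estimator:
params = {+1.0: ([0.8, 0.2], [1.0, 1.4], [0.5, 0.1]),
          -1.0: ([1.0], [-1.0], [0.5])}

def llr(y):
    """LLR of the bit with mapping b=0 -> x=+1, b=1 -> x=-1."""
    p_plus = mixture_likelihood(y, *params[+1.0])
    p_minus = mixture_likelihood(y, *params[-1.0])
    return np.log(p_plus / p_minus)

print(llr(0.3))   # positive values favor x = +1
```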
This invention can be advantageously used to provide a compact representation of pY|X(y|x) through the parameters {wj} and {θj} as provided by the channel conditional probability distribution estimator to the receiver.
In another option, the knowledge of pY|X(y|x) can be advantageously used at the transmitter in order to optimize the input distribution. Indeed, from the knowledge of a given distribution pX(x), which in most applications is discrete with fixed positions of x and varying probabilities, the capacity of the channel characterized by pY|X(y|x) with input pX(x) can be evaluated. Thus, the input distribution can be optimized.
In a first case of such optimization, the input distribution providing the highest capacity among a predefined set of input distributions is selected. Such a set is for example obtained by sampling Gaussian input distributions with different variance values. The sampling is for example performed for positions of x following a QAM (e.g. 256-QAM) constellation. In a second case of such optimization, the set is provided by several known constellations, such as QPSK, 16-QAM, 64-QAM, 256-QAM, 8-PSK, 32-PSK, and so on. In a third case of such optimization, the positions of the symbols x and their associated probabilities are chosen randomly, and the best random constellation found is selected after each capacity computation.
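A minimal sketch of the first case (a unit-variance Gaussian channel is assumed purely for illustration): the mutual information of each candidate discrete input distribution is estimated by Monte Carlo, and the best candidate is kept.

```python
# Minimal sketch: pick the candidate input distribution with the highest
# Monte Carlo estimate of I(X;Y) for y = x + N(0, sigma^2).
import numpy as np
from scipy.stats import norm

def mi_bits(symbols, probs, sigma=1.0, n=200_000, rng=None):
    """Monte Carlo estimate of I(X;Y) in bits per channel use."""
    rng = rng or np.random.default_rng(0)
    symbols, probs = np.asarray(symbols, float), np.asarray(probs, float)
    total = 0.0
    for omega, p in zip(symbols, probs):
        y = rng.normal(omega, sigma, size=n)          # y ~ p(y | omega)
        p_y_given_x = norm.pdf(y, omega, sigma)
        p_y = sum(q * norm.pdf(y, s, sigma) for s, q in zip(symbols, probs))
        total += p * np.mean(np.log2(p_y_given_x / p_y))
    return total

candidates = {"uniform BPSK": ([-1, 1], [0.5, 0.5]),
              "skewed BPSK": ([-1, 1], [0.8, 0.2])}
best = max(candidates, key=lambda k: mi_bits(*candidates[k]))
print(best)   # the input distribution with the highest estimated capacity
```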
In a last case of such optimization, it is advantageous to select the functions g(y|x, θj) from an exponential family. Indeed, pY|X(y|x) being a linear combination of such functions, it is possible to compute the derivative functions of pY|X(y|x) for fixed values of x in closed form. Thus, a gradient descent approach can be used in order to optimize the capacity with respect to the input distribution pX(x).
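A minimal sketch of this last case (not the disclosure's exact algorithm: the gradient here is numerical rather than closed-form, and the channel is a hypothetical unit-variance Gaussian with fixed BPSK positions): projected gradient ascent on the input probabilities over the probability simplex.

```python
# Minimal sketch: projected gradient ascent of I(X;Y) over p_X on the simplex.
import numpy as np
from scipy.stats import norm

Y = np.linspace(-6, 6, 2001)          # quadrature grid for the output y
DY = Y[1] - Y[0]
SYMBOLS = np.array([-1.0, 1.0])       # fixed symbol positions
LIK = np.array([norm.pdf(Y, s, 1.0) for s in SYMBOLS])  # p(y | omega_m)

def mi(p):
    """I(X;Y) in bits, evaluated on the quadrature grid."""
    p_y = p @ LIK
    integrand = LIK * np.log2(LIK / p_y)
    return float(p @ (integrand.sum(axis=1) * DY))

def project_simplex(v):
    """Euclidean projection of v onto {p >= 0, sum(p) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

p = np.array([0.9, 0.1])
for _ in range(200):                  # numerical gradient + projection
    g = np.array([(mi(p + 1e-5 * e) - mi(p)) / 1e-5 for e in np.eye(len(p))])
    p = project_simplex(p + 0.05 * g)
print(p, mi(p))                       # tends toward the uniform optimum here
```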
This invention can be advantageously used to provide a compact representation of pY|X(y|x) through the parameters {wj} and {θj} as provided by the channel conditional probability distribution estimator to the transmitter.