The present invention relates to the field of telecommunications and more particularly targets the problem of optimizing the capacity of communication channels.
The optimization can be implemented by computer means, for example by artificial intelligence, and can be based on observations of whether messages transmitted from a transmitter via communication channels are correctly received at a receiver.
In particular, the case of mixture channels, where the optimal input distribution cannot be obtained theoretically, is difficult to address. The probability distribution can, however, be decomposed on a functional basis.
The essential characteristics of a memoryless communication channel can be represented by the conditional probability distribution pY|X(y|x) of the output Y given the input X. Some examples of well-known communication channels are given below:
Once the conditional probability distribution pY|X(y|x) is accurately known, it is possible to optimize the communication system relying on:
A solution is still needed to the problem of optimizing the transmission strategy, preferably by designing the transmission probability, and optionally the position, of each symbol of a constellation (typically QAM, PSK, etc.). The main challenges are:
The present invention aims to improve the situation.
To that end, it proposes a method for optimizing a capacity of a communication channel in a communication system comprising at least a transmitter, a receiver, and the communication channel between the transmitter and the receiver, the transmitter using a finite set of symbols Ω={ω1, . . . , ωN} having respective positions on a constellation, to transmit a message including at least one symbol on said communication channel, and the communication channel being characterized by a conditional probability distribution pY|X(y|x), where y is the symbol received at the receiver while x is the symbol transmitted by the transmitter.
More particularly, the aforesaid conditional probability distribution pY|X(y|x) is obtained, for each possible transmitted symbol x, by a mixture model using probability distributions represented by exponential functions, and an optimized input distribution px(x) is computed, based on parameters of said mixture model, to define optimized symbol positions and probabilities to be used at the transmitter for optimizing the capacity of the channel.
Therefore, the decomposed representation of the channel conditional probability distribution, in a basis of exponential distribution functions, is used in order to limit the computational complexity of computing the optimal input distribution. By improving the input signal probability distribution according to the channel knowledge, the channel capacity is thus significantly improved.
The aforesaid optimized symbol positions and probabilities can be obtained at the transmitter, and also at the receiver, in a particular embodiment.
In an embodiment, the transmitter can transmit messages conveyed by a signal belonging to a finite set of signals corresponding respectively to said symbols ω1, . . . , ωN, each signal being associated with a transmission probability according to an optimized input signal probability distribution corresponding to said optimized input distribution px(x). In this embodiment, the transmitter then takes:
In this embodiment, the communication channel takes the transmitted signal as an input, and outputs a received signal intended to be processed at the receiver (typically in order to decode the received message at the receiver), the aforesaid conditional probability distribution pY|X(y|x) thus relating to the probability of outputting a given signal y when the input x is fixed.
Preferably, in this embodiment, the conditional probability distribution pY|X(y|x) is defined on a continuous input/output alphabet, as a probability density function.
An estimation of the conditional probability distribution pY|X(y|x) is taken as input, to output the optimized input signal probability distribution px(x) to be obtained at the transmitter (and at the receiver in an embodiment). This conditional probability distribution estimation, approximated by said mixture model, is then used for computing the optimized input signal probability distribution.
In an embodiment, the receiver takes the received signal, and also the optimized input signal probability distribution px(x), and an estimation of the channel conditional probability distribution pY|X(y|x) as inputs and performs an estimation of a message conveyed in said received signal.
Therefore, in this embodiment, the receiver can perform an enhanced determination of the conveyed message thanks to the optimized input signal probability distribution px(x), from which the channel conditional probability distribution pY|X(y|x) can be estimated.
In an embodiment, the aforesaid mixture model follows a conditional probability distribution pY|X(y|x) which is decomposable on a basis of exponential probability distribution functions g(y|x;θ), where θ is a parameter set, such that:
p_{Y|X}(y|x) = Σ_{j=1}^{K} w_j g(y|x; θ_j)   (E)
where K is a predetermined parameter, and the sets {θj}, {wj} are parameters representing respectively the distribution parameters (mean vector coordinates and covariance matrix parameters) and the mixture weights.
Moreover, in this embodiment, the probability distribution exponential functions g(y|x;θ) are more particularly given by g(y|x;θ) = h(y,θ) exp(x^T y − α(x,θ)), where h(y,θ) is a function of y and θ, and α(x,θ) is the moment generating function, x and y being vectors, such that their derivative is given by:
The aforesaid distribution pY|X(y|x) can be approximated by a finite set of continuous functions minimizing a metric defined by the Kullback-Leibler divergence, by determining the parameter sets {θj}, {wj} which minimize the Kullback-Leibler divergence between an analytically known pY|X(y|x) and its approximation given by:
p_{Y|X}(y|x) = Σ_{j=1}^{K} w_j g(y|x; θ_j).
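By way of illustration only, the following minimal Python sketch evaluates such a mixture for a scalar channel; the Gaussian components centred at x + μj (with θj = (μj, σj²)) are an assumption chosen for the example, not a limitation of the model:

```python
import numpy as np

def gaussian_pdf(y, mean, var):
    # Density of a scalar Gaussian N(mean, var) evaluated at y.
    return np.exp(-(y - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def mixture_channel_pdf(y, x, weights, offsets, variances):
    # p_{Y|X}(y|x) = sum_j w_j g(y|x; theta_j), with hypothetical Gaussian
    # components g(y|x; theta_j) = N(y; x + mu_j, sigma_j^2).
    return sum(w * gaussian_pdf(y, x + mu, var)
               for w, mu, var in zip(weights, offsets, variances))

# Example: K = 2 channel states (e.g., interference absent / present).
weights, offsets, variances = [0.7, 0.3], [0.0, 1.5], [0.1, 0.4]
print(mixture_channel_pdf(y=0.2, x=0.0, weights=weights,
                          offsets=offsets, variances=variances))
```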
The input distribution px(x) can be represented as a list of N constellation positions {(x1, π1), . . . , (xN, πN)}, where xi and πi denote respectively the constellation positions and the probability weights, and the input distribution px(x) is estimated by solving an optimization problem at the transmitter given by:
Where:
The mutual information can be expressed as:
and the arguments yi,m are samples from the distribution pY|X(y|xi).
In this embodiment, an alternating optimization can be performed iteratively to calculate both px(x) and pY|X(y|x) = Σ_{j=1}^{K} w_j g(y|x;θ_j), so as to derive from said calculations optimized probabilities π(t) and positions x(t), the update from a preceding iteration t−1 to a current iteration t being as follows:
These two steps are then repeated iteratively until a stopping condition on the mutual information I(x,π) occurs.
This embodiment is described in detail below with reference to
The present invention aims also at a computer program comprising instructions causing a processing circuit to implement the method as presented above, when such instructions are executed by the processing circuit.
The present invention aims also at a system comprising at least a transmitter, a receiver, and a communication channel between the transmitter and the receiver, wherein the transmitter at least is configured to implement the method above.
The invention aims also at a communication device comprising a processing circuit configured to perform the optimization method as presented above.
More details and advantages of possible embodiments of the invention will be presented below with reference to the appended drawings.
Referring to
The transmitter 10 transmits messages conveyed by a signal belonging to a finite set of signals, each associated with a transmission probability according to an (optimized) input signal probability distribution. The transmitter 10 takes the messages and the (optimized) input signal probability distribution as inputs, and outputs the signal to be transmitted on the channel. The channel 11 takes the transmitted signal as an input, and outputs a received signal which is processed at the receiver 12 in order to decode the transmitted message. The channel is characterized by a conditional probability distribution, namely the probability of outputting a given signal when the input is fixed. The probability distribution can generally be defined on a discrete or continuous input and/or output alphabet. Here, as an example, the continuous output alphabet is considered, and the probability distribution is called a probability density function in this case.
The input signal probability distribution optimizer 13 takes the conditional probability distribution estimation as an input, and outputs the optimized input signal probability distribution to the transmitter 10 and receiver 12.
It is worth noting here that the optimizer 13 can be a same module which is a part of both the transmitter and the receiver. It can be alternatively a module which is a part of a scheduling entity (e.g. a base station or other) in a telecommunication network linking said transmitter and receiver through the communication channel. More generally, a communication device such as the transmitter 10, the receiver 12, or else any device 13 being able to perform the optimization method, can include such a module which can have in practice the structure of a processing circuit as shown on
More particularly, the conditional probability distribution estimation is used for computing the optimized input signal probability distribution at the input signal probability distribution optimizer 13. In particular, it is shown hereafter that the optimization is made more efficient when the conditional probability distribution estimation is approximated by a mixture of exponential distributions.
The receiver 12 takes the received signal, the optimized input signal probability distribution and the estimated channel conditional probability distribution as inputs and performs an estimation of the message conveyed in the received signal.
The transmission channel 11 is represented by a model, hereafter, that follows a conditional probability distribution pY|X(y|x) that can be decomposed into a basis of probability distribution functions p(y|x;θ), where θ is a parameter set. For example, the distribution functions belong to the exponential family and the parameters are essentially the mean and variance for the scalar case, and more generally the mean vector and covariance matrix for the multi-variate case, such that:
p_{Y|X}(y|x) = Σ_{j=1}^{K} w_j p(y|x; θ_j)   (E)
where K and the sets {θj}, {wj} are parameters.
Three examples of channels following this model can be cited hereafter.
Channels might have random discrete states when the channel fluctuates randomly in time according to discrete events, such as:
In case of channel estimation impairments (typically when the transmission channel is imperfectly known), residual self-interference is obtained on the received signal. In general, the channel model is obtained as y = α̂x + η − vx, which leads to:
Therefore, it is shown here that any known continuous distribution pY|X(y|x) can be approximated by a finite set of continuous functions.
The approximation is done by minimizing a metric. One relevant metric is the Kullback-Leibler divergence, which provides a measure of the difference between two distributions. Thus, when pY|X(y|x) is known analytically, it is possible to find the parameter sets {θj}, {wj} that minimize the Kullback-Leibler divergence between pY|X(y|x) and an approximated expression in the form of equation (E) given above.
When only an estimated histogram of pY|X(y|x) is available, the distribution can likewise be approximated by a finite set of continuous functions, in the same way as with a known continuous distribution, by using the Kullback-Leibler divergence as a metric.
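As a minimal sketch of this fitting step (assuming, for illustration, a scalar channel, one fixed input symbol x and Gaussian components), the following Python code fits the weights wj and parameters θj = (μj, σj²) by expectation-maximization; maximizing the likelihood in this way is equivalent to minimizing the Kullback-Leibler divergence from the empirical output distribution to the mixture model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed outputs y for a fixed input x: a two-state channel
# (e.g., interference present with probability 0.3).
x = 1.0
y = np.where(rng.random(5000) < 0.3,
             rng.normal(x + 1.5, 0.6, 5000),   # state with interference
             rng.normal(x, 0.3, 5000))         # nominal state

def em_gaussian_mixture(y, K, n_iter=200):
    # Fit weights w_j and parameters theta_j = (mean_j, var_j) by EM.
    w = np.full(K, 1.0 / K)
    mu = np.quantile(y, np.linspace(0.1, 0.9, K))
    var = np.full(K, np.var(y))
    for _ in range(n_iter):
        # E-step: responsibilities r[n, j] = P(component j | y_n).
        dens = np.exp(-(y[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        nk = r.sum(axis=0)
        w = nk / len(y)
        mu = (r * y[:, None]).sum(axis=0) / nk
        var = (r * (y[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

w, mu, var = em_gaussian_mixture(y, K=2)
print("weights:", w, "means:", mu, "variances:", var)
```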
The function pY|X(y|x) is bi-variate, with variables x and y which in general span a continuous domain.
Hereafter a focus is made on symbols x belonging to a finite alphabet Ω={ω1, . . . , ωN} of cardinality N.
It is further assumed that the derivative of the probability distribution functions g(y|x;θ) is known. For example, when g(y|x;θ) is from the exponential family, it can be written:
g(y|x; θ) = h(y,θ) exp(x^T y − α(x,θ)),
where h(y,θ) is a function of y and θ, and α(x,θ) is the moment generating function, x and y being vectors in this general case. Thus,
For example, in the scalar Gaussian case, the probability density function is thus decomposed as follows:
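For reference, a standard exponential-family factorization of the scalar Gaussian density (assuming unit noise variance, purely as a worked example) matching the form g(y|x;θ) = h(y,θ) exp(x^T y − α(x,θ)) above is:

```latex
p_{Y|X}(y\mid x)
  = \underbrace{\frac{1}{\sqrt{2\pi}}\,e^{-y^{2}/2}}_{h(y,\theta)}
    \exp\!\Bigl(xy - \underbrace{\tfrac{x^{2}}{2}}_{\alpha(x,\theta)}\Bigr),
```

since (1/√(2π)) e^{−(y−x)²/2} = (1/√(2π)) e^{−y²/2} e^{xy − x²/2}.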
The input signal distribution optimizer 13 relies on the estimation of the channel probability distribution in the form of equation (E). When the functional basis chosen for the estimation of the channel is the exponential family, closed-form expressions can be derived and the algorithm converges to the optimal solution.
The capacity-approaching input is deemed to be discrete for some channels. For the case of a continuous capacity-achieving input (which is the case for more general channels), the input distribution pX(x) can be represented as a list of N particles as
Where:
The constraint (2) sets the total probability of particles to 1. Constraints (3) and (4) guarantee the total transmit power to be less than or equal to P, and the magnitude of particle probabilities to be positive values less than 1, respectively. The mutual information I(x̂,π) involves an integration over continuous random variables, but can be approximated by Monte-Carlo integration (the main principle of which is to replace the expectation function, which usually involves an integration, by a generation of samples which are realizations of said random variable and an averaging of the obtained values) as
where M denotes the number of samples (i.e., the number of realizations of the random variables generated from their probability distribution), and where
p_{Y|X}(y|x) = Σ_{j=1}^{K} w_j g(y|x; θ_j),   (6)
thus denoting a decomposition of the conditional probability pY|X(y|x) into a basis of functions g(·) involving θj.
The arguments yi,m in (5) are the samples from the distribution pY|X(y|xi).
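A minimal Python sketch of this Monte-Carlo estimation of the mutual information follows; the Gaussian-mixture channel, its parameters and the two-point constellation used here are illustrative assumptions, not the exact quantities of equations (5) and (6):

```python
import numpy as np

rng = np.random.default_rng(1)

def mixture_pdf(y, x, w, mu, var):
    # p_{Y|X}(y|x) = sum_j w_j N(y; x + mu_j, var_j)  (hypothetical components).
    y = np.asarray(y)[..., None]
    dens = np.exp(-(y - (x + mu)) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return (w * dens).sum(axis=-1)

def sample_outputs(x, w, mu, var, M, rng):
    # Draw M samples y_{i,m} ~ p_{Y|X}(y | x).
    j = rng.choice(len(w), size=M, p=w)
    return rng.normal(x + mu[j], np.sqrt(var[j]))

def mutual_information_mc(xs, pis, w, mu, var, M=2000, rng=rng):
    # I(x, pi) ~= sum_i pi_i * (1/M) * sum_m log(p(y_{i,m}|x_i) / p_Y(y_{i,m})),
    # with the output density p_Y(y) = sum_j pi_j p(y|x_j).
    total = 0.0
    for xi, pii in zip(xs, pis):
        y = sample_outputs(xi, w, mu, var, M, rng)
        p_cond = mixture_pdf(y, xi, w, mu, var)
        p_out = sum(pj * mixture_pdf(y, xj, w, mu, var)
                    for xj, pj in zip(xs, pis))
        total += pii * np.mean(np.log(p_cond / p_out))
    return total

w = np.array([0.7, 0.3]); mu = np.array([0.0, 1.5]); var = np.array([0.1, 0.4])
xs = np.array([-1.0, 1.0]); pis = np.array([0.5, 0.5])
print(mutual_information_mc(xs, pis, w, mu, var))
```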
Hereafter, an alternating optimization method is proposed, described from iteration t−1 to t as follows:
These two steps are detailed hereafter respectively as S1 and S2. They can intervene after an initialization step S0 of an algorithm presented below.
Step S1: Optimization of π(t) for a Fixed Set of Particles x(t−1) and a Previous Value π(t−1)
The optimization in (1) is concave with respect to π for fixed values of x. So, for a given x(t−1), (1) is solved for π by writing the Lagrangian and solving for πi for i=1, . . . , N as
Here, the expression
is the approximation of the mathematical expectation E[log q(xi(t−1)|yi)] with respect to the random variable yi. The approximation is performed by the above-mentioned Monte-Carlo integration, i.e., by generating M samples according to the distribution of yi. The term
can be advantageously replaced by a numerical integration or a closed form expression when available.
In (7), β denotes the Lagrangian multiplier that can be determined by replacing (7) in (3) with equality for the maximum total transmit power P, resulting in the non-linear equation
The non-linear equation (8) can be solved using different tools, e.g., gradient-descent-based approaches such as Newton-Raphson, or by selecting several values of β, computing the left part of the equation in (8) and keeping the value closest to 0 in absolute terms. The values of πi(t) are then obtained from (7).
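The grid-search variant just described can be sketched in a few lines of Python; since equation (8) depends on the channel at hand, its left-hand side is represented here by a hypothetical callable phi (a toy monotone function is used purely so that the sketch runs):

```python
import numpy as np

def solve_beta_grid(phi, betas):
    # Evaluate the left-hand side of the non-linear equation (8) on a grid of
    # candidate beta values and keep the one closest to 0 in absolute terms.
    values = np.array([phi(b) for b in betas])
    return betas[np.argmin(np.abs(values))]

# Hypothetical stand-in for the left-hand side of (8): any scalar function of
# beta whose root is sought. A Newton-Raphson solver could be used instead.
phi = lambda beta: np.tanh(beta - 0.8)

beta_star = solve_beta_grid(phi, np.linspace(0.0, 5.0, 1001))
print("beta ~", beta_star)
```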
Step S2: Optimization of x(t) for a Fixed π(t) and Previous x(t−1)
The Lagrangian for the optimization in (1) with a given weight vector π(t) can be given by:
ℒ(x; β, π(t)) = I(x, π(t)) + β(P − Σ_{i=1}^{N} |xi|² πi(t)).   (9)
The position vector x is obtained such that the Kullback-Leibler divergence D(pY|X(y|xi)∥pY(y)) penalized by the second term in (9) is maximized. This way, the value of the Lagrangian ℒ(x; β, π(t)), i.e., the penalized mutual information, is greater than or equal to its previous value after each update of the position and weight vectors. This is achieved by gradient-ascent-based methods, i.e.:
where the step size λt is a positive real number.
In the aforementioned gradient-ascent-based methods, it is required to compute the derivative of the term D(pY|X(y|xi)∥pY(y)) by Monte-Carlo integration as
Using (6), the following is obtained:
Thus, when g(y|x;θj) and its derivative are known in closed form, the equation can be computed.
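As an illustrative sketch of one such gradient-ascent update on the positions (assuming a scalar Gaussian channel; the closed-form derivative is replaced here by a central finite difference over common random numbers, a simplification of the Monte-Carlo derivative discussed above):

```python
import numpy as np

rng = np.random.default_rng(2)

def kl_term_mc(xi, xs, pis, sigma2, z):
    # Monte-Carlo estimate of D(p_{Y|X}(y|xi) || p_Y(y)) for a hypothetical
    # scalar Gaussian channel p(y|x) = N(y; x, sigma2). The fixed N(0,1)
    # draws z (common random numbers) keep finite differences stable.
    y = xi + np.sqrt(sigma2) * z
    norm = np.sqrt(2 * np.pi * sigma2)
    p_cond = np.exp(-(y - xi) ** 2 / (2 * sigma2)) / norm
    p_out = sum(pj * np.exp(-(y - xj) ** 2 / (2 * sigma2)) / norm
                for xj, pj in zip(xs, pis))
    return np.mean(np.log(p_cond / p_out))

def position_step(xs, pis, beta, lam, sigma2=0.1, M=4000, eps=1e-2):
    # One gradient-ascent step per position x_i on the penalized objective
    # D(p(y|xi) || p_Y(y)) - beta * |xi|^2 * pi_i  (cf. the Lagrangian (9)).
    z = rng.standard_normal(M)
    new_xs = xs.copy()
    for i, (xi, pii) in enumerate(zip(xs, pis)):
        obj = lambda x: kl_term_mc(x, xs, pis, sigma2, z) - beta * x ** 2 * pii
        grad = (obj(xi + eps) - obj(xi - eps)) / (2 * eps)  # finite difference
        new_xs[i] = xi + lam * grad                          # step size lam
    return new_xs

xs, pis = np.array([-1.0, 1.0]), np.array([0.5, 0.5])
print(position_step(xs, pis, beta=0.1, lam=0.05))
```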
Finally, the x(t) values are obtained and the iteration can continue until a stopping condition is met. The stopping condition is, for example, a maximum execution time, or the condition that I(x(t),π(t))−I(x(t−1),π(t−1)) is lower than a given threshold, typically small.
An example of an algorithm is detailed hereafter, with reference to
Step S0: Initialization Step
is known.
Step S1: Iterative Step t
Step S10: Samples Generation
Step S11: Compute the Stopping Condition
Step S12: Update the Probabilities πi(t)
Step S2: Update the Symbols xi(t) Position with New πi(t) and Previous xi(t−1)
which is obtained from the known expression of
by substituting y by yi,m and x by xi
where h(yi,m,xi(t−1)) is the value of the function
The next step S3 increments t so as to loop, for a next iteration, back to step S10.
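Gathering the previous sketches, the following compact Python example runs the whole alternating loop S0 to S3 on a toy scalar Gaussian channel. It is only an illustrative instantiation: the probability update uses a normalized exponential of the power-penalized divergence (in the spirit of Blahut-Arimoto updates) instead of the exact closed-form expressions (7) and (8), and the position update uses finite differences as above:

```python
import numpy as np

rng = np.random.default_rng(3)
SIGMA2, N, M, BETA, LAM, EPS = 0.1, 4, 4000, 0.05, 0.05, 1e-2

def channel_logpdf(y, x):
    # Hypothetical scalar Gaussian channel p(y|x) = N(y; x, SIGMA2).
    return -(y - x) ** 2 / (2 * SIGMA2) - 0.5 * np.log(2 * np.pi * SIGMA2)

def divergences(xs, pis, z):
    # D_i = Monte-Carlo estimate of D(p(y|x_i) || p_Y(y)) for each particle,
    # using fixed standard-normal draws z (common random numbers).
    d = np.empty(len(xs))
    for i, xi in enumerate(xs):
        y = xi + np.sqrt(SIGMA2) * z
        p_out = sum(pj * np.exp(channel_logpdf(y, xj)) for xj, pj in zip(xs, pis))
        d[i] = np.mean(channel_logpdf(y, xi) - np.log(p_out))
    return d

# S0: initialization of particle positions and uniform probabilities.
xs = np.linspace(-1.5, 1.5, N)
pis = np.full(N, 1.0 / N)
prev_mi = -np.inf
for t in range(100):
    z = rng.standard_normal(M)            # S10: samples generation
    d = divergences(xs, pis, z)
    mi = float(pis @ d)                   # S11: stopping condition on I(x, pi)
    if mi - prev_mi < 1e-4:
        break
    prev_mi = mi
    logits = d - BETA * xs ** 2           # S12: probability update
    pis = np.exp(logits - logits.max())
    pis /= pis.sum()
    for i in range(N):                    # S2: position update (finite diff.)
        def obj(x):
            xs2 = xs.copy(); xs2[i] = x
            return divergences(xs2, pis, z)[i] - BETA * pis[i] * x ** 2
        xs[i] += LAM * (obj(xs[i] + EPS) - obj(xs[i] - EPS)) / (2 * EPS)
    # S3: increment t and loop to step S10 for the next iteration.

print("positions:", xs, "probabilities:", pis, "I ~", prev_mi)
```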
An artificial intelligence can thus be programmed with such an algorithm to optimize the capacity of one or several communication channels in a telecommunication network.