METHOD, COMPUTER PROGRAM, SYSTEM, AND COMMUNICATION DEVICE FOR OPTIMIZING THE CAPACITY OF COMMUNICATION CHANNELS

Information

  • Patent Application
  • Publication Number: 20230261923
  • Date Filed: March 12, 2021
  • Date Published: August 17, 2023
Abstract
The invention relates to a method for optimizing a capacity of a communication channel in a communication system comprising at least a transmitter (10), a receiver (12), and the communication channel (11) between the transmitter and the receiver. The transmitter (10) uses a finite set of symbols Ω={ω1, . . . , ωN} having respective positions on a constellation, to transmit a message including at least one symbol on said communication channel (11). The communication channel (11) is characterized by a conditional probability distribution pY|X(y|x), where y is the symbol received at the receiver (12) while x is the symbol transmitted by the transmitter. More particularly, the conditional probability distribution pY|X(y|x) is obtained, for each possible transmitted symbol x, by a mixture model using probability distributions represented by exponential functions. An optimized input distribution px(x) is computed, based on parameters of the mixture model, to define optimized symbol positions and probabilities to be used at the transmitter for optimizing the capacity of the channel.
Description
TECHNICAL FIELD

The present invention relates to the field of telecommunications and more particularly targets the problem of optimizing the capacity of communication channels.


BACKGROUND ART

The optimization can be implemented by computer means, for example by artificial intelligence, and can be based on observations of whether messages transmitted from a transmitter over communication channels are well received at a receiver.


In particular, the case of mixture channels, for which the optimal input distribution cannot be obtained theoretically, is difficult to address. The probability distribution can however be decomposed on a functional basis.


The essential characteristics of a memoryless communication channel can be represented by the conditional probability distribution pY|X(y|x) of the output Y given the input X. Some examples of well-known communication channels are given below:

    • The additive white Gaussian noise channel, where Y=X+η and η is Gaussian-distributed, models a wired communication subject to perturbations caused by thermal noise at the receiver,
    • The fading channel Y=α.X+η, where α follows a fading distribution (such as a Rayleigh distribution), models the transmission over a narrow-band wireless channel in a radio propagation environment involving rich scattering,
    • More complicated channels can involve non-linear effects. This is for example the case of optical channels, governed by the nonlinear Schrödinger equation, where the Kerr effect cannot be neglected and reduces the channel capacity when the transmit power is increased too much in Wavelength Division Multiplexing (WDM) transmissions.


Once the conditional probability distribution pY|X(y|x) is accurately known, it is possible to optimize the communication system relying on:

    • The design of the input signal such as to maximize the mutual information between the input and the output of the channel,
    • The design of optimal receivers, which in general relies on the processing of the likelihood probabilities pY|X(y|x).


A solution is still needed to the problem of optimizing the transmission strategy, preferably by designing the probability of transmission and optionally the position of each symbol of a constellation (QAM, PSK, etc.). The main challenges are:

    • Obtaining the channel conditional probability distribution, decomposed on a functional basis.
    • Optimizing the input distribution.


The present invention aims to improve the situation.


SUMMARY OF INVENTION

To that end, it proposes a method for optimizing a capacity of a communication channel in a communication system comprising at least a transmitter, a receiver, and the communication channel between the transmitter and the receiver, the transmitter using a finite set of symbols Ω={ω1, . . . , ωN} having respective positions on a constellation, to transmit a message including at least one symbol on said communication channel, and the communication channel being characterized by a conditional probability distribution pY|X(y|x), where y is the symbol received at the receiver while x is the symbol transmitted by the transmitter.


More particularly, the aforesaid conditional probability distribution pY|X(y|x) is obtained, for each possible transmitted symbol x, by a mixture model using probability distributions represented by exponential functions, and an optimized input distribution px(x) is computed, based on parameters of said mixture model, to define optimized symbol positions and probabilities to be used at the transmitter for optimizing the capacity of the channel.


Therefore, the decomposed representation of the channel conditional probability distribution, in a basis of exponential distribution functions, is used in order to limit the computational complexity of computing the optimal input distribution. By improving the input signal probability distribution according to the channel knowledge, the channel capacity is thus significantly improved.


The aforesaid optimized symbol positions and probabilities can be obtained at the transmitter, but also at the receiver, in a particular embodiment.


In an embodiment, the transmitter can transmit messages conveyed by a signal belonging to a finite set of signals corresponding respectively to said symbols ω1, . . . , ωN, each signal being associated with a transmission probability according to an optimized input signal probability distribution corresponding to said optimized input distribution px(x). In this embodiment then, the transmitter takes:

    • messages to be transmitted, and
    • the optimized input signal probability distribution as inputs,


      and outputs a transmitted signal on the communication channel.


In this embodiment, the communication channel takes the transmitted signal as an input, and outputs a received signal intended to be processed at the receiver (in order to decode the received message at the receiver, typically), the aforesaid conditional probability distribution pY|X(y|x) being related thus to a probability of outputting a given signal y when the input x is fixed.


Preferably, in this embodiment, the conditional probability distribution pY|X(y|x) is defined on a continuous input/output alphabet, as a probability density function.


An estimation of the conditional probability distribution pY|X(y|x) is taken as input, to output the optimized input signal probability distribution px(x) to be obtained at the transmitter (and at the receiver in an embodiment). The conditional probability distribution estimation is then used for computing the optimized input signal probability distribution, and is approximated by said mixture model.


In an embodiment, the receiver takes the received signal, and also the optimized input signal probability distribution px(x), and an estimation of the channel conditional probability distribution pY|X(y|x) as inputs and performs an estimation of a message conveyed in said received signal.


Therefore, in this embodiment, the receiver can perform an enhanced determination of the conveyed message thanks to the optimized input signal probability distribution px(x), from which the channel conditional probability distribution pY|X(y|x) can be estimated.


In an embodiment, the aforesaid mixture model follows a conditional probability distribution pY|X(y|x) which is decomposable into a basis of exponential probability distribution functions g(y|x;θ), where θ is a parameter set, such that:






p_Y|X(y|x) = Σ_{j=1}^K w_j g(y|x;θ_j)   (E)


where K is a predetermined parameter, and the sets {θ_j}, {w_j} are respectively the distribution parameters (typically mean vector coordinates and covariance matrix parameters) and the mixture weights.


Moreover, in this embodiment, the exponential probability distribution functions are more particularly given by g(y|x;θ)=h(y,θ)exp(x^T y−α(x,θ)), where h(y,θ) is a function of y and θ, and α(x,θ) is the moment generating function, x and y being vectors, such that the derivative of g is given by:











∇_x g(y|x;θ) = h(y,θ) (y − ∇_x α(x,θ)) exp(x^T y − α(x,θ))

The aforesaid distribution pY|X(y|x) can be approximated by a finite set of continuous functions minimizing a metric defined by the Kullback-Leibler divergence, by determining the parameter sets {θ_j}, {w_j} which minimize the Kullback-Leibler divergence between an analytical observation of pY|X(y|x) and its expression given by:






p_Y|X(y|x) = Σ_{j=1}^K w_j g(y|x;θ_j).


The input distribution px(x) can be represented as a list of N constellation positions {(x_1,π_1), . . . , (x_N,π_N)}, where x_i and π_i denote respectively constellation positions and probability weights,


And the input distribution px(x) is estimated by solving an optimization problem at the transmitter given by:







(x*, π*) = argmax_{x,π} I(x,π)

subject to Σ_{i=1}^N π_i = 1

Σ_{i=1}^N |x_i|^2 π_i ≤ P

0 < π_i < 1, for i = 1, . . . , N
Where:

    • I(x,π) is the mutual information as a function of the position vector x=[x_1, . . . , x_N]^T and the weight vector π=[π_1, . . . , π_N]^T,
    • optimal values are tagged with the superscript *, and
    • P denotes a total transmit power.


The mutual information can be expressed as:








I(x,π) = (1/M) Σ_{i=1}^N Σ_{m=1}^M π_i log [ p_Y|X(y_{i,m}|x_i) / Σ_{j=1}^N π_j p_Y|X(y_{i,m}|x_j) ],

where p_Y|X(y|x) = Σ_{j=1}^K w_j g(y|x;θ_j), and the arguments y_{i,m} are samples from the distribution p_Y|X(y|x_i).


In this embodiment, an alternating optimization can be performed iteratively to calculate both px(x) and pY|X(y|x)=Σ_{j=1}^K w_j g(y|x;θ_j), so as to derive from said calculations optimized probabilities π(t), described from a preceding iteration t−1 to a current iteration t as follows:

    • at first, probabilities π(t) are optimized for a fixed set of symbol positions x(t−1) and previous probability values π(t−1);
    • then, symbol positions x(t) are optimized for the thusly determined π(t) and the previous values x(t−1),


And these two steps are repeated iteratively until a stopping condition occurs on the mutual information I(x,π).


This embodiment is described in detail below with reference to FIG. 2, showing an example of the steps of an algorithm involving a gradient ascent on particles representing positions on a telecommunication constellation (QAM, PSK, or other).


The present invention aims also at a computer program comprising instructions causing a processing circuit to implement the method as presented above, when such instructions are executed by the processing circuit.



FIG. 2 commented below can illustrate the algorithm of such a computer program.


The present invention aims also at a system comprising at least a transmitter, a receiver, and a communication channel between the transmitter and the receiver, wherein the transmitter at least is configured to implement the method above.


The invention aims also at a communication device comprising a processing circuit configured to perform the optimization method as presented above.





BRIEF DESCRIPTION OF DRAWINGS

More details and advantages of possible embodiments of the invention will be presented below with reference to the appended drawings.



FIG. 1 is an overview of a system according to an example of embodiment of the invention.



FIG. 2 shows possible steps of an optimization method according to an embodiment of the invention.



FIG. 3 shows schematically a processing circuit of a communication device to perform the optimization method of the invention.





DESCRIPTION OF EMBODIMENTS

Referring to FIG. 1, a system according to the present invention comprises in an example of embodiment a transmitter 10, a receiver 12, a transmission channel 11 and an input signal probability distribution optimizer 13.


The transmitter 10 transmits messages conveyed by a signal belonging to a finite set of signals, each associated with a transmission probability according to an (optimized) input signal probability distribution. The transmitter 10 takes the messages and the (optimized) input signal probability distribution as inputs, and outputs the signal to be transmitted on the channel. The channel 11 takes the transmitted signal as an input, and outputs a received signal which is processed at the receiver 12 in order to decode the transmitted message. The channel is characterized by a conditional probability distribution giving the probability of outputting a given signal when the input is fixed. The probability distribution can generally be defined on a discrete or continuous input and/or output alphabet. Here, as an example, the continuous output alphabet is considered, and the probability distribution is called a probability density function in this case.


The input signal probability distribution optimizer 13 takes the conditional probability distribution estimation as an input, and outputs the optimized input signal probability distribution to the transmitter 10 and receiver 12.


It is worth noting here that the optimizer 13 can be a same module which is a part of both the transmitter and the receiver. It can be alternatively a module which is a part of a scheduling entity (e.g. a base station or other) in a telecommunication network linking said transmitter and receiver through the communication channel. More generally, a communication device such as the transmitter 10, the receiver 12, or else any device 13 being able to perform the optimization method, can include such a module which can have in practice the structure of a processing circuit as shown on FIG. 3. Such a processing circuit can comprise typically an input interface IN to receive data (at least data enabling the estimation of the conditional probability distribution), linked to a processor PROC cooperating with a memory unit MEM (storing at least instructions of a computer program according to the invention), and an output OUT to send results of optimization computations.


More particularly, the conditional probability distribution estimation is used for computing the optimized input signal probability distribution at the input signal probability distribution optimizer 13. In particular, it is shown hereafter that the optimization is made more efficient when the conditional probability distribution estimation is approximated by a mixture of exponential distributions.


The receiver 12 takes the received signal, the optimized input signal probability distribution and the estimated channel conditional probability distribution as inputs and performs an estimation of the message conveyed in the received signal.


The transmission channel 11 is represented by a model, hereafter, that follows a conditional probability distribution pY|X(y|x) that can be decomposed into a basis of probability distribution functions p(y|x;θ), where θ is a parameter set. For example, the distribution functions are from the exponential family and the parameters are essentially the mean and variance in the scalar case, and more generally the mean vector and covariance matrix in the multi-variate case, such that:






p_Y|X(y|x) = Σ_{j=1}^K w_j p(y|x;θ_j)   (E)


where K and the sets {θ_j}, {w_j} are parameters.


Three examples of channels following this model are cited hereafter.


Channels might have random discrete states when the channel fluctuates randomly in time according to discrete events, such as:

    • interference by bursts that changes the signal to noise ratio from one transmission to another,
    • shadowing effects that change the received signal power,
    • an approximation of a random fading channel by a discrete distribution, where the channel coefficient α is random and follows p(α)=Σ_{j=1}^K w_j δ(α−α_j), where α_j is one out of K possible values of the channel coefficient α occurring with a probability w_j, and δ(.) is the Dirac delta function. Thus, in the case of Gaussian noise with variance σ_η^2, such a fading channel leads to the probability distribution:








p_Y|X(y|x) = Σ_{j=1}^K w_j exp(−|y−α_j x|^2 / (2σ_η^2)) / √(2πσ_η^2)
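By way of purely illustrative example (not part of the claimed method), such a discrete-fading mixture density can be evaluated numerically as follows; the weights, fading coefficients and noise variance used are hypothetical values:

    import numpy as np

    def p_y_given_x(y, x, w, alpha, sigma2):
        """Mixture density p_Y|X(y|x) = sum_j w_j N(y; alpha_j * x, sigma2).

        y can be a scalar or an array of received samples; w and alpha are
        the mixture weights and discrete fading coefficients of equation (E).
        """
        y = np.asarray(y, dtype=float)
        # One Gaussian component per discrete fading state alpha_j.
        comps = [w_j * np.exp(-np.abs(y - a_j * x) ** 2 / (2.0 * sigma2))
                 / np.sqrt(2.0 * np.pi * sigma2)
                 for w_j, a_j in zip(w, alpha)]
        return np.sum(comps, axis=0)

    # Example (hypothetical) values: two fading states of a scalar channel.
    w = [0.7, 0.3]          # state probabilities w_j
    alpha = [1.0, 0.4]      # fading coefficients alpha_j
    sigma2 = 0.1            # noise variance sigma_eta^2
    print(p_y_given_x(0.9, 1.0, w, alpha, sigma2))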

In case of channel estimation impairments (typically when the transmission channel is imperfectly known), residual self-interference is obtained on the received signal. In general, the channel model is obtained as y = α̂x + η − vx, where v denotes the residual channel estimation error, which leads to:








p_Y|X(y|x) = Σ_{j=1}^K w_j exp(−|y−α̂x|^2 / (2(σ_η^2 + σ_v^2|x|^2))) / √(2π(σ_η^2 + σ_v^2|x|^2))

Therefore, it is shown here that any known continuous distribution pY|X(y|x) can be approximated by a finite set of continuous functions.


The approximation is done by minimizing a metric. One relevant metric is the Kullback-Leibler divergence that allows getting a measure of the difference between two distributions. Thus, when knowing pY|X(y|x) analytically, it is possible to find parameters set {θj},{wj} that minimize the Kullback-Leibler divergence between pY|X(y|x) and an approximated expression in the form of equation (E) given above.
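As a minimal illustrative sketch of this metric, assuming a one-dimensional real output and example densities chosen here for illustration only, the Kullback-Leibler divergence between a target density and a mixture approximation can be evaluated by numerical integration on a uniform grid:

    import numpy as np

    def kl_divergence(p, q, grid):
        """Numerical D(p || q) ~ sum over a uniform grid of p(y) log(p(y)/q(y)) dy."""
        dy = grid[1] - grid[0]
        py, qy = p(grid), q(grid)
        return float(np.sum(py * np.log(py / qy)) * dy)

    # Example: distance between a target density p and a 2-component mixture q
    # (all parameters are arbitrary illustrative values).
    grid = np.linspace(-5.0, 5.0, 2001)
    p = lambda y: np.exp(-(y - 0.7) ** 2 / 0.8) / np.sqrt(0.8 * np.pi)
    q = lambda y: (0.6 * np.exp(-(y - 1.0) ** 2 / 0.2) / np.sqrt(0.2 * np.pi)
                   + 0.4 * np.exp(-(y - 0.4) ** 2 / 0.2) / np.sqrt(0.2 * np.pi))
    print(kl_divergence(p, q, grid))

Minimizing this quantity over the mixture parameters {θ_j}, {w_j} is one concrete way of carrying out the fit described above.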


From an estimated histogram of pY|X(y|x), the distribution can likewise be approximated by a finite set of continuous functions, in the same way as with a known continuous distribution, by using the Kullback-Leibler divergence as a metric.
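A minimal sketch of this sample-based fitting, assuming a Gaussian basis: maximizing the likelihood of a Gaussian mixture over observed samples is equivalent to minimizing the Kullback-Leibler divergence from the empirical distribution to the model, which is what the EM algorithm of scikit-learn's GaussianMixture does here (the sample generator is hypothetical):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical observations of the channel output for one fixed input x:
    # in practice y_samples would be measured received symbols.
    rng = np.random.default_rng(0)
    y_samples = np.concatenate([rng.normal(1.0, 0.3, 700),
                                rng.normal(0.4, 0.3, 300)]).reshape(-1, 1)

    # Fit K Gaussian components by EM; maximizing the sample likelihood
    # minimizes the KL divergence from the empirical histogram to the model.
    K = 2
    gm = GaussianMixture(n_components=K, random_state=0).fit(y_samples)

    w = gm.weights_                 # estimated mixture weights {w_j}
    mu = gm.means_.ravel()          # estimated component means (part of {theta_j})
    var = gm.covariances_.ravel()   # estimated variances (part of {theta_j})
    print(w, mu, var)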


The function pY|X(y|x) is bi-variate, with variables x and y that span in general a continuous domain.


Hereafter a focus is made on symbols x belonging to a finite alphabet Ω={ω1, . . . , ωN} of cardinality N.


It is further assumed that the derivative of the probability distribution functions g(y|x;θ) is known. For example, when g(y|x;θ) is from the exponential family, it can be written:






g(y|x;θ) = h(y,θ) exp(x^T y − α(x,θ)),


where h(y,θ) is a function of y and θ, and α(x,θ) is the moment generating function, x and y being vectors in this general case. Thus,











∇_x g(y|x;θ) = h(y,θ) (y − ∇_x α(x,θ)) exp(x^T y − α(x,θ))

For example, in the scalar Gaussian case, the probability density function is thus decomposed as follows:











∇_x [ exp(−|y−α_j x|^2 / (2σ_η^2)) / √(2πσ_η^2) ] = (2α_j (y−α_j x) / (2σ_η^2)) · exp(−|y−α_j x|^2 / (2σ_η^2)) / √(2πσ_η^2)
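The decomposition above can be checked numerically; the following purely illustrative sketch compares the closed-form derivative with a finite-difference approximation (all values are arbitrary examples):

    import numpy as np

    def g(y, x, alpha_j, sigma2):
        """Scalar Gaussian basis function g(y|x) = N(y; alpha_j * x, sigma2)."""
        return np.exp(-(y - alpha_j * x) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

    def dg_dx(y, x, alpha_j, sigma2):
        """Closed-form derivative from the decomposition above."""
        return (2 * alpha_j * (y - alpha_j * x) / (2 * sigma2)) * g(y, x, alpha_j, sigma2)

    # Finite-difference check of the closed form (illustrative values).
    y, x, alpha_j, sigma2, eps = 0.8, 1.1, 0.9, 0.2, 1e-6
    numeric = (g(y, x + eps, alpha_j, sigma2) - g(y, x - eps, alpha_j, sigma2)) / (2 * eps)
    print(abs(numeric - dg_dx(y, x, alpha_j, sigma2)))  # close to machine precision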


The input signal distribution optimizer 13 relies on the estimation of the channel probability distribution in the form of equation (E). When the functional basis chosen for the estimation of the channel is the exponential family, closed-form expressions can be derived and the algorithm converges to the optimal solution.


The capacity-approaching input is deemed to be discrete for some channels. For the case of a continuous capacity-achieving input (which is the case for more general channels), the input distribution pX(x) can be represented as a list of N particles as

    • {(x_1,π_1), . . . , (x_N,π_N)},


      where x_i and π_i denote the positions (i.e., represented by a set of coordinates, or by a complex number in a 2-dimensional case) and the weights, respectively. The optimization problem in the transmitter can be written as










(x*, π*) = argmax_{x,π} I(x,π)   (1)

subject to Σ_{i=1}^N π_i = 1   (2)

Σ_{i=1}^N |x_i|^2 π_i ≤ P   (3)

0 < π_i < 1, for i = 1, . . . , N   (4)

Where:

    • I(x, π) is the mutual information as a function of position vector x=[x1, . . . , xN]T and weight vector π=[π1, . . . , πN]T,
    • the optimal values are shown with the superscript *, and
    • P denotes the total transmit power constraint, which is set arbitrarily. In general, this value is defined by a power budget of the transmitter which is related to the physical limit of the power amplifier or is related to a maximum radiated power allowed by regulation.


The constraint (2) sets the total probability of the particles to 1. Constraints (3) and (4) guarantee, respectively, that the total transmit power is less than or equal to P, and that the particle probabilities are positive values less than 1. The mutual information I(x,π) involves an integration over continuous random variables, but can be approximated by Monte-Carlo integration (the main principle of which is to replace the expectation, which usually involves an integration, by a generation of samples which are realizations of said random variable and an averaging of the obtained values) as











I(x,π) = (1/M) Σ_{i=1}^N Σ_{m=1}^M π_i log [ p_Y|X(y_{i,m}|x_i) / Σ_{j=1}^N π_j p_Y|X(y_{i,m}|x_j) ],   (5)

where M denotes the number of samples (i.e., the number of realizations of the random variables generated from their probability distribution), and where






p_Y|X(y|x) = Σ_{j=1}^K w_j g(y|x;θ_j),   (6)


denoting thus a decomposition of the conditional probability pY|X(y|x) into a basis of functions g() involving θj.


The arguments y_{i,m} in (5) are the samples from the distribution p_Y|X(y|x_i).
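A minimal sketch of the Monte-Carlo estimate (5), assuming a user-supplied sampler for p_Y|X(y|x_i) and a two-argument channel density function; all names are illustrative, not part of the claimed method:

    import numpy as np

    def mutual_information(x, pi, sample_y, p_y_given_x, M=1000):
        """Monte-Carlo estimate of I(x, pi) following equation (5).

        x  : array of N constellation positions x_i
        pi : array of N probability weights pi_i
        sample_y(x_i, M)     -> M samples y_{i,m} drawn from p_Y|X(y|x_i)
        p_y_given_x(y, x_j)  -> channel density evaluated at the samples y
        """
        N = len(x)
        total = 0.0
        for i in range(N):
            y = sample_y(x[i], M)                       # y_{i,m}, m = 1..M
            num = p_y_given_x(y, x[i])                  # p_Y|X(y_{i,m}|x_i)
            den = sum(pi[j] * p_y_given_x(y, x[j]) for j in range(N))
            total += pi[i] * np.sum(np.log(num / den))
        return total / M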


Hereafter, an alternating optimization method is proposed, described from iteration t−1 to t as follows:

    • at first, optimize π(t) for a fixed set of particles x(t−1) and a previous value π(t−1);
    • then, optimize x(t) for the obtained π(t) and a previous value x(t−1).


These two steps are detailed hereafter respectively as S1 and S2. They can intervene after an initialization step S0 of an algorithm presented below.


Step S1: Optimization of π(t) for a Fixed Set of Particles x(t−1) and a Previous Value π(t−1)


The optimization in (1) is concave with respect to π for fixed values of x. So, for a given x(t−1), (1) is solved for π by writing the Lagrangian and solving for π_i, for i=1, . . . , N, as











π_i(t) = exp( β|x_i(t−1)|^2 + (1/M) Σ_{m=1}^M log q(x_i(t−1)|y_{i,m}) ) / Σ_{j=1}^N exp( β|x_j(t−1)|^2 + (1/M) Σ_{m=1}^M log q(x_j(t−1)|y_{j,m}) ),   (7)

where

q(x_i|y_{i,m}) = π_i(t−1) p_Y|X(y_{i,m}|x_i) / Σ_{j=1}^N π_j(t−1) p_Y|X(y_{i,m}|x_j).

Here, the expression (1/M) Σ_{m=1}^M log q(x_i(t−1)|y_{i,m}) is the approximation of the mathematical expectation E[log q(x_i(t−1)|y_i)] according to the random variable y_i. The approximation is performed by the above-mentioned Monte-Carlo integration, i.e., by generating M samples according to the distribution of y_i. This term can be advantageously replaced by a numerical integration or a closed-form expression when available.


In (7), β denotes the Lagrange multiplier, which can be determined by replacing (7) in (3) with equality for the maximum total transmit power P, resulting in the non-linear equation













Σ_{i=1}^N exp( β|x_i(t−1)|^2 + (1/M) Σ_{m=1}^M log q(x_i(t−1)|y_{i,m}) ) [ P − |x_i(t−1)|^2 ] = 0.   (8)


The non-linear equation (8) can be solved using different tools, e.g., gradient-descent based approaches such as Newton-Raphson, or by selecting several values of β, computing the left part of the equation in (8) and keeping the one closest to 0 in absolute value. The values of π_i(t) are then obtained from (7).
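A minimal sketch of the line-search option just mentioned for solving (8), assuming the Monte-Carlo terms c_i = (1/M) Σ_m log q(x_i(t−1)|y_{i,m}) have already been computed; names and values are illustrative:

    import numpy as np

    def solve_beta_line_search(x_prev, c, P, betas):
        """Pick the beta whose left-hand side of (8) is closest to 0.

        x_prev : positions x_i(t-1)
        c      : Monte-Carlo terms (1/M) sum_m log q(x_i(t-1)|y_{i,m})
        P      : power limit, betas : candidate Lagrange multiplier values.
        """
        pw = np.abs(x_prev) ** 2
        def lhs(beta):
            return np.sum(np.exp(beta * pw + c) * (P - pw))
        best = min(betas, key=lambda b: abs(lhs(b)))
        # Equation (7): the new weights are the normalized exponentials.
        u = np.exp(best * pw + c)
        return best, u / np.sum(u)

    # Illustrative usage with hypothetical values:
    x_prev = np.array([1.0 + 0j, -1.0 + 0j, 0.5 + 0.5j, -0.5 - 0.5j])
    c = np.array([-1.2, -1.1, -0.9, -1.0])
    beta, pi_new = solve_beta_line_search(x_prev, c, P=1.0,
                                          betas=np.linspace(-5, 5, 2001))
    print(beta, pi_new)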


Step S2: Optimization of x(t) for a Fixed π(t) and Previous x(t−1)


The Lagrangian for the optimization in (1) with a given weight vector π(t) can be given by:






L(x; β, π(t)) = I(x, π(t)) + β ( P − Σ_{i=1}^N |x_i|^2 π_i(t) ).   (9)


The position vector x is obtained such that the Kullback-Leibler divergence D(p_Y|X(y|x_i) ∥ p_Y(y)), penalized by the second term in (9), is maximized. This way, the value of the Lagrangian L(x; β, π(t)), i.e., the penalized mutual information, is greater than or equal to its previous values after each update of the position and weight vectors. This is achieved by gradient ascent based methods, i.e.:







x_i(t) = x_i(t−1) + λ_t ∇_{x_i} D( p_Y|X(y|x_i) ∥ p_Y(y) ) |_{x(t−1), π(t)},

where the step size λ_t is a positive real number.


In the aforementioned gradient ascent based methods, it is required to compute the derivative of the term D(p_Y|X(y|x_i) ∥ p_Y(y)), which is approximated by Monte-Carlo integration as













∇_{x_i} D( p_Y|X(y|x_i) ∥ p_Y(y) ) |_{x(t−1), π(t)} ≈ (1/M) Σ_{m=1}^M h(y_{i,m}, x_i(t−1)) [ 1 + log( p_Y|X(y_{i,m}|x_i(t−1)) / Σ_{j=1}^N π_j(t) p_Y|X(y_{i,m}|x_j(t−1)) ) − π_i(t) p_Y|X(y_{i,m}|x_i(t−1)) / Σ_{j=1}^N π_j(t) p_Y|X(y_{i,m}|x_j(t−1)) ],

where h(y_{i,m}, x_i) = ∇_{x_i} log p_Y|X(y_{i,m}|x_i).


Using (6), it can be obtained:







h(y_{i,m}, x_i) = ∇_{x_i} log( Σ_{j=1}^K w_j g(y_{i,m}|x_i;θ_j) ) = ( Σ_{j=1}^K w_j ∇_{x_i} g(y_{i,m}|x_i;θ_j) ) / ( Σ_{j=1}^K w_j g(y_{i,m}|x_i;θ_j) )


Thus, when g(y|x;θ_j) and its derivative are known in closed form, this expression can be computed.


Finally, the x(t) values are obtained and the iterations can continue until a stopping condition is met. The stopping condition is for example a maximum execution time, or the condition that I(x(t),π(t))−I(x(t−1),π(t−1)) is lower than a given, typically small, threshold.
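A minimal sketch of this position update (step S22 below), assuming the closed-form gradient h(y_{i,m}, x_i) of log p_Y|X is supplied by the caller; names are illustrative:

    import numpy as np

    def update_positions(x_prev, pi_t, sample_y, p_y_given_x, grad_log_p,
                         lam, M=1000):
        """One gradient-ascent update of all positions x_i (step S22).

        grad_log_p(y, x_i) returns h(y_{i,m}, x_i), the derivative of
        log p_Y|X(y|x_i) with respect to x_i, known in closed form from (6).
        """
        N = len(x_prev)
        x_new = np.copy(x_prev)
        for i in range(N):
            y = sample_y(x_prev[i], M)                  # samples y_{i,m}
            num = p_y_given_x(y, x_prev[i])
            den = sum(pi_t[j] * p_y_given_x(y, x_prev[j]) for j in range(N))
            h = grad_log_p(y, x_prev[i])
            # Bracketed term of the Monte-Carlo gradient of the penalized KL.
            bracket = 1.0 + np.log(num / den) - pi_t[i] * num / den
            x_new[i] = x_prev[i] + lam * np.mean(h * bracket)
        return x_new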


An example of algorithm is detailed hereafter, with reference to FIG. 2.


Step S0: Initialization Step

    • Step S01: Get the input parameters
      • P, the power limit of the constellation
      • The initial constellation of N symbols
      • The stopping criterion threshold ϵ
    • Step S02: Get the channel conditional probability distribution in the form p_Y|X(y|x)=Σ_{j=1}^K w_j g(y|x;θ_j), where K and the w_j are scalar parameters and θ_j is a parameter set, and where the expression of











∇_{x_i} g(y|x;θ_j) is known.

    • Step S03: Set all π_i(0)=1/N; set all x_i(0) from an initial constellation C0; set I(−1)=0; set t=1.


Step S1: Iterative Step t


Step S10: Samples Generation

    • S101: For all i in [1,N], generate M samples yi,m from the distribution pY|X(y|x=xi(t−1))
    • S102: For all i in [1,N], all j in [1,N] and all m in [1,M], compute p_Y|X(y_{i,m}|x_j(t−1))


Step S11: Compute the Stopping Condition







    • S111: Compute

I(t−1) = (1/M) Σ_{i=1}^N Σ_{m=1}^M π_i(t−1) log [ p_Y|X(y_{i,m}|x_i(t−1)) / Σ_{j=1}^N π_j(t−1) p_Y|X(y_{i,m}|x_j(t−1)) ]

    • S112: If I(t−1)−I(t−2)<ϵ, stop the iterative algorithm (S113). Otherwise, go to S121.





Step S12: Update the Probabilities πi(t)

    • S121: For all i in [1,N], and m in [1,M], compute







q(x_i(t−1)|y_{i,m}) = π_i(t−1) p_Y|X(y_{i,m}|x_i(t−1)) / Σ_{j=1}^N π_j(t−1) p_Y|X(y_{i,m}|x_j(t−1))
    • S122: Compute β by solving:













Σ_{i=1}^N exp( β|x_i(t−1)|^2 + (1/M) Σ_{m=1}^M log q(x_i(t−1)|y_{i,m}) ) [ P − |x_i(t−1)|^2 ] = 0

    • for example by using a Newton-Raphson descent, and/or
      • by using a line-search strategy (taking several β values, computing the above expression and selecting the one closest to 0);

    • S123: For all i in [1,N], compute











π_i(t) = exp( β|x_i(t−1)|^2 + (1/M) Σ_{m=1}^M log q(x_i(t−1)|y_{i,m}) ) / Σ_{j=1}^N exp( β|x_j(t−1)|^2 + (1/M) Σ_{m=1}^M log q(x_j(t−1)|y_{j,m}) ).

Step S2: Update the Symbol Positions x_i(t) with the New π_i(t) and Previous x_i(t−1)

    • S21: For all i in [1,N] and all j in [1,K], compute ∇_{x_i} g(y_{i,m}|x_i;θ_j), which is obtained from the known expression of ∇_x g(y|x;θ_j) by substituting y with y_{i,m} and x with x_i

    • S22: For all i in [1,N], compute







x_i(t) = x_i(t−1) + λ_t (1/M) Σ_{m=1}^M h(y_{i,m}, x_i(t−1)) [ 1 + log( p_Y|X(y_{i,m}|x_i(t−1)) / Σ_{j=1}^N π_j(t) p_Y|X(y_{i,m}|x_j(t−1)) ) − π_i(t) p_Y|X(y_{i,m}|x_i(t−1)) / Σ_{j=1}^N π_j(t) p_Y|X(y_{i,m}|x_j(t−1)) ]

where h(y_{i,m}, x_i(t−1)) is the value of the function

h(y_{i,m}, x_i) = ( Σ_{j=1}^K w_j ∇_{x_i} g(y_{i,m}|x_i;θ_j) ) / ( Σ_{j=1}^K w_j g(y_{i,m}|x_i;θ_j) )

for x_i = x_i(t−1).


The next step S3 increments t so as to loop, for the next iteration, back to step S101.
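Pulling steps S0 to S3 together, the following non-authoritative skeleton sketches the full alternating loop of FIG. 2; it reuses the illustrative helpers mutual_information, solve_beta_line_search and update_positions sketched earlier (the two-argument channel density can be obtained from a full mixture model, e.g., with functools.partial):

    import numpy as np
    # Assumes mutual_information, solve_beta_line_search and update_positions
    # from the earlier sketches are available in the same module.

    def optimize_constellation(x0, P, sample_y, p_y_given_x, grad_log_p,
                               M=1000, lam=0.01, eps=1e-4, max_iter=100):
        """Alternating optimization of weights pi and positions x (FIG. 2)."""
        N = len(x0)
        x = np.asarray(x0, dtype=complex)       # step S03: initial constellation
        pi = np.full(N, 1.0 / N)                # step S03: uniform weights
        I_prev = 0.0
        for t in range(1, max_iter + 1):
            # S10/S11: samples and stopping condition on the mutual information.
            I_cur = mutual_information(x, pi, sample_y, p_y_given_x, M)
            if I_cur - I_prev < eps:
                break                           # S113
            I_prev = I_cur
            # S12: update the weights pi via the Lagrange multiplier beta.
            c = np.empty(N)
            for i in range(N):
                y = sample_y(x[i], M)
                num = pi[i] * p_y_given_x(y, x[i])
                den = sum(pi[j] * p_y_given_x(y, x[j]) for j in range(N))
                c[i] = np.mean(np.log(num / den))   # (1/M) sum_m log q(...)
            _, pi = solve_beta_line_search(x, c, P, np.linspace(-10, 10, 4001))
            # S2: update the positions x by gradient ascent, then S3: loop.
            x = update_positions(x, pi, sample_y, p_y_given_x, grad_log_p, lam, M)
        return x, pi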


An artificial intelligence can thus be programmed with such an algorithm to optimize the capacity of a given communication channel (one or several communication channels) in a telecommunication network.

Claims
  • 1. A method for optimizing a capacity of a communication channel in a communication system comprising at least a transmitter, a receiver, and said communication channel between the transmitter and the receiver, the transmitter using a finite set of symbols Ω={ω1, . . . , ωN} having respective positions on a constellation, to transmit a message including at least one symbol on said communication channel, the communication channel being characterized by a conditional probability distribution pY|X(y|x), where y is the symbol received at the receiver while x is the symbol transmitted by the transmitter, wherein said conditional probability distribution pY|X(y|x) is obtained, for each possible transmitted symbol x, by a mixture model using probability distributions represented by exponential functions, and an optimized input distribution px(x) is computed, based on parameters of said mixture model, to define optimized symbol positions and probabilities to be used at the transmitter for optimizing the capacity of the channel.
  • 2. The method of claim 1, wherein said optimized symbol positions and probabilities are obtained at the transmitter and at the receiver.
  • 3. The method according to claim 1, wherein the transmitter transmits messages conveyed by a signal belonging to a finite set of signals corresponding respectively to said symbols ω1, . . . , ωN, each signal being associated with a transmission probability according to an optimized input signal probability distribution corresponding to said optimized input distribution px(x), and wherein the transmitter takes messages to be transmitted and said optimized input signal probability distribution as inputs, and outputs a transmitted signal on the communication channel.
  • 4. The method according to claim 3, wherein the communication channel takes the transmitted signal as an input, and outputs a received signal intended to be processed at the receiver, said conditional probability distribution pY|X(y|x) being related thus to a probability of outputting a given signal y when the input x is fixed.
  • 5. The method according to claim 4, wherein an estimation of said conditional probability distribution pY|X(y|x) is taken as input, to output the optimized input signal probability distribution px(x) to be obtained at least at the transmitter, the conditional probability distribution estimation being used for computing the optimized input signal probability distribution, the conditional probability distribution estimation being approximated by said mixture model.
  • 6. The method according to claim 4, wherein the receiver takes the received signal, the optimized input signal probability distribution px(x) and an estimation of the channel conditional probability distribution pY|X(y|x) as inputs and performs an estimation of a message conveyed in said received signal.
  • 7. The method according to claim 1, wherein said mixture model follows a conditional probability distribution pY|X(y|x) which is decomposable into a basis of exponential probability distribution functions g(y|x;θ), where θ is a parameter set, such that: p_Y|X(y|x)=Σ_{j=1}^K w_j g(y|x;θ_j)   (E)
  • 8. The method of claim 7, wherein the exponential probability distribution functions are given by g(y|x;θ)=h(y,θ)exp(x^T y−α(x,θ)), where h(y,θ) is a function of y and θ, and α(x,θ) is the moment generating function, x and y being vectors, such that said derivative is given by: ∇_x g(y|x;θ) = h(y,θ)(y − ∇_x α(x,θ))exp(x^T y − α(x,θ)).
  • 9. The method according to claim 7, wherein said distribution pY|X(y|x) is approximated by a finite set of continuous functions minimizing a metric defined by Kullback-Leibler divergence, by determining the parameter sets {θ_j},{w_j} which minimize the Kullback-Leibler divergence between an analytical observation of pY|X(y|x) and its expression given by: p_Y|X(y|x)=Σ_{j=1}^K w_j g(y|x;θ_j).
  • 10. The method according to claim 1, wherein the input distribution px(x) is represented as a list of N constellation positions as {(x_1,π_1), . . . , (x_N,π_N)}, where x_i and π_i denote respectively constellation positions and probability weights, and wherein said input distribution px(x) is estimated by solving an optimization problem at the transmitter given by: (x*, π*) = argmax_{x,π} I(x,π), subject to Σ_{i=1}^N π_i = 1, Σ_{i=1}^N |x_i|^2 π_i ≤ P, and 0 < π_i < 1 for i = 1, . . . , N.
  • 11. The method of claim 10, wherein said mixture model follows a conditional probability distribution pY|X(y|x) which is decomposable into a basis of exponential probability distribution functions g(y|x;θ), where θ is a parameter set, such that: p_Y|X(y|x)=Σ_{j=1}^K w_j g(y|x;θ_j)   (E)
  • 12. The method of claim 11, wherein an alternating optimization is performed iteratively to calculate both px(x) and pY|X(y|x)=Σ_{j=1}^K w_j g(y|x;θ_j), so as to derive from said calculations optimized probabilities π(t), described from a preceding iteration t−1 to a current iteration t as follows: at first, probabilities π(t) are optimized for a fixed set of symbol positions x(t−1) and previous probability values π(t−1); then, symbol positions x(t) are optimized for the thusly determined π(t) and the previous values of x(t−1), and these two steps are repeated iteratively until a stopping condition occurs on the mutual information I(x,π).
  • 13. A computer program comprising instructions causing a processing circuit to implement the method as claimed in claim 1, when such instructions are executed by the processing circuit.
  • 14. A system comprising at least a transmitter, a receiver, and a communication channel between the transmitter and the receiver, wherein the transmitter at least is configured to implement the method according to claim 1.
  • 15. A communication device comprising a processing circuit configured to perform the optimization method according to claim 1.
Priority Claims (1)
Number Date Country Kind
20305672.6 Jun 2020 EP regional
PCT Information
Filing Document Filing Date Country Kind
PCT/JP2021/011559 3/12/2021 WO