ENERGY EFFICIENCY OPTIMIZATION METHOD FOR IRS-ASSISTED NOMA THZ NETWORK

Information

  • Patent Application
  • Publication Number: 20230413069
  • Date Filed: August 08, 2022
  • Date Published: December 21, 2023
Abstract
An energy efficiency optimization method for an IRS-assisted NOMA THz network comprises: classifying users into BS users and IRS users; defining a channel model for the BS users and a channel model for the IRS users; calculating a BS user rate and an IRS user rate respectively, and calculating a total rate of a system; proposing an optimization problem for downlink power control and IRS phase shift adjustment; and solving the optimization problem through an MADRL method. The invention puts forward an energy efficiency concept and adopts an MADRL method to maximize the overall energy efficiency of the system under the constraints of minimum rate and maximum power of each user.
Description
BACKGROUND OF THE INVENTION

The invention relates to a network energy efficiency optimization method, in particular to an energy efficiency optimization method for an IRS-assisted NOMA THz network, and belongs to the technical field of communication.


The demand for ultra-high data rates for information and entertainment grows rapidly in current and future wireless communication. However, the available spectrum resources are far from supporting the increasing data rates, which makes it urgent to explore new broadband to break through the spectrum bottleneck. Therefore, the Terahertz (THz) band has attracted wide attention from the academic and industrial communities with its broadband characteristics, and is considered a basic technology of sixth-generation (6G) mobile communications. THz waves refer to frequencies of 0.1-10 THz, and the available bandwidth is tens of times that of the millimeter wave. The peak data rate is expected to reach 1-10 Tbit/s. Owing to the advantages of narrow beams and large communication capacity, the THz band provides more potential to achieve ultra-high wireless transmission rates. However, due to the high frequency and small wavelength, the diffraction and penetration ability of THz waves is worse than that of microwave and millimeter waves, which makes them more easily blocked by obstacles.


Due to its intense attenuation, the THz band is only suitable for short-distance communication scenarios, such as shopping malls, subway stations and other indoor places. THz applications in outdoor communications require a lot of relay equipment. Therefore, some scholars propose to combine THz technology with an intelligent reflecting surface (IRS) to make the transmission more efficient. An IRS is a kind of reflecting surface composed of a large number of passive reflecting components, each of which can adjust its angle to reflect the signal independently. The intelligent reflector can be placed on the surfaces of buildings to effectively reflect indoor and outdoor signals. Many studies have focused on IRS-assisted communication in the THz band.


The wide bandwidth of the THz band can serve more latent users and devices, such as mobile users, industrial users and intelligent health-care terminals. However, the THz band has a major defect of small coverage, which is caused by severe attenuation of THz signals. This defect leads to a heavy transmission burden and results in a rapid increase of energy consumption. Non-orthogonal multiple access (NOMA) is a promising wireless communication technique, which allows users to share the same sub-channel simultaneously and multiplex their communication resources in a power domain or a code domain. Compared with traditional orthogonal multiple access, NOMA is an effective technique for improving spectral efficiency and realizing massive wireless network connection [8]. NOMA encourages more user devices to share the same sub-channel and can provide many data services to increase the utilization rate of resources in a THz network. In order to realize massive wireless connection and increase the resource utilization rate in THz communication, NOMA has been applied to THz networks in recent studies. By introducing NOMA into a THz cellular network, a sub-channel and power allocation approach based on an alternating direction method was put forward to optimize energy efficiency.


Inspired by the capacity enhancement of NOMA and the coverage improvement of IRS, the combination of NOMA with IRS-assisted communications has aroused significant interest. For example, some studies proposed a design of IRS-assisted NOMA downlink transmission, wherein channel vectors of marginal users are aligned in a preset spatial direction with the aid of IRSs. Other studies put forward an IRS-aided NOMA network and proposed an energy-efficient scheme to jointly optimize the transmit beamforming of the BS and the reflection phase shift of the IRS. In addition, some studies consider IRS-enhanced millimeter-wave NOMA systems and propose joint optimization of beamforming and power allocation.


The resource management mechanisms of traditional networks are relatively mature, but when applied to THz networks, they still have many limitations, which mainly include:

    • Limitation in the number of users and devices accessing the networks: the existing resource management mechanism is only suitable for cases where the number of users accessing the networks is small; as more users and devices access the networks, the utilization rate of the spectrum decreases, so the energy efficiency of the networks needs to be studied for cases with very many users and devices;
    • Severe signal attenuation: due to severe attenuation in the THz band, signals are quite likely to be blocked by buildings, and users in the shade of a building will be unable to receive signals from a BS, which leads to a failure of normal communication; a large number of BSs have to be constructed for traditional THz networks to guarantee a minimum signal-to-noise ratio for users;
    • Low energy efficiency: as more users and devices access the networks, too many BSs with high transmitting power have to be constructed, consuming too much energy, which makes the energy utilization rate of existing THz communication systems excessively low;
    • Low algorithm efficiency: traditional networks use a DQN algorithm and a single agent for reinforcement learning, where each agent represents one user, training is performed on the user side without considering information exchange between users, and a large amount of traffic is occupied every time a BS is trained to transmit information.


BRIEF SUMMARY OF THE INVENTION

The technical issue to be settled by the invention is to provide an energy efficiency optimization method for an IRS-assisted NOMA THz network to overcome the defects of the prior art.


The technical solution adopted by the invention to settle the above technical issue is as follows:


An energy efficiency optimization method for an IRS-assisted NOMA THz network comprises the following steps:

    • Step 1: classifying users into BS users and IRS users;
    • Step 2: defining a channel model for the BS users and a channel model for the IRS users;
    • Step 3: calculating a BS user rate and an IRS user rate respectively, and calculating a total rate of a system;
    • Step 4: proposing an optimization problem for downlink power control and IRS phase shift adjustment; and
    • Step 5: solving the optimization problem through an MADRL method.


Further, in Step 1,


NB antennas are configured for a base station, NU antennas are configured for users, and the users are classified into BS users and IRS users; assume the number of the BS users is L, and the BS users are represented by a set ℒ={1, 2, . . . , L}; the IRS users are divided into M clusters, wherein each cluster comprises K users and is served by G IRS elements, with ℳ={1, 2, . . . , M}, 𝒢={1, 2, . . . , G} and 𝒦={1, 2, . . . , K}; a bandwidth of the system is divided into multiple sub-channels, wherein each BS user and each IRS user respectively use one sub-channel; assume the BS users use the first L sub-channels, and the IRS users use the remaining sub-channels.
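By way of illustration, the indexing above can be sketched as follows in Python; the concrete sizes are assumptions chosen for the example, not values prescribed by the invention:

```python
# Sketch of the Step-1 classification; L, I, M, K and G are assumed sizes.
L = 4                      # number of BS users
I, M, K = 2, 2, 2          # IRSs, clusters per IRS, users per cluster
G = 16                     # IRS elements serving each cluster

bs_users = list(range(L))
irs_users = [(i, m, k) for i in range(I) for m in range(M) for k in range(K)]

# Each BS user occupies one of the first L sub-channels; each IRS user
# occupies one of the remaining sub-channels L, ..., L + I*M*K - 1.
subchannel_of_bs_user = {l: l for l in bs_users}
subchannel_of_irs_user = {u: L + idx for idx, u in enumerate(irs_users)}
```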


Further, in Step 2, the channel model for the BS users is specifically as follows:


Considering the severe attenuation of THz signals, the THz channel from the BS to the users is modeled as a LoS path, neglecting the reflected, scattered and diffracted fading; the channel gain from the BS to a user l at a sub-channel n is expressed as:







$$h_{l,n}^{B} = \frac{1}{PL(f_n, d_l)}$$







Wherein, PL(fn, dl) is the path loss of the THz LoS path, and fn and dl are the THz frequency and the distance between the BS and the user; the path loss of the THz LoS path is formed by two parts, of which one is a free-space spreading loss and the other is a molecular absorption loss, with an expression as:






$$PL(f_n, d_l) = L_{\mathrm{spread}}(f_n, d_l) \cdot L_{\mathrm{abs}}(f_n, d_l)$$


Where, Lspread(fn, dl) and Labs(fn, dl) meet:









$$L_{\mathrm{spread}}(f_n, d_l) = \left(\frac{4\pi f_n d_l}{c}\right)^{2}$$

$$L_{\mathrm{abs}}(f_n, d_l) = e^{-k_{\mathrm{abs}}(f_n)\, d_l}$$








Where, c represents a speed of light, and kabs(fn) represents molecular absorption coefficient;
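By way of illustration, a minimal Python sketch of this path-loss model follows; the absorption-coefficient value and the example frequency and distance are assumptions, since kabs(fn) actually depends on frequency and atmospheric composition:

```python
import numpy as np

C = 3.0e8  # speed of light (m/s)

def spreading_loss(f_n, d_l):
    """Free-space spreading loss (4*pi*f_n*d_l / c)^2."""
    return (4.0 * np.pi * f_n * d_l / C) ** 2

def absorption_loss(d_l, k_abs=1e-4):
    """Molecular absorption term e^{-k_abs(f_n) d_l}; k_abs is a placeholder."""
    return np.exp(-k_abs * d_l)

def channel_gain(f_n, d_l):
    """h = 1 / PL(f_n, d_l) with PL = L_spread * L_abs."""
    return 1.0 / (spreading_loss(f_n, d_l) * absorption_loss(d_l))

print(channel_gain(0.3e12, 20.0))  # example: 0.3 THz carrier at 20 m
```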


Assume the power transmitted to the user l through the sub-channel n is pl,nB; the received signal is:







$$y_{l,n}^{B} = h_{l,n}^{B}\, p_{l,n}^{B} + h_{l,n}^{B} \sum_{l'=1,\ l' \neq l}^{L}\ \sum_{n'=n-1,\ n' \neq n}^{n+1} p_{l',n'}^{B} + \sigma^{2}$$






Where, σ2 is the additive white Gaussian noise power, and pl′,n′B is the power transmitted to a user l′ through the sub-channel n′.


Further, in Step 2, the channel model for the IRS users is specifically as follows:


A channel for the IRS users is composed of a channel from the BS to an IRS, a channel from the IRS to the users, and a phase shift of the IRS elements; according to the classical S-V model, the channel reflected by an IRS i to the kth user in the mth cluster is defined as:






$$H = H_{I}\, \Phi\, H_{B}$$


Wherein, HB represents the channel attenuation from the BS to the IRS, and HI represents the channel attenuation from the IRS to the users; Φ is a G×G diagonal matrix that represents the phase shift of the IRS elements and meets Φ=diag([ejφ1, . . . , ejφG]), wherein φg represents the phase shift of the gth element; HB is expressed as:






$$H_{B} = A_{I_1}\, \mathrm{diag}(\alpha)\, A_{B}^{*}$$


Wherein





$$\alpha = \sqrt{N_B G / L_1}\,[\alpha_1, \ldots, \alpha_{l_1}, \ldots, \alpha_{L_1}]^{*}$$

$$A_{B} = [\alpha_{B}(\phi_1), \ldots, \alpha_{B}(\phi_{l_1}), \ldots, \alpha_{B}(\phi_{L_1})]$$

$$A_{I_1} = [\alpha_{I_1}(\gamma_{1}^{A}), \ldots, \alpha_{I_1}(\gamma_{l_1}^{A}), \ldots, \alpha_{I_1}(\gamma_{L_1}^{A})]$$


Wherein, L1 represents the number of scattering paths from the BS to the IRS, αl1 is a complex gain from the path loss of a path l1, and ϕl1∈[0, 2π] and γl1A∈[0, 2π] represent a departure angle and an arrival angle on the path l1 from the BS to the IRS; here, uniform linear arrays are considered, and αB(ϕl1) and αI1(γl1A) represent the array response vectors at the BS and the IRS, expressed as:









$$\alpha_{B}(\phi_{l_1}) = \frac{1}{\sqrt{N_B}}\left[1,\ e^{j(2\pi/\lambda) d \sin(\phi_{l_1})},\ \ldots,\ e^{j(N_B - 1)(2\pi/\lambda) d \sin(\phi_{l_1})}\right]^{*}$$

$$\alpha_{I_1}(\gamma_{l_1}^{A}) = \frac{1}{\sqrt{G}}\left[1,\ e^{j(2\pi/\lambda) d \sin(\gamma_{l_1}^{A})},\ \ldots,\ e^{j(G - 1)(2\pi/\lambda) d \sin(\gamma_{l_1}^{A})}\right]^{*}$$






Where, λ is a wavelength of THz signals, and d is a distance between adjacent antenna elements or IRS elements;

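By way of illustration, the uniform-linear-array response vector above can be sketched as follows; the half-wavelength spacing d = λ/2 is an assumption commonly used for such arrays:

```python
import numpy as np

def ula_response(n_elements, angle_rad, d_over_lambda=0.5):
    """a(theta) = (1/sqrt(N)) [1, e^{j(2*pi/lambda) d sin(theta)}, ...]^* for a ULA."""
    phase = 2.0 * np.pi * d_over_lambda * np.arange(n_elements) * np.sin(angle_rad)
    return (np.exp(1j * phase) / np.sqrt(n_elements)).conj()

a_bs = ula_response(8, np.deg2rad(30.0))    # alpha_B(phi_l1) with N_B = 8
a_irs = ula_response(16, np.deg2rad(45.0))  # alpha_I1(gamma_l1^A) with G = 16
```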

Similar to the BS-IRS link, the IRS-user channel is formulated as:






$$H_{I} = A_{U}\, \mathrm{diag}(\beta)\, A_{I_2}^{*}$$


Wherein,





$$\beta = \sqrt{N_U G / L_2}\,[\beta_1, \ldots, \beta_{l_2}, \ldots, \beta_{L_2}]^{*}$$

$$A_{U} = [\alpha_{U}(\phi_1), \ldots, \alpha_{U}(\phi_{l_2}), \ldots, \alpha_{U}(\phi_{L_2})]$$

$$A_{I_2} = [\alpha_{I_2}(\gamma_{1}^{D}), \ldots, \alpha_{I_2}(\gamma_{l_2}^{D}), \ldots, \alpha_{I_2}(\gamma_{L_2}^{D})]$$


Wherein, L2 represents the number of scattering paths from the IRS to the users, βl2 is a complex gain from the path loss of a path l2, and ϕl2∈[0, 2π] and γl2D∈[0, 2π] represent an arrival angle and a departure angle on the path l2 between the IRS and the users; here, uniform linear arrays are considered, and αU(ϕl2) and αI2(γl2D) are expressed as:









$$\alpha_{U}(\phi_{l_2}) = \frac{1}{\sqrt{N_U}}\left[1,\ e^{j(2\pi/\lambda) d \sin(\phi_{l_2})},\ \ldots,\ e^{j(N_U - 1)(2\pi/\lambda) d \sin(\phi_{l_2})}\right]^{*}$$

$$\alpha_{I_2}(\gamma_{l_2}^{D}) = \frac{1}{\sqrt{G}}\left[1,\ e^{j(2\pi/\lambda) d \sin(\gamma_{l_2}^{D})},\ \ldots,\ e^{j(G - 1)(2\pi/\lambda) d \sin(\gamma_{l_2}^{D})}\right]^{*}$$






So, the overall BS-IRS-user cascaded channel is:






$$H = A_{U}\, \mathrm{diag}(\beta)\, A_{I_2}^{*}\, \Phi\, A_{I_1}\, \mathrm{diag}(\alpha)\, A_{B}^{*}$$


For the sake of brevity, assume NB=1 and NU=1, so that H reduces to a vector whose entries represent the channel gain hi,m,k,nI of the kth user in the mth cluster on the sub-channel n; pi,m,k,nI represents the power transmitted to the kth user in the mth cluster on the sub-channel n; the signal received by the kth user in the mth cluster on the sub-channel n is expressed as:







$$y_{i,m,k,n}^{I} = h_{i,m,k,n}^{I}\, p_{i,m,k,n}^{I} + h_{i,m,k,n}^{I} \sum_{n'=n-1,\ n' \neq n}^{n+1} p_{i,m,k,n'}^{I} + h_{i,m,k,n}^{I} \sum_{k'=1}^{k-1} p_{i,m,k',n}^{I} + \sigma^{2}$$






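By way of illustration, the cascaded channel above can be sketched numerically as follows; the array sizes, path counts and random gains are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
N_B, N_U, G, L1, L2 = 8, 4, 16, 3, 3  # assumed sizes and path counts

def steer(n, theta):
    """ULA response (1/sqrt(n)) [1, e^{j*pi*sin(theta)}, ...]^* with d = lambda/2."""
    return (np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)).conj()

def path_matrix(n, n_paths):
    """Columns are response vectors for n_paths scattering paths at random angles."""
    return np.column_stack([steer(n, th) for th in rng.uniform(0, 2 * np.pi, n_paths)])

A_B, A_I1 = path_matrix(N_B, L1), path_matrix(G, L1)   # BS -> IRS link
A_I2, A_U = path_matrix(G, L2), path_matrix(N_U, L2)   # IRS -> user link
alpha = np.sqrt(N_B * G / L1) * (rng.standard_normal(L1) + 1j * rng.standard_normal(L1))
beta = np.sqrt(N_U * G / L2) * (rng.standard_normal(L2) + 1j * rng.standard_normal(L2))
Phi = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, G)))  # IRS phase-shift matrix

H_B = A_I1 @ np.diag(alpha) @ A_B.conj().T  # BS -> IRS attenuation
H_I = A_U @ np.diag(beta) @ A_I2.conj().T   # IRS -> user attenuation
H = H_I @ Phi @ H_B                         # cascaded N_U x N_B channel
```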
Further, in Step 3,


The BS user rate is calculated:


Wherein, the signal-to-interference-plus-noise ratio (SINR) for signal reception of a BS user l is:







$$\mathrm{SINR}_{l,n}^{B} = \frac{h_{l,n}^{B}\, p_{l,n}^{B}}{h_{l,n}^{B} \sum_{l'=1,\ l' \neq l}^{L}\ \sum_{n'=n-1,\ n' \neq n}^{n+1} p_{l',n'}^{B} + \sigma^{2}}$$







By a Shannon equation, a rate of the user l is expressed as:







$$R_{l}^{B} = B \sum_{n=1}^{L} \log_{2}\left(1 + \mathrm{SINR}_{l,n}^{B}\right)$$







Where, B is a bandwidth.


The IRS user rate is calculated:


Wherein, the SINR of the kth user in the mth cluster is:







$$\mathrm{SINR}_{i,m,k,n}^{I} = \frac{h_{i,m,k,n}^{I}\, p_{i,m,k,n}^{I}}{h_{i,m,k,n}^{I} \sum_{n'=n-1,\ n' \neq n}^{n+1} p_{i,m,k,n'}^{I} + h_{i,m,k,n}^{I} \sum_{k'=1}^{k-1} p_{i,m,k',n}^{I} + \sigma^{2}}$$







The rate is expressed as:






$$R_{i,m,k}^{I} = B \sum_{n=L+1}^{L+IMK} \log_{2}\left(1 + \mathrm{SINR}_{i,m,k,n}^{I}\right)$$


The total rate of the system is expressed as:






$$R = \sum_{l=1}^{L} R_{l}^{B} + \sum_{i=1}^{I} \sum_{m=1}^{M} \sum_{k=1}^{K} R_{i,m,k}^{I}$$

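By way of illustration, a minimal sketch of the BS-user rate computation in Step 3 follows; the gains, powers, bandwidth and noise power are assumed values, and h[l, n] and p[l, n] stand for the gain and power of user l on sub-channel n:

```python
import numpy as np

B = 1.0e9         # sub-channel bandwidth (Hz), assumed
sigma2 = 1.0e-12  # noise power, assumed
L_users, N_sub = 4, 4
rng = np.random.default_rng(1)
h = rng.random((L_users, N_sub)) * 1e-9  # channel gains h[l, n], assumed
p = rng.random((L_users, N_sub)) * 0.1   # allocated powers p[l, n], assumed

def bs_user_rate(l):
    """R_l^B = B * sum_n log2(1 + SINR_{l,n}^B) with adjacent-band interference."""
    rate = 0.0
    for n in range(N_sub):
        interference = sum(h[l, n] * p[lp, np_]
                           for lp in range(L_users) if lp != l
                           for np_ in (n - 1, n + 1) if 0 <= np_ < N_sub)
        sinr = h[l, n] * p[l, n] / (interference + sigma2)
        rate += B * np.log2(1.0 + sinr)
    return rate

total_bs_rate = sum(bs_user_rate(l) for l in range(L_users))
```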

Further, in Step 4,


To maximize overall energy efficiency of a network, the optimization problem for downlink power control and the IRS phase shift adjustment is proposed, wherein total transmission power of the BS is calculated as the sum power of all the users by:






$$P = \sum_{l=1}^{L} \sum_{n=1}^{L} p_{l,n}^{B} + \sum_{i=1}^{I} \sum_{m=1}^{M} \sum_{k=1}^{K} \sum_{n=L+1}^{L+IMK} p_{i,m,k,n}^{I}$$










The energy efficiency of the system network is defined as a ratio of a sum rate to the total power of the network, and the optimization problem is formulated as:









$$\max_{\varphi_{i,m,g},\ p_{i,m,k,n}^{I},\ p_{l,n}^{B}} \quad \mathrm{EE} = \frac{R}{P}$$

$$\mathrm{C1:}\ 0 < p_{i,m,k,n}^{I} < P_T,\quad \forall i,\ \forall m \in \mathcal{M},\ \forall k \in \mathcal{K},\ \forall n \in \mathcal{N}$$

$$\mathrm{C2:}\ 0 < p_{l,n}^{B} < P_T,\quad \forall l \in \mathcal{L},\ \forall n \in \mathcal{N}$$

$$\mathrm{C3:}\ R_{i,m,k}^{I} \geq R_{\min},\quad \forall i,\ \forall m \in \mathcal{M},\ \forall k \in \mathcal{K}$$

$$\mathrm{C4:}\ R_{l}^{B} \geq R_{\min},\quad \forall l \in \mathcal{L}$$

$$\mathrm{C5:}\ \varphi_{i,m,g} \in [0, 2\pi],\quad \forall i,\ \forall m \in \mathcal{M},\ \forall g \in \mathcal{G}$$







Wherein, C1 and C2 are power limitations of each user, C3 and C4 are minimum rate requirements, and C5 is an angle range.
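By way of illustration, the objective and the C1-C5 feasibility checks can be sketched as follows; the thresholds PT and Rmin are assumed values, with the rates and powers coming from computations such as the one sketched in Step 3:

```python
import numpy as np

P_T, R_min = 1.0, 1.0e8  # assumed maximum power (W) and minimum rate (bit/s)

def energy_efficiency(R_bs, R_irs, p_bs, p_irs):
    """EE = R / P: sum rate of all users over total transmission power."""
    return (np.sum(R_bs) + np.sum(R_irs)) / (np.sum(p_bs) + np.sum(p_irs))

def feasible(R_bs, R_irs, p_bs, p_irs, phases):
    """True when a candidate solution satisfies constraints C1-C5."""
    return (np.all((p_irs > 0) & (p_irs < P_T))                 # C1
            and np.all((p_bs > 0) & (p_bs < P_T))               # C2
            and np.all(np.asarray(R_irs) >= R_min)              # C3
            and np.all(np.asarray(R_bs) >= R_min)               # C4
            and np.all((phases >= 0) & (phases <= 2 * np.pi)))  # C5
```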


Further, in Step 5,


The optimization problem is solved through the MADRL method: virtual agents are introduced into the BS as mappings of the users, and the virtual agents perform training to obtain optimal power and phase shift; a central control unit is configured on the BS to collect user information including channel state information (CSI), phase shift and power; a clock is set to ensure synchronous iteration during agent training, so that overall energy efficiency is calculated after each iteration; and the agents perform training according to the collected user information and real-time iteration results to realize global optimization.


Further, a Markov process taking a discrete time, a finite state space and an action space into account is used for training; basic elements of reinforcement learning are represented by a tuple (𝒮, 𝒜, ℛ, 𝒫), where 𝒮 represents a state space, 𝒜 represents an action space, ℛ represents a reward function, and 𝒫 represents a state transition probability; and the state space and the action space are set as follows:

    • (1) State space: a tuple (φ, p) is defined to represent an angle of the IRS elements and the power of the BS users and the IRS users, and the state space is expressed by the formula 𝒮={s|s=(φ, p)}, wherein φ={φ1, . . . , φj, . . . , φG};
    • (2) Action space: in order to obtain a finite space, the angle and the power are discretized (a runnable sketch follows the list below) by:








$$\varphi_j \in \left\{0,\ \varphi_{\min} + \frac{(\varphi_{\max} - \varphi_{\min})\, i}{|\varphi| - 2}\ \middle|\ i = 0, \ldots, |\varphi| - 2 \right\}$$

$$p \in \left\{0,\ P_{\min} + \frac{(P_{\max} - P_{\min})\, i}{|P| - 2}\ \middle|\ i = 0, \ldots, |P| - 2 \right\}$$






Wherein, φmin and φmax are a minimum phase and a maximum phase of the IRS elements, Pmin and Pmax are minimum user power and maximum user power, and the discrete quantities of the angle and the power are |φ| and |P| respectively; the action space is formed as 𝒜={a|a=(φ, p)}.

    • (3) Reward space: a difference between the overall energy efficiency in a current state and the overall energy efficiency in a previous state is defined as a reward, which is presented as:
    • rt=EEt+1−EEt, wherein EEt+1 and EEt are the energy efficiency in a state st+1 and in a state st respectively;
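By way of illustration, the discretization of item (2) can be sketched as follows; the grid sizes and bounds are assumed values:

```python
import numpy as np

phi_min, phi_max, n_phi = 0.1, 2.0 * np.pi, 8  # |phi| = 8 phase levels, assumed
p_min, p_max, n_p = 0.01, 1.0, 8               # |P| = 8 power levels, assumed

# Each grid is {0} plus evenly spaced values from the minimum to the maximum,
# matching the discretization formulas in item (2).
phase_levels = np.concatenate(([0.0],
    phi_min + (phi_max - phi_min) * np.arange(n_phi - 1) / (n_phi - 2)))
power_levels = np.concatenate(([0.0],
    p_min + (p_max - p_min) * np.arange(n_p - 1) / (n_p - 2)))

# An action pairs one phase level with one power level: A = {a | a = (phi, p)}.
actions = [(ph, pw) for ph in phase_levels for pw in power_levels]
```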


The agents aim to obtain an optimal strategy π that realizes a maximum cumulative reward, which is defined as:







$$R_t \triangleq \sum_{t=0}^{\infty} \gamma^{t}\, r_{t+1}$$








Where, γ∈(0,1] is a discount factor for future rewards;
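By way of illustration, a minimal sketch of the reward and the discounted cumulative reward follows:

```python
def reward(ee_next, ee_curr):
    """r_t = EE_{t+1} - EE_t: positive when the energy efficiency improves."""
    return ee_next - ee_curr

def discounted_return(rewards, gamma=0.9):
    """R_t = sum_t gamma^t * r_{t+1} over a recorded reward sequence."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))
```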


During training, the agents select an action according to the optimal strategy π; at the state st, the agents take an action at according to the optimal strategy π, and at this moment, the action-value function Qπ(st, at) of the agents is expressed as:






$$Q^{\pi}(s_t, a_t) \triangleq E_{\pi}\left[R_t \mid s = s_t,\ a = a_t\right]$$


According to a Bellman equation,








$$Q^{*}(s, a) \triangleq E\left[r_t + \gamma \max_{a'} Q^{*}(s', a') \ \middle|\ s = s_t,\ a = a_t\right]$$





An evaluation of the optimal strategy is expressed as:








$$Q^{*}(s, a) \triangleq \max_{\pi} Q^{\pi}(s, a)$$






The optimal strategy is obtained by:







$$\pi^{*} = \arg\max_{a} Q^{*}(s, a)$$






To search for an optimal strategy in a large state space and a large action space, a DQN is introduced into MADRL; the action-value function is approximated by a neural network according to Q(s, a; θ)≈Q*(s, a), where θ is a weight updated by training; the DQN comprises a target network and a current network, which are trained by minimizing a loss function to optimize the parameter θ; the loss function is:





$$\mathrm{loss}(\theta) = \left(y_t^{DQN} - Q_t(s_t, a_t; \theta)\right)^{2}$$


Wherein, Qt(st, at; θ) is the output of the current network with parameter θ at the state st, and ytDQN is the output of the target network with parameter θ̂ at the state st+1;







$$y_t^{DQN} = r_t + \gamma\, Q\left(s_{t+1}, a_{t+1}; \hat{\theta}\right)$$







The loss function is minimized through a gradient descent algorithm, and the action-value function is approximated by the neural network until convergence.
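By way of illustration, the target and loss above can be sketched with a small neural network; q_net and target_net are assumed to map a batch of states to one Q-value per discrete action, and the maximum over next actions follows standard DQN practice (the text evaluates the target network at at+1):

```python
import torch

def dqn_loss(q_net, target_net, s_t, a_t, r_t, s_next, gamma=0.9):
    """loss(theta) = (y_t^DQN - Q_t(s_t, a_t; theta))^2, batch-averaged."""
    q_sa = q_net(s_t).gather(1, a_t.unsqueeze(1)).squeeze(1)  # Q_t(s_t, a_t; theta)
    with torch.no_grad():                                     # theta-hat is frozen
        y = r_t + gamma * target_net(s_next).max(dim=1).values  # y_t^DQN
    return ((y - q_sa) ** 2).mean()
```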


Further, an action-value function Q with a random parameter θ, a target action-value function with θ̂=θ, an index T for iteration and an experience pool 𝒟 are generated;


For episode=1 to M do

    • (1) Initializing the state st
    • (2) For t=1 to T
    • a. Selecting an action by the agents according to







$$a_t = \arg\max_{a} Q(s_t, a; \theta);$$






    • b. Performing the action at by the agents to switch from the current state st to the next state st+1;

    • c. Obtaining the reward rt through data exchange between the agents and the central control unit;

    • d. Forming a tuple (st, at, rt, st+1) and saving it in the experience pool 𝒟;

    • e. Randomly selecting a mini-batch of tuples (st, at, rt, st+1) from the experience pool 𝒟;

    • f. Calculating ytDQN according to











$$y_t^{DQN} = r_t + \gamma\, Q\left(s_{t+1}, a_{t+1}; \hat{\theta}\right);$$






    • g. Updating the parameter θ in loss(θ)=(ytDQN−Qt(st, αt; θ))2 through a gradient descent method;

    • h. Assigning θ to θ̂ every fixed period of time, that is, θ̂=θ;

    • i. Calculating energy efficiency EEt by the central control unit;

    • j. Calculating the reward according to rt=EEt+1−EEt;

    • Ending the cycle.
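By way of illustration, the whole procedure can be condensed into the following runnable sketch for one agent; the environment interface (reset/step), the ε-greedy exploration, the network sizes and all hyper-parameters are assumptions added to make the loop concrete:

```python
import copy
import random
from collections import deque

import numpy as np
import torch

def train(env, n_states, n_actions, episodes=50, T=100,
          gamma=0.9, lr=1e-3, batch=32, sync_every=20, eps=0.1):
    # Q network with parameter theta; the target network holds theta-hat.
    q_net = torch.nn.Sequential(torch.nn.Linear(n_states, 64), torch.nn.ReLU(),
                                torch.nn.Linear(64, n_actions))
    target = copy.deepcopy(q_net)                     # theta-hat = theta
    opt = torch.optim.SGD(q_net.parameters(), lr=lr)  # gradient descent
    pool = deque(maxlen=10_000)                       # experience pool D
    for _ in range(episodes):
        s = env.reset()                               # 1) initialize s_t
        for t in range(T):                            # 2) for t = 1 to T
            if random.random() < eps:                 # exploration (assumed)
                a = random.randrange(n_actions)
            else:                                     # a. a_t = argmax_a Q(s_t, a; theta)
                a = int(q_net(torch.as_tensor(s, dtype=torch.float32)).argmax())
            s_next, r = env.step(a)                   # b./c. act, get r_t (EE delta) from CCU
            pool.append((s, a, r, s_next))            # d. save (s_t, a_t, r_t, s_t+1) in D
            if len(pool) >= batch:                    # e. sample a mini-batch from D
                sb, ab, rb, nb = zip(*random.sample(pool, batch))
                sb = torch.as_tensor(np.array(sb), dtype=torch.float32)
                nb = torch.as_tensor(np.array(nb), dtype=torch.float32)
                ab = torch.as_tensor(ab)
                rb = torch.as_tensor(rb, dtype=torch.float32)
                q_sa = q_net(sb).gather(1, ab.unsqueeze(1)).squeeze(1)
                with torch.no_grad():                 # f. y_t^DQN from the target network
                    y = rb + gamma * target(nb).max(dim=1).values
                loss = ((y - q_sa) ** 2).mean()       # g. minimize loss(theta)
                opt.zero_grad(); loss.backward(); opt.step()
            if t % sync_every == 0:                   # h. theta-hat = theta periodically
                target.load_state_dict(q_net.state_dict())
            s = s_next                                # i./j. CCU computes EE_t and r_t
    return q_net
```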





Compared with the prior art, the invention has the following advantages and effects:

    • 1. An IRS-aided THz cellular network is constructed through NOMA. When the BS user rate and the IRS user rate are calculated, both the adjacent-band interference of the users and the in-band interference of each group of IRS users are taken into consideration.
    • 2. To maximize the energy efficiency of the system, an optimization problem is proposed to adjust the phase angle of the IRS elements and control the downlink power under the constraints of maximum transmission power and minimum data rate.
    • 3. The optimization problem is solved through the MADRL method. Virtual agents are introduced into the BS and perform training synchronously through central control and periodic information exchange. The DQN is used to perform non-uniform discretization on the optimization variables to construct an action space.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a model diagram of an IRS-aided NOMA THz network system according to the invention.



FIG. 2 is a schematic diagram of the interaction between agents and the environment according to one embodiment of the invention.





DETAILED DESCRIPTION OF THE INVENTION

To expound in detail the technical solutions adopted by the invention to fulfill desired technical purposes, the technical solutions of the embodiments of the invention will be clearly and completely described below in conjunction with the drawings of the embodiments of the invention. Obviously, the embodiments in the following description are merely illustrative ones, and are not all possible ones of the invention, and the technical means or technical features in the embodiments of the invention can be substituted without creative labor. The invention will be described in detail below with reference to the accompanying drawings and embodiments.


First of all, part of the specialized vocabulary used in the invention will be explained:

    • 1. IRS-aided THz communication: many researches focus on IRS-aided communication in the THz band. Literature [1] puts forward a method for searching for an optimal phase shift of IRS elements to improve the system rate in the THz band. In Literature [2], discrete phase shifts of IRSs and pre-coders are designed at a base station (BS) to optimize spectral efficiency. In Literature [3], a space tracking approach is developed for channel estimation of IRS-assisted THz networks, so as to maximize the rate of a system.


Application of NOMA to THz communication: in order to realize massive wireless connection and increase the resource utilization rate in THz communication, NOMA is applied to THz networks in recent studies. In Literature [5], NOMA is applied to THz cellular networks, and a sub-channel and power allocation scheme based on an alternating direction method is proposed to optimize energy efficiency. In addition, in Literature [4], a long-term user-centric window property of THz is captured, and the central sub-band and side sub-bands of a THz window are allocated to long and short NOMA groups respectively. In NOMA, the power allocated to users is related to the channel gain: users with small channel gains are allocated high power, and users with large channel gains are allocated low power [4]. NOMA receivers can decode or demodulate the superposed signals within coverage.


IRS-aided NOMA network: under the enlightenment of the capacity enhancement of NOMA and the coverage increase of IRSs, IRS-aided NOMA communication has aroused the interest of researchers. Literature [6] proposes a simple design of IRS-aided NOMA downlink transmission: the base station generates orthogonal wave beams in the spatial directions of the channels of near users by means of traditional space division multiple access, and with the aid of IRSs, the valid channel vectors of marginal users are aligned in a preset spatial direction to ensure that these wave beams can serve extra marginal users. In Literature [7], the author emphatically studies an IRS-aided NOMA network and puts forward an energy-saving scheme that maximizes the energy efficiency of a system by joint optimization of the transmit beamforming of the base station (BS) and the reflecting phase shift of the IRSs; the validity of IRSs in reducing the transmission power of NOMA systems is also studied, and, considering the constraint of a minimum signal-to-interference ratio for each user, the problem of power minimization of an IRS-aided downlink NOMA system is proposed. In addition, Literature [8] proposes IRS-enhanced millimeter-wave NOMA systems and comes up with joint optimization of active beam formation, passive beam formation and power distribution.


Introduction of reinforcement learning: Literatures [9]-[11] solve optimization problems by means of reinforcement learning. Literature [9] studies a method for power allocation in multi-cell networks, which, different from traditional optimization methods, uses deep reinforcement learning (DRL) for power allocation. The objective of this work is to maximize the overall capacity of a whole network under the condition of random and dense distribution of base stations. A wireless resource mapping method and a deep neural network, the Deep Q-fully-connected network (DQFCNet), are provided. Compared with power allocation based on water-filling and the Q-learning method, the DQFCNet can realize a higher overall capacity, and simulation results indicate that its convergence rate and stability are remarkably improved. Literature [10] solves the problem of dynamic spectrum access by means of DRL. Specifically, it studies a scene where different types of nodes share multiple discrete channels; these nodes do not have the capacity to communicate with other nodes and have no prior knowledge about the behaviors of other nodes, and the objective of each node is to maximize its own long-term transmission success rate. This problem is expressed as a Markov decision process (MDP) with unknown system dynamics. To overcome the challenge of an unknown environment and a large transition matrix, two specific DRL methods are used: the deep Q network (DQN) and the double deep Q network (DDQN); in addition, improved DQN techniques, including eligibility traces, prioritized experience and a "prediction process", are introduced. Simulation results indicate that the DQN and the DDQN can effectively learn the communication modes of different nodes without prior knowledge and fulfill approximately optimal performance. Literature [11] points out that complete system observability is necessary for optimizing radio transmission power and user data rate in wireless systems; although this issue has been widely studied, there is still no practical solution for approaching the optimal performance merely by means of the observability of available parts of an actual system. To overcome this defect, Literature [11] provides a reinforcement learning method for realizing downlink power control and rate adaptation in a cellular network, and puts forward the design of a comprehensive learning framework, including a system state, a common reward function and an effective learning algorithm. System-level simulation results show that this design learns a power control strategy rapidly, fulfills a remarkable energy-saving effect and guarantees the fairness of users in the system.


As shown in FIG. 1, the invention provides an energy efficiency optimization method for an IRS-assisted NOMA THz network, comprising the following steps:


Step 1: users are classified into BS users and IRS users.


NB antennas are configured for a base station, NU antennas are configured for users, and the users are classified into BS users and IRS users; assume the number of the BS users is L, and the BS users are represented by a set ℒ={1, 2, . . . , L}; the IRS users are divided into M clusters, wherein each cluster comprises K users and is served by G IRS elements, with ℳ={1, 2, . . . , M}, 𝒢={1, 2, . . . , G} and 𝒦={1, 2, . . . , K}; a bandwidth of the system is divided into multiple sub-channels, wherein each BS user and each IRS user respectively use one sub-channel; assume the BS users use the first L sub-channels, and the IRS users use the remaining sub-channels.


Step 2: a channel model for the BS users and a channel model for the IRS users are defined.


The channel model for the BS users is specifically as follows:


Considering the severe attenuation of THz signals, the THz channel from the BS to the users is modeled as a LoS path, neglecting the reflected, scattered and diffracted fading; the channel gain from the BS to a user l at a sub-channel n is expressed as:







$$h_{l,n}^{B} = \frac{1}{PL(f_n, d_l)}$$







Wherein, PL(fn, dl) is a path loss of the THz LoS path, and fn and dl are a THz frequency and a distance between the BS and the user; the path loss of the THz LoS path is formed by two parts, of which one is a free space spreading loss and the other is a molecular absorption loss, with an expression as:






$$PL(f_n, d_l) = L_{\mathrm{spread}}(f_n, d_l) \cdot L_{\mathrm{abs}}(f_n, d_l)$$


Where, Lspread(fn, dl) and Labs(fn, dl) meet









$$L_{\mathrm{spread}}(f_n, d_l) = \left(\frac{4\pi f_n d_l}{c}\right)^{2}$$

$$L_{\mathrm{abs}}(f_n, d_l) = e^{-k_{\mathrm{abs}}(f_n)\, d_l}$$








Where, c represents a speed of light, and kabs(fn) represents molecular absorption coefficient;


Assume the power transmitted to the user l through the sub-channel n is pl,nB; the received signal is:







$$y_{l,n}^{B} = h_{l,n}^{B}\, p_{l,n}^{B} + h_{l,n}^{B} \sum_{l'=1,\ l' \neq l}^{L}\ \sum_{n'=n-1,\ n' \neq n}^{n+1} p_{l',n'}^{B} + \sigma^{2}$$






Where, σ2 is the additive white Gaussian noise power, and pl′,n′B is the power transmitted to a user l′ through the sub-channel n′.


The channel model for the IRS users is specifically as follows:

    • A channel for the IRS users is composed of a channel from the BS to an IRS, a channel from the IRS to the users, and a phase shift of the IRS elements; according to the classical S-V model, the channel reflected by an IRS i to the kth user in the mth cluster is defined as:






$$H = H_{I}\, \Phi\, H_{B}$$


Where, HB represents the channel attenuation from the BS to the IRS, and HI represents the channel attenuation from the IRS to the users; Φ is a G×G diagonal matrix that represents the phase shift of the IRS elements and meets Φ=diag([ejφ1, . . . , ejφG]), wherein φg represents the phase shift of the gth element; HB is expressed as:






$$H_{B} = A_{I_1}\, \mathrm{diag}(\alpha)\, A_{B}^{*}$$


Wherein





$$\alpha = \sqrt{N_B G / L_1}\,[\alpha_1, \ldots, \alpha_{l_1}, \ldots, \alpha_{L_1}]^{*}$$

$$A_{B} = [\alpha_{B}(\phi_1), \ldots, \alpha_{B}(\phi_{l_1}), \ldots, \alpha_{B}(\phi_{L_1})]$$

$$A_{I_1} = [\alpha_{I_1}(\gamma_{1}^{A}), \ldots, \alpha_{I_1}(\gamma_{l_1}^{A}), \ldots, \alpha_{I_1}(\gamma_{L_1}^{A})]$$


Where, L1 represents the number of scattering paths from the BS to the IRS, αl1 is a complex gain from the path loss of a path l1, and ϕl1∈[0, 2π] and γl1A∈[0, 2π] represent a departure angle and an arrival angle on the path l1 from the BS to the IRS; here, uniform linear arrays are considered, and αB(ϕl1) and αI1(γl1A) represent the array response vectors at the BS and the IRS, expressed as:









$$\alpha_{B}(\phi_{l_1}) = \frac{1}{\sqrt{N_B}}\left[1,\ e^{j(2\pi/\lambda) d \sin(\phi_{l_1})},\ \ldots,\ e^{j(N_B - 1)(2\pi/\lambda) d \sin(\phi_{l_1})}\right]^{*}$$

$$\alpha_{I_1}(\gamma_{l_1}^{A}) = \frac{1}{\sqrt{G}}\left[1,\ e^{j(2\pi/\lambda) d \sin(\gamma_{l_1}^{A})},\ \ldots,\ e^{j(G - 1)(2\pi/\lambda) d \sin(\gamma_{l_1}^{A})}\right]^{*}$$






Where, λ is a wavelength of THz signals, and d is a distance between adjacent antenna elements or IRS elements;


Similar to the BS-IRS link, the IRS-user channel is formulated as:






$$H_{I} = A_{U}\, \mathrm{diag}(\beta)\, A_{I_2}^{*}$$


Wherein,





$$\beta = \sqrt{N_U G / L_2}\,[\beta_1, \ldots, \beta_{l_2}, \ldots, \beta_{L_2}]^{*}$$

$$A_{U} = [\alpha_{U}(\phi_1), \ldots, \alpha_{U}(\phi_{l_2}), \ldots, \alpha_{U}(\phi_{L_2})]$$

$$A_{I_2} = [\alpha_{I_2}(\gamma_{1}^{D}), \ldots, \alpha_{I_2}(\gamma_{l_2}^{D}), \ldots, \alpha_{I_2}(\gamma_{L_2}^{D})]$$


Where, L2 represents the number of scattering paths from the IRS to the users, βl2 is a complex gain from the path loss of a path l2, and ϕl2∈[0, 2π] and γl2D∈[0, 2π] represent an arrival angle and a departure angle on the path l2 between the IRS and the users; here, uniform linear arrays are considered, and αU(ϕl2) and αI2(γl2D) are expressed as:









$$\alpha_{U}(\phi_{l_2}) = \frac{1}{\sqrt{N_U}}\left[1,\ e^{j(2\pi/\lambda) d \sin(\phi_{l_2})},\ \ldots,\ e^{j(N_U - 1)(2\pi/\lambda) d \sin(\phi_{l_2})}\right]^{*}$$

$$\alpha_{I_2}(\gamma_{l_2}^{D}) = \frac{1}{\sqrt{G}}\left[1,\ e^{j(2\pi/\lambda) d \sin(\gamma_{l_2}^{D})},\ \ldots,\ e^{j(G - 1)(2\pi/\lambda) d \sin(\gamma_{l_2}^{D})}\right]^{*}$$






So, the overall BS-IRS-user cascaded channel is:






$$H = A_{U}\, \mathrm{diag}(\beta)\, A_{I_2}^{*}\, \Phi\, A_{I_1}\, \mathrm{diag}(\alpha)\, A_{B}^{*}$$


For the sake of brevity, assume NB=1 and NU=1, so that H reduces to a vector whose entries represent the channel gain hi,m,k,nI of the kth user in the mth cluster on the sub-channel n; pi,m,k,nI represents the power transmitted to the kth user in the mth cluster on the sub-channel n; the signal received by the kth user in the mth cluster on the sub-channel n is expressed as:







$$y_{i,m,k,n}^{I} = h_{i,m,k,n}^{I}\, p_{i,m,k,n}^{I} + h_{i,m,k,n}^{I} \sum_{n'=n-1,\ n' \neq n}^{n+1} p_{i,m,k,n'}^{I} + h_{i,m,k,n}^{I} \sum_{k'=1}^{k-1} p_{i,m,k',n}^{I} + \sigma^{2}$$






Step 3: a BS user rate and an IRS user rate are calculated respectively, and a total rate of a system is calculated.


The BS user rate is calculated:


Wherein, the signal-to-interference-plus-noise ratio (SINR) for signal reception of a BS user l is:







$$\mathrm{SINR}_{l,n}^{B} = \frac{h_{l,n}^{B}\, p_{l,n}^{B}}{h_{l,n}^{B} \sum_{l'=1,\ l' \neq l}^{L}\ \sum_{n'=n-1,\ n' \neq n}^{n+1} p_{l',n'}^{B} + \sigma^{2}}$$









    • By a Shannon equation, a rate of the user l is expressed as:










$$R_{l}^{B} = B \sum_{n=1}^{L} \log_{2}\left(1 + \mathrm{SINR}_{l,n}^{B}\right)$$






Where, B is a bandwidth;


The IRS user rate is calculated:


Wherein, the SINR of the kth user in the mth cluster is:







$$\mathrm{SINR}_{i,m,k,n}^{I} = \frac{h_{i,m,k,n}^{I}\, p_{i,m,k,n}^{I}}{h_{i,m,k,n}^{I} \sum_{n'=n-1,\ n' \neq n}^{n+1} p_{i,m,k,n'}^{I} + h_{i,m,k,n}^{I} \sum_{k'=1}^{k-1} p_{i,m,k',n}^{I} + \sigma^{2}}$$






The rate is expressed as:






$$R_{i,m,k}^{I} = B \sum_{n=L+1}^{L+IMK} \log_{2}\left(1 + \mathrm{SINR}_{i,m,k,n}^{I}\right)$$


The total rate of the system is expressed as:






$$R = \sum_{l=1}^{L} R_{l}^{B} + \sum_{i=1}^{I} \sum_{m=1}^{M} \sum_{k=1}^{K} R_{i,m,k}^{I}$$


Step 4: an optimization problem for downlink power control and IRS phase shift adjustment is proposed.


To maximize the overall energy efficiency of the network, the optimization problem for downlink power control and IRS phase shift adjustment is proposed, wherein the total transmission power of the BS is the sum of the power of all the users and is expressed as:






$$P = \sum_{l=1}^{L} \sum_{n=1}^{L} p_{l,n}^{B} + \sum_{i=1}^{I} \sum_{m=1}^{M} \sum_{k=1}^{K} \sum_{n=L+1}^{L+IMK} p_{i,m,k,n}^{I}$$










The energy efficiency of the system network is defined as a ratio of a sum rate to the total power of the network, and the optimization problem is formulated as:









$$\max_{\varphi_{i,m,g},\ p_{i,m,k,n}^{I},\ p_{l,n}^{B}} \quad \mathrm{EE} = \frac{R}{P}$$

$$\mathrm{C1:}\ 0 < p_{i,m,k,n}^{I} < P_T,\quad \forall i,\ \forall m \in \mathcal{M},\ \forall k \in \mathcal{K},\ \forall n \in \mathcal{N}$$

$$\mathrm{C2:}\ 0 < p_{l,n}^{B} < P_T,\quad \forall l \in \mathcal{L},\ \forall n \in \mathcal{N}$$

$$\mathrm{C3:}\ R_{i,m,k}^{I} \geq R_{\min},\quad \forall i,\ \forall m \in \mathcal{M},\ \forall k \in \mathcal{K}$$

$$\mathrm{C4:}\ R_{l}^{B} \geq R_{\min},\quad \forall l \in \mathcal{L}$$

$$\mathrm{C5:}\ \varphi_{i,m,g} \in [0, 2\pi],\quad \forall i,\ \forall m \in \mathcal{M},\ \forall g \in \mathcal{G}$$








Where, C1 and C2 are power limitations of each user, C3 and C4 are minimum rate requirements, and C5 is an angle range.


Step 5: the optimization problem is solved through an MADRL method.


The optimization problem is solved through the MADRL method: virtual agents are introduced into the BS as mappings of the users and perform training to obtain optimal power and phase shift; a central control unit is configured on the BS to collect user information including channel state information (CSI), phase shift and power; a clock is set to ensure synchronous iteration during agent training, so that overall energy efficiency is calculated after each iteration; and the agents perform training according to the collected user information and real-time iteration results to realize global optimization.


A Markov process taking a discrete time, a finite state space and an action space into account is used for training; basic elements of reinforcement learning are represented by a tuple (𝒮, 𝒜, ℛ, 𝒫), where 𝒮 represents a state space, 𝒜 represents an action space, ℛ represents a reward function, and 𝒫 represents a state transition probability; and the state space and the action space are set as follows:

    • 1) State space: a tuple (φ, p) is defined to represent the angle of the IRS elements and the power of the BS users and the IRS users, and the state space is expressed by the formula 𝒮={s|s=(φ, p)}, wherein φ={φ1, . . . , φj, . . . , φG};
    • 2) Action space: in order to obtain a finite space, the angle and the power are discretized by:








$$\varphi_j \in \left\{0,\ \varphi_{\min} + \frac{(\varphi_{\max} - \varphi_{\min})\, i}{|\varphi| - 2}\ \middle|\ i = 0, \ldots, |\varphi| - 2 \right\}$$

$$p \in \left\{0,\ P_{\min} + \frac{(P_{\max} - P_{\min})\, i}{|P| - 2}\ \middle|\ i = 0, \ldots, |P| - 2 \right\}$$






Wherein, φmin and φmax are a minimum phase and a maximum phase of the IRS elements, Pmin and Pmax are minimum user power and maximum user power, and the discrete quantities of the angle and the power are |φ| and |P| respectively; the action space is formed as 𝒜={a|a=(φ, p)};


3) Reward space: a difference between the overall energy efficiency in a current state and the overall energy efficiency in a previous state is defined as a reward, which is presented as:

    • rt=EEt+1−EEt, wherein EEt+1 and EEt are the energy efficiency in a state st+1 and in a state st respectively;
    • An optimal strategy π is obtained by the agents to realize a maximum cumulative reward, which is obtained by:







$$R_t \triangleq \sum_{t=0}^{\infty} \gamma^{t}\, r_{t+1}$$








Where, γ∈(0,1] is a discount factor for future rewards;


During training, the agents select an action according to the optimal strategy π; at the state st, the agents take an action αt according to the optimal strategy π, and at this moment, an action-value function Qπ(st, αt) of the agents is expressed as:






$$Q^{\pi}(s_t, a_t) \triangleq E_{\pi}\left[R_t \mid s = s_t,\ a = a_t\right]$$


According to a Bellman equation,








$$Q^{*}(s, a) \triangleq E\left[r_t + \gamma \max_{a'} Q^{*}(s', a') \ \middle|\ s = s_t,\ a = a_t\right]$$





An evaluation of the optimal strategy is expressed as:








$$Q^{*}(s, a) \triangleq \max_{\pi} Q^{\pi}(s, a)$$






The optimal strategy is obtained by:







$$\pi^{*} = \arg\max_{a} Q^{*}(s, a)$$






To search for an optimal strategy in a large state space and a large action space, a DQN is introduced into MADRL; the action-value function is approximated by a neural network according to Q(s, α; θ)≈Q*(s, α), where θ is a weight updated by training; the DQN comprises a target network and a current network, which are trained by minimizing a loss function to optimize the parameter θ; the loss function is:





$$\mathrm{loss}(\theta) = \left(y_t^{DQN} - Q_t(s_t, a_t; \theta)\right)^{2}$$


Wherein, Qt(st, αt; θ) is the output of the current network with parameter θ at the state st, and ytDQN is the output of the target network with parameter θ̂ at the state st+1;







$$y_t^{DQN} = r_t + \gamma\, Q\left(s_{t+1}, a_{t+1}; \hat{\theta}\right)$$







The loss function is minimized through a gradient descent algorithm, and the action-value function is approximated by the neural network until convergence.


An action-value function Q with a random parameter θ, a target action-value function with θ̂=θ, an index T for iteration and an experience pool 𝒟 are generated;


For episode=1 to M

    • 1) Initializing the state st
    • 2) For t=1 to T
    • a. Selecting an action by the agents according to







$$a_t = \arg\max_{a} Q(s_t, a; \theta);$$






    • b. Performing the action αt by the agents to switch from the current state st to the next state st+1;

    • c. Obtaining the reward rt through data exchange between the agents and the central control unit;

    • d. Forming a tuple (st, αt, rt, st+1) and saving it in the experience pool 𝒟;

    • e. Randomly selecting a mini-batch of tuples (st, αt, rt, st+1) from the experience pool 𝒟;

    • f. Calculating ytDQN according to











$$y_t^{DQN} = r_t + \gamma\, Q\left(s_{t+1}, a_{t+1}; \hat{\theta}\right);$$






    • g. Updating the parameter θ in loss(θ)=(ytDQN−Qt(st, αt; θ))2 through a gradient descent method;

    • h. Assigning θ to θ̂ every fixed period of time, that is, θ̂=θ;

    • i. Calculating energy efficiency EEt by the central control unit;

    • j. Calculating the reward according to rt=EEt+1−EEt;

    • Ending the cycle.





The above embodiments are merely preferred ones of the invention, and are not intended to limit the invention in any form. Although the invention has been disclosed above with reference to the preferred embodiments, these embodiments are not used to limit the invention. Any skilled in the art can obtain equivalent embodiments by slightly changing or modifying the technical contents disclosed above without departing from the scope of the technical solutions of the invention. Any simple amendments, equivalent substitutions and improvements made to the above embodiments based on the spirit and principle of the invention according to the technical essence of the invention should still fall within the protection scope of the technical solutions of the invention.


Literature list in this application:

  • [1] W. Chen, X. Ma, Z. Li, and N. Kuang, “Sum-rate maximization for intelligent reflecting surface based terahertz communication systems,” IEEE Int. Conf. Commun., pp. 153-157, August 2019.
  • [2] W. Chen, Z. Chen, X. Ma, Y. Chi, and Z. Li, “Spectral efficiency optimization for intelligent reflecting surface aided multi-input multioutput terahertz system,” Microwave and Optical Technology Lett., vol. 62, no. 8, pp. 2754-2759, August 2020.
  • [3] X. Ma, Z. Chen, W. Chen, Z. Li, Y. Chi, C. Han, and S. Li, “Joint channel estimation and data rate maximization for intelligent reflecting surface assisted terahertz MIMO communication systems,” IEEE Access, vol. 8, pp. 99565-99581, August 2020.
  • [4] X. Zhang, C. Han, and X. Wang, “Joint beamforming-power-bandwidth allocation in terahertz NOMA networks,” Int. Conf. on Sensing, Commun., and Netw., pp. 1-9, June 2019.
  • [5] H. Zhang, Y. Duan, K. Long, and V. C. M. Leung, “Energy efficient resource allocation in terahertz downlink NOMA systems,” IEEE Trans. Commun., vol. 69, no. 2, pp. 1375-1384, February 2021.
  • [6] Z. Ding and H. V. Poor, "A simple design of IRS-NOMA transmission," IEEE Commun. Lett., vol. 24, no. 5, pp. 1119-1123, May 2020.
  • [7] F. Fang, Y. Xu, Q. Pham, and Z. Ding, “Energy-efficient design of IRS-NOMA networks,” IEEE Trans. Veh. Technol., vol. 69, no. 11, pp. 14088-14092, November 2020.
  • [8] J. Zuo, Y. Liu, E. Basar, and O. A. Dobre, “Intelligent reflecting surface enhanced millimeter-wave NOMA systems,” IEEE Commun. Lett., vol. 24, no. 11, pp. 2632-2636, November 2020.
  • [9] Y. Zhang, C. Kang, T. Ma, Y. Teng, and D. Guo, “Power allocation in multi-cell networks using deep reinforcement learning,” IEEE Veh. Technol. Conf., pp. 1-6, August 2018.
  • [10] Y. Xu, J. Yu, W. C. Headley, and R. M. Buehrer, “Deep reinforcement learning for dynamic spectrum access in wireless networks,” IEEE Military Commun. Conf., pp. 207-212, October 2018.
  • [11] E. Ghadimi, F. D. Calabrese, G. Peters, and P. Soldati, “A reinforcement learning approach to power control and rate adaptation in cellular networks,” IEEE Int. Conf. Commun., pp. 1-7, May 2017.

Claims
  • 1. An energy efficiency optimization method for an IRS-assisted NOMA THz network, comprising the following steps: Step 1: classifying users into BS users and IRS users;Step 2: defining a channel model for the BS users and a channel model for the IRS users;Step 3: calculating a BS user rate and an IRS user rate respectively, and calculating a total rate of a system;Step 4: proposing an optimization problem for downlink power control and IRS phase shift adjustment; andStep 5: solving the optimization problem through an MADRL method.
  • 2. The energy efficiency optimization method for an IRS-assisted NOMA THz network according to claim 1, wherein in Step 1, NB antennas are configured for a base station, NU antennas are configured for users and the users are classified into BS users and IRS users; assume the number of the BS users is L, and the BS users are represented by a set ℒ={1, 2, . . . , L}; the IRS users are divided into M clusters, wherein each cluster comprises K users and is served by G IRS elements, with ℳ={1, 2, . . . , M}, 𝒢={1, 2, . . . , G} and 𝒦={1, 2, . . . , K}; a bandwidth of the system is divided into multiple sub-channels, wherein each BS user and each IRS user respectively use one sub-channel, and assume the BS users use the first L sub-channels, IRS users use the remaining sub-channels.
  • 3. The energy efficiency optimization method for an IRS-assisted NOMA THz network according to claim 1, wherein in Step 2, the channel model for the BS users is specifically as follows: considering that a THz channel from a BS to users is modeled into a LoS path with the neglect of the reflected, scattered and diffracted fading due to severe attenuation of THz; a channel gain from the BS to a user l at a sub-channel n is expressed as:
  • 4. The energy efficiency optimization method for an IRS-assisted NOMA THz network according to claim 3, wherein in Step 2, the channel model for the IRS users is specifically as follows: a channel for the IRS users is composed of a channel from the BS to an IRS, a channel from the IRS to the users, and a phase shift of IRS elements; according to a classical S-V model, assume a channel vector reflected by an IRS i to a kth user in an mth cluster is defined as: H=HIΦHB where, HB represents channel attenuation from the BS to the IRS, HI represents channel attenuation from the IRS to the users; Φ is a G×G diagonal matrix,represents the phase shift of the IRS elements andmeets Φ=diag([ejφ1, . . . , ejφG]), where φg represents the phase shift of a gth element; HB is expressed as: HB=Al1diag(α)AB*where α=√{square root over (NBG/L1)}[α1, . . . ,αl1, . . . ,αL1]*AB=[αB(ϕ1), . . . ,αB(ϕl1), . . . ,αB(ϕL1)]AI1=[αI1(γ1A), . . . ,αI1(γl1A), . . . ,αI1(γL1A)]Where, L1 represents the number of scattering paths from the BS to the IRS, αl1 is a complex gain from the path loss of a path l1, ϕl1∈[0, 2π] and γl1A∈[0, 2π] represent a departure angle and an arrival angle on the path l1 from the BS to the IRS; here, uniform linear array are considered, and αB(ϕl1) and αI2(γl1A) represent array response vectors at the BS and the IRS and are expressed as:
  • 5. The energy efficiency optimization method for an IRS-assisted NOMA THz network according to claim 4, wherein in Step 3, the BS user rate is calculated:wherein, a signal to noise ratio for signal reception of a BS user l is:
  • 6. The energy efficiency optimization method for an IRS-assisted NOMA THz network according to claim 5, wherein in Step 4, to maximize overall energy efficiency of a network, the optimization problem for downlink power control and the IRS phase shift adjustment is proposed, wherein total transmission power of the BS is calculated as the sum power of all the users by:
  • 7. The energy efficiency optimization method for an IRS-assisted NOMA THz network according to claim 6, wherein in Step 5, the optimization problem is solved through the MADRL method: virtual agents are introduced into the BS as mappings of the users, and the virtual agents perform training to obtain optimal power and phase shift; a central control unit is configured on the BS to collect user information including channel state information (CSI), phase shift and power; a clock is set to ensure synchronous iteration during agent training, so that overall energy efficiency is calculated after each iteration; and the agents perform training according to the collected user information and real-time iteration results to realize global optimization.
  • 8. The energy efficiency optimization method for an IRS-assisted NOMA THz network according to claim 7, wherein a Markov process taking a discrete time, a finite state space and an action space into account is used for training; basic elements of reinforcement learning are represented by a tuple (𝒮, 𝒜, ℛ, 𝒫), where 𝒮 represents a state space, 𝒜 represents an action space, ℛ represents a reward function, and 𝒫 represents a state transition probability; and the state space and the action space are set as follows: 1) state space: a tuple (φ, p) is defined to represent an angle of the IRS elements and the power of the BS users and the IRS users, and the state space is expressed by a formula 𝒮={s|s=(φ, p)}, wherein φ={φ1, . . . , φj, . . . , φG}; 2) action space: in order to obtain the finite space, the angle and the power are discretized by;
  • 9. The energy efficiency optimization method for an IRS-assisted NOMA THz network according to claim 8, wherein an action-value function Q with a random parameter θ, a target action-value function with θ̂=θ, an index T for iteration and an experience pool 𝒟 are generated; for episode=1 to M: 1) initializing the state st; 2) for t=1 to T: a. selecting an action by the agents according to
Priority Claims (1)
Number Date Country Kind
202210680248.1 Jun 2022 CN national