Channel decoding method and channel decoding device

Information

  • Patent Grant
  • Patent Number
    11,546,086
  • Date Filed
    Tuesday, February 22, 2022
  • Date Issued
    Tuesday, January 3, 2023
Abstract
A channel decoding method includes constructing a maximum likelihood decoding problem including an objective function and a parity check constraint; converting the parity check constraint in the maximum likelihood decoding problem into a cascaded form, converting a discrete constraint into a continuous constraint, and adding a penalty term to the objective function to obtain a decoding optimization problem with the penalty term; obtaining ADMM iterations according to a specific form of the penalty term, and obtaining a channel decoder based on the ADMM with the penalty term; constructing a deep learning network according to the ADMM iterations, and converting a penalty coefficient and a coefficient contained in the penalty term into network parameters; training the deep learning network with training data offline and learning the network parameters; and loading the learned network parameters in the channel decoder based on the ADMM with the penalty term, and performing real-time channel decoding.
Description
FIELD

The present disclosure belongs to the field of channel coding and decoding in wireless communications, and more particularly relates to a channel decoding method and a channel decoding device.


BACKGROUND

Channel decoding is used on the receiver side to recover the original binary information from the received signals. In a classical communication system, a message received by a sink is not necessarily the same as the message sent by a source, and the sink needs to know which source message was sent, so a decoder needs to choose a source message from a codebook based on the received message. A linear programming (LP) decoder is based on the linear relaxation of an original maximum likelihood decoding problem, and is a popular decoding technique for binary linear codes. Because the linear programming decoder has a strong theoretical guarantee on the decoding performance, it has received extensive attention from academia and industry, especially regarding the decoding of low density parity check (LDPC) codes. However, compared with a classical belief propagation (BP) decoder, the LP decoder has higher computational complexity and lower error correction performance in the low signal-to-noise ratio (SNR) region.


In addition, since deep learning techniques have been successfully applied in many other fields, such as computer vision and natural language processing, they have also begun to be applied in the field of wireless communications, for example in signal detection, channel estimation, and channel coding.


Unfolding an existing iterative algorithm has recently emerged as an effective way to build a deep learning network. Different from classical deep learning techniques such as the fully connected neural network and the convolutional neural network, which essentially operate as black boxes, this method can make full use of the inherent mechanism of the problem itself and use training data to improve performance with lower training complexity.


SUMMARY

According to an embodiment of the present disclosure, a channel decoding method is provided.


The channel decoding method includes: constructing a maximum likelihood decoding problem including an objective function and a parity check constraint based on channel decoding; converting the parity check constraint in the maximum likelihood decoding problem into a cascaded form, converting a discrete constraint into a continuous constraint, and adding a penalty term to the objective function to obtain a decoding optimization problem with the penalty term; introducing an alternating direction method of multipliers (ADMM), obtaining ADMM iterations according to a specific form of the penalty term, and obtaining a channel decoder based on the ADMM with the penalty term; constructing a deep learning network according to the ADMM iterations, and converting a penalty coefficient and a coefficient contained in the penalty term into network parameters; training the deep learning network with training data offline and learning the network parameters; and loading the learned network parameters in the channel decoder based on the ADMM with the penalty term, and performing online real-time channel decoding.


According to an embodiment of the present disclosure, a channel decoding device is provided. The channel decoding device includes a processor; and a memory for storing instructions executable by the processor. The processor is configured to: construct a maximum likelihood decoding problem including an objective function and a parity check constraint based on channel decoding; convert the parity check constraint in the maximum likelihood decoding problem into a cascaded form, convert a discrete constraint into a continuous constraint, and add a penalty term to the objective function to obtain a decoding optimization problem with the penalty term; introduce an ADMM, obtain ADMM iterations according to a specific form of the penalty term, and obtain a channel decoder based on the ADMM with the penalty term; construct a deep learning network according to the ADMM iterations, and convert a penalty coefficient and a coefficient contained in the penalty term into network parameters; train the deep learning network with training data offline and learn the network parameters; and load the learned network parameters in the channel decoder based on the ADMM with the penalty term, and perform online real-time channel decoding.


According to an embodiment of the present disclosure, a non-transitory computer-readable storage medium is provided. The non-transitory computer-readable storage medium has stored therein instructions that, when executed by a processor, cause the processor to perform the channel decoding method according to the abovementioned embodiment of the present disclosure.


Additional aspects and advantages of embodiments of present disclosure will be given in part in the following descriptions, become apparent in part from the following descriptions, or be learned from the practice of the embodiments of the present disclosure.





BRIEF DESCRIPTION OF THE DRAWINGS

These and other aspects and advantages of embodiments of the present disclosure will become apparent and more readily appreciated from the following descriptions made with reference to the drawings, in which:



FIG. 1 is a schematic diagram of a wireless communication system having a transmitter and a receiver with a decoder according to an exemplary embodiment;



FIG. 2 is a flow chart showing a channel decoding method according to an exemplary embodiment;



FIG. 3 is a structure diagram of an LADN deep learning-based decoding network;



FIG. 4 is a structure diagram of a single layer of an LADN deep learning-based decoding network; and



FIG. 5 is a block error ratio (BLER) graph of a BP decoder, an ADMM L2 decoder, a deep learning network LADN and a deep learning network LADN-I in a Rayleigh channel environment.





DETAILED DESCRIPTION

In order to make the technical solutions and advantages of the present disclosure clearer, embodiments will be described herein with reference to drawings. The embodiments are explanatory, illustrative, and used to generally understand the present disclosure, but shall not be construed to limit the present disclosure.


An objective of this application is to propose a deep learning-based channel decoding method using the ADMM in order to improve the decoding performance during the channel decoding process. ADMM iterations are unfolded to construct a network, and the penalty parameter and the coefficients of the penalty function are converted into network parameters to be learned. Once training is completed, the network parameters are fixed for online real-time channel decoding. In embodiments of the present disclosure, the capability of deep learning is used to find the optimal parameters in the channel decoder based on the ADMM with a penalty term, which can further improve the decoding performance.


In some embodiments, the channel decoding method may be applied in a receiver of a wireless communication system with binary linear codes, and configured to perform channel decoding on the binary linear codes. As an example, in a wireless communication system, LDPC is used for channel coding, and the code type is the [96, 48] MacKay 96.33.964 LDPC code. As shown in FIG. 1, a transmitter A and a receiver B form a complete transceiver system. The transmitter A consists of a source encoder and an LDPC encoder, and the LDPC encoder is configured to perform LDPC encoding on a bit sequence encoded by a source. The receiver B includes the proposed decoder and a source decoder. The core of the decoder is an ADMM L2 decoding algorithm, and the parameters included in the algorithm are obtained by training an LADN network, which is obtained by unfolding the ADMM L2 algorithm. After the training is completed, the parameters are fixed and the ADMM L2 decoding algorithm is fixed. Then, when the receiver B receives the signals from the transmitter A, the proposed decoder will determine which codeword was sent according to the detected signals.


According to an embodiment of the present disclosure, a channel decoding method is provided. As shown in FIG. 2, the channel decoding method includes steps 201 to 206 as follows:


In step 201, a maximum likelihood decoding problem including an objective function and a parity check constraint is constructed.


In step 202, the parity check constraint in the maximum likelihood decoding problem is converted into a cascaded form, a discrete constraint is converted into a continuous constraint, and a penalty term is added to the objective function to obtain a decoding optimization problem with the penalty term.


In step 203, an ADMM is introduced, ADMM iterations are obtained according to a specific form of the penalty term, and a channel decoder based on ADMM with the penalty term is obtained.


In step 204, a deep learning network is constructed according to the ADMM iterations, and a penalty coefficient and a coefficient contained in the penalty term are converted into network parameters.


In step 205, the deep learning network is trained with training data offline and the network parameters are learned.


In step 206, the learned network parameters are loaded in the channel decoder based on the ADMM with the penalty term, and online real-time channel decoding is performed.


In embodiments of the present disclosure, it is possible to make full use of the advantages of the model-driven deep learning method, and unfold the iterative process of the channel decoder based on the ADMM with a penalty term into a deep learning network. Since the optimal decoder parameters are determined by the deep learning method, the channel decoder based on the ADMM with the penalty term can obtain better decoding performance after loading the learned network parameters. At the same time, because the number of network parameters is small, the network is much easier to train, and the training process requires less training time and fewer computational resources as compared to other classical deep learning networks.


In some embodiments, constructing the maximum likelihood decoding problem comprising the objective function and the parity check constraint based on channel decoding specifically comprises:


constructing a maximum likelihood decoding problem represented by formula (1):











min_x v^T x
s.t. [ Σ_{i=1}^{N} H_ji x_i ]_2 = 0, ∀ j ∈ 𝒥
x ∈ {0,1}^(N×1), ∀ i ∈ ℐ   (1)







where N represents a length of a binary linear code 𝒞, H represents an M×N parity check matrix and is configured to designate each codeword, x represents a transmitted codeword, x = {x_i ∈ {0,1}, i ∈ ℐ};


[·]_2 represents a modulo-2 operation, ℐ and 𝒥 represent the variable node set and the check node set of the binary linear code 𝒞, respectively, ℐ ≜ {1, . . . , N}, 𝒥 ≜ {1, . . . , M};


v represents a log likelihood ratio vector, v = [v_1, . . . , v_N]^T ∈ ℝ^(N×1), each element of v is defined as











v_i = log( Pr(y_i | x_i = 0) / Pr(y_i | x_i = 1) ), ∀ i ∈ ℐ   (2)







where Pr(·) represents a conditional probability.


In some embodiments, converting the parity check constraint in the maximum likelihood decoding problem into the cascaded form, converting the discrete constraint into the continuous constraint, and adding the penalty term to the objective function to obtain the decoding optimization problem with the penalty term specifically comprise:

    • converting a parity check constraint of a high dimension into a finite number of three-variable parity check constraints defined as Tx ≤ t, x ∈ {0,1}^3, where










x = [x_1, x_2, x_3]^T,  t = [0, 0, 0, 2]^T,
T = [  1  -1  -1
      -1   1  -1
      -1  -1   1
       1   1   1 ]   (3)







introducing d_j−3 auxiliary variables x̃ to a parity check constraint with a dimension d_j to form d_j−2 parity check equations, where d_j represents the number of dimensions corresponding to a jth check node; providing Γ_a = Σ_{j=1}^{M} (d_j − 3) auxiliary variables and Γ_c = Σ_{j=1}^{M} (d_j − 2) three-variable parity check equations for a constraint [ Σ_{i=1}^{N} H_ji x_i ]_2 = 0, ∀ j ∈ 𝒥,





and converting the maximum likelihood decoding problem (1) into an equivalent integer linear programming problem including an objective function (4a), an inequality constraint (4b) and a discrete constraint (4c):










min_u q^T u   (4a)
s.t. Au − b ≤ 0   (4b)
u ∈ {0,1}^((N+Γ_a)×1)   (4c)







where u = [x^T, x̃^T]^T ∈ {0,1}^((N+Γ_a)×1), x̃ is the introduced auxiliary variable, b = 1_(Γ_c×1) ⊗ t, q = [v; 0_(Γ_a×1)], A = [TQ_1; . . . ; TQ_τ; . . . ; TQ_Γc] ∈ {0, ±1}^(4Γ_c×(N+Γ_a)); Q_τ selects the elements of the u vector corresponding to a τth three-variable parity check equation, Q_τ ∈ {0,1}^(3×(N+Γ_a));

converting the inequality constraint (4b) into an equality constraint (5b) by introducing an auxiliary variable z to obtain an equivalent optimization problem including an objective function (5a), the equality constraint (5b) and a discrete constraint (5c):










min_{u,z} q^T u   (5a)
s.t. Au + z = b   (5b)
u ∈ {0,1}^((N+Γ_a)×1), z ∈ ℝ_+^(4Γ_c×1)   (5c)







relaxing the discrete constraint (5c) into u ∈ [0,1]^((N+Γ_a)×1) by adding a penalty term to the objective function (5a) to obtain a decoding optimization problem with the penalty term represented by formulas (6a) to (6c):











min_{u,z} q^T u + Σ_i g(u_i)   (6a)
s.t. Au + z = b   (6b)
u ∈ [0,1]^((N+Γ_a)×1), z ∈ ℝ_+^(4Γ_c×1)   (6c)







where g(·): [0,1] → ℝ ∪ {±∞} represents a penalty function, specifically the function g_ℓ2(u_i) = −(α/2)‖u_i − 0.5‖_2^2, where u_i represents an ith element in the u vector, and α represents a coefficient contained in the penalty function for controlling a slope of the penalty function.


In some embodiments, introducing the ADMM, obtaining the ADMM iterations according to the specific form of the penalty term, and obtaining the channel decoder based on the ADMM with the penalty term specifically comprise:

    • obtaining an augmented Lagrange equation (7) according to the decoding optimization problem with the penalty term:










L_μ(u, z, y) = q^T u + Σ_i g(u_i) + y^T (Au + z − b) + (μ/2) ‖Au + z − b‖_2^2   (7)







where y represents a Lagrange multiplier, y ∈ ℝ^(4Γ_c×1), and μ represents a penalty parameter;


introducing the ADMM method, and performing an iteration as follows:










u^(k+1) = argmin_{u ∈ [0,1]^((N+Γ_a)×1)} L_μ(u, z^k, y^k)   (8a)
z^(k+1) = argmin_{z ∈ ℝ_+^(4Γ_c×1)} L_μ(u^(k+1), z, y^k)   (8b)
y^(k+1) = y^k + μ (Au^(k+1) + z^(k+1) − b)   (8c)







where A^T A is a diagonal matrix due to mutual independence of column vectors of A, resolving formula (8a) into N+Γ_a parallel subproblems (9a) and (9b), in which e_i denotes the ith diagonal element of A^T A and a_i denotes the ith column of A:











min_{u_i} (1/2) μ e_i u_i^2 + g(u_i) + ( q_i + a_i^T ( y^k + μ (z^k − b) ) ) u_i   (9a)
s.t. u_i ∈ [0,1], ∀ i   (9b)







making a derivative of (1/2) μ e_i u_i^2 + g(u_i) + ( q_i + a_i^T ( y^k + μ (z^k − b) ) ) u_i with respect to u_i equal to 0 to obtain a solution (10) of the subproblems (9a) and (9b):










u_i^(k+1) = Π_[0,1] ( ( q_i + a_i^T ( y^k + μ (z^k − b) ) + α/2 ) / ( α − μ e_i ) )   (10)







obtaining a solution (11) of formula (8b) in a similar way:










z^(k+1) = Π_([0,+∞]^(4Γ_c)) ( b − Au^(k+1) − y^k / μ )   (11)







so as to obtain the channel decoder based on the ADMM with the penalty term, abbreviated as an ADMM L2 decoder, where μ and α are preset coefficients.
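For illustration, the ADMM iterations (8a) to (8c) together with the closed-form updates (10) and (11) can be collected into a short routine. The following Python/NumPy sketch assumes that the cascaded matrix A, the vectors b and q, and the preset coefficients μ and α have already been constructed as described above; the function name, the initialization of the variables and the fixed iteration count are illustrative assumptions rather than details fixed by the text.

```python
import numpy as np

def admm_l2_decode(q, A, b, mu, alpha, num_iter=200):
    """Sketch of the ADMM L2 decoder: iterations (8a)-(8c) with updates (10)-(11).

    q     : (N + Gamma_a,) objective vector [v; 0]
    A     : (4*Gamma_c, N + Gamma_a) cascaded three-variable parity-check matrix
    b     : (4*Gamma_c,) right-hand side 1 (kron) t
    mu    : penalty parameter of the augmented Lagrangian
    alpha : coefficient of the penalty g(u_i) = -(alpha/2)*||u_i - 0.5||^2
    """
    n, m = q.shape[0], b.shape[0]
    u = np.full(n, 0.5)        # relaxed codeword estimate in [0, 1]
    z = np.zeros(m)            # nonnegative slack variable
    y = np.zeros(m)            # Lagrange multiplier
    e = np.sum(A * A, axis=0)  # diagonal entries of A^T A (treated as diagonal in the text)
    for _ in range(num_iter):
        c = q + A.T @ (y + mu * (z - b))
        u = np.clip((c + alpha / 2.0) / (alpha - mu * e), 0.0, 1.0)  # closed form (10)
        z = np.maximum(b - A @ u - y / mu, 0.0)                      # closed form (11)
        y = y + mu * (A @ u + z - b)                                 # multiplier update (8c)
    return u   # a hard decision on the first N entries gives the codeword, cf. (13)
```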


In some embodiments, constructing the deep learning network according to the ADMM iterations, and converting the penalty coefficient and the coefficient contained in the penalty term into network parameters specifically comprise:


constructing a deep learning network LADN according to the ADMM iterations, and converting the preset coefficients in the ADMM L2 decoder into network parameters, wherein the deep learning network LADN is composed of K layers with the same structure, each layer corresponds to one iteration in the ADMM L2 decoder and includes three nodes u, z and y, (u^(k), z^(k), y^(k)) represents the outputs of the three nodes of a kth layer, and each of the K layers of the deep learning network LADN is represented by formulas (12a) to (12c):










u^(k+1) = Π_[0,1] ( η ⊙ ( q + A^T ( y^(k) + μ (z^(k) − b) ) + α/2 ) )   (12a)
z^(k+1) = relu ( b − Au^(k+1) − y^(k) / μ )   (12b)
y^(k+1) = y^(k) + μ ( Au^(k+1) + z^(k+1) − b )   (12c)







where η ∈ ℝ^(N+Γ_a), η represents an output of a function η(A; α; μ) ≜ diag(1/(α − μ A^T A)), symbol ⊙ represents a Hadamard product, relu(·) represents an activation function defined as relu(x) = max(x, 0);

    • a decoding output of the deep learning network LADN is represented by a formula (13):

      x̂ = Π_[0,1]([u_1^(K), . . . , u_N^(K)])  (13)


where x̂ represents a decoding result.


In some embodiments, K is 50 or 70.
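To make the unrolled structure concrete, a minimal PyTorch-style sketch of the K-layer LADN is given below, assuming A and b are precomputed constants and registering α and μ as learnable parameters so that formulas (12a) to (12c) become the forward pass of each layer; the class name, the initial parameter values and the tensor conventions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LADN(nn.Module):
    """Sketch of the unrolled ADMM L2 decoder, formulas (12a)-(12c)."""

    def __init__(self, A, b, num_layers=50, alpha_init=1.0, mu_init=1.0):
        super().__init__()
        self.register_buffer("A", A)                   # (4*Gc, N + Ga)
        self.register_buffer("b", b)                   # (4*Gc,)
        self.register_buffer("e", (A * A).sum(dim=0))  # diagonal of A^T A
        self.K = num_layers
        # penalty coefficient alpha and parameter mu as learnable scalars
        self.alpha = nn.Parameter(torch.tensor(alpha_init))
        self.mu = nn.Parameter(torch.tensor(mu_init))

    def forward(self, q):
        u = q.new_full((self.A.shape[1],), 0.5)
        z = q.new_zeros(self.A.shape[0])
        y = q.new_zeros(self.A.shape[0])
        eta = 1.0 / (self.alpha - self.mu * self.e)    # eta(A; alpha; mu)
        for _ in range(self.K):
            c = q + self.A.t() @ (y + self.mu * (z - self.b))
            u = torch.clamp(eta * (c + self.alpha / 2.0), 0.0, 1.0)  # (12a)
            z = torch.relu(self.b - self.A @ u - y / self.mu)        # (12b)
            y = y + self.mu * (self.A @ u + z - self.b)              # (12c)
        return u, z   # u[:N] feeds the decoding output (13)
```

In this sketch a single α and a single μ are shared by all K layers, which corresponds to LADN; the LADN-I variant described next would instead register one μ_k per layer.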


In some embodiments, a penalty coefficient in the deep learning network LADN is converted into an independent parameter set μ = [μ_1, . . . , μ_K] as the network parameters to obtain a deep learning network LADN-I with layers each represented by formulas (14a) to (14c):










u^(k+1) = Π_[0,1] ( η ⊙ ( q + A^T ( y^(k) + μ_k (z^(k) − b) ) + α/2 ) )   (14a)
z^(k+1) = relu ( b − Au^(k+1) − y^(k) / μ_k )   (14b)
y^(k+1) = y^(k) + μ_k ( Au^(k+1) + z^(k+1) − b )   (14c)







In some embodiments, the training data is generated by selecting a predetermined signal-to-noise ratio through cross-validation to generate training samples, which constitute the training data {v_p, x_p}_{p=1}^P, where v_p represents a feature, and x_p represents a label; and


training the deep learning network with training data offline and learning the network parameters specifically comprises:


providing a loss function (17) based on a mean squared error:













ℒ(Θ) = (1/P) Σ_{p=1}^{P} ( σ ‖Au_p^(K) + z_p^(K) − b‖_2^2 + (1 − σ) ‖LADN(v_p; Θ) − x_p‖_2^2 )   (17)







where Θ represents network parameters contained in the deep learning network, and is denoted by {α, μ} in the deep learning network LADN, or by {α, μ} with μ = [μ_1, . . . , μ_K] in the deep learning network LADN-I, ‖Au_p^(K) + z_p^(K) − b‖_2^2 represents an unsupervised item, and ‖LADN(v_p; Θ) − x_p‖_2^2 represents a supervised item;

    • performing training using a decaying learning rate with an initial value of λ and a decay rate of 0.5 until the loss function no longer decreases, to obtain the learned network parameters μ and α.


In some embodiments, the predetermined signal-to-noise ratio is 2 dB, and 40,000 training samples and 1,000 validation samples are generated to constitute the training data.


In some embodiments, σ is 0.3 or 0.9.


In some embodiments, the initial value λ is 0.001.
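A minimal sketch of the loss (17) and of the offline training loop might look as follows, reusing the LADN module from the earlier sketch; the choice of the Adam optimizer and of an exponential learning-rate schedule are assumptions for illustration, since the text only specifies the initial learning rate λ, the decay factor 0.5 and the stopping criterion.

```python
import torch

def ladn_loss(u_K, z_K, x_p, A, b, sigma=0.3):
    """Loss (17): sigma * unsupervised residual + (1 - sigma) * supervised error."""
    unsupervised = torch.sum((A @ u_K + z_K - b) ** 2)
    supervised = torch.sum((u_K[: x_p.shape[0]] - x_p) ** 2)
    return sigma * unsupervised + (1.0 - sigma) * supervised

def train_offline(model, train_set, lam=1e-3, num_epochs=20, sigma=0.3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lam)
    # learning rate decayed by 0.5 as described in the text (the schedule itself is an assumption)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.5)
    for _ in range(num_epochs):
        for v_p, x_p in train_set:        # v_p: LLR feature, x_p: codeword label
            u_K, z_K = model(v_p)
            loss = ladn_loss(u_K, z_K, x_p, model.A, model.b, sigma)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```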


Embodiment 1

Considering an additive Gaussian channel, the code types considered are the [96, 48] MacKay 96.33.964 LDPC code C1 and the [128, 64] CCSDS LDPC code C2. According to a first embodiment of the present disclosure, a deep learning-based channel decoding method using the ADMM method proposed for this system includes steps 1) to 6) as follows:


In step 1), a maximum likelihood decoding problem is constructed. The maximum likelihood decoding problem includes an objective function and a parity check constraint.


Specifically, a maximum likelihood decoding problem represented by formula (1) is constructed based on channel decoding:











min_x v^T x
s.t. [ Σ_{i=1}^{N} H_ji x_i ]_2 = 0, ∀ j ∈ 𝒥
x ∈ {0,1}^(N×1), ∀ i ∈ ℐ   (1)







where N represents a length of the LDPC code C1 or C2, H represents an M×N parity check matrix and is configured to designate each codeword of the LDPC code, x represents a transmitted codeword, x = {x_i ∈ {0,1}, i ∈ ℐ};


[·]_2 represents a modulo-2 operation, ℐ and 𝒥 represent the variable node set and the check node set of the codeword, respectively, ℐ ≜ {1, . . . , N}, 𝒥 ≜ {1, . . . , M}; v represents a log likelihood ratio vector, v = [v_1, . . . , v_N]^T ∈ ℝ^(N×1), each element of v is defined as











v_i = log( Pr(y_i | x_i = 0) / Pr(y_i | x_i = 1) ), ∀ i ∈ ℐ   (2)







where Pr(·) represents a conditional probability.
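For the additive Gaussian channel considered in this embodiment, and assuming BPSK mapping (bit 0 → +1, bit 1 → −1), the log likelihood ratio of formula (2) reduces to a simple closed form; the following sketch illustrates this under those modulation and noise assumptions, which are not spelled out in the text.

```python
import numpy as np

def llr_awgn_bpsk(y, noise_var):
    """LLR of formula (2) for BPSK (bit 0 -> +1, bit 1 -> -1) over an AWGN channel.

    With Gaussian likelihoods, log(Pr(y_i | x_i = 0) / Pr(y_i | x_i = 1)) = 2*y_i / sigma^2.
    """
    return 2.0 * y / noise_var

# illustrative received samples and noise variance
v = llr_awgn_bpsk(np.array([0.9, -1.2, 0.3]), noise_var=0.5)
```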


In step 2), the parity check constraint in the maximum likelihood decoding problem is converted into a cascaded form which is easy to process, a 0/1 discrete constraint is converted into a continuous constraint in [0,1] interval, and a penalty term is added to the objective function to suppress a pseudo-codeword to obtain a decoding optimization problem with the penalty term.


The key to simplifying the parity check constraint is to convert a parity check constraint of a high dimension into a finite number of three-variable parity check constraints defined as Tx ≤ t, x ∈ {0,1}^3, where










x = [x_1, x_2, x_3]^T,  t = [0, 0, 0, 2]^T,
T = [  1  -1  -1
      -1   1  -1
      -1  -1   1
       1   1   1 ]   (3)







d_j−3 auxiliary variables x̃ are introduced to a parity check constraint with a dimension d_j to form d_j−2 parity check equations, where d_j represents the number of dimensions corresponding to a jth check node.







Γ_a = Σ_{j=1}^{M} (d_j − 3) auxiliary variables and Γ_c = Σ_{j=1}^{M} (d_j − 2) three-variable parity check equations are provided for a constraint [ Σ_{i=1}^{N} H_ji x_i ]_2 = 0, ∀ j ∈ 𝒥.







Therefore, the maximum likelihood decoding problem (1) is converted into an equivalent integer linear programming problem including an objective function (4a), an inequality constraint (4b) and a discrete constraint (4c):










min_u q^T u   (4a)
s.t. Au − b ≤ 0   (4b)
u ∈ {0,1}^((N+Γ_a)×1)   (4c)







where u = [x^T, x̃^T]^T ∈ {0,1}^((N+Γ_a)×1), x̃ is the introduced auxiliary variable, b = 1_(Γ_c×1) ⊗ t, q = [v; 0_(Γ_a×1)], A = [TQ_1; . . . ; TQ_τ; . . . ; TQ_Γc] ∈ {0, ±1}^(4Γ_c×(N+Γ_a)); Q_τ selects the elements of the u vector corresponding to a τth three-variable parity check equation, Q_τ ∈ {0,1}^(3×(N+Γ_a)).
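The decomposition of a high-degree parity check into three-variable checks and the assembly of A and b from the T and t of formula (3) can be sketched as follows; the particular chaining order of the auxiliary variables is one possible choice and is an assumption, since the text does not fix it.

```python
import numpy as np

T = np.array([[ 1, -1, -1],
              [-1,  1, -1],
              [-1, -1,  1],
              [ 1,  1,  1]])
t = np.array([0, 0, 0, 2])

def cascade(H):
    """Sketch: split each parity check of H into three-variable checks and stack Tx <= t blocks."""
    M, N = H.shape
    supports = [list(np.flatnonzero(H[j])) for j in range(M)]
    gamma_a = sum(max(len(s) - 3, 0) for s in supports)   # number of auxiliary variables
    triples = []                                          # index triples into the u vector
    aux = N                                               # next free auxiliary-variable column
    for s in supports:
        if len(s) == 3:
            triples.append(s)
            continue
        prev = s[0]
        for var in s[1:-2]:               # chain the check through auxiliary variables
            triples.append([prev, var, aux])
            prev, aux = aux, aux + 1
        triples.append([prev, s[-2], s[-1]])
    A = np.zeros((4 * len(triples), N + gamma_a))
    for r, (i1, i2, i3) in enumerate(triples):
        A[4 * r: 4 * r + 4, [i1, i2, i3]] = T   # one Tx <= t block per three-variable check
    b = np.tile(t, len(triples))                # b = 1 (kron) t
    return A, b, gamma_a
```

The vector q is then obtained by appending Γ_a zeros to the log likelihood ratio vector v, i.e. q = [v; 0_(Γ_a×1)].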


Further, the inequality constraint (4b) is converted into an equality constraint (5b) by introducing an auxiliary variable z to obtain an equivalent optimization problem including an objective function (5a), the equality constraint (5b) and a discrete constraint (5c):










min_{u,z} q^T u   (5a)
s.t. Au + z = b   (5b)
u ∈ {0,1}^((N+Γ_a)×1), z ∈ ℝ_+^(4Γ_c×1)   (5c)







Then, the discrete constraint (5c) is relaxed into u ∈ [0,1]^((N+Γ_a)×1) by adding a penalty term to the objective function (5a) to obtain a decoding optimization problem with the penalty term represented by formulas (6a) to (6c):











min_{u,z} q^T u + Σ_i g(u_i)   (6a)
s.t. Au + z = b   (6b)
u ∈ [0,1]^((N+Γ_a)×1), z ∈ ℝ_+^(4Γ_c×1)   (6c)







where g(·): [0,1] → ℝ ∪ {±∞} represents the introduced penalty function, specifically the function g_ℓ2(u_i) = −(α/2)‖u_i − 0.5‖_2^2, where u_i represents an ith element in the u vector, and α represents a coefficient contained in the penalty function and is configured to control a slope of the penalty function.


In step 3), the ADMM is introduced to solve the decoding optimization problem with the penalty term, ADMM iterations are obtained according to a specific form of the penalty term, and a channel decoder based on the ADMM with the penalty term is obtained.


First, an augmented Lagrange equation (7) is obtained according to the decoding optimization problem (6) with the penalty term:










L_μ(u, z, y) = q^T u + Σ_i g(u_i) + y^T (Au + z − b) + (μ/2) ‖Au + z − b‖_2^2   (7)







where y represents a Lagrange multiplier, y ∈ ℝ^(4Γ_c×1), and μ represents a penalty parameter.


The ADMM is introduced and an iteration is performed as follows:










u^(k+1) = argmin_{u ∈ [0,1]^((N+Γ_a)×1)} L_μ(u, z^k, y^k)   (8a)
z^(k+1) = argmin_{z ∈ ℝ_+^(4Γ_c×1)} L_μ(u^(k+1), z, y^k)   (8b)
y^(k+1) = y^k + μ (Au^(k+1) + z^(k+1) − b)   (8c)







Since A^T A is a diagonal matrix due to mutual independence of column vectors of A, formula (8a) is resolved into N+Γ_a parallel subproblems (9a) and (9b), in which e_i denotes the ith diagonal element of A^T A and a_i denotes the ith column of A:











min_{u_i} (1/2) μ e_i u_i^2 + g(u_i) + ( q_i + a_i^T ( y^k + μ (z^k − b) ) ) u_i   (9a)
s.t. u_i ∈ [0,1], ∀ i.   (9b)







A derivative of (1/2) μ e_i u_i^2 + g(u_i) + ( q_i + a_i^T ( y^k + μ (z^k − b) ) ) u_i with respect to u_i is made equal to 0 to obtain a solution (10) of the subproblems (9a) and (9b):










u_i^(k+1) = Π_[0,1] ( ( q_i + a_i^T ( y^k + μ (z^k − b) ) + α/2 ) / ( α − μ e_i ) ).   (10)







A solution (11) of formula (8b) is obtained in a similar way:










z^(k+1) = Π_([0,+∞]^(4Γ_c)) ( b − Au^(k+1) − y^k / μ )   (11)







The above decoding method in steps 1) to 3) is referred to as the channel decoder based on the ADMM with the penalty term, abbreviated as an ADMM L2 decoder, where μ and α are preset coefficients.


In step 4), a deep learning network LADN is constructed according to the ADMM iterations, and a penalty coefficient and a coefficient contained in the penalty term are converted into network parameters.


Specifically, a deep learning network LADN is constructed according to the ADMM iterations, and the preset coefficients in the ADMM L2 decoder are converted into network parameters. As shown in FIG. 3, the data stream of the LADN network corresponds to the iterative process of the ADMM L2 decoder. The nodes in FIG. 3 correspond to the operations in the ADMM L2 decoder, and the directed edges represent the data streams between different operations. The deep learning network LADN is composed of K layers with the same structure, in which each of the K layers corresponds to one iteration in the ADMM L2 channel decoder. K is 50 for the LDPC code C1, or 70 for the LDPC code C2. Each of the K layers includes three nodes u, z and y. (u^(k), z^(k), y^(k)) represents the outputs of the three nodes of a kth layer. As shown in FIG. 4, each of the K layers of the deep learning network LADN is represented by formulas (12a) to (12c):










u^(k+1) = Π_[0,1] ( η ⊙ ( q + A^T ( y^(k) + μ (z^(k) − b) ) + α/2 ) )   (12a)
z^(k+1) = relu ( b − Au^(k+1) − y^(k) / μ )   (12b)
y^(k+1) = y^(k) + μ ( Au^(k+1) + z^(k+1) − b )   (12c)







where η ∈ ℝ^(N+Γ_a), η represents an output of a function η(A; α; μ) ≜ diag(1/(α − μ A^T A)), symbol ⊙ represents a Hadamard product, and relu(·) represents a classical activation function in the deep learning field defined as relu(x) = max(x, 0). As can be seen from this definition, the function relu(·) is completely equivalent to the mapping Π_[0,∞](·) in the solution (11). The structure of a single-layer network is shown in FIG. 4.


A decoding output of the deep learning network LADN is represented by a formula (13):

x̂ = Π_[0,1]([u_1^(K), . . . , u_N^(K)])  (13)


where x̂ represents a decoding result.


In step 5), the deep learning network is trained with training data offline and the network parameters are learned.


First, the signal-to-noise ratio (SNR) of the training data needs to be set. If the training SNR is set too high, there are very few decoding errors, and the network may fail to learn potential error patterns. However, if the SNR is too low, only a few transmitted codewords can be decoded correctly, which will prevent the proposed network from learning an efficient decoding mechanism. Through cross-validation, the training SNR is set to 2 dB, and 40,000 training samples and 1,000 validation samples are generated.


Specifically, an appropriate SNR (e.g., 2 dB) is selected through cross-validation to generate training samples, which constitute training data {v_p, x_p}_{p=1}^P, where v_p represents a feature, and x_p represents a label.
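One way to generate such training pairs at the selected SNR is sketched below, assuming BPSK modulation over an AWGN channel, an encoder ldpc_encode for the chosen code, and the interpretation of the 2 dB figure as Eb/N0; all of these are illustrative assumptions, since the text only fixes the SNR value and the sample counts.

```python
import numpy as np

def make_training_samples(num_samples, snr_db, ldpc_encode, n_info=48, rate=0.5):
    """Generate (v_p, x_p) pairs: LLR features and codeword labels at a given Eb/N0."""
    noise_var = 1.0 / (2.0 * rate * 10.0 ** (snr_db / 10.0))
    samples = []
    for _ in range(num_samples):
        info = np.random.randint(0, 2, n_info)       # random information bits
        x_p = ldpc_encode(info)                      # codeword label (encoder is assumed)
        s = 1.0 - 2.0 * x_p                          # BPSK: 0 -> +1, 1 -> -1
        y = s + np.sqrt(noise_var) * np.random.randn(x_p.size)
        v_p = 2.0 * y / noise_var                    # LLR feature, cf. formula (2)
        samples.append((v_p, x_p))
    return samples

# e.g. 40,000 training and 1,000 validation samples at 2 dB, as in the text:
# train_set = make_training_samples(40000, 2.0, ldpc_encode)
# valid_set = make_training_samples(1000, 2.0, ldpc_encode)
```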


A loss function (17) based on a mean squared error is provided.













ℒ(Θ) = (1/P) Σ_{p=1}^{P} ( σ ‖Au_p^(K) + z_p^(K) − b‖_2^2 + (1 − σ) ‖LADN(v_p; Θ) − x_p‖_2^2 )   (17)







where Θ represents network parameters contained in the deep learning network, and is denoted by {α, μ} in the deep learning network LADN, ‖Au_p^(K) + z_p^(K) − b‖_2^2 represents an unsupervised item, and ‖LADN(v_p; Θ) − x_p‖_2^2 represents a supervised item. The loss function is composed of the unsupervised item and the supervised item. σ is 0.3 for the LDPC code C1, or 0.9 for the LDPC code C2.


Training is performed using a decaying learning rate with an initial value of 0.001 and a decay rate of 0.5 until the loss function no longer decreases, to obtain the learned network parameters μ and α.


In step 6), the deep learning network is restored into the iterative channel decoder based on the ADMM method with the penalty term, the learned network parameters are loaded in the channel decoder based on the ADMM method with the penalty term, and online real-time channel decoding is performed.
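Step 6 therefore amounts to reading the trained α and μ out of the network and running the fixed ADMM L2 iterations with them online; a minimal sketch reusing the admm_l2_decode routine and the LADN module from the earlier sketches is shown below, with illustrative names.

```python
import numpy as np

# read the learned parameters out of the trained LADN sketch ...
alpha_learned = float(model.alpha.detach())
mu_learned = float(model.mu.detach())

# ... and run the conventional ADMM L2 decoder online with them
def decode_online(v, A, b, gamma_a, num_iter=200):
    q = np.concatenate([v, np.zeros(gamma_a)])               # q = [v; 0]
    u = admm_l2_decode(q, A, b, mu_learned, alpha_learned, num_iter)
    return (u[: v.size] > 0.5).astype(int)                   # hard decision, cf. formula (13)
```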


Embodiment 2

Considering an additive Gaussian channel, the code types considered are the [96, 48] MacKay 96.33.964 LDPC code C1 and the [128, 64] CCSDS LDPC code C2. According to a second embodiment of the present disclosure, a deep learning-based channel decoding method using the ADMM method proposed for this system includes steps 1) to 6) as follows:


In step 1), a maximum likelihood decoding problem is constructed. The maximum likelihood decoding problem includes an objective function and a parity check constraint.


Specifically, a maximum likelihood decoding problem represented by formula (1) is constructed based on channel decoding:











min_x v^T x
s.t. [ Σ_{i=1}^{N} H_ji x_i ]_2 = 0, ∀ j ∈ 𝒥
x ∈ {0,1}^(N×1), ∀ i ∈ ℐ   (1)







where N represents a length of the LDPC code C1 or C2, H represents an M×N parity check matrix and is configured to designate each codeword of the LDPC code, x represents a transmitted codeword, x = {x_i ∈ {0,1}, i ∈ ℐ};


[·]_2 represents a modulo-2 operation, ℐ and 𝒥 represent the variable node set and the check node set of the codeword, respectively, ℐ ≜ {1, . . . , N}, 𝒥 ≜ {1, . . . , M};


v represents a log likelihood ratio vector, v = [v_1, . . . , v_N]^T ∈ ℝ^(N×1), each element of v is defined as











v_i = log( Pr(y_i | x_i = 0) / Pr(y_i | x_i = 1) ), ∀ i ∈ ℐ   (2)







where Pr(·) represents a conditional probability.


In step 2), the parity check constraint in the maximum likelihood decoding problem is converted into a cascaded form which is easy to process, a 0/1 discrete constraint is converted into a continuous constraint in [0,1] interval, and a penalty term is added to the objective function to suppress a pseudo-codeword to obtain a decoding optimization problem with the penalty term.


The key to simplifying the parity check constraint is to convert a parity check constraint of a high dimension into a finite number of three-variable parity check constraints defined as Tx ≤ t, x ∈ {0,1}^3, where










x = [x_1, x_2, x_3]^T,  t = [0, 0, 0, 2]^T,
T = [  1  -1  -1
      -1   1  -1
      -1  -1   1
       1   1   1 ]   (3)







d_j−3 auxiliary variables x̃ are introduced to a parity check constraint with a dimension d_j to form d_j−2 parity check equations, where d_j represents the number of dimensions corresponding to a jth check node.







Γ_a = Σ_{j=1}^{M} (d_j − 3) auxiliary variables and Γ_c = Σ_{j=1}^{M} (d_j − 2) three-variable parity check equations are provided for a constraint [ Σ_{i=1}^{N} H_ji x_i ]_2 = 0, ∀ j ∈ 𝒥.







Therefore, the maximum likelihood decoding problem (1) is converted into an equivalent integer linear programming problem including an objective function (4a), an inequality constraint (4b) and a discrete constraint (4c):










min_u q^T u   (4a)
s.t. Au − b ≤ 0   (4b)
u ∈ {0,1}^((N+Γ_a)×1)   (4c)







where u = [x^T, x̃^T]^T ∈ {0,1}^((N+Γ_a)×1), x̃ is the introduced auxiliary variable, b = 1_(Γ_c×1) ⊗ t, q = [v; 0_(Γ_a×1)], A = [TQ_1; . . . ; TQ_τ; . . . ; TQ_Γc] ∈ {0, ±1}^(4Γ_c×(N+Γ_a)); Q_τ selects the elements of the u vector corresponding to a τth three-variable parity check equation, Q_τ ∈ {0,1}^(3×(N+Γ_a)).


Further, the inequality constraint (4b) is converted into an equality constraint (5b) by introducing an auxiliary variable z to obtain an equivalent optimization problem including an objective function (5a), the equality constraint (5b) and a discrete constraint (5c):










min_{u,z} q^T u   (5a)
s.t. Au + z = b   (5b)
u ∈ {0,1}^((N+Γ_a)×1), z ∈ ℝ_+^(4Γ_c×1)   (5c)







Then, the discrete constraint (5c) is relaxed into u ∈ [0,1]^((N+Γ_a)×1) by adding a penalty term to the objective function (5a) to obtain a decoding optimization problem with the penalty term represented by formulas (6a) to (6c):











min_{u,z} q^T u + Σ_i g(u_i)   (6a)
s.t. Au + z = b   (6b)
u ∈ [0,1]^((N+Γ_a)×1), z ∈ ℝ_+^(4Γ_c×1)   (6c)







where g(·): [0,1] → ℝ ∪ {±∞} represents the introduced penalty function, specifically the function g_ℓ2(u_i) = −(α/2)‖u_i − 0.5‖_2^2, where u_i represents an ith element in the u vector, and α represents a coefficient contained in the penalty function and is configured to control a slope of the penalty function.


In step 3), the ADMM is introduced to solve the decoding optimization problem with the penalty term, ADMM iterations are obtained according to a specific form of the penalty term, and a channel decoder based on the ADMM with the penalty term is obtained.


First, an augmented Lagrange equation (7) is obtained according to the decoding optimization problem (6) with the penalty term:










L_μ(u, z, y) = q^T u + Σ_i g(u_i) + y^T (Au + z − b) + (μ/2) ‖Au + z − b‖_2^2   (7)







where y represents a Lagrange multiplier, y ∈ ℝ^(4Γ_c×1), and μ represents a penalty parameter.


The ADMM is introduced and an iteration is performed as follows:










u^(k+1) = argmin_{u ∈ [0,1]^((N+Γ_a)×1)} L_μ(u, z^k, y^k)   (8a)
z^(k+1) = argmin_{z ∈ ℝ_+^(4Γ_c×1)} L_μ(u^(k+1), z, y^k)   (8b)
y^(k+1) = y^k + μ (Au^(k+1) + z^(k+1) − b)   (8c)







Since A^T A is a diagonal matrix due to mutual independence of column vectors of A, formula (8a) is resolved into N+Γ_a parallel subproblems (9a) and (9b), in which e_i denotes the ith diagonal element of A^T A and a_i denotes the ith column of A:











min_{u_i} (1/2) μ e_i u_i^2 + g(u_i) + ( q_i + a_i^T ( y^k + μ (z^k − b) ) ) u_i   (9a)
s.t. u_i ∈ [0,1], ∀ i.   (9b)







A derivative of (1/2) μ e_i u_i^2 + g(u_i) + ( q_i + a_i^T ( y^k + μ (z^k − b) ) ) u_i with respect to u_i is made equal to 0 to obtain a solution (10) of the subproblems (9a) and (9b):










u_i^(k+1) = Π_[0,1] ( ( q_i + a_i^T ( y^k + μ (z^k − b) ) + α/2 ) / ( α − μ e_i ) ).   (10)







A solution (11) of formula (8b) is obtained in a similar way:










z^(k+1) = Π_([0,+∞]^(4Γ_c)) ( b − Au^(k+1) − y^k / μ )   (11)







The above decoding method in steps 1) to 3) is referred to as the channel decoder based on the ADMM with the penalty term, abbreviated as an ADMM L2 decoder, where μ and α are preset coefficients.


In step 4), a deep learning network LADN-I is constructed according to the ADMM iterations, and a penalty coefficient and a coefficient contained in the penalty term are converted into network parameters.


Specifically, a deep learning network LADN-I is constructed according to the ADMM iterations, and the preset coefficients in the ADMM L2 decoder are converted into network parameters. As shown in FIG. 3, the data stream of the LADN-I network corresponds to the iterative process of the ADMM L2 decoder. The nodes in FIG. 3 correspond to the operations in the ADMM L2 decoder, and the directed edges represent the data streams between different operations. The deep learning network LADN-I is composed of K layers with the same structure, in which each of the K layers corresponds to one iteration in the ADMM L2 channel decoder. K is 50 for the LDPC code C1, or 70 for the LDPC code C2. Each of the K layers includes three nodes u, z and y. (u^(k), z^(k), y^(k)) represents the outputs of the three nodes of a kth layer. Since increasing the number of learnable parameters (or the network size) can improve the generalization ability of the neural network, and using different penalty parameters for different iterations can improve the convergence of the iterations so that the results do not depend on the choice of the initial penalty parameter, an independent parameter set μ = [μ_1, . . . , μ_K] is taken as the network parameters in the deep learning network LADN-I, and each of the K layers of the deep learning network LADN-I is represented by formulas (14a) to (14c):










u^(k+1) = Π_[0,1] ( η ⊙ ( q + A^T ( y^(k) + μ_k (z^(k) − b) ) + α/2 ) )   (14a)
z^(k+1) = relu ( b − Au^(k+1) − y^(k) / μ_k )   (14b)
y^(k+1) = y^(k) + μ_k ( Au^(k+1) + z^(k+1) − b )   (14c)







where η ∈ ℝ^(N+Γ_a), η represents an output of a function η(A; α; μ) ≜ diag(1/(α − μ A^T A)), symbol ⊙ represents a Hadamard product, and relu(·) represents a classical activation function in the deep learning field defined as relu(x) = max(x, 0). As can be seen from this definition, the function relu(·) is completely equivalent to the mapping Π_[0,∞](·) in the solution (11). The structure of a single-layer network is shown in FIG. 4.


A decoding output of the deep learning network LADN-I is represented by a formula (13):

x̂ = Π_[0,1]([u_1^(K), . . . , u_N^(K)])  (13)

where x̂ represents a decoding result.
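Relative to the LADN sketch given earlier, the only structural change in LADN-I is that μ becomes a learnable length-K vector applied layer by layer, as in formulas (14a) to (14c); the following PyTorch-style sketch illustrates this variant under the same illustrative conventions as before.

```python
import torch
import torch.nn as nn

class LADNI(nn.Module):
    """Sketch of LADN-I: per-layer parameters mu_k, formulas (14a)-(14c)."""

    def __init__(self, A, b, num_layers=50, alpha_init=1.0, mu_init=1.0):
        super().__init__()
        self.register_buffer("A", A)
        self.register_buffer("b", b)
        self.register_buffer("e", (A * A).sum(dim=0))   # diagonal of A^T A
        self.K = num_layers
        self.alpha = nn.Parameter(torch.tensor(alpha_init))
        self.mu = nn.Parameter(torch.full((num_layers,), mu_init))   # mu_1, ..., mu_K

    def forward(self, q):
        u = q.new_full((self.A.shape[1],), 0.5)
        z = q.new_zeros(self.A.shape[0])
        y = q.new_zeros(self.A.shape[0])
        for k in range(self.K):
            mu_k = self.mu[k]
            eta = 1.0 / (self.alpha - mu_k * self.e)
            c = q + self.A.t() @ (y + mu_k * (z - self.b))
            u = torch.clamp(eta * (c + self.alpha / 2.0), 0.0, 1.0)   # (14a)
            z = torch.relu(self.b - self.A @ u - y / mu_k)            # (14b)
            y = y + mu_k * (self.A @ u + z - self.b)                  # (14c)
        return u, z
```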


In step 5), the deep learning network is trained with training data offline and the network parameters are learned.


First, the SNR of the training data needs to be set. If the training SNR is set too high, there are very few decoding errors, and the network may fail to learn potential error patterns. However, if the SNR is too low, only a few transmitted codewords can be decoded correctly, which will prevent the proposed network from learning an efficient decoding mechanism. Through cross-validation, the training SNR is set to 2 dB, and 40,000 training samples and 1,000 validation samples are generated.


Specifically, an appropriate signal-to-noise ratio (e.g., 2 dB) is selected through cross-validation to generate training samples, which constitute training data {v_p, x_p}_{p=1}^P, where v_p represents a feature, and x_p represents a label.


A loss function (17) based on a mean squared error is provided:













ℒ(Θ) = (1/P) Σ_{p=1}^{P} ( σ ‖Au_p^(K) + z_p^(K) − b‖_2^2 + (1 − σ) ‖LADN-I(v_p; Θ) − x_p‖_2^2 )   (17)







where Θ represents network parameters contained in the deep learning network, and is denoted by {α, μ} with μ = [μ_1, . . . , μ_K] in the deep learning network LADN-I, ‖Au_p^(K) + z_p^(K) − b‖_2^2 represents an unsupervised item, and ‖LADN-I(v_p; Θ) − x_p‖_2^2 represents a supervised item. The loss function is composed of the unsupervised item and the supervised item. σ is 0.3 for the LDPC code C1, or 0.9 for the LDPC code C2.


Training is performed using a decaying learning rate with an initial value of 0.001 and a decay rate of 0.5 until the loss function no longer decreases, to obtain the learned network parameters μ and α.


In step 6), the deep learning network is restored into the iterative channel decoder based on the ADMM with the penalty term, the learned network parameters are loaded in the channel decoder based on the ADMM method with the penalty term, and online real-time channel decoding is performed.



FIG. 5 is a block error ratio (BLER) graph of a BP decoder, an ADMM L2 decoder, a deep learning network LADN and a deep learning network LADN-I in a Rayleigh channel environment. As can be seen from FIG. 5, for both code types, the deep learning network LADN achieves the best decoding performance under a high signal-to-noise ratio, and the deep learning network LADN-I achieves the best decoding performance under a low signal-to-noise ratio.


According to an embodiment of the present disclosure, a channel decoding device is provided. The channel decoding device includes a processor; and a memory for storing instructions executable by the processor. The processor is configured to: construct a maximum likelihood decoding problem including an objective function and a parity check constraint; convert the parity check constraint in the maximum likelihood decoding problem into a cascaded form, convert a discrete constraint into a continuous constraint, and add a penalty term to the objective function to obtain a decoding optimization problem with the penalty term; introduce an ADMM, obtain ADMM iterations according to a specific form of the penalty term, and obtain a channel decoder based on the ADMM with the penalty term; construct a deep learning network according to the ADMM iterations, and convert a penalty coefficient and a coefficient contained in the penalty term into network parameters; train the deep learning network with training data offline and learn the network parameters; and load the learned network parameters in the channel decoder based on the ADMM with the penalty term, and perform online real-time channel decoding.


In some embodiments, the processor is configured to:


construct a maximum likelihood decoding problem represented by formula (1):











min_x v^T x
s.t. [ Σ_{i=1}^{N} H_ji x_i ]_2 = 0, ∀ j ∈ 𝒥
x ∈ {0,1}^(N×1), ∀ i ∈ ℐ   (1)







where N represents a length of a binary linear code 𝒞, H represents an M×N parity check matrix and is configured to designate each codeword, x represents a transmitted codeword, x = {x_i ∈ {0,1}, i ∈ ℐ};


[·]_2 represents a modulo-2 operation, ℐ and 𝒥 represent the variable node set and the check node set of the binary linear code 𝒞, respectively, ℐ ≜ {1, . . . , N}, 𝒥 ≜ {1, . . . , M}; v represents a log likelihood ratio vector, v = [v_1, . . . , v_N]^T ∈ ℝ^(N×1), each element of v is defined as











v_i = log( Pr(y_i | x_i = 0) / Pr(y_i | x_i = 1) ), ∀ i ∈ ℐ   (2)







where Pr(·) represents a conditional probability.


In some embodiments, the processor is configured to:


convert a parity check constraint of a high dimension into a finite number of three-variable parity check constraints defined as Tx ≤ t, x ∈ {0,1}^3, where










x = [x_1, x_2, x_3]^T,  t = [0, 0, 0, 2]^T,
T = [  1  -1  -1
      -1   1  -1
      -1  -1   1
       1   1   1 ]   (3)







introduce d_j−3 auxiliary variables x̃ to a parity check constraint with a dimension d_j to form d_j−2 parity check equations, where d_j represents the number of dimensions corresponding to a jth check node; provide







Γ_a = Σ_{j=1}^{M} (d_j − 3) auxiliary variables and Γ_c = Σ_{j=1}^{M} (d_j − 2) three-variable parity check equations for a constraint [ Σ_{i=1}^{N} H_ji x_i ]_2 = 0, ∀ j ∈ 𝒥,





and convert the maximum likelihood decoding problem (1) into an equivalent integer linear programming problem including an objective function (4a), an inequality constraint (4b) and a discrete constraint (4c):










min_u q^T u   (4a)
s.t. Au − b ≤ 0   (4b)
u ∈ {0,1}^((N+Γ_a)×1)   (4c)







where u = [x^T, x̃^T]^T ∈ {0,1}^((N+Γ_a)×1), x̃ is the introduced auxiliary variable, b = 1_(Γ_c×1) ⊗ t, q = [v; 0_(Γ_a×1)], A = [TQ_1; . . . ; TQ_τ; . . . ; TQ_Γc] ∈ {0, ±1}^(4Γ_c×(N+Γ_a)); Q_τ selects the elements of the u vector corresponding to a τth three-variable parity check equation, Q_τ ∈ {0,1}^(3×(N+Γ_a));


convert the inequality constraint (4b) into an equality constraint (5b) by introducing an auxiliary variable z to obtain an equivalent optimization problem including an objective function (5a), the equality constraint (5b) and a discrete constraint (5c):










min_{u,z} q^T u   (5a)
s.t. Au + z = b   (5b)
u ∈ {0,1}^((N+Γ_a)×1), z ∈ ℝ_+^(4Γ_c×1)   (5c)







relax the discrete constraint (5c) into u ∈ [0,1]^((N+Γ_a)×1) by adding a penalty term to the objective function (5a) to obtain a decoding optimization problem with the penalty term represented by formulas (6a) to (6c):











min_{u,z} q^T u + Σ_i g(u_i)   (6a)
s.t. Au + z = b   (6b)
u ∈ [0,1]^((N+Γ_a)×1), z ∈ ℝ_+^(4Γ_c×1)   (6c)







where g(·): [0,1] → ℝ ∪ {±∞} represents a penalty function, specifically the function g_ℓ2(u_i) = −(α/2)‖u_i − 0.5‖_2^2, where u_i represents an ith element in the u vector, and α represents a coefficient contained in the penalty function for controlling a slope of the penalty function.


In some embodiments, the processor is configured to:


obtain an augmented Lagrange equation (7) according to the decoding optimization problem with the penalty term:










L_μ(u, z, y) = q^T u + Σ_i g(u_i) + y^T (Au + z − b) + (μ/2) ‖Au + z − b‖_2^2   (7)









    • where y represents a Lagrange multiplier, y ∈ ℝ^(4Γ_c×1), and μ represents a penalty parameter;

    • introduce the ADMM method, and perform an iteration as follows:













u^(k+1) = argmin_{u ∈ [0,1]^((N+Γ_a)×1)} L_μ(u, z^k, y^k)   (8a)
z^(k+1) = argmin_{z ∈ ℝ_+^(4Γ_c×1)} L_μ(u^(k+1), z, y^k)   (8b)
y^(k+1) = y^k + μ (Au^(k+1) + z^(k+1) − b)   (8c)







where A^T A is a diagonal matrix due to mutual independence of column vectors of A, resolving formula (8a) into N+Γ_a parallel subproblems (9a) and (9b), in which e_i denotes the ith diagonal element of A^T A and a_i denotes the ith column of A:











min_{u_i} (1/2) μ e_i u_i^2 + g(u_i) + ( q_i + a_i^T ( y^k + μ (z^k − b) ) ) u_i   (9a)
s.t. u_i ∈ [0,1], ∀ i   (9b)







make a derivative of (1/2) μ e_i u_i^2 + g(u_i) + ( q_i + a_i^T ( y^k + μ (z^k − b) ) ) u_i with respect to u_i equal to 0 to obtain a solution (10) of the subproblems (9a) and (9b):










u_i^(k+1) = Π_[0,1] ( ( q_i + a_i^T ( y^k + μ (z^k − b) ) + α/2 ) / ( α − μ e_i ) )   (10)







obtain a solution (11) of formula (8b) in a similar way:










z^(k+1) = Π_([0,+∞]^(4Γ_c)) ( b − Au^(k+1) − y^k / μ )   (11)







so as to obtain the channel decoder based on the ADMM with the penalty term, abbreviated as an ADMM L2 decoder, where μ and α are preset coefficients.


In some embodiments, the processor is configured to:

    • construct a deep learning network LADN according to the ADMM iterations, and convert the preset coefficients in the ADMM L2 decoder into network parameters, wherein the deep learning network LADN is composed of K layers with the same structure, each of the K layers corresponds to one iteration in the ADMM L2 decoder and includes three nodes u, z and y, (u^(k), z^(k), y^(k)) represents the outputs of the three nodes of a kth layer, and each of the K layers of the deep learning network LADN is represented by formulas (12a) to (12c):










u^(k+1) = Π_[0,1] ( η ⊙ ( q + A^T ( y^(k) + μ (z^(k) − b) ) + α/2 ) )   (12a)
z^(k+1) = relu ( b − Au^(k+1) − y^(k) / μ )   (12b)
y^(k+1) = y^(k) + μ ( Au^(k+1) + z^(k+1) − b )   (12c)







where η ∈ ℝ^(N+Γ_a), η represents an output of a function η(A; α; μ) ≜ diag(1/(α − μ A^T A)), symbol ⊙ represents a Hadamard product, relu(·) represents an activation function defined as relu(x) = max(x, 0);

    • a decoding output of the deep learning network LADN is represented by a formula (13):

      x̂ = Π_[0,1]([u_1^(K), . . . , u_N^(K)])  (13)
    • where x̂ represents a decoding result.
    • In some embodiments, K is 50 or 70.


In some embodiments, a penalty coefficient in the deep learning network LADN is converted into an independent parameter set μ = [μ_1, . . . , μ_K] as the network parameters to obtain a deep learning network LADN-I with layers each represented by formulas (14a) to (14c):










u^(k+1) = Π_[0,1] ( η ⊙ ( q + A^T ( y^(k) + μ_k (z^(k) − b) ) + α/2 ) )   (14a)
z^(k+1) = relu ( b − Au^(k+1) − y^(k) / μ_k )   (14b)
y^(k+1) = y^(k) + μ_k ( Au^(k+1) + z^(k+1) − b )   (14c)







In some embodiments, the training data is generated by selecting a predetermined signal-to-noise ratio through cross-validation to generate training samples, which constitute the training data {v_p, x_p}_{p=1}^P, where v_p represents a feature, and x_p represents a label.


The processor is configured to:

    • provide a loss function (17) based on a mean squared error:













ℒ(Θ) = (1/P) Σ_{p=1}^{P} ( σ ‖Au_p^(K) + z_p^(K) − b‖_2^2 + (1 − σ) ‖LADN(v_p; Θ) − x_p‖_2^2 )   (17)







where Θ represents network parameters contained in the deep learning network, and is denoted by {α, μ} in the deep learning network LADN, or by {α, μ} with μ = [μ_1, . . . , μ_K] in the deep learning network LADN-I, ‖Au_p^(K) + z_p^(K) − b‖_2^2 represents an unsupervised item, and ‖LADN(v_p; Θ) − x_p‖_2^2 represents a supervised item;

    • perform training using a decaying learning rate with an initial value of λ and a decay rate of 0.5 until the loss function no longer decreases, to obtain the learned network parameters μ and α.


In some embodiments, the predetermined signal-to-noise ratio is 2 dB, and 40,000 training samples and 1,000 validation samples are generated to constitute the training data.


In some embodiments, σ is 0.3 or 0.9.


In some embodiments, the initial value λ is 0.001.


With respect to the devices in the above embodiments, the specific manners for performing operations for individual modules therein have been described in detail in the embodiments regarding the channel decoding method, which will not be elaborated herein.


According to an embodiment of the present disclosure, a non-transitory computer-readable storage medium is provided. The non-transitory computer-readable storage medium has stored therein instructions that, when executed by a processor, cause the processor to perform the channel decoding method according to the abovementioned embodiment of the present disclosure.


It should be noted that, although the present disclosure has been described with reference to the embodiments, it will be appreciated by those skilled in the art that the disclosure includes other examples that occur to those skilled in the art to execute the disclosure. Therefore, the present disclosure is not limited to the embodiments.


It will be understood that, the flow chart or any process or method described herein in other manners may represent a module, segment, or portion of code that comprises one or more executable instructions to implement the specified logic function(s) or that comprises one or more executable instructions of the steps of the progress. Although the flow chart shows a specific order of execution, it is understood that the order of execution may differ from that which is depicted. For example, the order of execution of two or more boxes may be scrambled relative to the order shown. Also, two or more boxes shown in succession in the flow chart may be executed concurrently or with partial concurrence. In addition, any number of counters, state variables, warning semaphores, or messages might be added to the logical flow described herein, for purposes of enhanced utility, accounting, performance measurement, or providing troubleshooting aids, etc. It is understood that all such variations are within the scope of the present disclosure. Also, the flow chart is relatively self-explanatory and is understood by those skilled in the art to the extent that software and/or hardware can be created by one with ordinary skill in the art to carry out the various logical functions as described herein.


The logic and steps described in the flow chart or in other manners, for example, a scheduling list of executable instructions to implement the specified logic function(s), can be embodied in any computer-readable medium for use by or in connection with an instruction execution system such as, for example, a processor in a computer system or other system. In this sense, the logic may comprise, for example, statements including instructions and declarations that can be fetched from the computer-readable medium and executed by the instruction execution system. In the context of the present disclosure, a “computer-readable medium” can be any medium that can contain, store, or maintain the logic described herein for use by or in connection with the instruction execution system.


The computer readable medium can comprise any one of many physical media such as, for example, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor media. More specific examples of a suitable computer-readable medium would include, but are not limited to, magnetic tapes, magnetic floppy diskettes, magnetic hard drives, or compact discs. Also, the computer-readable medium may be a random access memory (RAM) including, for example, static random access memory (SRAM) and dynamic random access memory (DRAM), or magnetic random access memory (MRAM). In addition, the computer-readable medium may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or other type of memory device.


Although the device, system, and method of the present disclosure are embodied in software or code executed by general purpose hardware as discussed above, as an alternative the device, system, and method may also be embodied in dedicated hardware or a combination of software/general purpose hardware and dedicated hardware. If embodied in dedicated hardware, the device or system can be implemented as a circuit or state machine that employs any one of or a combination of a number of technologies. These technologies may include, but are not limited to, discrete logic circuits having logic gates for implementing various logic functions upon an application of one or more data signals, application specific integrated circuits having appropriate logic gates, programmable gate arrays (PGA), field programmable gate arrays (FPGA), or other components, etc. Such technologies are generally well known by those skilled in the art and, consequently, are not described in detail herein.


It can be understood that all or part of the steps in the method of the above embodiments can be implemented by instructing related hardware via programs. The program may be stored in a computer-readable storage medium, and, when executed, the program performs one or a combination of the steps of the method.


In addition, each functional unit in the present disclosure may be integrated in one processing module, or each functional unit may exist as an independent unit, or two or more functional units may be integrated in one module. The integrated module can be embodied in hardware or software. If the integrated module is embodied in software and sold or used as an independent product, it can be stored in the computer-readable storage medium.


The computer readable storage medium may be, but is not limited to, read-only memories, magnetic disks, or optical disks.


Reference throughout this specification to “an embodiment,” “some embodiments,” “one embodiment”, “another example,” “an example,” “a specific example,” or “some examples,” means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Thus, the appearances of the phrases such as “in some embodiments,” “in one embodiment”, “in an embodiment”, “in another example,” “in an example,” “in a specific example,” or “in some examples,” in various places throughout this specification are not necessarily referring to the same embodiment or example of the present disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples.


Although explanatory embodiments have been shown and described, it would be appreciated by those skilled in the art that the above embodiments should not be construed to limit the present disclosure, and that changes, alternatives, and modifications can be made in the embodiments without departing from the spirit, principles, and scope of the present disclosure.

Claims
  • 1. A channel decoding method, comprising: constructing a maximum likelihood decoding problem including an objective function and a parity check constraint based on channel decoding;converting the parity check constraint in the maximum likelihood decoding problem into a cascaded form, converting a discrete constraint into a continuous constraint, and adding a penalty term to the objective function to obtain a decoding optimization problem with the penalty term;introducing an alternating direction method of multipliers (ADMM), obtaining ADMM iterations according to a specific form of the penalty term, and obtaining a channel decoder based on the ADMM with the penalty term, comprising:obtaining an augmented Lagrange equation (7) according to the decoding optimization problem with the penalty term:
  • 2. The method of claim 1, wherein constructing the maximum likelihood decoding problem comprising the objective function and the parity check constraint based on channel decoding specifically comprises: constructing a maximum likelihood decoding problem represented by formula (1) based on channel decoding:
  • 3. The method of claim 2, wherein converting the parity check constraint in the maximum likelihood decoding problem into the cascaded form, converting the discrete constraint into the continuous constraint, and adding the penalty term to the objective function to obtain the decoding optimization problem with the penalty term specifically comprise: converting a parity check constraint of a high dimension into a finite number of three-variable parity check constraints defined as Tx≤t,x∈{0,1}3, where
  • 4. The method of claim 1, wherein constructing the deep learning network according to the ADMM iterations, and converting the penalty coefficient and the coefficient contained in the penalty term into network parameters specifically comprise: constructing a deep learning network LADN according to the ADMM iterations, and converting the preset coefficients in the ADMM L2 decoder into network parameters, wherein the deep learning network LADN is composed of K layers with the same structure, each of the K layers corresponds to one iteration in the ADMM L2 decoder and includes three nodes u, z and y, (u(k),z(k),y(k)) represents an output of three nodes of a kth layer, and each of the K layers of the deep learning network LADN is represented by formulas (12a) to (12c):
  • 5. The method of claim 4, wherein a penalty coefficient in the deep learning network LADN is converted into an independent parameter set μ=[μ1, . . . ,μK] as the network parameters to obtain a deep learning network LADN-I with layers each represented by formulas (14a) to (14c):
  • 6. The method of claim 4, wherein the training data is generated by selecting a predetermined signal-to-noise ratio through cross-validation to generate training samples, which is constituted as {vp,xp}p=1P, where vp represents a feature, and xp represents a label; andtraining the deep learning network with training data offline and learning the network parameters specifically comprises:providing a loss function (17) based on a mean squared error:
  • 7. The method of claim 6, wherein the predetermined signal-to-noise ratio is 2 dB, and 40,000 training samples and 1,000 validation samples are generated to constitute the training data.
  • 8. The method of claim 6, wherein σ is 0.3 or 0.9.
  • 9. The method of claim 6, wherein the initial value λ is 0.001.
  • 10. The method of claim 4, wherein K is 50 or 70.
  • 11. A channel decoding device, comprising: a processor; anda memory for storing instructions executable by the processor;wherein the processor is configured to: construct a maximum likelihood decoding problem including an objective function and a parity check constraint based on channel decoding;convert the parity check constraint in the maximum likelihood decoding problem into a cascaded form, convert a discrete constraint into a continuous constraint, and add a penalty term to the objective function to obtain a decoding optimization problem with the penalty term;introduce an ADMM, obtain ADMM iterations according to a specific form of the penalty term, and obtain a channel decoder based on the ADMM with the penalty term;construct a deep learning network according to the ADMM iterations, and convert a penalty coefficient and a coefficient contained in the penalty term into network parameters;train the deep learning network with training data offline and learn the network parameters; andload the learned network parameters in the channel decoder based on the ADMM with the penalty term, and perform online real-time channel decoding,wherein the processor is further configured to: obtain an augmented Lagrange equation (7) according to the decoding optimization problem with the penalty term:
  • 12. The device of claim 11, wherein the processor is configured to: construct a maximum likelihood decoding problem represented by formula (1) based on channel decoding:
  • 13. The device of claim 12, wherein the processor is configured to: convert a parity check constraint of a high dimension into a finite number of three-variable parity check constraints defined as Tx≤t,x∈{0,1}3, where
  • 14. The device of claim 11, wherein the processor is configured to: construct a deep learning network LADN according to the ADMM iterations, and convert the preset coefficients in the ADMM L2 decoder into network parameters, wherein the deep learning network LADN is composed of K layers with the same structure, each of the K layers corresponds to one iteration in the channel decoder based on the ADMM and includes three nodes u, z and y, (u(k),z(k),y(k)) represents an output of three nodes of a kth layer, and each of the K layers of the deep learning network LADN is represented by formulas (12a) to (12c):
  • 15. The device of claim 14, wherein a penalty coefficient in the deep learning network LADN is converted into an independent parameter set μ=[μ1, . . . , μK] as the network parameters to obtain a deep learning network LADN-I with layers each represented by formulas (14a) to (14c):
  • 16. The device of claim 14, wherein the training data is generated by selecting a predetermined signal-to-noise ratio through cross-validation to generate training samples, which is constituted as {vp,xp}p=1P, where vp represents a feature, and xp represents a label; andthe processor is configured to: provide a loss function (17) based on a mean squared error:
  • 17. The device of claim 16, wherein the predetermined signal-to-noise ratio is 2 dB, and 40,000 training samples and 1,000 validation samples are generated to constitute the training data.
  • 18. The device of claim 16, wherein σ is 0.3 or 0.9.
  • 19. The device of claim 16, wherein the initial value λ is 0.001.
  • 20. The device of claim 14, wherein K is 50 or 70.
  • 21. A non-transitory computer-readable storage medium having stored therein instructions that, when executed by a processor, cause the processor to perform a channel decoding method, the method comprising: constructing a maximum likelihood decoding problem including an objective function and a parity check constraint based on channel decoding;converting the parity check constraint in the maximum likelihood decoding problem into a cascaded form, converting a discrete constraint into a continuous constraint, and adding a penalty term to the objective function to obtain a decoding optimization problem with the penalty term;introducing an ADMM, obtaining ADMM iterations according to a specific form of the penalty term, and obtaining a channel decoder based on the ADMM with the penalty term, comprising: obtaining an augmented Lagrange equation (7) according to the decoding optimization problem with the penalty term:
Priority Claims (1)
Number Date Country Kind
201911110905.3 Nov 2019 CN national
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is based on International Patent Application No. PCT/CN2020/128787, filed on Nov. 13, 2020, which claims priority to Chinese Patent Application No. 201911110905.3, filed on Nov. 14, 2019, the entire contents of which are incorporated herein by reference.

Foreign Referenced Citations (1)
Number Date Country
WO-2022086962 Apr 2022 WO
Non-Patent Literature Citations (2)
Entry
Guo et al. Efficient ADMM decoder for non-binary LDPC codes with codeword-independent performance. Aug. 2015, Journal of class files. vol. 14, No. 8 pp. 1-13 (Year: 2015).
WIPO, International Search Report for International Application No. PCT/CN2020/128787, dated Feb. 5, 2021.
Related Publications (1)
Number Date Country
20220182178 A1 Jun 2022 US
Continuations (1)
Number Date Country
Parent PCT/CN2020/128787 Nov 2020 US
Child 17677792 US