DECODING APPARATUS, DECODING METHOD AND PROGRAM

Information

  • Patent Application
  • Publication Number
    20240056102
  • Date Filed
    January 04, 2021
  • Date Published
    February 15, 2024
Abstract
A decoding device includes a memory and a processor configured to execute inputting a code word encoded by a polar code from an original message; decoding the original message from the code word based on a conditional probability expressed by a symmetric parameterization and having observation information as a condition; and outputting the decoded original message.
Description
TECHNICAL FIELD

The present invention relates to a technique for decoding information encoded by a polar code.


BACKGROUND ART

The polar code proposed by Arikan is a code that can implement characteristics asymptotically approaching the Shannon limit by using communication channel polarization, and is used for communication channel encoding in 5G, for example.


Further, as a method of decoding a code word encoded by a polar code, there are a successive cancellation decoding method, a successive cancellation list decoding method, and the like. The successive cancellation decoding method is a method of sequentially decoding bits of a message one by one. The successive cancellation list decoding method is a method in which, when the bits are decoded, L sequences (L is a list size) having high likelihood are set as survival paths, and only the most probable sequence is finally output as a decoding result.


CITATION LIST
Non-Patent Literature

Non-Patent Literature 1: I. Tal, A. Vardy, "List decoding of polar codes," IEEE Transactions on Information Theory, vol. 61, no. 5, May 2015.


Non-Patent Literature 2: E. Arikan, et al., "Source polarization," Proc. 2010 IEEE International Symposium on Information Theory, 2010, pp. 899-903.


SUMMARY OF INVENTION
Technical Problem

As a conventional technique of decoding a polar code, for example, there is a technique disclosed in Non-Patent Literature 1 and the like. However, in the conventional technique disclosed in Non-Patent Literature 1 and the like, there is a problem that the memory amount used in the decoding processing increases. When the used memory amount increases, for example, the calculation amount (calculation time) related to processing, such as copying states of candidates in list decoding, also increases.


The present invention has been made in view of the above points, and an object thereof is to provide a technique capable of reducing the memory amount in decoding of a polar code.


Solution to Problem

According to the disclosed technique, a decoding device is provided that includes: an input unit that inputs a code word encoded by a polar code from an original message; a computation unit that decodes the original message from the code word based on a conditional probability expressed by a symmetric parameterization and having observation information as a condition; and an output unit that outputs the decoded original message.


Advantageous Effects of Invention

According to the disclosed techniques, a technique capable of reducing a memory amount in decoding of a polar code is provided.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a configuration diagram illustrating a communication system according to an embodiment of the present invention.



FIG. 2 is a configuration diagram illustrating the communication system according to the embodiment of the present invention.


FIG. 3 is a configuration diagram of the decoding device.



FIG. 4 is a hardware configuration diagram of the decoding device.



FIG. 5 is a diagram illustrating Algorithm 1.



FIG. 6 is a diagram illustrating Algorithm 2.



FIG. 7 is a diagram illustrating Algorithm 3.



FIG. 8 is a diagram illustrating Algorithm 4.



FIG. 9 is a diagram illustrating Algorithm 5.



FIG. 10 is a diagram illustrating Algorithm 6.



FIG. 11 is a diagram illustrating Algorithm 7.



FIG. 12 is a diagram illustrating Algorithm 8.



FIG. 13 is a diagram illustrating Algorithm 9.



FIG. 14 is a diagram illustrating Algorithm 2.



FIG. 15 is a diagram illustrating Algorithm 2.



FIG. 16 is a diagram illustrating Algorithm 7-1.



FIG. 17 is a diagram illustrating Algorithm 7-2.



FIG. 18 is a diagram illustrating Algorithm 7-3.



FIG. 19 is a diagram illustrating Algorithm 7-4.



FIG. 20 is a diagram illustrating Algorithm 1.



FIG. 21 is a diagram illustrating Algorithm 6.



FIG. 22 is a diagram illustrating Algorithm 7.



FIG. 23 is a diagram illustrating Algorithm 4.



FIG. 24 is a diagram illustrating Algorithm 7.



FIG. 25 is a diagram illustrating Algorithm 7.



FIG. 26 is a diagram illustrating a table used for application to an erasure channel.



FIG. 27 is a diagram illustrating a table used for application to the erasure channel.



FIG. 28 is a diagram illustrating a table used for application to the erasure channel.



FIG. 29 is a diagram illustrating a table used for application to the erasure channel.



FIG. 30 is a diagram illustrating a table used for application to the erasure channel.



FIG. 31 is a diagram illustrating a table used for application to the erasure channel.



FIG. 32 is a diagram illustrating a table used for application to the erasure channel.



FIG. 33 is a diagram illustrating Algorithm 1.





DESCRIPTION OF EMBODIMENTS

Hereinafter, embodiments of the present invention (present embodiments) will be described with reference to the drawings. The embodiments described below are merely examples, and embodiments to which the present invention is applied are not limited to the following embodiments.


Note that, in the text of the main body of the present description, a hat "{circumflex over ( )}" will be added to the head of characters for convenience of description; "{circumflex over ( )}M" is an example. Furthermore, in the text of the main body of the present specification, in a case where a subscript B of a certain character A itself carries a superscript or subscript C, the subscript B is not represented by a subscript but is written using "_" as in A_BC.


System Configuration Example


FIG. 1 illustrates a configuration example of a communication system in the present embodiment. FIG. 1 illustrates an example in which the present invention is applied to a communication channel code (error correction code). As illustrated in FIG. 1, the communication system includes an encoding device 100 and a decoding device 200, which are connected by a communication channel. Note that the encoding device may be referred to as an encoder or the like. The decoding device may be referred to as a decoder or the like.


In the present embodiment, the encoding device 100 encodes an input message M with a polar code, and outputs a code word X as a communication channel input. The decoding device 200 receives Y, which is a communication channel output in which noise is mixed into the code word, performs polar code decoding processing, and outputs a reproduction message {circumflex over ( )}M. As will be described later, the decoding device 200 performs decoding by a successive cancellation decoding method or a successive cancellation list decoding method. The successive cancellation decoding method is a method of sequentially decoding bits of a message one by one. The successive cancellation list decoding method is a method in which, when the bits are decoded, L sequences (L is a list size) having high likelihood are set as survival paths, and only the maximum likelihood sequence is finally output as a decoding result. The processing (algorithm) of the decoding device 200 will be described in detail later.


The decoding algorithm in the present embodiment can uniformly handle decoding of a communication channel code and decoding of an information source code. FIG. 2 is a system configuration diagram in a case where the present invention is applied to an information source code (information compression code).


In the configuration illustrated in FIG. 2, it is assumed that the encoding device 100 and the decoding device 200 are connected by a noise-free communication channel or exchange information through a noise-free storage medium. X is input to the encoding device 100 as a message, and a code word is output from the encoding device 100 after being encoded by a polar code. The code word and auxiliary information Y correlated with X are input to the decoding device 200. The decoding device 200 performs polar code decoding processing and outputs a reproduction message {circumflex over ( )}M. The auxiliary information is, for example, information of a frame preceding a frame to be decoded in a video. Note that the auxiliary information Y may be input to the decoding device 200 from the outside, may be stored in the decoding device 200 in advance, or may be a past decoding result.



FIG. 3 is a functional configuration diagram of the decoding device 200. As illustrated in FIG. 3, the decoding device 200 includes an input unit 210, a computation unit 220, an output unit 230, and a data storage unit 240. The input unit 210 inputs information from a communication channel or a recording medium, or the like. The computation unit 220 executes an algorithm related to the decoding processing. The output unit 230 outputs a computation result (for example, a reproduction message) obtained by the computation unit 220. The data storage unit 240 corresponds to a memory referenced by an algorithm of the decoding processing executed by the computation unit 220. In addition, the data storage unit 240 stores known information and the like used for processing.


In the present embodiment, an “algorithm” may also be called a “processing procedure”, a “program”, “software”, or the like.


Hardware Configuration Example

The decoding device 200 according to the present embodiment can be implemented, for example, by causing a computer to execute a program in which processing contents described in the present embodiment are described. Note that the “computer” may be a physical machine or a virtual machine on a cloud. In a case where a virtual machine is used, “hardware” described herein is virtual hardware.


The above program can be stored and distributed by having it recorded on a computer-readable recording medium (portable memory or the like). Furthermore, the above program can also be provided through a network such as the Internet or e-mail.



FIG. 4 is a diagram illustrating a hardware configuration example of the computer. The computer in FIG. 4 includes a drive device 1000, an auxiliary storage device 1002, a memory device 1003, a CPU 1004, an interface device 1005, a display device 1006, an input device 1007, an output device 1008, and the like which are connected to each other by a bus B.


The program for implementing the processing in the computer is provided by, for example, a recording medium 1001 such as a CD-ROM or a memory card. When the recording medium 1001 storing the program is set in the drive device 1000, the program is installed from the recording medium 1001 to the auxiliary storage device 1002 via the drive device 1000. However, the program is not necessarily installed from the recording medium 1001, and may be downloaded from another computer via a network. The auxiliary storage device 1002 stores the installed program and also stores necessary files, data, and the like.


In a case where an instruction to start the program is made, the memory device 1003 reads and stores the program from the auxiliary storage device 1002. The CPU 1004 implements a function related to the decoding device 200 in accordance with a program stored in the memory device 1003. The interface device 1005 is used as an interface for connecting to the network. The display device 1006 displays a graphical user interface (GUI) or the like by the program. The input device 1007 includes a keyboard and mouse, buttons, a touch panel, or the like, and is used to input various operation instructions. The output device 1008 outputs a computation result. Note that the display device 1006 and the input device 1007 may not be provided.


Hereinafter, algorithms (corresponding to the above program) executed by the decoding device 200 will be described in detail. First, definitions and basic related techniques will be described, and then each algorithm will be described. In the following description, Reference Literature is indicated as [1], [2], and the like. The names of the Reference Literature are described at the end of the specification. Note that the polar information source code/communication channel code is introduced in Arikan [1], [2], and [3], and Arikan proposes a successive cancellation decoding method that can be implemented with a calculation amount of O(N log2 N) with respect to a block length N.


In the present embodiment, an algorithm of the successive cancellation decoding method and an algorithm for the successive cancellation list decoding method based on what is introduced in Tal and Vardy [9] will be described. As described above, the technique according to the present embodiment can be applied to both the polar information source code and the polar communication channel code. Furthermore, the decoding algorithm in the present embodiment can reduce the memory amount and the calculation amount as compared with the conventional technique such as [9].


Regarding Definitions and Notations

In the description of the algorithm of the present embodiment, the following definitions and notations will be used.


For a certain n, N=2^n indicates a block length. n is given as a constant, and all the algorithms of the present embodiment can refer to this n. In the present embodiment, the bit indexing introduced in [1] is used. The index of an N-dimensional vector can be expressed by a set of n-bit strings as X^N=(X_{0^n}, . . . , X_{1^n}). Here, 0^n and 1^n represent the n-bit all-0 sequence and the all-1 sequence, respectively. To represent integer intervals, the following notations are used.





[0^n:b^n] = \{0^n, \ldots, b^n\}

[0^n:b^n) = [0^n:b^n] \setminus \{b^n\}

[b^n:1^n] = [0^n:1^n] \setminus [0^n:b^n)

(b^n:1^n] = [0^n:1^n] \setminus [0^n:b^n]   [Math. 1]
For a certain subset I of [0^n:1^n], a subsequence of X^N is defined as follows.

X_I = \{X_{b^n}\}_{b^n \in I}   [Math. 2]
It is assumed that c^l b^k ∈ {0, 1}^{l+k} is the concatenation of b^k ∈ {0, 1}^k and c^l ∈ {0, 1}^l. For given b^k ∈ {0, 1}^k and c^l ∈ {0, 1}^l, the subsets c^l[0^k:b^k] and c^l[0^k:b^k) of {0, 1}^{k+l} can be defined as

    • c^l[0^k:b^k] = {c^l d^k : d^k ∈ [0^k:b^k]} and
    • c^l[0^k:b^k) = {c^l d^k : d^k ∈ [0^k:b^k)}.
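As a concrete illustration of these notations, the following sketch (informal Python, not part of the embodiment; all function names are hypothetical) represents n-bit strings as bit tuples and enumerates the interval and concatenation constructions through their integer values.

```python
def to_int(bits):
    """Integer value of a bit string, e.g. (0, 1, 1) -> 3."""
    v = 0
    for b in bits:
        v = (v << 1) | b
    return v

def to_bits(v, k):
    """k-bit string of the integer v, most significant bit first."""
    return tuple((v >> (k - 1 - i)) & 1 for i in range(k))

def interval(a, b, incl_left=True, incl_right=True):
    """Enumerate [a^k:b^k], [a^k:b^k), (a^k:b^k], or (a^k:b^k)."""
    k = len(a)
    lo = to_int(a) + (0 if incl_left else 1)
    hi = to_int(b) + (1 if incl_right else 0)
    return [to_bits(v, k) for v in range(lo, hi)]

def concat(c_l, b_k):
    """Concatenation c^l b^k in {0, 1}^(l+k)."""
    return tuple(c_l) + tuple(b_k)

# [00:10) = {00, 01}; c^1[0^1:1^1] with c^1 = (1,) is {10, 11}.
assert interval((0, 0), (1, 0), incl_right=False) == [(0, 0), (0, 1)]
assert [concat((1,), d) for d in interval((0,), (1,))] == [(1, 0), (1, 1)]
```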


Bipolar Binary Conversion of b ∈ {0, 1}

The bipolar binary conversion \bar{b} of b [Math. 3] is defined as

\bar{b} = \begin{cases} -1 & \text{if } b = 1 \\ +1 & \text{if } b = 0 \end{cases}   [Math. 4]
Binary Polar Code

Here, the binary polar information source/communication channel code introduced in the existing techniques [1], [2], [3], and [7] will be described. It is assumed that {0, 1} is a field of characteristic 2. For a given positive integer n, the polar conversion G is defined as follows.

G = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}^{\otimes n} \Pi_{BR}   [Math. 5]
Here, the symbol on the right shoulder of the matrix represents the n-fold Kronecker product, and Π_BR is the bit-reversal permutation matrix [1]. A vector u ∈ {0, 1}^N is then defined as u = xG for a given vector x ∈ {0, 1}^N. It is assumed that {I_0, I_1} is a partition of [0^n:1^n]; that is, I_0 ∩ I_1 = ∅ and I_0 ∪ I_1 = [0^n:1^n] are satisfied. {I_0, I_1} will be defined later.
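For reference, the polar conversion of [Math. 5] can be checked numerically as in the following sketch (illustrative Python assuming NumPy; the function name is hypothetical). Since G is an involution over GF(2), applying it twice recovers the input, which the last line verifies.

```python
import numpy as np

def polar_transform_matrix(n):
    """G = (1 0; 1 1) Kronecker-powered n times, columns permuted by the
    bit-reversal permutation Pi_BR."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)
    N = 1 << n
    bitrev = [int(format(j, '0%db' % n)[::-1], 2) for j in range(N)]
    return G[:, bitrev]

n = 3
G = polar_transform_matrix(n)
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=1 << n, dtype=np.uint8)
u = (x @ G) % 2                         # u = xG over GF(2)
assert np.array_equal((u @ G) % 2, x)   # G is its own inverse mod 2
```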


It is assumed that X = (X_{0^n}, . . . , X_{1^n}) and Y = (Y_{0^n}, . . . , Y_{1^n}) are random variables, and U = (U_{0^n}, . . . , U_{1^n}) is a random variable defined as U = XG.


Joint Distribution

The joint distribution P_{U_{I_1} U_{I_0} Y} of (U_{I_1}, U_{I_0}, Y) [Math. 6] is defined, using the joint distribution P_{XY} of (X, Y), as

P_{U_{I_1} U_{I_0} Y}(u_{I_1}, u_{I_0}, y) = P_{XY}((u_{I_1}, u_{I_0})G^{-1}, y)   [Math. 7]
In the above formula, the elements of (u_I1, u_I0) are sorted in the order of indices before the processing of G−1. u_I1 and u_I0 are referred to as a frozen bit and a non-frozen bit, respectively.


It is assumed that

P_{U_{b^n} | U_{[0^n:b^n)} Y}   [Math. 8]

is a conditional probability distribution defined as follows.

P_{U_{b^n} | U_{[0^n:b^n)} Y}\bigl(u_{b^n} \big| u_{[0^n:b^n)}, y\bigr) = \frac{\sum_{u_{(b^n:1^n]}} P_{U_{I_0} U_{I_1} Y}(u_{I_0}, u_{I_1}, y)}{\sum_{u_{[b^n:1^n]}} P_{U_{I_0} U_{I_1} Y}(u_{I_0}, u_{I_1}, y)}   [Math. 9]
For the vector u_I1 and the auxiliary information y ∈ Y^N, a successive cancellation (SC) decoder f that outputs {circumflex over ( )}u = f(u_I1, y) is recursively defined as

\hat{u}_{b^n} = \begin{cases} f_{b^n}(\hat{u}_{[0^n:b^n)}, y) & \text{if } b^n \in I_0 \\ u_{b^n} & \text{if } b^n \in I_1 \end{cases}   [Math. 10]
The set of functions {f_bn}_bn∈I0 used in the above formula is defined as

[Math. 11]

f_{b^n}(u_{[0^n:b^n)}, y) = \arg\max_{u \in \{0,1\}} P_{U_{b^n} | U_{[0^n:b^n)} Y}\bigl(u \big| u_{[0^n:b^n)}, y\bigr)
which is the maximum posterior probability discrimination rule after the observation (u_[0n:bn), y).


In the case of a polar information source code (with auxiliary information), x∈{0,1}N is an information source output, u_I1 is a code word, and y∈YN is an auxiliary information source output. The decoder reproduces the information source output f(u_I1, y)G−1 from code words u_I1 and y. The (block) decoding error probability is given as Prob(f(U_I1, Y)G−1≠X).


In the systematic polar communication channel code [3], I0′ and I1′ for a given (I0, I1) are defined as follows.

[Math. 12]

I_0' = \{b_0 b_1 \cdots b_{n-1} : b_{n-1} \cdots b_1 b_0 \in I_0\}   (1)

I_1' = \{b_0 b_1 \cdots b_{n-1} : b_{n-1} \cdots b_1 b_0 \in I_1\}   (2)
It is assumed that the encoder and the decoder share a vector u_I1. The encoder calculates (x_I1′, u_I0) satisfying (x_I1′, x_I0′)=(u_I1, u_I0)G−1 from a message x_I0′ and the shared vector u_I1. Here, the elements of (x_I1′, x_I0′) and (u_I1, u_I0) are sorted in the order of indices before the operation of G−1.


The encoder generates a communication channel input x = (x_I0′, x_I1′). Here, the elements of (x_I0′, x_I1′) are sorted in the order of indices. The decoder reproduces the communication channel input {circumflex over ( )}x = f(u_I1, y)G^{−1} from the communication channel output y ∈ Y^N and the shared vector u_I1. {circumflex over ( )}x_I0′ is the reproduction of the message. The (block) decoding error probability is also given as Prob(f(U_I1, Y)G^{−1} ≠ X).


In a non-systematic polar communication channel code, u_I0 is a message, and a vector u_I1 is shared by an encoder and a decoder. The encoder generates a communication channel input x ∈ {0,1}^N with x = (u_I1, u_I0)G^{−1}. Here, the elements of (u_I1, u_I0) are rearranged in the order of indices before G^{−1} is applied. The decoder generates a pair of vectors (u_I1, {circumflex over ( )}u_I0) = f(u_I1, y) from the communication channel output y ∈ Y^N and the shared vector u_I1. Then, {circumflex over ( )}u_I0 is the reproduction of the message. The (block) decoding error probability is given as Prob(f(U_I1, Y) ≠ (U_I0, U_I1)).


Lemmas will be described below.


Lemma 1 ([2, Theorem 2], [7, Theorem 4.10]): I_0 is defined as follows.

[Math. 13]

I_0 = \bigl\{ b^n \in [0^n:1^n] : Z\bigl(U_{b^n} \big| U_{[0^n:b^n)}, Y_{[0^n:1^n]}\bigr) \le 2^{-2^{n\beta}} \bigr\}
where Z is the function [2] that gives the Bhattacharyya parameter. For any β ∈ [0, ½),

[Math. 14]

\lim_{n \to \infty} \frac{|I_0|}{2^n} = 1 - H(X|Y)

\lim_{n \to \infty} \frac{|I_1|}{2^n} = H(X|Y)
are obtained.


There is the following lemma which can be proven as in the previous proof [1].


Lemma 2 ([6, Lemma 2], [8, Formula (1)], [7, Proposition 2.7]):

[Math. 15]

\mathrm{Prob}\bigl(f(U_{I_1}, Y)G^{-1} \ne X\bigr) = \mathrm{Prob}\bigl(f(U_{I_1}, Y) \ne (U_{I_0}, U_{I_1})\bigr)

\le \sum_{b^n \in I_0} \mathrm{Prob}\bigl(f_{b^n}(U_{[0^n:b^n)}, Y) \ne U_{b^n}\bigr)

\le \sum_{b^n \in I_0} Z\bigl(U_{b^n} \big| U_{[0^n:b^n)}, Y_{[0^n:1^n]}\bigr)
From the above lemmas, it follows that the rate of the polar code reaches the fundamental limit and that the decoding error probability approaches zero as n→∞. For example, I_0 can be obtained by using a technique introduced in [8]. In the following, it is assumed that some I_0 is given.


Symmetric Parameterization

Here, the polar conversion based on the symmetric parameterization used in the expression of the conditional probability in the present embodiment will be described. For a given probability distribution P_U of a binary random variable U, θ is defined as follows.





θ = P_U(0) − P_U(1)

Then,

[Math. 16]

P_U(u) = \frac{1 + \bar{u}\,\theta}{2}

is obtained.






Here, \bar{u} [Math. 17] is the bipolar binary conversion of u.


In the basic polar conversion, a pair of binary random variables (U0, U1) is converted as follows.

U_0' = U_0 \oplus U_1

U_1' = U_1   [Math. 18]


where





⊕  [Math. 19]


indicates addition in a field of characteristic 2. It is assumed that the random variables U0, U1 ∈ {0,1} are independent. For each i ∈ {0,1}, θ_i is defined as follows.

\theta_i = P_{U_i}(0) - P_{U_i}(1)   [Math. 20]


Here, the following equalities are obtained.






[

Math
.

21

]













P

U
0



=





P

U
0


(
0
)




P

U
1


(
0
)


+



P

U
0


(
1
)




P

U
1


(
1
)









=





1
+

θ
0


2

·


1
+

θ
1


2


+



1
-

θ
0


2

·


1
-

θ
1


2









=



1
+


θ
0



θ
1



2








(
3
)







The first equality is obtained from the definition of U0′ and the fact that U0 and U1 are independent of each other.


Furthermore, the following equalities are obtained.

[Math. 22]

P_{U_0'}(1) = 1 - P_{U_0'}(0) = \frac{1 - \theta_0\theta_1}{2}   (4)
From Formula (3) and Formula (4),

[Math. 23]

P_{U_0'}(u_0') = \frac{1 + \bar{u}_0'\,\theta_0\theta_1}{2}   (5)

is obtained.






Here, \bar{u}_0' [Math. 24] is the bipolar binary conversion of u_0'. θ_0' is defined as





[Math. 25]

\theta_0' = P_{U_0'}(0) - P_{U_0'}(1)   (6)


From Formula (3) to Formula (6),

\theta_0' = \theta_1\theta_0   (7)


is obtained.


Since the symmetric parameterization is a binary version of the Fourier transform of a probability distribution [5, Definitions 24 and 25], the right side of Formula (7) corresponds to the Fourier transform of the convolution.


Here,

[Math. 26]

P_{U_0' U_1'}(u_0', 0) = P_{U_0}(u_0')\,P_{U_1}(0) = \frac{1 + \bar{u}_0'\theta_0}{2}\cdot\frac{1+\theta_1}{2} = \frac{1 + \bar{u}_0'\theta_0 + \theta_1 + \bar{u}_0'\theta_0\theta_1}{4}   (8)
is obtained. The first equality is derived from the definition of U0′ and U1′ and the fact that U0 and U1 are independent of each other. Further,

[Math. 27]

P_{U_1' | U_0'}(0 \mid u_0') = \frac{P_{U_1' U_0'}(0, u_0')}{P_{U_0'}(u_0')} = \frac{1 + [\theta_1 + \bar{u}_0'\theta_0]/[1 + \bar{u}_0'\theta_0\theta_1]}{2}   (9)

P_{U_1' | U_0'}(1 \mid u_0') = 1 - P_{U_1' | U_0'}(0 \mid u_0') = \frac{1 - [\theta_1 + \bar{u}_0'\theta_0]/[1 + \bar{u}_0'\theta_0\theta_1]}{2}   (10)
is obtained.


θ_1' is defined as follows.

[Math. 28]

\theta_1' = P_{U_1' | U_0'}(0 \mid u_0') - P_{U_1' | U_0'}(1 \mid u_0')   (11)
From Formula (9) to Formula (11),

[Math. 29]

\theta_1' = \frac{\theta_1 + \bar{u}_0'\theta_0}{1 + \bar{u}_0'\theta_0\theta_1} = \frac{\theta_1 + \bar{u}_0'\theta_0}{1 + \bar{u}_0'\theta_0'}   (12)
is obtained. Here, the second equality is obtained from Formula (7).
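Formulas (7) and (12) can be verified numerically. The following sketch (illustrative Python, not part of the embodiment) builds two independent binary distributions from θ0 and θ1, applies the basic polar conversion, and compares the two formulas against probabilities computed directly from the definitions.

```python
import itertools

def bipolar(b):
    """Bipolar binary conversion of [Math. 4]: 0 -> +1, 1 -> -1."""
    return 1 - 2 * b

def check(theta0, theta1):
    p = [{0: (1 + t) / 2, 1: (1 - t) / 2} for t in (theta0, theta1)]
    # Joint distribution of (U0', U1') with U0' = U0 xor U1, U1' = U1.
    joint = {}
    for u0, u1 in itertools.product((0, 1), repeat=2):
        joint[(u0 ^ u1, u1)] = p[0][u0] * p[1][u1]
    # Formula (7): theta0' = theta0 * theta1.
    p_u0 = {u: joint[(u, 0)] + joint[(u, 1)] for u in (0, 1)}
    assert abs((p_u0[0] - p_u0[1]) - theta0 * theta1) < 1e-12
    # Formula (12): theta1' after observing the decoded value u0'.
    for u0p in (0, 1):
        cond = {u1: joint[(u0p, u1)] / p_u0[u0p] for u1 in (0, 1)}
        s = bipolar(u0p)
        rhs = (theta1 + s * theta0) / (1 + s * theta0 * theta1)
        assert abs((cond[0] - cond[1]) - rhs) < 1e-12

check(0.3, -0.6)
```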


Successive Cancellation Decoding

Next, an algorithm of successive cancellation decoding executed by the decoding device 200 according to the present embodiment will be described.


Note that the notation used in the algorithms illustrated in the drawings is standard. For example, "for A do B" means executing B on each element defined by A. "if C then D else E" means that D is executed in a case where the condition C is satisfied (in a case of true), and E is executed in a case where the condition is not satisfied.


The decoding device 200 performs decoding processing of the polar code by executing Algorithm 1 of successive cancellation decoding illustrated in FIG. 5. This algorithm is based on a technique introduced in [9]. Processing contents of updateΘ(Θ, U, n, bn) in line 3 of Algorithm 1 are illustrated in FIG. 6 as Algorithm 2. In addition, processing contents of updateU(U, n, bn−1) in line 9 of Algorithm 1 are illustrated in FIG. 7 as Algorithm 3.


Here, each element of Θ is the conditional probability of 0 and 1 expressed by a single parameter obtained by applying the symmetric parameterization; it expresses how far the conditional probability of the bit to be decoded deviates from ½. This Θ is updated by updateΘ(Θ, U, n, bn).
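In other words, the single parameter θ = P(0) − P(1) ∈ [−1, 1] carries the same information as the pair of conditional probabilities, and the conversion in both directions is immediate (illustrative Python):

```python
def theta_to_probs(theta):
    """theta = P(0) - P(1) back to the probability pair (P(0), P(1))."""
    return (1 + theta) / 2, (1 - theta) / 2

def probs_to_theta(p0, p1):
    return p0 - p1

p0, p1 = theta_to_probs(0.5)    # (0.75, 0.25): the bit leans toward 0
assert probs_to_theta(p0, p1) == 0.5
```

Storing one real number per node instead of a probability pair is consistent with the memory reduction over [9] discussed later for list decoding.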


As illustrated in line 7 of Algorithm 1, whether the value of the bit is 0 or 1 is determined by comparing Θ[n][0] and 0. Note that “0 or 1” in line 7 of Algorithm 1 indicates that the value of the bit may be either 0 or 1.


The computation unit 220 that executes Algorithms 1 to 3 can access (refer to) the number n of times of polar conversion, the frozen bits u^{(n)}_{I_1}, and the following memory space. The memory space corresponds to the data storage unit 240 of the decoding device 200. The successive cancellation decoding processing executed by Algorithms 1 to 3 will be described below.

[Math. 30]

\Theta = \{\Theta[k][c^{n-k}] : k \in \{0, \ldots, n\},\ c^{n-k} \in [0^{n-k}:1^{n-k}]\}

U = \{U[k][c^{n-k}][b] : k \in \{0, \ldots, n\},\ c^{n-k} \in [0^{n-k}:1^{n-k}],\ b \in \{0, 1\}\}
In the above formula, Θ[k][c^{n−k}] is a real variable, U[k][c^{n−k}][b] is a binary variable, and c^0 is a null character string. Note that Θ has \sum_{k=0}^{n} 2^{n-k} = 2^{n+1} - 1 variables, and U has 2\sum_{k=0}^{n} 2^{n-k} = 2^{n+2} - 2 variables.
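The triangular layout of Θ and U can be allocated directly; the following sketch (illustrative Python) indexes c^{n−k} by its integer value and checks the variable counts just stated.

```python
def alloc_memory(n):
    """Theta[k] holds one real per c^{n-k}; U[k] holds two bits per c^{n-k}."""
    Theta = [[0.0] * (1 << (n - k)) for k in range(n + 1)]
    U = [[[0, 0] for _ in range(1 << (n - k))] for k in range(n + 1)]
    return Theta, U

n = 3
Theta, U = alloc_memory(n)
assert sum(len(row) for row in Theta) == (1 << (n + 1)) - 1   # 2^{n+1} - 1
assert sum(2 * len(row) for row in U) == (1 << (n + 2)) - 2   # 2^{n+2} - 2
```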


In the following, \{U^{(0)}_{c^n}\}_{c^n \in [0^n:1^n]} [Math. 31] is a memoryless information source. That is,

P_{U^{(0)}_{[0^n:1^n]}}   [Math. 32]

is defined as

[Math. 33]

P_{U^{(0)}_{[0^n:1^n]}}\bigl(u^{(0)}_{[0^n:1^n]}\bigr) = \prod_{c^n \in [0^n:1^n]} P_{U^{(0)}_{c^n}}\bigl(u^{(0)}_{c^n}\bigr)
where

\{P_{U^{(0)}_{c^n}}\}_{c^n \in [0^n:1^n]}   [Math. 34]

is given depending on the context. In addition,

\{U^{(0)}_{c^n}\}_{c^n \in [0^n:1^n]}   [Math. 35]

is allowed to be non-stationary.


U^{(n)}_{b^n} is recursively defined (calculated) as

[Math. 36]

U^{(k)}_{c^{n-k}b^{k-1}0} = U^{(k-1)}_{c^{n-k}0b^{k-1}} \oplus U^{(k-1)}_{c^{n-k}1b^{k-1}}   (13)

U^{(k)}_{c^{n-k}b^{k-1}1} = U^{(k-1)}_{c^{n-k}1b^{k-1}}   (14)

for a certain b^n ∈ {0,1}^n and c^{n−k} ∈ {0,1}^{n−k}.


At this time,

U^{(n)}_{[0^n:1^n]} = U^{(0)}_{[0^n:1^n]}\,G   [Math. 37]

is obtained. This is a polar conversion of U^{(0)}_{[0^n:1^n]}. updateΘ(Θ, U, n, bn) in line 3 of Algorithm 1 starts from the following Formula (16), and

[Math. 38]

\theta^{(n)}_{b^n} = P_{U^{(n)}_{b^n}|U^{(n)}_{[0^n:b^n)}}\bigl(0 \big| u^{(n)}_{[0^n:b^n)}\bigr) - P_{U^{(n)}_{b^n}|U^{(n)}_{[0^n:b^n)}}\bigl(1 \big| u^{(n)}_{[0^n:b^n)}\bigr)   (15)

is calculated recursively.

[Math. 39]

\theta^{(0)}_{b^n} = P_{U^{(0)}_{b^n}}(0) - P_{U^{(0)}_{b^n}}(1)   (16)
The processing contents of updateΘ(Θ, U, n, bn) in line 3 of Algorithm 1 are as follows. In Algorithm 2 of FIG. 6, the parameters defined as

[Math. 40]

\theta^{(k)}_{c^{n-k}b^k} = P_{U^{(k)}_{c^{n-k}b^k}|U^{(k)}_{c^{n-k}[0^k:b^k)}}\bigl(0 \big| u^{(k)}_{c^{n-k}[0^k:b^k)}\bigr) - P_{U^{(k)}_{c^{n-k}b^k}|U^{(k)}_{c^{n-k}[0^k:b^k)}}\bigl(1 \big| u^{(k)}_{c^{n-k}[0^k:b^k)}\bigr)   (17)
are calculated for each c^{n−k} for a given b^n ∈ {0, 1}^n. By using Formulas (7), (12), (13), (14), and (17), the relations

[Math. 41]

\theta^{(k)}_{c^{n-k}b^{k-1}0} = \theta^{(k-1)}_{c^{n-k}1b^{k-1}}\,\theta^{(k-1)}_{c^{n-k}0b^{k-1}}   (18)

\theta^{(k)}_{c^{n-k}b^{k-1}1} = \frac{\theta^{(k-1)}_{c^{n-k}1b^{k-1}} + \bar{u}\,\theta^{(k-1)}_{c^{n-k}0b^{k-1}}}{1 + \bar{u}\,\theta^{(k)}_{c^{n-k}b^{k-1}0}}   (19)
can be obtained. Here, \bar{u} [Math. 42] is the bipolar binary conversion of u = u^{(k)}_{c^{n−k}b^{k−1}0}. The purpose of updateU(U, n, b^{n−1}) in line 9 of Algorithm 1 is to calculate u^{(k)}_{c^{n−k}b^{k−1}0} from u^{(n)}_{[0^n:b^{n−1}0]} using the relations

[Math. 43]

u^{(k-1)}_{c^{n-k}0b^{k-1}} = u^{(k)}_{c^{n-k}b^{k-1}0} \oplus u^{(k)}_{c^{n-k}b^{k-1}1}   (20)

u^{(k-1)}_{c^{n-k}1b^{k-1}} = u^{(k)}_{c^{n-k}b^{k-1}1}   (21)
Formulas (20) and (21) are obtained from Formulas (13) and (14). Here, it is assumed that u^{(n)}_{[0^n:b^{n−1}0]} has been correctly decoded.


Formulas (18) and (19) correspond to lines 5 and 7 of Algorithm 2, respectively, and formulas (20) and (21) correspond to lines 2 and 3 of Algorithm 3, respectively.


After applying lines 3 and 9 of Algorithm 1, respectively, the relations

[Math. 44]

\Theta[k][c^{n-k}] = \theta^{(k)}_{c^{n-k}b^k}   (22)

U[k][c^{n-k}][b_{k-1}] = u^{(k)}_{c^{n-k}b^k}   (23)

are obtained.


Further, line 7 of Algorithm 1 corresponds to the maximum posterior probability discrimination defined as follows.

[Math. 45]

\hat{u}_{b^n} = \arg\max_{u \in \{0,1\}} P_{U^{(n)}_{b^n}|U^{(n)}_{[0^n:b^n)}}\bigl(u \big| \hat{u}_{[0^n:b^n)}\bigr)
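To summarize the flow of Algorithms 1 to 3, the following is a compact successive-cancellation-decoding sketch in Python. It is illustrative only: the figures containing the algorithms are not reproduced here, so line correspondences are approximate; sc_decode, theta0, and frozen are hypothetical names; and generic θ values in (−1, 1) are assumed so that the denominator of Formula (19) never vanishes. Indices b^k and c^{n−k} are handled as integers, as described under "Implementation of Binary Representation of Index" below.

```python
def bipolar(b):
    """Bipolar binary conversion of [Math. 4]: 0 -> +1, 1 -> -1."""
    return 1 - 2 * b

def sc_decode(n, theta0, frozen):
    """theta0: list of 2**n reals, theta^{(0)}_{b^n} of Formula (16).
    frozen: dict mapping frozen positions b^n (as integers) to bits u_I1.
    Returns the level-n decisions and the level-0 reproduction (25)."""
    N = 1 << n
    Theta = [[0.0] * (1 << (n - k)) for k in range(n + 1)]
    U = [[[0, 0] for _ in range(1 << (n - k))] for k in range(n + 1)]
    Theta[0] = list(theta0)                       # initialisation

    def update_theta(k, b):                       # cf. Algorithm 2
        if k == 0:
            return
        if b % 2 == 0:                            # refresh level k-1 first
            update_theta(k - 1, b // 2)
        for c in range(1 << (n - k)):
            if b % 2 == 0:                        # Formula (18)
                Theta[k][c] = Theta[k - 1][2 * c + 1] * Theta[k - 1][2 * c]
            else:                                 # Formula (19)
                s = bipolar(U[k][c][0])
                Theta[k][c] = ((Theta[k - 1][2 * c + 1]
                                + s * Theta[k - 1][2 * c])
                               / (1 + s * Theta[k][c]))

    def update_u(k, b):                           # cf. Algorithm 3
        for c in range(1 << (n - k)):
            U[k - 1][2 * c][b % 2] = U[k][c][0] ^ U[k][c][1]   # Formula (20)
            U[k - 1][2 * c + 1][b % 2] = U[k][c][1]            # Formula (21)
        if b % 2 == 1 and k >= 2:
            update_u(k - 1, b // 2)

    u_hat = [0] * N
    for b in range(N):
        update_theta(n, b)
        if b in frozen:
            u_hat[b] = frozen[b]
        else:                                     # MAP rule, [Math. 45]
            u_hat[b] = 0 if Theta[n][0] >= 0 else 1
        U[n][0][b % 2] = u_hat[b]
        if b % 2 == 1:
            update_u(n, b // 2)
    x_hat = [U[0][c][0] for c in range(N)]        # Formula (25)
    return u_hat, x_hat
```

For instance, with theta0 built by Formula (24) or (26) and frozen taken from u_I1, sc_decode runs the decoder for the information source code or the communication channel code, respectively.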
Regarding Application of Algorithm 1 to Information Source Code

In a case where Algorithm 1 is used in a decoder of a polar information source code in which the code word u^{(n)}_{I_1} and the auxiliary information vector y_{[0^n:1^n]} are used as inputs,

[Math. 46]

P_{U^{(0)}_{c^n}}(x) = P_{X_{c^n}|Y_{c^n}}(x \mid y_{c^n}) \quad \text{for } x \in \{0, 1\}   (24)

is defined,





[Math. 47]

\hat{x}_{c^n} = U[0][c^n][b^{-1}]   (25)

is defined, and reproduction

\{\hat{x}_{c^n}\}_{c^n \in [0^n:1^n]}   [Math. 48]

is obtained. Here, b^{−1} is a null character string.


Regarding Application of Algorithm 1 to Systematic Polar Communication Channel Code

In a case where Algorithm 1 is used for a decoder of a systematic polar communication channel code in which the communication channel output vector y_[0n:1n] and the shared vector u(n)_I1 are used as inputs,

    • with respect to a given communication channel distribution \{P_{Y_{c^n}|X_{c^n}}\}_{c^n \in [0^n:1^n]} [Math. 49], an input distribution \{P_{X_{c^n}}\}_{c^n \in [0^n:1^n]}, x ∈ {0, 1} [Math. 50], and y_{c^n} ∈ Y,

[Math. 51]

P_{U^{(0)}_{c^n}}(x) = \frac{P_{Y_{c^n}|X_{c^n}}(y_{c^n} \mid x)\,P_{X_{c^n}}(x)}{\sum_{x' \in \{0,1\}} P_{Y_{c^n}|X_{c^n}}(y_{c^n} \mid x')\,P_{X_{c^n}}(x')}   (26)
is defined. Then, the reproduction {{circumflex over ( )}x_cn}_cn∈I0′ defined by Formula (25) is obtained. Here, I0′ is defined by Formula (1).


Regarding Application of Algorithm 1 to Non-Systematic Polar Communication Channel Code

In a case where Algorithm 1 is applied to a decoder of a non-systematic polar communication channel code, a series of binary variables {M[bn]}_bn∈I0 is prepared, and immediately after U[n][c0][bn−1] (line 7 of Algorithm 1) is updated,





M[b^n] ← U[n][c^0][b_{n−1}]


is inserted. Algorithm 1 with this modification is shown in FIG. 20.


As a result, reproduction {circumflex over ( )}u_I0 defined by {circumflex over ( )}u_bn=M[bn] can be obtained.


Remark 1: In a case where Algorithm 1 is applied to a binary erasure channel, the value that can be taken by Θ[k][c^{n−k}] is limited to {−1, 0, 1}, and

[Math. 52]

\Theta[0][b^n] \leftarrow P_{U^{(0)}_{b^n}}(0) - P_{U^{(0)}_{b^n}}(1)

which is the value set in line 1 of Algorithm 1, can be replaced by

[Math. 53]

\Theta[0][b^n] \leftarrow \begin{cases} 1 & \text{if } y_{b^n} = 0 \\ 0 & \text{if } y_{b^n} \text{ is the erasure symbol} \\ -1 & \text{if } y_{b^n} = 1 \end{cases}

for a given communication channel output y_{[0^n:1^n]}.
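As a small illustration of Remark 1, the initialization of Θ[0] for the binary erasure channel can be written as follows (illustrative Python; ERASURE is a hypothetical sentinel for the erasure symbol):

```python
ERASURE = None   # hypothetical sentinel for the erasure symbol

def init_theta_bec(y):
    """Map a channel output vector to Theta[0] per [Math. 53]."""
    return [1.0 if yb == 0 else (-1.0 if yb == 1 else 0.0) for yb in y]

assert init_theta_bec([0, ERASURE, 1, ERASURE]) == [1.0, 0.0, -1.0, 0.0]
```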


As described below, Algorithm 2 can be improved (modified versions of Algorithm 2 are illustrated in FIGS. 14 and 15). Furthermore, this technique can also be used with systematic communication channel codes, as introduced in [3]. There, for a given message vector x_I0′, y_[0n:1n] is defined as

[Math. 54]

y_{b^n} = \begin{cases} x_{b^n} & \text{if } b^n \in I_0' \\ \text{erasure} & \text{if } b^n \in I_1' \end{cases}
Successive Cancellation List Decoding

Next, an algorithm of successive cancellation list decoding executed by the decoding device 200 according to the present embodiment will be described. This algorithm is based on a technique introduced in [9].


In the technique according to the present embodiment, a fixed address memory space is used instead of using the stacking memory space in [9]. In the technique according to the present embodiment, since the size of the memory space for calculating the conditional probability is approximately half of that used in [9], the calculation amount of the algorithm according to the present embodiment is approximately half of that introduced in [9].


The decoding device 200 performs decoding processing of the polar code by executing Algorithm 4 of successive cancellation decoding illustrated in FIG. 8. Processing contents of updateΘ(Θ[λ], U[λ], n, bn) in line 5 of Algorithm 4 are illustrated in FIG. 6 as Algorithm 2. Processing contents of extendPath(bn, Λ) in line 7 of Algorithm 4 are illustrated in FIG. 9 as Algorithm 5. Processing contents of splitPath(bn, Λ) in line 10 of Algorithm 4 is illustrated in FIG. 10 as Algorithm 6. Processing contents of prunePath(bn, Λ) in line 13 of Algorithm 4 is illustrated in FIG. 11 as Algorithm 7. Processing contents of copyPath(λ′, λ−Λ) in line 25 of Algorithm 7 is illustrated in FIG. 12 as Algorithm 8. Processing contents of magnifyP(Λ) in line 17 of Algorithm 4 is illustrated in FIG. 13 as Algorithm 9. Processing contents of updateU(U[λ], n, bn−1) in line 19 of Algorithm 4 is illustrated in FIG. 7 as Algorithm 3.


In the calculation by the procedure of Algorithms 2 to 9, the computation unit 220 of the decoding device 200 shown in FIG. 3 accesses the number of times n of the polar conversion, the list size L, the frozen bits u^{(n)}_{I_1}, and the memory spaces {Θ[λ]}_{λ=0}^{L−1}, {U[λ]}_{λ=0}^{L−1}, {P[λ]}_{λ=0}^{2L−1}, and {Active[λ]}_{λ=0}^{2L−1}, and performs processing. The memory space corresponds to the data storage unit 240.


Note that Θ[λ] and U[λ] are calculated in Algorithms 2 and 3. {P[λ]}_{λ=0}^{2L−1} is a series of real variables, and {Active[λ]}_{λ=0}^{2L−1} is a series of binary variables.
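A sketch of this fixed-address layout, together with a copyPath-style state copy (cf. Algorithm 8), is as follows (illustrative Python; note that the copy writes into the existing slots rather than reallocating, matching the fixed-address design):

```python
def alloc_list_memory(n, L):
    """Per-path Theta/U states plus 2L slots for P and Active."""
    Theta = [[[0.0] * (1 << (n - k)) for k in range(n + 1)] for _ in range(L)]
    U = [[[[0, 0] for _ in range(1 << (n - k))] for k in range(n + 1)]
         for _ in range(L)]
    P = [0.0] * (2 * L)
    Active = [False] * (2 * L)
    return Theta, U, P, Active

def copy_path(Theta, U, dst, src):
    """copyPath(dst, src): copy the state of list src into slot dst."""
    for k in range(len(Theta[src])):
        Theta[dst][k][:] = Theta[src][k]
        for c in range(len(U[src][k])):
            U[dst][k][c][:] = U[src][k][c]
```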


After applying Algorithm 4, the calculation result is stored in {U[λ]}_{λ=0}^{L−1} and {P[λ]}_{λ=0}^{2L−1} so as to satisfy

[Math. 55]

\frac{P[\lambda]}{2^N} = \prod_{b^n \in [0^n:1^n]} P_{U^{(n)}_{b^n}|U^{(n)}_{[0^n:b^n)}}\bigl(\hat{u}^{(n)}_{b^n}(\lambda) \big| \hat{u}^{(n)}_{[0^n:b^n)}(\lambda)\bigr) = P_{U^{(n)}_{[0^n:1^n]}}\bigl(\hat{u}^{(n)}_{[0^n:1^n]}(\lambda)\bigr)   (27)

and

[Math. 56]

U[\lambda][0][c^n][b^{-1}] = \hat{u}^{(0)}_{c^n}(\lambda)

\hat{u}^{(n)}_{[0^n:1^n]}(\lambda) = \hat{u}^{(0)}_{[0^n:1^n]}(\lambda)\,G

where

\hat{u}^{(n)}_{[0^n:1^n]}(\lambda)   [Math. 57]


is the λth survival path. In line 5 of Algorithm 7, which performs the processing of selecting the survival paths, the L paths {circumflex over ( )}u^{(n)}_{[0^n:b^n]} with the highest probability (likelihood)

[Math. 58]

\frac{P[\lambda]}{2^{|[0^n:b^n]|}} = \prod_{d^n \in [0^n:b^n]} P_{U^{(n)}_{d^n}|U^{(n)}_{[0^n:d^n)}}\bigl(\hat{u}^{(n)}_{d^n}(\lambda) \big| \hat{u}^{(n)}_{[0^n:d^n)}(\lambda)\bigr) = P_{U^{(n)}_{[0^n:b^n]}}\bigl(\hat{u}^{(n)}_{[0^n:b^n]}(\lambda)\bigr)   (28)

are selected, where \hat{u}^{(n)}_{[0^n:b^n]}(\lambda) [Math. 59] is the λth survival path.
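Under the convention of Formula (27), in which P[λ] stores 2^{|[0^n:b^n]|} times the path probability, extending a path by a non-frozen bit multiplies P[λ] by 1 + ūθ rather than (1 + ūθ)/2. A path split in the spirit of splitPath (Algorithm 6) can then be sketched as follows (illustrative Python; this is a reading of the stated convention, not a transcription of the figure):

```python
def split_path(P, theta_top, lam, Lam):
    """Duplicate path lam into slot Lam + lam; the two copies take the
    decisions u = 0 and u = 1 with weights (1 + theta) and (1 - theta),
    where theta corresponds to Theta[lam][n][0] for the current bit."""
    theta = theta_top[lam]
    P[Lam + lam] = P[lam] * (1 - theta)   # decision u = 1 (bipolar -1)
    P[lam] = P[lam] * (1 + theta)         # decision u = 0 (bipolar +1)
```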


Regarding Application of Algorithm 4 to Information Source Code

In a case where Algorithm 4 is used in a decoder of a polar information source code, accessing the code word u_I1 and the auxiliary information vector {y_bn}_bn∈[0n:1n],

P_{U^{(0)}_{b^n}}   [Math. 60]

is defined by Formula (24), and reproduction

\{\hat{x}_{c^n}(l)\}_{c^n \in [0^n:1^n]}   [Math. 61]

is obtained. The above reproduction is defined below.





[Math. 62]

\hat{x}_{c^n}(l) = U[l][0][c^n][b^{-1}]   (29)


where

[Math. 63]

l = \arg\max_{\lambda} P[\lambda]   (30)
That is, l in Formula (30) is the most probable path (sequence).


In a case where s = parity(x^n) is added to the code word u_I1 by using an external parity check function parity, then with respect to l satisfying

\mathrm{parity}\bigl(\{\hat{x}_{c^n}(l)\}_{c^n \in [0^n:1^n]}\bigr) = s   [Math. 64]

reproduction is defined as (29).

Regarding Application of Algorithm 4 to Systematic Polar Communication Channel Code

In a case where Algorithm 4 is used in a decoder of a systematic polar communication channel code, accessing the communication channel output vector {y_bn}_bn∈[0n:1n] and the shared vector u_I1, the reproduction

\{\hat{x}_{c^n}(l)\}_{c^n \in I_0'}   [Math. 65]

defined by Formulas (29) and (30) is obtained. Here, I0′ is defined by Formula (1).


In a case where a code word (for example, a polar code with CRC [9]) having a check vector s satisfying s = parity(x^n) for all the communication channel inputs x^n is used with the external parity check function parity, then with respect to l satisfying

\mathrm{parity}\bigl(\{\hat{x}_{c^n}(l)\}_{c^n \in [0^n:1^n]}\bigr) = s   [Math. 66]

reproduction

\{\hat{x}_{c^n}(l)\}_{c^n \in I_0'}   [Math. 67]

is defined as (29).


Regarding Application of Algorithm 4 to Non-Systematic Polar Communication Channel Code

In a case where Algorithm 4 is used for a decoder of a non-systematic polar communication channel code, the binary variables

\{M[\lambda][b^n]\}   [Math. 68]

are prepared, and immediately after U[λ][n][c^0][b_{n−1}] and U[Λ+λ][n][c^0][b_{n−1}] are updated (lines 5 and 6 of Algorithm 6, and lines 8, 11, 15, and 26 of Algorithm 7), respectively,

M[\lambda][b^n] \leftarrow U[\lambda][n][c^0][b_{n-1}]

M[\Lambda+\lambda][b^n] \leftarrow U[\Lambda+\lambda][n][c^0][b_{n-1}]   [Math. 69]

is inserted.





[Math. 70]

\hat{u}_{b^n}(l) = M[l][b^n]   (31)

is defined, and reproduction

\{\hat{u}_{b^n}(l)\}_{b^n \in I_0}   [Math. 71]

is obtained. Here, l is the maximum likelihood sequence defined by Formula (30). The modified Algorithm 6 is shown in FIG. 21, and the modified Algorithm 7 is shown in FIG. 22.


In a case where a code word (for example, a polar code with CRC [9]) having a check vector s satisfying s = parity(x^n) for all the communication channel inputs x^n is used with the external parity check function parity, then with respect to l for which the corresponding communication channel input defined by Formula (29),

\{\hat{x}_{c^n}(l)\}_{c^n \in [0^n:1^n]}   [Math. 72]

satisfies

\mathrm{parity}\bigl(\{\hat{x}_{c^n}(l)\}_{c^n \in [0^n:1^n]}\bigr) = s   [Math. 73]

reproduction of the non-systematic code

\{\hat{u}_{b^n}(l)\}_{b^n \in I_0}   [Math. 74]

is defined as Formula (31).


Remark 2: Line 17 of Algorithm 4 is unnecessary in a case where a real variable with infinite precision is used. In the above description, it is assumed that a real variable with finite precision (floating point) is used in order to avoid P[λ] becoming 0 due to truncation as bn increases. Regardless of whether P[λ] has infinite precision or not, while bn ∈ I1 is continuously satisfied from the beginning (bn = 0n), line 17 of Algorithm 4 and line 2 of Algorithm 5 can be skipped.


Note that this type of technique is used in [9, Algorithm 10, lines 20 to 25], where it is repeated Nn times. In contrast, Algorithm 4 in the present embodiment uses this technique outside the update of the parameters {Θ[λ]}_{λ=0}^{L−1} (Algorithm 2), and magnifyP(Λ) is repeated only N times.


Remark 3: Assuming that L is a power of 2, Λ = L is always satisfied in line 13 of Algorithm 4. Therefore, since Λ + λ ≥ L is always satisfied, line 14 of Algorithm 4 and lines 9 to 12 of Algorithm 7 can be omitted. FIG. 23 illustrates Algorithm 4 after the skip processing of Remark 2 and the omission of Remark 3 are applied. In addition, Algorithm 7 after the omission of Remark 3 is illustrated in FIG. 24. Note that line 9 of Algorithm 4 in FIG. 23 corresponds to the skip processing of Remark 2 with respect to Algorithm 5 in FIG. 9; the skip is not performed in extendPath called from line 18 of Algorithm 4 in FIG. 23, and the processing of Algorithm 5 in FIG. 9 is executed as it is. Further, FIG. 25 illustrates a modification when Algorithm 7 illustrated in FIG. 24 is applied to the non-systematic communication channel code.


Remark 4: Since Algorithm 6 and Algorithm 7 have the same lines 2 to 3, lines 2 to 3 of Algorithm 6 can be omitted, and then lines 1 to 4 of Algorithm 7 can be moved between the line 8 and the line 9 of Algorithm 4.


Implementation of Binary Representation of Index

The description so far has used a binary representation of the index. Implementation of this representation will now be described.


In an implementation, the binary representation bk∈[0k:1k] may be represented by a corresponding integer in {0, . . . , 2k−1}. It is assumed that i(bk) is the corresponding integer of bk.


For a given b^k ∈ [0^k:1^k], b_{k−1} is the least significant bit of b^k, and the corresponding integer can be obtained as i(b_{k−1}) = i(b^k) mod 2. This is used in lines 5, 7, and 9 of Algorithm 1 and lines 2, 3, and 5 of the algorithm.


In line 9 of Algorithm 1 (FIG. 5), line 2 of Algorithm 2 (FIG. 6), and line 5 of Algorithm 3 (FIG. 7), b^{k−1} is used for a given b^k. b^{k−1} can be obtained by using a 1-bit right shift of b^k. Alternatively, the corresponding integer can be obtained by using the relational expression i(b^{k−1}) = floor(i(b^k)/2). Here, floor is a function that outputs the integer value obtained by rounding down the decimal places of a real-number input.


In lines 5 and 7 of Algorithm 2 and lines 2 and 3 of Algorithm 3, c^{n−k}0 and c^{n−k}1 are used. c^{n−k}0 can be obtained by a 1-bit left shift of c^{n−k}, and c^{n−k}1 can be obtained by setting the least significant bit of c^{n−k}0 to 1. Alternatively, the corresponding integers can be obtained using the following relational expressions.






i(c^{n−k}0) = 2 i(c^{n−k})

i(c^{n−k}1) = 2 i(c^{n−k}) + 1   [Math. 75]


In lines 5 and 7 of Algorithm 1, line 3 of Algorithm 5, lines 5 and 6 of Algorithm 6, and lines 8, 11, 15, and 26 of Algorithm 7, the corresponding integer of the null character string c^0 can be defined as i(c^0) = 0. When k = n, this also appears in lines 5 and 7 of Algorithm 2 and lines 2 and 3 of Algorithm 3. Here, the null character string is distinguished from the binary symbol 0 according to the value k. In lines 2 and 3 of Algorithm 3, the null character string b^{−1} appears when k = 1. i(b^{−1}) = 0 can be defined and can be distinguished from the binary symbol 0 according to the value k. The following relational expressions are obtained.





i(c^0 0) = 2 i(c^0)

i(c^0 1) = 2 i(c^0) + 1

i(b^{−1}) = floor(i(b^1)/2)   [Math. 76]
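These relational expressions are ordinary shift operations on the corresponding integers, as the following check illustrates (illustrative Python):

```python
i_c = 5                                # i(c^{n-k})
assert 2 * i_c == i_c << 1             # i(c^{n-k}0) = 2 i(c^{n-k})
assert 2 * i_c + 1 == (i_c << 1) | 1   # i(c^{n-k}1) = 2 i(c^{n-k}) + 1

i_b = 0b1011                           # i(b^k)
assert i_b % 2 == i_b & 1              # b_{k-1}, the least significant bit
assert i_b // 2 == i_b >> 1            # i(b^{k-1}) = floor(i(b^k)/2)
```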


Regarding Improvement of Algorithm 2 in Case of Assuming Θ[k][cn−k]∈{−1, 0, 1}

As described in Remark 1, in a case where Algorithm 1 is applied to the binary erasure channel, Θ[k][cn−k]∈{−1,0,1} is obtained. Given this, two refinements to Algorithm 2 can be introduced.


First, FIG. 14 illustrates a modified Algorithm 2. In this Algorithm 2, it is assumed that Θ[k][c^{n−k}] is represented by a 3-bit signed integer consisting of a sign bit and two bits representing the absolute value.



FIG. 15 illustrates another modified Algorithm 2. In this Algorithm 2, it is assumed that

Θ[k][c^{n−k}] = (Θ_−[k][c^{n−k}], Θ_1[k][c^{n−k}])   [Math. 77]

is a 2-bit representation consisting of a sign bit Θ_−[k][c^{n−k}] [Math. 78] and an absolute value bit Θ_1[k][c^{n−k}]. Further, line 7 of Algorithm 1 can be replaced with

[Math. 79]

U[n][c^0][b_{n-1}] \leftarrow \begin{cases} \Theta_{-}[n][0] & \text{if } \Theta_1[n][0] = 1 \\ 0 \text{ or } 1 & \text{otherwise} \end{cases}
or simply with

U[n][c^0][b_{n-1}] \leftarrow \Theta_{-}[n][0]   [Math. 80]


In this Algorithm 2, ∧ represents an AND computation, ∨ represents an OR computation, and ⊕ [Math. 81] represents an XOR computation.


Algorithm of Line 5 of Algorithm 7

Here, the content of the algorithm in line 5 of Algorithm 7 shown in FIG. 11 will be described. In the present embodiment, line 5 of Algorithm 7 can be implemented by Algorithm 7-1 (markPath(λ)) illustrated in FIG. 16. The algorithm of selectPath(0, 2·λ−1) in line 5 in Algorithm 7-1 is shown in FIG. 17 as Algorithm 7-2. The algorithm of partition(left,right) on line 2 in Algorithm 7-2 is shown in FIG. 18 as Algorithm 7-3. The algorithm of swapIndex in Algorithm 7-3 is shown in FIG. 19 as Algorithm 7-4.


Algorithms 7-1 to 7-4 can access the memory spaces {P[λ]}_{λ=0}^{2L−1}, {Index[λ]}_{λ=0}^{2L−1}, and {Active[λ]}_{λ=0}^{2L−1}. Here, Index[λ] ∈ {0, . . . , 2L−1} is an integer variable. The calculation result is stored in {Active[λ]}_{λ=0}^{2Λ−1}.


Remark 5: As mentioned in [9], instead of adopting selectPath(0, 2·Λ−1) in line 5 of Algorithm 7-1, {Index[λ]}_{λ=0}^{Λ−1} may simply be sorted to obtain P[Index[0]] ≥ P[Index[1]] ≥ . . . ≥ P[Index[Λ−1]]. The time complexity of sorting is O(Λ log Λ), but in a case where Λ is small, sorting may be faster than selectPath(0, 2·Λ−1).
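The sorting alternative of Remark 5 can be sketched as follows (illustrative Python; the function name is hypothetical):

```python
def mark_paths_by_sort(P, Active, Lam, L):
    """Sort the 2*Lam candidate indices by P in descending order and
    mark the top L paths as active, as suggested in Remark 5."""
    Index = sorted(range(2 * Lam), key=lambda lam: P[lam], reverse=True)
    for rank, lam in enumerate(Index):
        Active[lam] = rank < L
```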


Remark 6: Line 1 of Algorithm 7-3 (which can be omitted) guarantees that the average time complexity of selectPath(0, 2·Λ−1) is O(Λ). By replacing this line with the selection of the index corresponding to the median of {P[λ]}_{λ=left}^{right}, the worst-case time complexity O(Λ) can be guaranteed (refer to [4]).


Summary of Each Algorithm

Hereinafter, as a summary of each algorithm, input, output, and the like for each algorithm will be described. Note that the number n of times of polar conversion is a constant, and can be referred to from all the algorithms. The block length is N=2n.


For example, the input information described below is input from the input unit 210 in the decoding device 200 shown in FIG. 3, stored in the data storage unit 240, and referred to by the computation unit 220 that executes the algorithm. Reference to information by the computation unit 220 that executes the algorithm may be expressed as input of information to the algorithm. A computation result by the computation unit 220 is output to and stored in the data storage unit 240. The computation result (original message) is read from the data storage unit 240 by the computation unit 220 and output from the output unit 230.


Algorithm 1: FIG. 5
Input

The input to Algorithm 1 is as follows.

    • Position information I1 of the frozen bits. This is a subset of {0^n, . . . , 1^n}.
    • Information u_I1 of the frozen bits. In the case of the information source code, this is transmitted as a code word from the encoding device 100 to the decoding device 200.


The code word in the case of the information source code is more specifically as follows.


The encoding device 100 defines u(0)_bn=x_bn for the information source output x_[0n:1n], and applies a known polar conversion algorithm to obtain {u[lsb(n)][bn]}_bn∈[0n:1n]. At this time,





{u[lsb(n)][bn]}_bn∈I1


is a code word.


Series of Probability Distributions

\{P_{U^{(0)}_{b^n}}\}_{b^n \in [0^n:1^n]}   [Math. 82]
This is set as follows by the computation unit 220 depending on the object to be applied.

    • In the case of the information source code, the information observed by the encoding device 100 is set as (X1, . . . , Xn), and the auxiliary information observed by the decoding device 200 is set as (Y1, . . . , Yn). The conditional probability distribution of X_bn after observation of Y_bn is

P_{X_{b^n} | Y_{b^n}}   [Math. 83]

and, after the observation of the auxiliary information y_[0n:1n],

P_{U^{(0)}_{b^n}}   [Math. 84]

is set by the above-described Formula (24).

    • In the case of the communication channel code, the communication channel input is (X1, . . . , Xn), and the communication channel output observed by the decoding device 200 is (Y1, . . . , Yn). The conditional probability distribution of the communication channel is

P_{Y_{b^n} | X_{b^n}}   [Math. 85]

At this time, after observing the communication channel output y_[0n:1n],

P_{U^{(0)}_{b^n}}   [Math. 86]

is set by the above-described Formula (26).


The communication channel input for the systematic communication channel code is more specifically as follows.


The encoding device 100 sets x_I0′ as a message vector and u(0)_I1 as information shared by the encoding device 100 and the decoding device 200 (for example, all are set to 0). Then, a known polar code algorithm is applied to obtain (x_I1′, u_I0) from (x_I0′, u_I1). Here, the relationship of I0, I1, I0′, and I1′ is defined by the above-described Formulas (1) and (2). At this point, the communication channel input (also referred to as a code word) that is the output of encoding device 100 is obtained by aligning (x_I0′, x_I1′) in the order of indices.


The communication channel input for the non-systematic communication channel code is more specifically as follows.


The encoding device 100 sets u(0)_I0 as a message vector and u(0)_I1 as information shared by the encoding device 100 and the decoding device 200 (for example, all are set to 0). By applying a known polar code algorithm to this,





{u[lsb(n)][bn]}_bn∈[0n:1n]


is obtained and set as a communication channel input (also referred to as a code word) which is an output of the encoding device 100.


Output

As described above, the output from Algorithm 1 is stored in the memory space shown in [Math. 30].


The reproduction information as the decoding result is obtained by the following interpretation depending on the object to be applied.

    • In the case of the information source code, the computation unit 220 sets {circumflex over ( )}x_[0n:1n] defined in the above-described Formula (25) as the reproduced information source sequence with reference to the memory space of U shown in [Math. 30].
    • In the case of the systematic communication channel code, the computation unit 220 refers to the memory space of U shown in [Math. 30], and sets {circumflex over ( )}x_I0′ defined in the above-described Formula (25) as a reproduced message. Here, I0′ is as described above.
    • In the case of the non-systematic communication channel code, as described above, the modified algorithm 1 shown in FIG. 20 is used. That is, the memory area {M[bn]:bn∈I0} corresponding to the position is prepared in the non-frozen bit, and the result is stored. The reproduction message becomes {circumflex over ( )}u_I0 defined by {circumflex over ( )}u_bn=M[bn].


Algorithm 2: FIGS. 6, 14, and 15
Input

The input to Algorithm 2 is as follows.

    • Reference is made to the memories Θ and U used in Algorithm 1.
    • Integer k.
    • Binary representation bn of index. As described previously, implementations may also handle bn as an integer.


Output





    • In a case where the condition is satisfied, Algorithm 2 is recursively called, and the content of Θ is rewritten.





Algorithm 3: FIG. 7
Input





    • Reference is made to the memory U used in Algorithm 1.

    • Integer k.

    • Binary representation bn of the index. Implementations may also treat bn as an integer.





Output





    • After the content of U is rewritten, when the condition is satisfied, Algorithm 3 is recursively called.





Algorithm 4: FIG. 8
Input





    • The same information as the input information to Algorithm 1

    • List size L.





Output

The output from Algorithm 4 is stored in the memory spaces (memory areas) of Θ, U, and P as described above. Specifically, this is as follows.

[Math. 87]

\{\Theta[\lambda][k][c^{n-k}] : \lambda \in \{0, \ldots, L-1\},\ k \in \{0, \ldots, n\},\ c^{n-k} \in [0^{n-k}:1^{n-k}]\}

[Math. 88]

\{U[\lambda][k][c^{n-k}][b] : \lambda \in \{0, \ldots, L-1\},\ k \in \{0, \ldots, n\},\ c^{n-k} \in [0^{n-k}:1^{n-k}],\ b \in \{0, 1\}\}

[Math. 89]

\{P[\lambda] : \lambda \in \{0, \ldots, 2L-1\}\}
The reproduction information as the decoding result is obtained by the following interpretation depending on the object to be applied.

    • In the case of the information source code, the computation unit 220 obtains l by the above-described Formula (30) with reference to the above P, and, with reference to the above U,

\hat{x}_{[0^n:1^n]}(l)   [Math. 90]

which is defined by Formula (25), is set as the reproduced information source sequence.
    • In the case of the systematic communication channel code, the computation unit 220 obtains l by the above-described Formula (30) with reference to the P, and with reference to the above U, {circumflex over ( )}x_I′0 defined by Formula (25) is set as a reproduced message. Here, I′0 is as described above.
    • In a case where Algorithms 4 to 7 are applied to a non-systematic communication channel code, as described above, the memory area shown in [Math. 68] is prepared and the result is stored. The modified algorithms 6 and 7 accompanying this are as shown in FIGS. 21, 22, and 25, respectively. The reproduction message is as shown in [Math. 70] and [Math. 71].


Algorithm 5: FIG. 9
Input





    • Reference is made to the above memories Θ, U, and P.

    • The information u_I1 of the frozen bit described in the input of Algorithm 4 (Algorithm 1) is referred to.

    • Binary representation bn of the index. Implementations may also treat bn as an integer.

    • Size Λ of valid (active) list





Output





    • The content of the above memories U and P is rewritten.





Algorithm 6: FIG. 10
Input





    • Reference is made to the above memories Θ, U, and P.

    • Binary representation bn of the index. Implementations may also treat bn as an integer.

    • Size Λ of valid (active) list





Output





    • The content of the above memories U and P is rewritten.





Algorithm 7: FIGS. 11, 22, 24, and 25

Here, the memory





{Active [λ]:λ∈{0, . . . , 2L−1}}


is used.


Input





    • Reference is made to the above memories Θ, U, P, and Active.

    • Binary representation bn of the index. Implementations may also treat bn as an integer.

    • Size Λ of valid (active) list





Output





    • The content of the above memories U, P, and Active is rewritten. In Algorithm 7 illustrated in FIG. 25, the content of M is further rewritten.





Algorithm 8: FIG. 12

Algorithm 8 is an algorithm corresponding to copyPath(λ′,λ) in line 4 of Algorithm 6 (FIG. 10) and lines 10 and 25 of Algorithm 7 (FIG. 11), and implements a function of copying the state of the λth list to a memory that stores the state of the λ′th list.


Input





    • Reference is made to the above memories Θ and U.

    • Index λ′ of the copy-destination list.

    • Index λ of the copy-source list.





Output





    • The content of the above memories Θ and U is rewritten.





Algorithm 9: FIG. 13

Algorithm 9 is an algorithm that rewrites the content of the above memory P (scaling so that the maximum value becomes 1 without changing the ratios of the real numbers stored in the memory).


Input





    • Reference is made to the above memory P.

    • Size Λ of valid (active) list.





Output





    • The content of the above memory P is rewritten.
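
A minimal sketch of this rescaling (corresponding to the magnifyP processing referred to later), assuming P is a Python list of nonnegative reals.

def magnify_p(p, active_size):
    """Rescale P so that its maximum becomes 1 while preserving the
    ratios of the stored real numbers, as Algorithm 9 describes."""
    m = max(p[lam] for lam in range(active_size))
    if m > 0:
        for lam in range(active_size):
            p[lam] = p[lam] / m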





Algorithm 2: FIGS. 14 and 15

Algorithm 2 illustrated in FIGS. 14 and 15 is an algorithm that speeds up Algorithm 2 of FIG. 6 when Algorithm 1 is applied to a polar code for an erasure channel. The input/output is the same as that of Algorithm 2 of FIG. 6.


Algorithms 7-1 to 7-4: FIGS. 16 to 19

Algorithms 7-1 to 7-4 are algorithms that set Active[λ]=1 only for the top L values of P[λ], as described in line 5 of Algorithm 7 (FIG. 11).
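
A minimal sketch of this selection, using sorting for brevity (the linear-time selection of Reference Literature [4] could be used instead).

def set_active_top_L(p, active, L):
    """Set Active[λ]=1 only for the λ having the top L values of P[λ],
    and Active[λ]=0 otherwise (ties broken arbitrarily)."""
    order = sorted(range(len(p)), key=lambda lam: p[lam], reverse=True)
    for lam in range(len(active)):
        active[lam] = 0
    for lam in order[:L]:
        active[lam] = 1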


Simple Configuration

As described above, in order to simplify the configuration, FIGS. 23 to 25 show Algorithm 4 and Algorithm 7 for the case where it is assumed that the set I0 is not an empty set, that L is a power of 2, and that only real-number computation with finite accuracy is possible (in the unrealistic case of infinite accuracy, line 27 of Algorithm 4 in FIG. 23 can be deleted). Note that, in a case where I0 is an empty set, L=1 is set, the processing in lines 5 to 8 of Algorithm 4 illustrated in FIG. 23 is skipped, and the processing ends at line 14.


Configuration of Decoding Device 200 for Erasure Channel

As described in Remark 1, in the case of the decoding device 200 for the erasure channel, it can be assumed that the value Θ[k][cn−k] takes a value in {−1, 0, 1}.


When line 7 of Algorithm 1 (FIG. 5) is replaced with






[Math. 91]

U[n][c0][bn−1] ← { 0 if Θ[n][0]=1, 1 if Θ[n][0]=−1, 0 or 1 if Θ[n][0]=0 },
regardless of the method of encoding {−1,0,1} into Θ[k][cn−k], processing in lines 4 to 8 of Algorithm 1 (FIG. 5) may be any processing as long as the values in the tables illustrated in FIGS. 26 to 28 are substituted into Θ[k][cn−k]. Two examples (Example A, Example B) will be described below.
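
A minimal sketch of the decision of [Math. 91], where the erased case Θ[n][0]=0 permits either output and 0 is chosen here arbitrarily.

def decide_bit_erasure(theta_n0):
    """Return U[n][c0][bn-1] per [Math. 91]: 0 if Θ[n][0]=1, 1 if
    Θ[n][0]=-1, and either value (here 0) if Θ[n][0]=0 (erasure)."""
    if theta_n0 == 1:
        return 0
    if theta_n0 == -1:
        return 1
    return 0  # erased position: 0 or 1 may be output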


In a case of being applied to Algorithms 4 to 9, Θ[λ][k][cn−k] can be a variable taking a value in {−1, 0, 1}, and P[λ] can be an N-bit unsigned integer variable. In this case, the processing of magnifyP in line 17 of Algorithm 4 is unnecessary. Further, since

1custom-characterΘ[λ][n][bn]∈{0, 1, 2}  [Math. 92]

holds, for line 2 of Algorithm 5, lines 2 to 3 of Algorithm 6, and lines 2 to 3 of Algorithm 7, in the case of





1custom-characterΘ[λ][n][bn]=2   [Math. 93]


the calculation can be performed by a 1-bit left shift.

Example A: Method Using Integer Addition/Subtraction


The division can be omitted by using Algorithm 2 illustrated in FIG. 14 instead of Algorithm 2 in FIG. 6. Since carries occur, three bits including a sign bit are required for Θ[k][cn−k].


Note that, when this method is applied to successive cancellation list decoding (Algorithm 4), the only change required is to regard the real variables as integer variables.
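
As an illustration only, the shift-based update mentioned above might look as follows; the update rule itself is hypothetical, since the exact computation is defined by the figures.

def update_p_integer(p_lam, theta_val):
    """Hypothetical sketch of Example A: P[λ] is an N-bit unsigned
    integer and Θ is in {-1, 0, 1}, so 1 combined with Θ lies in
    {0, 1, 2}; when it equals 2 ([Math. 93]) the update reduces to a
    1-bit left shift."""
    s = 1 + theta_val      # in {0, 1, 2}
    if s == 2:
        return p_lam << 1  # 1-bit left shift instead of a multiplication
    return p_lam * s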


Example B: Method of Using Sign Bit and Absolute Value Bit

Further, Θ[k][cn−k] may also be encoded with the two bits

Θ_[k][cn−k], Θ1[k][cn−k]  [Math. 94]

represented as illustrated in FIG. 29.


In this case, the tables illustrated in FIGS. 30 to 32 may be implemented in lines 4 to 8 of Algorithm 2.


This can be implemented by replacing line 7 of Algorithm 1 (FIG. 5) with line 7 illustrated in FIG. 33, and using Algorithm 2 illustrated in FIG. 15 instead of Algorithm 2 in FIG. 6. Line 7 of Algorithm 1 may be












U[n][c0][bn−1] ← { Θ_[n][0] if Θ1[n][0]=1, 0 or 1 otherwise }.   [Math. 95]

Note that, when this method is applied to successive cancellation list decoding (Algorithm 4),





Θ[k][cn−k]=(Θ_[k][cn−k], Θ1[k][cn−k])   [Math. 96]


is interpreted as a 2-bit signed integer with a 1-bit sign part and a 1-bit numerical part.
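
A minimal sketch of the two-bit representation and the decision of [Math. 95]; the concrete bit assignment is an assumption, since the actual table is given in FIG. 29.

def encode_theta(value):
    """Encode Θ in {-1, 0, 1} as (Θ_, Θ1): an assumed sign bit Θ_
    (1 for -1, 0 otherwise) and an absolute-value bit Θ1 (1 iff Θ != 0)."""
    return (1 if value == -1 else 0, 1 if value != 0 else 0)

def decide_bit(theta_sign, theta1):
    """Decision of [Math. 95]: output the sign part Θ_[n][0] if
    Θ1[n][0]=1; otherwise either value (here 0) may be output."""
    return theta_sign if theta1 == 1 else 0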


Regarding Effects of Embodiment

In a case where the technique according to the present embodiment (referred to as the proposed method) is compared with the technique of Reference Literature [9] (referred to as the conventional method), the two values Pλ[β][0] and Pλ[β][1] held in P in the conventional method are expressed by the single value Θ[k][cn−k] obtained by applying the symmetric parameterization in the proposed method (Algorithm 2); thus, the memory amount used by the proposed method is half that of the conventional method. As a result, the calculation amount can also be reduced. In particular, in list decoding, the processing time for copying the states of candidates can be greatly reduced.
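
As an illustration of this halving, one consistent reading (an assumption; the defining formula of the symmetric parameterization appears earlier in the specification) is that a pair (Pλ[β][0], Pλ[β][1]) with sum 1 is replaced by the single value Θ = Pλ[β][0] − Pλ[β][1], which also matches Θ taking values in {−1, 0, 1} for the erasure channel.

def to_symmetric(p0, p1):
    """Store one value instead of the pair (p0, p1), assuming p0 + p1 = 1."""
    return p0 - p1

def from_symmetric(theta):
    """Recover the pair of conditional probabilities from Θ."""
    return (1 + theta) / 2, (1 - theta) / 2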


In addition, whereas the conventional method uses a stack memory in its processing, the proposed method directly operates on a memory space having fixed addresses. Thus, the stack-memory operations of the conventional method are simplified in the proposed method.


Furthermore, with the technique according to the present embodiment, it is possible to uniformly perform decoding for the communication channel code and decoding for the information source code.


Summary of Embodiment

In the present specification, at least a decoding device, a decoding method, and a program described in clauses described below are described.


Clause 1

A decoding device, including:

    • an input unit that inputs a code word encoded by a polar code from an original message;
    • a computation unit that decodes the original message from the code word based on a conditional probability expressed by a symmetric parameterization and having observation information as a condition; and
    • an output unit that outputs the decoded original message.


Clause 2

The decoding device according to clause 1, in which

    • in a case where the decoding device is applied to decoding for an information source code,
    • the input unit inputs information of a frozen bit as the code word from a noise-free communication channel or a storage medium, and
    • the computation unit decodes the original message by using auxiliary information as the observation information.


Clause 3

The decoding device according to clause 1, in which

    • in a case where the decoding device is applied to decoding for a communication channel code,
    • the input unit inputs, as the code word, a communication channel output from a communication channel having noise, and
    • the computation unit decodes the original message by using the communication channel output as the observation information.


Clause 4

The decoding device according to clause 3, in which

    • in a case where the decoding device is applied to decoding for a systematic communication channel code,
    • information of a frozen bit is shared between an encoding side and a decoding side,
    • the input unit inputs the communication channel output in which the original message obtained by polar conversion on the encoding side and additional information are used as communication channel inputs, and
    • the computation unit decodes the original message by using the information of the frozen bit and the communication channel output.


Clause 5

The decoding device according to clause 3, in which

    • in a case where the decoding device is applied to decoding for a non-systematic communication channel code,
    • information of a frozen bit is shared between an encoding side and a decoding side,
    • the input unit inputs the communication channel output in which information of a non-frozen bit and information obtained by polar conversion from the information of the frozen bit on the encoding side are used as communication channel inputs, and
    • the computation unit decodes the original message by storing a determination result of 0 or 1 in a memory area corresponding to a position of a non-frozen bit based on the information of the frozen bit and the communication channel output.


Clause 6

The decoding device according to any one of clauses 1 to 5, in which

    • the computation unit executes list decoding for selecting a sequence of a most probable path from among a predetermined number (list size) of survival paths.


Clause 7

A decoding method executed by a decoding device, including:

    • an input step of inputting a code word encoded by a polar code from an original message;
    • a computation step of decoding the original message from the code word based on a conditional probability expressed by a symmetric parameterization and having observation information as a condition; and
    • an output step of outputting the decoded original message.


Clause 8

A program for causing a computer to function as each unit in the decoding device according to any one of clauses 1 to 6.


Although the present embodiment has been described above, the present invention is not limited to such a specific embodiment, and various modifications and changes can be made within the scope of the gist of the present invention described in the claims.


REFERENCE LITERATURE





    • [1] E. Arikan, “Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels”, IEEE Trans. Inform. Theory, vol. IT-55, no. 7, pp. 3051-3073, Jul. 2009.

    • [2] E. Arikan, “Source polarization”, Proc. 2010 IEEE Int. Symp. Inform. Theory, Austin, U.S.A., Jun. 13-18, 2010, pp. 899-903. (corresponding to Non-Patent Literature 2)

    • [3] E. Arikan, “Systematic polar coding”, IEEE Communications Letters, vol. 15, no. 8, pp. 860-862, Aug. 2010.

    • [4] M. Blum, R. W. Floyd, V. Pratt, R. L. Rivest, and R. E. Tarjan, “Time bounds for selection”, J. Computer and System Sciences, vol. 7, no. 4, pp. 448-461, 1973.

    • [5] R. Mori and T. Tanaka, “Source and channel polarization over finite fields and Reed-Solomon matrices”, IEEE Trans. Inform. Theory, vol. IT-60, no. 5, pp. 2720-2736, May 2014.

    • [6] J. Muramatsu, “Successive-cancellation decoding of linear source code”, Proceedings of the 2019 IEEE Information Theory Workshop, Visby, Sweden, Aug. 25-28, 2019. Extended version available at arXiv:1903.11787[cs.IT], 2019.

    • [7] E. Şaşoğlu, “Polarization and polar codes”, Found. Trends Commun. Inf. Theory, vol. 8, no. 4, pp. 259-381, Oct. 2012.

    • [8] I. Tal and A. Vardy, “How to construct polar codes”, IEEE Trans. Inform. Theory, vol. IT-59, no. 10, pp. 6563-6582, Oct. 2013.

    • [9] I. Tal and A. Vardy, “List decoding of polar codes”, IEEE Trans. Inform. Theory, vol. IT-61, no. 5, pp. 2213-2226, May 2015. (corresponding to Non-Patent Literature 1)





REFERENCE SIGNS LIST






    • 100 Encoding device


    • 200 Decoding device


    • 210 Input unit


    • 220 Computation unit


    • 230 Output unit


    • 240 Data storage unit


    • 1000 Drive device


    • 1001 Recording medium


    • 1002 Auxiliary storage device


    • 1003 Memory device


    • 1004 CPU


    • 1005 Interface device


    • 1006 Display device


    • 1007 Input device


    • 1008 Output device




Claims
  • 1. A decoding device, comprising: a memory; and a processor configured to execute inputting a code word encoded by a polar code from an original message; decoding the original message from the code word based on a conditional probability expressed by a symmetric parameterization and having observation information as a condition; and outputting the decoded original message.
  • 2. The decoding device according to claim 1, wherein, in a case where the decoding device is applied to decoding for an information source code, the processor further executes inputting information of a frozen bit as the code word from a noise-free communication channel or a storage medium, and decoding the original message by using auxiliary information as the observation information.
  • 3. The decoding device according to claim 1, wherein, in a case where the decoding device is applied to decoding for a communication channel code, the processor further executes inputting, as the code word, a communication channel output from a communication channel having noise, and decoding the original message by using the communication channel output as the observation information.
  • 4. The decoding device according to claim 3, wherein, in a case where the decoding device is applied to decoding for a systematic communication channel code, information of a frozen bit is shared between an encoding side and a decoding side, and the processor further executes inputting the communication channel output in which the original message obtained by polar conversion on the encoding side and additional information are used as communication channel inputs, and decoding the original message by using the information of the frozen bit and the communication channel output.
  • 5. The decoding device according to claim 3, wherein, in a case where the decoding device is applied to decoding for a non-systematic communication channel code, information of a frozen bit is shared between an encoding side and a decoding side, and the processor further executes inputting the communication channel output in which information of a non-frozen bit and information obtained by polar conversion from the information of the frozen bit on the encoding side are used as communication channel inputs, and decoding the original message by storing a determination result of 0 or 1 in a memory area corresponding to a position of a non-frozen bit based on the information of the frozen bit and the communication channel output.
  • 6. The decoding device according to claim 1, wherein the processor further executes list decoding for selecting a sequence of a most probable path from among the number (predetermined list size) of survival paths.
  • 7. A decoding method executed by a decoding device including a memory and a processor, the method comprising: inputting a code word encoded by a polar code from an original message; decoding the original message from the code word based on a conditional probability expressed by a symmetric parameterization and having observation information as a condition; and outputting the decoded original message.
  • 8. A non-transitory computer-readable recording medium having computer-readable instructions stored thereon, which when executed, cause a computer to function as the decoding device according to claim 1.
PCT Information
Filing Document Filing Date Country Kind
PCT/JP2021/000035 1/4/2021 WO