This application claims the priority benefit of Korean Patent Application No. 10-2022-0167300, filed on Dec. 5, 2022, Korean Patent Application No. 10-2023-0065479, filed on May 22, 2023, Korean Patent Application No. 10-2023-0093191, filed on Jul. 18, 2023, and Korean Patent Application No. 10-2023-0110701, filed on Aug. 23, 2023 in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference.
Example embodiments relate to a method for encoding and decoding based on improved deep polarization and an apparatus therefor.
In a communication system or a memory system, data to be transmitted may be transmitted through a physical channel, for example, a wired communication channel, a wireless communication channel, and a storage medium. In a process in which the data is transmitted through the physical channel, noise may be mixed in or data may be partially lost, making restoration difficult.
As technology for detecting and correcting errors that occur in transmitted data, error correcting codes are being studied. For example, encoding technology using a polar code, which is one type of error correcting code, has been developed. The polar code refers to a code that corrects errors based on channel polarization, that is, a channel polarization phenomenon, in a physical channel through which data is transmitted.
The existing encoding method using the polar code constructs an encoding device with a single layer and uses a polar kernel matrix connected to the single layer, and thus has a disadvantage in that a decoding error occurs for short-length information bits.
Therefore, there is a need to propose technology for solving the disadvantage that a decoding error occurs for short-length information bits.
To ensure low decoding complexity and to solve the disadvantage that a decoding error occurs for short-length information bits, example embodiments provide an encoding device that may perform deep polar encoding using polar kernel matrices respectively connected to a plurality of layers and a decoding device that may perform backpropagation-based deep polar decoding corresponding thereto.
Here, the technical subjects to be solved by the disclosure are not limited to the aforementioned subjects and may be expanded in various ways without departing from the technical spirit and scope of the disclosure.
According to an example embodiment, an encoding device for performing code encoding may generate a codeword using polar kernel matrices respectively connected to a plurality of layers.
According to an aspect, the polar kernel matrices respectively connected to the plurality of layers may have different matrix sizes.
According to another aspect, the polar kernel matrices respectively connected to the plurality of layers may have smaller matrix sizes toward upper layers, that is, going from an output end to an input end of the encoding device.
According to still another aspect, polar kernel matrices respectively connected to remaining layers excluding a bottom layer provided at the output end of the encoding device among the plurality of layers may have a transpose relationship with a polar kernel matrix connected to the bottom layer.
According to still another aspect, at least one polar kernel matrix included in the polar kernel matrices may be connected to each of the plurality of layers.
According to still another aspect, the encoding device may successively and alternately use an upper triangular matrix and a lower triangular matrix for the polar kernel matrices respectively connected to the plurality of layers.
According to still another aspect, a polar kernel matrix used in a layer immediately preceding a bottom layer provided at an output end of the encoding device among the plurality of layers may be the upper triangular matrix when a polar kernel matrix used in the bottom layer is the lower triangular matrix and may be the lower triangular matrix when the polar kernel matrix used in the bottom layer is the upper triangular matrix.
According to still another aspect, the encoding device may generate the codeword using a successive encoding method based on a linear or nonlinear function using a partial connection between the plurality of layers.
According to still another aspect, the encoding device may generate the codeword using a successive encoding method of using cyclic redundancy check (CRC) precoding for at least some of information bits.
According to still another aspect, the encoding device may use at least one bit among an information bit, a connection bit, and a frozen bit as an input bit of encoding in each of the plurality of layers, and an input position of each of the information bit, the connection bit, and the frozen bit may be configured to not overlap.
According to still another aspect, a codeword output as an output value of encoding in each of the plurality of layers may be determined based on a polar kernel matrix used in each of the plurality of layers, the connection bit, and the information bit and may be used as an input of a connection bit of the encoding in an immediately following layer.
According to still another aspect, an input position of an information bit of encoding in each of the plurality of layers may be selected as a position at which a bit channel capacity is greater than or equal to a preset value based on a channel status and a polar kernel matrix used in each of the plurality of layers, and an input position of a connection bit of encoding in each of the plurality of layers may be selected as at least one position at which the bit channel capacity is greater than or equal to the preset value excluding a position at which the information bit is assigned among row positions at which a weight size of rows of the polar kernel matrix used in each of the plurality of layers is greater than or equal to the preset value.
According to still another aspect, the encoding device may retransmit at least some of output values of encoding in each of the plurality of layers when retransmission is required in a communication environment.
According to an example embodiment, a decoding device for performing code decoding may use, as a parity bit, a frozen bit for each layer used for encoding in each of a plurality of layers included in an encoding device when decoding.
According to an aspect, the decoding device may use, as parity bits, frozen bits used for encoding of remaining layers excluding a bottom layer provided at an output end of the encoding device among the plurality of layers while decoding a connection bit used for encoding of the bottom layer based on a frozen bit used for encoding of the bottom layer among the plurality of layers.
According to another aspect, the decoding device may successively perform parity bit check in a backpropagation structure.
According to still another aspect, the decoding device may perform decoding using at least some of output values of encoding in each of the plurality of layers previously received from the encoding device and at least some of output values of encoding in each of the plurality of layers received again from the encoding device.
According to still another aspect, the decoding device may decode a connection bit used for encoding of a bottom layer provided at an output end of the encoding device among the plurality of layers using a pattern of a connection bit available for encoding of the bottom layer as an additional frozen bit.
According to still another aspect, the decoding device may generate in advance the pattern of the connection bit available for encoding of the bottom layer, may decode information bits in parallel by simultaneously using the generated connection bit and the frozen bit, and may select and decode an information bit having a highest reliability among the decoded information bits and a connection bit corresponding to the information bit having the highest reliability.
According to still another aspect, the decoding device may decode an information bit and a connection bit used for encoding of a layer immediately preceding the bottom layer through an inverse matrix of an encoding matrix used for encoding of the layer immediately preceding the bottom layer, based on the connection bit decoded as the connection bit used for encoding of the bottom layer, and may thereby decode an information bit used for encoding of the immediately preceding layer.
According to still another aspect, the decoding device may decode an information bit and a connection bit used for encoding of an (L−1)-th layer through an inverse matrix of an encoding matrix used for encoding of the (L−1)-th layer that is a layer immediately preceding an L-th layer among the plurality of layers based on a connection bit used for encoding of the L-th layer, and may decode an information bit used for encoding of the (L−1)-th layer.
According to an example embodiment, a decoding device for performing code decoding may compute a log-likelihood ratio (LLR) of a received signal that is received from an encoding device according to a guessing random additive noise decoding (GRAND) rule, may estimate a transmission codeword by inverting bits of the received signal in descending order of the LLR, and may determine the estimated transmission codeword as a codeword transmitted from the encoding device by sequentially verifying a layer-by-layer parity bit of each of a plurality of layers included in the encoding device in a backpropagation structure using the estimated transmission codeword and an inverse matrix of a polar kernel matrix used for encoding of each of the plurality of layers.
According to an aspect, the decoding device may compute an LLR of each of connection bits for each of the plurality of layers, may estimate each layer-by-layer connection bit in descending order of the LLR among the layer-by-layer connection bits, and then may successively backpropagate an LLR of a connection bit of a corresponding immediately preceding layer when verification of the layer-by-layer parity bit succeeds.
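The guessing order described above can be sketched as follows. This is a hedged illustration, not the decoding device's actual routine: the helper name `grand_style_guesses` and the flip schedule (hard decision first, then flips of the least-reliable bits by |LLR|) are assumptions for exposition.

```python
import numpy as np
from itertools import combinations

def grand_style_guesses(llr, max_flips=2):
    """Hedged sketch of a GRAND-style guessing order: the hard decision first,
    then candidates made by flipping the least reliable bits (smallest |LLR|)."""
    llr = np.asarray(llr, dtype=float)
    hard = (llr < 0).astype(int)            # sign of the LLR gives the bit estimate
    order = np.argsort(np.abs(llr))         # least reliable positions first
    yield hard.copy()
    for k in range(1, max_flips + 1):
        for pos in combinations(order[: max_flips + 2], k):
            cand = hard.copy()
            cand[list(pos)] ^= 1            # invert a small set of unreliable bits
            yield cand

# Each candidate would then be checked against the layer-by-layer parity bits,
# stopping at the first estimate that passes every check.
cands = list(grand_style_guesses([2.0, -0.5, 1.0], max_flips=2))
```

In a full decoder, the loop over candidates would terminate as soon as the backpropagation-based parity verification of every layer succeeds.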
According to an example embodiment, a decoding device for performing code decoding may perform LLR backpropagation that sequentially computes an LLR of a connection bit of each of a plurality of layers included in an encoding device using a belief propagation (BP) algorithm, based on a received signal received from the encoding device and a successive encoding structure of the plurality of layers.
According to an aspect, the decoding device may sequentially apply the BP algorithm using a parity check matrix present in a null space of an encoding matrix used for encoding of the plurality of layers, in a process of backpropagating the layer-by-layer LLR of each of the plurality of layers.
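A parity check matrix lying in the null space of an encoding matrix over GF(2) may be computed, for illustration, with plain Gaussian elimination. This is a generic sketch, not the decoding device's routine; the function name is hypothetical.

```python
import numpy as np

def gf2_null_space(G):
    """Basis of the right null space of G over GF(2): rows h with G @ h = 0 (mod 2).
    Plain Gaussian elimination; an illustrative sketch, not an optimized routine."""
    A = np.array(G, dtype=int) % 2
    k, n = A.shape
    pivots, r = [], 0
    for c in range(n):
        hit = [i for i in range(r, k) if A[i, c]]
        if not hit:
            continue
        A[[r, hit[0]]] = A[[hit[0], r]]       # bring a pivot row up
        for i in range(k):
            if i != r and A[i, c]:
                A[i] ^= A[r]                  # eliminate column c elsewhere
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:                            # one basis vector per free column
        h = np.zeros(n, dtype=int)
        h[f] = 1
        for row, c in enumerate(pivots):
            h[c] = A[row, f]
        basis.append(h)
    return np.array(basis, dtype=int).reshape(-1, n)

G = np.array([[1, 0, 1],
              [0, 1, 1]])        # toy generator matrix
H = gf2_null_space(G)            # every row h of H satisfies G h^T = 0 (mod 2)
```

Rows of `H` can then serve as parity checks for the BP algorithm applied during layer-by-layer LLR backpropagation.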
According to another aspect, the decoding device may perform backpropagation for the layer-by-layer LLR, may perform successive encoding using an information bit estimated for each of the plurality of layers and an LLR corresponding to the estimated information bit, and may perform LLR forward propagation of updating the LLR of the connection bit of each of the plurality of layers.
According to still another aspect, the decoding device may alternately perform backpropagation of the LLR and forward propagation of the LLR.
According to some example embodiments, by providing an encoding device that performs deep polar encoding using polar kernel matrices respectively connected to a plurality of layers and a decoding device that performs backpropagation-based deep polar decoding corresponding thereto, it is possible to accomplish the technical effect of solving a disadvantage that a decoding error occurs for short-length information bits while ensuring low decoding complexity.
However, the effect of the disclosure is not limited to the aforementioned effect and may be expanded in various ways without departing from the technical spirit and scope of the disclosure.
These and/or other aspects, features, and advantages of the invention will become apparent and more readily appreciated from the following description of embodiments, taken in conjunction with the accompanying drawings of which:
Hereinafter, example embodiments will be described in detail with reference to the accompanying drawings. When describing the example embodiments with reference to the accompanying drawings, like reference numerals refer to like elements.
Also, the terms (terminology) used herein are those used to appropriately explain the example embodiments and may vary depending on intent of a viewer or an operator or customs of the field to which the example embodiments pertain. Therefore, the terms should be defined based on the overall contents of the present specification. For example, as used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Also, it will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components or a combination thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Also, although terms, “first,” “second,” and the like, are used to explain various regions, directions, and shapes, the regions, the directions, and the shapes should not be defined by such terms. The terms are used only to distinguish one region, direction, or shape from another region, direction, or shape. Therefore, a portion referred to as a first portion in an example embodiment may be termed a second portion in another example embodiment.
Also, it should be understood that various example embodiments differ from each other, but are not necessarily mutually exclusive. For example, a specific shape, structure, and feature described herein may be implemented in another example embodiment without departing from the technical spirit and scope of the disclosure. Also, it should be understood that a position, an arrangement, or a configuration of an individual component in the category of each example embodiment may be changed without departing from the technical spirit and scope of the disclosure.
A method for encoding and decoding based on improved deep polarization described below may solve a disadvantage that a decoding error occurs for short-length information bits while ensuring low decoding complexity by performing deep polar encoding using polar kernel matrices respectively connected to a plurality of layers and by performing backpropagation-based deep polar decoding corresponding thereto, and, at the same time, may prevent a degradation in a transmission rate performance in consideration of performance of each of channels with low complexity by assigning information bits and frozen bits based on a weight of each of the channels.
Referring to
Here, the polar kernel matrices respectively connected to the plurality of layers may construct a partial successive connection between layers with different matrix sizes. For example, the polar kernel matrices respectively connected to the plurality of layers may have smaller sizes toward upper layers, that is, going from an output end to an input end of the encoding device 100. In detail, for example, as illustrated in
Here, polar kernel matrices respectively connected to remaining layers excluding the bottom layer provided at the output end of the encoding device 100 among the plurality of layers may have a transpose relationship with a polar kernel matrix connected to the bottom layer. For example, as illustrated in
Also, a single polar kernel matrix may be connected to each of the plurality of layers as illustrated in
An upper triangular matrix and a lower triangular matrix may be successively and alternately used for the polar kernel matrices respectively connected to the plurality of layers.
For example, a polar kernel matrix used as a preprocessing matrix in a layer immediately preceding the bottom layer provided at the output end of the encoding device 100 among the plurality of layers may be an upper triangular matrix when the polar kernel matrix used in the bottom layer is a lower triangular matrix and may be the lower triangular matrix when the polar kernel matrix used in the bottom layer is the upper triangular matrix. In detail, for example, as illustrated in
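The alternating triangular structure and the transpose relationship described above can be illustrated with a small numerical sketch. The helper `polar_kernel` below is hypothetical, introduced only for illustration; it builds Kronecker powers of the standard 2x2 kernel.

```python
import numpy as np

# Standard 2x2 polar kernel: lower triangular; its transpose is upper triangular.
G2 = np.array([[1, 0],
               [1, 1]], dtype=int)

def polar_kernel(n, lower=True):
    """Hypothetical helper: 2**n polar kernel as a Kronecker power of G2 (or G2^T)."""
    G = G2 if lower else G2.T
    K = np.array([[1]], dtype=int)
    for _ in range(n):
        K = np.kron(K, G)
    return K

# If the bottom layer uses the lower triangular kernel, the layer immediately
# above would use the upper triangular (transposed) kernel, and so on.
G4_lower = polar_kernel(2, lower=True)
G4_upper = polar_kernel(2, lower=False)
assert np.array_equal(G4_upper, G4_lower.T)          # transpose relationship
assert np.array_equal(np.tril(G4_lower), G4_lower)   # lower triangular
assert np.array_equal(np.triu(G4_upper), G4_upper)   # upper triangular
```

Because the Kronecker power of a triangular matrix stays triangular, alternating the kernel and its transpose layer by layer preserves the structure at every size.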
As another example, when the encoding device 100 includes two layers, a polar kernel matrix used as a preprocessing matrix in a layer immediately preceding the bottom layer provided at the output end of the encoding device 100 may be a partial upper triangular matrix when the polar kernel matrix used in the bottom layer is the lower triangular matrix and may be a partial lower triangular matrix when the polar kernel matrix used in the bottom layer is the upper triangular matrix.
The encoding device 100 as above may perform deep polar encoding by generating a codeword using a successive encoding method based on a linear or nonlinear function using a partial connection between the plurality of layers. However, the encoding device 100 may use not only a polar kernel matrix but also an existing linear block code generation matrix (e.g., low-density parity-check (LDPC), Reed-Muller code, etc.) as an encoding matrix.
In terms of using the successive encoding method of using the partial connection between the plurality of layers, the encoding device 100 may generate a codeword with the successive encoding method of using cyclic redundancy check (CRC) precoding for at least some of information bits.
In detail, the encoding device 100 may generate a codeword with a successive encoding method of using at least one bit among an information bit, a connection bit, and a frozen bit as an input bit of encoding in each of the plurality of layers.
Here, the encoding device 100 may be configured such that input positions of the information bit, the connection bit, and the frozen bit do not overlap.
As described above, since the successive encoding method of using at least one bit among the information bit, the connection bit, and the frozen bit as the input bit of encoding in each of the plurality of layers is used, a codeword output as an output value of encoding in each of the plurality of layers may be determined based on a polar kernel matrix used in each of the plurality of layers, the connection bit, and the information bit and may be used as an input of a connection bit of encoding in an immediately following layer. For example, a codeword used as an output value of encoding in the (L−1)-th layer (Layer L−1) may be used as an input of a connection bit of encoding in the L-th layer (Layer L) that is an immediately following layer.
If at least one polar kernel matrix is used in each of the plurality of layers (e.g., if the number of polar kernel matrices used in each of the plurality of layers is plural), codewords output as output values of encoding in each of the plurality of layers may be used as an input of multiple connection bits in the immediately following layer.
In particular, the encoding device 100 may use a weight as well as a bit channel capacity of each of channels as a metric that considers performance of each of the channels. Hereinafter, the weight may be defined as the number of occurrences of specific binary data (e.g., binary data of “1”) in bits generated by the polar kernel matrix used in each of the plurality of layers of the encoding device 100.
In detail, an input position of an information bit of encoding in each of the plurality of layers may be selected as a position at which a bit channel capacity is greater than or equal to a preset value based on a channel status and a polar kernel matrix used in each of the plurality of layers by the encoding device 100, and an input position of a connection bit of encoding in each of the plurality of layers may be selected as at least one position at which the bit channel capacity is greater than or equal to the preset value excluding a position at which the information bit is assigned among row positions at which a weight size of rows of the polar kernel matrix used in each of the plurality of layers by the encoding device 100 is greater than or equal to the preset value.
Here, the “preset value” that is a reference value to which the bit channel capacity is initially compared may be different from the “preset value” that is a reference value to which the weight size of rows is compared and may be the same as or different from the “preset value” that is a reference value to which the bit channel capacity is compared after the weight size is compared.
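The selection rule above can be sketched numerically. This is an illustrative assumption, not the device's actual rate profile: the function `select_positions`, the capacity values, and both thresholds below are hypothetical.

```python
import numpy as np

def select_positions(G, capacity, cap_thresh, weight_thresh, num_info):
    """Hypothetical sketch: information positions from high-capacity bit channels,
    connection positions from high-row-weight, high-capacity positions that are
    not already used for information bits."""
    n = G.shape[0]
    reliable = [i for i in range(n) if capacity[i] >= cap_thresh]
    info = sorted(reliable, key=lambda i: -capacity[i])[:num_info]
    row_weight = G.sum(axis=1)                 # weight = number of ones per row
    conn = [i for i in range(n)
            if row_weight[i] >= weight_thresh
            and capacity[i] >= cap_thresh
            and i not in info]
    return info, conn

G4 = np.array([[1, 0, 0, 0],
               [1, 1, 0, 0],
               [1, 0, 1, 0],
               [1, 1, 1, 1]])
caps = [0.1, 0.6, 0.7, 0.99]                          # assumed bit-channel capacities
info, conn = select_positions(G4, caps, 0.5, 2, 1)    # info -> [3], conn -> [1, 2]
```

As described above, the capacity threshold and the row-weight threshold are independent reference values and need not be equal.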
When retransmission is required in a communication environment, the encoding device 100 may retransmit at least some of output values of encoding in each of the plurality of layers. This is to use not only an output value of encoding previously received from the encoding device 100 but also an output value of encoding received again from the encoding device 100 when a decoding device 400, which is described below, performs decoding.
The encoding device 100 may include a determination unit 110 and a mapping unit 120 and may perform a code encoding method of operations S210 and S220 to perform the aforementioned encoding method.
In operation S210, the determination unit 110 may determine a first channel group connected to a lower encoding matrix and a second channel group connected to an upper encoding matrix and the lower encoding matrix among channels, based on a weight of each of polarized channels.
In the following, in the encoding device 100, an encoding matrix close to the input end is defined as the upper encoding matrix and an encoding matrix close to the output end is defined as the lower encoding matrix. Therefore, the upper encoding matrix and the lower encoding matrix may sequentially overlap, forming different layers as an output of the upper encoding matrix is connected to an input of the lower encoding matrix.
Also, the weight of each of the polarized channels may be defined according to the number of specific binary data (e.g., binary data of “1”) on a bit generated by each of the channels being polarized.
In detail, the determination unit 110 may determine, as the second channel group, at least one channel having the weight in the range of greater than or equal to a first value and less than a second value among the channels and may determine, as the first channel group, at least one remaining channel excluding the at least one channel determined as the second channel group among the channels.
Here, when determining the second channel group, the determination unit 110 may determine, as the second channel group, at least one channel having the same weight within the range of greater than or equal to the first value and less than the second value. For example, if each of channels 1, 2, and 3 has the weight within the range of greater than or equal to the first value and less than the second value and here, the weight of channel 1 and the weight of channel 2 are the same and the weight of channel 3 is different, the determination unit 110 may determine channels 1 and 2 having the same weight as the second channel group, excluding channel 3 having the different weight.
Also, when determining the second channel group, the determination unit 110 may consider the number of at least some codewords generated from information bits to be assigned to the channels included in the second channel group and at least some codewords generated from frozen bits to be assigned to the channels included in the second channel group.
For example, the determination unit 110 may determine at least one channel having the weight within the range of greater than or equal to the first value and less than the second value as the second channel group to correspond to the number of at least some codewords generated from information bits to be assigned to the channels included in the second channel group and at least some codewords generated from frozen bits to be assigned to the channels included in the second channel group.
In detail, for example, the determination unit 110 may determine, as the second channel group, the number of at least one channel that matches the number of at least some codewords generated from information bits to be assigned to the channels included in the second channel group and at least some codewords generated from frozen bits to be assigned to the channels included in the second channel group among channels having the weight within the range of greater than or equal to the first value and less than the second value and may exclude remaining channels.
In operation S220, the mapping unit 120 may generate codewords by mapping the information bits to the upper encoding matrix and the lower encoding matrix and may assign the generated codewords to the first channel group and the second channel group.
As described above, since the upper encoding matrix and the lower encoding matrix sequentially overlap, forming different layers, as the output of the upper encoding matrix is connected to the input of the lower encoding matrix, the mapping unit 120 may map the information bits at different depths for each channel group using the sequentially overlapping upper encoding matrix and lower encoding matrix.
Hereinafter, the depth represents the number of times polarization is performed by the lower encoding matrix and the upper encoding matrix.
In detail, in operation S220, the mapping unit 120 may map at least some bits of information bits to the lower encoding matrix such that at least some codewords generated from at least some bits of the information bits are assigned to at least one channel having a weight of greater than or equal to the second value among channels included in the first channel group, may map at least some frozen bits to the lower encoding matrix such that at least some codewords generated from at least some bits of the frozen bits are assigned to at least one channel having a weight of less than the first value among the channels included in the first channel group, and may sequentially map at least some remaining bits of the information bits and at least some remaining bits of the frozen bits to the upper encoding matrix and the lower encoding matrix such that at least some codewords generated from at least some remaining bits of the information bits and at least some codewords generated from at least some remaining bits of the frozen bits are assigned to the channels included in the second channel group.
An encoding method in a structure that includes layer 1 corresponding to the upper encoding matrix and layer 2 corresponding to the lower encoding matrix is described above. However, even in a structure in which a plurality of upper encoding matrices and a plurality of lower encoding matrices are implemented and a plurality of sets of the upper encoding matrix and the lower encoding matrix are implemented, the aforementioned encoding method may be performed on each set of the upper encoding matrix and the lower encoding matrix and repeated across the plurality of sets. Since a plurality of lower encoding matrices and a plurality of upper encoding matrices are implemented, the determination unit 110 and the mapping unit 120 may repeatedly operate.
A structure of the encoding device 100 according to an example embodiment will be further described with reference to
Referring to
The mapping unit 120 may generate a codeword (X4) by mapping some information bits, for example, an information bit (U4), of information bits (V2, U4) to the lower encoding matrix (G4) and then may assign the generated codeword (X4) to the channel (W4) having the weight of greater than or equal to the second value between the channels (W1, W4) included in the first channel group.
Also, the mapping unit 120 may generate a codeword (X1) by mapping some frozen bits, for example, a frozen bit (U1), of frozen bits (U1, V1) to the lower encoding matrix (G4) and then may assign the generated codeword (X1) to the channel (W1) having the weight of less than the first value between the channels (W1, W4) included in the first channel group.
Also, the mapping unit 120 may generate codewords by mapping a remaining information bit (V2) of the information bits (V2, U4) and a remaining frozen bit (V1) of the frozen bits (U1, V1) to the upper encoding matrix (G2) and by mapping bits (U2, U3) output as a result thereof to the lower encoding matrix (G4) and then may sequentially assign the generated codewords (X2, X3) to the channels (W2, W3) included in the second channel group.
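The blocklength-4 example above may be sketched as follows, assuming the standard 2x2 polar kernel for the upper encoding matrix (G2) and its Kronecker square for the lower encoding matrix (G4); the exact matrices in the drawing may differ (for instance, by transposition), so this is an illustrative sketch only.

```python
import numpy as np

G2 = np.array([[1, 0],
               [1, 1]], dtype=int)   # upper encoding matrix in this sketch
G4 = np.kron(G2, G2)                 # 4x4 lower encoding matrix

def encode_n4(v1, v2, u1, u4):
    """Blocklength-4 sketch of the layered mapping described above.
    v2 and u4 are information bits; v1 and u1 are frozen bits (normally 0)."""
    u2, u3 = (np.array([v1, v2]) @ G2) % 2        # upper layer output (U2, U3)
    x = (np.array([u1, u2, u3, u4]) @ G4) % 2     # codeword (X1, X2, X3, X4)
    return x

x = encode_n4(v1=0, v2=1, u1=0, u4=1)   # -> [1, 0, 0, 1]
```

The codeword components X1 through X4 would then be assigned to the channels W1 through W4 as described above: X4 to the high-weight channel, X1 to the low-weight channel, and X2, X3 to the second channel group.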
Although the encoding method according to an example embodiment is described as an example applied when a blocklength is 4, the encoding method may be applied equally when the blocklength is 2^n that is 4 or more.
Hereinafter, the encoding method of the encoding device 100 according to an example embodiment is described based on deep polar codes, a family of pre-transformed polar codes.
A (N, K, {N_ℓ}, {K_ℓ}, {A_ℓ^C}) deep polar code is defined with the following parameters:
For the parameters, a deep polar encoder (hereinafter, the deep polar encoder or an encoder represents the encoding device 100 of
Information bit splitting and mapping: An information vector d ∈ F_2^K that transmits K information bits is split into L information sub-vectors d_ℓ, each with a size of K_ℓ = |A_ℓ^I| for ℓ ∈ [L], where Σ_{ℓ∈[L]} K_ℓ = K.
Let u_ℓ = [u_{ℓ,1}, u_{ℓ,2}, . . . , u_{ℓ,N_ℓ}] ∈ F_2^{N_ℓ} be an input vector of layer ℓ with a length of N_ℓ for ℓ ∈ [L]. An index set of the ℓ-th layer is partitioned into three nonoverlapping sub-index sets as below.
[N_ℓ] = A_ℓ^I ∪ A_ℓ^C ∪ F_ℓ, <Equation 1>
Here, F_ℓ = [N_ℓ]\{A_ℓ^I ∪ A_ℓ^C} denotes a frozen bit set of layer ℓ and A_ℓ^I ∩ A_ℓ^C = ϕ. The information vector of the ℓ-th layer, d_ℓ ∈ F_2^{K_ℓ}, is assigned to u_ℓ(A_ℓ^I) = d_ℓ. Meanwhile, the frozen bits are assigned to u_ℓ(F_ℓ) = 0.
Successive encoding: As illustrated in
For ℓ ∈ [L−1], the ℓ-th layer encoder performs pre-transformation using a transposed polar transform defined as follows.
G_{N_ℓ}^T = (G_2^{⊗log2(N_ℓ)})^T. <Equation 2>
The transpose matrix G_{N_ℓ}^T has an upper triangular matrix structure. As long as the upper triangular matrix structure with nonzero diagonal components is maintained, any pre-transformation matrix may be used.
In a first layer, the encoder generates an input vector of first layer encoding, u_1 = [d_1, 0]. Here, A_1^C = ϕ. An encoder output of the first layer is generated by G_{N_1}^T as follows.
v_1 = u_1 G_{N_1}^T. <Equation 3>
In a second layer, an output vector of first layer encoding is assigned to an input of the second layer at a connection index set A_2^C, i.e., u_2(A_2^C) = v_1. Since u_2(A_2^I) = d_2 and u_2(F_2) = 0, an input vector of second layer encoding is generated as follows.
u_2 = [u_2(A_2^I), u_2(A_2^C), u_2(F_2)]. <Equation 4>
By multiplying u_2 with G_{N_2}^T, an output vector of second layer encoding is generated as follows.
v_2 = u_2 G_{N_2}^T. <Equation 5>
Similar to the second layer encoding, an ℓ-th layer encoder for 2 < ℓ ≤ L takes the input vector.
u_ℓ = [u_ℓ(A_ℓ^I), u_ℓ(A_ℓ^C), u_ℓ(F_ℓ)], <Equation 6>
Here, u_ℓ(A_ℓ^C) = v_{ℓ−1}, and a corresponding output vector is generated by multiplying u_ℓ with the polar kernel matrix of the ℓ-th layer as follows.
v_ℓ = u_ℓ G_{N_ℓ}^T for ℓ < L, and v_L = u_L G_{N_L}. <Equation 7>
For notational simplicity, an output of a last layer encoder is denoted as channel input x = v_L ∈ F_2^{N_L}.
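The successive encoding of Equations 3 to 7 can be sketched end to end as follows. The layer description format (`N`, `info`, `conn` index lists) and the two-layer toy parameters are illustrative assumptions; pre-transform layers use the transposed kernel and the last layer the kernel itself, as described above.

```python
import numpy as np

G2 = np.array([[1, 0],
               [1, 1]], dtype=int)

def kernel(N):
    """Polar kernel G_N as the Kronecker power of G2 (N must be a power of two)."""
    K = np.array([[1]], dtype=int)
    while K.shape[0] < N:
        K = np.kron(K, G2)
    return K

def deep_polar_encode(layers, data):
    """Sketch of the layered successive encoding described above."""
    v = np.zeros(0, dtype=int)
    for ell, layer in enumerate(layers):
        u = np.zeros(layer["N"], dtype=int)       # frozen positions stay 0
        u[layer["info"]] = data[ell]              # information sub-vector d_ell
        u[layer["conn"]] = v                      # previous output as connection bits
        G = kernel(layer["N"])
        if ell < len(layers) - 1:                 # transpose except in the last layer
            G = G.T
        v = (u @ G) % 2
    return v                                      # channel input x = v_L

# Two-layer toy example: N1 = 2 feeding connection positions 1 and 2 of N2 = 4.
layers = [{"N": 2, "info": [0, 1], "conn": []},
          {"N": 4, "info": [3], "conn": [1, 2]}]
x = deep_polar_encode(layers, [np.array([0, 1]), np.array([1])])  # -> [0, 1, 0, 1]
```

Each layer's output becomes the connection-bit input of the next layer, matching the chain of Equations 4 through 7.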
An error-correcting performance of the deep polar code is determined according to a selection of information and connection sets in all layers (A_ℓ^I and A_ℓ^C). Therefore, the encoding device 100 according to an example embodiment may employ an efficient rate-profiling method that may achieve superior coding performance with flexible decoding complexity. The rate-profiling method is to construct A_ℓ^I and A_ℓ^C individually across the entire layers and to carefully connect the layers.
Selection of A_ℓ^I and A_ℓ^C: A method of selecting information and connection sets for the ℓ-th layer for ℓ ∈ {1, . . . , L} is described. To construct A_ℓ^I and A_ℓ^C independently over the layers, it is assumed that a channel input is formed by the transpose of polar transformation x_ℓ = u_ℓ G_{N_ℓ}^T for ℓ ∈ {1, 2, . . . , L−1} and x_L = u_L G_{N_L}.
Here, u_a^b = [u_a, u_{a+1}, . . . , u_b] for a, b ∈ [N_ℓ] with a < b, and I(W_{N_ℓ}^{(i)}) denotes an i-th bit-channel capacity.
To construct A_ℓ^I and A_ℓ^C, an index set R_ℓ is initially defined according to RM rate-profiling by selecting i ∈ [N_ℓ] such that the weight of rows is greater than or equal to d_ℓ^min.
R_ℓ = {i ∈ [N_ℓ] : wt(g_{N_ℓ,i}) ≥ d_ℓ^min}, <Equation 9>
Here, d_ℓ^min denotes a target minimum distance for ℓ-th layer encoding. Then, an ordered index set of R_ℓ is defined as follows.
R̄_ℓ = {i_1, i_2, . . . , i_{|R_ℓ|}}, <Equation 10>
Here, i1 denotes an index of a most reliable synthetic channel, that is, I()≥I()≥ . . . ≥I(). This ordered set may be constructed using indices of Bhattacharyya values, that is, Z()≤Z(), . . . ≤Z().
Using the ordered index set, the information set Iℓ is generated by selecting bit-channel indices that are approximately polarized to a capacity of one for the given code length Nℓ.
Iℓ={i∈R̄ℓ:I(WNℓ(i))≥1−δ}, <Equation 11>
Here, δ>0 is selected as an arbitrarily small value and |Iℓ|=Kℓ.
Then, the connection set Aℓ is constructed as a subset of R̄ℓ/Iℓ. With the assumption that i1, i2, . . . , iNℓ−1∈R̄ℓ/Iℓ are indices that provide a highest bit-channel capacity in R̄ℓ/Iℓ such that I(WNℓ(i1))≥I(WNℓ(i2))≥ . . . , the connection set is represented as follows.
Aℓ={i1,i2, . . . ,iNℓ−1}. <Equation 12>
A frozen set of layer ℓ is defined as a collection of indices that are excluded from both the information set and the connection set as follows.
Fℓ=[Nℓ]/{Iℓ∪Aℓ}. <Equation 13>
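The rate-profiling steps of Equations 9 to 13 may be illustrated for a BEC, for which the Bhattacharyya parameters evolve in closed form (Z→2Z−Z2 for the degraded split and Z→Z2 for the upgraded split). The function and variable names below are hypothetical, and the capacity-threshold selection of Equation 11 is approximated here by simply taking the K most reliable candidates.

```python
import numpy as np

def bhattacharyya_bec(N, eps):
    # Z-parameter evolution for BEC(eps): Z -> 2Z - Z^2 (worse), Z -> Z^2 (better)
    z = [eps]
    while len(z) < N:
        z = [2 * a - a * a for a in z] + [a * a for a in z]
    return np.array(z)

def row_weight(i):
    # Row weight of G_N for 0-indexed row i equals 2**popcount(i)
    return 2 ** bin(i).count("1")

def profile(N, eps, K, n_conn, d_min):
    R = [i for i in range(N) if row_weight(i) >= d_min]       # Equation 9
    z = bhattacharyya_bec(N, eps)
    R_ord = sorted(R, key=lambda i: z[i])                     # Equation 10
    info = sorted(R_ord[:K])                                  # Equation 11 (top-K proxy)
    conn = sorted(R_ord[K:K + n_conn])                        # Equation 12
    frozen = sorted(set(range(N)) - set(info) - set(conn))    # Equation 13
    return info, conn, frozen
```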
The deep polar codeword generated with the proposed information and layer-by-layer connection sets may ensure a minimum distance of dLmin.
Encoding complexity: The deep polar code may be regarded as a normalized version of standard polar codes. The standard polar code may be regarded as the deep polar code having a single layer in which a null connection is set. Also, the deep polar code operates as an alternative form of a pre-transformed polar code that includes a PAC code.
Encoding complexity: The encoding complexity of the ℓ-th layer is O(Nℓ log2 Nℓ). Therefore, the total encoding complexity may be expressed as the sum of the layer-by-layer complexities over all layers.
When NL is sufficiently larger than Nℓ for ℓ∈[L−1], the encoding complexity may be comparable to that of standard polar codes. Selecting a small size of Nℓ for ℓ∈[L−1] is a practical and effective strategy for reducing the encoding complexity.
Construction of flexible connection set: When the size of Aℓ differs from Nℓ−1, a pre-transformation matrix of the ℓ-th layer may be constructed as a partial matrix of GNℓT by selecting columns, to provide additional flexibility in the layer connection. This construction improves the adaptability of the layer connection of a system.
Superposition codes: The deep polar code is generated using the L-th layered polar transformation in which an input vector uL includes uL,AL, uL,IL, and uL,FL. Consequently, the deep polar code may be represented as a superposition of two sub-codewords, x=xP⊕xTP.
Here, the first term xP forms a polar code generated from the information bits of layer L. Meanwhile, the second term xTP generates a polar code transformed through the multiple layers. Consequently, the deep polar code may be understood as a superposition of the polar code and the transformed polar code.
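The superposition view follows from the linearity of the polar transformation over F2: splitting uL into its information part and its connection part splits the codeword into xP and xTP. A small numerical check, with toy index sets and input values assumed purely for illustration:

```python
import numpy as np

def kernel(n):
    # G_N as a Kronecker power of [[1, 0], [1, 1]] over GF(2)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, np.array([[1, 0], [1, 1]], dtype=np.uint8)) % 2
    return G

# Assumed toy split of the last layer: I_L carries fresh information bits,
# A_L carries the pre-transformed bits arriving from the lower layers.
G8 = kernel(3)
I_L, A_L = [5, 6, 7], [3]
u = np.zeros(8, dtype=np.uint8)
u[I_L] = [1, 0, 1]              # d_L
u[A_L] = [1]                    # v_{L-1} (assumed value from the lower layer)
x = u.dot(G8) % 2               # deep polar codeword

u_p = np.zeros(8, dtype=np.uint8)
u_p[I_L] = u[I_L]
u_tp = np.zeros(8, dtype=np.uint8)
u_tp[A_L] = u[A_L]
x_p = u_p.dot(G8) % 2           # polar sub-codeword x_P
x_tp = u_tp.dot(G8) % 2         # transformed-polar sub-codeword x_TP
```

By linearity, x equals the bitwise XOR of x_p and x_tp.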
Minimum distance of deep polar code: The minimum distance of the deep polar code is greater than or equal to the minimum distance of layer L, dLmin. By the construction, the encoder selects row vectors of GNL, with respect to all of the information and connection sets, whose weights are greater than or equal to dLmin. That is, for j∈IL∪AL, it is expressed as follows.
wt(gNL,j)≥dLmin.
Since the transformation matrix has an upper triangular structure, the transformed sub-codewords xTP are ensured to maintain a weight of at least dLmin.
Weight spectrum improvement: Similar to other pre-transformed polar codes, the deep polar code has an improved weight spectrum compared to existing polar codes since the sub-codewords xTP are generated through partial pre-transformation.
Cyclic redundancy check (CRC)-aided deep polar codes: One possible extension of deep polar codes is to concatenate them with CRC codes. In this approach, precoding of the K-bit information vector d is performed using a CRC generator polynomial, which adds CRC bits. The length of the resulting information sequence becomes K+KCRC, where KCRC denotes the number of CRC bits. Also, short CRC bits may be added to the information bits of each layer to improve decoding performance.
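As an illustration of the CRC precoding step, the following sketch appends KCRC parity bits by polynomial long division over GF(2). The generator polynomial shown is an arbitrary example, not one specified by the embodiments.

```python
def crc_append(bits, poly=0b1011):
    # Append KCRC = deg(poly) parity bits: remainder of bits * x^KCRC mod poly
    k_crc = poly.bit_length() - 1
    reg = 0
    for b in bits + [0] * k_crc:
        reg = (reg << 1) | b
        if reg >> k_crc:            # reduce whenever the top bit is set
            reg ^= poly
    crc = [(reg >> (k_crc - 1 - i)) & 1 for i in range(k_crc)]
    return bits + crc

def crc_check(word, poly=0b1011):
    # A valid word (message || CRC) leaves a zero remainder
    k_crc = poly.bit_length() - 1
    reg = 0
    for b in word:
        reg = (reg << 1) | b
        if reg >> k_crc:
            reg ^= poly
    return reg == 0
```

A decoder can use crc_check to discard candidate codewords whose CRC fails, as in the CRC-aided list decoding discussed later.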
In this section, provided are some examples of deep polar codes in a short blocklength regime to better understand the proposed encoding method. Throughout this section, focus is made on a short packet transmission scenario with a length of 32 over a BEC with an erasure probability of 0.5, that is, I(W)=0.5, at different code rates {11/32, 15/32}.
For a deep polar code construction, it is important to understand the polarization bit-channel capacity and row weight of G32. As shown in
The ordered index sets according to bit-channel capacity and according to normalized weights are mismatched. For example, I(W32(25))>I(W32(12)) but wt(g32,25)<wt(g32,12).
Accordingly, when bit-index 25 is included in the information set according to polar rate-profiling to facilitate SC decoding, the minimum distance of codewords is reduced. To achieve superior coding performance, the bit indices need to be selected in consideration of both the weight and the bit-channel capacity. Rate-profiling of deep polar codes uses this strategy with sequential pre-transformation between layers.
A (32, 11) deep polar code is constructed using two transformation matrices, i.e., G8T for layer 1 and G32 for layer 2. Here, the code rate is R=11/32. To construct the deep polar codeword using G8T and G32, information sets for layer 1, I1, and layer 2, I2, and a set A2 that connects the layers need to be defined. For a target minimum distance of the second layer, d2min=8, set R2 may be generated by collecting indices of rows in G32 whose weights are greater than or equal to d2min=8. That is, R2={8, 12, 14, 15, 16, 20, 22, 23, 24, 26, 27, 28, 29, 30, 31, 32}.
From the bit-channel capacity I(W32(i)) for i∈R2, the information set for the second layer is selected as follows.
I2={8, 16, 24, 28, 29, 30, 31, 32}.
Here, |I2|=K2. The connection set is then given as
A2=R2/I2={26, 27, 23, 22, 15, 20, 14, 12}.
It is worth mentioning that bit-channel index 25 is excluded from the connection set although the capacity of bit-channel index 25 is greater than that of index 12, that is, I(W32(25))≥I(W32(12)). This is because wt(g32,25)=4<wt(g32,12)=8. The frozen set of the second layer is F2={1, 2, . . . , 32}/{I2∪A2}.
In the first layer, four indices that yield the highest bit-channel capacities using the polar transformation matrix G8 are selected, while ensuring d1min≥4. The corresponding information and frozen sets are selected as I1={1, 2, 3, 5} and F1={8, 7, 6, 4}, respectively. Then, an output vector of the first layer, v1=u1G8T, is connected to the input at the connection set, u2,A2=v1.
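The candidate set R2 of Example 1 can be reproduced directly from the row weights of G32; the weight of each row equals 2 raised to the number of ones in the binary expansion of its 0-indexed row number. The short script below is a verification sketch, not part of the claimed method.

```python
import numpy as np

# Build G_32 as a Kronecker power of [[1, 0], [1, 1]] over GF(2)
G = np.array([[1]], dtype=np.uint8)
for _ in range(5):
    G = np.kron(G, np.array([[1, 0], [1, 1]], dtype=np.uint8)) % 2

# R2: 1-indexed rows of G_32 whose weight is at least d2min = 8
R2 = [i + 1 for i in range(32) if int(G[i].sum()) >= 8]
```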
Provided is another example that constructs a (32, 15) deep polar code, using G4T for layer 1 and G32 for layer 2, of which the code rate is R=15/32 over the same BEC with I(W)=0.5. Similar to Example 1, indices corresponding to row vectors of G32 with a weight greater than or equal to 8 are selected in an identical manner as R2={8, 12, 14, 15, 16, 20, 22, 23, 24, 26, 27, 28, 29, 30, 31, 32}.
Using the deep polar rate-profiling method, information and connection sets are selected as I2={15, 16, 22, 23, 24, 26, 27, 28, 29, 30, 31, 32} with K2=|I2|=12 and A2={8, 12, 14, 20}. Then, the information set for the first layer is selected as I1={1, 2, 3} with K1=|I1|=3.
Deep polar codes in Example 1 and Example 2 are compared to the existing RM and polar codes. For a fair comparison, polar codes are generated by selecting the top K∈{11, 15} bit-channel indices that provide the highest capacity. For RM code construction with information bit size K∈{11, 15}, a sub-codeword set of the [32, 16] RM code is considered. Since the information set of the [32, 16] RM code is given as the following Equation 17, an information set for K=15 is selected as a subset of RM16, that is, RM15⊆RM16. In particular, to optimize the code performance, the weight distributions of the 16 possible sub-codebooks of the [32, 16] RM code are evaluated and the one having the smallest number of codewords with the minimum weight is selected.
RM16={i:wt(g32,i)≥8}, <Equation 17>
Weight distribution: As shown in the following Table 1, the proposed deep polar codes provide superior weight distributions to those of the RM and polar codes at both code rates. In particular, the deep polar code has fewer codewords with the minimum weight than the RM code while maintaining the same minimum distance of 8 at both code rates. Also, the [32, 15] deep polar code may improve both the minimum distance and the number of codewords with the smallest weight.
Block error rate (BLER) performance: To demonstrate the effect of the weight distribution improvement in code design, block error rates for the three codes are displayed under maximum likelihood (ML) decoding while increasing the erasure probability of the BEC. As shown in
One remarkable result is that the deep polar code may achieve better BLER performance than the dependence-testing (DT) bound, one of the strongest achievability bounds for the BEC in a finite blocklength regime. The result shows that the deep polar code has the potential to approach the capacity of the BEC within small gaps in the short blocklength regime at various code rates.
Pre-transformation using a sub-upper triangular matrix: In contrast to conventional pre-transformed polar codes that use an upper triangular matrix for pre-transformation, the proposed encoding structure introduces a distinct approach. The resulting transformation matrix across a plurality of layers may be seen as a sub-matrix of an upper triangular matrix. As shown in the aforementioned example, a two-layered encoding structure may be expressed with a unified pre-transformation matrix T∈F2(32×32), as follows.
It is evident that the resulting pre-transformation matrix T does not exhibit an upper triangular structure. Instead, a sub-matrix of T takes the form of an upper triangular matrix. This condition is less strict than that of the existing pre-transformed codes, in which the entire matrix T needs to be upper triangular.
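The equivalence between the two-stage encoding and a unified pre-transformation matrix T can be checked numerically. In the sketch below, the connection coordinates A2 are assumed for illustration only, and T is built as an identity matrix with G8T embedded on the A2 block.

```python
import numpy as np

def kernel(n):
    # Kronecker-power polar kernel over GF(2)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, np.array([[1, 0], [1, 1]], dtype=np.uint8)) % 2
    return G

G8, G32 = kernel(3), kernel(5)
A2 = [7, 11, 13, 14, 19, 21, 22, 25]        # assumed connection coordinates (0-indexed)

# Unified pre-transformation: identity with G8^T embedded on the A2 block
T = np.eye(32, dtype=np.uint8)
T[np.ix_(A2, A2)] = G8.T

rng = np.random.default_rng(0)
w = rng.integers(0, 2, 32, dtype=np.uint8)  # raw inputs; w[A2] plays the role of u1
x_unified = (w.dot(T) % 2).dot(G32) % 2

# Two-stage reference: layer-1 pre-transform, then layer-2 polar transform
u2 = w.copy()
u2[A2] = w[A2].dot(G8.T) % 2                # v1 = u1 G8^T assigned to the connection set
x_two_stage = u2.dot(G32) % 2
```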
In this section, presented is an improved deep polar encoding method based on the previous construction technology. In the previous approach, the encoder of each layer uses a single pre-transformation matrix. Although this encoding method is simple, using only a single pre-transformation matrix for each layer limits the flexibility of code design. This basic deep polar code construction method is referred to as Construction A.
To improve the flexibility of deep polar code design, a novel approach called a generalized successive encoding method for deep polar codes is introduced. The key concept of this method is to use a plurality of pre-transformation matrices for each encoding layer, and the sizes of the multiple pre-transformation matrices may differ. This technology is differentiated from Construction A by improving adaptability and customization in the code design. This advanced construction method is called Construction B. Construction B provides significantly improved functionality compared to the original Construction A by harnessing a plurality of pre-transformation matrices in each encoding layer, which provides a more versatile and adaptable framework for deep polar code design.
Some notations are defined as follows:
The encoding process is a sequential operation. The difference lies in that a plurality of pre-transformation matrices may be used for each layer and the size of each pre-transformation matrix may differ.
Step 1: A j-th encoder output vector in layer ℓ is generated as follows.
vℓj=[uℓ,Aℓj, uℓ,Iℓj, uℓ,Fℓj]Tℓj. <Equation 19>
Step 2: The j-th output vector of layer ℓ is assigned to the j-th connection bits of layer ℓ+1 as follows.
uℓ+1,Aℓ+1j:=vℓj. <Equation 20>
The encoder performs encoding by repeatedly using Step 1 and Step 2 until last layer L is reached.
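Steps 1 and 2 above may be sketched as follows. The function name and toy layer sizes are assumptions; the point of Construction B is that each layer may apply several pre-transformation matrices of different sizes in parallel and scatter their outputs onto the next layer's connection sets.

```python
import numpy as np

def kernel(n):
    # Kronecker-power polar kernel over GF(2)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, np.array([[1, 0], [1, 1]], dtype=np.uint8)) % 2
    return G

def encode_layer_multi(blocks, next_size, next_conns):
    """Construction B sketch for one layer.

    blocks: list of (input vector u_l^j, pre-transformation matrix T_l^j);
    next_conns: connection index sets A_{l+1}^j of the next layer.
    """
    u_next = np.zeros(next_size, dtype=np.uint8)
    for (u, T), conn in zip(blocks, next_conns):
        v = u.dot(T) % 2          # Step 1: v_l^j = u_l^j T_l^j
        u_next[conn] = v          # Step 2: u_{l+1, A_{l+1}^j} := v_l^j
    return u_next
```

For example, two parallel matrices G2T and G4T of different sizes may feed a length-16 next layer.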
The deep polar encoder may have the structure illustrated in
In layer 3, two pre-transformation matrices, T31 and T32, are used in parallel, and the sizes thereof may differ. Each branch generates v3j by constructing an input vector of encoder T3j through a combination of the connection bits and the information bits of the branch. A final layer encoder uses u4,A4j:=v3j and generates a final codeword x using T4=GN4.
To explain the two different construction methods, an example focusing on [32, 12] deep polar codes is presented, and weight distributions and BLER performance are compared.
The following Table 2 compares weight distributions for three different codes: the [32, 12] polar code, a deep polar code with Construction A, and a deep polar code with Construction B. Owing to the pre-transformation, the two deep polar code constructions may significantly reduce the number of codewords with the minimum weight compared to the polar code. Also, Construction B has fewer minimum-weight codewords than Construction A. This example clearly highlights the advantage of using a plurality of connection sets in deep polar code design.
To further evaluate the performance, the BLERs of the two deep polar codes are investigated in comparison with polar and RM-type codes under ML decoding.
Referring to
In detail, as illustrated in
That is, the decoding device 400 may successively perform parity bit check in a backpropagation structure.
The decoding device 400 may perform decoding using at least some of output values of encoding in each of the plurality of layers previously received from the encoding device 100 and at least some of output values of encoding in each of the plurality of layers received again from the encoding device 100. For example, when decoding using a received signal fails, the decoding device 400 may receive again an output value of an (L−1)-th layer and retry decoding using the output value of the (L−1)-th layer, i.e., a received signal that is received in re-reception and a received signal that is received in a previous reception. If decoding fails, the decoding device 400 may try decoding in the same manner by receiving again an output value of an (L−2)-th layer.
Here, as illustrated in
In detail, the decoding device 400 may generate in advance the pattern of the connection bit available for encoding of the bottom layer, may decode information bits in parallel by simultaneously using the generated connection bit and the frozen bit, and may select and decode an information bit having a highest reliability among the decoded information bits and a connection bit corresponding to the information bit having the highest reliability.
Here, the decoding device 400 may decode an information bit and a connection bit used for encoding of a layer immediately preceding the bottom layer through an inverse matrix of an encoding matrix used for encoding of the layer immediately preceding the bottom layer, based on the decoded connection bit as the connection bit used for encoding of the bottom layer, and may decode an information bit used for encoding of the immediately preceding layer.
For example, the decoding device 400 may decode an information bit and a connection bit used for encoding of an (L−1)-th layer through an inverse matrix of an encoding matrix used for encoding of the (L−1)-th layer that is a layer immediately preceding an L-th layer among the plurality of layers based on a connection bit used for encoding of the L-th layer, and may decode an information bit used for encoding of the (L−1)-th layer.
Also, as illustrated in
That is, when verification of a parity bit check to a top layer succeeds, the decoding device 400 may determine the estimated transmission codeword as a codeword transmitted from the encoding device 100.
For example, the decoding device 400 may compute an LLR of each of connection bits for each of the plurality of layers, may estimate the layer-by-layer connection bit in descending order among the layer-by-layer connection bits, and then may successively backpropagate an LLR of a connection bit of a corresponding immediately preceding layer when verification of the layer-by-layer parity bit succeeds.
Also, as illustrated in
Here, the decoding device 400 may sequentially apply the BP algorithm using a parity check matrix present in a null space of an encoding matrix used for encoding of the plurality of layers, in a process of backpropagating the layer-by-layer LLR of each of the plurality of layers.
For example, the decoding device 400 may perform backpropagation for the layer-by-layer LLR, may perform successive encoding using an information bit estimated for each of the plurality of layers and an LLR corresponding to the estimated information bit, and may perform LLR forward propagation of updating the LLR of the connection bit of each of the plurality of layers.
Therefore, the decoding device 400 may alternately perform backpropagation of the LLR and forward propagation of the LLR.
The aforementioned decoding device 400 may include a receiver 410 and an obtainer 420 and may perform a code decoding method of operations S510 and S520 to perform the aforementioned decoding method.
In operation S510, the receiver 410 may receive codewords through channels that include a first channel group connected to a lower encoding matrix and a second channel group connected to an upper encoding matrix and the lower encoding matrix.
Here, the first channel group and the second channel group correspond to the first channel group and the second channel group used in the encoding method according to an example embodiment described above with reference to
In operation S520, the obtainer 420 may obtain information bits from the received codewords by sequentially using a decoding matrix corresponding to the lower encoding matrix and a decoding matrix corresponding to the upper encoding matrix.
In detail, the obtainer 420 may perform operation S520 through a first operation of obtaining at least some bits of frozen bits transmitted through the first channel group from the received codewords, a second operation of computing LLR of bits output from the lower encoding matrix using at least some bits of the obtained frozen bits, a third operation of obtaining at least some bits of information bits transmitted through the first channel group using the decoding matrix corresponding to the lower encoding matrix based on the LLR of bits output from the lower encoding matrix, and a fourth operation of obtaining at least some remaining bits of frozen bits transmitted through the second channel group and at least some remaining bits of information bits transmitted through the second channel group using the decoding matrix corresponding to the upper encoding matrix based on the LLR of bits output from the lower encoding matrix.
Here, when computing the LLR in the second operation, the obtainer 420 may not firmly determine a bit of information transmitted from the upper encoding matrix to the lower encoding matrix and thus, may compute the LLR through probability marginalization.
Also, when performing the fourth operation, the obtainer 420 may perform the fourth operation through a (4−1)-th operation of firmly determining a bit of information transmitted from the upper encoding matrix to the lower encoding matrix based on the LLR of bits output from the lower encoding matrix and a (4−2)-th operation of obtaining at least some remaining bits of frozen bits transmitted through the second channel group and at least some remaining bits of information bits transmitted through the second channel group using the decoding matrix corresponding to the upper encoding matrix based on the bit of information transmitted from the upper encoding matrix to the lower encoding matrix and at least some bits of frozen bits transmitted through the first channel group.
By performing operation S520 through the aforementioned operations, for example, the first operation to the fourth operation, the decoding device 400 may decode and acquire the entire information bits.
A decoding method in a structure that includes layer 1 of the decoding matrix corresponding to the upper encoding matrix and layer 2 of the decoding matrix corresponding to the lower encoding matrix is described above. However, even in a structure in which a plurality of decoding matrices corresponds to the upper encoding matrix and a plurality of decoding matrices corresponds to the lower encoding matrix, such that a plurality of sets of the decoding matrix corresponding to the upper encoding matrix and the decoding matrix corresponding to the lower encoding matrix is implemented, the aforementioned decoding method may be performed on each set and repeated across the plurality of sets. Therefore, when the numbers of decoding matrices corresponding to the upper encoding matrix and to the lower encoding matrix are plural, the receiver 410 and the obtainer 420 may operate repeatedly.
A structure of the decoding device 400 according to an example embodiment will be further described with reference to
Hereinafter, the decoding method of the decoding device 400 according to an example embodiment is described based on deep polar codes that are a family of pre-transformed polar codes.
As illustrated in
BP decoding in layer L: The DBP decoder initially computes soft information on xi from channel output yi for i∈[NL] as follows.
Let a message from check node uL,i to variable node vL,j in layer L be γCL,i→j. Also, let a message from the variable node vL,j to the check node uL,i be γVL,j→i. Let VL,i(uL,j) denote the set of variable nodes connected to check node uL,j, given by the support of the corresponding row of the transformation matrix.
A check message transmitted from uL,j to vL,i for i∈VL,i(uL,j) is computed as follows.
After a fixed number of iterations, a resulting LLR of vL,i for i∈[NL] is computed as follows.
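The exact message-update rules are given by the equations above; as a generic illustration of such updates, the following sketch uses the common min-sum approximation for the check-node message and the leave-one-out sum for the variable-node message. This is standard belief-propagation arithmetic shown for reference, not the specific schedule of the embodiments.

```python
import numpy as np

def check_node_update(in_llrs):
    # Min-sum approximation: sign product times minimum magnitude
    # of the other incoming messages, for each outgoing edge.
    out = np.empty_like(in_llrs)
    for i in range(len(in_llrs)):
        others = np.delete(in_llrs, i)
        out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

def variable_node_update(channel_llr, in_msgs):
    # Variable-to-check message: channel LLR plus all other incoming messages
    total = channel_llr + np.sum(in_msgs)
    return total - in_msgs          # leave-one-out sums
```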
Belief backpropagation in layer L: Let the LLR vector obtained from BP decoding in layer L be [γL,1, . . . , γL,NL]. From hard decisions on this vector, the connection and information bits of layer L are estimated as follows.
ûL,AL=(v̂L(GNL)−1)AL, <Equation 25>
d̂L=(v̂L(GNL)−1)IL. <Equation 26>
BP decoding in layer L−1: The decoder obtains, from the previous layer decoding, an LLR value for the encoder output of layer L−1 and assigns it as v̂L−1:=ûL,AL. Similar to layer L, the decoder performs BP decoding using a parity check in layer L−1.
(v̂L−1(TL−1)−1)FL−1=0. <Equation 27>
After a plurality of iterations, BP decoding updates the soft information on v̂L−1. Let the LLR value updated after BP decoding, using v̂L−1 as the initial value under the parity check condition of Equation 27, be v̂L−1′. Then, the connection and information bits of layer L−1 are obtained using the updated LLR value v̂L−1′.
ûL−1,AL−1=(v̂L−1′(TL−1)−1)AL−1, <Equation 28>
d̂L−1=(v̂L−1′(TL−1)−1)IL−1. <Equation 29>
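The backpropagation step relies on the pre-transformation being easy to invert: since GN is an involution over F2, its transpose is as well, so the layer input is recovered by one more matrix multiplication instead of an explicit matrix inversion. A sketch under assumed connection and information sets:

```python
import numpy as np

def kernel(n):
    # Kronecker-power polar kernel over GF(2)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, np.array([[1, 0], [1, 1]], dtype=np.uint8)) % 2
    return G

T = kernel(3).T                     # pre-transformation matrix of a layer
# G_N is self-inverse over GF(2), hence T^{-1} = T: one more multiplication
# undoes the pre-transformation during backpropagation.
rng = np.random.default_rng(1)
v = rng.integers(0, 2, 8, dtype=np.uint8)   # hard decisions for the layer output
u_hat = v.dot(T) % 2                        # u = v T^{-1} = v T
A, I = [3, 5], [6, 7]                       # assumed connection / information sets
conn_hat, info_hat = u_hat[A], u_hat[I]     # in the spirit of Equations 28 and 29
```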
BP decoding in layer 1: The same belief backpropagation algorithm is performed down to layer 1. Let the LLR value updated after BP decoding, using v̂1:=û2,A2 as the initial LLR value under the parity check condition, be v̂1′.
(v̂1(T1)−1)F1=0. <Equation 30>
The decoder obtains an LLR value for the information bits of layer 1 using the updated LLR value.
d̂1=(v̂1′(T1)−1)I1. <Equation 31>
Belief forward propagation in layer 1: The decoder obtains, from the belief backpropagation down to layer 1, an LLR value for the information bit d̂1. The decoder updates the LLR for v̂1′ to v̂1″ using encoder matrix T1 and frozen set F1 as follows.
v̂1″=[d̂1, 0F1]T1. <Equation 32>
Belief forward propagation from layer 1 to layer 2: The decoder assigns û2,A2:=v̂1″. Using the connection bits with the LLR values obtained during the belief backpropagation in layer 2, the decoder updates the LLR value as follows.
v̂2″=[û2,A2, d̂2, 0F2]T2. <Equation 33>
Belief forward propagation up to layer L: The same belief forward propagation decoding procedure is repeated up to layer L. Let the output of belief forward propagation in layer L be v̂L″.
v̂L″=[ûL,AL, d̂L, 0FL]TL. <Equation 34>
Then, the decoder performs BP decoding again using v̂L″ as the initial variable node LLR value.
Belief backpropagation and forward propagation for iteration: The backpropagation and forward propagation procedures described above may be alternately repeated until decoding succeeds.
An efficient SCL decoding method for deep polar codes using a bit-wise BPPC principle is proposed as illustrated in
To improve the path pruning mechanism in SCL decoding with a limited list size, a new algorithm called the bit-wise BPPC algorithm is introduced. The basic concept of this algorithm is to verify a backpropagation syndrome check condition at the bit level of each layer, particularly for an element denoted as uL,i in which i belongs to set AL. To simplify the notation, Tℓ,k is defined as the upper-left k×k submatrix of Tℓ. A pre-transformation matrix of a deep polar code has a unique property: the inverse of the pre-transformation matrix is the pre-transformation matrix itself, since the polar transformation is self-inverse over the binary field.
(Tℓ)−1=Tℓ, ∀ℓ∈[L]. <Equation 35>
When expressing the connection bits of layer ℓ, with i1<i2< . . . <ik, as uℓ,Aℓ=[uℓ,i1, uℓ,i2, . . . , uℓ,ik], the first k input bits of layer ℓ−1 may be represented as follows.
ûℓ−1,[1:k]=uℓ,Aℓ,[1:k]Tℓ−1,k, <Equation 36>
Here, uℓ,Aℓ,[1:k] denotes a subvector that includes the first k elements of uℓ,Aℓ. Then, two portions are extracted from the estimated bits of layer ℓ−1. One portion is used for the parity check on the frozen positions and the other portion includes connection bits that are recursively processed using Equation 36. Finally, the estimated frozen bits are collected from each layer ℓ∈[L] and the syndrome is verified.
The SCL decoder uses a level-by-level search strategy on a binary tree. At each bit uL,i where i∈IL∪AL, the decoder extends the list of candidate paths by exploring two branches of the binary tree, appending uL,i=0 or uL,i=1 to each candidate path. Consequently, the number of paths is doubled, but limited to a predetermined maximum value S. In contrast to a standard SCL decoder, the proposed approach applies the bit-wise BPPC method for i∈AL whenever a new path is generated. This bit-wise BPPC mechanism may easily remove a path that does not satisfy the parity check condition while SCL decoding proceeds. If a new path satisfies the BPPC and is ranked as one of the S most reliable paths, the corresponding path is included in the list. On the contrary, if the new path fails the BPPC or exhibits lower reliability than the existing S paths in the list, the corresponding path is discarded. This iterative process continues until all bits i∈[NL] are processed. Ultimately, the decoder selects the path with the highest reliability metric as the output. When S=1, the decoder reduces to an SC decoder using the backpropagation parity check. A deep polar SCL decoding procedure is described in the following Algorithm 1.
alive ← {1}; pool ← [2S]/ alive;
alive ← alive ∪{s′}; pool ← pool/{s′};
← currently available frozen bits of layer ∈ [L];
alive ← alive/{s}; pool ← pool ∪{s} ;
alive ← alive/{s}; pool ← pool ∪{s} ;
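A simplified sketch of the list search with parity-based pruning is given below. It is a toy stand-in for Algorithm 1: the path metric, the per-bit parity callback, and all names are assumptions, but it shows how paths failing the check are dropped before the list is truncated to size S.

```python
import heapq

def scl_with_parity_prune(n_bits, llrs, parity_check, S):
    """Toy SCL search: binary-tree path extension with a per-bit parity
    check (a stand-in for the bit-wise BPPC condition) and list-size-S
    pruning. parity_check(prefix) -> bool; returns the best surviving path.
    """
    paths = [((), 0.0)]                        # (bit prefix, path metric)
    for i in range(n_bits):
        cand = []
        for prefix, metric in paths:
            for b in (0, 1):
                new = prefix + (b,)
                # penalize the branch that disagrees with the LLR sign
                pen = abs(llrs[i]) if (llrs[i] >= 0) == bool(b) else 0.0
                if parity_check(new):          # bit-wise check: drop failing paths
                    cand.append((new, metric + pen))
        paths = heapq.nsmallest(S, cand, key=lambda p: p[1])
    return min(paths, key=lambda p: p[1])[0]
```

With an even-parity constraint on the full word, the decoder flips the least reliable bit of the otherwise-preferred path to satisfy the check.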
Decoding complexity: The proposed decoder introduces additional decoding complexity from the BPPC task on top of the SCL decoding complexity of O(SNL log NL) for a list size S. The decoder may perform the parity check with a complexity of O(Nℓ log Nℓ) for each layer ℓ∈{1, 2, . . . , L−1}. Consequently, the overall complexity is the sum of the complexity required for SCL decoding and that of the inverse of the pre-transformation.
The additional complexity introduced by the inverse of the pre-transformation may be ignored when NL is significantly larger than Nℓ for ℓ∈[L−1].
Also, as illustrated in
Low-latency decoding plays an important role in supporting ultra-reliable and low-latency communications (URLLC) applications. In this sub-section, a method of achieving low-latency decoding of deep polar codes is presented. As illustrated in
Based on the identity K1+K2+ . . . +KL−1=K−KL, it may be inferred that there are 2K−KL potential patterns for the connection bits of the last layer.
The parallel-SCL decoding method may reduce the decoding latency by harnessing the power of parallel processing. However, the hardware complexity of this method may increase exponentially with the number of information bits encoded in the lower layers, Kℓ for ℓ∈{1, 2, . . . , L−1}, i.e., with K−KL. Therefore, the practical use of this method is limited to scenarios in which the value of K−KL is small enough.
The GRAND-aided backpropagation parity check decoder may be implemented as illustrated in
The decoder estimates xi for i∈[NL] as follows.
If decoding fails, that is, if the syndrome is nonzero, the decoder estimates a channel input vector {circumflex over (x)} using a general-purpose method that includes belief propagation (BP) and guessing random additive noise decoding (GRAND) methods.
If {circumflex over (x)} is successfully decoded, the decoder generates an input vector of layer L through reverse encoding.
[ûL,AL, d̂L, 0FL]={circumflex over (x)}(TL)−1. <Equation 40>
The decoder obtains the information bits d̂L of layer L from the reverse encoding. Then, the estimated connection bits are propagated as an input for the (L−1)-th decoding operation.
(L−1)-th decoding operation: Similar to the previous decoding operation, the decoder applies a syndrome check in layer L−1 using the connection bits obtained from the previous operation.
(v̂L−1(TL−1)−1)FL−1=0. <Equation 41>
If the syndrome check fails, the decoder generates another estimate of v̂L−1 by adding a possible noise pattern until the syndrome check of Equation 41 succeeds.
Unlike the existing GRAND method, computing the most likely noise pattern in layer L−1 is not straightforward since the channel reliability is known only for {circumflex over (x)}i detected in layer L.
From the reverse encoding of layer L, the connection bits are obtained as follows.
ûL,AL=({circumflex over (x)}(TL)−1)AL. <Equation 43>
uL,i for i∈AL may be given by a sum of {circumflex over (x)}j for j in the support set of the i-th column vector of (TL)−1. Therefore, the reliability for uL,i is computed as follows.
Noise patterns are sequentially added in ascending order of the reliability values for uL,i, that is, LLRL−1,i, so that more likely noise patterns are tested first.
If the syndrome check is successfully completed, the decoder obtains estimates of the connection and information bits in layer L−1 using the inverse matrix TL−1−1 of TL−1, as follows.
ûL−1,AL−1=(v̂L−1TL−1−1)AL−1, <Equation 45>
d̂L−1=(v̂L−1TL−1−1)IL−1. <Equation 46>
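The reliability-ordered guessing loop may be sketched as follows. The syndrome callback and the pattern ordering are simplifications of the scheme above: patterns are enumerated by increasing Hamming weight and, within each weight, by increasing total |LLR| mass of the flipped positions.

```python
import itertools
import numpy as np

def grand_syndrome_decode(v_hat, llrs, syndrome_ok, max_weight=3):
    """GRAND-style sketch: flip candidate noise patterns, least reliable
    positions first, until the layer syndrome check passes.
    syndrome_ok(word) -> bool plays the role of the parity condition
    of Equation 41; returns the corrected word or None.
    """
    order = np.argsort(np.abs(llrs))          # least reliable positions first
    candidates = [()]
    for w in range(1, max_weight + 1):
        pats = itertools.combinations(order, w)
        candidates += sorted(pats, key=lambda p: sum(abs(llrs[i]) for i in p))
    for pat in candidates:
        trial = v_hat.copy()
        trial[list(pat)] ^= 1                 # add the guessed noise pattern
        if syndrome_ok(trial):
            return trial
    return None
```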
Decoding operations down to layer 1: The BPPC decoder recursively propagates the estimated connection bits to the preceding layers while performing a sequential syndrome check for each layer. Then, the decoder obtains the information bits of each layer until the first layer is reached.
The BPPC decoder reverses the encoding process of deep polar codes. The polar transformation matrix may be selected as the encoding matrix of all layers, in detail, TL=GNL and Tℓ=GNℓT for the remaining layers.
The apparatuses described herein may be implemented using hardware components, software components, and/or combination thereof. For example, the apparatuses and components described herein may be implemented using one or more general-purpose or special purpose computers, such as, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable array (FPA), a programmable logic unit (PLU), a microprocessor, or any other device capable of responding to and executing instructions in a defined manner. A processing device may run an operating system (OS) and one or more software applications that run on the OS. The processing device also may access, store, manipulate, process, and create data in response to execution of the software. For purpose of simplicity, the description of a processing device is used as singular; however, one skilled in the art will appreciate that the processing device may include multiple processing elements and/or multiple types of processing elements. For example, the processing device may include multiple processors or a processor and a controller. In addition, different processing configurations are possible, such as parallel processors.
The software may include a computer program, a piece of code, an instruction, or some combinations thereof, for independently or collectively instructing or configuring the processing device to operate as desired. Software and/or data may be embodied in any type of machine, component, physical equipment, virtual equipment, computer storage medium or device, or in a propagated signal wave capable of providing instructions or data to or being interpreted by the processing device. The software also may be distributed over network coupled computer systems so that the software is stored and executed in a distributed fashion. The software and data may be stored by one or more computer readable storage mediums.
The methods according to the example embodiments may be recorded in non-transitory computer-readable media including program instructions to implement various operations embodied by a computer. Also, the media may include, alone or in combination with the program instructions, data files, data structures, and the like. Program instructions stored in the media may be those specially designed and constructed for the example embodiments, or they may be well known and available to those having skill in the computer software arts. Examples of the media include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROM disks and DVDs; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like. Examples of program instructions include both machine code, such as that produced by a compiler, and files containing higher-level code that may be executed by the computer using an interpreter.
While this disclosure includes specific example embodiments, it will be apparent to one of ordinary skill in the art that various alterations and modifications in form and details may be made in these example embodiments without departing from the spirit and scope of the claims and their equivalents. For example, suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents. Therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.
Number | Date | Country | Kind |
---|---|---|---|
10-2022-0167300 | Dec 2022 | KR | national |
10-2023-0065479 | May 2023 | KR | national |
10-2023-0093191 | Jul 2023 | KR | national |
10-2023-0110701 | Aug 2023 | KR | national |