METHOD FOR NON-LINEAR DISTORTION IMMUNE END-TO-END LEARNING WITH AUTOENCODER - OFDM

Information

  • Patent Application
  • Publication Number
    20210111936
  • Date Filed
    October 08, 2020
  • Date Published
    April 15, 2021
Abstract
A new layer tailored for Artificial Intelligence-based communication systems limits the instantaneous peak power of transmitted signals and relies on the manipulation of complementary sequences through neural networks. Disclosed is a method for avoiding non-linear distortion in end-to-end learning communication systems, the communication system comprising a transmitter and a receiver. The method includes mapping transmitted information bits to an input of a first neural network; controlling, by an output of the neural network, parameters of a complementary sequence (CS) encoder, producing an encoded CS; transmitting the encoded CS through an orthogonal frequency division multiplexing (OFDM) signal; processing, by a Discrete Fourier Transform (DFT), the encoded CS to produce a received information signal in a frequency domain; and processing, by a second neural network, the received information signal.
Description
BACKGROUND OF THE INVENTION
Field of the Invention

The present invention relates to a new layer tailored for Artificial Intelligence-based communication systems that limits the instantaneous peak power of transmitted signals and relies on the manipulation of complementary sequences through neural networks.


Description of Related Art

Traditional end-to-end learning (e.g., auto-encoder orthogonal frequency division multiplexing (AE-OFDM)) methods do not provide a permanent solution for the peak-to-average power ratio (PAPR) without a rigorous training/optimization procedure, which may potentially increase the training complexity in practice. Solutions that control instantaneous power fluctuations are needed for artificial intelligence (AI) based transmitters and receivers to decrease the training complexity. Accordingly, it is an object of the present invention to provide a new layer tailored for AI-based communication systems that limits the instantaneous peak power of the signals and relies on the manipulation of complementary sequences through neural networks. The current disclosure also shows how to stabilize the mean power and peak power by using the algebraic representation of the complementary sequences.


SUMMARY OF THE INVENTION

In one aspect of the present disclosure, a method for avoiding non-linear distortion in end-to-end learning communication systems, is provided, the communication system comprising a transmitter and a receiver. The method includes mapping transmitted information bits to an input of a first neural network; controlling, by an output of the neural network, parameters of a complementary sequence (CS) encoder, producing an encoded CS; transmitting the encoded CS through an orthogonal frequency division multiplexing (OFDM) signal; processing, by Discrete Fourier Transform (DFT), the encoded CS in a frequency domain, to produce a received information signal; and processing, by a second neural network, the received information signal.


In one embodiment of this aspect, the CS encoder comprises an amplitude encoder, a phase encoder, and a shift encoder. In another embodiment, mapping the transmitted information bits to an input of a first neural network further comprises manually tuning the shift encoder to adjust a position of non-zero elements of the CS; tuning the amplitude encoder and the phase encoder using the first neural network to produce tuned parameters; and mapping the information bits to the tuned parameters.


In another embodiment, the encoded CS is processed by multiple layers at the transmitter, the layers including at least a Golay layer, the method further comprising: controlling only the amplitude encoder and the phase encoder of the Golay layer; and forming an autoencoder which captures transmitter, channel, and receiver behaviors. In another embodiment, sets for the amplitude encoder and phase encoder are predetermined and offset by the first neural network in order to be able to transmit a large number of information bits.


In another embodiment, the receiver further comprises a decoder, the method further comprising: subtracting, by the second neural network, the offsets from the received information signal to produce a remaining information signal; and decoding, by the decoder, the remaining information signal. In another embodiment, the layers further include a clipping layer configured to limit the amplitude of the information signal. In another embodiment, the layers further include a Polar-to-Cartesian layer configured to convert the coordinate system from Polar coordinates to a Cartesian coordinate system.


In another aspect of the present disclosure, an end-to-end learning communication system for avoiding non-linear distortion is provided. The system includes: a transmitter implemented by processing circuitry, the processing circuitry comprising a processor and a memory containing instructions executable by the processor, the processor of the transmitter configured to: map transmitted information bits to an input of a first neural network; control, by an output of the neural network, parameters of a complementary sequence (CS) encoder, producing an encoded CS; and transmit the encoded CS through an orthogonal frequency division multiplexing (OFDM) signal. The system also including a receiver implemented by processing circuitry, the processing circuitry comprising a processor and a memory containing instructions executable by the processor, the processor of the receiver configured to: process, by Discrete Fourier Transform (DFT), the encoded CS in a frequency domain, to produce a received information signal; and process, by a second neural network, the received information signal.


In one embodiment of this aspect, the CS encoder comprises an amplitude encoder, a phase encoder, and a shift encoder. In another embodiment, mapping the transmitted information bits to an input of a first neural network further comprises: manually tuning, by the processor of the transmitter, the shift encoder to adjust a position of non-zero elements of the CS; tuning, by the processor of the transmitter, the amplitude encoder and the phase encoder using the first neural network to produce tuned parameters; and mapping, by the processor of the transmitter, the information bits to the tuned parameters.


In another embodiment, the encoded CS is processed by multiple layers at the transmitter, the layers including at least a Golay layer, the processor of the transmitter further configured to control only the amplitude encoder and the phase encoder of the Golay layer; and form an autoencoder which captures transmitter, channel, and receiver behaviors. In another embodiment, sets for the amplitude encoder and phase encoder are predetermined and offset by the first neural network in order to be able to transmit a large number of information bits. In another embodiment, the processor of the receiver is further configured to: subtract, by the second neural network, the offsets from the received information signal to produce a remaining information signal; and decode, by the decoder, the remaining information signal. In another embodiment, the layers further include a clipping layer configured to limit the amplitude of the information signal. In another embodiment, the layers further include a Polar-to-Cartesian layer configured to convert the coordinate system from Polar coordinates to a Cartesian coordinate system.





BRIEF DESCRIPTION OF THE DRAWINGS

The construction designed to carry out the invention will hereinafter be described, together with other features thereof. The invention will be more readily understood from a reading of the following specification and by reference to the accompanying drawings forming a part thereof, wherein an example of the invention is shown and wherein



FIG. 1 illustrates an exemplary communications system in accordance with embodiments of the present disclosure;



FIG. 2 illustrates an exemplary communications device in accordance with embodiments of the present disclosure;



FIG. 3 shows tuning the CS encoder for OFDM with a DNN and demodulating with another DNN;



FIG. 4 shows a transmitter diagram for real-valued output of the shift encoder;



FIG. 5 shows controlling only amplitude and phase encoders of Golay layer with a DNN over an OFDM transmission/reception and forming an autoencoder which captures transmitter, channel, and receiver behaviors;



FIG. 6 shows block error rate, bit error rate, and spectral efficiency of the AI-based learning with Golay layer;



FIG. 7 shows a PAPR comparison;



FIG. 8 shows the distribution of the elements of the sequence on different subcarriers (i.e., learned constellation per subcarrier); and



FIG. 9 shows Table 1—Layer Information at the Transmitter, Channel, and Receiver.





It will be understood by those skilled in the art that one or more aspects of this invention can meet certain objectives, while one or more other aspects can meet certain other objectives. Each objective may not apply equally, in all its respects, to every aspect of this invention. As such, the preceding objects can be viewed in the alternative with respect to any one aspect of this invention. These and other objects and features of the invention will become more fully apparent when the following detailed description is read in conjunction with the accompanying figures and examples. However, it is to be understood that both the foregoing summary of the invention and the following detailed description are of a preferred embodiment and not restrictive of the invention or other alternate embodiments of the invention. In particular, while the invention is described herein with reference to a number of specific embodiments, it will be appreciated that the description is illustrative of the invention and is not to be construed as limiting of the invention. Various modifications and applications may occur to those who are skilled in the art, without departing from the spirit and the scope of the invention, as described by the appended claims. Likewise, other objects, features, benefits and advantages of the present invention will be apparent from this summary and certain embodiments described below, and will be readily apparent to those skilled in the art. Such objects, features, benefits and advantages will be apparent from the above in conjunction with the accompanying examples, data, figures and all reasonable inferences to be drawn therefrom, alone or with consideration of the references incorporated herein.


DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT

With reference to the drawings, the invention will now be described in more detail. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood to one of ordinary skill in the art to which the presently disclosed subject matter belongs. Although any methods, devices, and materials similar or equivalent to those described herein can be used in the practice or testing of the presently disclosed subject matter, representative methods, devices, and materials are herein described.


Unless specifically stated, terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Likewise, a group of items linked with the conjunction “and” should not be read as requiring that each and every one of those items be present in the grouping, but rather should be read as “and/or” unless expressly stated otherwise. Similarly, a group of items linked with the conjunction “or” should not be read as requiring mutual exclusivity among that group, but rather should also be read as “and/or” unless expressly stated otherwise.


Furthermore, although items, elements or components of the disclosure may be described or claimed in the singular, the plural is contemplated to be within the scope thereof unless limitation to the singular is explicitly stated. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.


In view of the apparatuses and methods further disclosed herein, exemplary embodiments may be implemented in the context of a communications system 10 as shown in FIG. 1. The communications system 10 may be a complex system of intermediate devices that support communications between communications device 100 and communications device 200, or the communications device 100 and communications device 200 may have a direct link 150, as shown in FIG. 1. In either case, the communications devices 100 and 200 may be configured to support wireless communications. In the context of this disclosure, communications device 100 may be a transmitter and communications device 200 may be a receiver. Thus, below, reference designator “100” will be used interchangeably to identify a communication device and sometimes be used to identify the transmitter while reference designator “200” will be used to identify a communication device and sometimes be used to identify the receiver.


In this regard, the system 10 may include any number of communications devices, including communications devices 100 and 200. Although not shown, the communications devices may be physically coupled to a stationary unit (e.g., a base station or the like) or a mobile unit (e.g., a mobile terminal such as a cellular phone, a vehicle such as an aerial vehicle, a smart device with IoT capabilities, or the like).


The communications device 100 may comprise, among other components, processing circuitry 101, a radio 110, and an antenna 115. As further described below, the processing circuitry 101 may be configured to control the radio 110 to transmit and receive wireless communications via the antenna 115. In this regard, a wireless communications link 150 may be established between the antenna 115 and the antenna 215 of the communications device 200. Similarly, the communications device 200 may comprise, among other components, processing circuitry 201, a radio 210, and the antenna 215. The processing circuitry 201 may be configured the same as or similar to the processing circuitry 101, and thus may be configured to control the radio 210 to transmit and receive wireless communications via the antenna 215.


In this regard, FIG. 2 shows a more detailed version of the communications device 100, and in particular the processing circuitry 101. Communication device 100 may also be considered part of the transmitter of the present disclosure. Again, as shown in FIG. 2, the communications device 100 may comprise the processing circuitry 101, the radio 110, and the antenna 115. However, the link 150 is shown as being a communications link to communications device 200, or as a communications link to the network 120, which may be any type of wired or wireless communications network.


The processing circuitry 101 may be configured to receive inputs and provide outputs in association with the various functionalities of the communications device 100. In this regard, the processing circuitry 101 may comprise, for example, a memory 102, a processor 103, a user interface 104, and a communications interface 105. The processing circuitry 101 may be operably coupled to other components of the communications device 100 or other components of a device that comprises the communications device 100.


Further, according to some example embodiments, processing circuitry 101 may be in operative communication with or embody, the memory 102, the processor 103, the user interface 104, and the communications interface 105. Through configuration and operation of the memory 102, the processor 103, the user interface 104, and the communications interface 105, the processing circuitry 101 may be configurable to perform various operations as described herein. In this regard, the processing circuitry 101 may be configured to perform computational processing, memory management, user interface control and monitoring, and manage remote communications, according to an example embodiment. In other words, the processing circuitry 101 may comprise one or more physical packages (e.g., chips) including materials, components or wires on a structural assembly (e.g., a baseboard). The processing circuitry 101 may be configured to receive inputs (e.g., via peripheral components), perform actions based on the inputs, and generate outputs (e.g., for provision to peripheral components). In an example embodiment, the processing circuitry 101 may include one or more instances of a processor 103, associated circuitry, and memory 102. As such, the processing circuitry 101 may be embodied as a circuit chip (e.g., an integrated circuit chip, such as a field programmable gate array (FPGA)) configured (e.g., with hardware, software or a combination of hardware and software) to perform operations described herein.


In an example embodiment, the memory 102 may include one or more non-transitory memory devices such as, for example, volatile or non-volatile memory that may be either fixed or removable. The memory 102 may be configured to store information, data, applications, instructions or the like. The memory 102 may operate to buffer instructions and data during operation of the processing circuitry 101 to support higher-level functionalities, and may also be configured to store instructions for execution by the processing circuitry 101. The memory 102 may also store image data, equipment data, crew data, and a virtual layout as described herein. According to some example embodiments, such data may be generated based on other data and stored or the data may be retrieved via the communications interface 105 and stored.


As mentioned above, the processing circuitry 101 may be embodied in a number of different ways. For example, the processing circuitry 101 may be embodied as various processing means such as one or more processors 103 that may be in the form of a microprocessor or other processing element, a coprocessor, a controller or various other computing or processing devices including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA, or the like. In an example embodiment, the processing circuitry 101 may be configured to execute instructions stored in the memory 102 or otherwise accessible to the processing circuitry 101. As such, whether configured by hardware or by a combination of hardware and software, the processing circuitry 101 may represent an entity (e.g., physically embodied in circuitry—in the form of processing circuitry 101) capable of performing operations according to example embodiments while configured accordingly. Thus, for example, when the processing circuitry 101 is embodied as an ASIC, FPGA, or the like, the processing circuitry 101 may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, when the processing circuitry 101 is embodied as an executor of software instructions, the instructions may specifically configure the processing circuitry 101 to perform the operations described herein.


The communication interface 105 may include one or more interface mechanisms for enabling communication by controlling the radio 110 to generate the communications link 150. In some cases, the communication interface 105 may be any means such as a device or circuitry embodied in either hardware, or a combination of hardware and software, that is configured to receive or transmit data from/to devices in communication with the processing circuitry 101. The communications interface 105 may support wireless communications via the radio 110 using various communications protocols (802.11 Wi-Fi, Bluetooth, cellular, WLAN, 3GPP NR, or the like).


The user interface 104 may be controlled by the processing circuitry 101 to interact with peripheral devices that can receive inputs from a user or provide outputs to a user. In this regard, via the user interface 104, the processing circuitry 101 may be configured to provide control and output signals to a peripheral device such as, for example, a keyboard, a display (e.g., a touch screen display), mouse, microphone, speaker, or the like. The user interface 104 may also produce outputs, for example, as visual outputs on a display, audio outputs via a speaker, or the like.


The radio 110 may be any type of physical radio comprising radio components. For example, the radio 110 may include components such as a power amplifier, mixer, local oscillator, modulator/demodulator, and the like. The components of the radio 110 may be configured to operate in a plurality of spectral bands. Further, the radio 110 may be configured to receive signals from the processing circuitry 101 for transmission to the antenna 115. In some example embodiments, the radio 110 may be a software defined radio.


The antenna 115 may be any type of wireless communications antenna. The antenna 115 may be configured to transmit and receive at more than one frequency or band. In this regard, according to some example embodiments, the antenna 115 may be an array of antennas that may be configured by the radio 110 to support various types of wireless communications as described herein.


AI-based communication systems utilize machine learning modules (e.g., deep learning) to replace the functionality of the highly engineered blocks (e.g., coding, modulation, waveform, etc.) in the physical layer of communication systems. However, machine learning methods from the field of computer vision may not be directly applied to communication systems, as communication systems introduce different challenges. One of the challenges related to communication systems is high instantaneous power fluctuation. Although there are some methods in the literature that limit the instantaneous peak power, these methods may require further training to overcome the non-linearities. In this invention, we solve the instantaneous peak power problem of AI-based communication systems by introducing a new layer, i.e., the Golay layer.


In this document, we disclose a new layer (Golay layer) tailored for AI-based communication systems to limit the instantaneous peak power of the signals. The disclosed methods can also be applied to communication devices that operate under power-limited link budgets while autonomously decreasing the error rate for auto-encoder orthogonal frequency division multiplexing. The invention may be part of a wireless standard that allows AI-based communication systems. The disclosed method may also decrease the training complexity. The embodiment relies on the manipulation of complementary sequences through neural networks. We disclose how to stabilize the mean power and peak power by using the algebraic representation of the complementary sequences. The introduced layer may be used with several other basic layers, such as the clipping layer and the Polar-to-Cartesian layer 510, as the Golay layer 505 operates in polar coordinates. By changing the parameters of the Golay layer 505, it can also allow constant-amplitude sequences in the frequency domain.


Motivation and Problem Statement

Traditional end-to-end learning (e.g., auto-encoder orthogonal frequency division multiplexing (AE-OFDM)) methods do not provide a permanent solution for PAPR without a rigorous training/optimization procedure, which may potentially increase the training complexity in practice. Solutions that control instantaneous power fluctuations are needed for artificial intelligence (AI) based transmitter and receivers to decrease the training complexity.


Sequences and Waveforms

The polynomial representation of the sequence a=(a0, a1, . . . , aN-1) is given by






p_a(z) = a_{N−1} z^{N−1} + a_{N−2} z^{N−2} + … + a_0   (1)


Based on the polynomial representation, the following interpretations can be made:

    • If z ∈ {e^{j2πt/T} | 0 ≤ t < T}, p_a(z) is equivalent to an OFDM signal in time, where T is the OFDM symbol duration and the frequency-domain coefficients are the elements of a, with a_0 mapped to the DC tone.

    • If z ∈ {e^{j2πt/T} | 0 ≤ t < T}, the instantaneous power of an OFDM symbol can be calculated as |p_a(z)|^2 = p_a(z) p_a^*(z^{−1}), since p_a^*(z^{−1}) = (p_a(z))^*. Thus, the peak-to-average power ratio (PAPR) of p_a(z) can be obtained by evaluating |p_a(z)|^2 within one period of z = e^{j2πt/T}, where t ∈ [0, T).


Representation of a Sequence

Let f be a function that maps from ℤ_2^m = {(x_1, x_2, …, x_m) | x_i ∈ {0,1}} to ℤ_H as

f(x_1, x_2, …, x_m): ℤ_2^m → ℤ_H.   (2)

We associate a sequence f of length 2^m with the function f(x_1, x_2, …, x_m) by listing its values as (x_1, x_2, …, x_m) ranges over its 2^m values in lexicographic order. In other words, the (x+1)th element of the sequence f is equal to f(x_1, x_2, …, x_m), where x = Σ_{j=1}^{m} x_j 2^{m−j} (i.e., the most significant bit is x_1). The notations x and f(x) denote (x_1, x_2, …, x_m) and f(x_1, x_2, …, x_m), respectively. Note that for H = 2 (i.e., over ℤ_2), f(x) is a Boolean function; over ℤ_H in general, f(x) is called a generalized Boolean function.


Algebraic Normal Form (ANF)

A generalized Boolean function can be uniquely expressed as a linear combination over ℤ_H of the monomials as

f(x_1, x_2, …, x_m) = f(x) = Σ_{k=0}^{2^m−1} c_k Π_{j=1}^{m} x_j^{k_j} = c_0·1 + c_1·x_m + c_2·x_{m−1} + c_3·x_{m−1}x_m + … + c_{2^m−1}·x_1x_2⋯x_m   (3)

where the coefficient of the kth monomial belongs to ℤ_H, i.e., c_k ∈ ℤ_H, k = Σ_{j=1}^{m} k_j 2^{m−j}, and x_j ∈ ℤ_2. Note that the monomials, e.g., 1, x_1, x_2, x_1x_2, …, and x_1x_2⋯x_m, are linearly independent. Linear independence can be proven by using the definition of linear independence, i.e., Σ_i a_i x_i = 0 for all x if and only if a_i = 0 for all i.


For example, let m=3 and H=4. Then










f(x_1, x_2, x_3) = c_0 x_1^0 x_2^0 x_3^0 + c_1 x_1^0 x_2^0 x_3^1 + c_2 x_1^0 x_2^1 x_3^0 + c_3 x_1^0 x_2^1 x_3^1 + c_4 x_1^1 x_2^0 x_3^0 + c_5 x_1^1 x_2^0 x_3^1 + c_6 x_1^1 x_2^1 x_3^0 + c_7 x_1^1 x_2^1 x_3^1.   (4)

Assume that c_0 = 3 and c_5 = 2 and the other c_n = 0 for n = 1, 2, 3, 4, 6, 7. Then,






f(x_1, x_2, x_3) = 3 + 2 x_1 x_3   (5)


As described herein, we associate a sequence f of length 2^m with the function f(x_1, x_2, …, x_m) by listing its values as (x_1, x_2, …, x_m) ranges over its 2^m values in lexicographic order. In other words, the (x+1)th element of the sequence f is equal to f(x_1, x_2, …, x_m), where x = Σ_{j=1}^{m} x_j 2^{m−j} (i.e., the most significant bit is x_1):






f(x_1=0, x_2=0, x_3=0) = 3 + 2x_1x_3 = 3 mod 4 = 3
f(x_1=0, x_2=0, x_3=1) = 3 + 2x_1x_3 = 3 mod 4 = 3
f(x_1=0, x_2=1, x_3=0) = 3 + 2x_1x_3 = 3 mod 4 = 3
f(x_1=0, x_2=1, x_3=1) = 3 + 2x_1x_3 = 3 mod 4 = 3
f(x_1=1, x_2=0, x_3=0) = 3 + 2x_1x_3 = 3 mod 4 = 3
f(x_1=1, x_2=0, x_3=1) = 3 + 2x_1x_3 = 5 mod 4 = 1
f(x_1=1, x_2=1, x_3=0) = 3 + 2x_1x_3 = 3 mod 4 = 3
f(x_1=1, x_2=1, x_3=1) = 3 + 2x_1x_3 = 5 mod 4 = 1


Therefore, f(x1, x2, x3)=3+2x1x3 leads to a sequence of f=(3,3,3,3,3,1,3,1).
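To make the listing convention concrete, the following is a minimal Python sketch (illustrative only; the function name anf_sequence is ours, not the patent's) that evaluates an ANF over ℤ_H in lexicographic order and reproduces the sequence f = (3,3,3,3,3,1,3,1) from Equation (5):

    def anf_sequence(coeffs, m, H):
        """List f(x_1, ..., x_m) = sum_k c_k * prod_j x_j^{k_j} (mod H) as
        (x_1, ..., x_m) ranges over its 2^m values in lexicographic order."""
        seq = []
        for x in range(2 ** m):
            bits = [(x >> (m - 1 - j)) & 1 for j in range(m)]   # (x_1, ..., x_m)
            val = 0
            for k, c in enumerate(coeffs):
                kbits = [(k >> (m - 1 - j)) & 1 for j in range(m)]
                term = c
                for xj, kj in zip(bits, kbits):
                    term *= xj ** kj                            # 0**0 == 1 in Python
                val += term
            seq.append(val % H)
        return seq

    # Eq. (5): m = 3, H = 4, c_0 = 3, c_5 = 2 (monomial x_1*x_3), others 0
    coeffs = [0] * 8
    coeffs[0], coeffs[5] = 3, 2
    print(anf_sequence(coeffs, m=3, H=4))   # [3, 3, 3, 3, 3, 1, 3, 1]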


All the possible monomials construct a basis for the generalized Boolean functions. Since there are 2^m monomials for a given m, there are H^{2^m} different generalized Boolean functions, each of which is a mapping from ℤ_2^m to ℤ_H.


If f(x_1, x_2, …, x_m) is over ℝ, the coefficient of each monomial belongs to ℝ, i.e., c_k ∈ ℝ; the monomials construct a vector space over ℝ, and the dimensionality of the space is 2^m. Therefore, different sets of {c_k | k = 0, …, 2^m−1} lead to different sequences.


Aperiodic Auto Correlation (APAC) of a Sequence

Let ρ_a(k) be the aperiodic autocorrelation of a complex sequence a of length N, expressed as

ρ_a(k) ≜ ρ_a^+(k) for k ≥ 0, and ρ_a(k) ≜ (ρ_a^+(−k))^* for k < 0,   (6)

where

ρ_a^+(k) ≜ Σ_{i=0}^{N−k−1} a_i^* a_{i+k} for 0 ≤ k ≤ N−1, and 0 otherwise.   (7)

Complementary Sequences

The pair of (a, b) is called a Golay complementary pair (GCP) if





ρ_a(k) + ρ_b(k) = 0, k ≠ 0.   (8)


The sequence a=(a0, a1, . . . , aN-1) is defined as a complementary sequence (CS) if there exists another sequence b=(b0, b1, . . . , bN-1) which complements a as ρa(k)+ρb(k)=0, k≠0.


It has been shown that the PAPR of a CS is less than or equal to 3 dB.
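As a concrete check of Equations (6)-(8) and the 3 dB bound, the following NumPy sketch (illustrative, not part of the patent) computes the aperiodic autocorrelation of the length-2 GCP a = (1, 1), b = (1, −1) and the PAPR of the OFDM symbol carrying a:

    import numpy as np

    def apac(a):
        """Aperiodic autocorrelation rho_a(k) for k = 0, ..., N-1, per Eqs. (6)-(7)."""
        N = len(a)
        return np.array([np.sum(np.conj(a[:N - k]) * a[k:]) for k in range(N)])

    a = np.array([1, 1], dtype=complex)
    b = np.array([1, -1], dtype=complex)
    print(apac(a) + apac(b))            # [4.+0.j 0.+0.j]: sidelobes cancel, Eq. (8)

    def papr_db(seq, oversample=16):
        """PAPR of the OFDM symbol whose frequency-domain coefficients are seq."""
        p = np.abs(np.fft.ifft(seq, n=oversample * len(seq))) ** 2
        return 10 * np.log10(p.max() / p.mean())

    print(papr_db(a))                   # ~3.01 dB: a is a CS, so its PAPR <= 3 dB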


Complementary Sequence Encoder

The following theorem for constructing complementary sequences is provided:


Theorem: Let π denote any permutation of {1, 2, …, m}, let (a, b) be a Golay complementary pair (GCP) of length N, and calculate

f_o(x, z) = p_a(z)(1 − x_{π(1)}) + p_b(z) x_{π(1)}   (9)

f_r(x) = e_0 + e_m x_{π(m)} + Σ_{l=1}^{m−1} e_l (x_{π(l)} + x_{π(l+1)})   (10)

f_i(x) = k_0 + Σ_{n=1}^{m} k_n x_{π(n)} + (H/2) Σ_{l=1}^{m−1} x_{π(l)} x_{π(l+1)}   (11)

f_s(x) = d_0 + Σ_{n=1}^{m} d_n x_{π(n)}   (12)

where x = (x_1, x_2, …, x_m) and x = Σ_{j=1}^{m} x_j 2^{m−j} for x_j ∈ ℤ_2, e_n ∈ ℝ, k_n ∈ [0, H), and d_n ∈ ℝ for n = 0, 1, …, m. Then, the sequence c whose polynomial representation is given by

p_c(z) = Σ_{x=0}^{2^m−1} f_o(x, z) × e^{(2π/H)(f_r(x) + j f_i(x))} × z^{f_s(x)+xN}   (13)

is a complementary sequence (CS).


The polynomial p_c(z) forms an OFDM symbol for z = e^{j2πt/T} and limits the peak-to-average power ratio to be less than or equal to 2 (i.e., approximately 3 dB), as the sequence c is a CS. On the other hand, there is no teaching of how to choose the parameters, i.e., e_n, k_n, d_n, based on information bits, and the demonstrations are provided for random bit mappings.
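A minimal NumPy sketch of the encoder in Equations (9)-(13) follows. It is illustrative only: it assumes an identity permutation, the seed GCP a = (1, 1), b = (1, −1), arbitrarily chosen e_n and k_n, and d_n = 0 so that the shifted blocks tile without overlap. It reports the PAPR, which the theorem predicts stays at or below roughly 3 dB.

    import numpy as np

    m, H, N = 3, 4, 2
    a = np.array([1, 1], dtype=complex)                 # seed GCP
    b = np.array([1, -1], dtype=complex)
    e = [0.0, 0.2, -0.1, 0.3]                           # e_0..e_m (illustrative)
    k = [1.0, 3.0, 0.0, 2.0]                            # k_0..k_m in [0, H)
    d = [0, 0, 0, 0]                                    # d_0..d_m

    def f_r(bits):                                      # Eq. (10), pi = identity
        return e[0] + e[m] * bits[m - 1] + sum(
            e[l] * (bits[l - 1] + bits[l]) for l in range(1, m))

    def f_i(bits):                                      # Eq. (11)
        return (k[0] + sum(k[n] * bits[n - 1] for n in range(1, m + 1))
                + (H / 2) * sum(bits[l - 1] * bits[l] for l in range(1, m)))

    def f_s(bits):                                      # Eq. (12)
        return d[0] + sum(d[n] * bits[n - 1] for n in range(1, m + 1))

    c = np.zeros(N * 2 ** m, dtype=complex)             # superposition per Eq. (13)
    for x in range(2 ** m):
        bits = [(x >> (m - 1 - j)) & 1 for j in range(m)]
        seed = b if bits[0] else a                      # f_o(x, z), Eq. (9)
        w = np.exp((2 * np.pi / H) * (f_r(bits) + 1j * f_i(bits)))
        start = int(f_s(bits)) + x * N
        c[start:start + N] += w * seed

    p = np.abs(np.fft.ifft(c, n=16 * len(c))) ** 2
    print(10 * np.log10(p.max() / p.mean()))            # theorem predicts <= ~3 dB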


End-to-End Learning with Deep Learning
Autoencoder

Autoencoders are a specific type of neural network in which the inputs are mapped to themselves, i.e., the network is trained to approximate the identity operation. Typically, an autoencoder contains a central layer containing fewer nodes than inputs. It can be considered as two half-networks: the encoder and the decoder map either to or from a reduced set of feature variables embodied in the central-layer nodes. Autoencoders provide an intuitive approach to non-linear dimensionality reduction, i.e., non-linear principal component analysis (PCA).


From the communication point of view, the transmitter-channel-receiver chain can also be thought of as an autoencoder, since "the fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point." It has been shown that the transmitter can map the information bits to a higher-dimensional space through a neural network (e.g., a deep neural network (DNN) or convolutional neural network (CNN)), and the receiver 200 decodes the sequence in the higher-dimensional space by also using another network. The coefficients of the networks can be obtained through an offline or online training procedure. In the literature, it has been shown that autoencoders can also be combined with OFDM-based waveforms, called AE-OFDM.


Backpropagation Algorithm

Backpropagation (BP) is a method to calculate the gradients of the learnable parameters in a network with multiple layers for a given set of inputs and a loss function. After the loss is calculated for a given set of inputs, the gradients of the learnable parameters in each layer, typically in a stochastic way, are calculated starting from the last layer and propagated back to the input. After the gradients are calculated, the learnable parameters are updated based on various algorithmic methods, e.g., gradient descent.
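As a generic illustration of backpropagation (a toy example, not the patent's architecture), the following NumPy sketch trains a two-layer network by computing gradients layer by layer, from the loss back to the input, and applying gradient descent:

    import numpy as np

    rng = np.random.default_rng(1)
    W1 = rng.normal(size=(8, 4))
    W2 = rng.normal(size=(2, 8))
    x = rng.normal(size=(4, 1))
    target = rng.normal(size=(2, 1))

    for step in range(200):
        h = np.maximum(W1 @ x, 0)      # forward pass: ReLU hidden layer
        y = W2 @ h
        loss = 0.5 * np.sum((y - target) ** 2)
        dy = y - target                # backward pass starts at the last layer
        dW2 = dy @ h.T
        dh = W2.T @ dy
        dh[h <= 0] = 0                 # gradient gated by the ReLU, cf. Eq. (32)
        dW1 = dh @ x.T
        W1 -= 0.01 * dW1               # gradient-descent update
        W2 -= 0.01 * dW2

    print(float(loss))                 # final loss, driven toward 0 by the updates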


Prior-Art Deep Learning Layers

Let 𝒳 = {x^{(i)} | i = 1, …, N} be a minibatch. The output of a layer can be expressed as y^{(i)} = f(x^{(i)}; θ), where θ corresponds to the parameter vector. The loss function can be expressed as

J(θ) = (1/N) Σ_{i=1}^{N} L(y^{(i)}, x^{(i)}, θ)   (14)
where L(y^{(i)}, x^{(i)}, θ) is the per-example loss function. Because the loss function is additive, the gradient ∇_θ J(θ) can be calculated as

∇_θ J(θ) = (1/N) Σ_{i=1}^{N} ∇_θ L(y^{(i)}, x^{(i)}, θ).   (15)
Fully-Connected Layer

The fully-connected layer can be expressed as






y=f(x; W, b)=Wx+b  (16)


where W ∈ ℝ^{K×M} is a matrix containing the weights of a linear transformation and b ∈ ℝ^{K×1} is a bias vector. Let X = [x^{(1)} x^{(2)} … x^{(N)}] ∈ ℝ^{M×N} be a matrix for the minibatch and Y = [y^{(1)} y^{(2)} … y^{(N)}] ∈ ℝ^{K×N} be the output of the layer. The derivative of the loss with respect to the weight W_{ij} can be calculated as

∂J/∂W_{ij} = Σ_{n=1}^{N} Σ_{k=1}^{K} (∂y_k^{(n)}/∂W_{ij}) (∂J/∂y_k^{(n)}) = Σ_{n=1}^{N} x_j^{(n)} (∂J/∂y_i^{(n)}).   (17)

Therefore,

∂J/∂W = (∂J/∂Y) X^T.   (18)
The derivative of the loss with respect to b_i can be calculated as

∂J/∂b_i = Σ_{n=1}^{N} Σ_{k=1}^{K} (∂y_k^{(n)}/∂b_i) (∂J/∂y_k^{(n)}) = Σ_{n=1}^{N} ∂J/∂y_i^{(n)}.   (19)

Therefore,

∂J/∂b = (∂J/∂Y) 1_{N×1}.   (20)
The derivative of the loss with respect to x_k^{(n)} can be calculated as

∂J/∂x_k^{(n)} = Σ_{i=1}^{K} (∂y_i^{(n)}/∂x_k^{(n)}) (∂J/∂y_i^{(n)}) = Σ_{i=1}^{K} W_{ik} (∂J/∂y_i^{(n)}).   (21)

Therefore,

∂J/∂X = W^T (∂J/∂Y).   (22)
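The matrix identities in Equations (18), (20), and (22) can be checked numerically. In the NumPy sketch below (illustrative values only), the loss is taken as J = sum(G ⊙ Y) so that ∂J/∂Y = G:

    import numpy as np

    rng = np.random.default_rng(0)
    M, K, N = 4, 3, 5                      # input dim, output dim, minibatch size
    W = rng.normal(size=(K, M))
    b = rng.normal(size=(K, 1))
    X = rng.normal(size=(M, N))
    Y = W @ X + b                          # Eq. (16), applied column-wise

    G = rng.normal(size=(K, N))            # dJ/dY for the loss J = sum(G * Y)
    dW = G @ X.T                           # Eq. (18)
    db = G @ np.ones((N, 1))               # Eq. (20)
    dX = W.T @ G                           # Eq. (22)

    # Finite-difference check of Eq. (18) for one weight entry
    eps = 1e-6
    Wp = W.copy()
    Wp[0, 0] += eps
    num = (np.sum(G * (Wp @ X + b)) - np.sum(G * Y)) / eps
    print(np.isclose(num, dW[0, 0]))       # True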
Batchnorm Layer

Let X = [x^{(1)} x^{(2)} … x^{(N)}] ∈ ℝ^{M×N} and Y = [y^{(1)} y^{(2)} … y^{(N)}] ∈ ℝ^{M×N} be the input and output of the layer, respectively. For the ith row of X, the batchnorm layer can be defined as

μ_i = (1/N) Σ_{n=1}^{N} x_i^{(n)}   (23)

σ_i^2 = Σ_{n=1}^{N} (x_i^{(n)} − μ_i)^2   (24)

x̂_i^{(n)} = (x_i^{(n)} − μ_i)(σ_i^2 + ϵ)^{−1/2}   (25)

y_i^{(n)} = γ_i x̂_i^{(n)} + β_i   (26)
The derivative of the loss with respect to the weight β_i can be calculated as

∂J/∂β_i = Σ_{n=1}^{N} Σ_{k=1}^{K} (∂y_k^{(n)}/∂β_i) (∂J/∂y_k^{(n)}) = Σ_{n=1}^{N} ∂J/∂y_i^{(n)}.   (27)

Therefore,

∂J/∂β = (∂J/∂Y) 1_{N×1}.   (28)
The derivative of the loss with respect to the weight γ_i can be calculated as

∂J/∂γ_i = Σ_{n=1}^{N} Σ_{k=1}^{K} (∂y_k^{(n)}/∂γ_i) (∂J/∂y_k^{(n)}) = Σ_{n=1}^{N} x̂_i^{(n)} (∂J/∂y_i^{(n)}).   (29)

Therefore,

∂J/∂γ = ((∂J/∂Y) ⊙ X̂) 1_{N×1}   (30)
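A short NumPy sketch of the batchnorm forward pass, Equations (23)-(26), and the parameter gradients of Equations (28) and (30) follows; values are illustrative, the loss is again J = sum(G ⊙ Y), and Equation (24) is implemented as stated (without a 1/N factor):

    import numpy as np

    rng = np.random.default_rng(2)
    M, N, eps = 3, 6, 1e-5
    X = rng.normal(size=(M, N))
    gamma = rng.normal(size=(M, 1))
    beta = rng.normal(size=(M, 1))

    mu = X.mean(axis=1, keepdims=True)                   # Eq. (23)
    var = np.sum((X - mu) ** 2, axis=1, keepdims=True)   # Eq. (24), as stated
    Xhat = (X - mu) / np.sqrt(var + eps)                 # Eq. (25)
    Y = gamma * Xhat + beta                              # Eq. (26)

    G = rng.normal(size=(M, N))                          # dJ/dY for J = sum(G * Y)
    dbeta = G @ np.ones((N, 1))                          # Eq. (28)
    dgamma = (G * Xhat) @ np.ones((N, 1))                # Eq. (30)

    # Finite-difference check of Eq. (28) for the first row
    d = 1e-6
    beta_p = beta.copy()
    beta_p[0] += d
    Jp = np.sum(G * (gamma * Xhat + beta_p))
    print(np.isclose((Jp - np.sum(G * Y)) / d, dbeta[0, 0]))   # True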
ReLU

The ReLU layer can be expressed as






y=f(x)=max{x, 0}  (31)


The derivative of the loss with respect to x can be calculated as

∂J/∂x = ∂J/∂y for x > 0, and 0 for x < 0.   (32)

Non-Linear Distortion Immune End-to-End Learning for OFDM

In one embodiment, the transmit bits may be mapped to the input of a first neural network (e.g., DNN, CNN) 305 (shown in FIG. 3), the output of the neural network 305 may control the parameters of a CS encoder 300 (e.g., amplitude encoder 320, phase encoder 322, and shift encoder 324), and the encoded CS at the output of the CS encoder 300 may be transmitted through an OFDM symbol. FIG. 3 shows tuning the CS encoder 300 for OFDM with a first Deep Neural Network (DNN) 305 and demodulating with a second DNN 310. For example, as shown in FIG. 3, M transmit bits, e.g., information bits, may be processed by the first DNN 305. The DNN 305 may calculate e_n, k_n, d_n ∈ ℝ for n = 0, 1, …, m. The calculated parameters may be processed by the amplitude, phase, and shift encoders as
















f_r(x) = e_0 + e_m x_{π(m)} + Σ_{l=1}^{m−1} e_l (x_{π(l)} + x_{π(l+1)})   (33)

f_i(x) = k_0 + Σ_{l=1}^{m} k_l x_{π(l)}   (34)

f_s(x) = d_0 + Σ_{n=1}^{m} d_n x_{π(n)}   (35)
where x = (x_1, x_2, …, x_m) and x = Σ_{j=1}^{m} x_j 2^{m−j} for x_j ∈ ℤ_2, π denotes any permutation of {1, 2, …, m}, and (a, b) is a GCP of length N. Then, the OFDM waveform can be expressed as












p_c(t) = Σ_{x=0}^{2^m−1} f_o(x) × e^{jπ f_sign(x)} × e^{α f_r(x) + jβ f_i(x)} × e^{j(2πt/T)(f_s(x)+xN)}   (36)

where f_o(x) = p_a(e^{j2πt/T})(1 − x_{π(1)}) + p_b(e^{j2πt/T}) x_{π(1)} and f_sign(x) = Σ_{l=1}^{m−1} x_{π(l)} x_{π(l+1)}, and α and β are non-zero values. The resulting waveform may be transmitted through the radio chain. The parameters α and β scale the outputs of the encoders. For example, the impacts of the amplitude encoder 320 and the phase encoder 322 on the resulting waveform vanish for α = 0 and β = 0, respectively. These parameters may be controlled through a communication network or prescribed (e.g., in a wireless standard). In one specific embodiment, these parameters may also be learned through neural networks.


The waveform may be implemented through an IDFT operation, where the encoded CS, i.e., the sequence c whose elements are the coefficients of e^{j(2πt/T)k} for k ∈ ℤ, may be the sequence in the frequency domain. The shift operation shown in FIG. 3 may pad zeros to the beginning of the input sequence on each branch. The sequence on each branch may be generated through the ordering block, which yields either the sequence a or the sequence b. The sum operation in FIG. 3 may append zeros to each of the sequences to apply point-to-point summation by aligning the sequence lengths. The summed sequences yield the encoded sequence c.


At the receiver side, a DFT-based receiver 200 (e.g., an OFDM receiver) may be used. In one method, the received signal is processed by the DFT 315, and the resulting signal in the frequency domain may be processed by the second neural network DNN 310 to recover the transmitted bits.


The overall impact of the transmitter 100, channel, and receiver 200 may be represented as an autoencoder, and the learnable parameters in each layer at the transmitter 100 and receiver 200 may be learned by using a BP algorithm. The learning may be achieved through an offline or an online learning method. Training may be performed for AWGN or Rayleigh/Rician-like channels for offline training.


In some cases, it may be important to limit the mean power besides the PAPR. To normalize the signal power, the e_0 parameter of the amplitude encoder 320 of the CS encoder 300 may be chosen as a function of e_n for n = 1, 2, …, m as

e_0 = −(1/(2α)) Σ_{n=1}^{m} ln((1 + e^{2α e_n})/2).   (37)

Hence, the neural network may provide the m values e_n for n = 1, 2, …, m. Note that the variable e_n scales half of the coefficients by e^{α e_n}; therefore, the encoded CS power is scaled by (1 + e^{2α e_n})/2. Over all e_n for n = 1, 2, …, m, the total power factor can be calculated as γ = Π_{n=1}^{m} (1 + e^{2α e_n})/2. Therefore, to normalize the sequence power, e_0 should be selected such that e^{2α e_0} = 1/γ. As a result, e_0 = (1/(2α)) ln(1/γ) = −(1/(2α)) Σ_{n=1}^{m} ln((1 + e^{2α e_n})/2).
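As a small numerical illustration of Equation (37) (with arbitrary example values for α and e_n), the chosen e_0 makes the overall power factor e^{2αe_0}·γ equal to one:

    import numpy as np

    alpha = 1.0
    e = np.array([0.3, -0.2, 0.5, 0.1, 0.4])                 # illustrative e_1..e_m
    gamma = np.prod((1 + np.exp(2 * alpha * e)) / 2)         # total power factor
    e0 = -np.sum(np.log((1 + np.exp(2 * alpha * e)) / 2)) / (2 * alpha)   # Eq. (37)
    print(np.isclose(np.exp(2 * alpha * e0) * gamma, 1.0))   # True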
In case of a real-valued output for the shift encoder (e.g., non-integer values for D_0, D_1, …, D_{2^m−1}, where D_x denotes f_shift(x) for x = (x_1, x_2, …, x_m) and x = Σ_{j=1}^{m} x_j 2^{m−j} for x_j ∈ ℤ_2), the transmitter 100 may generate the waveform by sampling the summation of the polynomial outputs, as illustrated in FIG. 4, where the polynomials on each branch may be sampled at the rate f_s. FIG. 4 shows a transmitter diagram for the transmitter 100 for a real-valued output of the shift encoder 324.


Learning Amplitude and Phase Bit Mappings and Golay Layer

In one method, to reduce the training complexity, the shift encoder 324 may be controlled manually (e.g., either by a communication network or as prescribed) to adjust the position of the non-zero elements of the encoded sequence, while the amplitude encoder 320 and phase encoder 322 may be tuned with a neural network 305 and the information bits are mapped to the tuned parameters.


As exemplified in FIG. 5, the information bits may be processed by multiple layers (e.g., N_k layers at the transmitter 100). These layers may be a combination of prior-art layers such as softmax, ReLU, batchnorm, convolution layers, fully-connected layers, etc. The output of these layers, e.g., vectors, may then pass through a set of clipping layers 500 to avoid large numbers. The clipping layer 500, described herein, generates m inputs for the amplitude encoder and m+1 inputs for the phase encoder of the normalized Golay encoding layer 505 described herein. The transmitted waveform in continuous time may be calculated as











p_c(t) = Σ_{x=0}^{2^m−1} f_o(x) × e^{jπ f_sign(x)} × e^{α f_r(x) + jβ f_i(x)} × e^{j(2πt/T)(f_s(x)+xN)}   (38)

where f_o(x) = p_a(e^{j2πt/T})(1 − x_{π(1)}) + p_b(e^{j2πt/T}) x_{π(1)}, and f_sign(x) = Σ_{l=1}^{m−1} x_{π(l)} x_{π(l+1)} and f_s(x) = d_0 + Σ_{n=1}^{m} d_n x_{π(n)} are fixed. The shift encoder f_s(x) may be configured based on the resource allocation indicated by the network. A cyclic prefix may also be prepended to the transmitted signal. The transmitter 100 may use a DFT operation to implement p_c(t), as in an OFDM scheme. FIG. 5 shows controlling only the amplitude and phase encoders of the Golay layer 505 with a DNN over an OFDM transmission/reception and forming an autoencoder which captures transmitter 100, channel, and receiver 200 behaviors.


At the receiver side, an OFDM receiver 200 may be used. In one method, the received signal is processed by the DFT 315, and the resulting signal in the frequency domain is first processed by a matched filter obtained by exploiting the fixed f_o(x). The resulting sequence may be processed by a deep neural network DNN 310 to recover the transmitted bits; e.g., M_L layers may be utilized. The layers at the receiver side may be a combination of prior-art layers such as softmax, ReLU, batchnorm, convolution layers, fully-connected layers, etc.


The overall impact of the transmitter 100, channel, and receiver 200 may be represented as an autoencoder, as shown in FIG. 5, and the learnable parameters in each layer may be learned by using a BP algorithm. The learning may be achieved through an offline or an online learning method. A Polar-to-Cartesian layer 510, described herein, may also be used during the offline/online training.


Golay Layer

The output of a Golay layer 505 may be a set of sequences associated with f_r(x) and f_i(x) (i.e., the sequences generated by listing the values of the functions as x = (x_1, x_2, …, x_m) ranges over its 2^m values, where x = Σ_{j=1}^{m} x_j 2^{m−j}), given by
















f_r(x) = e_0 + e_m x_{π(m)} + Σ_{l=1}^{m−1} e_l (x_{π(l)} + x_{π(l+1)})   (39)

f_i(x) = k_0 + Σ_{l=1}^{m} k_l x_{π(l)}   (40)

where the inputs of a Golay layer 505 may be e_n ∈ ℝ and k_n ∈ [0, 2π/β) for n = 0, 1, …, m. In one embodiment, the parameter e_0 in the Golay layer 505 may be chosen as a function of e_n ∈ ℝ for n = 1, 2, …, m as

e_0 = −(1/(2α)) Σ_{n=1}^{m} ln((1 + e^{2α e_n})/2)   (41)

The derivative of f_r(x) with respect to e_n can be obtained as

df_r(x)/de_n = (x_{π(n)} + x_{π(n+1)}) − e^{2α e_n}/(1 + e^{2α e_n}) for n < m, and df_r(x)/de_n = x_{π(m)} − e^{2α e_n}/(1 + e^{2α e_n}) for n = m,   (42)

since de_0/de_n = −e^{2α e_n}/(1 + e^{2α e_n}). Similarly, the derivatives of f_i(x) with respect to k_n and k_0 can be calculated as

df_i(x)/dk_n = x_{π(n)}   (43)

and

df_i(x)/dk_0 = 1.   (44)
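The derivative in Equation (42) can be verified by finite differences. The sketch below (identity permutation; α and the e_n values are illustrative) recomputes e_0 via Equation (41) inside f_r so that the e_0 dependence is included:

    import numpy as np

    alpha, m = 1.0, 4
    rng = np.random.default_rng(3)
    e = rng.uniform(0.1, 0.5, size=m)                  # e_1..e_m, illustrative

    def f_r(e, bits):
        e0 = -np.sum(np.log((1 + np.exp(2 * alpha * e)) / 2)) / (2 * alpha)  # Eq. (41)
        return (e0 + e[m - 1] * bits[m - 1]
                + sum(e[l - 1] * (bits[l - 1] + bits[l]) for l in range(1, m)))

    bits = [1, 0, 1, 1]
    n = 2                                              # check df_r/de_n for n < m
    analytic = (bits[n - 1] + bits[n]) - np.exp(2 * alpha * e[n - 1]) / (
        1 + np.exp(2 * alpha * e[n - 1]))              # Eq. (42)
    d = 1e-7
    ep = e.copy()
    ep[n - 1] += d
    print(np.isclose((f_r(ep, bits) - f_r(e, bits)) / d, analytic))   # True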
The derivative of the loss with respect to e_k^{(n)} can be calculated as

∂J/∂e_k^{(n)} = Σ_{x=1}^{2^m} (∂f_r^{(n)}(x)/∂e_k^{(n)}) (∂J/∂f_r^{(n)}(x)) = Σ_{x=1}^{2^m} Λ_{xk} (∂J/∂f_r^{(n)}(x)).   (45)

The derivative of the loss with respect to k_k^{(n)} can be calculated as

∂J/∂k_k^{(n)} = Σ_{x=1}^{2^m} (∂f_i^{(n)}(x)/∂k_k^{(n)}) (∂J/∂f_i^{(n)}(x)).   (46)

The derivative of the loss with respect to k_0^{(n)} can be calculated as

∂J/∂k_0^{(n)} = Σ_{x=1}^{2^m} (∂f_i^{(n)}(x)/∂k_0^{(n)}) (∂J/∂f_i^{(n)}(x)) = Σ_{x=1}^{2^m} ∂J/∂f_i^{(n)}(x).   (47)

The derivative of the loss with respect to d_k^{(n)} can be calculated as

∂J/∂d_k^{(n)} = Σ_{x=1}^{2^m} (∂f_s^{(n)}(x)/∂d_k^{(n)}) (∂J/∂f_s^{(n)}(x)).   (48)

The derivative of the loss with respect to d_0^{(n)} can be calculated as

∂J/∂d_0^{(n)} = Σ_{x=1}^{2^m} (∂f_s^{(n)}(x)/∂d_0^{(n)}) (∂J/∂f_s^{(n)}(x)) = Σ_{x=1}^{2^m} ∂J/∂f_s^{(n)}(x).   (49)

Therefore,

∂J/∂e = Λ^T (∂J/∂Y)   (50)

∂J/∂k = Ω^T (∂J/∂Y)   (51)

∂J/∂d = S^T (∂J/∂Y)   (52)

where

Λ = [ (x_{π(1)} + x_{π(2)})  (x_{π(2)} + x_{π(3)})  …  (x_{π(m−1)} + x_{π(m)})  x_{π(m)} ] − e^{2α e_n}/(1 + e^{2α e_n})   (53)

Ω = S = [ 1  x_{π(1)}  x_{π(2)}  …  x_{π(m)} ]   (54)

and e = [e_1 e_2 … e_m], k = [k_0 k_1 … k_m], and d = [d_0 d_1 … d_m].

Clipping Layer

The clipping layer 500 can be expressed as






y=f(x)=min{max{x, r1}, r2}  (55)


The derivative of the loss with respect to x can be calculated as

∂J/∂x = 0 for x > r_2; ∂J/∂y for r_1 ≤ x ≤ r_2; and 0 for x < r_1.   (56)

Polar-to-Cartesian Layer





y_r = f(x, y) = ℜ{e^{αy} e^{jβx}} = e^{αy} cos(βx)   (57)

y_i = g(x, y) = ℑ{e^{αy} e^{jβx}} = e^{αy} sin(βx)   (58)


The derivatives of the loss with respect to x and y can be calculated as

∂J/∂x = (∂y_r/∂x)(∂J/∂y_r) + (∂y_i/∂x)(∂J/∂y_i) = [−β e^{αy} sin(βx)   β e^{αy} cos(βx)] [∂J/∂y_r ; ∂J/∂y_i]   (59)

∂J/∂y = (∂y_r/∂y)(∂J/∂y_r) + (∂y_i/∂y)(∂J/∂y_i) = [α e^{αy} cos(βx)   α e^{αy} sin(βx)] [∂J/∂y_r ; ∂J/∂y_i]   (60)
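A compact NumPy sketch of the clipping layer (Eqs. (55)-(56)) and the Polar-to-Cartesian layer (Eqs. (57)-(60)) follows, with a finite-difference check of ∂J/∂x from Equation (59); r_1, r_2, and the test point are illustrative:

    import numpy as np

    alpha, beta = 1.0, np.pi / 2
    r1, r2 = -2.0, 2.0

    def clip_forward(x):                       # Eq. (55)
        return np.minimum(np.maximum(x, r1), r2)

    def clip_backward(x, dJ_dy):               # Eq. (56): pass-through inside [r1, r2]
        return np.where((x >= r1) & (x <= r2), dJ_dy, 0.0)

    def polar_forward(x, y):                   # Eqs. (57)-(58)
        ea = np.exp(alpha * y)
        return ea * np.cos(beta * x), ea * np.sin(beta * x)

    def polar_backward(x, y, dJ_dyr, dJ_dyi):  # Eqs. (59)-(60)
        ea = np.exp(alpha * y)
        dJ_dx = -beta * ea * np.sin(beta * x) * dJ_dyr + beta * ea * np.cos(beta * x) * dJ_dyi
        dJ_dy = alpha * ea * np.cos(beta * x) * dJ_dyr + alpha * ea * np.sin(beta * x) * dJ_dyi
        return dJ_dx, dJ_dy

    print(clip_forward(np.array([-3.0, 0.5, 4.0])))   # [-2.   0.5  2. ]

    # Finite-difference check of dJ/dx in Eq. (59) for J = 2*y_r + 3*y_i
    x0, y0, d = 0.4, -0.3, 1e-7
    yr, yi = polar_forward(x0, y0)
    yr2, yi2 = polar_forward(x0 + d, y0)
    num = (2 * yr2 + 3 * yi2 - 2 * yr - 3 * yi) / d
    ana, _ = polar_backward(x0, y0, 2.0, 3.0)
    print(np.isclose(num, ana))                       # True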



EXAMPLE

Assume that 9 information bits need to be transmitted. In this case, the communication system may need to generate M = 2^9 = 512 codewords. Let m = 5, H = 4, α = 1, β = 2π/H = π/2, and the permutation π = (1, 2, 3, 4, 5). Therefore, based on Equations (39), (40), and (41), the Golay encoder may be expressed as
















f_r(x) = e_0 + e_5 x_5 + Σ_{l=1}^{4} e_l (x_l + x_{l+1})   (61)

f_i(x) = k_0 + Σ_{l=1}^{5} k_l x_l   (62)

where the inputs of the Golay layer 505 may be e_n ∈ ℝ and k_n ∈ [0, 4) for n = 0, 1, …, 5, with

e_0 = −(1/2) Σ_{n=1}^{5} ln((1 + e^{2 e_n})/2).   (63)
The OFDM waveform may be expressed as

















p_c(t) = Σ_{x=0}^{2^5−1} e^{jπ f_sign(x)} × e^{f_r(x) + jβ f_i(x)} × e^{j(2πt/T)(xN)}   (64)
where f_sign(x) = Σ_{l=1}^{m−1} x_l x_{l+1}. In this example, the learning layers at the transmitter 100 control the Golay layer parameters such that 0.1 ≤ e_n ≤ 0.5 for n = 1, 2, …, 5 and −2 ≤ k_n ≤ 2 for n = 0, 1, …, 5. M = 512 different possible messages may be mapped to the values of e_n for n = 1, 2, …, 5 and k_n for n = 0, 1, …, 5. The parameter e_0 is set to −(1/2) Σ_{n=1}^{5} ln((1 + e^{2 e_n})/2) to stabilize the mean power of the signal, which is critical for power-limited AI-based transmission. At the receiver 200, an OFDM receiver may be considered, and the corresponding subcarriers are processed with several neural network layers. At the receiver 200, the last layer may be a classification layer whose size is M = 512, as there are 512 different codewords. The transmitter 100 and receiver 200 may be considered as an autoencoder, and the expected behavior is the identity operation (i.e., receiving the transmitted message). In other words, if the first codeword (e.g., information bits (0,0,0,0,0,0,0,0,0)) is transmitted at the transmitter 100, the first element of the classification layer may be close to 1 while the other 511 elements are near 0 (i.e., one-hot vector encoding form). If the second codeword is transmitted at the transmitter 100 (e.g., information bits (0,0,0,0,0,0,0,0,1)), the second element of the classification layer may be close to 1 while the other 511 elements are near 0.
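The following NumPy sketch assembles one encoded sequence for this example per Equations (61)-(64), assuming an identity permutation and a length-1 seed (N = 1); the e_n and k_n values are a single illustrative draw from the stated ranges rather than trained outputs:

    import numpy as np

    m, H, alpha, beta = 5, 4, 1.0, np.pi / 2
    rng = np.random.default_rng(4)
    e = rng.uniform(0.1, 0.5, size=m)              # 0.1 <= e_n <= 0.5, n = 1..5
    k = rng.uniform(-2, 2, size=m + 1)             # -2 <= k_n <= 2, k_0 first
    e0 = -0.5 * np.sum(np.log((1 + np.exp(2 * e)) / 2))   # Eq. (63)

    c = np.zeros(2 ** m, dtype=complex)
    for x in range(2 ** m):
        bits = [(x >> (m - 1 - j)) & 1 for j in range(m)]
        fr = e0 + e[m - 1] * bits[m - 1] + sum(
            e[l - 1] * (bits[l - 1] + bits[l]) for l in range(1, m))  # Eq. (61)
        fi = k[0] + sum(k[n] * bits[n - 1] for n in range(1, m + 1))  # Eq. (62)
        fsign = sum(bits[l - 1] * bits[l] for l in range(1, m))
        c[x] = np.exp(1j * np.pi * fsign) * np.exp(alpha * fr + 1j * beta * fi)  # Eq. (64)

    p = np.abs(np.fft.ifft(c, n=16 * len(c))) ** 2
    print("PAPR (dB):", 10 * np.log10(p.max() / p.mean()))   # Golay layer keeps this <= ~3 dB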


An offline training with backpropagation is adopted. The design may be performed under an AWGN channel, where the variance of the noise is set to 2.5 dB in this example. The layer information at the transmitter 100 and receiver 200 in this example is provided in FIG. 9, Table 1. The training batch size is set to 100, i.e., the gradients are calculated over 100 different 9-bit information words. After the layers at the transmitter 100 and receiver 200 are trained, the learned parameters may be used in an OFDM transmission as illustrated in FIG. 5.


In FIG. 6, we provide the block error rate (BLER), bit error rate (BER), and spectral efficiency (SE) as compared to the Shannon limit for the AI-based learning with Golay layer and for a Polar code under the same SE (i.e., 9 bits over 32 subcarriers) for OFDM transmission. The Polar code is optimized at 3 dB SNR. AI-based learning with the Golay layer is slightly better than the Polar code in this scenario in terms of BLER and BER, while it offers a major improvement in PAPR, i.e., more than 7 dB of PAPR gain. While the Golay layer keeps the PAPR less than or equal to 3 dB, the Polar code causes large PAPR, i.e., 10.8 dB PAPR at the 90th percentile, while also causing large fluctuations in the mean power (13.8 dB at the 90th percentile). The PAPR distributions of the signals are provided in FIG. 7. FIG. 6 shows the block error rate, bit error rate, and spectral efficiency of the AI-based learning with Golay layer. FIG. 7 shows a PAPR comparison.


In FIG. 8, we show the learned constellation per subcarrier, where the points marked by a circle, plus, and cross indicate the elements of three different encoded sequences in the frequency domain for each OFDM subcarrier. Although the constellation per subcarrier does not follow traditional constellations, e.g., an M-QAM alphabet, the sequences are still differentiable at the receiver side with good BER/BLER performance. FIG. 8 shows the distribution of the elements of the sequences on different subcarriers (i.e., the learned constellation per subcarrier).


Learning Amplitude and Phase Offsets for Higher Data Rates

In one method, the sets for the phase and amplitude parameters e_n, k_n for n = 0, 1, …, m may be predetermined, and these sets are offset by neural networks to be able to transmit a large number of bits. For example, e_n ∈ e_{fix,n} + Δe_n and k_n ∈ k_{fix,n} + Δk_n, where e_{fix,n} ∈ S = {−0.5, 0.8, 1.2, 1.5} and k_{fix,n} ∈ ℤ_4 = {0, 1, 2, 3}, and the neural network is trained to obtain {Δk_n} and {Δe_n} for n = 0, 1, …, m. In another method, the neural network may obtain multiple {Δk_n} and {Δe_n} for n = 0, 1, …, m.


At the receiver 200, a neural network DNN 310 may be utilized along with a traditional decoder (with no learning capability). While the neural network 310 subtracts the impact of the offsets {Δk_n} and {Δe_n} from the received signal, the traditional decoder may decode the remaining signal.


The different parts of the methods disclosed herein may be combined. They can also be applied in other fields (e.g., computer vision, image processing, etc.). For example, the Golay layer 505 may be utilized to limit the distribution of the values at the inputs of the following layer.


The current disclosure has demonstrated the efficacy of the methods disclosed herein through computer analysis. The current disclosure may be implemented in a baseband/RF chipset of a radio communication device. The method may also be prescribed in a wireless communication standard that allows machine learning-based communications.


The disclosed methods can also be applied to communication devices that operate under power-limited link budgets while autonomously optimizing their error rate performance for auto-encoder OFDM. The disclosed method may decrease the training duration, as the transmitter 100 and receiver 200 do not have to deal with the PAPR problem when the Golay layer 505 is used; the Golay layer 505 itself ensures low PAPR.


While the present subject matter has been described in detail with respect to specific exemplary embodiments and methods thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, the scope of the present disclosure is by way of example rather than by way of limitation, and the subject disclosure does not preclude inclusion of such modifications, variations and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art using the teachings disclosed herein.

Claims
  • 1. A method for avoiding non-linear distortion in end-to-end learning communication systems, the communication system comprising a transmitter and a receiver, the method comprising: mapping transmitted information bits to an input of a first neural network;controlling, by an output of the neural network, parameters of a complementary sequence (CS) encoder, producing an encoded CS;transmitting the encoded CS through an orthogonal frequency division multiplexing (OFDM) signal;processing, by Discrete Fourier Transform (DFT), the encoded CS in a frequency domain, to produce a received information signal; andprocessing, by a second neural network, the received information signal.
  • 2. The method of claim 1, wherein the CS encoder comprises an amplitude encoder, a phase encoder, and a shift encoder.
  • 3. The method of claim 2, wherein mapping the transmitted information bits to an input of a first neural network further comprises: manually tuning the shift encoder to adjust a position of non-zero elements of the CS;tuning the amplitude encoder and the phase encoder using the first neural network to produce tuned parameters; andmapping the information bits to the tuned parameters.
  • 4. The method of claim 1, wherein the encoded CS is processed by multiple layers at the transmitter, the layers including at least a Golay layer, the method further comprising: controlling only the amplitude encoder and the phase encoder of the Golay layer; andforming an autoencoder which captures transmitter, channel, and receiver behaviors.
  • 5. The method of claim 2, wherein, sets for the amplitude encoder and phase encoder are predetermined and offset by the first neural network in order to be able to transmit a large number of information bits.
  • 6. The method of claim 5, wherein the receiver further comprises a decoder, the method further comprising: subtracting, by the second neural network, the offsets from the received information signal to produce a remaining information signal; anddecoding, by the decoder, the remaining information signal.
  • 7. The method of claim 4, wherein the layers further include a clipping layer configured to limit the amplitude of the information signal.
  • 8. The method of claim 4, wherein the layers further include a Polar-to-Cartesian layer configured to convert the coordinate system from Polar coordinates to a Cartesian coordinate system.
  • 9. An end-to-end learning communication system for avoiding non-linear distortion, the system comprising: a transmitter implemented by processing circuitry, the processing circuitry comprising a processor and a memory containing instructions executable by the processor, the processor of the transmitter configured to: map transmitted information bits to an input of a first neural network;control, by an output of the neural network, parameters of a complementary sequence (CS) encoder, producing an encoded CS; andtransmit the encoded CS through an orthogonal frequency division multiplexing (OFDM) signal; anda receiver implemented by processing circuitry, the processing circuitry comprising a processor and a memory containing instructions executable by the processor, the processor of the receiver configured to: process, by Discrete Fourier Transform (DFT), the encoded CS in a frequency domain, to produce a received information signal; andprocess, by a second neural network, the received information signal.
  • 10. The system of claim 9, wherein the CS encoder comprises an amplitude encoder, a phase encoder, and a shift encoder.
  • 11. The system of claim 10, wherein mapping the transmitted information bits to an input of a first neural network further comprises: manually tuning, by the processor of the transmitter, the shift encoder to adjust a position of non-zero elements of the CS;tuning, by the processor of the transmitter, the amplitude encoder and the phase encoder using the first neural network to produce tuned parameters; andmapping, by the processor of the transmitter, the information bits to the tuned parameters.
  • 12. The system of claim 10, wherein the encoded CS is processed by multiple layers at the transmitter, the layers including at least a Golay layer, the processor of the transmitter further configured to: control only the amplitude encoder and the phase encoder of the Golay layer; andform an autoencoder which captures transmitter, channel, and receiver behaviors.
  • 13. The system of claim 10, wherein sets for the amplitude encoder and phase encoder are predetermined and offset by the first neural network in order to be able to transmit a large number of information bits.
  • 14. The system of claim 13, wherein the processor of the receiver is further configured to: subtract, by the second neural network, the offsets from the received information signal to produce a remaining information signal; anddecode, by the decoder, the remaining information signal.
  • 15. The system of claim 12, wherein the layers further include a clipping layer configured to limit the amplitude of the information signal.
  • 16. The system of claim 12, wherein the layers further include a Polar-to-Cartesian layer configured to convert the coordinate system from Polar coordinates to a Cartesian coordinate system.
CROSS REFERENCE TO RELATED APPLICATION

This Application claims priority to U.S. Provisional Patent No. 62/913,776, filed Oct. 11, 2019, titled, Methods for Non-linear Distortion Immune End-to-End Learning with Autoencoder-OFDM.

Provisional Applications (1)
Number Date Country
62913776 Oct 2019 US