The present disclosure relates generally to data processing, and more specifically to error correction in network packets.
Neural networks can be used to estimate positions of unreliable bits in a network packet based on corrupted copies of the network packet. However, neural networks may have computational complexities higher than desirable for practical applications. Thus, estimating positions of unreliable bits in the network packet can require optimizing a neural network for that specific purpose.
One or more embodiments of the present disclosure may comprise a method including receiving copies of a network packet; extracting soft information from the copies of the network packet; using the soft information to select K positions in a payload of the network packet with uncertain values of bits; selecting the K positions having the largest levels of uncertainty; changing the uncertain values at the K positions to a selected combination of values to obtain a modified payload of the network packet; and calculating an error detection code for the modified payload, wherein when the error detection code for the modified payload matches an error detection code in the network packet, errors in the payload are corrected.
One or more embodiments of the present disclosure may comprise a system having a processor and a memory for storing instructions, the processor executing the instructions to: receive copies of a network packet; extract soft information from the copies of the network packet; use the soft information to select K positions in a payload of the network packet with uncertain values of bits; select the K positions having the largest levels of uncertainty; change the uncertain values at the K positions to a selected combination of values to obtain a modified payload of the network packet; and calculate an error detection code for the modified payload, wherein when the error detection code for the modified payload matches an error detection code in the network packet, errors in the payload are corrected.
One or more embodiments of the present disclosure may comprise a method including extracting soft information from copies of a network packet; using the soft information to select positions in a payload of the network packet with uncertain values of bits; changing values at the positions to a combination of values to obtain a modified payload of the network packet; and calculating an error detection code for the modified payload, wherein when the error detection code for the modified payload matches an error detection code in the network packet, errors in the payload are corrected.
One or more embodiments of the present disclosure may comprise a method including receiving a sequence of corrupted copies of a network packet; forming an input table having the corrupted copies of the network packet as columns and values of bits at the same positions in the corrupted copies as rows; determining, based on at least one lookup table and based on values of bits in the rows of the input table, logit values corresponding to the rows of the input table; and determining, based on the logit values, values of bits in the network packet.
Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements.
Referring now to the drawings, the environment 100 may include a transmitter 110, a receiver 120, and a communication channel 140. The transmitter 110 may send network packets over the communication channel 140. The receiver 120 may receive the network packets and analyze the integrity of the network packets. If the receiver 120 determines that a network packet is corrupted, then the receiver 120 may request that the transmitter 110 retransmit the network packet.
In various embodiments, the transmitter 110 or receiver 120 may include a computer (e.g., laptop computer, tablet computer, desktop computer), a server, a cellular phone, a smart phone, a gaming console, a multimedia system, a smart television device, wireless headphones, set-top box, an infotainment system, in-vehicle computing device, informational kiosk, smart home computer, software application, computer operating system, a modem, a router, and so forth.
The communication channel 140 may include the Internet or any other network capable of communicating data between devices. Suitable networks may include or interface with any one or more of, for instance, a local intranet, a corporate data network, a data center network, a home data network, a Personal Area Network, a Local Area Network (LAN), a Wide Area Network (WAN), a Metropolitan Area Network, a virtual private network, a storage area network, a frame relay connection, an Advanced Intelligent Network connection, a synchronous optical network connection, a digital T1, T3, E1 or E3 line, Digital Data Service connection, Digital Subscriber Line connection, an Ethernet connection, an Integrated Services Digital Network line, a dial-up port such as a V.90, V.34 or V.34bis analog modem connection, a cable modem, an Asynchronous Transfer Mode connection, or a Fiber Distributed Data Interface or Copper Distributed Data Interface connection. Furthermore, communications may also include links to any of a variety of wireless networks, including Wireless Application Protocol, General Packet Radio Service, Global System for Mobile Communication, Code Division Multiple Access or Time Division Multiple Access, cellular phone networks, Global Positioning System, cellular digital packet data, Research in Motion, Limited duplex paging network, Bluetooth radio, or an IEEE 802.11-based radio frequency network. The communication channel 140 can further include or interface with any one or more of a Recommended Standard 232 (RS-232) serial connection, an IEEE-1394 (FireWire) connection, a Fiber Channel connection, an IrDA (infrared) port, a Small Computer Systems Interface connection, a Universal Serial Bus (USB) connection or other wired or wireless, digital or analog interface or connection, mesh or Digi® networking.
A cyclic redundancy check (CRC) 225 is an error-detecting code used to determine whether a network packet is corrupted. The CRC 225 is generated by the CRC generator 215 based on an original packet 205 (a payload) and is appended to the original packet 205 to form the network packet 210. The network packet 210 is transmitted from the transmitter 110 to the receiver 120 via the communication channel 140. The CRC 225 is typically 3 bytes, regardless of the length of the payload. When the network packet with the CRC 225 is received, the receiver 120 computes a new CRC from the received payload of the network packet and compares the new CRC to the appended CRC 225. If the appended CRC 225 does not match the new CRC computed from the received payload, the network packet 210 is corrupted.
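For illustration only, a minimal sketch of the receiver-side CRC check is shown below in Python. The 24-bit polynomial, the bit ordering, and the function names are assumptions chosen for the sketch; they do not necessarily correspond to the CRC 225 of any particular protocol.

```python
def crc24(payload: bytes, poly: int = 0x5D6DCB, init: int = 0x000000) -> int:
    """Compute a 24-bit CRC over the payload, most significant bit first."""
    crc = init
    for byte in payload:
        crc ^= byte << 16
        for _ in range(8):
            if crc & 0x800000:                      # top bit of the 24-bit register is set
                crc = ((crc << 1) ^ poly) & 0xFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFF
    return crc


def packet_is_corrupted(payload: bytes, appended_crc: int) -> bool:
    """True when the CRC recomputed from the payload does not match the appended CRC."""
    return crc24(payload) != appended_crc
```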
Automatic Repeat Request (ARQ) is a communication method in which, if the network packet 210 is determined to be corrupted, the receiver 120 requests that the transmitter 110 retransmit the network packet 210. ARQ stops when the network packet 210 is received correctly or a maximum timeout is reached. Typically, ARQ discards previously received network packets, and thus the information from the previously received packets is not used.
ARQ+CRC is an approach widely used in Bluetooth™, Wi-Fi™, and 3G/4G/LTE/5G networks. However, ARQ+CRC may not be efficient because even if only one bit in the received network packet is erroneous (resulting in a CRC check failure), the whole network packet must be retransmitted. Thus, the Block Error Rate (BLER) of the ARQ+CRC scheme can be higher than required.
Prior to requesting retransmission of the network packet, the receiver 120 can modify a few bits in the payload and test the CRC again. Given a payload of length L, and assuming that only one bit in the payload is erroneous, flipping each of the L bits one at a time requires checking the CRC L times. However, due to the nature of communication systems, there will very often be more than one bit error. Flipping all possible combinations of bits requires 2^L CRC checks, which is computationally infeasible and can drastically reduce the validity of the CRC.
Hence, to reduce the number of bit flips that must be tested, one can extract soft information to determine which bits in the payload are the most unreliable (uncertain). Some embodiments of the present disclosure may provide a method for modifying the unreliable bits and testing the CRC without drastically reducing the validity of the CRC.
The NPP method 300 may commence in block 305 with receiving a network packet having a payload of x bits and the CRC. In decision block 310, the NPP method 300 may calculate the CRC for the received payload and compare the calculated CRC with the received CRC. If the CRC check passes, then the NPP method 300 may proceed, in block 315, with processing the next network packet.
If the CRC check fails, then the NPP method 300 may proceed, in block 320, with concatenating the network packet with previously received copies of the network packet. The current copy of the network packet can be stored in a memory for use in the next round (attempt) of receiving and processing a copy of the network packet.
In block 325, the NPP method 300 may extract soft information from the copies of the network packet. In some embodiments, the soft information may include expected values for bits at positions of the payload of the network packet. For example, the expected values can be real numbers from 0 to 1. An expected value for a position j can be obtained by a machine learning (ML) model 330 based on the values of bits at position j in all the copies of the network packet and the values of bits at positions neighboring j in all the copies of the network packet (due to the correlation caused by real-world channels and modulations). The ML model 330 can then determine the uncertainty level based on the logit value log(p/(1−p)), where p is the expected value.
In block 335, the NPP method 300 may use the soft information to select K positions in the payload with the most uncertain values of bits. For example, the NPP method 300 may determine levels of uncertainty for the positions of bits in the payload. A level of uncertainty for a position j can be found as the minimum between the distance of the expected value at position j from 0 and the distance of the expected value at position j from 1. The NPP method 300 may then select the K positions having the largest levels of uncertainty.
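As a sketch of block 335 only (the function name and the use of numpy are illustrative, not part of the disclosure), the selection of the K most uncertain positions can be written as:

```python
import numpy as np

def most_uncertain_positions(expected: np.ndarray, k: int) -> np.ndarray:
    """expected[j] is the expected value p_j in [0, 1] for bit position j.

    The uncertainty of position j is min(p_j - 0, 1 - p_j); the indices of
    the K largest uncertainties are returned.
    """
    uncertainty = np.minimum(expected, 1.0 - expected)
    return np.argsort(uncertainty)[-k:]
```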
In block 340, the NPP method 300 may proceed with selecting a combination of values of K bits from the 2^K possible combinations of values of bits at the selected K positions. The NPP method 300 may change the values at the selected K positions to the selected combination of values to obtain a modified payload of the network packet.
In block 345, the NPP method 300 may proceed with calculating the CRC for the modified payload. If the CRC of the modified payload matches the CRC in the received network packet, errors in the payload are corrected and the NPP method 300 may process the next network packet in block 315.
If the CRC of the modified payload does not match the CRC in the received network packet, the NPP method 300 may proceed, in block 350, with checking whether all possible combinations of values of K bits have been selected and tested. If not all the possible combinations have been tested, the NPP method 300 may proceed, in block 340, with selecting the next combination.
If all possible combinations have been selected and tested, then the NPP method 300 may proceed, in block 355, with the determination that the network packet cannot be corrected. In this case, the NPP method 300 may request that the network packet be retransmitted. Note that because the CRC has a linear structure, this search could be further optimized to improve runtime performance, as described in a separately filed application (9865US).
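A compact sketch of the search in blocks 340 through 355 is given below. The bit-packing helper, the use of the `crc24` placeholder from the earlier sketch, and the assumption that the payload length is a multiple of 8 bits are all illustrative choices, not the disclosed implementation.

```python
from itertools import product

def bits_to_bytes(bits):
    """Pack a list of 0/1 values (length a multiple of 8) into bytes, MSB first."""
    return bytes(
        sum(b << (7 - i) for i, b in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    )

def correct_by_flipping(bits, positions, received_crc):
    """Try all 2**K assignments of the K selected bits; return the first that passes."""
    for combo in product((0, 1), repeat=len(positions)):
        candidate = list(bits)
        for pos, value in zip(positions, combo):
            candidate[pos] = value
        if crc24(bits_to_bytes(candidate)) == received_crc:
            return candidate                 # block 345: CRC matches, errors corrected
    return None                              # block 355: request retransmission
```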
The ML model used to extract soft information is a deep learning convolutional neural network (CNN) with 1-dimensional inputs, as shown in
Although the ML model is simple, directly running this ML model, for example, on a central processing unit (CPU) of a personal computer, may require over 200 ms of runtime latency and over 450 KB of memory. This latency may be problematic for deploying deep learning models to edge devices, especially Internet of Things (IoT) devices such as Bluetooth microcontrollers.
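The text specifies only that the model is a 1-D CNN containing Conv1D and Dense layers with kernel size 3, so the Keras sketch below is a hypothetical stand-in: the filter count, activation, and layer count are assumptions meant only to show the general shape of such a model.

```python
import tensorflow as tf

def build_soft_info_cnn(num_rounds: int) -> tf.keras.Model:
    # Variable-length 1-D input with one channel per received copy (round).
    inputs = tf.keras.Input(shape=(None, num_rounds))
    # A single kernel-size-3 convolution keeps the receptive field at 3*num_rounds bits.
    x = tf.keras.layers.Conv1D(filters=8, kernel_size=3, padding="same",
                               activation="relu")(inputs)
    # A position-wise Dense layer produces one logit per payload bit.
    outputs = tf.keras.layers.Dense(1)(x)
    return tf.keras.Model(inputs, outputs)
```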
Current TinyML techniques, such as quantization, pruning, and model distillation, are still in their early stages and are designed to be general-purpose. The computational cost of applying a conventional interpreter-based framework (such as TF Lite Micro) is still beyond what can be allocated on microcontrollers, as shown in Table 1.
To satisfy Bluetooth microcontroller requirements, the systems and methods of the present disclosure utilize special properties of the NPP. Due to the particular properties of digital communication systems, multiple implementations for the Bluetooth microcontroller can be provided. The NPP soft information extraction uses a simple and small CNN model, which contains only Conv1D and Dense layers. TF Lite Micro includes multiple optimizations for other layer types, which are not useful for the NPP. Thus, building an optimized implementation of the NPP using the systems and methods of the present disclosure can reduce the framework overhead of TF Lite Micro by a large margin.
Moreover, the input space of the NPP is limited because the received sequence contains only binary values and the convolutional window has a limited kernel size, which makes it possible to use a memory-runtime tradeoff, through hashing, to improve runtime efficiency.
Compared to implementations of the NPP via common TinyML frameworks, such as TF Lite Micro, in conventional systems, the present disclosure provides methods developed specifically for the NPP, which can lead to a very low latency and memory footprint.
With respect to the size and the possible combinations of the input for the NPP, the CNN has a kernel size of 3 and operates over R rounds, which means that each output value depends only on an input of size 3R. Because the CNN filter is constant across all positions, the input space is restricted. Moreover, Bluetooth communication only takes binary inputs, which makes enumeration of all possible inputs possible. For two rounds, there are in total 2^6 = 64 possible inputs, and for three rounds, there are in total 2^9 = 512 possible inputs.
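Because the inputs are binary and the receptive field is only 3R bits, the model's output can be precomputed for every possible window. A sketch of this precomputation, assuming the hypothetical Keras model above (which returns one logit per position for a length-3 window), might look like:

```python
from itertools import product
import numpy as np

def build_lookup_table(model, num_rounds: int) -> np.ndarray:
    """Enumerate all 2**(3*num_rounds) binary windows and store the center-bit logit."""
    table = np.zeros(2 ** (3 * num_rounds), dtype=np.float32)
    for bits in product((0, 1), repeat=3 * num_rounds):
        window = np.array(bits, dtype=np.float32).reshape(1, 3, num_rounds)
        logits = np.asarray(model(window))            # shape (1, 3, 1) for the sketch model
        key = int("".join(str(b) for b in bits), 2)   # index the table by the bit pattern
        table[key] = float(logits[0, 1, 0])           # keep the center position's logit
    return table

# For two rounds the table has 2**6 = 64 entries; for three rounds, 2**9 = 512.
```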
To support more rounds of the NPP, an additive lookup table is provided. An additive property is assumed, meaning that the soft information after a+b rounds equals the sum of the soft information of the first a rounds and the soft information of the next consecutive b rounds (i.e., from round a+1 to round a+b). More formally, it is assumed that:
f(R_1, …, R_M) = f(R_1, …, R_i) + f(R_{i+1}, …, R_M), for i ∈ {2, …, M−1}
This is a crude assumption, but it allows a large number of rounds to be handled by composing multiple results from just the 2-round and 3-round tables, as shown in
The first solution, set forth above, keeps track of the number of rounds to come up with the proper combination of lookup-table calls, as mentioned, and buffers all the previously received packets up to round R, which requires sizable memory storage. Specifically, it requires a memory space of size R*L, which increases linearly with R.
Also, the number of required lookup-table calls increases as R grows. For example, for R=6 there are 2 calls to the lookup table, while for R=10 there would be 4 calls.
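One way to realize this round-counting bookkeeping, sketched below under the assumption that only the 2-round and 3-round tables exist, is a simple decomposition of R into chunks of 3 and 2; the greedy policy shown here is an illustrative choice that reproduces the counts mentioned above (2 calls for R=6, 4 calls for R=10).

```python
def lut_call_plan(num_rounds: int) -> list:
    """Split num_rounds (>= 2) into chunks of 3 and 2, one lookup-table call per chunk."""
    if num_rounds < 2:
        raise ValueError("at least two received copies are needed")
    chunks = []
    remaining = num_rounds
    while remaining > 0:
        if remaining in (2, 4):          # finish with 2-round chunks to avoid a leftover of 1
            chunks.extend([2] * (remaining // 2))
            remaining = 0
        else:
            chunks.append(3)
            remaining -= 3
    return chunks

# lut_call_plan(6)  -> [3, 3]          (2 lookup-table calls)
# lut_call_plan(10) -> [3, 3, 2, 2]    (4 lookup-table calls)
```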
A second, related solution is a "round-agnostic" approach: at any given round R, we have access to the accumulated soft information from the past, which is stored in a buffer, and we compute the soft information (i.e., by calling the lookup table) only for the most recently received packets.
We are required to store only up to three previous rounds (R−2, R−1, R), so we call the lookup table on these most recently received rounds and then store the soft information at specific rounds to be used in the future.
An example configuration of an algorithm is provided for explanatory purposes: a) Set the buffer initially to 0:
The buffer is an array of length L (the same length as the original received packet) that stores the most recent soft information for all the packet bits. All the initial soft information values are set to 0.
b) At any round R>1, compute the soft information:
Note that the buffer is only updated at rounds that are multiples of 3 (i.e., R=3, 6, 9, etc.), to be used in future calls. We can also see that, at any given round R, we need to store at most the three most recent rounds R−2, R−1, and R, which requires a memory space of only 3*L.
Also note that the additive logic from 2b still carries over here, i.e., the soft information is still computed in an additive fashion. The performance of the round-agnostic lookup table shows minimal performance loss, with a significant reduction in memory space.
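A sketch of this round-agnostic bookkeeping is shown below. The `lut_soft_info` callable, the class name, and the handling of the round immediately after a buffer update (when only one new copy is available) are assumptions; the text specifies only that the buffer is folded in at rounds that are multiples of 3 and that at most the three most recent copies are kept.

```python
import numpy as np

class RoundAgnosticBuffer:
    """Accumulate soft information additively without tracking the total round count."""

    def __init__(self, packet_bits: int):
        self.buffer = np.zeros(packet_bits, dtype=np.float32)  # accumulated soft information
        self.recent = []                                        # at most the 3 latest copies

    def update(self, round_index: int, received_bits: np.ndarray, lut_soft_info):
        self.recent.append(received_bits)
        if len(self.recent) < 2:
            # With a single fresh copy the 2-/3-round tables cannot be used yet;
            # returning the accumulated buffer here is an assumption of this sketch.
            return self.buffer
        soft = self.buffer + lut_soft_info(self.recent)   # 2-round or 3-round lookup table
        if round_index % 3 == 0:                          # fold in at rounds 3, 6, 9, ...
            self.buffer = soft
            self.recent = []
        return soft
```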
One of the challenges to tackle when implementing the NPP on a microcontroller is that the lookup table cannot have indefinitely high numerical precision: storing the soft-information lookup table as, say, floating-point numbers would take up too much memory and require too much computational power for the downstream operations.
Therefore, to enable implementing the lookup table on a microcontroller, which supports the int8 type rather than the float type, the performance difference between the original float values and integer values obtained by rounding to the nearest integers is investigated. The performance of the rounded values and the float values is the same, as shown in
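A minimal sketch of that quantization step, assuming the table holds logit-like soft values and using a unit scale factor (the scale is an assumption, not specified in the text), is:

```python
import numpy as np

def quantize_table_int8(table: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Round each soft value to the nearest integer and clip it into the int8 range."""
    return np.clip(np.rint(table * scale), -128, 127).astype(np.int8)
```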
Based on the output of the NPP lookup table, it can be seen that, for any i-th bit position where the observed bit is equal across all the received rounds (for example, bit number 10 is equal to 0 in all the received sequences), the NPP simply returns that observed value with high confidence. In other words, the NPP is actually only needed at the bit positions where the received bits are not equal across the rounds, which means that there is definitely an error at that particular position in one or more of the receptions.
Some embodiments can involve skipping NPP calls in the case of equal bits or bytes. Given this observation, whenever all the received bits are equal at a particular position i, the observed value can be quickly returned with high confidence and the next bit analyzed.
Furthermore, a plurality of bits, e.g., full bytes, can be compared across the rounds together. For example, the entire 10th received byte can be compared across all the rounds. If the bits are fully equal, the entire byte can be confidently returned as error-free and the next byte processed. The lookup table is called and scanned with the moving window only when the bytes are not fully equal to each other, which means that there are reception errors within that byte that need to be resolved using the NPP. This approach saves a great deal of time for the NPP on the microcontroller, especially when the total number of errors (i.e., the number of unequal bits) is not too large.
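A sketch of this byte-level shortcut is shown below; the `HIGH_CONFIDENCE` constant and the `npp_lookup` callable (which would scan the disagreeing byte with the moving window and the lookup table) are illustrative placeholders rather than disclosed components.

```python
import numpy as np

HIGH_CONFIDENCE = 127   # maximum int8 logit magnitude, returned for bits all rounds agree on

def soft_info_for_byte(copies: np.ndarray, byte_index: int, npp_lookup) -> np.ndarray:
    """copies has shape (num_rounds, num_bits); returns 8 soft values for one byte."""
    start = byte_index * 8
    window = copies[:, start:start + 8]
    if np.all(window == window[0]):          # every round received the same byte
        return np.where(window[0] == 1, HIGH_CONFIDENCE, -HIGH_CONFIDENCE)
    return npp_lookup(copies, start)         # disagreement: resolve with the NPP lookup table
```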
The components shown in
Mass data storage 1030, which can be implemented with a magnetic disk drive, solid state drive, or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit 1010. Mass data storage 1030 stores the system software for implementing embodiments of the present disclosure for purposes of loading that software into main memory 1020.
Portable storage device 1040 operates in conjunction with a portable non-volatile storage medium, such as a flash drive, floppy disk, compact disk, digital video disc, or Universal Serial Bus (USB) storage device, to input and output data and code to and from the computer system 1000 of
User input devices 1060 can provide a portion of a user interface. User input devices 1060 may include one or more microphones, an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. User input devices 1060 can also include a touchscreen. Additionally, the computer system 1000 as shown in
Graphics display system 1070 includes a liquid crystal display (LCD) or other suitable display device. Graphics display system 1070 is configurable to receive textual and graphical information and to process the information for output to the display device. Peripheral devices 1080 may include any type of computer support device to add additional functionality to the computer system. The components provided in the computer system 1000 of
The processing for various embodiments may be implemented in software that is cloud-based. In some embodiments, the computer system 1000 is implemented as a cloud-based computing environment, such as a virtual machine operating within a computing cloud. In other embodiments, the computer system 1000 may itself include a cloud-based computing environment, where the functionalities of the computer system 1000 are executed in a distributed fashion. Thus, the computer system 1000, when configured as a computing cloud, may include pluralities of computing devices in various forms, as will be described in greater detail below.
In general, a cloud-based computing environment is a resource that typically combines the computational power of a large grouping of processors (such as within web servers) and/or that combines the storage capacity of a large grouping of computer memories or storage devices. Systems that provide cloud-based resources may be utilized exclusively by their owners or such systems may be accessible to outside users who deploy applications within the computing infrastructure to obtain the benefit of large computational or storage resources.
The cloud may be formed, for example, by a network of web servers that comprise a plurality of computing devices, such as the computer system 1000, with each server (or at least a plurality thereof) providing processor and/or storage resources. These servers may manage workloads provided by multiple users (e.g., cloud resource customers or other users). Typically, each user places workload demands upon the cloud that vary in real-time, sometimes dramatically. The nature and extent of these variations typically depends on the type of business associated with the user. The present technology is described above with reference to example embodiments. Therefore, other variations upon the example embodiments are intended to be covered by the present disclosure.
This application claims the benefit and priority of U.S. Provisional Application Ser. No. 63/158,817, filed on Mar. 9, 2021, which is hereby incorporated by reference herein in its entirety.