The present invention relates to apparatus and methods for transmission and reception of packet streams across a network with high speed and reliability.
High-speed packet streaming schemes are commonly used in transmitting real-time video and other digital media across a network. Because of the real-time nature of the data, the packet transmissions typically use an unreliable protocol, without acknowledgment from the destination or retransmission by the source when packets are lost or corrupted. Therefore, in applications requiring high data availability, a source host may transmit multiple parallel, redundant streams of the data to the destination. Each packet is thus transmitted multiple times, once in each stream, over multiple different paths through the network, in order to increase the likelihood that at least one copy of each packet will be received intact at the destination.
In this sort of scheme, there is still no guarantee that all the packets in any given stream will reach the destination, nor can it be ensured that the packets will arrive at the destination in the order in which they were transmitted. Therefore, in many applications (such as broadcast or storage of the video data), the destination computer must store, reorder and interleave packets from the two (or more) received streams in order to reconstruct the data. This solution enables reliable video reconstruction, but at the cost of a substantial memory footprint and a heavy processing burden on the host processor, which increases data latency and can limit the data throughput.
Various schemes for handling data from redundant transmissions are known in the art. For example, U.S. Patent Application Publication 2009/0034633 describes a method for simultaneous processing of media and redundancy streams for mitigating impairments. The method comprises receiving a primary stream of encoded frames and a separate stream of redundant frames. The method further comprises decoding and reconstructing in parallel the frames in the primary stream and the separate stream of redundant frames, on a real-time basis, in accordance with a specified common clock reference. The method further comprises, upon determining that a frame in the primary stream exhibits an error or impairment, determining a decoded redundant frame in the separate stream that corresponds to the impaired frame, and substituting at least a portion of the information in the decoded redundant frame for a corresponding decoded version of the impaired frame.
Embodiments of the present invention that are described herein provide efficient apparatus and methods for receiving and handling redundant data streams from a network.
There is therefore provided, in accordance with an embodiment of the invention, communication apparatus, including a host interface, which is configured to be connected to a bus of a host computer having a processor and a memory, in which a buffer is allocated for receiving a data segment. A network interface is configured to receive from a packet communication network at least first and second redundant packet streams. Each packet stream includes a sequence of data packets, which include headers containing respective packet sequence numbers and data payloads of a predefined, fixed size containing respective slices of the data segment, such that redundant first and second copies of each slice are transmitted respectively in at least a first data packet in the first packet stream and a second data packet in the second packet stream. Packet processing circuitry is configured to receive the data packets from the network interface, to map the data packets in both the first and second packet streams, using the packet sequence numbers, to respective addresses in the buffer, and to write the data payloads to the respective addresses via the host interface while eliminating redundant data so that the buffer contains exactly one copy of each slice of the data segment, ordered in accordance with the packet sequence numbers.
In some embodiments, the packet processing circuitry is configured to map the packet sequence numbers to the respective addresses, using a linear mapping defined so that the first and second copies of any given data slice are both mapped to a common address in the buffer. In a disclosed embodiment, the mapping is defined such that each packet sequence number PSN is mapped to an address equal to A+(PSN−X)×B, wherein X is an initial sequence number, B is the fixed size of the data payloads, and A is a base address of the buffer.
In some embodiments, the data payloads include video data, including multiple data segments corresponding to frames of the video data. In one embodiment, the headers contain an indication of a start and end of each frame, and the packet processing circuitry is configured to identify the indication in the headers and to select the buffer to which the data payloads are to be written responsively to the identified indication.
In a disclosed embodiment, the packet processing circuitry is configured to maintain a record of the packet sequence numbers for which the data payloads have been written to the buffer, and responsively to the record, to discard redundant copies of the data payloads so that each slice is transmitted over the bus via the host interface no more than once. Alternatively, the packet processing circuitry is configured to transmit all of the copies of each slice over the bus so that the redundant data in the buffer are overwritten.
In a disclosed embodiment, the packet processing circuitry is configured to receive a work item from the processor indicating, for each data segment, an address of the buffer and the size of the slices, and to submit a completion report to the processor when all slices of the data segment have been written to the buffer.
In one embodiment, the first and second packet streams are transmitted in accordance with a Real-time Transport Protocol (RTP), and the headers include an RTP header, which contains the packet sequence numbers.
There is also provided, in accordance with an embodiment of the invention, a method for data communication, including allocating a buffer in a memory of a host computer for receiving a data segment. At least first and second redundant packet streams are received in a network interface controller (NIC) of the host computer from a packet communication network. Each packet stream includes a sequence of data packets, which include headers containing respective packet sequence numbers and data payloads of a predefined, fixed size containing respective slices of the data segment, such that redundant first and second copies of each slice are transmitted respectively in at least a first data packet in the first packet stream and a second data packet in the second packet stream. The data packets in both the first and second packet streams are mapped, using the packet sequence numbers, to respective addresses in the buffer. The data payloads are written to the respective addresses in the buffer while eliminating redundant data so that the buffer contains exactly one copy of each slice of the data segment, ordered in accordance with the packet sequence numbers.
There is additionally provided, in accordance with an embodiment of the invention, a method for data communication, which includes allocating a buffer in a memory of a host computer for receiving a data segment. A packet stream including a sequence of data packets, which include headers containing respective packet sequence numbers and data payloads of a predefined, fixed size containing respective slices of the data segment, is received in a network interface controller (NIC) of the host computer from a packet communication network. The data packets in the packet stream are mapped to respective addresses in the buffer using a linear mapping of the packet sequence numbers to the addresses. The data payloads are written to the respective addresses in the buffer, ordered in accordance with the packet sequence numbers.
The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings.
In many real-time streaming protocols, such as the Real-time Transport Protocol (RTP), each of the packets in each transmitted stream contains a respective sequence number, such as a packet sequence number (PSN), which can be used at the destination in detecting missing packets and restoring the packet data to the transmit order. In the present embodiments, as explained in detail hereinbelow, these reordering and reconstruction functions are carried out by hardware logic in the network interface controller (NIC) of the destination computer, thus offloading these tasks from the host processor. Although the disclosed embodiments relate to a single stream of data (which in some cases is duplicated in at least one redundant stream), in general the destination computer may receive and process many such streams concurrently from different sources.
In some embodiments, the destination host processor allocates a single buffer to receive the reconstructed video data in each segment of an incoming data stream, for example, in each video frame, based on the expected segment size and data rate. These stream parameters may be preset or negotiated in each instance by exchange of control messages over the network between the source and destination hosts. The packet sizes can thus be assumed to be fixed and known to the host processors. Each transmitted packet is labeled with a successive sequence number, as provided by RTP or another suitable protocol.
The receiving NIC performs the functions of data integrity checking, packet reordering, and elimination of redundant data, using the known packet data sizes and sequence numbers. Thus the receiving NIC writes the data payloads to the proper locations in the allocated buffer, in accordance with the transmit order (and irrespective of the receive order), while discarding or overwriting the data from redundant packets. The receiving NIC notifies the destination host processor only when the data segment in memory is complete. The process of packet reception and reordering is thus entirely transparent to the destination host processor, which deals only with complete data segments. Assuming the receiving NIC is able to receive data from the network at wire speed, the latency and throughput of video data transfer at the destination are limited only by the processing and bus access rates of the receiving NIC. This solution not only maximizes data bandwidth, but also substantially reduces the processing load, memory footprint, and power consumption of the host processor.
Although the above embodiments relate specifically to scenarios in which the source host transmits multiple redundant streams of packets to the destination, the principles of the present invention may similarly be applied to single streams of packets (transmitted without redundancy), as well as to schemes with higher degrees of redundancy. Furthermore, although the above embodiments make use of the packet sequence numbers of the video packets, other embodiments make use of other sorts of sequence numbers that appear in the packet header, with data payloads containing slices of a data segment having a predefined, fixed size per slice. The NIC uses the sequence numbers, along with the fixed size per slice, in mapping the data payloads to respective addresses in the assigned memory buffer using a linear mapping of the sequence numbers. For example, in an alternative embodiment, described below, the NIC applies this sort of linear mapping to O-RAN fronthaul packets, using section fields in the packet headers as the sequence numbers.
A NIC 32 in each host computer 24, 26, 28, . . . , transmits the data packets over network 22 in two redundant packet streams 34 and 36.
Streams 34 and 36, as well as the packet streams transmitted by hosts 26, 28, . . . , are addressed to a receiving (Rx) host computer 40, which is connected to network 22 by a NIC 42. Upon receiving the data packets from network 22, NIC 42 maps the packets in both of packet streams 34 and 36, using the packet sequence numbers, to respective addresses in a buffer in a memory 44 of host computer 40. NIC 42 writes the data payloads to the respective addresses while eliminating redundant data so that the buffer contains exactly one copy of each slice of the data segment, ordered in accordance with the packet sequence numbers. The video frames are thus immediately available in memory for retransmission (over a television network, for example) or other access.
In the pictured scenario, certain packets are lost or corrupted in transmission through network 22, and others reach NIC 42 out of order. NIC 42 detects and discards corrupted packets, for example by computing and checking packet checksums or other error detection codes, as provided by the applicable protocols. In the pictured example, NIC 42 has received only packets 54, 58 and 56 (in that order) from stream 34 and packets 52, 56 and 58 from stream 36. Using packet sequence numbers 62, however, NIC 42 is able to directly place data from the appropriate payloads 64 in the designated buffer in memory 44, and thus reconstruct a complete image frame 66 in memory 44. As noted earlier, NIC 42 eliminates redundant data so that the buffer contains exactly one copy of each slice of the frame, ordered in accordance with the packet sequence numbers. The process of reconstruction is transparent to software running on host computer 40 and requires no involvement by the software in data reordering or eliminating redundancies.
Driver 75 creates a respective work queue 76 (conventionally referred to as a queue pair, or QP) for each incoming stream of video packets that NIC 42 is to receive. For each segment in the incoming stream (such as a video frame) that NIC 42 is to receive, driver 75 posts a corresponding work item (referred to as a work queue element, or WQE) in the appropriate work queue 76 pointing to a buffer 78 that is to receive the data in the segment. In some embodiments, this functionality is implemented as follows:
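As a purely illustrative sketch of this per-segment posting step (in C, with hypothetical structure and field names; this is not the actual driver API, which the text does not detail):

```c
#include <stdint.h>

/* Hypothetical work-queue element describing one incoming data
 * segment (e.g., one video frame); field names are illustrative. */
struct rx_stream_wqe {
    uint64_t buffer_addr;   /* base address of buffer 78 for the segment */
    uint32_t slice_size;    /* fixed payload size per packet, in bytes   */
    uint32_t num_slices;    /* expected number of slices in the segment  */
};

/* Simple ring of WQEs standing in for work queue 76. */
struct work_queue {
    struct rx_stream_wqe wqes[64];
    unsigned head;
};

/* Driver-side sketch: post one WQE describing the next segment. */
static void post_segment_wqe(struct work_queue *wq, uint64_t buf,
                             uint32_t slice_size, uint32_t num_slices)
{
    struct rx_stream_wqe *wqe = &wq->wqes[wq->head % 64];
    wqe->buffer_addr = buf;
    wqe->slice_size  = slice_size;
    wqe->num_slices  = num_slices;
    wq->head++;
}
```

In this model, one WQE corresponds to one segment, so the NIC can report a single completion to the host when all slices of that segment have been written.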
The result will be that both copies of any given data slice are mapped to a common slice address in buffer 78, and thus the corresponding, redundant data will be written to the same place in the host memory, regardless of the arrival order.
NIC 42 comprises a host interface 82, connected to bus 72, and a network interface 84, which connects to network 22. Network interface 84 in this example comprises two ports 88 with different addresses, serving as the respective destination addresses for streams 34 and 36. Alternatively, both streams may be received through the same port.
Packet processing circuitry 86 is coupled between host interface 82 and network interface 84.
Network interface 84 passes incoming packets in streams 34 and 36 to packet parsing logic 90. To process these packets, packet processing circuitry 86 reads and makes use of the information posted in the appropriate work queue 76 in memory 44 by driver 75. This information enables packet parsing logic 90 to locate and extract PSN 62 and payload 64 from each packet. Scatter engine 92 uses PSN 62, together with the base address of buffer 78, to map each slice 80 to the respective address in the buffer, and thus to write the data payloads to the appropriate addresses via host interface 82.
The packet illustrated in the corresponding figure is mapped to its address in buffer 78 according to this linear scheme.
In other words, for a packet stream beginning from an initial sequence number X, with a data payload of size B in each packet, and a buffer base address A, scatter engine 92 will write the payload of each packet to an address:
Payload address = A + (PSN − X) × B.
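This address computation can be sketched as follows (a minimal sketch in C; the function and parameter names are illustrative, not taken from any actual implementation):

```c
#include <stdint.h>

/* Sketch of the scatter engine's address computation: maps a packet
 * sequence number to a payload address in the host buffer, per
 * Payload address = A + (PSN - X) * B. */
static uint64_t payload_address(uint64_t base_addr,     /* A: buffer base  */
                                uint32_t initial_psn,   /* X: first PSN    */
                                uint32_t psn,           /* current PSN     */
                                uint32_t payload_size)  /* B: fixed size   */
{
    /* Unsigned subtraction handles PSN wraparound modulo 2^32. */
    return base_addr + (uint64_t)(psn - initial_psn) * payload_size;
}
```

Because both redundant copies of a slice carry the same PSN, both copies map to the same address, so the arrival order of the copies does not matter.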
When the payloads of two valid packets from different, respective streams in a redundant transmission scheme map to the same address (whether they have the same or different PSNs), only one of them will ultimately be written to the buffer. Scatter engine 92 may overwrite the payload of the packet that arrives first with that of the second copy of the packet, or it may write only one of the payloads to the buffer and discard the other.
For some protocols, the segment definition is also indicated in the packet headers, thus enabling alternative implementations that reduce the involvement of driver software even further, for example by identifying indications of the start and end of each frame in the headers and selecting the destination buffer accordingly.
For a NIC operating at very high speed (for example, receiving incoming video data at 400 Gbps), access from NIC 42 to memory 44 over bus 72 can become a bottleneck. To reduce the bus pressure, packet processing circuitry 86 can monitor PSNs and, when the payload from a packet having a given PSN in one of the streams has already been written to the buffer, simply discard the corresponding packet from the other stream, rather than overwriting the data already in the buffer. For this purpose, processing circuitry 86 may maintain a record, such as a PSN vector and a rolling PSN window, for each flow. Each arriving packet causes processing circuitry 86 to set the corresponding bit in the PSN vector. When the bit for a given PSN is already set when the corresponding packet from the other stream arrives, packet parsing logic 90 drops this latter packet without further processing.
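The PSN-vector record described above can be modeled as follows (an illustrative software sketch, not the actual hardware design; the logic for sliding the rolling window forward is omitted for brevity):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of a rolling PSN window: one bit per PSN in the
 * window records whether that payload was already written to the buffer. */
#define PSN_WINDOW 1024  /* window size in PSNs; a power of two here */

struct psn_tracker {
    uint8_t  bits[PSN_WINDOW / 8];
    uint32_t window_base;   /* lowest PSN currently tracked */
};

/* Returns true if this PSN was already seen, so the duplicate packet
 * can be dropped without a bus transaction; otherwise marks it seen. */
static bool psn_seen_or_mark(struct psn_tracker *t, uint32_t psn)
{
    uint32_t off = psn - t->window_base;   /* wrap-safe unsigned offset */
    if (off >= PSN_WINDOW)
        return false;                      /* outside window: treat as new */
    uint32_t idx = off / 8, bit = off % 8;
    if (t->bits[idx] & (1u << bit))
        return true;                       /* duplicate: drop it */
    t->bits[idx] |= (1u << bit);
    return false;                          /* first copy: write it */
}
```

In this scheme each slice crosses the bus at most once, which is the behavior the text describes for reducing bus pressure.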
In system 110, an O-RAN Distributed Unit (O-DU) 112 communicates over a network 116, such as an IP network, for example, with an O-RAN Radio Unit (O-RU) 114, in accordance with the above-mentioned eCPRI and O-RAN specifications. O-DU 112 and O-RU 114 comprise general-purpose computer processors, which have respective memories 118, 120 and are connected to network 116 by respective NICs 122 and 124. (O-DU 112 and O-RU 114 typically have other, special-purpose interfaces, such as a radio transceiver associated with the O-RU, but these features of system 110 are beyond the scope of the present description.)
O-RU 114 transmits a sequence of data packets over network 116, comprising respective headers and data payloads, which contain data segments of radio signals received by O-RU 114. Each data segment comprises one or more physical resource blocks (PRBs), which are divided into slices in the form of data samples, typically comprising alternating iSamples and qSamples, as defined by the O-RAN specification (see particularly section 6.3, including Table 6-2 on page 95). These samples have a fixed size, between 1 bit and 16 bits per sample, which is defined in the O-RAN header of each packet, as explained below.
NIC 122 extracts the data samples from the payload and writes them to respective addresses 154 in a buffer 152 in memory 118, using a linear mapping of the startPrbu value in field 148. The startPrbu value indicates the address 154 to which the NIC is to write the first sample in each section, followed contiguously by the succeeding samples. Specifically, in the present embodiment, NIC 122 maps the samples to addresses 154 using the following formula for the first sample in each section:
Payload address = A + (SN − X) × B.
Here X is the initial sequence number; SN is the current sequence number given by the startPrbu value in field 148; B is the fixed size of the samples, given by the udCompHdr value in field 151; and A is a base address of the buffer.
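This O-RAN variant of the mapping, including the contiguous placement of the samples that follow the first one in a section, can be sketched as follows (illustrative C with hypothetical names; the actual NIC performs this in hardware):

```c
#include <stdint.h>
#include <string.h>

/* Sketch: place one O-RAN section's samples into the host buffer.
 * The first sample lands at A + (startPrbu - X) * B and succeeding
 * samples follow contiguously, per the formula above. */
static void scatter_section(uint8_t *buf_base,     /* A: buffer 152 base */
                            uint32_t initial_sn,   /* X: initial seq. number */
                            uint32_t start_prbu,   /* SN: from field 148 */
                            uint32_t sample_size,  /* B: from udCompHdr  */
                            const uint8_t *samples,
                            uint32_t num_samples)
{
    uint8_t *dst = buf_base
                 + (uint64_t)(start_prbu - initial_sn) * sample_size;
    memcpy(dst, samples, (size_t)num_samples * sample_size);
}
```

As with the RTP case, sections can arrive in any order and still land at their correct offsets, since the destination address depends only on the header fields, not on arrival order.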
Although the embodiment described above refers specifically to transmission of data from O-RU 114 to O-DU 112, the principles of this embodiment may similarly be applied by O-RU 114 in buffering data transmitted by O-DU 112.
It will thus be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
This application is a continuation-in-part of U.S. patent application Ser. No. 15/473,668, filed Mar. 30, 2017, which claims the benefit of U.S. Provisional Patent Application 62/457,919, filed Feb. 12, 2017, which is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
7733464 | David et al. | Jun 2010 | B2 |
7881496 | Camilleri et al. | Feb 2011 | B2 |
8693551 | Zheludkov et al. | Apr 2014 | B2 |
9131235 | Zheludkov et al. | Sep 2015 | B2 |
9451266 | Zheludkov et al. | Sep 2016 | B2 |
20020041089 | Yasui | Apr 2002 | A1 |
20040146203 | Yoshimura et al. | Jul 2004 | A1 |
20040165091 | Takemura et al. | Aug 2004 | A1 |
20060180670 | Acosta et al. | Aug 2006 | A1 |
20070211157 | Humpoletz et al. | Sep 2007 | A1 |
20070296849 | Sano et al. | Dec 2007 | A1 |
20090021612 | Hamilton, Jr. et al. | Jan 2009 | A1 |
20090034633 | Rodriguez | Feb 2009 | A1 |
20090074079 | Lee | Mar 2009 | A1 |
20090153699 | Satoh et al. | Jun 2009 | A1 |
20090244288 | Fujimoto et al. | Oct 2009 | A1 |
20100149393 | Zarnowski et al. | Jun 2010 | A1 |
20110283156 | Hiie | Nov 2011 | A1 |
20130329006 | Boles et al. | Dec 2013 | A1 |
20150026542 | Brennum | Jan 2015 | A1 |
20160080755 | Toma | Mar 2016 | A1 |
20160277473 | Botsford | Sep 2016 | A1 |
20170171167 | Suzuki | Jun 2017 | A1 |
20200076521 | Hammond | Mar 2020 | A1 |
Entry |
---|
Wikipedia, “Common Public Radio Interface”, 1 page, Apr. 28, 2017 (downloaded from https://web.archive.org/web/20190620212239/https://en.wikipedia.org/wiki/Common_Public_Radio_lnterface). |
O-RAN Alliance, “O-RAN Fronthaul Working Group: Control, User and Synchronization Plane Specification”, ORAN-WG4.CUS.0-v01.00 Technical Specification, pp. 1-189, year 2019. |
Wikipedia, “evolved Common Public Radio Interface (eCPRI)”, pp. 1-3, May 13, 2019 (downloaded from https://web.archive.org/web/20190513130801/https://wiki.wireshark.org/eCPRI). |
Main Concept, “MainConcept Accelerates HEVC Encoding with NVIDIA RTX GPUs”, newsletter, pp. 1-4, Apr. 8, 2019 downloaded from https://www.mainconcept.com/company/news/news-article/article/mainconcept-accelerates-hevc-encoding-with-nvidia-rtx-gpus.html. |
U.S. Appl. No. 16/291,023 Office Action dated Nov. 20, 2020. |
U.S. Appl. No. 16/442,581 Office Action dated Nov. 30, 2020. |
U.S. Appl. No. 16/850,036 Office Action dated Sep. 8, 2021. |
Number | Date | Country | |
---|---|---|---|
20200092229 A1 | Mar 2020 | US |
Number | Date | Country | |
---|---|---|---|
62457919 | Feb 2017 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15473668 | Mar 2017 | US |
Child | 16693302 | US |