The present invention relates to the field of communication network systems for node-to-node transmission of data, and more particularly to a method for playout buffering and retransmission in a distribution system for live content over the Internet.
For Over-The-Top (OTT) distribution and Video-On-Demand (VOD) systems, normally one server streams a video stream to a large number of clients. This means that the server needs to hold a per client portion of data to handle retransmission in the case that data is lost in the communication between the server and the client. This is true whether the distribution and retransmission is handled by using TCP, or using UDP with an application layer retransmission technology. A problem connected to this is that buffering of the per client portion of data to handle retransmission scales with the number of clients served.
In a server, performance is to a large extent determined by the memory bandwidth the central processing unit (CPU) can achieve towards the memory system. The memory system in a server consists of a primary memory and a disk, where the primary memory is smaller and faster. To achieve higher performance, smaller memories may be utilized which are even faster than the primary memory and which are located closer to the CPU. These memories are called caches. In addition, the cache can be organized in levels: Level 1 cache (small, with very fast access), Level 2 cache (bigger and slower than the Level 1 cache), etc. Each cache memory is normally divided into an instruction cache (holding the “program”) and a data cache (holding the data being processed). Thus, if a program and its corresponding data reside in the cache, execution will be orders of magnitude faster than when the data must be fetched from, e.g., the disk.
For distribution of media content in the case of OTT and VOD, the distributed data is associated with each client device it is being sent to, and this type of distribution thus consumes a large amount of memory, making it likely that the data is stored/processed in a lower level of the memory hierarchy, i.e., the Level 2 cache or the primary memory.
Also, when data is stored per client, incoming data that is to be sent to many clients needs to be copied to each client. Copying data many times also slows down the performance of the server.
Data forwarding from an input interface board to an outgoing interface board does not mean that the data needs to be copied. Data that is not processed is transferred by direct memory access (DMA) to the memory, while header information, which is processed, enters the data cache. It is therefore in the interest of the implementation to process as little data as possible, in order to avoid copying and to avoid that the cache becomes “full”.
It would be advantageous to provide an improved and reliable method for playout buffering and retransmission which can handle OTT and VOD distribution to a large number of client devices, and which consumes fewer resources than the prior art solutions. This object is achieved by a method according to the present inventive concept as defined in claim 1.
Thus, in a first aspect of the present inventive concept, there is provided a method for transmitting a data stream from a server to at least two client devices comprising for at least one outgoing data stream: transmitting the data stream to the at least two client devices, and buffering a predetermined portion of the data stream in a shared buffer. Upon receiving per client requests for retransmission of data from the at least two client devices, retransmitting requested data from the shared buffer. The inventive concept is applicable in distribution of e.g. media data as in typical OTT and VOD, which is transported over unicast. Optionally, media data is distributed over multicast but with per client requests for retransmissions. For a server serving a number of client devices with the same TV-channel, in contrast to the prior art operation with TCP which provides buffering for retransmission per client, according to the present inventive concept the outgoing data stream is buffered once in a shared buffer for the many client device data streams, and the same buffered data is then utilized to serve retransmission for all client devices.
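The first aspect can be sketched in code. The following is a minimal, hypothetical illustration (class and field names are assumptions, not taken from the claims): the payload for one outgoing stream is buffered once, keyed by a sequence number common to all clients, and per-client retransmission requests are served from that single shared buffer.

```python
class SharedRetransmitBuffer:
    """Sketch of a shared buffer serving retransmissions for many clients."""

    def __init__(self):
        self._packets = {}  # sequence number -> payload bytes (one shared copy)

    def transmit(self, sock, seq, payload, clients):
        """Send one packet to every client; buffer the payload only once."""
        self._packets[seq] = payload  # single shared copy for all clients
        for addr in clients:
            sock.sendto(seq.to_bytes(4, "big") + payload, addr)

    def retransmit(self, sock, seq, client_addr):
        """Serve a per-client retransmission request from the shared buffer."""
        payload = self._packets.get(seq)
        if payload is not None:
            sock.sendto(seq.to_bytes(4, "big") + payload, client_addr)
        return payload is not None
```

In contrast to per-client TCP buffering, the buffered payload here exists once regardless of how many client devices are served.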
According to the invention, the method further comprises removing buffered data from the shared buffer based on whether the time of interest for the buffered data has passed. This may be indicated with a timer, or by utilizing time stamps distributed in the data stream. Buffered data is kept for a predetermined time (after which the data is too old to be valid, particularly in the case of video where the time of displaying the data stream has passed), or until it is likely that all devices have received the data, i.e. when no requests for retransmission of lost packets are detected within the time of interest. In this manner, the clients do not have to send an ACK to acknowledge that they have received the data; instead, a timer indicating that the time of interest for the data has passed is utilized. The timer-based solution significantly reduces the upstream traffic from the devices to the server, since messages are only sent upstream in case of lost data downstream.
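The timer-based removal can be sketched as follows (a minimal illustration with assumed names; the time-of-interest value and clock injection are choices made for the sketch): each buffered packet records when its time of interest expires, and a periodic purge drops expired entries without any per-client ACKs.

```python
import time

class ExpiringSharedBuffer:
    """Shared buffer whose entries expire when their time of interest passes."""

    def __init__(self, time_of_interest_s=2.0, clock=time.monotonic):
        self._ttl = time_of_interest_s
        self._clock = clock
        self._packets = {}  # seq -> (expiry time, payload)

    def put(self, seq, payload):
        # Stamp each packet with the time after which it is no longer useful.
        self._packets[seq] = (self._clock() + self._ttl, payload)

    def get(self, seq):
        entry = self._packets.get(seq)
        return entry[1] if entry else None

    def purge(self):
        """Remove packets whose time of interest has passed; no ACKs needed."""
        now = self._clock()
        expired = [s for s, (t, _) in self._packets.items() if t <= now]
        for s in expired:
            del self._packets[s]
        return len(expired)
```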
According to an embodiment of the method, it further comprises, prior to the step of buffering, identifying client device shared data of the data stream. By identifying client shared data in the data packets of the outgoing data stream, at least candidate data for being buffered in the shared buffer is identified. The client device shared data will comprise payload data, e.g. the media content of the TV-channel, but may also include client shared header data, timestamps, subtitles, application data like tweets or social interactive data, etc.
According to an embodiment of the method, the step of buffering is performed on payload data of the data stream (and/or header data that is common for the clients). In applications where FEC or any other payload data application is used, the whole data packet will be processed and will therefore enter the processor's cache memory. However, when the payload is not processed, the payload can remain in the primary memory or some lower level cache memory, while the header information, which needs to be processed, will enter the cache.
According to an embodiment of the method, per client header information is processed only once for each client, and processed per client data is buffered in said shared buffer. When sending a packet, the header is constructed and it is possible to point to the shared buffer of payload data. The header will be unique for each destination, but the payload data is the same. The shared buffer of payload data can be in the cache or in the primary memory; in either case it is automatically transferred to the interface card when the send instruction is issued. In the present inventive concept the payload data is stored once and is used for all clients, thereby saving primary memory and avoiding copying and potential cache misses. This allows the server to handle a significantly larger number of client network interfaces.
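A hedged sketch of this embodiment: a unique header is constructed per destination while all packets reference one shared payload. The header layout and names below are illustrative assumptions; a two-element list of the kind yielded here could serve as a gather list for a scatter/gather send call, so the payload is never copied in user space.

```python
import struct

def build_packets(seq, shared_payload, clients):
    """Yield per-destination (addr, [header, payload_view]) pairs.

    The payload is wrapped in a memoryview once and shared by reference;
    only the small per-client header is constructed per destination.
    """
    payload_view = memoryview(shared_payload)  # zero-copy reference, created once
    for client_id, addr in clients:
        # Unique per-destination header (illustrative layout: seq + client id).
        header = struct.pack("!IH", seq, client_id)
        yield addr, [header, payload_view]
```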
According to an embodiment of the method, the outgoing data stream is advantageously transported over unicast, or over multicast with individual client requests for retransmissions. In the shared buffer, the data is stored with a sequence number that is common for all clients, so when a client requests a retransmission, the server finds the payload data using the sequence number. The method with one shared buffer for several client devices can thus be used both for unicast and multicast applications.
According to an embodiment of the method, when the method is concerned with multicast distribution of video, the step of retransmitting is performed as unicast or multicast transmission. For multicast distribution of video, retransmissions may thus be sent as either unicast or multicast; this choice can depend on several aspects.
According to an embodiment of the method, unicast or multicast transmission is selected based on whether there is a network installation or subnetwork with native layer 1 or layer 2 (L1/L2) multicast (for example a stadium installation). The retransmission of a packet has the same cost whether it is unicast or multicast, meaning that if the data is requested more than once, multicast is beneficial.
According to an embodiment of the method, if the multicast transmission uses L1/L2 multicast in at least one subnetwork but not between subnetworks, multicast transmission is selected. It might be beneficial to use multicast if the requests for retransmission from clients originate in the same subnetwork, but not if they originate in different subnetworks.
According to an embodiment of the method, if a predetermined threshold number of requests is received from multiple client devices, multicast transmission is selected. If a large number of requests are issued, it is beneficial to select multicast transmission for the retransmission.
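The selection logic of the three preceding embodiments can be combined into one decision rule, sketched below. The threshold value and the predicate names are assumptions made for illustration; the rule prefers multicast when native L1/L2 multicast is available and either all requests come from one subnetwork or their number reaches the threshold.

```python
def select_retransmit_mode(requests, has_l1l2_multicast, threshold=3):
    """Choose how to send a retransmission.

    requests: subnetwork identifiers of the clients requesting the packet.
    has_l1l2_multicast: whether the installation offers native L1/L2 multicast.
    """
    if not has_l1l2_multicast:
        return "unicast"
    # Requests from a single subnetwork can be served by one multicast send.
    same_subnet = len(set(requests)) == 1 and len(requests) > 1
    if same_subnet or len(requests) >= threshold:
        return "multicast"  # one send serves all requesters at the same cost
    return "unicast"
```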
According to an embodiment of the method, payload data is buffered in a dedicated memory instead of a cache or a primary memory of the server. This is advantageous if a payload data application, e.g. FEC, is utilized, which, in a similar manner as checksum calculations on the packets, is performed on the whole payload and can be performed on the network interface card.
According to an embodiment, the method further comprises constructing packets comprising requested data from the shared buffer and per client (individual client) information, such as unique per-destination header data.
According to an embodiment of the method, removing buffered data may be performed based on received acknowledgements from all client devices.
According to a second aspect of the inventive concept, there is provided a node in a communication network comprising means for performing a method according to the present inventive concept. It may further comprise means for transmitting the outgoing data stream, e.g. a transmitter.
In a communication system arranged for node to node communication, the node comprises a memory storing computer-readable instructions, and a processor configured to execute the computer-readable instructions to perform a method according to the present inventive concept. Further, according to a third aspect of the inventive concept, there is provided a non-transitory computer readable storage medium storing computer-readable instructions executable by a processor to cause the processor to perform the method presented herein.
Embodiments of the present inventive method are preferably implemented in a distribution, media content provider, or communication system by means of software modules for signaling and for providing data transport in the form of software, a Field-Programmable Gate Array (FPGA), an Application-Specific Integrated Circuit (ASIC) or other suitable device or programmable unit adapted to perform the method of the present invention, or as an implementation in a cloud service or virtualized machine (not shown in the diagrams). The software module and/or data-transport module may be integrated in a node comprising suitable processing means and memory means, or may be implemented in an external device comprising suitable processing means and memory means and arranged for interconnection with an existing node. The node is preferably arranged at an edge node, e.g. in communication with a streaming edge server, or is integrated in or constitutes a streaming edge server.
Further objectives of, features of, and advantages with, the present invention will become apparent when studying the following detailed disclosure, the drawings and the appended claims. Those skilled in the art realize that different features of the present invention can be combined to create embodiments other than those described in the following.
The above will be better understood through the following illustrative and non-limiting detailed description of preferred embodiments of the present invention, with reference to the appended drawings, where the same reference numerals will be used for similar elements, and wherein:
All the figures are schematic, not necessarily to scale, and generally only show parts which are necessary in order to elucidate the invention, wherein other parts may be omitted or merely suggested.
Referring now to
In the distribution network system, data transmission of the data stream DS from the ingress device to the client devices 151, 152, 153 may involve transmitting e.g. video content or other media content in the form of video packets (multicast video packets), and e.g. audio packets. The data stream DS is received at the streaming edge server (playout server), here embodied by the server 101, from which the multiple client devices 151, 152, 153 located at different viewer locations request media content to display. The media content is distributed to the client devices 151, 152, 153 in separate data streams DS1, DS2, and DS3 (illustrated as DSx in
The client devices 151, 152, 153, connecting to the server to request live media content, may be e.g. different versions of smart phones, IP connectable TV-sets or computers from different manufacturers, and thus have different performance characteristics with respect to clock speed/frequencies, tolerances, etc. The client devices thus need to be synchronized to provide simultaneous playout, i.e. to provide a synchronized playout time, of the packets of their respective instances of the media stream DSx. The data stream for transmission DSx is represented as a sequence of data packets representing a contiguous stream of information, with each data packet comprising a set of payload information representative of a segment of the stream of information corresponding thereto.
In order to handle missing packets and other types of errors, communication and distribution systems employ various techniques to handle erroneously received information. The client devices may correct the erroneously received information by, amongst other techniques, retransmission techniques, which enable the erroneously received information to be retransmitted to the receiver, i.e. the client device, for example by using automatic repeat request (ARQ) or forward error correction (FEC) techniques. FEC techniques include, for example, convolutional or block coding of the data prior to modulation. FEC coding involves representing a certain number of data bits or blocks of data using a certain (greater) number of code bits or code blocks, thereby adding redundancy which permits correction of certain errors.
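The redundancy principle of FEC described above can be illustrated with the simplest possible code: a single XOR parity block over k data blocks lets the receiver rebuild any one lost block. This is only a sketch of the principle; real systems use stronger codes (e.g. Reed-Solomon block codes), and the function names here are illustrative.

```python
def xor_parity(blocks):
    """Compute one parity block over equal-length data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover(received_blocks, parity):
    """Rebuild the single missing block (the None entry) from the parity."""
    missing = bytearray(parity)
    for block in received_blocks:
        if block is not None:
            for i, b in enumerate(block):
                missing[i] ^= b
    return bytes(missing)
```

XORing the parity with every received block cancels their contributions, leaving exactly the bytes of the missing block, which is the sense in which the added redundancy permits correction of a loss without retransmission.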
If there are lost packets during data transmission, any client(s) experiencing missing packets will request retransmission (RT) of the lost packets from the server 101. In addition to retransmission, if FEC information is available, the clients can attempt to recover lost packets either first by FEC correction and then by retransmission, or first by retransmission and then by FEC correction on the newly transmitted data.
According to an embodiment of the invention the distribution system further comprises a control device for handling retransmission of media content to the client devices, which may be embodied by a separate control server 102, either arranged separately from and in communication with the (streaming) server 101, or integrated in the server 101 as illustrated in
Embodiments of a method for transmitting a data stream from a server to at least two client devices which comprises providing packet loss recovery for transmission of a data stream DSx in a packet-based network according to the present inventive concept will now be described with reference to
In
According to an embodiment of the method, see
According to an embodiment of the method, as illustrated in
The method steps of the present method as illustrated in
The outgoing data stream (step S220) may be transmitted using unicast, or using multicast with per client requests for retransmissions. When using multicast distribution of the outgoing data stream DS (step S220), e.g. video, the step of retransmission (step S230) may be performed as unicast or multicast transmission.
According to an embodiment of the method, as illustrated with dashed box S229 in
Referring now to
When a client requests a retransmission, as illustrated in
When buffered CDS in the shared buffer 501 is no longer useful, according to an embodiment of the method as illustrated in
In yet another embodiment, as illustrated in
Although illustrative embodiments of the present inventive concept have been described herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of this invention.
Number | Date | Country | Kind |
---|---|---|---|
1651289-9 | Sep 2016 | SE | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2017/074816 | 9/29/2017 | WO | 00 |