MULTICAST BULK TRANSFER SYSTEM

Information

  • Patent Application
  • Publication Number
    20120320732
  • Date Filed
    April 09, 2012
  • Date Published
    December 20, 2012
Abstract
A data transfer system and method are described for providing transfer of data over a network between a sender and a plurality of receivers. Data is sent over a network to the plurality of receivers by the sender at a specified rate regardless of data loss. A receiver that identifies a lost block of data transmits a retransmission request to the sender. The sender responds to one or more retransmission requests by transmitting a repair packet to all receivers that contains blocks of data for which retransmission requests have been received.
Description
BACKGROUND

IP multicast is a technique for one-to-many and many-to-many real-time communication over an IP infrastructure in a network. It scales to a larger receiver population by not requiring prior knowledge of who or how many receivers there are. Multicast uses network infrastructure efficiently by requiring the source to send a packet only once, even if it needs to be delivered to a large number of receivers. The nodes in the network (typically network switches and routers) take care of replicating the packet to reach multiple receivers such that messages are sent over each link of the network only once. The most common low-level protocol to use multicast addressing is User Datagram Protocol (UDP). By its nature, UDP is not reliable—messages may be lost or delivered out of order. Reliable multicast protocols such as Pragmatic General Multicast (PGM) have been developed to add loss detection and retransmission on top of IP multicast.
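
As a minimal, self-contained illustration of UDP-based IP multicast reception in Java (independent of the protocol described below), the following sketch joins a multicast group and reads datagrams; the group address, port, and interface name reuse example values that appear later in this document and are not significant in themselves.

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.MulticastSocket;
import java.net.NetworkInterface;

// Minimal example: join a multicast group over UDP and read datagrams.
// UDP gives no ordering or delivery guarantee; reliability must be added on top.
public class MulticastReceiveExample {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("224.224.224.0"); // example group
        int port = 10000;
        try (MulticastSocket socket = new MulticastSocket(port)) {
            NetworkInterface nic = NetworkInterface.getByName("eth0"); // may be null; default interface is used
            socket.joinGroup(new InetSocketAddress(group, port), nic);
            byte[] buf = new byte[1500]; // roughly one Ethernet-MTU-sized packet
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet); // blocks until a datagram arrives
                System.out.printf("received %d bytes from %s%n",
                        packet.getLength(), packet.getAddress());
            }
        }
    }
}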







DETAILED DESCRIPTION OF THE INVENTION

ascp-mc as described herein extends point-to-point transfer applications with a new point-to-multipoint transfer protocol based on IP multicast that enables data distribution to thousands of receivers in a scalable and efficient way. It solves typical large-scale distribution problems in areas such as digital cinema, digital signage, and VOD distribution to cable head-ends:

    • Send files of any size, including very large files
    • Transfer concurrently to thousands of receivers
    • Make the most efficient use of the existing network infrastructure, such as satellite broadcast networks, to transport IP multicast packets


Design Principles of fasp-mc


fasp-mc is the reliable IP multicast transport protocol implemented in ascp-mc. It implements proprietary mechanisms that ensure reliability, scalability, transport efficiency and security. It is not compatible with or based on any of the publicly available reliable multicast protocol definitions such as PGM.


fasp-mc is based on the following principles:

    • Shortest end-to-end distribution time
        • No pre- and post-transmission delays (e.g. due to FEC coding/decoding)
        • Sender (almost) never waits for receiver feedback—it always sends data at target send rate
        • Continuous repair of packet losses while sending data
    • Optimal efficiency
        • No transmission of unneeded packets
        • Single repair packet recovers different losses on different receivers (use of FEC)
        • Minimized feedback traffic
    • Scalability
        • Support large receiver sets
        • Support transfers of very large files and file sets


fasp-mc does not implement any congestion avoidance or dynamic rate control mechanisms. It always sends data at a configurable target rate. On the one hand, this is because congestion avoidance in multicast data distribution is highly complex. On the other hand, most multicast-enabled network environments offer ad hoc Quality of Service functionality with bandwidth reservations (satellite networks offer this natively).
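
As a rough sketch (and not the actual sender implementation, which is not disclosed here), sending at a fixed target rate can be realized by pacing packets against a time budget derived from the target bandwidth and the packet size:

// Illustrative pacing loop: send packets at a fixed target rate regardless of feedback.
// Class and method names are hypothetical; a production sender would pace more precisely
// (e.g. batching packets or busy-waiting) than Thread.sleep allows.
public class FixedRatePacer {
    private final long nanosPerPacket;

    public FixedRatePacer(long targetBitsPerSecond, int packetSizeBytes) {
        // time budget per packet = packet size in bits / target rate, expressed in nanoseconds
        this.nanosPerPacket =
                (long) (packetSizeBytes * 8L * 1_000_000_000L / (double) targetBitsPerSecond);
    }

    /** Sends every packet produced by the iterator, sleeping as needed to hold the target rate. */
    public void run(java.util.Iterator<byte[]> packets,
                    java.util.function.Consumer<byte[]> send) throws InterruptedException {
        long next = System.nanoTime();
        while (packets.hasNext()) {
            send.accept(packets.next());   // e.g. write the packet to the multicast socket
            next += nanosPerPacket;        // schedule the next send time
            long sleepNanos = next - System.nanoTime();
            if (sleepNanos > 0) {
                Thread.sleep(sleepNanos / 1_000_000L, (int) (sleepNanos % 1_000_000L));
            }
        }
    }
}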


High-Level Protocol Description


An overview of the IP multicast transport protocol is as follows. (A detailed description of the multicast algorithm as described above is given in Appendix I entitled “Multicast Transmission Algorithm Specification.”) The transmission of a file or a set of files with fasp-mc occurs in distinct stages, called transmission phases. A transmission always starts with the session initiation phase, in which the sender announces the transmission and receivers join that transmission. During the continuous repair phase, the actual data transmission takes place, including repair of regular data loss. An optional out-of-window repair phase is started if some receivers have not received all data during the continuous repair phase.
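
Purely as an illustration, the phase sequence just described could be modeled with a small Java enum; the phase names mirror the text, while the transition helper is an assumption rather than the actual protocol logic.

// Hypothetical model of the transmission phases described above.
public enum TransmissionPhase {
    SESSION_INITIATION,   // sender announces the transmission, receivers join
    CONTINUOUS_REPAIR,    // actual data transfer plus repair of regular data loss
    OUT_OF_WINDOW_REPAIR, // optional: repair data missed during the continuous phase
    TERMINATED;

    /** The phase that normally follows this one; OOW repair is skipped if nothing is missing. */
    public TransmissionPhase next(boolean allReceiversComplete) {
        switch (this) {
            case SESSION_INITIATION:   return CONTINUOUS_REPAIR;
            case CONTINUOUS_REPAIR:    return allReceiversComplete ? TERMINATED : OUT_OF_WINDOW_REPAIR;
            case OUT_OF_WINDOW_REPAIR: return TERMINATED;
            default:                   return TERMINATED;
        }
    }
}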


In order to transition from one phase to the next, the protocol uses feedback from the receivers and timing information (timeouts). In addition, there is an exclusion strategy that defines how to treat “misbehaving” receivers, and a termination strategy that decides when to terminate a transmission.


Session Initiation


The session initiation phase is used by the sender to announce a transmission and determine which receivers will join the session. During this phase, the sender sends out session announcement packets periodically for a predetermined period of time. The packets contain basic session information such as the feedback server address, packet size, segment size, etc. Upon reception of a session initiation packet, a receiver that wants to join the session responds with a session acknowledgement packet, which allows the sender to know which receivers take part in the transmission. Optionally, the session initiation packets may include transfer metadata that receivers need before file data can be received. See the next section for details.
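
As an illustration only, the announced session information and the corresponding acknowledgement might be modeled as follows; the field names are drawn from the prose and the option lists later in this document, but the actual packet formats are not specified here.

// Hypothetical contents of the session initiation exchange (not the real wire format).
public record SessionAnnouncement(
        String sessionId,
        String feedbackAddress,   // where receivers send feedback (cf. fb-addr)
        int feedbackPort,         // cf. fb-port
        int packetSizeBytes,      // cf. Z / packet-size
        int segmentSizePackets,   // cf. fec-k, e.g. 80 packets per segment
        byte[] inlineMetadata) {  // small transfer metadata may be inlined; null otherwise

    /** A receiver that wants to join answers with an acknowledgement carrying its host ID. */
    public record Ack(String sessionId, long receiverHostId) {}
}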


Metadata and Data Subsessions


The transfer of file data can only occur once the receivers know where and how to store that data. The “where and how” is called transfer metadata and contains information like destination file paths and names, file sizes, access rights, etc.


Transfer metadata is usually relatively small. In that case, it is inlined in the session initiation packet. However, if many files or entire directory trees are distributed within a single transmission, the metadata can become much larger. In that case, transfer metadata is transferred just like a regular file within the so-called metadata subsession. This subsession works exactly like the data subsession that transports the actual file data subsequently.
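
A minimal sketch of the inline-versus-subsession decision, assuming a hypothetical size threshold (the actual limit is not given in this document):

// Hypothetical helper: inline transfer metadata in the session initiation packet when it fits,
// otherwise ship it like a regular file in a metadata subsession that precedes the data.
public final class MetadataPlanner {
    /** Per-file metadata mentioned in the text: destination path, size, access rights, ... */
    public record FileMeta(String destinationPath, long sizeBytes, int accessRights) {}

    public enum Delivery { INLINE_IN_SESSION_INITIATION, METADATA_SUBSESSION }

    public static Delivery planDelivery(byte[] encodedMetadata, int maxInlineBytes) {
        // maxInlineBytes is an assumed limit, e.g. what fits into one announcement packet
        return encodedMetadata.length <= maxInlineBytes
                ? Delivery.INLINE_IN_SESSION_INITIATION
                : Delivery.METADATA_SUBSESSION;
    }
}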


Subsession


A subsession consists of a continuous repair phase and an optional out-of-window repair phase.


Continuous Repair


The continuous repair phase is the main data transfer phase, during which the sender sends file data (also called original data) at its target bandwidth, unless it needs to resend lost packets. Packet loss information is determined from feedback that receivers send back to the sender regularly.


During continuous repair, data is handled such that all disk I/O is sequential (for performance reasons). This applies to both the sender and the receivers. The sender keeps data that it has read from disk and sent out on the network in a read cache in memory until all receivers have acknowledged receiving it. This prevents the sender from having to go back and re-read data from disk when packet loss is signalled. The receiver caches received non-contiguous data chunks in memory and flushes them to disk only when the missing packets are received, again with the goal of optimizing disk throughput through sequential disk access.
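
A simplified sketch of the receiver-side caching just described, assuming block-numbered packets; the class and method names are illustrative and not part of ascp-mc.

import java.util.TreeMap;

// Illustrative receiver cache: buffer out-of-order blocks in memory and flush them to disk
// only in sequential order, so disk writes stay sequential even when packets arrive with gaps.
public class ReceiverWriteCache {
    private final TreeMap<Long, byte[]> pending = new TreeMap<>(); // block number -> data
    private long nextBlockToWrite = 0;

    /** Abstraction over the destination file; write() is assumed to append sequentially. */
    public interface BlockSink { void write(long blockNumber, byte[] data); }

    /** Called for every received block; flushes any run of blocks that is now contiguous. */
    public void onBlockReceived(long blockNumber, byte[] data, BlockSink disk) {
        if (blockNumber >= nextBlockToWrite) {
            pending.put(blockNumber, data);       // cache non-contiguous data in memory
        }
        while (pending.containsKey(nextBlockToWrite)) {
            disk.write(nextBlockToWrite, pending.remove(nextBlockToWrite)); // sequential flush
            nextBlockToWrite++;
        }
    }
}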


The sender and receiver caches work in a similar fashion to the sliding window in protocols such as TCP, with two differences:


1. ascp-mc uses the sliding window mainly for performance reasons (sequential disk I/O) rather than for packet numbering and reliability.

2. The ascp-mc sender will not wait for full reception of the entire window by all receivers. If the window (cache) reaches a configurable maximum size, the window is simply moved along and new data is sent, potentially leaving some receivers behind with losses. These are repaired later during the out-of-window repair phase.

To repair lost packets, the system does not simply resend the corresponding originals. Instead, a repair packet is generated that has the potential to repair multiple uncorrelated (different) losses at different receivers. Repair packets are calculated based on forward error correction (FEC) techniques, in which all packets in a segment (80 packets by default) are combined in such a way that a single repair packet can repair any single packet loss within the segment (and 2 repair packets can repair any two losses, etc.). With large receiver populations, this repair technique can reduce the amount of sent-out repair data by orders of magnitude.
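
The Definitions section below notes that repair data is computed with a small-block Reed-Solomon code over a segment. A full Reed-Solomon encoder is beyond the scope of a sketch, but the single-loss case can be illustrated with a simple XOR parity packet over one segment; this is a deliberate simplification for illustration, not the code actually used.

// Simplified illustration of segment-based repair: one parity packet XORed over all packets
// of a segment can reconstruct any single lost packet of that segment, whichever one it is,
// so the same repair packet fixes different single losses at different receivers.
// (A Reed-Solomon code generalizes this so that k repair packets can repair any k losses.)
public class XorSegmentRepair {
    /** Build one repair packet by XORing all packets of a segment (all assumed equal length). */
    public static byte[] buildRepairPacket(byte[][] segmentPackets) {
        byte[] repair = new byte[segmentPackets[0].length];
        for (byte[] packet : segmentPackets) {
            for (int i = 0; i < repair.length; i++) {
                repair[i] ^= packet[i];
            }
        }
        return repair;
    }

    /** Recover a single missing packet from the repair packet and the surviving packets. */
    public static byte[] recoverMissingPacket(byte[] repair, byte[][] survivingPackets) {
        byte[] missing = repair.clone();
        for (byte[] packet : survivingPackets) {
            for (int i = 0; i < missing.length; i++) {
                missing[i] ^= packet[i];
            }
        }
        return missing;
    }
}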


Out-of-Window Repair (OOW Repair)


Because the sender and receiver caches are limited in size, receivers might still be missing packets after the continuous repair phase. These missing packets are repaired during the OOW repair phase (so named because they fell outside of the repair window of the continuous phase).


The OOW repair phase might not be able to reach the target send rate due to random disk access. However, the continuous repair phase is usually able to repair most if not all packet loss, so the amount of data to retransmit is very limited.


Exclusion Strategy


The fasp-mc protocol is designed to transfer data efficiently to thousands of receivers simultaneously. But what should it do if the various receivers behave in completely different manners (e.g. because of highly variable network conditions or different hardware performance)? What should happen if 2 out of 100 receivers observe much higher packet loss than the rest?


The answer is provided by the exclusion strategy implemented in the ascp-mc sender. It excludes individual receivers from the transmission in order to guarantee best performance for the remaining (majority of) receivers. The exclusion is based on the losses signalled by an individual receiver in relation to the loss signalled by the majority of receivers.


The exclusion strategy is configured with the following sender command line options:

    • es-lossfactor PERCENT: the loss factor. A receiver is excluded if its losses are higher by this factor than the average loss of “the majority of receivers”. Default: 150%
    • es-maj PERCENT: the definition of what is considered “the majority of receivers”. Default: 90%
    • es-startms N: the earliest milestone after which the strategy starts excluding receivers. During the beginning of the transfer, the exclusion strategy remains inactive in order to prevent “false” exclusions due to statistically insufficient data. Default: 10


With the default values, a receiver is excluded from the transmission if its losses exceed 1.5 times the average loss of the majority (90%) of receivers.
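
One plausible reading of that rule, sketched in Java with the defaults above; treating “the majority of receivers” as the lowest-loss fraction of receivers is an assumption, as the exact definition is not spelled out here.

import java.util.List;

// Illustrative exclusion check based on the es-* options: a receiver is excluded when its loss
// exceeds es-lossfactor times the average loss of the best es-maj fraction of receivers.
public class ExclusionStrategy {
    private final double lossFactor;    // es-lossfactor, default 1.5 (150%)
    private final double majorityShare; // es-maj, default 0.9 (90%)

    public ExclusionStrategy(double lossFactor, double majorityShare) {
        this.lossFactor = lossFactor;
        this.majorityShare = majorityShare;
    }

    /** @param lossRates loss rates (lost/sent) observed so far for all non-excluded receivers */
    public boolean shouldExclude(double receiverLossRate, List<Double> lossRates) {
        List<Double> sorted = lossRates.stream().sorted().toList();
        int majorityCount = Math.max(1, (int) Math.floor(sorted.size() * majorityShare));
        double majorityAverage = sorted.subList(0, majorityCount).stream()
                .mapToDouble(Double::doubleValue).average().orElse(0.0);
        return receiverLossRate > lossFactor * majorityAverage;
    }
}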


Excluding a receiver does not mean that the receiver cannot participate in the transmission anymore. Instead, the sender simply ignores an excluded receiver's feedback, i.e. it will not attempt to repair that receiver's losses. The receiver continues receiving all packets sent out by the sender and might actually be re-integrated in the transmission as a regular receiver if its average loss rate approaches that of the majority.


Termination Strategy


The decision of when to terminate a transmission is simple in the optimal case: as soon as all receivers have received the entire content. With many heterogeneous receivers and external transmission constraints (e.g. a deadline), the problem becomes more subtle.


As with the exclusion strategy, the ascp-mc sender allows the termination behavior to be configured. The termination strategy offers the following criteria and conditions for ending a transfer:


Coverage


Terminate the transmission if the coverage (i.e. the number of receivers that have successfully received all content) reaches a given percentage.


Time


Terminate the transmission if a given absolute time (deadline) is reached or if the transmission has been running for a maximum duration.


Volume


Terminate the transmission if the total amount of bytes sent exceeds a given threshold, e.g. 1.5 times the file data.


Several of these criteria can be used at the same time; the first match terminates the transmission.


The coverage condition provides additional criteria that allow the transmission to continue in order to increase the coverage further, but only if the cost for doing so is reasonable. This is expressed as a coverage increase that needs to be reached without exceeding a given transmission volume increase.

    • ts-coverage PERCENT: the minimum coverage to reach, i.e. the minimum percentage of receivers that must successfully receive the transmission before stopping it.
    • ts-coverage-inc PERCENT: the minimum expected reception coverage increase per send volume increase for continuing the transmission after the minimum coverage is reached.
    • ts-deadline DATETIME: the absolute date and time until which the transmission must terminate. E.g. “2011-02-25 10:00:00 GMT”
    • ts-max-duration DURATION: the maximum acceptable duration of the transmission. E.g. 4 h, 00:30:00
    • ts-max-idle-ms N: the maximum number of milestones without feedback from receivers. Exceeding this limit will terminate the transmission.
    • ts-max-sendvol PERCENT: the maximum send volume as a percentage of the original file data. When reached, the transmission is terminated.
    • ts-sendvol-inc PERCENT: the send volume increase. See ts-coverage-inc.
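
As an illustration, the listed criteria could be combined as follows, with the first matching criterion ending the transmission; the field names mirror the options above, but the composition logic itself is an assumption.

import java.time.Duration;
import java.time.Instant;

// Hypothetical evaluation of the termination criteria listed above.
public class TerminationStrategy {
    double minCoverage;        // ts-coverage, as a fraction of receivers that must be complete
    Instant deadline;          // ts-deadline, or null if unset
    Duration maxDuration;      // ts-max-duration, or null if unset
    double maxSendVolumeRatio; // ts-max-sendvol, as a multiple of the original file data

    public boolean shouldTerminate(double coverage, Instant startedAt,
                                   long bytesSent, long originalBytes, Instant now) {
        if (coverage >= minCoverage) return true;                                  // coverage reached
        if (deadline != null && !now.isBefore(deadline)) return true;              // absolute deadline
        if (maxDuration != null
                && Duration.between(startedAt, now).compareTo(maxDuration) >= 0) {
            return true;                                                           // maximum duration
        }
        if ((double) bytesSent / originalBytes >= maxSendVolumeRatio) return true; // volume threshold
        return false;
    }
}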


Feedback Rate


A main challenge in scaling transmissions to thousands of receivers is minimizing the feedback traffic from receivers. This is achieved with multiple techniques, including reducing the feedback information itself, reducing the frequency of feedback messages, and feedback suppression. ascp-mc offers a very simple mechanism to tune the system in this respect by exposing a maximum aggregate feedback bandwidth that the sender is willing to accept. Based on that value, the algorithm tunes all necessary parameters accordingly.
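
For instance, the aggregate feedback budget could be translated into a per-receiver feedback interval roughly as follows; this is a back-of-the-envelope sketch, not the actual tuning algorithm.

// Back-of-the-envelope sketch: spread a total feedback bandwidth budget across all receivers
// by stretching the interval between feedback messages as the receiver population grows.
public class FeedbackTuning {
    /**
     * @param aggregateFeedbackBps maximum feedback traffic the sender accepts, in bits per second
     * @param feedbackPacketBytes  size of one feedback message on the wire
     * @param receiverCount        number of receivers currently in the session
     * @return minimum milliseconds each receiver should wait between feedback messages
     */
    public static long feedbackIntervalMillis(long aggregateFeedbackBps,
                                              int feedbackPacketBytes,
                                              int receiverCount) {
        double packetsPerSecondTotal = aggregateFeedbackBps / (feedbackPacketBytes * 8.0);
        double packetsPerSecondPerReceiver = packetsPerSecondTotal / Math.max(1, receiverCount);
        return (long) Math.ceil(1000.0 / Math.max(packetsPerSecondPerReceiver, 1e-9));
    }
}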


ascp-mc Application


The ascp-mc application is available as two command-line programs (sender.sh and receiver.sh) for Linux operating systems, bundled in the same tarball.


Installation


Extract the tarball to an installation directory; the tarball includes the necessary Java Runtime Environment (JRE). Extraction creates the following directory structure:



















/opt
    /aspera
        /ascp-mc
            ...
            sender.sh
            receiver.sh
            /config
            logs
            /jre










Command Line Usage


In order to execute a transmission, the receivers have to be started first:


receiver.sh [OPTIONAL ARGUMENTS]


Subsequently, the sender is launched with the desired source file or directory to be transmitted.


sender.sh [OPTIONAL ARGUMENTS] SOURCE [DESTINATION]


Receiver Command Line Options


java MulticastReceiverApp [options ...]

    • bind-mc IPADDR_OR_NAME: Sets the IP address or the name of the network interface for multicast traffic. E.g. 10.65.22.9, eth0
    • cache-size BYTES: Sets the maximum amount of memory to allocate for the data cache. E.g. 1 GB, 1.5 g, 500 mb
    • docroot-dir PATH: The docroot directory path. All received files will be stored within this directory.
    • id (--host-id) ID: Force the host ID to the given value. If not specified, a random ID is chosen automatically. E.g. 1, 524563774234
    • idle-timeout DURATION: The maximum duration a receiver waits for packets from a sender before terminating the transfer. E.g. 2 m (=2 minutes), 00:05:00 (=5 minutes)
    • mc-group ADDRESS:PORT: Sets the multicast group. E.g. 224.224.224.0:10000
    • net-receive-buffer BYTES: Sets the socket receive buffer size. E.g. 1 MB, 1 m, 512000
    • net-send-buffer BYTES: Sets the socket send buffer size. E.g. 1 MB, 1 m, 512000
    • no-io: Disables disk I/O.
    • overwrite: Allow files to be overwritten with newly received data.
    • progress-frequency DURATION: The frequency at which the receiver should log progress. E.g. 2 s (=2 seconds), 00:00:10 (=10 seconds)
    • runtime-dir PATH: The path of the directory where runtime information is stored.
    • thread-count N: Sets the number of threads to use. Defaults to twice the number of CPU cores. E.g. 4, 8
    • Z (--packet-size) BYTES: Sets the packet size in bytes, including the size of UDP/IP headers. E.g. 1 KB, 1 k, 1500b, 1500.
    • h (--help): Prints this help message.


Sender Command Line Options


java MulticastSenderApp [options ...] SOURCE DESTINATION


SOURCE: The source path. May denote a single file or a directory. In the latter case, all files in the directory and any subdirectories are transferred.

DESTINATION: The destination path, relative to the receivers' docroots.

    • bind-mc IPADDR_OR_NAME: Sets the IP address or the name of the network interface for multicast traffic. E.g. 10.65.22.9, eth0
    • cache-size BYTES: Sets the maximum amount of memory to allocate for the data cache. E.g. 1 GB, 1.5 g, 500 mb
    • es-lossfactor PERCENT: Exclusion strategy: the loss factor. A receiver is excluded if its losses are higher by this factor than the average loss of “the majority of receivers”. E.g. 1.5, 200%
    • es-maj PERCENT: Exclusion strategy: the definition of what is considered the “majority of receivers”. E.g. 80%, 0.55
    • es-startms N: Exclusion strategy: the earliest milestone after which the strategy starts excluding receivers. During the beginning of the transfer, the exclusion strategy remains inactive in order to prevent “false” exclusions due to statistically insufficient data.
    • fb-addr ADDR: The IP address or DNS hostname to advertise to receivers for feedback traffic. E.g. 10.34.23.1, mcsender.myorg.org
    • fb-port N: The UDP port for feedback traffic
    • fec-k N: The number of packets in a segment.
    • fec-procs N: The number of processors for parallel FEC encoding.
    • id (--host-id) ID: Force the host ID to the given value. If not specified, a random ID is chosen automatically. E.g. 1, 524563774234
    • mc-group ADDRESS:PORT: Sets the multicast group. E.g. 224.224.224.0:10000
    • net-receive-buffer BYTES: Sets the socket receive buffer size. E.g. 1 MB, 1 m, 512000
    • net-send-buffer BYTES: Sets the socket send buffer size. E.g. 1 MB, 1 m, 512000
    • no-io: Disables disk I/O.
    • read-ahead N: The number of segments to “read ahead” when sending file data.
    • runtime-dir PATH: The path of the directory where runtime information is stored.
    • si-duration N: The duration of the session initiation process in milliseconds.
    • si-period N: The time between repeated session initiation messages in milliseconds.
    • thread-count N: Sets the number of threads to use. Defaults to twice the number of CPU cores. E.g. 4, 8
    • ts-coverage PERCENT: Termination strategy: the minimum coverage to reach, i.e. the minimum percentage of receivers that must successfully receive the transmission before stopping it.
    • ts-coverage-inc PERCENT: Termination strategy: the minimum expected reception coverage increase per send volume increase for continuing the transmission after the minimum coverage is reached.
    • ts-deadline DATETIME: Termination strategy: absolute date and time until which the transmission must terminate. E.g. “2011-02-25 10:00:00 GMT”
    • ts-max-duration DURATION: Termination strategy: the maximum acceptable duration of the transmission. E.g. 4 h, 00:30:00
    • ts-max-idle-ms DURATION: Termination strategy: the maximum acceptable duration without feedback from receivers. Exceeding this limit will terminate the transmission. E.g. 4 h, 00:30:00
    • ts-max-sendvol PERCENT: Termination strategy: the maximum send volume as percentage of the original file data. When reached, the transmission is terminated.
    • ts-sendvol-inc PERCENT: Termination strategy: the send volume increase. See ts-coverage-inc.
    • ttl N: The time-to-live of sent-out multicast packets. E.g. 2, 16
    • Z (--packet-size) BYTES: Sets the packet size in bytes, including the size of UDP/IP headers. E.g. 1 KB, 1 k, 1500b, 1500.
    • h (--help): Prints this help message.
    • l (--target-bw) BANDWIDTH: Sets the target send bandwidth in bits per second. E.g. 10 Mbps, 10 mbps, 10 m, 1000000


DEFINITIONS

Original data—Unmodified data of the source files to transfer.


FEC repair data—Repair data calculated from original data using a small-block, forward-error-correction code (Reed-Solomon) on a segment of data.


Original repair data—Retransmission of original data. Used when an entire segment needs to be resent.


Segment—A fixed-size chunk of the original data. Original data is (virtually) divided into segments, which serve as the basis for FEC computations.


The invention has been described in conjunction with the foregoing specific embodiments. It should be appreciated that those embodiments may also be combined in any manner considered to be advantageous. Also, many alternatives, variations, and modifications will be apparent to those of ordinary skill in the art. Other such alternatives, variations, and modifications are intended to fall within the scope of the following appended claims.

Claims
  • 1. A data transfer system for providing transfer of data over a network between a sender and a plurality of receivers, comprising: a sender configured to transmit blocks of data identified by sequence numbers at a specified injection rate as determined by an injection rate input; a plurality of receivers, each configured to receive the blocks of data transmitted by the sender and to detect blocks that have been lost in transmission; wherein the receivers are configured to send retransmission requests to the sender when lost blocks are detected; wherein the sender is configured to keep blocks of data that it has sent out on the network in a read cache in a memory until all receivers have acknowledged their reception; wherein the sender is configured to respond to retransmission requests by transmitting a repair packet to all receivers that contains blocks of data for which retransmission requests have been received.
RELATED APPLICATIONS

This application claims priority to U.S. Provisional Patent Application Ser. No. 61/473,270, filed on Apr. 8, 2011, which is hereby incorporated by reference in its entirety. This application incorporates by reference the specifications of the following applications in their entirety: U.S. Provisional Patent Application Ser. No. 60/638,806, filed Dec. 24, 2004, entitled: “BULK DATA TRANSFER PROTOCOL FOR RELIABLE, HIGH-PERFORMANCE DATA TRANSFER WITH INDEPENDENT, FULLY MANAGEABLE RATE CONTROL”; U.S. patent application Ser. No. 11/317,663, filed Dec. 23, 2005, entitled “BULK DATA TRANSFER”; and U.S. patent application Ser. No. 11/849,782, filed Sep. 4, 2007, entitled “METHOD AND SYSTEM FOR AGGREGATE BANDWIDTH CONTROL”.

Provisional Applications (1)
Number Date Country
61473270 Apr 2011 US