§ 1.1 Field of the Invention
The present invention concerns the live streaming of coded information. More specifically, the present invention concerns providing incentives in peer-to-peer (“P2P”) video live streaming.
§ 1.2 Related Art
With the widespread adoption of broadband residential access, P2P video live streaming has become a popular service on the Internet. Several streaming systems have been successfully deployed, serving tens of thousands of simultaneous users who watch channels at rates between 300 kbps and 1 Mbps. (See, e.g., “PPLive.” [Online]. Available: http://www.pplive.com/; “PPStream.” [Online]. Available: http://www.ppstream.com/; X. Zhang, J. Liu, B. Li, and P. Yum, “DONet: A data-driven overlay network for efficient live media streaming,” Proc. of IEEE INFOCOM (2005); “UUsee.” [Online]. Available: http://www.uusee.com/; and X. Hei, C. Liang, J. Liang, Y. Liu, and K. W. Ross, “A measurement study of a large-scale P2P IPTV system,” IEEE Trans. on Multimedia, to appear.) Existing systems typically use single-stream encoding and a mesh-pull design.
In these existing live streaming systems, free-riding is a potential problem, similar to what has been observed in P2P file sharing systems. (See, e.g., P. Golle, K. Leyton-Brown, and I. Mironov, “Incentives for sharing in peer-to-peer networks,” ACM Conference on Electronic Commerce (2001).) The present inventors believe that the problem of free-riders is primarily due to the fact that the existing systems provide the same video quality to all peers, no matter what their individual upload contributions are. However, in many P2P streaming systems, participating peers will have different upload bandwidths. For example, institutional peers might have high-bandwidth access, while residential peers with DSL and cable access will have relatively low upload bandwidth.
An efficient incentive mechanism improves the performance of a P2P system. For example, BitTorrent (See, e.g., “BitTorrent.” [Online]. Available: http://www.bittorrent.com/.), one of the most popular P2P file downloading systems, largely solves the incentive problem by adopting a tit-for-tat strategy. (See, e.g., B. Cohen, “Incentives build robustness in BitTorrent,” 1st Workshop on Economics of Peer-to-Peer Systems (June 2003).) However, in file downloading systems such as BitTorrent, the primary performance measure is download time. Designing incentive mechanisms for live video streaming is more challenging than for traditional file downloading, since the video content must be received at a steady rate and played out against real-time deadlines.
In view of the foregoing, it would be useful to avoid free-riding in P2P live streaming systems.
At least some embodiments consistent with the present invention use multiple sub-streams (referred to as multi-stream coding) in a P2P live streaming system. With multi-stream coding, a video is coded into multiple layers (or more generally, substreams), where more received layers (or more generally, substreams) provide better video quality. At least some embodiments consistent with the present invention perform a distributed incentive strategy in which each peer (1) measures its download rates from its neighbors, and (2) reciprocates by providing a larger fraction of its upload rate to the neighbors from which it is downloading at higher rates. Consequently, a peer with a higher upload contribution is more likely to obtain a larger share of its neighbors' upload rates, thus receiving more layers (or more generally, substreams), which in turn provides better video quality. Conversely, a peer with a lower upload contribution is more likely to receive fewer layers (or more generally, substreams), which in turn provides a lower (but perhaps still acceptable) video quality. A free-rider with no contribution to its neighbors is less likely to be served by its neighbors. The peers may evaluate their neighbors' upload contributions in a distributed manner.
At least some embodiments consistent with the present invention use layered multi-stream coding (though other multi-stream coding schemes such as multiple description coding (“MDC”) may be used instead). With layered video coding, video is coded into layers with nested dependency such that a higher layer refines the video generated by lower layers. A higher layer can be decoded only if all the lower layers are available. Consequently, more layers provide better video quality. With MDC, a video is encoded into several descriptions with equal importance. When a video is encoded into M descriptions, the combination of any m≦M different descriptions is decodable, with more descriptions providing better video quality.
An exemplary environment in which embodiments consistent with the present invention may be used is introduced in § 4.1. Then, exemplary methods for performing operations consistent with the present invention are described in § 4.2. Next, exemplary apparatus for performing various operations and generating and/or storing various information in a manner consistent with the present invention are described in § 4.3. Refinements, alternatives and extensions are described in § 4.4. Finally, some conclusions about such exemplary embodiments are provided in § 4.5.
Embodiments consistent with the present invention may be used in the context of a mesh-pull delivery architecture.
Embodiments consistent with the present invention may use multi-stream coded video. MDC and layered coding are two examples of multi-stream coding. Each is introduced below.
With MDC, a video is encoded into several descriptions with equal importance. When a video is encoded into M descriptions, the combinations of any m≦M different descriptions are decodable, with more descriptions providing better video quality. Although MDC simplifies the system design, the redundancy significantly reduces the video coding efficiency (with the existing MDC coders). Consequently, at the same transmission bit rate, the video quality of MDC is much lower than that of a single layer coded video or a layered video.
Unlike MDC, with layered coding, a video is encoded into several layers with nested dependence. A higher layer can be decoded only if all the lower layers are available. Generally, layered coding does not lose much video coding efficiency compared with single layer coding. However, layered coding in P2P video streaming presents some challenges. First, since the different layers have unequal importance due to the nested dependence of layers, scheduling algorithms should assign higher priorities to lower layers than to higher layers. Second, a peer with a higher contribution should receive more layers than a peer with a lower contribution. However, it is possible that no neighbor of a peer holds the higher layers desired. In this case, to obtain the layers commensurate with its upload contribution, a high-bandwidth peer should be able to locate other high-bandwidth peers that have higher layers.
Supplier peer 210 includes state exchange protocol operation(s) 212, layer chunk request receiving operation(s) 214, supplier-side scheduling operation(s) 216, layer chunk transmission operation(s) 218, stored layer chunks 220, stored receiver peer information 222 and stored receiver peer request queues 224. Receiver peer 230 includes state exchange protocol operation(s) 232, receiver-side scheduling operation(s) 234 (which may include layer chunk selection operation(s) 236 and supply peer selection operation(s) 238), layer chunk buffer management operation(s) 240, layer chunk request transmission operation(s) 242, stored supplier peer information 250, stored layer chunk buffer 252 and stored layer chunk request (buffer) state information 254.
A video may be divided into media chunks, and the chunks may be made available at an origin server (not shown). For example, the origin server encodes a video into L layers (or more generally, substreams). Each layer (or more generally, substream) is further divided into Layer (or Substream) Chunks (“LCs”) of Δ seconds. If MD coding is used, description chunks (“DCs”) of Δ seconds may be generated instead.
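For illustration only (this sketch is not part of the described embodiments, and all names in it are hypothetical), the following Python snippet shows how a video coded into L layers and divided into Δ-second chunks yields layer chunks identified by a (layer, chunk time) pair:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LayerChunk:
    layer: int        # layer index, 0 = base layer
    chunk_time: int   # index of the delta-second interval this chunk covers

def make_layer_chunks(num_layers, video_duration_s, delta_s):
    """Enumerate the (layer, chunk_time) identifiers an origin server would produce."""
    num_chunk_times = int(video_duration_s // delta_s)
    return [LayerChunk(l, t) for t in range(num_chunk_times) for l in range(num_layers)]

# Example: 4 layers, a 60-second video, 1-second chunks -> 240 layer chunks.
chunks = make_layer_chunks(num_layers=4, video_duration_s=60, delta_s=1.0)
print(len(chunks))  # 240
```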
When a (receiver) peer 230 wants to view the video, it may obtain a list of peers currently receiving the video and establish partnerships with several peers. Peers may exchange chunk availability information with their neighbors and request chunks that they need. For example, state exchange protocol operation(s) 212 and 232 may be used to exchange such information as indicated by communications 260. This information may be stored as receiver peer information 222 and/or supplier peer information 250.
Partner peers may then exchange LCs with each other during a discovery (e.g., an initial) period. During the discovery period, the peers may exchange LCs at an unconstrained rate (or attempt to do so). Each peer may measure its download rates from its neighbors. A peer reciprocates to its neighbors by providing a larger fraction of its upload rate to the neighbors from which it is downloading at the highest rates. In this manner, a peer with higher upload contribution is likely to be rewarded with more LCs, and hence more layers (or descriptions, or more generally, substreams) and better quality.
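As a simplified illustration of this reciprocation step (an assumption-based sketch, not the claimed incentive mechanism itself), the following snippet splits a peer's upload rate among its neighbors in proportion to the download rates it measured from them during the previous period:

```python
def allocate_upload(upload_rate_kbps, measured_download_kbps):
    """Split this peer's upload rate among neighbors in proportion to how much
    each neighbor uploaded to us during the last discovery period.

    measured_download_kbps: dict mapping neighbor id -> download rate measured from it.
    Returns a dict mapping neighbor id -> share of our upload rate (kbps).
    """
    total = sum(measured_download_kbps.values())
    if total == 0:
        # No neighbor contributed anything (e.g., all free-riders): nothing is reciprocated.
        return {n: 0.0 for n in measured_download_kbps}
    return {n: upload_rate_kbps * d / total for n, d in measured_download_kbps.items()}

# A neighbor that contributed twice as much receives twice the share.
print(allocate_upload(1000, {"peer_a": 400, "peer_b": 200, "free_rider": 0}))
```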
Referring again to the exemplary architecture described above, the receiver peer 230 may prioritize its requests for missing but needed LCs. For example, using layer chunk request state information 254 (and supplier peer information 250), the receiver-side scheduling operation(s) 234 can generate layer chunk requests to be transmitted by layer chunk request transmission operation(s) 242. More specifically, layer chunk selection operation(s) 236 may determine a layer chunk to request, and supply peer selection operation(s) 238 may determine a peer from which to request the layer chunk. The LC request is transmitted to the selected supplier peer as indicated by communication 270.
Various exemplary methods that may be used to perform at least some of the foregoing operations are described in § 4.2 below.
As can be appreciated from the foregoing, in at least some embodiments consistent with the present invention, a peer uploads more to the neighbors from which it downloads more. To this end, a supplier may maintain a different request queue for each receiver. (Recall, e.g., the stored receiver peer request queues 224.)
Referring again to the supplier-side scheduling operation(s) 216, in at least some embodiments consistent with the present invention, at each serving opportunity, supplier peer n selects the request queue of receiver peer i to serve with probability pn,i:

pn,i = ((dn,i)^γ · In,i) / (Σk∈Kn (dn,k)^γ · In,k)    (1)

where Kn denotes the set including all neighbors of peer n, and dn,i is the estimated download rate of supplier peer n from receiver peer i. This estimated download rate dn,i can be obtained based on the number of LCs delivered from peer i to peer n during the previous discovery time period. The value In,i equals 0 if the request queue of receiver peer i is empty, and equals 1 otherwise. The value γ is a parameter that controls the sensitivity of peer n to receiver peer i's contribution. Therefore, a supplier peer in this example serves its neighbors in a weighted fashion, and a receiver peer that uploads more to the supplier has a higher probability of being served, thereby consuming a larger share of the supplier's uplink bandwidth.
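A minimal sketch of this weighted supplier-side selection is shown below. The function name, data structures, and the fallback behavior when every contributing receiver's queue is empty are illustrative assumptions rather than part of the described embodiments:

```python
import random

def pick_receiver_to_serve(download_rates, queue_nonempty, gamma=1.0):
    """Choose which receiver's request queue to serve next.

    download_rates: dict receiver -> estimated rate d_{n,i} downloaded from that receiver.
    queue_nonempty: dict receiver -> bool (the indicator I_{n,i}).
    A receiver is chosen with probability proportional to (d_{n,i} ** gamma) * I_{n,i}.
    Returns the chosen receiver id, or None if every queue is empty.
    """
    weights = {r: (download_rates.get(r, 0.0) ** gamma) if queue_nonempty.get(r) else 0.0
               for r in queue_nonempty}
    total = sum(weights.values())
    if total == 0.0:
        # Assumed tie-breaking choice (not specified in the text): if only
        # zero-contribution receivers have pending requests, pick uniformly among them.
        candidates = [r for r, nonempty in queue_nonempty.items() if nonempty]
        return random.choice(candidates) if candidates else None
    return random.choices(list(weights), weights=list(weights.values()), k=1)[0]
```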
Referring again to the receiver-side scheduling operation(s) 234, exemplary receiver-side scheduling consistent with the present invention is now described.
In at least some embodiments consistent with the present invention, receiver peers request layer chunks at the beginnings of “rounds.” At the beginning of each round, the receiver peer has a buffer state, as described above. Given this buffer state, the receiver peer decides which layer chunks to request. The receiver peer may request layer chunks from the current time up until a window of B chunk times into the future.
Normally, at the beginning of each round, there are several available but not requested layer chunks. The receiver-side scheduling operations may be used to determine (1) which layer chunk should be requested first (recall, e.g., the layer chunk selection operation(s) 236) and (2) the supplier peer from which to request it (recall, e.g., the supply peer selection operation(s) 238).
In at least some exemplary embodiments consistent with the present invention that use layered coding, a score is assigned to each available but not requested Layer Chunk (LC). Layer chunks with lower scores get requested with higher priority. Three factors of an LC may be jointly considered to calculate the score. These factors may include (i) the layer index of the layer chunk, (ii) the playback deadline of the layer chunk, and (iii) the rarity of the layer chunk among the receiver's peers. Alternatively, one or two of these factors may be considered rather than all three. Indeed, other factors may be considered instead of, or in addition to, any of these three factors.
With layered coding, since a layer can be decoded only if all the lower layers have been received, lower-layer layer chunks should be given higher priority. Also, a layer chunk in danger of being delayed beyond the playback deadline should be given a higher priority. Additionally, rare chunks should be given higher priority. Generally, for a peer n, the score Sl,tn of a layer chunk with layer index l and playback deadline t can be expressed as:
Sl,tn = G(l, t, λ)    (2)
where λ is the number of duplicates of this layer chunk available in the neighbors of peer n. G(·) may be a monotonically increasing function of l, t and λ. For example, a weighted linear combination of these three factors may be used, mathematically expressed as:

Sl,tn = w1·(l/L) + w2·((t − t0)/B) + w3·(λ/K)    (3)

where t0 is the current time, L is the total number of layers, B is the buffer size in terms of LC times, and K is the number of neighbors of peer n. The values w1, w2 and w3 are the weights of the three factors.
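For illustration, the following sketch computes such a weighted linear score for each available but not requested LC and orders the candidates from lowest to highest score. The function names, tuple layout, and default weights are assumptions made for the example:

```python
def layer_chunk_score(layer, deadline, duplicates, t0, num_layers, buffer_len, num_neighbors,
                      w1=1.0, w2=1.0, w3=1.0):
    """Weighted linear score; lower scores are requested first."""
    return (w1 * layer / num_layers
            + w2 * (deadline - t0) / buffer_len
            + w3 * duplicates / num_neighbors)

def order_candidates(candidates, t0, num_layers, buffer_len, num_neighbors):
    """candidates: iterable of (layer, deadline, duplicates) tuples for available,
    not-yet-requested layer chunks. Returns them sorted by increasing score."""
    return sorted(candidates,
                  key=lambda c: layer_chunk_score(c[0], c[1], c[2], t0,
                                                  num_layers, buffer_len, num_neighbors))

# Example: a rare base-layer chunk close to its deadline scores lower (higher priority)
# than a higher-layer chunk that is plentiful among neighbors and due much later.
print(order_candidates([(2, 18, 5), (0, 12, 1)], t0=10,
                       num_layers=4, buffer_len=10, num_neighbors=6))
```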
A receiver peer requests the available but not requested layer chunks in order, from the lowest score to the highest score. Given a layer chunk, a receiver peer n can estimate the time τn,k at which this layer chunk would be received from its neighbor peer k by the following equation:

τn,k = t0 + (Cn,k + 1)·r·Δ/dn,k

where t0 is the current time, Cn,k is the number of outstanding requests from n to k, r is the bit rate of one layer, Δ is the duration of an LC, and dn,k is the estimated download rate of peer n from peer k. In this exemplary heuristic for layered coding, a receiver peer sends a layer chunk request to the neighbor that can deliver this layer chunk at the earliest time. Therefore, if τn,k is the minimum over the neighbors holding the layer chunk, and if it is less than the playback deadline of the layer chunk, then receiver peer n sends the request to peer k. If, on the other hand, this layer chunk cannot meet its playback deadline, it is not requested, and the receiver requests the LC with the next higher score (provided it meets the foregoing test).
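The supplier-selection step just described might be sketched as follows, using the estimated delivery time in the form reconstructed above (the exact formula, the function name, and the data structures are assumptions for illustration):

```python
def choose_supplier(chunk_deadline, t0, outstanding, download_rate, layer_rate, delta, holders):
    """Pick the neighbor expected to deliver a layer chunk earliest, if any can
    meet the chunk's playback deadline.

    outstanding: dict neighbor -> requests already pending at that neighbor (C_{n,k}).
    download_rate: dict neighbor -> estimated download rate d_{n,k} (bits/s).
    layer_rate: bit rate r of one layer (bits/s); delta: LC duration (s).
    holders: neighbors that advertise this layer chunk in their buffer maps.
    Returns (neighbor, estimated_arrival) or None if the deadline cannot be met.
    """
    best = None
    for k in holders:
        if download_rate.get(k, 0.0) <= 0.0:
            continue  # no usable rate estimate for this neighbor
        tau = t0 + (outstanding.get(k, 0) + 1) * layer_rate * delta / download_rate[k]
        if best is None or tau < best[1]:
            best = (k, tau)
    if best is None or best[1] > chunk_deadline:
        return None  # skip this chunk; the receiver moves on to the next-scored LC
    return best
```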
The one or more processors 810 may execute machine-executable instructions (e.g., C or C++ running on the Solaris operating system available from Sun Microsystems Inc. of Palo Alto, Calif., or the Linux operating system widely available from a number of vendors such as Red Hat, Inc. of Durham, N.C.) to perform one or more aspects of the present invention. For example, one or more software modules, when executed by a processor, may be used to perform one or more of the operations and/or methods described above.
In one embodiment, the machine 800 may be one or more conventional personal computers or servers. In this case, the processing units 810 may be one or more microprocessors. The bus 840 may include a system bus. The storage devices 820 may include system memory, such as read only memory (ROM) and/or random access memory (RAM). The storage devices 820 may also include a hard disk drive for reading from and writing to a hard disk, a magnetic disk drive for reading from or writing to a (e.g., removable) magnetic disk, and an optical disk drive for reading from or writing to a removable (magneto-) optical disk such as a compact disk or other (magneto-) optical media.
A user may enter commands and information into the personal computer through input devices 832, such as a keyboard and pointing device (e.g., a mouse) for example. Other input devices such as a microphone, a joystick, a game pad, a satellite dish, a scanner, or the like, may also (or alternatively) be included. These and other input devices are often connected to the processing unit(s) 810 through an appropriate interface 830 coupled to the system bus 840. The output devices 834 may include a monitor or other type of display device, which may also be connected to the system bus 840 via an appropriate interface. In addition to (or instead of) the monitor, the personal computer may include other (peripheral) output devices (not shown), such as speakers and printers for example.
The operations of peers, such as those described above, may be performed on one or more computers. Such computers may communicate with each other via one or more networks, such as the Internet for example.
Alternatively, or in addition, the various operations and acts described above may be implemented in hardware (e.g., integrated circuits, application specific integrated circuits (ASICs), field programmable gate or logic arrays (FPGAs), etc.).
§ 4.4.1 Alternatives to Layered Coding (and Layer Chunks)
Although the foregoing exemplary embodiments pertained to layer chunks, at least some embodiments consistent with the present invention may be used in the context of substreams (e.g., MD-FEC descriptions), or substream chunks. For example, the '248 provisional describes how some embodiments consistent with the present invention may be used in the context of various other substreams.
§ 4.4.2 Alternative Receiver Peer Scheduling
§ 4.4.2.1 First Alternative Receiver Peer Scheduling
Although the foregoing receiver peer scheduling operation(s) were described in the context of a layer coded video system, the present invention is not limited to such systems. For example, in MDC systems, when requesting Description Chunks (“DCs”), there are two conflicting goals. On one hand, it is desirable to have as many descriptions as possible that can be played out in the near future. On the other hand, it is desirable that the number of descriptions played out not vary greatly from one description chunk time to the next (so that the video quality does not fluctuate). At least some exemplary embodiments consistent with the claimed invention may use the following heuristic for prioritizing the description chunks that are to be requested at the beginning of a round. The heuristic starts at the first description chunk time (scanning from the current time towards the end of the window) that has no buffered or requested description chunks. Assuming there is such a description chunk time, a description chunk is chosen randomly from those that are available.
Continuing with the heuristic, since a description chunk is now requested for the sixth description chunk time, the first description chunk time with no buffered (X) or requested (−) description chunks will now be the seventh description chunk time. For this description chunk time, there are four available description chunks. One of these four available description chunks is chosen at random. After selecting the description chunk, it is possible that more than one neighbor may have this description chunk. The receiver peer sends the request to the neighbor from which the description chunk is expected to be received at the earliest time.
This heuristic continues until every description chunk time has at least one buffered (X) or requested (−) description chunk, neglecting description chunk times that have no available description chunks (that is, all circles). The heuristic is then continued, giving priority to description chunk times that have exactly one buffered or requested DC. Since with MDC, the video quality improves with an increasing number of descriptions, this heuristic seeks to incrementally obtain more description chunks, once all description chunk time slots have either (a) a buffered description chunk (X), (b) a requested description chunk (−), or (c) no available description chunks (all O's).
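The scanning behavior of this heuristic might be sketched as follows. The buffer-state encoding loosely mirrors the X/−/O notation used above, but the function name and data structures are illustrative assumptions:

```python
import random

BUFFERED, REQUESTED, AVAILABLE, UNAVAILABLE = "buffered", "requested", "available", "unavailable"

def next_description_request(buffer_state, window):
    """One scanning step of the heuristic.

    buffer_state: dict chunk_time -> dict description_id -> one of the four states above
                  (buffered ~ 'X', requested ~ '-', unavailable at any neighbor ~ 'O').
    window: chunk times from the current time to the end of the request window, in order.
    Scans for the chunk time holding the fewest buffered/requested descriptions (starting
    at zero), skipping times where nothing is available, and returns a
    (chunk_time, description_id) pair chosen at random among the available descriptions
    there, or None when nothing remains to request.
    """
    max_floor = max((len(states) for states in buffer_state.values()), default=0)
    for floor in range(max_floor + 1):
        for t in window:
            states = buffer_state.get(t, {})
            held = sum(1 for s in states.values() if s in (BUFFERED, REQUESTED))
            available = [d for d, s in states.items() if s == AVAILABLE]
            if held == floor and available:
                return t, random.choice(available)
    return None
```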
A receiver peer n can estimate the time τn,k at which a description chunk would be received from its neighbor peer k by the following equation:

τn,k = t0 + (Cn,k + 1)·r·Δ/dn,k

where t0 is the current time, Cn,k is the number of outstanding requests from n to k, r is the bitrate of one description, Δ is the duration of a DC, and dn,k is the estimated download rate of peer n from peer k. Note that in the foregoing exemplary heuristic, a peer sends a description chunk request to the neighbor that can deliver the requested description chunk at the earliest time. Therefore, if τn,k is the minimum, and if it is less than this description chunk's playback deadline, then peer n sends the request to peer k. If this description chunk cannot meet its playback deadline, it is not requested, and the heuristic goes on to the next description chunk.
Given the total potential download rate of a peer, which depends on its upload contribution, if a peer requests description chunks with a particular description chunk time too aggressively and receives a video rate higher than its potential download rate at that time, then this peer is more likely to receive a lower video rate at other times. This leads to video quality variation at the receiver peer. In this exemplary embodiment consistent with the present invention, to reduce quality variation, a peer k may maintain a threshold Γk, which is set to the halfway point between the current download rate Dk and the potential download rate Uk (expressed in descriptions), plus an aggressivity factor α, such that:

Γk = (Dk + Uk)/(2r) + α

where r is the bitrate of one description. At a given description chunk time, if more than Γk description chunks have already been buffered or requested, peer k does not request any additional description chunk at that time. When Dk is much less than Uk, Dk is more likely to increase, or has more room to increase. Thus the peer requests more aggressively. After receiving the requested description chunks, a receiver peer with a high Uk can potentially contribute more to its neighbors. In turn, this improves the probability that this peer can be served by its neighbors. Therefore, it leads to an increased download bitrate for peer k.
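A small sketch of this request-gating rule is given below, using the threshold in the form reconstructed above (the exact formula and all names are assumptions for illustration):

```python
def request_threshold(current_rate, potential_rate, description_rate, alpha=1.0):
    """Gamma_k: maximum number of descriptions to hold or request per description chunk
    time, based on the midpoint of the current and potential download rates (converted
    to a number of descriptions) plus an aggressivity factor alpha."""
    return (current_rate + potential_rate) / (2.0 * description_rate) + alpha

def may_request_more(already_held_or_requested, current_rate, potential_rate,
                     description_rate, alpha=1.0):
    """True if peer k may request another description at this description chunk time."""
    return already_held_or_requested <= request_threshold(current_rate, potential_rate,
                                                          description_rate, alpha)

# A peer downloading at 600 kbps with a 1500 kbps potential rate and 300 kbps descriptions
# may hold or request up to (600 + 1500) / (2 * 300) + 1 = 4.5 descriptions per chunk time.
print(request_threshold(600, 1500, 300, alpha=1.0))  # 4.5
```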
The fast-start phase accelerates the convergence process from a lower Dk to the target receiving rate Dk=Uk. When Dk reaches Uk, a peer requests at most Γk = Uk/r + α descriptions per description chunk time, which is comparable to its uplink bandwidth contribution. This is a bandwidth probing procedure. (Note that this procedure is not used in the double-queue approach described in § 4.4.3.)
§ 4.4.2.2 Second Alternative Receiver Peer Scheduling
In at least some embodiments consistent with the claimed invention, as a receiver, each peer may adopt a random request algorithm to request the LCs from its neighbors.
Ties (for example, when more than one supplier can serve a selected LC) can be handled in many ways. One possibility is for the receiver to randomly select a supplier for the LC. The receiver performs the above receiver-side scheduling algorithm periodically, based on the most recent buffer maps from its suppliers.
Each request has a deadline T. If within time T (after the request has been sent) the LC has not been received, the receiver assumes that this LC cannot be served on time and requests it again based on the same algorithm in the next request period. On the supplier side, if the supplier cannot serve a request within time T after it has received this request, it will automatically remove this request from its request queue.
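Because the random request algorithm itself is detailed in the '248 provisional rather than here, the following sketch only illustrates the deadline bookkeeping described in this paragraph; the random supplier choice and all names are placeholders:

```python
import random
import time

def schedule_requests(missing_chunks, availability, pending, deadline_s, now=None):
    """Illustrative bookkeeping for the deadline-based re-request behavior.

    missing_chunks: chunk ids this receiver still needs.
    availability: dict chunk id -> list of neighbors advertising it.
    pending: dict chunk id -> time the outstanding request was sent.
    deadline_s: request deadline T; an unanswered request expires after T seconds.
    Returns a list of (chunk, supplier) requests to (re)send this period.
    """
    now = time.monotonic() if now is None else now
    expired = [c for c, sent in pending.items() if now - sent > deadline_s]
    for c in expired:
        del pending[c]  # assume the chunk cannot be served on time; eligible to re-request
    to_send = []
    for c in missing_chunks:
        if c in pending or not availability.get(c):
            continue
        supplier = random.choice(availability[c])  # stand-in for the random request choice
        pending[c] = now
        to_send.append((c, supplier))
    return to_send
```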
§ 4.4.3 Download Rate Probing
In the exemplary receiver-side scheduling described above, peer n schedules requests to peer k based on the current estimated dn,k. In particular, a layer chunk (or description chunk, or substream chunk) that is expected to arrive after its deadline is not requested. However, dn,k may not match the potential rate that peer n can receive from peer k. If a receiver requests too conservatively, it may not fully utilize the potential bandwidth that its supplier peer has available to allocate to it. On the other hand, if this receiver requests too aggressively, many of its requested layer chunks (or description chunks, or substream chunks) might not be served before their deadlines. In this case, the layer chunk (or description chunk, or substream chunk) requests in a later request round may be blocked by the earlier requests, no matter what their priorities are. An exemplary refinement over the foregoing supplier/receiver scheduling methods is now described.
With embodiments using this exemplary alternative scheduling, in addition to sending the regular requests based on the receiver-side scheduling described above, each receiver peer also sends some probing requests to its neighbors. A probing request has a lower priority to be served than a regular request. For each receiver, a supplier therefore maintains both a regular requests queue 910 and a probing requests queue 920.
In both of these queues 910/920, the requests are served in a FIFO manner. For each receiver, the requests in the probing requests queue 920 can be served only if the regular requests queue 910 is empty. Therefore, a receiver's probing requests are served only after all of its outstanding regular requests have been served.
In some embodiments consistent with the present invention, these probing requests can be piggybacked in the message of sending regular requests such that the probing requests do not introduce much additional overhead.
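For each receiver, the supplier-side double queue might be sketched as follows (class and method names are illustrative assumptions):

```python
from collections import deque

class SupplierQueues:
    """Per-receiver double queue: probing requests are served only when the
    regular queue is empty, and both queues are FIFO."""

    def __init__(self):
        self.regular = deque()
        self.probing = deque()

    def enqueue(self, request, probing=False):
        (self.probing if probing else self.regular).append(request)

    def next_request(self):
        if self.regular:
            return self.regular.popleft()
        if self.probing:
            return self.probing.popleft()
        return None  # nothing pending for this receiver

# Usage: a supplier keeps one SupplierQueues per receiver and, after picking which
# receiver to serve (e.g., with the weighted selection of equation (1)), pops that
# receiver's next_request().
q = SupplierQueues()
q.enqueue("LC(0, 42)")
q.enqueue("LC(3, 45)", probing=True)
print(q.next_request(), q.next_request(), q.next_request())  # regular first, then probing
```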
§ 4.4.4 Partner Selection
In order to locate a better partner with higher uplink bandwidth and more content, in at least some embodiments consistent with the present invention, a peer periodically replaces its least-contributing partner with a new peer. To establish a stable relationship, it is desirable for the peer to locate a partner with similar uplink bandwidth. If the uplink bandwidth of the supplying partner is too low, the receiving peer can only receive the lowest layers from the partner. Thus, this supplying partner may not hold the receiving peer's missing layers. If, on the other hand, the uplink bandwidth of the supplying partner is too high, it is likely that other neighbors of the supplying partner have higher uplink bandwidth than the receiving peer, so that the receiving peer is likely to be dropped by the supplying partner after several rounds.
To locate a suitable partner, one simple approach is to randomly select a peer from the active peers in the system. To accelerate this process, a peer can select a partner from a pre-screened set of candidates. For example, a peer can choose a partner within the same ISP or with the same type of access. Such peers are expected to have similar uplink bandwidths. Another approach to locating a candidate is based on peers' buffer maps. (See, e.g., X. Hei, Y. Liu, and K. W. Ross, “Inferring network-wide quality in P2P live streaming systems,” submitted.) In such an embodiment, when two peers intend to establish a partnership, they first exchange their buffer maps. Since a peer with a high contribution is expected to receive more layers, and vice-versa, the uplink bandwidth contribution of a peer can be evaluated based on its buffer map.
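A rough sketch of this partner-replacement idea is shown below. Both helper functions, and in particular the buffer-map-based proxy for a candidate's uplink contribution, are illustrative assumptions rather than the described embodiments:

```python
def least_contributing_partner(contributions):
    """contributions: dict partner -> measured download rate from that partner.
    Returns the partner to consider replacing (lowest contribution)."""
    return min(contributions, key=contributions.get) if contributions else None

def estimate_uplink_from_buffer_map(buffer_map, num_layers):
    """Rough proxy for a candidate's uplink contribution: the average fraction of
    layers it holds per chunk time in its advertised buffer map.

    buffer_map: dict chunk_time -> set of layer indices held at that time."""
    if not buffer_map:
        return 0.0
    return sum(len(layers) for layers in buffer_map.values()) / (len(buffer_map) * num_layers)

# A peer replaces its weakest partner with a candidate whose buffer map suggests
# an uplink contribution similar to its own.
print(least_contributing_partner({"a": 120.0, "b": 40.0, "c": 300.0}))             # 'b'
print(estimate_uplink_from_buffer_map({10: {0, 1, 2}, 11: {0, 1}}, num_layers=4))  # 0.625
```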
Embodiments consistent with the present invention can provide incentives to encourage peers to increase their upload rates, thereby increasing the aggregate upload rate of the P2P system. In particular, the incentive scheme strongly penalizes free-riding. If multi-stream coding is used, the video quality received by the peers can be adapted to the upload capacity of the system, and the initial playback delay can be decreased. Specifically, when a peer starts to view a video, it can choose to receive only one substream instead of the entire stream, which can reduce the initial playback delay significantly. If single-layer coding is used and the received rate is less than the video bit rate, there is severe video quality degradation. Specifically, as described in the '248 provisional, due to long-term bandwidth fluctuation, the system cannot support all peers, and some peers will experience severe video quality degradation. With substream trading, weak peers are “selected” to experience video degradation, while strong peers experience no video degradation. The received video quality is also robust to peer departures. With at least some embodiments consistent with the present invention, a peer's received rate is commensurate with its upload rate, no matter how many peers are in the system.
Benefit is claimed to the filing date of both: U.S. Provisional Patent Application Ser. No. 60/937,807 (“the '807 provisional”), titled “USING MULTI-STREAM CODING IN P2P LIVE STREAMING,” filed on Jun. 28, 2007 and listing Zhengye LIU, Shivendra S. PANWAR, Keith W. ROSS, Yanming SHEN and Yao WANG as inventors; and U.S. Provisional Patent Application Ser. No. 61/075,248 (“the '248 provisional”), titled “SUBSTREAM TRADING: TOWARDS AN OPEN P2P LIVE STREAMING SYSTEM,” filed on Jun. 24, 2008 and listing Zhengye LIU, Shivendra S. PANWAR, Keith W. ROSS, Yanming SHEN and Yao WANG as inventors. The '807 and '248 provisionals are incorporated herein by reference. However, the scope of the claimed invention is not limited by any requirements of any specific embodiments described in the '807 and the '248 provisionals.
The United States Government may have certain rights in this invention pursuant to a grant awarded by the National Science Foundation. Specifically, the United States Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Contract or Grant No. 0435228 awarded by the National Science Foundation.
Number | Date | Country
--- | --- | ---
60/937,807 | Jun 2007 | US
61/075,248 | Jun 2008 | US