POD-BASED SERVER BACKEND INFRASTRUCTURE FOR PEER-ASSISTED APPLICATIONS

Abstract
A backend server for a peer-to-peer network manages nodes according to a pod-based management scheme. Each pod comprises a plurality of nodes and only nodes within the same pod can directly share data in the peer-to-peer network. The server dynamically allocates nodes to pods and dynamically allocates computing resources for pushing data to the pods based on characteristics of the incoming data stream and performance of the peer-to-peer sharing. By dynamically adjusting the pod structure and the resources available to the pods based on monitored characteristics, the server can optimize performance of the peer-to-peer network.
Description
BACKGROUND

1. Field of the Invention


The invention relates generally to the field of peer-to-peer networking and more particularly to a backend server infrastructure for a peer-to-peer network.


2. Description of the Related Arts


A peer-to-peer network is a networking architecture for sharing information by creating direct connections between “nodes” without requiring all information to pass through a centralized server. Conventionally, one or more backend servers manage all participating nodes as a single monolithic network. By increasing the number of nodes on the network, more connections can exist between nodes and data may flow more smoothly through the network. However, as the peer-to-peer network grows (i.e., the number of nodes increases), the amount of information managed by individual nodes and backend servers increases. Furthermore, as the size of the network increases, the number of nodes grows relative to the available server resources, complicating processing and hindering performance of the servers.


Some conventional implementations use “super-nodes,” which are nodes on the network that are specially configured to offload stress from the servers. Super-nodes generally operate in a similar fashion to normal nodes, except a super-node supports a larger number of connections to other nodes than a regular node. Super-nodes are conventionally distributed nodes that are not controlled or managed by backend servers in any fashion. However, solutions using super-nodes still have a number of shortcomings. First, super-nodes add complexity to a network. Second, inappropriate nodes may present themselves as candidates for super-node status (thus degrading the network). Third, super-nodes may go offline with little or no notice (also degrading the network). Fourth, super-nodes are likely more vulnerable to hacking than typical nodes or servers which would be controlled by a network operator. Thus, conventional backend architectures fail to provide the performance and robustness desirable in peer-to-peer networking applications.


SUMMARY

A server manages a peer-to-peer network and distributes streaming digital content. The server assigns a plurality of nodes to a plurality of “pods.” Each node is assigned to only one pod. A node shares data with other nodes within its pod and does not share data with nodes outside its pod. The server also determines an allocation of server resources to the plurality of pods. The server resources may be, for example, processing or networking resources that determine how much data a server directly provides to the nodes within a particular pod. The server receives, from a streaming data source, a given data block from a sequence of data blocks. The server pushes the given data block to each of the plurality of pods according to the allocation of server resources. The server monitors performance of peer-to-peer sharing of the given data block within each pod and re-allocates server resources between pods of the peer-to-peer network based on the monitored performance of the peer-to-peer sharing. Furthermore, the server may monitor the bit rate of the incoming data stream and adjust the allocation of server resources based on the monitored bit rate.


The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.





BRIEF DESCRIPTION OF THE DRAWINGS

The teachings of the embodiments of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings.



FIG. 1 illustrates a networking environment for peer-to-peer data sharing using a pod-based backend management structure, in accordance with an embodiment of the present invention.



FIG. 2 illustrates a distribution tree structure for modeling distribution of data blocks in the peer-to-peer network, in accordance with an embodiment of the present invention.



FIG. 3 is a flowchart illustrating a process for managing distribution of content from a server in the pod-based backend management structure, in accordance with an embodiment of the present invention.



FIG. 4 illustrates an example architecture for a computing device for use as a server or node in a peer-to-peer network, in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION

Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” or “an embodiment” in various places in the specification are not necessarily all referring to the same embodiment.


Some portions of the detailed description that follows are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps (instructions) leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared and otherwise manipulated. It is convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. Furthermore, it is also convenient at times, to refer to certain arrangements of steps requiring physical manipulations or transformation of physical quantities or representations of physical quantities as modules or code devices, without loss of generality.


However, all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device (such as a specific computing machine), that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.


Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by a variety of operating systems. The invention can also be in a computer program product which can be executed on a computing system.


The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the purposes, e.g., a specific computer, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but is not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. Memory can include any of the above and/or other devices that can store information/data/programs. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.


The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the method steps. The structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references below to specific languages are provided for disclosure of enablement and best mode of the present invention.


In addition, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention.


Overview


FIG. 1 illustrates an example of a peer-to-peer network environment for pod-based management. In the illustrated embodiment, a server 102 receives streaming data 104 for distribution throughout the peer-to-peer network 100. The server 102 assigns a plurality of sub-servers 106 (e.g., sub-servers 106-A, 106-B) to each manage a group of nodes referred to herein as “pods” 108 (e.g., pods 108-A, 108-B). Although only a small number of nodes are illustrated in each pod 108 for clarity and convenience, a typical pod 108 can contain, for example, tens, hundreds, or thousands of nodes. The minimum size of a pod 108 could be tens or hundreds of nodes, ensuring flexibility in management and resource allocation.


The server 102 is a computing device or cluster of computing devices that provides management and coordination functions for the peer-to-peer network 100. Furthermore, the server 102 is responsible for providing the streaming data 104 to a subset of nodes for distribution throughout the peer-to-peer network. In one embodiment, the server 102 comprises a pod allocation module 112 and a plurality of sub-servers 106. Each sub-server 106 is responsible for managing nodes in its respective pod 108. Furthermore, each sub-server 106 is responsible for pushing the streaming data 104 to one or more nodes in its respective pod 108. These nodes then share the data 104 with other nodes, which may then share the data with other nodes, and so on, so that the data is distributed throughout the pod 108. Generally, each sub-server comprises a virtual or physical set of computing resources allocated to supporting a particular pod 108. For example, in one embodiment, each sub-server 106 comprises a single physical computer system. Alternatively, a sub-server 106 may comprise a cluster of physical computer systems that collectively manage a pod 108. In another embodiment, a sub-server 106 may comprise a physical or virtual portion of a computer system, such that one physical computer system includes multiple sub-servers 106.
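For purposes of illustration only, the following sketch (in Python, using hypothetical names, fields, and default values that are not drawn from this disclosure) shows one possible way to represent the server 102, the sub-servers 106, the pods 108, and the nodes described above; it is a minimal sketch of the relationships rather than a definitive implementation.

```python
# Illustrative sketch only; all names, fields, and default values are assumptions.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class Node:
    node_id: str
    pod_id: str = ""               # each node is assigned to exactly one pod 108


@dataclass
class Pod:
    pod_id: str
    nodes: Dict[str, Node] = field(default_factory=dict)


@dataclass
class SubServer:
    sub_server_id: str
    pod: Pod                       # the pod (or sub-pod) this sub-server 106 manages
    cpu_shares: float = 1.0        # computing resources allocated to this sub-server
    bandwidth_mbps: float = 100.0  # bandwidth budget for pushing data into the pod


class Server:
    """Backend server 102 holding the sub-servers 106 and their pods 108."""

    def __init__(self) -> None:
        self.sub_servers: Dict[str, SubServer] = {}

    def add_pod(self, pod_id: str, cpu_shares: float = 1.0,
                bandwidth_mbps: float = 100.0) -> SubServer:
        """Create a pod and the sub-server that manages it."""
        sub = SubServer("sub-" + pod_id, Pod(pod_id), cpu_shares, bandwidth_mbps)
        self.sub_servers[sub.sub_server_id] = sub
        return sub
```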


A pod 108 is a collection of nodes that can share data with each other, either directly or indirectly, through a peer-to-peer data sharing protocol. Nodes within a pod are managed by the same sub-server 106. Within a pod 108, nodes that are directly connected (i.e., neighboring nodes) can directly share data with each other. Nodes within a pod 108 that are not directly connected may share data indirectly via hops through neighboring nodes. However, a node cannot directly share data with a node assigned to a different pod 108. In one embodiment, a network membership management protocol determines how nodes within a pod 108 connect with each other for data sharing. An example of a network membership management protocol is described in U.S. patent application Ser. No. _____ to Yang, et al., filed on Mar. 4, 2011, entitled “Network Membership Management for Peer-to-Peer Networks,” which is incorporated by reference herein.


In one embodiment, a pod 108 can be further broken down into a number of “sub-pods” (not shown) with each sub-pod containing a portion of the nodes in the pod 108. In one embodiment, a sub-server 106 may be assigned to support a single sub-pod instead of a full pod 108.


The pod allocation module 112 is responsible for allocating nodes into corresponding pods 108 and for allocating sub-servers 106 to pods 108. The pod allocation module 112 assigns nodes to pods in a dynamic manner. Initially, when a new node joins the peer-to-peer network 100, the pod allocation module 112 determines which pod 108 is most suitable for hosting the new node. Furthermore, the pod allocation module 112 may dynamically transfer existing nodes between pods 108 in order to optimize performance of the peer-to-peer network 100. The pod allocation module 112 may also dynamically create new pods 108 or remove pods 108 as it deems appropriate. The pod allocation module 112 may also determine how pods 108 share information about the nodes managed in the pod. Thus, the pod allocation module 112 dynamically controls the structure of the pods 108 in order to control the workload of the managing sub-servers 106 and/or to enforce other policy-based decisions in real-time.


In one embodiment, the pod allocation module 112 also dynamically allocates sub-servers 106 to pods 108. Furthermore, the pod allocation module 112 may dynamically adjust the structure of the sub-servers 106 by controlling the allocation of computing resources associated with each sub-server 106. Thus, for example, the pod allocation module 112 may shift computing resources from a first sub-server 106-A to a second sub-server 106-B in order to improve overall performance of the peer-to-peer network 100.
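As one non-limiting illustration of such a shift of computing resources from one sub-server 106 to another, the sketch below (continuing the hypothetical SubServer structure from the earlier sketch, which is itself an assumption and not part of this disclosure) moves part of one sub-server's CPU and bandwidth budget to another.

```python
# Illustrative sketch only; relies on the hypothetical SubServer defined above.
def shift_resources(src: SubServer, dst: SubServer,
                    cpu_delta: float, bw_delta_mbps: float) -> None:
    """Move part of the CPU and bandwidth budget from sub-server src to dst."""
    cpu_delta = min(cpu_delta, src.cpu_shares)          # never take more than src has
    bw_delta_mbps = min(bw_delta_mbps, src.bandwidth_mbps)
    src.cpu_shares -= cpu_delta
    src.bandwidth_mbps -= bw_delta_mbps
    dst.cpu_shares += cpu_delta
    dst.bandwidth_mbps += bw_delta_mbps
```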


The pod-based backend architecture achieves excellent performance in peer-to-peer sharing between nodes within each pod 108. Furthermore, the architecture is scalable in a consistent fashion. For example, additional nodes added to the overall peer-to-peer network 100 can be handled by incrementally allocating more pods. Thus, consistent performance while scaling can be achieved without increasing complexity or workload of any single component.


Server-Side Push Delivery of Streaming Content

In one embodiment, the peer-to-peer network distributes streaming data 104 (e.g., audio, video, or other time-based content) according to a peer-to-peer live content delivery protocol. An example of a system and method for live content delivery is described in U.S. patent application Ser. No. ______ to Yang, et al. filed on Mar. 4, 2011 and entitled “Peer-to-Peer Live Content Delivery,” which is incorporated by reference herein. In one embodiment, the streaming data 104 is divided into a number of data blocks based on a fixed time unit (e.g., a 0.5 second chunk of video) or a fixed data unit. Alternatively, variable block sizes may be used. In one embodiment, a data stream 104 is divided into a number of sub-streams with each sub-stream corresponding to a portion of the original stream. The sub-streams are not necessarily identical in terms of the number of data blocks they contain or the size of the data blocks. Furthermore, for video data streams, each data block can contain partial video frames.
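By way of illustration only, a minimal sketch of dividing a stream into fixed-size data blocks follows; the block size and function name are assumptions, and a fixed-time or variable-size division would proceed analogously.

```python
# Illustrative sketch only; a fixed "data unit" division of a stream into blocks.
from typing import Iterator, Tuple


def split_into_blocks(stream: bytes, block_size: int = 64 * 1024) -> Iterator[Tuple[int, bytes]]:
    """Yield (sequence_number, payload) pairs, one per data block."""
    for seq, start in enumerate(range(0, len(stream), block_size)):
        yield seq, stream[start:start + block_size]
```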


The distribution of an individual data block can be modeled using a distribution tree structure as illustrated in FIG. 2. For any given data block, a distribution tree 200 illustrates the flow of the data block between interconnected nodes. In the illustrated example, a root node 201 receives a data block 203. The root node 201 may correspond to a sub-server 106. The root node 201 distributes the data block 203 to one or more first level nodes 205. The first level nodes 205 then distribute the data block 203 to one or more second level nodes 207, and so on for any number of levels.


Each data block may flow through the nodes in an entirely different manner. Thus, for a window size of W data blocks, there will be W such distribution trees, with each tree corresponding to one of the data blocks. For a given data block, a physical node may appear in a distribution tree multiple times. This occurs, for example, if the node receives the data block from two or more different neighboring nodes.
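The following sketch, offered only as an illustration under assumed names, records one distribution tree per data block for a window of W blocks; a node identifier may appear under several senders when it receives the same block from multiple neighbors.

```python
# Illustrative sketch only; models the distribution tree 200 for a single block.
from collections import defaultdict
from typing import Dict, List


class DistributionTree:
    def __init__(self, block_seq: int, root: str) -> None:
        self.block_seq = block_seq                       # which data block this tree models
        self.root = root                                 # e.g., the sub-server that pushed it
        self.children: Dict[str, List[str]] = defaultdict(list)

    def record_transfer(self, sender: str, receiver: str) -> None:
        """Record that `sender` delivered this block to `receiver`."""
        self.children[sender].append(receiver)


# For a window of W blocks there is one tree per block, keyed by sequence number.
window: Dict[int, DistributionTree] = {}
```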


The server 102 receives the streaming data 104 from a streaming data source. The streaming data source may be, for example, an external computing device or a storage device local to the server 102. In the first level of the tree 200, sub-servers 106 distribute a received data block to one or more nodes in their respective pods 108. For example, in one embodiment, a sub-server 106 pushes N copies of each data block into the pod 108, where N is a parameter configurable by a network administrator or a fixed value. The first level recipients 205 of each block are chosen based on a number of factors, and the set of first level nodes 205 receiving copies of a block may vary from block to block. The exact number of copies of a block pushed from sub-servers 106 to the pod 108 can be determined by the sub-servers 106 and can also vary from block to block. Multiple blocks can be aggregated by the sub-servers 106 to achieve desired operational efficiency. The distribution of blocks can be dynamically adapted to real-time performance conditions of the streaming. For example, the portion of the traffic that is supported directly by the server 102 can be determined and adjusted based on current individual end-user performance and real-time business logic.


Several sample schemes to select the first level nodes 205 include: (a) round-robin, with the number of nodes in each “round” being the number of original copies injected into the pod 108; (b) randomized, in which, after each “iteration” of nodes, the “master” node list is randomized to “re-shuffle” the distribution tree topology; (c) dynamically adjusting the number of copies injected into the pod 108 to adapt to changing network and performance conditions; and (d) selective on receivers, in which nodes that meet certain criteria are picked to increase utilization of those nodes. These criteria can include network distance, upload capacity, ping response time, and performance observed and reported by other nodes.
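Purely as an illustration of schemes (a), (b), and (d) above, the sketch below selects first level nodes 205 for a block; the function names, signatures, and the scoring callback are assumptions rather than part of this disclosure.

```python
# Illustrative sketch only; three of the first-level selection schemes listed above.
import random
from typing import Callable, List, Sequence


def round_robin(nodes: Sequence[str], num_copies: int, round_start: int) -> List[str]:
    """Scheme (a): pick num_copies nodes starting at a rotating offset."""
    return [nodes[(round_start + i) % len(nodes)] for i in range(num_copies)]


def reshuffled(nodes: Sequence[str], num_copies: int) -> List[str]:
    """Scheme (b): re-shuffle the master node list so the tree topology varies per block."""
    shuffled = list(nodes)
    random.shuffle(shuffled)
    return shuffled[:num_copies]


def selective(nodes: Sequence[str], num_copies: int,
              score: Callable[[str], float]) -> List[str]:
    """Scheme (d): prefer nodes scoring best on criteria such as upload capacity."""
    return sorted(nodes, key=score, reverse=True)[:num_copies]
```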


Once the first level nodes 205 receive the data block from the sub-server 106, they distribute the data block to one or more second level nodes 207 within the pod 108. These nodes may then continue distributing the block to other nodes, and so on. Thus, each data block is dispersed throughout each pod 108. Different blocks and different copies of the same block could potentially follow distinct paths and topologies as they are distributed. These dynamics can be tuned and controlled by server-side policies.


In one embodiment, nodes can request data directly from a sub-server 106 if they are unable to obtain the data from other nodes. The server 102 may also implement logic to prevent a node from requesting too much data in this way. The amount of data a sub-server 106 can directly provide to nodes varies depending on the computing resources allocated to a particular sub-server 106 and pod 108.


The server-side push scheme provides maximum flexibility to sub-servers 106 to make real-time decisions regarding how to optimally distribute data blocks. Furthermore, these decisions may be made on a block-by-block granularity.


In one embodiment, the data distribution protocol constrains the timing of the distribution of data blocks in a manner optimized for streaming data. For example, the distribution protocol for streaming data should attempt to provide data blocks within a specified time constraint such that they can be continuously outputted. This data distribution protocol may be useful, for example, to distribute broadcasts of “live” video or other time-based streams. As used herein, the term “live” does not necessarily require that the video stream is distributed concurrently with its capture (as in, for example, a live sports broadcast). Rather, the term “live” refers to data for which it is desirable that all participating nodes receive data blocks in a roughly synchronized manner (e.g., within a time period T of each other). Thus, examples of live data may include both live broadcasts (e.g., sports or events) that are distributed as the data is captured, and multicasts of previously stored video or other data.


In one embodiment, the distribution protocol attempts to ensure delivery of a data block to each subscribing node within a time period T seconds from when the server 102 initially outputs the data block. For example, in various embodiments, T may correspond to a few seconds, 5 minutes, or one hour. In one embodiment, if a node cannot receive the data block within the time period T (e.g., due to bandwidth constraints or latency), the block is no longer considered useful to the node and the node may drop its request for the data block in favor of later blocks. Furthermore, the order in which data blocks are requested and distributed to various nodes may be prioritized in order to optimize the nodes' ability to meet the time constraints, with the goal of enabling the nodes to continuously output the streaming data.
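As a small illustrative sketch of the time constraint T (with hypothetical names; the check itself is an assumption about one possible node-side policy), a node might decide whether a block is still worth requesting as follows.

```python
# Illustrative sketch only; a node-side usefulness check against the time budget T.
import time
from typing import Optional


def still_useful(block_emit_time: float, deadline_seconds: float,
                 now: Optional[float] = None) -> bool:
    """True if the block can still be delivered within T seconds of the server output time."""
    now = time.time() if now is None else now
    return (now - block_emit_time) <= deadline_seconds
```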


Generally, all of the nodes in a given pod 108 correspond to viewers of the same streaming session. Multiple pods 108 may belong to the same streaming session, i.e., the nodes subscribing to a particular streaming session and sharing the streaming data may span multiple pods. However, a node does not share the streaming data with nodes belonging to a different pod 108. This limitation reduces the complexity of the peer-to-peer network 100 without sacrificing performance.


Adaptive Real-Time Resource Allocation Using Pod-Based Management

The resources that a sub-server 106 may require to effectively manage a pod 108 may vary over time due to a large number of factors. For example, a live streaming session may have a large number of viewers, i.e. a large number of nodes on the peer-to-peer network 100. However, the number and characteristics of nodes may fluctuate dramatically over the course of the live streaming session as viewers come and go. Furthermore, viewers may come from a variety of network conditions, with Internet links supporting a wide variety of speeds. The pod-based backend architecture can allocate resources adaptively in real-time to accommodate anticipated workload, thus offering tremendous flexibility and efficiency.



FIG. 3 illustrates an embodiment of a process for adaptive real-time resource allocation using pod-based management. The server 102 assigns 302 new nodes to pods 108 as nodes join the peer-to-peer network 100. The server 102 may use a number of policies to determine how it assigns nodes to the pods 108. For example, in one embodiment, the server 102 assigns a new node to a pod 108 based on a resource availability metric. In this embodiment, the server 102 may assign the new node to the pod 108 that currently has the most resources available. The resource availability metric could comprise, for example, processing power available on a sub-server 106 managing the pod 108, bandwidth available for a particular pod 108, or a combination of factors. In another embodiment, the server 102 may assign a node to a pod 108 based on a proximity metric of the node to other nodes presently in the pod 108. The proximity metric can comprise, for example, network distance, ping delay distance, geo-location distance, distance based on IP transit or peering relationships, ISPs, peering relationships between ISPs, or a combination of factors. In yet other embodiments, the server 102 may assign new nodes to pods 108 based on a combination of factors.
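The sketch below illustrates, under assumed function names and metric callbacks, the two assignment policies mentioned above (resource availability and proximity); it is not a definitive implementation of step 302.

```python
# Illustrative sketch only; two hypothetical policies for assigning a new node to a pod.
from typing import Callable, Sequence, TypeVar

PodT = TypeVar("PodT")


def assign_by_available_resources(pods: Sequence[PodT],
                                  available_resources: Callable[[PodT], float]) -> PodT:
    """Pick the pod whose sub-server currently reports the most available resources."""
    return max(pods, key=available_resources)


def assign_by_proximity(pods: Sequence[PodT],
                        distance_to_new_node: Callable[[PodT], float]) -> PodT:
    """Pick the pod with the smallest proximity metric (e.g., ping delay) to the new node."""
    return min(pods, key=distance_to_new_node)
```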


The server 102 also allocates 304 computing resources to pods 108. For example, as described above, each pod 108 is managed by a sub-server 106 that comprises a set of computing resources. The initial allocation of computing resources may be based on a variety of factors such as, for example, characteristics of the stream 104 being served to each pod 108, characteristics of nodes within a pod 108, number of nodes within a pod 108, predicted peer-to-peer performance of data sharing within the pods, or a combination of factors.


The server 102 monitors 306 characteristics of the incoming data stream 104. For example, in one embodiment, the server 102 can obtain the bit rate information of the incoming data stream 104 one second, a few seconds, or tens of seconds before providing the data stream to the nodes. In one embodiment, more than one version of the same content may be provided, with each version supporting a different bit rate. The bit rates of these different versions can be detected so that each version can be effectively assigned to one or more pods 108, and nodes may be allocated to those pods depending on the bit rate each node can support. In some embodiments, the bit rates of encoded video streams vary from time to time. Two categories of encoding schemes exist: Variable Bit Rate (VBR) and Constant Bit Rate (CBR). In VBR, video data in a stream may have a sudden burst where the bit rate increases substantially. In CBR, the bit rate is generally substantially constant, but it is still not uncommon for the peak or burst bit rate to be substantially higher than the average bit rate. These bursts can be detected before the corresponding data is served to the nodes, so that the server 102 can dynamically adjust the pod structure and manage resources to accommodate the changing bit rates.
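One hypothetical way to monitor the incoming bit rate over a short look-ahead window and flag bursts is sketched below; the window length, burst threshold, and class name are assumptions, not part of this disclosure.

```python
# Illustrative sketch only; short-window bit rate monitoring of the incoming stream 104.
from collections import deque
from typing import Deque, Tuple


class BitrateMonitor:
    def __init__(self, window_seconds: float = 1.0, burst_factor: float = 1.5) -> None:
        self.window_seconds = window_seconds
        self.burst_factor = burst_factor
        self.samples: Deque[Tuple[float, int]] = deque()   # (timestamp, bytes) pairs

    def observe(self, timestamp: float, num_bytes: int) -> None:
        """Record an arriving chunk and discard samples older than the window."""
        self.samples.append((timestamp, num_bytes))
        while self.samples and timestamp - self.samples[0][0] > self.window_seconds:
            self.samples.popleft()

    def bitrate_bps(self) -> float:
        """Bit rate over the current window, in bits per second."""
        if len(self.samples) < 2:
            return 0.0
        span = self.samples[-1][0] - self.samples[0][0]
        total_bits = 8 * sum(n for _, n in self.samples)
        return total_bits / span if span > 0 else 0.0

    def is_burst(self, average_bps: float) -> bool:
        """True when the short-window rate substantially exceeds the long-run average."""
        return self.bitrate_bps() > self.burst_factor * average_bps
```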


The server 102 also monitors 308 performance of peer-to-peer sharing within each pod 108. For example, in one embodiment, the server 102 monitors an aggregate share ratio across all of the nodes in a given pod. A given node generally receives a portion of the streaming data directly from the server 102 and a portion of the streaming data from other nodes within its pod. The share ratio for a given node represents the fraction of the streaming data received by the given node directly from the server (as opposed to from other nodes). The aggregate share ratio for a pod is the average share ratio across all nodes in the pod and thus provides the fraction of the stream traffic in the pod that is supplied to the nodes directly from the server 102. Measuring the aggregate share ratio allows the server 102 to estimate how much additional bandwidth will be needed in a variety of cases (for instance, when a new node joins the pod, or if the bandwidth of the stream changes substantially). Thus, the server 102 can determine, based on the aggregate share ratio, the amount of resources it will need to support a particular pod. The server 102 may use this information to dynamically adjust the computing resources allocated to the pod or dynamically change the node assignments to pods in order to gain more favorable performance.
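A minimal sketch of the aggregate share ratio computation, and of one assumed way to turn it into a server bandwidth estimate, follows; the function names and the linear estimate are assumptions.

```python
# Illustrative sketch only; aggregate share ratio and a rough bandwidth estimate.
from typing import Iterable, Tuple


def aggregate_share_ratio(per_node_bytes: Iterable[Tuple[int, int]]) -> float:
    """per_node_bytes yields (bytes_from_server, bytes_from_peers) for each node in the pod."""
    from_server = from_peers = 0
    for s, p in per_node_bytes:
        from_server += s
        from_peers += p
    total = from_server + from_peers
    return from_server / total if total else 0.0


def estimated_server_bandwidth_bps(stream_bitrate_bps: float, num_nodes: int,
                                   share_ratio: float) -> float:
    """Server bandwidth needed if the pod keeps its current share ratio."""
    return stream_bitrate_bps * num_nodes * share_ratio
```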


Other performance metrics could also be measured to provide estimates of maximum latency. With pod-based server-side assistance, the maximum latency for a data unit to be delivered to all nodes in a pod 108 is generally determined by block size, number of hops in the distribution path, transmit queue size, and other factors. Various parameters can be measured in order to enable the server 102 to adjust various factors to meet different goals.


Based on the monitored performance of the peer-to-peer sharing and the monitored characteristics of the incoming data stream 104, the server 102 dynamically updates 310 the assignments of nodes to pods and/or dynamically updates the computing resources allocated to each pod 108. The server 102 then pushes 312 streaming data 104 to nodes in the pods 108 using the dynamically updated resources and pod assignments. For example, as described above, the server 102 can handle a fluctuating bit rate of an incoming data stream 104 efficiently by monitoring the bit rate of the incoming stream and making adjustments to the pod allocations to enhance performance. When the bit rate of the incoming data stream 104 increases for a particular pod 108, more server resources and bandwidth may be required. When the bit rate drops for a particular pod 108, the pod-based management component can release certain resources to reflect the reduced demand, thus freeing up resources for other pods 108. In one embodiment, the server 102 can complete this procurement within the backend infrastructure in under one second, securing sufficient resources to meet the upcoming demand. This allows the server 102 to support data streams without any constraint on the variance of the bit rate or the maximum bit rate an encoding scheme can use. By managing the backend resources at such a fine granularity and over such a short time frame, the pod-based management scheme maximizes utilization of limited resources while meeting the performance and quality requirements of applications.
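Continuing the earlier hypothetical SubServer sketch, and only as an illustration of step 310, the function below recomputes a pod's bandwidth budget from the monitored bit rate and share ratio; the headroom factor is an assumption.

```python
# Illustrative sketch only; relies on the hypothetical SubServer defined earlier.
def rebalance_for_bitrate(sub: SubServer, share_ratio: float,
                          stream_bitrate_bps: float, headroom: float = 1.2) -> float:
    """Set and return the new bandwidth budget (Mbps) for this sub-server's pod."""
    num_nodes = len(sub.pod.nodes)
    needed_mbps = stream_bitrate_bps * num_nodes * share_ratio / 1e6
    sub.bandwidth_mbps = needed_mbps * headroom     # keep slack for bit-rate bursts
    return sub.bandwidth_mbps
```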


Furthermore, the server 102 may dynamically allocate or release resources based on the monitored performance of the peer-to-peer data sharing as it changes over time. In one embodiment, an increase in resource allocation enables a sub-server 106 to increase the amount of data pushed directly from the sub-server 106 to the nodes (thereby decreasing reliance on peer-to-peer sharing). Using this real-time allocation, the server 102 eliminates the need for rigid, upfront over-allocation before a live streaming session. For example, in one embodiment, the pod size of a pod 108 can be increased if offloading of server resources is desired. The pod size of a pod 108 can be reduced to reduce the maximum latency time for a data unit to be delivered to all peers in the pod 108. For example, in one embodiment, the server 102 monitors (or indirectly determines based on other measured quantities) a latency metric comprising a maximum latency for a data unit to be delivered to all peers in the pod. The latency metric is related to the initial channel startup latency and the time differential between the live content source and the playback point at a particular node.


The surging crowd at the beginning of a live streaming session may require extra resources from the server 102. In this period of viewership ramp-up, servers, bandwidth, transit links, and other backend resources can be allocated based on the anticipated workload tied to the ramp-up phase. Based on the observed pattern of new viewers tuning in, their network conditions, and their performance within a pod, resource requirements for the upcoming time period of one second, a few seconds, or a few minutes can be projected. The extra resources required can then be procured dynamically through the pod-based scheme. For example, in one embodiment, the server 102 can pre-burst data to the nodes at the startup of an incoming data stream (i.e., for the first N data blocks of the streaming data 104) or to a new node that joins the network. The amount of data to be pre-bursted can be adjusted based on the particular stream or configuration.
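As one assumed (and deliberately naive) illustration of projecting ramp-up demand, the sketch below extrapolates the recent join rate over the next interval; any practical projection would also weigh the additional factors listed above.

```python
# Illustrative sketch only; naive projection of new viewers for the next interval(s).
from typing import Sequence


def project_new_viewers(recent_join_counts: Sequence[int], horizon_intervals: int = 1) -> int:
    """Extrapolate the average of recent per-interval join counts over the horizon."""
    if not recent_join_counts:
        return 0
    avg = sum(recent_join_counts) / len(recent_join_counts)
    return int(round(avg * horizon_intervals))
```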


By tuning various parameters, startup latency within one to a few seconds can be achieved. In the optimum setting for high-resolution, live, real-time video broadcast, low latency between the live source and viewers and fast video playback startup can be provided while maintaining the same video playback position among viewers. Furthermore, the pod-based architecture allows for rapid allocation and de-allocation of nodes to and from pods in order to meet the network demand without over-committing limited resources. The fine-grained mechanism of pod-based resource management can operate on the order of one or a few seconds, acquiring, allocating, or releasing server and other resources rapidly.


Pod-Based Management in a Content Delivery Network (CDN)

Pod-based management can be utilized within a Content Delivery Network (CDN) environment. Its resource considerations, such as available CPU cycles, processing power, computer memory, internal bandwidth, and external and transit bandwidth, can be applied directly to a CDN infrastructure. Pod-based management can be applied to multiple transit links to implement specific policies regarding bandwidth, transit pricing, or real-time transit performance metrics to adapt to different goals. Similarly, it can be applied to multiple CDN providers and used to arbitrage over multiple CDN choices to achieve desired business and performance goals.


Server Assistance to Detect and Repair Isolated Peers

In one embodiment, sub-servers 106 can send out periodic heartbeat messages, which are propagated through their respective pods 108. The propagation can be performed as part of the membership management protocol, as part of the data sharing protocol, or outside those protocols. A node-side timeout detects potential isolation of the node from the sub-server 106. At that point, a node can elect to re-initiate its membership within the pod by submitting a join operation to the sub-server 106. In one embodiment, nodes also transmit heartbeat messages to the sub-server 106 periodically. The server 102 may remove a node from a pod 108 if such a heartbeat message has not been received within a certain timeout period. Buffer acknowledgements can also be used as implicit heartbeats.
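The sketch below illustrates, with assumed timeout values and function names, the two timeout checks described above: a node detecting isolation from its sub-server 106, and the server 102 detecting nodes whose heartbeats have gone silent.

```python
# Illustrative sketch only; heartbeat timeout checks with assumed timeout values.
from typing import Dict, List

NODE_HEARTBEAT_TIMEOUT = 10.0     # seconds without a sub-server heartbeat -> rejoin the pod
SERVER_HEARTBEAT_TIMEOUT = 30.0   # seconds without a node heartbeat -> remove the node


def node_is_isolated(last_heartbeat_from_subserver: float, now: float) -> bool:
    """Node-side check: has the sub-server's heartbeat gone silent for too long?"""
    return (now - last_heartbeat_from_subserver) > NODE_HEARTBEAT_TIMEOUT


def stale_nodes(last_heartbeat_by_node: Dict[str, float], now: float) -> List[str]:
    """Server-side check: node ids whose heartbeats have not been seen within the timeout."""
    return [nid for nid, t in last_heartbeat_by_node.items()
            if (now - t) > SERVER_HEARTBEAT_TIMEOUT]
```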


System Architecture


FIG. 4 is a high-level block diagram illustrating an example of a computing device 400 that could act as a node or a server 102 (or sub-server 106) on the peer-to-peer network 100. Illustrated are at least one processor 402, an input controller 404, a network adapter 406, a graphics adapter 408, a storage device 410, and a memory 412. Other embodiments of the computer 400 may have different architectures with additional or different components. In some embodiments, one or more of the illustrated components are omitted.


The storage device 410 is a computer-readable storage medium such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory 412 stores instructions and data used by the processor 402. The pointing device 426 is a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard 424 to input data into the computer system 400. The graphics adapter 408 outputs images and other information for display by the display device 422. The network adapter 406 couples the computer system 400 to a network 430.


The computer 400 is adapted to execute computer program instructions for providing functionality described herein. In one embodiment, program instructions are stored on the storage device 410, loaded into the memory 412, and executed by the processor 402 to carry out the processes described herein.


The types of computers 400 operating on the peer-to-peer network can vary substantially. For example, a node comprising a personal computer (PC) may include most or all of the components illustrated in FIG. 4. Another node may comprise a mobile computing device (e.g., a cell phone) which typically has limited processing power, a small display 422, and might lack a pointing device 426. A server 102 may comprise multiple processors 402 working together to provide the functionality described herein and may lack an input controller 404, keyboard 424, pointing device 426, graphics adapter 408, and display 422. In other embodiments, the nodes or the server could comprise other types of electronic devices such as, for example, a personal digital assistant (PDA), a mobile telephone, a pager, a television “set-top box,” etc.


The network 430 enables communications among the entities connected to it (e.g., the nodes and the server). In one embodiment, the network 430 is the Internet and uses standard communications technologies and/or protocols. Thus, the network 430 can include links using a variety of known technologies, protocols, and data formats. In addition, all or some of links can be encrypted using conventional encryption technologies. In another embodiment, the entities use custom and/or dedicated data communications technologies.


Upon reading this disclosure, those of skill in the art will appreciate still additional alternative designs for pod-based management of nodes in a peer-to-peer network having the features described herein. Thus, while particular embodiments and applications of the present invention have been illustrated and described, it is to be understood that the invention is not limited to the precise construction and components disclosed herein and that various modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus of the present invention disclosed herein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims
  • 1. A method performed by a server for distributing streaming digital content in a peer-to-peer network, the method comprising: assigning a plurality of nodes to a plurality of pods, wherein each node is assigned to only one pod, wherein a node shares data with other nodes within its pod and does not share data with nodes outside its pod; determining an allocation of server resources to the plurality of pods; receiving from a streaming data source a given data block from a sequence of data blocks; pushing the given data block to each of the plurality of pods according to the allocation of server resources; monitoring performance of peer-to-peer sharing of the given data block within each pod; and re-allocating server resources between pods of the peer-to-peer network based on the monitored performance of the peer-to-peer sharing.
  • 2. The method of claim 1, further comprising: re-assigning at least one node to a different pod based on the monitored performance of the peer-to-peer sharing.
  • 3. The method of claim 1, further comprising: detecting a change in a bit rate of data pushed to a first pod in the plurality of pods; responsive to detecting an increase in the bit rate, increasing the allocation of computing resources devoted to the first pod; and responsive to detecting a decrease in the bit rate, decreasing the allocation of computing resources devoted to the first pod.
  • 4. The method of claim 1, wherein assigning the plurality of nodes to the plurality of pods comprises: detecting a new node joining the peer-to-peer network; determining an available resource metric for each of the plurality of pods; and assigning the new node to a pod having the most currently available resources.
  • 5. The method of claim 1, wherein assigning the plurality of nodes to the plurality of pods comprises: detecting a new node joining the peer-to-peer network; determining a distance metric associated with each of the plurality of pods relative to the new node; and assigning the new node to a pod having a smallest distance metric relative to the new node.
  • 6. The method of claim 1 wherein the re-allocating the server resources comprises modifying an amount of data pushed to the nodes in the pod directly from the server.
  • 7. The method of claim 1, wherein the monitored performance comprises an aggregate share ratio indicating a fraction of stream traffic provided to the nodes in the pod directly from the server.
  • 8. A computer-readable storage medium storing computer-executable instructions for distributing streaming digital content in a peer-to-peer network, the instructions when executed causing a processor to perform steps including: assigning a plurality of nodes to a plurality of pods, wherein each node is assigned to only one pod, wherein a node shares data with other nodes within its pod and does not share data with nodes outside its pod; determining an allocation of server resources to the plurality of pods; receiving from a streaming data source a given data block from a sequence of data blocks; pushing the given data block to each of the plurality of pods according to the allocation of server resources; monitoring performance of peer-to-peer sharing of the given data block within each pod; and re-allocating server resources between pods of the peer-to-peer network based on the monitored performance of the peer-to-peer sharing.
  • 9. The computer-readable storage medium of claim 8, the instructions when executed further causing the processor to re-assign at least one node to a different pod based on the monitored performance of the peer-to-peer sharing.
  • 10. The computer-readable storage medium of claim 8, the instructions when executed further causing the processor to perform steps including: detecting a change in a bit rate of data pushed to a first pod in the plurality of pods; responsive to detecting an increase in the bit rate, increasing the allocation of computing resources devoted to the first pod; and responsive to detecting a decrease in the bit rate, decreasing the allocation of computing resources devoted to the first pod.
  • 11. The computer-readable storage medium of claim 8, wherein assigning the plurality of nodes to the plurality of pods comprises: detecting a new node joining the peer-to-peer network; determining an available resource metric for each of the plurality of pods; and assigning the new node to a pod having the most currently available resources.
  • 12. The computer-readable storage medium of claim 8, wherein assigning the plurality of nodes to the plurality of pods comprises: detecting a new node joining the peer-to-peer network; determining a distance metric associated with each of the plurality of pods relative to the new node; and assigning the new node to a pod having a smallest distance metric relative to the new node.
  • 13. The computer-readable storage medium of claim 8, wherein the re-allocating the server resources comprises modifying an amount of data pushed to the nodes in the pod directly from the server.
  • 14. The computer-readable storage medium of claim 8, wherein the monitored performance comprises an aggregate share ratio indicating a fraction of stream traffic provided to the nodes in the pod directly from the server.
  • 15. A system for distributing streaming digital content in a peer-to-peer network, the system comprising: one or more processors; and a computer-readable storage medium storing computer-executable instructions that when executed by the one or more processors cause the one or more processors to perform steps including: assigning a plurality of nodes to a plurality of pods, wherein each node is assigned to only one pod, wherein a node shares data with other nodes within its pod and does not share data with nodes outside its pod; determining an allocation of server resources to the plurality of pods; receiving from a streaming data source a given data block from a sequence of data blocks; pushing the given data block to each of the plurality of pods according to the allocation of server resources; monitoring performance of peer-to-peer sharing of the given data block within each pod; and re-allocating server resources between pods of the peer-to-peer network based on the monitored performance of the peer-to-peer sharing.
  • 16. The system of claim 15, the instructions when executed further causing the one or more processors to re-assign at least one node to a different pod based on the monitored performance of the peer-to-peer sharing.
  • 17. The system of claim 15, the instructions when executed further causing the one or more processors to perform steps including: detecting a change in a bit rate of data pushed to a first pod in the plurality of pods; responsive to detecting an increase in the bit rate, increasing the allocation of computing resources devoted to the first pod; and responsive to detecting a decrease in the bit rate, decreasing the allocation of computing resources devoted to the first pod.
  • 18. The system of claim 15, wherein assigning the plurality of nodes to the plurality of pods comprises: detecting a new node joining the peer-to-peer network; determining an available resource metric for each of the plurality of pods; and assigning the new node to a pod having the most currently available resources.
  • 19. The system of claim 15, wherein assigning the plurality of nodes to the plurality of pods comprises: detecting a new node joining the peer-to-peer network; determining a distance metric associated with each of the plurality of pods relative to the new node; and assigning the new node to a pod having a smallest distance metric relative to the new node.
  • 20. The system of claim 15, wherein the re-allocating the server resources comprises modifying an amount of data pushed to the nodes in the pod directly from the server.
  • 21. The system of claim 15, wherein the monitored performance comprises an aggregate share ratio indicating a fraction of stream traffic provided to the nodes in the pod directly from the server.
RELATED APPLICATIONS

This application claims priority from U.S. provisional application No. 61/311,141 entitled “High Performance Peer-To-Peer Assisted Live Content Delivery System and Method” filed on Mar. 5, 2010, the content of which is incorporated by reference herein in its entirety.

Provisional Applications (1)
Number Date Country
61311141 Mar 2010 US