Content distribution networks (CDNs) face capacity and efficiency issues associated with the increase in popularity of on-demand audio/video streaming. Future Internet usage is expected to include a rich variety of online multimedia services accessed from an ever-growing number of multimedia-capable mobile devices. As such, CDN providers are challenged to provide solutions that can deliver bandwidth-intensive, delay-sensitive, on-demand video-based services over increasingly crowded, bandwidth-limited wireless access networks. One major cause for the bandwidth stress facing wireless network operators is the difficulty of exploiting the multicast nature of the wireless medium when wireless users or access points do not experience the same channel conditions or access the same content at the same time.
In one embodiment a network element (sender) for a wireless Content Delivery Network is provided that is configured to receive a request from receiver devices for a video segment over the wireless CDN. The network element computes a number of descriptors of the video segment scheduled to be delivered to each receiver device, where the number of descriptors is determined for each receiver device based on channel conditions between the network element and the respective receiver devices. The network element clusters a set of descriptors for each receiver device into a minimum number of Generalized Independent Sets (GISs) based on the computed number of descriptors and the channel conditions between the network element and the receiver devices, and generates a multicast codeword encoding the clustered descriptors for each receiver device using the minimum number of GISs. The generated multicast codeword is transmitted from the network element to each of the receiver devices in response to the received request.
In one embodiment the network element determines the clustered set of descriptors scheduled for each receiver device and the minimum number of Generalized Independent Sets (GISs) based on cached content at each of the receivers.
In one embodiment the network element generates the multicast codeword such that each receiver device is enabled to decode its requested video segment at a rate that is equal to the maximum rate achievable based on the channel conditions.
In one embodiment the network element generates the multicast codeword as a fixed length codeword.
In one embodiment the network element clusters the set of descriptors scheduled for each receiver device into a minimum number of Generalized Independent Sets (GISs) such that descriptors in a given GIS that are scheduled for different receiver devices can be transmitted in a same time-frequency slot without affecting decodability of the multicast codeword at the receiver devices.
In one embodiment the network element determines the minimum number of Generalized Independent Sets (GISs) as a determined channel-aware chromatic-number of GISs that cover a conflict graph.
In one embodiment the network element encodes the clustered descriptors scheduled for each receiver device into fixed-length channel codewords for each receiver based on channel conditions between the network element and each receiver device and combines the fixed-length channel codewords into the multicast codeword.
In one embodiment the network element determines the fixed-length channel codewords for each receiver device using a channel codebook that is determined based on the channel conditions between the network element and each receiver device.
In one embodiment the network element transmits the channel codebook to each of the receiver devices.
In one embodiment the network element generates the multicast codeword encoding the clustered descriptors for each receiver device using the minimum number of GISs determined via a Channel-Aware Chromatic Index Coding (CA-CIC) algorithm.
In one embodiment the network element generates the multicast codeword encoding the clustered descriptors for each receiver device using the minimum number of GISs determined via a Channel-Aware Hierarchical greedy Coloring (CA-HgC) algorithm.
Various aspects of the disclosure are described below with reference to the accompanying drawings, in which like numbers refer to like elements throughout the description of the figures. The description and drawings merely illustrate the principles of the disclosure. It will be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles and are included within the spirit and scope of the disclosure.
As used herein, the term, “or” refers to a non-exclusive or, unless otherwise indicated (e.g., “or else” or “or in the alternative”). Furthermore, as used herein, words used to describe a relationship between elements should be broadly construed to include a direct relationship or the presence of intervening elements unless otherwise indicated. For example, when an element is referred to as being “connected” or “coupled” to another element, the element may be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Similarly, words such as “between”, “adjacent”, and the like should be interpreted in a like fashion.
In the following description, illustrative embodiments will be described with reference to acts and symbolic representations of operations (e.g., in the form of flow charts, flow diagrams, data flow diagrams, structure diagrams, block diagrams, etc.) that may be implemented as program modules or functional processes including routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types and may be implemented using existing hardware at existing network elements (e.g., base stations, base station controllers, NodeBs, eNodeBs, etc.). Such existing hardware may include one or more Central Processing Units (CPUs), digital signal processors (DSPs), application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computers or the like.
Although a flow chart may describe the operations as a sequential process, many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figure. A process may correspond to a method, function, procedure, subroutine, subprogram, etc. When a process corresponds to a function, its termination may correspond to a return of the function to the calling function or the main function.
As disclosed herein, the term “storage medium” or “computer readable storage medium” may represent one or more devices for storing data, including read only memory (ROM), random access memory (RAM), magnetic RAM, core memory, magnetic disk storage mediums, optical storage mediums, flash memory devices and/or other tangible machine readable mediums for storing information. The term “computer-readable medium” may include, but is not limited to, portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing or carrying instruction(s) and/or data.
Furthermore, example embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as a computer readable storage medium. When implemented in software, a special purpose processor or special purpose processors will perform the necessary tasks.
A code segment may represent a procedure, function, subprogram, program, routine, subroutine, module, software package, class, or any combination of instructions, data structures or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
The present disclosure is directed towards a wireless video delivery paradigm based on the use of channel-aware caching and coded multicasting that allows simultaneously serving multiple cache-enabled access points that may be requesting different content and experiencing different channel conditions. The present disclosure addresses the caching-aided coded multicast problem as a joint source-channel coding problem and describes a framework that preserves the cache-enabled multiplicative throughput gains of the error-free scenario, while ensuring independent maximum per-receiver (access point) channel rates, unaffected by the presence of receivers with worse channel conditions.
Some of the notable aspects of the present disclosure described in detail below include 1) an information theoretic framework for wireless video distribution that takes into account the specific characteristics of the wireless propagation channel (i.e., channel rates) in the presence of any combination of unicast/multicast transmission and wireless edge caching. 2) A channel-aware caching-aided coded multicast video delivery scheme, referred to herein as Random Popularity based caching and Channel-Aware Chromatic-number Index Coding (RAP-CA-CIC), which provides for the highest admissible video throughput to each receiver (helper) for the given propagation conditions, i.e., avoiding throughput penalizations from the presence of receivers experiencing worse propagation conditions. 3) A novel polynomial-time approximation of RAP-CA-CIC, referred to as Random Popularity based caching and Channel-Aware Hierarchical greedy Coloring (RAP-CA-HgC), with running time at most cubic in the number of receivers and quadratic in the number of (per-receiver) requested video descriptions.
As shown in
The wireless links interconnecting the network element 151 to the receiver nodes 200 are assumed to be lossy links. Specifically, the lossy links between the sender and the receivers are assumed to be stochastically degraded binary broadcast channels (BC). While the binary field is particularly convenient for ease of presentation, the present disclosure is not limited to any particular model and can be extended to general stochastically degraded broadcast channels, with the case of arbitrary additive noise broadcast channels being particularly immediate. The channel noise analysis includes both a binary symmetric broadcast channel (BS-BC) and a binary erasure broadcast channel (BE-BC).
BS-BC Case:
The channel output Yu[t] observed by the u-th receiver at the t-th channel use takes values in the binary alphabet Y≡{0,1} and is given by
Yu[t]=X[t]+Zu[t],
where X[t] denotes the binary encoded symbol sent by the sender at the t-th channel use, and Zu[t] denotes the additive noise of the channel corresponding to receiver u, modeled as a Bernoulli variable with parameter given by the channel degradation εu, i.e., Zu[t]˜B(εu). Denoting the achievable channel rate as ηu and denoting H(εu) as the binary entropy function, the relationship may be defined as: ηu≦1−H(εu).
BE-BC Case:
The channel output Yu[t] observed by the u-th receiver at the t-th channel use exactly reproduces the channel input X[t] with probability (1−εu) and otherwise indicates an erasure event, with probability εu. In this case, Yu[t] takes values in the binary alphabet Y≡{0,1,å} so that an erasure event is represented by the erasure symbol “å”. The achievable channel rate ηu then satisfies ηu≦1−εu. Without loss of generality, it is assumed that 0≦ε1≦ε2≦ . . . ≦εU≦1.
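The two channel models above and their rate bounds can be sketched in a short simulation. The following Python sketch is purely illustrative and not part of the disclosed embodiments; the function names are chosen for exposition only.

```python
import math
import random

def bs_channel(x, eps, rng):
    """Binary symmetric channel: each bit is flipped with probability eps."""
    return [b ^ (1 if rng.random() < eps else 0) for b in x]

def be_channel(x, eps, rng):
    """Binary erasure channel: each bit is erased (None) with probability eps."""
    return [None if rng.random() < eps else b for b in x]

def binary_entropy(eps):
    """H(eps) in bits, with H(0) = H(1) = 0 by convention."""
    if eps in (0.0, 1.0):
        return 0.0
    return -eps * math.log2(eps) - (1 - eps) * math.log2(1 - eps)

def max_rate_bs(eps):
    """Rate bound for the BS-BC: eta_u <= 1 - H(eps_u)."""
    return 1.0 - binary_entropy(eps)

def max_rate_be(eps):
    """Rate bound for the BE-BC: eta_u <= 1 - eps_u."""
    return 1.0 - eps

rng = random.Random(0)
x = [rng.randint(0, 1) for _ in range(1000)]
# In a broadcast use, every receiver observes its own degraded copy of x.
y1 = bs_channel(x, 0.1, rng)
y2 = be_channel(x, 0.25, rng)
```

Note how the bounds reflect the stochastic degradation ordering: a receiver with larger εu has a strictly smaller admissible rate under either model.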
The transmitter 152, receiver 154, memory 156, and processor 158 may send data to and/or receive data from one another using the data bus 159. The transmitter 152 is a device that includes hardware and any necessary software for transmitting wireless signals including, for example, data signals, control signals, and signal strength/quality information via one or more wireless connections to other network elements in a communications network.
The receiver 154 is a device that includes hardware and any necessary software for receiving wireless signals including, for example, data signals, control signals, and signal strength/quality information via one or more wireless connections to other network elements in a communications network.
The memory 156 may be any device capable of storing data including magnetic storage, flash storage, etc.
The processor 158 may be any device capable of processing data including, for example, a special purpose processor configured to carry out specific operations based on input data, or capable of executing instructions included in computer readable code. For example, it should be understood that the modifications and methods described below may be stored on the memory 156 and implemented by the processor 158 within network element 151.
It should be understood that the below modifications and methods may be carried out by one or more of the above described elements of the network element 151. For example, the receiver 154 may carry out steps of “receiving,” “acquiring,” and the like; transmitter 152 may carry out steps of “transmitting,” “outputting,” “sending” and the like; processor 158 may carry out steps of “determining,” “generating”, “correlating,” “calculating,” and the like; and memory 156 may carry out steps of “storing,” “saving,” and the like.
The present disclosure is described, without limitation, in the context of a video streaming application. It is accordingly assumed that the sender has access to a content library F={1, . . . , m} containing m files, where each file has entropy equal to F bits. Each receiver uεU has a cache with storage capacity MuF bits (i.e., Mu files). Mu,f denotes the fraction of file fεF stored (cached) at receiver u, such that ΣfMu,f≦Mu. Without loss of generality, the files are represented by binary vectors Wfε{0,1}F.
Each file fεF represents a video segment, which is multiple description coded into D descriptions using, for example, a conventional multiple description coding scheme. Each description is packaged into one information unit or packet for transmission. Each packet is represented as a binary vector of length (entropy) B=F/D bits. A low-quality version of the content can be reconstructed once a receiver is able to recover any description. The reconstruction quality improves by recovering additional descriptions and depends solely on the number of descriptions recovered, irrespective of the particular recovered collection. Hence, there are D video qualities per segment, where the entropy in bits of quality dε{1, . . . , D}, containing d descriptions, is given by Fd=Bd. Note that in video streaming applications, each video segment typically has the same (playback) duration, which is denoted by Δ in time units (or channel uses) herein. Hence, the difference in quality levels depends solely on the number of bits Fd.
Video segments are characterized by a popularity or demand distribution Q=[qf,u], u=1, . . . , U, f=1, . . . , m, where qf,uε[0,1] and Σf=1mqf,u=1 (e.g., receiver u requests file f with probability qf,u). Without loss of generality up to index reordering, it is assumed that qf,u has non-increasing components q1,u≧ . . . ≧qm,u. Let fu denote the random request at receiver u. The realization of fu is denoted by fu.
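The demand distribution above is arbitrary; the following sketch simulates per-receiver requests assuming, purely for illustration, a Zipf-like popularity profile (a common modeling choice in the caching literature, not mandated by this disclosure). All names are illustrative.

```python
import random

def zipf_popularity(m, alpha=0.8):
    """Illustrative non-increasing popularity vector q_1 >= ... >= q_m, summing to 1."""
    w = [1.0 / (f + 1) ** alpha for f in range(m)]
    s = sum(w)
    return [x / s for x in w]

def sample_request(q, rng):
    """Draw one file index f in {0, ..., m-1} with probability q[f] (inverse CDF)."""
    r, acc = rng.random(), 0.0
    for f, qf in enumerate(q):
        acc += qf
        if r < acc:
            return f
    return len(q) - 1

rng = random.Random(1)
q = zipf_popularity(m=3)
# One request per receiver: the realization of the random request vector f.
requests = [sample_request(q, rng) for _ in range(3)]
```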
It is noted that the present disclosure is directed to a general video on-demand setting, in which receiver requests follow an arbitrary popularity distribution. As such, the demand message set cannot be represented as a degraded message set since a given receiver's demand is not necessarily a subset of another receiver's demand. The present disclosure includes and generalizes any combination of degraded message sets, via possible overlapping of receiver demands, and message cognition, via available cached or side information.
Various aspects are described in which the sender (e.g., network element 151) implements a video delivery system to receivers 200 in two phases: a caching (or placement) phase followed by a transmission (or delivery) phase. Prior to describing various aspects pertaining to the operations of the sender device in the transmission or delivery phase (the primary focus of the present disclosure), an overview of the operations of the sender device during the caching phase is described first for completeness.
Caching Phase
The caching phase occurs during a period of low network traffic. In this phase, using the knowledge of the demand distribution and the cache capacity constraints, the sender decides what content to (transmit and) store at each receiver. Because of the time-scale separation between caching and transmission phases, the caching phase is sometimes referred to as the placement or pre-fetching phase.
In one aspect, the sender may be configured to implement the caching phase using Algorithm 1 below, where each data file ‘f’ that belongs to the library ‘F’ is divided into ‘D’ equal-size packets, represented as symbols of a finite field:
During the caching phase, the network element 151 exploits the fact that a video segment is multiple description coded into D descriptions, each of length B=F/D bits. In the following, such descriptions are referred to as packets and denoted by {Wf,l:l=1, . . . , D}. At the beginning of time, a realization of the library {Wf} is revealed to the sender. The sender fills the caches of each of the U receivers through a set of U encoding functions {Zu}, such that Zu({Wf}) denotes the codeword stored in the cache of receiver u.
Thus, the caching phase as implemented by the network element works as follows. Each receiver, instructed by the sender, randomly selects and stores in its cache a collection of pf,uMuD distinct packets of file fεF received from the sender, where pu=(p1,u, . . . , pm,u) is a vector with components 0≦pf,u≦1/Mu, ∀f, such that Σf=1mpf,u=1, referred to as the caching distribution of receiver u. Hence, the arbitrary element pf,u of pu represents the fraction of the memory Mu allocated to file f. The resulting codeword Zu({Wf}) transmitted by the sender and stored in the cache of receiver u is given by:
where luf,i is the index of the i-th packet of file f cached by receiver u. The tuples of indices (luf,1, . . . , luf,pf,uMuD) are chosen uniformly at random among all distinct subsets of size pf,uMuD in the set of D packets of file f. The random collection of cached packet indices across all receivers is referred to as the cache configuration, denoted by {Zu}. A given realization of the cached information is denoted by M.
Moreover, Mu,f denotes the vector of indices of the packets of file f cached by receiver u. The caching policy described above is referred to as RAndom fractional Popularity-based (RAP) caching, which is completely characterized by the caching distribution P={pu}, and is illustrated as Algorithm 1 implemented by the network element 151.
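The RAP caching policy for one receiver can be sketched as follows. This is an illustrative Python sketch, not part of the disclosed embodiments; it assumes packet counts pf,uMuD round to integers and identifies packets by index.

```python
import random

def rap_cache(m, D, M_u, p_u, rng):
    """RAndom fractional Popularity-based (RAP) caching for one receiver:
    for each file f, store p_u[f] * M_u * D distinct packet indices,
    selected uniformly at random among the D packets of file f."""
    cache = {}
    for f in range(m):
        n_pkts = int(round(p_u[f] * M_u * D))   # packets of file f to cache
        cache[f] = set(rng.sample(range(D), n_pkts))
    return cache

rng = random.Random(2)
m, D, M_u = 3, 8, 1              # 3 files, 8 descriptions each, 1-file cache
p_u = [0.5, 0.3, 0.2]            # caching distribution (sums to 1, each p_f <= 1/M_u)
cache = rap_cache(m, D, M_u, p_u, rng)
total = sum(len(s) for s in cache.values())   # fills the whole M_u * D packet budget
```

Because the packet identities are drawn uniformly at random, different receivers running this policy independently tend to cache different packets of the same file, which is exactly the property exploited for coded multicast opportunities in the transmission phase.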
A notable property of RAP as described above, namely choosing the identity of the packets to be cached uniformly at random, is that it increases the number of distinct packets cached across the network. The fact that the receivers 200 thus cache different packets of the same file results in higher efficiencies in terms of the coded multicast opportunities during the transmission phase that are described below. This is one of the reasons why multiple description coding may be considered more suitable than scalable video coding, since randomly cached packets (descriptions) can then be used for decoding higher quality video versions. In contrast, with scalable video coding, randomly cached descriptions may be wasted if not all previous descriptions are delivered during the transmission phase. While such wastage could be avoided by only caching descriptions up to the maximum number that can be delivered to a given receiver, this is not a good solution because it creates another problem: too much overlap in cached information, which in turn reduces coded multicast opportunities during the delivery phase. In addition, any deviation due to imperfect knowledge of demand and channel conditions can again lead to wasted cached information, reducing system robustness.
Transmission Phase
After the caching phase, and in the transmission (or delivery) phase (the primary focus of the present disclosure), the network is repeatedly used in a time slotted fashion with time-slot duration Δ time units, given by the video streaming application. It is assumed that each receiver requests one video segment per time-slot from the sender. Based on the receiver requests, the stored contents at the receiver caches, and the channel conditions, the sender decides at what quality level to send the requested files (i.e., how many descriptions to send to each receiver), encodes the chosen video segments into a codeword, and sends it over the broadcast channel to the receivers, such that every receiver decodes the requested segment (at the corresponding quality level) within Δ time units. The scheduled quality level for receiver u is denoted by du.
Thus, given the video segment playback duration Δ, the sender during the transmission or delivery phase is configured to use the knowledge of the cache contents {Zu} already stored at the receivers 200, the channel conditions {εu} between the sender and the respective receivers 200, and the receiver requests f to schedule the appropriate quality level du for each requested video segment through a multicast encoder. The multicast encoder is implemented using a variable-to-fixed encoding function, where X denotes the transmitted codeword as:
X=X({Zu},{εu},f).
Here, X is a binary sequence of finite length Δ, and f=[f1, f2, . . . , fU], with fjεF, denotes a realization of the receiver random request vector f=[f1, f2, . . . , fU].
The operations of each of the blocks illustrated in block diagram 300 of
In step 402, the network element 151 receives a request for a video segment from receivers 200 during the delivery phase. Since the network element 151 has already performed the caching method described above, each destination device 200 may be assumed to have requested only those packets that were not cached (or stored) as a result of the caching method. Thus, the delivery phase consists of providing to each destination device 200 the missing part(s) of the requested files, i.e., the packets missing from the memory of that receiver device 200.
In step 404, in response to receiving the requests from the receivers 200, the network element 151 (e.g., scheduler 302) computes the quality (i.e., the number of descriptors for the requested video segment) of the packets to be delivered to the receivers based on the channel conditions between the sender and the respective receivers. Since, for a given realization of the packet-level cache configuration and the channel rates {ηu}, the joint source-channel multicast encoder 300 is a variable-to-fixed encoder that, at each realization of the request vector f, encodes the scheduled descriptions for each receiver {du} into a multicast codeword X of fixed length Δ, in this step the scheduler 302 is configured to compute the number of descriptions (which represent quality as described above) scheduled for each requesting receiver u as:
where ψ(f,M) is a function of the packet-level cache realization M and of the request vector realization f, whose expression is given by:
The above computation of du(f) by scheduler 302 ensures that, for a codeword of length Δ, receiver u obtains du descriptions (each of size B) at a rate ηu/ψ(f,M), which is the maximum achievable rate when the remaining U−1 receivers have the same channel degradation εu.
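Since the exact expression for du(f) is not reproduced in the text above, the following sketch is a hypothetical reconstruction inferred solely from the stated rate relationship: over a codeword of length Δ, receiver u can recover Δ·ηu/ψ(f,M) bits, i.e., on the order of ⌊Δ·ηu/(ψ·B)⌋ descriptions of B bits each, capped by the number of missing (uncached) descriptions. Function and parameter names are illustrative.

```python
def schedule_descriptions(eta, psi, Delta, B, missing, D):
    """Hypothetical reconstruction of the scheduler: receiver u is scheduled
    floor(Delta * eta_u / (psi * B)) descriptions of B bits each, capped by
    its number of missing descriptions and by the total D descriptions."""
    return [min(int(Delta * e / (psi * B)), miss, D)
            for e, miss in zip(eta, missing)]

# Values taken from the worked example later in this disclosure:
# eta = (1/2, 1/4, 1/4), E[psi] = 2, Delta = 8 time-units, B = 1 bit.
d = schedule_descriptions([0.5, 0.25, 0.25], psi=2, Delta=8, B=1,
                          missing=[2, 1, 1], D=8)
```

With these numbers the sketch schedules two descriptions for the best receiver and one each for the other two, consistent with the per-receiver rates ηu/ψ.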
In step 406, the network element 151 (e.g., channel aware source compressor 304) then clusters the set of packets (descriptions) scheduled for each receiver {du} into a determined minimized set of equivalent classes, each of which are referred to herein as the generalized independent set (GIS), which function is implemented by the channel-aware source compressor 304. In particular, the channel-aware source compressor 304 clusters the entire set of scheduled packets (descriptions) into the minimum number of GISs satisfying the following conditions: First, any two packets in a GIS scheduled for different receivers can be transmitted in the same time-frequency slot without affecting decodability; and second, the set of packets in a GIS scheduled for receiver u, denoted by Pu, satisfies:
to ensure that each receiver is scheduled a number of packets (descriptions) proportional to its channel rate (i.e., based on the channel conditions).
As described further below, this minimization corresponds to an NP-hard optimization problem related to finding the minimum number of GISs that cover a properly constructed conflict graph. The minimum number of GISs determined by the channel-aware source compressor 304 to cover the conflict graph is referenced herein as the channel-aware chromatic-number, χCA. The GISs are then transmitted to the receivers using one of two associated transmission schemes that are fully described, with examples, in the sections below.
In step 408, the network element 151 (e.g., the concatenated source-channel encoder 306 collectively) generates the multicast codeword X in which the scheduled descriptions {du} are encoded and transmits the multicast codeword to the receivers in response to the received requests. In particular, the outer channel encoder 308 generates channel codewords of length n, while the inner source encoder 310 generates the final multicast codeword X of length Δ by concatenating linear combinations of the n-length channel codewords.
As shown in
where E[ψ(f,M)] is the expectation of ψ(f,M) taken over the random request vector f and random cache configuration given by:
E[ψ(f,M)]=min{φ(p,Q),
and where:
in which Y denotes a random set of l elements selected in an i.i.d. manner from the set of files (with replacement).
More particularly, the outer channel encoder 308 illustrated in
The decoding function is expressed as gu:Yn→{1, 2, . . . , 2nηu}. If |Pu|B is not a multiple of n, then zero-padding may be applied to the |Pu|B bits.
The network element 151 notifies the computed codebooks to each receiver 200 at network setup or anytime channel conditions change. Hence, each receiver 200 is aware, not only of its own codebook, but also of the codebooks of the other receivers 200.
In order to ensure that receiver “u” obtains du descriptions at a channel rate ηu/ψ(f,M), the channel codeword length is set to
as noted above. As also noted above, the fact that n depends on the demand realization implies that the channel codebook may need to be updated and notified to each receiver 200 at each request round, which may lead to significant overhead. In order to eliminate or mitigate this overhead, the channel codeword length is set to
which, as noted earlier, is equivalent to setting the time-slot duration to be equal to the video segment playback duration on average, and to setting the average channel rate at which receiver “u” obtains du descriptions to ηu/E[ψ(f,M)].
The inner source encoder 310 generates the combined final multicast codeword X by XORing the channel codewords {u(Pu)} belonging to the same GIS, and concatenating the resulting codewords of length n. Since the number of the GISs produced by the channel-aware source compressor 304 is given by χCA, X is obtained by concatenating χCA codewords of length n, resulting in a multicast codeword of average length Δ.
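The XOR-and-concatenate operation of the inner source encoder can be sketched at the bit level. This Python sketch is illustrative only; it represents each n-length channel codeword as a list of bits and assumes the GIS partition has already been computed.

```python
def xor_bits(a, b):
    """Bitwise XOR of two equal-length binary lists."""
    return [x ^ y for x, y in zip(a, b)]

def multicast_codeword(gis_list, n):
    """Build X by XORing the n-length channel codewords within each GIS and
    concatenating the per-GIS results: |gis_list| * n bits in total."""
    X = []
    for gis in gis_list:
        combined = [0] * n
        for codeword in gis:          # one channel codeword per receiver in the GIS
            combined = xor_bits(combined, codeword)
        X.extend(combined)
    return X

# Two GISs (chi_CA = 2) with channel codewords of length n = 4:
g1 = [[1, 0, 1, 0], [0, 1, 1, 0]]     # XOR of the two codewords -> [1, 1, 0, 0]
g2 = [[1, 1, 1, 1]]
X = multicast_codeword([g1, g2], n=4)  # length chi_CA * n = 8
```

Each receiver can strip the XORed interference inside its GIS using its cached packets, which is why packets in the same GIS can share the same n channel uses.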
Having described the operational steps above, two algorithmic embodiments of the channel-aware source compressor are now described. Each of the two embodiments builds on generalizations of existing caching-aided coded multicast schemes to the case of noisy broadcast channels. Specifically, the first embodiment, referenced herein as CA-CIC, is the channel-aware extension of the chromatic index coding scheme, and the second embodiment, referenced herein as CA-HgC, is the channel-aware extension of the polynomial-time hierarchical greedy coloring algorithm.
Channel-Aware Chromatic Index Coding (CA-CIC)
It is noted that finding a coded multicast scheme for the broadcast caching network is equivalent to finding an index code with side information given by the cache realization. A well-known index code is given by the chromatic number of the conflict graph, constructed according to the demand and cache realizations. Given the cache realization, a file-level demand realization (given by the request vector f) can be translated into a packet-level request vector containing the missing packets at each receiver. A request for file fu by receiver u (step 402) may thus be considered equivalent to requesting D−|Mu,f| packets corresponding to all missing (i.e., uncached) packets of the highest quality level of video segment fu. However, based on the channel degradations, the sender schedules a subset of the missing packets duε{1, . . . , D(1−Mu,f)}, as described above (step 404). Denoting by W the scheduled random packet-level configuration and by Wu,f the set of packets of file f scheduled for receiver u, the corresponding index coding conflict graph M,W=(V,E) may be defined and constructed as follows: 1) Each vertex νεV represents a scheduled packet, uniquely identified by the pair {ρ(ν), μ(ν)}, where ρ(ν) indicates the packet identity associated to vertex ν and μ(ν) represents the receiver for whom it is scheduled. In total, there are |V|=ΣuεU du vertices. 2) For any pair of vertices ν1, ν2, vertex ν1 interferes with vertex ν2 if the packet associated to vertex ν1, ρ(ν1), is not in the cache of the receiver associated to vertex ν2, μ(ν2), and ρ(ν1) and ρ(ν2) do not represent the same packet. There is then an edge between ν1 and ν2 in E if ν1 interferes with ν2 or ν2 interferes with ν1.
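The conflict graph construction just described can be sketched directly from the scheduled packets and cache contents. This is an illustrative Python sketch (packet labels and the toy instance below are hypothetical, patterned after the three-receiver example later in this section).

```python
def build_conflict_graph(scheduled, caches):
    """Conflict graph: one vertex (packet, receiver) per scheduled packet;
    v1 interferes with v2 if rho(v1) is not cached at mu(v2) and the two
    vertices carry different packets. An undirected edge is added if either
    vertex interferes with the other."""
    V = [(pkt, u) for u, pkts in scheduled.items() for pkt in pkts]
    E = set()
    for i, (p1, u1) in enumerate(V):
        for p2, u2 in V[i + 1:]:
            if p1 == p2:
                continue                      # same packet: never in conflict
            if p1 not in caches[u2] or p2 not in caches[u1]:
                E.add(((p1, u1), (p2, u2)))   # at least one direction interferes
    return V, E

# Toy instance: receiver 1 is scheduled B1, B2; receiver 2 is scheduled A1;
# receiver 3 is scheduled C1. Caches hold packets of the other files.
scheduled = {1: ["B1", "B2"], 2: ["A1"], 3: ["C1"]}
caches = {1: {"A1", "A2", "C1"}, 2: {"B1", "B2", "C1"}, 3: {"A1", "B1", "B3"}}
V, E = build_conflict_graph(scheduled, caches)
```

Note that the two vertices scheduled for receiver 1 are connected (a receiver never caches its own scheduled packets), illustrating the U-clustered structure discussed next.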
In the constructed graph M,W, the set of vertices scheduled for the same receiver, i.e., the set of vertices νεV such that μ(ν)=u, is fully connected. Based on this consideration, M,W is referenced herein as a U-clustered graph.
Thus, in the present disclosure, the generalized independent sets determined in step 406 may be determined as an (s1, . . . , sU)-GIS from a U-clustered graph M,W, defined as an ordered set of U fully connected sub-graphs {P1, . . . , PU} such that for all u=1, . . . , U, and all νεPu, μ(ν)=u (i.e., all the packets in Pu are scheduled for receiver u); |Pu|=su≧0 (i.e., the number of packets in Pu is equal to su); and for all i≠u, Pu and Pi are mutually disconnected (i.e., no edges exist between any two subgraphs). Note that when su≦1, ∀uεU, the generalized independent set (s1, . . . , sU)-GIS reduces to the classical definition of an independent set.
Based on the definition of the GIS above, a further definition of channel-aware valid coloring and channel-aware chromatic number of the conflict graph are introduced as follows. First, a (η1, . . . , ηU)-channel-aware valid vertex-coloring is obtained in step 406 by covering the conflict graph M,W with (s1, . . . , sU)-GISs that further satisfy
and assigning the same color to all the vertices in the same GIS, referenced herein as Channel-Aware Valid Vertex-Coloring.
Second, the (η1, . . . , ηU) channel-aware chromatic number of a graph H is defined as
where C denotes the set of all (η1, . . . , ηU) channel-aware valid vertex-colorings of M,W, and φ(c) is the total number of colors in M,W for the given (η1, . . . , ηU) channel-aware valid vertex-coloring c. This quantity is referenced herein as the Channel-Aware Chromatic Number.
It can thus be shown that for a given conflict graph HM,W constructed according to cache realization M, demand realization f, scheduled descriptions {du}, and scheduled random packet-level configuration W, a tight upper bound for the channel-aware chromatic number χCA(HM,W), when the number of descriptions D→∞, is given by: χCA(HM,W)=ψ(f,M)+o(D).
Thus, the GISs associated to the chromatic number χCA(HM,W) are then used by the concatenated source-channel encoder 306 to generate the final multicast codeword X (step 408).
As a concrete example, consider a network with U=3 receivers, denoted as U={1, 2, 3}, and m=3 files, denoted as F={A, B, C}. It is assumed that the channel rates are equal to η1=½, η2=¼, η3=¼, and that Δ=8 time-units. It is also assumed that the demand and caching distributions are such that E[ψ(f,M)]=2. Furthermore, as described above, each file is multiple description coded into D descriptions, e.g., A={A1, A2, . . . , AD}, each of length B=F/D=1 bits.
In the example above, the sender is configured to schedule 2 descriptions for receiver 1, i.e., d1=2, and one description each for receivers 2 and 3, i.e., d2=d3=1. It may be assumed further that the caching realization is given by: M1,A={A1, A2}, M1,C={C1}; M2,B={B1, B2}, M2,C={C1}; M3,A={A1}, M3,B={B1, B3}. Suppose that receiver 1 requests B, receiver 2 requests A and receiver 3 requests C, such that W1,B={B1, B2}, W2,A={A1} and W3,C={C1}. The corresponding U-clustered conflict graph HM,W that may be generated by the channel-aware source compressor 304 is shown in
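The conflict edges of this example can be re-derived numerically. The packet labels (A1, B1, etc.) below are reconstructed for illustration and assume the caching and scheduling realization just described:

```python
from itertools import combinations

# Assumed caching realization: M1 = {A1, A2, C1}, M2 = {B1, B2, C1},
# M3 = {A1, B1, B3}
caches = {
    1: {"A1", "A2", "C1"},
    2: {"B1", "B2", "C1"},
    3: {"A1", "B1", "B3"},
}
# Scheduled packets: d1 = 2 (B1, B2 for receiver 1), d2 = 1 (A1), d3 = 1 (C1)
scheduled = {1: ["B1", "B2"], 2: ["A1"], 3: ["C1"]}

vertices = [(p, u) for u, pkts in scheduled.items() for p in pkts]
edges = set()
for (p1, u1), (p2, u2) in combinations(vertices, 2):
    # edge if either packet is missing from the other receiver's cache
    if p1 != p2 and (p1 not in caches[u2] or p2 not in caches[u1]):
        edges.add(frozenset(((p1, u1), (p2, u2))))

# Only B1-B2 (same receiver) and B2-C1 conflict; hence {B1, B2} for
# receiver 1 together with {A1} for receiver 2 forms a (2, 1, 0)-GIS,
# and {C1} for receiver 3 takes a second color: two colors in total,
# consistent with E[psi(f, M)] = 2 in this example.
print(len(edges))  # prints 2
```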
Continuing the example,
The channel-aware vertex coloring is also depicted in
Channel-Aware Hierarchical Greedy Coloring (CA-HgC)
In view of the exponential complexity of the CA-CIC embodiment described above, an alternative embodiment, referenced herein as Channel-Aware Hierarchical greedy Coloring (CA-HgC), is provided: a coded multicast algorithm that fully accounts for the broadcast channel impairments while exhibiting polynomial-time complexity. The CA-HgC algorithm discussed below may be implemented in the channel-aware source compressor 304.
The CA-HgC algorithm works by computing two valid colorings of the conflict graph HM,W, referred to as CA-HgC1 and CA-HgC2. CA-HgC then compares the number of colors achieved by the two solutions and selects the coloring with the smaller number of colors. It is noted that CA-HgC1 coincides with the conventional naive (uncoded) multicast scheme. In fact, CA-HgC1 computes the minimum coloring of HM,W subject to the constraint that only vertices representing the same packet are allowed to have the same color. In this case, the total number of colors is equal to the number of distinct requested packets, and the coloring can be found in O(|V|²) time. On the other hand, CA-HgC2 is described by the algorithm illustrated in
In the following, a walk-through of the algorithm of
Let Kν={u∈U: ρ(ν)∈Wu∪Mu} denote the set of receivers that request and/or cache packet ρ(ν). The i-th vertex hierarchy Hi of HM,W is initialized to the set of vertices for whom the number of receivers requesting and/or caching its packet identity is equal to i, i.e., Hi={ν: |Kν|=i}. CA-HgC2 proceeds in decreasing order of hierarchy, starting from the U-th hierarchy.
The subroutine GISfunction(Hi, ν, i) is called for each vertex ν in the i-th hierarchy in increasing order of cardinality |Kν|. If the subroutine returns a non-empty set, then the vertices in that set are colored with the same color (lines 6-10). Otherwise, the uncolored vertex ν is moved to the next hierarchy, i.e., Hi−1 (lines 11-12). This procedure is applied iteratively for each hierarchy until all the vertices in the conflict graph HM,W are colored. The procedure returns the number of colors, |c|, as well as the set of codewords X.
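The hierarchy-based procedure can be sketched as follows. Since the exact GISfunction subroutine is specified in the referenced figure, the sketch below substitutes a plain greedy conflict-free grouping, a simplification that ignores the channel-aware GIS sizing; names and data structures are assumptions:

```python
from itertools import combinations

def greedy_hierarchical_coloring(vertices, edges, caches, scheduled):
    """Greedy coloring in the spirit of CA-HgC2 (simplified sketch).

    Vertices are visited in decreasing order of |K_v|, the number of
    receivers that request and/or cache the vertex's packet; each color
    class is then grown greedily as a conflict-free set of vertices.
    """
    def K(v):
        packet, _ = v
        return {u for u in caches
                if packet in caches[u] or packet in scheduled.get(u, [])}

    uncolored = sorted(vertices, key=lambda v: -len(K(v)))
    colors = []
    while uncolored:
        group = [uncolored.pop(0)]
        for w in list(uncolored):
            # add w only if it conflicts with nothing already in this color
            if all(frozenset((w, g)) not in edges for g in group):
                group.append(w)
                uncolored.remove(w)
        colors.append(group)
    return colors
```

On the three-receiver example above, this sketch yields two color classes, matching the channel-aware chromatic number of that example.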
Regarding the subroutine GISfunction(Hi, ν, i) shown in
As a concrete example, consider the same conflict graph HM,W shown in
In this section, the decodability of the multicast codewords transmitted by the sender to the receivers is described. From the observation of its channel output Yu, representing its noisy version of the transmitted codeword X, each receiver is able to decode the du descriptions of its requested video segment, Wu, scheduled and transmitted by the sender, as Ŵu=λu(Yu, Zu, f), via its own decoding function λu, which consists of two stages.
First, the noisy version of the codeword intended for receiver u is obtained by performing the inverse of the XOR-function. This processing can be conducted since, by design, the codewords of the different receivers belonging to the same GIS do not interfere with each other (i.e., they can be regenerated from the cached information of receiver u). In fact, receiver u is informed by the sender of: i) all the U codebooks; and ii) the packets from its cache that are present in the received XORed codeword, along with the corresponding codebooks used to construct the codeword.
Secondly, the recovered noisy codeword is sent to the channel decoder of receiver u, which reconstructs the bits associated to the set Pu according to the channel rate ηu. Hence, in the limit D→∞, the proposed sequence of (Δ, {εu})-delivery schemes is admissible, as defined below.
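At the packet level, the first decoding stage amounts to cancelling cached packets out of the XORed codeword. The toy sketch below assumes 8-bit packet payloads and a noiseless channel purely for illustration:

```python
# Toy illustration of decoding stage one: the sender XORs one (channel-coded)
# packet per receiver in a GIS; here the payloads are assumed 8-bit values.
A1 = 0b10110010  # packet scheduled for receiver 2
B1 = 0b01101100  # packet scheduled for receiver 1
C1 = 0b11100001  # packet scheduled for receiver 3

codeword = A1 ^ B1 ^ C1  # multicast codeword for one color / GIS

# Receiver 1 caches A1 and C1 (see the example above), so it cancels
# them by XOR to recover its own scheduled packet B1:
recovered = codeword ^ A1 ^ C1
assert recovered == B1
```

In the actual scheme each receiver's contribution is separately channel-coded at rate ηu before XORing, so the recovered word is then passed to the channel decoder as described next.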
Each receiver u∈U, after observing its channel output Yu, decodes the requested file Wf
Thus, a sequence of (Δ, {εu})-delivery schemes is called admissible if, for every receiver u∈U, the probability of decoding error does not exceed εu as D→∞:
It is noted that the variable-to-fixed nature of the multicast encoder is due to the fact that, while the length of the transmitted codeword X is fixed to the time-slot duration Δ (given by the video segment playback duration), the amount of information bits transmitted by the encoder is variable and given by Σu du B.
Although aspects herein have been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present disclosure. It is therefore to be understood that numerous modifications can be made to the illustrative embodiments and that other arrangements can be devised without departing from the spirit and scope of the disclosure.
This application is related to commonly assigned, co-pending U.S. application Ser. No. 14/514,938 filed Oct. 15, 2014, and also to commonly assigned, co-pending U.S. application Ser. No. 14/514,905 filed Oct. 15, 2014, the entire contents of both of which are incorporated herein by reference.