This invention relates to Internet Protocol Television (IPTV) networks and in particular to caching of video content at nodes within the network.
In an IPTV network, Video on Demand (VOD) and other video services generate large amounts of unicast traffic from a Video Head Office (VHO) to subscribers and, therefore, require significant bandwidth and equipment resources in the network. To reduce this traffic, and consequently the overall network cost, part of the video content, such as the most popular titles, may be stored in caches closer to subscribers. For example, a cache may be provided in a Digital Subscriber Line Access Multiplexer (DSLAM), Central Office (CO) or Intermediate Office (IO). Selection of content for caching may depend on several factors including the size of the cache, content popularity, etc.
What is required is a system and method for optimizing the size and locations of cache memory in IPTV networks.
In one aspect of the disclosure, there is provided a method for optimizing a cache memory allocation of a cache at a network node of an Internet Protocol Television (IPTV) network comprising defining a cacheability function and optimizing the cacheability function.
In one aspect of the disclosure, there is provided a network node of an Internet Protocol Television network comprising a cache, wherein a size of the memory of the cache is in accordance with an optimal solution of a cache function for the network.
In one aspect of the disclosure, there is provided a computer-readable medium comprising computer-executable instructions for execution by a first processor and a second processor in communication with the first processor, that, when executed cause the first processor to provide input parameters to the second processor, and cause the second processor to calculate at least one cache function for a cache at a network node of an IPTV network.
Reference will now be made to specific embodiments, presented by way of example only, and to the accompanying drawings in which:
In a typical IPTV architecture 10, illustrated in
To reduce the cost impact of unicast VoD traffic on the IPTV network 10, part of the video content may be stored in caches closer to the subscribers. In various embodiments, caches may be provided in some or all of the DSLAMs, COs or IOs. In one embodiment, a cache may be provided in the form of a cache module 15 that can store a limited amount of data, e.g. up to 3000 TeraBytes (TB). In addition, each cache module may be able to support a limited amount of traffic, e.g. up to 20 Gb/s. The cache modules are convenient because each may occupy a single slot in the corresponding network equipment.
In one embodiment, caches are provided in all locations of one of the layers, e.g. DSLAM, CO, or IO. That is, a cache will be provided in each DSLAM 14 of the network, or each CO 16 or each IO 18.
The effectiveness of each cache may be described as the percentage of video content requests that may be served from the cache. Cache effectiveness is a key driver of the economics of the IPTV network.
Cache effectiveness depends on several factors including the number of titles stored in the cache (which is a function of cache memory and video sizes) and the popularity of titles stored in the cache which can be described by a popularity distribution.
Cache effectiveness increases as cache memory increases, but so do costs: transport costs of video content are traded against the combined cost of all of the caches on the network. Cache effectiveness is also a function of the popularity curve. An example of a popularity distribution 20 is shown in
Zipf = 1/x^a, where x is the popularity rank of a title and a is the Zipf exponent.
As the popularity curve flattens cache effectiveness decreases.
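This relationship can be sketched numerically. In the following Python snippet, the catalog size, the exponent values, and the choice to cache 10% of titles are illustrative assumptions, not figures from the disclosure; effectiveness is modeled as the share of Zipf-distributed requests that fall on the cached top-ranked titles:

```python
# Illustrative model: popularity of the x-th ranked title ~ 1/x**a (Zipf);
# cache effectiveness = share of requests served by caching the top n titles.

def zipf_weights(num_titles, a):
    """Unnormalized Zipf popularity weights for ranks 1..num_titles."""
    return [1.0 / x**a for x in range(1, num_titles + 1)]

def cache_effectiveness(num_titles, cached_titles, a):
    """Fraction of requests served from a cache of the top-ranked titles."""
    w = zipf_weights(num_titles, a)
    return sum(w[:cached_titles]) / sum(w)

# Caching 10% of a 10,000-title catalog:
steep = cache_effectiveness(10_000, 1_000, a=1.0)  # peaked popularity curve
flat = cache_effectiveness(10_000, 1_000, a=0.5)   # flatter popularity curve
```

In this toy model the steeper curve serves roughly three quarters of requests from the cache while the flatter curve serves roughly a third, illustrating how a flattening popularity curve erodes the benefit of a fixed-size cache.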
In order to find the optimal location and size of cache memory, an optimization model and tool is provided. The tool selects an optimal cache size and network location given a typical metro topology, video content popularity curves, cost and traffic assumptions, etc. In one embodiment, the tool also optimizes the entire network cost based on the effectiveness of the cache, its location and so on. Caching effectiveness is a function of memory and the popularity curve, with increasing memory causing increased effectiveness (and cache costs) but reduced transport costs. The optimization tool may therefore be used to select the optimal memory for the cache to reduce overall network costs.
An element of the total network cost is the transport bandwidth cost. Transport bandwidth cost is a function of bandwidth per subscriber and the number of subscribers. Caching reduces bandwidth upstream by the effectiveness of the cache, which, as described above, is a function of the memory and popularity distribution. The transport bandwidth cost problem is depicted graphically in
T_d is the transport bandwidth required by a single DSLAM and is represented as:

T_d = #sub * BW/sub

where #sub is the number of subscribers per DSLAM and BW/sub is the bandwidth per subscriber. T_CO is the transport cost to the Central Offices 32 and is represented as:

T_CO = #d * T_d

where #d is the number of DSLAMs per CO. T_IO is the transport cost to the Intermediate Offices 33 and is represented as:

T_IO = #CO * T_CO

where #CO is the number of COs per IO. VHO Traffic is the transport cost of all VHO traffic on the network from the VHO 34 and is represented as:

VHO Traffic = #IO * T_IO

where #IO is the number of IOs in the network.
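The transport roll-up described above can be sketched as follows for a uniform tree topology. All counts, bandwidths, and the cache effectiveness value are illustrative assumptions; a cache at the DSLAM layer is modeled as scaling the traffic passed upstream by (1 − effectiveness):

```python
# Sketch of the transport-bandwidth roll-up for a uniform tree topology.
# All parameter values are illustrative assumptions.

def transport_rollup(subs_per_dslam, bw_per_sub, dslams_per_co,
                     cos_per_io, num_ios, cache_effectiveness=0.0):
    t_d = subs_per_dslam * bw_per_sub        # T_d = #sub * BW/sub
    t_d *= (1.0 - cache_effectiveness)       # traffic removed by the cache
    t_co = dslams_per_co * t_d               # T_CO = #d * T_d
    t_io = cos_per_io * t_co                 # T_IO = #CO * T_CO
    vho_traffic = num_ios * t_io             # VHO Traffic = #IO * T_IO
    return t_d, t_co, t_io, vho_traffic

no_cache = transport_rollup(200, 2.0, 10, 5, 4)
with_cache = transport_rollup(200, 2.0, 10, 5, 4, cache_effectiveness=0.6)
```

Because the reduction is applied at the lowest layer, a cache with 60% effectiveness cuts every upstream figure, including total VHO traffic, to 40% of its uncached value.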
The required transport bandwidth can be used for dimensioning equipment such as the DSLAMs, COs and IOs and determining the number of each of these elements required in the network.
The parameter table 40 may be incorporated into a wider optimization tool for use in a network cost calculation.
A flowchart 50 for determining network cost is illustrated in
Network Cost (step 510) = Equipment Cost + Transport Cost.
The Equipment Cost is the cost of all DSLAMs, COs, IOs and the VHO, as well as the VoD servers and caches. The equipment cost can be broken down by considering the dimensioning of each of the DSLAM, CO and IO layers. DSLAM dimensioning (step 501) requires cost considerations of:
CO dimensioning (step 502) requires:
IO Dimensioning (step 503) requires:
VHO dimensioning (step 504) requires:
The equipment cost will also include the cache cost, which is equal to the common cost of the cache plus the memory cost. The transport cost of the network will be the cost of all GE connections 506 and 10 GE connections 505 between the network nodes.
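The cost roll-up of the flowchart can be sketched as below. Every count and unit cost is a made-up placeholder, not a value from the disclosure; the structure simply mirrors step 510: per-element equipment costs (including the cache as common cost plus memory cost) added to GE/10GE link costs:

```python
# Illustrative total-network-cost calculation (step 510): equipment cost
# for each layer plus GE / 10GE transport links. All values are placeholders.

def network_cost(counts, unit_costs, links, link_costs):
    """counts / unit_costs keyed by element type; links / link_costs
    keyed by connection type."""
    equipment = sum(counts[k] * unit_costs[k] for k in counts)
    transport = sum(links[k] * link_costs[k] for k in links)
    return equipment + transport

cost = network_cost(
    counts={"dslam": 200, "co": 20, "io": 4, "vho": 1, "cache": 200},
    # cache unit cost = common cost (chassis/slot) + memory cost
    unit_costs={"dslam": 5.0, "co": 50.0, "io": 200.0, "vho": 1000.0,
                "cache": 8.0},
    links={"ge": 200, "ten_ge": 24},
    link_costs={"ge": 1.0, "ten_ge": 6.0},
)
```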
Different video services (e.g. VoD, NPVR, ICC, etc.) have different cache effectiveness (or hit rates) and different title sizes. A problem to be addressed is how a limited resource, i.e. cache memory, can be partitioned between different services in order to increase the overall cost effectiveness of caching.
The problem of optimal partitioning of cache memory between several unicast video services may be considered as a constraint optimization problem similar to the well-known knapsack problem, and may be solved by, e.g., linear integer programming. However, given the number of variables described above, finding a solution may take significant computational time. Thus, in one embodiment of the disclosure, the computational problem is reduced by defining a special metric, "cacheability", to speed up the process of finding the optimal solution. The cacheability factor takes into account the cache effectiveness, total traffic and title size of each service. The method uses the cacheability factor and an iterative process to find the optimal number of cached titles for each service that will maximize the overall cache hit rate subject to the constraints of cache memory and throughput limitations.
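The iterative, cacheability-driven process can be sketched as a greedy loop: repeatedly cache one more title of the service offering the best hit-rate gain per unit of memory, within the memory and throughput limits. The service parameters and the min()-based effectiveness functions below are illustrative assumptions, not the disclosure's data:

```python
# Greedy sketch of cacheability-driven allocation. All inputs are toy values.

def allocate(services, memory_limit, throughput_limit):
    """services: dicts with traffic T, title size S, and effectiveness
    function F(n) -> hit ratio when n titles are cached."""
    titles = [0] * len(services)
    used_mem = 0.0
    while True:
        best, best_cacheability = None, 0.0
        for i, svc in enumerate(services):
            gain = svc["T"] * (svc["F"](titles[i] + 1) - svc["F"](titles[i]))
            cacheability = gain / svc["S"]   # benefit per unit of memory
            if (used_mem + svc["S"] <= memory_limit
                    and cacheability > best_cacheability):
                best, best_cacheability = i, cacheability
        if best is None:
            break  # no service improves within the memory limit
        svc = services[best]
        cached_traffic = sum(s["T"] * s["F"](n)
                             for s, n in zip(services, titles))
        if cached_traffic + best_cacheability * svc["S"] > throughput_limit:
            break  # next title would exceed cache throughput
        titles[best] += 1
        used_mem += svc["S"]
    return titles

services = [
    {"T": 10.0, "S": 1.0, "F": lambda n: min(1.0, 0.2 * n)},
    {"T": 5.0, "S": 2.0, "F": lambda n: min(1.0, 0.1 * n)},
]
titles = allocate(services, memory_limit=6.0, throughput_limit=1e9)
```

For these toy inputs the loop caches five titles of the high-traffic, small-title service and none of the other, since every title of the first service delivers more cached traffic per byte.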
The cache effectiveness function (or hit ratio function) depends on the statistical characteristics of the traffic (long- and short-term title popularity) and on the effectiveness of the caching algorithm used to update the cache content. Different services have different cache effectiveness functions. A goal is to maximize cache effectiveness subject to the limitations on available cache memory M and cache traffic throughput T. In one embodiment, cache effectiveness is defined as the total cache hit rate weighted by traffic amount. In an alternative embodiment, cache effectiveness may additionally be weighted to minimize used cache memory.
The problem can be expressed as a constraint optimization problem, namely:
max Σ_{i=1..N} T_i F_i(⌊M_i/S_i⌋)

subject to:

Σ_{i=1..N} M_i ≤ M

and

Σ_{i=1..N} T_i F_i(⌊M_i/S_i⌋) ≤ T
where M_i is the cache memory allocated to the i-th service, S_i is the size of one title of the i-th service, T_i is the total traffic of the i-th service, and N is the number of services.
The cache effectiveness F_i(n) is the ratio of traffic for the i-th service that may be served from the cache if n items (titles) of this service are cached.
This problem may be formulated as a linear integer program and solved with an LP solver.
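For very small instances, the discrete program can also be solved by exhaustive search, which is useful as a cross-check on a solver. The two toy services below are assumptions for illustration only:

```python
# Exhaustive-search check of the discrete program for a tiny instance
# (an ILP solver would be used at scale; all numbers are toy values).
from itertools import product

def solve_exact(T, S, F, M, Tmax):
    """Maximize sum T_i*F_i(n_i) over per-service title counts n_i,
    subject to sum n_i*S_i <= M and sum T_i*F_i(n_i) <= Tmax."""
    best_val, best_n = 0.0, tuple(0 for _ in T)
    max_titles = [int(M // s) for s in S]
    for n in product(*(range(m + 1) for m in max_titles)):
        if sum(ni * si for ni, si in zip(n, S)) > M:
            continue  # memory constraint violated
        val = sum(ti * fi(ni) for ti, fi, ni in zip(T, F, n))
        if val <= Tmax and val > best_val:
            best_val, best_n = val, n
    return best_val, best_n

best_val, best_n = solve_exact(
    T=[10.0, 5.0], S=[1.0, 2.0],
    F=[lambda n: min(1.0, 0.2 * n), lambda n: min(1.0, 0.1 * n)],
    M=6.0, Tmax=100.0)
```

For these toy inputs the optimum caches five titles of the first service and none of the second.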
Continuous formulation of this problem is similar to the formulation above:
max Σ_{i=1..N} T_i F_i(M_i/S_i)

subject to:

Σ_{i=1..N} M_i ≤ M

and

Σ_{i=1..N} T_i F_i(M_i/S_i) ≤ T
and may be solved using a Lagrange multipliers approach. The method of Lagrange multipliers finds the extrema of a function of several variables subject to one or more constraints and is a basic tool in nonlinear constrained optimization. It identifies the stationary points of the constrained function; extrema occur at these points, on the boundary, or at points where the function is not differentiable. Applying the method of Lagrange multipliers to the problem:
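Under the continuous formulation above, one way to write the Lagrangian and the resulting stationarity conditions is the following sketch, where λ and μ are the multipliers for the memory and throughput constraints:

```latex
% Lagrangian for the continuous problem (sketch):
\mathcal{L} = \sum_{i=1}^{N} T_i F_i\!\left(\tfrac{M_i}{S_i}\right)
  - \lambda\left(\sum_{i=1}^{N} M_i - M\right)
  - \mu\left(\sum_{i=1}^{N} T_i F_i\!\left(\tfrac{M_i}{S_i}\right) - T\right)

% Setting \partial\mathcal{L}/\partial M_i = 0:
(1-\mu)\,\frac{T_i}{S_i}\,F_i'\!\left(\tfrac{M_i}{S_i}\right) = \lambda,
\qquad i = 1,\dots,N
```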
These equations describe stationary points of the constrained function. An optimal solution may be achieved at stationary points or on the boundary (e.g., where M_i = 0 or M_i = M).
In the following, a "cacheability" function f_i(m) is defined that quantifies the benefit of caching per unit of used memory m for the i-th service (i = 1, 2, . . . , N).
To illustrate how cacheability functions may be used to find an optimal solution of this problem, a simplified example having only two services may be considered. If the functions f1 and f2 are plotted on the same chart (
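Numerically, the memory split at which two cacheability curves meet can be found by bisection. In the sketch below, the exponential effectiveness model F(x) = 1 − e^(−x/c), the constant c, and all traffic and size values are illustrative assumptions:

```python
# Two-service example: find the memory split m1 + m2 = M at which the
# cacheability curves f1 and f2 intersect. The effectiveness model
# F(x) = 1 - exp(-x/c) and all constants below are assumptions.
import math

T1, S1, T2, S2, M = 10.0, 1.0, 5.0, 2.0, 8.0

def cacheability(T, S, c, m):
    """Marginal cached traffic per unit of memory m, for an assumed
    effectiveness F(x) = 1 - exp(-x/c) with x = m/S cached titles."""
    return (T / (S * c)) * math.exp(-m / (S * c))

def f1(m):
    return cacheability(T1, S1, 5.0, m)

def f2(m):
    return cacheability(T2, S2, 5.0, m)

# f1(m1) - f2(M - m1) decreases in m1, so bisection finds the crossing.
lo, hi = 0.0, M
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f1(mid) > f2(M - mid):
        lo = mid
    else:
        hi = mid
m1 = 0.5 * (lo + hi)
m2 = M - m1
```

At the crossing, a unit of memory is equally valuable to either service, which is exactly the balance condition the Lagrange analysis predicts; any memory short of that point should go to whichever curve is higher.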
Once cache memories have been determined using the cacheability functions and cache effectiveness functions, the cache allocations can be inserted into the network cost calculations for determining total network costs. In addition, the cacheability functions and cache effectiveness functions can be calculated on an ongoing basis in order to ensure that the cache is partitioned appropriately with cache memory dedicated to each service in order to optimize the cache performance.
In one embodiment, the optimization tool may be embodied on one or more processors as shown in
Although embodiments of the present invention have been illustrated in the accompanied drawings and described in the foregoing description, it will be understood that the invention is not limited to the embodiments disclosed, but is capable of numerous rearrangements, modifications, and substitutions without departing from the spirit of the invention as set forth and defined by the following claims. For example, the capabilities of the invention can be performed fully and/or partially by one or more of the blocks, modules, processors or memories. Also, these capabilities may be performed in the current manner or in a distributed manner and on, or via, any device able to provide and/or receive information. Further, although depicted in a particular manner, various modules or blocks may be repositioned without departing from the scope of the current invention. Still further, although depicted in a particular manner, a greater or lesser number of modules and connections can be utilized with the present invention in order to accomplish the present invention, to provide additional known features to the present invention, and/or to make the present invention more efficient. Also, the information sent between various modules can be sent between the modules via at least one of a data network, the Internet, an Internet Protocol network, a wireless source, and a wired source and via plurality of protocols.
This application claims the benefit of U.S. Provisional Application No. 60/969,162 filed Aug. 30, 2007, and PCT/US08/10269 filed Aug. 29, 2008, the disclosures of which are incorporated herein by reference.
Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/US08/10269 | 8/29/2008 | WO | 00 | 2/12/2010

Number | Date | Country
---|---|---
60969162 | Aug 2007 | US