1. Technical field
The present invention relates to computer networks and, more particularly, to the caching and delivery of multimedia content (e.g., text, audio, video, software, etc., and any combination thereof) in Internet Protocol (IP) networks, including content delivery network (CDN) services.
2. Description of related art
Web caching, or content caching, means that the most popular (Web) content (also known as over-the-top content or OTT content) is stored and delivered from a Service Provider network, rather than from an Origin Server, the original content location on the Web. Service providers and network operators have widely deployed caching to reduce bandwidth over the peering link and improve Quality of Experience (QoE) for subscribers. Content caching typically requires business relations between content owners and network operators: content owners provide content to the network operators, while network operators cache and deliver the content to the subscribers from their own Content Delivery Networks (CDNs).
A content delivery network or content distribution network (CDN) is a system of computers containing copies of data placed at various nodes of a network; more particularly, a CDN is a collection of web caches distributed across multiple locations to deliver content more efficiently to users. When skillfully designed and implemented, a CDN can improve access to the data it caches by placing a number of copies closer to end users, resulting in increased access bandwidth to the data, better scaling, resiliency and reduced latency. An Origin (web) server often contains the initial content copy and has access to content metadata for generating content-specific responses, e.g. content headers and caching headers, when serving a content request. A web cache does not have access to content metadata for generating content-specific responses; it therefore caches the content and the content responses from the Origin web server instead. Data or media content types often cached in CDNs include multimedia objects (audio and video objects), web objects (text, graphics, URLs and scripts), downloadable objects (media files, software, documents), applications, live media (events), and database queries. While Web caching is simple in concept (storing the most popular Internet content and delivering it from the operator's network, rather than always retrieving it from a remote content source), it must be performed in a way that ensures the integrity of services, content and the network.
In deploying on-net CDNs, Service Providers are aiming to serve a growing consumer population that watches premium content from many different online sources. But Service Providers do not typically have business relations with all online Content Providers, and therefore some content is not initially provided by content owners to the network operators for delivery via CDNs. Most network operators have a need to reduce transport costs, improve QoE and manage traffic surges for online content even if the content is not initially provided by content owners. Transparent caching is an emerging caching technology which addresses these challenges. These solutions enable service providers to cache and deliver over-the-top (OTT) content from inside their networks. Transparent caching can be viewed as one use (application) of a CDN, no different from other uses (e.g., multi-screen video delivery, multi-tenant CDN for B2B customers, CDN-assisted Video on Demand, . . . ). Both content delivery networks and transparent caching systems cache content at the operator's network edge. Over half of all network operators are expected to deploy transparent caching and CDNs by 2014.
The term ‘transparent caching’ refers to the fact that the content is cached and delivered without the involvement—and often without the knowledge—of the content owners. Transparent caching often refers to caching that:
With transparent caching, content is stored and served from the edge of the operator's network, saving core and IP transit network resources and accelerating delivery to the subscriber. Transparent caching automatically intercepts popular Web (Internet) content and serves content requests from the cache, instead of transiting across the network and peering point to the Origin Web location. By reducing demand on transit bandwidth and minimising delays, network operators can deliver better QoE especially during peak periods and reduce peering costs.
Since this type of caching is ‘transparent’ or ‘invisible’ to the content owners, network operators can benefit from traditional caching advantages when business relations with the content owners are not possible for some reason. Transparent caching has the aforementioned characteristics of traditional caching, for example, delivering content from locations close to the subscribers, maintaining content ‘freshness’, preserving end-to-end business rules and application logic such as geo-restrictions, and ensuring content security.
The best known prior art solutions of transparent caches are deployed on ‘data path’ basis and illustrated on
There are several shortcomings of the prior art solutions based on deploying transparent caches on the data path, which can be summarized as follows:
Therefore, there is a need to enable transparent caching for all existing subscriber clients without the risk of causing a network outage and without reliance on load-balancers.
In light of the present need for an enhanced solution for transparent caching which overcomes all the above-mentioned shortcomings of ‘on-the-data-path’ transparent caching, a brief summary of various exemplary embodiments on the “out-of-path” basis proposed here is presented.
Some simplifications and omissions may be made in the following summary, which is intended to highlight and introduce some aspects of the various exemplary embodiments, but not to limit its scope. Detailed descriptions of preferred exemplary embodiments adequate to allow those of ordinary skill in the art to make and use the invention concepts will follow in later sections.
The present invention is well suited for all known subscriber clients, e.g. Xbox, and does not require client modifications.
The present invention is applicable to Internet and online CDNs. The present invention proposes an ‘out-of-path’ method/system for transparent caching which enables deployment of a single transparent cache deep in network locations, without the risk of causing a network outage in case the transparent cache fails.
In an embodiment of the invention, the ‘out-of-path’ proposal mirrors the content traffic, i.e. duplicates it and sends a copy of the traffic to the transparent cache. In an exemplary embodiment, mirroring of the content traffic can be done in a number of ways, e.g. using the port mirror capability provided by the Alcatel-Lucent 7750 Service Router (SR) or another Border Network Gateway (BNG), or alternatively using network taps. 7750 SR port mirroring is the most cost-efficient way to duplicate traffic for transparent caching.
According to an aspect of the invention, a method of transparent caching is provided for multimedia content delivering in which:
and the method comprises the following steps:
In a possible embodiment of the invention, if the decision taken is to deliver the content from the transparent cache or another CDN location, the method further comprises the transparent cache, or another cache, taking over the content delivery and stopping delivery from the Origin Server.
In another possible embodiment of the invention, if the decision taken is to deliver the content from the Origin Server, the method allows the content delivery from the Origin Server to the client to proceed as usual.
Multimedia content can be defined by means of one or more media objects or media segments. In an exemplary embodiment, the invention is applicable to either progressive HTTP content delivery or segmented HTTP content delivery (also known as Dynamic Adaptive Streaming over HTTP: DASH). The Origin Server can either store the content or acquire it from a content source node of the Internet Content Provider's network (or by other means) where the content to be delivered over Internet is originally stored.
In an embodiment of the invention, the Transparent Cache Server is the network entity of the operator's network, e.g. a transparent cache (or caches) attached to the CDN, in charge of deciding whether to deliver the content or let the Origin Server do it. In an alternative embodiment, if the cache decides not to deliver the content, or the cache simply fails, the Origin Server delivers the content. The cache may use the duplicated (mirrored) traffic for caching the content for future requests (also known as filling the cache). In another alternative embodiment, if the cache decides to deliver the content, then the cache itself ‘takes over’ the content delivery session from the Origin Server, impersonating it and disconnecting said Origin Server. This takeover of the content delivery session and the disconnection of the Origin Server by the Transparent Cache Server are performed transparently to the client.
In one embodiment, in order to take over the session, the transparent cache needs to spoof the Origin (web) server and the client, and, in addition, mimic them at the network level, e.g. with TCP sequence (SEQ) and acknowledgement (ACK) numbers. The transparent takeover (and final disconnection of the web server) by the transparent cache may comprise several steps:
Step i) Session takeover at the transport level, followed by takeover at the application level, e.g. using the HyperText Transfer Protocol (HTTP).
In an embodiment of the invention, the Transparent Cache Server spoofs and mimics the Origin Server by intercepting the TCP session and inserting an application-level redirect message (e.g. HTTP 302 Redirect) into the communication between the Origin Server and the client. The redirect message points to the transparent cache server itself or to another nominated cache. This step is transparent because the redirect message appears to the client as a genuine message from the Origin Server.
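By way of illustration only, the application-level payload injected in this step might be constructed as follows (a minimal sketch in Python; the host name and path are hypothetical, and the TCP-level spoofing that carries the payload into the intercepted stream is addressed in the later steps):

```python
def build_redirect(location: str) -> bytes:
    """Build a minimal HTTP 302 response that, when injected into the
    intercepted TCP stream with the Origin Server's addressing, appears
    to the client as a genuine redirect from the Origin Server."""
    lines = [
        "HTTP/1.1 302 Found",
        f"Location: {location}",
        "Content-Length: 0",
        "Connection: close",
        "",  # blank line terminating the header block
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

# Redirect the client to the transparent cache itself (hypothetical URL).
payload = build_redirect("http://cache.operator.example/videos/clip.mp4")
```

The `Connection: close` header in this sketch reflects the scheme's intent that the original client-to-Origin connection is not reused once the client follows the redirect to the cache.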
Step ii) Session takeover at the transport or network layer, e.g. using the Transmission Control Protocol (TCP).
In an embodiment of the invention, if the client does not respond to the redirect message of Step i) and the session continues between the client and the Origin Server, the Transparent Cache Server proceeds to take over the transport or network session. For example, during a TCP session, the cache spoofs the client towards the Origin Server and, in turn, spoofs the Origin Server towards the client, mimicking the Origin's TCP SEQ and TCP ACK numbers, and mimicking the client to reset the connection with the Origin Server.
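The SEQ/ACK mimicry described above can be sketched as simple modulo-2^32 bookkeeping over the last segments observed in the mirrored traffic (the function and field names below are illustrative, not part of the claimed method):

```python
MOD = 2 ** 32  # TCP sequence numbers wrap modulo 2^32

def next_spoofed_numbers(last_origin_seq: int, last_origin_len: int,
                         last_client_seq: int, last_client_len: int):
    """Given the last observed segment in each direction (taken from the
    mirrored traffic), compute the SEQ/ACK numbers the cache must use so
    that its segments towards the client continue the Origin Server's
    byte stream seamlessly."""
    seq = (last_origin_seq + last_origin_len) % MOD  # continue Origin's stream
    ack = (last_client_seq + last_client_len) % MOD  # acknowledge client data
    return seq, ack

# e.g. Origin last sent 1460 bytes starting at SEQ 1000, and the client
# last sent a 200-byte request starting at SEQ 5000:
seq, ack = next_spoofed_numbers(1000, 1460, 5000, 200)  # → (2460, 5200)
```

Because the numbers are derived purely from mirrored packets, the cache never needs to sit on the data path to keep them in sync.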
Step iii) As soon as the decision is made for the Transparent Cache Server to deliver the content, the cache server may attempt to prevent the web server from communicating with the client by mimicking client behaviour when data buffers are full, e.g. sending to the Origin Server TCP packets with the window size set to 0. This step protects the cache from losing taken-over sessions until the Origin Server is fully disconnected, for example, if in-flight packets straddle the takeover.
Step iv) The Transparent Cache Server disconnects the Origin (Web) server, without affecting the client, by mimicking the client's connection reset behaviour.
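Steps iii) and iv) amount to crafting spoofed TCP segments towards the Origin Server. A minimal sketch of the relevant TCP header fields follows (Python, illustrative values; a real sender would also build the spoofed IP header and compute the checksum over the IPv4 pseudo-header, which is omitted here):

```python
import struct

FLAG_RST = 0x04
FLAG_ACK = 0x10

def build_tcp_header(src_port, dst_port, seq, ack, flags, window):
    """Pack a 20-byte TCP header with no options. The checksum is left
    as 0 here; a real implementation must fill it in before sending."""
    offset_reserved = (5 << 4)  # data offset: 5 x 32-bit words
    return struct.pack("!HHIIBBHHH",
                       src_port, dst_port, seq, ack,
                       offset_reserved, flags, window,
                       0,   # checksum (to be computed by the sender)
                       0)   # urgent pointer

# Step iii): stall the Origin Server by advertising a zero receive window,
# pretending to be a client whose data buffers are full.
stall = build_tcp_header(54321, 80, seq=5200, ack=2460,
                         flags=FLAG_ACK, window=0)

# Step iv): disconnect the Origin Server with a spoofed connection reset.
reset = build_tcp_header(54321, 80, seq=5200, ack=2460,
                         flags=FLAG_RST | FLAG_ACK, window=0)
```

The zero-window segment keeps the Origin Server silent but connected; the reset segment then tears the connection down on the Origin side only, which is why the client remains unaffected.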
Steps iii) and iv) may need to be repeated multiple times until they succeed, due to network latency and race conditions.
In an alternative embodiment, the cache may choose not to proceed with the session takeover at the transport or network layer described in steps ii) and iv), for example, if the client does not respond to the application-level takeover. The cache can instead learn about such client behaviour. The cache can then change (re-write) the manifest file specifying the URLs for segmented HTTP content to point to the cache for subsequent requests from the same client. In this scenario the cache can execute steps i) and iii), but deliver the manifest file instead of inserting a redirect message in step i). The cache may also choose to deliver the manifest file without previous failed application-takeover attempts, for example, if the cache can learn by other means that the client does not support application-level takeover. Such other means can include pre-provisioned metadata or information acquired from intercepted content requests.
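The manifest re-writing alternative can be sketched as follows (Python; an HLS-style playlist is used purely as an example of segmented HTTP content, and the host names are hypothetical):

```python
def rewrite_manifest(manifest: str, origin_host: str, cache_host: str) -> str:
    """Replace the Origin Server's host in every segment URL with the
    transparent cache's host, so that subsequent segment requests from
    the same client go directly to the cache."""
    out = []
    for line in manifest.splitlines():
        prefix = "http://" + origin_host
        if line.startswith(prefix):
            line = "http://" + cache_host + line[len(prefix):]
        out.append(line)
    return "\n".join(out)

playlist = """#EXTM3U
#EXTINF:10,
http://origin.example.com/vod/seg1.ts
#EXTINF:10,
http://origin.example.com/vod/seg2.ts"""

rewritten = rewrite_manifest(playlist, "origin.example.com",
                             "cache.operator.example")
```

Delivering the rewritten manifest in place of the redirect avoids any transport-level disconnection of the Origin Server for clients known not to follow application-level redirects.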
If the Transparent Cache Server decides not to deliver the content, there is no takeover of the content delivery session by the transparent cache; instead, the content delivery session continues between the Origin Server and the client. But even in the case that the cache decides to deliver the content and, for example, the Origin Server has not been disconnected fast enough, the content delivery session can continue between the Origin Server and the client. In this case, parts of the takeover steps need to be repeated until the Transparent Cache Server successfully disconnects the Origin Server and can continue serving the request while ‘impersonating’ it.
The order of the method steps is not critical, so other orders of the method steps are also possible.
According to another aspect of the invention a transparent cache server is provided, comprising:
In a possible embodiment, in case the decision is taken to deliver the content from a transparent cache, the transparent cache server further comprises:
Another aspect of the invention relates to a system for media content delivery which is a telecommunications network of any known network topology (e.g. ring, mesh, etc.) comprising at least a transparent cache server and an origin server as the ones above defined.
According to another aspect of the invention, a computer program product is provided, comprising computer-executable instructions for performing any of the steps of the method previously disclosed when the program is run on a computer. A digital data storage medium is also provided, encoding a machine-executable program of instructions to perform any of the steps of the method previously disclosed.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
The presented embodiments potentially have the following advantages when compared with the prior art:
Some embodiments of the method, system and device in accordance with embodiments of the present invention are now described, by way of example only, and with reference to the accompanying drawings, in which:
Throughout the figures like reference numerals refer to like elements.
Similar ‘on-the-data path’ prior-art solutions are available from other transparent caching providers including Cisco, Juniper and PeerApp.
The present inventions may be embodied in other specific devices, system and/or methods. The described embodiments are to be considered in all respects as only illustrative and not restrictive. In particular, the scope of the invention is indicated by the appended claims rather than by the description and figures herein. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
If transparent cache server (301) decides to deliver the content (33), the steps for taking over content delivery session are illustrated in
If the cache server (301) cannot take over the session using an application-level HTTP redirect, e.g. if the client (4) does not respond to the redirect message and the session continues (34) between the client (4) and the Origin server, then the cache server (301) can fall back to taking over the delivery session at the transport or network level.
In an alternative embodiment, if the cache server (301) cannot take over the session using an application-level redirect, e.g. if the client (4) does not respond to the redirect message and the session continues (34) between the client (4) and the Origin server, the cache can instead change or re-write the manifest file specifying the URLs for segmented HTTP content to point to the cache. In this case the procedure is similar to
A person of skill in the art would readily recognize that steps of various above-described methods can be performed by programmed computers. Herein, some embodiments are also intended to cover program storage devices, e.g., digital data storage media, which are machine- or computer-readable and encode machine-executable or computer-executable programs of instructions, wherein said instructions perform some or all of the steps of said above-described methods. The program storage devices may be, e.g., digital memories, magnetic storage media such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media. The embodiments are also intended to cover computers programmed to perform said steps of the above-described methods.
The description and drawings merely illustrate the principles of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its scope. Furthermore, all examples recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof. It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
Number | Date | Country | Kind |
---|---|---|---|
12382486.4 | Dec 2012 | EP | regional |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2013/074335 | 11/21/2013 | WO | 00 |