The present invention pertains to the field of mobile networks and in particular to a system and method for improving user quality of experience when using a mobile network.
Communication networks provide connectivity between end points such as User Equipment (UE), servers, and other computing devices. The quality of experience (QoE) for users accessing a network is affected by the speed of the network at all points between a user and the end point that is exchanging data with the user's UE. A number of techniques have been implemented in communication networks to improve network performance and increase users' QoE for a given network infrastructure. Two of these techniques are pre-fetching data and caching data.
Pre-fetching data involves transferring data across the network in advance of it being required at the receiving UE. Caching data involves transferring data across the network and storing a copy of the data logically local to a receiving UE. Caching can be performed on the end point itself, or at the edge of the network, to manage data demand across the network between the receiving UE and the transmitting end point.
The advantages of pre-fetching and caching include: reduced latency, load balancing of backhaul traffic, improved QoE (e.g. avoiding video stalling or pixelation), and reduced peak traffic demand on the network. In general, these techniques improve QoE because the same amount of data is made available to a requesting end point, without the need for the full data set to be transferred across the network fast enough to meet the real-time usage requirements of the requesting UE.
With the development of next generation networks (e.g. 5G networks), the network architecture is split into a user plane handling the data, and a control plane which manages the network functions of network entities across the network. Pre-fetching and caching schemes developed for legacy networks may be insufficient to meet the data traffic requirements of next generation networks.
U.S. patent application Ser. No. 14/606,633 (Publication No. US 2016/0217377) “Systems, Devices and Methods for Distributed Content Interest Prediction and Content Discovery”, and International Patent Application No. PCT/IB2015/058796 (Publication No. WO 2016/120694) “Systems, Devices and Methods for Distributed Content Pre-Fetching in Mobile Communication Networks” propose novel methods for pre-fetching based upon predicted content demand by the receiving UE, and a predicted future location of the receiving UE.
By pre-fetching and caching data using predicted content demand and future location, the content data transfer across the network may be managed, reducing peak data traffic demands. With high priority content data transferred in anticipation of a UE's future location, delays in data transfer due to network inefficiencies or traffic congestion are accommodated, since the data is pre-located logically proximate to the receiving UE's future location in advance of the content consumption by the receiving UE. In effect, traffic is managed by leaving more time for the content data to transfer across the network to the receiving UE.
While these novel methods are effective at managing data traffic demands generally, they are limited by their reliance on considering only a predicted content demand and a predicted future location of the receiving UE. Accordingly, there is a need for a system and method for pre-fetching data that addresses some of the limitations of the prior art.
This background information is provided to reveal information believed by the applicant to be of possible relevance to the present invention. No admission is necessarily intended, nor should be construed, that any of the preceding information constitutes prior art against the present invention.
A communication system and method for pre-fetching content to be delivered to User Equipment connected to a communication network is provided. The system and method may include at least one network entity operative to evaluate at least one network resource metric measuring an operating condition of the communication network, and operative to send to a UE connected to the network a pre-fetching message to trigger pre-fetching of the content.
In an implementation, a communication system is provided that is operative to pre-fetch content for delivery to a UE connected to a communication network, the system comprising: at least one network entity operative to evaluate at least one network resource metric measuring an operating condition of the communication network, and operative to send a pre-fetching message to trigger pre-fetching of the content.
In an aspect, the evaluation comprises considering: predicted future UE location and handover to connect to the communication network; predicted future available data bandwidth on communication network data links connecting the UE to a transmitting endpoint providing the content; and predicted future buffer/cache status available to receive the pre-fetched data.
In an aspect of the present invention, there is provided a method for execution at a control plane function. The method comprises transmitting, towards a user equipment, a prefetching message comprising future data rates; and receiving from the user equipment, a prefetching acknowledgement message. In an embodiment, the control plane function is a Session Management Function. In a further embodiment, the method further comprises, after receiving the prefetching acknowledgement message, transmitting towards at least one of user plane functions and radio access network nodes, instructions to setup a user plane data path.
Further features and advantages of the present invention will become apparent from the following detailed description, taken in combination with the appended drawings, in which:
It will be noted that throughout the appended drawings, like features are identified by like reference numerals.
As used herein, a communication network (or simply a “network”) refers to a collection of communicatively coupled devices which interoperate to facilitate communication between various endpoint devices, such as User Equipment devices. The term “User Equipment” (UE) is used herein for clarity to refer to endpoint devices which are configured to communicate with a network either via fixed line connection, or via radios operating according to a predetermined protocol. UEs include UEs as defined by the 3rd Generation Partnership Project (3GPP), mobile devices (e.g. wireless handsets) and other connected devices, including Machine-to-Machine (M2M) devices (also referred to as Machine Type Communications (MTC) devices). A mobile device need not be mobile itself, but is a device that can communicate with a network which is capable of providing communication services as the device moves. A network may include, for instance, at least one of a radio access portion which interfaces directly with UEs via radio access and a fixed line portion which interfaces directly with UEs via fixed line access, in combination with a backhaul portion which connects different network devices of the network together. The network may further comprise various virtualized components as will become readily apparent herein. A primary forward-looking example of such a network is a Fifth Generation (5G) network. The pre-fetching methods and systems described in this application are applicable to both current 3G and 4G networks, as well as future proposed networks such as 5G networks.
It has been proposed that 5G networks be built with various network technologies that allow for the network to be reconfigured to suit various different needs. These technologies can also allow the network to support network slicing to create different sub-networks with characteristics suited for the needs of the traffic they are designed to support. The network may include a number of computing hardware resources that provide processors and/or allocated processing elements, memory, and storage to support network entities including functions executing on the network, as well as a variety of different network connectivity options connecting the computing resources to each other, and making it possible to provide service to mobile devices. The control plane of the network includes functionality to evaluate data traffic needs in the user plane, and to manage and re-configure the network entities to adjust the available network resources to meet specific Quality of Service (QoS) or QoE requirements.
As described above, pre-fetching and caching data by predicting content demand and future UE location may be used to manage the transfer of content data across the network to reduce peak data traffic demands. Some users may be static, in which case their predicted future locations are the same as their current locations; however, the traffic demand at a location may change over time, and the network can predict the future demand based on current user activities. High priority content data may be transferred in anticipation of a UE's future location and cached either at an edge node of the network or on the destination UE. Since data is buffered logically local to the destination UE, delays in data transfer due to network inefficiencies or traffic congestion are accommodated, as the data has more time to traverse the network before being required for consumption by the destination UE.
These novel methods are effective at managing data traffic demands generally, and assist in managing traffic demand by extending the time period available for data transfer across the RAN and to a receiving UE. Since the available time period is extended, the process can accommodate periods of network congestion and/or delayed delivery of data and still deliver sufficient data to maintain the QoE at the receiving UE. The methods have a limited range of data transfer scheduling options, as they rely solely on considering predicted content demand and reception location. Accordingly, a decision to transfer data is based on the predicted future need at a predicted location.
The present application provides for a system and methods for pre-fetching and caching data which further take into account network conditions and infrastructure. By taking network conditions and infrastructure into consideration, pre-fetching and caching decisions can be made which are optimized to local conditions within the network. For instance, backhaul links, spectrum availability, and cache and buffer sizes at the local level may be considered and used to direct the pre-fetching and caching of content data to maintain the end user's QoE.
By way of example in a wireless network context, when a receiving UE has a given current location and anticipated future content demand and reception location, the decision to pre-fetch, the size of the content data to be cached, and the cache location may be determined based upon availability of network resources to transfer the content data to the edge of the RAN, the spectrum available to transfer the content data from the edge of the RAN to the receiving UE, as well as cache resources available on the path between the current location and the predicted reception location. Accordingly, if a receiving UE is predicted to be moving towards a low bandwidth location, content data may be pre-fetched and cached at one or more Access Nodes (ANs) that are located on the path between the current location and the predicted low bandwidth location that are able to provide the receiving UE with high bandwidth connections. The specific caching location may further be determined based on available caching resources, as well as network resources available to transfer the data from the source location to possible cache locations within the predicted time window that the receiving UE will be within range of each potential caching location. Thus, the pre-fetched content data may be transferred to the receiving UE over a high bandwidth connection for caching at a selected caching location that is predicted to be on the path between the receiving UE's current location and the area of low-bandwidth connections.
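By way of a hedged illustration only, the selection logic described above might be sketched as follows; the data structure, field names, and scoring rule are assumptions made for the example rather than features of the claimed system:

```python
# Illustrative sketch: choosing a caching AN along a UE's predicted path.
# All fields and weights are assumed for this example.
from dataclasses import dataclass

@dataclass
class CandidateAN:
    an_id: str
    connect_probability: float   # likelihood the UE attaches to this AN
    radio_bandwidth_mbps: float  # spectrum available from this AN to the UE
    backhaul_mbps: float         # capacity to move content to this AN in time
    free_cache_mb: float         # cache space available at this AN

def select_cache_an(candidates, content_size_mb, window_s):
    """Pick the AN most likely to serve the UE that can also receive and
    store the content before the UE arrives."""
    feasible = [
        c for c in candidates
        if c.free_cache_mb >= content_size_mb
        and (c.backhaul_mbps * window_s) / 8.0 >= content_size_mb
    ]
    if not feasible:
        return None  # fall back to normal, non-pre-fetched delivery
    # Favour ANs the UE is likely to pass that also offer a fast radio link.
    return max(feasible,
               key=lambda c: c.connect_probability * c.radio_bandwidth_mbps)
```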
In general, the present application provides for a system and method for pre-fetching data to selected cache locations that serve corresponding ANs based on the predicted movement of receiving UEs and the availability of network resources servicing the locations between the current location of each receiving UE and predicted future locations of that receiving UE.
The decision to pre-fetch and cache data may be triggered by identifying that the receiving UE is either in, or moving to, an area with high bandwidth availability and/or low-cost connectivity such as WiFi APs, small cell ANs, etc.
The decision to pre-fetch and cache data may be triggered by identifying that the receiving UE is predicted to move into a future location with low bandwidth availability and/or high-cost connectivity.
The decision to pre-fetch and cache data may be triggered by identifying that network resources may be constrained in the future. For instance, congestion building within the network serving the receiving UE's current or predicted future location may trigger the pre-fetching of predicted content to avoid transferring data during a period of network congestion.
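The three triggers above may be combined into a single check. A minimal sketch, assuming illustrative threshold values and input names that are not part of the specification:

```python
# Illustrative combination of the three pre-fetch triggers described above.
HIGH_BW_MBPS = 50.0       # assumed "high bandwidth" threshold
LOW_BW_MBPS = 5.0         # assumed "low bandwidth" threshold
CONGESTION_LIMIT = 0.8    # assumed fraction of capacity in use

def should_trigger_prefetch(current_bw_mbps, predicted_bw_mbps,
                            predicted_load, low_cost_connectivity):
    # Trigger 1: UE is in, or moving to, high-bandwidth / low-cost coverage.
    if current_bw_mbps >= HIGH_BW_MBPS or low_cost_connectivity:
        return True
    # Trigger 2: UE is predicted to move into low-bandwidth / high-cost coverage.
    if predicted_bw_mbps <= LOW_BW_MBPS:
        return True
    # Trigger 3: network resources are predicted to be constrained in the future.
    if predicted_load >= CONGESTION_LIMIT:
        return True
    return False
```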
Incorporating network resource availability into the pre-fetching decision process has a number of benefits which may be realized by a network operator, including reduced packet latency, enhanced user QoE, and potential cost reductions and efficiency improvements. Network resource based pre-fetching improves network operations as it allows networks to support a higher time average data rate with more stable characteristics (i.e. fewer data rate fluctuations). As a result, periods of data stalling can be reduced, and re-buffering delay in streamed content such as video is reduced. Furthermore, during periods, or at network locations, with spare capacity, data may be pre-fetched to reduce data traffic requirements during high demand periods or at network locations that lack capacity. This inherently improves QoE during high demand periods or at low bandwidth network locations as less data traffic is demanded at that time, reducing network congestion.
In an implementation, systems and methods for pre-fetching data are provided that adaptively retrieve data to ANs and/or UEs before actual requests based on at least one of:
In an aspect, the systems and methods may include RAN pre-fetching without an application layer in the RAN. In the aspect, data pre-fetching requests may be initiated by a UE. Conveniently, RAN caching nodes may cache pre-fetched data that has not yet been requested by (or sent to) the UE, in anticipation of future UE data demands. The RAN caching helps smooth the backhaul traffic, as well as radio link traffic, as content may be delivered to the UE in advance of anticipated demand to avoid dropping streamed data when a radio link becomes congested.
In an implementation, a system may include interaction between network entities such as Access and Mobility Management Function (AMF), Traffic Engineering Function (TEF), Database (DB), and Session Management Function (SMF), and pre-fetching-related functionality enacted by those entities. For instance, in an aspect, the entities may be operative to:
In an implementation, the TEF can be integrated into SMF, and can be physically implemented in the same computer, or the same data centre.
In an implementation, the AMF and SMF can be integrated in a single function.
In some embodiments, at least one of user subscription information and historical data may be stored in a standalone network function. In some such embodiments, the network function storing such data (and in some embodiments acting as the DB) may be one of a Unified Data Management (UDM) function and a Unified Data Repository (UDR) function. In some embodiments (which may overlap with the embodiments described above), policies may be stored and maintained in a Policy Control Function (PCF) or other such network function.
While the present application refers to a User Plane Function (UPF), in some implementations similar functionality may be provided by a Packet Gateway (PGW).
In an implementation, a signalling protocol is provided to support an adaptive pre-fetching method. The protocol permits content to be pulled by the UE/RAN from the transmitting end point (e.g. content server). In some aspects, the protocol may permit the content to be pulled by the UE/RAN from other RANs where the data is available. In an aspect, the protocol permits content to be pushed by the transmitting end point (e.g. content server) to the UE or RAN (or by the RAN to the UE/other RANs), based upon the instructions/assistance provided by the SMF. In an aspect, the signalling protocol may be implemented to assist with consumption of streamed video data. In an aspect, the signalling protocol may be extended to other services, such as web and FTP.
In an implementation, a pre-fetching system and method is provided that enables data pre-fetching based upon evaluating at least one network resource metric. In an aspect, the pre-fetching system and method is adaptive and determines whether to pre-fetch data based upon, at least in part, current or historical network resource patterns. In an aspect, the at least one network resource metric comprises a future predicted network resource metric. In an aspect, the at least one network resource metric comprises a historical resource metric or pattern of metrics. In an aspect, the at least one network resource metric comprises a combination of a future predicted network resource metric with a historical resource metric or pattern of metrics.
In an implementation, a pre-fetching system and method is provided that enables data pre-fetching to the edge of a network.
In an aspect, the edge comprises a buffer/cache located at an AN. In an aspect, the AN is selected to store pre-fetched data in advance of a UE connecting to that AN.
In an aspect, the edge comprises a buffer/cache located on the UE. In an aspect, the UE receives and stores the pre-fetched data in advance of its need to consume the data. In an aspect, the UE receives and stores the pre-fetched data based upon instructions received from a network entity. In an aspect, the UE receives and stores the pre-fetched data based upon recommendations received from a network entity and one or more UE-specific conditions. In an aspect, the UE-specific condition may comprise a user selection. In an aspect, the UE-specific condition may comprise a UE buffer status. In an aspect, the network entity evaluates at least one network resource metric to determine that pre-fetching is desirable. In an aspect, the at least one network resource metric may comprise a predicted future network condition. The predicted future network condition may comprise a bandwidth of a data link predicted to connect the UE to the network in the future. The predicted future network condition may comprise a predicted bandwidth of a backhaul link connected to an AN that is predicted to connect the UE to the network in the future.
A pre-fetching network entity, not illustrated in
The pre-fetching network entity also takes as input information including the UE cache status 14, the Quality of Experience (QoE) of the user of the UE 5, current network status, and predicted future network status. The network status may include, for instance, user mobility, network loads, predicted network loads, network costs, and available network resources. In some aspects, the network status may further take into consideration predicted future locations of the UE 5 at future time periods, and the corresponding network resources available to serve the UE 5 at those predicted future locations.
In an aspect, the information may be pulled by the UE 5 or the RAN. In an aspect, the information may be pushed by the transmitting end point to the UE 5 and/or the RAN.
Based on the predicted future data consumption estimate, and the available/predicted network resources, the pre-fetching network entity makes a pre-fetching decision whether or not to pre-fetch data to one or more predicted caching location ANs 15 for caching on the cache associated with each of the one or more predicted caching location ANs 15. In
A network resource metric is a quantifiable value that, alone or in combination, expresses whether a location is a low service location 22. The network resource metric may be applied to a current network condition, or may provide an estimate of a future network condition as may be predicted based upon a combination of current network conditions and historical network conditions. As an example, time-based data traffic patterns may predict network congestion at certain times of day in certain locations. These predictions may be used to determine whether or not to pre-fetch data to avoid a future low service location 22. As another example, historical mobility patterns may indicate that UEs 5 travelling along a certain path experience a low service location 22 at a particular point in the path (for instance travelling through poor coverage, or travelling into a tunnel). A UE 5 that is determined to be travelling along a current path that matches the historical mobility pattern may be predicted to be entering into a low service location 22 when it reaches the same point along the path.
Network resource metrics may constitute any measurable value that is shown through modelling to be correlated to low service events. For instance, a general QoE value assigned to a location may identify a low service location 22 if it falls below a threshold value. The QoE value may show a periodicity or predictability based upon an evaluation of historical data. Other examples of network resource metrics may include, for instance, a buffer status, a mobility pattern (e.g. following a train line), a time of day at a specific location, a current network congestion measure, a current network resource measure, and a combination of current metrics and historical metric patterns.
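As a hedged sketch of how such metrics might be combined, the following blends a current QoE measure with a time-of-day historical pattern; the blend weights and threshold are assumptions for illustration only:

```python
# Illustrative scoring of a location as a low service location 22 from a
# mix of a current metric and a historical time-of-day pattern.
def is_low_service(qoe_now, qoe_history_by_hour, hour,
                   congestion, qoe_floor=2.5):
    """Return True when the blended QoE falls below a threshold or when
    current congestion is severe. Weights are assumed, not specified."""
    expected_qoe = qoe_history_by_hour.get(hour, qoe_now)  # historical pattern
    blended = 0.6 * qoe_now + 0.4 * expected_qoe
    return blended < qoe_floor or congestion > 0.9
```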
The UE 5 is predicted to be served by a low-bandwidth connection 25 in the predicted low service location 22, and accordingly would not be able to receive sufficient data to maintain the user's QoE. The low-bandwidth connection 25 is by way of example only, and the predicted low service location 22 may result from a network condition such as a lack of network resource availability to the predicted low service location 22, a higher cost connection, network congestion, or other network condition that may reduce the user's QoE.
With the determination that the UE 5 is likely moving to the predicted low service location 22 and would be unable to maintain the user's QoE at the predicted low service location 22, the pre-fetching network entity evaluates the predicted path and pre-fetches data to one or more predicted caching ANs 15 at one or more predicted caching locations 17. The pre-fetching network entity selects between the one or more predicted caching locations 17 based upon a likelihood that the UE 5 will connect to the RAN through each of the corresponding one or more predicted caching location ANs 15 as well as the network resources available to serve the UE 5 at that predicted caching location 17. In some aspects, the one or more predicted caching locations 17 may each receive a same set of pre-fetched data; the pre-fetched data can be uncoded or encoded. If the pre-fetched data is encoded, for example using a fountain code, the coded data in each caching location can be encoded differently so that the UE can later retrieve portions of the coded data from different caches to recover the pre-fetched data. In some aspects, different sets of pre-fetched data may be pre-fetched at the one or more predicted caching locations 17, for instance where the pre-fetching is likely to occur over multiple predicted caching ANs 15.
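A minimal sketch of the coded-caching idea follows, using random XOR combinations in the style of a fountain code; per-cache seeds stand in for the "encoded differently" property, and decoding (e.g. Gaussian elimination over GF(2)) is omitted. This is an assumption-laden illustration, not the claimed encoding:

```python
# Fountain-style encoding sketch: each caching location uses a different
# seed, so a UE can combine packets drawn from several caches to recover
# the original segments. Segments are assumed to be of equal length.
import random

def encode_for_cache(segments, n_packets, cache_seed):
    rng = random.Random(cache_seed)  # distinct seed per caching location
    packets = []
    for _ in range(n_packets):
        degree = rng.randint(1, len(segments))
        chosen = rng.sample(range(len(segments)), degree)
        payload = bytes(len(segments[0]))  # all-zero buffer
        for i in chosen:
            payload = bytes(a ^ b for a, b in zip(payload, segments[i]))
        packets.append((chosen, payload))  # indices let a decoder rebuild
    return packets
```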
In next generation networks the pre-fetching network entity may comprise the coordinated action of multiple network entities. For example, in proposed 5G networks, the pre-fetching network entity may comprise coordinated action between an Access and Mobility Management Function (AMF), a Session Management Function (SMF), and a Traffic Engineering Function (TEF). In some aspects, the SMF may comprise a part or all of the functions of the TEF.
The AMF 210 maintains a mobility context of each UE 5 connecting to the RAN, and predicts future mobility patterns, locations and handovers between ANs. The SMF 220 interacts with the UE 5 and the transmitting endpoints, such as the DASH server 233, to coordinate data exchanges such as video segment transmissions from the transmitting endpoint to the UE 5 to maintain the communication session. The TEF 215 interacts with the various network resources, the AMF 210, and the SMF 220 to predict future data transmission rates from the transmitting end point to each UE 5. The data transmission rates may be segmented by backhaul and wireless connections. The users' DB 225 maintains historic information of data consumption by UEs 5 and network traffic and resource availability. The historic information may be pushed from the users' DB 225 to the network entities, or may be pulled from the users' DB 225 by the network entities on demand as required.
It will be understood that the AMF 210 may, in addition to predicting future mobility patterns, provide predictions of future UE locations, handovers and mobility patterns as a service within the network. Other network functions, and possibly entities outside the core network, or outside the core network control plane, may subscribe to such services, or may request such services through a service based interface of the AMF. In one example, the service based interface used for these requests may be the Namf interface. In other embodiments, the TEF 215 may interact with the AMF 210 and SMF 220 to participate in providing predicted future data transmission rates as a network service. The predictions of future data transmission rates may be provided on a per-UE basis for UEs specified by the entity requesting the service. In other embodiments, predictions may be provided for a group of UEs that share a common characteristic (e.g. a set of UEs that have a common mobility pattern, for example a group of UEs that are all on the same train).
The pre-fetching operations may be driven by different entities in the network. For instance, the pre-fetching may be a UE-driven pre-fetching operation, or alternatively the pre-fetching may be a RAN-driven pre-fetching operation.
In the case of a UE-driven pre-fetching operation (Option 1: O1), 8 UE pre-fetching schemes may be provided, for example:
In the case of a RAN-driven pre-fetching operation (Option 2: O2), 8 RAN pre-fetching schemes may be provided, for example:
Pre-fetching Mechanisms may include, for instance:
With reference to the above discussion, in some embodiments of such a method and system, the RAN nodes and the AMF 210 may send the network status information to the TEF 215 in response to a request (or a configuration set in response to receipt of a request) from other functions. In some embodiments, examples of other functions may include an SMF, the AMF, and another AMF. These other network functions (or functional entities) can request these services (e.g. any or all of the different types of predictions) from the AMF, and in some embodiments may do so using a service based interface such as the Namf interface.
In some embodiments, the DB may send information automatically (upon detection of a trigger) or periodically. The manner in which the DB sends information may be determined in accordance with a request that initialized the reporting. The DB may also send information in response to receipt of a request (e.g. a one-off request). Requests from other network entities for a service from the DB may be sent through a service based interface such as Nudm.
The TEF 215 may be involved in the generation of data for estimates of future resource availability (e.g. estimated future data rates). The TEF 215 may generate these estimates on its own, or it may provide information to another network function for use in the generation of such an estimate. In some embodiments, the TEF 215 may generate the estimates in response to a request from a network function such as SMF 220. The request for services such as prediction or the generation of an estimate may be received by TEF 215 over a service based interface such as Ntef.
In the above discussion, references to video segment lengths may, in some embodiments, represent a duration during which UE 5 is expected to experience any or all of a service interruption, a reduced transfer rate, and a high (or elevated) service cost. In some embodiments the video segment length may be a function of such a duration but not the duration itself. In some embodiments, the pre-fetching message sent from the SMF to the UE may include a duration during which the UE may experience at least one of high cost and low throughput, allowing the UE to determine how many video segments should be pre-fetched.
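Under that reading, the number of segments to pre-fetch follows directly from the advertised duration. A one-line sketch, with both parameters assumed to be carried in the pre-fetching message:

```python
# Sketch: segments to pre-fetch to ride out an advertised outage/high-cost
# window. Both inputs are assumed fields of the pre-fetching message.
import math

def segments_to_prefetch(outage_duration_s, segment_duration_s):
    return math.ceil(outage_duration_s / segment_duration_s)
```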
If the UE 5 determines that it needs to continue consuming content, then in step 415 the UE 5 further determines whether it is available to pre-fetch data. The determination may include, for instance, evaluating a current state of the UE buffer to determine whether it is already full, evaluating the current content being downloaded to determine whether all segments have been downloaded, and prompting the user to provide user input indicating whether to accept or reject the pre-fetched data. If the UE 5 determines in step 416 that it is not available to pre-fetch data, then in step 417 the UE 5 sends a negative acknowledgement to the SMF 220. If the UE 5 determines that it is available to pre-fetch data, then in step 420 the UE 5 sends an acknowledgement to the SMF 220 confirming that it is able to pre-fetch data. In step 430 the UE 5, or a client operating on the UE 5 such as a DASH client, selects the content to pre-fetch (e.g. the number of video segments to pre-fetch, and specific video segments with a suitable data rate and video quality), and sends a pre-fetching data request to the transmitting end point identifying the content to be pre-fetched. For example, in the case of video content the UE 5 will send to the transmitting end point a pre-fetching data request identifying the video segments to be pre-fetched.
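A hedged sketch of this UE-side handling (steps 415 to 430) is given below; the object interfaces and message names are placeholders invented for the illustration:

```python
# Illustrative UE-side handling of a pre-fetching request (steps 415-430).
def handle_prefetch_request(buffer_full, all_segments_downloaded,
                            user_accepts, smf, dash_client, server):
    # Step 415/416: determine whether the UE is available to pre-fetch.
    if buffer_full or all_segments_downloaded or not user_accepts:
        smf.send("PREFETCH_NACK")     # step 417: negative acknowledgement
        return
    smf.send("PREFETCH_ACK")          # step 420: confirm ability to pre-fetch
    # Step 430: the DASH client picks segment count, data rate and quality,
    # then identifies those segments to the transmitting end point.
    segments = dash_client.select_prefetch_segments()
    server.send("PREFETCH_DATA_REQUEST", segments)
```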
Referring to
In some embodiments, the SMF may determine a set of potential serving RAN nodes. The nodes within this set may be any of the current serving AN nodes, handover target AN nodes, or AN nodes along a path associated with the mobility pattern of the UE. The SMF may then setup one or more UP data paths (including UPFs) to the RAN nodes in the set of potential serving RAN nodes. This can be done to facilitate the delivery of data requested by the UE in response to receipt of the pre-fetching message.
In step 510 TEF 215 receives network reports from the RAN. In step 515 the users' DB 225 provides user data to the AMF 210, and in step 520 the users' DB 225 provides user data to TEF 215. In step 525, based at least in part upon the received user data, the AMF 210 transmits a mobility report to the TEF 215. In step 530 the UE 5 may transmit a QoE report, including UE buffer status, to the SMF 220. Based on the network report, user data, and mobility report, in step 535 TEF 215 transmits a data rate prediction to the SMF 220. The SMF 220 evaluates the QoE report and the data rate prediction, and may determine that pre-fetching is available and may be useful. With the determination, in step 540 the SMF 220 transmits a pre-fetching request to the UE 5. The UE 5 evaluates the pre-fetching request, and if it determines to pre-fetch, in step 545 the UE 5 transmits a pre-fetching acknowledgement to the SMF 220. In step 550 the SMF 220 transmits pre-fetching information to the UE 5, such as the data rate and length of the video segments (e.g. a video segment comprises N seconds of video data). In step 555 the SMF 220 transmits a data rate request to the UPF 232. In step 560 the SMF 220 transmits a data rate request to the RAN. In step 565 the UE 5 transmits a pre-fetching data request (e.g. the DASH HTTP request in
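The SMF-side decision in steps 530 to 550 might be sketched as follows; the thresholds, report fields, and message names are assumptions for illustration, not the signalling defined by the specification:

```python
# Illustrative SMF logic: combine the UE's QoE report (step 530) with the
# TEF data rate prediction (step 535) before requesting pre-fetching.
def smf_prefetch_decision(qoe_report, rate_prediction, ue, min_rate_mbps=2.0):
    low_rate_ahead = any(r < min_rate_mbps for r in rate_prediction.future_rates)
    buffer_headroom_s = qoe_report.buffer_capacity_s - qoe_report.buffer_level_s
    if low_rate_ahead and buffer_headroom_s > 0:
        ack = ue.request("PREFETCH_REQUEST")       # steps 540 and 545
        if ack:
            ue.send("PREFETCH_INFO",               # step 550
                    rates=rate_prediction.future_rates,
                    segment_length_s=qoe_report.segment_length_s)
```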
In some embodiments, the AMF 210 may send the mobility report based on the requests received from other functions (such as the SMF 220). These requests may be sent to the AMF 210 through a service based interface such as Namf. In some embodiments, the predictions of future data rates may be provided to the SMF 220 by the TEF, and may be provided in accordance with requests sent to the TEF by other network functions such as the SMF, through a service based interface.
The user display interface 600 may present a user selection message to a user of the UE 5, such as a pop-up message as indicated in
The SMF 220, or a network entity in communication with the SMF 220, may assess the QoE report provided by the UE 5 in combination with an evaluation of available and predicted network resources (i.e. network status including current and predicted channel conditions and current and predicted backhaul conditions) to produce a pre-fetching decision. The pre-fetching decision may be directed to maximize network utility, backhaul utilization, spectrum utilization, and QoE. As an output, the pre-fetching decision may determine network resource allocation, identify one or more pre-fetching locations, and set the size of the pre-fetched data at each location.
In step 710 TEF 215 receives network reports from the RAN. In step 715 the users' DB 225 provides user data to the AMF 210, and in step 720 the users' DB 225 provides user data to TEF 215. In step 725, based at least in part upon the received user data, the AMF 210 transmits a mobility report to the TEF 215. In step 730 the UE 5 may transmit a QoE report, including UE buffer status, to the SMF 220. Based on the network report, user data, and mobility report, in step 735 TEF 215 transmits a data rate prediction to the SMF 220. The SMF 220 evaluates the QoE report and the data rate prediction, and may determine that pre-fetching is available and may be useful. In step 740 the SMF 220 transmits to the UE 5 a pre-fetching message comprising a pre-fetching request and including the pre-fetching information, such as the data rate and length of the video segments (e.g. a video segment comprises N seconds of video data). In step 745 the UE 5 generates and transmits a pre-fetching acknowledgement to the SMF 220 based on the received pre-fetching message. In step 750 the SMF 220 transmits new UP path setup information to the UPF 232. In step 755, the SMF 220 transmits the new UP path setup information to the RAN. The new UP path setup may include, for instance, the AN node of the RAN to receive the pre-fetched data as well as the new UP path to the selected node. In step 760, the SMF 220 transmits a data rate request to the UPF 232. In step 765 the SMF 220 transmits a data rate request to the RAN. The data rate requests may include, for instance, an updated QoS policy in the form of QoS modify messages to set new QoS parameters of the pre-fetched data. In the case of video, the new QoS parameters may include, for instance, video flow parameters including maximum bitrate (MBR) for enforcement by QoS enforcement functions operative on the network. In step 770 the SMF 220 transmits a cache preparation request to the selected node of the RAN. In step 775 the UE 5 transmits a pre-fetching data request (e.g. the DASH HTTP request in
A simulation was conducted to illustrate the operation of data pre-fetching. In the example, the simulation included the following parameters:
Flow level simulation
The decision to implement pre-fetching may be based upon a number of factors. By way of example, a decision algorithm may be used to allocate backhaul links, spectrum, caching, and buffers based upon current and/or predicted network resource availability. For example, a decision algorithm may be defined as:
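The algorithm itself is not reproduced here; as one hedged illustration, a greedy pass that allocates backhaul and cache budgets to candidate pre-fetch jobs in order of predicted QoE gain might look like the following, where the job fields and budgets are assumed inputs:

```python
# Illustrative greedy allocation of backhaul and cache to pre-fetch jobs.
# Job attributes (size, predicted QoE gain) are assumed for the example.
def allocate(jobs, backhaul_budget_mb, cache_budget_mb):
    plan = []
    for job in sorted(jobs, key=lambda j: j.qoe_gain, reverse=True):
        if job.size_mb <= backhaul_budget_mb and job.size_mb <= cache_budget_mb:
            plan.append(job)              # schedule this pre-fetch
            backhaul_budget_mb -= job.size_mb
            cache_budget_mb -= job.size_mb
    return plan
```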
The buffer and cache status at a particular time T may be determined by tracking the current buffer/cache status, and including a predicted buffer/cache data input and a predicted buffer/cache data output to produce a predicted future buffer/cache status. For instance:
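One plausible form of this recursion, with the input and output terms denoted $\hat{a}_i$ and $\hat{s}_i$ (notation assumed for illustration, consistent with the constraints below):

$$b_i[t+T] = b_i[t] + \hat{a}_i[t,\,t+T] - \hat{s}_i[t,\,t+T]$$

where $b_i[t]$ is the buffer/cache status at node $i$ at time $t$, $\hat{a}_i[t,\,t+T]$ is the predicted data input over the window, and $\hat{s}_i[t,\,t+T]$ is the predicted data output over the window.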
Referring to
The objective of the pre-fetching decision is to maximize overall network utility, for instance to maximize throughput, streamed video quality (e.g. Mean Opinion Score—MOS), and to minimize stalling or network congestion. Infrastructure limitations may be modelled, for instance as:
$$b_i[t+T]\,p_i \le B_i, \qquad b_i[t+2T]\,p_i \le B_i, \qquad c_i[t+2T] \ge C_i, \qquad b_i[t+2T]\,p_i \ge B_i$$
The resulting problem may be expressed as a convex (linear) problem that may be solved in polynomial time.
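As a generic illustration (the coefficients below are invented and do not come from the specification), such a linear program can be handed to an off-the-shelf solver:

```python
# Illustrative only: a small linear program of the kind described above,
# solved with SciPy. Utility weights and resource budgets are made up.
import numpy as np
from scipy.optimize import linprog

u = np.array([3.0, 2.0, 1.5])        # assumed per-flow utility weights
A_ub = np.array([[1.0, 1.0, 1.0],    # shared backhaul budget (MB)
                 [2.0, 1.0, 0.5]])   # shared spectrum budget (MB-equivalent)
b_ub = np.array([10.0, 12.0])

# Maximize u.x by minimizing -u.x subject to the resource caps.
res = linprog(c=-u, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 3, method="highs")
print(res.x)  # pre-fetched data allocated to each flow
```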
Based on the foregoing description, it may be appreciated that aspects of the present invention may provide at least some of the following features:
Although the present invention has been described with reference to specific features and embodiments thereof, it is evident that various modifications and combinations can be made thereto without departing from the invention. The specification and drawings are, accordingly, to be regarded simply as an illustration, with the scope of the invention defined by the appended claims, which are contemplated to cover any and all modifications, variations, combinations or equivalents that fall within their scope.
This application is based on, and claims benefit of, U.S. Provisional Patent Application Ser. No. 62/434,932 filed Dec. 15, 2016, the contents of which are incorporated herein by reference, inclusive of all filed appendices.