The present application relates to the field of communication technology, and specifically to a joint recommendation and cache optimization method based on a collaboration of multiple base stations.
In recent years, with the booming mobile internet and the widespread use of mobile devices, traffic in mobile networks has grown exponentially. This explosive traffic growth increases the burden on backhaul links, causing network congestion and seriously degrading user quality of service (QoS), problems that can be effectively alleviated by mobile edge caching (MEC) technology. By caching some files at the edge of the network, such as in small base stations and mobile device terminals, users can obtain the files nearby, which reduces the delay of obtaining the requested content and eases the load on the backhaul link. Therefore, mobile edge caching has become a hot research topic in the field of information and communication.
The key to mobile edge caching technology is how to cache files in devices at the edge of the network. In addition, a user behavior reshaping mechanism can be introduced to improve the performance of mobile edge caching. In recent years, recommendation mechanisms have received increasing attention because of their ability to reshape user request behavior patterns. A recommendation mechanism can be introduced into caching techniques to recommend appropriate cached content to users, which increases the probability that users request the recommended content and thus improves the cache hit rate.
The main objective of the present application is to provide a joint recommendation and cache optimization method based on a collaboration of multiple base stations, which aims to minimize the total transmission delay of the system.
In order to achieve the above objective, the present application provides the following features:
A joint recommendation and cache optimization method based on a collaboration of multiple base stations, wherein each base station in a target area randomly caches contents stored in the cloud, and when each base station in the target area receives a to-be-responded content request, the method comprises:
In an embodiment, the method further includes: based on the access optimization sub-model, the cache optimization sub-model and the recommendation optimization sub-model, when receiving the to-be-responded content request, responding, by the base station, to the to-be-responded content request, and obtaining the recommendation list of cached contents corresponding to the to-be-responded content request, according to an equation below:
calculating a time delay for the to-be-responded content request u connecting with the base station j to obtain the content f in the recommendation list of the cached contents, where cfj is a cache variable, Lf is a size of the content f, vuj is a transmission rate between the to-be-responded content request u and the base station j, and u(·) is a step function satisfying
vs is a transmission rate of a wired link between the base stations, and vc is a rate of the base station downloading files from the cloud;
based on the time delay for the to-be-responded content request connecting with the base station to obtain the content in the recommendation list of cached contents, constructing the optimization problem according to a formula below:
where P1 is the optimization problem, C1 limits each to-be-responded content request to access only one base station, C2 limits a range of base stations that can be accessed by the to-be-responded content request, C3 constrains a sum of sizes of the cached files in each base station not to exceed a maximum cache capacity of the base station, C4 limits each cached content in the recommendation list of the cached contents to be cached in at most one base station, C5 constrains a size of the recommendation list of each to-be-responded content request, and C6, C7 and C8 indicate that the recommendation, access and cache variables are all binary 0-1 variables, where l is an access decision variable, J is a number of base stations, F is a total number of contents stored in the cloud, U is a total number of to-be-responded content requests, r is a recommendation decision variable, c is a cache decision variable, and Ru is a size of the recommendation list of the to-be-responded content request u.
In an embodiment, the optimization problem constructed in step C comprises a recommendation sub-problem, an access sub-problem, and a cache sub-problem, and the recommendation sub-problem is obtained according to a formula below:
In an embodiment, solving the recommendation sub-problem, based on the recommendation optimization sub-model, by using the optimization method of simulated annealing includes:
In an embodiment, solving the access sub-problem, based on the access optimization sub-model, by using an optimization method of a coalition game includes:
In an embodiment, solving the cache sub-problem based on the cache optimization sub-model by using a cache optimization method of an improved greedy algorithm includes:
In an embodiment, the method includes: performing an iterative loop, by the access optimization sub-model, the cache optimization sub-model and the recommendation optimization sub-model, based on their corresponding optimization sub-problems and a number of iterations N;
after N iterations, obtaining the target base station, the optimal cache contents and the recommendation list of the to-be-responded content request, and responding to the to-be-responded content request.
Compared with the prior art, the joint recommendation and cache optimization method based on a collaboration of multiple base stations in the present application can bring the following technical effects.
1. The method of the present application aims at minimizing the total transmission delay of the system, and can bring about a significant reduction in delay with a lower cache capacity by jointly optimizing recommendation decision, access decision and cache decision.
2. Most studies start from the global popularity of content and assume that content request preferences are homogeneous, ignoring that content request preferences vary from person to person, such that the caching performance of the system is reduced. Therefore, in order to improve the performance of mobile edge caching, a recommendation mechanism is introduced to reshape the request behavior patterns of users. By recommending appropriate cached content, the probability of requesting for the recommendation content is increased, to improve the cache hit rate.
3. To solve the joint optimization problem in the present application, it is decoupled into three sub-problems, namely, the recommendation optimization sub-problem, the access optimization sub-problem, and the cache optimization sub-problem. The recommendation optimization sub-problem is solved by using the recommendation decision optimization method based on the simulated annealing algorithm, and the range of files in the recommendation list is restricted by defining the files of interest to the user, in order to take the user experience into account and avoid the recommendation mechanism causing user resentment. The user access sub-problem is solved by using the method based on the coalition game to obtain the access decision. The cache optimization sub-problem is solved by using the cache optimization method based on the improved greedy algorithm to obtain the cache decision.
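The simulated-annealing search mentioned above can be sketched as a generic loop. This is an illustrative sketch only: the objective, neighborhood move, and cooling schedule below are placeholders, not the exact procedure of the application.

```python
import math
import random

def simulated_annealing(initial, neighbor, cost, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated-annealing loop: always accept an improving
    neighbor, and accept a worse one with probability exp(-delta/T),
    so the search can escape local optima of the objective."""
    rng = random.Random(seed)
    state = best = initial
    temperature = t0
    for _ in range(steps):
        candidate = neighbor(state, rng)
        delta = cost(candidate) - cost(state)
        if delta < 0 or rng.random() < math.exp(-delta / temperature):
            state = candidate
            if cost(state) < cost(best):
                best = state
        temperature *= cooling  # geometric cooling schedule
    return best
```

In the recommendation sub-problem, `initial` would be a feasible recommendation decision, `neighbor` would swap a file in and out of a user's list while respecting the list-size constraint, and `cost` would be the total transmission delay.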
In order to better understand the technical aspects of the present application, specific embodiments are given and illustrated below with the drawings.
Aspects of the present application are described herein with reference to the drawings, in which a number of illustrated embodiments are shown. Embodiments of the present application are not necessarily intended to include all aspects of the present application. It should be understood that the multiple ideas and embodiments described above, and those described in more detail below, can be implemented in any one of many ways, because the ideas and embodiments disclosed in the present application are not limited to any one embodiment. In addition, some aspects of the present application may be used alone, or in any appropriate combination with other aspects of the present application.
Referring to
The joint recommendation and cache optimization method in the collaboration scenario of multiple base stations in the present application assumes files of different sizes, and minimizes the total transmission delay of the system by jointly optimizing the recommendation, the user access, and the cache decisions under the constraints of the size of the recommendation list, the cache capacity, and the channel bandwidth.
The mobile edge caching network model of the present application is shown in
puf indicates the preference of the user u for the file f, that is, the probability that the to-be-responded content request u sent by the user requests the cached content corresponding to the file f, without the introduction of a recommendation mechanism.
The individual model mechanisms involved in the present application are as follows.
The user access and caching mechanism:
Combined with the content in step A, when a signal-to-noise ratio of the to-be-responded content request sent by the user to the base station is greater than a threshold value, the user can access the base station; otherwise, the user cannot access the base station. The set of base stations accessible to the user u is Ju, i.e., the target base stations. The threshold value of the signal-to-noise ratio is SINRthreshold; if the user u accesses the base station j, then its received signal-to-noise ratio SINRu,j is required to meet the following condition:
SINRu,j ≥ SINRthreshold, u ∈ U, j ∈ Ju.
The access variable of the user u and the base station j is luj ∈ {0,1}; if luj = 1, it means that the user u is connected to the base station j, and the global user access decision is
Considering that a user may be covered by more than one base station, but can only access one base station, there exists a constraint
the number of users accessing the base station j is denoted kj:
Since the cache capacity of each base station is limited, there exists
where Cj is the cache capacity of the base station j (unit: KB), Lƒ is the size of the file ƒ (unit: KB), cƒj is the cache variable, cƒj ∈ {0,1}; if cƒj = 1, then the file ƒ is cached in the base station j, and the global cache decision of all base stations is c = (cƒj), j ∈ J, ƒ ∈ F. In order to avoid the redundancy caused by multiple base stations caching the same file in the system, each file is limited to be cached in at most one base station, so there exists the constraint:
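The two caching constraints described above (per-base-station capacity and each file cached in at most one base station) can be checked on a binary cache matrix. This is a minimal sketch with illustrative names, not code from the application:

```python
# Hypothetical feasibility check for a cache decision c[j][f], where
# c is a J x F binary matrix, sizes[f] is the size of file f (KB),
# and capacities[j] is the cache capacity of base station j (KB).

def cache_feasible(c, sizes, capacities):
    J, F = len(c), len(sizes)
    # Capacity constraint: cached file sizes must fit in each base station.
    for j in range(J):
        if sum(sizes[f] * c[j][f] for f in range(F)) > capacities[j]:
            return False
    # Redundancy constraint: each file cached in at most one base station.
    for f in range(F):
        if sum(c[j][f] for j in range(J)) > 1:
            return False
    return True
```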
The user preference and recommendation mechanism:
puf is the user’s preference for a file ƒ and satisfies the normalization condition that the preferences over all files sum to one. A recommendation mechanism introduced in the cache can reshape the probability distribution of the user requesting files, so that the user’s requests for files are influenced by both the original preference and the recommended content. For the user u ∈ U and the file ƒ ∈ F, the recommendation variable is ruƒ ∈ {0,1}; if ruƒ = 1, then the file ƒ is recommended to the user u, and the global recommendation decision variable is r = (ruƒ), ƒ ∈ F, u ∈ U.
Considering that the screen size is limited when the user uses mobile devices, the number of files recommended to each user is constrained by
where Ru is the size of the recommendation list of the user u, and the list of files recommended to the user is Ru, ||Ru|| = Ru.
After the recommendation mechanism is introduced, the probability of requesting a file ƒ ∈ Ru in the recommendation list when the user u accepts the recommendation is
Similarly, when the user u does not accept the recommendation, the probability of requesting a file ƒ ∈ F \ Ru not in the recommendation list is
Finally, the probability of the user u ∈ U requesting a file ƒ ∈ F is
where Δu ∈ (0,1) is the probability that the user accepts the recommendation.
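One common way to model this reshaped request distribution is to renormalize the original preferences inside and outside the recommendation list and weight the two parts by Δu and 1 − Δu. This is a sketch of that modeling choice with illustrative names; the exact normalization used by the application's equations is not reproduced here:

```python
def reshaped_probability(pref, rec_list, delta):
    """pref: dict file -> original preference p_uf (sums to 1);
    rec_list: set of recommended files R_u, assumed to be a proper,
    non-empty subset of the catalogue; delta: acceptance probability.
    Returns the reshaped request distribution over all files."""
    in_mass = sum(pref[f] for f in rec_list)
    out_mass = 1.0 - in_mass
    reshaped = {}
    for f, p in pref.items():
        if f in rec_list:
            reshaped[f] = delta * (p / in_mass)        # accepted: renormalized over R_u
        else:
            reshaped[f] = (1 - delta) * (p / out_mass) # rejected: renormalized over F \ R_u
    return reshaped
```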
The probability Δu of the user accepting the recommendation is introduced because not all users are willing to accept recommendations, different users have different acceptance of the recommendation mechanism, and the probability is closely related to the recommended files. For example, if all the files in the recommendation list are of interest to the user, then the acceptance probability of the recommendation mechanism is high, and the probability of the user requesting the files in the list is higher; otherwise, if there is content in the recommendation list that is not of interest to the user, then the user’s request probability due to the recommendation mechanism is reduced, and the user prefers to access files outside the recommendation list, which reduces the cache hit rate of the system. In the present application, the files of interest to the user u refer to the λu · F files ranked at the top of the user’s preferences, where λu indicates the percentage of interest of the user u, that is, the proportion of the files of interest to the user among all files, F is the total number of files, and λu ∈ (0,1], u ∈ U.
Ruint denotes the set of files that are of interest to the user u and appear in the recommendation list, where rank(ƒ) is the rank of the file ƒ in the user’s preference over all files. The number of files in the set is ||Ruint||. The expression for calculating the probability of the user u accepting the recommendation mechanism in the embodiment is the proportion of the files of interest in the recommendation list, that is, Δu = ||Ruint|| / Ru, where Ru is the size of the recommendation list of the user u.
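This acceptance model can be sketched as follows, assuming (as the surrounding text suggests) that Δu equals the fraction of recommended files falling within the user's top λu · F preferred files. The function and variable names are illustrative:

```python
def acceptance_probability(rank, rec_list, lam, total_files):
    """rank: dict mapping file -> preference rank (1 = most preferred);
    rec_list: files in the recommendation list; lam: interest percentage
    lambda_u in (0, 1]; total_files: total number of files F.
    A file is 'of interest' if its rank is within the top lam * F."""
    threshold = lam * total_files
    interesting = sum(1 for f in rec_list if rank[f] <= threshold)
    return interesting / len(rec_list)
```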
In the present application, frequency division multiplexing is used within each base station, and all base stations multiplex the same frequency band, so that there is interference between cells and no interference within cells. It is assumed that any base station can obtain all channel state information. For the user u, the received signal-to-noise ratio of the received signal from the base station
hu,j is a channel gain from the base station j to the user u, Pj is a transmit power of the base station j, the noise is zero-mean Gaussian white noise, and the variance is
The channel fading model of the present application uses the COST231-Hata model, and the channel gain hu,j of the base station j transmitting a signal to the user u is calculated as follows: hu,j = 46.3 + 33.9lg(ƒ) − 13.82lg(hh) − a(hm) + (44.9 − 6.55lg(hh))lg(duj) + Cm,
where ƒ is the base station operating frequency (unit: MHz), hh is an effective height of the base station antenna (unit: m), hm is a user terminal antenna height (unit: m), a(hm) is a user terminal height factor, duj is a distance between the user u and the base station j (unit: km), and Cm is the ground fading correction value (unit: dB). The user terminal height factor is calculated as follows: a(hm) = 8.29[lg(1.54hm)]² − 1.1.
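The path-loss formula above can be sketched directly; the parameter values in the usage note are illustrative, and the correction value `c_m` is an assumption (e.g. around 3 dB for metropolitan centres in the standard COST231-Hata model):

```python
import math

def cost231_hata_loss(f_mhz, h_bs, h_ms, d_km, c_m=3.0):
    """COST231-Hata loss (dB) as in the text's channel model.
    f_mhz: carrier frequency (MHz); h_bs: base-station antenna height (m);
    h_ms: terminal antenna height (m); d_km: distance (km);
    c_m: ground fading correction value (dB, assumed here)."""
    a_hm = 8.29 * (math.log10(1.54 * h_ms)) ** 2 - 1.1  # terminal height factor
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_bs) - a_hm
            + (44.9 - 6.55 * math.log10(h_bs)) * math.log10(d_km)
            + c_m)
```

Loss increases with both carrier frequency and distance, which is why nearby caching shortens effective download paths.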
Assuming that all base stations multiplex the same bandwidth B, each base station distributes the bandwidth resources evenly to the connected users. Since the number of users connected to the base station j is kj, the bandwidth obtained by each user is
and the information transmission rate between the user u and the base station j ∈ Ju can be obtained according to Shannon’s formula as
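The per-user rate from the even bandwidth split and Shannon's formula can be sketched as follows (illustrative names; SINR is taken in linear scale):

```python
import math

def transmission_rate(bandwidth_hz, num_users, sinr_linear):
    """v_uj = (B / k_j) * log2(1 + SINR_uj): each of the k_j connected
    users gets an equal share of the base station's bandwidth B, and the
    rate follows Shannon's formula at the user's received SINR."""
    return (bandwidth_hz / num_users) * math.log2(1 + sinr_linear)
```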
In order to improve user satisfaction, the present application expects to reduce the time delay of the user obtaining files through optimization. When the to-be-responded content request u ∈ U of the user accesses the base station j ∈ J, obtaining the cached content ƒ ∈ F is divided into three cases.
The user accesses a base station j that caches the requested file, and the user downloads the file directly from the accessed base station; at this time, the time delay for obtaining the file is T1 = Lƒ/vuj, where Lƒ is the size of the file ƒ, and vuj is the transmission rate between the user and the base station.
The user accesses a base station j that has not cached the requested file, but another base station has cached the file. The file cached in the other base station is obtained through the wired link between the base stations, and the user then downloads the file through the accessed base station. The time delay at this time is T2 = Lƒ/vuj + Lƒ/vs, where vs is the transmission rate of the wired link between the base stations.
None of the base stations caches the file requested by the user, so the file can only be downloaded from the cloud to the base station through the backhaul link, and then the user downloads the file through the base station. The time delay at this time is T3 = Lƒ/vuj + Lƒ/vc, where vc is the rate at which the base station downloads the file from the cloud server.
The time delays T1, T2, T3 of the above three cases satisfy T1 < T2 < T3, considering that the transmission rate of the wired link between the base stations is much greater than the rate of the backhaul link, that is, vs > vc.
According to the above analysis, the time delay of obtaining the file ƒ when the user u accesses the base station j is:
where cƒj is the cache variable and u (·) is the step function that satisfies
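The three-case delay above can be sketched as simple branching, which is equivalent to the step-function form of the equation (illustrative names):

```python
def download_delay(l_f, v_uj, v_s, v_c, cached_here, cached_elsewhere):
    """Delay for user u to obtain a file of size l_f via base station j.
    Case 1: file cached at j (download at rate v_uj only).
    Case 2: file cached at another base station (extra hop at wired rate v_s).
    Case 3: file only in the cloud (extra hop at backhaul rate v_c)."""
    if cached_here:
        return l_f / v_uj                 # T1
    if cached_elsewhere:
        return l_f / v_uj + l_f / v_s     # T2
    return l_f / v_uj + l_f / v_c         # T3
```

Because v_s > v_c, the three branches reproduce the ordering T1 < T2 < T3 noted above.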
According to the formula for the time delay, the content of the recommendation list, the transmission rate between the user and the base station, and the cache content of the base station all have an impact on the time delay: the recommendation content is related to the recommendation policy, the transmission rate is related to the user access mechanism, and the cache content of the base station is related to the cache policy. Therefore, by combining the processes from step B to step D, the access optimization sub-model, the cache optimization sub-model, and the recommendation optimization sub-model are constructed for joint optimization to minimize the total time delay of responding to content requests.
The present application proposes a joint recommendation and cache optimization method for a collaborative caching scenario of multiple base stations. With the recommendation, user access, and cache policies as optimization variables, the optimization problem of minimizing the total time delay of the system is constructed under the constraints of the cache capacity, the size of the recommendation list, the bandwidth, etc., and is decoupled into three sub-problems of recommendation, user access, and cache, which are solved by three respective methods.
As shown in
Although the present application is disclosed with embodiments as described above, it is not intended to limit the present application. Those skilled in the art to which the present application belongs may make various changes and modifications without departing from the scope of the present application. Therefore, the scope of the present application shall be subject to that defined in the claims.
Number | Date | Country | Kind |
---|---|---|---|
202210305642.7 | Mar 2022 | CN | national |
This application is a continuation application of International Application No. PCT/CN2022/107290, filed on Jul. 22, 2022, which claims priority to Chinese Patent Application No. 202210305642.7, filed on Mar. 25, 2022. The disclosures of the above-mentioned applications are incorporated herein by reference in their entireties.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/CN2022/107290 | Jul 2022 | WO |
Child | 18183383 | US |