A FAIR TASK OFFLOADING AND MIGRATION METHOD FOR EDGE SERVICE NETWORKS

Information

  • Patent Application
  • Publication Number: 20240264883
  • Date Filed: February 22, 2023
  • Date Published: August 08, 2024
Abstract
The present invention discloses a fair task offloading and migration method for edge service networks. Taking the Pareto optimality of the utility functions of all user tasks executed by the edge system as the optimization objective, the approach not only accounts for the constraints of edge network resources but also maximizes the utility functions of all user tasks in the system, and it proposes a new quantitative index for measuring task utility quality under multi-user competition. In addition, the present invention uses a graph neural network and a reinforcement learning algorithm to solve the final optimization goal. The algorithm executes efficiently and returns accurate approximate results, making it particularly suitable for edge network systems serving complex multi-user tasks: when multiple user tasks compete for network resources, the edge computing network system can efficiently obtain the Pareto-optimal result of the multi-user utility functions, greatly improving the service quality and user experience of edge network environments.
Description
TECHNICAL FIELD

The present invention belongs to the technical field of load balancing on edge computing network, in particular to a fair task offloading and migration method for edge service networks.


DESCRIPTION OF RELATED ART

In recent years, with the increasing popularity of edge computing technology, user computing load offloading and migration techniques have developed rapidly. However, server resources and link resources in the edge environment are often scarce, so servers in the edge environment may not be able to support all current service types; because the variety of services deployed on each server is limited, some user tasks must be migrated through the current server to be executed on other servers. Given the mobility of users, the diversity of server requests in edge network environments, and the limited computing resources of edge servers in a region, it is worth researching how to reasonably allocate different computing resources and network bandwidth resources to different users while executing their offloaded computing loads fairly, so as to ensure that the service experience of all users reaches a fair state.


Each user has personalized needs, such as bandwidth-sensitive tasks (live streaming services, video-on-demand services, etc.) and computation-sensitive tasks (AI algorithm calculation tasks, virtual reality graphics rendering tasks, etc.). Existing research on service algorithms of edge computing networks for real-time task response mostly focuses on the overall efficiency of tasks from the perspective of network bandwidth and server resource utilization; it is relatively rare to consider Pareto solutions for the utility needs of multiple different individual users. Due to the marginal effect of the user utility function, allocating different types and quantities of resources to users does not necessarily achieve the highest overall utility for a single user; in addition, in an edge network environment with limited overall resources, there are resource allocation conflicts among multiple users, which makes it difficult to fully meet the personalized needs of users and achieve Pareto-optimal solutions. In the edge network environment shown in FIG. 3, there are 5 users and 3 edge servers, and users can choose to offload tasks to any edge server whose service area overlaps their location; how to select edge servers for executing users' tasks and how to schedule task migration between different servers while maximizing the utility functions of all users is a challenging problem.


Reference [Yao Zewei, Lin Jiawen, Hu Junqin, et al. PSO-GA Based Approach to Multi-edge Load Balancing [J]. Computer Science, 2021, 48(S02):8] proposed a method for balancing edge environment load with the goal of minimizing the maximum response time of user tasks, using heuristic algorithms to make decisions so that multiple user tasks are assigned to different servers for execution. Reference [Liang Bing, Ji Wen. Multiuser computation offloading for edge-cloud collaboration using submodular optimization [J]. Journal on Communications, 2020, 41(10):12] studied the problem of maximizing the user utility function by simply summing the users' utilities, scheduling user tasks for execution with the goal of maximizing the efficiency of all tasks. Neither strategy can guarantee that all users' tasks are executed fairly; both may sacrifice the quality of service of certain individual users to improve overall performance, resulting in excessive execution times for certain user tasks or a decrease in those users' utility.


SUMMARY OF THE PRESENT INVENTION

In response to the marginal effect of users' resource demand and the nonlinear conflicts in resource allocation, the present invention provides a fair task offloading and migration method for edge service networks under network operation, which can greatly improve the overall utility of all users and achieve Pareto optimality for all users in edge network environments with limited resources.


A fair task offloading and migration method for edge service networks, comprising the following steps:

    • (1) detecting task requests of mobile users in the edge network, and determining all currently detected user task requests;
    • (2) enumerating migration paths of all user task requests transmitted to the edge server;
    • (3) establishing a fair utility evaluation function for evaluating user tasks;
    • (4) optimizing and solving the fair utility evaluation function to determine the migration path for each user task request.


Furthermore, for any user task request in step (2), the following three types of migration paths are enumerated:

    • E1 is a set of migration paths for edge servers directly connected to users;
    • E2 is a set of migration paths for edge servers that are indirectly connected to users in a single hop manner, and edge servers that are directly connected to users do not have processing capabilities;
    • E3 is a set of migration paths for edge servers that are indirectly connected to users in a single hop manner, and edge servers that are directly connected to users have processing capabilities.


Furthermore, an expression of the fair utility evaluation function is as follows:

$$\max\; U = (U_1, \ldots, U_N)^T$$

$$U_i = \sum_{k \in \mathcal{K}} w_k U_{ik}, \qquad U_{ik} = \frac{x_{i,k}^{1-\alpha_{ik}}}{1-\alpha_{ik}}$$

$$\text{s.t.} \quad \sum_{i_l \in \mathcal{N}_l} \operatorname{trans}(i_l) < c_l, \quad \forall\, l \in \mathcal{L}$$

$$\sum_{i_v \in \mathcal{N}_v} \operatorname{comp}(i_v) < c_v, \quad \forall\, v \in \mathcal{V}$$

    • among them: U_i is the utility function for executing the i-th user task request; if the i-th user task request cannot be offloaded and executed, then U_i = U_l, where U_l is a constant; i is a natural number with 1 ≤ i ≤ N, and N is the number of all users; U_{ik} is the utility value of resource k occupied by executing the i-th user task request; w_k represents the weight of resource k, and 𝒦 is the resource set, comprising bandwidth resources and computing resources; α_{ik} represents a custom parameter of the personalized utility function of resource k for the i-th user task request, and x_{i,k} represents the quantity of resource k occupied by the i-th user task request; l represents any transmission link in the edge network, ℒ represents the set of transmission links in the edge network, v represents any edge server in the edge network, 𝒱 represents the set of edge servers in the edge network; 𝒩_l is the set of all user task requests that occupy the transmission link l, 𝒩_v is the set of all user task requests that are computed on the edge server v, i_l is any user task request in the set 𝒩_l, and i_v is any user task request in the set 𝒩_v; trans(i_l) is the amount of bandwidth resource occupied by the user task request i_l on the transmission link l, comp(i_v) is the amount of computational resource occupied by the user task request i_v on the edge server v, c_l is the actual bandwidth upper limit of the transmission link l, and c_v is the actual maximum computing power of the edge server v.





Each edge server can perform several services, and each user can only perform one service.
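For concreteness, the per-resource utility above is the standard isoelastic (α-fair) form. The following is a minimal Python sketch, assuming α_ik ≠ 1; the function names and sample values are illustrative, not prescribed by the patent:

```python
def resource_utility(x, alpha):
    """Isoelastic utility U_ik = x^(1 - alpha) / (1 - alpha), assuming alpha != 1.

    (For alpha = 1 the isoelastic family degenerates to log(x); that edge
    case is not covered by this sketch.)
    """
    return x ** (1 - alpha) / (1 - alpha)

def user_utility(x, alpha, w):
    """U_i = sum over resource types k of w_k * U_ik, with the weights w_k
    summing to 1 (k = 0: bandwidth, k = 1: computing, per the embodiment)."""
    return sum(w_k * resource_utility(x_k, a_k)
               for x_k, a_k, w_k in zip(x, alpha, w))
```

For example, `user_utility([4.0, 9.0], [0.5, 0.5], [0.5, 0.5])` evaluates the weighted utility of a task occupying 4 units of bandwidth and 9 units of compute.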


Furthermore, the specific process of optimizing and solving the fair utility evaluation function in step (4) is as follows:

    • 4.1. establishing a user shared neural network layer and N user decision models, wherein the user shared neural network layer comprises a graph neural network and a nonlinear aggregation function, N represents the number of all users;
    • 4.2. using the graph neural network to extract information about nodes and edges in the edge network, and combining this information into feature vectors; the nodes correspond to mobile users and edge servers in the edge network, and the edges correspond to the transmission links between mobile users and edge servers, as well as the transmission links between edge servers;
    • 4.3 using the graph neural network to extract information about nodes and edges in the task feasible migration path subgraph, and combining the information into feature vectors;
    • 4.4 for any migration path in the task feasible migration path subgraph, using the graph neural network to extract the information of nodes and edges, and combining the information into feature vectors;
    • 4.5 inputting all the extracted feature vectors into each user decision model, and the user decision model outputs the selected probability of each migration path corresponding to the user task requests;
    • 4.6 first determining the priority of each user task request for offloading, then determining the offloading migration path for each user's task request, and then offloading each user's task request based on the priority and offloading migration path.


Furthermore, the information of the edges comprises the bandwidth occupied by each user task request on the transmission links between the mobile users and the edge servers, as well as the actual bandwidth upper limit of the transmission links; the information of the nodes comprises the amount of computing resources required for user task requests and the actual maximum computing power of the edge servers.


Furthermore, each task feasible migration path subgraph corresponds one-to-one with a user task request and is composed of all task migration paths corresponding to that request; a user task migration path represents a specific migration path used by the user task request to perform offloading.


Furthermore, the parameters θ_user i of the user decision model are optimized and solved through a reward function;

$$r_i = \frac{1}{T} \sum_{t} \max\left\{ \eta\left( U_i(t+1) - U_i(t) \right),\; \delta \right\}$$

    • among them, r_i represents the reward function of the i-th user decision model, U_i(t+1) and U_i(t) represent the utility function of executing the i-th user task request during the (t+1)-th and t-th iterations, respectively, η is a given parameter, t is a natural number, T is the total number of iterations, and δ is a small positive given parameter.
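The reward above can be sketched in Python as follows; the function signature and the sample trajectory are illustrative assumptions, not part of the claimed method:

```python
def reward(utilities, eta, delta):
    """r_i = (1/T) * sum_t max(eta * (U_i(t+1) - U_i(t)), delta).

    utilities -- sequence [U_i(0), ..., U_i(T)] of the user's utility value
                 at each iteration; eta and delta are given parameters.
    """
    T = len(utilities) - 1  # number of utility increments observed
    return sum(max(eta * (utilities[t + 1] - utilities[t]), delta)
               for t in range(T)) / T
```

A utility increase contributes η times the gain; any non-improving step is floored at the small positive δ, so the average reward stays positive.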





Furthermore, the parameters of the user shared neural network layer are iterated through the following formula until the utility function of all user task requests reaches Pareto optimality;

$$\theta_{\text{share}}(t+1) = \theta_{\text{share}}(t) - \sum_{i} \alpha_i \nabla R_i(\theta_{\text{share}}, \theta_{\text{user } i})$$

    • among them, θ_share(t+1) and θ_share(t) represent the parameters of the user shared neural network layer during the (t+1)-th and t-th iterations, respectively, α_i is the weight value of the reward function for the i-th user decision model, ∇R_i denotes the partial derivative of R_i with respect to the parameters, and R_i(θ_share, θ_user i) = 1/r_i.
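One iteration of the shared-parameter update can be sketched as below, assuming the per-user gradients ∇R_i have already been computed; the helper name and the optional learning-rate knob are assumptions (the formula in the text corresponds to lr = 1):

```python
import numpy as np

def update_shared_params(theta_share, grads, alphas, lr=1.0):
    """theta_share(t+1) = theta_share(t) - lr * sum_i alpha_i * grad R_i.

    theta_share -- current shared-parameter vector
    grads       -- list of gradient vectors dR_i/dtheta_share, one per user
    alphas      -- per-user weights (solved separately, e.g. by Frank-Wolfe)
    """
    combined = sum(a * g for a, g in zip(alphas, grads))  # weighted gradient
    return theta_share - lr * combined
```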





Furthermore, the weight value α_i is obtained by solving the following loss function by using the Frank-Wolfe algorithm;

$$\min \left\| \sum_{i} \alpha_i \nabla R_i(\theta_{\text{share}}, \theta_{\text{user } i}) \right\|_2^2$$

    • wherein, ∥ ∥2 represents a 2-norm.
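A generic Frank-Wolfe iteration for this simplex-constrained quadratic can be sketched as follows; the step-size rule 2/(t+2) and the iteration count are standard textbook choices rather than values taken from the patent:

```python
import numpy as np

def frank_wolfe_weights(grads, iters=200):
    """Minimize || sum_i alpha_i g_i ||_2^2 over the probability simplex
    using the Frank-Wolfe method (a generic sketch, not the patent's
    exact solver).

    grads -- array-like of shape (N, d): per-user gradient vectors g_i
    """
    G = np.asarray(grads, dtype=float)
    N = G.shape[0]
    alpha = np.full(N, 1.0 / N)      # start at the simplex centre
    M = G @ G.T                      # Gram matrix: M[i, j] = g_i . g_j
    for t in range(iters):
        grad = 2.0 * M @ alpha       # gradient of the quadratic objective
        j = int(np.argmin(grad))     # linear minimizer is a simplex vertex
        gamma = 2.0 / (t + 2.0)      # standard diminishing step size
        alpha = (1 - gamma) * alpha + gamma * np.eye(N)[j]
    return alpha
```

Because every iterate is a convex combination of simplex vertices, the constraints α_i ≥ 0 and Σα_i = 1 hold automatically, which is why Frank-Wolfe suits this problem.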





Furthermore, in step 4.6, for the selected probabilities of all migration paths of any user task request, the average of the K highest selected probabilities is taken as the priority for offloading the user task request, and the migration path corresponding to the highest selected probability is taken as the offloading migration path of the user task request, with K being a natural number greater than 1.


The present invention takes the Pareto optimality of the utility functions of all user tasks executed by the edge system as the optimization objective. The approach not only accounts for the constraints of edge network resources but also maximizes the utility functions of all user tasks in the system, and it proposes a new quantitative index for measuring task utility quality under multi-user competition. In addition, the present invention uses a graph neural network and a reinforcement learning algorithm to solve the final optimization goal. The algorithm executes efficiently and returns accurate approximate results, making it particularly suitable for edge network systems serving complex multi-user tasks: when multiple user tasks compete for network resources, the edge computing network system can efficiently obtain the Pareto-optimal result of the multi-user utility functions, greatly improving the service quality and user experience of edge network environments.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 (a) is a schematic diagram of the network topology that user tasks can connect to.



FIG. 1 (b) is an enumeration diagram of feasible migration paths for user tasks.



FIG. 2 is a schematic diagram of the neural network algorithm framework for calculating user migration path decisions.



FIG. 3 is a schematic diagram of the edge network system.





DETAILED DESCRIPTION OF THE INVENTION

In order to provide a more specific description of the present invention, the following will provide a detailed explanation of the technical solution of the present invention in conjunction with the accompanying drawings and specific implementation methods.


A fair task offloading and migration method for edge service networks of the present invention, comprising the following steps:

    • (1) detecting task requests of mobile users in the edge network, and determining all currently detected user task requests;
    • (2) enumerating migration paths of all user task requests transmitted to the edge server;
    • (3) establishing a fair utility evaluation function for evaluating user tasks:

$$\max\; U, \qquad U = (U_1, \ldots, U_N)^T, \qquad i \in \mathcal{N}$$


    • among them: 𝒩 represents the user set and i represents the user index; if a user task can be executed by an edge server, the fair utility function of the user is defined as follows; otherwise, U_i = δ, where δ is a small positive constant customized in advance by the edge system environment; this constant indicates that when a user task can only be executed locally and cannot be migrated to an edge server for execution, the user's utility is greatly reduced.











$$U_i = \sum_{k \in \mathcal{K}} w_k U_{ik}, \qquad U_{ik} = \frac{x_{i,k}^{1-\alpha_{ik}}}{1-\alpha_{ik}}$$

    • wherein, the parameter α_{ik} characterizes the personalized utility function of each user for resource k, k represents the type of user resource demand, w_k represents the weight of the resource type with Σ_{k∈𝒦} w_k = 1, and x_{i,k} represents the quantity of resource k occupied by user i; in this embodiment, 𝒦 has two values: k = 1 represents bandwidth resources, and k = 2 represents computing resources.





As shown in FIG. 1 (b), users and servers are viewed as the vertices of a graph, where the set of servers is represented by 𝒱 and the set of transmission links between users and servers is represented by ℒ; the transmission link from user i to server j is represented by l_{i,j}, and the migration link from server j to server k is represented by l_{j,k}, where i indexes users and j, k index edge servers. Each link in the set ℒ has a bandwidth constraint: the user task load that can be transmitted satisfies Σ_{i∈𝒩_l} trans(i) < c_l, l ∈ ℒ, where c_l is the actual bandwidth parameter of the system network and 𝒩_l is the set of all users using the link l. Each edge server in 𝒱 has a computational capacity constraint: the user task load it can carry satisfies Σ_{i∈𝒩_v} comp(i) < c_v, v ∈ 𝒱, where c_v is the actual performance parameter of the system edge server and 𝒩_v is the set of all users performing task calculations on edge server v.


(4) optimizing and solving the fair utility evaluation function to determine the migration path for each user task request; the specific implementation is as follows:


We define the computing load of server j as: Σ_{i∈𝒩_j} workload_{i,j} + (workload_j^in − workload_j^out), where 𝒩_j represents the set of users connected to server j, workload_{i,j} represents the amount of tasks offloaded by user i to server j, workload_j^in represents the amount of tasks transferred from other servers to server j, and workload_j^out represents the amount of tasks transferred from server j to other servers.
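The load definition above can be sketched directly; the function name and argument layout are illustrative assumptions:

```python
def server_load(offloads, inbound, outbound):
    """Computing load of server j: sum_i workload_{i,j} + (in - out).

    offloads -- iterable of task amounts offloaded directly by connected users
    inbound  -- total task amount migrated in from other servers
    outbound -- total task amount migrated out to other servers
    """
    return sum(offloads) + (inbound - outbound)
```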


Task offloading refers to a user sending a task to a designated edge server for execution within that server's service scope; task migration refers to transferring a task, after it has been offloaded to a destination server, from one edge server to another. Through task offloading and task migration, a user's computing task ultimately reaches some server for execution.


The task transmission path search algorithm is as follows: first, i represents the user index and m represents the server index; define S_m as the set of servers adjacent to (within one hop of) server m, define O_i^c as the set of servers whose service scope covers user i, and define O_i^e as the set of servers that can execute user i's tasks. Then O_i^{c-e} = O_i^c \ O_i^e represents the servers that are directly connected to user i but cannot execute the tasks submitted by user i, and O_i^{ce} = O_i^c ∩ O_i^e represents the servers that are directly connected to user i and are capable of performing the tasks offloaded by user i.


Add the direct links between user i and the servers in O_i^{ce} to the set E_i^{direct}; traverse the server indexes of O_i^{c-e}, recording each as z, and define the set E_1^{migration} = {z} × (S_z ∩ O_i^e); traverse the server indexes of O_i^{ce}, recording each as j, and define the set E_2^{migration} = {j} × (S_j ∩ O_i^e).


The first part of the migration link set, E_1^{migration}, consists of paths composed of the user node, a first server node that cannot perform the task, and a server node that offers the task's service; the second part, E_2^{migration}, consists of paths composed of the user node and server nodes that can perform the service. From the above steps, the set of migration paths over which user i can perform task transmission is: E_i^{direct} ∪ E_1^{migration} ∪ E_2^{migration}.
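The path-construction steps above reduce to a few set operations. A minimal sketch, with illustrative names and data structures (paths are represented as tuples of server indices):

```python
def migration_paths(O_c, O_e, S):
    """Enumerate user i's feasible transmission paths (illustrative sketch).

    O_c -- set of servers whose service scope covers user i (O_i^c)
    O_e -- set of servers able to execute user i's task (O_i^e)
    S   -- dict: server m -> set S_m of servers within one hop of m

    Returns (E_direct, E1_migration, E2_migration) as sets of path tuples.
    """
    O_ce = O_c & O_e           # directly connected and capable
    O_c_minus_e = O_c - O_e    # directly connected but not capable
    E_direct = {(j,) for j in O_ce}
    # single-hop migration via an incapable first server z
    E1 = {(z, s) for z in O_c_minus_e for s in S.get(z, set()) & O_e}
    # single-hop migration via a capable first server j
    E2 = {(j, s) for j in O_ce for s in S.get(j, set()) & O_e}
    return E_direct, E1, E2
```

The union of the three returned sets is the candidate path set E_i fed to the decision model.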


Decompose the feasible connection graph of each user (i.e. FIG. 1 (a)) to obtain the feasible connection subgraph of each user (i.e. FIG. 1 (b)); combine the information extracted from each user's feasible connection graph, perform edge and vertex feature learning on each user's subgraph, and obtain the graph embedding feature information of each user's task migration paths in FIG. 1 (b). On the basis of the embedded feature information of all users' connection feasibility graphs at the current time, prioritize the current users; users with higher priority perform task offloading and execution first. Once a user offloads a task, a feasible task execution plan is selected based on the embedded feature information in the user's subgraph: an edge server is assigned to the task for offloading, and, if the task needs to be migrated, the edge server for task migration is also specified. The optional execution strategies for a user task consist of all possible migration paths for the task, as shown in FIG. 1 (b): task 4 of user 1 can be executed on an edge server via three migration strategies, namely subgraph 1, subgraph 2, and subgraph 3. Using e_{i,s} to represent the graph encoding feature vector of migration path s of user i, the set of feasible execution migration paths for user i's task is E_i, where s ∈ E_i.


The following is a detailed description of two types of embedding encoding information for the edge server network system, including embedding encoding information for each element node in the user feasible migration subgraph and embedding encoding information for each element node in the edge network global graph.


Embedding encoding information for feasible migration subgraphs for user i:


The set of all links in the feasible migration subgraph of user i is denoted ℒ_i, the set of servers is φ, and the feature vector of the user node in the migration path is u_i. The feature vector of link l is fea_l = [o_l^1, o_l^2, …, o_l^{|𝒩|}], where o_l^n (n ∈ 𝒩) represents the bandwidth resources occupied by user n on link l; the feature vector of server c is fea_c = [p_c^1, p_c^2, …, p_c^{|𝒩|}], where p_c^n (n ∈ 𝒩) represents the computing resources occupied by user n on server c. A graph neural network g(·) computes the embedding encoding information [ω_i^sub, ω_i^c, ω_i^l] = g(u_i, F_l, F_c) of all element nodes (users, links, servers) in the migration subgraph of user i, where fea_l ∈ F_l, fea_c ∈ F_c, and F_l and F_c represent the sets of feature vectors of all links and servers in the feasible migration subgraph. ω_i^c = [c_1, c_2, …, c_{|φ|}] and ω_i^l = [l_1, l_2, …, l_{|ℒ_i|}], where c_k (k ∈ φ) is the embedding encoding information of a server, l_k (k ∈ ℒ_i) is the embedding encoding information of a link, ω_i^sub (i ∈ 𝒩) is the embedding encoding information of the user, |ℒ_i| represents the size of the link set, and |φ| represents the size of the server set.


For global edge network embedding encoding information:


The feature vector of link l is fea_l = [o_l^1, o_l^2, …, o_l^{|𝒩|}], where o_l^n (n ∈ 𝒩) represents the bandwidth resources occupied by user n on link l. The feature vector of server c is fea_c = [p_c^1, p_c^2, …, p_c^{|𝒩|}], where p_c^n (n ∈ 𝒩) represents the computational resources occupied by user n on server c. A graph neural network f(·) computes the embedding encoding information [ω_ug, ω_gc, ω_gl] = f(u_i, F_l, F_c) of all element nodes (users, links, servers) in the global edge network, where fea_l ∈ F_l, fea_c ∈ F_c, and F_l and F_c represent the sets of feature vectors of all links and servers in the edge network. ω_gc = [c_1, c_2, …, c_{|𝒱|}], where c_k (k ∈ 𝒱) is the embedding encoding information of all servers; ω_gl = [l_1, l_2, …, l_{|ℒ|}], where l_k (k ∈ ℒ) is the embedding encoding information of the links; ω_ug = [u_1, u_2, …, u_{|𝒩|}] is the embedding encoding information of all users; |ℒ| represents the size of the link set and |𝒱| represents the size of the server set.


Embedding encoding information after CNN network aggregation:


The input of the CNN network is the [ω_i^sub, ω_i^c, ω_i^l] of the feasible migration subgraphs of all users together with the [ω_ug, ω_gc, ω_gl] of the global edge network; the output of the CNN network is [ω_u, ω_l, ω_c], where ω_u represents the aggregated embedding encoding information of all users, ω_l the aggregated embedding encoding information of all links, and ω_c the aggregated embedding encoding information of all servers.


Using R_i = [e_1^i, e_2^i, …, e_s^i, …] to represent the set of feature vectors of the task migration paths of user i, each path takes, starting from the migrating user, the user's edge server node, the task migration link, the task path, and the edge server node that the task finally reaches as its elements. If migration path s of user i satisfies s ∈ E_i^{direct}, the feature vector is e_s^i = [ω_u(i, s), ω_l(i, s), ω_c(i, s), 0, 0]; otherwise s ∈ E_1^{migration} ∪ E_2^{migration}, and the feature vector is:






e_s^i = [ω_u(i, s), ω_l(i, s), ω_c(i, s), ω_{l′}(i, s), ω_{c′}(i, s)]

    • among them: ω_u(i, s) represents the embedding encoding information of user i on migration path s, ω_l(i, s) and ω_c(i, s) respectively represent the embedding encoding information of the link l and the server c belonging to user i's task migration path s, and l′ and c′ represent the link and server of the second hop of user i's task migration path s.


As shown in FIG. 2, R = {R_1, R_2, …, R_i, …, R_N} is defined as the set of feature vectors of each user's migration paths; Z represents the output of each user's decision model, and the i-th component Z_i of Z represents that user's task migration path decision vector.


We aim to maximize fairness for all users; in the design of the present invention, the system uses the fair utility function value produced by a decision as the evaluation criterion. After the system makes a decision, if every user's fair utility increases, the system reward value is (U_i(t+1) − U_i(t)) · η, where η is a constant; otherwise, the reward value for the user is 0. The system is modeled in the reinforcement learning framework by maximizing this reward function.


The specific description of the system reward function is:







$$r_i = \frac{1}{T} \sum_{t} \max\left\{ \eta\left( U_i(t+1) - U_i(t) \right),\; \delta \right\}$$

    • wherein: δ is a small positive number and is a given parameter.





The goal of the reinforcement learning algorithm of the present invention is to maximize the reward function. The system uses the policy gradient method to train the offloading and migration decisions of user tasks, and defines the parameters of the user decision model's neural network as θ_user i. The policy π_{θ_user i}(s_t, a_t) represents the probability that the system takes action a_t in state s_t, where an action refers to the server selection and migration path selection made when offloading a user task. During operation, each user decision model updates its neural network parameters θ_user i by the formula: θ_user i = θ_user i + α Σ_{t∈T} ∇_{θ_user i} log π_{θ_user i}(s_t, a_t)(Σ_{t′=t}^{|T|} r_{t′} − b_t), where α represents a customized learning rate parameter and b_t represents a baseline value, defined from the average user reward generated by the decisions and introduced to reduce variance.
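The per-user update above is a REINFORCE-style policy gradient with a baseline. A generic sketch, assuming the per-step score-function gradients ∇ log π are supplied externally and using a mean-reward baseline (the data layout and helper name are assumptions; the patent's exact network is not reproduced):

```python
import numpy as np

def policy_gradient_step(theta, episodes, alpha=0.01):
    """One REINFORCE-style update for a user decision model.

    theta    -- current parameter vector theta_user_i
    episodes -- list of (grad_log_pi, reward) per step t, where grad_log_pi
                approximates d/dtheta log pi_theta(s_t, a_t)
    alpha    -- customized learning rate
    """
    rewards = [r for _, r in episodes]
    baseline = np.mean(rewards)            # b_t: mean-reward baseline
    for t, (grad_log_pi, _) in enumerate(episodes):
        ret = sum(rewards[t:])             # return-to-go: sum_{t' >= t} r_t'
        theta = theta + alpha * grad_log_pi * (ret - baseline)
    return theta
```

Subtracting the baseline leaves the gradient estimate unbiased while reducing its variance, which is the standard motivation for b_t.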


The above reinforcement learning algorithm maximizes the fair utility function value over all users, but it cannot guarantee that the utility of each user is in the fairest state. In order to obtain Pareto optimality for all users, the present invention proposes a Pareto fairness algorithm for calculating the neural network parameters. As shown in FIG. 2, the parameters of the graph neural networks f(⋅) and g(⋅) and of the CNN network are merged and defined as the shared parameters θ_share, R is the output of the user shared neural network layer, and R_i = 1/r_i is the loss function corresponding to user i. Once the reinforcement learning algorithm above has updated θ_user i, the weights α_1, α_2, …, α_N are solved using the Frank-Wolfe solver:







$$\min \left\| \sum_{i \in \mathcal{N}} \alpha_i \nabla R_i(\theta_{\text{share}}, \theta_{\text{user } i}) \right\|_2^2 \quad \text{and} \quad \sum_{i \in \mathcal{N}} \alpha_i = 1$$

    • among them, ε represents the accuracy requirement of the system, a set system parameter; after multiple iterations of the above algorithm steps, once the accuracy requirement of the above equation is met, the system considers that all users have reached the Pareto fairness requirement, and the shared parameters at that point are taken as θ_share.





The set of migration paths for user i's feasible task transmission is defined as E_i = E_i^{direct} ∪ E_1^{migration} ∪ E_2^{migration}. The elements of the output vector of the neural network system shown in FIG. 2 correspond one-to-one to the decision vector Z_i of the task migration paths of each user i; each element of Z_i serves as a judgment value for whether the corresponding task path in E_i is selected, and Z_i^k represents the judgment value of the k-th feasible task migration path in E_i for user i.


The present invention designs a top-k sorting function tpk(X), which returns the average value of the k largest components of the input vector. The user prioritized for task migration in each iteration of the system is i* = argmax_i {V_1, V_2, …, V_N}, where V = [tpk(Z_1), tpk(Z_2), tpk(Z_3), …, tpk(Z_N)]. Once the user is selected, the system assigns the optimal task migration path: the largest component k = argmax_k {Z_i^k} of the user's decision vector is used as the index of the finally selected task migration path, i.e. the k-th migration path in user i's transmission path set E_i.
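The top-k priority rule and path selection above can be sketched as follows; the function names and the sample decision vectors are illustrative assumptions:

```python
def tpk(z, k):
    """Average of the k largest components of decision vector z."""
    return sum(sorted(z, reverse=True)[:k]) / k

def select_user_and_path(decisions, k=2):
    """Pick the highest-priority user i* = argmax_i tpk(Z_i), then that
    user's best migration path index argmax over Z_{i*}.

    decisions -- list of per-user decision vectors Z_i (path probabilities)
    """
    priorities = [tpk(z, k) for z in decisions]
    i_star = max(range(len(decisions)), key=lambda i: priorities[i])
    path = max(range(len(decisions[i_star])),
               key=lambda j: decisions[i_star][j])
    return i_star, path
```

Averaging the top K probabilities (rather than taking only the maximum) rewards users whose several best paths are all strong, which is how step 4.6 ranks users.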


Under the constraint conditions Σ_{i∈𝒩_l} trans(i) < c_l, l ∈ ℒ and Σ_{i∈𝒩_v} comp(i) < c_v, v ∈ 𝒱, the above selection is repeated until some constraint can no longer be satisfied. At this point the path selection decision vector of each user has been obtained, and users without an assigned migration path execute their tasks locally.


According to the neural network algorithm framework shown in FIG. 2, backpropagation training is conducted. On the premise of meeting the accuracy requirement ε, user task input to the edge network is simulated until the system reward function converges; the parameters of the neural network at that time are used as the final parameters of the edge-network user-task utility-fairness load transfer method.


The above description of the embodiments is for the convenience of ordinary technical personnel in the art to understand and apply the present invention. Those familiar with the art can clearly make various modifications to the above embodiments and apply the general principles explained here to other embodiments without the need for creative labor. Therefore, the present invention is not limited to the aforementioned embodiments. According to the disclosure of the present invention, the improvements and modifications made by those skilled in the art should be within the scope of protection of the present invention.

Claims
  • 1. A fair task offloading and migration method for edge service networks, comprising the following steps: (1) detecting task requests of mobile users in the edge network, and determining all currently detected user task requests;(2) enumerating migration paths of all user task requests transmitted to the edge server;(3) establishing a fair utility evaluation function for evaluating user tasks, wherein, an expression of the fair utility evaluation function is as follows:
  • 2. The fair task offloading and migration method for edge service networks according to claim 1, wherein, for any user task request in step (2), the following three types of migration paths are enumerated: E1 is a set of migration paths for edge servers directly connected to users;E2 is a set of migration paths for edge servers that are indirectly connected to users in a single hop manner, and edge servers that are directly connected to users do not have processing capabilities;E3 is a set of migration paths for edge servers that are indirectly connected to users in a single hop manner, and edge servers that are directly connected to users have processing capabilities.
  • 3. (canceled)
  • 4. (canceled)
  • 5. The fair task offloading and migration method for edge service networks according to claim 1, wherein, the information of the edges comprises the bandwidth occupied by each user task request on the transmission links between the mobile users and the edge servers, as well as the actual bandwidth upper limit of the transmission links; the information of the nodes comprises the amount of computing resources required for user task requests and the actual maximum computing power of the edge servers.
  • 6. (canceled)
  • 7. The fair task offloading and migration method for edge service networks according to claim 1, wherein, the parameters θuser i of the user decision model are optimized and solved through a reward function;
  • 8. The fair task offloading and migration method for edge service networks according to claim 7, wherein, the parameters of the user shared neural network layer are iterated through the following formula until the utility function of all user task requests reaches Pareto optimality;
  • 9. The fair task offloading and migration method for edge service networks according to claim 8, wherein, the weight value αi is obtained by solving the following loss function by using the Frank Wolfe algorithm;
  • 10. The fair task offloading and migration method for edge service networks according to claim 1, wherein, in step 4.6, for the selected probabilities of all migration paths requested by any user task, the average of the K highest selected probabilities is taken as the priority for offloading the user task request, and the migration path corresponding to the highest selected probability is taken as the offloading migration path of the user task request, with K being a natural number greater than 1.
Priority Claims (1)
Number Date Country Kind
202210994286.4 Aug 2022 CN national
PCT Information
Filing Document Filing Date Country Kind
PCT/CN2023/077586 2/22/2023 WO