METHOD FOR SUPPORTING ADAPTIVE UNLOADING OF MULTI-INTERNET OF THINGS (IOT) APPLICATIONS IN EDGE ENVIRONMENT

Information

  • Patent Application
  • Publication Number
    20240192995
  • Date Filed
    January 12, 2024
  • Date Published
    June 13, 2024
Abstract
A method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in an edge environment includes: constructing an application compliant with a universal program structure supporting on-demand computing unloading; for the application compliant with the universal program structure supporting on-demand computing unloading, extracting a program fragment flowchart through static code analysis to provide internal flow information of the application for generation of an unloading solution; analyzing a peripheral mobile edge computing (MEC) environment and the application through an unloading solution generating algorithm based on a multi-task particle swarm optimization-genetic algorithm to obtain the optimal unloading solution; and performing, by the application, computing unloading by taking a service as a particle size according to the unloading solution to minimize a system overhead under a circumstance that a deadline constraint of each application is satisfied. The method supports the computing unloading of different types of applications in the MEC environment.
Description
TECHNICAL FIELD

The present invention belongs to the technical field of Internet of Things, and particularly relates to a method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in an edge environment.


BACKGROUND

With the rapid development of the Internet of Things (IoT) and IoT equipment, a variety of resource-intensive IoT applications have emerged, for example, human face recognition, sound-semantic analysis, interactive games, and augmented reality. Limited by processing capacity, memory capacity, and battery capacity, most resource-intensive IoT applications cannot run directly on the IoT equipment.


Computing unloading is an effective method to solve the resource constraints of the IoT equipment. Particularly, with the proposal of mobile cloud computing (MCC), all or part of the computing tasks are unloaded to a cloud server, which provides huge storage and computing resources. More particularly, an application is divided at a certain particle size, and some of the computing-intensive tasks are unloaded to the cloud for execution, while other simple tasks are processed locally, thereby shortening the response time of the application.


However, the distance between the IoT equipment and the cloud server is relatively far, which may cause a significant execution delay. In addition, massive data transmission may result in traffic congestion in a core network. To solve the above problems, mobile edge computing (MEC) is introduced. Mobile edge provides a computing capacity and storage resources superior to those of the IoT equipment and is closer to the IoT equipment, which may significantly shorten the delay.


Unloading the intensive tasks to the cloud server or an edge server is indeed capable of alleviating the computing and storage pressures of the IoT equipment. However, the variety of application types and the complicated and variable operating environments of the MEC make computing unloading in the MEC difficult. Moreover, overcoming the expensive overhead brought by computing unloading is also a nonnegligible problem. Specifically speaking, there are the two challenges below:


Challenge 1: an enabling mechanism. For various types of IoT applications, designing a universal method which is capable of decoupling different types of strongly coupled single applications into different functional modules and supporting on-demand computing unloading is a challenge.


Challenge 2: unloading decision making. In the MEC, the computing resources are scattered across the IoT equipment, the edge server, and the cloud server, and the resources of different computing platforms differ significantly in performance. Different applications may compete for resources in their executive processes. In such a complicated and varied network environment, obtaining an appropriate unloading strategy that minimizes the money overhead under the circumstance that the deadline of each application is satisfied is another challenge.


Most previous studies focus on solving the computing unloading of Android applications. With the development of the deep neural network (DNN), more researchers have shifted the target to unloading DNNs. However, these studies are only targeted at specific types of applications and lack universality. Moreover, they target the computing unloading problem of a single application and cannot make full use of the scattered and varied computing resources in the MEC environment.


SUMMARY

An object of the present invention is to provide a method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in an edge environment. The method supports the computing unloading of different types of applications in the MEC environment, and minimizes the system overhead generated by the computing unloading.


To achieve the above object, the present invention adopts the following technical solution: a method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in an edge environment, including:

    • constructing an application compliant with a universal program structure supporting on-demand computing unloading;
    • for the application compliant with the universal program structure supporting on-demand computing unloading, extracting a program fragment flowchart through static code analysis to provide internal flow information of the application for generation of an unloading solution;
    • analyzing a peripheral mobile edge computing (MEC) environment and the application through an unloading solution generating algorithm based on a multi-task particle swarm optimization-genetic algorithm (MPSO-GA) to obtain the optimal unloading solution; and
    • performing, by the application, computing unloading by taking a service as a particle size according to the unloading solution to minimize a system overhead under a circumstance that a deadline constraint of each application is satisfied.


Further, the universal program structure supporting on-demand computing unloading is designed through a design mode supporting independent deployment and execution of modules of the different types of applications; the design mode satisfies the following conditions: each module of the application is capable of being independently deployed and executed at different computing nodes; the module is capable of determining the computing node where a module or a database to be invoked is located, so as to guarantee invoking validity; and the invoked module is capable of receiving an input of an invoking module and returning a correct executive result;

    • there are two types of services in the application compliant with the design mode: a pure service, which uses local data without invoking other services; and a compound service, which relies on data from other services and is required to invoke other services; the compound service mainly comprises the following attributes:
    • (1) key-value type data Params/result: all external variables required by an invoked service in its execution process are imported through Params, i.e., no statement in the service accesses the external variables directly; when the service needs to modify the external variables, the external variables are exported by virtue of result; and the external variables refer to variables accessed in the execution process of the service and defined outside the service;
    • (2) a character string seq: a character string seq is maintained in the application, and records the invoke path from a current service to a primary service; to maintain the seq, when a service is invoked, the seq is imported in the form of a parameter, and the name of the invoked service is added into the seq;
    • (3) an Invoke function: the compound service invokes other services through the Invoke function; the Invoke function achieves two functions: a) control forwarding: whether the service is locally executed or remotely invoked is determined, and if the service is remotely invoked, the computing node on which the service is invoked is determined; and b) data transmission: all parameters required by the invoked service are serialized, and the returned service executive result is deserialized; and
    • (4) a Call function: the service establishes a connection with the database through the Call function; the Call function determines the address of the database through the name of the database to be interacted with and returns a connecting channel for data transmission with the database, so as to ensure normal data interaction between the service and the database.
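As an illustrative sketch only (not the patent's reference implementation), a pure service and a compound service following this structure might look as follows in Python; the service names, Params keys, and the local-only invoke helper are all hypothetical:

```python
# Sketch of the universal program structure. All names (grayscale,
# detect_face, local_invoke) are hypothetical illustrations.

def grayscale(params, seq):
    """Pure service: uses only the data imported via params."""
    pixels = params["pixels"]
    # the modified external state is exported via result, not shared variables
    result = {"gray": [sum(p) // 3 for p in pixels]}
    return result

def detect_face(params, seq, invoke):
    """Compound service: relies on another service, reached via invoke."""
    seq = seq + "/detect_face"                # record the invoke path
    sub = invoke("grayscale", {"pixels": params["pixels"]}, seq)
    result = {"faces": len([g for g in sub["gray"] if g > 128])}
    return result

def local_invoke(name, params, seq):
    """Minimal stand-in for the Invoke controller: always executes locally."""
    services = {"grayscale": grayscale}
    return services[name](params, seq)

out = detect_face({"pixels": [(255, 255, 255), (0, 0, 0)]}, "main", local_invoke)
```

Because every service only touches Params and result, each service is stateless and can be relocated to another computing node without changing its code.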


Further, the unloading solution generating algorithm describes the application unloading problem in the MEC environment from the aspects of a network model, an application model, and a performance model, and solves the application unloading problem through the multi-task particle swarm optimization-genetic algorithm (MPSO-GA);

    • a network environment comprises a cloud server CS, a plurality of edge servers ES, and a plurality of Internet of Things devices DS; the MEC environment is modeled as a graph G=(N,E) including a set N={n1, n2, . . . , nw} of the computing nodes, and a set E representing the connections among them; a computing node is represented by ni=<Ψi, ωi, cicom>, where Ψi represents the computing power of the computing node ni; it is assumed that the storage capacity of each node is capable of satisfying the corresponding demand; ωi represents the maximum number of tasks capable of being executed by the node ni, wherein its value is equal to the number of cores of the node; cicom represents the computing cost of using the computing node ni per second; ei,j represents a connection between ni and nj, and is related to a bandwidth vi,j and the minimum link delay rtti,j; and ci,jtran represents the transmission overhead per second between the nodes ni and nj;
    • it is assumed that the system executes for a time period θ in the unloading process; there are a plurality of applications A={a1, a2, . . . , ag} from the IoT equipment in the system, and each application ai is executed at a time interval of μi, so each application is executed ξi=θ/μi times within the time period θ, and dli represents the deadline of the application ai; each application is compliant with the universal program structure and is modeled as the program fragment flowchart Flowi=(Di, Si, Pi, Qi).
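Under the notation above, the MEC environment model G=(N,E) can be sketched with Python dataclasses; the node names, counts, and all numeric values below are hypothetical placeholders:

```python
from dataclasses import dataclass

@dataclass
class Node:          # n_i = <Psi_i, omega_i, c_i^com>
    psi: float       # computing power Psi_i
    omega: int       # max parallel tasks omega_i (number of cores)
    c_com: float     # computing cost per second c_i^com

@dataclass
class Link:          # e_{i,j}
    v: float         # bandwidth v_{i,j}
    rtt: float       # minimum link delay rtt_{i,j}
    c_tran: float    # transmission cost per second c_{i,j}^tran

# G = (N, E): nodes indexed by name, links keyed by node pairs
N = {"iot":   Node(psi=1.0,  omega=2,  c_com=0.0),
     "edge":  Node(psi=4.0,  omega=8,  c_com=0.02),
     "cloud": Node(psi=16.0, omega=64, c_com=0.05)}
E = {("iot", "edge"):   Link(v=10.0,  rtt=0.01, c_tran=0.001),
     ("edge", "cloud"): Link(v=100.0, rtt=0.05, c_tran=0.002)}
```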


To compute the response time and the money overhead of an application, an unloading solution corresponding to the services and databases is constructed, where SPi={sp(si1), sp(si2), . . . , sp(sio)} represents the deployment addresses of the services in the application ai; for each program fragment sij,k, its executive address is dependent on the deployment address of the service sij, i.e., sp(sij,k)=sp(sij); and DP={dp(bi1), dp(bi2), . . . , dp(bil)} represents the deployment addresses of the databases in the application ai;

    • in the λith executive process of the application ai, ATλi(sij,k) is used to represent the time when the task executing the program fragment sij,k arrives at the computing node; the time when the fragment actually starts being executed is represented as STλi(sij,k); and WTλi(sij,k) represents the queuing time of the fragment on the node, wherein:

WTλi(sij,k)=STλi(sij,k)−ATλi(sij,k)   (1)

    • the data importing time when the precursor fragment six,y transfers data to the program fragment sij,k is as follows:

IT(sij,k)=αij,k/vsp(six,y),sp(sij,k)+rttsp(six,y),sp(sij,k)   (2)

    • the time OT(sij,k) when the program fragment sij,k occupies the node sp(sij,k) comprises the self-task execution time ET(sij,k) and the data transmission times DT(sij,k, dih) with the databases during execution, where OT(sij,k) is computed as follows:

OT(sij,k)=ET(sij,k)+Σh=1l DT(sij,k, dih)   (3)

=βij,k/ψsp(sij,k)+Σh=1l (γ(sij,k, dih)/vdp(dih),sp(sij,k)+rttdp(dih),sp(sij,k))   (4)

    • the finish time FTλi(sij,k) of sij,k is the sum of the start time and the occupied time on the node sp(sij,k), computed as follows:

FTλi(sij,k)=STλi(sij,k)+OT(sij,k)   (5)


    • the start time when the application ai is executed for the λith time is the task arrival time ATλi(si1,1) of the first program fragment si1,1 executing the primary service, and the finish time is the time FTλi(si1,f) when the last fragment si1,f of the primary service finishes executing, wherein the response time RTλi(ai) when the application ai is executed for the λith time is computed as follows:

RTλi(ai)=FTλi(si1,f)−ATλi(si1,1)   (6)









    • when the program fragment sij,k is executed on the computing node sp(sij,k), the using overhead of the fragment occupying the node is computed as follows:

NC(sij,k)=OT(sij,k)·csp(sij,k)com   (7)

    • during the execution of the program fragment sij,k, the transmission overhead generated by importing data and interacting with the databases is computed as follows:

TC(sij,k)=IT(sij,k)·csp(six,y),sp(sij,k)tran+Σh=1l DT(sij,k, dih)·cdp(dih),sp(sij,k)tran   (8)


The money overhead MC(sij) generated by invoking the service sij once is the sum of the using overheads and the transmission overheads of all the program fragments included in the service, computed as follows:

MC(sij)=Σk=1f (NC(sij,k)+TC(sij,k))   (9)


The money overhead MC(ai) generated by executing the application ai once is the sum of the products of the money overhead of each service and the number of times the service is invoked, computed as follows:

MC(ai)=Σj=1o MC(sij)·sij.Time   (10)
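The per-fragment timing and cost quantities defined in equations (1)-(10) can be sketched as plain Python functions; every numeric input in the sketch (workloads, bandwidths, delays, unit costs) is a hypothetical placeholder, and ET and DT are taken as given rather than expanded per equation (4):

```python
# Sketch of the performance model, equations (1)-(10).

def wt(st, at):                   # (1) queuing time: WT = ST - AT
    return st - at

def it(alpha, v, rtt):            # (2) importing time: alpha/v + rtt
    return alpha / v + rtt

def ot(et, dts):                  # (3) occupation time: ET + sum of DTs
    return et + sum(dts)

def ft(st, ot_val):               # (5) finish time: ST + OT
    return st + ot_val

def rt(ft_last, at_first):        # (6) response time of one execution
    return ft_last - at_first

def nc(ot_val, c_com):            # (7) node-using overhead
    return ot_val * c_com

def tc(it_val, c_tran_in, dts, c_tran_db):   # (8) transmission overhead
    return it_val * c_tran_in + sum(d * c for d, c in zip(dts, c_tran_db))

def mc_service(fragment_costs):   # (9) one invocation: sum of (NC, TC) pairs
    return sum(n + t for n, t in fragment_costs)

def mc_app(service_costs, times): # (10) one run: sum of MC(s) * invoke count
    return sum(m * t for m, t in zip(service_costs, times))
```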


An optimization objective of the unloading solution generating algorithm is to obtain an unloading solution for the services and databases that minimizes the total money overhead under the condition that the response time of each application satisfies its deadline constraint within the time period θ; finally, a formalized definition of the application unloading problem is as follows:

MCtotal=min Σi=1g MC(ai)·ξi
s.t. RTλi(ai)≤dli, ∀i=1, . . . , g, ∀λi=1, . . . , ξi   (11)


Further, a specific implementation method for the unloading solution generating algorithm based on the multi-task particle swarm optimization-genetic algorithm is as follows:

    • a coding strategy of the unloading solution generating algorithm satisfies the following three principles:
    • integrity: each candidate solution in a problem space can be encoded into a corresponding coding particle;
    • non-redundancy: each candidate solution in the problem space only corresponds to one coding particle; and
    • viability: each coding particle in a code space can correspond to the candidate solution in the problem space;
    • the coding strategy uses the particle Z to represent a candidate solution for cost-driven unloading of all services and databases in the MEC environment, wherein the ith particle in the tth iteration is described as (12):

Zit=(zi,1t, zi,2t, . . . , zi,pt, zi,p+1t, . . . , zi,p+qt)   (12)

zi,kt=(nj)i,kt   (13)

where p and q respectively represent the total quantities of the services and databases in the whole system; in the equation (13), zi,kt (k=1, 2, . . . , p) represents that, in the ith particle of the tth iteration, the kth service is deployed on the computing node nj, and zi,kt (k=p+1, p+2, . . . , p+q) represents that the (k−p)th database is deployed on the computing node nj;
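The encoding of equations (12)-(13) can be sketched as follows; the node names and the counts p and q are hypothetical:

```python
import random

nodes = ["iot", "edge1", "edge2", "cloud"]   # hypothetical computing nodes
p, q = 3, 2                                  # 3 services, 2 databases

def random_particle():
    """Z_i^t: one deployment address per service (bits 1..p)
    and per database (bits p+1..p+q)."""
    return [random.choice(nodes) for _ in range(p + q)]

particle = random_particle()
service_deploy = particle[:p]     # sp(s_i^j) for every service
database_deploy = particle[p:]    # dp(b_i^h) for every database
```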

    • each code bit of the particle represents the deployment address of the corresponding service or database, satisfying the integrity principle; different coding particles represent different unloading solutions, and a certain feasible solution in the problem space corresponds to only one coding particle in the code space, satisfying the non-redundancy principle; however, the candidate solutions corresponding to some particles may not satisfy the deadline constraint, so that the response times of some applications exceed the time limit, thereby not satisfying the viability principle; all particles are therefore divided into two categories:
    • feasible particles: in the unloading strategy corresponding to the particle, each application satisfies the deadline constraint; and
    • infeasible particles: in the unloading strategy corresponding to the particle, the execution of at least one application cannot satisfy the deadline constraint;
    • to compare different particles, a fitness function is adopted; particles with smaller fitness represent better candidate solutions; because some solutions may make the response times of the applications exceed the stipulated deadline, the particles are compared in the following three circumstances:
    • circumstance 1: one particle is a feasible solution while the other is an infeasible solution; in this case, the feasible solution is selected, with the fitness function being defined as follows:

F(Zit)={0, if ∀j, ∀λj, RTλj(aj)|Zit≤dlj; 1, else   (14)

    • circumstance 2: the two particles both are infeasible solutions; the particle with the smaller sum of deadline differences is selected, because that particle is closer to a feasible solution and may turn into a feasible solution in subsequent evolution, with the fitness function being defined as follows:

F(Zit)=Σj=1g Σλj=1ξj (RTλj(aj)(Zit)−dlj)   (15)

    • circumstance 3: the two particles both are feasible solutions; the particle with the lower money overhead is selected, with the fitness function being defined as follows:

F(Zit)=MCtotal(Zit)   (16)
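The three-circumstance comparison of equations (14)-(16) can be sketched as follows; the response times, deadlines, and money costs passed in are hypothetical placeholders:

```python
def is_feasible(rts, dls):
    """Eq. (14): feasible iff every execution meets its application deadline."""
    return all(r <= d for r, d in zip(rts, dls))

def fitness(rts, dls, money_cost):
    """Eq. (16) for feasible particles, eq. (15) for infeasible ones."""
    if is_feasible(rts, dls):
        return money_cost                              # (16)
    return sum(r - d for r, d in zip(rts, dls))        # (15)

def better(p1, p2):
    """Return the preferred particle; each particle is (rts, dls, cost)."""
    f1, f2 = is_feasible(p1[0], p1[1]), is_feasible(p2[0], p2[1])
    if f1 != f2:
        return p1 if f1 else p2                        # circumstance 1
    return p1 if fitness(*p1) <= fitness(*p2) else p2  # circumstances 2 and 3
```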


Further, an implementation flow of the MPSO-GA is as follows:

    • S1: initializing the related parameters of the MPSO-GA, including an initial swarm size δ, a maximum number of iterations itersmax, a maximum inertia weight ϵmax, a minimum inertia weight ϵmin, and the initial and final values σpstart, σgstart, σpend, and σgend of the acceleration coefficients; and then randomly generating an initial population;
    • S2: calculating fitness of each particle according to equations (14)-(16); and selecting the optimal solution of each particle itself, and selecting the particle with the optimal fitness as the globally optimal solution in the current generation;
    • S3: updating each particle according to the equation (13), and re-calculating the fitness of each particle;


S4: updating the individually optimal solution of each particle; and if there is a solution better than the original globally optimal particle, updating the globally optimal particle; and


S5: if an iteration stopping condition is satisfied, finishing the algorithm; and otherwise, returning to S3 for continuous iteration.


Compared with the prior art, the present invention has the following beneficial effects: provided is a method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in an edge environment, including the following steps: constructing an application compliant with a universal program structure supporting on-demand computing unloading; for the application compliant with the structure, extracting a program fragment flowchart through static code analysis; analyzing a program fragment flowchart and a context through an unloading solution generating algorithm based on a multi-task particle swarm optimization-genetic algorithm; and finally, performing, by each application, computing unloading by taking a service as a particle size according to the unloading solution to minimize the overhead under a circumstance that a deadline constraint of each application is satisfied. The method supports the computing unloading by different types of applications taking the service as the particle size. Compared with other classical methods, 2.11-17.51% of the system cost can be saved under the circumstance that the deadline constraint is satisfied.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic diagram of implementing a method in an embodiment of the present invention.



FIG. 2 is a schematic diagram of a design mode supporting on-demand computing unloading in the embodiment of the present invention.



FIGS. 3A and 3B show two types of services included in an application in the embodiment of the present invention.



FIGS. 4A and 4B are schematic diagrams of analyzing the application in the embodiment of the present invention.



FIG. 5 is a schematic diagram of a coding particle in an unloading solution in the embodiment of the present invention.



FIG. 6 is a schematic diagram of mutation operation in the embodiment of the present invention.



FIG. 7 is a schematic diagram of a crossover operation in the embodiment of the present invention.



FIG. 8 is a flowchart of implementing an MPSO-GA in the embodiment of the present invention.



FIG. 9 is a schematic diagram of an application execution condition in different scenarios in the embodiment of the present invention.



FIG. 10 is a schematic diagram of comparing response times of applications in the embodiment of the present invention.



FIG. 11 is a schematic diagram of an application scenario in the embodiment of the present invention.



FIGS. 12A-12C show system costs of different unloading strategies under different expiration date constraints in a simple network environment in the embodiment of the present invention.



FIG. 13 shows system costs of different unloading strategies under different expiration date constraints in a complicated network environment in the embodiment of the present invention.





DETAILED DESCRIPTION OF THE EMBODIMENTS

Further description of the present invention will be made below in combination with drawings and embodiments.


It is to be noted that the detailed description below is exemplary and is intended to further describe the application. Unless specified otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by those of ordinary skill in the art to which the application belongs.


It should be noted that the terms used herein are merely to describe specific implementation modes rather than being intended to limit the exemplary implementation modes according to the present application. As used herein, unless otherwise specified in the context, the singular form is further intended to include the plural form. In addition, it is to be further understood that when the terms “comprise” and/or “include” are used in the description, they indicate the presence of the stated features, steps, operations, apparatuses, assemblies and/or their combinations.


As shown in FIG. 1, the embodiment provides a method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in an edge environment, including the following steps:

    • 1. Constructing an application compliant with a universal program structure supporting on-demand computing unloading.
    • 2. For the application compliant with the universal program structure supporting on-demand computing unloading, extracting a program fragment flowchart through static code analysis to provide internal flow information of the application for generation of an unloading solution.
    • 3. Analyzing a peripheral mobile edge computing (MEC) environment and the application through an unloading solution generating algorithm based on a multi-task particle swarm optimization-genetic algorithm (MPSO-GA) to obtain the optimal unloading solution.
    • 4. Performing, by the application, computing unloading by taking a service as a particle size according to the unloading solution to minimize a system overhead under a circumstance that a deadline constraint of each application is satisfied.


1. Overview of the Method


FIG. 1 shows an implementation principle of the method. For the nodes in FIG. 1, we use circles to represent different types of operating data, and squares to represent algorithms. The method mainly includes two portions: an upper half portion represents an unloading enabling mechanism and a lower half portion represents the unloading solution generating algorithm.


The unloading enabling mechanism is the key component through which the application can perform computing unloading. The method designs a universal design mode supporting on-demand computing unloading and proposes a corresponding program structure. An application compliant with the program structure is capable of being decoupled into a plurality of services, and different services can be executed in a distributed manner at different computing nodes (including the IoT equipment, the cloud, and the edge). The application determines the specific computing nodes according to configuration files in the executive process, so as to achieve on-demand unloading of the program. Moreover, the application can obtain the program fragment flowchart through static code analysis to support the generation of a subsequent unloading strategy.


The unloading solution generating algorithm analyzes the peripheral MEC environment and the application to obtain the optimal unloading solution. The method formally defines the application unloading problem to be optimized by establishing a system model, an application model, and a performance model. An unloading solution for services and databases is generated through the MPSO-GA, so that each application minimizes the overall money overhead under the circumstance that its deadline constraint is satisfied.


2. Unloading Mechanism
2.1 Design Mode

As a conventional single application cannot be split, the application can only be executed as a whole on the IoT equipment or a remote server. To achieve on-demand computing unloading of the application, the method provides a design mode where different types of applications are independently deployed and executed in modules, so as to design a universal program structure supporting on-demand computing unloading. The design mode satisfies the following properties:

    • (1) Independence: each module of the application is capable of being independently deployed and executed at different computing nodes.
    • (2) Validity: the module is capable of determining the computing nodes where the invoked modules or databases are located to guarantee the invoking validity.
    • (3) Correctness: the invoked module is capable to receive the input of the invoking module and return a correct executive result.


The present invention provides an application design mode to satisfy the above attributes, as shown in FIG. 2. The mode regards the application as a primary service formed by a plurality of loosely coupled and cooperatively interconnected services. Each service does not affect other services in the executive process, ensuring that each service can be independently deployed and executed at the computing nodes. A service has to invoke other services or databases through a controller. The controller is responsible for confirming the deployment positions of the services or databases and supporting data transmission, so as to ensure that the services are executed correctly.

    • 2.2 Program Structure


In the method, provided is a program structure of an application to satisfy the design mode in FIG. 2. There are two types of services in the application compliant with the design mode: a pure service, using local data without invoking other services (shown in FIG. 3A); a compound service, relying on data from other services and required to invoke other services (shown in FIG. 3B). The service mainly includes the following attributes:

    • (1) Params/result (key-value type data): all external variables required by an invoked service in its execution process are imported through Params, i.e., no statement in the service accesses the external variables directly. When the service needs to modify the external variables, the external variables are exported by virtue of result. The external variables refer to variables accessed in the execution process of the service and defined outside the service. Therefore, each service is stateless, so that the independence of the service is ensured.
    • (2) seq (a character string type): a character string seq is maintained in the application, and an invoke path from a current service to a primary service is recorded. To maintain the seq, when the service is invoked, the seq is imported in the form of parameter, and a name of the invoked service is added into the seq.
    • (3) Invoke (function): the compound service needs to invoke other services through the Invoke function. The Invoke function achieves two functions: a) control forwarding: whether the service is locally executed or remotely invoked is determined, and if the service is remotely invoked, the computing node on which the service is invoked is determined; and b) data transmission: all parameters required by the invoked service are serialized, and the returned service executive result is deserialized. The pseudocode of the Invoke function is shown in Algorithm 1:












Algorithm 1: Invoke
Input: ServiceNamei, Paramsi, Seq
Variables: SP
Output: result
 1  procedure Invoke(ServiceNamei, Paramsi, Seq)
 2    Address = SP[Seq + ServiceNamei]
 3    if Address == Local then
 4      result = local(ServiceNamei, Paramsi, Seq)
 5      return result
 6    else
 7      result = remote(ServiceNamei, Paramsi, Seq)
 8      return result
 9    end if
10  end procedure










First, the invoking path from the service to be invoked to the primary service is determined through seq and ServiceNamei, and the address where the service is executed is determined from the configuration file SP according to the invoking path; SP stores the deployment addresses of different services (the 2nd line). If the address is equal to Local, a local function directly executes the service locally and returns the result; it is to be noted that Local here represents the computing node from which the service is invoked (the 3rd-6th lines). If the address is not equal to Local, the ServiceNamei, Paramsi, and seq are sent to an agent program at the remote node through a remote function, and the remote agent invokes the service through a reflection mechanism and finally returns the result (the 7th-9th lines).
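Algorithm 1 can be sketched in Python as follows; the SP entries, the "/" path-joining convention, and the local/remote stand-ins (which only record the routing decision instead of serializing parameters and contacting an agent) are all hypothetical:

```python
# Hypothetical deployment configuration SP: invoke path -> deployment address.
SP = {"main/detect": "Local", "main/detect/grayscale": "edge1"}

def local(name, params, seq):
    # Stand-in for direct local execution of the service.
    return {"via": "local", "service": name}

def remote(name, params, seq):
    # A real implementation would serialize params, send them to the agent
    # program on the remote node, and deserialize the returned result.
    return {"via": "remote", "service": name}

def invoke(service_name, params, seq):
    address = SP[seq + "/" + service_name]    # line 2: look up the deployment
    if address == "Local":                    # lines 3-5: execute locally
        return local(service_name, params, seq)
    return remote(service_name, params, seq)  # lines 7-8: forward remotely
```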

    • (4) Call (function): the service establishes a connection with the database through the Call function. The Call function determines the address of the database through the name of the database to be interacted with and returns a connecting channel for data transmission with the database, so as to ensure normal data interaction between the service and the database, as shown specifically in Algorithm 2:












Algorithm 2: Call
Input: seq, database
Variables: DP, Account
Output: connect
1  procedure call(database)
2    host = DP[database]
3    user = Account[host + database].name
4    password = Account[host + database].pass
5    connect = mysql.connect(host, user, password, database)
6    return connect
7  end procedure










First, the deployment address host of the database is determined from a configuration file DP, which stores the deployment addresses of the different databases keyed by DatabaseNamei (the 2nd line). Then the corresponding user name and password are acquired from Account, which records the credentials of each database, through host and database (the 3rd-4th lines); finally, a data transmission channel is established and returned (the 5th-6th lines).
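The resolution steps of algorithm 2 can be sketched in Python; the DP and Account contents are invented, and `mysql.connect` is replaced by a stub so the sketch is self-contained:

```python
# Illustrative sketch of the Call function's address resolution.
DP = {"orders": "10.0.0.5"}                                 # database placement
Account = {"10.0.0.5orders": {"name": "iot", "pass": "s3cret"}}

def mysql_connect(host, user, password, database):
    # Stand-in for mysql.connect: return a tuple describing the channel.
    return ("connection", host, user, database)

def call(database):
    host = DP[database]                                     # line 2: resolve deployment address
    user = Account[host + database]["name"]                 # lines 3-4: credentials
    password = Account[host + database]["pass"]
    return mysql_connect(host, user, password, database)    # lines 5-6: open the channel

print(call("orders"))
```

Because the host is looked up in DP at call time, moving a database to another node only requires updating DP, not the service code.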


In conclusion, “Paramsi” and “result” ensure the independence of the service. The controller formed by “seq”, “Invoke”, and “call” ensures the validity and correctness of the service invoking other services and databases in the executive process.


2.3 Application Analysis

To provide internal flow information of the program to generate the subsequent unloading solution, it is needed to analyze the program. In the present invention, the program fragment flowchart is extracted through static code analysis, with an implementation method as follows:

    • first, the content to be unloaded, i.e., the set of services and databases in the application, needs to be determined; the databases are uniquely identified by their names according to the stipulation in 2.2, and the services are uniquely identified by the invoking paths from the service to the primary service. Then, to allow an unloading solution to be evaluated, the method further divides each of the services into program fragments and records the execution sequence of the fragments. Finally, the method models the application as the program fragment flowchart. FIGS. 4A and 4B show an example of analyzing the application. For the lines in FIGS. 4A and 4B, solid lines represent that the service is split into program fragments, a broken line represents the execution process inside the service, and dotted lines represent the actual execution flow of the program fragments in the application.


The program fragment flowchart of the ith application is represented by the symbol Flowi=(Di, Si, Pi, Qi). Di={di1, di2, . . . , dil} represents the set of databases in the application, and dij.Name represents the name of the database dij. Si={si1, si2, . . . , sio} represents the set of services in the application, and sij.Name represents the name of the service sij. sij.callSeq represents the invoking path from the service sij to the primary service si1, and sij.Time represents the total number of times that the service sij is invoked. In the presence of service invoking, each service is segmented into a plurality of serial code fragments in the actual execution process, i.e., sij={sij,1, sij,2, . . . , sij,f}. Pi is a set which records the actual program fragment execution sequence, and the program fragments in the set can be repeated; the sequence in which the program fragments are added into the set is the actual execution sequence in the application. For two adjacent fragments, the method calls the previous fragment the precursor fragment of the next fragment. Qi represents a set which records correlations between the program fragments and the databases, where (sij,k, dih)∈Qi represents that there is an interaction between sij,k and the database dih in the execution process.


Each of the code fragments is defined as a triad sij,k=⟨αij,k, βij,k, γ(sij,k, dih)⟩, where αij,k represents the size of the data imported into the fragment sij,k; βij,k represents the task load of the fragment sij,k; and γ(sij,k, dih) represents the total size of the data exchanged between the fragment sij,k and the database dih in the execution process, set to 0 if there is no interaction. (As prediction of the specific information of the program fragments is not studied by the method, development personnel need to provide it in JSON format.)
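The flowchart model Flowi=(Di, Si, Pi, Qi) and the fragment triad can be held in simple data structures; the following Python sketch uses illustrative field names and invented values, not a prescribed representation:

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    alpha: float                  # size of data imported into the fragment
    beta: float                   # task load of the fragment
    gamma: dict = field(default_factory=dict)  # data exchanged per database

@dataclass
class Service:
    name: str
    call_seq: str                 # invoking path to the primary service
    time: int = 1                 # total number of invocations
    fragments: list = field(default_factory=list)

@dataclass
class Flow:
    D: list = field(default_factory=list)  # database names
    S: list = field(default_factory=list)  # services
    P: list = field(default_factory=list)  # actual fragment execution sequence
    Q: set = field(default_factory=set)    # (fragment id, database) interactions

main = Service("main", "main", fragments=[Fragment(1.0, 5.0, {"orders": 2.0})])
flow = Flow(D=["orders"], S=[main], P=[("main", 1)], Q={(("main", 1), "orders")})
print(len(flow.S), flow.S[0].fragments[0].gamma["orders"])
```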


The addresses of the Invoke function and the Call function in source codes are analyzed by a static code analysis technology to obtain a program fragment flowchart Flowi=(Di, Si, Pi, Qi) of the application i, with a pseudocode shown in an algorithm 3.












Algorithm 3: Extracting the program fragment flowchart from the ith program
Input: the source code of the ith program; the service name of the primary service
Output: a program fragment flowchart Flowi = (Di, Si, Pi, Qi)
 1: function GETFLOW(sia, Uia)
 2:   pId ← 1, sia ← sia + sia,pId, Pi ← Pi + sia,pId
 3:   for each uia,j ∈ Uia do
 4:     keywords ← GETKEYWORDS(uia,j)
 5:     if ∃ "invoke" ∈ keywords then
 6:       Name ← GETSERVICE(uia,j)
 7:       callSeq ← sia.callSeq + Name
 8:       if ∃ sib ∈ Si and sib.callSeq == callSeq then
 9:         sib.Time ← sib.Time + 1 (the existing sib is reused)
10:       else
11:         sib.Name ← Name, sib.callSeq ← callSeq
12:         sib.Time ← 1, Si ← Si + sib
13:       end if
14:       Uib ← GETUNITS(sib.Name)
15:       pId ← pId + 1, sia ← sia + sia,pId, Pi ← Pi + sia,pId
16:       GETFLOW(sib, Uib)
17:     end if
18:     if ∃ "call" ∈ keywords then
19:       Name ← GETDATABASE(uia,j)
20:       if ∃ dih ∈ Di and dih.Name == Name then
21:         the existing dih is reused
22:       else
23:         dih.Name ← Name, Di ← Di + dih
24:       end if
25:       if (sia,pId, dih) ∉ Qi then
26:         Qi ← Qi + (sia,pId, dih)
27:       end if
28:     end if
29:   end for
30: end function
31: si1.Name ← main( ), si1.callSeq ← si1.Name, si1.Time ← 1
32: Ui1 ← GETUNITS(si1.Name)
33: Si ← Si + si1, Di ← ϕ, Pi ← ϕ, Qi ← ϕ
34: GETFLOW(si1, Ui1)







By taking the source code of the ith application and the service name of the primary service as inputs, algorithm 3 constructs the corresponding program fragment flowchart through the following steps: first, the information of the primary service is assigned to si1, and the corresponding sentence set Ui1 is extracted through si1.Name, corresponding to the 31st-32nd lines; then the primary service si1 is added into the empty set Si, and Di, Pi, and Qi are set to be empty, corresponding to the 33rd line; and finally, the getFlow() function is invoked to construct the program fragment flowchart recursively, corresponding to the 34th line. The getFlow() function corresponds to the 1st-30th lines; sia and Uia are taken as the inputs of the function, respectively representing the currently analyzed service and the sentence set corresponding to the service. In the 2nd line, pId records the number of the current program fragment, is initialized as 1, and the current program fragment sia,pId is added into sia and Pi. Each sentence in the sentence set is processed as follows: the keywords in the sentence uia,j are acquired, corresponding to the 4th line. If the keywords include the keyword "invoke", whether the invoked service has appeared is judged according to the invoking path callSeq. If the invoked service has appeared, its invoking count is incremented by 1; if not, it is added into Si, and the corresponding sentence set is obtained through the service name, corresponding to the 5th-14th lines. In the 15th line, pId is updated, and the new current program fragment sia,pId is added into sia and Pi. In the 16th line, the service sib is recursed into through getFlow(). If the keywords in the sentence uia,j include the keyword "call", in a similar way, whether the database dih is already in the set Di is judged first, and if not, it is added. Then, whether (sia,pId, dih) is already in the set Qi is judged, and if not, it is added, corresponding to the 18th-29th lines.
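A toy Python sketch of the keyword scan behind algorithm 3; the SOURCES dictionary is a made-up stand-in for real static code analysis, and fragment splitting is reduced to one fragment per service for brevity:

```python
import re

# Hypothetical per-service statement lists extracted from source code.
SOURCES = {
    "main": ["x = Invoke('detect', img, seq)", "conn = Call('orders')"],
    "detect": ["conn = Call('faces')"],
}

def get_flow(name, call_seq, S, D, Q):
    # S: invocation count per invoking path; D: database names;
    # Q: (fragment, database) interactions.
    frag = (call_seq, 1)
    for stmt in SOURCES[name]:
        m = re.search(r"Invoke\('(\w+)'", stmt)
        if m:
            child = m.group(1)
            path = call_seq + "/" + child          # services keyed by invoking path
            S[path] = S.get(path, 0) + 1
            if S[path] == 1:
                get_flow(child, path, S, D, Q)     # recurse into the new service
        m = re.search(r"Call\('(\w+)'\)", stmt)
        if m:
            D.add(m.group(1))                      # databases keyed by name
            Q.add((frag, m.group(1)))
    return S, D, Q

S, D, Q = get_flow("main", "main", {"main": 1}, set(), set())
print(sorted(S), sorted(D))
```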


3 Unloading Solution Generating Algorithm

The unloading solution generating algorithm describes the application unloading problem in the MEC environment through the network model, the application model, and the performance model. The network model describes the network environment of the application; the application model defines the plurality of applications, compliant with the program structure proposed in the method, to be unloaded; and the performance model corresponds to the objective to be optimized (introduced respectively in 3.1, 3.2, and 3.3). Then, the problem to be solved by the method is described according to the above three models (section 3.4); and finally, the application unloading problem is solved through the MPSO-GA (section 3.5).


3.1 Network Model

The network environment includes a cloud server (CS), a plurality of edge servers (ES), and a plurality of IoT equipment (DS). The MEC environment is modeled as a graph G=(N,E), including a set N={n1, n2, . . . , nw} of the computing nodes and a set E representing the connections among them. The computing node is represented by ni=<Ψi, ωi, cicom>, where Ψi represents the computing capacity of the computing node ni, usually measured by the CPU; as the method mainly focuses on seeking the unloading solution, it is assumed that the storage capacity of each node can satisfy the corresponding demand. ωi represents the maximum number of tasks that can be executed by the node ni, and its value is equal to the number of cores of the node; cicom represents the computing cost of the computing node ni per second. ei,j represents the connection between ni and nj, which is characterized by the bandwidth vi,j and the minimum link delay rtti,j. ci,jtran represents the transmission overhead between the nodes ni and nj per second.
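A minimal sketch of this network model in Python, with invented capacities, costs, bandwidth, and delay values:

```python
from dataclasses import dataclass

# Node n_i = <Psi_i, omega_i, c_i^com>; all numbers are illustrative.
@dataclass
class Node:
    psi: float     # computing capacity (e.g. cycles per second)
    omega: int     # max concurrent tasks (core count)
    c_com: float   # computing cost per second

nodes = {"cloud": Node(3e9, 8, 0.02), "edge1": Node(1e9, 4, 0.008)}
# Link e_{i,j}: bandwidth v (MB/s) and minimum delay rtt (s).
links = {("cloud", "edge1"): {"v": 50.0, "rtt": 0.01}}

def transfer_time(size_mb, src, dst):
    # Transfer on the same node costs nothing; otherwise size/v + rtt.
    if src == dst:
        return 0.0
    link = links[(src, dst)] if (src, dst) in links else links[(dst, src)]
    return size_mb / link["v"] + link["rtt"]

print(transfer_time(5.0, "edge1", "cloud"))   # 5/50 + 0.01
```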


3.2 Application Modeling

It is assumed that the system will run for a time period θ during the unloading process. There are a plurality of applications A={a1, a2, . . . , ag} from the IoT equipment in the system, and each application ai is executed at a time interval of μi, so each application is executed ξi=θ/μi times within the time period θ; dli represents the deadline of the application ai. Each of the applications is compliant with the universal program structure, and is modeled as the program fragment flowchart Flowi=(Di, Si, Pi, Qi) according to 2.3.


3.3 Performance Modeling

To compute the response time and the money overhead of an application, the unloading solution of its services and databases must be specified. SPi={sp(si1), sp(si2), . . . , sp(sio)} represents the deployment addresses of the services in the application ai. For each program fragment sij,k, the execution address depends on the deployment address of the service sij, i.e., sp(sij,k)=sp(sij). DPi={dp(di1), dp(di2), . . . , dp(dil)} represents the deployment addresses of the databases in the application ai.


3.3.1 Response Time of the Application

In the λith execution of the application ai, ATλi(sij,k) is used to represent the time when the task executing the program fragment sij,k arrives at the computing node, and STλi(sij,k) represents the time when the fragment actually starts to be executed. WTλi(sij,k) represents the queuing time of the fragment on the node:

WTλi(sij,k) = STλi(sij,k) − ATλi(sij,k)   (1)







The data importing time when the precursor fragment six,y transfers the data to the program fragment sij,k is as follows:

IT(sij,k) = αij,k / vsp(six,y),sp(sij,k) + rttsp(six,y),sp(sij,k)   (2)







The time OT(sij,k) during which the program fragment sij,k occupies the node sp(sij,k) comprises the task execution time ET(sij,k) itself and the data transmission times DT(sij,k, dih) with the different databases during execution, computed as follows:

OT(sij,k) = ET(sij,k) + Σh=1l DT(sij,k, dih)   (3)

         = βij,k / ψsp(sij,k) + Σh=1l ( γ(sij,k, dih) / vdp(dih),sp(sij,k) + rttdp(dih),sp(sij,k) )   (4)







The finish time FTλi(sij,k) of sij,k is the sum of the start time and the occupied time on the node sp(sij,k), computed as follows:

FTλi(sij,k) = STλi(sij,k) + OT(sij,k)   (5)







The start time of the λith execution of the application ai is the task arrival time ATλi(si1,1) of the first program fragment si1,1 of the primary service, and the finish time is the time FTλi(si1,f) when the last fragment si1,f of the primary service finishes. The response time RTλi(ai) of the λith execution of the application ai is computed as follows:

RTλi(ai) = FTλi(si1,f) − ATλi(si1,1)   (6)
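The response-time bookkeeping of equations (1)-(6) can be illustrated with a tiny serial example: two fragments of the primary service run back to back with one cross-node data import, no queuing, and invented sizes, loads, bandwidth, and delay:

```python
# Node capacities (task load processed per second), link bandwidth and delay.
psi = {"edge": 10.0, "cloud": 40.0}
v, rtt = 50.0, 0.01

# Fragments in execution order: (alpha: imported MB, beta: load, node).
frags = [
    (0.0, 20.0, "edge"),
    (5.0, 40.0, "cloud"),
]

t, start = 0.0, 0.0
for alpha, beta, node in frags:
    if alpha > 0:
        t += alpha / v + rtt      # Eq. (2): data importing time
    st = t                        # no queuing here, so WT = ST - AT = 0 (Eq. (1))
    ot = beta / psi[node]         # Eq. (3)-(4) without database traffic
    t = st + ot                   # Eq. (5): FT = ST + OT

rt = t - start                    # Eq. (6): RT = FT(last) - AT(first)
print(round(rt, 3))
```

Here the first fragment takes 20/10 = 2 s on the edge, the import takes 5/50 + 0.01 = 0.11 s, and the second fragment takes 40/40 = 1 s on the cloud, so RT = 3.11 s.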







3.3.2 Money Overhead of the Application

When the program fragment sij,k is executed on the computing node sp(sij,k), the using overhead of the fragment occupying the node is computed as follows:

NC(sij,k) = OT(sij,k) · csp(sij,k)com   (7)


During the execution of the program fragment sij,k, the transmission overhead generated by importing data and interacting with the databases is computed as follows:

TC(sij,k) = IT(sij,k) · csp(six,y),sp(sij,k)tran + Σh=1l DT(sij,k, dih) · cdp(dih),sp(sij,k)tran   (8)


The money overhead MC(sij) generated by invoking the service sij once is the sum of the using overheads and the transmission overheads of all the program fragments included in the service, computed as follows:

MC(sij) = Σk=1f (NC(sij,k) + TC(sij,k))   (9)


The money overhead MC(ai) of executing the application ai once is the sum of the products between the money overhead of each service and the number of times the service is invoked, computed as follows:

MC(ai) = Σj=1o MC(sij) · sij.Time   (10)
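Equations (7)-(10) can be checked with a small worked example, using invented occupation times, import times, unit costs, and invocation counts for a single service with two fragments:

```python
# Unit costs per second for computing on the node and for transmission.
c_com, c_tran = 0.02, 0.005
ot = [2.0, 1.0]      # occupation time OT of each fragment (s)
it = [0.0, 0.11]     # data importing time IT of each fragment (s)
dt = [0.0, 0.5]      # database transmission time of each fragment (s)
times = 3            # s_ij.Time: how often the service is invoked

nc = [o * c_com for o in ot]                        # Eq. (7)
tc = [(i + d) * c_tran for i, d in zip(it, dt)]     # Eq. (8)
mc_service = sum(n + t for n, t in zip(nc, tc))     # Eq. (9)
mc_app = mc_service * times                         # Eq. (10), single service
print(round(mc_app, 5))
```

With these numbers, NC = [0.04, 0.02], TC = [0.0, 0.00305], so MC(sij) = 0.06305 and MC(ai) = 0.18915.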


3.4 Problem Expression

The optimization objective of the method is to obtain an unloading solution for the services and databases that minimizes the total money overhead under the condition that the response time of each application satisfies its time constraint within the time period θ. The formalized definition of the application unloading problem is as follows:









MCtotal = min Σi=1g MC(ai) · ξi
s.t. RTλi(ai) ≤ dli, ∀i = 1, . . . , g; ∀λi = 1, . . . , ξi   (11)







3.5 MPSO-GA

For the unloading solutions SP and DP, the aim is to explore the optimal mapping from the services and the databases to the different computing nodes, so as to minimize the total money overhead under the circumstance that the deadline constraint of each application is satisfied. Exploring the optimal mapping from the services and the databases to the computing nodes has been proved to be an NP-hard problem. The conventional PSO has been widely applied to solving continuous optimization problems, but the optimal mapping from the services and the databases to the computing nodes is a discrete problem which needs a novel encoding method. In addition, to avoid premature convergence of the conventional particle swarm optimization, an appropriate particle updating strategy needs to be introduced. To overcome the defects of the conventional PSO algorithm, the method provides an MPSO-GA to explore the optimal unloading strategy of multiple tasks in the MEC environment. The application unloading strategy based on the MPSO-GA is described below.


(1) Problem Encoding

A good coding strategy can improve the search efficiency and performance of the PSO algorithm, and the coding strategy of the algorithm satisfies the following three principles:


Definition 1 (integrity): each candidate solution in a problem space can be encoded into a corresponding coding particle.


Definition 2 (non-redundancy): each candidate solution in the problem space only corresponds to one coding particle.


Definition 3 (viability): each coding particle in a code space can correspond to the candidate solution in the problem space.


A usual coding strategy hardly satisfies the three principles at the same time. The method uses the particle Z to represent a candidate solution for cost driven unloading of all services and databases in the MEC environment, where the ith particle in the tth iteration is described as (12):






Zit = (zi,1t, zi,2t, . . . , zi,pt, zi,p+1t, . . . , zi,p+qt)   (12)

zi,kt = (nj)i,kt   (13)

where p and q respectively represent the total quantities of the services and databases in the whole system. In the equation (13), zi,kt (k=1, 2, . . . , p) represents, in the ith particle of the tth iteration, the kth service being deployed on the computing node nj, and zi,kt (k=p+1, p+2, . . . , p+q) represents the (k−p)th database being deployed on the computing node nj.
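Under this encoding, a particle is simply a length p+q vector of node indices; the following Python sketch (node names, p, and q are invented) shows how such a vector decodes into service and database placements:

```python
import random

random.seed(1)
nodes = ["device", "edge1", "edge2", "cloud"]
p, q = 3, 2                                   # services and databases in the system

def random_particle():
    # Each code bit maps one service (first p bits) or database (last q bits)
    # to a computing node index, per Eq. (12)-(13).
    return [random.randrange(len(nodes)) for _ in range(p + q)]

z = random_particle()
sp = {"s" + str(k + 1): nodes[z[k]] for k in range(p)}        # service placements
dp = {"d" + str(k + 1): nodes[z[p + k]] for k in range(q)}    # database placements
print(len(z), sp, dp)
```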


Property 1: the coding strategy can satisfy the integrity and the non-redundancy at the same time, but does not necessarily satisfy the viability.


Each code bit of the particle represents the deployment address of the corresponding service or database, thereby satisfying the principle of integrity. Different encoding particles represent different unloading solutions, and a feasible solution in the problem space corresponds to exactly one encoding particle in the code space, thereby satisfying the principle of non-redundancy. However, the candidate solutions corresponding to some particles may not satisfy the deadline constraint, i.e., the response times of some applications exceed the time limit, so the principle of viability cannot always be satisfied. In the method, all particles are divided into two categories, feasible particles and infeasible particles, defined below.


Definition 4 (feasible particle): in the unloading strategy corresponding to the particle, each application satisfies the deadline constraint.


Definition 5 (infeasible particle): in the unloading strategy corresponding to the particle, the execution of at least one application cannot satisfy the deadline constraint.


(2) Fitness Function

To compare different particles, they need to be measured with a fitness function; usually, particles with smaller fitness represent better candidate solutions. This work pursues an unloading solution which satisfies the deadline constraint of each application while minimizing the total money overhead. Therefore, particles with a lower money overhead can be regarded as better solutions. Because some solutions may make the response times of the applications exceed the stipulated deadline, the particles are compared in the following three circumstances:

    • circumstance 1: one particle is a feasible solution while the other is an infeasible solution. In this case, the feasible solution is selected, with the fitness function being defined as follows:










F(Zit) = { 0, if ∀j, ∀λj: RTλj(aj)|Zit ≤ dlj;  1, else }   (14)









    • circumstance 2: both particles are infeasible solutions. The particle with the smaller sum of deadline violations is selected, because it is closer to a feasible solution and may turn into a feasible solution in subsequent evolution, with the fitness function being defined as follows:













F(Zit) = Σj=1g Σλj=1ξj ( RTλj(aj)|Zit − dlj )   (15)









    • circumstance 3: both particles are feasible solutions. The particle with the lower money overhead is selected, with the fitness function being defined as follows:









F(Zit)=MCtotal(Zit)   (16)
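The three comparison circumstances can be sketched as a single selection function in Python; response times, deadlines, and costs below are invented:

```python
def violation(rts, dls):
    # Sum of (response time - deadline) over all executions, as in Eq. (15).
    return sum(rt - dl for rt, dl in zip(rts, dls))

def better(a, b, dls):
    # a, b: (response_times, total_cost) of two candidate solutions.
    fa = all(rt <= dl for rt, dl in zip(a[0], dls))
    fb = all(rt <= dl for rt, dl in zip(b[0], dls))
    if fa != fb:
        return a if fa else b                    # circumstance 1: feasible wins
    if not fa:                                   # circumstance 2: smaller violation
        return a if violation(a[0], dls) <= violation(b[0], dls) else b
    return a if a[1] <= b[1] else b              # circumstance 3: lower cost

dls = [1.0, 2.0]
x = ([0.9, 1.8], 5.0)     # feasible, cost 5
y = ([0.8, 1.5], 6.0)     # feasible, cost 6
print(better(x, y, dls))
```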


(3) Updating Strategy:

Conventional particle swarm optimization includes three major parts: inertia, individual cognition, and social cognition. The iterative update of each particle is affected by the individual optimal address and the globally optimal address of the current generation. Prematurely falling into a local optimum is a major defect of the particle swarm optimization algorithm. To improve the search ability of the algorithm, the method introduces the crossover operator and the mutation operator of the genetic algorithm for particle update. The iterative update of the ith particle in the (t+1)th iteration is shown in the equation (17):






Zit+1 = Fg(Fp(Fu(Zit, ϵ, ϕu), pBestit, σp, ϕp), gBestt, σg, ϕg)   (17)

where Fu() represents the mutation operation, Fp() and Fg() represent the crossover operations, ϵ is the inertia weight, σp and σg are the acceleration coefficients, and ϕu, ϕp, and ϕg are random numbers in the interval [0,1].


For the inertial portion, the velocity of the particle is defined as:










Xit+1 = Fu(Zit, ϵ, ϕu) = { Mu(Zit), if ϕu < ϵ;  Zit, else }   (18)







where Mu() represents the mutation operator, and ϵ represents the threshold at which the mutation happens. When the random number ϕu is less than ϵ, the mutation operation is executed: first, an address ind1 in the particle is selected randomly, and then a computing node is selected randomly to replace the computing node mapped onto ind1. FIG. 6 depicts the mutation operator: the address ind1 is randomly selected, and the mapped computing node is changed from 2 to 3.
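A Python sketch of this mutation operator (the particle contents and node count are illustrative):

```python
import random

def mutate(z, n_nodes, eps, rng):
    # Eq. (18): with probability eps, remap one randomly chosen code bit
    # to a randomly chosen different node; otherwise keep the particle.
    if rng.random() >= eps:
        return list(z)
    z = list(z)
    ind = rng.randrange(len(z))                          # pick address ind1
    choices = [n for n in range(n_nodes) if n != z[ind]]
    z[ind] = rng.choice(choices)                         # remap to another node
    return z

rng = random.Random(7)
print(mutate([2, 0, 1, 3, 2], 4, 1.0, rng))              # eps = 1.0 forces a mutation
```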


Property 2: the mutation operator can change a particle from feasible to infeasible, and vice versa.


The individual cognition and social cognition equations in the particle update are as follows:










Yit+1 = Fp(Xit+1, pBestit, σp, ϕp) = { Cp(Xit+1, pBestit), if ϕp < σp;  Xit+1, else }   (19)

Zit+1 = Fg(Yit+1, gBestt, σg, ϕg) = { Cp(Yit+1, gBestt), if ϕg < σg;  Yit+1, else }   (20)







where Cp() represents the crossover operation, pBestit represents the historical optimal particle of the ith individual in the tth iteration, gBestt represents the historical optimal particle among all the particles in the tth iteration, and σp (σg) represents the threshold of the individual (global) crossover. When the random number ϕp (ϕg) is less than the threshold σp (σg), the crossover operation is executed. FIG. 7 depicts an example of the crossover operation: first, the addresses ind1 and ind2 in the old particle are randomly selected, and then the content between them in the old particle is replaced by the content at the same addresses in pBestit (or gBestt) to finally obtain a new particle.
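One way to realize the two-point crossover described above in Python; the particles and thresholds are illustrative, and the guide particle plays the role of pBest or gBest:

```python
import random

def crossover(z, guide, sigma, rng):
    # Eq. (19)-(20): with probability sigma, replace the segment between two
    # random addresses ind1 <= ind2 by the guide's bits at the same addresses.
    if rng.random() >= sigma:
        return list(z)
    i, j = sorted(rng.sample(range(len(z)), 2))
    return z[:i] + guide[i:j + 1] + z[j + 1:]

rng = random.Random(3)
child = crossover([0, 0, 0, 0, 0], [1, 1, 1, 1, 1], 1.0, rng)
print(child)
```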


Property 3: a particle subjected to the crossover operator can be changed from infeasible to feasible, and vice versa.


(4) Mapping from the Particle to the Unloading Result


For one particle, the execution time of each application and the total money overhead of the corresponding unloading solution need to be computed. Therefore, the method provides an evaluation algorithm to map the particle to the unloading result. The idea of the evaluation algorithm is to compute the execution start time and the execution finish time of each program fragment by simulating the actual execution process of the program fragments, so as to compute the response time of each application and the total money overhead under the scheduling solution.


First, a variable curTime representing the current time is set and initialized as 0. The input service and database unloading solutions are respectively SP and DP. The symbol λi represents the number of times the application has currently been executed, with an initial value of 1. Three attributes ATλi(sij,k), FTλi(sij,k), and RETλi(sij,k) are defined for each program fragment sij,k, respectively representing the arrival time of the task executing the fragment, the finish time of the fragment, and the residual execution time of the fragment, initialized as follows:









ATλi(sij,k) = { 0, if j=1, k=1, and λi=1;  None, else }
FTλi(sij,k) = None
RETλi(sij,k) = OT(sij,k)   (21)







Then the response time and the money overhead of each execution of each application can be computed through the equations (6) and (10) in 3.3. Algorithm 4 introduces the major steps of the evaluation algorithm:












Algorithm 4: The Deployment Scheme Evaluation
Input: the deployment schemes DP and SP; the program fragment flowcharts Flowi = (Di, Si, Pi, Qi)
Output: the response time RTλi(ai) of each application; the total cost of the system MCtotal
 1: Initialize curTime ← 0, λi ← 1, ATλi(sij,k) ← (0 or None), FTλi(sij,k) ← None, RETλi(sij,k) ← OT(sij,k), slice ← ∞
 2: # Fill channels.
 3: for each ni ∈ N do
 4:   while ni.empty > 0 do
 5:     if ATλi(sij,k) ≤ curTime then
 6:       add sij,k into ni
 7:       ni.empty ← ni.empty − 1
 8:     end if
 9:   end while
10: end for
11: # Find the smallest timeslice.
12: for each ni ∈ N do
13:   for each sij,k in ni do
14:     if RETλi(sij,k) < slice then
15:       slice ← RETλi(sij,k)
16:     end if
17:   end for
18: end for
19: # Compute the remaining processing time of the program fragments.
20: for each ni ∈ N do
21:   for each sij,k in ni do
22:     RETλi(sij,k) ← RETλi(sij,k) − slice
23:     if RETλi(sij,k) = 0 then
24:       FTλi(sij,k) ← curTime
25:       ni.empty ← ni.empty + 1
26:     end if
27:     # Get the next program fragment to execute.
28:     if sij,k is the last fragment of Pi and λi < ξi then
29:       λi ← λi + 1, ATλi(si1,1) ← max((λi − 1)·μi, curTime)
30:     else if sij,k is not the last fragment of Pi then
31:       ATλi(six,y) ← curTime + IT(six,y)
32:     end if
33:   end for
34: end for
35: for each ai ∈ A do
36:   if λi < ξi then
37:     goto line 3
38:   end if
39: end for
40: Calculate RTλi(ai) and MC(ai) according to Eq. (6) and (10)







Algorithm 4, taking the unloading solution and the application flowcharts as inputs, finally obtains the execution time of each application and the total money overhead of the system, through the following specific steps:


S1 (the 2nd-10th lines): filling channels: each program fragment is added onto its computing node in sequence according to the unloading solution until there is no empty channel on the computing node (ni.empty represents the number of remaining channels of the node ni, with an initial value of ωi). A program fragment put into a channel must satisfy the condition that the task executing it has arrived at the computing node by the current time, i.e., ATλi(sij,k) ≤ curTime;




S2 (the 12th-18th lines): searching for the minimum timeslice: first, traversing the program fragments being executed on each computing node to find the shortest residual execution time RET_{λ_i}(s_i^{j,k}), and assigning it to a timeslice slice; and then adding the timeslice slice to the current time curTime to represent that a duration of slice has elapsed;


S3 (the 20th-34th lines): computing the residual time of each program fragment: subtracting the timeslice slice from the residual time RET_{λ_i}(s_i^{j,k}) of each program fragment being executed, representing that the fragment has run for the duration slice; when the residual time RET_{λ_i}(s_i^{j,k}) reaches 0, the program fragment has been executed completely, and its finish time FT_{λ_i}(s_i^{j,k}) is set to curTime; the fragment is then removed from the channel, and 1 is added to the idle channel number n_i.empty of the node; if it is not the last fragment, the next program fragment s_i^{x,y} to be executed is taken out of P_i, and its task arrival time AT_{λ_i}(s_i^{x,y}) is the sum of curTime and the data importing time IT(s_i^{x,y}); if the program fragment is the last fragment of P_i and the current execution count is smaller than the total execution count ξ_i, the application has been executed completely and enters the next execution, and the arrival time AT_{λ_i}(s_i^{1,1}) of the first program fragment executed next time is the maximum of the ideal triggering time and the current time; and

    • repeating the above steps until each application a_i has been executed ξ_i times, and finally calculating the response time RT_{λ_i}(a_i) and the money overhead MC(a_i) of each executed application through the equations (6) and (10).
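The timeslice-driven evaluation described in S1-S3 can be sketched for a single computing node as follows; the function below is an illustrative simplification (fragment dependencies, multiple nodes, and money overheads are omitted), not the patented Algorithm 4 itself.

```python
def simulate(fragments, channels):
    """fragments: list of (arrival_time, exec_time) pairs for one node.
    Returns the finish time of each fragment; `channels` parallel slots."""
    pending = sorted(range(len(fragments)), key=lambda i: fragments[i][0])
    running = {}                       # fragment index -> residual time (RET)
    finish = [0.0] * len(fragments)
    cur = 0.0                          # curTime
    while pending or running:
        # S1: fill idle channels with fragments that have already arrived
        while pending and len(running) < channels and fragments[pending[0]][0] <= cur:
            i = pending.pop(0)
            running[i] = fragments[i][1]
        if not running:                # nothing runnable: jump to next arrival
            cur = fragments[pending[0]][0]
            continue
        # S2: the minimum residual time becomes the timeslice
        slice_ = min(running.values())
        cur += slice_
        # S3: charge the timeslice; finished fragments free their channel
        for i in list(running):
            running[i] -= slice_
            if running[i] <= 1e-12:
                finish[i] = cur        # FT is set to curTime
                del running[i]
    return finish
```

For example, with two channels and fragments arriving at times 0, 0, and 1 with workloads 2, 3, and 1, the third fragment must wait until a channel frees at time 2.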


(5) Parameter Setting

The inertia weight ϵ greatly affects the search ability and the convergence of the PSO algorithm. The greater the value of ϵ is, the stronger the global search ability is; the smaller the value of ϵ is, the stronger the local search ability is. A typical inertia weight adjustment method is as follows:









ϵ = ϵmax − iterscur · (ϵmax − ϵmin) / itersmax   (22)







where ϵmax and ϵmin respectively represent the maximum and minimum values of the initialized ϵ, and iterscur and itersmax respectively represent the current number of iterations of the algorithm and the initialized maximum number of iterations.
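As a sketch, Eq. (22) translates directly into code; the default bounds 0.8 and 0.2 are the values used in the experiments of section 4.2.

```python
def inertia_weight(iters_cur, iters_max, eps_max=0.8, eps_min=0.2):
    """Classical linear decay of the inertia weight, Eq. (22)."""
    return eps_max - iters_cur * (eps_max - eps_min) / iters_max
```

The weight falls linearly from eps_max at the first iteration to eps_min at the last.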


In the classical update strategy, the change of ϵ is related only to the number of iterations, which cannot fit the nonlinear characteristic of minimizing the money overhead under multiple tasks well. Therefore, the method designs an adaptively adjusted discrete adjustment method, based on the quality of the current swarm particles, to adjust the inertia weight ϵ, shown as follows:









ϵ = ϵmax − (ϵmax − ϵmin) · exp( d(Zit−1) / (d(Zit−1) − 1.01) )   (23)

d(Zit−1) = div(Zit−1, gBestt−1) / (p + q) = (Σj=1p+q τj) / (p + q)   (24)







where d(Zit−1) represents the difference between the ith particle Zit−1 of the (t−1)th iteration and the globally optimal solution gBestt−1 of the (t−1)th iteration; τj is a statistical factor: when the value of τj is 1, it represents that the computing nodes mapped by Zit−1 and gBestt−1 on the jth code bit are the same, and otherwise the value is 0. Therefore, the search capability of the algorithm can be adaptively adjusted according to the difference between the current particle and the globally optimal particle.
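Eqs. (23)-(24) can be sketched as below; here a particle is a list of node indices (one per service/database code bit), and the matching fraction d plays the role of Σ τj / (p + q):

```python
import math

def adaptive_inertia(particle, g_best, eps_max=0.8, eps_min=0.2):
    """Adaptive inertia weight of Eqs. (23)-(24)."""
    # Eq. (24): fraction of code bits mapped to the same node as gBest
    d = sum(z == g for z, g in zip(particle, g_best)) / len(particle)
    # Eq. (23): particles far from gBest (d -> 0) get eps_min, while
    # particles near gBest (d -> 1) get almost eps_max
    return eps_max - (eps_max - eps_min) * math.exp(d / (d - 1.01))
```

A particle identical to gBest thus searches globally (weight near 0.8), while a completely different particle searches locally (weight 0.2).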


In addition, the two recognition factors σp and σg of the algorithm are set [24] by adopting a linear increase and decrease strategy similar to the equation (22). σpstar and σgstar respectively represent the initial values of the parameters σp and σg, and σpend and σgend respectively represent their final values.


(6) MPSO-GA Flow


FIG. 8 depicts a flowchart of the MPSO-GA, the implementation process of which includes the following steps:


S1: initializing the related parameters of the MPSO-GA, including an initial swarm size δ, a maximum number of iterations itersmax, a maximum inertia weight ϵmax, a minimum inertia weight ϵmin, and the initial values σpstar and σgstar and final values σpend and σgend of the acceleration coefficients; and then randomly generating an initial swarm;


S2: computing the fitness of each particle according to the equations (14)-(16), selecting the optimal solution of each particle itself, and selecting the particle with the optimal fitness as the globally optimal solution in the current generation;


S3: updating each particle according to the equation (13), and re-computing the fitness of each particle;


S4: updating the optimal individual solution of each particle; and if there is a solution better than the original globally optimal particle, updating the globally optimal particle; and


S5: if an iteration stopping condition is satisfied, finishing the algorithm; and otherwise, returning to S3 for continuous iteration.
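The S1-S5 flow above can be sketched as below; the per-particle update of equation (13) is not reproduced here, so a placeholder recombination step (labeled as such) stands in for the real crossover/mutation operators:

```python
import random

def mpso_ga(fitness, dim, n_nodes, swarm=20, iters_max=100):
    """Skeleton of the S1-S5 flow; `fitness` is minimized."""
    # S1: random initial swarm; each position maps one code bit to a node id
    Z = [[random.randrange(n_nodes) for _ in range(dim)] for _ in range(swarm)]
    p_best = [z[:] for z in Z]                 # per-particle optimal solutions
    g_best = min(Z, key=fitness)[:]            # S2: globally optimal solution
    for _ in range(iters_max):
        for i in range(swarm):
            # S3: placeholder update (the method itself uses Eq. (13))
            z = [random.choice((Z[i][j], p_best[i][j], g_best[j]))
                 for j in range(dim)]
            Z[i] = z
            # S4: refresh the personal and global bests
            if fitness(z) < fitness(p_best[i]):
                p_best[i] = z[:]
            if fitness(z) < fitness(g_best):
                g_best = z[:]
    return g_best                              # S5: stop after iters_max
```

The swarm size and iteration count default to small values for illustration; the experiments in section 4.2 use δ = 100 and itersmax = 1000.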


4 Evaluation of the Method


In this part, we evaluate the method from two aspects. On the one hand, the validity of the method (section 4.1) is verified by an experiment in an actual scenario. On the other hand, the superiority of the MPSO-GA in solving the multi-task and multi-server unloading problem (section 4.2) is verified through extensive simulation experiments.


4.1 Validity Verification

To verify the validity of the method, experiments are carried out in a real scenario according to the following research questions (RQ).


RQ1 How effective is the method in improving the performance of the application? (Section 4.1.2)


RQ2 Is the extra execution overhead generated by the method within an acceptable range? (Section 4.1.3)


4.1.1 Experiment Settings

(1) Application: we have implemented three real-world applications compliant with the program structure mentioned herein, including a license plate recognition application (hereinafter referred to as LPRA), a target detection application (hereinafter referred to as TDA), and a human face recognition application (hereinafter referred to as FRA). In addition, the LPRA is an Android application, the TDA and FRA are DNN-based applications, and the source codes can be found in our GitHub project 1.









TABLE 1
Contexts of the IoT equipment in different scenarios

       Scenario 1      Scenario 2      Scenario 3
E1     —               V = 1.0 MB/S    V = 1.5 MB/S
                       RTT = 70 ms     RTT = 40 ms
E2     —               —               V = 1.0 MB/S
                                       RTT = 70 ms
C1     V = 0.6 MB/S    V = 0.6 MB/S    V = 0.6 MB/S
       RTT = 100 ms    RTT = 100 ms    RTT = 100 ms


(2) Network environment: the network environment includes four computing nodes: one IoT device (D1) and three remote servers (E1, E2, and C1). As shown in Table 1, we have designed three network scenarios, where scenario 1 is only connected to C1, scenario 2 is connected to E1 and C1, and scenario 3 is connected to all the remote servers. Each element in Table 1 represents the data transmission rate v and the minimum link delay rtt, and a smaller rtt and a greater v represent a better communication capacity. We use the network simulation tool Dummynet to control the available bandwidth.
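For reference, a Dummynet shaping rule of the kind used here can be written with ipfw as below; the address 192.168.1.20 standing in for E1 is a hypothetical placeholder, and the 70 ms RTT is split into 35 ms per direction (a sketch, not the exact commands used in the experiments).

```shell
# Shape traffic to/from the simulated edge server E1 (scenario 2:
# v = 1.0 MB/s, RTT = 70 ms). Address and rule numbers are placeholders.
ipfw add 100 pipe 1 ip from any to 192.168.1.20
ipfw add 101 pipe 2 ip from 192.168.1.20 to any
ipfw pipe 1 config bw 1000KByte/s delay 35ms
ipfw pipe 2 config bw 1000KByte/s delay 35ms
```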


(3) Equipment: we use workstations to simulate the edge servers (E1 and E2) and the cloud server (C1), where E1 is equipped with a 2.5 GHz 2-core CPU and 4 GB of RAM, E2 is equipped with a 3.0 GHz 4-core CPU and 8 GB of RAM, and C1 is equipped with a 3.9 GHz 8-core CPU and 16 GB of RAM. In addition, we also use two low-performance devices as the IoT equipment (D1): a Huawei Honor MYA-AL10 (1.4 GHz, 4-core CPU, 2 GB RAM) and a Raspberry Pi 3 B+ (1.4 GHz, 4-core CPU, 1 GB RAM). The LPRA runs on the MYA-AL10; the TDA and FRA run on the Raspberry Pi 3 B+.


(4) To measure the validity of the method, we have discussed three different states of the application: (i) the original application (hereinafter referred to as OA), which cannot support computing unloading and is only executed on the IoT equipment; (ii) the reconstructed application locally executed on the IoT equipment (hereinafter referred to as RAL), which has been reconstructed from the OA to satisfy the program structure mentioned herein; and (iii) the re-configured application (hereinafter referred to as RAO), which is unloaded according to the unloading solution generated by the MPSO-GA and can use the peripheral servers. For rigor, we have repeated the process 20 times to avoid incidental errors.


4.1.2 RQ1: Performance Improvements Caused by the Method

To measure the performance improvements of the method, we have executed the reconstructed applications (including LPRA, TDA, and FRA) in three different scenarios according to the unloading solution obtained by the MPSO-GA, and the experimental results are shown in FIG. 9.


According to the results, compared with the OA, the response times of the applications after computing unloading are shortened effectively, where the LPRA shortens the time by 3.97-19.38%, the TDA by 29.3-46.27%, and the FRA by 23.48-44.64%. Executing a neural network model usually requires a huge computing capacity; therefore, unloading the computing-intensive tasks to servers with high performance may lead to excellent performance improvements. The LPRA is a k-means-based license plate recognition application. To balance the data transmission time and the execution time of the tasks in the LPRA, most tasks in the LPRA are more suitably executed locally, particularly in a poor network environment. Therefore, computing unloading brings non-significant performance improvements to the LPRA.


At the same time, in different network environments, the performance improvements of the application are different. In scenario 3, the peripheral network resources are more abundant and the communication capacity is high, so the performance improvements are obviously superior to those in the other conditions. In scenario 1, which is only connected to the cloud server, most tasks are still locally executed, so the performance can only be improved by 3.97-29.3%.


Generally speaking, the method can provide different scenarios and applications with performance improvements to a certain extent.


4.1.3 RQ2 The Extra Execution Overhead Generated by the Method

Compared with the original application, the structure mentioned herein generates extra execution overhead, which includes the following three major parts: (1) external variables or results need to be packaged before and after invoking the service; (2) the controller (the Invoke or Call function) needs to determine where the invoked service or database is executed; and (3) communication and response time are generated by the interaction between the controller and the service or database. To measure the extra overhead, the OA of each type of application is used as a baseline.


The difference between RAL and OA represents the extra overhead of the method. The difference between RAO and OA represents the performance improvement effect, where RAO takes the mean response time in different scenarios of RQ1. The experimental results are shown in FIG. 10.


Viewed from the experimental results, the TDA and LPRA generate more extra overhead in the execution process; in comparison, the overhead of the FRA can be ignored. An important reason for this result is that, compared with the TDA and LPRA, the FRA is of a simpler structure and includes fewer services and databases. The extra overhead for unloading is mainly generated when the services or databases are invoked: the more services or databases an application includes, the more extra overhead is needed.


In addition, by comparing the two broken lines in FIG. 10, we can see that, compared with the mean optimization time gained by computing unloading, the extra execution overhead can be ignored, particularly for the applications with more complicated tasks.


Generally speaking, the extra execution overhead generated by the method is within the acceptable range.


4.2 Superiority Verification

To verify the superiority of the MPSO-GA, extensive simulation experiments are carried out to answer the following research questions (RQ):


RQ3: Can the MPSO-GA obtain the optimal unloading solution in a simple network environment where different applications are independently executed?


RQ4: Can the MPSO-GA obtain the optimal unloading solution in a more complicated network environment where different applications compete for resources in the execution process?


It is worth noting that the parameters of the MPSO-GA in the simulation experiments are initialized as follows: δ=100, itersmax=1000, ϵmax=0.8, ϵmin=0.2, σpstar=0.9, σpend=0.2, σgstar=0.4, and σgend=0.9.


4.2.1 Experiment Settings

(1) Application: the same as in section 4.1, the application types include LPRA, TDA, and FRA. In the literature, it is usually assumed that the task execution time and the allocated computing resource quantity are in a linear relationship, and the relationship between the data transmission time and the network communication capacity is the same. Therefore, by collecting the execution times and the data transmission times on equipment of different performance, the computing workloads and the data transmission amounts of the different program fragments in each application are obtained in advance, and then they are computed by least-squares linear regression. In addition, each type of application needs to appoint a deadline constraint to check whether the unloading solution is feasible. The deadline limits are set below.





dl1={3.1, 3.4, 3.8, 4.2} dl2={4, 5.1, 6.3, 7.6} dl3={2.6, 3, 3.5, 4.2}


dl1, dl2, and dl3 are respectively the deadlines of license plate recognition, target detection, and human face recognition. The smaller the value is, the stricter the time constraint is.
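The least-squares step mentioned in (1) can be sketched as follows: assuming exec_time = workload / capacity, the workload β minimizing Σ(t − β/c)² has the closed form β = Σ(t/c) / Σ(1/c²). The function below is an illustrative reading of this fitting step, not the exact procedure of the method.

```python
def estimate_workload(samples):
    """samples: (cpu_capacity, measured_exec_time) pairs collected on
    equipment of different performance; returns the least-squares
    estimate of the workload beta under exec_time = beta / capacity."""
    num = sum(t / c for c, t in samples)          # sum of t/c
    den = sum(1.0 / (c * c) for c, _ in samples)  # sum of 1/c^2
    return num / den
```

For instance, measurements of 5.0 s on a 1.4 GHz core and 2.5 s on a 2.8 GHz core both imply a workload of 7 capacity-seconds.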









TABLE 2
Related configurations of the equipment

Server         Type  Computing  Core    Overhead  Application  Execution
                     capacity   number                         frequency
n1, n2, n3     0     1.4        1       0         LPRA         {90s, 60s, 30s}
n4, n5, n6     0     1.4        1       0         TDA          {30s, 20s, 10s}
n7, n8, n9     0     1.4        1       0         FRA          {60s, 40s, 20s}
n10, n11, n12  1     2.5        2       2         —            —
n13, n14, n15  2     3          4       3         —            —
n16            3     3.9        8       5         —            —



TABLE 3
Network connection condition

Server type  Bandwidth  Minimum link delay  Overhead
0↔1          1.5 MB/S    40 ms              1.2
0↔2          1.0 MB/S    70 ms              0.6
1↔2          3.0 MB/S    10 ms              0.4
0/1/2↔3      0.6 MB/S   100 ms              0.08


(2) Network environment: in this part, we have designed an application scenario, as shown in FIG. 11. The cloud edge environment in the scenario includes a total of 9 IoT devices (n1-n9), 6 edge servers (n10-n15), and 1 cloud server (n16). The configurations, overheads, and executed applications of the different types of equipment are shown in Table 2. Applications are only triggered for execution on the IoT equipment; the execution frequency in Table 2 represents that the different applications are executed automatically at set intervals. Each IoT device is connected to two adjacent edge servers, the adjacent edge servers are interconnected, and all the IoT devices and edge servers are connected to the cloud server. The specific network conditions and overheads among the different types of computing nodes are shown in Table 3.


(3) Comparison algorithms: to compare and evaluate the performance of the MPSO-GA in the MEC environment, the following three methods are introduced: (i) PSO: the conventional PSO cannot be directly used for the discrete problem herein, so the continuous position is converted into a discrete one by taking a remainder, and the fitness function is the same as that of the MPSO-GA; (ii) GA: the GA uses a binary problem coding method, the dimensionality thereof being equal to the number of the servers, and the fitness function thereof being the same as that of the MPSO-GA; and (iii) Ideal: the optimal solution found in the experimental process is kept to approach the real optimal solution as far as possible. In the experiments, the unloading results of the same configuration may be different; therefore, each experiment is repeated 30 times, and the mean value of the feasible solutions therein is taken to measure the system overhead.
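The remainder-based discretization used for the baseline PSO in (i) can be sketched as follows (an illustrative reading of "taking a remainder", not the exact mapping used in the experiments):

```python
def discretize(position, n_nodes):
    """Map a continuous PSO position to node indices in [0, n_nodes):
    round each coordinate, then take the remainder modulo the node count."""
    return [int(abs(round(x))) % n_nodes for x in position]
```

Every continuous particle thus yields a valid assignment of code bits to nodes, at the cost of a many-to-one, discontinuity-prone mapping, which motivates the discrete encoding of the MPSO-GA.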


4.2.2 RQ3: Independent Execution Condition of the Application in Different Deadline Constraints

To make sure that the applications do not compete for resources, the different types of applications are executed in different time periods. It is assumed that in the time period 0-180 s, the license plate recognition application is executed at n1, n2, and n3; in the time period 180-240 s, the target detection application is executed at n4, n5, and n6; and in the time period 240-360 s, the human face recognition application is executed at n7, n8, and n9. The experimental results of the different types of applications under the different deadline constraints respectively correspond to FIGS. 12A, 12B, and 12C. The positive axis in each figure represents the system overhead of the feasible solutions, and the negative axis represents the sum of the differences between the response times and the time constraints of the applications when there is no feasible solution. The broken line represents the rate at which the algorithm generates a feasible solution over 30 repeated experiments.


Viewed from the experimental results, with the MPSO-GA, PSO, and GA, the system overhead is reduced as the deadline widens. This is because the meta-heuristic-based strategy tends to assign more tasks to the servers with low overhead when the deadline is not strict. It can be seen from FIGS. 12A and 12B that when the deadline is the strictest, a feasible solution may not be found. The results further show that the system overheads of the different types of applications are different. The reason is that the applications include different services among which various data is transmitted. For example, the TDA includes more complicated services; to meet the deadline, most tasks have to be unloaded to the servers with high performance, so more computing overhead is needed.


Viewed from the saving of the system overhead, the deadline difference, or the proportion of feasible solutions, the effect obtained by the MPSO-GA is superior to that of the PSO and GA, because the MPSO-GA can adaptively adjust its search capability according to the current condition and perform iterative evolution from a global perspective. The PSO algorithm has the problems of poor local search capability and low search precision, making it difficult to find the optimal solution. The performance of the GA is greatly affected by the deadline constraint because its search range is local during each iteration. Generally speaking, compared with the PSO and GA, the MPSO-GA saves about 3.95%-11.56% and 2.11%-17.51% of the system overhead, respectively.


4.2.3 RQ4: Simultaneous Execution Condition of the Applications in Different Deadline Constraints


To research the decision-making effect of the MPSO-GA in the more complicated network environment, the applications on the IoT devices n1-n9 are triggered simultaneously at time 0 s and are executed for 360 s. The experimental results are shown in FIG. 13.


Viewed from the experimental results, the three algorithms cannot obtain a feasible solution under the two strictest time constraint conditions, and the PSO and GA have a feasible solution only under the loosest time constraint. Different from RQ3, the different applications compete for resources in the execution process, so the response times of some applications cannot satisfy the deadline; therefore, even under the second-strictest time constraint, there is no feasible solution. However, it can be found by comparison that the solutions generated by the MPSO-GA in these two conditions make the response time of the overall application closer to the time constraint.


At the same time, with the increase of the quantity of applications, the quantities of services and databases to be decided increase, and the size of the problem space increases exponentially; therefore, the difficulty for the algorithm to obtain an appropriate unloading solution is greatly increased. In spite of this, the MPSO-GA is still capable of obtaining a feasible solution under the third-strictest time constraint.


The method is capable of supporting the computing unloading of various applications in the MEC environment and multi-task, multi-server unloading decision making. The experimental results show that the method can significantly improve the performance of the applications in different scenarios, and the extra execution overhead is acceptable. Besides, the simulated comparison experiments verify that the method has more opportunities to obtain a feasible solution under stricter time constraint conditions; moreover, under looser time constraint conditions, the overhead of the obtained feasible solution is minimum.


The above are merely preferred embodiments of the present invention and are not a limitation to the present invention in other forms. Those skilled in the art may alter or modify equivalent embodiments with equivalent changes by means of the disclosed technical content. Any subtle modifications, equivalent changes, and modifications made on the embodiments in accordance with the technical substance of the present invention shall come within the scope of the technical scheme of the present invention without departing from the content of the technical scheme of the present invention.

Claims
  • 1. A method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in an edge environment, comprising the following steps: constructing an application compliant with a universal program structure supporting on-demand computing unloading;for the application compliant with the universal program structure supporting on-demand computing unloading, extracting a program fragment flowchart through static code analysis to provide internal flow information of the application for generation of an unloading solution;analyzing a peripheral mobile edge computing (MEC) environment and the application through an unloading solution generating algorithm based on a multi-task particle swarm optimization-genetic algorithm (MPSO-GA) to obtain the optimal unloading solution; andperforming, by the application, computing unloading by taking a service as a particle size according to the unloading solution to minimize a system overhead under a circumstance that a deadline constraint of each application is satisfied.
  • 2. The method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in the edge environment according to claim 1, wherein the universal program structure supporting on-demand computing unloading is designed through a design mode supporting independent deployment and execution of modules of the different types of applications; the design mode satisfies the following conditions: each module of the application is capable to be independently deployed and executed at different computing nodes; the module is capable to determine a computing node where a module or a database to be invoked is located, so as to guarantee invoking validity; the invoked module is capable to receive an input of an invoking module and return a correct executive result; there are two types of services in the application compliant with the design mode: a pure service, using local data without invoking other services; a compound service, relying on data from other services and required to invoke other services; the compound service mainly comprises the following attributes:(1) key-value type data Params/result: all external variables required by an invoked service in an execution process are imported through Params, i.e., there is no sentence accessing to the external variables in the service; when the service needs to modify the external variables, the external variables are exported by virtue of result; and the external variables refer to variables accessed in the execution process of the service and defined outside the service;(2) a character string seq: a character string seq is maintained in the application, and an invoke path from a current service to a primary service is recorded; and, to maintain the seq, when the service is invoked, the seq is imported in the form of parameter, and a name of the invoked service is added into the seq;(3) an Invoke function: the compound service invokes other services through the Invoke function; the Invoke function achieves 
two functions: a) the Invoke function controls forwarding: whether the service is locally executed or remotely invoked is determined, and if the service is remotely invoked, which computing node invoking the service is determined;and b) data transmission: all parameters required by the invoked service are serialized, and the returned service executive result is deserialized; and(4) a Call function: the service establishes a connection with the database through the Call function; the Call function determines an address of the database through a name of the database to be interacted and returns a connecting channel for data transmission with the database, so as to ensure normal data interaction between the service and the database.
  • 3. The method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in the edge environment according to claim 2, wherein an implementation method for the Invoke function comprises the following steps: first, determining an invoke path from a service to be invoked to the primary service through the seq and serviceName, and determining an address where the service is executed in a configuration file SP according to the invoke path, wherein the SP stores deployment addresses of different services, and if the address is equal to Local and the Local represents the computing node when the service is invoked, the service is locally executed directly through a local function, and a result is returned; and if the address is not equal to the Local, service, Parameter, and seq are sent through a remote function to an agent program on a remote node, and a remote agent invokes the service, through a reflection mechanism and finally returns a result.
  • 4. The method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in the edge environment according to claim 2, wherein an implementation method for the Call function comprises the following steps: first, determining a deployment address host of the database from a configuration file DP, wherein the DP stores the deployment addresses of different databases according to DatabaseNamei; then acquiring corresponding users and passwords from Accounts recording the database names and passwords through the host and database; and finally, establishing a data transmission channel, and returning the channel.
  • 5. The method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in the edge environment according to claim 2, wherein the method for extracting the program fragment flowchart through static code analysis comprises the following steps: first, determining an unloaded content, i.e., a quantity of the services and databases in the application, the databases being uniquely identified by names, and the services being uniquely identified by the invoking paths from the services to the primary service; then, evaluating the unloading solution, further dividing each of the services into program fragments and recording an execution sequence of the fragments; and finally, modeling the application as the program fragment flowchart;the program fragment flowchart of the ith application is represented by a symbol Flowi=(Di, Si, Pi, Qi); where Di{di1, di2, . . . , dil} represents a set of the databases in the application, and dij.Name is used to represent a name of the database dij; Si={si1, si2, . . . , sio} represents a set of the services in the application, and sij.Name is used to represent a name of the service sij, sij.callSeq represents an invoking path from the service sij to the primary service si1, and sij.Time represents the total number of times that the service sij is invoked; in the presence of service invoking, each of the services is segmented into a plurality of serial code fragments in an actual executive process, i.e., sij={sij,1, sij,2, . . . 
, sij,f}; Pi is a set recording a sequence in which the actual program fragments are executed, and the program fragments in the set can be repeated; the sequence in which the program fragments are added into the set is the actual execution sequence thereof in the application, and for two adjacent fragments, the previous fragment is called a precursor fragment of a next fragment; Qi represents a set recording correlations between the program fragments and the databases, where (sij,k, dih)∈Qi represents that there is an interaction between sij,k in the executive process and the database dih; each of the code fragments is defined as a triad sij,k=<αij,k, βij,k, γ(sij,k, dih)>, where αij,k represents a size of data imported into a fragment sij,k; βij,k represents a task load of a fragment sij,k; and γ(sij,k, dih) represents a total size of data of a fragment sij,k in the executive process and a database dih interacting with each other, and if there is no interaction, it is set to be 0; the addresses of the Invoke function and the Call function in source codes are analyzed by a static code analysis technology to obtain a program fragment flowchart Flowi=(Di, Si, Pi, Qi) of the application i; and by taking the source codes of the ith application and the service name of the primary service as inputs, constructing the corresponding program fragment flowchart comprises the following steps: first, assigning information of the primary service to si1, and executing a corresponding sentence set Ui1 through si1.Name; then, adding the primary service si1 into a null set Si, and setting Di, Pi, and Qi to be null; and finally, invoking a getFlow() function to construct the program fragment flowchart in a recursive fashion; wherein the getFlow() function takes sia and Uia as inputs of the function, respectively representing the currently analyzed service and the sentence set corresponding to the service; pId is used to record the number of the current program fragment, the initial
number is set to be 1, and the current program fragment sia,pId is added into sia and Pi; each sentence in the sentence set is operated as follows: acquiring a keyword in a sentence uia,j; if the keyword comprises a keyword “invoke”, judging whether the invoked service has appeared previously through the invoking path callSeq; if the invoked service has appeared previously, adding 1 to the invoking number of times, and if the invoked service has not appeared previously, adding Si, and obtaining the corresponding sentence set through the service name; updating the pId, and adding a currently new program fragment sia,pId into sia and Pi; performing getFlow() recursion on the service sib; if the keyword in the sentence uia,j comprises a keyword “call”, judging whether the database dik has been in the set Di already first in a similar way, and if the database has not been in the set, adding the database; and then judging whether (sia,pId, dih) is in the set Qi, if not, adding it into the set.
  • 6. The method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in the edge environment according to claim 1, wherein the unloading solution generating algorithm describes an application unloading problem in the MEC environment from the aspects of a network model, an application model, and a performance model, and solves the application unloading problem through the multi-task particle swarm optimization-genetic algorithm (MPSO-GA); a network environment comprises a cloud server CS, a plurality of edge servers ES, and a plurality of Internet of Things devices DS; the MEC environment is modeled as a graph G=(N,E) including a set N={n1, n2, . . . , nw} of the computing nodes, and a set E used to represent the connections thereamong; the computing nodes are represented by ni=<Ψi, ωi, cicom>, where Ψi represents computing power of the computing node ni; it is assumed that a storage capacity of each node is capable of satisfying the corresponding demand; ωi represents the maximum number of tasks capable of being executed by the node ni, wherein the value thereof is equal to the number of cores of the node; cicom represents a computing cost of using the computing node ni per second; ei,j is used to represent a connection between ni and nj, and is related to a bandwidth vi,j and the minimum link delay rtti,j, and ci,jtran is used to represent a transmission overhead per second between the nodes ni and nj; it is assumed that the system is to be executed for a time period θ in the unloading process; there are a plurality of applications A={a1, a2, . . . 
, ag} from the IoT equipment in the system, and each application ai is executed at a time interval of μi, so each application is executed ξi=θ/μi times within the time period θ, and dli is used to represent a deadline of the application ai; wherein each application is compliant with the universal program structure, and the application is modeled as the program fragment flowchart Flowi=(Di, Si, Pi, Qi); to compute a response time and a money overhead of the application, an unloading solution corresponding to the services and databases is constructed, where SPi={sp(si1), sp(si2), . . . , sp(sio)} represents the deployment addresses of the services in the application ai; for each program fragment sij,k, the executive address thereof is dependent on the deployment address of the service sij, i.e., sp(sij,k)=sp(sij); and DPi={dp(di1), dp(di2), . . . , dp(dil)} represents the deployment addresses of the databases in the application ai; in the λith executive process of the application ai,
  • 7. The method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in the edge environment according to claim 6, wherein a specific implementation method for the unloading solution generating algorithm based on the multi-task particle swarm optimization-genetic algorithm is as follows: a coding strategy of the unloading solution generating algorithm satisfies the following three principles: integrity: each candidate solution in a problem space can be encoded into a corresponding coding particle; non-redundancy: each candidate solution in the problem space corresponds to only one coding particle; and viability: each coding particle in a code space can correspond to a candidate solution in the problem space; the coding strategy uses a particle Z to represent a candidate solution for the cost-driven unloading of all services and databases in the MEC environment, wherein the ith particle in the tth iteration is described as (12): Zit=(zi,1t, zi,2t, . . . , zi,pt, zi,p+1t, . . . , zi,p+qt)   (12)zi,kt=(nj)i,kt   (13)where p and q respectively represent the total quantities of the services and databases in the whole system; in the equation (13), zi,kt (k=1, 2, . . . , p) represents that, in the ith particle at the tth iteration, the kth service is deployed on the computing node nj, and zi,kt (k=p+1, p+2, . . . 
, p+q) represents that, in the ith particle at the tth iteration, the (k−p)th database is deployed on the computing node nj; each code bit of the particle represents the deployment address of the corresponding service or database, satisfying the integrity principle; different coding particles respectively represent different unloading solutions, and a certain feasible solution in the problem space corresponds to only one coding particle in the code space, satisfying the non-redundancy principle; however, some candidate solutions corresponding to the particles may not satisfy the deadline constraint, so that the response times of some applications exceed the time limit, thereby not satisfying the viability principle; all particles are divided into two categories: feasible particles: in the unloading strategy corresponding to the particles, each application satisfies the deadline constraint; and infeasible particles: in the unloading strategy corresponding to the particles, the execution of at least one application cannot satisfy the deadline constraint; to compare different particles, a fitness function is adopted as the measure; the particles with smaller fitness represent better candidate solutions; because some solutions may make the response times of the applications exceed the stipulated deadline, the particles are compared in the following three circumstances: circumstance 1: one particle is a feasible solution while the other is an infeasible solution; in this case, the feasible solution is selected, with the fitness function thereof being defined as follows:
  • 8. The method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in the edge environment according to claim 7, wherein the step that the evaluation algorithm, by taking the unloading solution and the application flowcharts as the inputs, obtains the execution time of each application and the total money overhead of the system comprises the following specific steps: S1: filling channels: adding each program fragment onto the computing node in sequence according to the unloading solution until there is no idle channel on the computing node, where ni.empty is used to represent the number of remaining channels of the node ni, with the initial value thereof being ωi; and a program fragment put into a channel needs to satisfy the condition that the task to execute the program fragment has arrived at the computing node at the current time, i.e.,
  • 9. The method for supporting adaptive unloading of multi-Internet of Things (IoT) applications in the edge environment according to claim 7, wherein an implementation flow of the MPSO-GA comprises the following steps: S1: initializing related parameters of the MPSO-GA, including an initial swarm size δ, a maximum iteration number of times itersmax, a maximum inertia weight ϵmax, a minimum inertia weight ϵmin, and the initial values and finish values σpstart, σgstart, σpend, and σgend of the acceleration coefficients; and then randomly generating an initial population; S2: calculating the fitness of each particle according to equations (14)-(16); selecting the optimal solution of each particle itself, and selecting the particle with the optimal fitness as the globally optimal solution in the current generation; S3: updating each particle according to the equation (13), and re-calculating the fitness of each particle; S4: updating the optimal individual particle of each particle; and if there is a solution better than the original particle, updating the globally optimal particle; and S5: if an iteration stopping condition is satisfied, finishing the algorithm; and otherwise, returning to S3 for continuous iteration.
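The S1-S5 loop of claim 9 can be sketched as follows. This is a hedged illustration, not the patented MPSO-GA: the performance model is replaced by caller-supplied `cost` and `deadline_ok` stand-ins, the discrete GA-flavoured update is a plausible substitute for the update equation referenced in S3, and all parameter names and probabilities are assumptions.

```python
import random

# Hedged sketch of an MPSO-GA-style loop. A particle encodes the deployment
# node index of p services and q databases; infeasible particles (deadline
# violated) are penalized so feasible particles are always preferred.

def mpso_ga(p, q, n_nodes, cost, deadline_ok,
            swarm_size=20, iters_max=100, seed=0):
    rng = random.Random(seed)
    dim = p + q                      # one code bit per service and database

    def fitness(z):
        # Feasible particles always beat infeasible ones (large penalty).
        return cost(z) if deadline_ok(z) else cost(z) + 1e9

    # S1: randomly generate the initial population.
    swarm = [[rng.randrange(n_nodes) for _ in range(dim)]
             for _ in range(swarm_size)]
    # S2: personal bests and the global best of the initial generation.
    pbest = [z[:] for z in swarm]
    gbest = min(swarm, key=fitness)[:]

    for _ in range(iters_max):
        for i, z in enumerate(swarm):
            # S3: discrete GA-flavoured update -- each code bit is inherited
            # from the personal best, the global best, or mutated at random.
            for k in range(dim):
                r = rng.random()
                if r < 0.4:
                    z[k] = pbest[i][k]             # crossover with personal best
                elif r < 0.8:
                    z[k] = gbest[k]                # crossover with global best
                else:
                    z[k] = rng.randrange(n_nodes)  # mutation
            # S4: update the personal best and, if improved, the global best.
            if fitness(z) < fitness(pbest[i]):
                pbest[i] = z[:]
            if fitness(z) < fitness(gbest):
                gbest = z[:]
        # S5: a full implementation would also test a convergence criterion here.
    return gbest, fitness(gbest)
```

As a toy usage, taking `cost=sum` (lower-indexed nodes are cheaper) and a deadline check that always passes returns a particle of p+q node indices together with its fitness.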
Priority Claims (1)
Number Date Country Kind
202211475176.3 Nov 2022 CN national
CROSS REFERENCE TO THE RELATED APPLICATIONS

This application is the continuation application of International Application No. PCT/CN2023/097420, filed on May 31, 2023, which is based upon and claims priority to Chinese Patent Application No. 202211475176.3, filed on Nov. 23, 2022, the entire contents of which are incorporated herein by reference.

Continuations (1)
Number Date Country
Parent PCT/CN2023/097420 May 2023 WO
Child 18411071 US