CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority of Chinese Patent Application No. 202410373303.1, filed on Mar. 29, 2024, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present invention relates to the field of watermark embedding, and in particular to a watermark embedding method based on service invocation data.
BACKGROUND
With the rapid development of information technology, it is imperative to enhance the protection and copyright management of digital content. Digital watermarking technology is therefore widely utilized to achieve effective traceability and copyright protection of digital content. Digital watermarking is a technical means of embedding specific information into digital content in a way that is not easily detected or destroyed. It can be utilized to identify the source of content, track unauthorized duplication and distribution, and provide functions such as tampering detection.
The embedding of watermarks in service invocation data also needs to consider the integrity, availability, and security of the data. During the process of embedding the watermark, it is essential to minimize any impact on the original data in order to avoid disrupting normal data processing and analysis. Meanwhile, the embedded watermark information must have a certain level of robustness to resist data tampering and attacks.
Therefore, it is urgent to develop a watermark embedding method suitable for service invocation data that can realize copyright traceability and protection while guaranteeing data integrity and availability.
SUMMARY OF THE INVENTION
The purpose of the present invention is to provide a watermark embedding method based on service invocation data.
For the purpose of achieving the above objectives, the present invention is implemented according to the following technical scheme:
The present invention comprises the following steps:
- Obtaining service invocation data, and preprocessing the invocation data;
- Obtaining the key data by screening the preprocessed invocation data based on relevant weights, and then adding timestamps to the key data to obtain enhanced data;
- Screening the enhanced data by contribution degree to obtain high-quality data, and then encoding the high-quality data to generate encoded data;
- Constructing a data watermark embedding model by employing the encoded data, and then inputting the service invocation data to be embedded into the data watermark embedding model to output the embedding results.
Further, the method for obtaining the key data by screening the preprocessed invocation data based on relevant weights includes: calculating the relevant weights of the invocation data:
- wherein, the maximum value of data E is represented by max(E), the minimum value of data E is represented by min(E), and the relevant weight of data E is represented by (E), the initial relevant weight of data E is represented by o(E), and the proportion of the category to which data E belongs in random sampling C is represented by (cs(C)), the proportion of data invoked by class b is represented by (b), the ath nearest neighbor data of class b invocation data is represented by ga, the data value of correlation E is represented by C[E], the sampling data value of the ath invocation data E of the nearest neighbor is represented by sa[E], the ath sampling data is represented by sa, the category to which data belongs in random sampling C is represented by cs(C), and the number of sampled data is represented by ; in addition, the nearest neighbor data is represented by g, the difference in correlation E between the invocation data C and the sampling data sa is represented by df(E, C, sa), and the difference in nearest neighbor ga between the invocation data C and the sampling data sa is represented by df(E, C, ga);
- Performing a descending sort on the invocation data according to the relevant weights, presetting a threshold for these weights, and subsequently screening the dependent sets based on this threshold;
- Mapping the position of the exploratory factor and the dependent set, and the expression is:
- wherein, the mapping function is represented by (·), the ath data related to the ith exploratory factor is represented by qi,a, the random number is represented by r, and the natural constant is represented by e; then, calculating the fitness value of the exploratory factor:
- wherein, the fitness is represented by R, the misclassification rate is represented by er, the number of data in the dependent set is represented by M, the importance of the misclassification rate is represented by α, the importance of the dependent subset is represented by ω, and the number of selected dependent subsets is represented by ML;
- Comparing the fitness of the exploratory factor, updating the global and local optimal solutions, and updating the position of exploratory factor, and the expression is:
- wherein, the velocity of the ith exploratory factor in the d-dimension is represented by θi,d, the position of the ith exploratory factor in the d-dimension is represented by qi,d, the inertia weight of the exploratory factor is represented by ψ, and the learning factors are represented by β1 and β2; in addition, the random constants are represented by r1 and r2, the global optimal position is represented by qsi,d, the individual optimal position is represented by bsi,d, and the updated position of the exploratory factor is represented by {acute over (θ)}i,d;
- Implementing an adaptive t-distribution perturbation strategy, iterating continuously until the maximum number of iterations is reached, and then, outputting the screened remaining data as the key data.
Further, the method for adding timestamps to the key data to obtain enhanced data includes:
Calculating the nearest point and the distance to the key data:
- wherein, the distance from the pth nearest neighbor point to the cth key data point is represented by c(p), the dimension is represented by d, the number of dimensions is represented by e, the cth sample in the dth dimension is represented by Qcd, and the pth nearest neighbor point in the dth dimension is represented by Qpd;
- Calculating the sum of distances from the nearest neighbors of the sample point to the key data:
- wherein, the number of the nearest neighbors is represented by , and the sum of the distances between the cth key data and the nearest neighbors is represented by ;
- Performing a descending sort according to the sum of distances, presetting the range of values for the neighborhood parameters, and distributing the neighborhood parameters equally to the neighborhood based on the sum of distances between the key data and the neighboring points, and the expression is:
∈ [min, max]
- wherein, the neighborhood parameter is represented by , the maximum value of the neighborhood parameter is represented by max, and the minimum value of the neighborhood parameter is represented by min; in addition, the sum of the distances between the first key data and the neighboring points is represented by , the control parameter is represented by ζ, and the maximum value of the sum of distances between the key data and the neighboring points is represented by ;
- Calculating the weights of local neighbors and the weights of the original local linear structure:
- wherein, the enhanced weight is represented by χw, the weight of the neighboring sequence structure between the key data Qc and the yth neighbor is represented by Ucy, the weight of the original local linear structure is represented by χL, and the 2-norm function is represented by ∥·∥2; in addition, the yth neighbor of the key data is represented by Ucy, the cth key data is represented by Qc, the minimum parameter value function is represented by argmin(·), and the attenuation coefficient between the cth key data and the yth neighbor is represented by ψcy;
- Calculating the importance weight:
ϕ=δ1χh+δ2χL
- wherein, the importance weight is represented by ϕ, the weight of the neighboring sequence structure is represented by χh, the sequence coefficient is represented by δ1, and the linear coefficient is represented by δ2;
- Taking the nearest neighbor points whose importance weights are greater than or equal to 0.372 as insertion points for timestamps, and then outputting the enhanced data after inserting the timestamps.
Further, the method for screening the enhanced data by contribution degree to obtain high-quality data includes:
- Calculating the distance between the enhanced data:
- wherein, the dissimilarity degree between the jth and the sth data is represented by ωjs, the distance between the jth and sth data is represented by ρjs, the conditional probability is represented by W, and the numerical distance between the jth and the sth data is represented by kjs;
- Calculating the cumulative contribution of the enhanced data:
- wherein, the cumulative contribution degree is represented by , the jth explained variance ratio is represented by ξj, the offset value between the jth and the sth data is represented by Fjs, the distance is represented by ρ, and the genetic factor is represented by υ;
- Outputting the enhanced data with a cumulative contribution greater than 1 as high-quality data;
Further, the method for encoding the high-quality data to generate encoded data includes:
- Calculating the upper limit of pairwise error probability:
- wherein, the channel matrix is represented by R, the precoding matrix is represented by K, the pairwise error probability is represented by H(·), the high-quality data is represented by A, the encoded data is represented by Á, the error rate is represented by η, and the norm function is represented by ∥·∥;
- Calculating the probability density of the channel:
- wherein, the probability density of channel R is represented by (R), the mean value of the channel is represented by Rτ, the transmission covariance is represented by Vt, the transmission inverse matrix is represented by V−1, the trace of the matrix is represented by tr[·], the number of channel vectors is represented by N, and the determinant is represented by dt(·); in addition, the number of the fth signal vectors is represented by Nf;
- Calculating the minimum objective function:
- wherein, the objective function is represented by Q, the minimum distance is represented by D, the inverse matrix of the transmission covariance is represented by Vt−1, the adjustment matrix of the channel is represented by B, and the adjustment parameter is represented by ϑ; in addition, the precoding matrix of channel R is represented by KR, the code distance of the optimal precoding matrix is represented by μo, the inverse matrix of the channel adjustment is represented by B−1, and the mean value of channel R is represented by RτR;
- Working out the constraint objective function, and the expression is:
L(B, λ)=tr(RτB−1RτR)−Nf log dt(B)+λ[tr(φ)−ϑ]
- wherein, the Lagrange coefficient is represented by λ, the constraint objective function of the channel adjustment matrix B and the Lagrange coefficient λ is represented by L(B, λ), and the precoding matrix is represented by φ;
- Calculating the optimal encoding matrix:
- wherein, the optimal encoding matrix is represented by Ko(Á), the right singular vector is represented by X, and the left singular vector is represented by Y; thus, the encoded data is output based on the optimal encoding matrix.
Further, the method for constructing a data watermark embedding model by employing the encoded data includes:
- The data watermark embedding model comprises a time series partitioning algorithm, a Hash algorithm, a Fourier transform algorithm, a genetic algorithm, and a machine learning algorithm;
- The time series partitioning algorithm divides the encoded data into training data and testing data in chronological order;
- The clustering screening algorithm divides the training data into multiple clusters and excludes data that is far from the clusters in order to obtain selected data;
- The Hash algorithm computes a fixed-length hash value by hashing the selected data, and then encrypts important data by utilizing the hash value to obtain the encrypted data;
- The Fourier transform algorithm performs a frequency-domain transformation on the encrypted data to obtain transformed data;
- The genetic algorithm finds the optimal embedding position among multiple candidate embedding positions through iterative optimization of the object to be embedded;
- The machine learning algorithm embeds the transformed data at the optimal embedding position.
Secondly, the embodiment of the present application provides an electronic device, comprising:
a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the method described in the first aspect.
Thirdly, the embodiment of the present application provides a computer-readable storage medium containing one or more programs that, when executed by an electronic device comprising multiple application programs, enable the electronic device to execute the method described in the first aspect.
The beneficial effects of the present invention include:
The present invention is a watermark embedding method based on service invocation data. Compared with existing technologies, the present invention has the following technical effects:
The present invention improves the accuracy of watermark embedding in service invocation data through steps including preprocessing, timestamp insertion, data screening, data encoding, and model construction, thereby optimizing the embedding process. In addition, it not only greatly saves resources and improves work efficiency, but also enables real-time encryption and encoding for embedding watermarks in service invocation data, which is crucial for the specified embedding method. It also possesses a certain universality, adapting to different standards and requirements for watermark embedding in service invocation data.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of the steps of a watermark embedding method based on service invocation data according to the present invention;
FIG. 2 is a structural schematic diagram of an electronic device described in the embodiments of this specification.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The present invention will be further described through specific embodiments, and the illustrative embodiments and explanations provided herein are utilized to explain the present invention without limiting the present invention.
The present invention provides a watermark embedding method based on service invocation data, comprising the following steps:
As shown in FIG. 1, this embodiment comprises the following steps:
- Obtaining service invocation data, and preprocessing the invocation data;
- In the practical evaluation, the requested URL was https://api.example.com/users/123, the HTTP method was GET, and the request headers were Content-Type: application/json and Authorization: Bearer token123; in addition, there was no request body, the invoked timestamp was 2023-04-01T10:00:00Z, the invoker IP address was 192.168.1.100, the invocation duration was 200 ms, and the invoked service interface name was getUserById; furthermore, the recorded information included whether there were network errors, whether the service was unavailable, and whether the parameters were correct; the running status indicated success, the processing flow was: validate request header → query the database → return user information, the execution result was that the user information was returned successfully, the response time was 150 ms, the throughput was 100 req/s, and the error rate was 0%;
- Obtaining the key data by screening the preprocessed invocation data based on relevant weights, and then adding timestamps to the key data to obtain enhanced data;
- In the practical evaluation, the key data included: the requested URL was https://api.example.com/users/123, the HTTP method was GET, and the request headers were Content-Type: application/json and Authorization: Bearer token123; in addition, there was no request body, the invoked timestamp was 2023-04-01T10:00:00Z, the invoker IP address was 192.168.1.100, and the invocation duration was 200 ms; furthermore, the recorded information included whether there were network errors, whether the service was unavailable, and whether the parameters were correct; the running status indicated success, the execution result was that the user information was returned successfully, the response time was 150 ms, the throughput was 100 req/s, and the error rate was 0%;
- The enhanced data included: the requested URL was https://api.example.com/users/123, the HTTP method was GET, and the request headers were Content-Type: application/json and Authorization: Bearer token123; in addition, there was no request body, the invoked timestamp was 2023-04-01T10:00:00Z, the invoked event timestamp was 2023-04-01T10:00:00Z, the invoker IP address was 192.168.1.100, and the invocation duration was 200 ms; furthermore, the recorded information included whether there were network errors, whether the service was unavailable, and whether the parameters were correct; the running status indicated success, the execution result was that the user information was returned successfully, the response time was 150 ms, the throughput was 100 req/s, and the error rate was 0%;
- Screening the enhanced data by contribution degree to obtain high-quality data, and then encoding the high-quality data to generate encoded data;
- In the practical evaluation, the high-quality data included: the requested URL was https://api.example.com/users/123, the request headers were Content-Type: application/json and Authorization: Bearer token123, and there was no request body; the invoked timestamp was 2023-04-01T10:00:00Z, the invoked event timestamp was 2023-04-01T10:00:00Z, and the invocation duration was 200 ms; in addition, the running status indicated success, the response time was 150 ms, the throughput was 100 req/s, and the error rate was 0%;
- The encoded data included: the requested URL was https://api.example.com/users/123, the request headers were Content-Type: application/json and Authorization: Bearer token123, and there was no request body; the invoked timestamp was 2023-04-01T10:00:00Z, the invoked event timestamp was 2023-04-01T10:00:00Z, the invocation duration was 200 ms, the running status indicated success, the response time was 150 ms, the throughput was 100 req/s, and the error rate was 0%; wherein, the corresponding codes were: 00000001, 00000010, 00000011, 0, 10111101000110000101, 101111010001100001010, 11001000, 1, 10010110, 1100100, 0, respectively;
- Constructing a data watermark embedding model by employing the encoded data, and then inputting the service invocation data to be embedded into the data watermark embedding model to output the embedding results (a schematic sketch of this flow follows).
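For orientation, the following minimal Python sketch wires these steps into a single pass. The callable names (preprocess, screen_key_data, insert_timestamps, select_high_quality, encode, build_model) are hypothetical placeholders for the procedures detailed below, not identifiers disclosed by this application.

def embed_watermark(records, records_to_embed, *, preprocess, screen_key_data,
                    insert_timestamps, select_high_quality, encode, build_model):
    # Step 1: obtain and preprocess the service invocation data.
    data = preprocess(records)
    # Step 2: screen key data by relevant weights, then insert timestamps.
    enhanced = insert_timestamps(screen_key_data(data))
    # Step 3: select high-quality data by contribution degree, then encode it.
    encoded = encode(select_high_quality(enhanced))
    # Step 4: build the watermark embedding model and output embedding results.
    return build_model(encoded).embed(records_to_embed)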
In this embodiment, the method for obtaining the key data through screening the preprocessed invocation data based on relevant weights includes:
- Calculating the relevant weights applied to the invocation data:
- wherein, the maximum value of data E is represented by max(E), the minimum value of data E is represented by min(E), and the relevant weight of data E is represented by (E), the initial relevant weight of data E is represented by o(E), and the proportion of the category to which data E belongs in random sampling C is represented by (cs(C)), the proportion of data invoked by class b is represented by (b), the ath nearest neighbor data of class b invocation data is represented by ga, the data value of correlation E is represented by C[E], the sampling data value of the ath invocation data E of the nearest neighbor is represented by sa[E], the ath sampling data is represented by sa, the category to which data belongs in random sampling C is represented by cs(C), and the number of sampled data is represented by ; in addition, the nearest neighbor data is represented by g, the difference in correlation E between the invocation data C and the sampling data sa is represented by df(E, C, sa), and the difference in nearest neighbor ga between the invocation data C and the sampling data sa is represented by df(E, C, ga);
- Performing a descending sort on the invocation data according to the relevant weights, presetting a threshold for these weights, and subsequently screening the dependent sets based on this threshold;
- Mapping the position of the exploratory factor and the dependent set, and the expression is:
- wherein, the mapping function is represented by (·), the ath data related to the ith exploratory factor is represented by qi,a, the random number is represented by r, and the natural constant is represented by e; then, calculating the fitness value of the exploratory factor:
- wherein, the fitness is represented by R, the misclassification rate is represented by er, the number of data in the dependent set is represented by M, the importance of the misclassification rate is represented by α, the importance of the dependent subset is represented by ω, and the number of selected dependent subsets is represented by ML;
- Comparing the fitness of the exploratory factor, updating the global and local optimal solutions, and updating the position of the exploratory factor, and the expression is:
- wherein, the velocity of the ith exploratory factor in the d-dimension is represented by θi,d, the position of the ith exploratory factor in the d-dimension is represented by qi,d, the inertia weight of the exploratory factor is represented by ψ, and the learning factors are represented by β1 and β2; in addition, the random constants are represented by r1 and r2, the global optimal position is represented by qsi,d, the individual optimal position is represented by bsi,d, and the updated position of the exploratory factor is represented by {acute over (θ)}i,d;
- Implementing an adaptive t-distribution perturbation strategy, iterating continuously until the maximum number of iterations is reached, and then outputting the screened remaining data as the key data (see the sketch following this list).
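The following Python sketch illustrates the position update and the adaptive t-distribution perturbation described above. It assumes the conventional particle-swarm form of the update and a perturbation whose degrees of freedom grow with the iteration count; both are assumptions of this illustration, since the exact expressions are not reproduced in the text.

import numpy as np

rng = np.random.default_rng(0)

def update_positions(q, theta, bs, qs, psi=0.7, beta1=1.5, beta2=1.5):
    # q: (n, d) positions of exploratory factors; theta: (n, d) velocities;
    # bs: (n, d) individual optimal positions; qs: (d,) global optimal position.
    # Mirrors theta' = psi*theta + beta1*r1*(bs - q) + beta2*r2*(qs - q), q' = q + theta'.
    r1, r2 = rng.random(q.shape), rng.random(q.shape)
    theta_new = psi * theta + beta1 * r1 * (bs - q) + beta2 * r2 * (qs - q)
    return q + theta_new, theta_new

def t_perturb(q, iteration):
    # Adaptive t-distribution perturbation (assumed form): degrees of freedom
    # equal to the iteration count, so the search is heavy-tailed early on and
    # close to Gaussian in later iterations.
    return q + q * rng.standard_t(df=max(iteration, 1), size=q.shape)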
In this embodiment, the method for adding timestamps to the key data to obtain enhanced data includes:
- Calculating the nearest point and the distance to the key data:
- wherein, the distance from the pth nearest neighbor point to the cth key data point is represented by c(p), the dimension is represented by d, the number of dimensions is represented by e, the cth sample in the dth dimension is represented by Qcd, and the pth nearest neighbor point in the dth dimension is represented by Qpd;
- Calculating the sum of distances from the nearest neighbors of the sample point to the key data:
- wherein, the number of the nearest neighbors is represented by , and the sum of the distances between the cth key data and the nearest neighbors is represented by ;
- Performing a descending sort according to the sum of distances, presetting the range of values for the neighborhood parameters, and distributing the neighborhood parameters equally to the neighborhood based on the sum of distances between the key data and the neighboring points, and the expression is:
- wherein, the neighborhood parameter is represented by , the maximum value of the neighborhood parameter is represented by max, and the minimum value of the neighborhood parameter is represented by min; in addition, the sum of the distances between the first key data and the neighboring points is represented by , the control parameter is represented by ζ, and the maximum value of the sum of distances between the key data and the neighboring points is represented by ;
- Calculating the weights of local neighbors and the weights of the original local linear structure:
- wherein, the enhanced weight is represented by χw, the weight of the neighboring sequence structure between the key data Qc and the yth neighbor is represented by Ucy, the weight of the original local linear structure is represented by χL, and the 2-norm function is represented by ∥·∥2; in addition, the yth neighbor of the key data is represented by Ucy, the cth key data is represented by Qc, the minimum parameter value function is represented by argmin(·), and the attenuation coefficient between the cth key data and the yth neighbor is represented by ψcy;
- Calculating the importance weight:
ϕ=δ1χh+δ2χL
- wherein, the importance weight is represented by ϕ, the weight of the neighboring sequence structure is represented by χh, the sequence coefficient is represented by δ1, and the linear coefficient is represented by δ2;
- Taking the nearest neighbor points whose importance weights are greater than or equal to 0.372 as insertion points for timestamps, and then outputting the enhanced data after inserting the timestamps (see the sketch below).
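A short Python sketch of the distance computation and the 0.372 thresholding follows; the values of δ1 and δ2 are illustrative assumptions, and χh and χL are taken as precomputed weight vectors.

import numpy as np

def distance_sums(Q, k):
    # Sum of Euclidean distances from each key data point Q[c] to its k
    # nearest neighbors, the quantity against which the neighborhood
    # parameters are allocated.
    dists = np.linalg.norm(Q[:, None, :] - Q[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)              # exclude each point itself
    return np.sort(dists, axis=1)[:, :k].sum(axis=1)

def timestamp_insertion_points(chi_h, chi_L, delta1=0.5, delta2=0.5):
    # phi = delta1*chi_h + delta2*chi_L; nearest neighbor points with
    # phi >= 0.372 become insertion points for timestamps.
    phi = delta1 * chi_h + delta2 * chi_L
    return np.nonzero(phi >= 0.372)[0]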
In this embodiment, the method for screening the enhanced data by contribution degree to obtain high-quality data includes:
- Calculating the distance between the enhanced data:
- wherein, the dissimilarity degree between the jth and the sth data is represented by ωjs, the distance between the jth and the sth data is represented by ρjs, the conditional probability is represented by W, and the numerical distance between the jth and the sth data is represented by kjs;
- Calculating the cumulative contribution of the enhanced data:
- wherein, the cumulative contribution degree is represented by , the jth explained variance ratio is represented by ξj, the offset value between the jth and the sth data is represented by Fjs, the distance is represented by ρ, and the genetic factor is represented by υ;
- Outputting the enhanced data with a cumulative contribution greater than 1 as high-quality data (see the sketch below).
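For the contribution-degree selection, the sketch below shows the standard explained-variance computation on which such a criterion can rest; it yields the component-wise cumulative contribution, while the per-datum threshold stated above remains specific to this method and is not reproduced here.

import numpy as np

def cumulative_contribution(X):
    # Explained-variance ratios xi_j of the centered enhanced data X and
    # their cumulative sum (the PCA-style notion of contribution degree).
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending
    ratios = eigvals / eigvals.sum()
    return np.cumsum(ratios)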
In this embodiment, the method for encoding the high-quality data to generate encoded data includes:
- Calculating the upper limit of pairwise error probability:
- wherein, the channel matrix is represented by R, the precoding matrix is represented by K, the pairwise error probability is represented by H(·), the high-quality data is represented by A, the encoded data is represented by Á, the error rate is represented by η, and the norm function is represented by ∥·∥;
- Calculating the probability density of the channel:
- wherein, the probability density of channel R is represented by (R), the mean value of the channel is represented by Rτ, the transmission covariance is represented by Vt, the transmission inverse matrix is represented by V−1, the trace of the matrix is represented by tr[·], the number of channel vectors is represented by N, and the determinant is represented by dt(·); in addition, the number of the fth signal vectors is represented by Nf;
- Calculating the minimum objective function:
- wherein, the objective function is represented by Q, the minimum distance is represented by D, the inverse matrix of the transmission covariance is represented by Vt−1, the adjustment matrix of the channel is represented by B, and the adjustment parameter is represented by ϑ; in addition, the precoding matrix of channel R is represented by KR, the code distance of the optimal precoding matrix is represented by μo, the inverse matrix of the channel adjustment is represented by B−1, and the mean value of channel R is represented by RτR;
- Working out the constraint objective function, and the expression is:
L(B, λ)=tr(RτB−1RτR)−Nf log dt(B)+λ[tr(φ)−ϑ]
- wherein, the Lagrange coefficient is represented by λ, the constraint objective function of the channel adjustment matrix B and the Lagrange coefficient λ is represented by L(B, λ), and the precoding matrix is represented by φ;
- Calculating the optimal encoding matrix:
- wherein, the optimal encoding matrix is represented by Ko(Á), the right singular vector is represented by X, and the left singular vector is represented by Y; thus, the encoded data is output based on the optimal encoding matrix (see the sketch below).
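Consistent with the left and right singular-vector notation above, the sketch below builds an encoding matrix from an SVD of the channel matrix; keeping only the strongest modes and omitting any power allocation are simplifying assumptions of this illustration.

import numpy as np

def optimal_encoding_matrix(R, n_streams):
    # R = Y @ diag(s) @ Xh, with Y the left and X the right singular vectors.
    Y, s, Xh = np.linalg.svd(R)
    return Xh.conj().T[:, :n_streams]    # columns of X for the strongest modes

def encode(A, K):
    # Apply the encoding matrix K to the high-quality data A (rows as vectors).
    return A @ K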
In this embodiment, the method for constructing a data watermark embedding model by employing the encoded data includes:
- The data watermark embedding model comprises a time series partitioning algorithm, a Hash algorithm, a Fourier transform algorithm, a genetic algorithm, and a machine learning algorithm;
- The time series partitioning algorithm divides the encoded data into training data and testing data in chronological order;
- The clustering screening algorithm divides the training data into multiple clusters, removes data that is far from the clusters, and obtains the selected data;
- The Hash algorithm computes a fixed-length hash value by hashing the selected data, and then encrypts important data by utilizing the hash value to obtain the encrypted data;
- The Fourier transform algorithm performs a frequency-domain transformation on the encrypted data to obtain transformed data;
- The genetic algorithm finds the optimal embedding position among multiple candidate embedding positions through iterative optimization of the object to be embedded;
- The machine learning algorithm embeds the transformed data at the optimal embedding position (see the sketch below).
FIG. 2 is a structural schematic diagram of an electronic device utilized in the embodiment of the present application. Referring to FIG. 2, at the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include internal memory, such as high-speed Random-Access Memory (RAM), and may also include non-volatile memory, such as at least one disk storage. Certainly, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be interconnected through an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, or an EISA (Extended Industry Standard Architecture) bus. In addition, the bus may be divided into an address bus, a data bus, a control bus, etc. To facilitate representation, only one bidirectional arrow is utilized in FIG. 2, but this does not indicate that there is only one bus or one type of bus.
The memory is utilized to store programs. More precisely, a program refers to program code, including computer operation instructions. The memory may include both internal memory and non-volatile memory, providing instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and then runs it, forming, at the logical level, a device for embedding watermarks based on service invocation data. The processor executes the programs stored in the memory, specifically for executing any of the aforesaid watermark embedding methods based on service invocation data.
The embodiment shown in FIG. 1 of the present application discloses a watermark embedding method based on service invocation data, which can be applied to a processor or implemented by a processor. A processor may be an integrated circuit chip with signal processing capabilities. During implementation, each step of the above method can be completed through hardware integrated logic circuits or software instructions in the processor. Such a processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components are also available. All of them can implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software modules may be located in mature storage media in this field, such as Random-Access Memory, flash memory, Read-Only Memory, Programmable Read-Only Memory, Electrically Erasable Programmable Read-Only Memory, registers, etc. Such a storage medium is located in the memory, and the processor accesses the information from the memory to complete the steps of the method with its hardware.
The electronic device can also execute the watermark embedding method based on service invocation data shown in FIG. 1 and implement the functions of the embodiment shown in FIG. 1; details are not repeated herein.
The present embodiment also proposes a computer-readable storage medium that stores one or more programs. The one or more programs include instructions that, when executed by an electronic device comprising multiple application programs, cause the electronic device to perform any of the aforesaid watermark embedding methods based on service invocation data.
The persons skilled in the art should understand that the embodiments of the present application can be provided in the form of methods, systems, or computer program products. Therefore, the present application supports embodiments implemented by means of full hardware, full software, or a combination of software and hardware. Moreover, the present application supports the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The present application is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of the application. It should be understood that each process and/or step in the flowcharts and/or block diagrams, as well as any combination of processes and/or steps therein, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing devices to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing devices create a device for implementing the functions specified in one or more processes in the flowchart and/or one or more steps in the block diagram.
These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing devices to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more processes in the flowchart and/or one or more steps in the block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data processing devices, so that a series of operational steps are executed on the computer or other programmable devices to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable devices provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more steps in the block diagram.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, Random-Access Memory (RAM), and/or non-volatile memory in computer-readable media, such as Read-Only Memory (ROM) or flash RAM. The memory is an example of computer-readable media.
Computer-readable media include permanent and non-permanent, removable and non-removable media, which can store information by means of any method or technology.
The information may be computer-readable instructions, data structures, program modules, or other data. Examples of storage media for computers include, but are not limited to, Phase Change Memory (PRAM), Static Random-Access Memory (SRAM), Dynamic Random-Access Memory (DRAM), other types of Random-Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technologies, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, magnetic cassette tapes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can store information accessible by computing devices. According to the definition herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carriers.
It should be noted that the terms “including”, “containing”, or any other variation thereof are intended to encompass non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase “including a . . .” does not exclude the existence of other identical elements in the process, method, article, or device comprising that element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention. Any modifications, equivalent substitutions, improvements, etc. made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.