This application claims priority from Korean Patent Application No. 10-2022-0055525 filed on May 4, 2022 and Korean Patent Application No. 10-2022-0133768 filed on Oct. 18, 2022 in the Korean Intellectual Property Office, and all the benefits accruing therefrom under 35 U.S.C. 119, the contents of which are herein incorporated by reference in their entirety.
The present disclosure relates to a graph embedding method and system.
Graph embedding refers to converting a graph into a vector or matrix in an embedding space. Recently, research has been vigorously conducted on ways to embed graphs using a neural network, and such a neural network is referred to as a graph neural network (GNN).
Different GNNs (e.g., GNNs having different structures or operating in different manners) produce different embedding vectors for the same graph. Thus, if embedding vectors can be integrated without loss of information (or loss of expressive power), more powerful embedding representations can be created.
In order to prevent loss of information during the integration of individual embedding vectors, a method of generating an integrated embedding vector by concatenating the individual embedding vectors together may be considered. However, this method clearly has a problem in that the dimension quantity of the integrated embedding vector, and thus the complexity of associated tasks (e.g., downstream tasks such as classification and regression), increases in proportion to the number of individual embedding vectors to be concatenated together to form the integrated embedding vector.
An aspect of an example embodiment of the present disclosure provides a graph embedding method capable of integrating various embedding representations of a graph without loss of information (or loss of expressive power) and a system performing the graph embedding method.
An aspect of an example embodiment of the present disclosure provides a graph embedding method capable of integrating various embedding representations of a graph without increasing the complexity of any associated tasks and a system performing the graph embedding method.
An aspect of an example embodiment of the present disclosure provides a graph embedding method capable of embedding a graph together with node information and topology information of the graph and a system performing the graph embedding method.
However, aspects of the present disclosure are not restricted to those set forth herein. The above and other aspects of the present disclosure will become more apparent to one of ordinary skill in the art to which the present disclosure pertains by referencing the detailed description of the present disclosure given below.
According to an aspect of an example embodiment of the present disclosure, there is provided a graph embedding method performed by at least one computing device. The graph embedding method includes acquiring a first embedding representation and a second embedding representation of a target graph, changing the second embedding representation by reflecting a specific value into the second embedding representation, and generating an integrated embedding representation by aggregating the first embedding representation and the changed second embedding representation.
In some embodiments, one of the first embedding representation and the second embedding representation may be generated by an embedding method that aggregates information of neighbor nodes that form the target graph, and the other one of the first embedding representation and the second embedding representation may be generated by an embedding method that reflects topology information of the target graph.
In some embodiments, the first embedding representation and the second embedding representation may be generated by embedding the target graph via different graph neural networks (GNNs).
In some embodiments, the specific value may be an irrational number.
In some embodiments, the specific value may be a value based on a learnable parameter, and the graph embedding method may further include predicting a label for a predefined task based on the integrated embedding representation, and updating a value of the learnable parameter based on a result of the predicting.
In some embodiments, the reflecting the specific value into the second embedding representation may be performed based on a multiplication operation, and the aggregating the first embedding representation and the changed second embedding representation may be performed based on an addition operation.
In some embodiments, the acquiring the first embedding representation and the second embedding representation may include acquiring a first embedding matrix and a second embedding matrix of the target graph, the first embedding matrix and the second embedding matrix having different sizes, and acquiring the first embedding representation and the second embedding representation by performing a resizing operation on at least one of the first embedding matrix and the second embedding matrix.
In some embodiments, the acquiring the first embedding representation and the second embedding representation may include acquiring the first embedding representation via a neighbor node information aggregation scheme-based GNN, and acquiring the second embedding representation by extracting topology information of the target graph using the first embedding representation.
In some embodiments, the generating the integrated embedding representation may include performing a pooling operation on the first embedding representation and the changed second embedding representation, and generating the integrated embedding representation by aggregating results of the pooling operation.
In some embodiments, the generating the integrated embedding representation may include acquiring a third embedding representation of the target graph, changing the third embedding representation by reflecting another specific value into the third embedding representation, and generating the integrated embedding representation by aggregating the first embedding representation, the changed second embedding representation, and the changed third embedding representation.
In some embodiments, the generating the integrated embedding representation may include acquiring a third embedding representation through a k-th embedding representation (k being a natural number of 3 or greater), changing the third through k-th embedding representations by reflecting another specific value into the third through k-th embedding representations, and generating the integrated embedding representation by aggregating the first embedding representation, the changed second embedding representation, and the changed third through k-th embedding representations.
According to an aspect of an example embodiment of the present disclosure, there is provided a graph embedding system. The graph embedding system includes at least one processor, and a memory configured to store program code executable by the at least one processor, the program code including: acquiring code configured to cause the at least one processor to acquire a first embedding representation and a second embedding representation of a target graph; changing code configured to cause the at least one processor to change the second embedding representation by reflecting a specific value into the second embedding representation; and generating code configured to cause the at least one processor to generate an integrated embedding representation by aggregating the first embedding representation and the changed second embedding representation.
According to an aspect of an example embodiment of the present disclosure, there is provided a non-transitory computer-readable recording medium storing program code executable by at least one processor, the program code including: acquiring code configured to cause the at least one processor to acquire a first embedding representation and a second embedding representation of a target graph; changing code configured to cause the at least one processor to change the second embedding representation by reflecting a specific value into the second embedding representation; and generating code configured to cause the at least one processor to generate an integrated embedding representation by aggregating the first embedding representation and the changed second embedding representation.
According to the aforementioned and other embodiments of the present disclosure, an integrated embedding representation of a target graph may be generated by reflecting a specific value into at least some of a variety of individual embedding representations of the target graph and then aggregating the individual embedding representations. In this case, as different embedding representations may be prevented from cancelling each other out during aggregation, various embedding representations may be integrated without loss of information (or expressive power). For example, the cancellation of different embedding representations may be effectively prevented by multiplying at least one of the different embedding representations by an irrational number during aggregation (e.g., addition).
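As a purely numerical illustration of this cancellation effect (the example vectors and the choice of √2 as the specific value are hypothetical and not taken from the disclosure):

```python
import numpy as np

# Two individual embedding vectors that happen to be exact opposites.
h1 = np.array([1.0, -2.0, 3.0])
h2 = np.array([-1.0, 2.0, -3.0])

print(h1 + h2)          # [0. 0. 0.]: plain addition cancels them, losing all information

c = np.sqrt(2.0)        # an irrational number used as the specific value
print(h1 + c * h2)      # a non-zero vector: the cancellation is prevented
```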
Also, the specific value may be deduced based on a learnable parameter. Thus, as learning for generating an integrated embedding representation proceeds, an optimal specific value capable of preventing loss of information (or expressive power) in individual embedding representations may be produced naturally and accurately.
Also, various embedding representations (e.g., embedding vectors) of a target graph may be aggregated by an addition operation. In this case, as the size (or the dimension quantity) of an integrated embedding representation does not increase regardless of an increase in the number of individual embedding representations to be aggregated, problems such as an increase in the complexity of any associated task (e.g., a downstream task such as classification or regression) may be easily addressed.
Also, even embedding representations (e.g., embedding matrices) having different sizes may be easily integrated by a resizing operation implemented based on a multilayer perceptron.
Also, a robust integrated embedding representation of a target graph may be generated by integrating a node information-based embedding representation and a topology information-based embedding representation, and by using the integrated embedding representation, the accuracy of various tasks associated with graphs (e.g., downstream tasks such as classification or regression) may be considerably improved.
Also, a second embedding representation may be generated by extracting topology information of a target graph from a first embedding representation output via a neighbor node information aggregation module. In this case, as the neighbor node information aggregation module functions as a type of shared neural network, an integrated graph neural network (GNN) capable of generating (or outputting) an integrated embedding representation may be easily established.
It should be noted that the effects of the present disclosure are not limited to those described above, and other effects of the present disclosure will be apparent from the following description.
The above and other aspects and features of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings, in which:
Hereinafter, example embodiments of the present disclosure will be described with reference to the attached drawings. Advantages and features of the present disclosure and methods of accomplishing the same may be understood more readily by reference to the following detailed description of example embodiments and the accompanying drawings. The present disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the disclosure to those skilled in the art, and the present disclosure will only be defined by the appended claims.
In adding reference numerals to the components of each drawing, it should be noted that the same reference numerals are assigned to the same components as much as possible even though they are shown in different drawings. In addition, in describing the present disclosure, when it is determined that the detailed description of the related well-known configuration or function may obscure the gist of the present disclosure, the detailed description thereof will be omitted.
Unless otherwise defined, all terms used in the present specification (including technical and scientific terms) have the meaning commonly understood by those skilled in the art. In addition, terms defined in commonly used dictionaries are not to be interpreted ideally or excessively unless they are clearly and specifically defined otherwise. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. In this specification, singular forms also include plural forms unless specifically stated otherwise.
In addition, in describing the components of this disclosure, terms such as first, second, A, B, (a), and (b) may be used. These terms are only for distinguishing one component from another, and the nature or order of the components is not limited by the terms. When a component is described as being “connected,” “coupled,” or “contacted” to another component, that component may be directly connected to or contacted with the other component, but it should be understood that still another component may also be “connected,” “coupled,” or “contacted” between the two components.
Embodiments of the present disclosure will be described with reference to the attached drawings.
Referring to
The first and second embedding representations 14 and 15 may refer to vector or matrix representations of the graph 13. Here, the term “matrix” may encompass the concept of a tensor, and the term “embedding representation” may also be referred to as an embedding vector, matrix, or code or a latent representation, depending on cases (or types).
The first embedding representation 14 may include at least some graph information different from the second embedding representation 15 or may be generated from a different GNN from the second embedding representation 15. For example, the first embedding representation 14 may be generated from the first GNN 11, and the second embedding representation 15 may be generated from the second GNN 12, which has a different expressive power from the first GNN 11. In this example, as illustrated in
The first and second embedding representations 14 and 15 may be the final outputs of the first and second GNNs 11 and 12 or may be intermediate products of the first and second GNNs 11 and 12 that result from inner processing performed in the first and second GNNs 11 and 12. The first and second GNNs 11 and 12 may refer to different models or different parts of the same model (e.g., an integrated model of
GNNs (or embedding representations) having different expressive powers may mean that the GNNs produce different embedding representations (e.g., embedding representations having different embedded information) for the same graph or classify the same graph into different graph classes. Specifically, it is assumed that, when the task of classifying first through fourth graphs is performed using embedding representations of the first and second GNNs 11 and 12, the first and second graphs, but not the third and fourth graphs, are distinguished from each other (i.e., classified into different graph classes) by the first GNN 11, which is, for example, a neighbor node information aggregation scheme-based GNN, whereas the third and fourth graphs, but not the first and second graphs, are distinguished from each other (i.e., classified into different graph classes) by the second GNN 12, which is, for example, a topology information extraction scheme-based GNN. In this case, the first and second GNNs may be understood as having different expressive powers.
Also, when the first GNN 11 (or the first embedding representation 14) is referred to as having a stronger expressive power than the second GNN 12 (or the second embedding representation 15), this may mean that the first GNN 11 (or the first embedding representation 14) contains more information than the second GNN 12 (or the second embedding representation 15) or that the set of graphs classified by the first GNN 11 is larger than that of the second GNN 12. Specifically, it is assumed that, when the task of classifying first through tenth graphs is performed using the embedding representations of the first and second GNNs 11 and 12, all ten graphs may be distinguished from one another using the embedding representations of the first GNN 11, but some of the ten graphs may not be able to be distinguished from one another using the embedding representations of the second GNN 12. In this case, the first GNN 11 may be understood as having a stronger expressive power than the second GNN 12 and as being a better model capable of more fully reflecting variations in an input graph when generating embedding representations.
The graph embedding system 10 may learn modules (e.g., GNNs or resizing modules) and parameters for generating the integrated embedding representation 16 by performing a predefined task (e.g., classification or regression). If a target task of the integrated embedding representation 16 is already determined, the graph embedding system 10 may perform learning using the target task.
The graph embedding system 10 may perform the target task by using the learned modules and parameters or may provide the integrated embedding representation 16 to a device so that the device may perform the target task.
It will be described later how the graph embedding system 10 generates the integrated embedding representation 16 with reference to
The graph embedding system 10 may be implemented as at least one computing device. For example, all the functions of the graph embedding system 10 may be implemented in a single computing device, different functions of the graph embedding system 10 may be implemented in different computing devices, or a particular function of the graph embedding system 10 may be implemented in two or more computing devices.
Here, the term “computing device” may encompass nearly all types of arbitrary devices equipped with a computing function, and an exemplary computing device will be described later with reference to
The operation of the graph embedding system 10 has been described so far with reference to
The structure and operation of a neighbor node information aggregation scheme-based GNN will hereinafter be described with reference to
Referring to
The first through n-th blocks 31-1 through 31-n may repeatedly aggregate information of neighbor nodes that form the graph 33. The first through n-th blocks 31-1 through 31-n may be configured as multilayer perceptrons (i.e., fully-connected layers), but the present disclosure is not limited thereto.
The pooling module 32 may generate (or output) the embedding representation 34 of an appropriate size by performing a pooling operation. The pooling operation is already well known in the art to which the present disclosure pertains, and thus, a detailed description thereof will be omitted. The pooling module 32 may also be referred to as a pooling layer, a readout layer, or a readout module.
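A minimal sketch of such a neighbor node information aggregation scheme is given below; the sum-based neighbor aggregation, the random weights, and the mean readout are assumptions made for the sketch rather than the specific architecture of the GNN 30.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_block(x, w1, w2):
    """A small multilayer perceptron: two fully-connected layers with a ReLU."""
    return np.maximum(x @ w1, 0.0) @ w2

def embed_graph(adj, feats, n_blocks=2, hidden=16):
    """adj: (v, v) adjacency matrix; feats: (v, d) node feature matrix."""
    h = feats
    for _ in range(n_blocks):
        agg = (adj + np.eye(adj.shape[0])) @ h       # aggregate neighbor (and own) information
        w1 = rng.normal(size=(h.shape[1], hidden))   # randomly initialized weights for the sketch
        w2 = rng.normal(size=(hidden, hidden))
        h = mlp_block(agg, w1, w2)
    return h.mean(axis=0)                            # pooling (readout) over nodes

adj = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
feats = rng.normal(size=(3, 4))
print(embed_graph(adj, feats).shape)                 # (16,)
```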
The embedding representation 34, which is generated by the GNN 30, may well reflect node information of the graph 33, but does not reflect topology information of the graph 33 (e.g., information on the general shape of the graph 33). Thus, if the embedding representation 34 is used to perform a task in which the topology information of the graph 33 plays an important role, the performance of the corresponding task may be degraded.
The GNN 30 will be described in further detail with reference to
Referring to
Specifically, the GNN 40 may generate an embedding matrix (e.g., a 3D matrix having a size of v*v*p) for the node tuple by repeatedly aggregating the feature matrix 43 via a plurality of first through n-th blocks 41-1 through 41-n, and may finally generate (or output) the embedding vector 45 for an input graph via a pooling module 42.
A GNN such as a provably powerful graph network (PPGN) may operate in a similar manner to the GNN 40 of
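The following rough sketch illustrates embedding node tuples (pairs) instead of individual nodes; the pairwise tensor construction, the single matrix-multiplication block, and the sum readout are assumptions loosely inspired by PPGN-style models, not the exact structure of the GNN 40.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_tensor(adj, feats):
    """Build a (v, v, p) tensor: adjacency in channel 0, node features on the diagonal."""
    v, d = feats.shape
    t = np.zeros((v, v, d + 1))
    t[..., 0] = adj
    t[np.arange(v), np.arange(v), 1:] = feats
    return t

def matmul_block(t, p_out):
    """Two per-channel linear maps followed by a matrix product over the node dimensions."""
    p_in = t.shape[-1]
    a = t @ rng.normal(size=(p_in, p_out))     # randomly initialized weights for the sketch
    b = t @ rng.normal(size=(p_in, p_out))
    return np.einsum('ikp,kjp->ijp', a, b)     # matrix multiplication per channel p

adj = np.array([[0.0, 1.0], [1.0, 0.0]])
feats = rng.normal(size=(2, 3))
pair_emb = matmul_block(pairwise_tensor(adj, feats), p_out=8)   # (v, v, 8) node-tuple embedding
graph_emb = pair_emb.sum(axis=(0, 1))                           # pooled embedding vector
print(graph_emb.shape)                                          # (8,)
```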
The structure and operation of a topology information extraction scheme-based GNN will hereinafter be described with reference to
Referring to
The GNN 50 may be configured to include a plurality of first through n-th blocks 51-1 through 51-n, a topology information extraction module 52, and a pooling module 53 and may further include other modules (not illustrated), in some embodiments.
The GNN 50, like the GNN 30, may repeatedly aggregate information of neighbor nodes that form the graph 54. The first through n-th blocks 51-1 through 51-n may be configured as multilayer perceptrons (i.e., fully-connected layers), but the present disclosure is not limited thereto.
The topology information extraction module 52 may extract the topology information of the graph 54. For example, the topology information extraction module 52 may extract the topology information of the graph 54 by calculating a persistence diagram. The persistence diagram and how to calculate the persistence diagram are already well known in the art to which the present disclosure pertains, and thus, detailed descriptions thereof will be omitted.
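For readers unfamiliar with it, a 0-dimensional persistence diagram of a graph whose nodes carry scalar filtration values can be computed with a union-find sweep, as in the simplified sketch below; the routine and the node-indexing convention are assumptions of the sketch, and the disclosure does not prescribe a particular algorithm.

```python
import numpy as np

def persistence_0d(values, edges):
    """values: one filtration value per node (nodes indexed 0..len(values)-1);
    edges: list of (u, v) node index pairs.
    Returns 0-dimensional (birth, death) pairs of connected components."""
    order = np.argsort(values)
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    pairs = []
    active = set()
    for u in order:                          # add nodes in increasing filtration order
        parent[u] = u
        active.add(u)
        for a, b in edges:                   # an edge appears once both endpoints exist
            if u in (a, b) and {a, b} <= active:
                ra, rb = find(a), find(b)
                if ra != rb:
                    # The younger component (larger birth value) dies at values[u].
                    young, old = (ra, rb) if values[ra] >= values[rb] else (rb, ra)
                    pairs.append((values[young], values[u]))
                    parent[young] = old
    for r in {find(x) for x in active}:      # surviving components never die
        pairs.append((values[r], np.inf))
    return pairs

vals = np.array([0.1, 0.4, 0.2, 0.9])
print(persistence_0d(vals, [(0, 1), (1, 2), (2, 3)]))
```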
The pooling module 53 may generate (or output) the embedding representation 55 of an appropriate size by performing a pooling operation.
The embedding representation 55, which is generated by the GNN 50, may well reflect the topology information of the graph 54, but does not reflect node information of the graph 54. Thus, if the embedding representation 55 is used to perform a task in which the node information of the graph 54 plays an important role, the performance of the corresponding task may be degraded.
The GNN 50 will hereinafter be described in further detail with reference to
Referring to
Specifically, the GNN 60 may generate an embedding matrix for each node by repeatedly aggregating the feature matrix 64 via a plurality of first through n-th blocks 61-1 through 61-n, and may finally generate (or output) the embedding vector 65 for an input graph via a pooling module 63.
A GNN such as graph filtration learning (GFL) may operate in a similar manner to the GNN 60 of
The GNNs that may be referenced in some embodiments of the present disclosure, i.e., the GNNs 30, 40, 50, and 60, have been described so far with reference to
A graph embedding method according to some embodiments of the present disclosure will hereinafter be described with reference to
The embodiment of
Referring to
As already mentioned above, the first and second embedding representations may be the final outputs of GNNs or may be intermediate products that result from inner processing performed in the GNNs. For example, the first and second embedding representations may be embedding vectors obtained by a pooling operation or may be embedding matrices (e.g., 2D or 3D matrices) obtained before a pooling operation. If the first and second embedding representations are embedding vectors obtained by a pooling operation, a pooling operation may not be performed, unlike as illustrated in
In S72, a specific value (e.g., a scalar value) may be reflected into the second embedding representation, and as a result, the second embedding representation may be changed. For example, referring to
A method to deduce (or generate) the specific value may vary.
In some embodiments, the specific value may be set in advance. For example, the specific value, which is a value based on a type of hyperparameter (e.g., E), may be set in advance by a user. For example, the specific value may be set to an irrational number because the cancelation of two embedding representations may be effectively prevented by multiplying one (or both) of the two embedding representations by an irrational number during aggregation (e.g., an addition operation).
Alternatively, in some embodiments, the specific value may be a value based on a learnable parameter. If the learnable parameter is E, the specific value may be the value of E itself or may be the sum of E and an irrational number. The graph embedding system 10 may predict the label of a predefined task (i.e., a task for learning) using an integrated embedding representation and may update the value of the learnable parameter based on the difference between the predicted label and a correct label (i.e., the exact label of the target graph). In this case, as learning for generating an integrated embedding representation proceeds, an optimal specific value capable of preventing loss of information (or expressive power) in individual embedding representations may be produced naturally and accurately.
In S73, an integrated embedding representation of the target graph may be generated by aggregating the first and second embedding representations. For example, referring to
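A minimal sketch of operations S72 and S73 follows; the value √2 is only one example of the specific value, which, as described above, may instead be derived from a learnable parameter.

```python
import numpy as np

def integrate(emb1, emb2, specific_value=np.sqrt(2.0)):
    """emb1, emb2: individual embedding vectors of the same target graph."""
    changed_emb2 = specific_value * emb2   # S72: reflect the specific value (multiplication)
    return emb1 + changed_emb2             # S73: aggregate (addition)

# Hypothetical embedding vectors obtained from two different GNNs.
h1 = np.array([0.5, -1.0, 2.0])
h2 = np.array([1.0, 0.0, -2.0])
z = integrate(h1, h2)
print(z, z.shape)   # dimensionality equals that of each individual embedding
```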
Although not specifically illustrated in
If the predefined task is a classification task, the prediction module 91 may be implemented as a neural network layer (e.g., a fully-connected layer) configured to be able to predict class labels, but the present disclosure is not limited thereto. That is, the structure of the prediction module 91 may vary.
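Purely as an illustration, such a prediction module for a classification task might be realized as a single fully-connected layer followed by a softmax; the weight shapes and values below are placeholders rather than the disclosed module.

```python
import numpy as np

def predict_classes(z, W, b):
    """z: integrated embedding vector; W: (dim, n_classes); b: (n_classes,)."""
    logits = z @ W + b
    exp = np.exp(logits - logits.max())        # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
z = rng.normal(size=8)                         # a hypothetical integrated embedding
probs = predict_classes(z, rng.normal(size=(8, 3)), np.zeros(3))
print(probs, probs.sum())                      # class probabilities summing to 1
```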
According to the graph embedding method of
A graph embedding method according to some embodiments of the present disclosure will hereinafter be described with reference to
The embodiment of
Specifically, the graph embedding system 10 may perform a resizing operation on the first and second embedding representations 103 and 104. For example, if the first and second embedding representations 103 and 104 are matrix-type representations, the graph embedding system 10 may adjust the size of the first and second embedding representations 103 and 104 by a matrix size-resizing operation.
Thereafter, the graph embedding system 10 may perform operations, such as a specific value reflection operation and a pooling operation, and may generate the integrated embedding representation 107. For example, the graph embedding system 10 may generate a vector-type integrated embedding representation 107 by performing an addition operation on the first and second embedding vectors 105 and 106, which are obtained by the pooling operation.
In some embodiments, a multilayer perceptron may be applied even to a case where the first and second embedding representations 103 and 104 have the same size. In this case, the multilayer perceptron may convert the first and second embedding representations 103 and 104 to an appropriate embedding space (e.g., a space of other embedding representations or a common embedding space) and may thus also be referred to as a conversion module or layer or as a projection module or layer.
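The resizing (or projection) of embedding representations of different sizes into a common embedding space may be sketched with simple linear layers, as below; the dimensions, the random weights, and the mean pooling are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def resize(emb, out_dim):
    """Project an embedding matrix of shape (..., in_dim) to (..., out_dim)."""
    w = rng.normal(size=(emb.shape[-1], out_dim))   # randomly initialized linear layer
    return emb @ w

# Hypothetical first and second embedding representations of different sizes.
emb1 = rng.normal(size=(5, 12))      # e.g., a v x p1 embedding matrix
emb2 = rng.normal(size=(5, 7))       # e.g., a v x p2 embedding matrix

common1 = resize(emb1, 16)           # projected into a common embedding space
common2 = resize(emb2, 16)
z = common1.mean(axis=0) + np.sqrt(2.0) * common2.mean(axis=0)   # pool, reflect, add
print(z.shape)                       # (16,)
```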
According to the graph embedding method of
A graph embedding method according to some embodiments of the present disclosure will hereinafter be described with reference to
The embodiment of
Specifically, the graph embedding system 10 may perform a resizing operation on the first through k-th embedding representations 112-1 through 112-k and may reflect a specific value 113 or 114 into all or some of the first through k-th embedding representations 112-1 through 112-k. The specific values 113 and 114 may be different values (e.g., different irrational numbers) or may be values based on different learnable parameters. For example, the specific value 113, which is reflected into the second embedding representation 112-2, may be a first irrational number, and the specific value 114, which is reflected into the k-th embedding representation 112-k, may be a second irrational number different from the first irrational number. Accordingly, the first through k-th embedding representations 112-1 through 112-k may be effectively prevented from canceling each other out during aggregation (e.g., addition).
Thereafter, the graph embedding system 10 may perform a pooling operation and may generate the integrated embedding representation 116 by aggregating first through k-th embedding representations 115-1 through 115-k obtained by the pooling operation. For example, the graph embedding system 10 may generate the integrated embedding representation 116 by performing an addition operation on the first through k-th embedding representations 115-1 through 115-k obtained by the pooling operation.
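Extending the two-representation case, the k pooled embeddings may be aggregated after scaling them by distinct specific values, as in the sketch below; using square roots of primes as distinct irrational scale factors is merely an assumption of this sketch.

```python
import numpy as np

def integrate_k(embeddings):
    """embeddings: list of pooled embedding vectors of equal dimension (k of them)."""
    # Distinct irrational scale factors for the second through k-th embeddings
    # (this sketch supports up to six embeddings).
    scales = [1.0] + [np.sqrt(p) for p in (2, 3, 5, 7, 11)]
    z = np.zeros_like(embeddings[0])
    for emb, c in zip(embeddings, scales):
        z += c * emb                 # reflect the specific value, then aggregate by addition
    return z

rng = np.random.default_rng(0)
embs = [rng.normal(size=8) for _ in range(4)]   # k = 4 individual embeddings
print(integrate_k(embs).shape)                  # (8,): dimensionality is unchanged
```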
A graph embedding method according to some embodiments of the present disclosure will hereinafter be described with reference to
The embodiment of
Specifically, the graph embedding system 10 may generate a first embedding representation 123 or 124 of the target graph 121 via the neighbor node information aggregation module 122. The first embedding representation 123 may be of a 3D matrix type, and the first embedding representation 124 may be of a 2D matrix type.
Thereafter, the graph embedding system 10 may extract topology information of the target graph 121 by analyzing the first embedding representation 123 or 124, and may generate a second embedding representation (not illustrated) by reflecting the extracted topology information.
Thereafter, the graph embedding system 10 may perform operations, such as a resizing operation, a specific value reflection operation, and a pooling operation, on the first embedding representation 123 or 124 and the second embedding representation, and may generate the integrated embedding representation 125 by aggregating the results of the operations. For example, the graph embedding system 10 may generate the integrated embedding representation 125, which is of a vector type.
The graph embedding method of
Referring to
Thereafter, the graph embedding system 10 may generate a second embedding representation 135 having topology information of the target graph 131 reflected thereinto by calculating a persistence diagram using the first embedding representation 133. For example, if the first embedding representation 133 is a 3D embedding matrix, the graph embedding system 10 may generate a 2D embedding matrix 134 by extracting diagonal elements of the 3D embedding matrix, and may generate the 2D embedding representation 135 by calculating a persistence diagram for the 2D embedding matrix 134. Here, the diagonal elements of the 3D embedding matrix are extracted because they are where node information is integrated. Obviously, the 2D embedding matrix 134 may be generated in various other manners.
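Extracting the diagonal elements of a v×v×p node-tuple embedding matrix to obtain a v×p node embedding matrix, such as the 2D embedding matrix 134, may be sketched as follows (the shapes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
emb3d = rng.normal(size=(6, 6, 10))            # a v x v x p node-tuple embedding matrix

v = emb3d.shape[0]
# The diagonal entries (i, i, :) are where the per-node information is integrated.
emb2d = emb3d[np.arange(v), np.arange(v), :]   # a v x p node embedding matrix
print(emb2d.shape)                             # (6, 10)
```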
Thereafter, the graph embedding system 10 may obtain embedding representations 137-1 and 137-2 by performing operations, such as a resizing operation, a specific value reflection operation, and a pooling operation, on the first and second embedding representations 133 and 135, and may generate an integrated embedding representation 138 by aggregating the embedding representations 137-1 and 137-2.
According to the graph embedding method of
Experimental results regarding the performance of the above-described graph embedding methods (hereinafter, the proposed methods) according to some embodiments of the present disclosure will hereinafter be described.
The inventors of the present disclosure conducted an experiment on the proposed methods to evaluate the accuracy of a graph classification task using the MUTAG, PTC, PROTEINS, and NCH datasets, which are bioinformatics datasets, considering that the higher the accuracy of such a task, the better the performance of the proposed methods. Specifically, the inventors of the present disclosure generated integrated embedding representations via integrated GNNs, such as that illustrated in
As is clear from Table 1, the proposed methods exhibit better performance than PPGNs regardless of the types of the datasets, presumably because integrated embedding representations generated by the proposed methods include not only node information but also topology information of the input graphs. This shows that the proposed methods may produce more robust embedding representations than neighbor node information aggregation scheme-based GNNs while causing little loss of information (or expressive power) during the generation of integrated embedding representations.
Also, the inventors of the present disclosure conducted an experiment on the proposed methods to evaluate the accuracy of a regression task using the Quantum Machines 9 (QM9) dataset, which is a quantum chemistry dataset. Specifically, the inventors of the present disclosure generated integrated embedding representations using integrated GNNs, such as that illustrated in
As is clear from Table 2, the proposed methods exhibit better performance than PPGNs, even for the regression task, and this shows that the proposed methods may generally improve the performance of various tasks associated with graphs.
An exemplary computing device that may implement the graph embedding system 10 will hereinafter be described with reference to
Referring to
The processor 141 may control the general operations of the other elements of the computing device 140. The processor 141 may be configured to include at least one of a central processing unit (CPU), a microprocessor unit (MPU), a microcontroller unit (MCU), a graphics processing unit (GPU), and another arbitrary processor that is already well known in the art to which the present disclosure pertains. The processor 141 may perform an operation for at least one application or program for executing operations and/or methods according to some embodiments of the present disclosure. The computing device 140 may include at least one processor 141.
The memory 142 may store various data, commands, and/or information. The memory 142 may load the computer program 146 from the storage 145 to execute the operations and/or methods according to some embodiments of the present disclosure. The memory 142 may be implemented as a volatile memory such as a random-access memory (RAM), but the present disclosure is not limited thereto.
The bus 143 may provide a communication function between the other elements of the computing device 140. The bus 143 may be implemented as an address bus, a data bus, a control bus, or the like.
The communication interface 144 may support wired/wireless Internet communication for the computing device 140. The communication interface 144 may also support various communication methods other than Internet communication. To this end, the communication interface 144 may be configured to include a communication module that is well known in the art to which the present disclosure pertains.
The storage 145 may non-transitorily store at least one computer program 146. The storage 145 may be configured to include a nonvolatile memory such as a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory, a hard disk, a removable disk, or another arbitrary computer-readable recording medium that is well known in the art to which the present disclosure pertains.
The computer program 146 may include one or more instructions that allow the processor 141 to perform the operations and/or methods according to some embodiments of the present disclosure, when loaded in the memory 142. That is, the processor 141 may perform the operations and/or methods according to some embodiments of the present disclosure by executing the loaded instructions.
For example, the computer program 146 may include one or more instructions for performing the operations of: acquiring first and second embedding representations of a target graph; changing the second embedding representation by reflecting a specific value into the second embedding representation; and generating an integrated embedding representation by aggregating the first embedding representation and the changed second embedding representation. In this example, the graph embedding system 10 may be implemented by the computing device 140.
In some embodiments, the computing device 140 may refer to a virtual machine implemented based on cloud technology. For example, the computing device 140 may be a virtual machine run on one or more physical servers included in a server farm. In this example, at least some of the processor 141, the memory 142, and the storage 145 may be virtual hardware, and the communication interface 144 may be implemented as a virtualized networking element such as a virtual switch.
The exemplary computing device 140 that may implement the graph embedding system 10 has been described so far with reference to
Embodiments of the present disclosure have been described above with reference to
The technical features of the present disclosure described so far may be embodied as computer-readable code on a computer-readable medium. The computer-readable medium may be, for example, a removable recording medium (a CD, a DVD, a Blu-ray disc, a USB storage device, or a removable hard disk) or a fixed recording medium (a ROM, a RAM, or a computer-equipped hard disk). The computer program recorded on the computer-readable medium may be transmitted to another computing device via a network such as the Internet and installed in the other computing device, thereby being usable in the other computing device.
Although operations are shown in a specific order in the drawings, this should not be understood as requiring that the operations be performed in the specific order shown or in sequential order, or that all of the operations be performed, to obtain desired results. In certain situations, multitasking and parallel processing may be advantageous. Likewise, the separation of various configurations in the above-described embodiments should not be understood as necessarily required, and it should be understood that the described program components and systems may generally be integrated together into a single software product or be packaged into multiple software products.
In concluding the detailed description, those skilled in the art will appreciate that many variations and modifications may be made to the example embodiments without substantially departing from the principles of the present disclosure. Therefore, the disclosed example embodiments of the disclosure are used in a generic and descriptive sense only and not for purposes of limitation.
Number | Date | Country | Kind |
---|---|---|---|
10-2022-0055525 | May 2022 | KR | national |
10-2022-0133768 | Oct 2022 | KR | national |