The present invention relates generally to a system and a method for adaptively dividing a graph network into a plurality of subnetworks.
Graph neural networks can be trained for geographical mapping applications, for example, based on the locations of existing interconnected nodes (e.g., points of interest (POIs)) within a graph network (e.g., a map), to identify how the nodes are scattered across the network and to classify the nodes. This allows better route planning and better resource allocation for a given subnetwork, especially a region with a high POI concentration, and makes it possible to predict the subnetwork in which a new or unknown POI is located.
However, such a graph neural network may not be accurate for countries or cities with localized clusters of POIs due to geospatial concept drift, in which the statistical distribution of POIs varies from one region to another within a single country or city. To address geospatial concept drift, one method is to divide a large POI graph network into multiple subnetworks and assign a single set of parameters (e.g., a model) to be trained and optimized on each subnetwork. This can be done using graph partitioning algorithms. However, the objectives of these algorithms are to divide the graph based on present connectivity and structure, not necessarily to maximize model learning and parameter optimization. In addition, these algorithms do not take into account the user's different available choices of graph partitioning models and parameters, implying that certain graph models and parameters would fare much worse under the same partitioning algorithm.
There is thus a need to devise a novel method and system for adaptively dividing a graph network into a plurality of subnetworks to address the above issues, more particularly, to improve the applicability of graph partitioning algorithms to all types of graphs and choices of graph models and parameters.
Furthermore, other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and this background of the disclosure.
In a first aspect, the present disclosure provides a method for adaptively dividing a graph network, the graph network comprising a plurality of nodes, each of the plurality of nodes being connected to at least one other node of the plurality of nodes, the method comprising: for each pair of subnetworks from a plurality of subnetworks within the graph network, calculating an association score based on a first accuracy metric in predicting a first latent attribute of at least one first node of a first subnetwork of the each pair of subnetworks using parameters optimized for accurately predicting a second latent attribute of at least one second node of a second subnetwork of the each pair of subnetworks; and forming one of a plurality of new subnetworks within the graph network from each pair of a pair set of subnetworks from the plurality of subnetworks based on a result of determining a sum of the association scores of the pair set of subnetworks is higher than that of another pair set of subnetworks from the plurality of subnetworks.
In a second aspect, the present disclosure provides a system for adaptively dividing a graph network, the graph network comprising a plurality of nodes, each of the plurality of nodes being connected to at least one other node of the plurality of nodes, the system comprising: at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the system at least to: for each pair of subnetworks from a plurality of subnetworks within the graph network, calculate an association score based on a first accuracy metric in predicting a first latent attribute of at least one first node of a first subnetwork of the each pair of subnetworks using parameters optimized for accurately predicting a second latent attribute of at least one second node of a second subnetwork of the each pair of subnetworks; and form one of a plurality of new subnetworks within the graph network from each pair of a pair set of subnetworks from the plurality of subnetworks based on a result of determining a sum of the association scores of the pair set of subnetworks is higher than that of another pair set of subnetworks from the plurality of subnetworks.
Additional benefits and advantages of the disclosed embodiments will become apparent from the specification and drawings. The benefits and/or advantages may be individually obtained by the various embodiments and features of the specification and drawings, which need not all be provided in order to obtain one or more of such benefits and/or advantages.
Embodiments and implementations are provided by way of example only, and will be better understood and readily apparent to one of ordinary skill in the art from the following written description, read in conjunction with the drawings, in which:
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been depicted to scale. For example, the dimensions of some of the elements in the illustrations, block diagrams or flowcharts may be exaggerated in respect to other elements to help to improve understanding of the present embodiments.
The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any theory presented in the preceding background of the invention or the following detailed description. It is the intent of this disclosure to present a system and a method for adaptively dividing a graph network into a plurality of subnetworks.
Graph network—a graph network (also called a network diagram) is a type of data representation and visualization that shows interconnections between a set of data in the form of nodes. The connections between nodes are typically represented through links or edges to show the type of relationship and/or dependency between the nodes. It can be used to interpret the structure of a network by looking for any clustering of the nodes, by how densely the nodes are connected or by how the diagram layout is arranged.
Node—a node represents a data point and is connected to at least one other node through links or edges, thereby forming a graph network. A graph network of nodes can be directed/undirected and/or weighted/unweighted depending on the features of the data entries. In the context of routing a map, the graph network may be directed. In various embodiments below, a node may represent a road or the like on a map from which one or more points of interest on the map are accessible; a link or edge may represent a connection between two roads, for example when both roads are physically connected. Additionally or alternatively, a weighted graph network may be used to illustrate the weight or relevancy of a node with respect to other nodes, for example by taking into account a count or a frequency of each data point (node) in the data entries. In various embodiments below, the term “node” may be used interchangeably with “vertex”, and may be denoted using the letter V.
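As an illustrative sketch only (the road names and edge weights below are hypothetical, not part of the disclosure), a weighted, undirected graph network of roads may be represented as an adjacency mapping:

```python
# A small undirected, weighted graph network: each node (a road) maps to its
# neighbouring roads, and each edge weight reflects the relevancy of the
# connection (illustrative counts only).
graph = {
    "road_1": {"road_2": 3, "road_3": 1},
    "road_2": {"road_1": 3},
    "road_3": {"road_1": 1},
}

# Every node is connected to at least one other node, and each undirected
# edge is stored once per endpoint with the same weight.
assert graph["road_1"]["road_2"] == graph["road_2"]["road_1"]
```

A directed variant, as in the routing context above, would simply omit the reverse entries.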
Subnetwork—a subnetwork is a partition of a graph network comprising a cluster or group of nodes within the graph network. Conventionally, nodes with similar data are classified and grouped together under a same subnetwork and thereby dividing the graph network into multiple subnetworks. In an embodiment, each node may be exclusively grouped into only one subnetwork, therefore the subnetworks are mutually exclusive. In such embodiment, boundaries of between a subnetwork and another subnetwork(s) within the graph network can be identified from the links that connect the nodes of the subnetwork to the nodes of the other subnetwork(s). In the context of a graph network being a map, a subnetwork may refer to a submap, region or area of the map. In various embodiments below, the term “subnetwork” may be used interchangeably with “cluster” or “partition”.
Subnetwork pair—in various embodiments below, each subnetwork is paired with another subnetwork to form a subnetwork pair, and all subnetwork pairs within the graph network will form a pair set (combination). For example, four subnetworks can form two pairs but have three different pair sets. An optimal pair set may be determined and selected, for example, based on a sum of association scores of the pair set, and each subnetwork pair (e.g., subnetworks A and B) may be merged to form a single larger subnetwork (herein may be referred to as a “new subnetwork”) within the graph network (e.g., subnetwork A′). In various embodiments below, the term “pair set” may be used interchangeably with “pair combination”.
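The relationship between subnetwork pairs and pair sets can be sketched as follows; `pair_sets` is a hypothetical helper (not part of the disclosure) that enumerates every way of pairing up an even number of subnetworks:

```python
def pair_sets(subnetworks):
    """Enumerate all pair sets (perfect matchings) of an even-sized list of
    subnetworks: every way of grouping them into disjoint pairs."""
    if not subnetworks:
        return [[]]
    first, rest = subnetworks[0], subnetworks[1:]
    results = []
    for i, partner in enumerate(rest):
        # Pair the first subnetwork with each possible partner, then pair
        # up whatever remains recursively.
        remaining = rest[:i] + rest[i + 1:]
        for tail in pair_sets(remaining):
            results.append([(first, partner)] + tail)
    return results

# Four subnetworks form two pairs per set, and three distinct pair sets:
# [('A','B'),('C','D')], [('A','C'),('B','D')] and [('A','D'),('B','C')].
sets_abcd = pair_sets(["A", "B", "C", "D"])
```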
Parameter—a parameter is used by a graph neural network to process data, for example, data relating to a plurality of subnetworks (partitions) in a graph network, and to determine, categorize and/or predict a latent attribute of a node. In various embodiments of the present disclosure, such a parameter is optimized by a graph neural network based on existing data to better process new data. In various embodiments of the present disclosure, more than one graph neural network, each having its own set of parameters, is allocated to operate in each subnetwork. Each graph neural network is trained and its parameters are optimized only on its allocated subnetwork using data of the nodes in the subnetwork. In one embodiment, every subnetwork within the graph network is allocated a common graph neural network having a common set of parameters which is then optimized separately based on the subnetwork's own data. In an alternative embodiment, different graph neural networks having different sets of parameters may be selected for different subnetworks.
In various embodiments below, a set of parameters of a graph neural network may be denoted using the letter “M”. For example, a set of parameters of a graph neural network optimized on/for a subnetwork A is denoted as MA. The parameters set MA is optimized such that it can be used to perform an algorithm(s) to determine, categorize and/or predict a latent attribute of each node (or a new node) in the subnetwork A. Similarly, respective sets of parameters of two or more graph neural networks optimized on/for a subnetwork A are denoted as MA1, MA2, etc.
Latent attribute—a latent attribute or latent variable is a hidden feature or piece of information associated with a node that is not used as a basis to connect nodes to form a graph network and is not reflected in the connections of the graph network. Each node may be associated with multiple latent attributes or types of latent attributes. In the context of a map where a node is a road, examples of latent attributes include the number of POIs that are accessible from the road, the type of the roadway, the average traffic speed on the road at any given time, and the average number of vehicles traversing the road at any given time.
Accuracy metric—an accuracy metric is a measurement of how accurately a set of parameters can determine, categorize and predict a latent attribute of a node in a map or a subnetwork. Such an accuracy metric can be used to calculate an association score of a subnetwork pair to determine whether to merge two different subnetworks into a single larger new subnetwork. Additionally or alternatively, such an accuracy metric can be used to determine a target node from a subnetwork and a reassignment score of another subnetwork to determine whether to reassign the target node to the other subnetwork and thereby refine or shift the boundary between the subnetwork pair. More details relating to the association score and reassignment score will be discussed below.
In one embodiment, an accuracy metric can be calculated by running a node through a graph neural network to obtain a labelled score, and this labelled score is then compared to the true label or known latent attribute of the node to obtain the accuracy metric. Alternatively, the labelled score is compared to the true label or known latent attribute of the node to obtain an inference loss in the graph neural network, and the accuracy metric of the graph neural network in predicting the latent attribute of the node is derived from the loss.
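As a minimal sketch of the loss-based alternative, assuming a squared-error inference loss (the actual loss function and the mapping from loss to metric are implementation choices, not prescribed here), a lower loss can be made to yield a higher accuracy metric:

```python
def accuracy_metric(labelled_score, true_label):
    """Derive an accuracy metric from an inference loss (here, squared
    error between the labelled score and the true label): the metric lies
    in (0, 1] and equals 1.0 when the prediction is exact."""
    loss = (labelled_score - true_label) ** 2
    return 1.0 / (1.0 + loss)
```

For example, a labelled score of 0.9 against a true label of 1.0 gives a small loss and hence a metric close to 1.0, while a score of 0.0 gives a loss of 1.0 and a metric of 0.5.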
Association score—an association score relates to a suitability of a set of parameters optimized for a subnetwork (e.g., subnetwork A) in accurately determining or predicting a latent attribute of one or more nodes of another subnetwork, and vice versa. It is a score that is calculated to determine whether a subnetwork pair (e.g., subnetworks A and B) within a graph network can be merged into one single, larger new subnetwork (e.g., subnetwork A′). In various embodiments below, an association score of subnetworks A and B is denoted as S(MA, MB) or Smerge(MA, MB). Such an association score of subnetworks A and B is calculated based on the accuracy metric of a set of parameters optimized for subnetwork A in accurately predicting a latent attribute of one or more nodes of subnetwork B and/or the accuracy metric of a set of parameters optimized for subnetwork B in accurately predicting a latent attribute of one or more nodes of subnetwork A. Additionally, the association score is calculated further based on a degree of connectedness, e.g., the number of links, between nodes in the subnetwork pair.
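One illustrative way to combine the two cross-prediction accuracy metrics and the degree of connectedness into a single association score is sketched below; the averaging scheme and the `weight` parameter are assumptions for illustration, not a formula prescribed by the disclosure:

```python
def association_score(acc_a_on_b, acc_b_on_a, num_links, weight=0.1):
    """Hypothetical S(MA, MB): average the accuracy metric of A's parameters
    on B's nodes with that of B's parameters on A's nodes, then add a bonus
    proportional to the number of links between the two subnetworks."""
    cross_accuracy = (acc_a_on_b + acc_b_on_a) / 2.0
    return cross_accuracy + weight * num_links
```

Under this sketch, a pair whose parameters transfer well in both directions and that shares many links scores highest, making it a stronger merge candidate.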
In various embodiments, a higher association score indicates a greater suitability of a set of parameters optimized for one subnetwork in a subnetwork pair in accurately determining a latent attribute of a node of its counterpart subnetwork, and vice versa.
In various embodiments, an association score of each different subnetwork pair within the graph network is calculated, and, for each pair set (combination), the association scores of the subnetwork pairs of the pair set are summed to determine an optimal pair set from the different pair sets of the subnetworks. For example, when there are four subnetworks (e.g., subnetworks A, B, C and D) within a graph network, there are three different pair sets and therefore three different sums of association scores of the subnetwork pairs are calculated. An optimal pair set may be selected as the pair set with the highest sum among the sums of association scores. Each subnetwork pair (e.g., subnetwork pair A and B and subnetwork pair C and D) of the optimal pair set may then be merged to form a single larger new subnetwork within the graph network (e.g., subnetworks A′ and C′). In one embodiment, such merging is achieved by (re-)identifying, (re-)determining or (re-)categorizing each node in the two subnetworks of a subnetwork pair (e.g., subnetworks A and B) in the pair set of subnetworks as a node of a single larger new subnetwork (e.g., subnetwork A′).
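Selecting the optimal pair set by the highest sum of association scores can be sketched as follows; the subnetwork labels and score values are toy examples chosen purely for illustration:

```python
def best_pair_set(candidate_sets, score):
    """Return the pair set whose summed association scores is highest.
    `score` maps a subnetwork pair to its association score S(MA, MB)."""
    return max(candidate_sets, key=lambda ps: sum(score(a, b) for a, b in ps))

# Toy association scores for the pairs of four subnetworks A, B, C and D.
scores = {("A", "B"): 0.9, ("C", "D"): 0.8,
          ("A", "C"): 0.4, ("B", "D"): 0.5,
          ("A", "D"): 0.3, ("B", "C"): 0.6}

# The three possible pair sets of four subnetworks.
candidate_sets = [[("A", "B"), ("C", "D")],
                  [("A", "C"), ("B", "D")],
                  [("A", "D"), ("B", "C")]]

optimal = best_pair_set(candidate_sets, lambda a, b: scores[(a, b)])
# optimal == [("A", "B"), ("C", "D")]: its sum of 1.7 beats 0.9 and 0.9,
# so A and B merge into A' and C and D merge into C'.
```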
Reassignment score—a reassignment score relates to a suitability of a set of parameters optimized for a subnetwork (e.g., subnetwork A) in accurately predicting a latent attribute of a target node from another subnetwork (e.g., subnetwork B). It is a score that is calculated to determine a subnetwork (e.g., subnetwork A) to which a target node from a target subnetwork (e.g., subnetwork B) is reassigned, where a target node is a node identified from a target subnetwork (e.g., subnetwork B) based on an accuracy metric in predicting a latent attribute of the node by the parameters optimized for the target subnetwork. For example, by comparing the accuracy metrics in predicting the latent attribute of each node within the target subnetwork by the set of parameters optimized for the target subnetwork, a node with the lowest accuracy metric in predicting its latent attribute by the set of parameters optimized for the target subnetwork may be identified as a target node. In another embodiment, a node(s) whose accuracy metric in predicting its latent attribute is lower than a threshold accuracy metric will be identified as a target node(s).
A reassignment score of another subnetwork (e.g., subnetwork A) is calculated based on the accuracy metric of a set of parameters optimized for the other subnetwork in accurately predicting the latent attribute of the target node from the target subnetwork. Additionally, the reassignment score is calculated further based on a degree of connectedness, e.g., the number of links, between nodes in subnetworks A and B.
A higher reassignment score indicates a greater suitability of a set of parameters optimized for a subnetwork in accurately determining or predicting the latent attribute of the target node from the target subnetwork.
In various embodiments, upon identifying a target node, a reassignment score may be calculated for every other subnetwork, apart from the target subnetwork to which the target node belongs, and the subnetwork with the highest reassignment score is selected. The target subnetwork (e.g., subnetwork B) and the selected subnetwork (e.g., subnetwork A) may then be reformed by removing the target node from the target subnetwork and identifying the selected subnetwork to further comprise the target node (i.e., the target node becomes one of the nodes of the selected subnetwork). As a result of such reassignment or migration of the target node, the boundary between the target subnetwork and the selected subnetwork in the graph network is refined. The process may be repeated with another node of the target subnetwork or a node of another subnetwork(s) until an optimal partition boundary in the graph network is obtained. In one embodiment, such boundary shifting is achieved by (re-)determining or (re-)categorizing the target node in the target subnetwork to be in the other subnetwork.
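One refinement pass of the reassignment process described above can be sketched as follows; `refine_boundary`, the `accuracy` callable and the threshold are hypothetical stand-ins for the accuracy metrics discussed above, and the degree-of-connectedness term is omitted for brevity:

```python
def refine_boundary(partitions, accuracy, threshold=0.5):
    """One refinement pass: identify target nodes whose latent attribute is
    poorly predicted by their own subnetwork's parameters, then migrate each
    to the subnetwork whose parameters predict it best."""
    for target_subnet in list(partitions):
        # Target nodes: own-parameter accuracy metric below the threshold.
        targets = [n for n in partitions[target_subnet]
                   if accuracy(target_subnet, n) < threshold]
        for node in targets:
            # Reassignment score of every other subnetwork for this node.
            others = [s for s in partitions if s != target_subnet]
            best = max(others, key=lambda s: accuracy(s, node))
            if accuracy(best, node) > accuracy(target_subnet, node):
                partitions[target_subnet].remove(node)
                partitions[best].append(node)
    return partitions

# Toy accuracy metrics: node "n2" is poorly predicted by subnetwork A's
# parameters but well predicted by subnetwork B's, so it migrates to B.
acc = {("A", "n1"): 0.9, ("A", "n2"): 0.2, ("A", "n3"): 0.1,
       ("B", "n1"): 0.3, ("B", "n2"): 0.8, ("B", "n3"): 0.9}
parts = refine_boundary({"A": ["n1", "n2"], "B": ["n3"]},
                        lambda s, n: acc[(s, n)])
# parts == {"A": ["n1"], "B": ["n3", "n2"]}
```

Repeating such passes until no node migrates corresponds to converging on the refined partition boundary described above.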
Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
It is to be noted that the discussions contained in the “Background” section and that above relating to prior art arrangements relate to discussions of devices which form public knowledge through their use. Such should not be interpreted as a representation by the present inventor(s) or the patent applicant that such devices in any way form part of the common general knowledge in the art.
Some portions of the description which follows are explicitly or implicitly presented in terms of algorithms and functional or symbolic representations of operations on data within a computer memory. These algorithmic descriptions and functional or symbolic representations are the means used by those skilled in the data processing arts to convey most effectively the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities, such as electrical, magnetic or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated.
Unless specifically stated otherwise, and as apparent from the following, it will be appreciated that throughout the present specification, discussions utilizing terms such as “receiving”, “calculating”, “determining”, “updating”, “generating”, “initializing”, “outputting”, “retrieving”, “identifying”, “dispersing”, “authenticating” or the like, refer to the action and processes of a computer system, or similar electronic device, that manipulates and transforms data represented as physical quantities within the computer system into other data similarly represented as physical quantities within the computer system or other information storage, transmission or display devices.
The present specification also discloses apparatus for performing the operations of the methods. Such apparatus may be specially constructed for the required purposes, or may comprise a computer or other device selectively activated or reconfigured by a computer program stored in the computer. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various machines may be used with programs in accordance with the teachings herein. Alternatively, the construction of more specialized apparatus to perform the required method steps may be appropriate. The structure of a computer will appear from the description below.
In addition, the present specification also implicitly discloses a computer program, in that it would be apparent to the person skilled in the art that the individual steps of the method described herein may be put into effect by computer code. The computer program is not intended to be limited to any particular programming language and implementation thereof. It will be appreciated that a variety of programming languages and coding thereof may be used to implement the teachings of the disclosure contained herein. Moreover, the computer program is not intended to be limited to any particular control flow. There are many other variants of the computer program, which can use different control flows without departing from the spirit or scope of the invention.
Furthermore, one or more of the steps of the computer program may be performed in parallel rather than sequentially. Such a computer program may be stored on any computer readable medium. The computer readable medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with a computer. The computer readable medium may also include a hard-wired medium such as exemplified in the Internet system, or wireless medium such as exemplified in the GSM mobile telephone system. The computer program when loaded and executed on such a computer effectively results in an apparatus that implements the steps of the preferred method.
To reduce reliance on third-party mapping applications, it is essential for map-related applications such as ride-hailing applications to create an internal knowledge base of point of interest (POI) waypoints. While open-source maps, such as OpenStreetMap (OSM), exist, the information contained therein is not as comprehensive as that of said third-party maps.
One way is to enable real-time mapping and POI discovery through the use of driver-partners and map operators on the ground. However, given the vastness of land in Southeast Asia, it is important for map-related application developers to identify and prioritize searches in regions with high concentrations of POIs, in order to fully utilize the limited resources available for real-time mapping.
A graph neural network can be trained based on existing country-level POI locations collected, for example, from the above real-time mapping and POI discovery method. This neural network can learn country-specific details of how POIs are scattered across the region, in order to identify promising regions of high POI concentration for dispatching gig-workers.
Unfortunately, such an approach may not be accurate for countries with dense clusters of POIs, such as Singapore. This is due to geospatial concept drift, in which the statistical distribution of POIs varies from one region to another within a single country.
To address geospatial concept drift, one solution would be to divide our large POI graph network into multiple sub-graphs and assign a single model to be trained over each subgraph.
This can be done using graph partitioning algorithms. Unfortunately, the objectives of these algorithms are to divide the graph based on present connectivity and structure and not necessarily to maximize model learning. In addition, these algorithms do not take into account the different available choices of graph models from the user, implying that certain graph models would fare much worse under the same partitioning algorithm.
There is thus a need to develop a graph partitioning and boundary refinement algorithm to address the problems above, which can be applied to all types of graphs and choice of graph neural network to better pinpoint candidate regions with good amounts of POI for on-the-ground mapping and data collection and exhibit better POI forecast results.
In the following paragraphs, various embodiments of the present disclosure, which relate to a system and a method for adaptively dividing a graph network into a plurality of subnetworks, are described.
According to the present disclosure, more than one graph neural network, each having its own set of parameters, is allocated to operate in each subnetwork. Each graph neural network is then trained and its parameters are optimized only on its allocated subnetwork using data of the nodes in the subnetwork.
According to the present disclosure, a two-step process comprising a macro graph partitioning step and a cluster refinement step is utilized to improve graph boundary segmentation for a selected choice of graph neural networks. The process is applicable to generic graph-based mixture of expert models where the graphs and deep learning models are arbitrary. The term mixture of experts (MoE) refers to a machine learning framework with multiple graph neural networks, where each neural network specializes in a single data point (node), or multiple data points (nodes) within a subnetwork.
In step 204, a cluster (subnetwork) refinement framework is carried out using a selected choice of graph neural networks 206 to iteratively migrate and reassign nodes (i.e., target nodes) between the set of subnetworks/partitions {P1 . . . Pk}, contingent on the performance of the selected graph neural network(s) and, as a result, generate a final set of subnetworks/partitions {P*1 . . . P*k}. More details of the iterative reassignment of target nodes are shown in
According to various embodiments of the present disclosure, the process of adaptively dividing a graph network and refining boundaries of subnetworks of the graph network can be implemented through a system.
The system comprises a requestor device 302, a provider device 304, an acquirer server 306, a coordination server 308, an issuer server 310 and a graph network division server 312.
A user may be any suitable type of entity, which may include a consumer, company, corporation or governmental entity (i.e., a requestor) who is looking to purchase or request a good or service (e.g., graph network data or a graph network division service) via a coordination server 308, or an application developer, company, corporation or governmental entity (i.e., a provider/merchant) who is looking to sell or provide a good or service (e.g., graph network data or a graph network division service) via a coordination server 308.
A requestor device 302 is associated with a customer (or requestor) who is a party to, for example, a request for a good or service that occurs between the requestor device 302 and the provider device 304. The requestor device 302 may be a computing device such as a desktop computer, an interactive voice response (IVR) system, a smartphone, a laptop computer, a personal digital assistant computer (PDA), a mobile computer, a tablet computer, and the like.
The requestor device 302 may include user credentials (e.g., a user account) of a requestor to enable the requestor device 302 to be a party to a transaction. If the requestor has a user account, the user account may also be included (i.e., stored) in the requestor device 302. For example, a mobile device (which is a requestor device 302) may have the user account of the customer stored in the mobile device.
In one example arrangement, the requestor device 302 is a computing device in the form of a watch or similar wearable and is fitted with a wireless communications interface (e.g., a NFC interface). The requestor device 302 can then electronically communicate with the provider device 304 regarding a transaction or coordination request. The customer uses the watch or similar wearable to make a request regarding the transaction or coordination request by pressing a button on the watch or wearable.
A provider device 304 is associated with a provider who is also a party to the request for a good or service that occurs between the requestor device 302 and the provider device 304. The provider device 304 may be a computing device such as a desktop computer, an interactive voice response (IVR) system, a smartphone, a laptop computer, a personal digital assistant computer (PDA), a mobile computer, a tablet computer, and the like.
Hereinafter, the term “provider” refers to a service provider and any third party associated with providing a good or service for purchase via the provider device 304. Therefore, the user account of a provider refers to both the user account of a provider and the user account of a third party (e.g., a travel coordinator or merchant) associated with the provider.
If the provider has a user account, details of the user account may also be included (i.e., stored) in the provider device 304. For example, a mobile device (which is a provider device 304) may have user account details (e.g., account number) of the provider stored in the mobile device.
In one example arrangement, the provider device 304 is a computing device in the form of a watch or similar wearable and is fitted with a wireless communications interface (e.g., a NFC interface). The provider device 304 can then electronically communicate with the requestor device 302 regarding a transaction or coordination request. The provider uses the watch or similar wearable to make a request regarding the transaction or coordination request by pressing a button on the watch or wearable.
An acquirer server 306 is associated with an acquirer who may be an entity (e.g. a company or organization) which issues (e.g. establishes, manages, administers) a payment account (e.g. a financial bank account) of a merchant (e.g., provider). An example of an acquirer is a bank or other financial institution. As discussed above, the acquirer server 306 may include one or more computing devices that are used to establish communication with another server (e.g., the coordination server 308) by exchanging messages with and/or passing information to the other server. The acquirer server 306 forwards the payment transaction relating to a transaction or transport request to the coordination server 308.
A coordination server 308 is configured to carry out processes relating to a user account by, for example, forwarding data and information associated with the transaction to the other servers in the system, such as the graph network division server 312. In an example, the coordination server 308 may provide data and information associated with a request, including location information, to the graph network division server 312 for use in the adaptive graph network division process.
An issuer server 310 is associated with an issuer and may include one or more computing devices that are used to perform a payment transaction. The issuer may be an entity (e.g. a company or organization) which issues (e.g. establishes, manages, administers) a transaction credential or a payment account (e.g. a financial bank account) associated with the owner of the requestor device 302. As discussed above, the issuer server 310 may include one or more computing devices that are used to establish communication with another server (e.g., the coordination server 308) by exchanging messages with and/or passing information to the other server.
The coordination server 308 may be a server that hosts software application programs for processing transaction or coordination requests, for example, purchasing of a good or service by a user. The coordination server 308 may also be configured for processing coordination requests between a requestor and a provider. The coordination server communicates with other servers (e.g., graph network division server 312) concerning transaction or coordination requests. The coordination server 308 may communicate with the graph network division server 312 to facilitate adaptive division and provision of a graph network associated with the transaction or coordination requests. The coordination server 308 may use a variety of different protocols and procedures in order to process the transaction or coordination requests.
In an example, the coordination server 308 may receive transaction and graph network data and information associated with a request, including latent attributes, features and information associated with nodes, subnetworks and the graph network and/or their graph neural network processing parameters, from one user device (such as the requestor device 302 or the provider device 304) and provide the data and information to the graph network division server 312 for use in the adaptive graph network division/generation and subnetwork forming/reforming process.
Additionally, transactions that may be performed via the coordination server include good or service purchases, credit purchases, debit transactions, fund transfers, account withdrawals, etc. Coordination servers may be configured to process transactions via cash-substitutes, which may include payment cards, letters of credit, checks, payment accounts, tokens, etc.
The coordination server 308 is usually managed by a service provider that may be an entity (e.g. a company or organization) which operates to process transaction or coordination requests. The coordination server 308 may include one or more computing devices that are used for processing transaction or coordination requests.
A user account may be an account of a user who is registered at the coordination server 308. The user can be a customer, a service provider (e.g., a map application developer), or any third party (e.g., a route planner) who wants to use the coordination server. A user who is registered to the coordination server 308 or the graph network division server 312 will be called a registered user. A user who is not registered to the coordination server 308 or the graph network division server 312 will be called a non-registered user.
The coordination server 308 may also be configured to manage the registration of users. A registered user has a user account which includes details and data of the user. The registration step is called on-boarding. A user may use either the requestor device 302 or the provider device 304 to perform on-boarding to the coordination server 308.
The on-boarding process for a user is performed by the user through one of the requestor device 302 or the provider device 304. In one arrangement, the user downloads an app (which includes, or otherwise provides access to, the API to interact with the coordination server 308) to the requestor device 302 or the provider device 304. In another arrangement, the user accesses a website (which includes, or otherwise provides access to, the API to interact with the coordination server 308) on the requestor device 302 or the provider device 304. The user is then able to interact with the graph network division server 312. The user may be a requestor or a provider associated with the requestor device 302 or the provider device 304, respectively.
Details of the registration include, for example, name of the user, address of the user, emergency contact, blood type or other healthcare information, next-of-kin contact, permissions to retrieve data and information from the requestor device 302 and/or the provider device 304. Alternatively, another mobile device may be selected instead of the requestor device 302 and/or the provider device 304 for retrieving the details/data. Once on-boarded, the user would have a user account that stores all the details/data.
It may not be necessary to have a user account at the coordination server 308 to access the functionalities of the coordination server 308. However, there may be functions that are available only to a registered user for example the provision of certain choices of advance parameters and graph neural networks for processing a graph network and certain detailed graph network data. The term user will be used to collectively refer to both registered and non-registered users. A user may interchangeably be referred to as a requestor (e.g. a person who requests for a graph network or its division service) or a provider (e.g. a person who provides the requested graph network or its division service).
The coordination server 308 may be configured to communicate with, or may include, a database 309 via a connection 328. The connection 328 may be over a network (e.g., a local area network, a wide area network, the Internet, etc.). The database 309 stores user details/data as well as data corresponding to a transaction (or transaction data). Examples of the transaction data include Transaction Identifier (ID), Merchant (Provider) ID, Merchant Name, MCC/Industry Code, Industry Description, Merchant Country, Merchant Address, Merchant Postal Code and Aggregate Merchant ID, for example, data (“Merchant Name” or “Merchant ID”) relating to the merchant/provider, and the time and date relating to the transaction of goods/services.
A graph network division server 312 may be a server that hosts graph neural networks and software application programs for adaptively dividing a graph network into a plurality of subnetworks, the graph network comprising a plurality of nodes, each node being connected to at least one other node in the graph network. In one embodiment, each node is associated with a road within a graph/map. In one example, the graph network division server 312 may obtain data relating to a node, a subnetwork, or a graph network (e.g., latent attributes, features and information) and/or their graph neural network processing parameters, for example, in the form of a request, from a user device (such as the requestor device 302 or the provider device 304) or the coordination server 308, and use the data for adaptively dividing the graph network into subnetworks, forming/reforming the subnetworks and/or generating the graph network. The graph network division server may be implemented as shown in
The graph network database 313 stores graph network data comprising data relating to nodes such as roads across various regions of the world or parts of the world like South-East Asia (SEA). Such road data may be geo-tagged with geographical coordinates (e.g. latitudinal and longitudinal coordinates) and map-matched to road segments to locate a road using location coordinates. The graph network database 313 may be a component of the graph network division server 312. In an example, the graph network database 313 may be managed by an external entity and may be a server that, based on a request received from a user device (such as the requestor device 302 or the provider device 304) or the coordination server 308, retrieves data relating to the nodes, subnetworks and/or graph network (e.g. from the graph network database 313) and transmits the data to the user device or the coordination server 308. Alternatively, a module such as a graph network module may store the graph network instead of the graph network database 313, wherein the graph network module may be integrated as part of the graph network division server 312, or may be external to the graph network division server 312.
Use of the term ‘server’ herein can mean a single computing device or a plurality of interconnected computing devices which operate together to perform a particular function. That is, the server may be contained within a single hardware unit or be distributed among several or many different hardware units.
The requestor device 302 is in communication with the provider device 304 via a connection 312. The connection 312 may be an ad hoc connection (e.g., via NFC communication, Bluetooth, etc.) or over a network (e.g., the Internet). The requestor device 302 is also in communication with the graph network division server 312 via a connection 320. The connections 312, 320 may be network connections (e.g., the Internet). The requestor device 302 may also be connected to a cloud that facilitates the system 300 for adaptively dividing and generating a graph network. For example, the requestor device 302 can send a signal or data to the cloud directly via an ad hoc connection (e.g., via NFC communication, Bluetooth, etc.) or over a network (e.g., the Internet).
The provider device 304 is in communication with the requestor device 302 as described above, usually via the coordination server 308. The provider device 304 is, in turn, in communication with the acquirer server 306 via a connection 314. The provider device 304 is also in communication with the graph network division server 312 via a connection 324. The connections 314 and 324 may be network connections (e.g., provided via the Internet). The provider device 304 may also be connected to a cloud that facilitates the system 300 for adaptively dividing and generating a graph network. For example, the provider device 304 can send a signal or data to the cloud directly via an ad hoc connection (e.g., via NFC communication, Bluetooth, etc.) or over a network (e.g., the Internet).
The acquirer server 306, in turn, is in communication with the coordination server 308 via a connection 316. The coordination server 308, in turn, is in communication with an issuer server 310 via a connection 318. The connections 316 and 318 may be over a network (e.g., the Internet).
The coordination server 308 is further in communication with the graph network division server 312 via a connection 322. The connection 322 may be over a network (e.g., a local area network, a wide area network, the Internet, etc.). In one arrangement, the coordination server 308 and the graph network division server 312 are combined and the connection 322 may be an interconnected bus.
The graph network division server 312, in turn, is in communication with a graph network database 313 via a connection 326. The connection 326 may be a network connection (e.g., provided via the Internet). The graph network division server 312 may also be connected to a cloud that facilitates the system 300 for adaptively dividing the graph network into subnetworks, forming/reforming the subnetworks and/or generating the graph network. For example, the graph network division server 312 can send a signal or data to the cloud directly via a wireless ad hoc connection (e.g., via NFC communication, Bluetooth, etc.) or over a network (e.g., the Internet).
In the illustrative embodiment, each of the devices 302, 304, and the servers 306, 308, 310, 312 provides an interface to enable communication with other connected devices 302, 304 and/or servers 306, 308, 310, 312. Such communication is facilitated by an application programming interface (“API”). Such APIs may be part of a user interface that may include graphical user interfaces (GUIs), Web-based interfaces, programmatic interfaces such as application programming interfaces (APIs) and/or sets of remote procedure calls (RPCs) corresponding to interface elements, messaging interfaces in which the interface elements correspond to messages of a communication protocol, and/or suitable combinations thereof. Examples of APIs include the REST API, and the like. For example, it is possible for at least one of the requestor device 302 and the provider device 304 to receive or submit a request to generate or divide a graph network and/or to form/reform subnetworks of a graph network in response to an enquiry shown on the GUI provided via the respective API.
The system 400 comprises an association score calculation module 402 configured to calculate an association score S, for each subnetwork pair (e.g., subnetworks A and B) of the plurality of subnetworks within a graph network, based on a first accuracy metric (e.g., PM
The system 400 may comprise a subnetwork pair set generation module (not shown) configured to generate all subnetwork pair combinations (pair sets) of the graph network, each pair combination having a different set of subnetwork pairs.
The system 400 further comprises a subnetwork forming/reforming module 404 configured to form one of a plurality of larger subnetworks (e.g., subnetwork A′) within the graph network from each pair of a pair combination, selected from all the pair combinations based on a result of determining that the sum of the association scores of the pair combination is higher than that of another pair combination. In one embodiment, the subnetwork forming/reforming module 404 forms a larger subnetwork (e.g., subnetwork A′) from a subnetwork pair (e.g., subnetworks A and B) by identifying or classifying every node in the subnetwork pair as a node of the larger subnetwork.
In one embodiment, the association score calculation module 402 of the system 400 may be further configured to calculate a sum of the association scores of all pairs within each of the subnetwork pair combinations, and a subnetwork pair set selection module 416 may be configured to select the pair combination, among all the subnetwork pair combinations, with the highest association score sum. The subnetwork forming/reforming module 404 is then configured to form the plurality of larger subnetworks (e.g., subnetwork A′) within the graph network from the pair combination selected by the subnetwork pair set selection module 416.
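By way of a non-limiting sketch, the interplay of the association score calculation module 402, the pair set selection module 416 and the forming/reforming module 404 could be expressed as follows (the function and variable names are hypothetical, and the exhaustive search is for illustration only; a real implementation may use a matching algorithm):

```python
def all_pairings(subnets):
    """Enumerate every way to split an even-sized list of subnetwork
    identifiers into disjoint pairs (one "pair combination")."""
    if not subnets:
        yield []
        return
    first, rest = subnets[0], subnets[1:]
    for i, partner in enumerate(rest):
        for tail in all_pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

def best_pair_combination(subnets, score):
    """Select the pair combination whose summed association score is
    highest, as the subnetwork pair set selection module 416 would."""
    return max(all_pairings(subnets),
               key=lambda pairing: sum(score[pair] for pair in pairing))

def merge_pairs(node_assignment, pairing):
    """Form each larger subnetwork by relabelling every node of a pair
    (A, B) as a node of the merged subnetwork, here named (A, B)."""
    relabel = {}
    for a, b in pairing:
        relabel[a] = relabel[b] = (a, b)
    return {node: relabel[s] for node, s in node_assignment.items()}
```

For instance, with four subnetworks A–D and precomputed pairwise association scores, `best_pair_combination` returns the pairing whose score sum is maximal, and `merge_pairs` reassigns every node to its merged subnetwork.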
In another embodiment, the system 400 may further comprise an accuracy metric determination module 414 configured to determine if the first accuracy metric and/or the second accuracy metric is higher than a threshold accuracy metric, where the threshold accuracy metric is one of a third accuracy metric (e.g., PM
In another embodiment, if a previous merging process and formation of a subnetwork from a subnetwork pair has been carried out, a subnetwork number determination module 406 in the system 400 may be configured to determine if the number of subnetworks within the map is higher than a pre-configured threshold number of subnetworks. In response to determining that the number of subnetworks is higher than the pre-configured threshold, the association score calculation module 402 further calculates the association score for each subnetwork pair in the graph network and/or the subnetwork forming/reforming module 404 carries out the formation of the plurality of larger subnetworks.
In an additional or alternative embodiment, the system 400 comprises a target node identification module 408 configured to identify a target node (e.g., node t) in a target subnetwork (e.g., subnetwork A′ or one of the plurality of larger subnetworks formed by subnetworks forming/reforming module) in the graph network based on a fifth accuracy metric
in predicting a latent attribute of the target node to be in the target subnetwork using parameters optimized for accurately predicting each of nodes of the target subnetwork to be in the target subnetwork.
The system 400 comprises a reassignment score calculation module 410 configured to calculate a reassignment score for another subnetwork (e.g., subnetworks B′) based on a sixth accuracy metric
in predicting the latent attribute of the target node using parameters optimized for the other subnetwork (e.g., MB′). The reassignment score calculation module 410 is configured to calculate a reassignment score for every other subnetwork (e.g., subnetworks B′, C′ . . . ) in the graph network, apart from the target subnetwork to which the target node currently belongs.
The subnetwork forming/reforming module 404 is further configured to remove the target node from the target subnetwork and identify/classify the target node as a node of the subnetwork that has the highest reassignment score among all reassignment scores calculated by the reassignment score calculation module 410, and as a result, reform the target subnetwork and the subnetwork with the highest reassignment score.
In one embodiment, the accuracy metric determination module 414 may be further configured to determine if the sixth accuracy metric
of the other subnetwork (e.g., subnetwork B′) associated with the highest reassignment score is higher than the fifth accuracy metric
of the target subnetwork (e.g., subnetwork A′) in predicting the latent attribute of the target node. The subnetwork forming/reforming module 404 is then configured to reassign the target node from the target subnetwork (e.g., subnetwork A′) to the other subnetwork (e.g., subnetwork B′), i.e., remove the target node from the target subnetwork and identify the other subnetwork to further comprise the target node, thereby reforming the target subnetwork and the other subnetwork, if it is determined that the sixth accuracy metric is higher than the fifth accuracy metric.
The method for adaptively dividing a graph network may further comprise a step of identifying a third (target) node in a target subnetwork of the plurality of subnetworks based on a highest third accuracy metric in predicting a third latent attribute of the third node using parameters optimized for accurately predicting the third latent attribute of the third node. Next, for each of the remaining subnetworks of the plurality of subnetworks, a step of calculating a reassignment score to assign the third node to each of the remaining subnetworks is carried out based on a fourth accuracy metric in predicting the third latent attribute of the third node using parameters optimized for accurately predicting a fourth latent attribute of at least one fourth node of each of the remaining subnetworks. Subsequently, a step of reforming the target subnetwork and the one of the remaining subnetworks with the highest reassignment score among the calculated reassignment scores is carried out by removing the third node from the target subnetwork and identifying the one of the remaining subnetworks with the highest reassignment score to further comprise the third node.
In subnetwork pair evaluation step 604, a cluster association score SMerge(MI, MJ), which is a score of combining two models MI and MJ trained over two partitions PI and PJ, is calculated for every partition pair in the graph network G using Equations 1 and 2, where Sp(PI, PJ) represents an arbitrary cluster distancing score which penalizes the selection of cluster pairs with a low degree of connectedness, and Loss(MI, PJ) and Loss(MJ, PI) represent the inference loss of model MI on partition PJ and of model MJ on partition PI, respectively. For example, the Spinner connectivity score for measuring the degree of connectedness between partitions can be used to determine the association score.
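Since Equations 1 and 2 are not reproduced in this excerpt, a minimal sketch of the association score, assuming a simple additive combination of the cross-partition inference losses and a weighted distancing penalty, could look like:

```python
def association_score(loss_i_on_j, loss_j_on_i, distancing, lam=1.0):
    """Sketch of S_Merge(M_I, M_J): a merge is rewarded when each
    model already generalizes to the other partition (low cross
    inference loss) and the two partitions are well connected (low
    distancing score S_p). The additive form and the weight `lam`
    are assumptions, not Equations 1-2 verbatim."""
    return -(loss_i_on_j + loss_j_on_i) - lam * distancing
```

Under this form, a pair with small cross losses and a small distancing penalty receives a higher (less negative) association score and is therefore favoured for merging.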
A graph matching problem is then solved to obtain an optimal partition pair set S*={(P1, P2) . . . (PA, PB)} such that the sum of the association scores of all pairs in the pair set, i.e., Σ(I,J)∈S* SMerge(MI, MJ), is the highest/maximum among those of all other pair sets. In step 606, each partition pair of the optimal partition pair set is merged to form a new partition set P*={P*1 . . . P*K} and, as a result, the number of partitions in the graph network is reduced by a factor of 2, i.e., to N/2. The steps 602, 604, 606 are repeated until a desired number of partitions is obtained.
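The repeated pair-and-merge rounds of steps 602 to 606 can be sketched as follows, where `pair_score` stands in for SMerge (a hypothetical callable, not Equations 1-2 verbatim) and partitions are modelled as sets of vertices; the exhaustive matching search is illustrative only and a production system might use a maximum-weight matching algorithm instead:

```python
def perfect_matchings(items):
    """Enumerate all ways to split `items` into disjoint pairs."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        for tail in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

def adaptive_merge(partitions, pair_score, target_count):
    """One merging round per loop iteration: pick the matching of
    partitions whose summed association score is highest, merge each
    matched pair (halving the partition count, N -> N/2), and repeat
    until `target_count` partitions remain."""
    while len(partitions) > target_count:
        idx = list(range(len(partitions)))
        best = max(perfect_matchings(idx),
                   key=lambda m: sum(pair_score(partitions[i], partitions[j])
                                     for i, j in m))
        partitions = [partitions[i] | partitions[j] for i, j in best]
    return partitions
```

For example, starting from four single-vertex partitions, one round reduces the partition count from four to two by merging the best-scoring pairs.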
Subsequently, a reassignment score Sreassign(MJ, DMI) of assigning the batch DMI from partition PI to another partition PJ (with associated model MJ) is calculated using Equation (4), where Sp(PJ, DMI) represents an arbitrary cluster distancing score which penalizes the selection of partition pairs with a low degree of connectedness and Loss(MJ, DMI) represents the inference loss of model MJ on the batch of vertices DMI. Such a reassignment score is calculated for every other partition within an existing partition set, e.g., P*={P*1 . . . P*K}, obtained from the graph partitioning framework. Intuitively, the best performing partition is selected based on the highest reassignment score among the reassignment scores of all choices of partitions within the partition set, and DMI is then allocated to the best performing partition, as illustrated in the cluster refining process of partition P2 at 704.
In one embodiment, a set of worst performing nodes within each partition of the partition set, e.g., P*={P*1 . . . P*K}, is identified, and the cluster refining process is repeated on each partition until an optimal partition boundary in the graph network is obtained.
Subsequently, boundary refinements are carried out by moving small batches of D_mi vertices (or the worst performing vertices in each region) across subnetwork boundaries based on the accuracy metric of the vertices in their own region and the accuracy metric and/or reassignment score of the vertices in another region in the map 800. This will then cause the subnetwork boundaries to change after reassigning the D_mi vertices to other subnetworks. In this embodiment, as shown in
The input and output devices may be used by an operator who is interacting with the coordination server 308. For example, the printer 915 may be used to print reports relating to the status of the coordination server 308.
The coordination server 308 uses the communications network 920 to communicate with the provider device 304, the requestor device 302, the graph network division server 312 and the database 309 to receive commands and data. The coordination server 308 also uses the communications network 920 to communicate with the provider device 304, the requestor device 302, the graph network division server 312 and the database 309 to send notification messages or transaction and graph network data.
The computer module 901 typically includes at least one processor unit 905, and at least one memory unit 906. For example, the memory unit 906 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 901 also includes a number of input/output (I/O) interfaces including: an audio-video interface 907 that couples to the video display 914, loudspeakers 917 and microphone 980; an I/O interface 913 that couples to the keyboard 902, mouse 903, scanner 926, camera 927 and optionally a joystick or other human interface device (not illustrated); and an interface 908 for the external modem 916 and printer 915. In some implementations, the modem 916 may be incorporated within the computer module 901, for example within the interface 908. The computer module 901 also has a local network interface 911, which permits coupling of the computer system 900 via a connection 223 to a local-area communications network 922, known as a Local Area Network (LAN). As illustrated in
The I/O interfaces 908 and 913 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 909 are provided and typically include a hard disk drive (HDD) 910. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 912 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the coordination server 308.
The components 905 to 913 of the computer module 901 typically communicate via an interconnected bus 904 and in a manner that results in a conventional mode of operation of a computer system known to those in the relevant art. For example, the processor 905 is coupled to the system bus 904 using a connection 918. Likewise, the memory 906 and optical disk drive 912 are coupled to the system bus 904 by connections 919. Examples of computers on which the described arrangements can be practised include IBM-PC's and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
The methods of operating the coordination server 308, as shown in the processes of
The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the coordination server 308 from the computer readable medium, and then executed by the computer system 900. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the coordination server 308 preferably effects an advantageous apparatus for receiving/transmitting transaction data and/or graph network data including latent attributes associated with nodes, subnetwork and graph network and/or their graph neural networks processing parameters used for adaptively dividing a graph network into subnetworks and refining/reforming the subnetworks.
The software (i.e., computer program codes) 933 is typically stored in the HDD 910 or the memory 906. The software 933 is loaded into the computer system 900 from a computer readable medium (e.g., the memory 906), and executed by the processor 905. Thus, for example, the software 933 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 925 that is read by the optical disk drive 912. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the coordination server 308 preferably effects an apparatus for receiving/transmitting transaction data and/or graph network data including latent attributes associated with nodes, subnetwork and graph network and/or their graph neural networks processing parameters used for adaptively dividing a graph network into subnetworks and refining/reforming the subnetworks.
In some instances, the application programs 933 may be supplied to the user encoded on one or more CD-ROMs 925 and read via the corresponding drive 912, or alternatively may be read by the user from the networks 920 or 922. Still further, the software can also be loaded into the coordination server 308 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the coordination server 308 for execution and/or processing by the processor 905. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 901. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 901 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
The second part of the application programs 933 and the corresponding code modules mentioned above may be executed to implement one or more APIs of the coordination server 308 with associated graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 914 or the display of the provider device 304 and the requestor device 302. Through manipulation of typically the keyboard 902 and the mouse 903, an operator of the coordination server 308 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Similarly, on the provider device 304 and the requestor device 302, a user of those devices 302, 304 manipulates the input devices (e.g., touch screen, keyboard, mouse, etc.) of those devices 302, 304 in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 917 and user voice commands input via the microphone 980. These other forms of functionally adaptable user interfaces may also be implemented on the provider device 304 and the requestor device 302.
When the computer module 901 is initially powered up, a power-on self-test (POST) program 950 executes. The POST program 950 is typically stored in a ROM 949 of the semiconductor memory 906 of
The operating system 953 manages the memory 934 (609, 906) to ensure that each process or application running on the computer module 901 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the server 308 of
As shown in
The application program 933 includes a sequence of instructions 931 that may include conditional branch and loop instructions. The program 933 may also include data 932 which is used in execution of the program 933. The instructions 931 and the data 932 are stored in memory locations 928, 929, 930 and 935, 936, 937, respectively. Depending upon the relative size of the instructions 931 and the memory locations 928-930, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 930. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 928 and 929.
In general, the processor 905 is given a set of instructions which are executed therein. The processor 905 waits for a subsequent input, to which the processor 905 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 902, 903, data received from an external source across one of the networks 920, 922, data retrieved from one of the storage devices 906, 909 or data retrieved from a storage medium 925 inserted into the corresponding reader 912, all depicted in
The disclosed association management and payment initiation arrangements use input variables 954, which are stored in the memory 934 in corresponding memory locations 955, 956, 957. The association management and payment initiation arrangements produce output variables 961, which are stored in the memory 934 in corresponding memory locations 962, 963, 964. Intermediate variables 958 may be stored in memory locations 959, 960, 966 and 967.
Referring to the processor 905 of
Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 939 stores or writes a value to a memory location 932.
Each step or sub-process in the processes of
It is to be understood that the structural context of the coordination server 308 is presented merely by way of example. Therefore, in some arrangements, one or more features of the coordination server 308 may be omitted. Also, in some arrangements, one or more features of the coordination server 308 may be combined. Additionally, in some arrangements, one or more features of the coordination server 308 may be split into one or more component parts.
With reference to
The input and output devices may be used by an operator who is interacting with the graph network division server 312. For example, the printer 1215 may be used to print reports relating to the status of the graph network division server 312.
The graph network division server 312 uses the communications network 1220 to communicate with the provider device 304, the requestor device 302, the coordination server 308 and the graph network database 313 to receive commands and data. The graph network division server 312 also uses the communications network 1220 to communicate with the provider device 304, the requestor device 302, the coordination server 308 and the graph network database 313 to send notification messages or transaction and graph network data.
The computer module 1201 typically includes at least one processor unit 1205, and at least one memory unit 1206. For example, the memory unit 1206 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 1201 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1207 that couples to the video display 1214, loudspeakers 1217 and microphone 1280; an I/O interface 1213 that couples to the keyboard 1202, mouse 1203, scanner 1226, camera 1227 and optionally a joystick or other human interface device (not illustrated); and an interface 1208 for the external modem 1216 and printer 1215. In some implementations, the modem 1216 may be incorporated within the computer module 1201, for example within the interface 1208. The computer module 1201 also has a local network interface 1211, which permits coupling of the computer system 1200 via a connection 1223 to a local-area communications network 1222, known as a Local Area Network (LAN). As illustrated in
The I/O interfaces 1208 and 1213 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 1209 are provided and typically include a hard disk drive (HDD) 1210. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1212 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the graph network division server 312.
The components 1205 to 1213 of the computer module 1201 typically communicate via an interconnected bus 1204 and in a manner that results in a conventional mode of operation of a computer system known to those in the relevant art. For example, the processor 1205 is coupled to the system bus 1204 using a connection 1218. Likewise, the memory 1206 and optical disk drive 1212 are coupled to the system bus 1204 by connections 1219. Examples of computers on which the described arrangements can be practised include IBM-PC's and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
The methods of operating the graph network division server 312, as shown in the processes of
The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the graph network division server 312 from the computer readable medium, and then executed by the computer system 1200. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the graph network division server 312 preferably effects an advantageous apparatus for adaptively dividing a graph network into a plurality of subnetworks and reforming/refining the plurality of subnetworks.
The software (i.e., computer program codes) 1233 is typically stored in the HDD 1210 or the memory 1206. The software 1233 is loaded into the computer system 1200 from a computer readable medium (e.g., the memory 1206), and executed by the processor 1205. Thus, for example, the software 1233 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1225 that is read by the optical disk drive 1212. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the graph network division server 312 preferably effects an apparatus for adaptively dividing a graph network into a plurality of subnetworks and reforming/refining the plurality of subnetworks.
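By way of illustration only, a division of a graph network of POI nodes into a plurality of subnetworks may be sketched as follows. The clustering criterion used here (coordinate-based clustering via Lloyd's algorithm) and the node representation are assumptions chosen for the sketch, and do not represent the adaptive division method of the described arrangements:

```python
# Illustrative sketch: assign each POI node to one of k subnetworks by
# clustering node coordinates with Lloyd's algorithm. The criterion is an
# assumption for illustration, not the adaptive division method itself.
import math

def divide(nodes, k, iters=10):
    # nodes: list of (x, y) POI coordinates; returns a subnetwork id per node
    centroids = list(nodes[:k])                # seed centroids from first k nodes
    labels = [0] * len(nodes)
    for _ in range(iters):
        # assignment step: label each node with its nearest centroid
        labels = [min(range(k), key=lambda c: math.dist(n, centroids[c]))
                  for n in nodes]
        # update step: recompute each centroid from its member nodes
        for c in range(k):
            members = [n for n, l in zip(nodes, labels) if l == c]
            if members:
                centroids[c] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
    return labels

# Example: two spatially separated POI clusters yield two subnetworks
labels = divide([(0, 0), (0, 1), (10, 10), (10, 11)], k=2)
```

A per-subnetwork model (e.g., one set of graph neural network parameters) could then be trained on each resulting group of nodes, consistent with the purpose described above.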
In some instances, the application programs 1233 may be supplied to the user encoded on one or more CD-ROMs 1225 and read via the corresponding drive 1212, or alternatively may be read by the user from the networks 1220 or 1222. Still further, the software can also be loaded into the graph network division server 312 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the graph network division server 312 for execution and/or processing by the processor 1205. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 1201. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1201 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
The second part of the application programs 1233 and the corresponding code modules mentioned above may be executed to implement one or more APIs of the graph network division server 312 with associated graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1214 or the display of the provider device 304 and the requestor device 302. Through manipulation of typically the keyboard 1202 and the mouse 1203, an operator of the server 312 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Similarly, on the provider device 304 and the requestor device 302, a user of those devices 302, 304 manipulates the input devices (e.g., touch screen, keyboard, mouse, etc.) of those devices 302, 304 in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1217 and user voice commands input via the microphone 1280. These other forms of functionally adaptable user interfaces may also be implemented on the provider device 304 and the requestor device 302.
It is to be understood that the structural context of the graph network division server 312 is presented merely by way of example. Therefore, in some arrangements, one or more features of the graph network division server 312 may be omitted. Also, in some arrangements, one or more features of the graph network division server 312 may be combined. Additionally, in some arrangements, one or more features of the graph network division server 312 may be split into one or more component parts.
The graph network division server 312 may also include a data module 1306 configured to perform the functions of receiving transaction and graph network data and information from the requestor device 302, provider device 304, coordination server 308, a cloud and other sources of information to facilitate the processes of
The input and output devices may be used by an operator who is interacting with the combined coordination and graph network division server 308, 312. For example, the printer 1415 may be used to print reports relating to the status of the combined coordination and graph network division server 308, 312.
The combined coordination and graph network division server 308, 312 uses the communications network 1420 to communicate with the provider device 304, the requestor device 302, and the databases 309, 313 to receive commands and data. In one example, the databases 309, 313 may be combined, as shown in
The computer module 1401 typically includes at least one processor unit 1405, and at least one memory unit 1406. For example, the memory unit 1406 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 1401 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1407 that couples to the video display 1414, loudspeakers 1417 and microphone 1480; an I/O interface 1413 that couples to the keyboard 1402, mouse 1403, scanner 1426, camera 1427 and optionally a joystick or other human interface device (not illustrated); and an interface 1408 for the external modem 1416 and printer 1415. In some implementations, the modem 1416 may be incorporated within the computer module 1401, for example within the interface 1408. The computer module 1401 also has a local network interface 1411, which permits coupling of the computer system 1400 via a connection 1423 to a local-area communications network 1422, known as a Local Area Network (LAN). As illustrated in
The I/O interfaces 1408 and 1413 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 1409 are provided and typically include a hard disk drive (HDD) 1410. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1412 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the combined coordination and graph network division server 308, 312.
The components 1405 to 1413 of the computer module 1401 typically communicate via an interconnected bus 1404 and in a manner that results in a conventional mode of operation of a computer system known to those in the relevant art. For example, the processor 1405 is coupled to the system bus 1404 using a connection 1418. Likewise, the memory 1406 and optical disk drive 1412 are coupled to the system bus 1404 by connections 1419. Examples of computers on which the described arrangements can be practised include IBM-PC's and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
The methods of operating the combined coordination and graph network division server 308, 312, as shown in the processes of
The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the combined coordination and graph network division server 308, 312 from the computer readable medium, and then executed by the computer system 1400. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the combined coordination and graph network division server 308, 312 preferably effects an advantageous apparatus for adaptively dividing a graph network into a plurality of subnetworks and reforming/refining the plurality of subnetworks, as well as for receiving/transmitting transaction data and/or graph network data, including latent attributes associated with nodes, subnetworks and the graph network, and/or the graph neural network processing parameters used for the adaptive graph network division and subnetwork refining/reforming processes.
The software (i.e., computer program codes) 1433 is typically stored in the HDD 1410 or the memory 1406. The software 1433 is loaded into the computer system 1400 from a computer readable medium (e.g., the memory 1406), and executed by the processor 1405. Thus, for example, the software 1433 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1425 that is read by the optical disk drive 1412. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the combined coordination and graph network division server 308, 312 preferably effects an apparatus for adaptively dividing a graph network into a plurality of subnetworks and reforming/refining the plurality of subnetworks, as well as for receiving/transmitting transaction data and/or graph network data, including latent attributes associated with nodes, subnetworks and the graph network, and/or the graph neural network processing parameters used for the adaptive graph network division and subnetwork refining/reforming processes.
In some instances, the application programs 1433 may be supplied to the user encoded on one or more CD-ROMs 1425 and read via the corresponding drive 1412, or alternatively may be read by the user from the networks 1420 or 1422. Still further, the software can also be loaded into the combined coordination and graph network division server 308, 312 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the combined coordination and graph network division server 308, 312 for execution and/or processing by the processor 1405. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 1401. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1401 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
The second part of the application programs 1433 and the corresponding code modules mentioned above may be executed to implement one or more APIs of the combined coordination and graph network division server 308, 312 with associated graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1414 or the display of the provider device 304 and the requestor device 302. Through manipulation of typically the keyboard 1402 and the mouse 1403, an operator of the combined coordination and graph network division server 308, 312 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Similarly, on the provider device 304 and the requestor device 302, a user of those devices 302, 304 manipulates the input devices (e.g., touch screen, keyboard, mouse, etc.) of those devices 302, 304 in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1417 and user voice commands input via the microphone 1480. These other forms of functionally adaptable user interfaces may also be implemented on the provider device 304 and the requestor device 302.
It is to be understood that the structural context of the combined coordination and graph network division server 308, 312 is presented merely by way of example. Therefore, in some arrangements, one or more features of the combined coordination and graph network division server 308, 312 may be omitted. Also, in some arrangements, one or more features of the combined coordination and graph network division server 308, 312 may be combined. Additionally, in some arrangements, one or more features of the combined coordination and graph network division server 308, 312 may be split into one or more component parts.
The combined coordination and graph network division server 308, 312 may also include a coordination module 1508 configured to perform the function of communicating with the requestor device 102 and the provider device 104, and with the acquirer server 106 and the issuer server 110, to respectively receive and transmit transaction and graph network data, such as latent attributes associated with nodes, subnetworks and the graph network, and graph neural network processing parameters that are used for the graph network division/generation and subnetwork forming/reforming processes of the graph network division server 312.
The combined coordination and graph network division server 308, 312 may also include a data module 1506 configured to perform the functions of receiving transaction and graph network data and information from the requestor device 302, provider device 304, coordination server 308, a cloud and other sources of information to facilitate the processes of
The foregoing describes only some embodiments of the present disclosure, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
Number | Date | Country | Kind |
---|---|---|---|
10202203388W | Apr 2022 | SG | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/SG2023/050204 | 3/28/2023 | WO |