Various embodiments of the present disclosure address technical challenges related to predictive classification techniques in large data prediction domains. Traditional predictive classification techniques may employ machine learning classification models that are trained to generate predictive classifications based on historical observations identified from various datasets. However, traditional machine learning classification models are inefficient at capturing historical observations of relevance in large data prediction domains with robust datasets defining diverse relationships and data types. Thus, traditional predictive classification techniques are ill-suited for various big data analytics, such as coordination of benefits (COB) analytics in clinical domains in which machine learning classification models may be trained using structured data descriptive of membership information, historical claims data, and/or the like.
The inefficiencies of traditional predictive classification techniques are derived, at least in part, from the traditional storage techniques used to handle big data. For instance, traditional data storage solutions leverage relational databases in which data is stored in tabular formats. Such storage designs require a predefined and carefully modeled set of tables in which each table persists a certain entity of information. The design is thus (i) entity-first and not relationship-first and (ii) rigid and not easily extendible or augmented. Both of these characteristics limit the predictive capabilities of models trained to identify historical observations from such data. For example, given the immense complexity of large data prediction domains, such as health insurance data, a tabular storage design is incapable of capturing all available dimensions of entity information. This, in turn, forces design choices that limit which relationships are persisted. These manual design choices then impact the performance of machine learning models in unknown ways.
Traditional machine learning techniques for large data prediction domains leverage supervised classification machine learning models. Such models learn a function mapping engineered features to a target. However, they are unable to capture complex relationships between entities and entity-level attributes. Moreover, the traditional models are trained on historical data and are then optimized for certain target classes. Thus, performance of the models is limited by the data purity within each target class. This is problematic as certain target classes may be susceptible to pollution by other classes due to volume limitations (e.g., all records in the database cannot be investigated, leading to unprocessed data that is treated as the negative class by default, etc.), false negatives, process designs, and/or the like.
Ultimately, some of the above-listed limitations lead to the creation of skewed or biased datasets that require performance compromises to be made during model development. This, in turn, limits the performance of analytical processes, which thereby fall short of the true potential of the data and of machine learning capabilities. Various embodiments of the present disclosure make important contributions to traditional predictive classification techniques by addressing these technical challenges, among others.
Various embodiments of the present disclosure provide graph-based predictive modeling techniques that improve traditional predictive classification techniques in large data prediction domains. To do so, some of the techniques of the present disclosure enable the generation of a plurality of distinct subdomain-specific graphs (e.g., a network of member information for a subdomain of information in a COB use case, etc.) that each capture diverse relationships expressed across multiple information subdomains within a prediction domain in a relationship-first manner. By doing so, some of the techniques of the present disclosure may enable the processing of a set of distinct graphs, using a graph-based machine learning model, to encode subdomain-specific graph embeddings representing dense and diverse relationships and semantic information associated with each information subdomain. As described herein, each subdomain-specific graph embedding may include a vector that corresponds to an entity within the prediction domain such that the subdomain-specific graph embedding may be leveraged to generate predictive classifications for entities within the prediction domain based on information from a specific subdomain. These embeddings may be aggregated, with respect to a particular prediction, to generate a composite graph embedding tailored to a particular prediction task. In this manner, some techniques of the present disclosure may improve the performance, processing efficiency, and training efficiency of traditional machine learning models leveraged within a large data prediction domain. This, in turn, may be practically applied to improve various predictive tasks for various prediction domains including, as one example, COB investigations in a clinical domain.
For instance, some of the techniques of the present disclosure may facilitate a confirmation of dual coverage (e.g., data in addition to member COB probability may be made available to the COB investigation team) in a clinical domain. By way of example, the subdomain-specific graphs may define relationship-first data structures for health insurance data to generate a multi-graph environment of a plurality of heterogeneous undirected graph networks that may be processed using graph-based machine learning models to expose the underlying relationships between nodes and allow for node prediction where no observed labels (or insufficient labels) exist.
In some embodiments, a computer-implemented method includes generating, by one or more processors and using a plurality of source tables for a prediction domain, a plurality of subdomain-specific graphs for the prediction domain, each comprising a respective plurality of graph nodes and a respective plurality of weighted edges between the respective plurality of graph nodes; generating, by the one or more processors and using a graph-based machine learning model, a plurality of subdomain-specific embeddings comprising a respective subdomain-specific embedding for each of the plurality of subdomain-specific graphs; generating, by the one or more processors and using the graph-based machine learning model, a composite graph embedding based on the plurality of subdomain-specific embeddings and a designated predictive task; and initiating, by the one or more processors, the performance of the designated predictive task based on the composite graph embedding.
In some embodiments, a computing system includes memory and one or more processors communicatively coupled to the memory. The one or more processors are configured to generate, using a plurality of source tables for a prediction domain, a plurality of subdomain-specific graphs for the prediction domain, each comprising a respective plurality of graph nodes and a respective plurality of weighted edges between the respective plurality of graph nodes; generate, using a graph-based machine learning model, a plurality of subdomain-specific embeddings comprising a respective subdomain-specific embedding for each of the plurality of subdomain-specific graphs; generate, using the graph-based machine learning model, a composite graph embedding based on the plurality of subdomain-specific embeddings and a designated predictive task; and initiate the performance of the designated predictive task based on the composite graph embedding.
In some embodiments, one or more non-transitory computer-readable storage media include instructions that, when executed by one or more processors, cause the one or more processors to generate, using a plurality of source tables for a prediction domain, a plurality of subdomain-specific graphs for the prediction domain, each comprising a respective plurality of graph nodes and a respective plurality of weighted edges between the respective plurality of graph nodes; generate, using a graph-based machine learning model, a plurality of subdomain-specific embeddings comprising a respective subdomain-specific embedding for each of the plurality of subdomain-specific graphs; generate, using the graph-based machine learning model, a composite graph embedding based on the plurality of subdomain-specific embeddings and a designated predictive task; and initiate the performance of the designated predictive task based on the composite graph embedding.
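By way of illustration only, the following Python sketch outlines how the four summarized operations might be orchestrated end to end. Every name in the sketch (build_subdomain_graphs, embed_graph, compose_embeddings, the toy edge lists, and the task weights) is a hypothetical stand-in rather than a component defined by the present disclosure, and the random vectors merely mark where a trained graph-based machine learning model would run.

```python
from typing import Dict, List, Tuple

import networkx as nx
import numpy as np


def build_subdomain_graphs(
    edge_lists: Dict[str, List[Tuple[str, str, float]]]
) -> Dict[str, nx.Graph]:
    """Step 1: build one weighted, undirected graph per information subdomain."""
    graphs = {}
    for subdomain, edges in edge_lists.items():
        g = nx.Graph()
        for src, dst, weight in edges:  # (graph node, graph node, edge weight)
            g.add_edge(src, dst, weight=weight)
        graphs[subdomain] = g
    return graphs


def embed_graph(graph: nx.Graph, dim: int = 8) -> Dict[str, np.ndarray]:
    """Step 2 (placeholder): a real system would apply a trained graph-based
    machine learning model; random vectors are used so the sketch executes."""
    rng = np.random.default_rng(0)
    return {node: rng.normal(size=dim) for node in graph.nodes}


def compose_embeddings(
    per_subdomain: Dict[str, Dict[str, np.ndarray]],
    task_weights: Dict[str, float],
) -> Dict[str, np.ndarray]:
    """Step 3: aggregate subdomain-specific embeddings with task-specific weights."""
    composite: Dict[str, np.ndarray] = {}
    for subdomain, embeddings in per_subdomain.items():
        w = task_weights.get(subdomain, 1.0)
        for node, vector in embeddings.items():
            composite[node] = composite.get(node, 0.0) + w * vector
    return composite


# Toy usage: two subdomains sharing the common member node "M1".
graphs = build_subdomain_graphs({
    "demographic": [("M1", "M2", 1.0)],  # e.g., a spousal relationship
    "employment": [("M1", "E1", 0.8)],   # e.g., a member-to-employer link
})
per_subdomain = {name: embed_graph(g) for name, g in graphs.items()}
composite = compose_embeddings(
    per_subdomain, task_weights={"demographic": 0.7, "employment": 0.3}
)
# Step 4: the composite embedding would feed the designated predictive task,
# e.g., a classifier scoring members for COB investigation.
```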
Various embodiments of the present disclosure are described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the present disclosure are shown. Indeed, the present disclosure may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. The term “or” is used herein in both the alternative and conjunctive sense, unless otherwise indicated. The terms “illustrative” and “example” are used herein to indicate examples with no indication of quality level. Terms such as “computing,” “determining,” “generating,” and/or similar words are used herein interchangeably to refer to the creation, modification, or identification of data. Further, “based on,” “based at least in part on,” “based at least on,” “based upon,” and/or similar words are used herein interchangeably in an open-ended manner such that they do not indicate being based only on or based solely on the referenced element or elements unless so indicated. Like numbers refer to like elements throughout. Moreover, while certain embodiments of the present disclosure are described with reference to predictive data analysis, one of ordinary skill in the art will recognize that the disclosed concepts may be used to perform other types of data analysis.
Embodiments of the present disclosure may be implemented in various ways, including as computer program products that comprise articles of manufacture. Such computer program products may include one or more software components including, for example, software objects, methods, data structures, or the like. A software component may be coded in any of a variety of programming languages. An illustrative programming language may be a lower-level programming language such as an assembly language associated with a particular hardware architecture and/or operating system platform. A software component comprising assembly language instructions may require conversion into executable machine code by an assembler prior to execution by the hardware architecture and/or platform. Another example programming language may be a higher-level programming language that may be portable across multiple architectures. A software component comprising higher-level programming language instructions may require conversion to an intermediate representation by an interpreter or a compiler prior to execution.
Other examples of programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a script language, a database query or search language, and/or a report writing language. In one or more example embodiments, a software component comprising instructions in one of the foregoing examples of programming languages may be executed directly by an operating system or other software component without having to be first transformed into another form. A software component may be stored as a file or other data storage construct. Software components of a similar type or functionally related may be stored together, such as in a particular directory, folder, or library. Software components may be static (e.g., pre-established or fixed) or dynamic (e.g., created or modified at the time of execution).
A computer program product may include a non-transitory computer-readable storage medium storing applications, programs, program modules, scripts, source code, program code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like (also referred to herein as executable instructions, instructions for execution, computer program products, program code, and/or similar terms used herein interchangeably). Such non-transitory computer-readable storage media include all computer-readable media (including volatile and non-volatile media).
In some embodiments, a non-volatile computer-readable storage medium may include a floppy disk, flexible disk, hard disk, solid-state storage (SSS) (e.g., a solid-state drive (SSD), solid state card (SSC), solid state module (SSM), enterprise flash drive), magnetic tape, or any other non-transitory magnetic medium, and/or the like. A non-volatile computer-readable storage medium may also include a punch card, paper tape, optical mark sheet (or any other physical medium with patterns of holes or other optically recognizable indicia), compact disc read only memory (CD-ROM), compact disc-rewritable (CD-RW), digital versatile disc (DVD), Blu-ray disc (BD), any other non-transitory optical medium, and/or the like. Such a non-volatile computer-readable storage medium may also include read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory (e.g., Serial, NAND, NOR, and/or the like), multimedia memory cards (MMC), secure digital (SD) memory cards, SmartMedia cards, CompactFlash (CF) cards, Memory Sticks, and/or the like. Further, a non-volatile computer-readable storage medium may also include conductive-bridging random access memory (CBRAM), phase-change random access memory (PRAM), ferroelectric random-access memory (FeRAM), non-volatile random-access memory (NVRAM), magnetoresistive random-access memory (MRAM), resistive random-access memory (RRAM), Silicon-Oxide-Nitride-Oxide-Silicon memory (SONOS), floating junction gate random access memory (FJG RAM), Millipede memory, racetrack memory, and/or the like.
In some embodiments, a volatile computer-readable storage medium may include random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), fast page mode dynamic random access memory (FPM DRAM), extended data-out dynamic random access memory (EDO DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), double data rate type two synchronous dynamic random access memory (DDR2 SDRAM), double data rate type three synchronous dynamic random access memory (DDR3 SDRAM), Rambus dynamic random access memory (RDRAM), Twin Transistor RAM (TTRAM), Thyristor RAM (T-RAM), Zero-capacitor (Z-RAM), Rambus in-line memory module (RIMM), dual in-line memory module (DIMM), single in-line memory module (SIMM), video random access memory (VRAM), cache memory (including various levels), flash memory, register memory, and/or the like. It will be appreciated that where embodiments are described to use a computer-readable storage medium, other types of computer-readable storage media may be substituted for, or used in addition to, the computer-readable storage media described above.
As should be appreciated, various embodiments of the present disclosure may also be implemented as methods, apparatuses, systems, computing devices, computing entities, and/or the like. As such, embodiments of the present disclosure may take the form of an apparatus, system, computing device, computing entity, and/or the like executing instructions stored on a computer-readable storage medium to perform certain steps or operations. Thus, embodiments of the present disclosure may also take the form of an entirely hardware embodiment, an entirely computer program product embodiment, and/or an embodiment that comprises a combination of computer program products and hardware performing certain steps or operations.
Embodiments of the present disclosure are described below with reference to block diagrams and flowchart illustrations. Thus, it should be understood that each block of the block diagrams and flowchart illustrations may be implemented in the form of a computer program product, an entirely hardware embodiment, a combination of hardware and computer program products, and/or apparatuses, systems, computing devices, computing entities, and/or the like carrying out instructions, operations, steps, and similar words used interchangeably (e.g., the executable instructions, instructions for execution, program code, and/or the like) on a computer-readable storage medium for execution. For example, retrieval, loading, and execution of code may be performed sequentially such that one instruction is retrieved, loaded, and executed at a time. In some example embodiments, retrieval, loading, and/or execution may be performed in parallel such that multiple instructions are retrieved, loaded, and/or executed together. Thus, such embodiments may produce specifically configured machines performing the steps or operations specified in the block diagrams and flowchart illustrations. Accordingly, the block diagrams and flowchart illustrations support various combinations of embodiments for performing the specified instructions, operations, or steps.
The external computing entities 112a-c, for example, may include and/or be associated with one or more entities that may be configured to receive, store, manage, and/or facilitate datasets, such as the historical dataset, subdomain-specific source tables, modification data objects, and/or the like. The external computing entities 112a-c may provide such datasets, and/or the like, to the predictive computing entity 102, which may leverage the datasets to generate subdomain-specific graphs, one or more predictive classifications, and/or the like, as described herein. In some examples, the datasets may include an aggregation of data from across the external computing entities 112a-c into one or more aggregated datasets. The external computing entities 112a-c, for example, may be associated with one or more data repositories, cloud platforms, compute nodes, organizations, and/or the like, that may be individually and/or collectively leveraged by the predictive computing entity 102 to obtain and aggregate data for a prediction domain.
The predictive computing entity 102 may include, or be in communication with, one or more processing elements 104 (also referred to as processors, processing circuitry, digital circuitry, and/or similar terms used herein interchangeably) that communicate with other elements within the predictive computing entity 102 via a bus, for example. As will be understood, the predictive computing entity 102 may be embodied in a number of different ways. The predictive computing entity 102 may be configured for a particular use or configured to execute instructions stored in volatile or non-volatile media or otherwise accessible to the processing element 104. As such, whether configured by hardware or computer program products, or by a combination thereof, the processing element 104 may be capable of performing steps or operations according to embodiments of the present disclosure when configured accordingly.
In one embodiment, the predictive computing entity 102 may further include, or be in communication with, one or more memory elements 106. The memory element 106 may be used to store at least portions of the databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like being executed by, for example, the processing element 104. Thus, the databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like, may be used to control certain aspects of the operation of the predictive computing entity 102 with the assistance of the processing element 104.
As indicated, in one embodiment, the predictive computing entity 102 may also include one or more communication interfaces 108 for communicating with various computing entities, e.g., external computing entities 112a-c, such as by communicating data, content, information, and/or similar terms used herein interchangeably that may be transmitted, received, operated on, processed, displayed, stored, and/or the like.
The computing system 100 may include one or more input/output (I/O) element(s) 114 for communicating with one or more users. An I/O element 114, for example, may include one or more user interfaces for providing and/or receiving information from one or more users of the computing system 100. The I/O element 114 may include one or more tactile interfaces (e.g., keypads, touch screens, etc.), one or more audio interfaces (e.g., microphones, speakers, etc.), visual interfaces (e.g., display devices, etc.), and/or the like. The I/O element 114 may be configured to receive user input through one or more of the user interfaces from a user of the computing system 100 and provide data to a user through the user interfaces.
The predictive computing entity 102 may include a processing element 104, a memory element 106, a communication interface 108, and/or one or more I/O elements 114 that communicate within the predictive computing entity 102 via internal communication circuitry, such as a communication bus and/or the like.
The processing element 104 may be embodied as one or more complex programmable logic devices (CPLDs), microprocessors, multi-core processors, coprocessing entities, application-specific instruction-set processors (ASIPs), microcontrollers, and/or controllers. Further, the processing element 104 may be embodied as one or more other processing devices or circuitry including, for example, a processor, one or more processors, various processing devices, and/or the like. The term circuitry may refer to an entirely hardware embodiment or a combination of hardware and computer program products. Thus, the processing element 104 may be embodied as integrated circuits, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), programmable logic arrays (PLAs), hardware accelerators, digital circuitry, and/or the like.
The memory element 106 may include volatile memory 202 and/or non-volatile memory 204. The memory element 106, for example, may include volatile memory 202 (also referred to as volatile storage media, memory storage, memory circuitry, and/or similar terms used herein interchangeably). In one embodiment, a volatile memory 202 may include random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), fast page mode dynamic random access memory (FPM DRAM), extended data-out dynamic random access memory (EDO DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), double data rate type two synchronous dynamic random access memory (DDR2 SDRAM), double data rate type three synchronous dynamic random access memory (DDR3 SDRAM), Rambus dynamic random access memory (RDRAM), Twin Transistor RAM (TTRAM), Thyristor RAM (T-RAM), Zero-capacitor (Z-RAM), Rambus in-line memory module (RIMM), dual in-line memory module (DIMM), single in-line memory module (SIMM), video random access memory (VRAM), cache memory (including various levels), flash memory, register memory, and/or the like. It will be appreciated that where embodiments are described to use a computer-readable storage medium, other types of computer-readable storage media may be substituted for, or used in addition to, the computer-readable storage media described above.
The memory element 106 may include non-volatile memory 204 (also referred to as non-volatile storage, memory, memory storage, memory circuitry, and/or similar terms used herein interchangeably). In one embodiment, the non-volatile memory 204 may include one or more non-volatile storage or memory media, including, but not limited to, hard disks, ROM, PROM, EPROM, EEPROM, flash memory, MMCs, SD memory cards, Memory Sticks, CBRAM, PRAM, FeRAM, NVRAM, MRAM, RRAM, SONOS, FJG RAM, Millipede memory, racetrack memory, and/or the like.
In one embodiment, a non-volatile memory 204 may include a floppy disk, flexible disk, hard disk, solid-state storage (SSS) (e.g., a solid-state drive (SSD), solid state card (SSC), solid state module (SSM), enterprise flash drive), magnetic tape, or any other non-transitory magnetic medium, and/or the like. A non-volatile memory 204 may also include a punch card, paper tape, optical mark sheet (or any other physical medium with patterns of holes or other optically recognizable indicia), compact disc read only memory (CD-ROM), compact disc-rewritable (CD-RW), digital versatile disc (DVD), Blu-ray disc (BD), any other non-transitory optical medium, and/or the like. Such a non-volatile memory 204 may also include read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory (e.g., Serial, NAND, NOR, and/or the like), multimedia memory cards (MMC), secure digital (SD) memory cards, SmartMedia cards, CompactFlash (CF) cards, Memory Sticks, and/or the like. Further, a non-volatile memory 204 may also include conductive-bridging random access memory (CBRAM), phase-change random access memory (PRAM), ferroelectric random-access memory (FeRAM), non-volatile random-access memory (NVRAM), magnetoresistive random-access memory (MRAM), resistive random-access memory (RRAM), Silicon-Oxide-Nitride-Oxide-Silicon memory (SONOS), floating junction gate random access memory (FJG RAM), Millipede memory, racetrack memory, and/or the like.
As will be recognized, the non-volatile memory 204 may store databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like. The term database, database instance, database management system, and/or similar terms used herein interchangeably may refer to a collection of records or data that is stored in a computer-readable storage medium using one or more database models, such as a hierarchical database model, network model, relational model, entity-relationship model, object model, document model, semantic model, graph model, and/or the like.
The memory element 106 may include a non-transitory computer-readable storage medium for implementing one or more aspects of the present disclosure including as a computer-implemented method configured to perform one or more steps/operations described herein. For example, the non-transitory computer-readable storage medium may include instructions that when executed by a computer (e.g., processing element 104), cause the computer to perform one or more steps/operations of the present disclosure. For instance, the memory element 106 may store instructions that, when executed by the processing element 104, configure the predictive computing entity 102 to perform one or more steps/operations described herein.
Embodiments of the present disclosure may be implemented in various ways, including as computer program products that comprise articles of manufacture. Such computer program products may include one or more software components including, for example, software objects, methods, data structures, or the like. A software component may be coded in any of a variety of programming languages. An illustrative programming language may be a lower-level programming language, such as an assembly language associated with a particular hardware framework and/or operating system platform. A software component comprising assembly language instructions may require conversion into executable machine code by an assembler prior to execution by the hardware framework and/or platform. Another example programming language may be a higher-level programming language that may be portable across multiple frameworks. A software component comprising higher-level programming language instructions may require conversion to an intermediate representation by an interpreter or a compiler prior to execution.
Other examples of programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a script language, a database query or search language, and/or a report writing language. In one or more example embodiments, a software component comprising instructions in one of the foregoing examples of programming languages may be executed directly by an operating system or other software component without having to be first transformed into another form. A software component may be stored as a file or other data storage construct. Software components of a similar type or functionally related may be stored together, such as in a particular directory, folder, or library. Software components may be static (e.g., pre-established or fixed) or dynamic (e.g., created or modified at the time of execution).
The predictive computing entity 102 may be embodied by a computer program product which includes non-transitory computer-readable storage medium storing applications, programs, program modules, scripts, source code, program code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like (also referred to herein as executable instructions, instructions for execution, computer program products, program code, and/or similar terms used herein interchangeably). Such non-transitory computer-readable storage media include all computer-readable media such as the volatile memory 202 and/or the non-volatile memory 204.
The predictive computing entity 102 may include one or more I/O elements 114. The I/O elements 114 may include one or more output devices 206 and/or one or more input devices 208 for providing information to and/or receiving information from a user, respectively. The output devices 206 may include one or more sensory output devices, such as one or more tactile output devices (e.g., vibration devices such as direct current motors, and/or the like), one or more visual output devices (e.g., liquid crystal displays, and/or the like), one or more audio output devices (e.g., speakers, and/or the like), and/or the like. The input devices 208 may include one or more sensory input devices, such as one or more tactile input devices (e.g., touch sensitive displays, push buttons, and/or the like), one or more audio input devices (e.g., microphones, and/or the like), and/or the like.
In addition, or alternatively, the predictive computing entity 102 may communicate, via a communication interface 108, with one or more external computing entities such as the external computing entity 112a. The communication interface 108 may be compatible with one or more wired and/or wireless communication protocols.
For example, such communication may be executed using a wired data transmission protocol, such as fiber distributed data interface (FDDI), digital subscriber line (DSL), Ethernet, asynchronous transfer mode (ATM), frame relay, data over cable service interface specification (DOCSIS), or any other wired transmission protocol. In addition, or alternatively, the predictive computing entity 102 may be configured to communicate via wireless external communication using any of a variety of protocols, such as general packet radio service (GPRS), Universal Mobile Telecommunications System (UMTS), Code Division Multiple Access 2000 (CDMA2000), CDMA2000 1× (1×RTT), Wideband Code Division Multiple Access (WCDMA), Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), Evolved Universal Terrestrial Radio Access Network (E-UTRAN), Evolution-Data Optimized (EVDO), High Speed Packet Access (HSPA), High-Speed Downlink Packet Access (HSDPA), IEEE 802.11 (Wi-Fi), Wi-Fi Direct, IEEE 802.16 (WiMAX), ultra-wideband (UWB), infrared (IR) protocols, near field communication (NFC) protocols, Wibree, Bluetooth protocols, wireless universal serial bus (USB) protocols, and/or any other wireless protocol.
The external computing entity 112a may include an external entity processing element 210, an external entity memory element 212, an external entity communication interface 224, and/or one or more external entity I/O elements 218 that communicate within the external computing entity 112a via internal communication circuitry, such as a communication bus and/or the like.
The external entity processing element 210 may include one or more processing devices, processors, and/or any other device, circuitry, and/or the like described with reference to the processing element 104. The external entity memory element 212 may include one or more memory devices, media, and/or the like described with reference to the memory element 106. The external entity memory element 212, for example, may include at least one external entity volatile memory 214 and/or external entity non-volatile memory 216. The external entity communication interface 224 may include one or more wired and/or wireless communication interfaces as described with reference to communication interface 108.
In some embodiments, the external entity communication interface 224 may be supported by radio circuitry. For instance, the external computing entity 112a may include an antenna 226, a transmitter 228 (e.g., radio), and/or a receiver 230 (e.g., radio).
Signals provided to and received from the transmitter 228 and the receiver 230, correspondingly, may include signaling information/data in accordance with air interface standards of applicable wireless systems. In this regard, the external computing entity 112a may be capable of operating with one or more air interface standards, communication protocols, modulation types, and access types. More particularly, the external computing entity 112a may operate in accordance with any of a number of wireless communication standards and protocols, such as those described above with regard to the predictive computing entity 102.
Via these communication standards and protocols, the external computing entity 112a may communicate with various other entities using means such as Unstructured Supplementary Service Data (USSD), Short Message Service (SMS), Multimedia Messaging Service (MMS), Dual-Tone Multi-Frequency Signaling (DTMF), and/or Subscriber Identity Module Dialer (SIM dialer). The external computing entity 112a may also download changes, add-ons, and updates, for instance, to its firmware, software (e.g., including executable instructions, applications, program modules), operating system, and/or the like.
According to one embodiment, the external computing entity 112a may include location determining embodiments, devices, modules, functionalities, and/or the like. For example, the external computing entity 112a may include outdoor positioning embodiments, such as a location module adapted to acquire, for example, latitude, longitude, altitude, geocode, course, direction, heading, speed, coordinated universal time (UTC), date, and/or various other information/data. In one embodiment, the location module may acquire data, such as ephemeris data, by identifying the number of satellites in view and the relative positions of those satellites (e.g., using global positioning systems (GPS)). The satellites may be a variety of different satellites, including Low Earth Orbit (LEO) satellite systems, Department of Defense (DOD) satellite systems, the European Union Galileo positioning systems, the Chinese Compass navigation systems, Indian Regional Navigational satellite systems, and/or the like. This data may be collected using a variety of coordinate systems, such as the Decimal Degrees (DD); Degrees, Minutes, Seconds (DMS); Universal Transverse Mercator (UTM); Universal Polar Stereographic (UPS) coordinate systems; and/or the like. Alternatively, the location information/data may be determined by triangulating a position of the external computing entity 112a in connection with a variety of other systems, including cellular towers, Wi-Fi access points, and/or the like. Similarly, the external computing entity 112a may include indoor positioning embodiments, such as a location module adapted to acquire, for example, latitude, longitude, altitude, geocode, course, direction, heading, speed, time, date, and/or various other information/data. Some of the indoor systems may use various position or location technologies, including RFID tags, indoor beacons or transmitters, Wi-Fi access points, cellular towers, nearby computing devices (e.g., smartphones, laptops), and/or the like. For instance, such technologies may include iBeacons, Gimbal proximity beacons, Bluetooth Low Energy (BLE) transmitters, NFC transmitters, and/or the like. These indoor positioning embodiments may be used in a variety of settings to determine the location of someone or something within inches or centimeters.
The external entity I/O elements 218 may include one or more external entity output devices 220 and/or one or more external entity input devices 222 that may include one or more sensory devices described herein with reference to the I/O elements 114. In some embodiments, the external entity I/O element 218 may include a user interface (e.g., a display, speaker, and/or the like) and/or a user input interface (e.g., keypad, touch screen, microphone, and/or the like) that may be coupled to the external entity processing element 210.
For example, the user interface may be a user application, browser, and/or similar words used herein interchangeably executing on and/or accessible via the external computing entity 112a to interact with and/or cause the display, announcement, and/or the like of information/data to a user. The user input interface may include any of a number of input devices or interfaces allowing the external computing entity 112a to receive data including, as examples, a keypad (hard or soft), a touch display, voice/speech interfaces, motion interfaces, and/or any other input device. In embodiments including a keypad, the keypad may include (or cause display of) the conventional numeric (0-9) and related keys (#, *, and/or the like), and other keys used for operating the external computing entity 112a and may include a full set of alphabetic keys or set of keys that may be activated to provide a full set of alphanumeric keys. In addition to providing input, the user input interface may be used, for example, to activate or deactivate certain functions, such as screen savers, sleep modes, and/or the like.
In some embodiments, the term “prediction domain” refers to an area of knowledge that may be augmented by one or more predictions using some of the techniques of the present disclosure. A prediction domain may include any knowledge area and may be associated with data descriptive of one or more known characteristics, attributes, and/or the like within the knowledge area. Examples of prediction domains may include financial domains, clinical domains, logistics domains, and/or the like. Prediction domains may be associated with big data that includes a variety of different data types, at times arriving with increasing volume and velocity. For example, a clinical domain may be associated with various clinical datasets each describing attributes for one or more entities (e.g., members, providers, etc.) that operate within the clinical domain. By way of example, a prediction domain may include a coordination of benefits (COB) domain in which healthcare insurers offer, manage, and/or facilitate one or more different coverage plans across a plurality of members.
In some embodiments, diverse sets of data associated with a prediction domain may be monitored, received, updated, and/or stored to generate a global knowledge base for the prediction domain. The global knowledge base may aggregate data from across a plurality of subdomains within the prediction domain.
In some embodiments, the term “subdomain” refers to a portion of a prediction domain. For example, a prediction domain may include a plurality of subdomains. Each subdomain may include a particular information segment from the prediction domain. Each particular information segment may include information for a specific facet of the prediction domain. By way of example, using a clinical domain for illustration, a clinical domain may include a plurality of clinical subdomains that each contain specific facets of member information for a plurality of members within the clinical domain. The clinical subdomains, for example, may include a demographic subdomain including member demographic data, a claim subdomain including medical claim data, an employment subdomain including employer data, a plan subdomain including health plan data, an investigation subdomain including COB investigation data, and/or the like.
In some examples, a prediction domain may be divided into a plurality of subdomains. For instance, a plurality of subdomains, taken together, may include all of the information available in a prediction domain. In some examples, each of the plurality of subdomains may correspond to one or more information definitions. Subdomains for a prediction domain may be added, modified, and/or removed by changing the one or more information definitions.
In some examples, data corresponding to each of the plurality of subdomains may be stored in a plurality of disparate data structures. In some examples, each subdomain may be associated with a respective one or more of the plurality of disparate data structures. The plurality of disparate data structures, for example, may include one or more subdomain-specific source tables for each subdomain within a prediction domain.
In some embodiments, the term “source table” refers to a data structure that describes data associated with a portion of a prediction domain. A source table may include any type of data storage structure including, as examples, one or more linked lists, databases, and/or the like. In some examples, a source table may include a relational database. For instance, data associated with portions of the prediction domain may be persisted in one or more relational databases where it is organized in one or more different data tables. In some examples, the one or more different data tables may be linked by relationships to entities within a prediction domain. As an example, in a clinical domain, the core of the generated data for the clinical domain is member-related information and/or associated transactional data that is not limited to a member's health. In such an example, each of the source tables may include a plurality of attributes that are directly and/or indirectly linked to a member within the clinical domain.
A prediction domain may be associated with a plurality of source tables. In some examples, the plurality of source tables may include one or more subdomain-specific source tables for each subdomain of the prediction domain. Using a clinical domain as an example, a demographic subdomain may include member demographic data stored in one or more member tables, a claim subdomain may include medical claim data stored in one or more claim tables, an employment subdomain may include employer data stored in one or more client enterprise tables, a plan subdomain may include health plan data stored in one or more insurance plan tables, and/or an investigation subdomain may include COB investigation data stored in one or more investigation tables.
Each of the tables may include one or more member attributes and/or attributes that may be linked to a member. For instance, a member table may include member attributes, such as a member identifier, a date of birth, an address, a plan identifier, a family member identifier of another member, a relationship type with respect to the other member, an employer identifier, and an employment type, among other attributes. In some examples, a claim table may include claim attributes that are associated with a member, such as a claim identifier, the member identifier corresponding to a member associated with the claim, a timestamp, a cost, a healthcare location (e.g., hospital, etc.), and a diagnosis code, among other attributes. In some examples, a client enterprise table may include enterprise attributes that are associated with a member, such as an employer identifier (e.g., linked to an employer identifier in a member table, etc.), an address, a number of employees, insurance plan identifiers, and start dates, among other attributes. In some examples, an insurance plan table may include plan attributes that are associated with a member, such as a plan identifier (e.g., linked to a plan identifier in a member table, etc.), a plan name, a cost, a location of services, and contextual details, among other attributes. In some examples, an investigation table may include investigation attributes that are associated with a member, such as the member identifier, an investigation identifier, an investigation date, and an investigation result, among other attributes.
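By way of illustration only, the following sketch shows simplified, hypothetical versions of such linked source tables using the pandas library; every identifier, column name, and value is illustrative, and a join on the shared plan identifier recovers the indirect member-to-plan relationship described above.

```python
import pandas as pd

# Hypothetical, simplified source tables; all columns and values are
# illustrative only. Each table links back to a member, directly or
# indirectly, via shared identifiers.
member = pd.DataFrame({
    "member_id": ["M1", "M2"],
    "date_of_birth": ["1980-01-15", "1982-06-02"],
    "plan_id": ["P1", "P1"],
    "family_member_id": ["M2", "M1"],
    "relationship_type": ["spouse", "spouse"],
    "employer_id": ["E1", "E2"],
})

claim = pd.DataFrame({
    "claim_id": ["C1"],
    "member_id": ["M1"],           # directly linked to a member
    "timestamp": ["2023-03-10"],
    "cost": [1250.00],
    "diagnosis_code": ["J45"],
})

insurance_plan = pd.DataFrame({
    "plan_id": ["P1"],             # indirectly linked via the member table
    "plan_name": ["Illustrative PPO"],
})

# Joining tables on shared identifiers recovers member-to-plan relationships.
print(member.merge(insurance_plan, on="plan_id")[["member_id", "plan_name"]])
```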
In some examples, a plurality of source tables may be processed using analytic models that may be augmented by machine learning techniques. However, the complex connections between the plurality of source tables reduce the efficiency, reliability, adaptability, and functionality of such techniques. To address this technical challenge, data from a plurality of source tables for a prediction domain may be aggregated to generate a plurality of subdomain-specific graphs within a multi-graph environment.
In some embodiments, the term “multi-graph environment” refers to a plurality of graph data structures that describe a prediction domain. A multi-graph environment, for example, may include a plurality of subdomain-specific graphs that each store a portion of data (e.g., member data in a clinical domain) for a prediction domain. As described herein, a multi-graph environment, and/or portions thereof, may be processed, using graph-based modeling techniques, to generate predictive insights for different portions of a prediction domain.
In some examples, a multi-graph environment may include a subdomain-specific graph for each subdomain of a prediction domain. In some examples, the composition of a domain may be defined by a use case, business protocols, and/or laws, and/or may be driven by the plurality of source tables (e.g., an existing relational database table architecture). The multi-graph environment may persist logical blocks of information into a plurality of dedicated subdomain-specific graphs to create an entity-centric multi-graph environment that may present an ideal platform for leveraging the capabilities of graph-based machine learning models.
In some embodiments, the term “subdomain-specific graph” refers to a component of a multi-graph environment that describes a subdomain of a prediction domain. Such a data structure, for example, may persist data in the form of items linked by their relationships to one another. The building blocks of a subdomain-specific graph may include nodes and edges, where nodes are the vertices and edges are the links that connect the nodes. By way of example, a subdomain-specific graph may include a graph data structure, such as an undirected and acyclic graph with a plurality of nodes and edges. In some examples, the nodes and edges of the subdomain-specific graph may be generated based on data from one or more of a plurality of source tables for a prediction domain. For instance, source table data (e.g., member attributes, claim attributes, enterprise attributes, plan attributes, investigation attributes, etc., for a clinical domain) may be aggregated to construct each subdomain-specific graph.
In some embodiments, each respective subdomain-specific graph is generated by aggregating source table data from one or more source tables of a corresponding subdomain. For example, for a clinical domain, the subdomain-specific graphs may include a demographic subdomain-specific graph, a claim subdomain-specific graph, an employment subdomain-specific graph, a plan subdomain-specific graph, an investigation subdomain-specific graph, and/or the like. The demographic subdomain-specific graph may be generated by aggregating member demographic data from one or more member tables. The claim subdomain-specific graph may be generated by aggregating medical claim data from one or more claim tables. The employment subdomain-specific graph may be generated by aggregating employer data from one or more client enterprise tables. The plan subdomain-specific graph may be generated by aggregating health plan data from one or more insurance plan tables. The investigation subdomain-specific graph may be generated by aggregating COB investigation data from one or more investigation tables.
In some embodiments, a subdomain-specific graph defines a plurality of graph nodes and weighted edges. Each of the graph nodes, for example, may correspond to an entity within a prediction domain, such as a member in a clinical domain. Each of the weighted edges connects at least two graph nodes and corresponds to a relationship between the two entities. In this manner, using a clinical domain as an example, a subdomain-specific graph may capture a portion of health insurance data in the form of a heterogeneous graph network by generating a plurality of member nodes from attributes sourced from the respective subdomain-specific source tables of the plurality of source tables. In some examples, in addition to the attributes from the source tables, each subdomain-specific graph may include derived data, such as a member's age (e.g., derived from a date of birth attribute, etc.) or geographic distances (e.g., between employer and member locations, etc.), and/or open source information, such as a geographic location's population, mean age, cost of living, etc., a company's size, industry, revenue, etc., an insurance plan or coverage type, and/or the like.
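By way of illustration only, the following sketch assembles a small demographic subdomain-specific graph with the networkx library: member nodes carry attributes sourced from a hypothetical member table together with derived data such as age, and a weighted edge records a defined relationship. All identifiers, attribute values, and weights are assumptions made for the example.

```python
from datetime import date

import networkx as nx


def age_from_dob(dob: str) -> int:
    """Derive a member's age from a date of birth attribute (YYYY-MM-DD)."""
    born = date.fromisoformat(dob)
    today = date.today()
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))


# Undirected graph, per the heterogeneous graph network described above.
demographic_graph = nx.Graph()

# Member nodes with attributes sourced from a member table plus derived data.
demographic_graph.add_node("M1", date_of_birth="1980-01-15", age=age_from_dob("1980-01-15"))
demographic_graph.add_node("M2", date_of_birth="1982-06-02", age=age_from_dob("1982-06-02"))

# A weighted edge capturing a defined relationship between two member nodes.
demographic_graph.add_edge("M1", "M2", relationship="spouse", weight=1.0)
```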
In some examples, each subdomain-specific graph within a multi-graph environment may capture specific subdomain information within the prediction domain, such as demographics data, employer data, claims history, health data, COB investigation history, and/or the like. Given the entity-centric nature of the multi-graph environment, each subdomain-specific graph in the environment may contain a plurality of common graph nodes (e.g., member nodes in a clinical domain) that are shared across each of the plurality of subdomain-specific graphs and a plurality of subdomain-specific graph nodes, and by extension a plurality of subdomain-specific edges, that correspond to a particular subdomain. In this manner, a multi-graph environment may include an n number of linked subdomain-specific graphs that may comprehensively capture an n number of subdomains within a prediction domain.
In some embodiments, the term “graph node” refers to a component of a subdomain-specific graph that describes an entity within a prediction domain. A graph node, for example, may include a vertex of a subdomain-specific graph that corresponds to an entity within the prediction domain. The entity may depend on the prediction domain. For example, in a clinical domain, the graph node may describe a member within a healthcare system. The graph node may be associated with a plurality of node attributes that correspond to the entity. The node attributes may be aggregated from a plurality of source tables corresponding to the prediction domain. In some examples, the node attributes for a graph node within a subdomain-specific graph may be aggregated from one or more subdomain-specific source tables corresponding to the subdomain-specific graph.
In some embodiments, the term “common graph node” refers to a type of graph node that is included in each subdomain-specific graph within a multi-graph environment. A common graph node, for example, may represent an entity within a prediction domain that is associated with information from each subdomain of the prediction domain. In some examples, a common graph node associated with an entity may be included in each of the plurality of subdomain-specific graphs to represent a plurality of attributes and relationships for the entity across each of the subdomains of the prediction domain. By way of example, in a clinical domain, a common graph node may represent a member within a healthcare system that is associated with demographics data, employer data, claims history, health data, COB investigation history, and/or the like.
In some embodiments, the term “subdomain-specific graph node” refers to one or more types of graph nodes that are specific to a particular subdomain-specific graph within a multi-graph environment. A subdomain-specific graph node, for example, may represent an entity within a prediction domain that is specific to a particular subdomain of information. In some examples, a subdomain-specific graph node may be associated with an entity that is included in one of the plurality of subdomain-specific graphs to represent a plurality of attributes and relationships that are specific to a particular subdomain. By way of example, in a clinical domain, a subdomain-specific graph node may represent a demographic entity (e.g., a physical location, etc.) within a demographic subdomain, a claim entity (e.g., a medical claim, etc.) within a claim subdomain, an employment entity (e.g., an employer, etc.) within an employment subdomain, a plan entity (e.g., a healthcare plan, etc.) within a plan subdomain, an investigation entity (e.g., a COB label, etc.) within an investigation subdomain, and/or the like.
In some embodiments, the term “node attribute” refers to a data entity that describes a parameter of a graph node. A node attribute, for example, may include a data value from at least one source table associated with the prediction domain. Each graph node may be associated with one or more node attributes. Each of the one or more node attributes may describe a characteristic of an entity represented by a respective graph node. In some examples, the one or more node attributes may depend on the prediction domain and/or subdomain thereof. By way of example, in a clinical domain, example node attributes for a demographic subdomain may include an age of a member, example node attributes for a claim subdomain may include a claim history of a member, example node attributes for an employment subdomain may include a number of people employed by an employer, example node attributes for an investigation subdomain may include an indication of whether a member is associated with a COB event, and/or the like.
In some embodiments, the term “weighted edge” refers to a component of a subdomain-specific graph that describes a relationship within a prediction domain and/or subdomain thereof. A weighted edge, for example, may connect two graph nodes of a subdomain-specific graph based on a defined relationship within a subdomain. For example, the graph nodes of the subdomain-specific graph may be connected via various types of relationships that may be expressed using a plurality of subdomain-specific edges. In some examples, some of the connections may have a higher significance, which may be represented by one or more initial edge weights. In some examples, the one or more initial edge weights may be generated based on a relationship weighting ruleset. The relationship weighting ruleset, for example, may include one or more heuristics (e.g., a spouse is more significant than a brother, etc.) that may define a relationship hierarchy based on one or more historical observations for the prediction domain and/or subdomain thereof.
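By way of illustration only, a relationship weighting ruleset of this kind might be expressed as a simple lookup that maps relationship types to initial edge weights, as in the following sketch; the relationship types and weight values shown are assumptions made for the example, not values prescribed by the present disclosure.

```python
# Illustrative relationship weighting ruleset: a heuristic hierarchy in which,
# e.g., a spousal connection is weighted more heavily than a sibling one.
RELATIONSHIP_WEIGHTS = {
    "spouse": 1.0,
    "child": 0.8,
    "sibling": 0.5,
    "provider": 0.3,
}


def initial_edge_weight(relationship_type: str, default: float = 0.1) -> float:
    """Return the initial weight for a weighted edge between two graph nodes."""
    return RELATIONSHIP_WEIGHTS.get(relationship_type, default)


assert initial_edge_weight("spouse") > initial_edge_weight("sibling")
```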
In some examples, the defined relationships may depend on the prediction domain. For example, in a clinical domain, a weighted edge may describe one or more healthcare related relationships, such a familial relationship, legal relationships, healthcare provider relationships, and/or the like. In some examples, the defined relationships may depend on the subdomain. By way of example, in a clinical domain, example defined relationships for a demographic subdomain may include a familial relationship between two members, example defined relationships for a claim subdomain may include a relationship between a member and an insurance claim, example defined relationships for an employment subdomain may include an employment relationship between an employer and a member, example defined relationships for an investigation subdomain may include an indication of whether a member is associated with a COB event, and/or the like.
In some embodiments, the term “graph-based machine learning model” refers to a data entity that describes parameters, hyper-parameters, and/or defined operations of a rules-based and/or machine learning model (e.g., model including at least one of one or more rule-based layers, one or more layers that depend on trained parameters, coefficients, and/or the like). A graph-based machine learning model, for example, may include a machine learning model that is trained to generate graph embeddings for a designated predictive task. A graph-based machine learning model may include one or more of any type of machine learning model including one or more supervised, unsupervised, semi-supervised, and/or reinforcement learning models. In some embodiments, the graph-based machine learning model may include multiple models configured to perform one or more different stages of an embedding process.
In some embodiments, a graph-based machine learning model includes a machine learning model trained, using one or more semi-supervised training techniques, to generate graph embeddings for one or more subdomain-specific graphs and/or combinations thereof. The graph-based machine learning model, for example, may be trained to leverage one or more metapaths formed by a respective subdomain-specific graph to extract diverse semantic information by learning the relevant metapaths and fusing semantic information to improve predictive accuracy with respect to a designated predictive task. To do so, the graph-based machine learning model may include a graph model, such as a multiple graph learning neural network (MGLNN) that is trained to generate node-level and semantic-level attention weights by exploiting the complementary information of multiple graphs. In other words, the goal of the graph-based machine learning model may be to learn an optimal graph structure from multiple graph structures that best serves a designated predictive task.
For example, the graph-based machine learning model may be trained to generate a plurality of node-level weights for the plurality of graph nodes of the subdomain-specific graph. For instance, the graph-based machine learning model may receive node attributes for a respective node, project the node attributes across all of the nodes into the same space, and generate node-level attention values (e.g., node-level weights, etc.) by learning the attention values between the nodes and their meta-path based neighbors. In addition, or alternatively, the graph-based machine learning model may be trained to generate a plurality of semantic-level weights for the plurality of weighted edges of the subdomain-specific graph. For instance, the graph-based machine learning model may learn attention values (e.g., semantic-level weights) of one or more different metapaths within the subdomain-specific graph.
In some examples, the graph-based machine learning model may generate a subdomain-specific embedding based on the learned attention values (e.g., node-level weights, semantic-level weights, etc.). For instance, the graph-based machine learning model may generate an optimal combination of neighbors and metapaths in a hierarchical manner (node-level attention to semantic-level attention), which results in the importance of graph nodes and the metapaths being taken into consideration simultaneously.
In some examples, the graph-based machine learning model may generate subdomain-specific embeddings for each of a plurality of subdomain-specific graphs. The subdomain-specific embeddings may be further processed, using machine learning training techniques, such as back-propagation of errors, to generate a composite graph embedding for a designated predictive task.
In some embodiments, the term “node-level weight” refers to a data value for a graph node that describes a relevance of one or more node attributes. A node-level weight may include one type of attention weight for a subdomain-specific graph. A node-level weight, for example, may include a learned attention value for a graph node that may be based on one or more node attributes of the respective graph node and/or one or more metapath-based neighbors.
In some embodiments, the term “semantic-level weight” refers to a data value for a weighted edge that describes a relevance of one or more metapaths. A semantic-level weight may include one type of attention weight for a subdomain-specific graph. A semantic-level weight, for example, may include a learned attention value for a weighted edge that may be based on one or more weighted edges and/or node attributes within a subdomain-specific graph. A semantic-level weight, for example, may be based on a comparison of one or more metapaths and one or more node labels within a subdomain-specific graph.
In some embodiments, the term “subdomain-specific embedding” refers to a data structure that describes a subdomain-specific graph. A subdomain-specific embedding, for example, may include an encoded vector (and/or any other data representation, etc.) that encodes one or more attributes (e.g., node attributes, edge attributes, etc.) and/or weights (node-level weights, semantic-level weights, etc.) into a data structure representing a subdomain-specific graph. As described herein, a subdomain-specific embedding may include a plurality of vectors, each with a plurality of real numbers representing entities within a subdomain-specific graph that may be leveraged by a plurality of different predictive tasks, including supervised techniques for generating node classifications, and/or the like.
In some embodiments, the term “composite graph embedding” refers to a data structure that describes a plurality of subdomain-specific graphs within a multi-graph environment. A composite graph embedding, for example, may include an approximation of each of the individual subdomain-specific graphs within the multi-graph environment. In this manner, a composite graph embedding may aggregate data from across each subdomain of a prediction domain. The composite graph embedding may be learned to emphasize characteristics that are more likely to result in a prediction output for a designated predictive task. Because the composite graph embedding is a vector, it may be optimized for processing by any type of designated predictive task—including clustering of nodes (unsupervised) or node classification (supervised)—using methods such as similarity calculation and/or complex forms of processing by large transformer models.
In some embodiments, the term “designated predictive task” refers to a predictive task that leverages a composite graph embedding to generate a prediction within a prediction domain. A designated predictive task may include one or more machine learning, rule based, and/or the like processes that may be leveraged to generate a predictive classification. A designated predictive task may depend on the prediction domain. By way of example, in a clinical domain, a designated predictive task may include a classification process for detecting members with overlapping health care coverages, detecting instances of fraud, waste, and/or abuse, and/or the like.
In some embodiments, the term “node label” refers to a node attribute that describes a ground truth value for a designated predictive task. In some examples, a node label may include a node attribute. By way of example, the plurality of graph nodes may include one or more labeled graph nodes and one or more unlabeled graph nodes. In some examples, a designated predictive task may be configured to generate one or more predictive classifications for the one or more unlabeled graph nodes based on the one or more labeled graph nodes. In this respect, a node label may be based on a prediction domain and/or a designated predictive task within the prediction domain. For instance, in a clinical domain, a node label may be indicative of a graph node with overlapping health care coverages (e.g., instances of COB), instances of fraud, waste, and/or abuse, and/or the like.
In some embodiments, the term “classification model” refers to a data entity that describes parameters, hyper-parameters, and/or defined operations of a rules-based and/or machine learning model (e.g., model including at least one of one or more rule-based layers, one or more layers that depend on trained parameters, coefficients, and/or the like). A classification model, for example, may include a machine learning model that is trained to perform a designated predictive task to generate a predictive classification for a prediction domain. A classification model may include one or more of any type of machine learning model including one or more supervised, unsupervised, semi-supervised, and/or reinforcement learning models. In some embodiments, the classification model may include multiple models configured to perform one or more different stages of a classification process.
In some examples, a classification model may include an embedding-based classification model. An embedding-based classification model, for example, may be trained using a plurality of composite embeddings (e.g., and label pairs) to generate probability scores for a predictive classification (e.g., insurance coverage through spouse-to-spouse or child-to-parent or some other relationship, etc.). The embedding-based classification model may generate a predictive classification for one or more graph nodes associated with a probability score over a threshold.
In some examples, a predictive classification may be generated based on a comparison between a first vector from the composite embedding corresponding to a labeled graph node and a second vector from the composite embedding corresponding to an unlabeled graph node. By way of example, a probability score of a predictive classification (e.g., a COB label, etc.) for an unlabeled graph node may be based on a dot-product between the first vector and the second vector. In the event that the two nodes are close in vector space, a high probability score may be generated, and, by extension, a predictive classification may be generated.
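For illustration purposes only, a minimal sketch of such a dot-product comparison is provided below; it assumes unit-normalized embedding vectors, and the example vectors and threshold are hypothetical.

```python
import numpy as np

def classification_probability(labeled_vec, unlabeled_vec):
    """Score an unlabeled node against a labeled node via a normalized dot-product.

    Unit-normalizing both vectors bounds the dot-product in [-1, 1] (cosine
    similarity); mapping that range to [0, 1] yields a probability-like score.
    """
    a = labeled_vec / np.linalg.norm(labeled_vec)
    b = unlabeled_vec / np.linalg.norm(unlabeled_vec)
    return float((a @ b + 1.0) / 2.0)

# Hypothetical composite-embedding vectors for a labeled COB node and an
# unlabeled member node; nodes close in vector space score near 1.0.
labeled = np.array([0.8, 0.1, 0.3])
unlabeled = np.array([0.7, 0.2, 0.35])
score = classification_probability(labeled, unlabeled)
if score > 0.9:  # hypothetical threshold
    print(f"assign predictive classification (score={score:.3f})")
```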
In some examples, a machine learning classification model may include a clustering model. For example, a clustering model may include an unsupervised machine learning model configured to generate one or more node clusters from the plurality of graph nodes within at least one subdomain-specific graph based on the composite embeddings. For example, the clustering model may include one or more hierarchical clustering models, k-means models, mixture models, and/or the like.
In some embodiments, the term “predictive classification” refers to a data entity that describes a predicted value for a designated predictive task. A predictive classification may include an unobserved data value for a graph node that is generated by a designated predictive task. In some examples, a predictive classification may be assigned to a graph node to generate an additional node attribute for a domain-specific graph. A predictive classification may depend on a prediction domain. For example, in a clinical domain, a predictive classification may include a COB label indicating whether a member has dual insurance coverage.
In some embodiments, the term “modification data object” refers to a data entity that describes modified data for a prediction domain. A modification data object may include one or more additional, modified, and/or removed nodes, edges, and/or attributes. A modification data object, for example, may include data that is recorded and/or observed after a generation of a composite embedding. A modification data object, for example, may include an update to one or more of the plurality of source tables of the prediction domain. A modification data object may depend on a prediction domain. For instance, in a clinical domain, a modification data object may describe a new member, a new claim, a new member address, relationship, residence, and/or the like.
In some embodiments, the term “defined time interval” refers to a data entity that describes a unit of time associated with a reception of one or more modification data objects. In some examples, a defined time interval may identify a time period in between one or more versions of a composite embedding. For instance, a defined time interval may identify an update frequency for a composite embedding.
Embodiments of the present disclosure present improved data storage and prediction techniques that leverage a multi-graph environment with multiple graph data structures and compatible graph-based machine learning models to improve predictive classifications in a large data prediction domain. For example, some techniques of the present disclosure enable the generation of subdomain-specific graphs tailored to individual subdomains within a complex prediction domain. The subdomain-specific graphs may be processed using a compatible graph-based machine learning model to generate a plurality of attention weights that capture attribute relevance with respect to metapaths within the graphs. In this manner, the subdomain-specific graphs may be individually attended (and reattended, etc.) to capture semantic information that is tailored with respect to a diverse set of entities represented within an information segment of a prediction domain. Embeddings of these graphs may be extracted and then aggregated to create a composite graph embedding that is learned based on a designated predictive task. In this way, the subdomain-specific graphs may be leveraged to flexibly reconfigure data in a large data prediction domain for any designated predictive task. Moreover, the subdomain-specific graphs may be augmented over time, and then reattended, to accommodate changes within the prediction domain. This, in turn, enables an improved flexible data storage mechanism (e.g., with respect to traditional storage mechanisms such as those described herein, etc.) that is reconfigurable, modifiable, and capable of representing dense and diverse relationships across a large data prediction domain.
For example, the subdomain-specific graphs may each include a heterogeneous, undirected graph which captures core data—such as member data, employer data, and geolocation data in a clinical domain—in the form of nodes. Weighted edges may be added to capture the connection data—such as relationship, proximity, and employment type in a clinical domain. Complex semantic information may be stored in each subdomain-specific graph and reflected by meta-paths (e.g., sequences of edges, etc.) connected with one or more of the weighted edges between an origin node and a destination node. Different meta-paths in the subdomain-specific graphs may reflect diverse semantic information that may be reflective of a relevancy between entities within each of the subdomain-specific graphs with respect to a particular information segment of a prediction domain. In some examples, certain meta-paths may be more significant than others depending on a designated predictive task. Some examples of this may include, for a clinical domain, (i) family connections are more significant compared to professional ones, (ii) the longer the duration of employment with a particular company, the greater the strength of the connection, (iii) entities with close geographic proximity could carry a higher significance when compared to those with greater geographic distances, and/or the like. Each graph node and weighted edge of a subdomain-specific graph may store large amounts of data from source tables defining various relationships within a prediction domain. In this way, all (or a majority of, etc.) dimensions of information aggregated from across a plurality of source tables for a prediction domain, and in some examples additionally derived information (node and edge attributes), may be effectively captured with a relationship-centric approach. This enables the complexities and interconnectedness in the data to be captured, which is a significant improvement over existing solutions.
As described herein, the improved data storage mechanism (e.g., the subdomain-specific graphs) of the present disclosure may enable improved prediction techniques for various designated predictive tasks within a prediction domain. For instance, once configured (e.g., attended, etc.), the subdomain-specific graphs may be leveraged to generate subdomain-specific embeddings that capture dense relationship information tailored to a particular information segment within a prediction domain. These embeddings may be processed to generate a composite graph embedding that is learned for a particular designated predictive task. The composite graph embedding may then be processed using various predictive techniques to generate predictive classifications for entities within a prediction domain. Unlike traditional predictive classification techniques, the composite graph embedding may be structured in a data format that allows machine learning models (or other predictive techniques) to efficiently process and train on diverse relationship and semantic information expressed within a prediction domain. This, in turn, improves model performance and training efficiency which, ultimately, results in a reduction of computing resources and processing times, while achieving improved predictive performances.
By way of example, once the data is captured in the form of subdomain-specific graphs, graph nodes (e.g., representing members in a clinical domain, etc.) in the subdomain-specific graphs that have a high probability for a predictive classification may be identified using vectors within the composite graph embedding that correspond to the respective graph nodes. The vectors, for example, may be generated using a graph-based machine learning model that learns the importance of various meta-paths within each of the subdomain-specific graphs (e.g., as expressed by individual subdomain-specific graph embeddings, etc.) to generate a composite vector reflective of the learned importance. For each graph node, the importance of meta-path based neighbors may be learned, and the graph-based machine learning model may assign additional weights (e.g., node-level weights, etc.) reflective of the learned importance. This results in semantic-level and node-level attentions, respectively. The attention values between graph nodes and their meta-path based neighbors may be aggregated to generate semantic-specific node embeddings (e.g., graph node embeddings, etc.). The graph-based machine learning model may identify an optimal combination of neighbors and meta-paths in a hierarchical manner, which enables a learned composite embedding that better captures the complex structure and rich semantic information in a multi-graph environment. Using a semi-supervised approach, the graph-based machine learning model may generate an optimal weighted combination of subdomain-specific graph embeddings that are tailored to a designated predictive task. These embeddings, reflected by the composite graph embedding, may then be used as features in various models that are optimized for the designated predictive task. The subdomain-specific graphs may avoid exposing such models to data that is not pure, focusing strictly on data that is pure and confirmed. In this way, the subdomain-specific graphs and compatible graph-based machine learning model are able to leverage all of the complexities captured/available in a prediction domain, thereby achieving higher performance when compared to existing solutions.
Examples of technologically advantageous embodiments of the present disclosure include: (i) subdomain-specific graph building techniques for large data prediction domains, (ii) graph-based predictive classification techniques that leverage a multi-graph environment to generate holistic predictive classifications, among other aspects of the present disclosure. Other technical improvements and advantages may be realized by one of ordinary skill in the art.
As indicated, various embodiments of the present disclosure make important technical contributions to data storage and predictive modeling technology. In particular, systems and methods are disclosed herein that implement subdomain-specific graphs and compatible graph-based machine learning models configured to model large and diverse datasets within a prediction domain. By doing so, dense embeddings may be generated at an entity level that capture relevant relationships across a diverse prediction domain. As described herein, these embeddings may be leveraged to improve various predictive tasks, which may result in improved machine learning training and inference techniques.
In some embodiments, a plurality of subdomain-specific graphs 304a-n are generated for a prediction domain. The subdomain-specific graphs 304a-n may be generated using a plurality of source tables for the prediction domain. The source tables, for example, may include a plurality of subdomain-specific source tables 302a-n that respectively correspond to one or more subdomains within the prediction domain. In some examples, each of the subdomain-specific graphs 304a-n may include a respective plurality of graph nodes and a respective plurality of weighted edges between the respective plurality of graph nodes. This is illustrated, for example, by graph node 306 and weighted edge 310 of the subdomain-specific graph 304a.
In some embodiments, the prediction domain is an area of knowledge that may be augmented by one or more predictions using some of the techniques of the present disclosure. A prediction domain may include any knowledge area and may be associated with data descriptive of one or more known characteristics, attributes, and/or the like within the knowledge area. Examples of prediction domains may include financial domains, clinical domains, logistics domains, and/or the like. Prediction domains may be associated with big data that includes a variety of different data types, at times, arriving in increasing volumes and velocity. For example, a clinical domain may be associated with various clinical datasets each describing attributes for one or more entities (e.g., members, providers, etc.) that operate within the clinical domain. By way of example, a prediction domain may include a coordination of benefits (COB) domain in which healthcare insurers offer, manage, and/or facilitate one or more different coverage plans across a plurality of members.
In some embodiments, diverse sets of data associated with a prediction domain may be monitored, received, updated, and/or stored to generate a global knowledge base for the prediction domain. The global knowledge base may aggregate data from across a plurality of subdomains within the prediction domain.
In some embodiments, a subdomain is a portion of a prediction domain. For example, a prediction domain may include a plurality of subdomains. Each subdomain may include a particular information segment from the prediction domain. Each particular information segment may include information for a specific facet of the prediction domain. By way of example, using a clinical domain for illustration, a clinical domain may include a plurality of clinical subdomains that each contain specific facets of member information for a plurality of members within the clinical domain. The clinical subdomains, for example, may include a demographic subdomain including member demographic data, a claim subdomain including medical claim data, an employment subdomain including employer data, a plan subdomain including health plan data, an investigation subdomain including COB investigation data, and/or the like.
In some examples, a prediction domain may be divided into a plurality of subdomains. For instance, a plurality of subdomains, taken together, may include all of the information available in a prediction domain. In some examples, each of the plurality of subdomains may correspond to one or more information definitions. Subdomains for a prediction domain may be added, modified, and/or removed by changing the one or more information definitions.
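For illustration purposes only, one possible set of information definitions is sketched below; the subdomain names and table mappings are hypothetical examples drawn from the clinical use case and are not limiting.

```python
# Hypothetical information definitions mapping each subdomain to the source
# tables that contain its information segment. Adding, modifying, or removing
# an entry reconfigures the subdomain decomposition of the prediction domain.
INFORMATION_DEFINITIONS = {
    "demographic":   {"source_tables": ["member"]},
    "claim":         {"source_tables": ["claim"]},
    "employment":    {"source_tables": ["client_enterprise"]},
    "plan":          {"source_tables": ["insurance_plan"]},
    "investigation": {"source_tables": ["investigation"]},
}
```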
In some examples, data corresponding to each of the plurality of subdomains may be stored in a plurality of disparate data structures. In some examples, each subdomain may be associated with a respective one or more of the plurality of disparate data structures. The plurality of disparate data structures, for example, may include one or more subdomain-specific source tables 302a-n for each subdomain within a prediction domain.
In some embodiments, a source table is a data structure that describes data associated with a portion of a prediction domain. A source table may include any type of data storage structure including, as examples, one or more linked lists, databases, and/or the like. In some examples, a source table may include a relational database. For instance, data associated with portions of the prediction domain may be persisted in one or more relational databases where it is organized in one or more different data tables. In some examples, the one or more different data tables may be linked by relationships to entities within a prediction domain. As an example, in a clinical domain, the core of the generated data for the clinical domain is member related information and/or associated transactional data that is not limited to a member's health. In such an example, each of the source tables may include a plurality of attributes that are directly and/or indirectly linked to a member within the clinical domain.
A prediction domain may be associated with a plurality of source tables. In some examples, the plurality of source tables may include one or more subdomain-specific source tables 302a-n for each subdomain of the prediction domain. Using a clinical domain as an example, a demographic subdomain may include member demographic data stored in one or more member tables, a claim subdomain may include medical claim data stored in one or more claim tables, an employment subdomain may include employer data stored in one or more client enterprise tables, a plan subdomain may include health plan data stored in one or more insurance plan tables, and/or an investigation subdomain may include COB investigation data stored in one or more investigation tables.
Each of the subdomain-specific source tables 302a-n may include one or more member attributes and/or attributes that may be linked to a member. For instance, a member table may include member attributes, such as a member identifier, a date of birth, an address, a plan identifier, a family member identifier of another member, a relationship type with respect to the other member, an employer identifier, an employment type, among other attributes. In some examples, a claim table may include claim attributes that are associated with a member, such as a claim identifier, the member identifier corresponding to a member associated with the claim, a timestamp, a cost, a healthcare location (e.g., hospital, etc.), a diagnosis code, among other attributes. In some examples, a client enterprise table may include enterprise attributes that are associated with a member, such as an employer identifier (e.g., linked to an employer identifier in a member table, etc.), an address, a number of employees, insurance plan identifiers, start dates, among other attributes. In some examples, an insurance plan table may include plan attributes that are associated with a member, such as a plan identifier (e.g., linked to a plan identifier in a member table, etc.), a plan name, a cost, a location of services, contextual details, among other attributes. In some examples, an investigation table may include investigation attributes that are associated with a member, such as the member identifier, an investigation identifier, an investigation date, an investigation result, among other attributes.
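For illustration purposes only, a minimal sketch of two linked source tables is provided below; the column names and values are hypothetical, and the member identifier serves as the linking key between the tables.

```python
import pandas as pd

# Hypothetical, minimal source tables; the column names are illustrative only.
member = pd.DataFrame({
    "member_id": [1, 2],
    "date_of_birth": ["1980-01-15", "1982-06-02"],
    "plan_id": [10, 11],
    "employer_id": [100, 101],
    "family_member_id": [2, 1],
    "relationship_type": ["spouse", "spouse"],
})
claim = pd.DataFrame({
    "claim_id": [500],
    "member_id": [1],          # links the claim table to the member table
    "cost": [1250.00],
    "diagnosis_code": ["J45"],
})

# The member_id key indirectly links claim attributes to a member entity.
linked = claim.merge(member, on="member_id")
print(linked[["claim_id", "member_id", "relationship_type"]])
```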
In some examples, a plurality of source tables may be processed using analytic models that may be augmented by machine learning techniques. However, the complex connections between the plurality of source tables reduce the efficiency, reliability, adaptability, and the functionality of such techniques. To address this technical challenge, data from a plurality of subdomain-specific source tables 302a-n for a prediction domain may be aggregated to generate a plurality of subdomain-specific graphs 304a-n within a multi-graph environment.
In some embodiments, each of the plurality of subdomain-specific graphs 304a-n include a separate heterogeneous and undirected graph data structure. In some examples, each subdomain-specific graph 304a may include a graph data structure for a particular subdomain of the prediction domain. For instance, each of the plurality of subdomain-specific source tables 302a-n may include subdomain data for a subdomain of the prediction domain. A subdomain-specific graph 304a of the plurality of subdomain-specific graphs 304a-n may be generated based on subdomain data from a corresponding subdomain-specific source table 302a of the plurality of subdomain-specific source tables 302a-n.
As described herein, in some examples, the subdomain-specific source tables 302a-n may include overlapping entities that link the tables. These linkages may be maintained by the subdomain-specific graphs 304a-n. For example, a plurality of graph nodes 306 for a subdomain-specific graph 304a of the plurality of subdomain-specific graphs 304a-n may include a set of common graph nodes that are within each of the plurality of subdomain-specific graphs 304a-n and a set of subdomain-specific graph nodes specific to the subdomain-specific graph 304a. In some examples, the common graph nodes may include labeled graph nodes and/or unlabeled graph nodes. The unlabeled graph nodes may be labeled using one or more techniques of the present disclosure.
In some embodiments, a subdomain-specific graph 304a is a component of a multi-graph environment that describes a subdomain of a prediction domain. The subdomain-specific graph 304a may include a graph data structure, for example, that may persist data in the form of items linked by their relationships to one another. The building blocks of a subdomain-specific graph 304a may include nodes and edges, where nodes are the vertices and edges are the links that connect the nodes. By way of example, a subdomain-specific graph 304a may include an undirected and acyclic graph with a plurality of nodes and edges. In some examples, the nodes and edges of the subdomain-specific graph 304a may be generated based on data from one or more of a plurality of subdomain-specific source tables 302a-n for the prediction domain. For instance, source table data (e.g., member attributes, claim attributes, enterprise attributes, plan attributes, investigation attributes, etc. for a clinical domain) may be aggregated to construct each of the subdomain-specific graphs 304a-n.
In some embodiments, each respective subdomain-specific graph 304a-n is generated by aggregating source table data from one or more subdomain-specific source tables 302a-n of a corresponding subdomain. For example, for a clinical domain, the subdomain-specific graphs 304a-n may include a demographic subdomain-specific graph, a claim subdomain-specific graph, an employment subdomain-specific graph, a plan subdomain-specific graph, an investigation subdomain-specific graph, and/or the like. A demographic subdomain-specific graph may be generated by aggregating member demographic data from one or more member tables. A claim subdomain-specific graph may be generated by aggregating medical claim data from one or more claim tables. An employment subdomain-specific graph may be generated by aggregating employer data from one or more client enterprise tables. A plan subdomain-specific graph may be generated by aggregating health plan data from one or more insurance plan tables. An investigation subdomain-specific graph may be generated by aggregating COB investigation data from one or more investigation tables; and other distinct subdomain-specific graphs may be generated by aggregating data associated with any other clinical subdomain.
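For illustration purposes only, a minimal sketch of aggregating source table rows into a demographic subdomain-specific graph is provided below; it assumes the NetworkX library, and the field names, derived age computation, reference year, and example rows are hypothetical and not limiting.

```python
import networkx as nx

def build_demographic_graph(member_rows, ruleset):
    """Aggregate member table rows into a demographic subdomain-specific graph.

    Nodes are member entities carrying node attributes; undirected weighted
    edges capture defined relationships, with initial weights drawn from a
    relationship weighting ruleset.
    """
    graph = nx.Graph()  # heterogeneous, undirected
    for row in member_rows:
        # Derived attribute (illustrative): age computed from date of birth,
        # using a hypothetical fixed reference year.
        graph.add_node(("member", row["member_id"]),
                       node_type="member",
                       age=2024 - int(row["date_of_birth"][:4]))
        if row.get("family_member_id") is not None:
            rel = row["relationship_type"]
            graph.add_edge(("member", row["member_id"]),
                           ("member", row["family_member_id"]),
                           relationship=rel,
                           weight=ruleset.get(rel, 0.1))
    return graph

# Hypothetical input rows; field names are illustrative only.
rows = [
    {"member_id": 1, "date_of_birth": "1980-01-15",
     "family_member_id": 2, "relationship_type": "spouse"},
    {"member_id": 2, "date_of_birth": "1982-06-02",
     "family_member_id": 1, "relationship_type": "spouse"},
]
g = build_demographic_graph(rows, {"spouse": 1.0})
print(g.number_of_nodes(), g.number_of_edges())  # 2 nodes, 1 weighted edge
```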
In some embodiments, a subdomain-specific graph 304a defines a plurality of graph nodes 306 and weighted edges 310. Each of the graph nodes 306, for example, may correspond to an entity within a prediction domain, such as a member in a clinical domain. Each of the weighted edges 310 connects at least two graph nodes and corresponds to a relationship between the two represented entities. In this manner, using a clinical domain as an example, a subdomain-specific graph 304a may capture a portion of health insurance data in the form of a heterogeneous graph network by generating a plurality of member nodes from attributes sourced from respective subdomain-specific source tables 302a of the plurality of subdomain-specific source tables 302a-n. In some examples, in addition to the attributes from the subdomain-specific source tables 302a-n, each subdomain-specific graph 304a-n may include derived data, such as a member's age (e.g., derived from a date of birth attribute, etc.), geographic distances (e.g., derived from employer and member locations, etc.), and/or open source information, such as a geographic location's population, mean age, cost of living, etc., a company's size, industry, revenue, etc., an insurance plan or coverage type, and/or the like.
In some examples, each subdomain-specific graph 304a-n within a multi-graph environment may capture specific subdomain information within the prediction domain, such as demographics data, employer data, claims history, health data, COB investigation history, and/or the like. Given the entity-centric nature of the multi-graph environment, each subdomain-specific graph 304a-n in the environment may contain a plurality of common graph nodes (e.g., member nodes in a clinical domain) that are shared across each of the plurality of subdomain-specific graphs 304a-n and a plurality of subdomain-specific graph nodes, and by extension a plurality of subdomain-specific edges, that correspond to a particular subdomain. In this manner, a multi-graph environment may include an n number of linked subdomain-specific graphs 304a-n that may comprehensively capture an n number of subdomains within a prediction domain.
In some embodiments, the graph nodes 306 are components of a subdomain-specific graph 304a that describe one or more entities within a prediction domain. A graph node 306, for example, may include a vertex of the subdomain-specific graph that corresponds to an entity within the prediction domain. The entity may depend on the prediction domain. For example, in a clinical domain, the graph node 306 may describe a member within a healthcare system. The graph node 306 may be associated with a plurality of node attributes 308 that correspond to the entity. The node attributes 308 may be aggregated from a plurality of subdomain-specific source tables 302a-n corresponding to the prediction domain. In some examples, the node attributes 308 for a graph node 306 within a subdomain-specific graph 304a may be aggregated from one or more subdomain-specific source tables 302a corresponding to the subdomain-specific graph 304a.
In some embodiments, a common graph node is a type of graph node 306 that is included in each subdomain-specific graph 304a-n within a multi-graph environment. A common graph node, for example, may represent an entity within a prediction domain that is associated with information from each subdomain of the prediction domain. In some examples, a common graph node associated with an entity may be included in each of the plurality of subdomain-specific graphs 304a-n to represent a plurality of attributes and relationships for the entity across each of the subdomains of the prediction domain. By way of example, in a clinical domain, a common graph node may represent a member within a healthcare system that is associated with demographics data, employer data, claims history, health data, COB investigation history, and/or the like.
In some embodiments, the subdomain-specific graph node is a type of graph node 306 that is specific to a particular subdomain-specific graph 304a within a multi-graph environment. A subdomain-specific graph node, for example, may represent an entity within a prediction domain that is specific to a particular subdomain of information. In some examples, a subdomain-specific graph node may be associated with an entity that is included in one of the plurality of subdomain-specific graphs 304a-n to represent a plurality of attributes and relationships that are specific to a particular subdomain. By way of example, in a clinical domain, a subdomain-specific graph node may represent a demographic entity (e.g., a physical location, etc.) within a demographic subdomain, a claim entity (e.g., a medical claim, etc.) within a claim subdomain, an employment entity (e.g., an employer, etc.) within an employment subdomain, a plan entity (e.g., a healthcare plan, etc.) within a plan subdomain, an investigation entity (e.g., a COB label, etc.) within an investigation subdomain, and/or the like.
In some embodiments, a node attribute 308 is a data entity that describes a parameter of a graph node 306. A node attribute 308, for example, may include a data value from at least one subdomain-specific source table 302a-n associated with the prediction domain. Each graph node 306 may be associated with one or more node attributes 308. Each of the one or more node attributes 308 may describe a characteristic of an entity represented by a respective graph node. In some examples, the one or more node attributes 308 may depend on the prediction domain and/or subdomain thereof. By way of example, in a clinical domain, example node attributes for a demographic subdomain may include an age of a member, example node attributes for a claim subdomain may include a claim history of a member, example node attributes for an employment subdomain may include a number of people employed by an employer, example node attributes for an investigation subdomain may include an indication of whether a member is associated with a COB event, and/or the like.
In some embodiments, a weighted edge 310 is a component of a subdomain-specific graph 304a that describes a relationship within a prediction domain and/or subdomain thereof. A weighted edge 310, for example, may connect two graph nodes of a subdomain-specific graph 304a based on a defined relationship within a subdomain. For example, the graph nodes 306 of the subdomain-specific graph 304a may be connected via various types of relationships that may be expressed using a plurality of subdomain-specific edges. In some examples, some of the connections have a higher significance that may be represented by one or more initial edge weights. In some examples, the one or more initial edge weights may be generated based on a relationship weighting ruleset. The relationship weighting ruleset, for example, may include one or more heuristics (e.g., a spouse is more significant than a brother, etc.) that may define a relationship hierarchy based on one or more historical observations for the prediction domain and/or subdomain thereof.
In some examples, the defined relationships may depend on the prediction domain. For example, in a clinical domain, a weighted edge 310 may describe one or more healthcare related relationships, such as familial relationships, legal relationships, healthcare provider relationships, and/or the like. In some examples, the defined relationships may depend on the subdomain. By way of example, in a clinical domain, example defined relationships for a demographic subdomain may include a familial relationship between two members, example defined relationships for a claim subdomain may include a relationship between a member and an insurance claim, example defined relationships for an employment subdomain may include an employment relationship between an employer and a member, example defined relationships for an investigation subdomain may include an indication of whether a member is associated with a COB event, and/or the like.
In some embodiments, a plurality of subdomain-specific embeddings 314 are generated for the plurality of subdomain-specific graphs 304a-n. For example, the subdomain-specific embeddings 314 may be generated using a graph-based machine learning model 312. The subdomain-specific embeddings 314 may include a respective subdomain-specific embedding for each of the plurality of subdomain-specific graphs 304a-n.
In some embodiments, the graph-based machine learning model 312 refers to a data entity that describes parameters, hyper-parameters, and/or defined operations of a rules-based and/or machine learning model (e.g., model including at least one of one or more rule-based layers, one or more layers that depend on trained parameters, coefficients, and/or the like). A graph-based machine learning model 312, for example, may include a machine learning model that is trained to generate graph embeddings for a designated predictive task 318. A graph-based machine learning model 312 may include one or more of any type of machine learning model including one or more supervised, unsupervised, semi-supervised, and/or reinforcement learning models. In some embodiments, the graph-based machine learning model 312 may include multiple models configured to perform one or more different stages of an embedding process.
In some embodiments, the graph-based machine learning model 312 includes a machine learning model trained, using one or more semi-supervised training techniques, to generate graph embeddings for one or more subdomain-specific graphs 304a-n and/or combinations thereof. The graph-based machine learning model 312, for example, may be trained to leverage one or more metapaths formed by a respective subdomain-specific graph 304a to extract diverse semantic information by learning the relevant metapaths and fusing semantic information to improve predictive accuracy with respect to the designated predictive task 318. To do so, the graph-based machine learning model 312 may include a graph model, such as a multiple graph learning neural network (MGLNN) that is trained to generate node-level and semantic-level attention weights by exploiting the complementary information of multiple graphs. In other words, the goal of the graph-based machine learning model 312 may be to learn an optimal graph structure from multiple graph structures that best serves the designated predictive task 318.
For example, the graph-based machine learning model 312 may be trained to generate a plurality of node-level weights for the plurality of graph nodes 306 of the subdomain-specific graph 304a. For instance, the graph-based machine learning model 312 may receive node attributes 308 for a respective graph node 306, project the node attributes 308 across all of the nodes into the same space, and generate node-level attention values (e.g., node-level weights, etc.) by learning the attention values between the nodes and their meta-path based neighbors. In addition, or alternatively, the graph-based machine learning model 312 may be trained to generate a plurality of semantic-level weights for the plurality of weighted edges 310 of the subdomain-specific graph 304a. For instance, the graph-based machine learning model 312 may learn attention values (e.g., semantic-level weights, etc.) of one or more different metapaths within the subdomain-specific graph 304a.
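For illustration purposes only, a greatly simplified sketch of the hierarchical (node-level and semantic-level) attention pattern is provided below; it assumes PyTorch and one adjacency matrix per metapath, and it is illustrative of the attention mechanism rather than a full MGLNN implementation. The class name, dimensions, and toy inputs are hypothetical.

```python
import torch
import torch.nn as nn

class MiniHierarchicalAttention(nn.Module):
    """Illustrative hierarchical attention: node-level attention over
    metapath-based neighbors, then semantic-level attention over metapaths."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.project = nn.Linear(in_dim, out_dim)   # shared attribute projection
        self.node_att = nn.Linear(2 * out_dim, 1)   # node-level attention scores
        self.sem_att = nn.Linear(out_dim, 1)        # semantic-level attention scores

    def forward(self, x, metapath_adjs):
        # Project all node attributes into the same space.
        h = self.project(x)                                   # (N, out_dim)
        semantic_embeddings = []
        for adj in metapath_adjs:                             # one adj per metapath
            # Node-level attention between nodes and metapath-based neighbors.
            n = h.size(0)
            pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                              h.unsqueeze(0).expand(n, n, -1)], dim=-1)
            scores = self.node_att(pair).squeeze(-1)          # (N, N)
            scores = scores.masked_fill(adj == 0, float("-inf"))
            alpha = torch.softmax(scores, dim=-1)             # node-level weights
            alpha = torch.nan_to_num(alpha)                   # guard isolated nodes
            semantic_embeddings.append(alpha @ h)             # (N, out_dim)
        z = torch.stack(semantic_embeddings, dim=1)           # (N, P, out_dim)
        # Semantic-level attention fuses the per-metapath embeddings.
        beta = torch.softmax(self.sem_att(z).mean(0), dim=0)  # (P, 1)
        return (z * beta.unsqueeze(0)).sum(dim=1)             # fused embedding

# Hypothetical toy input: 4 nodes, 8 attributes, 2 metapath adjacencies.
x = torch.randn(4, 8)
adjs = [torch.eye(4), torch.ones(4, 4)]
model = MiniHierarchicalAttention(in_dim=8, out_dim=16)
print(model(x, adjs).shape)  # torch.Size([4, 16])
```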
In some examples, the graph-based machine learning model 312 may generate subdomain-specific embeddings 314 based on the learned attention values (e.g., node-level weights, semantic-level weights, etc.). For instance, the graph-based machine learning model 312 may generate an optimal combination of neighbors and metapaths in a hierarchical manner (node-level attention to semantic-level attention), which results in the importance of graph nodes and the metapaths being taken into consideration simultaneously.
In some examples, the graph-based machine learning model 312 may generate subdomain-specific embeddings 314 for each of a plurality of subdomain-specific graphs 304a-n. The subdomain-specific embeddings 314 may be further processed, using machine learning training techniques, such as back-propagation of errors, to generate a composite graph embedding 316 for a designated predictive task 318.
In some embodiments, a subdomain-specific embedding 314 is a data structure that describes a subdomain-specific graph 304a. A subdomain-specific embedding 314, for example, may include an encoded vector (and/or any other data representation, etc.) that encodes one or more attributes (e.g., node attributes, edge attributes, etc.) and/or weights (node-level weights, semantic-level weights, etc.) into a data structure representing a subdomain-specific graph 304a. As described herein, a subdomain-specific embedding 314 may include a plurality of vectors, each with a plurality of real numbers representing entities within a subdomain-specific graph 304a that may be leveraged by a plurality of different predictive tasks, including supervised techniques for generating node classifications, and/or the like.
In some embodiments, a subdomain-specific embedding of the plurality of subdomain-specific embeddings 314 is based on a plurality of attention weights assigned to a plurality of graph nodes 306 and a plurality of weighted edges 310 of a subdomain-specific graph 304a corresponding to the subdomain-specific embedding. The plurality of attention weights, for example, may include a plurality of node-level weights and/or semantic-level weights. In some examples, a plurality of node-level weights may be generated for the plurality of graph nodes 306 of the subdomain-specific graph 304a based on a plurality of node attributes 308 corresponding to the plurality of graph nodes 306. In addition, or alternatively, a plurality of semantic-level weights may be generated for the plurality of weighted edges 310 of the subdomain-specific graph 304a based on one or more metapaths (e.g., sequences of edges connecting one or more graph nodes, etc.) within the subdomain-specific graph 304a.
In some embodiments, a node-level weight is a data value for a graph node 306 that describes a relevance of one or more node attributes 308. A node-level weight may include one type of attention weight for a subdomain-specific graph 304a. A node-level weight, for example, may include a learned attention value for a graph node 306 that may be based on one or more node attributes of the respective graph node 306 and/or one or more metapath-based neighbors.
In some embodiments, a semantic-level weight is a data value for a weighted edge 310 that describes a relevance of one or more metapaths. A semantic-level weight may include one type of attention weight for a subdomain-specific graph 304a. A semantic-level weight, for example, may include a learned attention value for a weighted edge 310 that may be based on one or more weighted edges 310 and/or node attributes 308 within a subdomain-specific graph 304a. A semantic-level weight, for example, may be based on a comparison of one or more metapaths and one or more node labels within a subdomain-specific graph 304a.
In some embodiments, a composite graph embedding 316 is generated based on the plurality of subdomain-specific embeddings 314 and/or a designated predictive task 318. For example, the composite graph embedding 316 may be generated using the graph-based machine learning model 312. In some embodiments, a plurality of node attributes may include one or more node labels for the designated predictive task 318. A model loss may be generated, using a semi-supervised loss function, for the graph-based machine learning model 312 based on the composite graph embedding 316. In some examples, the composite graph embedding 316 may be updated, using a machine learning training technique, based on the model loss.
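For illustration purposes only, a minimal sketch of a semi-supervised loss function evaluated only over labeled graph nodes is provided below; it assumes PyTorch, and the label mask, classifier head, and toy dimensions are hypothetical.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(composite_embedding, labels, labeled_mask, classifier):
    """Cross-entropy evaluated only on labeled graph nodes; unlabeled nodes
    contribute no loss terms but still shape the learned embedding."""
    logits = classifier(composite_embedding)               # (N, num_classes)
    return F.cross_entropy(logits[labeled_mask], labels[labeled_mask])

# Hypothetical toy setup: 5 nodes, a 16-dim composite embedding, 2 classes,
# with only the first two nodes carrying ground-truth node labels.
emb = torch.randn(5, 16, requires_grad=True)
labels = torch.tensor([1, 0, 0, 0, 0])
mask = torch.tensor([True, True, False, False, False])
head = torch.nn.Linear(16, 2)

loss = semi_supervised_loss(emb, labels, mask, head)
loss.backward()  # back-propagation updates the embedding and model parameters
```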
In some embodiments, a composite graph embedding 316 is a data structure that describes a plurality of subdomain-specific graphs 304a-n within a multi-graph environment. A composite graph embedding 316, for example, may include an approximation of each of the individual subdomain-specific graphs 304a-n within the multi-graph environment. In this manner, a composite graph embedding 316 may aggregate data from across each subdomain of a prediction domain. The composite graph embedding 316 may be learned to emphasize characteristics that are more likely to result in a prediction output for a designated predictive task 318. In some examples, the composite graph embedding 316 may be a vector, such that it may be optimized for processing by any type of designated predictive task 318—including clustering of nodes (unsupervised) or node classification (supervised)—using methods such as similarity calculation and/or complex forms of processing by large transformer models.
In some embodiments, the designated predictive task 318 is a predictive task that leverages a composite graph embedding 316 to generate a prediction within a prediction domain. The designated predictive task 318 may include one or more machine learning, rule based, and/or the like processes that may be leveraged to generate a predictive classification. The designated predictive task 318 may depend on the prediction domain. By way of example, in a clinical domain, the designated predictive task 318 may include a classification process for detecting members with overlapping health care coverages, detecting instances of fraud, waste, and/or abuse, and/or the like.
In some embodiments, a node label is a node attribute that describes a ground truth value for a designated predictive task 318. In some examples, a node label may include a node attribute 308. By way of example, the plurality of graph nodes 306 may include one or more labeled graph nodes and/or one or more unlabeled graph nodes. In some examples, the designated predictive task 318 may be configured to generate one or more predictive classifications for the one or more unlabeled graph nodes based on the one or more labeled graph nodes. In this respect, a node label may be based on a prediction domain and/or a designated predictive task 318 within the prediction domain. For instance, in a clinical domain, a node label may be indicative of a graph node with overlapping health care coverages (e.g., instances of COB), instances of fraud, waste, and/or abuse, and/or the like.
In some embodiments, the performance of the designated predictive task 318 is initiated based on the composite graph embedding 316. In some examples, the designated predictive task 318 is a machine learning classification task and initiating the performance of the designated predictive task 318 may include generating, using a classification model 320, a predictive classification 322 for an unlabeled graph node associated with the plurality of subdomain-specific graphs 304a-n. In some examples, each of the plurality of subdomain-specific graphs 304a-n may be modified by assigning the predictive classification 322 to an unlabeled graph node.
In some embodiments, the classification model 320 is a data entity that describes parameters, hyper-parameters, and/or defined operations of a rules-based and/or machine learning model (e.g., model including at least one of one or more rule-based layers, one or more layers that depend on trained parameters, coefficients, and/or the like). A classification model 320, for example, may include a machine learning model that is trained to perform a designated predictive task 318 to generate a predictive classification 322 for a prediction domain. A classification model 320 may include one or more of any type of machine learning model including one or more supervised, unsupervised, semi-supervised, and/or reinforcement learning models. In some embodiments, the classification model 320 may include multiple models configured to perform one or more different stages of a classification process.
In some examples, the classification model 320 may include an embedding-based classification model. An embedding-based classification model, for example, may be trained using a plurality of composite embeddings (e.g., and label pairs) to generate probability scores for a predictive classification (e.g., insurance coverage through spouse-to-spouse or child-to-parent or some other relationship, etc.). The embedding-based classification model may generate a predictive classification 322 for one or more graph nodes 306 associated with a probability score over a threshold.
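For illustration purposes only, a minimal sketch of training an embedding-based classification model on composite embedding and label pairs is provided below; it assumes scikit-learn's logistic regression as one possible classifier, and the synthetic data, dimensions, and threshold are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: composite-embedding vectors paired with labels
# (1 = confirmed COB event, 0 = no COB event).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))
y_train = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score unlabeled nodes; assign a predictive classification over a threshold.
X_unlabeled = rng.normal(size=(5, 16))
probs = clf.predict_proba(X_unlabeled)[:, 1]
predicted = probs > 0.8  # hypothetical threshold
print(probs.round(3), predicted)
```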
In some examples, a predictive classification 322 may be generated based on a comparison between a first vector from the composite graph embedding 316 corresponding to a labeled graph node and a second vector from the composite graph embedding 316 corresponding to an unlabeled graph node. By way of example, a probability score of a predictive classification 322 (e.g., a COB label, etc.) for an unlabeled graph node may be based on a dot-product between the first vector and the second vector. In the event that the two nodes are close in vector space, a high probability score may be generated, and, by extension, a predictive classification 322 may be generated.
In some examples, a classification model 320 may include a clustering model. For example, a clustering model may include an unsupervised machine learning model configured to generate one or more node clusters from the plurality of graph nodes 306 within at least one subdomain-specific graph 304a based on the composite graph embedding 316. For example, the clustering model may include one or more hierarchical clustering models, k-means models, mixture models, and/or the like.
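For illustration purposes only, a minimal sketch of clustering graph nodes by their composite embedding vectors is provided below; it assumes scikit-learn's k-means implementation, and the embedding dimensions and cluster count are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical composite-embedding vectors for 100 graph nodes.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 16))

# Unsupervised node clustering; the cluster count is a hypothetical choice.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(embeddings)
print(kmeans.labels_[:10])  # cluster assignment per graph node
```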
In some embodiments, the predictive classification 322 is a data entity that describes a predicted value for a designated predictive task 318. A predictive classification 322 may include an unobserved data value for a graph node 306 that is generated by the designated predictive task 318. In some examples, a predictive classification 322 may be assigned to a graph node 306 to generate an additional node attribute for a subdomain-specific graph 304a. A predictive classification 322 may depend on a prediction domain. For example, in a clinical domain, a predictive classification 322 may include a COB label indicating whether a member has dual insurance coverage.
In some embodiments, one or more modification data objects 324 are received that are associated with the corresponding subdomain-specific source table 302a. In response to the one or more modification data objects 324, a subdomain-specific graph 304a may be regenerated. In some examples, the one or more modification data objects are received at a defined time interval. By way of example, a prediction domain may include a clinical domain and the defined time interval may be associated with a claim aggregation frequency.
In some embodiments, the modification data object 324 is a data entity that describes modified data for a prediction domain. A modification data object 324 may include one or more additional, modified, and/or removed nodes, edges, and/or attributes. A modification data object 324, for example, may include data that is recorded and/or observed after a generation of a composite graph embedding 316. A modification data object 324, for example, may include an update to one or more of the plurality of subdomain-specific source tables 302a-n of the prediction domain. A modification data object 324 may depend on a prediction domain. For instance, in a clinical domain, a modification data object 324 may describe a new member, a new claim, a new member address, relationship, residence, and/or the like.
In some embodiments, a defined time interval is a data entity that describes a unit of time associated with a reception of one or more modification data objects 324. In some examples, a defined time interval may identify a time period in between one or more versions of a composite graph embedding 316. For instance, a defined time interval may identify an update frequency for a composite graph embedding 316 and/or one or more sources of the composite graph embedding 316, including the subdomain-specific source tables 302a-n, subdomain-specific graphs 304a-n, subdomain-specific embeddings 314, and/or the like.
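For illustration purposes only, a minimal sketch of regenerating graphs and embeddings at a defined time interval is provided below; the callables fetch_modifications, regenerate_graph, and reembed are hypothetical placeholders for system-supplied operations, and the daily interval is an example only.

```python
import time

DEFINED_TIME_INTERVAL_SECONDS = 24 * 60 * 60  # hypothetical: daily claim aggregation

def refresh_loop(fetch_modifications, regenerate_graph, reembed):
    """Poll for modification data objects at the defined time interval and
    regenerate the affected subdomain-specific graphs and embeddings.

    fetch_modifications, regenerate_graph, and reembed are hypothetical
    callables supplied by the surrounding system.
    """
    while True:
        modifications = fetch_modifications()
        if modifications:
            for subdomain in {m["subdomain"] for m in modifications}:
                regenerate_graph(subdomain, modifications)
            reembed()  # produce a new version of the composite graph embedding
        time.sleep(DEFINED_TIME_INTERVAL_SECONDS)
```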
In this manner, using some of the techniques of the present disclosure, one or more predictive insights, such as composite graph embeddings 316 and/or predictive classifications 322 derived therefrom, may be generated based on holistic, up-to-date data. By leveraging the subdomain-specific graphs 304a-n, some of the techniques of the present disclosure provide improved data structures that increase the flexibility and adaptability of data stored in a large data prediction domain. An example of the subdomain-specific graphs 304a-n will now further be described with reference to
In some examples, the multi-graph environment 400 may include a subdomain-specific graph 304a-n for each subdomain of a prediction domain. In some examples, the composition of a prediction domain may be defined by a use case, business protocols, laws, and/or may be driven by a plurality of subdomain-specific source tables (e.g., an existing relational database table architecture). The multi-graph environment 400 may persist logical blocks of information into a plurality of dedicated subdomain-specific graphs 304a-n to create an entity-centric multi-graph environment that may present an ideal platform for leveraging the capabilities of graph-based machine learning models.
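As an illustrative, non-limiting sketch, a multi-graph environment of this kind may be assembled from relational source tables along the lines of the following Python fragment; the table schemas, column names, and node types are hypothetical assumptions rather than the actual schema of any particular prediction domain.

```python
import networkx as nx
import pandas as pd

def build_subdomain_graph(table: pd.DataFrame, subdomain: str) -> nx.Graph:
    """Build one undirected, heterogeneous graph from a subdomain source table."""
    graph = nx.Graph()  # undirected graph data structure
    for row in table.itertuples(index=False):
        # Common node type shared across every subdomain-specific graph.
        graph.add_node(("member", row.member_id), node_type="member")
        # Subdomain-specific node type, e.g., an address in a residency subdomain.
        graph.add_node((subdomain, row.entity_id), node_type=subdomain)
        # Weighted edge capturing the relationship between the two entities.
        graph.add_edge(("member", row.member_id), (subdomain, row.entity_id),
                       weight=row.weight)
    return graph

# Hypothetical source tables, one per subdomain of the prediction domain.
source_tables = {
    "residency": pd.DataFrame({"member_id": [1, 2], "entity_id": [10, 11],
                               "weight": [1.0, 0.5]}),
    "claims":    pd.DataFrame({"member_id": [1, 2], "entity_id": [20, 21],
                               "weight": [0.8, 1.0]}),
}
multi_graph_environment = {name: build_subdomain_graph(df, name)
                           for name, df in source_tables.items()}
```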
In some embodiments, each component of the multi-graph environment 400 may include a distinct graph data structure 450 corresponding to a particular subdomain, as shown by
The embedding process may include generating a node-level weight for each of the target nodes. For instance, a first node-level weight 508 may be generated for the first target node 502 based on one or more node attributes corresponding to the first target node 502 and/or one or more corresponding neighbor nodes 504. In addition, or alternatively, a second node-level weight 510 may be generated for the second target node 506 based on one or more node attributes corresponding to the second target node 506 and/or one or more corresponding neighbor nodes 504. The embedding process may include generating a semantic-level weight 512 for a metapath (and/or one or more weighted edges thereof) based on the node-level weights, the node attributes, and/or the weighted edges corresponding to the metapath. In this manner, as described herein, the node-level weights and the semantic-level weights of the subdomain-specific graph may be aggregated to generate subdomain-specific embeddings that may enable improved prediction processes.
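As an illustrative, non-limiting sketch, the two attention levels described above may be computed along the lines of the following Python fragment, which loosely follows heterogeneous graph attention designs; the dimensions, module names, and aggregation details are assumptions for illustration rather than a definitive implementation of the disclosed model.

```python
import torch
import torch.nn.functional as F

def node_level_weights(target: torch.Tensor, neighbors: torch.Tensor,
                       attn: torch.nn.Linear) -> torch.Tensor:
    """Score each neighbor against a target node from their node attributes."""
    pairs = torch.cat([target.expand_as(neighbors), neighbors], dim=-1)
    return F.softmax(attn(pairs).squeeze(-1), dim=0)  # one weight per neighbor

def semantic_level_weights(metapath_embeddings: torch.Tensor,
                           q: torch.nn.Linear) -> torch.Tensor:
    """Score each metapath-specific view of the same target node."""
    return F.softmax(q(metapath_embeddings).squeeze(-1), dim=0)  # one per metapath

dim = 16
attn = torch.nn.Linear(2 * dim, 1)   # node-level attention head (assumed form)
q = torch.nn.Linear(dim, 1)          # semantic-level attention vector (assumed form)

target = torch.randn(1, dim)         # e.g., the first target node 502
neighbors = torch.randn(5, dim)      # its one-hop neighbor nodes 504
alpha = node_level_weights(target, neighbors, attn)       # node-level weight 508
node_embedding = (alpha.unsqueeze(-1) * neighbors).sum(0)

per_metapath = torch.randn(3, dim)   # target embedding under three metapaths
beta = semantic_level_weights(per_metapath, q)            # semantic-level weight 512
subdomain_embedding = (beta.unsqueeze(-1) * per_metapath).sum(0)
```

An example prediction process will now further be described with reference to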
In some embodiments, the process 700 includes, at step/operation 702, receiving source tables. For example, a computing system 100 may receive a plurality of subdomain-specific source tables for a prediction domain. In some examples, each of the plurality of source tables may include subdomain data for a subdomain of the prediction domain.
In some embodiments, the process 700 includes, at step/operation 704, generating a plurality of subdomain-specific graphs. For example, the computing system 100 may generate, using the plurality of source tables for the prediction domain, a plurality of subdomain-specific graphs for the prediction domain. In some examples, each of the plurality of subdomain-specific graphs may include a separate heterogeneous and undirected graph data structure. For example, each graph may include a respective plurality of graph nodes and a respective plurality of weighted edges between the respective plurality of graph nodes.
In some embodiments, a subdomain-specific graph of the plurality of subdomain-specific graphs is generated based on subdomain data from a corresponding source table of the plurality of source tables. For example, a plurality of graph nodes for a subdomain-specific graph of the plurality of subdomain-specific graphs may include a set of common graph nodes that are within each of the plurality of subdomain-specific graphs and/or a set of subdomain-specific graph nodes specific to the subdomain-specific graph. The set of common graph nodes may include one or more labeled and/or unlabeled graph nodes.
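As an illustrative, non-limiting sketch, the set of common graph nodes may be separated from the subdomain-specific graph nodes as in the following Python fragment, which assumes graphs of the form produced by the earlier construction sketch.

```python
from functools import reduce
import networkx as nx

def split_common_and_specific(graphs: dict[str, nx.Graph]):
    """Separate common graph nodes (present in every subdomain-specific
    graph) from the nodes specific to each subdomain."""
    node_sets = [set(g.nodes) for g in graphs.values()]
    common_nodes = reduce(set.intersection, node_sets)  # e.g., member nodes
    specific_nodes = {name: set(g.nodes) - common_nodes
                      for name, g in graphs.items()}
    return common_nodes, specific_nodes
```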
In some embodiments, the process 700 includes, at step/operation 706, performing node-level attention. For example, the computing system 100 may generate, using a graph-based machine learning model, a plurality of node-level weights for the plurality of graph nodes of each subdomain-specific graph based on a plurality of node attributes corresponding to the plurality of graph nodes.
In some embodiments, the process 700 includes, at step/operation 708, performing semantic-level attention. For example, the computing system 100 may generate, using the graph-based machine learning model, a plurality of semantic-level weights for a plurality of weighted edges of a subdomain-specific graph based on one or more metapaths within the subdomain-specific graph.
In some embodiments, the process 700 includes, at step/operation 710, generating a plurality of subdomain-specific embeddings. For example, the computing system 100 may generate, using the graph-based machine learning model, a plurality of subdomain-specific embeddings comprising a respective subdomain-specific embedding for each of the plurality of subdomain-specific graphs. In some embodiments, a subdomain-specific embedding of the plurality of subdomain-specific embeddings is based on a plurality of attention weights assigned to a plurality of graph nodes and a plurality of weighted edges of a subdomain-specific graph corresponding to the subdomain-specific embedding. The plurality of attention weights may include a plurality of node-level weights and a plurality of semantic-level weights.
In some embodiments, the process 700 includes, at step/operation 712, generating a composite graph embedding. For example, the computing system 100 may generate, using the graph-based machine learning model, a composite graph embedding based on the plurality of subdomain-specific embeddings and a designated predictive task.
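As an illustrative, non-limiting sketch, one possible task-conditioned aggregation of the subdomain-specific embeddings into a composite graph embedding is shown below; the present disclosure does not fix the aggregation mechanism, so the learned attention used here is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

dim, n_subdomains = 16, 3
# Learned query standing in for the designated predictive task 318.
task_query = torch.nn.Parameter(torch.randn(dim))
proj = torch.nn.Linear(dim, dim)

# One embedding per subdomain for the same entity (e.g., the same member).
subdomain_embeddings = torch.randn(n_subdomains, dim)  # embeddings 314

scores = proj(subdomain_embeddings) @ task_query       # task relevance per subdomain
weights = F.softmax(scores, dim=0)
composite_embedding = (weights.unsqueeze(-1)
                       * subdomain_embeddings).sum(0)  # composite embedding 316
```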
In some examples, a plurality of node attributes may include one or more node labels for a designated predictive task. The computing system 100 may generate, using a semi-supervised loss function, a model loss for the graph-based machine learning model based on the composite graph embedding and update, using a machine learning training technique, the composite graph embedding based on the model loss.
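As an illustrative, non-limiting sketch, a semi-supervised loss of this kind may be computed over only the labeled graph nodes, as in the following Python fragment, while the unlabeled nodes still shape the embeddings upstream; the shapes and the cross-entropy objective are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

num_nodes, dim, num_classes = 100, 16, 2
composite = torch.randn(num_nodes, dim, requires_grad=True)  # embeddings 316
head = torch.nn.Linear(dim, num_classes)

labels = torch.full((num_nodes,), -1)               # -1 marks unlabeled nodes
labels[:30] = torch.randint(0, num_classes, (30,))  # a labeled subset

# Semi-supervised loss: only labeled nodes contribute to the objective.
mask = labels >= 0
loss = F.cross_entropy(head(composite[mask]), labels[mask])
loss.backward()  # gradients update the embedding and the model parameters
```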
In some embodiments, the process 700 includes, at step/operation 714, generating predictive classifications. For example, the computing system 100 may initiate the performance of the designated predictive task based on the composite graph embedding. In some examples, the designated predictive task is a machine learning classification task. The computing system 100 may initiate the performance of the designated predictive task by generating, using a machine learning classification model, a predictive classification for an unlabeled graph node associated with the plurality of subdomain-specific graphs. In some examples, the computing system 100 may modify each of the plurality of subdomain-specific graphs by assigning the predictive classification to the unlabeled graph node.
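As an illustrative, non-limiting sketch, a predictive classification may be generated for an unlabeled graph node and assigned back to a subdomain-specific graph as in the following Python fragment; the classifier choice, data, and names are hypothetical placeholders.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train on the composite embeddings of labeled nodes (synthetic stand-ins).
rng = np.random.default_rng(0)
X_labeled = rng.random((30, 16))
y_labeled = rng.integers(0, 2, 30)       # e.g., binary COB labels
clf = LogisticRegression().fit(X_labeled, y_labeled)

# Classify an unlabeled graph node from its composite embedding.
unlabeled_embedding = rng.random((1, 16))
predicted = int(clf.predict(unlabeled_embedding)[0])  # predictive classification 322

# Assign the classification back to the graph as an additional node attribute.
graph = nx.Graph()
graph.add_node(("member", 1))
graph.nodes[("member", 1)]["predictive_classification"] = predicted
```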
In some embodiments, the computing system 100 receives one or more modification data objects associated with the corresponding source table. In response to the one or more modification data objects, the computing system 100 may regenerate the subdomain-specific graph. In some embodiments, the one or more modification data objects may be received at a defined time interval. By way of example, a prediction domain may include a clinical domain and the defined time interval may be associated with a claim aggregation frequency.
Some techniques of the present disclosure enable the generation of action outputs that may be performed to initiate one or more prediction-based actions to achieve real-world effects. The computer data storage and interpretation techniques of the present disclosure may be used, applied, and/or otherwise leveraged to generate predictive insights, such as predictive classifications, which may help in the interpretation of diverse relationships within a large data prediction domain. The predictive insights of the present disclosure may be leveraged to initiate the performance of various computing tasks that improve the performance of a computing system (e.g., a computer itself, etc.) with respect to various prediction-based actions performed by the computing system 100, such as for the identification and handling of various predictive classifications and/or the like. Example prediction-based actions may include the display, transmission, and/or the like of data indicative (e.g., including a prediction identifier, etc.) of a predictive classification, such as alerts of a COB outcome for a member, and/or the like.
In some examples, the computing tasks may include prediction-based actions that may be based on a prediction domain. A prediction domain may include any environment in which computing systems may be applied to achieve real-world insights, such as risk predictions (e.g., adverse outcome predictions, etc.), and initiate the performance of computing tasks, such as prediction-based actions to act on the real-world insights (e.g., derived from adverse outcome predictions, etc.). These prediction-based actions may cause real-world changes, for example, by controlling a hardware component, providing alerts, interactive actions, and/or the like. For instance, prediction-based actions may include the initiation of automated instructions across and between devices, automated notifications, automated scheduling operations, automated precautionary actions, automated security actions, automated data processing actions, and/or the like.
Many modifications and other embodiments will come to mind to one skilled in the art to which the present disclosure pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the present disclosure is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
Example 1. A computer-implemented method comprising: generating, by one or more processors and using a plurality of source tables for a prediction domain, a plurality of subdomain-specific graphs for the prediction domain, each comprising a respective plurality of graph nodes and a respective plurality of weighted edges between the respective plurality of graph nodes; generating, by the one or more processors and using a graph-based machine learning model, a plurality of subdomain-specific embeddings comprising a respective subdomain-specific embedding for each of the plurality of subdomain-specific graphs; generating, by the one or more processors and using the graph-based machine learning model, a composite graph embedding based on the plurality of subdomain-specific embeddings and a designated predictive task; and initiating, by the one or more processors, the performance of the designated predictive task based on the composite graph embedding.
Example 2. The computer-implemented method of example 1, wherein a subdomain-specific embedding of the plurality of subdomain-specific embeddings is based on a plurality of attention weights assigned to a plurality of graph nodes and a plurality of weighted edges of a subdomain-specific graph corresponding to the subdomain-specific embedding.
Example 3. The computer-implemented method of example 2, wherein the plurality of attention weights comprises a plurality of node-level weights and the computer-implemented method further comprises generating, using the graph-based machine learning model, the plurality of node-level weights for the plurality of graph nodes of the subdomain-specific graph based on a plurality of node attributes corresponding to the plurality of graph nodes.
Example 4. The computer-implemented method of example 3, wherein the plurality of node attributes comprises one or more node labels for the designated predictive task and the computer-implemented method further comprises generating, using a semi-supervised loss function, a model loss for the graph-based machine learning model based on the composite graph embedding; and updating, using a machine learning training technique, the composite graph embedding based on the model loss.
Example 5. The computer-implemented method of any of examples 2 through 4, wherein the plurality of attention weights comprises a plurality of semantic-level weights and the computer-implemented method further comprises generating, using the graph-based machine learning model, the plurality of semantic-level weights for the plurality of weighted edges of the subdomain-specific graph based on one or more metapaths within the subdomain-specific graph.
Example 6. The computer-implemented method of any of the preceding examples, wherein the designated predictive task is a machine learning classification task and initiating the performance of the designated predictive task based on the composite graph embedding comprises generating, using a machine learning classification model, a predictive classification for an unlabeled graph node associated with the plurality of subdomain-specific graphs.
Example 7. The computer-implemented method of example 6, wherein a plurality of graph nodes for a subdomain-specific graph of the plurality of subdomain-specific graphs comprises a set of common graph nodes that are within each of the plurality of subdomain-specific graphs and a set of subdomain-specific graph nodes specific to the subdomain-specific graph, and the set of common graph nodes comprises the unlabeled graph node.
Example 8. The computer-implemented method of example 7, further comprising modifying each of the plurality of subdomain-specific graphs by assigning the predictive classification to the unlabeled graph node.
Example 9. The computer-implemented method of any of the preceding examples, wherein each of the plurality of subdomain-specific graphs comprises a separate heterogeneous and undirected graph data structure.
Example 10. The computer-implemented method of any of the preceding examples, wherein each of the plurality of source tables comprises respective subdomain data for a subdomain of the prediction domain and a subdomain-specific graph of the plurality of subdomain-specific graphs is generated based on subdomain data from a corresponding source table of the plurality of source tables.
Example 11. The computer-implemented method of example 10, further comprising receiving one or more modification data objects associated with the corresponding source table; and in response to the one or more modification data objects, regenerating the subdomain-specific graph.
Example 12. The computer-implemented method of example 11, wherein the one or more modification data objects are received at a defined time interval.
Example 13. The computer-implemented method of example 12, wherein the prediction domain comprises a clinical domain and the defined time interval is associated with a claim aggregation frequency.
Example 14. A computing system comprising memory and one or more processors communicatively coupled to the memory, the one or more processors configured to generate, using a plurality of source tables for a prediction domain, a plurality of subdomain-specific graphs for the prediction domain, each comprising a respective plurality of graph nodes and a respective plurality of weighted edges between the respective plurality of graph nodes; generate, using a graph-based machine learning model, a plurality of subdomain-specific embeddings comprising a respective subdomain-specific embedding for each of the plurality of subdomain-specific graphs; generate, using the graph-based machine learning model, a composite graph embedding based on the plurality of subdomain-specific embeddings and a designated predictive task; and initiate the performance of the designated predictive task based on the composite graph embedding.
Example 15. The computing system of example 14, wherein a subdomain-specific embedding of the plurality of subdomain-specific embeddings is based on a plurality of attention weights assigned to a plurality of graph nodes and a plurality of weighted edges of a subdomain-specific graph corresponding to the subdomain-specific embedding.
Example 16. The computing system of example 15, wherein the plurality of attention weights comprises a plurality of node-level weights and the one or more processors are further configured to generate, using the graph-based machine learning model, the plurality of node-level weights for the plurality of graph nodes of the subdomain-specific graph based on a plurality of node attributes corresponding to the plurality of graph nodes.
Example 17. The computing system of example 16, wherein the plurality of node attributes comprises one or more node labels for the designated predictive task and the one or more processors are further configured to generate, using a semi-supervised loss function, a model loss for the graph-based machine learning model based on the composite graph embedding; and update, using a machine learning training technique, the composite graph embedding based on the model loss.
Example 18. The computing system of any of examples 15 through 17, wherein the plurality of attention weights comprises a plurality of semantic-level weights and the one or more processors are further configured to generate, using the graph-based machine learning model, the plurality of semantic-level weights for the plurality of weighted edges of the subdomain-specific graph based on one or more metapaths within the subdomain-specific graph.
Example 19. One or more non-transitory computer-readable storage media including instructions that, when executed by one or more processors, cause the one or more processors to generate, using a plurality of source tables for a prediction domain, a plurality of subdomain-specific graphs for the prediction domain, each comprising a respective plurality of graph nodes and a respective plurality of weighted edges between the respective plurality of graph nodes; generate, using a graph-based machine learning model, a plurality of subdomain-specific embeddings comprising a respective subdomain-specific embedding for each of the plurality of subdomain-specific graphs; generate, using the graph-based machine learning model, a composite graph embedding based on the plurality of subdomain-specific embeddings and a designated predictive task; and initiate the performance of the designated predictive task based on the composite graph embedding.
Example 20. The one or more non-transitory computer-readable storage media of example 19, wherein the designated predictive task is a machine learning classification task and initiating the performance of the designated predictive task based on the composite graph embedding comprises generating, using a machine learning classification model, a predictive classification for an unlabeled graph node associated with the plurality of subdomain-specific graphs.