Aspects hereof relate to devices, systems, and methods for classifying records based on asserted ontological axioms.
Traditional electronic health record (EHR) systems commonly utilize relational databases to organize and store patient records. These databases use structured and unstructured fields to hold values (e.g., numeric, descriptive, or illustrative) in any number of tables and nested tables. Although relational databases facilitate storage of massive amounts of data, complex querying has traditionally called for writing each iteration of the query in procedural code. For example, decision support applications (e.g., patient care workflow automation applications) traditionally operate using computational algorithms that progress based on a complex querying schema.
In order for traditional computerized systems to aggregate and evaluate complex querying requests (e.g., a query that includes multiple inclusion criteria, exclusion criteria, or a combination of both), each iteration of the query is expressed as procedural code. The systems, methods, and devices described herein present a paradigm shift from these traditional procedure-based constraints. In contrast to the traditional systems, those described herein facilitate hybrid procedure-based and ontology-based data evaluation. Among other benefits, the systems, methods, and devices described herein may detect a triggering event, such as a modification of a field of a database record, via a set of procedurally executed code that is monitoring operations associated with a database maintaining the database record. The database record may be relationally linked to a plurality of database fields including the field of the database record. A plurality of values stored in the plurality of database fields, and a field address for each of the plurality of values, may be extracted from the database record by a procedurally executed portion of code (e.g., an object-oriented script). In some aspects, an identifier is extracted that links the database fields to the database record. Based on the extracted values, an entity is generated within a knowledge-based library by execution of a script that writes a formatted version of the plurality of values to a corresponding set of fields mapped to the plurality of database fields based on the field address. A class of the entity is computed based on inferences of a reasoner drawn from a knowledge-based library including the generated entity. In response to the computation of the entity's class, classification data for the entity is returned as input to a procedurally defined workflow. The classification data may include at least a finding of a value of the plurality of values within the entity and a super class ontologically defined by the inferences of the reasoner drawn from the knowledge-based library including the generated entity. A workflow action may be executed within the procedurally defined workflow based on the classification data.
The present invention is described in detail herein with reference to the attached drawing figures.
In order for a traditional computerized system to understand and aggregate the information stored in electronic records, the traditional computerized system applies rules to evaluate the selected records. A separate rule is used to evaluate each possible combination of variables and the values each variable may hold in the record. Thus, a separate rule is drafted to address each and every distinct combination of variable(s) and/or value(s). For example, in order to evaluate records for one condition (e.g., a variable) having multiple available states (e.g., values), a rule is drafted for each distinct permutation that may be present in the record. As the amount of information stored in electronic records increases, the number and complexity of the rules escalates. For this reason, traditional computerized systems require a vast number of rules.
To illustrate, in order for a traditional computerized system to aggregate and evaluate whether any of the plurality of records stored in a relational database support progression from a first point in a decision support workflow to a second point, a rule may be needed to evaluate each combination of the individual variable(s) and/or value(s). Generally, the decision point is a procedurally programmed step within the workflow where a set of rules defines progression along one of at least two potential paths. A path may be simple (e.g., a closed loop, or the programmatic equivalent of waiting until the rules of the decision point are satisfied) or complex. In a traditional workflow, the two paths may include the algorithmic evaluation of data held in a database. As a simplified example, a traditional decision point can include an if/else programmatic expression, as sketched below. Where the conditions of the if expression are satisfied, the workflow may advance along the path defined by the if expression. Otherwise, the workflow may advance along the path defined by the else expression.
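By way of illustration and not limitation, the following minimal Python sketch shows such a procedurally coded decision point; the record layout, field names (pcr_result, temperature_c), and the 38.0-degree threshold are hypothetical assumptions rather than fields of any particular workflow or schema described herein.

```python
# A minimal sketch of a traditional, procedurally coded decision point.
# The record layout and the 38.0 C threshold are hypothetical illustrations.

def decision_point(record: dict) -> str:
    """Advance the workflow along one of two procedurally defined paths."""
    if record.get("pcr_result") == "positive" and record.get("temperature_c", 0.0) >= 38.0:
        return "path_if"   # conditions of the if expression satisfied
    return "path_else"     # otherwise, advance along the else path

print(decision_point({"pcr_result": "positive", "temperature_c": 38.5}))  # path_if
```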
Even if a particular decision point had only five conditions, where condition A has four available states, condition B has six available states, condition C has three available states, condition D has three available states, and condition E has four available states, the traditional computerized system may need to access and evaluate each one of the 864 available combinations by using 864 distinct rules. For example, a traditional patient care workflow to automate diagnosis of a viral infection may include procedural rules for each combination of lab values (e.g., oxygen saturation, cell count, antibody count, PCR result, and so forth) and physical manifestations (e.g., temperature, inflammation, cough, and so forth) that are defined as necessary to verify the viral infection. Additionally, rules may be needed for each combination of the types of values that may be stored in the record. Moreover, traditional procedural code may include a rule that points to each combination of the potentially used fields of a relational database. For example, a temperature value may be stored at different locations within the database based on the source of the temperature (e.g., oral, temporal, rectal, and so forth) and the unit used to record the temperature (e.g., Celsius or Fahrenheit).
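The combinatorial growth can be verified with a short sketch; the state counts below are taken directly from the five-condition example above.

```python
# Counting the rules a traditional system would need for the five-condition
# example above: one procedural rule per combination of available states.
from itertools import product

state_counts = {"A": 4, "B": 6, "C": 3, "D": 3, "E": 4}
combinations = list(product(*(range(n) for n in state_counts.values())))
print(len(combinations))  # 864 = 4 x 6 x 3 x 3 x 4
```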
Further, differences between the structures of any two relational databases can prevent interoperability of the procedural code, because the particular field may be organized differently in the structure of each relational database. In other words, these procedurally coded rules may only enable evaluation of a single relational database structure, and thus a traditional computerized system cannot reuse procedurally coded rules for any relational database that does not adhere to one specific structure.
As such, a rules database consumes vast amounts of computer-readable memory in order to store the supportive rules for record evaluations involving hundreds or thousands of conditions. Additionally, a significant amount of processing resources is utilized by the traditional computerized system in order to execute the rules against records for an evaluation involving hundreds or thousands of conditions. Further, each time a change is made to one of the conditions or to the available states for a condition (e.g., a new condition is added, a condition is removed, a state is added to a condition, or a state is removed from a condition), the procedural code for the corresponding rules must be modified to keep the rules database up-to-date.
The aspects described herein represent a paradigm shift away from procedural rule dependency. Rather, the aspects described herein may facilitate a hybridization of procedurally executed decision support and ontologically based classification reasoning. In particular, the aspects described herein provide methods, systems, and media to facilitate automatic ingestion of variable(s) and/or value(s) relevant to one or more decision support workflows by a knowledge-based library. In some aspects, one or more scripts facilitate the automatic ingestion of the variable(s) and/or value(s). For example, a script may be executed that maps the location of database fields (e.g., a field address) of the variable(s) and/or value(s) called by the decision support workflow to a corresponding set of fields of an entity in the knowledge-based library. A reasoner may classify the variable(s) and/or value(s) based on the asserted relationships within the knowledge-based library, and classification data may be returned to facilitate computations defined by the one or more decision support workflows. The classification data may be a class, a direct super class, or an indirect super class to which the generated entity belongs.
Turning to FIG. 1, an example process flow 100 for hybrid procedure-based and ontology-based record evaluation is depicted.
Some embodiments of process flow 100 may begin with activation of the decision support process 102. Activation of the decision support process 102 may be triggered by any programmatically detected action that initiates the decision support workflow based on the process the workflow supports. For example, activation of the decision support workflow can be automatically triggered in response to the programmatic detection of a modification of a database record in some aspects. The modification of the database record can include addition of data to one or more fields of the database that is maintaining the database record. For example, and with brief reference to FIG. 2, the addition of data relevant to a decision point of decision support process workflow 200 may trigger activation of the workflow.
In some embodiments of process flow 100, data from a database record is parsed at block 104 based on the activated decision support workflow. The data may be parsed from the database 128 using any suitable means. For example, in a particular aspect, a script is programmatically coded to extract the values stored in the database fields associated with the first decision point of the activated decision support workflow. In some embodiments, one of the extracted values corresponds to an identifier of the database record. The identifier may be an alphanumeric value that is associated with each of the fields of a database record. For example, the identifier may be a patient identification value, an index key value, or any other value that defines the relationship of a plurality of fields as belonging to the same database record. Additionally, a field address corresponding to each of the database fields holding the extracted values can be extracted at block 104. The field address can be a value that uniquely identifies the identity or location of the field in the database. A database record in the database may include multiple values in fields associated with a particular field address (e.g., multiple fields with a field address identified as holding temperature in Celsius).
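By way of illustration, a minimal sketch of the extraction described for block 104 follows. It assumes, purely for illustration, that a record is exposed as a flat mapping of field addresses to values; the 8310-5 temperature address echoes the FHIR® code referenced later in this disclosure, and the second address is likewise illustrative.

```python
# A sketch of the parsing at block 104: extract the identifier, the values
# used by the workflow, and their field addresses. The flat record layout
# and the "pcr-result" address are illustrative assumptions.

def parse_record(record: dict, wanted_addresses: list) -> dict:
    """Return the record identifier plus {field_address: value} pairs."""
    return {
        "identifier": record["patient_id"],
        "values": {a: record[a] for a in wanted_addresses if a in record},
    }

record = {"patient_id": "P-0001", "8310-5": 39.1, "pcr-result": "positive"}
print(parse_record(record, ["8310-5", "pcr-result"]))
```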
Example process flow 100, at block 106, includes mapping the extracted data to an entity within a knowledge-based system. In some aspects, the extracted data is mapped by execution of a script within a decision support application (e.g., decision support application 312 of FIG. 3).
In some aspects of process flow 100, block 106 includes creation of the entity within the knowledge-based system. For example, the script may include programmatic expressions that query the knowledge-based system for an entity corresponding to the identifier extracted from the database in block 104. Where the knowledge-based system includes an entity corresponding to the identifier, the script may modify the entity within the knowledge-based system with the other data extracted from database 128. In contrast, where the knowledge-based system does not include an entity corresponding to the identifier, the script may automatically execute operations that create a new entity within the knowledge-based system. In an aspect, the newly created entity is populated with at least one field that includes the identifier extracted from the database 128. Similarly, the script may facilitate the creation of the entity by communicating extracted data to an ontology-guided classification component (e.g., ontology-guided classification component 302 of FIG. 3).
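A minimal sketch of this create-or-modify behavior follows; an in-memory dictionary stands in for the knowledge-based library, and the mapping of field addresses to entity fields is an illustrative assumption.

```python
# A sketch of block 106: query the knowledge-based library for an entity
# matching the extracted identifier, then modify it or create it. The
# in-memory dict and the address-to-field mapping are illustrative.

FIELD_MAP = {"8310-5": "hasTemperature", "pcr-result": "hasPcrResult"}

def upsert_entity(library: dict, identifier: str, values: dict) -> dict:
    entity = library.get(identifier)
    if entity is None:                        # no entity for this identifier:
        entity = {"identifier": identifier}   # create a new one
        library[identifier] = entity
    for address, value in values.items():     # write mapped, formatted values
        entity[FIELD_MAP[address]] = value
    return entity

library = {}
upsert_entity(library, "P-0001", {"8310-5": 39.1})
upsert_entity(library, "P-0001", {"pcr-result": "positive"})
print(library["P-0001"])
```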
Some aspects of process flow 100 include, at block 130, initiation of ontology-guided classification of an entity generated or modified at block 106 within a knowledge-based system. For example, a reasoner 108 may be activated to classify the entity based on the logical consequences of the data asserted in the entity. As depicted in FIG. 1, the reasoner 108 draws its inferences from the knowledge-based library, including the concepts, rules, and relationships asserted as true, to compute the class of the generated or modified entity.
The reasoner 108 may output additional classification data in some aspects. For example, reasoner 108 may identify at least one super class 116 or direct super class of the entity, or of at least one field, based on the asserted axioms. A super class refers to any class that includes the inferred class within the ontological hierarchy. A direct super class refers to any class that is immediately above the inferred class in the ontological hierarchy.
Additionally, reasoner 108 can write data to a database or a file. In some aspects, the reasoner 108 writes data to a database or file in response to the classification of the entity. For example, the reasoner 108 may output the class to which the entity belongs, a super class to which the entity belongs, a direct super class to which the entity belongs, the classes to which the fields of the entity belong, the super classes to which the fields of the entity belong, the direct super classes to which the fields of the entity belong, or any combination thereof.
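By way of illustration, once a class is inferred, its super classes and direct super classes can be read off the ontological hierarchy. In the following sketch the hierarchy itself, including the invented parent class names, is a toy assumption.

```python
# A sketch of reading super classes and direct super classes off an
# ontological hierarchy once a class is inferred. The hierarchy and the
# parent class names are invented for illustration.

HIERARCHY = {  # class -> direct super class(es)
    "Covid-19_Diagnosis_Confirmed": ["Viral_Diagnosis_Confirmed"],
    "Viral_Diagnosis_Confirmed": ["Diagnosis_Confirmed"],
    "Diagnosis_Confirmed": [],
}

def super_classes(cls: str) -> list:
    """All classes that include cls in the hierarchy, direct and indirect."""
    found, frontier = [], list(HIERARCHY.get(cls, []))
    while frontier:
        parent = frontier.pop()
        found.append(parent)
        frontier.extend(HIERARCHY.get(parent, []))
    return found

print(HIERARCHY["Covid-19_Diagnosis_Confirmed"])      # direct super class
print(super_classes("Covid-19_Diagnosis_Confirmed"))  # all super classes
```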
Some aspects of process flow 100 include, at block 118, communicating the classification data 110 to the decision support process. The communication of classification data 110 may be facilitated by at least one script in some aspects. For example, the script may include programmatic expressions that inject a formatted version of at least one piece of classification data into a field in the decision support workflow.
Some aspects of process flow 100 include, at block 120, executing the decision support process workflow algorithm based on the classification data. In some aspects, the decision support process workflow may progress, based on the classification data, to a first decision point in the workflow. For example, and with reference to FIG. 2, the decision support process workflow 200 may progress to decision point 206 based on the classification data.
At decision point 206, the classification data is analyzed based on the programmatically expressed algorithm associated with decision point 206. As illustratively depicted in FIG. 2, the workflow may then advance from decision point 206 along one of at least two procedurally defined paths based on the outcome of that analysis.
Returning to FIG. 1, where the decision support process workflow progressed to a terminal point of the workflow, process flow 100 may end.
Alternatively, where the decision support process workflow progressed to a decision point (e.g., decision point 214) or any other non-terminal point in the workflow, process flow 100 proceeds to block 124. At block 124, process flow 100 determines whether the workflow reached the same point for the database record based on the same data parsed from the database record. For example, a script may assess a log file to determine whether decision support process workflow 200 has previously progressed to decision point 206 or 214 and was unable to continue. Further, the script may assess the log file and the database record to determine whether data was added to the patient's record (e.g., the database record) since decision support process workflow 200 progressed to decision point 206 or 214 and was unable to continue. In response to a determination that new data was added or that the workflow has not previously progressed to the same point, block 124 may return to block 104, as sketched below. Alternatively, in response to a determination that no new data has been added and the workflow has previously progressed to the same point, block 124 may progress to block 126.
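A minimal sketch of the determination at block 124 follows; the log structure (a list of previously reached points keyed by a hash of the parsed data) is an illustrative assumption.

```python
# A sketch of block 124: determine whether the workflow previously stalled
# at the same point on the same parsed data. The log entries are assumed to
# record the decision point and a hash of the data parsed at block 104.

def should_return_to_block_104(log: list, record_hash: str, point: str) -> bool:
    for entry in log:
        if entry["point"] == point and entry["record_hash"] == record_hash:
            return False  # same point, same data: progress to block 126
    return True           # new data or new point: return to block 104

log = [{"point": "206", "record_hash": "abc123"}]
print(should_return_to_block_104(log, "abc123", "206"))  # False
print(should_return_to_block_104(log, "def456", "206"))  # True
```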
Turning to FIG. 3, an example system 300 for ontology-guided classification is depicted.
The ontology-guided classification component 302 may include hardware, software, firmware, or any combination thereof. Additionally, some aspects of the ontology-guided classification component 302 can be a subroutine or subcomponent of an application, cloud service, or any other computational platform. Generally, the ontology-guided classification component 302 receives queries from a decision support application 312 that hosts at least one procedurally coded decision support workflow 314 (e.g., workflow 200 of FIG. 2).
An ingestion controller 332 generally identifies queries that are submitted by the decision support application 312. Ingestion controller 332 can identify queries in any suitable manner. For example, in some aspects, ingestion controller 332 may continuously, periodically, or intermittently crawl the decision support application 312 for documents that contain data parsed from record repository server 316. In other aspects, ingestion controller 332 monitors data communicated from decision support application 312 and analyzes the data to identify data parsed from record repository server 316. In some aspects, ingestion controller 332 compares an identifier included in the data to a set of identifiers stored in a lookup table to determine whether an entity corresponding to the identifier is new or has been previously processed in the knowledge-based library 340. When the ingestion controller 332 detects that an entity containing the identifier does not exist within a knowledge-based library, the ingestion controller 332 activates an ontology entity generator, such as ontology entity generator 336. Additionally, or alternatively, when the ingestion controller 332 detects an entity within the knowledge-based library that includes the identifier within a field, the ingestion controller 332 activates an ontology entity modifier, such as ontology entity modifier 334.
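By way of illustration, the routing performed by ingestion controller 332 may be sketched as follows; the set-based lookup table is an assumption standing in for whatever store the controller consults.

```python
# A sketch of the ingestion controller's routing: activate the ontology
# entity generator for unknown identifiers and the ontology entity modifier
# for known ones. The set stands in for the lookup table of identifiers.

class IngestionController:
    def __init__(self, known_identifiers: set):
        self.known = known_identifiers

    def route(self, identifier: str) -> str:
        if identifier in self.known:
            return "ontology_entity_modifier"  # entity exists: modify it
        self.known.add(identifier)
        return "ontology_entity_generator"     # new entity: generate it

controller = IngestionController({"P-0001"})
print(controller.route("P-0002"))  # ontology_entity_generator
print(controller.route("P-0001"))  # ontology_entity_modifier
```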
Ontology entity modifier 334 generally accepts formatted versions of values from a script, and outputs machine-readable modifications to preexisting entities within a knowledge-based library (e.g., knowledge-based library 340). The modifications may include the addition of values to previously unpopulated data fields, replacement of values in previously populated data fields, removal of values in previously populated data fields, addition of values to new instances of previously populated data fields, or any other computer-understandable data manipulation function. The values may be mapped to data fields of the entity based on data encoded by a script (e.g., script 318). Ontology entity modifier 334 comprises a module within a software application, a software application, or a set of applications (which may include programs, routines, functions, or computer-performed services) that is executed by a processor associated with the ontology-guided classification component 302.
Ontology entity generator 336 generally accepts formatted versions of values from a script, and outputs a machine-readable entity within a knowledge-based library (e.g., knowledge-based library 340). The values may be mapped to data fields of the entity based on data encoded by a script (e.g., script 318). Ontology entity generator 336 comprises a module within a software application, a software application, or a set of applications (which may include programs, routines, functions, or computer-performed services) that is executed by a processor associated with the ontology-guided classification component 302.
Reasoner 338 generally infers the logical consequences of an asserted patient entity. The output of reasoner 338 includes an inferred classification of each record (e.g., the asserted patient entity) stored in a record database (e.g., record database 320) based on the knowledge-based library. In other words, the reasoner 338 accepts the rules, concepts, classes, and the relationships connecting each as true logical axioms. As is discussed in more detail with reference to FIG. 4, the reasoner 338 infers the class to which an entity belongs based on those asserted axioms.
Additionally, reasoner 338 can infer the finding of one or more values within each entity. The finding may be inferred by reasoner 338 based on the concepts, rules, and relationships asserted within the knowledge-based library. For example, a value in a field corresponding to temperature greater than a predetermined threshold may have a finding of IncreasedTemperature. Similarly, a value less than or equal to the predetermined threshold may have a finding of NormalTemperature. In some aspects, the reasoner 338 writes data to a database or file in response to the determination that a patient record belongs to a predetermined class. In some aspects, the particular database record or particular file the data is written to varies based on the class to which the patient record is assigned. Example reasoners include Cyc, KAON2, Cwm, Drools®, Flora-2, Jena, and Prova.
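A minimal sketch of such a finding inference follows; the 38.0 C threshold is an illustrative assumption standing in for a rule asserted in the knowledge-based library.

```python
# A sketch of inferring a finding for a temperature value. The 38.0 C
# threshold is an illustrative stand-in for an asserted rule.

def temperature_finding(celsius: float, threshold: float = 38.0) -> str:
    return "IncreasedTemperature" if celsius > threshold else "NormalTemperature"

print(temperature_finding(39.1))  # IncreasedTemperature
print(temperature_finding(36.8))  # NormalTemperature
```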
In some aspects, an ontology-guided classification component 302 is communicatively coupled to a knowledge-based library 340. Generally, a knowledge-based library maintains a plurality of rules, concepts, relationships, and the taxonomy asserted as true. Further, a knowledge-based library can maintain one or more entities. The entities may be populated into the knowledge-based library by an ontology-guided classification component 302, a script 318, or a combination of both in some aspects. The entities included in the knowledge-based library include a plurality of data fields, including at least an identifier that corresponds to an identifier of a database record maintained by a record database (e.g., record database 320). The values of the entity are asserted as true. However, the class of the entity is not asserted as true in at least one aspect. Rather, the class of an entity is inferred based on the rules, concepts, relationships, and the taxonomy asserted as true in the knowledge-based library 340. The knowledge-based library may be maintained by one or more servers and one or more databases. As depicted in FIG. 3, the knowledge-based library 340 may be maintained in association with a data schema knowledge database 304 and a taxonomy knowledge database 306.
Data schema knowledge database 304 stores and maintains one or more data schema knowledge libraries. A data schema knowledge library comprises a computer-understandable model of all of the domain knowledge associated with a data schema. For example, a data schema knowledge library includes the field nomenclature, field types, metadata, domain resources, field addresses, and other similar framework rules. Similarly, taxonomy knowledge database 306 stores and maintains one or more taxonomy knowledge libraries. A taxonomy knowledge library comprises a computer-understandable model of all of the domain knowledge associated with a taxonomy. For example, in some aspects, taxonomy knowledge database 306 includes a SNOMED CT library.
Network 308 generally facilitates communication between the ontology-guided classification component 302, user device 310, record repository server 316, other devices or servers connected to network 308, or any combination thereof. As such, network 308 can include access points, routers, switches, or other commonly understood network components that provide wired or wireless network connectivity. Network 308 may include multiple networks, or a network of networks, but is depicted in a simple form so as not to obscure aspects of the present disclosure. By way of example, network 308 can include one or more wide area networks (WANs), one or more local area networks (LANs), one or more public networks, such as the Internet, one or more private networks, one or more telecommunications networks, or any combination thereof. Where network 308 includes a wireless telecommunications network, components such as a base station, a communications tower, or even access points (as well as other components) may provide wireless connectivity. Networking environments are commonplace in enterprise-wide computer networks, intranets, and the Internet. Accordingly, network 308 is not described in significant detail herein.
System 300 includes a user device 310. User device 310 generally facilitates a user's (i.e., a user of the device) interaction with the output of ontology-guided classification component 302 and decision support application 312. Additionally, user device 310 can facilitate access to a record repository server 316. User device 310 can facilitate this interaction by executing an application stored in computer-readable media that allows the user device 310 to communicatively couple with the ontology-guided classification component 302, record repository server 316, or both. Alternatively, user device 310 can locally execute some or all of the components of the ontology-guided classification component 302, though such a configuration is not shown in FIG. 3.
System 300 includes a decision support application 312 that facilitates execution of procedurally coded workflows 314. The hosted workflows 314 can be in any format suitable to store procedural code. In a particular aspect, each workflow, such as workflows 314, contains code that corresponds to a diagnostic decision support tool. As depicted in FIG. 3, the decision support application 312 hosts the workflows 314.
Decision support application 312 may also maintain one or more scripts 318. Script 318 can include programmatic code to extract values stored in database fields of a record maintained by a record database (e.g., record database 320). Additionally, the script(s) 318 may extract field addresses associated with each value extracted from the record. Further, the script(s) 318 may include expressions that link the field address associated with the database to a corresponding field address of an entity in a knowledge-based system library. The script(s) 318 may further include expressions that convert the format of an extracted value to the format native to the corresponding field within the entity. For example, script(s) 318 may convert a value from a small integer field to a floating point value. As another example, the script may convert a variable string to a fixed string, a decimal to a floating point, or perform any other format conversion. Additionally, the script(s) 318 may include expressions that convert the value to a unit native to the corresponding field within the entity. For example, a value held in a field with a field address identified as holding temperatures in Celsius may be converted to Fahrenheit. As another example, a value held in a field identified as holding a date of birth may be converted to age in days, months, years, or any combination thereof. The script(s) 318 may include computer-executable procedural code in any suitable format. For example, script(s) 318 may include JSON, Java, C++, C#, Python, R, PHP, Visual Basic .NET, JavaScript, Ruby, Perl, SIMSCRIPT, Object Pascal, Objective-C, Dart, Swift, Scala, Kotlin, Common Lisp, MATLAB, or Smalltalk files in some aspects.
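By way of illustration, the following sketch shows two such conversions; the function names and field semantics are assumptions rather than expressions of any particular script 318.

```python
# A sketch of two conversions a script such as script 318 might perform
# before writing values into an entity: Celsius to Fahrenheit, and date of
# birth to age in years. Field semantics are illustrative assumptions.
from datetime import date

def celsius_to_fahrenheit(celsius: float) -> float:
    return celsius * 9.0 / 5.0 + 32.0

def age_in_years(dob: date, today: date) -> int:
    # Subtract one year if this year's birthday has not yet occurred.
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

print(celsius_to_fahrenheit(39.1))                        # ~102.38
print(age_in_years(date(1990, 6, 15), date(2023, 5, 1)))  # 32
```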
System 300 also includes a record repository server 316, mentioned above. The record repository server 316 generally facilitates the storage and maintenance of data. Generally, the data can be stored via any suitable computer-readable media communicatively accessible to the processing components of the record repository server 316. For example, the data can be stored in a record database 320 with a defined data schema. As will be understood by those skilled in the art, the data schema of a record database (e.g., record database 320) can vary widely. For example, the nomenclature, table structure, field structure, and database language (e.g., SQL, Oracle, SQL Server, MySQL, and so forth) preferred by the person or people building and administering the database directly and indirectly affect the overall data schema.
Record repository server 316 generally maintains one or more record databases 320 that store and organize data records. Record repository server 316 can include hardware, software, or firmware that facilitates creation, retrieval, and modification of the data records stored in the record database 320. Each database 320 has a data schema. The data schema includes the relational associations, metadata, and configuration of each field and table of database 320. In some aspects, record repository server 316 comprises an EHR system. An EHR system includes medical records, which may be maintained in one or more databases, and may further include one or more computers or servers that facilitate the storing and retrieval of the medical records associated with a patient. Each medical record contains personal medical or health data for a particular patient and any other data associated with the patient (e.g., a unique identifier, demographic data, scheduled appointments, care facility admission information, and so forth). Examples of EHR systems include Cerner's® Millennium®. In some aspects, record repository server 316 also includes an interoperability interface, which is a computing interface that enables data transmission between the database 320 and another device (e.g., user device 310, ontology-guided classification component 302, decision support application 312, or any other device). In particular, the interoperability interface may define how calls, codes, or requests are made in the data schema used by the record database 320 that is maintained by the record repository server 316.
It should be understood that the system 300 of FIG. 3 is provided as one example, and that other arrangements including additional, fewer, or alternative components may be employed without departing from the scope of the present disclosure.
In a traditional computerized system, a computer programmer would convert the diagnostic criteria included in a standard of care, predictive model, or standard procedure into procedural code that identifies each iteration of the diagnostic criteria. As mentioned above, even relatively uncomplicated diagnostic criteria can result in a significant number of possible permutations. For example, particular diagnostic criteria having only five inclusion conditions, where condition A has four available states, condition B has six available states, condition C has three available states, condition D has three available states, and condition E has four available states, may have 864 available permutations (i.e., 4×6×3×3×4=864 available permutations of the available states). Moreover, a single misplaced ‘,’ or ‘;’ or ‘(’ or ‘)’ in any of the procedural code permutations may result in inoperable or improperly operating procedural code. Because each procedural code permutation includes a state for each condition based on the unique criteria for the diagnosis, reusing or repurposing procedural code created for one clinical trial with the criteria of another clinical trial is traditionally a non-option.
In contrast, and with reference to FIG. 4, an illustrative ontology hierarchy 400 is depicted.
Similar to ontologies in other scientific and technical fields, an ontology hierarchy (e.g., ontology hierarchy 400) comprises a “tree”, “backbone”, or “hierarchy” representing the relationships of the concepts and rules 402 and classes 404. For example, the depicted concepts 406 include a group of numeric concepts including a Temperature concept. Each concept 406 is linked to at least one rule 408 by a set of properties 410. Properties of a concept are the characteristics of the concept. The characteristics can include directed binary relations that specify a rule which is true for instances of that concept, as well as logical features (for example, being transitive, symmetric, inverse, or functional). For example, in the ontology hierarchy 400, the depicted properties 410 specify that the Temperature concept can have an answer that is defined by the IncreasedTemperature rule or the NormalTemperature rule. Further, properties can include interoperability data schema associations. For example, in the ontology hierarchy 400, the Temperature concept has a property 410 that associates the Temperature concept with a particular FHIR® code (e.g., field address 8310-5).
The ontology hierarchy references rules in relation to concepts and classes, as mentioned above. Rules 408 define the directed binary relations, logical expressions, or a combination thereof that are asserted as true for an entity. Said another way, for each entity analyzed, a reasoner (e.g., reasoner 338) determines that the finding answer for the Temperature concept is IncreasedTemperature if the IncreasedTemperature rule is true when evaluated with the temperature data (e.g., the value stored in the data field) included in the entity.
The ontology hierarchy references classes in relation to concepts and rules, as mentioned above. A class is the logical axiom that describes an entity based only on the rules and concepts of the ontology hierarchy. An ontology hierarchy may include one or more asserted classes. Each class is defined by a relationship to the applicable concepts. For example, the ontology hierarchy 400 includes two depicted classes 404 (i.e., Covid-19_Diagnosis_Confirmed and Covid-19_Lab_Result_Positive). A reasoner will infer that an entity is a member of the Covid-19_Diagnosis_Confirmed class if, for example, the entity includes value data that makes the logical axiom for that class true. Although only two classes are depicted, an ontology hierarchy may include any number of classes. A sketch of evaluating such class axioms follows.
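By way of illustration and not limitation, class axioms may be sketched as predicates over an entity's findings; the axiom bodies below (e.g., requiring both a positive PCR finding and an increased temperature for Covid-19_Diagnosis_Confirmed) are invented assumptions rather than the actual axioms of ontology hierarchy 400.

```python
# A sketch of evaluating class axioms against an entity's findings. The
# axiom bodies and finding thresholds are invented illustrations.

def findings(entity: dict) -> set:
    inferred = set()
    if entity.get("hasTemperature", 0.0) > 38.0:
        inferred.add("IncreasedTemperature")
    if entity.get("hasPcrResult") == "positive":
        inferred.add("PositivePcr")
    return inferred

CLASS_AXIOMS = {
    "Covid-19_Lab_Result_Positive": lambda f: "PositivePcr" in f,
    "Covid-19_Diagnosis_Confirmed": lambda f: {"PositivePcr", "IncreasedTemperature"} <= f,
}

entity = {"identifier": "P-0001", "hasTemperature": 39.1, "hasPcrResult": "positive"}
entity_findings = findings(entity)
print([name for name, axiom in CLASS_AXIOMS.items() if axiom(entity_findings)])
```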
Continuing, FIG. 5 depicts an example method 500 for classifying records based on asserted ontological axioms.
Generally, method 500 detects a triggering event (e.g., a modification of a database record or execution of a procedurally coded decision support workflow). A set of values, field addresses, and an identifier are extracted for a database record. The extracted data is populated into an entity within a knowledge-based library. The rules, concepts, and relationships of the knowledge-based library are applied by a reasoner to infer the class of the entity. Based on the inferred class, classification data of the entity and of the values within the entity is computed. The classification data is returned to a procedurally coded decision support workflow, which is executed based on the classification data.
Some aspects of method 500 begin with block 502. At block 502, a modification of a field of a database record is detected via a set of procedurally executed code that monitors operations associated with a database maintaining the database record. For example, a script (e.g., script 318 of FIG. 3) may monitor a record database (e.g., record database 320 of FIG. 3) and detect the modification of the field.
Some aspects of method 500 alternatively begin with detection of a triggering event. For example, execution of a procedurally coded workflow (e.g., procedurally coded workflow 314 of FIG. 3) may serve as the triggering event that initiates method 500.
At block 504, a plurality of values stored in database fields are extracted from the database record. The values include at least an identifier corresponding to the database record. Additionally, in some aspects, a field address is extracted for the database field associated with each of the plurality of values. For example, a script (e.g., script 318 of FIG. 3) may extract the plurality of values, the identifier, and the associated field addresses from the database record.
At block 506, an entity is generated in a knowledge-based library by execution of a script that writes a formatted version of the plurality of values to a corresponding set of fields mapped to the plurality of database fields based on the field address. Some aspects of block 506 may be facilitated by the executable procedural code of a script (e.g., script 318 of FIG. 3).
At block 508, a first class to which the entity belongs is computed based on the inference of a reasoner and a knowledge-based library including the entity. Aspects of block 508 are facilitated by a reasoner (e.g., reasoner 338 of FIG. 3).
At block 510, classification data is returned as input to a procedurally defined workflow for the entity in response to determination of the inferred class. The classification data may include the first class, one or more super classes of the first class, one or more direct super classes of the first class, one or more findings of the values within the entity, or any combination thereof. In some aspects, block 510 is facilitated by a reasoner (e.g., reasoner 338 of FIG. 3).
At block 512, a workflow action within a procedurally defined workflow is executed based on the classification data. In some aspects, block 512 is facilitated by a decision support application (e.g., decision support application 312 of FIG. 3), as sketched below.
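A minimal sketch of executing a workflow action from the returned classification data follows; the mapping of classes to actions is an illustrative assumption.

```python
# A sketch of block 512: select a procedurally defined workflow action from
# the returned classification data. The action table is illustrative.

ACTIONS = {
    "Covid-19_Diagnosis_Confirmed": "order_isolation_protocol",
    "Covid-19_Lab_Result_Positive": "schedule_confirmatory_testing",
}

def execute_workflow_action(classification_data: dict) -> str:
    for cls in classification_data.get("classes", []):
        if cls in ACTIONS:
            return ACTIONS[cls]  # first matching procedurally defined action
    return "wait_for_new_data"   # no class matched: hold at the decision point

print(execute_workflow_action({"classes": ["Covid-19_Lab_Result_Positive"]}))
```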
Advantageously, and in contrast to traditional procedurally defined workflows, the class of an entity is dynamically dependent on the values associated with the entity. As such, and again in contrast to traditional procedurally defined workflows, the addition or manipulation of data within a record may not merely advance the workflow to the next procedurally defined point. Rather, aspects of the hybridized systems and methods described herein may facilitate adaptive execution of a procedurally defined workflow based on the classification data dynamically inferred based on the rules, classes, and relationships defined by a knowledge-based library.
Embodiments of the disclosure may be described in the context of computer code or machine-useable instructions, including computer-useable or computer-executable instructions, such as program modules, being executed by a computer or other machine, such as a personal data assistant, a smartphone, a tablet PC, or other handheld device. Generally, program modules, including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types. Embodiments of the disclosure may be practiced in a variety of system configurations, including mobile devices, consumer electronics, more specialty computing devices, or the like. Embodiments of the disclosure may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
With reference to FIG. 6, an exemplary computing device 600 suitable for implementing aspects of the present disclosure is depicted.
Computing device 600 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computing device 600 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVDs) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 600. Computer storage media does not comprise transitory signals per se. Communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
Memory 612 includes computer storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing device 600 includes one or more processors 614 that read data from various entities such as memory 612 or I/O components 620. Presentation component(s) 616 presents data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, and the like.
The I/O ports 618 allow computing device 600 to be logically coupled to other devices, including I/O components 620, some of which may be built in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc. The I/O components 620 may provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user. In some instances, inputs may be transmitted to an appropriate network element for further processing. An NUI may implement any combination of speech recognition, touch and stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition associated with displays on the computing device 600. The computing device 600 may be equipped with depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, and combinations of these, for gesture detection and recognition. Additionally, the computing device 600 may be equipped with accelerometers or gyroscopes that enable detection of motion. The output of the accelerometers or gyroscopes may be provided to the display of the computing device 600 to render immersive augmented reality or virtual reality.
Some embodiments of computing device 600 may include one or more radio(s) 624 (or similar wireless communication components). The radio 624 transmits and receives radio or wireless communications. The computing device 600 may be a wireless terminal adapted to receive communications and media over various wireless networks. Computing device 600 may communicate via wireless protocols, such as long term evolution (“LTE”), code division multiple access (“CDMA”), global system for mobiles (“GSM”), or time division multiple access (“TDMA”), as well as others, to communicate with other devices. The radio communications may be a short-range connection, a long-range connection, or a combination of both a short-range and a long-range wireless telecommunications connection. When we refer to “short” and “long” types of connections, we do not mean to refer to the spatial relation between two devices. Instead, we are generally referring to short range and long range as different categories, or types, of connections (i.e., a primary connection and a secondary connection). A short-range connection may include, by way of example and not limitation, a Wi-Fi® connection to a device (e.g., mobile hotspot) that provides access to a wireless communications network, such as a WLAN connection using the 802.11 protocol; a Bluetooth® connection to another computing device is a second example of a short-range connection, or a near-field communication connection. A long-range connection may include a connection using, by way of example and not limitation, one or more of CDMA, LTE, GPRS, GSM, TDMA, and 802.16 protocols.
The subject matter of the technology described herein is described with specificity to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of the methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. Additionally, those skilled in the art will understand that the pseudocode included herein is illustrative in nature and, given the variations in programmatic languages, should not be interpreted as implying any particular requirements.