There are a variety of situations in which a human operator has to answer a set of discrete questions given a corpus of documents containing information pertaining to the questions. One example of such a situation is that in which a human operator is tasked with associating billing codes with a hospital stay of a patient, based on a collection of all documents containing information about the patient's hospital stay. Such documents may, for example, contain information about the medical procedures that were performed on the patient during the stay and other billable activities performed by hospital staff in connection with the patient during the stay.
This set of documents may be viewed as a corpus of evidence for the billing codes that need to be generated and provided to an insurer for reimbursement. The task of the human operator, a billing coding expert in this example, is to derive a set of billing codes that are justified by the given corpus of documents, considering applicable rules and regulations. Mapping the content of the documents to a set of billing codes is a demanding cognitive task. It may involve, for example, reading reports of surgeries performed on the patient and determining not only which surgeries were performed, but also identifying the personnel who participated in such surgeries, and the type and quantity of materials used in such surgeries (e.g., the number of stents inserted into the patient's arteries), since such information may influence the billing codes that need to be generated to obtain appropriate reimbursement. Such information may not be presented within the documents in a format that matches the requirements of the billing code system. As a result, the human operator may need to carefully examine the document corpus to extract such information.
Because of such difficulties inherent in generating billing codes based on a document corpus, various computer-based support systems have been developed to guide human coders through the process of deciding which billing codes to generate based on the available evidence. Despite such guidance, it can still be difficult for the human coder to identify the information necessary to answer each question.
To address this problem, the above-referenced patent application entitled, “Providing Computable Guidance to Relevant Evidence in Question-Answering Systems” (U.S. patent application Ser. No. 13/025,051) discloses various techniques for pointing the human coder to specific regions within the document corpus that may contain evidence of the answers to particular questions. The human coder may then focus initially or solely on those regions to generate answers, thereby generating such answers more quickly than if it were necessary to review the entire document corpus manually. The answers may themselves take the form of billing codes or may be used, individually or in combination with each other, to select billing codes.
For example, an automated inference engine may be used to generate billing codes automatically based on the document corpus and possibly also based on answers generated manually and/or automatically. The conclusions drawn by such an inference engine may, however, not be correct. What is needed, therefore, are techniques for improving the accuracy of billing codes and other data generated by automated inference engines.
A method for selective modification to one of a plurality of components includes receiving, by an engine, a draft transcript including at least one concept content. The method includes accessing, by a first component in a plurality of components executed by the engine, a mapping between content data and codes to identify a code mapped to the at least one concept content. The method includes modifying the draft transcript to include the identified code. The method includes receiving input representing a status of the identified code. The method includes accessing a data structure storing an indication that the first component identified the code. The method includes modifying a reliability score for the first component. The method includes determining that the first component has a reliability score that fails to satisfy a predetermined threshold. The method includes modifying execution of the first component, based on the determination.
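By way of illustration only, the following Python sketch shows one possible arrangement of the steps summarized above. The names, the zero-to-one score scheme, and the 0.8 threshold are illustrative assumptions of this sketch, not details of the method itself.

    # Illustrative sketch only; names, score scheme, and threshold are assumed.
    RELIABILITY_THRESHOLD = 0.8

    class Component:
        """A component that maps concept content to codes."""
        def __init__(self, name, mapping):
            self.name = name
            self.mapping = mapping      # concept content -> code
            self.reliability = 1.0     # start fully trusted
            self.enabled = True        # execution may be modified later

        def identify_code(self, concept_content):
            return self.mapping.get(concept_content)

    def process(draft, component, provenance):
        """Apply the component to the draft and record which codes it added."""
        for concept in draft["concepts"]:
            code = component.identify_code(concept)
            if code is not None:
                draft["codes"].append(code)      # modify the draft transcript
                provenance[code] = component     # remember who identified it

    def apply_feedback(code, confirmed, provenance, step=0.05):
        """Adjust the responsible component's score; disable it if too low."""
        component = provenance[code]             # who identified the code
        component.reliability += step if confirmed else -step
        component.reliability = min(max(component.reliability, 0.0), 1.0)
        if component.reliability < RELIABILITY_THRESHOLD:
            component.enabled = False            # modify execution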
Embodiments of the present invention may be used to improve the quality of computer-based components that are used to identify concepts within documents, such as components that identify concepts within speech and that encode such concepts in codes (e.g., XML tags) within transcriptions of such speech. Such codes are referred to herein as “concept codes” to distinguish them from other kinds of codes. One example of a system for performing such encoding of concepts within concept codes is disclosed in U.S. Pat. No. 7,584,103, entitled, “Automated Extraction of Semantic Content and Generation of a Structured Document from Speech,” which is hereby incorporated by reference herein. Embodiments of the present invention may generate transcripts of speech and encode concepts represented by such speech within concept codes in those transcripts using, for example, any of the techniques disclosed in U.S. Pat. No. 7,584,103.
For example, by way of high-level overview, a transcription system 104 transcribes a spoken audio stream 102 to produce a draft transcript 106 (operation 202). The spoken audio stream 102 may, for example, be dictation by a doctor describing a patient visit. The spoken audio stream 102 may take any form. For example, it may be a live audio stream received directly or indirectly (such as over a telephone or IP connection), or an audio stream recorded on any medium and in any format.
The transcription system 104 may produce the draft transcript 106 using, for example, an automated speech recognizer or a combination of an automated speech recognizer and a physician or other human reviewer. The transcription system 104 may, for example, produce the draft transcript 106 using any of the techniques disclosed in the above-referenced U.S. Pat. No. 7,584,103. As described therein, the draft transcript 106 may include text that is either a literal (verbatim) transcript or a non-literal transcript of the spoken audio stream 102. As further described therein, although the draft transcript 106 may include or solely contain plain text, the draft transcript 106 may also, for example, additionally or alternatively contain structured content, such as XML tags which delineate document sections and other kinds of document structure. Various standards exist for encoding structured documents, and for annotating parts of the structured text with discrete facts (data) that are in some way related to the structured text. Examples of existing techniques for encoding medical documents include the HL7 CDA v2 XML standard (ANSI-approved since May 2005), SNOMED CT, LOINC, CPT, ICD-9 and ICD-10, and UMLS.
Codes 108 may encode instances of concepts represented by corresponding text in the draft transcript 106.
Transcription system 104 may include components for extracting instances of discrete concepts from the spoken audio stream 102 and for encoding such concepts into the draft transcript 106. For example, assume that the first concept extraction component 120a extracts instances of a first concept from the audio stream 102, that the second concept extraction component 120b extracts instances of a second concept from the audio stream 102, and that the third concept extraction component 120c extracts instances of a third concept from the audio stream 102. As a result, the first concept extraction component 120a may extract an instance of the first concept from a first portion of the audio stream 102 (operation 202a), the second concept extraction component 120b may extract an instance of the second concept from a second portion of the audio stream 102 (operation 202b), and the third concept extraction component 120c may extract an instance of the third concept from a third portion of the audio stream 102 (operation 202c).
The concept extraction components 120a-c may use natural language processing (NLP) techniques to extract instances of concepts from the spoken audio stream 102. The concept extraction components 120a-c may, therefore, also be referred to herein as “natural language processing (NLP) components.”
The first, second, and third concepts may differ from each other. As just one example, the first concept may be a "date" concept, the second concept may be a "medications" concept, and the third concept may be an "allergies" concept. As a result, the concept extractions performed by operations 202a, 202b, and 202c may each produce concept content representing an instance of a different concept.
The first, second, and third portions of the spoken audio stream 102 may be disjoint, contain each other, or otherwise overlap with each other in any combination.
As used herein, "extracting an instance of a concept from an audio stream" refers to generating content that represents the instance of the concept, based on a portion of the audio stream 102 that represents the instance of the concept. Such generated content is referred to herein as "concept content." For example, in the case of a "date" concept, an example of extracting an instance of the date concept from the audio stream 102 is generating the text "<DATE>October 1, 1993</DATE>" based on a portion of the audio stream in which "ten one ninety three" is spoken, because both the text "<DATE>October 1, 1993</DATE>" and the speech "ten one ninety three" represent the same instance of the "date" concept, namely the date Oct. 1, 1993. In this example, the text "<DATE>October 1, 1993</DATE>" is an example of concept content.
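As a concrete illustration of this kind of extraction, the short Python function below maps the spoken tokens of this example to concept content. The word-to-number table is a deliberately tiny, hypothetical stand-in for a real spoken-date grammar and is not a limitation of the techniques described herein.

    import calendar

    # Hypothetical, deliberately tiny spoken-number table for this example only.
    WORD_TO_NUM = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
                   "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
                   "ninety": 90}

    def extract_date(tokens):
        """Map speech such as ['ten', 'one', 'ninety', 'three'] to concept content."""
        month = WORD_TO_NUM[tokens[0]]
        day = WORD_TO_NUM[tokens[1]]
        year = 1900 + WORD_TO_NUM[tokens[2]] + WORD_TO_NUM[tokens[3]]
        return "<DATE>%s %d, %d</DATE>" % (calendar.month_name[month], day, year)

    print(extract_date("ten one ninety three".split()))
    # prints: <DATE>October 1, 1993</DATE>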
As this example illustrates, concept content may include a code and corresponding text. For example, the first concept extraction component 120a may extract an instance of the first concept to generate first concept content 122a (operation 202a) by encoding the instance of the first concept in concept code 108a and corresponding text 118a in the draft transcript 106, where the concept code 108a specifies the first concept (e.g., the "date" concept) and wherein the first text 118a represents (i.e., is a literal or non-literal transcription of) the first portion of spoken audio stream 102. Similarly, the second concept extraction component 120b may extract an instance of the second concept to generate second concept content 122b (operation 202b) by encoding the instance of the second concept in concept code 108b and corresponding text 118b in the draft transcript 106, where the concept code 108b specifies the second concept (e.g., the "medications" concept) and wherein the second text 118b represents the second portion of spoken audio stream 102. Finally, the third concept extraction component 120c may extract an instance of the third concept to generate third concept content 122c (operation 202c) by encoding the instance of the third concept in concept code 108c and corresponding text 118c in the draft transcript 106, where the concept code 108c specifies the third concept (e.g., the "allergies" concept) and wherein the third text 118c represents the third portion of spoken audio stream 102.
As stated above, in this example, the text "<DATE>October 1, 1993</DATE>" is an example of concept content that represents an instance of the "date" concept. Concept content need not, however, include both a code and text. Instead, for example, concept content may include only a code (or other specifier of the instance of the concept represented by the code) but not any corresponding text. For example, the concept content 122a may include the concept code 108a but not any corresponding text.
The concept extraction components 120a-c may take any form. For example, they might be distinct rules, heuristics, statistical measures, sets of data, or any combination thereof. Each of the concept extraction components 120a-c may take the form of a distinct computer program module, but this is not required. Instead, for example, some or all of the concept extraction components may be implemented and integrated into a single computer program module.
As described in more detail below, embodiments of the present invention may track the reliability of each of the concept extraction components 120a-c, such as by associating a distinct reliability score or other measure of reliability with each of the concept extraction components 120a-c. Such reliability scores may, for example, be implemented by associating and storing a distinct reliability score in connection with each of the concepts extracted by the concept extraction components 120a-c. For example, a first reliability score may be associated and stored in connection with the concept generated by concept extraction component 120a; a second reliability score may be associated and stored in connection with the concept generated by concept extraction component 120b; and a third reliability score may be associated and stored in connection with the concept generated by concept extraction component 120c. If some or all of the concept extraction components 120a-c are integrated into a single computer program module, then the distinct concept extraction components 120a-c may nonetheless be tracked separately, such as by associating reliability scores with the concepts they extract rather than with distinct program modules.
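One minimal way to keep such per-concept scores is sketched below; the dictionary keys, the step size, and the zero-to-one scale are assumptions of this sketch rather than requirements of the invention.

    # Assumed bookkeeping: one score per extracted concept, on a 0-to-1 scale.
    reliability = {
        "date":        1.0,   # concept extracted by component 120a
        "medications": 1.0,   # concept extracted by component 120b
        "allergies":   1.0,   # concept extracted by component 120c
    }

    def record_feedback(concept, confirmed, step=0.05):
        """Nudge the score up on confirmation and down on disconfirmation."""
        updated = reliability[concept] + (step if confirmed else -step)
        reliability[concept] = min(max(updated, 0.0), 1.0)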
As described above, each of the concept contents 122a-c in the draft transcript 106 may be created by a corresponding one of the concept extraction components 120a-c. Links 124a-c represent these correspondences between the concept contents 122a-c and the concept extraction components 120a-c that created them.
Links 124a-c may or may not be generated and/or stored as elements of the system 100a. For example, links 124a-c may be stored within data structures in the system 100a, such as in data structures within the draft transcript 106. For example, each of the links 124a-c may be stored within a data structure within the corresponding one of the concept contents 122a-c. Such data structures may, for example, be created by or using the concept extraction components 120a-c as part of the process of generating the concept contents 122a-c.
Embodiments of the present invention may be used in connection with a question-answering system, such as the type described in the above-referenced patent application entitled, “Providing Computable Guidance to Relevant Evidence in Question-Answering Systems.” As described therein, one use of question-answering systems is for generating billing codes based on a corpus of clinical medical reports. In this task, a human operator (coder) has to review the content of the clinical medical reports and, based on that content, generate a set of codes within a controlled vocabulary (e.g., CPT and ICD-9 or ICD-10) that can be submitted to a payer for reimbursement. This is a cognitively demanding task which requires abstracting from the document content to generate appropriate billing codes.
In particular, once the draft transcript 106 has been generated, a reasoning module 130 (also referred to herein as an “inference engine”) may be used to generate or select appropriate billing codes 140 based on the content of the draft transcript 106 and/or additional data sources. The reasoning module 130 may use any of the techniques disclosed in the above-referenced U.S. patent application Ser. No. 13/025,051 (“Providing Computable Guidance to Relevant Evidence in Question-Answering Systems”) to generate billing codes 140. For example, the reasoning module 130 may be a fully automated reasoning module, or combine automated reasoning with human reasoning provided by a human billing code expert.
Although billing codes 140 are shown as the output of the reasoning module 130 in this example, billing codes are merely one example of the kinds of codes that may be generated using the techniques disclosed herein.
The reasoning module 130 may encode the applicable rules and regulations for billing coding published by, e.g., insurance companies and state agencies. The reasoning module 130 may, for example, include forward logic components 132a-c, each of which implements a distinct set of logic for mapping document content to billing codes. Although three forward logic components 132a-c are shown, the reasoning module 130 may include any number of forward logic components.
Although the reasoning module 130 is shown as receiving the draft transcript 106 as its input, the reasoning module 130 may receive and operate on other data sources instead of or in addition to the draft transcript 106.
As another example, the reasoning module 130 may receive a text document (e.g., in ASCII or HTML), which is then processed by data extraction components (not shown) to encode the text document with concept content in a manner similar to that in which the concept extraction components 120a-c encode concept contents based on an audio stream. Therefore, any reference herein to the use of the draft transcript 106 by the reasoning module 130 should be understood to refer more generally to the use of any data source (such as a data source containing data relating to a particular patient or a particular procedure) by the reasoning module 130 to generate billing codes 140.
Furthermore, although in the example described above the reasoning module 130 reasons over a single draft transcript 106, a reconciliation module 150 may instead derive a set of propositions 160 (e.g., propositions 162a-c) from a plurality of draft transcripts 106a-c, and the reasoning module 130 may reason over those propositions.
The reconciliation module 150 may derive the propositions 162a-c from the draft transcripts 106a-c by, for example, applying reconciliation logic modules 152a-c to the draft transcripts 106a-c (e.g., to the concept contents 122a-c within the draft transcripts 106a-c). Each of the reconciliation logic modules 152a-c may implement distinct logic for deriving propositions from draft transcripts 106a-c. A reconciliation logic module may, for example, derive a proposition from a single concept content (such as by deriving the proposition "patient has diabetes" from a <DIABETES_NOT_FURTHER_SPECIFIED> code). As another example, a reconciliation logic module may derive a proposition from multiple concept contents, such as by deriving the proposition "patient has uncontrolled diabetes" from a <DIABETES_NOT_FURTHER_SPECIFIED> code and a <DIABETES_UNCONTROLLED> code. The reconciliation module 150 may perform such derivation of a proposition from multiple concept contents by first deriving distinct propositions from each of the concept contents and then applying a reconciliation logic module to the distinct propositions to derive a further proposition.
This is an example of reconciling a general concept with a specialization of the general concept by deriving a proposition representing the specialization of the general concept. Those having ordinary skill in the art will understand how to implement other reconciliation logic for reconciling multiple concepts to generate propositions resulting from such reconciliation. Furthermore, the reconciliation module 150 need not be limited to applying reconciliation logic modules 152a-c to draft transcripts 106a-c in a single iteration. More generally, reconciliation module 150 may, for example, repeatedly (e.g., periodically) apply reconciliation logic modules 152a-c to the current set of propositions 162a-c to refine existing propositions and to add new propositions to the set of propositions 160. As new draft transcripts are provided as input to the reconciliation module 150, the reconciliation module 150 may derive new propositions from those draft transcripts, add the new propositions to the set of propositions 160, and again apply reconciliation logic modules 152a-c to the new set of propositions 160.
As described in more detail below, embodiments of the present invention may track the reliability of various components of the systems 100a-b, such as individual concept extraction components 120a-c. The reconciliation module 150 may propagate the reliability of one concept to other concepts that are derived from that concept using the reconciliation logic modules 152a-c. For example, if a first concept has a reliability score of 50%, then the reconciliation module 150 may assign a reliability score of 50% to any proposition that the reconciliation module 150 derives from the first concept. When the reconciliation module 150 derives a proposition from multiple propositions, the reconciliation module 150 may assign a reliability score to the derived proposition based on the reliability scores of the multiple propositions in any of a variety of ways.
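By way of illustration, the sketch below derives the two diabetes propositions discussed above and propagates reliability by taking the minimum of the contributing scores; using min() is merely one assumed choice among the "variety of ways" mentioned above.

    def reconcile(codes, scores):
        """Derive propositions from concept codes, propagating reliability."""
        propositions = {}
        if "<DIABETES_NOT_FURTHER_SPECIFIED>" in codes:
            propositions["patient has diabetes"] = scores["<DIABETES_NOT_FURTHER_SPECIFIED>"]
        if ("<DIABETES_NOT_FURTHER_SPECIFIED>" in codes and
                "<DIABETES_UNCONTROLLED>" in codes):
            # A proposition derived from two codes inherits the weaker score.
            propositions["patient has uncontrolled diabetes"] = min(
                scores["<DIABETES_NOT_FURTHER_SPECIFIED>"],
                scores["<DIABETES_UNCONTROLLED>"])
        return propositions

    print(reconcile({"<DIABETES_NOT_FURTHER_SPECIFIED>", "<DIABETES_UNCONTROLLED>"},
                    {"<DIABETES_NOT_FURTHER_SPECIFIED>": 0.5,
                     "<DIABETES_UNCONTROLLED>": 0.9}))
    # the derived "uncontrolled" proposition receives the weaker 0.5 score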
The propositions 160 may be represented in a different form than the concept contents 122a-c in the draft transcripts 106a-c. For example, the concept contents 122a-c may be represented in a format such as SNOMED, while the propositions 162a-c may be represented in a format such as ICD-10.
The reasoning module 130 may reason on the propositions 160 instead of or in addition to the concepts represented by the draft transcripts 106a-c.
Although the reasoning module 130 may, for example, be either statistical or symbolic (e.g., decision logic), for ease of explanation and without limitation the reasoning module 130 in the following description will be assumed to reason based on symbolic rules. For example, each of the forward logic components 132a-c may implement a distinct symbolic rule for generating or selecting billing codes 140 based on information derived from the draft transcript 106. Each such rule includes a condition (also referred to herein as a premise) and a conclusion. The conclusion may specify one or more billing codes. As described in more detail below, if the condition of a rule is satisfied by content (e.g., concept content) of a data source, then the reasoning module 130 may generate the billing code specified by the rule's conclusion.
A condition may, for example, require the presence in the data source of a concept code representing an instance of a particular concept. Therefore, in the description herein, “condition A” may refer to a condition which is satisfied if the data source contains a concept code representing an instance of concept A, whereas “condition B” may refer to a condition which is satisfied if the data source contains a concept code representing an instance of concept B, where concept A may differ from concept B. Similarly, “condition A” may refer to a condition which is satisfied by the presence of a proposition representing concept A in the propositions 160, while “condition B” may refer to a condition which is satisfied by the presence of a proposition representing concept B in the propositions 160. These are merely examples of conditions, however, not limitations of the present invention. A condition may, for example, include multiple sub-conditions (also referred to herein as clauses) joined by one or more Boolean operators.
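A minimal representation of such a rule is sketched below, assuming clauses joined by AND over a set of concept codes; the class and helper names are hypothetical and illustrative only.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Rule:
        clauses: List[Callable[[set], bool]]   # sub-conditions over the data source
        billing_code: str                      # the conclusion

        def condition_satisfied(self, concept_codes: set) -> bool:
            return all(clause(concept_codes) for clause in self.clauses)

    def has_code(code):
        """Clause satisfied if the data source contains the given concept code."""
        return lambda concept_codes: code in concept_codes

    # "condition A" in the sense used above: satisfied by an instance of concept A.
    condition_a = has_code("<A>")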
One advantage of symbolic rule systems is that, as rules and regulations change, the symbolic rules represented by the forward logic components 132a-c may be adjusted manually, without the need to re-learn the new set of rules from an annotated corpus or from observed operator feedback.
Furthermore, not all elements of the systems 100a and 100b are required by embodiments of the present invention; particular embodiments may omit some of the elements described above.
Furthermore, although transcript 106 and transcripts 106a-c are referred to herein as “draft” transcripts, embodiments of the present invention may be applied not only to draft documents but more generally to any document, such as documents that have been reviewed, revised, and finalized, so that they are no longer drafts.
Examples of three rules that may be implemented by forward logic components 132a-c, respectively, are shown in Table 1:
Each of the three rules is of the form "if (premise) then (conclusion)," where the premise and conclusion of each rule is as shown in Table 1. More specifically, in the example of Table 1: Rule #1 generates the billing code <DIABETES_NOT_FURTHER_SPECIFIED> if the patient has diabetes (e.g., if the clause patient_has_problem<DIABETES> is satisfied); Rule #2 generates the billing code <UNCONTROLLED_DIABETES> if the patient has diabetes and that diabetes is uncontrolled (e.g., if the clauses patient_has_problem<DIABETES> and p.getStatus( )==<UNCONTROLLED> are both satisfied); and Rule #3 generates a corresponding billing code if the patient's uncontrolled diabetes is accompanied by hyperosmolarity.
The reasoning module 130 may generate the set of billing codes 140 based on the data source (e.g., draft transcript 106) by initializing the set of billing codes 140 (e.g., creating an empty set of billing codes) and then applying each of the forward logic components 132a-c to the data source, adding to the set of billing codes 140 the billing code specified by the conclusion of any forward logic component whose condition is satisfied.
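Under those assumptions, a forward-chaining pass might look like the sketch below. Rules #1 and #2 follow the diabetes walk-through later in this description; because Table 1 is not reproduced here, Rule #3's billing code name is an assumption of this sketch.

    # Rules are (premise, billing code) pairs; Rule #3's code name is assumed.
    RULES = [
        (lambda cs: "<DIABETES_NOT_FURTHER_SPECIFIED>" in cs,        # Rule #1
         "<DIABETES_NOT_FURTHER_SPECIFIED>"),
        (lambda cs: "<DIABETES_NOT_FURTHER_SPECIFIED>" in cs and     # Rule #2
                    "<DIABETES_UNCONTROLLED>" in cs,
         "<UNCONTROLLED_DIABETES>"),
        (lambda cs: "<DIABETES_UNCONTROLLED>" in cs and              # Rule #3
                    "<HYPEROSMOLARITY>" in cs,
         "<DIABETES_WITH_HYPEROSMOLARITY>"),
    ]

    def generate_billing_codes(concept_codes):
        billing_codes = set()                 # initialize an empty set of codes
        for premise, conclusion in RULES:     # apply each forward logic component
            if premise(concept_codes):
                billing_codes.add(conclusion)
        return billing_codes

    print(generate_billing_codes({"<DIABETES_NOT_FURTHER_SPECIFIED>",
                                  "<DIABETES_UNCONTROLLED>"}))
    # both the general and the uncontrolled-diabetes codes are generated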
As previously mentioned, the reasoning module 130 may generate the set of billing codes 140 based on the propositions 160 instead of the data source (e.g., draft transcript 106), in which case any reference herein to applying forward logic components 132a-c to concept codes or to the data source should be understood to refer to applying forward logic components 132a-c to the propositions 160. For example, the conditions of the rules in Table 1 may be applied to the propositions 160 instead of to codes in the data source.
Billing codes may represent concepts organized in an ontology, such as an ontology 300 in which each node represents a concept and each child node represents a concept related to the concept of its parent node. If a particular node represents a first concept, and a child node of the particular node represents a second concept, then the second concept may be a "specialization" of the first concept. For example, in the ontology 300, a node representing the concept of uncontrolled diabetes may be a child of a node representing the general concept of diabetes, in which case uncontrolled diabetes is a specialization of diabetes.
Operation 208 of the method 200 will now be described in more detail by way of an example. To further understand the method 200, assume that the reasoning module 130 finds that the draft transcript 106 contains a finding related to a particular patient that has been marked up with a code of "<DIABETES_NOT_FURTHER_SPECIFIED>." In this case, the condition of forward logic component 132a (e.g., Rule #1) would be satisfied, and the reasoning module 130 would add a billing code <DIABETES_NOT_FURTHER_SPECIFIED> to the current set of billing codes 140 being generated. Assume for purposes of example that billing code 142a is the billing code <DIABETES_NOT_FURTHER_SPECIFIED>.
Similarly, assume that the reasoning module 130 finds that the draft transcript 106 contains a finding related to the same patient that has been marked up with a code of "<DIABETES_UNCONTROLLED>." In this case, the condition of forward logic component 132b (e.g., Rule #2) would be satisfied, and the reasoning module 130 would add a billing code <UNCONTROLLED_DIABETES> to the current set of billing codes 140 being generated. Assume for purposes of example that billing code 142b is the billing code <UNCONTROLLED_DIABETES>.
Further assume that the draft transcript 106 contains no evidence that the same patient suffers from hyperosmolarity. As a result, the reasoning module 130 would not find that the condition of forward logic component 132c (e.g., Rule #3) is satisfied and, as a result, forward logic component 132c would not cause any billing codes to be added to the set of billing codes 140 in this example.
In this example, although the set of billing codes 140 would now contain both the billing code <DIABETES_NOT_FURTHER_SPECIFIED> and the billing code <UNCONTROLLED_DIABETES>, the code <UNCONTROLLED_DIABETES> should take precedence over the code <DIABETES_NOT_FURTHER_SPECIFIED>. The reasoning module 130 may remove the now-moot code <DIABETES_NOT_FURTHER_SPECIFIED>, for example, by applying a recombination step. For example, if a generated code A represents a specialization of the concept represented by a generated code B, then the two codes A and B may be combined with each other. As another example, if a clause Z1 of a rule that generates a code Y1 strictly implies a clause Z2 of a rule that generates a code Y2, then the two codes Y1 and Y2 may be combined with each other (e.g., so that code Y1 survives the combination but code Y2 does not). As another example, codes may be combined based on a rule, e.g., a rule that specifies that if codes A and B have been generated, then codes A and B should be combined (e.g., so that code A survives the combination but code B does not). As yet another example, statistical or other learned measures of recombination may be used.
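The specialization-based recombination step can be sketched as follows, assuming a small fragment of the ontology is available as a child-to-parent map; the map contents are illustrative.

    # Assumed ontology fragment: child concept -> parent (more general) concept.
    PARENT = {"<UNCONTROLLED_DIABETES>": "<DIABETES_NOT_FURTHER_SPECIFIED>"}

    def is_specialization(child, ancestor):
        while child in PARENT:
            child = PARENT[child]
            if child == ancestor:
                return True
        return False

    def recombine(billing_codes):
        """Drop any code mooted by a more specialized code in the same set."""
        return {code for code in billing_codes
                if not any(is_specialization(other, code)
                           for other in billing_codes)}

    print(recombine({"<DIABETES_NOT_FURTHER_SPECIFIED>", "<UNCONTROLLED_DIABETES>"}))
    # only <UNCONTROLLED_DIABETES> survives the combination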
Links 134a-b may or may not be generated and/or stored as elements of the system 100a. For example, links 134a-b may be stored within data structures in the system 100a, such as in data structures within the set of billing codes 140. For example, each of the billing codes may contain data identifying the concept content (or part thereof) that caused the billing code to be generated. The reasoning module 130 may, for example, generate and store data representing the links 134a-b as part of the process of adding individual billing codes 142a-b, respectively, to the system 100a in operation 210 of the method 200.
Links 144a-b may or may not be generated and/or stored as elements of the system 100a. For example, links 144a-b may be stored within data structures in the system 100a, such as in data structures within the set of billing codes 140. For example, each of the billing codes may contain data identifying the forward logic component that caused the billing code to be generated. The reasoning module 130 may, for example, generate and store data representing the links 144a-b as part of the process of adding individual billing codes 142a-b, respectively, to the system 100a in operation 210 of the method 200.
The set of billing codes 140 that is output by the reasoning module 130 may be reviewed by a human operator, who may accept or reject/modify the billing codes 140 generated by the automatic system 100a. More specifically, a billing code output module 402 provides output 404, representing some or all of the billing codes 142a-c, to the human reviewer 406.
The human reviewer 406 may evaluate some or all of the billing codes 140 and make a determination regarding whether some or all of the billing codes 140 are accurate. The human reviewer 406 may make this determination in any way, and embodiments of the present invention do not depend on this determination being made in any particular way. The human reviewer 406 may, for example, determine that a particular one of the billing codes 140 is inaccurate because it is inconsistent with information represented by the spoken audio stream 102 and/or the draft transcript 106.
For example, the human reviewer 406 may conclude that one of the billing codes 142a is inaccurate because the billing code is inconsistent with the meaning of some or all of the text (e.g., text 118a-c) in the data source. As one particular example of this, the human reviewer 406 may conclude that one of the billing codes 142a is inaccurate because the billing code is inconsistent with the meaning of text in the data source that has been encoded incorrectly by the transcription system 104. For example, the human reviewer 406 may conclude that billing code 142a is inaccurate as a result of concept extraction component 120a incorrectly encoding text 118a with concept code 108a. In this case, concept code 108a may represent a concept that is not represented by text 118a or by the speech in the spoken audio stream 102 that caused the transcription system 104 to generate the text 118a. As this example illustrates, the reasoning module 130 may generate an incorrect billing code as the result of providing an invalid premise (e.g., inaccurate concept content 122a) to one of the forward logic components 132a-c, where the invalid premise includes concept content that was generated by one of the concept extraction components 120a-c.
The system 400 also includes a billing code feedback module 410. Once the human reviewer 406 has determined whether a particular billing code is accurate, the reviewer 406 provides feedback 408 representing that determination to the billing code feedback module 410.
As will now be described in more detail, the feedback 408 provided by the reviewing human operator 406 may be captured and interpreted automatically to assess the performance of the automatic billing coding system 100a. In particular, embodiments of the present invention are directed to techniques for inverting the reasoning process of the reasoning module 130 in a probabilistic way to assign blame and/or praise for an incorrectly- or correctly-generated billing code to the constituent logic clauses which led to the generation of the billing code.
In general, the billing code feedback module 410 may identify one or more components of the billing code generation system 100a that were responsible for generating the billing code corresponding to the feedback 408 (operation 506).
Examples of components that may be identified as responsible for generating the billing code associated with the feedback 408 are the concept extraction components 120a-c and the forward logic components 132a-c. The system 400 may identify the forward logic component responsible for generating a billing code by, for example, following the link from the billing code back to the corresponding forward logic component. For example, if the reviewer 406 provides feedback 408 on billing code 142b, then the feedback module 410 may identify forward logic component 132b as the forward logic component that generated billing code 142b by following the link 144b from billing code 142b to forward logic component 132b. It is not necessary, however, to use links to identify the forward logic component responsible for generating a billing code. Instead, and as will be described in more detail below, inverse logic may be applied to identify the responsible forward logic component without the use of links.
The billing code feedback module 410 may associate a truth value with the identified forward logic component. For example, if the reviewer's feedback 408 confirms the reviewed billing code, then the billing code feedback module 410 may associate a truth value of “true” with the identified forward logic component; if the reviewer's feedback 408 disconfirms the reviewed billing code, then the billing code feedback module 410 may associate a truth value of “false” with the identified forward logic component. The billing code feedback module 410 may, for example, store such a truth value in or in association with the corresponding forward logic component.
The system 400 (in operation 506) may identify the concept extraction component responsible for generating the billing code by, for example, following the series of links from the billing code back to the corresponding concept extraction component. For example, if the reviewer 406 provides feedback 408 on billing code 142b, then the feedback module 410 may identify the concept extraction component 120b as the concept extraction component that generated billing code 142b by following the link 144b from billing code 142b to forward logic component 132b, by following the link 134b from the forward logic component 132b to the concept content 122b, and by following the link 124b from the concept content 122b to the concept extraction component 120b. It is not necessary, however, to use links to identify the concept extraction component responsible for generating a billing code. Instead, and as will be described in more detail below, inverse logic may be applied to identify the responsible concept extraction component without the use of links.
The system 400 (in operation 506) may identify more than one component as being responsible for generating a billing code, including components of different types. For example, the system 400 may identify both the forward logic component 132b and the concept extraction component 120b as being responsible for generating billing code 142b.
The system 400 (in operation 506) may, additionally or alternatively, identify one or more sub-components of a component as being responsible for generating a billing code. For example, as illustrated by the example rules above, a forward logic component may represent logic having multiple clauses (sub-conditions). For example, consider a forward logic component that implements a rule of the form "if A AND B, Then C." Such a rule contains two clauses (sub-conditions): A and B. In the description herein, each such clause is said to correspond to, and to be implemented by, a "sub-component" of the forward logic component that implements the rule containing the clauses.
The system 400 (in operation 506) may identify, for example, one or both of these clauses individually as being responsible for generating a billing code. Therefore, any reference herein to taking action in connection with (such as associating blame or praise with) a “component” of the system 100a should also be understood to refer to taking the action in connection with one or more sub-components of the component. In particular, each sub-component of a forward logic component may correspond to and implement a distinct clause (sub-condition) of the logic represented by the forward logic component.
The billing code feedback module 410 may associate reinforcement with the component identified in operation 506 in a variety of ways. Associating reinforcement with a component is also referred to herein as “applying” reinforcement to the component.
The billing code feedback module 410 may, for example, determine whether the feedback 408 provided by the human reviewer 406 is positive, i.e., whether the feedback 408 indicates that the corresponding billing code is accurate. If the feedback 408 is positive, the billing code feedback module 410 may associate praise (positive reinforcement) with the identified component (operation 510); otherwise, the billing code feedback module 410 may associate blame (negative reinforcement) with the identified component (operation 512).
Both praise and blame are examples of "reinforcement" as that term is used herein. Therefore, in general the billing code feedback module 410 may generate reinforcement output 412, representing praise and/or blame, as part of operations 510 and 512.
As mentioned above, reliability scores may be associated and stored in connection with representations of concepts, rather than in connection with concept extraction components. In either case, a concept may have one or more attributes, and reliability scores may be associated with attributes of the concept in addition to being associated with the concept itself. For example, if a concept has two attributes, then a first reliability score may be associated with the concept, a second reliability score may be associated with the first attribute, and a third reliability score may be associated with the second attribute.
This particular reliability score scheme is merely one example and does not constitute a limitation of the present invention, which may implement reinforcement output 412 in any way. For example, the scale of reliability scores may be inverted, so that 0 represents complete reliability and 1 represents complete unreliability. In this case, the reliability score may be thought of as a likelihood of error, ranging from 0% to 100%.
Associating praise (positive reinforcement) with a particular component may include, for example, increasing the reliability score associated with that component, while associating blame (negative reinforcement) with a particular component may include decreasing the reliability score associated with that component.
In addition to or instead of associating a reliability score with a component, a measure of relevance may be associated with the component. Such a measure of relevance may, for example, be a counter having a value that is equal or proportional to the number of observed occurrences of instances of the concept generated by the component. For example, each time an instance of a concept generated by a particular component is observed, the relevance counter associated with that component may be incremented.
If the billing code feedback module 410 applies reinforcement (i.e., blame or praise) to multiple components of the same type (e.g., multiple forward logic components, or multiple clauses of a single forward logic component), the billing code feedback module 410 may divide (apportion) the reinforcement among the multiple components of the same type, whether evenly or unevenly. For example, if the billing code feedback module 410 determines that two clauses of forward logic component 132b are responsible for generating incorrect billing code 142b, then the billing code feedback module 410 may assign half of the blame to the first clause and half of the blame to the second clause, such as by dividing (apportioning) the total blame to be assigned in half (e.g., by dividing a blame value of 0.1 into a blame value of 0.05 assigned to the first clause and a blame value of 0.05 assigned to the second clause).
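For illustration, the apportionment just described might be implemented as below; the 0.1 total blame value matches the example above, and the even split is one assumption among many possible weightings.

    def apportion_blame(reliability, clause_ids, total_blame=0.1):
        """Divide the total blame evenly among the responsible clauses."""
        share = total_blame / len(clause_ids)          # e.g., 0.1 -> 0.05 each
        for clause_id in clause_ids:
            reliability[clause_id] = max(reliability[clause_id] - share, 0.0)

    scores = {"clause 1 of 132b": 0.9, "clause 2 of 132b": 0.9}
    apportion_blame(scores, list(scores))
    print(scores)   # each clause's score drops by 0.05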
As yet another example, the billing code feedback module 410 may apply reinforcement to a particular component (or sub-component) of the system 100a by assigning, to the component, a prior known likelihood of error associated with the component. For example, a particular component may be observed in a closed feedback loop in connection with a plurality of different rules. The accuracy of the component may be observed, recorded, and then used as a prior known likelihood of error for that component by the billing code feedback module 410.
The results of applying reinforcement output 412 to the component identified in operation 506 may be stored within the system 100a. For example, the reliability score associated with a particular component may be stored within, or in association with, the particular component. For example, reliability scores associated with concept extraction components 120a-c may be stored within concept extraction components 120a-c, respectively, or within transcription system 104 and be associated with concept extraction components 120a-c. Similarly, reliability scores associated with forward logic components 132a-c may be stored within forward logic components 132a-c, respectively, or within reasoning module 130 and be associated with forward logic components 132a-c. As another example, reliability scores may be stored in, or in association with, billing codes 142a-c. For example, the reliability score(s) for the forward logic component and/or concept extraction component responsible for generating billing code 142a may be stored within billing code 142a, or be stored within billing codes 140 and be associated with billing code 142a.
As mentioned above, the component that generated a billing code may be identified in operation 506 by, for example, following one or more links from the billing code to the component. Following such links, however, merely identifies the component responsible for generating the billing code. Such identification may identify a component that includes multiple sub-components, some of which relied on accurate data to generate the billing code, and some of which relied on inaccurate data to generate the billing code. It is not desirable to assign blame to sub-components that relied on accurate data or to assign praise to sub-components that relied on inaccurate data.
Some embodiments of the present invention, therefore, distinguish between the responsibilities of sub-components within a component. For example, in the method corresponding to operation 512 (assigning blame), the billing code feedback module 410 may examine each sub-component S of the identified component and determine whether sub-component S is responsible for the inaccuracy of the reviewed billing code (operation 526). If sub-component S is determined to be responsible, then method 512 may assign blame to sub-component S in any of the ways described above.
If sub-component S is not determined to be responsible, then method 512 may either assign praise to sub-component S in any of the ways described above, or assign neither praise nor blame to sub-component S.
Similar techniques may be applied to assign praise to sub-components of a particular component. For example, in the method corresponding to operation 510 (assigning praise), the billing code feedback module 410 may examine each sub-component S of the identified component and determine whether sub-component S is responsible for the accuracy of the reviewed billing code (operation 526). If sub-component S is determined to be responsible, then method 510 may assign praise to sub-component S in any of the ways described above.
If sub-component S is not determined to be responsible, then method 510 may either assign blame to sub-component S in any of the ways described above, or assign neither blame nor praise to sub-component S.
The billing code feedback module 410 may implement either or both of the methods just described.
The billing code feedback module 410 may use any of a variety of techniques to determine (e.g., in operation 526) whether a particular sub-component is responsible for the accuracy or inaccuracy of a reviewed billing code. For example, an inverse reasoning component 630 may be used to make such determinations.
Inverse reasoning component 630 includes inverse logic components 632a-c, each of which may be implemented in any of the ways disclosed above in connection with forward logic components 132a-c of reasoning module 130.
Inverse logic component 632a may implement first logic for reasoning backwards over the rule set of reasoning module 130, inverse logic component 632b may implement second logic for reasoning backwards over the rule set of reasoning module 130, and inverse logic component 632c may implement third logic for reasoning backwards over the rule set of reasoning module 130.
For example, each of the inverse logic components 632a-c may contain both a confirmatory logic component and a disconfirmatory logic component, both of which may be implemented in any of the ways disclosed above in connection with forward logic components 132a-c of reasoning module 130.
The billing code feedback module 410 may use a confirmatory logic component to invert the logic of the rule set of reasoning module 130 if the feedback 408 confirms the accuracy of the reviewed billing code (i.e., if the feedback 408 indicates that the reviewed billing code is accurate). In other words, a confirmatory logic component specifies a conclusion that may be drawn from: (1) the rule set of reasoning module 130; (2) the propositions 160; (3) the billing code under review; and (4) feedback indicating that a reviewed billing code is accurate. Such a conclusion may, for example, be that the premise (i.e., condition) of the logic represented by a particular forward logic component in the rule set of the reasoning module 130 is valid (accurate), or that no conclusion can be drawn about the validity of the premise.
Conversely, the billing code feedback module 410 may use a disconfirmatory logic component to invert the logic of the rule set of reasoning module 130 if the feedback 408 disconfirms the accuracy of the reviewed billing code (i.e., if the feedback 408 indicates that the reviewed billing code is inaccurate). In other words, a disconfirmatory logic component specifies a conclusion that may be drawn from: (1) the rule set of reasoning module 130; (2) the propositions 160; (3) the billing code under review; and (4) feedback indicating that a reviewed billing code is inaccurate. Such a conclusion may, for example, be that the premise (i.e., condition) of the logic represented by a particular forward logic component in the rule set of the reasoning module 130 is invalid (inaccurate), or that no conclusion can be drawn about the validity of the premise.
Consider a simple example in which forward logic component 132a represents logic of the following form: "If A, Then B." The reasoning module 130 may apply such a rule to mean, "if concept A is represented by the data source (e.g., draft transcript 106), then add a billing code representing concept B to the billing codes 140." Assuming that inverse logic component 632a corresponds to forward logic component 132a, the confirmatory logic component 634a and disconfirmatory logic component 634b of inverse logic component 632a may represent the logic indicated by Table 2.
As indicated by Table 2, the confirmatory logic component 634a may represent logic indicating that the combination of: (1) the rule “If A, Then B”; and (2) feedback indicating that B is true (e.g., that a billing code representing B has been confirmed to be accurate) justifies the conclusion that (3) A is true (e.g., that the code representing A is accurate). Such a conclusion may be justified if it is also known that the rule set of reasoning module 130 contains no logic, other than the rule “If A, Then B,” for generating B. Confirmatory logic component 634a may, therefore, draw the conclusion that A is accurate by applying inverse reasoning to the rule set of the reasoning module 130 (including rules other than the rule “If A, Then B” which generated B), based on feedback indicating that B is true. In this case, the billing code feedback module 410 may assign praise to the component(s) that generated the billing code representing B. If confirmatory logic component 634a cannot determine that “If A, Then B” is the only rule in the rule set of the reasoning module 130 that can generate B, then the confirmatory logic module may assign neither praise nor blame to the component(s) that generated the billing code representing B.
Now consider the disconfirmatory logic component 634b of inverse logic component 632a. As indicated by Table 2, disconfirmatory logic component 634b may, for example, represent logic indicating that the combination of: (1) the rule "If A, Then B"; and (2) disconfirmation of B justifies the conclusion that (3) A is false (e.g., that the code representing concept A is inaccurate). In this case, the billing code feedback module 410 may assign blame to the component(s) whose output caused the billing code representing concept B to be generated, e.g., the component(s) that generated the concept code representing concept A.
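The Table 2 inversion can be summarized in a few lines of code; here, sole_generator stands for the check, described above, that no rule other than "If A, Then B" can generate B, and the function names are illustrative only.

    def invert_simple_rule(feedback_confirms_b, sole_generator):
        """Return the truth value inferred for premise A (None = no conclusion)."""
        if feedback_confirms_b:
            # Confirmation of B justifies "A is true" only when this rule is
            # the only logic in the rule set that can generate B.
            return True if sole_generator else None
        # Disconfirmation of B justifies "A is false."
        return False

    print(invert_simple_rule(True, sole_generator=True))    # True: praise A
    print(invert_simple_rule(True, sole_generator=False))   # None: no reinforcement
    print(invert_simple_rule(False, sole_generator=True))   # False: blame A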
The techniques disclosed above may be used to identify components responsible for generating a billing code without using all of the various links 124a-c, 134a-c, and 144a-c described above.
The inverse reasoning component 630 may, alternatively or additionally, use inverse logic components 632a-c to identify sub-components that are and are not responsible for the accuracy or inaccuracy of a reviewed billing code, and thereby to enable operation 526 to be performed. For example, assume that forward logic component 132a represents a rule of the form "If (A AND B), Then C." The reasoning module 130 may apply such a rule to mean, "if concept A is represented by the data source (e.g., draft transcript 106) and concept B is represented by the data source, then add a billing code representing concept C to the billing codes 140." In this case, the confirmatory logic component 634a and disconfirmatory logic component 634b of inverse logic component 632a may represent the logic indicated by Table 3.
As indicated by Table 3, confirmatory logic component 634a may, for example, represent logic indicating that if the rule “If (A AND B), Then C” is inverted based on feedback indicating that C is true (e.g., that a billing code representing concept C is accurate), then it can be concluded that A is true (e.g., that the concept code representing concept A and relied upon by the rule is accurate) and that B is true (e.g., that the concept code representing concept B and relied upon by the rule is accurate), if no other rule in the rule set of the reasoning module 130 can generate C. In this case, the billing code feedback module 410 may assign praise to the component(s) that generated the code representing concept A and to the component(s) that generated the code representing concept B.
As indicated by Table 3, disconfirmatory logic component 634b may, for example, represent logic indicating that if the rule "If (A AND B), Then C" is inverted based on feedback indicating that C is false (e.g., that a billing code representing concept C is inaccurate), then either A is false, B is false, or both A and B are false. In this case, the billing code feedback module 410 may assign blame to both the component(s) responsible for generating A and the component(s) responsible for generating B. For example, the billing code feedback module 410 may divide the blame evenly, such as by assigning 50% of the blame to the component responsible for generating concept A and 50% of the blame to the component responsible for generating concept B.
Although such a technique may result in assigning blame to a component that does not deserve such blame in a specific case, as the billing code feedback module 410 assigns blame and praise to the same component repeatedly over time, and to a variety of components in the systems 100a-b over time, the resulting reliability scores associated with the various components are likely to reflect the actual reliabilities of such components. Therefore, one advantage of embodiments of the present invention is that they are capable of assigning praise and blame to components with increasing accuracy over time, even while assigning praise and blame inaccurately in certain individual cases.
Alternatively, for example, if it is not immediately possible to assign any praise or blame to the components responsible for generating codes A or B, the billing code feedback module 410 may associate and store a truth value of “false” with the rule “If (A AND B), Then C” (e.g., with the forward logic component representing that rule). As described in more detail below, this truth value may be used to draw inferences about the truth values of A and/or B individually.
Now assume that forward logic component 132a represents a rule of the form "If (A OR B), Then C." The forward reasoning module 130 may apply such a rule to mean, "if concept A is represented by the data source (e.g., draft transcript 106) or concept B is represented by the data source, then add a billing code representing concept C to the billing codes 140." The confirmatory logic component 634a and disconfirmatory logic component 634b of inverse logic component 632a may represent the logic indicated by Table 4.
As indicated by Table 4, confirmatory logic component 634a may, for example, represent logic indicating that if the rule "If (A OR B), Then C" is inverted based on feedback indicating that C is true (e.g., that a billing code representing concept C is accurate), then either A is true, B is true, or both A and B are true. In this case, the billing code feedback module 410 may assign praise to both the component(s) responsible for generating A and the component(s) responsible for generating B. For example, the billing code feedback module 410 may divide the praise evenly, such as by assigning 50% of the praise to the component responsible for generating concept A and 50% of the praise to the component responsible for generating concept B.
Alternatively, for example, if it is not immediately possible to assign any praise or blame to the components responsible for generating codes A or B, the billing code feedback module 410 may associate and store a truth value of “true” with the rule “If (A OR B), Then C” (e.g., with the forward logic component representing that rule). As described in more detail below, this truth value may be used to draw inferences about the truth values of A and/or B individually.
As indicated by Table 4, disconfirmatory logic component 634b may, for example, represent logic indicating that if the rule "If (A OR B), Then C" is inverted based on feedback indicating that C is false (e.g., that a billing code representing concept C is inaccurate), then A must be false and B must be false. In this case, the billing code feedback module 410 may assign blame to both the component(s) responsible for generating the code representing concept A and the component(s) responsible for generating the code representing concept B.
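A combined sketch of the Table 3 and Table 4 inversions appears below, with reinforcement split evenly across clauses in the ambiguous cases; the amounts and the even split are assumptions of this sketch.

    def invert_compound_rule(op, clause_ids, confirmed, reliability, amount=0.1):
        """Apportion praise/blame to the clauses of an AND or OR premise."""
        share = amount / len(clause_ids)
        for cid in clause_ids:
            if op == "AND" and confirmed:
                # All conjuncts must be true (valid, as above, only if no
                # other rule could have generated the code).
                reliability[cid] += amount
            elif op == "AND" and not confirmed:
                reliability[cid] -= share    # blame split: A, B, or both are false
            elif op == "OR" and confirmed:
                reliability[cid] += share    # praise split: A, B, or both are true
            else:
                reliability[cid] -= amount   # OR disconfirmed: all disjuncts false

    scores = {"A": 0.5, "B": 0.5}
    invert_compound_rule("AND", ["A", "B"], confirmed=False, reliability=scores)
    print(scores)   # each of A and B absorbs half of the blame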
The particular inversion logic described above is merely illustrative and does not constitute a limitation of the present invention. Those having ordinary skill in the art will appreciate that other inversion logic will be applicable to logic having forms other than those specifically listed above.
The feedback provided by the reviewer 406 may include, in addition to or instead of an indication of whether the reviewed billing code is accurate, a revision to the reviewed billing code. For example, the reviewer 406 may indicate, via the feedback 408, a replacement billing code. In response to receiving such a replacement billing code, the billing code feedback module 410 may replace the reviewed billing code with the replacement billing code. The reviewer 406 may specify the replacement billing code, such as by typing the text of such a code, selecting the code from a list, or using any user interface to select a description of the replacement billing code, in response to which the billing code feedback module 410 may select the replacement billing code and use it to replace the reviewed billing code in the data source.
For example, referring again to Table 1, assume that the forward reasoning module 130 had used Rule #2 to generate billing code 142b representing “<UNCONTROLLED_DIABETES>,” and that the reviewer 406 has provided feedback 408 indicating that “<UNCONTROLLED_DIABETES>” should be replaced with “<DIABETES_NOT_FURTHER_SPECIFIED>.” In response, the billing code feedback module 410 may replace the code “<UNCONTROLLED_DIABETES>” with the code “<DIABETES_NOT_FURTHER_SPECIFIED>” in the draft transcript 106.
More generally, the billing code feedback module 410 may treat the receipt of such a replacement billing code as: (1) disconfirmation by the reviewer 406 of the reviewed billing code (i.e., the billing code replaced by the reviewer 406, which in this example is “<UNCONTROLLED_DIABETES>”); and (2) confirmation by the reviewer 406 of the replacement billing code (which in this example is “<DIABETES_NOT_FURTHER_SPECIFIED>”). In other words, a single feedback input provided by the reviewer 406 may be treated by the billing code feedback module 410 as a disconfirmation of one billing code and a confirmation of another billing code. In response, the feedback module 410 may: (1) take any of the steps described above in response to a disconfirmation of a billing code in connection with the reviewed billing code that has effectively been disconfirmed by the reviewer 406; and (2) take any of the steps described above in response to a confirmation of a billing code in connection with the reviewed billing code that has effectively been confirmed by the reviewer 406.
As described above, reviewer feedback 408 may cause the feedback module 410 to associate truth values with particular forward logic components (e.g., rules). The feedback module 410 may use such truth values to automatically confirm or disconfirm individual forward logic components and/or sub-components thereof. In general, the feedback module 410 may follow any available chains of logic represented by the forward logic components 132a-c and their associated truth values at any given time, and draw any conclusions justified by such chains of logic.
As a result, the feedback module 410 may confirm or disconfirm the accuracy of a component of the system 100a, even if such a component was not directly confirmed or disconfirmed by the reviewer's feedback 408. For example, the reviewer 406 may provide feedback 408 on a billing code that disconfirms a first component (e.g., forward logic component) of the system 100a. Such disconfirmation may cause the feedback module to confirm or disconfirm a second component (e.g., forward logic component) of the system 100a, even if the second component was not responsible for generating the billing code on which feedback 408 was provided by the reviewer 406. Automatic confirmation/disconfirmation of a system component by the feedback module 410 may include taking any of the actions disclosed herein in connection with manual confirmation/disconfirmation of a system component. The feedback module 410 may follow chains of logic through any number of components of the system 100a in this way.
As described above, the term “component” as used herein includes one or more sub-components of a component. Therefore, for example, if the reviewer's feedback 408 disconfirms the reviewed billing code, this may cause the feedback module 410 to disconfirm a first sub-component (e.g., condition) of a first one of the forward logic components 132a-c, which may in turn cause the feedback module 410 to confirm a sub-component (e.g., condition) of a second one of the forward logic components 132a-c, which may in turn cause the feedback module 410 to disconfirm (and thereby to assign blame to) a second sub-component of the first one of the forward logic components 132a-c.
As a particular example, consider again the case in which the reviewer's feedback 408 replaces the billing code "<UNCONTROLLED_DIABETES>" generated by Rule #2 of Table 1 with the billing code "<DIABETES_NOT_FURTHER_SPECIFIED>". In response, the feedback module 410 may assign a truth value of "false" to (i.e., disconfirm) Rule #2, but not yet determine which sub-component (e.g., the clause "patient_has_problem<DIABETES>" or the clause "p.getStatus( )==<UNCONTROLLED>") is to blame for the disconfirmation of the rule as a whole.
Since the user has now also confirmed the billing code "<DIABETES_NOT_FURTHER_SPECIFIED>," the feedback module 410 may use the inverse reasoning component 630 to automatically confirm Rule #1 of Table 1, i.e., to assign a truth value of "true" to Rule #1. Now that Rule #1 has been confirmed, it is known that the clause "patient_has_problem<DIABETES>" is true (confirmed). It is also known, as described above, that the truth value of Rule #2 is false, and therefore that the conjunction of Rule #2's clauses is false. Therefore, the feedback module 410 may apply the logic "If (NOT (A AND B)) AND A, Then (NOT B)" to Rule #2 to conclude that "p.getStatus( )==<UNCONTROLLED>" is false (where A is "patient_has_problem<DIABETES>" and B is "p.getStatus( )==<UNCONTROLLED>"). The feedback module 410 may, in response to drawing this conclusion, associate blame with the component(s) responsible for generating the code "<UNCONTROLLED>."
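A minimal Python sketch of this inverse-reasoning step follows (illustrative only; the function name and the rule representation are assumptions, not the disclosed implementation):

    # Hypothetical sketch of "If (NOT (A AND B)) AND A, Then (NOT B)":
    # given that a rule whose condition is (A AND B) has been disconfirmed,
    # and one clause is known to be true, infer that the other clause is false.
    def infer_clause_blame(rule_truth, clause_a, clause_b):
        if rule_truth is False:       # the rule as a whole was disconfirmed
            if clause_a is True:      # A confirmed elsewhere (e.g., via Rule #1)
                clause_b = False      # NOT (A AND B) together with A implies NOT B
            elif clause_b is True:
                clause_a = False      # the symmetric case
        return clause_a, clause_b

    # Worked example from the text: A = patient_has_problem<DIABETES> (true),
    # Rule #2 disconfirmed, B = p.getStatus()==<UNCONTROLLED> (unknown).
    a, b = infer_clause_blame(rule_truth=False, clause_a=True, clause_b=None)
    assert b is False  # blame attaches to the component that generated <UNCONTROLLED>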
Assigning blame and praise to the components responsible for generating codes enables the system 400 to independently track the accuracy of constituent components (e.g., clauses) in the forward reasoning module 130 (e.g., rule set), and thereby to identify components of the system 100a that are not reliable at generating concept codes and/or billing codes. The feedback module 410 may take any of a variety of actions in response to determining that a particular component is unreliable. More generally, the feedback module 410 may take any of a variety of actions based on the reliability of a component, as may be represented by the reliability score of the component.
The feedback module 410 may consider a particular component to be "unreliable" if, for example, the component has a reliability score falling below (or above) some predetermined threshold. For example, a component may be considered "unreliable" if the component has generated concept codes that have been disconfirmed more than a predetermined number of times. For purposes of determining whether a component is unreliable, the feedback module 410 may take into account only manual disconfirmations by human reviewers, or both manual disconfirmations and automatic disconfirmations resulting from the application of chains of logic by the feedback module 410.
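One way to implement such bookkeeping is sketched below in Python (hypothetical; the class name ReliabilityTracker, the threshold values, and the scoring formula are assumptions rather than the disclosed design):

    # Hypothetical sketch: per-component praise/blame counts, a derived
    # reliability score, and a configurable unreliability test.
    class ReliabilityTracker:
        def __init__(self, max_disconfirmations=5, min_score=0.8,
                     count_automatic=True):
            self.praise = {}   # component id -> confirmation count
            self.blame = {}    # component id -> disconfirmation count
            self.max_disconfirmations = max_disconfirmations
            self.min_score = min_score
            self.count_automatic = count_automatic  # include chain-of-logic blame?

        def record(self, component, confirmed, automatic=False):
            if automatic and not self.count_automatic:
                return  # optionally count only manual feedback from human reviewers
            bucket = self.praise if confirmed else self.blame
            bucket[component] = bucket.get(component, 0) + 1

        def score(self, component):
            p = self.praise.get(component, 0)
            b = self.blame.get(component, 0)
            return p / (p + b) if (p + b) else 1.0  # no feedback yet: assume reliable

        def is_unreliable(self, component):
            # Unreliable if disconfirmed more than a predetermined number of
            # times, or if the reliability score falls below a threshold.
            return (self.blame.get(component, 0) > self.max_disconfirmations
                    or self.score(component) < self.min_score)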
The system 400 may take any of a variety of actions in response to concluding that a component, such as one of the concept extraction components 120a-c, is unreliable. For example, the system 100a may subsequently and automatically require the human operator 406 to review and approve of any concept codes (subsequently and/or previously) generated by the unreliable concept extraction component, while allowing codes (subsequently and/or previously) generated by other concept extraction components to be used without requiring human review. For example, if a particular concept extraction component is deemed by the feedback module 410 to be unreliable, then when the particular concept extraction component next generates a concept code, the system 100a may require the human reviewer to review and provide input indicating whether the reviewer approves of the generated concept code. The system 100a may insert the generated concept code into the draft transcript 106 in response to input indicating that the reviewer 406 approves of the generated concept code, and may decline to insert the generated concept code into the draft transcript 106 in response to input indicating that the reviewer 406 does not approve of the generated concept code.
Additionally or alternatively, the system 100a may subsequently and automatically require the human operator 406 to review and approve of any billing codes (subsequently and/or previously) generated based on concept codes generated by the unreliable concept extraction component, while allowing billing codes (subsequently and/or previously) generated without reliance on the unreliable concept extraction component to be used without requiring human review. For example, if a particular concept extraction component is deemed by the feedback module 410 to be unreliable, then when any of the forward logic components 132a-c next generates a billing code based on logic that references a concept code generated by the unreliable concept extraction component (e.g., a condition which requires the data source to contain such a concept code), the system 100a may require the human reviewer to review and provide input indicating whether the reviewer approves of the generated billing code and/or concept code. The system 100a may insert the generated billing code into the draft transcript 106 in response to input indicating that the reviewer 406 approves of the generated billing code and/or concept code, and may decline to insert the generated billing code into the draft transcript 106 in response to input indicating that the reviewer 406 does not approve of the generated billing code and/or concept code.
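The gating behavior described in the two preceding paragraphs might be sketched as follows (hypothetical Python, reusing the ReliabilityTracker sketch above; ask_reviewer stands in for whatever user-interface mechanism solicits the reviewer's approval):

    # Hypothetical sketch: codes produced by (or derived from) an unreliable
    # component are inserted into the draft transcript only upon approval.
    def maybe_insert_code(code, producing_component, tracker,
                          draft_transcript, ask_reviewer):
        if tracker.is_unreliable(producing_component):
            if ask_reviewer(code):            # reviewer approves the generated code
                draft_transcript.append(code)
            # otherwise the code is withheld from the draft transcript
        else:
            draft_transcript.append(code)     # reliable components need no review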
As another example, in response to concluding that a particular concept extraction component is unreliable, the system 400 may notify the human reviewer 406 of such insufficient reliability, in response to which the human reviewer 406 or other person may modify (e.g., by reprogramming) the identified concept extraction component in an attempt to improve its reliability.
Although certain examples described above refer to applying reinforcement (i.e., assigning praise and/or blame) to components of systems 100a-b, embodiments of the present invention may also be used to apply reinforcement to one or more human reviewers 406 who provide feedback on the billing codes 140. For example, the system 400 may associate a reliability score with the human reviewer 406, and associate distinct reliability scores with each of one or more additional human reviewers (not shown) who provide feedback to the system 400 in the same manner as that described above in connection with the reviewer 406.
As many reviewers provide feedback on a plurality of billing codes, the system 400 may refine, over time, the reliability scores associated with the concept extraction components 120a-c. The billing code feedback module 410 may use such a refined reliability score as the reference reliability score for a billing code in the process described below. The billing code feedback module 410 may, for example, first wait until the billing code's reliability score achieves some predetermined degree of confirmation, such as by waiting until some minimum predetermined amount of feedback has been provided on the billing code, or until some minimum predetermined number of reviewers have provided feedback on the billing code.
As reviewers (such as reviewer 406 and other reviewers) continue to provide feedback to the billing code feedback module 410 in connection with the billing code, the billing code feedback module 410 may determine whether the feedback provided by the human reviewers, individually or in aggregate, diverges sufficiently (e.g., by more than some predetermined degree) from the reference reliability score (e.g., the sufficiently-confirmed reliability score). If the determination indicates that the reviewers' feedback does sufficiently diverge from the reference reliability score, then the billing code feedback module 410 may take any of a variety of actions, such as one or more of the following: (1) assigning blame to one or more of the human reviewers who provided the diverging feedback; and (2) preventing any blame resulting from the diverging feedback from propagating backwards through the systems 100a-b to the corresponding components (e.g., concept extraction components 120a-c and/or forward logic components 132a-c). Performing both (1) and (2) is an example in which the system 400 assigns blame to one component of the system (the human reviewer 406) but does not propagate such blame backwards to any other system components.
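A minimal Python sketch of this divergence check follows (hypothetical; the 0.5 threshold, the score representation, and the per-reviewer penalty are assumptions chosen only for illustration):

    # Hypothetical sketch: compare a reviewer's feedback against a
    # sufficiently-confirmed reference reliability score; if it diverges,
    # blame the reviewer and suppress backward propagation of blame.
    def handle_reviewer_feedback(reviewer_id, confirmed, reference_score,
                                 reviewer_scores, divergence_threshold=0.5):
        """Return True if blame may propagate backwards to system components,
        or False if it should instead attach to the diverging reviewer."""
        observed = 1.0 if confirmed else 0.0
        if abs(observed - reference_score) > divergence_threshold:
            # (1) assign blame to the reviewer who provided diverging feedback
            reviewer_scores[reviewer_id] = reviewer_scores.get(reviewer_id, 1.0) - 0.1
            # (2) prevent the diverging feedback from propagating backwards
            return False
        return True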
The billing code feedback module 410 may apply the same techniques to any number of human reviewers to modify the distinct reliability scores associated with such reviewers over time based on the feedback they provide. Such a method in effect treats the human reviewer 406 as the first component in the chain of inverse logic implemented by the inverse reasoning component 630.
It is to be understood that although the invention has been described above in terms of particular embodiments, the foregoing embodiments are provided as illustrative only, and do not limit or define the scope of the invention. Various other embodiments, including but not limited to the following, are also within the scope of the claims. For example, elements and components described herein may be further divided into additional components or joined together to form fewer components for performing the same functions.
Any of the functions disclosed herein may be implemented using means for performing those functions. Such means include, but are not limited to, any of the components disclosed herein, such as the computer-related components described below.
Although certain examples herein involve “billing codes,” such examples are not limitations of the present invention. More generally, embodiments of the present invention may be applied in connection with codes other than billing codes, and in connection with data structures other than codes, such as data stored in databases and in forms other than structured documents.
The techniques described above may be implemented, for example, in hardware, one or more computer programs tangibly stored on one or more computer-readable media, firmware, or any combination thereof. The techniques described above may be implemented in one or more computer programs executing on (or executable by) a programmable computer including any combination of any number of the following: a processor, a storage medium readable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), an input device, and an output device. Program code may be applied to input entered using the input device to perform the functions described and to generate output using the output device.
Each computer program within the scope of the claims below may be implemented in any programming language, such as assembly language, machine language, a high-level procedural programming language, or an object-oriented programming language. The programming language may, for example, be a compiled or interpreted programming language.
Each such computer program may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor. Method steps of the invention may be performed by a computer processor executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, the processor receives instructions and data from a read-only memory and/or a random access memory. Storage devices suitable for tangibly embodying computer program instructions include, for example, all forms of non-volatile memory, such as semiconductor memory devices, including EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROMs. Any of the foregoing may be supplemented by, or incorporated in, specially-designed ASICs (application-specific integrated circuits) or FPGAs (Field-Programmable Gate Arrays). A computer can generally also receive programs and data from a storage medium such as an internal disk (not shown) or a removable disk. These elements will also be found in a conventional desktop or workstation computer as well as other computers suitable for executing computer programs implementing the methods described herein, which may be used in conjunction with any digital print engine or marking engine, display monitor, or other raster output device capable of producing color or gray scale pixels on paper, film, display screen, or other output medium.
This application is a continuation of U.S. patent application Ser. No. 13/242,532, filed on Sep. 23, 2011, entitled "User Feedback in Semi-Automatic Question Answering Systems", which claims priority from commonly-owned U.S. Provisional Patent Application No. 61/385,838, filed on Sep. 23, 2010, entitled "User Feedback in Semi-Automatic Question Answering Systems", which is hereby incorporated by reference herein. This application is related to commonly-owned U.S. patent application Ser. No. 13/025,051, filed on Feb. 10, 2011, entitled "Providing Computable Guidance to Relevant Evidence in Question-Answering Systems", which is hereby incorporated by reference herein.
Prior Publication Data

Number | Date | Country
---|---|---
20180101879 A1 | Apr 2018 | US

Provisional Application

Number | Date | Country
---|---|---
61385838 | Sep 2010 | US

Related U.S. Application Data

Relation | Number | Date | Country
---|---|---|---
Parent | 13896684 | May 2013 | US
Child | 15839037 | | US
Parent | 13242532 | Sep 2011 | US
Child | 13896684 | | US