The following relates generally to the medical imaging arts, medical image viewer and display arts, and related arts.
In modern medical practice, the patient is expected to be an active participant in his or her medical care. For example, the patient must provide informed consent to various medical procedures, if physically and mentally competent to do so. To this end, it is important that the patient understand the findings of medical examinations such as radiology examinations.
However, most lay patients (that is, patients without medical training) are unfamiliar with detailed anatomy, much less the visualization of such anatomy as presented in medical images. In a typical radiology workflow, the images of a radiology examination are interpreted by a skilled radiologist who prepares a radiology report summarizing the radiologist's clinical findings. However, the radiology report uses advanced clinical language and anatomical and clinical terminology that is generally unfamiliar to the lay patient. The usual approach for conveying the substance of the radiology examination results to the patient is by way of the patient's physician or a medical specialist explaining these results to the patient in a “one-on-one” consultation. However, this is time consuming for the medical professional, and moreover not all medical professionals are proficient at explaining complex medical findings in a way that is readily understood by the lay patient.
The following discloses certain improvements.
In one disclosed aspect, a radiology viewer comprises an electronic processor, at least one display, at least one user input device, and a non-transitory storage medium storing: instructions readable and executable by the electronic processor to retrieve a radiology examination including at least one radiology image and a radiology report from a radiology examinations data storage; instructions readable and executable by the electronic processor to retrieve or generate a set of image tags identifying anatomical features in the at least one radiology image and a set of report tags identifying clinical concepts in passages of the radiology report; instructions readable and executable by the electronic processor to display at least a portion of the at least one radiology image in an image window shown on the at least one display and to display at least a portion of the radiology report in a report window shown on the at least one display; instructions readable and executable by the electronic processor to receive via the at least one user input device a selection of an anatomical feature shown in the image window and to identify at least one related passage of the radiology report using the set of image tags, the set of report tags, and an electronic medical ontology and to highlight the at least one related passage of the radiology report in the report window; and instructions readable and executable by the electronic processor to receive via the at least one user input device a selection of a passage of the radiology report shown in the report window and to identify at least one related anatomical feature of the at least one radiology image using the set of image tags, the set of report tags, and the electronic medical ontology and to highlight the at least one related anatomical feature of the at least one radiology image in the image window.
In another disclosed aspect, a non-transitory storage medium stores instructions readable and executable by an electronic processor operatively connected with at least one display, at least one user input device, and a radiology examinations data storage to perform a radiology viewing method operating on at least one radiology image and a radiology report. In the radiology viewing method, at least a portion of the at least one radiology image is displayed in an image window shown on the at least one display. At least a portion of the radiology report is displayed in a report window shown on the at least one display. Using a set of image tags identifying anatomical features in the at least one radiology image, a set of report tags identifying clinical concepts in passages of the radiology report, and an electronic medical ontology, at least one of the following is performed: (1) receiving via the at least one user input device a selection of an anatomical feature shown in the image window, identifying at least one related passage of the radiology report, and highlighting the at least one related passage of the radiology report in the report window; and (2) receiving via the at least one user input device a selection of a passage of the radiology report shown in the report window, identifying at least one related anatomical feature of the at least one radiology image, and highlighting the at least one related anatomical feature of the at least one radiology image in the image window.
In another disclosed aspect, a radiology viewer includes at least one electronic processor, at least one display, and at least one user input device. The display shows at least a portion of a radiology image in an image window, and at least a portion of a radiology report in a report window. A selection is received of an anatomical feature shown in the image window, and a corresponding passage of the radiology report is identified and highlighted in the report window. A selection is received of a passage of the radiology report shown in the report window, and a corresponding anatomical feature of the at least one radiology image is identified and highlighted in the image window. The highlighting operations use image anatomical feature tags and report clinical concept tags generated using a medical ontology and an anatomical atlas.
One advantage resides in providing a radiology viewer that provides intuitive visual linkage between radiology report contents and related features of the radiology images which are the subject of the radiology report.
Another advantage resides in providing a radiology viewer that facilitates understanding of a radiology examination by a lay patient.
Another advantage resides in providing a radiology viewer that presents radiology findings with visual representation of the anatomical context.
Another advantage resides in providing a radiology viewer that graphically links clinical concepts presented in the radiology report with anatomical features represented in the underlying medical images.
A given embodiment may provide none, one, two, more, or all of the foregoing advantages, and/or may provide other advantages as will become apparent to one of ordinary skill in the art upon reading and understanding the present disclosure.
The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
Disclosed herein are radiology viewers that determine linkages between clinical concepts presented in the radiology report of a radiology examination and related anatomical features in the underlying medical images, and that graphically present these linkages to the patient or other user in an intuitive fashion. The disclosed improvements are premised in part on the recognition that understanding of the results of a radiology examination generally requires synthesis of contents of the radiology report with features shown in the underlying medical images.
In the case of the user being a lay patient or other layperson, it is further recognized that the user may in general be unfamiliar with the anatomical context of clinical findings of the radiology report—accordingly, the disclosed radiology viewer provides for the user to identify features in the images by selecting the feature, at which point a contextual explanation of the selected feature is presented and any associated content of the radiology report is highlighted. Conversely, by selecting a passage of the radiology report the related feature(s) of the underlying medical image(s) are highlighted and identified by their anatomical terms (e.g. “kidney”, “lymph node”, “left thalamus”, et cetera).
With reference to
The radiology viewer workstation 10 retrieves a radiology examination 22 from a radiology examinations data storage, such as an illustrative Picture Archiving and Communication System (PACS) 24. Diagrammatic
The radiology report 32 is a report prepared by a radiologist or other medical professional which presents a summary of observations on the images 30 and clinical findings determined by the radiologist via review of the radiology images 30. The radiology report 32 may also be prepared based on other information available to the radiologist, such as the patient's medical history, and/or comparison of the radiology images 30 of the current radiology examination 22 with past radiology examinations of the patient (not shown in
The radiology viewer workstation 10 retrieves the radiology examination 22 from the PACS 24, including the radiology image(s) 30 and the radiology report 32. The viewer workstation 10 presents these data in two windows: an image window 40 in which at least a portion of the at least one radiology image 30 is displayed; and a report window 42 in which at least a portion of the radiology report 32 is displayed. In some embodiments, the image window 40 provides various image manipulation functions operable by the user via the at least one user input device 14, 16, 18—for example, the image manipulations may include zoom-in and zoom-out operations, a pan operation, and so forth. Depending upon the zoom magnitude, only a portion of an image may be seen in the image window 40. Likewise, depending upon the length of the radiology report 32, only a portion of that report 32 may be shown at any given time in the report window 42, and the user is provided with various control functions such as scroll operations, text font size adjustments, and/or so forth operable by the user via the at least one user input device 14, 16, 18. As shown in
In improved radiology viewer embodiments disclosed herein, linkages are determined between clinical concepts presented in the radiology report 32 of the radiology examination 22 and related anatomical features in the underlying medical images 30 of the radiology examination 22, and the radiology viewer graphically presents these linkages to the patient or other user in an intuitive fashion. This promotes synthesis of contents of the radiology report with features shown in the underlying medical images. While such assistance may be of value to a radiologist, this assistance is of particular value for lay patient consumption of the radiology examination 22, as the lay patient is generally unfamiliar with clinical terminology, anatomical terminology, and the ways in which various imaging modalities capture anatomical features.
To provide these features, a report-images linkage component 50 is provided. The illustrative linkage component 50 is implemented on the server computer 20, which may be the same server computer 20 that implements the PACS 24 (as shown) or may be a different computer server in communication with the PACS. The linkage component includes an anatomical features tagger 52 for generating a set of image tags identifying anatomical features in the at least one radiology image 30, a clinical concepts tagger 54 for generating a set of report tags identifying clinical concepts in passages of the radiology report 32, and a medical ontology 56 for linking the clinical concepts and the anatomical features.
The illustrative anatomical features tagger 52 includes a spatial registration component 60 which spatially aligns (i.e. registers) the image(s) 30 with an anatomical atlas 62, and generates the set of image tags by associating image features of the anatomical atlas 62 with corresponding spatial regions of the spatially registered at least one radiology image. It is to be understood that the anatomical atlas 62 is typically not a single representation of a human, but rather is a three-dimensional reference space with multi-dimensional annotations of positions and properties of multiple objects, which may be overlapping and/or mutually exclusive. For example, the anatomical atlas 62 may represent both male and female organs simultaneously (with only one gender typically matching with a given image 30). Besides organs, the anatomical atlas 62 may optionally also identify reference points (e.g. the top of the lungs) or regions (e.g. abdominal region) or any other anatomical objects which can be spatially specified. The anatomical atlas may also encode non-spatial characteristics of an anatomical object, e.g. typical CT-level-window-settings for that object or typical appearances in standard MR sequences or any other type of characteristics relevant for identifying or evaluating this object in a radiologic image. Thus, the term “anatomical atlas” here means a reference space encoding multiple types of information on the human body. The resulting tags may be stored in a suitable storage space—in the illustrative example, the image tags are stored as metadata with the image(s) 30 as DICOM tags, which conveniently leverages the existing DICOM tagging framework; however, other tag storage formalisms are contemplated. It is also contemplated to employ manual tagging, e.g. to identify patient-specific anatomical features that may not be included in the atlas 62, such as tumors.
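The atlas-based tagging performed by the anatomical features tagger 52 can be sketched as follows. This is a minimal illustrative sketch, not the actual implementation: the toy per-axis scaling stands in for a real (typically deformable) spatial registration, and the atlas is reduced to labeled voxel sets in a reference space.

```python
# Hypothetical sketch of atlas-based image tagging (anatomical features
# tagger 52 with spatial registration component 60). The toy "registration"
# and data structures are illustrative assumptions.

def register_to_atlas(image_shape, atlas_shape):
    """Toy spatial registration: a per-axis scale factor mapping image
    voxel coordinates into the atlas reference space. A real system would
    compute a deformable transform."""
    return tuple(a / i for a, i in zip(atlas_shape, image_shape))

def tag_image(image_shape, atlas_labels, atlas_shape):
    """Generate image tags: map each labeled atlas region (given here as a
    set of atlas-space voxels) back onto image-space voxel regions."""
    scale = register_to_atlas(image_shape, atlas_shape)
    tags = {}
    for label, atlas_voxels in atlas_labels.items():
        image_voxels = {
            tuple(int(c / s) for c, s in zip(v, scale)) for v in atlas_voxels
        }
        tags[label] = image_voxels  # stored e.g. as DICOM metadata in practice
    return tags

# Minimal usage: a 4x4 image registered to an 8x8 atlas with a "liver" region.
atlas = {"liver": {(0, 0), (0, 1), (1, 0)}}
tags = tag_image((4, 4), atlas, (8, 8))
print(tags["liver"])  # image-space voxels covering the liver
```

In practice the resulting per-feature regions would be serialized with the image(s) 30, e.g. as DICOM tags as described above.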
The illustrative clinical concepts tagger 54 employs a keywords detector 64 to identify keywords in the radiology report 32 corresponding with entries of the medical ontology 56, and the set of report tags is generated by associating passages of the radiology report 32 containing the identified keywords with clinical concepts described in the corresponding entries of the medical ontology 56. In another approach, a natural language processing (NLP) component 66 performs natural language processing on the radiology report 32 to identify passages of the radiology report corresponding with entries of the medical ontology, and the set of report tags is generated by associating the identified passages of the radiology report with clinical concepts described in the corresponding entries of the medical ontology. However generated, the resulting report tags are stored in a suitable storage space—in the illustrative example, the report tags are stored as metadata associated with the radiology report 32; however, other tag storage formalisms are contemplated.
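The keyword-based path through the clinical concepts tagger 54 can be sketched as below. The mini-ontology and report text are invented examples; a real system would match against entries of a standard medical ontology, and the NLP-based path would add sentence and section analysis on top of this.

```python
# Illustrative sketch of the keyword-based report tagger (keywords detector
# 64). The mini-ontology and report text are invented examples.

ONTOLOGY = {
    "cirrhosis": {"concept": "cirrhosis of the liver", "anatomy": "liver"},
    "nephrolithiasis": {"concept": "kidney stones", "anatomy": "kidney"},
}

def tag_report(report_text):
    """Split the report into sentence-level passages and tag each passage
    with the ontology entries whose keywords it contains."""
    passages = [p.strip() for p in report_text.split(".") if p.strip()]
    report_tags = []
    for i, passage in enumerate(passages):
        lowered = passage.lower()
        for keyword, entry in ONTOLOGY.items():
            if keyword in lowered:
                report_tags.append({"passage": i, "keyword": keyword, **entry})
    return passages, report_tags

passages, report_tags = tag_report(
    "The liver margin is nodular, consistent with cirrhosis. No nephrolithiasis."
)
print(report_tags)
```

Each report tag associates a passage index with a clinical concept and its ontology-linked anatomy, which is what the linkage step consumes.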
The radiology viewer leverages the thus-generated image tags and report tags to enable the display of automated linkages 70 between user-selected anatomical features of the images 30 and corresponding passages of the radiology report 32; or, conversely, enables automated display of linkages 70 between user-selected passages of the radiology report 32 and corresponding anatomical features of the images 30. For example, if the user selects the liver in a radiology image, then the image tags are consulted to determine that the point selected in the image is the liver, then the report tags are searched to identify clinical concepts (if any) relating to the liver by looking up those clinical concepts in the ontology 56 to detect references from the concepts to the liver, and finally the corresponding passages of the radiology report 32 are highlighted in the report window 42. Conversely, if the user selects a passage containing the keyword “cirrhosis” in the radiology report 32, then the report tags are consulted to determine that the selected passage pertains to the clinical concept of cirrhosis of the liver, then the image tags are searched to identify the liver in the radiology image(s) 30, and finally the identified liver anatomical feature is highlighted in the image window 40.
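The bidirectional resolution just described can be sketched as two lookups over the shared tag structures. This is a hedged sketch with simplified, invented tag formats; the actual system resolves the anatomy–concept relation through the ontology 56 rather than a precomputed "anatomy" field.

```python
# Hedged sketch of the bidirectional linkage 70: resolving a click in the
# image window to report passages, and a selected passage to image features.
# The tag structures are simplified assumptions.

IMAGE_TAGS = {"liver": {(0, 0), (0, 1)}, "kidney": {(3, 3)}}  # feature -> voxels
REPORT_TAGS = [  # passage index -> clinical concept and its ontology anatomy
    {"passage": 0, "concept": "cirrhosis of the liver", "anatomy": "liver"},
]

def passages_for_click(voxel):
    """Image -> report: find the anatomy under the clicked voxel via the
    image tags, then the report passages tied to that anatomy."""
    anatomy = next((a for a, vox in IMAGE_TAGS.items() if voxel in vox), None)
    return [t["passage"] for t in REPORT_TAGS if t["anatomy"] == anatomy]

def features_for_passage(passage_index):
    """Report -> image: find the anatomy of the selected passage via the
    report tags, then the tagged image region for that anatomy."""
    anatomies = {t["anatomy"] for t in REPORT_TAGS if t["passage"] == passage_index}
    return {a: IMAGE_TAGS[a] for a in anatomies if a in IMAGE_TAGS}

print(passages_for_click((0, 1)))   # passages about the liver
print(features_for_passage(0))      # liver voxels to highlight
```

Note that a click on a tagged feature with no associated report passage (here, the kidney) simply resolves to an empty list, in which case the viewer may fall back to a purely anatomical explanation of the selected feature.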
In displaying the linkages 70, a selected anatomical feature and the corresponding report passage(s), or conversely a selected report passage and the corresponding anatomical feature(s), may be emphasized by highlighting. The term “highlight” as used herein is intended to denote any display feature used to emphasize the highlighted image feature in a (portion of) a radiology image displayed in the image window 40, or to denote any display feature used to emphasize the highlighted passage of a (portion of) a radiology report displayed in the report window 42. The highlighting of an image feature may comprise, for example, coloring the image feature with a designated color, superimposing a boundary contour (optionally having a distinctive color) delineating the boundary of the image feature, or so forth. The highlighting of a report passage may comprise, for example, employing a highlighting text background color, a highlighting text color, a text feature such as underscore, flashing text, or the like, or so forth. In some embodiments, both the user-selected image feature or report passage and the identified related passage or image feature are highlighted using the same highlighting, such as employing the same color or pattern for highlighting both the image feature and the report passage. Where the image window 40 and the report window 42 are shown simultaneously, e.g. side-by-side as in illustrative
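The same-highlighting embodiment can be sketched as a color assignment that ties each linked feature/passage pair to one color. The palette and linkage structure here are assumptions for illustration only.

```python
# Illustrative sketch of coordinated highlighting: each resolved linkage is
# assigned one color used both for the image-feature overlay and the report
# passage background, so the pairing is visually unambiguous.

PALETTE = ["#ffd54f", "#4fc3f7", "#aed581"]  # distinct highlight colors (assumed)

def assign_highlights(linkages):
    """linkages: list of (anatomical_feature, passage_index) pairs.
    Returns per-feature and per-passage color maps sharing one color per pair."""
    feature_colors, passage_colors = {}, {}
    for i, (feature, passage) in enumerate(linkages):
        color = PALETTE[i % len(PALETTE)]
        feature_colors[feature] = color
        passage_colors[passage] = color
    return feature_colors, passage_colors

fc, pc = assign_highlights([("liver", 0), ("kidney", 2)])
print(fc["liver"], pc[0])  # same color for the linked pair
```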
As previously mentioned, while the illustrative embodiment implements the report-images linkage component 50 on the (same) server computer 20 that implements the PACS 24, other configurations are contemplated. For example, the linkage component 50 and PACS 24 may be implemented on different server computers, or in another embodiment the linkage component 50 may be implemented on the viewer workstation 10. In the illustrative embodiment, viewer functions such as constructing and displaying the windows 40, 42 and receiving user inputs via the user input device(s) 14, 16, 18 are implemented on the electronic processor of the viewer workstation 10, while the more computationally complex linkage creation performed by the linkage component 50 is performed on the server computer 20, which generally has greater computing power. In the illustrative example of
With reference to
In parallel, the radiology report 32 is processed in a step S2 by the clinical concepts tagger 54, with reference to the medical ontology 56, to generate the clinical concepts tags labeling passages of the radiology report 32 as to the contained clinical concepts. This may entail keyword detection using the keywords detector 64, and/or more sophisticated processing performed by the natural language processing (NLP)-based engine or component 66, to extract findings or other clinical concepts in the radiology report 32. Keywords in the radiology report 32 are identified with entries of the medical ontology 56, and the set of report tags is generated by associating passages of the radiology report 32 containing the identified keywords with clinical concepts described in the corresponding entries of the medical ontology 56. Additionally or alternatively, natural language processing is performed on the radiology report 32 to identify passages of the radiology report 32 corresponding with entries of the medical ontology 56, and the set of report tags is generated by associating the identified passages of the radiology report 32 with clinical concepts described in the corresponding entries of the medical ontology 56. Both approaches can be combined. In one non-limiting approach, the radiology report 32 is first analyzed by the NLP engine 66 to determine sections, paragraphs, and sentences, and to determine and extract the specific body part and/or organ references from the delineated sentences. The referenced medical ontology 56 may, for example, be a standard medical ontology such as RadLex or SNOMED CT. The clinical concepts (e.g. findings such as abnormalities, disorders, and/or so forth) are extracted and suitable contextual tags are generated labeling the report passages with the contained clinical concepts.
The steps S1 and S2 may be performed as pre-processing, e.g. at the time the radiology report 32 is filed by the radiologist. Thereafter, the generated anatomical feature tags may be stored as DICOM tags with the images 30, and the generated clinical concept tags are suitably stored with the radiology report 32. Thereafter, when the patient or other user views the radiology examination 22 using the radiology viewer workstation 10, in a step S3 when the user selects an image location or a report passage, the anatomy corresponding to the image location, or the clinical concept contained in the passage, is determined by referencing the image tags or report tags, respectively, and the ontology 56 is referenced to identify the corresponding report passage(s) or image anatomical feature(s). Thus, via the common ontology 56, clinical concepts and anatomical features are linked. In some embodiments, the linkage step S3 is extended over multiple radiology examinations to identify relations of different time-points in the different examinations. In this way, due to the link with the images, a patient can follow the genesis and/or evolution of an anatomical feature over multiple time points represented by different radiology examinations, even if the structure is not remarked upon in one or more of the radiology reports. For example, if a tumor appears in the kidney, the patient may look at the changes in the kidney across successive radiology examinations, without having to know how to find the kidney in the images of each examination, via the anatomical feature tags.
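The multi-examination extension of step S3 can be sketched as follows: because every examination's images carry anatomical feature tags, a feature such as the kidney can be followed over time even when a given report never mentions it. The examination records below are invented examples.

```python
# Sketch of extending linkage step S3 across multiple examinations. The
# per-examination records and region identifiers are illustrative
# assumptions, not an actual data model.

EXAMS = [  # ordered by date; image_tags map feature -> tagged image region id
    {"date": "2016-05-01", "image_tags": {"kidney": "roi-17", "liver": "roi-3"}},
    {"date": "2017-02-12", "image_tags": {"kidney": "roi-21"}},
    {"date": "2018-04-13", "image_tags": {"kidney": "roi-44", "liver": "roi-9"}},
]

def feature_timeline(feature):
    """Collect the tagged region for one anatomical feature at every
    time point at which it was imaged."""
    return [(e["date"], e["image_tags"][feature])
            for e in EXAMS if feature in e["image_tags"]]

print(feature_timeline("kidney"))  # one entry per examination
print(feature_timeline("liver"))   # absent from the 2017 examination
```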
With reference to
It should be noted that in the step S1 of
The radiology report viewer can optionally operate to provide viewing of three-dimensional (3D) imaging datasets. For example, the patient or other user can be offered browsing functionality to “flip through” slices of a 3D image. In this regard, it may also be noted that in some instances the image slice currently shown in the image window 40 when a passage of the report 42 is selected may not show the corresponding image feature (or may not optimally show that feature). In such case, the image window 40 may be updated to present the appropriate image slice, either automatically (in some embodiments) or after querying the user as to whether the user wishes to switch to the optimal image slice for depicting the selected report passage (in other embodiments).
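One simple way to choose the slice to switch to is sketched below. "Best" is assumed here to mean the slice containing the most tagged voxels of the selected feature; this criterion is an illustrative assumption, and other criteria (e.g. the slice the radiologist annotated) could be used instead.

```python
# Sketch of slice selection for 3D datasets: when a selected report passage
# maps to a feature not visible on the current slice, pick the slice that
# best depicts it, here taken to be the largest tagged cross-section.

from collections import Counter

def best_slice_for_feature(feature_voxels):
    """feature_voxels: set of (slice_index, row, col) voxels tagged for the
    feature. Returns the slice index with the most tagged voxels."""
    counts = Counter(z for z, _, _ in feature_voxels)
    return counts.most_common(1)[0][0]

liver_voxels = {(4, 0, 0), (5, 0, 0), (5, 0, 1), (5, 1, 0), (6, 0, 0)}
print(best_slice_for_feature(liver_voxels))  # slice 5 shows the most liver
```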
The medical ontology 56 employed for mining clinical concepts is generally a domain-specific ontology, specifically a highly specialized medical or radiology ontology. However, for assisting the lay patient in understanding his or her radiology examination, it is contemplated to augment the domain-specific ontology content with lay terms that may be more comprehensible to the patient. For example, terms such as “cardiac” may be augmented by “heart”, or so forth.
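The lay-term augmentation can be sketched as adding synonyms to an ontology entry's term list. The synonym table and entry format are small invented examples, not actual ontology content.

```python
# Illustrative sketch of augmenting domain-specific ontology entries with
# lay synonyms, so patient-friendly wording also resolves to the entry.

LAY_SYNONYMS = {"cardiac": "heart", "renal": "kidney", "hepatic": "liver"}

def augment_entry(entry):
    """Add lay-language aliases to one ontology entry's term list."""
    terms = set(entry["terms"])
    for term in entry["terms"]:
        if term in LAY_SYNONYMS:
            terms.add(LAY_SYNONYMS[term])
    return {**entry, "terms": sorted(terms)}

entry = {"concept": "cardiomegaly", "terms": ["cardiac"]}
print(augment_entry(entry)["terms"])  # ['cardiac', 'heart']
```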
In some contemplated embodiments, the steps S1 and S2 of
In the following, some illustrative examples are presented.
With continuing reference to
With reference to
The invention has been described with reference to the preferred embodiments. Modifications and alterations may occur to others upon reading and understanding the preceding detailed description. It is intended that the invention be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2018/059491 | 4/13/2018 | WO | 00
Number | Date | Country
---|---|---
62486480 | Apr 2017 | US