The present invention relates to medical renderings of imaging data.
A portion of the disclosure of this patent document contains material to which a claim of copyright protection is made. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but reserves all other rights whatsoever.
Three-dimensional (3-D) visualization products for medical images can primarily employ a visualization technique known as Direct Volume Rendering (DVR). The data input used to create the image renderings can be a stack of image slices from a desired imaging modality, for example, a Computed Tomography (CT) or Magnetic Resonance (MR) modality. DVR can convert the image data into an image volume to create renderings, such as those shown in the accompanying figures.
Direct Volume Rendering (“DVR”) has been used in medical visualization research for a number of years. DVR can be generally described as rendering visual images directly from volume data without relying on graphic constructs of boundaries and surfaces, thereby providing a fuller visualization of internal structures from 3-D data. DVR holds promise for its diagnostic potential in analyzing medical image volumes. Slice-by-slice viewing of medical data may be increasingly difficult for the large data sets now provided by imaging modalities, raising issues of information and data overload and clinical feasibility with current radiology staffing levels. See, e.g., Andriole et al., Addressing the Coming Radiology Crisis: The Society for Computer Applications in Radiology Transforming the Radiological Interpretation Process (TRIP™) Initiative, at URL scarnet.net/trip/pdf/TRIP_White_Paper.pdf (November 2003). In some modalities, patient data sets can have large volumes, such as greater than 1 gigabyte, and can commonly reach tens or even hundreds of gigabytes.
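For background only, a DVR image is typically formed by approximating the volume rendering integral along each viewing ray using front-to-back compositing of classified samples. In standard notation (which is not specific to the embodiments described herein), with c_i and alpha_i the color and opacity assigned to the i-th sample along a ray:

    \hat{C}_i = \hat{C}_{i-1} + \left(1 - \hat{A}_{i-1}\right)\alpha_i c_i, \qquad
    \hat{A}_i = \hat{A}_{i-1} + \left(1 - \hat{A}_{i-1}\right)\alpha_i, \qquad
    \hat{C}_0 = \hat{A}_0 = 0.

The accumulated color and opacity over the samples of a ray give the pixel value, so altering the classification (for example, the transfer function) or the sampled region of the volume directly changes the rendered view.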
The diagnostic task of a clinician such as a radiologist may include comparisons with similar images. In some situations, a clinician wants to compare an image with previous examinations of the same patient, to determine, for example, whether the findings are a normal variant or signs of a progressing disease. In other situations, a clinician may want to compare images resulting from examinations using different imaging modalities.
Embodiments of the present invention are directed to methods, systems and computer program products that automatically synchronize views in different 3-D medical images. That is, embodiments of the invention can be carried out so that substantially the same visualization and/or rendering operation is automatically electronically applied to two or more views at once.
Methods, systems and computer programs can electronically provide a visual comparison of rendered 3-D medical images. The methods include: (a) providing first and second 3-D medical digital images of a patient on at least one display; (b) electronically altering a visualization of the first 3-D image on the at least one display; and (c) automatically electronically synchronizing visualization of the second 3-D image responsive to the altering of the first 3-D image.
Other embodiments are directed to methods that synchronize diagnostic images for a clinician. The methods include: (a) displaying a first 3-D image of a target region of a patient; (b) displaying a second 3-D image of the same target region of the patient taken at a different time or from a different imaging modality, the second image being obtained from electronic memory, wherein the second image is displayed proximate the first image; (c) electronically manipulating visualization of the first 3-D image; and (d) automatically electronically synchronizing an altered visualization of the second 3-D image to substantially concurrently display with the same visualization as the manipulated visualization of the first 3-D image.
Other embodiments are directed to visualization systems having 3-D medical image synchronization. The systems include: (a) a rendering system configured to generate 3-D medical images from respective digital medical volume data sets of one or more patients; (b) a first display in communication with the rendering system configured to display a first 3-D medical image generated by the rendering system, the first 3-D image associated with a first medical volume data set of a patient; (c) a second display in communication with the rendering system configured to display a second 3-D medical image of the patient, the second 3-D image associated with a second different medical volume data set of the patient; (d) a physician workstation comprising a graphic user interface (GUI) in communication with the first 3-D medical image on the first display to allow a physician to interactively alter the first 3-D image; and (e) a signal processor comprising a 3-D synchronization module in communication with the physician workstation, the 3-D synchronization module configured to synchronize the 3-D image on the second display with that of the 3-D image on the first display based on a physician's interactive input of a desired view of the patient.
In some embodiments, the synchronization module may be configured to programmatically (a) alter a transfer function parameter, (b) segment, and (c) sculpt to alter a view of the first image and substantially concurrently electronically alter a view of the second image in the same manner.
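By way of illustration only, and not limitation, the following simplified C++ sketch shows one possible way such a synchronization module could broadcast a single visualization operation (a transfer function change, a segmentation, or a sculpting operation) to every view in a synch group substantially concurrently; the class and member names (View, SyncModule, opacityScale and so forth) are hypothetical placeholders and do not describe any particular implementation.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical per-image view state; a real renderer holds far richer state.
    struct View {
        std::string name;
        double opacityScale = 1.0;      // example transfer function parameter
        bool vesselsSegmented = false;  // example segmentation state
        bool tableSculptedAway = false; // example sculpting state

        void render() const {
            std::cout << name << ": opacity=" << opacityScale
                      << " segmented=" << vesselsSegmented
                      << " sculpted=" << tableSculptedAway << '\n';
        }
    };

    // Hypothetical synchronization module: an operation applied through it is
    // applied to every registered view in the same manner.
    class SyncModule {
    public:
        void add(View* v) { views_.push_back(v); }

        void apply(const std::function<void(View&)>& op) {
            for (View* v : views_) op(*v); // same operation, every view
            for (View* v : views_) v->render();
        }

    private:
        std::vector<View*> views_;
    };

    int main() {
        View ct;
        ct.name = "CT study";
        View mr;
        mr.name = "MR study";
        SyncModule sync;
        sync.add(&ct);
        sync.add(&mr);

        // (a) alter a transfer function parameter on the first image; the module
        //     applies the identical change to the second image.
        sync.apply([](View& v) { v.opacityScale = 0.5; });
        // (b) segment and (c) sculpt are propagated the same way.
        sync.apply([](View& v) { v.vesselsSegmented = true; });
        sync.apply([](View& v) { v.tableSculptedAway = true; });
        return 0;
    }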
Still other embodiments are directed to computer program products for providing physician interactive access to patient medical volume data for generally concurrently rendering a plurality of related 3-D diagnostic medical images. The computer program product includes a computer readable storage medium having computer readable program code embodied in the medium. The computer readable program code includes: (a) computer readable program code configured to generate first and second 3-D medical digital images of a patient on at least one display; (b) computer readable program code configured to alter a visualization of the first 3-D image on the at least one display; and (c) computer readable program code configured to synchronize visualization of the second 3-D image responsive to the altering of the first 3-D image.
Some embodiments are directed to signal processor circuits that include a 3-D synchronization module in communication with a physician workstation. The 3-D synchronization module is configured to synchronize a 3-D image of a patient on a second display with that of a corresponding 3-D image of the patient on a first display, based on a physician's interactive input of a desired view of the patient using the first display.
Other embodiments are directed to signal processor circuits that include a 3-D synchronization module in communication with a physician workstation. The 3-D synchronization module is configured to synchronize a 3-D image of a patient on a second display with that of a corresponding 3-D image of the patient on a first display, based on a sequence of views defined by a visualization algorithm corresponding to a defined diagnosis or medical condition review protocol.
In some embodiments a combined interactive and rules-based 3-D synchronization module can be provided.
It is noted that any of the features claimed with respect to one type of claim, such as a system, apparatus, or computer program, may be claimed or carried out as any of the other types of claimed operations or features.
Further features, advantages and details of the present invention will be appreciated by those of ordinary skill in the art from a reading of the figures and the detailed description of the preferred embodiments that follow, such description being merely illustrative of the present invention.
The present invention now is described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Like numbers refer to like elements throughout. In the figures, the thickness of certain lines, layers, components, elements or features may be exaggerated for clarity. Broken lines illustrate optional features or operations unless specified otherwise. In the claims, the claimed methods are not limited to the order of any steps recited unless so stated thereat.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. As used herein, phrases such as “between X and Y” and “between about X and Y” should be interpreted to include X and Y. As used herein, phrases such as “between about X and Y” mean “between about X and about Y.” As used herein, phrases such as “from about X to Y” mean “from about X to about Y.”
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the specification and relevant art and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Well-known functions or constructions may not be described in detail for brevity and/or clarity.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the present invention. The sequence of operations (or steps) is not limited to the order presented in the claims or figures unless specifically indicated otherwise.
The term “Direct Volume Rendering” or DVR is well known to those of skill in the art. DVR comprises electronically rendering a medical image directly from volumetric data sets to thereby display visualizations of target regions of the body, which can include color as well as internal structures, using volumetric and/or 3-D data. In contrast to conventional iso-surface graphic constructs, DVR does not require the use of intermediate graphic constructs (such as polygons or triangles) to represent objects, surfaces and/or boundaries. However, DVR can use mathematical models to classify certain structures and can use graphic constructs.
Also, although embodiments of the present invention are directed to DVR of medical images, other 3-D image generation techniques and other 3-D image data may also be used. That is, the 3-D images with respective visual characteristics or features may be generated differently when using non-DVR techniques.
The term “automatically” means that the operation can be substantially, and typically entirely, carried out without human or manual input, and is typically programmatically directed or carried out. The term “electronically” includes both wireless and wired connections between components.
The term “clinician” means a physician, radiologist, physicist, or other medical personnel desiring to review medical data of a patient. The term “tissue” means blood, cells, bone and the like. “Distinct or different tissue” or “distinct or different material” means tissue or material with dissimilar density or some other structural or physical characteristic. For example, in medical images, different or distinct tissue or material can refer to tissue having biophysical characteristics different from other (local) tissue. Thus, a blood vessel and spongy bone may have overlapping intensity but are distinct tissue. In another example, a contrast agent can make tissue have a different density or appearance from blood or other tissue.
The term “transfer function” means a mathematical conversion of volume data to image data that typically applies one or more of color, opacity, intensity, contrast and brightness. The transfer function is usually connected to the intensity scale rather than spatial regions in the volume. See also, co-pending, co-assigned U.S. patent application Ser. No. 11/137,160, entitled Automated Medical Image Visualization Using Volume Rendering with Local Histograms, and Ljung et al., Transfer Function Based Adaptive Decompression for Volume Rendering of Large Medical Data Sets, Proceedings IEEE Volume Visualization and Graphics Symposium (2004), pp. 25-32, the contents of which are hereby incorporated by reference as if recited in full herein.
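By way of illustration only, because the transfer function is connected to the intensity scale rather than to spatial regions, it can be pictured as a lookup table from voxel intensity to color and opacity. The following simplified C++ sketch (with hypothetical names such as TransferFunction and Rgba, and an assumed 12-bit intensity scale) illustrates the notion and is not intended to describe any particular implementation.

    #include <array>
    #include <cstdint>

    // Hypothetical color/opacity sample produced by a transfer function lookup.
    struct Rgba { float r, g, b, a; };

    // A minimal piecewise-constant transfer function defined over the intensity
    // scale (assumed 12-bit here), not over spatial regions of the volume.
    class TransferFunction {
    public:
        TransferFunction() { table_.fill(Rgba{0.f, 0.f, 0.f, 0.f}); }

        // Assign a color and opacity to the intensity interval [lo, hi].
        void setRange(std::uint16_t lo, std::uint16_t hi, Rgba value) {
            for (std::size_t i = lo; i <= hi && i < table_.size(); ++i)
                table_[i] = value;
        }

        Rgba lookup(std::uint16_t intensity) const {
            std::size_t i = intensity;
            return table_[i < table_.size() ? i : table_.size() - 1];
        }

    private:
        std::array<Rgba, 4096> table_; // one entry per intensity value
    };

    int main() {
        TransferFunction tf;
        // For example, render one intensity band (such as contrast-filled vessels
        // or bone) as a warm, semi-opaque material and leave the rest transparent.
        tf.setRange(300, 1500, Rgba{0.9f, 0.8f, 0.7f, 0.6f});
        Rgba sample = tf.lookup(1000);
        (void)sample; // in a renderer this sample would be composited along a ray
        return 0;
    }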
The term “synchronization” and derivatives thereof mean that the same operation is applied to two or more views generally, if not substantially or totally, concurrently. Synchronization is different from registration, where two volumes are merely aligned. The synchronization operation can be carried out between at least two different 3-D images, where an operation on a first image is automatically synchronized (applied) to the second image. It is noted that there can be any number of views in a synch group. Further, the synchronization does not require a static “master-slave” relationship between the images. For example, if an operation on image 1 is synched to image 2, then an operation on image 2 can also be synched to image 1 as well. In addition, in some embodiments, there can be several synch groups defined, and the synch operation can be applied across all groups, between defined groups, or within a single group, at the same time. For example, a display can have three groups of 3-D images, each group including two or more 3-D images, and the synch operation can be applied to images within a single group based on a change to one of the 3-D images in that group. Alternatively, the synch may be applied to images in other groups as well as to images within the group to which the changed image belongs.
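By way of illustration only, the following simplified C++ sketch shows one possible way synch groups could be represented so that an operation originating at any member of a group is propagated to the other members of that group and, if desired, forwarded to other groups, without a static master-slave relationship; the names SyncGroup and zoomFrom are hypothetical placeholders.

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical view; a real view would carry full rendering state.
    struct View {
        std::string name;
        double zoom;
    };

    // A synch group: an operation originating at ANY member is applied to all
    // members, so no view is permanently a "master" or a "slave".
    class SyncGroup {
    public:
        explicit SyncGroup(std::string name) : name_(std::move(name)) {}
        void add(const std::string& viewName) { views_.push_back(View{viewName, 1.0}); }

        // Propagate a zoom change that originated at views_[source].
        void zoomFrom(std::size_t source, double zoom) {
            std::cout << "zoom originated at " << name_ << '/' << views_[source].name << '\n';
            for (View& v : views_) {
                v.zoom = zoom; // identical operation on every member, including the source
                std::cout << "  " << name_ << '/' << v.name << " zoom=" << v.zoom << '\n';
            }
        }

    private:
        std::string name_;
        std::vector<View> views_;
    };

    int main() {
        // Three groups of two 3-D images each, as in the example above.
        SyncGroup g1("group1"), g2("group2"), g3("group3");
        g1.add("CT current");   g1.add("CT prior");
        g2.add("MR current");   g2.add("MR prior");
        g3.add("CT angio");     g3.add("CT angio prior");

        // Within-group synch: a change on the second image of group1 drives group1 only.
        g1.zoomFrom(1, 2.0);

        // Cross-group synch can simply forward the same operation to the other groups.
        g2.zoomFrom(0, 2.0);
        g3.zoomFrom(0, 2.0);
        return 0;
    }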
The term “visualization” means to present volume data representing features with different visual characteristics, such as differing intensity, opacity, color, texture and the like, as 2-D renderings that appear to be 3-D images. Thus, the term “3-D” in relation to images does not require actual 3-D viewability (such as with 3-D glasses), just a 3-D appearance on a display. The term “similar examination type” refers to corresponding anatomical regions or features in images having diagnostic or clinical interest in different data sets corresponding to different patients (or the same patient at a different time). Examples include, but are not limited to, a coronary artery and organs such as the liver, heart, kidneys, lungs and brain.
Turning now to the figures, embodiments of the present invention are described in more detail. As shown, a visualization system can include a rendering system 25 in communication with at least one physician workstation 30.
The rendering system 25 can include a DVR image processor system. The image processor system can include a digital signal processor and other circuit components that allow for collaborative user input as discussed above. Thus, in operation, the image processor system renders the visualization of the medical image using the medical image volume data, typically on at least one display at the physician workstation 30.
In some embodiments, a first display 31 may be the master display with, for example, GUI input, and the other display 32 may be a slave display that cooperates with commands generated using the master display to generate common visualizations of a related but different 3-D image synchronized with that on the first display 31. In other embodiments, each display can act as either a master or slave and an electronic activate switch or icon can allow a clinician to electronically tie the two displays together for synchronization of the rendered images. Additional displays may also be synched with the first and/or second displays 31, 32 (not shown).
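By way of illustration only, the following simplified C++ sketch shows one possible way an electronic activate switch or icon could tie two displays together so that, once the tie is active, a command issued on either display is mirrored on the other; the names Display and DisplayTie are hypothetical placeholders.

    #include <iostream>
    #include <string>

    // Hypothetical display abstraction; in practice this would drive a renderer.
    struct Display {
        std::string name;
        double rotationDeg;
        void show() const { std::cout << name << " rotation=" << rotationDeg << " deg\n"; }
    };

    // When the clinician activates the tie (for example, via an on-screen icon),
    // commands issued on either display are applied to both, so each display can
    // act as either master or slave.
    class DisplayTie {
    public:
        DisplayTie(Display& a, Display& b) : a_(a), b_(b) {}
        void setActive(bool on) { active_ = on; }

        void rotate(Display& origin, double degrees) {
            origin.rotationDeg = degrees;
            if (active_) {
                Display& other = (&origin == &a_) ? b_ : a_;
                other.rotationDeg = degrees; // mirror the command on the other display
            }
            a_.show();
            b_.show();
        }

    private:
        Display& a_;
        Display& b_;
        bool active_ = false;
    };

    int main() {
        Display d31{"display 31", 0.0};
        Display d32{"display 32", 0.0};
        DisplayTie tie(d31, d32);

        tie.rotate(d31, 30.0); // tie inactive: only display 31 changes
        tie.setActive(true);   // clinician activates the synch icon
        tie.rotate(d32, 90.0); // a command on display 32 now also drives display 31
        return 0;
    }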
In other embodiments, two synchronized 3-D images can be displayed on a single display at a workstation 30. In some embodiments, one image can function as the master image and the other image can be the slave image that is rendered responsive to and using the same visualization tools or data manipulation operations used to create the selected view of the first image.
In some embodiments, instead of clinician input, an electronic module (that can be automatically programmatically carried out) employing rules-based visualization (segmentation, zoom, sculpting, etc.) of two or more 3-D images can be used to generate the different synchronized views of the two or more 3-D images. The rules-based algorithm can be predefined to generate a sequence of views, and the views can depend on the examination underway or a diagnosis, and/or can be selected using a pull-down chart or list of certain pre-configured sequences of views.
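By way of illustration only, the following simplified C++ sketch shows one possible way such a rules-based protocol could be represented as a pre-defined sequence of view steps applied to every synchronized image; the protocol name and the identifiers ProtocolStep and protocolFor are hypothetical placeholders.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical synchronized view; "preset" stands in for a full view state.
    struct View {
        std::string name;
        std::string preset;
    };

    // One step of a pre-configured review protocol (segmentation, zoom,
    // sculpting, etc.) applied to every synchronized image.
    struct ProtocolStep {
        std::string description;
        std::function<void(View&)> apply;
    };

    // A rules-based sequence of views keyed by the examination or diagnosis;
    // other sequences could be offered in a pull-down list.
    std::vector<ProtocolStep> protocolFor(const std::string& examination) {
        if (examination == "coronary follow-up") {
            return {
                {"overview",        [](View& v) { v.preset = "overview"; }},
                {"segment vessels", [](View& v) { v.preset = "vessels"; }},
                {"zoom to finding", [](View& v) { v.preset = "zoomed"; }},
            };
        }
        return {}; // unknown examination: no pre-configured sequence
    }

    int main() {
        std::vector<View> synced = {{"CT current", ""}, {"CT prior", ""}};
        for (const ProtocolStep& step : protocolFor("coronary follow-up")) {
            for (View& v : synced) step.apply(v); // same step, every image
            std::cout << step.description << " applied to all synchronized views\n";
        }
        return 0;
    }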
In some embodiments, user input can be accepted to manipulate a visualization of the first 3-D image on the at least one display (block 55). Also, optionally, at least one 2-D image associated with the 3-D image can be provided adjacent the respective first and second 3-D images on the at least one display (block 52). The first and second images can be generated using different imaging modality data sets (block 58). For example, the first 3-D image can be a CT image and the second 3-D image can be an MRI image. The associated 2-D images can also be derived from different imaging modality data than the corresponding 3-D data of a patient.
As will be appreciated by one of skill in the art, embodiments of the invention may be embodied as a method, system, data processing system, or computer program product. Accordingly, the present invention may take the form of an entirely software embodiment or an embodiment combining software and hardware aspects, all generally referred to herein as a “circuit” or “module.” Furthermore, the present invention may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium. Any suitable computer readable medium may be utilized including hard disks, CD-ROMs, optical storage devices, transmission media such as those supporting the Internet or an intranet, or magnetic or other electronic storage devices.
Computer program code for carrying out operations of the present invention may be written in an object oriented programming language such as Java, Smalltalk or C++. However, the computer program code for carrying out operations of the present invention may also be written in conventional procedural programming languages, such as the “C” programming language, or in a visually oriented programming environment, such as Visual Basic.
Certain of the program code may execute entirely on one or more of the user's computers, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, some program code may execute on local computers and some program code may execute on one or more local and/or remote servers. The communication can be done in real time or near real time or off-line using a volume data set provided from the imaging modality.
The invention is described in part below with reference to flowchart illustrations and/or block diagrams of methods, systems, computer program products and data and/or system architecture structures according to embodiments of the invention. It will be understood that each block of the illustrations, and/or combinations of blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block or blocks.
These computer program instructions may also be stored in a computer-readable memory or storage that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory or storage produce an article of manufacture including instruction means which implement the function/act specified in the block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the block or blocks.
As illustrated in the figures, data processing systems that may be used to carry out embodiments of the present invention can include a processor 100 in communication with a memory 136.
In particular, the processor 100 can be a commercially available or custom microprocessor, microcontroller, digital signal processor or the like. The memory 136 may include any memory devices and/or storage media containing the software and data used to implement the functionality circuits or modules used in accordance with embodiments of the present invention. The memory 136 can include, but is not limited to, the following types of devices: ROM, PROM, EPROM, EEPROM, flash memory, SRAM, DRAM and magnetic disk. In some embodiments of the present invention, the memory 136 may be a content addressable memory (CAM).
As further illustrated in the figures, the memory 136 may include several categories of software and data used in the data processing system, including application programs 154 and data 156.
The data 156 may include (archived or stored) digital volumetric image data sets 126 that provide stacks of image data correlated to respective patients. As further illustrated in the figures, the application programs 154 can include modules 120, 124 that carry out the rendering and 3-D image synchronization operations described herein.
While the present invention is illustrated with reference to the application programs 154, 120, 124 in the figures, as will be appreciated by those of skill in the art, other configurations may also be utilized while still benefiting from the teachings of the present invention.
The tools listed in the figures, such as transfer function adjustment, segmentation, sculpting and zoom, are provided by way of example only; other visualization operations may also be synchronized according to embodiments of the present invention.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications are intended to be included within the scope of this invention as defined in the claims. The invention is defined by the following claims, with equivalents of the claims to be included therein.