This invention relates to imaging in general, and more particularly to the correction of rendered volumes produced during imaging.
Introduction
Artificial Intelligence (AI) is the ability of a machine to learn and react to changes in its working environment. The academic concept of artificial intelligence is very old: it goes back at least as far as 1956. However, machines that sense the surrounding environment and react to that surrounding environment existed much earlier. For example, the simple thermostat, which was invented in the late 1800's, controls the heating and cooling of a room, and can easily be classified as an intelligent machine. The thermostat senses the temperature in the room and makes a decision, without human intervention, to turn on or turn off the heating or cooling apparatus. However, the classification of a machine as an AI machine is typically based more on its complexity than its purpose. For instance, Optical Character Recognition (OCR) and Continuous Speech Recognition (CSR) topped the list of AI projects back in the 1980's, but they are now often dropped from lists of AI projects in favor of more complex applications such as Autonomous Vehicles, etc.
The components of a successful Al application are generally: 1) a mechanism to provide measurements (typically analog or digital data provided by sensors); 2) mathematical algorithms that analyze the input data and make a decision; and 3) a learning or training mechanism that “teaches” the machine how to behave under different environmental conditions with, optionally, a mechanism to self-test and validate.
The Problem
The performance of any imaging modality depends on the accuracy and the truthfulness (i.e., fidelity) of the three-dimensional (3D) volume created (i.e., rendered) through imaging. In addition, the accuracy of a surgical navigation system, and the success of auto-registration in a surgical navigation system, depend on the accuracy and truthfulness (i.e., fidelity) of the 3D volume created (i.e., rendered) through imaging.
The accuracy and truthfulness (i.e., fidelity) of a 3D volume created (i.e., rendered) through imaging depends on the translation accuracy of the imaging modality. For example, with CT imaging, the CT machine may be fixed and the patient support may be moving (“fixed CT imaging”), or the CT machine may be moving and the patient support may be fixed (“mobile CT imaging”); in either case, the fidelity of the rendered 3D volume depends on the translation accuracy of the moving element.
Thus, it is desirable to provide a new and improved method and apparatus for increasing the rendering accuracy of scanning systems.
The present invention comprises a new and improved method and apparatus for increasing the rendering accuracy of an imaging modality.
More particularly, the method of the present invention comprises placing a pre-measured reference (i.e., a reference having markings set with pre-determined spacings) adjacent to the object which is being scanned so that the pre-measured reference is incorporated in the 3D volume created (i.e., rendered) through imaging. The imaged reference (i.e., the 3D rendered volume of the pre-measured reference) is then compared to the true pre-measured reference. This comparison between the rendered 3D volume of the pre-measured reference and the true pre-measured reference is used to create a “correction map” that can be used to create a more truthful (i.e., more accurate) 3D rendering of the scanned volume. The imaged (i.e., rendered) 3D volume is then adjusted, based on the results of the comparison (i.e., using the “correction map”) so as to provide a more accurate 3D rendering of the scanned volume.
Note that this reference-based auto-correction scheme may be applied to an imaging modality which comprises a fixed CT machine and a moving patient support (“fixed CT imaging”), or it may be applied to an imaging modality which comprises a moving CT machine and a fixed patient support (“mobile CT imaging”). This reference-based auto-correction scheme may also be applied to other imaging modalities. Note also that where the imaging modality comprises a moving CT machine and a fixed patient support, the moving CT machine may be a free-moving mobile scanner (e.g., a mobile scanner supported on wheels or centipede belts), or the moving CT machine may be a guided mobile scanner (e.g., a mobile scanner supported on rails or tracks).
In addition to creating a “correction map” for increasing the accuracy of the rendered 3D volume, the pre-measured reference can also be used to automatically calibrate the translation speed of the moving portion of the imaging modality (e.g., the moving patient support or the moving CT machine), and to make a decision as to whether a correction to the translation speed is needed or not. In other words, where the imaging modality comprises a fixed CT machine and a moving patient support, the pre-measured reference can be used to calibrate the translation speed of the moving patient support and adjust that translation speed if necessary; or where the imaging modality comprises a moving CT machine and a fixed patient support, the pre-measured reference can be used to calibrate the translation speed of the moving CT machine and adjust that translation speed if necessary. Again, this is done by determining the difference between the imaged (i.e., rendered) 3D volume of the pre-measured reference incorporated in the rendered 3D volume and the actual pre-measured reference.
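By way of a hedged illustration only (the function name and the proportional rule are assumptions, not taken from the source), one plausible speed-calibration rule derived from the rendered reference might look as follows: if the reference renders longer than its true length, the translation was slower than assumed, and the commanded speed can be scaled up accordingly.

```python
def speed_correction_factor(rendered_length_mm: float, true_length_mm: float) -> float:
    """Multiplier to apply to the commanded translation speed.

    With slices acquired at a fixed time interval, rendered_length / true_length
    equals expected_speed / actual_speed (assuming constant speeds), so scaling
    the commanded speed by this ratio brings actual speed up to expected speed.
    """
    return rendered_length_mm / true_length_mm

# Example: a 300 mm reference that renders as 306 mm implies the scanner ran
# about 2% slow, so the commanded speed is increased by a factor of 1.02.
print(speed_correction_factor(306.0, 300.0))  # -> 1.02
```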
The pre-measured reference can be embedded in, or attached to, or laid on, etc. the patient support (which can be a table, a head holder, etc.) or located anywhere in the scanner, so long as the pre-measured reference can be imaged with the volume which is being scanned. By way of example but not limitation, the patient support may comprise a scan table covered by a cushion (upon which the patient rests), and the pre-measured reference may be mounted to the scan table below the cushion.
This new method includes all of the elements needed to provide an intelligent imaging modality that measures, and corrects for, any inaccuracies in the rendered 3D volume which are due to translation speed inaccuracies resulting from the environment in which the scan is taken (e.g., floor inaccuracies, drive system inaccuracies, bumps on a vehicle floor, etc.). The pre-measured reference and the scanner itself are the measuring instruments which provide the needed input to the method (i.e., they provide the reference object and the rendered 3D volume which includes the rendered reference object). The method also includes related mathematical algorithms which are used to auto-extract information from the rendered 3D volume (i.e., to auto-extract the rendered 3D volume of the pre-measured reference and to auto-extract the rendered 3D volume of the scanned object, e.g., patient anatomy). The present invention is capable of taking this information and finding a match, from within the millions of possible hypotheses, to identify the rendered 3D volume of the pre-measured reference within the complete rendered 3D volume (i.e., the rendered 3D volume of the object which is being scanned and the pre-measured reference which is being scanned). Then, after comparing the rendered 3D volume of the pre-measured reference against the true pre-measured reference and determining the aforementioned “correction map”, the present invention can make the necessary correction to the rendered 3D volume if it is deemed necessary to counteract (i.e., correct for) inaccuracies introduced into the rendered 3D volume during imaging (i.e., due to translational inaccuracies in the moving elements of the imaging system). In accordance with the invention, the imaging modality is “taught” to make a decision whether a correction is beneficial, needed or not required.
In one preferred form of the invention, there is provided a method for correcting inaccuracies in a three-dimensional (3D) rendered volume of an object due to deviations between an actual scanner translation speed and an expected scanner translation speed, the method comprising:
placing a pre-measured reference adjacent to the object which is being scanned so that the pre-measured reference and the object are in the same scan field;
scanning the object and the pre-measured reference so that the object and the pre-measured reference are both incorporated in a 3D rendered volume produced through scanning;
comparing the 3D rendered volume of the pre-measured reference against the 3D volume of the true pre-measured reference and generating a correction map indicative of how the rendered 3D volume of the pre-measured reference should be adjusted so as to produce a more accurate 3D rendering of the pre-measured reference; and
using the correction map to adjust the rendered 3D volume of the object.
In another preferred form of the invention, there is provided a method for correcting inaccuracies in a three-dimensional (3D) rendered volume of an object due to deviations between an actual scanner translation speed and an expected scanner translation speed, the method comprising:
creating a 3D rendered volume of the object and a pre-measured reference;
extracting information regarding the 3D rendered volume of the pre-measured reference from the 3D rendered volume of the object and the pre-measured reference;
comparing the extracted information regarding the 3D rendered volume of the pre-measured reference to the corresponding information of the actual pre-measured reference;
determining the extent of the deviation of the 3D rendered volume of the pre-measured reference from the information of the actual pre-measured reference;
correcting the 3D rendered volume of the object based on the foregoing determination of the extent of the deviation of the 3D rendered volume of the pre-measured reference from the information of the actual pre-measured reference;
extracting information regarding the 3D rendered volume of the pre-measured reference from the corrected 3D rendered volume of the object and the pre-measured reference; and
comparing the extracted information regarding the 3D rendered volume of the pre-measured reference to the corresponding information of the actual pre-measured reference.
These and other objects and features of the present invention will be more fully disclosed or rendered obvious by the following detailed description of the preferred embodiments of the invention, which is to be considered together with the accompanying drawings, wherein like numbers refer to like parts.
Background
The present invention addresses the rendering accuracy of imaging systems, e.g., CT scanners.
As mentioned above, there are two types of imaging modalities: the class of fixed scanners (i.e., the fixed imaging modality) and the class of mobile scanners (i.e., the mobile imaging modality).
The translational accuracy of the fixed scanner is driven by the translational accuracy of moving the patient support (e.g., the table or bed). With a fixed imaging modality, the fixed scanner gantry is stationary (“fixed”) and the patient is moved in and out of the scan zone using a mounted scanning table or bed. The patient table or bed is, by design, well-leveled and moves on a well-defined trajectory driven by high-precision rails, which provide a translational accuracy of 0.25 mm. This serves as the “gold standard” of translational accuracy.
The translational accuracy of the mobile scanner is driven by the translational accuracy of the moving scanner.
Mobile scanners can be divided into two sub-classes. The first sub-class includes scanners that move on a platform guided by rails. The second sub-class includes free-moving scanners that move over the floor using wheels or centipede belts.
In imaging mode, scanners of the first sub-class do not move on the floor: instead, they move on a well-leveled platform which helps overcome any variation in the floor. The rails ensure that the scanner will travel in a straight line. With scanners of the first sub-class, the scan platforms have good translational accuracy but they reduce the scanner mobility and require more space.
Scanners of the second sub-class are truly mobile scanners: they do not use any platform or rails. Instead, they are free to move on any floor of any structure (e.g., room, vehicle, etc.) that is large enough to accommodate them. However, with scanners of the second sub-class, the translational accuracy of the scanners is affected by the floor they are moving on. Any variation in the floor (such as bumps, ramps or inclines) affects the translational accuracy of the scanner (and hence affects the accuracy of the rendered 3D volume produced by the scanner). Thus, with scanners of this second sub-class, the translational accuracy of the scanner is highly dependent on the floor where the scan is being conducted.
With fixed CT imaging, a variation in the speed of movement of the patient support (i.e., the patient support moving faster or slower than its intended speed) causes positioning errors in the rendered 3D volume; and with mobile CT imaging, a variation in the speed of movement of the scanner (i.e., the scanner moving faster or slower than its intended speed) causes positioning errors in the rendered 3D volume.
By way of example but not limitation, consider the case of a moving scanner, where the scanner moves at (i) the ideal speed (i.e., the expected speed), (ii) a slower speed (i.e., slower than the expected speed) and (iii) a faster speed (i.e., faster than the expected speed).
Ideal Speed: When the scanner moves at the ideal (i.e., expected) speed, the scanned sections are acquired at the positions assumed by the rendering, and the rendered 3D volume accurately represents the scanned object.
Slow Speed: When the scanner moves slower than the expected speed, the scanned sections are acquired closer together than the rendering assumes, and the scanned sections are misaligned in the rendered 3D volume.
Fast Speed: When the scanner moves faster than the expected speed, the scanned sections are acquired farther apart than the rendering assumes, and again the scanned sections are misaligned in the rendered 3D volume.
Thus, a variation in the speed of the scanner (i.e., the scanner moving faster or slower than its intended speed) causes positioning errors (i.e., misalignment of the scanned sections) in the rendered 3D volume.
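To make the effect concrete, consider a worked relation under assumptions made here for illustration only (constant speeds and a fixed time interval $\Delta t$ between slice acquisitions; the source does not state these):

$$\text{actual slice spacing} = v_{\text{actual}}\,\Delta t, \qquad \text{assumed slice spacing} = v_{\text{expected}}\,\Delta t$$

$$\frac{\text{rendered length}}{\text{true length}} = \frac{v_{\text{expected}}}{v_{\text{actual}}}$$

Under these assumptions, a scanner running slow ($v_{\text{actual}} < v_{\text{expected}}$) renders objects stretched along the scan axis, and a scanner running fast renders them compressed.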
In the past, two software-based approaches have been developed to help improve the translational accuracy of mobile CT scanners.
The first software-based approach is based on tracking the distance traveled by the scanner in axial mode (i.e., along the longitudinal axis of the scan), where the scanner translates in between scan sections to cover the entire scan volume. The distance traveled by the scanner is examined after each move, and a correction is introduced in the next move to compensate for any difference between the actual travel distance and the predicted (i.e., expected) travel distance. With this first software-based approach, the solution to the problem of translational accuracy (or, more precisely, translational inaccuracy) depends on feedback from the scanner (i.e., data relating to the distance actually traveled by the scanner).
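By way of a hedged illustration (the function name and units are assumptions; no specific scanner's control code is being reproduced), this per-move feedback rule might look as follows:

```python
def next_move_command(expected_step_mm: float, actual_travel_mm: float) -> float:
    """Return the next commanded move, compensating the previous move's error.

    Hypothetical control rule per the description above: the distance traveled
    is examined after each move, and the difference between the actual and
    expected travel is folded into the next move.
    """
    error_mm = actual_travel_mm - expected_step_mm   # + = overshoot, - = undershoot
    return expected_step_mm - error_mm

# Example: the scanner was commanded to move 40 mm but traveled 40.6 mm,
# so the next move is shortened to 39.4 mm to compensate.
print(next_move_command(40.0, 40.6))  # -> 39.4
```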
The second software-based approach is to use a tilt sensor on the scanner to measure scanner tilt (and hence measure floor variation) and then use this information to improve translational accuracy. But this second approach is dependent on the location of the sensor on the scanner (i.e., the higher the tilt sensor is on the scanner, the more accurate the tilt data, and hence the more accurate the measure of floor variation). And in any case, with this second software-based approach, the solution to the problem of translational accuracy (or, more precisely, translational inaccuracy), depends on feedback from the scanner.
In yet another approach, where the scanner operates in a continuous translational mode, a speed calibration is introduced to improve the translational accuracy of the scanner.
The New Method
The new method of the present invention is an image-based correction. This implies that once the 3D volume is imaged, the imaging modality is no longer needed for the correction, i.e., everything required for correcting inaccuracies in the rendered 3D volume is already present in the rendered 3D volume (unlike the aforementioned first software-based approach and second software-based approach, which require feedback from the scanner). This is achieved by placing a pre-measured reference (e.g., a ruler 10 with its beads 15, see below) into the scan field so that the pre-measured reference is incorporated into the rendered 3D volume produced by the scan. The rendered 3D volume of the pre-measured reference may then be extracted from the complete rendered 3D volume (i.e., the rendered 3D volume of the patient anatomy being scanned and the pre-measured reference positioned in the scan field), and compared with the actual pre-measured reference, so as to generate a correction map which can be used to adjust the complete rendered 3D volume so as to correct for translational errors introduced by the scanner (i.e., by differences between the actual speed of the scanner vis-a-vis the expected speed of the scanner).
The pre-measured reference is, in one preferred form of the invention, a ruler 10 carrying a set of high-density beads 15 placed at known distances (discussed further below).
Using the 3D volume generated by the imaging modality, in one preferred form of the invention, the auto-correction scheme of the present invention works as follows:
1) create a 3D rendered volume of the object and the pre-measured reference;
2) extract information regarding the 3D rendered volume of the pre-measured reference from the complete rendered 3D volume;
3) compare the extracted information to the corresponding information of the actual pre-measured reference;
4) determine the extent of the deviation of the rendered pre-measured reference from the actual pre-measured reference (i.e., generate the correction map);
5) correct the rendered 3D volume using the correction map;
6) extract information regarding the rendered pre-measured reference from the corrected 3D volume; and
7) compare that extracted information to the corresponding information of the actual pre-measured reference.
The last two steps are optional, and are only used to validate the auto-correction.
In one preferred form of the invention, the auto-correction scheme of the present invention is implemented by software running on an appropriately-programmed general purpose computer.
The auto-correction scheme of the present invention preferably employs several algorithm tools which may be implemented in software on an appropriately-programmed general purpose computer: 1) an auto-detection algorithm tool; 2) an auto-matching algorithm tool (including a decision tree); and 3) an error estimation and correction tool. Each of these tools is discussed in detail below.
These three algorithm tools are contained within the software application which implements the preferred form of the present invention.
The Software Application Which Implements The Preferred Form Of The Invention
The software application which implements the preferred form of the invention reads the DICOM images which are produced by simultaneously scanning the patient anatomy and the pre-measured reference in the same scan field, corrects the rendered 3D volume using the scanned reference, and then outputs a set of corrected images (e.g., in the DICOM standard).
The auto-correction scheme of the present invention preferably comprises seven blocks, which are discussed in turn below.
The First Block reads the image data, or data from a binary file, for example, a DICOM series. The image data includes the reference phantom (i.e., the pre-measured reference) and the target scan. Note that the image data can be from a clinical scan or from a validation scan of the phantom (i.e., the pre-measured reference), either of which can be used to measure the accuracy of the scanned volume.
The Second Block runs the auto-detection algorithm if the system is running a validation scan of a validation phantom (i.e., the pre-measured reference). The validation phantom (i.e., a pre-measured validation phantom, which can be the same as the pre-measured reference which is used for a clinical scan) is typically made up of markers set at known distances (e.g., ruler 10 with its beads 15).
The Third Block runs the auto-detection and the auto-matching algorithms on the rendered 3D reference phantom (i.e., the rendered 3D volume of the pre-measured reference). In addition to calculating the maximum errors, it also calculates the re-sampling map (i.e., the “correction map”) for correcting scanner translation errors in the scanned volume.
The Fourth Block re-samples the scanned volume using the correction map generated earlier. In other words, the Fourth Block adjusts the original rendered 3D volume, using the correction map, to produce a corrected rendered 3D volume, and then this corrected rendered 3D volume is re-sampled (i.e., “re-sliced”) so as to produce a corrected set of 2D slice images (e.g., slice images in the DICOM standard) which reflect the corrected rendered 3D volume. The re-sampled volume should be more accurate than the original rendered volume, since it corrects for scanner translational errors: the re-sampling uses real information about the scan floor (captured in the correction map) to correct the positional accuracy of the rendered volume.
The Fifth Block is a repeat of the Third Block. However, the corrected volume is used for auto-detecting and auto-matching of the resampled reference phantom (i.e., the pre-measured reference). The block is used to validate the accuracy of the auto-correction performed on the rendered 3D reference phantom (i.e., the pre-measured reference).
The Sixth Block is similar to the Second Block. It runs the auto-detection and auto-matching algorithms on the re-sampled volume. As with the Second Block, it is applied only if the validation phantom (i.e., the pre-measured validation phantom or pre-measured reference) is used.
The Seventh (and last) Block is used for writing the re-sampled volume to a memory storage unit (e.g., on the scanner, or in a hospital PACS unit, etc.) for later use.
Blocks Three and Four are the only blocks needed for achieving auto-correction of the scanned volume to correct for translational errors of the scanner. The remaining blocks are mainly used for validation and verification purposes. Blocks Three, Four and Five are used in both validation and clinical cases. The First and Seventh (the last) Blocks serve as the interface to the scanner.
The Auto-Detection Algorithm Tool
The auto-detection algorithm is designed to detect the markers of the reference phantom (e.g., the beads 15 of the pre-measured reference ruler 10) or the validation phantom (e.g., a pre-measured reference which may be the same as the clinical reference phantom) in the rendered 3D image. The auto-detection process is a multi-step process: the rendered 3D image is first segmented into candidate 3D objects, and the candidate objects are then screened (e.g., by size) to identify the markers.
The objects are created using the “connected component labeling” segmentation method; however, any segmentation method can be used to create the 3D objects. The auto-detection algorithm is parameter-driven so that it can be adjusted based on the size and the nature of the reference marker (e.g., bead 15).
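The following is a minimal sketch, in Python, of such a detection step (the threshold value, size limits, and function name are illustrative assumptions; the text specifies only that the algorithm is parameter-driven and segmentation-based):

```python
import numpy as np
from scipy import ndimage

def detect_markers(volume: np.ndarray, threshold: float = 2000.0,
                   min_voxels: int = 5, max_voxels: int = 500) -> np.ndarray:
    """Detect high-density, bead-like markers in a rendered 3D volume.

    Returns an (N, 3) array of marker centroids in voxel coordinates.
    threshold, min_voxels and max_voxels are illustrative parameters; the
    algorithm is parameter-driven, so they would be tuned to the marker.
    """
    mask = volume > threshold                      # step 1: keep dense voxels
    labels, n_objects = ndimage.label(mask)        # step 2: connected components
    centroids = []
    for i in range(1, n_objects + 1):
        size = int(np.sum(labels == i))
        if min_voxels <= size <= max_voxels:       # step 3: screen objects by size
            centroids.append(ndimage.center_of_mass(mask, labels, i))
    return np.asarray(centroids)                   # step 4: marker locations
```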
The Auto-Matching Algorithm Tool
The detected markers (e.g., beads 15) are automatically matched to a set of reference markers (e.g., the beads 15 of the pre-measured reference ruler 10). The matching is done based on the inner distances between the detected markers and the true markers:
$$M(m_1, \ldots, m_n) = \arg\min_{\text{all permutations}} \left|\, \sum_{i,j} d_m(i,j) - \sum_{i,j} d_t(i,j) \,\right|$$

where $d_m(i,j)$ is the inner distance between detected markers $i$ and $j$, and $d_t(i,j)$ is the inner distance between the corresponding true markers.
The matching is done sequentially: the true markers are matched one at a time, and the matching step is repeated until all of the true markers are matched to a detected marker.
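A hedged sketch of the exhaustive version of this matching follows (feasible only for small marker sets; the argmin formula is read here as minimizing the summed per-pair inner-distance differences, which is an interpretive assumption, as is the function name):

```python
import numpy as np
from itertools import permutations

def match_markers(detected_z_mm: np.ndarray, true_z_mm: np.ndarray):
    """Exhaustively score assignments of detected markers to true markers.

    Returns (best_assignment, best_score), where best_assignment[k] is the
    index of the detected marker matched to true marker k.
    """
    n = len(true_z_mm)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    d_true = np.array([abs(true_z_mm[i] - true_z_mm[j]) for i, j in pairs])
    best, best_score = None, np.inf
    for perm in permutations(range(len(detected_z_mm)), n):
        d_det = np.array([abs(detected_z_mm[perm[i]] - detected_z_mm[perm[j]])
                          for i, j in pairs])
        score = float(np.abs(d_det - d_true).sum())  # inner-distance mismatch
        if score < best_score:
            best, best_score = perm, score
    return best, best_score
```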
The Decision Tree
The auto-matching algorithm uses a tree-based decision process with N-best choices. For a set of M markers, the number of possible matches is astronomical: for example, for a set of 13 markers there exist 6,227,020,800 possible matches, and for a set of 78 markers the number of possible matches is more than 100 digits long. A decision based on calculating the matching scores of all possible matches is therefore practically impossible, and several methods are used to make the calculation practical. The N-best choice algorithm, coupled with the sequential search algorithm, reduces the computation time and allows for a fast decision.
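A minimal sketch of a sequential N-best (beam) search of this kind follows (the incremental scoring detail and names are assumptions; the source describes the approach only at the level of N-best choices plus sequential search):

```python
import numpy as np

def nbest_match(detected_z_mm: np.ndarray, true_z_mm: np.ndarray,
                n_best: int = 5):
    """Sequential search with N-best pruning over marker assignments.

    True markers are matched one at a time; after each step, only the n_best
    partial hypotheses (by accumulated inner-distance score) are kept, which
    avoids scoring the astronomically many complete assignments.
    """
    beam = [(0.0, ())]                      # (score, detected indices used so far)
    for t_pos in true_z_mm:
        candidates = []
        for score, assigned in beam:
            for d_idx, d_pos in enumerate(detected_z_mm):
                if d_idx in assigned:
                    continue
                # incremental cost: compare new pair distances against truth
                inc = sum(abs(abs(d_pos - detected_z_mm[a]) -
                              abs(t_pos - true_z_mm[i]))
                          for i, a in enumerate(assigned))
                candidates.append((score + inc, assigned + (d_idx,)))
        beam = sorted(candidates)[:n_best]  # prune to the N best hypotheses
    return beam[0]                          # best complete (score, assignment)
```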
The Error Estimation
In general, the imaged (i.e., rendered) 3D volume is made up of a collection of 2D images. The 2D images are assumed to be equi-spaced. However, slight errors in the locations of the 2D images cause deformation in the scanned volume (these errors in the locations of the 2D images are the result of translational errors of the scanner). For example, if the scanner is moving too slowly, the spacing between the 2D images will be compressed, and if the scanner is moving too fast, the spacing between the 2D images will be stretched. The location error of each 2D image is estimated from the matched markers:
$$E_{\text{location}} = f\left(M_{\text{true location}},\; M_{\text{scanned location}}\right)$$

where $M_{\text{true location}}$ is the known location of a marker on the actual pre-measured reference and $M_{\text{scanned location}}$ is the location of that marker in the rendered 3D volume.
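As a hedged sketch of one way to realize this estimate (assuming linear interpolation between matched markers; the source does not specify the form of f, and the function name is an assumption):

```python
import numpy as np

def estimate_slice_locations(n_slices: int,
                             marker_slice_idx: np.ndarray,
                             marker_true_z_mm: np.ndarray) -> np.ndarray:
    """Estimate the actual axial position of every slice from matched markers.

    marker_slice_idx: (fractional) slice index at which each matched marker was
    detected; marker_true_z_mm: that marker's known position on the actual
    pre-measured reference. Positions of slices between markers are filled in
    by linear interpolation (np.interp clamps beyond the first/last marker; a
    fuller implementation would extrapolate at the nominal spacing).
    """
    return np.interp(np.arange(n_slices), marker_slice_idx, marker_true_z_mm)

# Example: 5 slices, markers detected at slice indices 0 and 4 with true
# positions 0 mm and 22 mm -> slices actually lie 5.5 mm apart, whatever
# nominal spacing the renderer assumed.
print(estimate_slice_locations(5, np.array([0.0, 4.0]), np.array([0.0, 22.0])))
```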
The Correction
Based on the estimated locations of the scanned slices, a new 3D volume is created using interpolation of the actual 2D slices: the calculated error in the location of each scanned slice is used to re-sample the volume at equi-spaced locations using interpolation of the scanned slices. The adjusted 3D volume created by adjusting the spacing of its 2D slices may then be re-sampled (i.e., re-sliced) so as to provide an appropriate set of 2D images (e.g., in the DICOM standard) reflective of the corrected rendered 3D volume. The re-sampled volume should present a more accurate representation of the true volume, since the re-sampled slices of the corrected rendered 3D volume correct for translational errors of the scanner.
$$I(S_{\text{resampled}}) = f\left(E_{\text{location}}\right)$$
For instance, for the case where f is a linear interpolation:
$$I(S_{\text{location}}) = W_1\,M_{pl_1} + W_2\,M_{pl_2}$$

where $M_{pl_1}$ and $M_{pl_2}$ are the two acquired slices nearest to the re-sampled location, and $W_1$ and $W_2$ are the corresponding interpolation weights.
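A minimal sketch of this linear re-slicing step follows (the array layout and function name are assumptions):

```python
import numpy as np

def resample_equispaced(volume: np.ndarray, slice_z_mm: np.ndarray,
                        spacing_mm: float) -> np.ndarray:
    """Re-slice a stack of 2D images onto equi-spaced axial positions.

    volume: (n_slices, H, W) stack; slice_z_mm: estimated actual (increasing)
    position of each acquired slice. Each output slice is the weighted sum
    W1*M_l1 + W2*M_l2 of the two nearest acquired slices, per the linear
    interpolation formula above.
    """
    new_z = np.arange(slice_z_mm[0], slice_z_mm[-1] + 1e-9, spacing_mm)
    out = np.empty((len(new_z),) + volume.shape[1:], dtype=np.float32)
    for k, z in enumerate(new_z):
        j = int(np.clip(np.searchsorted(slice_z_mm, z), 1, len(slice_z_mm) - 1))
        z0, z1 = slice_z_mm[j - 1], slice_z_mm[j]
        w1 = (z1 - z) / (z1 - z0)              # weight of the lower slice
        out[k] = w1 * volume[j - 1] + (1.0 - w1) * volume[j]
    return out
```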
The Pre-Measured Reference
The pre-measured reference can be any configuration that contains a plurality of identifiable objects set with a known spacing. For example, in one preferred form of the invention, the pre-measured reference comprises a ruler 10 with a set of markers in the form of beads 15. The markers can be chosen to minimize their impact on the intended use of the imaging modality. A typical reference is a ruler with high density beads that are placed at given distances or coordinates.
Of course, other references can also be used.
The pre-measured reference with a plurality of identifiable objects can be placed at any location, so long as it is included in the imaged volume. A typical position would be under the mat of the patient support table, or on the side of a surgical table, or at the bottom of a head holder, etc. These are a few of the convenient locations for the reference, but not the only ones.
Modifications Of The Preferred Embodiments
It should be understood that many additional changes in the details, materials, steps and arrangements of parts, which have been herein described and illustrated in order to explain the nature of the present invention, may be made by those skilled in the art while still remaining within the principles and scope of the invention.
This patent application claims benefit of pending prior U.S. Provisional Patent Application Ser. No. 62/714,396, filed Aug. 3, 2018 by NeuroLogica Corporation and Ibrahim Bechwati et al. for Al-BASED RENDERED VOLUME AUTO-CORRECTION FOR FIXED AND MOBILE X-RAY IMAGING MODALITIES AND OTHER IMAGING MODALITIES, which patent application is hereby incorporated herein by reference.