The subject matter disclosed herein relates to medical imaging systems and, more particularly, to detecting motion during a computed tomography (CT) scan using one or more sensors or systems to detect the motion. The sensors may include cameras and/or laser imaging, detection and ranging (LiDAR) based techniques incorporated within a CT imaging system.
In CT, X-ray radiation spans a subject of interest, such as a human patient, and a portion of the radiation impacts a detector where the image data is collected. In digital X-ray systems a photodetector produces signals representative of the amount or intensity of radiation impacting discrete pixel regions of a detector surface. The signals may then be processed to generate an image that may be displayed for review. In the images produced by such systems, it may be possible to identify and examine the internal structures and organs within a patient's body. In CT imaging systems a detector array, including a series of detector elements or sensors, produces similar signals through various positions as a gantry is rotated around a patient, allowing volumetric reconstructions to be obtained.
An accurate three-dimensional (3D) measurement of a patient during a CT scan can significantly improve subsequent workflow (e.g., patient positioning, automated landmarking, etc.). For example, using the 3D measurement of a patient and position information obtained from the measurement can help determine movement of the patient.
Certain embodiments commensurate in scope with the originally claimed subject matter are summarized below. These embodiments are not intended to limit the scope of the claimed subject matter, but rather these embodiments are intended only to provide a brief summary of possible forms of the subject matter. Indeed, the subject matter may encompass a variety of forms that may be similar to or different from the embodiments set forth below. This summary introduces concepts that are described in more detail in the detailed description. It should not be used to identify essential features of the claimed subject matter, nor to limit the scope of the claimed subject matter.
In one aspect, a method of identifying movement of a patient during a medical imaging scan is provided. The method includes initiating motion detection data acquisition, initiating a medical imaging scan of the patient to acquire scan data, determining a motion score curve for the duration of the medical imaging scan based on the motion detection data, removing portions of the scan data corresponding to portions of the motion score curve outside of an acceptable range, and selecting the scan data corresponding to portions of the motion score curve within the acceptable range for reconstruction.
In another aspect, a medical imaging system is provided. The medical imaging system includes a computed tomography (CT) imaging system, which includes a gantry having a bore, rotatable about an axis of rotation, an X-ray source mounted on the gantry and configured to emit an X-ray beam, an X-ray controller to operate the X-ray source, and an X-ray detector configured to detect the X-ray beam emitted by the X-ray source. The medical imaging system also includes a motion detection system coupled to the CT imaging system, which includes a motion detection apparatus mounted on the gantry, a controller to control operation of the motion detection apparatus, and a movement data processing unit to obtain data from the motion detection apparatus, wherein the movement data processing unit obtains a baseline position of a patient and real-time movement data during an imaging scan to generate additional position information, wherein the real-time movement data is compared to the baseline position of the patient. The system also includes a processor to determine a motion score of the patient for each view during the scan and select views of scan data in which the corresponding motion score is in an acceptable range for image reconstruction.
These and other features, aspects, and advantages of the present disclosed subject matter will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Embodiments of the present disclosure will now be described, by way of example, with reference to the Figures. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present subject matter, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Furthermore, any numerical examples in the following discussion are intended to be non-limiting, and thus additional numerical values, ranges, and percentages are within the scope of the disclosed embodiments.
While aspects of the following discussion may be provided in the context of medical imaging, it should be appreciated that the present techniques are not limited to such medical contexts. Indeed, the provision of examples and explanations in such a medical context is only to facilitate explanation by providing instances of real-world implementations and applications. However, the present approaches may also be utilized in other contexts, such as tomographic image reconstruction for industrial Computed Tomography (CT) used in non-destructive inspection of manufactured parts or goods (i.e., quality control or quality review applications), and/or the non-invasive inspection of packages, boxes, luggage, and so forth (i.e., security or screening applications). In general, the present approaches may be useful in any imaging or screening context to provide accurate three-dimensional (3D) information of a target to improve workflow processes and post-processing steps.
The present disclosure provides systems and methods for monitoring patient motion during a CT scan, which may include incorporating laser imaging, detection and ranging (LiDAR) based techniques, motion sensors, and/or 3D cameras with a CT imaging system. An example motion detection system includes a LiDAR system, which is a remote sensing method to measure target objects or patients that are at variable distances from an X-ray source. With the advancement of LiDAR techniques, a LiDAR system can now produce a 3D rendering of a subject with high spatial resolution (e.g., sub-millimeter (mm) accuracy). The disclosed motion detection techniques using LiDAR use time of flight information of reflected pulsed light (e.g., laser) to calculate and reproduce 3D information (e.g., depth dependent information) of the patient. The time of flight information may be used to calculate depth, which may be used to create a 3D model of a patient undergoing a CT scan and to detect motion of the patient undergoing the CT scan. Multiple views are utilized to cover an entire target area to reproduce high fidelity 3D information. In certain embodiments, light images may be acquired by moving the data acquisition system (i.e., LiDAR scanning system having one or more LiDAR scanners or instruments) around the target (e.g., around the gantry). Using these techniques, a LiDAR-based motion detection system is able to detect motion of a patient in real-time with accuracy. Additionally or alternatively, other types of motion sensors and/or 3D cameras may be used to track the motion of the patient. For example, a motion detection system using a 3D camera (e.g., a depth camera) may create a 3D mesh that can be updated in real time to monitor the movement of the patient.
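The time-of-flight principle described above can be illustrated with a minimal sketch: a reflected light pulse's round-trip time is converted to a one-way depth. The function name and sample timing below are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch of time-of-flight depth calculation: the pulse
# travels out to the surface and back, so the one-way depth is half
# the round-trip distance. Names and values are hypothetical.

C = 299_792_458.0  # speed of light in m/s

def tof_depth_m(round_trip_s: float) -> float:
    """Depth to the reflecting surface from a round-trip time in seconds."""
    return C * round_trip_s / 2.0

# A round trip of ~6.67 nanoseconds corresponds to a depth of ~1 m.
print(tof_depth_m(6.671e-9))
```

Repeating this calculation per emitted pulse across multiple views yields the depth-dependent point data from which the 3D model of the patient is built.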
In some embodiments, the motion detection system is integrated within the CT imaging system. In such embodiments, the data acquisition system may be integrated outside the scan window (and, thus, physically coupled to the CT imaging system) and rotated to capture multiple views. In certain embodiments, multiple motion detection apparatuses or instruments may be placed at different angular positions around the patient to capture the entire region of interest. In certain embodiments, a motion detection system (e.g., having multiple motion detection apparatuses or instruments) may be mounted externally relative to the gantry (e.g., on the scanner housing or the CT table) but still be physically coupled to the CT imaging system. The external motion detection system may be placed in position as required by a guide rail system. The patient movement data may be acquired prior to, during, and subsequent to a CT scan of the target or patient. The patient movement data can be processed and utilized for subsequent (i.e., after the LiDAR scan) workflow processes (e.g., accurate light scout measurement, proper patient positioning and automated landmarking, etc.) and post-processing steps (e.g., image reconstruction). The disclosed embodiments provide a holistic framework for including a motion detection system in a CT imaging system to improve overall efficiency and robustness of the workflow processes and post-processing steps.
The present disclosure also provides a method of detecting a position and movement of an object or patient positioned for an imaging scan using the motion detection system. The example motion detection system provides data related to the position and movement of the object, specifically 3D point cloud or 3D mesh information of the object or patient. The position and movement information data can be compared to an imaging protocol to determine if the position or movement of the object corresponds to the protocol (i.e., the right portion of the object is positioned to be imaged, and the movement of the patient is within an acceptable range to provide quality scan data). If movement outside an acceptable range is detected, the imaging system can account for the movement by removing the views that include movement from the imaging scan. Specifically, the imaging system can select a continuous portion of the data that has only movement in an acceptable range for use during reconstruction, and discard data containing movement falling outside of the acceptable range. Thus, the image quality can be improved by accounting for the movement of the subject.
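The selection of a continuous portion of acceptable data can be sketched as finding the longest run of views whose motion scores fall within the acceptable range. This is a hypothetical sketch, not the disclosed implementation; the function name, the half-open index convention, and the sample scores are assumptions.

```python
# Illustrative sketch: pick the longest continuous run of views whose
# motion score lies within [low, high]; views outside the run are
# discarded before reconstruction. All names here are hypothetical.

def select_views(motion_scores, low, high):
    """Return (start, end) of the longest in-range run, half-open [start, end)."""
    best = (0, 0)
    start = None
    # Append an out-of-range sentinel so the final run is closed out.
    for i, s in enumerate(motion_scores + [high + 1.0]):
        if low <= s <= high:
            if start is None:
                start = i
        else:
            if start is not None and i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    return best

scores = [0.1, 0.2, 1.5, 0.1, 0.0, 0.2, 0.3, 2.0]
print(select_views(scores, -0.5, 0.5))  # → (3, 7)
```

In this sketch, views 3 through 6 form the longest continuous stretch with acceptable motion, so only that portion of the scan data would be passed to reconstruction.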
With the preceding in mind and referring to
Rotation of gantry 12 and the operation of X-ray source 14 are governed by a control mechanism 26 of CT imaging system 10. Control mechanism 26 includes an X-ray controller 28 that provides power and timing signals to the X-ray source 14 and a gantry motor controller 30 that controls the rotational speed and position of gantry 12.
The imaging system 10 also includes a motion detection system 32 physically coupled to the imaging system 10. In some examples, the motion detection system includes a LiDAR system and/or 3D camera(s). The motion detection system 32 includes one or more motion detection apparatuses or instruments 34. As depicted, the motion detection system 32 has one motion detection apparatus 34. The one or more motion detection apparatuses 34 are utilized to acquire depth dependent information (e.g., LiDAR data or light images, 3D mesh data) of the patient 22 with high spatial fidelity. The depth dependent information is utilized in subsequent workflow processes for a CT scan. In examples where LiDAR techniques are used, the one or more motion detection apparatuses 34 emit pulsed light 35 (e.g., laser) at the patient 22 and detect the reflected pulsed light from the patient 22. The motion detection system 32 is configured to acquire the movement data from multiple different views (e.g., at different angular positions relative to the axis of rotation 24).
In certain embodiments, as depicted in
In certain embodiments, multiple motion detection apparatuses 34 may be coupled to the gantry 12 in fixed positions but disposed at different angular positions (e.g., relative to axis of rotation 24). In the illustrated example, the motion detection apparatuses 34 are depicted as LiDAR scanners, but other suitable means to detect motion (e.g., 3D cameras, sensors, etc.) may be used in addition and/or in place of the LiDAR scanners. The motion detection apparatuses 34 in fixed positions may acquire the movement data at the same time while remaining stationary.
In certain embodiments, the motion detection system 32 may be external to the gantry 12 but still physically coupled to the imaging system 10. For example, multiple motion detection apparatuses 34 may be coupled to a panel (e.g., at different angular positions relative to the axis of rotation 24) that is coupled to a guide rail system. The guide rail system may be coupled to the gantry housing 13 or a table 36 of the system 10. The guide rail system may be configured to move the panel toward and away from the gantry 12. In certain embodiments, the guide rail system may also be configured to rotate the panel about the axis of rotation 24.
The motion detection system 32 includes a controller 38 configured to provide timing and control signals to the one or more motion detection apparatuses 34 for acquiring the movement data at the different angular positions. The movement data may be acquired prior to, during, and/or subsequent to a CT scan of the patient 22. The motion detection system 32 also includes a movement data processing unit 40 that receives or obtains the movement data from the one or more motion detection apparatuses 34. The integrated motion detection system and/or the movement data processing unit 40 utilizes time of flight information of the reflected pulsed light and processes the movement data (e.g., acquired at the different views) to generate an accurate 3D measurement of the patient 22. The 3D measurement of the patient 22 has a high spatial resolution (e.g., sub mm accuracy). As noted above, the 3D measurement may be utilized in subsequent workflow processes of a CT scan. For example, the 3D measurement may be utilized as an accurate light scout measurement (e.g., for modifying scan acquisition parameters). The 3D measurement may also be utilized for proper patient positioning (e.g., for modifying or optimizing patient position parameters) and automated landmarking. The 3D measurement may further be utilized for post-processing such as in an image reconstruction algorithm of the CT scan data (e.g., modifying reconstruction parameters).
The 3D measurement information from the motion detection system 32 (e.g., from the movement data processing unit 40) and the scan data from the DAS 33 are input to a computer 42. The computer 42 includes a calibration vector storage 44 (e.g., for storing calibration parameters and calibration protocols for acquiring the CT scan data). The 3D measurement information obtained from the motion detection system 32 may be utilized in determining the calibration parameters utilized. The computer 42 also includes a data correction unit 46 for processing or correcting the CT scan data from the DAS 33. The computer 42 further includes an image reconstructor 48. The image reconstructor 48 receives sampled and digitized X-ray data from the DAS 33 and performs high-speed reconstruction. The reconstructed image is applied as an input to the computer 42, which stores the image in a mass storage device 50. The computer 42 also receives commands and scanning parameters from an operator via a console 52. An associated display 54 allows the operator to observe the reconstructed image as well as the 3D measurement data and other data from the computer 42. The operator supplied commands and parameters are used by the computer 42 to provide control signals and information to the DAS 33, X-ray controller 28, gantry motor controller 30, and the controller 38. In addition, the computer 42 operates a table motor controller 56, which controls a motorized table 36 to position the patient 22 relative to the gantry 12. Particularly, the table 36 moves portions of the patient 22 through a gantry opening or bore 58.
The computer 42 and the movement data processing unit 40 may each include processing circuitry. The processing circuitry may be one or more general or application-specific microprocessors. The processing circuitry may be configured to execute instructions stored in a memory to perform various actions. For example, the processing circuitry may be utilized for receiving or obtaining movement data acquired with the motion detection system 32. In addition, the processing circuitry may also generate a 3D measurement of the patient 22. Further, the processing circuitry may utilize the 3D measurement in a subsequent workflow process for a CT scan of the patient with the CT imaging system 10.
As depicted, an annular motion detection system panel or window 66 is also disposed within the interior wall 64 of the gantry 12 formed within the bore 58 of the gantry 12. The panel 66 is made of a material transparent to the pulsed light (e.g., laser) emitted by the one or more motion detection apparatuses disposed within the gantry 12 toward the object or subject and reflected back to the one or more motion detection apparatuses. The panel 66 is disposed between the scan window 62 and the gantry cover 60 in the Z-direction. In particular, the panel 66 is disposed between the scan window 62 and a front of the gantry 12 adjacent the CT table. During an imaging session, a subject or patient is moved within the bore 58. The panel 66 is also self-supporting and acts as a safety barrier to keep the subject or patient from contacting components (e.g., sometimes rotating components) within the gantry 12. The one or more motion detection apparatuses may be located within the gantry 12 behind the panel 66. In certain embodiments, the one or more motion detection apparatuses 34 are stationary during acquisition of movement data. In certain embodiments, the one or more motion detection apparatuses rotate during acquisition of movement data. The panel 66 is located outside the region of the subject or object being scanned by the CT imaging system.
In certain embodiments, as depicted in
In certain embodiments, the motion detection system 32 may be external to the gantry 12 but still physically coupled to the imaging system 10.
The motion detection system 32 includes the panel 68 (e.g., having an arc shape). The panel 68 includes a plurality of motion detection apparatuses coupled to it. The plurality of motion detection apparatuses are circumferentially spaced apart along the arc of the panel 68 at different angular positions to enable the acquisition of different views of movement data. The panel 68 is coupled to the guide rail system 72. In particular, the panel 68 is coupled to the main guide rail 74 via a vertical support stanchion 76 (e.g., post or bar). The guide rail system 72 is configured to move the panel 68 (and the vertical support stanchion) back and forth toward the gantry 12 as indicated by arrow 78 (in the Z-direction) along the main guide rail 74. The main guide rail 74 may include an actuation system (e.g., chain actuator, electro-mechanical linear actuator, or any other mechanism for facilitating linear movement along the main guide rail 74). In certain embodiments, the guide rail system 72 is configured to circumferentially move the panel 68 along its arc. In particular, the guide rail system 72 may enable the panel 68 to rotate circumferentially (e.g., relative to the axis of rotation 24 in
At block 1106, the scan is initiated and patient scan data acquisition begins. During the patient scan, scan data is obtained simultaneously with movement data (blocks 1108 and 1110). Additionally, the movement data may be processed in real-time to determine a motion score. More details of an example method for processing patient movement monitoring data are described in conjunction with
In Equation 1, the motion score can be determined for each contour at a point in time during the scan. The position of the contour at time tn is compared to the initial or baseline position at time t0, where n represents the number of the point in time or the contour 1504, 1506 being evaluated. The positions of the contour 1504, 1506 in both the x and y directions are used to determine the overall motion score. The motion score may also be given a direction in addition to a magnitude. For example, the motion score for contour 1504 might be negative because the movement is a reduction in the y-direction from the initial contour 1502, and the motion score for contour 1506 might be positive because the movement is an increase in the y-direction from the initial contour 1502. The position of the patient may be determined based on the point cloud or data mesh created using the movement data acquired using the motion detection apparatuses. For example, if one or more of the motion detection apparatuses includes a LiDAR scanner, a 3D point cloud may be created to identify the position of the patient at any given time during the scan, and the real-time position or movement data of the patient can be compared to the initial position or initial point cloud information of the patient to determine the motion score. Similarly, if the motion detection apparatuses include a 3D camera, the 3D mesh created by the camera may be compared in real-time to an initial or baseline 3D mesh to determine the motion score. Thus, the motion score for each view is determined based on the initial position information and the real-time or additional position information obtained for the duration of the scan.
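A per-contour score of the kind described can be sketched as follows. Equation 1 itself is not reproduced here, so this is a plausible analogue under stated assumptions: the score magnitude is the (x, y) displacement between the baseline and current contour point, and the sign follows the direction of the y change. The function name and sample points are hypothetical.

```python
# Hypothetical analogue of the per-contour motion score: magnitude of
# the (x, y) displacement from baseline, signed by the y-direction of
# the movement. Not the disclosed Equation 1, which is not given here.
import math

def contour_motion_score(p0, pn):
    """p0 = (x0, y0) baseline point; pn = (xn, yn) point at time tn."""
    dx = pn[0] - p0[0]
    dy = pn[1] - p0[1]
    magnitude = math.hypot(dx, dy)
    # Negative when the contour drops in y, positive when it rises.
    return math.copysign(magnitude, dy) if dy != 0 else magnitude

print(contour_motion_score((0.0, 0.0), (3.0, -4.0)))  # → -5.0
```

Evaluating this for every contour at every time point tn yields the per-view motion scores from which the motion score curve is assembled.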
For the surface of a patient, the motion score can be calculated using Equation 2:
To obtain the motion score for the surface of the patient, Equation 2 also incorporates a z-direction component. The motion scores may be calculated for the entire surface of interest for the patient at each point in time for the duration of the scan. The motion scores are then embedded in the scan data and provided to the user in a graphical representation of a curve indicating which views have a motion score that exceeds an acceptable threshold.
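A surface-level score in the spirit of Equation 2 can be sketched as the mean 3D displacement between corresponding points of the baseline and current point clouds or meshes. Equation 2 is not reproduced here, so the aggregation (a mean over points) and all names below are assumptions for illustration.

```python
# Hypothetical surface motion score: mean Euclidean displacement in
# x, y, and z between corresponding baseline and current surface
# points. An illustrative stand-in for Equation 2, not its disclosed form.
import math

def surface_motion_score(baseline, current):
    """baseline, current: equal-length sequences of (x, y, z) points."""
    total = 0.0
    for (x0, y0, z0), (xn, yn, zn) in zip(baseline, current):
        total += math.sqrt((xn - x0)**2 + (yn - y0)**2 + (zn - z0)**2)
    return total / len(baseline)

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
moved = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
print(surface_motion_score(base, moved))  # → 1.0
```

Computed at each time point, these scores form the curve from which views exceeding the acceptable threshold are identified.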
While the example methods are described as a whole, portions of the methods may be used independently of the remainder of the methods. In other examples, other portions of the methods 1100, 1200, and 1300 may be excluded, or additional steps may be included and/or repeated. Although a CT system is described by way of example, it should be understood that the present techniques may also be useful when applied to images acquired using other imaging modalities, such as X-ray imaging systems, magnetic resonance (MR) imaging systems, positron emission tomography (PET) imaging systems, single-photon emission computed tomography (SPECT) imaging systems, ultrasound imaging systems, and combinations thereof (e.g., multi-modality imaging systems, such as PET/CT, PET/MR or SPECT/CT imaging systems). The present discussion of a CT imaging modality is provided merely as an example of one suitable imaging modality.
Technical effects of the disclosed embodiments include providing systems and methods for generating an accurate representation of movement of a patient during a CT scan to be utilized in a subsequent CT workflow process for the CT scan. The present disclosure provides systems and methods for incorporating motion detection techniques, which may include LiDAR based techniques, with a CT imaging system to make various workflows more efficient. The disclosed embodiments provide a method for detecting the movement of a subject during an imaging scan and subsequently selecting views of the scan containing an acceptable amount of movement based on the detected movement, thereby increasing the image quality.
The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).
This written description uses examples to disclose the present subject matter, including the best mode, and also to enable any person skilled in the art to practice the subject matter, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the subject matter is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims. Embodiments of the present disclosure shown in the drawings and described above are example embodiments only and are not intended to limit the scope of the appended claims, including any equivalents as included within the scope of the claims. Various modifications are possible and will be readily apparent to the person skilled in the art. It is intended that any combination of non-mutually exclusive features described herein is within the scope of the present invention. That is, features of the described embodiments can be combined with any appropriate aspect described above, and optional features of any one aspect can be combined with any other appropriate aspect. Similarly, features set forth in dependent claims can be combined with non-mutually exclusive features of other dependent claims, particularly where the dependent claims depend on the same independent claim. Single claim dependencies may have been used as practice in some jurisdictions requires them, but this should not be taken to mean that the features in the dependent claims are mutually exclusive.