The following generally relates to imaging and more particularly to a 3-D virtual endoscopy rendering and is described with particular application to computed tomography.
Polyps in the colon can develop into colon cancer. The literature indicates that early removal of such polyps is highly effective in preventing cancer. A colonoscopy is a procedure offered to asymptomatic subjects over a certain age to detect and assess possible polyps. For a colonoscopy, a gas is insufflated into the colon to inflate the colon so that the wall of the colon can be more easily inspected. An endoscope, which includes a video camera on a flexible tube, is inserted into the colon through the anus and passed through the lumen of the colon. The video camera records images of the interior walls of the colon as it is passed through the lumen. The images can be used for visual inspection of the wall. During the procedure, suspected polyps can be biopsied and/or removed. Endoscopic colonoscopy, however, is an invasive procedure.
Computed tomography (CT) virtual colonoscopy (VC) is a non-invasive imaging procedure. With CT VC, volumetric image data of the colon is acquired and processed to generate a three-dimensional virtual endoscopic (3-D VE) rendering of the lumen of the colon. The rendering is a 2-D image of the lumen from the viewpoint of a virtual camera of a virtual endoscope passing through the lumen, with shading derived from the viewing direction and the local shapes of iso-surfaces to provide depth information. Generally, the 2-D image provides a 3-D impression similar to the view of a real endoscope. However, there are classes of polyps (e.g., flat and/or serrated polyps) that tend to be difficult to visually detect in the 3-D VE by shape due to their inconspicuousness against the colon wall. As such, there is an unresolved need for an improved 3-D VE.
Aspects described herein address the above-referenced problems and others.
A 3-D VE rendering of a lumen of a tubular structure is based on both non-spectral volumetric image data and spectral volumetric image data. The non-spectral volumetric image data is used to determine an opacity and shading of the 3-D VE rendering. The spectral volumetric image data is used to visually encode the 3-D VE rendering to visually differentiate the inner wall of the tubular structure from structure of interest on the wall.
In one aspect, a system includes a processor and a memory storage device configured with a three-dimensional virtual endoscopy module and a rendering module. The processor is configured to process non-spectral volumetric image data from a scan of a tubular structure with the three-dimensional virtual endoscopy module to generate a three-dimensional endoscopic rendering of a lumen of the tubular structure with an opacity and shading that provide a three-dimensional impression. The processor is further configured to process spectral volumetric image data from the same scan with the three-dimensional virtual endoscopy module to produce a visual encoding on the three-dimensional endoscopic rendering that visually distinguishes a wall of the tubular structure from structure of interest on the wall based on spectral characteristics. The processor is further configured to execute the rendering module to display the three-dimensional endoscopic rendering with the visual encoding via a display monitor.
In another aspect, a method includes generating a three-dimensional endoscopic rendering of a lumen of a tubular structure with an opacity and shading that provide a three-dimensional impression based on non-spectral volumetric image data from a scan of the tubular structure. The method further includes generating a visual encoding for the three-dimensional endoscopic rendering that visually distinguishes a wall of the tubular structure from structure of interest on the wall based on spectral characteristics determined from spectral volumetric image data from the scan. The method further includes displaying the three-dimensional endoscopic rendering with the visual encoding.
In another aspect, a computer-readable storage medium stores instructions that when executed by a computer cause the computer to perform a method for using a computer system to generate a three-dimensional endoscopic rendering. The method includes generating a three-dimensional endoscopic rendering of a lumen of a tubular structure with an opacity and shading that provide a three-dimensional impression based on non-spectral volumetric image data from a scan of the tubular structure. The method further includes generating a visual encoding for the three-dimensional endoscopic rendering that visually distinguishes a wall of the tubular structure from structure of interest on the wall based on spectral characteristics determined from spectral volumetric image data from the scan. The method further includes displaying the three-dimensional endoscopic rendering with the visual encoding.
Those skilled in the art will recognize still other aspects of the present application upon reading and understanding the attached description.
The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the embodiments and are not to be construed as limiting the invention.
The following describes an approach for generating a 3-D VE rendering of a lumen based on both non-spectral and spectral volumetric image data. The spectral volumetric image data is used at least to visually encode inner walls of a tubular structure and structure/materials thereon based on spectral characteristics of the walls and structure/materials. The 3-D VE is a 2-D image that provides a 3-D impression similar to the view of a real endoscope. Examples of the tubular structures/VE procedures include the colon/VC, bronchi/virtual bronchoscopy (VB), etc. For 3-D VC, in one instance, structures such as flat and/or serrated polyps, which tend to be difficult to visually detect by shape due to their inconspicuousness against the colon wall, are visually encoded in the displayed 2-D image based on their spectral characteristics and differently than the colon wall, which may allow polyps to be visually distinguished from wall tissue, stool, etc. through spectral characteristics.
A radiation source 112, such as an X-ray tube, is supported by and rotates with the rotating gantry 106 around the examination region 108. The radiation source 112 emits X-ray radiation that is collimated, e.g., by a source collimator (not visible), to form a generally fan, wedge, cone or other shaped X-ray radiation beam that traverses the examination region 108. In one instance, the radiation source 112 is a single X-ray tube configured to emit broadband (polychromatic) radiation for a single selected peak emission voltage (kVp) of interest. In another instance, the radiation source 112 is configured to switch between at least two different emission voltages (e.g., 70 kVp, 100 kVp, 120 kVp, 140 kVp, etc.) during a scan. In yet another instance, the radiation source 112 includes two or more X-ray tubes angularly offset on the rotating gantry 106, each configured to emit radiation with a different mean energy spectrum. In still another instance, the CT scanner 102 includes a combination of two or more of the above. An example of kVp switching and/or multiple X-ray tubes is described in U.S. Pat. No. 8,442,184 B2, filed Jun. 1, 2009, and entitled “Spectral CT,” which is incorporated herein by reference in its entirety.
A radiation sensitive detector array 114 subtends an angular arc opposite the radiation source 112 across the examination region 108. The detector array 114 includes one or more rows of detectors that are arranged with respect to each other along the z-axis direction and detects radiation traversing the examination region 108. In one instance, the detector array 114 includes an energy-resolving detector such as a multi-layer scintillator/photo-sensor detector. An example system is described in U.S. Pat. No. 7,968,853 B2, filed Apr. 10, 2006, and entitled “Double decker detector for spectral CT,” which is incorporated herein by reference in its entirety. In another instance, the detector array 114 includes a photon counting (direct conversion) detector. An example system is described in U.S. Pat. No. 7,668,289 B2, filed Apr. 25, 2006, and entitled “Energy-resolved photon counting for CT,” which is incorporated herein by reference in its entirety. In these instances, the radiation source 112 includes the broadband, kVp switching and/or multiple X-ray tube radiation sources. Where the detector array 114 includes a non-energy-resolving detector, the radiation source 112 includes kVp switching and/or multiple X-ray tube radiation sources. The radiation sensitive detector array 114 produces at least spectral projection data (line integrals) indicative of the examination region 108. In one configuration, the radiation sensitive detector array 114 also produces non-spectral projection data.
A reconstructor 116 processes the projection data from a same scan and generates spectral volumetric image data and non-spectral volumetric image data. The reconstructor 116 generates spectral volumetric image data by reconstructing the spectral projection data for the different energy bands. An example of spectral volumetric image data includes low energy volumetric image data and high energy volumetric image data for a dual energy scan. Other spectral volumetric image data can be derived through material decomposition in the projection domain followed by reconstruction, or through material decomposition in the image domain. Examples include Compton scatter (Sc) and photo-electric effect (Pe) bases, an effective atomic number (Z-value), a contrast agent (e.g., iodine, barium, etc.) concentration, and/or other bases. Where the CT scanner 102 produces non-spectral projection data, the reconstructor 116 generates non-spectral volumetric image data from the non-spectral projection data. Otherwise, the reconstructor 116 generates non-spectral volumetric image data by combining the spectral projection data into non-spectral projection data and reconstructing it, and/or by combining spectral volumetric image data into non-spectral volumetric image data. The reconstructor 116 can be implemented with a processor such as a central processing unit (CPU), a microprocessor, etc.
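By way of a non-limiting sketch only (the blending weight and array names below are hypothetical and are not part of the reconstructor 116), an image-domain combination of dual-energy volumes into a non-spectral-equivalent volume might look as follows:

```python
import numpy as np

def combine_to_non_spectral(low_kvp_vol, high_kvp_vol, weight_low=0.4):
    """Blend low- and high-energy volumes (in HU) into a single
    non-spectral-equivalent volume; the blending weight is an
    illustrative, protocol-dependent placeholder."""
    return weight_low * low_kvp_vol + (1.0 - weight_low) * high_kvp_vol

# Example usage with synthetic volumes shaped (slices, rows, columns).
low = np.random.normal(0.0, 50.0, (64, 128, 128))
high = np.random.normal(0.0, 50.0, (64, 128, 128))
conventional_like = combine_to_non_spectral(low, high)
```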
An operator console 118 includes a human readable output device 120 such as a display monitor, a filmer, etc. and an input device 122 such as a keyboard, mouse, etc. The console 118 further includes a processor 124 (e.g., a CPU, a microprocessor, etc.) and computer readable storage medium 126 (which excludes transitory medium) such as physical memory like a memory storage device, etc. In the illustrated embodiment, the computer readable storage medium 126 includes a 3-D virtual endoscopy (VE) module 128, a rendering module 130 and an artificial intelligence (AI) module 132, and the processor 124 is configured to execute computer readable instructions of the 3-D VE module 128, the rendering module 130 and/or the AI module 132, which causes functions described below to be performed.
In a variation, the 3-D VE module 128, the rendering module 130 and/or the AI module 132 are executed by a processor in a different computing system, such as a dedicated workstation located remotely from the CT scanner 102, “cloud” based resources, etc. The different computing system can receive volumetric image data from the CT scanner 102, another scanner, a data repository (e.g., a radiology information system (RIS), a picture archiving and communication system (PACS), a hospital information system (HIS), etc.), etc. The different computing system additionally or alternatively can receive projection data from the CT scanner 102, another scanner, a data repository, etc. In this instance, the different computing system may include a reconstructor configured similar to the reconstructor 116 in that it can process the projection data and generate the spectral and non-spectral volumetric image data.
The 3-D VE module 128 generates, based on both the non-spectral volumetric image data and the spectral volumetric image data, a 3-D VE rendering (i.e., a 2-D image with a 3-D impression) of a lumen of a tubular structure from a viewpoint of a virtual camera of a virtual endoscope passing through the lumen. In one instance, the 3-D VE rendering is similar to the view provided by a physical endoscopic video camera inserted into the actual tubular structure and positioned at that location. In this instance, e.g., the 3-D VE rendering shows the inner wall of the tubular structure, including surfaces and structure/materials on the surfaces of the inner wall.
In one instance, the 3-D VE module 128 determines local opacity and gradient shading from the non-spectral volumetric image data. The 3-D VE module 128 can employ volume rendering, surface rendering and/or other approaches. With volume rendering, a virtual view ray is cast through the non-spectral volumetric image data, and a region about the wall of the tubular structure is detected via a sharp gradient at the air/wall interface along the ray, where an opacity of the data points in the region begins low and increases sharply across the gradient. Shading is determined based on the angle between the viewing direction and the local gradient and is implemented through intensity. With surface rendering, a mesh (e.g., triangular or other) is fitted to the wall, e.g., at the strongest gradient, which is at the air/wall interface along a ray. An example approach for generating a 3-D VE rendering is described in U.S. Pat. No. 7,839,402 B2, filed Jun. 2, 2005, and entitled “Virtual endoscopy,” which is incorporated herein by reference in its entirety.
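A minimal, single-ray sketch of the volume-rendering behavior described above is given below, assuming a simple opacity ramp across the air/wall transition and intensity-based shading from the view/gradient angle; the HU thresholds and helper name are illustrative assumptions, not the implementation of the 3-D VE module 128:

```python
import numpy as np

def shade_along_ray(volume, origin, direction, n_steps=256, step=1.0,
                    low=-800.0, high=-300.0):
    """Front-to-back compositing along one ray: opacity ramps up across the
    air/wall transition (the low/high HU thresholds are illustrative) and
    shading follows the angle between the view direction and the local
    gradient, implemented as an intensity factor."""
    grad = np.gradient(volume)                       # (d/dz, d/dy, d/dx)
    direction = direction / np.linalg.norm(direction)
    color, transmittance = 0.0, 1.0
    for i in range(n_steps):
        z, y, x = np.round(origin + i * step * direction).astype(int)
        if not (0 <= z < volume.shape[0] and 0 <= y < volume.shape[1]
                and 0 <= x < volume.shape[2]):
            break
        hu = volume[z, y, x]
        alpha = np.clip((hu - low) / (high - low), 0.0, 1.0)   # opacity ramp
        normal = np.array([g[z, y, x] for g in grad])
        norm = np.linalg.norm(normal)
        shade = abs(np.dot(direction, normal / norm)) if norm > 0 else 0.0
        color += transmittance * alpha * shade
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:                     # ray opacity saturated
            break
    return color

# Example: one ray cast through a synthetic volume.
vol = np.random.normal(-600.0, 150.0, (64, 64, 64))
value = shade_along_ray(vol, origin=np.array([32.0, 32.0, 0.0]),
                        direction=np.array([0.0, 0.0, 1.0]))
```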
The 3-D VE module 128 generates visual encodings, based on the spectral characteristics of the spectral volumetric image data, for pixels of the 3-D rendering representing surfaces of inner walls of the tubular structure and/or structure/materials thereon. As described in greater detail below, in one instance, the visual encodings include color hue encodings and correspond to a spectral angle, an effective atomic number, a contrast agent concentration, another spectral characteristic and/or a combination thereof. The visual encodings may convey visual cues to an observer about a presence and/or a type of a structure/material. These spectral characteristics cannot be determined from the non-spectral volumetric image data alone, and thus the visual encodings are not part of existing 3-D VE technology. As such, the approach described herein, which further utilizes spectral volumetric image data to visually encode a 3-D VE rendering based on spectral characteristics, represents an improvement over existing 3-D VE technology. In addition, the visual encodings allow for efficient visual assessment of the wall and the structure/materials thereon.
The rendering module 130 displays the 3-D VE rendering via a display monitor of the output device 120.
The rendering module 130 displays the 3-D VE rendering alone or in combination with one or more other images, such as one or more slice (axial, coronal, sagittal, oblique, etc.) images from the non-spectral and/or spectral volumetric image data, a rendering of just the tubular structure with material outside of the tubular structure rendered invisible, etc. In one instance, the same visual encoding (e.g., color hue) is concurrently displayed in one or more of the other displayed images, and visualization thereof is controlled either independently of or jointly with control of the visual encoding in the displayed 3-D VE rendering. Additionally or alternatively, the 3-D VE rendering is interactively linked (spatially coupled) to one or more of the other displayed images such that hovering the display screen pointer, such as a mouse pointer, over the displayed 3-D VE rendering results in the display of a location indicator in one or more of the other displayed images indicating the location of the pointer in the displayed 3-D VE rendering.
Returning to
In one instance, the AI module 132 includes a deep learning algorithm, such as a feed-forward artificial neural network (e.g., a convolutional neural network) and/or other neural network, to learn patterns of the spectral characteristics of the different materials and/or structure, etc. in order to distinguish the structure/material of interest from the other structure/material. Examples of such algorithms are discussed in Gouk, et al., “Fast Sliding Window Classification with Convolutional Neural Networks,” IVCNZ '14 Proceedings of the 29th International Conference on Image and Vision Computing New Zealand, Pages 114-118, Nov. 19-21, 2014, Long, et al., “Fully convolutional networks for semantic segmentation,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, and Ronneberger, et al., “U-Net: Convolutional Networks for Biomedical Image Segmentation,” Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer, LNCS, Vol. 9351: 234-241, 2015. In a variation, the AI module 132 is omitted.
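Purely as a hedged illustration (this is neither the trained network of the AI module 132 nor the architectures of the cited publications), a small voxel-patch classifier operating on spectral channels could be sketched as:

```python
import torch
import torch.nn as nn

class SpectralPatchClassifier(nn.Module):
    """Toy 3-D CNN that maps small multi-channel spectral patches
    (e.g., Sc and Pe values as two input channels) to a wall/structure-of-
    interest label; the layer sizes are arbitrary placeholders."""
    def __init__(self, in_channels=2, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

# Example: a batch of 4 patches, each 2 channels x 9 x 9 x 9 voxels.
model = SpectralPatchClassifier()
logits = model(torch.randn(4, 2, 9, 9, 9))
```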
For some scans, an auxiliary device is employed with the CT scanner 102. For example, for a VC scan, an insufflator 122 may be employed. In one instance, the insufflator 122 insufflates a gas into the colon to inflate the colon so that the wall of the colon can be more easily inspected via the displayed 3-D VE rendering.
As briefly described above, the 3-D VE module 128 generates the visual encodings for the 3-D rendering based on spectral characteristics determined from the spectral volumetric image data. The following describes a non-limiting example in which the visual encodings include color hues. In one instance, this includes determining a color hue for a pixel based on a spectral angle. In another instance, this includes determining the color hue for a pixel based on an effective atomic number. In yet another instance, this includes determining the color hue for a pixel based on a contrast agent concentration. In another instance, this includes determining the color hue for a pixel based on one or more other spectral characteristics. In still another instance, this includes determining the color hue for a pixel based on a combination of two or more of the above.
An approach for determining the spectral angle includes constructing a scatter plot from the spectral volumetric image data sets, with each spectral volumetric image data set assigned to a different axis of the plot.
In
In this example, a spectral angle θ is determined for the first point 408 as an angle between the first axis 402 and a line 412 extending from the first point 408 to the origin 406. In one instance, θ is determined by calculating an inverse of the tangent of a ratio of the Pe voxel value to the Sc voxel value (i.e., θ=arctan(HUPe/HUSc)). A spectral angle ϕ is determined for the second point 410 as an angle between the first axis 402 and a line 414 extending from the second point 410 to the origin 406. In one instance, ϕ is determined by calculating an inverse of the tangent of a ratio of the Pe voxel value to the Sc voxel value (i.e., ϕ=arctan(HUPe/HUSc)). In this example, the spectral angle θ for the pixel corresponding to the point 408 is smaller than the spectral angle ϕ for the pixel corresponding to the point 410.
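In code, the per-voxel spectral angle could be computed as sketched below; arctan2 is used here as an implementation choice so that points with non-positive Sc values still receive a well-defined angle, which is an assumption beyond the formula above:

```python
import numpy as np

def spectral_angle_deg(hu_sc, hu_pe):
    """Angle, in degrees, between the Sc axis and the line joining the
    origin to the point (HU_Sc, HU_Pe), evaluated per voxel."""
    return np.degrees(np.arctan2(hu_pe, hu_sc))

# Example: a single (Sc, Pe) pair.
theta = spectral_angle_deg(np.array([50.0]), np.array([20.0]))
```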
In one instance, the color hue for a pixel is determined from the spectral angle as a linear mixture between the first and second color hues of the first and second axes 402 and 404. The color hue for the pixel corresponding to the point 408 will be a linear mixture based on the angle θ and will include a greater contribution of the first color relative to the second color since the illustrated angle θ is less than ninety degrees. The color hue for the pixel corresponding to the point 410 will be a linear mixture based on the angle ϕ and will include a greater contribution of the second color relative to the first color since the illustrated angle ϕ is greater than ninety degrees. In a variation, the spectral angle is translated non-linearly into a pseudo-color scale, such as a rainbow scale, a temperature scale, and/or other color hue scale.
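A sketch of the linear hue mixture is shown below; the endpoint RGB triples and the assumed 0-180 degree angle range are illustrative placeholders rather than prescribed values:

```python
import numpy as np

def mix_hue(angle_deg, hue_axis1=(0.0, 0.5, 1.0), hue_axis2=(1.0, 0.5, 0.0),
            angle_max=180.0):
    """Linearly blend the two axis colors according to the spectral angle;
    smaller angles weight the first axis color more heavily."""
    t = np.clip(angle_deg / angle_max, 0.0, 1.0)
    c1, c2 = np.asarray(hue_axis1), np.asarray(hue_axis2)
    return (1.0 - t) * c1 + t * c2

# Example: an angle of 45 degrees yields mostly the first axis color.
rgb = mix_hue(45.0)
```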
In another embodiment, where a volume rendering algorithm is employed, the spectral angle is computed from a predetermined region in the spectral volumetric image data about the wall of the tubular structure in which the ray opacity saturates to unity, the region including voxels before the wall and voxels after the wall. In one instance, the color hue is determined as a linear superposition of the hues from each location within the predetermined region, weighted by the local opacity. In a variation, the color hue is instead determined as a maximum value within the predetermined region. In yet another variation, the color hue is instead determined as a mean or median value within the predetermined region. In yet another instance, the color hue is determined otherwise or based on a combination of the foregoing.
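The opacity-weighted superposition described above could be sketched as follows, assuming the per-sample hues and opacities within the near-wall region have already been collected by the ray caster (the names are hypothetical):

```python
import numpy as np

def composite_hue(hues, opacities):
    """Linear superposition of per-sample hues (an (N, 3) RGB array),
    each weighted by its local opacity (an (N,) array); the maximum,
    mean or median variants mentioned above are equally simple reductions."""
    hues = np.asarray(hues, dtype=float)
    w = np.asarray(opacities, dtype=float)
    return (w[:, None] * hues).sum(axis=0) / max(w.sum(), 1e-9)

# Example: three samples across the wall region.
hue = composite_hue([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0.1, 0.7, 0.2])
```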
In another embodiment, the color hue is also determined based on a radial distance between a point on the plot and the origin 406, in addition to the spectral angle. For example, in this embodiment in
In another embodiment, each coordinate in the grid in the scatter plot 400 of
In another instance, the color hue is otherwise determined based on a combination of two or more of the above described approaches.
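One hedged way to picture a combined encoding in the spirit of the embodiments above is sketched below, mapping the spectral angle to hue and the radial distance from the plot origin to saturation; the normalization constants and the HSV mapping are assumptions for illustration only:

```python
import colorsys
import numpy as np

def hue_with_radius(angle_deg, radius, angle_max=180.0, radius_max=2000.0):
    """Return an RGB triple whose hue follows the spectral angle and whose
    saturation follows the radial distance from the plot origin; both
    normalizations are illustrative placeholders."""
    h = float(np.clip(angle_deg / angle_max, 0.0, 1.0))
    s = float(np.clip(radius / radius_max, 0.0, 1.0))
    return colorsys.hsv_to_rgb(h, s, 1.0)

# Example: a moderate angle with a large radial distance.
rgb = hue_with_radius(60.0, 1500.0)
```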
It is to be appreciated that the ordering of the acts in the method is not limiting. As such, other orderings are contemplated herein. In addition, one or more acts may be omitted, and/or one or more additional acts may be included.
At 502, non-spectral and spectral volumetric image data from a same scan of a tubular structure is obtained, as described herein and/or otherwise.
At 504, a 3-D VE rendering, including opacity and shading, is generated based on the non-spectral volumetric image data, as described herein and/or otherwise.
At 506, visual encodings for the pixels of the 3-D VE rendering are determined based on the spectral volumetric image data, as described herein and/or otherwise.
At 508, the 3-D VE rendering is displayed with the visual encoding, as described herein and/or otherwise.
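Taken together, acts 502-508 could be sketched end-to-end as follows; the opacity ramp, the angle-to-hue mapping and the RGB composition are all simplified, assumed stand-ins for the operations described herein:

```python
import numpy as np

def virtual_endoscopy_pipeline(non_spectral_vol, hu_sc, hu_pe, wall_mask):
    """Toy end-to-end sketch of acts 502-508 with placeholder mappings."""
    # Act 504: a shading/opacity stand-in derived from the non-spectral data.
    intensity = np.clip((non_spectral_vol + 800.0) / 500.0, 0.0, 1.0)
    # Act 506: a hue stand-in from the spectral angle, limited to the wall.
    angle = np.degrees(np.arctan2(hu_pe, hu_sc))
    hue = np.clip(angle / 180.0, 0.0, 1.0) * wall_mask
    # Act 508: compose RGB where increasing spectral angle shifts the color
    # from the red channel toward the blue channel.
    return np.stack([intensity * (1.0 - hue), intensity, intensity * hue],
                    axis=-1)

# Example with synthetic data.
shape = (32, 64, 64)
rgb = virtual_endoscopy_pipeline(np.random.normal(-600.0, 100.0, shape),
                                 np.random.normal(40.0, 10.0, shape),
                                 np.random.normal(20.0, 10.0, shape),
                                 np.ones(shape))
```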
The above may be implemented by way of computer readable instructions, encoded or embedded on a computer readable storage medium, which, when executed by a computer processor(s), cause the processor(s) to carry out the described acts. Additionally or alternatively, at least one of the computer readable instructions is carried out by a signal, carrier wave or other transitory medium, which is not a computer readable storage medium.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
The word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2019/082260 | 11/22/2019 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2020/114806 | 6/11/2020 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
6514082 | Kaufman | Feb 2003 | B2 |
7599465 | Walter | Oct 2009 | B2 |
7668289 | Proksa | Feb 2010 | B2 |
7839402 | Dekel | Nov 2010 | B2 |
7968853 | Altman | Jun 2011 | B2 |
8442184 | Forthmann | May 2013 | B2 |
8953911 | Xu | Feb 2015 | B1 |
9865079 | Miyamoto | Jan 2018 | B2 |
20070189443 | Walter | Aug 2007 | A1 |
20090244060 | Suhling | Oct 2009 | A1 |
20100215226 | Kaufman | Aug 2010 | A1 |
20130296682 | Clavin | Nov 2013 | A1 |
20160055650 | Park | Feb 2016 | A1 |
20160275709 | Gotman | Sep 2016 | A1 |
20180247153 | Ganapati | Aug 2018 | A1 |
20200034968 | Freiman | Jan 2020 | A1 |
20200037997 | Viggen | Feb 2020 | A1 |
Number | Date | Country |
---|---|---|
2007014483 | Jan 2007 | JP |
2019114035 | Jul 2019 | JP |
101705346 | Feb 2017 | KR |
Entry |
---|
PCT International Search Report, International application No. PCT/EP2019/082260, Feb. 24, 2020. |
Rie T. et al., “Deep Multi-Spectral Ensemble Learning for Electronic Cleansing in Dual-Energy CT Colonography”, Proc of SPIE, vol. 10134, 2017. |
Nappi J. et al., “Automated Detection of Colorectal Lesions with Dual-Energy CT Colonography”, Proceedings of SPIE, vol. 8315, Feb. 22, 2012 (Feb. 22, 2012), p. 83150V, XP055660921. |
Radin Adi et al., “Analysis of Multi-Energy Spectral CT for Advanced Clinical, Pre-Clinical, and Industrial Applications”, Nov. 7, 2014 (Nov. 7, 2014), XP055462103. |
Faisal M. et al., “Deep Learning with Cinematic Rendering: Fine-Tuning Deep Neural Networks Using Photorealistic Medical Images”, arxiv.org, Cornell University Library, 201 Olin Library Cornell University Ithaca, NY 14853, May 22, 2018 (May 22, 2018), XP081410062. |
Long J. et al., “Fully Convolutional Networks for Semantic Segmentation,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015. |
Ronneberger O. et al., “U-Net: Convolution Networks for Biomedical Image Segmentation,” Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer, LNCS, vol. 9351: 234-241, 2015. |
Gouk et al., “Fast Sliding Window Classification with Convolutional Neural Networks,” Proceedings of the 29th International Conference on Image and Vision Computing New Zealand, pp. 114-118, Nov. 19-21, 2014. |
Number | Date | Country | |
---|---|---|---|
20220101617 A1 | Mar 2022 | US |
Number | Date | Country | |
---|---|---|---|
62776053 | Dec 2018 | US |