1. Technical Field
The present disclosure relates to visualization of cardiac scars and, more specifically, to visualization of scarring on a cardiac surface.
2. Discussion of Related Art
Myocardial scarring is the establishment of fibrous tissue that replaces normal tissue destroyed by injury or disease within the muscular tissue of the heart. Myocardial scarring often occurs as a result of myocardial infarction but may also result from surgical repair of congenital heart disease. This scarring may result in a disruption to the electrical conduction system of the heart, and may also affect surrounding heart muscle tissue.
As such disruptions to the electrical conduction system of the heart may contribute to cardiac dysrhythmia and other problems, effective visualization of cardiac scarring may be useful in performing various interventions such as radio frequency ablation, which may be used to treat dysrhythmia and other problems.
For example, during cardiac visualization, cardiac scars may become more visible as contrast agent is absorbed in the scar tissue. Accordingly, comparing cardiac image volumes acquired before and after the contrast agent is absorbed in the scar tissue is a common way to visualize scars. However, it may be difficult to adequately visualize the scars with regular volume rendering techniques.
A method for imaging a myocardial surface includes receiving an image volume. A myocardial surface is segmented within the received image volume. A polygon mesh of the segmented myocardial surface is extracted. A surface texture is calculated from voxel information taken along a path normal to the surface of the myocardium. A view of the myocardial surface is rendered. The rendering includes imposing the calculated surface texture onto the polygon mesh.
The image volume may be received from an image database, a computed tomography (CT) scanner, or a C-arm CT scanner. Segmentation of the myocardial surface may include loading a pre-determined segmentation or calculating segmentation by applying a detection algorithm to the image volume. Extracting a polygon mesh may be performed by applying a marching cubes approach to the segmented myocardial surface. Rendering the view of the myocardial surface may include rendering the polygon mesh in a depth buffer of a graphical processing unit (GPU) using a rasterization algorithm. Position information of the visible myocardial surface may be extracted from the rendering of the myocardial surface in the depth buffer rather than the image volume. Calculating the surface texture from the voxel information taken along a path normal to the surface of the myocardium may be performed starting from the previously extracted position information. The path normal to the surface of the myocardium may be a smoothed normal.
Regions of scarring may be highlighted on the rendering of the myocardial surface. Highlighting of scarring may include calculating a derivative of the rendered surface, for example, by application of a Sobel filter to the regions of scarring.
Scarring may be automatically segmented from the rendered surface mesh. Scarring may be automatically segmented from the rendered surface mesh based on the highlighting.
A user may be allowed to change one or more parameters of display or segmentation, and the view of the myocardial surface may then be re-rendered in real-time based on the changed parameters.
A method for applying texture to a polygon mesh includes casting a ray from a point of view. The ray intercepts a three-dimensional structure within an image volume. A direction normal to the surface of the three-dimensional structure is determined at the point at which the ray intercepts the surface of the structure. A set of voxels of the three-dimensional structure is analyzed along the normal direction including ascertaining voxel color and transparency. As the normals may be smoothed before this point, the set of voxels of the three-dimensional structure may be analyzed along the smoothed normal direction. The set of voxels is combined based on the ascertained color and transparency to create a texture element. The created texture element is applied to the polygon mesh.
Prior to determining the direction normal to the surface of the structure, normals of the surface of the structure may be smoothed. The three-dimensional structure may include a myocardium and the polygon mesh may be a representation of a surface of the myocardium.
A computer system includes a processor and a non-transitory, tangible, program storage medium, readable by the computer system, embodying a program of instructions executable by the processor to perform method steps for imaging a myocardial surface. The method includes receiving an image volume. A myocardial surface is segmented from within the received image volume. A polygon mesh of the segmented myocardial surface is extracted. A surface texture is calculated from voxel information taken along a path normal to the surface of the myocardium. A view of the myocardial surface is rendered. The rendering includes imposing, on-the-fly, the calculated surface texture onto the polygon mesh without having to perform texture mapping. Scarring is highlighted on the rendering of the myocardial surface. The scarring is segmented on the rendering of the myocardial surface based on the highlighting.
Rendering the view of the myocardial surface may include rendering the polygon mesh in a depth buffer of a graphical processing unit (GPU) using a rasterization algorithm. Position information of the visible myocardial surface may be extracted from the rendering of the myocardial surface in the depth buffer rather than the image volume. Calculating the surface texture from the voxel information taken along a path normal to the surface of the myocardium may be performed starting from the previously extracted position information. The path normal to the surface of the myocardium may be a smoothed normal.
A more complete appreciation of the present disclosure and many of the attendant aspects thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
In describing exemplary embodiments of the present disclosure illustrated in the drawings, specific terminology is employed for sake of clarity. However, the present disclosure is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents which operate in a similar manner.
Exemplary embodiments of the present invention seek to provide methods for visualization of cardiac scars that may be located inside the myocardium from within image volumes such as those acquired by computed tomography (CT) or C-Arm CT. These methods may use parallel computing and may be efficiently implemented within a graphic processing unit (GPU). In so doing, a user may be able to change the visualization parameters in real-time to better highlight the scars.
The surface of the heart may then be segmented (Step S113). Segmentation of the surface of the heart may be defined as determining which of the voxels of the image volume represent the outer surface of the heart. Segmenting the surface of the heart may include either loading a pre-determined segmentation or calculating segmentation by applying an algorithm for detecting the surface of the heart. After segmentation, a polygon mesh may be extracted from the segmented surface of the heart (Step S114). An example of a suitable mesh extraction technique is the marching cubes algorithm; however, other known techniques for polygon mesh extraction may be used to generate a polygon mesh that represents the surface of the heart.
Extraction of the surface mesh may result in a three-dimensional polygon mesh representing the heart surface. Next, the three-dimensional mesh may be rendered for viewing. Rendering of the three-dimensional mesh may include applying a surface texture over the surface mesh so that a two-dimensional rendering of the surface of the heart, complete with surface texture, may be displayed for a user. In order to determine an appropriate surface texture, the image volume may be consulted. The surface texture may then be determined by identifying color values and transparency values of the corresponding portion of the image volume.
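The mapping from voxel values to color and transparency values can be sketched as a simple transfer function. The ramp below is purely illustrative (the cutoff and scale values are assumptions, not part of the disclosure); in practice the mapping would be tuned to the imaging modality and the study at hand:

```python
def transfer_function(intensity):
    """Map a voxel intensity in [0, 255] to a (color, opacity) pair.

    Hypothetical grayscale ramp: low-intensity voxels (e.g. air or
    blood pool) are made fully transparent, brighter voxels opaque.
    """
    color = intensity / 255.0
    if intensity < 40:
        opacity = 0.0
    else:
        opacity = min(1.0, (intensity - 40) / 160.0)
    return color, opacity
```

A real transfer function may also be adjusted interactively, which fits the real-time parameter changes discussed later in the disclosure.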
Traditionally, in determining a texture to be applied, the color values and the transparency values may be assessed along a ray that is traced from a viewpoint. Ray casting may be performed starting from the surface of the myocardium. To do so, rasterization may then be used to find the intersection of a pixel ray and the myocardium surface. Then the surface texture in that region may be determined by combining the colors of the voxels that intercept the corresponding ray while accounting for their degree of transparency so that a realistic surface texture may be created.
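Combining voxel colors along a ray while accounting for their degree of transparency is classic front-to-back alpha compositing. A minimal sketch, assuming grayscale colors and per-voxel opacities in [0, 1] (the function name and early-termination threshold are illustrative):

```python
def composite_front_to_back(samples):
    """Composite (color, alpha) samples ordered nearest-to-farthest along a ray."""
    color_acc, alpha_acc = 0.0, 0.0
    for color, alpha in samples:
        # each sample contributes only through the remaining transparency
        color_acc += (1.0 - alpha_acc) * alpha * color
        alpha_acc += (1.0 - alpha_acc) * alpha
        if alpha_acc >= 0.99:  # early ray termination once nearly opaque
            break
    return color_acc, alpha_acc
```

Because compositing runs front to back, rays can stop early once accumulated opacity saturates, which suits a GPU implementation.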
Exemplary embodiments of the present invention may continuously recalculate polygon mesh surface shading. Rather than relying on texture mapping in the classical approach where a 2D texture image is computed and then imposed upon a 3D object surface using 2D texture coordinates, surface color is computed on-the-fly. As this approach may be computationally more expensive than classic texture mapping approaches, exemplary embodiments of the present invention may achieve acceptable speed by utilizing a graphical processing unit (GPU) for the on-the-fly computation of surface color. This approach may avoid creation of artifacts in surface shading which may be commonly found when using texture mapping.
Moreover, according to classical volume rendering techniques such as that described above, where texture is computed along the ray cast from the point of view to the 3D structure, texels are generally calculated once and are not intended to change as the point of view moves. As described above, this may present problems when applied to myocardial surface visualization, as texture will tend to be different depending on the current point of view.
In addition to continuously recalculating surface shading, exemplary embodiments of the present invention utilize a novel approach to computing surface shading for mesh polygons that may provide for shading that remains accurate regardless of point of view.
Depending on the quality of the image volume and on the tool used to perform segmentation, the cardiac surface may appear complex and/or noisy. This in turn may cause parts of the surface of the image volume to contain small concave pockets which may cause the normal directions of adjacent regions to intersect. This may result in misleading visualization results, for example, where a scar would be visualized in multiple locations on the surface. To minimize or avoid this phenomenon, exemplary embodiments of the present invention may perform an optional step of smoothing the normals of the mesh (Step S115). An exemplary smoothing technique according to an embodiment of the present invention smooths the normals of the polygons comprising the mesh using a low-pass filter that is iteratively applied to the mesh. An example of a low-pass filter is the mean filter. Use of the mean filter method may include iterating over the vertices of the mesh, computing the mean of the normals of the neighboring vertices, and assigning this value to the current vertex, although other techniques for normal smoothing may be used in addition to or in place of the mean filter or other low-pass filters.

Rendering of the three-dimensional mesh, including the process of shading, may be performed, for example, using a ray tracing algorithm to perform a classic integration over the view direction. Alternatively, however, exemplary embodiments of the present invention may utilize a two-pass approach to rendering (Step S116). In the first pass, the mesh may be rendered in a depth buffer, for example, using a rasterization algorithm (Step S116a). This step may be implemented, for example, in a graphics processing unit (GPU) using an available hardware-accelerated API such as OpenGL, DirectX, or GLSL.
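The iterative mean filter for normal smoothing can be sketched in a few lines. This is an illustrative implementation, not the disclosed one: it assumes per-vertex normals and a precomputed vertex adjacency list, and the function and parameter names are hypothetical:

```python
import math

def smooth_normals(normals, neighbors, iterations=1):
    """Iteratively replace each vertex normal with the mean of its neighborhood.

    normals:   list of (x, y, z) unit vectors, one per mesh vertex
    neighbors: neighbors[i] lists the vertex indices adjacent to vertex i
    """
    for _ in range(iterations):
        smoothed = []
        for i, n in enumerate(normals):
            ring = [normals[j] for j in neighbors[i]] + [n]
            mx = sum(v[0] for v in ring) / len(ring)
            my = sum(v[1] for v in ring) / len(ring)
            mz = sum(v[2] for v in ring) / len(ring)
            # re-normalize so the result remains a unit direction
            length = math.sqrt(mx * mx + my * my + mz * mz) or 1.0
            smoothed.append((mx / length, my / length, mz / length))
        normals = smoothed
    return normals
```

Increasing the iteration count widens the effective low-pass kernel, trading surface detail for fewer intersecting normals in concave pockets.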
In the second pass, a ray trace may be performed for each voxel (Step S116b). The ray trace may begin at the position of the surface of the heart, which may be recovered from the depth buffer information and the camera configuration, an operation that is known as unprojection. Analysis may then be performed on the volume following the smoothed normal of the surface for a certain distance that may be fixed or computed on-the-fly. The result of the analysis may then be stored in the display buffer. Accordingly, the on-screen rendering, which may be performed quickly by the GPU, may be used to determine the depth of each polygon of the surface mesh, and calculating depth from each voxel of the original image volume may be avoided.
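The unprojection and normal-following analysis can be illustrated as follows. This sketch assumes a simple pinhole-camera model (the focal lengths fx, fy and principal point cx, cy are hypothetical parameters, not values from the disclosure) and a caller-supplied function that samples the image volume at a 3D point:

```python
def unproject(u, v, depth, fx, fy, cx, cy):
    """Recover the camera-space point behind pixel (u, v) at the given depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def sample_along_normal(volume_at, start, normal, distance, steps):
    """Collect samples from the surface point inward along the (smoothed) normal.

    volume_at: callable mapping a 3D point to a voxel value
    """
    samples = []
    for k in range(steps):
        t = distance * k / max(steps - 1, 1)
        p = (start[0] + t * normal[0],
             start[1] + t * normal[1],
             start[2] + t * normal[2])
        samples.append(volume_at(p))
    return samples
```

In a GPU implementation both steps would run per fragment, with the depth buffer supplying the `depth` argument for each visible pixel.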
The rendering is accordingly the result of the surface shading described above. After rendering has been performed, additional steps such as merging different information together, for example, classic surface rendering and the result of the Sobel filter of the scar rendering, may be performed to highlight the boundaries of the scars on the surface. Such subsequent steps may be implemented using image merging techniques.
This two-pass rendering approach (Step S116) may be used to automatically consider only the relevant volume data close to the cardiac surface rather than volume data that is above or far from the surface. Moreover, this approach may be more efficient and effective than classic ray tracing because analysis is performed only where the segmented surface is visible, thereby avoiding unnecessary computation.
This added efficiency may allow for on-the-fly re-rendering as display parameters are changed and/or as viewing angle changes. Thus re-computation of surface shading may be performed in real-time.
Once the mesh has been rendered, an example of which may be seen in the accompanying drawings, the scar surface may be highlighted for enhanced viewing (Step S117).
The approach for two-pass rendering described above may also permit the avoidance of deformation when computing a texture for a surface by avoiding an unfolding step that may otherwise be necessary for algorithms that generate a texture that covers the whole surface.
In addition to or instead of highlighting the scar surface for enhanced viewing, exemplary embodiments of the present invention may automatically segment the scar surface (Step S118). Segmentation of the scar surface may include the performance of ray analysis to analyze the image volume over the normals of each surface mesh polygon so that the scar may be more easily segmented. When analyzing the volume over the normals, multiple techniques may be used. Examples of suitable techniques include maximum intensity projection (MIP), minimum intensity projection (MINIP), mean integration projections, combinations of the above, and/or the use of one or more other filters.
Ray analysis may thus include the use of one or more filters. For example, one filter, according to an exemplary embodiment of the present invention, may involve computing along each ray both the mean and the maximum and then visualizing the difference between the mean and the maximum of the rays. The visualization may involve a threshold that has been found automatically or a global threshold provided by the user that can segment the scars.
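The mean/maximum difference filter described above can be sketched as follows; the function names are hypothetical and the threshold used in the example is illustrative, not a value prescribed by the disclosure:

```python
def mean_max_scar_score(ray_samples):
    """Score one normal ray by the gap between its peak and mean intensity.

    A bright scar inside otherwise moderate tissue yields a large gap.
    """
    peak = max(ray_samples)
    mean = sum(ray_samples) / len(ray_samples)
    return peak - mean

def segment_scars(rays, threshold):
    """Label each ray 1 (scar) or 0 (no scar) by thresholding its score."""
    return [1 if mean_max_scar_score(r) > threshold else 0 for r in rays]
```

The threshold could be supplied globally by the user or found automatically, matching the two options the disclosure mentions.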
Highlighting and segmentation of the scar surface may be considered post-processing. Post-processing may be included in an optional embodiment of the present invention. According to one exemplary embodiment of the present invention, segmentation of the scar surface (Step S118) may occur after the highlighting (Step S117). In such a case, the highlighting results may be used to facilitate segmentation. For example, the derivative of the segmentation results may be calculated to highlight the contours of the scar segmentation. Derivatives may be calculated, for example, using the Sobel filter discussed above. It may also be possible to combine such visualization with rendering of a classic MIP.
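A plain 3x3 Sobel gradient-magnitude pass over the rendered scar map, of the kind referred to above, might be sketched as follows (a minimal, unoptimized illustration that leaves the one-pixel border at zero):

```python
def sobel_magnitude(img):
    """Return the Sobel gradient magnitude of a 2D list of floats."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Applied to a binary scar mask, the nonzero responses trace the scar contours, which can then be merged with the classic surface rendering as described above.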
After rendering has been performed, exemplary embodiments of the present invention may be efficient enough to provide for re-rendering to refresh the image, for example, 20 to 60 times per second or more. Accordingly, rendering may be performed in real-time. Exemplary embodiments of the present invention may also permit a user to change parameters to fine tune, in real-time, the rendering and scar highlighting/segmentation (Step S119).
The computer system referred to generally as system 1000 may include, for example, a central processing unit (CPU) 1001, random access memory (RAM) 1004, a printer interface 1010, a display unit 1011, a local area network (LAN) data transmission controller 1005, a LAN interface 1006, a network controller 1003, an internal bus 1002, and one or more input devices 1009, for example, a keyboard, mouse etc. As shown, the system 1000 may be connected to a data storage device, for example, a hard disk, 1008 via a link 1007.
Exemplary embodiments described herein are illustrative, and many variations can be introduced without departing from the spirit of the disclosure or from the scope of the appended claims. For example, elements and/or features of different exemplary embodiments may be combined with each other and/or substituted for each other within the scope of this disclosure and appended claims.
The present application is based on provisional application Ser. No. 61/251,887, filed Oct. 15, 2009, the entire contents of which are herein incorporated by reference.