The present invention relates to diagnostic imaging systems and methods. It finds particular application in conjunction with model based image segmentation of diagnostic medical images and will be described with particular reference thereto. Although described by way of example with reference to x-ray computed tomography, it will further be appreciated that the invention is equally applicable to other diagnostic imaging techniques which generate 3D image representations.
Radiation therapy has recently been experiencing a transition from conformal methods to Intensity Modulated Radiation Therapy (IMRT). IMRT enables an improved dose distribution in the patient's body and makes possible the precise delivery of a high radiation dose directly to the tumor while maximally sparing the surrounding healthy tissue. Accurate delineation of the target and the “organs at risk” is important in IMRT. Presently, this delineation is performed manually in 2D slices, which is cumbersome and is the most time-consuming part of the radiation therapy planning process. The use of a robust and reliable automatic segmentation technique would substantially facilitate the planning process and increase patient throughput.
Model based image segmentation is a process of segmenting (contouring) medical diagnostic images that uses a pre-determined model to improve the robustness of the segmentation. Typically, a pre-determined 3D model of the region of interest or organ to be segmented in the diagnostic image is selected. The model represents an anatomical organ such as a bladder or femur, but it may also represent a structure such as a target volume for radiotherapy. In many cases, the model can aid automated image segmentation by providing knowledge of the organ shape as an initial starting point for the automated segmentation process. However, in some instances, auto-segmentation of the image is not possible, or it is not robust enough to fit a specific organ or a section of the model accurately. In particular, application of auto-segmentation to the image data is difficult due to insufficient soft tissue contrast in CT data, high organ variability, and image artifacts, e.g. those caused by dental fillings or metal implants. It would be desirable to be able to initiate the segmentation with a model and then complete an accurate segmentation manually when auto-segmentation is not practical, or to enhance the auto-segmentation result for specific situations after auto-segmentation has been completed.
There is a need for a method and apparatus that provide model based image segmentation which is easily adapted to match a specific patient's anatomy. The present invention provides a new and improved imaging apparatus and method which overcome the above-referenced problems and others.
In accordance with one aspect of the present invention, a diagnostic imaging system is disclosed. A means selects a shape model of an organ. A means best fits the selected model to image data. A manual means modifies selected regions of the model to precisely match the image data.
In accordance with another aspect of the present invention, a method of segmenting an image of a diagnostic imaging system is disclosed. A shape model of an organ is selected. The selected model is dragged and dropped onto image data. The selected model is globally scaled, rotated, and translated to best fit the image data. Local regions of the model are modified with a set of manual tools to precisely match the image data.
One advantage of the present invention resides in enabling the manipulation of the models to match a subject's anatomy.
Another advantage resides in providing a set of diagnostic image modification tools that enable the user to modify the models with a mouse.
Still further advantages and benefits of the present invention will become apparent to those of ordinary skill in the art upon reading and understanding the following detailed description of the preferred embodiments.
The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
With reference to
Typically, the imaging technician performs a scan using the workstation 12. Diagnostic data from the scanner 18 is reconstructed by a reconstruction processor 30 into 3D electronic image representations which are stored in a diagnostic image memory 32. The reconstruction processor 30 may be incorporated into the workstation 12 or the scanner 18, or it may be a resource shared among a plurality of scanners and workstations. The diagnostic image memory 32 preferably stores a three-dimensional image representation of an examined region of the subject. A video processor 34 converts selected portions of the three-dimensional image representation into an appropriate format for display on a video monitor 36. The operator provides input to the workstation 12 by using an operator input device 38, such as a mouse, touch screen, touch pad, keyboard, or other device.
With continuing reference to
With continuing reference to
The user best fits the model to the organ using a set of global tools 62 which apply transformations to the entire model on the image. The global tools 62 include rotation, translation, and scaling tools that allow the user to rotate, translate, and scale the model. The global tools 62 are applied by use of the mouse 38 in each (x, y, z) dimension of the model, e.g. the mouse motion is converted into a translation, scale, or rotation such that all vertices in the model are transformed by the defined translation, scale, or rotation.
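The global fit described above can be sketched in code. This is a minimal illustration, assuming the model is stored as an N×3 NumPy array of vertex coordinates; the function names and the choice of pivoting the transform about the model centroid are assumptions for the sketch, not details taken from the source:

```python
import numpy as np

def rotation_z(angle_rad):
    """Rotation about the z axis as a 3x3 matrix."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def transform_model(vertices, scale=1.0, rotation=None, translation=(0.0, 0.0, 0.0)):
    """Apply a global scale, rotation, and translation to every vertex.

    vertices: (N, 3) array of model vertex coordinates.
    Scaling and rotation pivot about the model centroid so that they
    do not also shift the model in the image.
    """
    v = np.asarray(vertices, dtype=float)
    centroid = v.mean(axis=0)
    v = (v - centroid) * scale
    if rotation is not None:
        v = v @ rotation.T
    return v + centroid + np.asarray(translation, dtype=float)
```

In an interactive tool, the mouse displacement along each screen axis would be mapped to the `scale`, `rotation`, or `translation` parameters before calling `transform_model` on all vertices at once.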
An auto-segmentation means or process 64 automatically adapts the best fitted model to the boundaries of the anatomical structures of interest. By sliding the intersection point, the user can check the fit in various directions and slices. If the user determines that the results of the auto-segmentation process 64 are not satisfactory, e.g. the desired segmentation accuracy is not achieved, the user initiates image modification via an image modification means 66 which includes a set of manual local tools 68 that allow the user to manipulate local regions of the model 52 to match the image data more accurately or in accordance with the user's preferences. Alternatively, when the user determines that auto-segmentation is not possible, the auto-segmentation process 64 is skipped. The local tools 68 comprise three main functions: selection of the local region (vertices) to be modified, the method by which the vertices are transformed, and the translation of the mouse motion into parameters defining the transformation.
The selection of the vertices is based either on the distance from the mouse position or on layers of vertex neighbors around the vertex closest to the mouse location. In the first case, all vertices within a specified distance from the mouse are selected. In the second case, the vertex closest to the mouse is selected first. All vertices which share a triangle with this first vertex are considered its neighbors and comprise the first neighbor layer. The second neighbor layer comprises all vertices which share a triangle with any vertex of the first layer, and so on. In this case, the selection of the vertices to be deformed is based on the number of neighbor layers to be used.
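The two selection modes can be sketched as follows; this is a minimal sketch assuming a triangle mesh given as vertex coordinates plus index triples, with hypothetical function names:

```python
import numpy as np

def select_by_distance(vertices, mouse_pos, radius):
    """Select indices of all vertices within `radius` of the mouse position."""
    d = np.linalg.norm(np.asarray(vertices, float) - np.asarray(mouse_pos, float), axis=1)
    return set(np.nonzero(d <= radius)[0].tolist())

def select_by_neighbor_layers(vertices, triangles, mouse_pos, n_layers):
    """Select the vertex closest to the mouse plus `n_layers` rings of
    neighbors, where two vertices are neighbors if they share a triangle."""
    # Build vertex adjacency from shared triangles.
    neighbors = {}
    for tri in triangles:
        for a in tri:
            for b in tri:
                if a != b:
                    neighbors.setdefault(a, set()).add(b)
    d = np.linalg.norm(np.asarray(vertices, float) - np.asarray(mouse_pos, float), axis=1)
    selected = {int(np.argmin(d))}
    frontier = set(selected)
    for _ in range(n_layers):
        # Next layer: neighbors of the current frontier not yet selected.
        frontier = {nb for v in frontier for nb in neighbors.get(v, ())} - selected
        selected |= frontier
    return selected
```

With the layer-based mode, the size of the deformed region is controlled purely by mesh connectivity, which keeps the selection consistent across regions of different vertex density.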
Additionally, control parameters related to the local manipulation tools are stored as part of the organ model. In this way, optimal tool settings are maintained as part of the organ model. Of course, it is also contemplated that the manual tools 68 may be used to manipulate boundaries between multiple organs at one time, or within a regional area, with a single mouse motion.
The image undergoing segmentation and segmented images are stored in a data memory 70.
With continuing reference to
In one embodiment, the Gaussian pull tool 72 pulls a Gaussian shaped distortion (or a distortion of another functional shape that smoothly transitions from 1 to 0), but derives the distance that the distortion is pulled from the distance of the mouse position from the organ model. The organ model 52 is pulled directly to the mouse position, enabling smooth drawing, rather than requiring the user to click up and down on the mouse to grab and stretch the organ.
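A minimal sketch of such a Gaussian pull, assuming a NumPy vertex array and treating the model vertex nearest the mouse as the anchor of the distortion (the function name and the `sigma` width parameter are assumptions for illustration):

```python
import numpy as np

def gaussian_pull(vertices, mouse_pos, sigma):
    """Pull the model toward the mouse with a Gaussian-shaped distortion.

    The vertex closest to the mouse is moved all the way to the mouse
    position; surrounding vertices follow with a weight that falls off
    as a Gaussian of their distance from that closest vertex, so the
    distortion blends smoothly into the undeformed surface.
    """
    v = np.asarray(vertices, dtype=float)
    mouse = np.asarray(mouse_pos, dtype=float)
    d_to_mouse = np.linalg.norm(v - mouse, axis=1)
    anchor = v[np.argmin(d_to_mouse)]            # closest model point to the mouse
    pull = mouse - anchor                        # full displacement at the anchor
    d_to_anchor = np.linalg.norm(v - anchor, axis=1)
    w = np.exp(-0.5 * (d_to_anchor / sigma) ** 2)  # 1 at the anchor, -> 0 far away
    return v + w[:, None] * pull
```

Because the pull distance is derived from the mouse-to-model distance at every mouse move, the surface follows the cursor continuously rather than only on click-and-drag events.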
With continuing reference to
With continuing reference to
The Pencil draw tool 90 recognizes the begin 96 and end 98 points of each mouse step and defines a capture plane 100 through the mouse motion vector, whose normal lies in the plane of the mouse motion and is normal to the mouse motion direction. Two end planes 102, 104, defined at the start and end points 96, 98, identify a capture range 106 around the mouse motion vector. Vertices located within the capture range 106 are pulled towards the capture plane 100. Vertices that lie in the mouse motion plane are pulled onto the capture plane 100; vertices that lie further from the mouse motion plane are pulled toward the capture plane 100 with a Gaussian weighting based on their distance from the mouse motion plane.
In one embodiment, the Pencil tool 90 is used to shrink fit an organ model to a predefined set of contours for a particular organ where the mouse motion is replaced with successive vertices of the pre-defined contour.
Preferably, the Pencil draw tool 90 is controlled by an In-Draw Plane distance, which defines the maximum distance between a vertex of the organ model and the mouse for the vertex to be captured by the Pencil tool 90, and by a From-Draw Plane parameter, which dictates how the model 52 is deformed in the direction orthogonal to the drawing plane and represents the width of the Gaussian function used to weight the distance that the vertices move. In one embodiment, the Pencil draw tool 90 is controlled by a function that smoothly transitions from 1 to 0 to weight the distance of vertex motion for vertices that do not lie in the drawing plane.
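One mouse step of such a pencil tool can be sketched as follows. This is a simplified illustration assuming the mouse motion lies in a drawing plane with a known unit normal (`view_normal`); the function name and parameter names are assumptions mirroring the In-Draw Plane and From-Draw Plane controls described above:

```python
import numpy as np

def pencil_step(vertices, p_begin, p_end, view_normal,
                in_draw_dist, from_draw_sigma):
    """One mouse step of a pencil-style draw tool (sketch).

    The capture plane passes through the mouse step; its normal lies in
    the drawing plane and is perpendicular to the motion direction.  Two
    end planes at p_begin and p_end bound the capture range.  Captured
    vertices are pulled toward the capture plane: fully if they lie in
    the drawing plane, and with a Gaussian weight of their distance from
    the drawing plane otherwise.
    """
    v = np.asarray(vertices, dtype=float)
    p0, p1 = np.asarray(p_begin, float), np.asarray(p_end, float)
    motion = p1 - p0
    length = np.linalg.norm(motion)
    motion_dir = motion / length
    n_view = np.asarray(view_normal, float)
    # Capture-plane normal: in the drawing plane, normal to the motion.
    n_cap = np.cross(n_view, motion_dir)

    d_cap = (v - p0) @ n_cap          # signed distance to the capture plane
    t = (v - p0) @ motion_dir         # position along the motion (end planes)
    d_draw = (v - p0) @ n_view        # distance from the drawing plane

    captured = (np.abs(d_cap) <= in_draw_dist) & (t >= 0.0) & (t <= length)
    w = np.exp(-0.5 * (d_draw / from_draw_sigma) ** 2)
    shift = np.where(captured, -w * d_cap, 0.0)
    return v + shift[:, None] * n_cap
```

Replaying the same step function over the successive vertices of a pre-defined contour, instead of live mouse positions, would give the shrink-fit behavior described for the Pencil tool.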
Optionally, the auto-segmentation process 64 is run after manual segmentation, preferably freezing the manually adjusted model surfaces against further modification or modification beyond preselected criteria.
The invention has been described with reference to the preferred embodiments. Modifications and alterations may occur to others upon a reading and understanding of the preceding detailed description. It is intended that the invention be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
This application claims the benefit of U.S. provisional application Ser. No. 60/512,453 filed Oct. 17, 2003, and provisional application Ser. No. 60/530,488 filed Dec. 18, 2003, which are both incorporated herein by reference.
Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/IB2004/052000 | 10/6/2004 | WO | 00 | 4/12/2006

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2005/038711 | 4/28/2005 | WO | A
U.S. Patent Documents

Number | Name | Date | Kind
---|---|---|---
4885702 | Ohba | Dec 1989 | A
5889524 | Sheehan et al. | Mar 1999 | A
5926568 | Chaney et al. | Jul 1999 | A
6106466 | Sheehan et al. | Aug 2000 | A
6201543 | O'Donnell et al. | Mar 2001 | B1
6385332 | Zahalka et al. | May 2002 | B1
6701174 | Krause et al. | Mar 2004 | B1
6911980 | Newell et al. | Jun 2005 | B1
7167738 | Schweikard et al. | Jan 2007 | B2
7200251 | Joshi et al. | Apr 2007 | B2
20020184470 | Weese et al. | Dec 2002 | A1
20030018235 | Chen et al. | Jan 2003 | A1
20030020714 | Kaus et al. | Jan 2003 | A1
20030056799 | Young et al. | Mar 2003 | A1
20030194057 | Dewaele | Oct 2003 | A1
20040012641 | Gauthier | Jan 2004 | A1
20040246269 | Serra et al. | Dec 2004 | A1
Foreign Patent Documents

Number | Date | Country
---|---|---
WO0209611 | Feb 2002 | WO
Prior Publication Data

Number | Date | Country
---|---|---
20070133848 A1 | Jun 2007 | US
Related U.S. Application Data

Number | Date | Country
---|---|---
60512453 | Oct 2003 | US
60530488 | Dec 2003 | US