Reference is made to commonly-assigned copending U.S. application Ser. No. 11/039,422, filed Jan. 20, 2005, entitled RADIATION THERAPY METHOD WITH TARGET DETECTION, by Schildkraut et al., the disclosure of which is incorporated herein.
The invention relates generally to radiation therapy systems and, in particular, to the detection of the target at the time of radiation treatment without the use of internal markers.
In the application of radiotherapy to extra-cranial tumors, movement of the tumor target can result in a decrease in the radiation dose received by the tumor and an increased dose to normal tissue. This is especially relevant to pulmonary tumors that move due to respiration. The uncertainty in the location of the tumor target can be compensated for in several ways. One approach is to increase the treatment margin around the gross tumor volume. This approach increases the probability that all of the tumor will receive a lethal dose of radiation. Unfortunately, it also increases collateral damage to healthy tissue.
If the location of a tumor target could be determined immediately before or during radiation treatment, the radiation dose could be concentrated more effectively at the tumor target and less normal tissue would be irradiated. Radiographic images of the patient, in the vicinity of the target, may be captured during radiotherapy. Unfortunately, real-time detection of a tumor target in a projection radiograph is generally very difficult because the target is obscured by overlapping anatomical structures. A number of inventions have been directed at solving this problem.
U.S. Pat. No. 6,731,970 discloses the use of metal as a marker for target tracking. A metal marker has high contrast relative to human tissue in a radiograph. This allows its location to be easily detected by manual or automatic means. However, this approach has significant drawbacks, including the requirement of an invasive procedure to implant the marker and the fact that the motion of the marker and the tumor may not be perfectly correlated. In U.S. Pat. No. 6,898,456 the degree of lung filling is determined from a radiograph by the measurement of the position of the diaphragm, which is clearly visible, relative to the position of stationary anatomy. The lung filling value, instead of the location of an implanted marker, is correlated with the target position.
A number of inventions are directed at performing three-dimensional (3-D) imaging at the time of radiotherapy. U.S. Pat. No. 6,914,959 discloses a combined computed tomography (CT) and radiotherapy system. CT is used to obtain an image for treatment planning and images of the target during treatment. U.S. Pat. No. 6,198,957 discloses a combined magnetic resonance (MR) imaging and radiotherapy system. A drawback to 3-D imaging approaches to target detection during radiotherapy is that the time required to capture and analyze the image may preclude near real-time detection. Also, the addition of a 3-D medical imaging system to a radiotherapy unit would greatly increase its cost. These drawbacks can be mitigated by an approach that captures less than full 3-D images. In U.S. Pat. No. 6,778,850 a plurality of two-dimensional (2-D) x-ray images are captured and synthesized into a low clarity 3-D image.
U.S. Patent Application Publication Nos. 2005/0054916, 2005/0053267, and 2005/0053196 describe a method in which a time sequence of reference fluoroscopic images is captured of an area containing a radiation therapy target throughout a physiological cycle. Moving content in a reference image is enhanced by comparing it to previous images in the sequence. For example, a difference image is created from an image by subtracting from it a weighted average of a number of previous images. At the time of radiation treatment a sequence of fluoroscopic images is captured using the same source and imaging device configuration as for the reference images. Moving content is enhanced as before, and the treatment images are correlated with templates that are created from the enhanced reference images. A high correlation between an image and a reference template determines the patient's current position in the physiological cycle.
U.S. Patent Application Publication No. 2004/0092815 describes a method that, instead of directly attempting to detect the location of the target at treatment time, determines the current respiratory state, shift, and orientation of the target region. This information is then used to infer the location of the target. In the planning phase, 3-D CT images are captured in at least two respiratory states, preferably including maximum and minimum inhalation. Additional CT images at intermediate respiratory states are estimated from the captured CT images. The location of the target is identified in each of the CT images. Next, a set of digitally reconstructed radiographs (DRR) is calculated for each of the CT images. During radiotherapy, radiographs are captured of the target region. The captured radiographs are matched to the set of DRR. When a match to a DRR is found, the respiratory state at the time of capture is determined to be that of the CT image from which the matching DRR was generated. The source and imaging plane location that was used in the calculation of the matching DRR can be used to determine the position and orientation of the target region relative to the position of the radiographic unit. Finally, since the location of the target in the DRR is known, the location of the target at the time the radiograph was captured can be determined.
An object of this invention is to directly detect the location of a target at the time of radiotherapy without the need for the acquisition of multiple 2-D or 3-D images in the planning phase of radiation therapy. Another object of this invention is to directly detect the location of a target at the time of radiotherapy in a way that is fast, does not add significantly to the radiation dose to normal tissue, and does not require major additions to the radiation therapy system. This invention provides a method to detect the location of a radiotherapy target based on identification of the target's projection in a captured radiographic image.
Briefly, according to one aspect of the present invention, a method for delivering radiation therapy to a patient using a three-dimensional (3-D) planning image for radiation therapy of the patient, wherein the planning image includes a radiation therapy target, includes the steps of: determining a digitally reconstructed radiograph from the planning image; identifying a region of the target's projection in the digitally reconstructed radiograph; capturing a radiographic image corresponding to the digitally reconstructed radiograph; identifying a region in the captured radiographic image; comparing the region of the target's projection in the digitally reconstructed radiograph with the identified region in the captured radiographic image; and determining a delivery of the radiation therapy in response to this comparison.
This invention builds on U.S. patent application Ser. No. 11/039,422, which discloses a method of real-time target detection in radiotherapy that solves the problem of detecting a target in a 2-D captured radiographic image in two ways: 1) The capture configuration for a radiograph at treatment time is based on an analysis of digitally reconstructed radiographs (DRR) that are generated from a 3-D planning image. This analysis determines capture conditions for which the target can be directly detected. 2) Powerful image processing techniques are used that enable target detection in the presence of superimposed anatomical structures.
This invention provides a method of identifying the region in a captured radiographic image that corresponds to the region of the target's projection in the image. This is accomplished by first, in the planning phase, determining processing conditions that result in the identification of the region of the target's projection in a DRR. A region is identified in the DRR by a method of image segmentation. The identified region is compared with the target's projection in this image. The segmentation process is optimized until the identified region and the target's projection are substantially the same. In the treatment phase, the optimized segmentation procedure is applied to a captured radiographic image in order to identify a region at or near the isocenter. Characteristics of the region identified in the DRR are compared with those of the region identified in the captured radiographic image. Based on this comparison, the probability that the identified region in the captured radiographic image is the target is determined. This probability and the location of the identified region in the captured radiographic image are used to modify the delivery of therapeutic radiation.
The foregoing and other objects, features, and advantages of the invention will be apparent from the following more particular description of the embodiments of the invention, as illustrated in the accompanying drawings. The elements of the drawings are not necessarily to scale relative to each other.
The following is a detailed description of the preferred embodiments of the invention, reference being made to the drawings in which the same reference numerals identify the same elements of structure in each of the several figures.
A radiography unit comprised of a diagnostic X-ray source 135 and a digital X-ray imaging device 133 images the region of the target 131. The radiation therapy system preferably has more than one radiography unit to enable location of the target in three dimensions.
The system has means to accurately determine the position and orientation of the radiography unit relative to the radiotherapy coordinate system. This can be accomplished, for example, with the use of markers placed on the X-ray source and imaging device that are detected by the cameras 139. Another means is to use a phantom that contains markers that are detected by both the cameras and the radiography unit.
The target detection and control unit 137 in
In an embodiment of this invention, the therapeutic radiation source 136 is used in place of or in addition to the X-ray source 135 to capture radiographic images.
A method of radiation therapy with target detection in accordance with the present invention is diagrammed in
The purpose of step 212 is to determine the best capture conditions for radiographs that are acquired in step 214. In step 212, digitally reconstructed radiographs (DRR) are calculated from the planning image. One or more DRR for which target detection is facilitated are determined. Generally, target detection is facilitated when overlap of the target with other anatomy is minimized and the boundary of the target is distinct.
In step 213, one or more radiographic units are arranged to capture images that correspond to a DRR as determined in step 212.
Step 214 occurs immediately before patient exposure with the radiation therapy beam. An image is captured with each of the radiographic units as shown in
In step 215 in
In step 216, the delivery of therapeutic radiation is modified based on the results of step 215. Modification options include, but are not limited to, administering the dose, refraining from administering the dose, repositioning the patient, redirecting the therapeutic radiation beam, and modifying the therapeutic radiation beam. If the modification includes repositioning, redirecting, or modifying, the dose can be administered after the repositioning, redirecting, or modifying.
This invention is described in greater detail with reference to
A tomographic image of the patient that includes the target is captured in step 310. The target volume is designated in this image by manual or automatic means. When the target is a tumor, this volume is called the gross tumor volume (GTV).
In step 312 a digitally reconstructed radiograph (DRR) is determined from the tomographic image for a chosen imaging geometry, which is defined by the location of a virtual X-ray source and image plane relative to the tomographic image. The region of the target's projection in the DRR is mapped by recording which rays that travel from the source to a pixel in the imaging plane pass through the target volume. In step 314 the detectability of the target is measured by considering the overlap of the target with other anatomy, the distinctiveness of the target's border, and other detectability metrics.
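By way of illustration only, the ray-casting bookkeeping described above can be sketched as follows. This is a minimal, unoptimized sketch rather than the disclosed implementation; the array layout, the nearest-neighbor sampling, and the `pixel_positions` argument are assumptions.

```python
import numpy as np

def drr_and_target_map(ct, target_mask, source, pixel_positions, n_samples=512):
    """Illustrative ray-casting sketch (not the disclosed implementation).

    ct              : 3-D array of densities (the planning image)
    target_mask     : boolean 3-D array marking the target volume (GTV)
    source          : (3,) virtual X-ray source position, voxel coordinates
    pixel_positions : (H, W, 3) imaging-plane pixel positions, voxel coordinates
    """
    h, w, _ = pixel_positions.shape
    drr = np.zeros((h, w))
    target_proj = np.zeros((h, w), dtype=bool)
    ts = np.linspace(0.0, 1.0, n_samples)
    for i in range(h):
        for j in range(w):
            # Sample points along the ray from the source to pixel (i, j).
            pts = source[None, :] + ts[:, None] * (pixel_positions[i, j] - source)
            idx = np.round(pts).astype(int)
            ok = np.all((idx >= 0) & (idx < np.array(ct.shape)), axis=1)
            idx = idx[ok]
            drr[i, j] = ct[idx[:, 0], idx[:, 1], idx[:, 2]].sum()  # integrated density
            # The pixel belongs to the target's projection if any sample hits the GTV.
            target_proj[i, j] = target_mask[idx[:, 0], idx[:, 1], idx[:, 2]].any()
    return drr, target_proj
```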
A metric for the overlap of the target with other anatomy is defined by the equation,
where Dtotal is the total integrated density along a ray from the X-ray source to a pixel in the imaging plane and Dtarget is the integrated density only for the target volume. The integral is over the region of the target's projection in the image. The value of this overlap metric ranges between 0.0 for no overlap and 1.0 when the contribution from the target is negligible.
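The overlap equation itself is not reproduced above. The sketch below implements one plausible form that matches the stated limits (0.0 for no overlap, approaching 1.0 when the target's contribution is negligible); the exact published formula may differ.

```python
import numpy as np

def overlap_metric(d_total, d_target, target_proj):
    """One plausible form of the overlap metric (assumed, not the published
    equation): the fraction of integrated density inside the target's
    projection that is NOT contributed by the target volume. It gives 0.0
    when only the target contributes along those rays and approaches 1.0
    when the target's contribution is negligible."""
    inside = target_proj.astype(bool)
    total = d_total[inside].sum()
    target = d_target[inside].sum()
    return (total - target) / total
```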
Optimization loop 390, which includes steps 312 and 314, serves to determine preferred imaging geometry 330 for which target detectability is facilitated.
The purpose of the next two steps 316 and 318 in
where Atarget is the area of the region of the target's projection, Aseg is the area of the segmented region, Asegoutside is the area of the segmented region that is outside the target's projection, and Aseginside is the area of the segmented region that is inside the target's projection. The value of Qseg ranges between 1.0 when the region of the target's projection and segmented region are identical and 0.0 in the case that the two regions do not overlap.
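The Qseg equation is likewise not reproduced above. The function below is a hypothetical stand-in that satisfies the stated limits (1.0 when the segmented region and the target's projection are identical, 0.0 when they do not overlap); it is not necessarily the published expression.

```python
def segmentation_quality(target_proj, seg_region):
    """Hypothetical stand-in for Qseg. Both inputs are boolean 2-D masks.
    Identical regions give 1.0 (Aseginside = Atarget, Asegoutside = 0);
    disjoint regions give 0.0 (Aseginside = 0)."""
    a_target = target_proj.sum()
    a_seg_inside = (seg_region & target_proj).sum()   # Aseginside
    a_seg_outside = (seg_region & ~target_proj).sum() # Asegoutside
    return a_seg_inside / (a_target + a_seg_outside)
```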
The purpose of the processing loop 392 is to optimize the segmentation process so that Qseg is as large as possible. The boundaries of the region of the target's projection and of the segmented region 338 are passed to step 360 in the treatment phase.
In step 320 features are calculated for the region of the target's projection and/or the segmented region in the DRR. These features include, but are not limited to, the features that are described in U.S. patent application Ser. No. 11/187,676, the disclosure of which is incorporated herein by reference. The values of these features 336 are used subsequently in the treatment phase 382, wherein in step 362 they are compared with feature values that are calculated in step 360 for an identified region in a captured radiographic image.
The planning phase optimization processes 390 and 392 have the purpose of determining imaging geometry and processing conditions that enable detection of the target at treatment time. Any optimization procedure can be used for this purpose, including genetic algorithms, dynamic programming, and gradient descent methods. In one embodiment of this invention the processing loops 390 and 392 are merged in order to jointly optimize imaging geometry and target segmentation processing conditions.
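As one concrete (assumed) instance of such an optimizer, a simple random search over segmentation parameters can drive loop 392. The parameter names and ranges below are placeholders; `segment_fn` stands in for the segmentation of step 316 and `quality_fn` for the Qseg measure.

```python
import random

def optimize_segmentation(drr, target_proj, segment_fn, quality_fn,
                          n_trials=200, seed=0):
    """Random search maximizing a Qseg-style quality measure; one simple
    stand-in for loop 392 (the disclosure permits any optimizer).
    segment_fn(image, params) must return a boolean region map."""
    rng = random.Random(seed)
    best_params, best_q = None, -1.0
    for _ in range(n_trials):
        params = {"threshold": rng.uniform(0.0, 1.0),   # assumed parameters
                  "erosion_radius": rng.randint(1, 10)}
        q = quality_fn(target_proj, segment_fn(drr, params))
        if q > best_q:
            best_params, best_q = params, q
    return best_params, best_q
```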
In the treatment phase 382, in step 350 a radiography unit is set up to capture a radiographic image based on the preferred imaging geometry 330 that was determined in the planning phase in steps 312 and 314. This means that the location of the X-ray source and X-ray imaging device of the radiography unit in relation to the patient corresponds with the location of the virtual source and imaging plane relative to the tomographic image in the determination of the DRR in step 312.
Step 352 is a procedure in which the radiography system is calibrated in the coordinate system of the radiotherapy unit. A phantom may be used for this purpose. At a minimum, the location at which a line from the X-ray source passing through the treatment isocenter intersects the X-ray imaging device is determined. This point is referred to as the isocenter's projection in the captured radiographic image.
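Computing the isocenter's projection reduces to a ray-plane intersection once the source, isocenter, and imaging plane are known in the radiotherapy coordinate system. A minimal sketch, assuming the plane is specified by a point and a normal:

```python
import numpy as np

def isocenter_projection(source, isocenter, plane_point, plane_normal):
    """Where the line from the X-ray source through the treatment isocenter
    meets the imaging plane. All inputs are 3-vectors in the radiotherapy
    coordinate system (an assumed convention); the central ray must not be
    parallel to the plane."""
    d = isocenter - source                       # direction of the central ray
    t = np.dot(plane_normal, plane_point - source) / np.dot(plane_normal, d)
    return source + t * d                        # the isocenter's projection
```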
In step 354 a radiographic image of the target region in the patient is captured by the radiography unit. This may occur immediately before or during irradiation with the treatment beam.
In step 356 the captured radiographic image is registered with the DRR 332 that was determined using the preferred imaging geometry 330. This step is not always necessary. For example, if the patient's conformation at the time of capture of the planning tomographic image and at the time of treatment is the same, the DRR and captured radiographic image will be similar. Otherwise, registration is necessary so that the region segmentation processing conditions that were optimized in loop 392, based on the appearance of the target in the DRR, will successfully segment the target's projection in the captured radiographic image.
In step 358 the preferred processing conditions 334 from step 316 are used to identify a region in the captured radiographic image by region segmentation. In a preferred embodiment of this invention, this region contains the projection of the isocenter in the image.
In step 360 features of the identified region in the captured radiographic image are calculated. The boundaries of the region of the target's projection and segmented region in the DRR 338 are provided to this step because they are used in the calculation of certain features.
In step 362 the values of the features for the identified region in the captured radiographic image are compared with the values 336 that were calculated in step 320 in the planning phase. In one embodiment of this invention, the feature values 336 are for the region of the target's projection in the DRR. In another embodiment of this invention, the feature values 336 are for the segmented region in the DRR that was determined in step 316. The probability that the segmented region in the captured radiographic image is the target, and its precise location in the image, are determined in this step. Any method of statistical pattern recognition can be used in this step, including a neural network, learning vector quantizer (LVQ), support vector machine, and the methods that are reviewed by Jain et al. in "Statistical Pattern Recognition: A Review," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 1, 2000, pp. 4-37.
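As a minimal illustration of this comparison step, the sketch below scores the identified region with a logistic model over feature differences. The logistic form, weights, and bias are assumptions; the disclosure permits any statistical pattern recognition method.

```python
import numpy as np

def target_probability(features, reference_features, weights, bias):
    """Assumed example of step 362: a logistic model over absolute feature
    differences. With positive weights, large differences from the planning
    reference yield a low probability that the region is the target."""
    diff = np.abs(np.asarray(features) - np.asarray(reference_features))
    score = np.dot(weights, diff) + bias
    return 1.0 / (1.0 + np.exp(score))   # large differences -> low probability
```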
Finally, in step 364 the location of the identified region in the captured radiographic image and the probability that it is the projection of the target are potentially used in a variety of ways. This information can be used to verify that the target is located within a specified area during treatment. If target location verification fails, the treatment beam is stopped. The information can be used for gated radiotherapy, in which treatment is commenced only if the target is detected within a specified area. The information can also be used to relocate the treatment beam and/or the patient so that the treatment beam is correctly aimed at the target.
In a preferred embodiment of this invention, the target is located in three dimensions by using the method that is described in
The region map that was created in step 420 is usually not an accurate map of the target's projection because the image generally contains background features with characteristics of the target. Also, since the input image is a DRR or a radiograph, other content is superimposed on the target. These problems are corrected by the next series of steps. In step 422 the region map is eroded in order to break up connections between the target region and other regions. Binary morphological operations as described by J. Serra in "Image Analysis and Mathematical Morphology," Vol. 1, Academic Press, 1982, pp. 34-62, are used for this purpose.
In step 424 the connected region in the map that contains the point-of-interest (POI) is retained and all other regions are removed. When the input image is the DRR, the POI is the center of the target's projection. In the case that the input image is a captured radiographic image, the POI is the isocenter's projection. Next, in step 426 the selected region is dilated in order to reverse the effect of step 422 on the map of the selected region.
The region map at this point usually still contains the target plus overlapping anatomical structures and therefore needs to be further refined. The region map after step 426 serves as a region of support for the steps that follow. In step 428 a watershed segmentation algorithm is applied to the image in the region of support. Watershed segmentation is described by Vincent and Soille in “Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations,” IEEE Trans. Patt. Anal. Machine Intell., Vol. 13, No. 6, 1991, pp. 583-598. In the next step 430 the watershed that contains the POI is retained along with other watersheds that satisfy a connectivity condition. The map of the selected watersheds forms the final region map 432.
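A sketch of steps 422 through 430 using common morphology and watershed routines follows. The structuring elements, the iteration counts, and the simplified handling of step 430's connectivity condition are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.segmentation import watershed

def refine_region_map(image, region_map, poi, erosion_iters=3):
    """Sketch of steps 422-430. `poi` is a (row, col) tuple and is assumed
    to survive the erosion; parameters are illustrative only."""
    # Step 422: erosion breaks thin connections to neighboring regions.
    eroded = ndimage.binary_erosion(region_map, iterations=erosion_iters)
    # Step 424: keep only the connected region containing the POI.
    labels, _ = ndimage.label(eroded)
    selected = labels == labels[poi]
    # Step 426: dilation reverses the effect of the erosion on that region.
    support = ndimage.binary_dilation(selected, iterations=erosion_iters)
    # Step 428: watershed segmentation restricted to the region of support,
    # run on the gradient magnitude of the image.
    basins = watershed(sobel(image), mask=support)
    # Step 430 (simplified): retain the watershed containing the POI; the
    # disclosed connectivity condition for adding further basins is omitted.
    return basins == basins[poi]
```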
Referring to
The result of the first application of step 316 on the DRR is shown in image 512 of
On the right side of
In step 320 of the planning phase, in
Region-based features that are calculated in step 320 in the planning phase and in step 360 in the treatment phase in
The feature Shape1 compares an identified region to a reference region. In step 320 the identified region is the region that is segmented in the DRR in step 316 and the reference region is the target's projection in the DRR. In step 360 the identified region is the region that is segmented in the captured radiographic image in step 358. The reference region is either the target's projection or the segmented region in the corresponding DRR 332 which is provided by the region boundaries 338. The definition of this feature is,
where A is the area of the identified region, Aref is the area of the reference region, Aoutside is the area of the identified region that is outside the reference region, and Ainside is the area of the identified region that is inside the reference region.
A statistics feature compares the average code value in a region to the average in a surrounding region. This feature is defined by
Stat1=μregion−μsurround
where μregion and μsurround are the mean code value of the region and surrounding region, respectively. Other statistical measures can also be used including the code value standard deviation, minimum code value, and maximum code value.
Gradient-based features are valuable in the detection of tumor targets in an X-ray image. The gradient directions at pixels within the region of the target's projection tend to converge to a common point. A single-band digital image is a discrete two-dimensional function. A number of linear filters have been developed for the purpose of estimating the first derivative of this function at each pixel location. For example, a Sobel filter is commonly used. The magnitude Mij and direction θij of the gradient at pixel ij in an image are defined by,

$$M_{ij} = \sqrt{\left(\tfrac{\partial F}{\partial x}\right)_{ij}^{2} + \left(\tfrac{\partial F}{\partial y}\right)_{ij}^{2}}, \qquad \theta_{ij} = \tan^{-1}\!\left(\frac{(\partial F/\partial y)_{ij}}{(\partial F/\partial x)_{ij}}\right)$$

where Fij is the code value and the partial derivatives are the filter estimates of its horizontal and vertical first derivatives.
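A short sketch of this gradient computation, using Sobel filters as the first-derivative estimator:

```python
import numpy as np
from scipy import ndimage

def gradient_magnitude_direction(image):
    """Per-pixel gradient magnitude Mij and direction theta_ij estimated
    with Sobel filters, one standard choice of derivative estimator."""
    gx = ndimage.sobel(image, axis=1)   # horizontal derivative estimate
    gy = ndimage.sobel(image, axis=0)   # vertical derivative estimate
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)      # radians, in (-pi, pi]
    return magnitude, direction
```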
The calculation of a gradient-based feature for a region is described with reference to
where Nk is the number of pixels in section k. Only pixels with a gradient magnitude above t1 and below t2 are included in the summation. The gradient feature for the region is defined by,
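The equations for this feature are not reproduced above, so the sketch below is only a plausible reading of the description: the region is divided into angular sections about the POI, each section k (with Nk qualifying pixels) averages how well its gradient directions point at the POI, and the per-section averages are combined into one feature.

```python
import numpy as np

def gradient_convergence_feature(magnitude, direction, region, poi,
                                 t1=10.0, t2=1000.0, n_sections=8):
    """Hedged sketch only; the exact published combination is an assumption.
    Only pixels with gradient magnitude between t1 and t2 contribute."""
    ii, jj = np.nonzero(region)
    to_poi = np.arctan2(poi[0] - ii, poi[1] - jj)   # bearing toward the POI
    about = np.arctan2(ii - poi[0], jj - poi[1])    # bearing of pixel about POI
    section = ((about + np.pi) / (2 * np.pi) * n_sections).astype(int) % n_sections
    keep = (magnitude[ii, jj] > t1) & (magnitude[ii, jj] < t2)
    align = np.cos(direction[ii, jj] - to_poi)      # 1.0 = gradient points at POI
    scores = [align[keep & (section == k)].mean()
              for k in range(n_sections) if (keep & (section == k)).any()]
    return float(np.mean(scores)) if scores else 0.0
```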
Texture-based features for the region are calculated using the code values, gradient magnitude, or gradient direction in an image. Texture features that are based on the cooccurrence function are described by Bevk and Kononenko in "A Statistical Approach to Texture Description of Medical Images: A Preliminary Study," 15th IEEE Symposium on Computer-Based Medical Systems, Jun. 4-7, 2002. Two texture-based features are given by the equations,

$$\mathrm{Texture}_1 = \sum_i \sum_j C(i,j)^2$$

$$\mathrm{Texture}_2 = \sum_i \sum_j (i-j)^2\, C(i,j)$$

where C(i,j) is the cooccurrence function calculated over neighboring pixels. The summations range from the minimum to the maximum code value. The feature Texture1 is referred to as the energy and Texture2 as the contrast. Any of the other texture features described by Haralick et al. in "Textural Features for Image Classification," IEEE Transactions on Systems, Man, and Cybernetics, 1973, pp. 610-621, can also be used.
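A sketch of these two features, computed from a normalized cooccurrence matrix over a rectangular crop containing the region; the gray-level quantization and the single right-neighbor offset are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix

def texture_features(region_pixels, levels=64):
    """Texture1 (energy) and Texture2 (contrast) from a normalized
    cooccurrence matrix of a 2-D crop. Quantizing to `levels` gray levels
    and the single right-neighbor offset are illustrative choices."""
    span = float(np.ptp(region_pixels)) or 1.0
    q = np.round((region_pixels - region_pixels.min()) / span * (levels - 1))
    c = graycomatrix(q.astype(np.uint8), distances=[1], angles=[0],
                     levels=levels, symmetric=True, normed=True)[:, :, 0, 0]
    i, j = np.indices(c.shape)
    texture1 = float((c ** 2).sum())             # energy
    texture2 = float(((i - j) ** 2 * c).sum())   # contrast
    return texture1, texture2
```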
A grayscale image can be interpreted as a relief map in which the code value at a pixel is a measure of the elevation of a surface at that location. Surface-based features are obtained by fitting the image surface in a region to a 4th-order bivariate polynomial. The principal curvatures are calculated at the point of highest elevation in the region as described by Abmayr et al. in "Local Polynomial Reconstruction of Intensity Data as Basis of Detecting Homologous Points and Contours with Subpixel Accuracy Applied on IMAGER 5003," Proceedings of the ISPRS working group V/1, Panoramic Photogrammetry Workshop, Vol. XXXIV, Part 5/W16, Dresden, 2004. Second-order derivatives of the fitted polynomial are calculated to obtain the elements of the Hessian matrix. The maximum and minimum eigenvalues of the Hessian matrix, λmax and λmin, are the principal curvatures. The surface-based region features are,
Surface1=λmin
Surface2=λmax
Surface3=λminλmax
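A sketch of these surface features follows; the ordinary least-squares fit on centered coordinates is an assumed implementation choice, not the disclosed one.

```python
import numpy as np

def surface_features(image, region):
    """Fit the code values in the region with a 4th-order bivariate
    polynomial, then take the eigenvalues of the Hessian at the point of
    highest elevation as the principal curvatures."""
    ii, jj = np.nonzero(region)
    yc, xc = ii - ii.mean(), jj - jj.mean()      # centered for conditioning
    exps = [(p, q) for p in range(5) for q in range(5) if p + q <= 4]
    A = np.stack([yc**p * xc**q for p, q in exps], axis=1)
    coef, *_ = np.linalg.lstsq(A, image[ii, jj].astype(float), rcond=None)
    k = int(np.argmax(image[ii, jj]))            # highest elevation in region
    y, x = yc[k], xc[k]
    # Second-order partial derivatives of the fitted polynomial at (y, x).
    fyy = sum(c * p*(p-1) * y**(p-2) * x**q
              for c, (p, q) in zip(coef, exps) if p >= 2)
    fxx = sum(c * q*(q-1) * y**p * x**(q-2)
              for c, (p, q) in zip(coef, exps) if q >= 2)
    fxy = sum(c * p*q * y**(p-1) * x**(q-1)
              for c, (p, q) in zip(coef, exps) if p >= 1 and q >= 1)
    lmin, lmax = np.linalg.eigvalsh([[fyy, fxy], [fxy, fxx]])
    return lmin, lmax, lmin * lmax               # Surface1, Surface2, Surface3
```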
Referring to
806 is the captured radiographic image (see step 214 in
In practice, images IP and IT have the same size because of the correspondence between the captured radiographic image and the DRR. The origin of these images can be arbitrarily defined at the upper left corner as shown in
One purpose of target detection in this invention is to find the target position difference, due to various causes (e.g., respiration), between the planning phase and the treatment phase. Because of the elastic nature of soft tissue, the position difference of a target varies depending on the location of the target within the body. In the case of multiple targets, the motion of all the targets is considered non-rigid, while for an individual target, within a smaller region, the motion can be regarded as either rigid or non-rigid. Therefore, a step of defining a target-centric sub-image is employed.
Referring to
Before applying the method of finding the displacement, target 909 is identified in step 811 where the identification method is identical to that in step 809 in which target 907 is identified in the planning phase.
A method of finding the displacement (step 812) of target 909 from 907 based on gradients is now discussed. The process can be formulated as determining, mathematically, a mapping between the coordinates of a part of an image (e.g. 908) and a part of another image (e.g. 910), such that points in the two sub-images that correspond to the same region are mapped to each other.
For the purpose of discussion, express the sub-image ĨT 910 as I(xt, yt, t) and the sub-image ĨP 908 as I(xt+1, yt+1, t+1). The notations x and y are the horizontal and vertical coordinates of the image planar coordinate system, and t is the image index. It is important to note that the origin, (x=0, y=0), of the image planar coordinate system is defined at the center of the image plane. The coordinates x and y can be non-integers.
The image (or image pixel) is also indexed as I(i, j) where the indices (i and j) are strictly integers and parameter t is ignored for simplicity. This representation is consistent with indexing a matrix in the discrete domain. If the height of the image is h and the width is w, the corresponding image plane coordinates, x and y, at location (i, j) can be computed as x=i−(w−1)/2.0, and y=(h−1)/2.0−j. The column index i runs from 0 to w−1. The row index j runs from 0 to h−1.
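The index-to-plane conversion stated above, as a small utility:

```python
def pixel_to_plane(i, j, w, h):
    """Convert matrix indices (column i, row j) to image-plane coordinates
    (x, y) with the origin at the center of the plane, per the text above."""
    x = i - (w - 1) / 2.0
    y = (h - 1) / 2.0 - j
    return x, y
```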
In general, the mapping process is to find an optimal affine transformation function Φt+1(xt, yt) (step 1002 in
$$[x_{t+1},\, y_{t+1},\, 1]^T = \Phi_{t+1}(x_t, y_t)\,[x_t,\, y_t,\, 1]^T$$
Now, define the energy function (or L2 norm measure) of the difference between the two images as

$$E(\Phi_{t+1}) = \sum_{(x_t,\, y_t)\,\in\,\Omega} \big[\, I(x_{t+1}, y_{t+1}, t+1) - I(x_t, y_t, t) \,\big]^2$$

where Ω is a spatial neighborhood for a local mapping (non-rigid mapping). For a global mapping, Ω is the entire image.
The transformation function Φt+1(xt, yt) is a 3×3 matrix with elements shown as,

$$\Phi_{t+1} = \begin{bmatrix} r_{11} & r_{12} & t_x \\ r_{21} & r_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

This transformation matrix consists of two parts, a rotation sub-matrix

$$R = \begin{bmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{bmatrix}$$

and a translation vector

$$T = \begin{bmatrix} t_x \\ t_y \end{bmatrix}$$
There are a variety of ways to minimize the energy function. For instance, using derivatives of image pixel values with respect to spatial position and intensity (see "Robot Vision," B. K. P. Horn, The MIT Press/McGraw-Hill Book Company, 1986), optimal values of the entries of the transformation function Φ can be found. Applying the optimized affine transformation function maps one image to the other.
In practice, using the optimized transformation function Φ, two displacement maps X(i, j) and Y(i, j) can be generated (step 1004 in
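A sketch of this step under the conventions above: apply the optimized 3×3 affine matrix to every pixel's plane coordinates and record the coordinate differences. `pixel_to_plane` is the index conversion sketched earlier.

```python
import numpy as np

def displacement_maps(phi, w, h):
    """Generate displacement maps X(i, j) and Y(i, j) from the optimized
    affine transformation phi (a 3x3 matrix), one entry per pixel."""
    X = np.zeros((h, w))
    Y = np.zeros((h, w))
    for j in range(h):
        for i in range(w):
            x, y = pixel_to_plane(i, j, w, h)         # plane coordinates
            xn, yn, _ = phi @ np.array([x, y, 1.0])   # mapped coordinates
            X[j, i] = xn - x                          # horizontal displacement
            Y[j, i] = yn - y                          # vertical displacement
    return X, Y
```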
The steps depicted in
The invention has been described in detail with particular reference to a presently preferred embodiment, but it will be understood that variations and modifications can be effected within the scope of the invention. The presently disclosed embodiments are therefore considered in all respects to be illustrative and not restrictive. The scope of the invention is indicated by the appended claims, and all changes that come within the meaning and range of equivalents thereof are intended to be embraced therein.