The present invention relates to x-ray imaging, and more particularly, to suppressing bone structures based on a single x-ray image.
Chest radiography (i.e., x-ray imaging) is a frequently used imaging technique for the diagnosis of chest diseases, such as lung cancer, pneumoconiosis, and pulmonary emphysema. However, it is difficult to detect lung nodules (i.e., potential lung cancers) in a conventional chest radiograph (x-ray image) because lung nodules are often obscured in chest radiographs by overlying bones, such as ribs and clavicles. Even with a computer-aided diagnostic (CAD) scheme for nodule detection, it can be challenging to detect nodules in chest radiographs when bones overlap with the nodules.
A conventional solution to this problem is to use dual energy imaging in order to separate a chest radiograph into a bone image and a soft-tissue image.
Despite the advantages of dual energy imaging, many hospitals do not use dual energy imaging because specialized equipment is required. Also, dual energy imaging requires extra x-ray exposure for patients, such that the radiation dose can be greater than the recommended amount for obtaining standard radiographs. Accordingly, a method for suppressing bone structures in an x-ray image without using dual energy imaging is desirable.
The present invention provides a method for suppressing bone structures in an x-ray image. Embodiments of the present invention utilize a learning-based regression model for predictive bone suppression in an x-ray image without using dual energy imaging. Such a regression model predicts a soft-tissue image without bone structures from an input x-ray image.
In one embodiment of the present invention, an x-ray image is received. A set of features is extracted for each pixel of the x-ray image. The extracted features may be wavelet features. A soft-tissue image is generated from the x-ray image using a trained regression function to determine an intensity value for the soft-tissue image corresponding to each pixel of the x-ray image based on the set of features extracted for each pixel of the x-ray image. The regression function can be trained using a Bayesian Committee Machine (BCM) to approximate Gaussian process regression (GPR). The x-ray image may be normalized prior to extracting the features, and the dimensionality of the set of features for each pixel may be reduced prior to generating the soft-tissue image.
In another embodiment of the present invention, multiple sets of training images, each set of training images having a training x-ray image and a corresponding training soft-tissue image, are received. A set of features, such as wavelet features, is extracted for each pixel in each of the training x-ray images. A regression function to suppress bone structures in x-ray images is trained based on the extracted wavelet features for the training x-ray images and the corresponding training soft-tissue images. The regression function may be trained using BCM to approximate GPR. The training x-ray images and the corresponding training soft-tissue images may be normalized prior to extracting the features, and the dimensionality of the features may be reduced prior to training the regression function.
These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
The present invention is directed to a method for suppressing bone structures in x-ray images. Embodiments of the present invention are described herein to give a visual understanding of the bone structure suppression method. A digital image is often composed of digital representations of one or more objects (or shapes). The digital representation of an object is often described herein in terms of identifying and manipulating the objects. Such manipulations are virtual manipulations accomplished in the memory or other circuitry/hardware of a computer system. Accordingly, it is to be understood that embodiments of the present invention may be performed within a computer system using data stored within the computer system.
Embodiments of the present invention utilize a learning-based regression model for predictive bone suppression in an x-ray image. The regression model estimates a soft-tissue image from an input x-ray image (radiograph). The regression model is trained using training data of known input (x-ray image) and output (soft-tissue image) pairs. For example, soft-tissue images extracted from x-ray images using dual energy imaging can be used as training data to train the regression model. The regression model predicts bone structures in the input x-ray image, and subtracts the predicted bone structures to generate an estimated soft-tissue image. Embodiments of the present invention generate a soft-tissue image from a single input x-ray image.
At step 204, the x-ray image is pre-processed to remove noise and normalize the input image data. In training of the regression model used in the method of
According to a possible implementation, Gaussian blurring can be used to normalize the x-ray image. A normalized image Ī is calculated from an original image I as:
The received x-ray image, as well as the x-ray images and corresponding images of the training data, can be locally normalized on a large scale σ (e.g., σ=128 for a 512 by 512 image) multiple times. For example, 6 iterations of normalization can be used.
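The iterated large-scale normalization described above can be sketched as follows. Because the exact normalization formula is not reproduced in this excerpt, the sketch assumes one common local-normalization scheme: subtract a Gaussian-blurred local mean and divide by a Gaussian-blurred local deviation. The function name and default parameters are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_normalize(image, sigma=128.0, iterations=6, eps=1e-6):
    """Iteratively remove low-frequency intensity variation with Gaussian blurring.

    Assumed scheme (not the patent's exact formula): at each iteration,
    subtract the Gaussian-blurred local mean, then divide by the
    Gaussian-blurred local standard deviation.
    """
    out = image.astype(np.float64)
    for _ in range(iterations):
        mean = gaussian_filter(out, sigma)        # local mean on scale sigma
        centered = out - mean
        std = np.sqrt(gaussian_filter(centered ** 2, sigma)) + eps
        out = centered / std                      # locally normalized image
    return out
```

For a 512 by 512 image, sigma=128 and six iterations would correspond to the values mentioned above; smaller sigmas are useful for smaller test images.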
Returning to
It is possible to extract image features using Gaussian filters. In order to extract features for each pixel of the x-ray image using Gaussian filters, a filter bank can be used to filter each pixel with a variety of filters. For example, the Leung-Malik (LM) filter bank, which was originally developed for natural texture recognition under varying viewpoints and illumination, can be used to extract the features. This filter bank is a multi-scale and multi-orientation filter bank that consists of 48 filters (36 built from first and second derivatives of Gaussian filters at six orientations and three scales, 8 center-surround difference of Gaussian filters, and 4 low-pass Gaussian filters). For radiograph images of size 128 by 128, the first and second derivative filters occur at the first three scales √2, 2, and 2√2 with an elongation factor of 3 (i.e., σx=σ and σy=3σ).
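The oriented derivative-of-Gaussian portion of such a filter bank can be sketched as below. This is a simplified construction in the spirit of the LM bank (two derivative orders, six orientations, three scales, elongation factor 3); it does not reproduce the LM bank's exact kernel sizes or normalization, and the function name is illustrative.

```python
import numpy as np

def gaussian_derivative_filter(size, sigma_x, sigma_y, theta, order):
    """Oriented first- or second-derivative-of-Gaussian filter (order 1 or 2).

    Elongated kernels (sigma_y > sigma_x) are rotated to angle theta,
    following the general recipe of the Leung-Malik bank.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    # Rotate coordinates to the filter orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 / (2 * sigma_x ** 2) + yr ** 2 / (2 * sigma_y ** 2)))
    if order == 1:
        f = -(xr / sigma_x ** 2) * g                        # first derivative
    else:
        f = ((xr ** 2 - sigma_x ** 2) / sigma_x ** 4) * g   # second derivative
    return f - f.mean()                                     # zero-mean response

# 36 oriented filters: 2 derivative orders x 6 orientations x 3 scales.
scales = [np.sqrt(2), 2.0, 2 * np.sqrt(2)]
bank = [gaussian_derivative_filter(49, s, 3 * s, k * np.pi / 6, order)
        for order in (1, 2) for k in range(6) for s in scales]
```

Convolving the image with each filter in the bank then yields a 36-dimensional (or, with the Gaussian and difference-of-Gaussian filters added, 48-dimensional) feature vector per pixel.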
According to an advantageous implementation, the image features can be extracted using wavelets. Wavelet analysis provides a versatile collection of tools for image analysis and manipulation. Wavelets are mathematical basis functions that can represent image data with different frequency components in a resolution matched to their scales. Thus, wavelet analysis of an image analyzes the image according to the scale of the image. In order to extract a set of wavelet features for each pixel of the x-ray image, multiple wavelets corresponding to multiple scales of the x-ray image are used to evaluate the x-ray image at each pixel. The set of wavelet features for a given pixel is a vector of the calculated wavelet values for that pixel.
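A per-pixel multiscale decomposition of this kind can be sketched with an undecimated (à trous style) transform, which keeps one coefficient per pixel at each scale. The sketch below uses differences of successive Gaussian smoothings as the wavelet planes; this is a simplified stand-in for the wavelet analysis described above, not the patent's specific wavelet choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_features(image, num_scales=4):
    """Per-pixel multiscale features via an a-trous-style decomposition.

    Each detail plane is the difference between successive smoothings, so
    every pixel receives one coefficient per scale plus the coarse residual.
    """
    smooth = image.astype(np.float64)
    planes = []
    for j in range(num_scales):
        next_smooth = gaussian_filter(smooth, sigma=2 ** j)
        planes.append(smooth - next_smooth)   # detail at dyadic scale 2**j
        smooth = next_smooth
    planes.append(smooth)                     # coarse residual
    # Stack into an (H, W, num_scales + 1) array: one feature vector per pixel.
    return np.stack(planes, axis=-1)
```

By construction the planes telescope, so summing a pixel's feature vector recovers the original pixel value, which makes this decomposition convenient to sanity-check.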
Returning to
At step 210, a soft-tissue image is generated using a trained regression function. The soft-tissue image is an image without bone structures estimated from the original x-ray image. The soft-tissue image is generated from the original x-ray image by the trained regression function based on the extracted features. Accordingly, the trained regression function predicts the bone structures in the x-ray image based on the extracted features and subtracts the predicted bone structures from the x-ray image to generate the soft-tissue image.
Regression analysis is a statistical tool for determining and measuring a relationship among variables. In particular, regression analysis is used to determine a relationship between the input (independent, predictor) variable x and the output (dependent, response) variable y, where x is a d-dimensional vector and y is a scalar value. This statistical relationship can be mathematically formulated as:
y=ƒ(x)+ε, (2)
where ε is a random error variable. For bone suppression in an x-ray image, x represents the feature vector for a pixel of the input x-ray image, and y represents a predicted pixel intensity value for the corresponding pixel of the soft-tissue image. In order to predict a pixel intensity y based on a set of features x, a regression function ƒ is trained (learned) that represents the relationship between x and y based on the observed data D={(xi, yi)}, i=1, . . . , N. The regression function ƒ is trained based on training data of known corresponding x-ray and soft-tissue images. The trained regression function ƒ is used to generate the soft-tissue image by estimating a pixel intensity y for each pixel in the soft-tissue image based on the set of features x extracted for the corresponding pixel in the original x-ray image.
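Assembling the observed data D from a training pair of images is a simple flattening step, sketched below with illustrative function and variable names: each pixel contributes one feature vector x_i and one target intensity y_i.

```python
import numpy as np

def make_regression_dataset(feature_volume, soft_tissue_image):
    """Flatten per-pixel features and targets into (X, y) for regression.

    feature_volume: (H, W, d) array holding a d-dimensional x_i per pixel.
    soft_tissue_image: (H, W) array holding the target intensity y_i per pixel.
    """
    h, w, d = feature_volume.shape
    X = feature_volume.reshape(h * w, d)   # one row per pixel: x_i
    y = soft_tissue_image.reshape(h * w)   # one scalar per pixel: y_i
    return X, y
```

Concatenating the (X, y) pairs from all training image pairs yields the full dataset D on which ƒ is trained.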
A well-known approach for training a regression function is simple linear regression, where ƒ is a linear function of the input and the parameters. However, this approach may be too limited to perform well for bone suppression in x-ray images. Various other methods may be used to train the regression function, such as k-Nearest-Neighbor regression (kNNR), support vector regression (SVR), Gaussian process regression (GPR), boosting based regression, and artificial neural networks. According to an advantageous implementation, the regression function can be trained using GPR approximated using a Bayesian Committee Machine (BCM). The training of the regression function is described in greater detail below.
At step 212, the soft-tissue image is output. The soft-tissue image generated using the trained regression function can be output by displaying the soft-tissue image, for example on a display device of a computer system. The soft-tissue image can also be output by storing the generated soft-tissue image, for example on a computer readable medium, storage, or memory of a computer system.
At step 604, the training images are pre-processed. The training images are pre-processed to normalize the images and reduce noise in the images. At step 606, features are extracted for each pixel of each of the training x-ray images (input examples). For example, the features can be extracted using wavelets. At step 608, the dimensionality of the features is reduced. Steps 604, 606, and 608 of the method of
At step 610, the regression function is trained based on the extracted features. In particular, a regression function is trained that maps pixels from the input (x-ray) example images to the corresponding output (soft-tissue) example images based on the extracted features. The regression function can be trained using a non-linear regression method, such as k-Nearest-Neighbor regression (kNNR), support vector regression (SVR), or Gaussian process regression (GPR). These methods for training the regression function are described in greater detail below.
kNNR is one possible method for predicting the unknown function value of a given point using previously seen input and output pairs. Given a set of training points S={x1, . . . , xn}, the kNN estimator is defined as the mean value function of the nearest neighbors:
where N(x)⊂S is the set of k nearest points to x, and k is a parameter that can be set by a user. It is possible to utilize a weighted average where the weight of each neighbor is proportional to its distance from x. This uses a special form of weighted regression, and one possible example is using the Gaussian Radial Basis Function (RBF):
where d(x,xi)=∥x−xi∥2² is the squared l2 distance, β is the variance, and
is a normalization factor.
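The weighted kNN estimator described above can be sketched directly: the prediction is the weighted mean of the k nearest targets, with Gaussian RBF weights normalized to sum to one. Function and parameter names are illustrative.

```python
import numpy as np

def knn_rbf_regress(X_train, y_train, x, k=5, beta=1.0):
    """k-nearest-neighbor regression with Gaussian RBF weights.

    Each of the k nearest neighbors x_i contributes weight
    w_i = exp(-||x - x_i||^2 / beta), normalized so the weights sum to one.
    """
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared l2 distances to x
    nn = np.argsort(d2)[:k]                   # indices of the k nearest points
    w = np.exp(-d2[nn] / beta)                # Gaussian RBF weights
    w /= w.sum()                              # normalization factor
    return float(np.dot(w, y_train[nn]))      # weighted mean of neighbor targets
```

Setting all weights equal recovers the plain kNN mean estimator; the RBF weighting simply discounts more distant neighbors.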
Another possible non-linear regression method for training the regression function is SVR. The goal of ε-SVR is to find a function ƒ that has at most ε deviation from the actual targets yi for all the training data. With a constraint on the flatness of ƒ, the standard form of support vector regression (ε-SVR) can be formulated as a convex optimization problem as follows:
The above problem is solved using its dual problem and quadratic programming. The performance of SVR is very sensitive to the parameters. Because optimizing these parameters is an intractable problem, cross validation via parallel grid search can be used to set the values for the parameters. However, for training a regression function based on the training images, it may be difficult to find good parameters using cross validation, especially when applied to high resolution x-ray images.
According to an advantageous implementation, the regression function can be learned based on Gaussian process regression (GPR). A Gaussian process is a generalization of the Gaussian probability distribution from a finite vector space to a function space of infinite dimension. In the GPR approach, without parameterizing a function ƒ, a probability is assigned to every possible function. Just as a Gaussian distribution is specified by its mean vector and covariance matrix, a Gaussian process can be fully specified by its mean function μ(x)=E[ƒ(x)] and its covariance function C(x,x′)=E[(ƒ(x)−μ(x))(ƒ(x′)−μ(x′))]. The covariance function encodes assumptions about the function characteristics (such as smoothness and length-scale) by defining a notion of similarity between two function values. Given the covariance C, the predictive distribution for a new input x is also Gaussian with the following mean and variance:
ŷ(x)=μy+k(x)′K−1y (4)
σŷ²(x)=C(x,x)−k(x)′K−1k(x) (5)
where μy is the mean of y in the training set, k(x)=[C(x,xi), . . . , C(x,xN)]′, K is the covariance matrix for the training cases Kij=C(xi,xj), and y=[y1, . . . , yN]′.
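Equations (4) and (5) can be sketched as follows. The squared-exponential covariance, the small noise jitter on K, and the centering of the targets about their mean (so that the μy term enters consistently) are implementation assumptions here, not details specified above; function names are illustrative.

```python
import numpy as np

def rbf_cov(A, B, length_scale=1.0):
    """Squared-exponential covariance C(x, x') between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gpr_predict(X_train, y_train, X_test, noise=1e-6):
    """Predictive mean and variance of equations (4) and (5).

    y_hat(x)     = mu_y + k(x)' K^{-1} (y - mu_y)   (targets centered on mu_y)
    sigma^2(x)   = C(x, x) - k(x)' K^{-1} k(x)
    """
    mu_y = y_train.mean()
    K = rbf_cov(X_train, X_train) + noise * np.eye(len(X_train))
    Kinv = np.linalg.inv(K)
    k = rbf_cov(X_test, X_train)              # each row is k(x)'
    mean = mu_y + k @ Kinv @ (y_train - mu_y)
    # Diagonal of C(x,x) minus the quadratic form k(x)' K^{-1} k(x) per test point.
    var = rbf_cov(X_test, X_test).diagonal() - np.einsum('ij,jk,ik->i', k, Kinv, k)
    return mean, var
```

At the training inputs themselves the predictive mean reproduces the targets (up to the noise jitter) and the predictive variance collapses toward zero, which is a useful correctness check.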
A problem with GPR is that the computational complexity grows as O(n³), where n is the number of training points. Therefore, GPR can have difficulty dealing with a large dataset, such as with the per-pixel image analysis of the present invention. Accordingly, according to an advantageous implementation of the present invention, an approximation method can be used to approximate GPR in order to adapt GPR to the large scale problem. It is possible to approximate GPR using a sparse approximation based on inducing variables. When f and f* are the vectors of the output in the training set and the testing set, respectively, and u=[u1, . . . , um]′ (where m<<n) are the inducing variables with a set of corresponding input locations Xu, two inducing conditionals can be specified as follows:
training conditional: p(f|u)=N(Kf,uKu,u−1u,Kf,f−Qf,f), (6)
test conditional: p(f*|u)=N(K*,uKu,u−1u,K*,*−Q*,*), (7)
where Ka,b is the covariance function between the two vectors of function values for two input datasets, Qa,b=Ka,uKu,u−1Ku,b, and N refers to a Gaussian distribution. Previously proposed sparse approximations can be categorized, according to additional approximations to the training and test conditionals, into the subset of data (SoD), subset of regressors (SoR), deterministic training conditional (DTC), and partially and fully independent training conditional (PITC and FITC) approximations. A problem with these approximation methods is that they still require the computation of the n×m matrix Kf,u in the training conditional for all of the approximations. When the training data is large, as in the present invention, this matrix may become too large and cause memory problems.
According to an advantageous implementation, a Bayesian Committee Machine (BCM) can be used to approximate the GPR. Although BCM can be categorized as a PITC approximation, BCM differs from other PITC approximation methods in that it is a transductive learner (i.e., the test inputs have to be known for training) and the inducing inputs Xu are chosen to be the test inputs. By utilizing BCM, the resulting approximation does not need to compute the matrix Kf,u. The core idea of BCM is that the training data is split into M data sets D={D1, . . . , DM} of approximately the same size. For example, a k-means algorithm can be used for clustering of the training data. M regressors are then trained separately on each training data set. The predictive distribution of the testing data then has the following mean and inverse covariance:
where the a priori predictive density for the new testing data is assumed to be a Gaussian with zero mean and covariance K*,*, and the posterior predictive density for each regressor module is a Gaussian with mean E(f*|Di) and covariance cov(f*|Di). The inducing variables are the test inputs, and the matrix Kf,u can be approximated by the sum of the covariances for the individual modules.
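The BCM combination described above can be sketched as follows. Since the mean and inverse-covariance equations themselves are not reproduced in this excerpt, the sketch assumes the standard BCM form: the module posterior precisions add, and the zero-mean prior (covariance K*,*), which would otherwise be counted M times, is subtracted back M−1 times. The RBF covariance, noise jitter, and function names are assumptions for illustration.

```python
import numpy as np

def rbf_cov(A, B, ell=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gpr_posterior(X, y, X_star, noise=1e-4):
    """Per-module GP posterior: mean E(f*|D_i) and covariance cov(f*|D_i)."""
    K = rbf_cov(X, X) + noise * np.eye(len(X))
    Ks = rbf_cov(X_star, X)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ y
    cov = rbf_cov(X_star, X_star) - Ks @ Kinv @ Ks.T
    return mean, cov + noise * np.eye(len(X_star))   # jitter for stability

def bcm_predict(modules, X_star):
    """Bayesian Committee Machine combination of M independent GP modules.

    Module precisions add; the prior precision, counted M-1 times too
    often, is subtracted out.
    """
    M = len(modules)
    prior_inv = np.linalg.inv(rbf_cov(X_star, X_star) + 1e-4 * np.eye(len(X_star)))
    precision = -(M - 1) * prior_inv
    weighted = np.zeros(len(X_star))
    for X, y in modules:                    # each module trained on one D_i
        m, C = gpr_posterior(X, y, X_star)
        Cinv = np.linalg.inv(C)
        precision += Cinv
        weighted += Cinv @ m
    cov = np.linalg.inv(precision)
    return cov @ weighted, cov              # combined predictive mean, covariance
```

Because the test inputs X_star serve as the inducing inputs, each module only ever forms covariances against the test set, avoiding the n×m matrix Kf,u entirely.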
As described above, a regression function trained using the method of
The above-described methods for bone structure suppression in x-ray images and for training a regression function for bone structure suppression may be implemented on a computer using well-known computer processors, memory units, storage devices, computer software, and other components. A high level block diagram of such a computer is illustrated in
The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
This application claims the benefit of U.S. Provisional Application No. 60/975,295, filed Sep. 26, 2007, the disclosure of which is herein incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
6614874 | Avinash | Sep 2003 | B2 |
6661873 | Jabri et al. | Dec 2003 | B2 |
6754298 | Fessler | Jun 2004 | B2 |
6792072 | Avinash | Sep 2004 | B2 |
6816572 | Jabri et al. | Nov 2004 | B2 |
6917697 | Avinash et al. | Jul 2005 | B2 |
7068826 | Jabri et al. | Jun 2006 | B2 |
20050100208 | Suzuki et al. | May 2005 | A1 |
Number | Date | Country | |
---|---|---|---|
20090087070 A1 | Apr 2009 | US |