Claims
- 1. In a method for evaluating an image including an object, the improvement comprising:
filtering image data derived from the image with a first geometric enhancement filter having magnitude and likelihood filter components to produce first filtered image data in which a first geometric pattern is enhanced.
- 2. The method of claim 1, wherein the step of filtering comprises:
processing the image data using one of a Hessian matrix and a radial derivative algorithm to produce a set of second derivatives; selecting three second derivatives (λ1, λ2, λ3) in three orthogonal directions from the set of second derivatives; and filtering with a first filter characteristic selected from the group of filter characteristics consisting of ddot(λ1,λ2,λ3)=gdot(λ1,λ2,λ3)kdot(λ1,λ2,λ3), dline(λ1,λ2,λ3)=gline(λ1,λ2,λ3)kline(λ1,λ2,λ3) and dplane(λ1,λ2,λ3)=gplane(λ1,λ2,λ3)kplane(λ1,λ2,λ3).
- 3. The method of claim 2, wherein the step of filtering comprises:
filtering with a first filter characteristic selected from the group of filter characteristics consisting of ddot(λ1,λ2,λ3)=|λ3|²/|λ1|, dline(λ1,λ2,λ3)=|λ2|(|λ2|−|λ3|)/|λ1| and dplane(λ1,λ2,λ3)=|λ1|−|λ2|.
- 4. The method of claim 1, further comprising:
processing the first filtered image data to derive a measure indicative of the presence of said object in said image.
- 5. The method of claim 4, wherein the step of processing comprises:
determining a region of interest in the image; extracting at least one feature from the first filtered image data from within said region of interest; and applying the at least one extracted feature to a classifier configured to output said measure indicative of the presence of said object in said image.
- 6. The method of claim 5, wherein said step of applying the at least one extracted feature to a classifier comprises:
applying the at least one extracted feature to one of an artificial neural network, a Bayesian analyzer, a linear discriminant analyzer, a K-nearest neighbor classifier, and a support vector machine.
- 7. The method of claim 5, wherein the step of extracting comprises:
extracting a measure of at least one of a) a size, b) a shape, c) a contrast, d) a regularity, e) a circularity, f) a linearity, g) a smoothness, h) a compactness, i) a standard deviation of pixel values, and j) a mean of pixel values.
- 8. The method of claim 5, wherein said step of extracting comprises:
extracting said at least one feature from kernels centered on respective pixels.
- 9. The method of claim 5, wherein said step of determining a region of interest comprises:
thresholding the first filtered image data with predetermined threshold data.
- 10. The method of claim 2, wherein the step of processing the image data comprises:
smoothing the image data with one of a 2-D and a 3-D Gaussian filter.
- 11. The method of claim 10, wherein the step of smoothing comprises:
smoothing the image data iteratively with a predetermined number of smoothing scales.
- 12. The method of claim 2, further comprising:
segmenting the image before filtering to derive said image data.
- 13. The method of claim 12, wherein the step of segmenting comprises:
processing the segmented image with a rolling ball algorithm.
- 14. The method of claim 1, further comprising:
filtering the image data with a second geometric enhancement filter having magnitude and likelihood filter components to produce second filtered image data in which a second geometric pattern in the image is enhanced, said second geometric pattern being different from the first geometric pattern.
- 15. The method of claim 14, wherein the steps of filtering with the first and second geometric enhancement filters comprise:
processing the image data using one of a Hessian matrix and a radial derivative algorithm to produce a set of second derivatives; selecting three second derivatives (λ1, λ2, λ3) in three orthogonal directions from the set of second derivatives; and filtering with respective first and second filter characteristics selected from the group consisting of ddot(λ1,λ2,λ3)=gdot(λ1,λ2,λ3)kdot(λ1,λ2,λ3), dline(λ1,λ2,λ3)=gline(λ1,λ2,λ3)kline(λ1,λ2,λ3) and dplane(λ1,λ2,λ3)=gplane(λ1,λ2,λ3)kplane(λ1,λ2,λ3).
- 16. The method of claim 15, wherein the steps of filtering with respective first and second filter characteristics comprise:
filtering with respective first and second filter characteristics selected from the group consisting of ddot(λ1,λ2,λ3)=|λ3|²/|λ1|, dline(λ1,λ2,λ3)=|λ2|(|λ2|−|λ3|)/|λ1| and dplane(λ1,λ2,λ3)=|λ1|−|λ2|.
- 17. The method of claim 14, further comprising:
processing at least one of the first and second filtered image data to derive a measure indicative of the presence of said object in said image.
- 18. The method of claim 17, wherein the step of processing comprises:
determining a region of interest in the image; extracting at least one feature from the at least one of the first and second filtered image data from within said region of interest; and applying the at least one extracted feature to at least one classifier configured to output said measure indicative of the presence of said object in said image.
- 19. The method of claim 18, wherein said step of applying the at least one extracted feature to at least one classifier comprises:
applying the at least one extracted feature to one of an artificial neural network, a Bayesian analyzer, a linear discriminant analyzer, a K-nearest neighbor classifier, and a support vector machine.
- 20. The method of claim 18, wherein the step of extracting comprises:
extracting a measure of at least one of a) a size, b) a shape, c) a contrast, d) a regularity, e) a circularity, f) a linearity, g) a smoothness, h) a compactness, i) a standard deviation of pixel values, and j) a mean of pixel values.
- 21. The method of claim 18, wherein said step of extracting comprises:
extracting said at least one feature from kernels centered on respective pixels.
- 22. The method of claim 18, wherein said step of determining a region of interest comprises:
thresholding at least one of the first and second filtered image data with predetermined threshold data.
- 23. The method of claim 15, wherein the step of processing the image data comprises:
smoothing the image data with one of a 2-D and a 3-D Gaussian filter.
- 24. The method of claim 23, wherein the step of smoothing comprises:
smoothing the image data iteratively with a predetermined number of smoothing scales.
- 25. The method of claim 15, further comprising:
segmenting the image before filtering to derive said image data.
- 26. The method of claim 25, wherein the step of segmenting comprises:
processing the segmented image with a rolling ball algorithm.
- 27. The method of claim 14, further comprising:
filtering the image data with a third geometric enhancement filter having magnitude and likelihood filter components to produce third filtered image data in which a third geometric pattern in the image is enhanced, said third geometric pattern being different from the first and second geometric patterns.
- 28. The method of claim 27, wherein the steps of filtering the image data with the first, second, and third geometric enhancement filters comprise:
processing the image data using one of a Hessian matrix and a radial derivative algorithm to produce a set of second derivatives; selecting three second derivatives (λ1, λ2, λ3) in three orthogonal directions from the set of second derivatives; and filtering with respective first, second, and third filter characteristics selected from the group consisting of ddot(λ1,λ2,λ3)=gdot(λ1,λ2,λ3)kdot(λ1,λ2,λ3), dline(λ1,λ2,λ3)=gline(λ1,λ2,λ3)kline(λ1,λ2,λ3) and dplane(λ1,λ2,λ3)=gplane(λ1,λ2,λ3)kplane(λ1,λ2,λ3).
- 29. The method of claim 28, wherein the steps of filtering with first, second, and third filter characteristics comprise:
filtering with respective first, second, and third filter characteristics selected from the group consisting of ddot(λ1,λ2,λ3)=|λ3|²/|λ1|, dline(λ1,λ2,λ3)=|λ2|(|λ2|−|λ3|)/|λ1| and dplane(λ1,λ2,λ3)=|λ1|−|λ2|.
- 30. The method of claim 27, further comprising:
processing at least two of the first, second, and third filtered image data to derive a measure indicative of the presence of said object in said image.
- 31. The method of claim 30, wherein the step of processing comprises:
determining a region of interest; extracting at least one feature from each of the at least two of the first, second, and third filtered image data from within said region of interest to produce respective extracted features; and applying the extracted features to at least one classifier configured to output said measure indicative of the presence of said object in said image.
- 32. The method of claim 31, wherein said step of applying the extracted features comprises:
applying the extracted features to one of an artificial neural network, a Bayesian analyzer, a linear discriminant analyzer, a K-nearest neighbor classifier, and a support vector machine.
- 33. The method of claim 31, wherein the step of extracting comprises:
extracting a measure of at least one of a) a size, b) a shape, c) a contrast, d) a regularity, e) a circularity, f) a linearity, g) a smoothness, h) a compactness, i) a standard deviation of pixel values, and j) a mean of pixel values.
- 34. The method of claim 31, wherein said step of extracting comprises:
extracting at least one feature from kernels centered on respective pixels.
- 35. The method of claim 31, wherein said step of determining a region of interest comprises:
thresholding at least one of the first, second, and third filtered image data with predetermined threshold data.
- 36. The method of claim 28, wherein the step of processing the image data comprises:
smoothing the image data with one of a 2-D and a 3-D Gaussian filter.
- 37. The method of claim 36, wherein the step of smoothing comprises:
smoothing the image data iteratively with a predetermined number of smoothing scales.
- 38. The method of claim 27, wherein at least one of the steps of filtering further comprises:
segmenting the image before filtering to derive said image data.
- 39. The method of claim 38, wherein the step of segmenting the image comprises:
processing the segmented image with a rolling ball algorithm.
- 40. The method of claim 1, wherein the image is a medical image.
- 41. The method of claim 40, wherein the object is a nodule.
- 42. A system configured to implement the method in any one of claims 1-41.
- 43. A computer program product storing instructions for execution on a computer system which, when executed by the computer system, cause performance of the method recited in any one of claims 1-41.
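
Claims 2-3, and the parallel claims 15-16 and 28-29, recite computing three second derivatives (λ1, λ2, λ3) in orthogonal directions, in practice the eigenvalues of the Hessian matrix, and filtering with dot-, line-, or plane-enhancement characteristics. The following is a minimal Python/NumPy sketch of the simplified characteristics of claim 3 for a 3-D image; the ordering of the eigenvalues by magnitude, the sign conditions restricting the response to bright structures, and the small epsilon guarding the division are assumptions not recited in the claims.

```python
import numpy as np
from scipy import ndimage


def hessian_eigenvalues(volume, sigma):
    """Gaussian-smooth a 3-D image at scale `sigma` and return the three
    Hessian eigenvalues per voxel, ordered so |lam1| >= |lam2| >= |lam3|
    (an assumed convention; the claims only recite three second
    derivatives in three orthogonal directions)."""
    v = np.asarray(volume, dtype=np.float64)
    hessian = np.empty(v.shape + (3, 3))
    for i in range(3):
        for j in range(i, 3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            # Second partial derivative of the Gaussian-smoothed image.
            d2 = ndimage.gaussian_filter(v, sigma, order=order)
            hessian[..., i, j] = d2
            hessian[..., j, i] = d2
    lam = np.linalg.eigvalsh(hessian)            # eigenvalues, ascending by value
    idx = np.argsort(-np.abs(lam), axis=-1)      # reorder by |lambda|, descending
    lam = np.take_along_axis(lam, idx, axis=-1)
    return lam[..., 0], lam[..., 1], lam[..., 2]


def dot_filter(lam1, lam2, lam3, eps=1e-12):
    """Claim 3: ddot = |lam3|^2 / |lam1|.  Restricting the response to voxels
    where all eigenvalues are negative (bright blobs) is an assumption."""
    response = np.abs(lam3) ** 2 / (np.abs(lam1) + eps)
    return np.where((lam1 < 0) & (lam2 < 0) & (lam3 < 0), response, 0.0)


def line_filter(lam1, lam2, lam3, eps=1e-12):
    """Claim 3: dline = |lam2| (|lam2| - |lam3|) / |lam1|."""
    response = np.abs(lam2) * (np.abs(lam2) - np.abs(lam3)) / (np.abs(lam1) + eps)
    return np.where((lam1 < 0) & (lam2 < 0), response, 0.0)


def plane_filter(lam1, lam2):
    """Claim 3: dplane = |lam1| - |lam2|."""
    return np.where(lam1 < 0, np.abs(lam1) - np.abs(lam2), 0.0)
```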
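
Claims 10-11 (and 23-24, 36-37) recite smoothing the image data with a 2-D or 3-D Gaussian filter, iteratively, over a predetermined number of smoothing scales. The sketch below builds on the helpers above; combining the per-scale responses by a voxel-wise maximum and normalizing by σ² are assumed conventions, not recited in the claims.

```python
import numpy as np


def multiscale_dot_response(volume, scales=(1.0, 2.0, 4.0)):
    """Run the dot-enhancement filter at each smoothing scale and keep the
    strongest response per voxel.  `scales` stands in for the predetermined
    number of smoothing scales; the values shown are placeholders."""
    response = None
    for sigma in scales:
        lam1, lam2, lam3 = hessian_eigenvalues(volume, sigma)
        d = (sigma ** 2) * dot_filter(lam1, lam2, lam3)   # sigma**2 normalization assumed
        response = d if response is None else np.maximum(response, d)
    return response
```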
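
Claims 4-9 (and 17-22, 30-35) recite thresholding the filtered image data to determine a region of interest, extracting features such as size, contrast, and pixel statistics, and applying the features to a classifier selected from an artificial neural network, a Bayesian analyzer, a linear discriminant analyzer, a K-nearest neighbor classifier, or a support vector machine. The sketch below uses connected-component labeling to form regions, a small illustrative feature set, and scikit-learn's support vector machine as one of the recited classifier options; the threshold, the feature definitions, and the toy training data are assumptions.

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC


def roi_features(filtered, original, threshold):
    """Threshold the filtered data (claim 9) and return one feature vector
    per connected region of interest (claims 5 and 7)."""
    labels, n_regions = ndimage.label(filtered > threshold)
    feats = []
    for region in range(1, n_regions + 1):
        voxels = original[labels == region]
        feats.append([
            voxels.size,                      # a) size
            voxels.mean(),                    # j) mean of pixel values
            voxels.std(),                     # i) standard deviation of pixel values
            voxels.mean() - original.mean(),  # c) contrast (illustrative definition)
        ])
    return np.asarray(feats)


if __name__ == "__main__":
    # Toy demonstration with synthetic feature vectors; in practice the
    # classifier is trained on labeled examples of the object of interest
    # (e.g., nodules, claim 41).
    rng = np.random.default_rng(0)
    train_x = rng.normal(size=(20, 4))
    train_y = np.array([0, 1] * 10)
    clf = SVC(probability=True).fit(train_x, train_y)
    # The class-1 probability serves as the measure indicative of the
    # presence of the object in the image (claim 4).
    scores = clf.predict_proba(train_x)[:, 1]
    print(scores.round(3))
```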
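
Claims 12-13 (and 25-26, 38-39) recite segmenting the image before filtering and processing the segmented image with a rolling ball algorithm. The claims do not detail that operation; the sketch below reads it as morphological closing of a binary segmentation mask with a ball-shaped structuring element, which smooths the segmented boundary. The threshold-based initial segmentation and the ball radius are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage


def ball_element(radius):
    """Binary ball-shaped structuring element of the given radius."""
    r = int(radius)
    grid = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return (grid ** 2).sum(axis=0) <= r ** 2


def segment_image(volume, intensity_threshold, ball_radius=5):
    """Crude threshold segmentation followed by a rolling-ball-style closing
    of the mask; the masked volume is the image data on which the geometric
    enhancement filters then operate (claim 12)."""
    mask = volume > intensity_threshold
    mask = ndimage.binary_closing(mask, structure=ball_element(ball_radius))
    return np.where(mask, volume, 0.0)
```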
Government Interests
[0001] The present invention was made in part with U.S. Government support under USPHS grants CA62625 and CA64370. The U.S. Government may have certain rights to this invention.