Claims
- 1. A method for detecting an object in a pixelated image and segmenting the pixelated image to separate the object from a background, comprising the steps of:
(a) providing pixelated image data for a plurality of pixelated images, where a pixelated image in the plurality of pixelated images may include an object; (b) detecting the presence of an object included within any of the pixelated images by filtering the pixelated image data, producing filtered image data in which an object is detected in a pixelated image based upon relative amplitude values of pixels corresponding to the filtered image data for said pixelated image; (c) segmenting the image in which the object was detected by defining a region of interest from the filtered image data for the pixelated image in which the object was detected, so that the region of interest comprises less than all of the filtered image data for said pixelated image, but includes the object that was detected in said pixelated image; and (d) determining object boundaries for the object using the filtered image data within the region of interest.
- 2. The method of claim 1, wherein the step of determining object boundaries comprises the steps of:
(a) applying a first binomial blur operation to the filtered image data within the region of interest, thereby approximating convolving the filtered image data in the region of interest with a Gaussian filter and producing Gaussian blurred image data; (b) executing a bitwise shift operation on the filtered image data to produce shifted image data; (c) determining a difference between the Gaussian blurred image data and the shifted image data to produce difference image data; and (d) applying a second binomial blur operation to the difference image data, thereby approximating the Laplacian of the Gaussian (LOG) blurred version of the filtered image data for the region of interest and producing LOG image data.
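The blur/shift/difference/blur sequence of claim 2 can be sketched as follows. This is a minimal illustration, not the patented implementation: the 1-2-1 binomial kernel, the zero-padded borders, and the 4-bit shift (which matches the gain of two unnormalized 1-2-1 passes per axis) are all assumptions chosen to make the example self-contained.

```python
import numpy as np

def binomial_blur(img):
    """Separable 1-2-1 binomial blur (unnormalized); approximates a
    Gaussian and has a total gain of 16 per call."""
    k = np.array([1, 2, 1])
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out

def approx_log(img):
    """Approximate LOG response per claim 2: blur, shift, difference, blur."""
    blurred = binomial_blur(img)            # (a) ~Gaussian blur, gain 16
    shifted = img.astype(np.int64) << 4     # (b) bitwise shift matches gain
    diff = blurred - shifted                # (c) difference image
    return binomial_blur(diff)              # (d) second blur -> ~LOG data
```

On a flat (objectless) region the shifted original cancels the blurred image exactly, so the LOG response is zero away from the image border.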
- 3. The method of claim 2, wherein the step of determining object boundaries further comprises the steps of:
(a) executing a threshold operation on the LOG image data to achieve multiple binary images; and (b) defining a binary mask based on the multiple binary images.
- 4. The method of claim 3, wherein the step of executing a threshold operation comprises the steps of:
(a) selecting a first threshold value greater than zero and a second threshold value greater than zero, the first threshold value being larger than the second threshold value, and using the first and second threshold values to define a first pair of binary images; and (b) creating a negative first threshold value having an absolute value equal to that of the first threshold value and a negative second threshold value having an absolute value equal to that of the second threshold value, and using the negative first threshold value and the negative second threshold value to define a second pair of binary images.
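The two mirrored threshold pairs of claims 3 and 4 yield four binary images from the LOG data. A minimal sketch (the function name and threshold values are illustrative):

```python
import numpy as np

def threshold_log(log_img, t1, t2):
    """Four binary images from LOG image data using thresholds
    +/-t1 and +/-t2, where t1 > t2 > 0 (claim 4)."""
    assert t1 > t2 > 0
    pos_strong = log_img > t1     # first pair: positive thresholds
    pos_weak   = log_img > t2
    neg_strong = log_img < -t1    # second pair: negated thresholds
    neg_weak   = log_img < -t2
    return pos_strong, pos_weak, neg_strong, neg_weak
```

The four images are then combined into a single binary mask (claim 3, step (b)), e.g. by the dilate-and-compare operation of claim 10.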
- 5. The method of claim 4, wherein the first threshold value is selected to be sufficiently large so that it is statistically unlikely for the LOG image data to fall outside a range defined between the negative first threshold value and the first threshold value.
- 6. The method of claim 4, wherein the first threshold value is selected to be sufficiently large so that there is less than one chance in a million that the LOG image data fall outside a range defined between the negative first threshold value and the first threshold value.
- 7. The method of claim 1, wherein the step of determining object boundaries comprises the step of applying at least one morphological operation to the filtered image data for the region of interest.
- 8. The method of claim 1, wherein the step of determining object boundaries comprises the steps of:
(a) approximating convolving the filtered image data in the region of interest with a Gaussian filter, producing Gaussian blurred image data; (b) approximating the Laplacian of the Gaussian blurred version of the filtered image data for the region of interest and producing LOG image data; (c) using the LOG image data to generate a plurality of binary images; (d) manipulating the plurality of binary images to define a binary mask; and (e) using the binary mask to determine object boundaries.
- 9. The method of claim 8, wherein approximating convolving the filtered image data in the region of interest with a Gaussian filter comprises the step of applying a binomial blur operation to the filtered image data within the region of interest.
- 10. The method of claim 8, wherein manipulating the plurality of binary images comprises the steps of:
(a) dilating at least one of the plurality of binary images, such that at least one of the plurality of binary images remains undilated; and (b) comparing the at least one binary image that was dilated to the at least one binary image that was not dilated to determine a contiguous region associated with the object.
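The dilate-and-compare step of claim 10 behaves like morphological reconstruction: the dilated image grows a seed region, and intersection with the undilated image confines that growth, isolating the contiguous region containing the object. A hedged sketch, assuming a cross-shaped (4-connected) structuring element and iterated dilation; the patent does not specify either:

```python
import numpy as np

def dilate(mask):
    """One binary dilation with a cross-shaped (4-connected) element,
    implemented as shifted ORs."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def reconstruct(strong, weak):
    """Grow the dilated 'strong' image inside the undilated 'weak' image
    until stable, yielding the contiguous region of the object."""
    region = strong & weak
    while True:
        grown = dilate(region) & weak     # compare dilated vs. undilated
        if np.array_equal(grown, region):
            return region
        region = grown
```

A seed pixel in one connected component recovers that whole component while leaving disconnected components untouched.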
- 11. The method of claim 1, wherein the step of defining a region of interest comprises the step of defining a region of interest that encompasses only a single object.
- 12. The method of claim 1, wherein the step of detecting the presence of an object within the filtered image data comprises the step of normalizing grey scale values comprising the filtered image data.
- 13. The method of claim 1, wherein the pixelated image data represent a plurality of channels, further comprising the step of aligning the pixelated image data with a first axis that extends across the channels before detecting objects in the pixelated image data.
- 14. The method of claim 1, wherein the step of filtering the pixelated image data comprises the step of applying a two dimensional low pass filter to the pixelated image data to produce low pass filtered image data.
- 15. The method of claim 14, wherein the step of filtering the pixelated image data further comprises the step of applying a two dimensional high pass filter to the low pass filtered image data to produce the filtered image data.
- 16. The method of claim 15, wherein the high pass filter comprises a gradient operator.
- 17. The method of claim 14, wherein the step of filtering the pixelated image data further comprises the step of applying a non-linear edge detection filter to the low pass filtered image data to produce the filtered image data.
- 18. The method of claim 1, wherein the step of filtering the pixelated image data comprises the steps of:
(a) applying a two dimensional low pass filter to the pixelated image data to produce low pass filtered image data; and (b) applying a two dimensional edge enhancement filter to the low pass filtered image data to produce the filtered image data.
- 19. The method of claim 18, wherein the step of applying the two dimensional low pass filter to the pixelated image data comprises the step of applying a boxcar filter to the pixelated image data.
- 20. The method of claim 19, wherein the step of applying the boxcar filter comprises the step of applying a 3×3 boxcar filter to the pixelated image data.
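The 3×3 boxcar low pass filter of claims 19 and 20 is a moving average over each pixel's 3×3 neighborhood. A minimal sketch, assuming zero padding at the image border (a detail the claims leave open):

```python
import numpy as np

def boxcar3(img):
    """3x3 boxcar (moving-average) low pass filter with zero padding."""
    p = np.pad(img.astype(float), 1)          # zero-pad one pixel all around
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in (-1, 0, 1):                      # sum the nine shifted copies
        for dx in (-1, 0, 1):
            out += p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out / 9.0
```

Interior pixels of a constant image are unchanged; border pixels are attenuated by the zero padding.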
- 21. The method of claim 18, wherein the step of applying the two dimensional edge enhancement filter to the low pass filtered image data comprises the step of increasing a magnitude of each pixel in the low pass filtered image data if the pixel is adjacent to another pixel that is disposed on an amplitude gradient.
- 22. The method of claim 21, further comprising the step of determining if a specific pixel is adjacent to the other pixel disposed on the amplitude gradient by calculating differences in amplitude among four adjacent pixels around the specific pixel.
- 23. The method of claim 22, wherein the step of calculating the differences in amplitude among four adjacent pixels comprises the steps of:
(a) computing the differences between the four adjacent pixels along six different axes to determine an axis of least inclination; (b) computing an absolute difference between two of the four adjacent pixels that do not define the axis of least inclination; (c) computing a projection of the axis of least inclination and the absolute difference between said two of the four adjacent pixels that do not define the axis of least inclination; and (d) using the projection to determine a magnitude with which the specific pixel is increased.
- 24. The method of claim 23, wherein the step of determining a magnitude uses a quadratic summation and a lookup table.
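Claims 21-24 describe edge enhancement from a pixel's four adjacent neighbors: the six axes joining the four neighbors are compared to find the axis of least inclination, and the remaining pair's absolute difference is combined with it by quadratic summation. The following is only a loose sketch of that idea; the exact projection and lookup-table arithmetic of claims 23-24 are not specified in the claims, so the combination rule below is an assumption.

```python
import math

def edge_boost(img, y, x):
    """Sketch: magnitude increase for pixel (y, x) from its four
    4-neighbors (claims 21-24). The sqrt is the quadratic summation;
    a lookup table could replace it for speed (claim 24)."""
    n, s = img[y - 1][x], img[y + 1][x]
    w, e = img[y][x - 1], img[y][x + 1]
    pts = [n, s, w, e]
    pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]  # six axes
    i, j = min(pairs, key=lambda p: abs(pts[p[0]] - pts[p[1]]))
    least = abs(pts[i] - pts[j])                  # axis of least inclination
    k, l = [m for m in range(4) if m not in (i, j)]
    other = abs(pts[k] - pts[l])                  # the excluded pair
    return math.sqrt(least ** 2 + other ** 2)     # quadratic summation
```

A pixel beside a step edge receives a large boost; a pixel in a flat region receives none.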
- 25. The method of claim 18, wherein the step of defining a region of interest comprises the steps of:
(a) locating a center of the object; and (b) generating boundaries for the region of interest.
- 26. The method of claim 25, wherein the step of locating the center of the object comprises the steps of:
(a) decimating the filtered image data with the two dimensional low pass filter and the two dimensional edge enhancement filter a plurality of times, producing a succession of ordered decimated image data, based upon the number of times that the step of decimating was done; (b) determining if highest order decimated image data exceeds a threshold value; (c) if the threshold value has been exceeded, determining a peak of the highest order decimated image data; and (d) extrapolating peaks for each of the other orders of decimated image data and the filtered image data, and generating boundaries for the region of interest, a peak of the filtered image data within the region of interest corresponding to a center of the object included within the region of interest.
- 27. The method of claim 26, wherein the filtered image data are decimated until the highest order corresponds to a single pixel.
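The decimation scheme of claims 26-27 can be sketched as an image pyramid: repeatedly downsample the filtered data, test the coarsest level against a threshold, and if it passes, walk the peak back down through the finer levels to locate the object center. The 2×2-mean decimation, three levels, and 4×4 refinement window below are illustrative assumptions, not the claimed low-pass/edge-enhancement decimation chain.

```python
import numpy as np

def locate_center(filtered, threshold, levels=3):
    """Sketch of claims 26-27: build a decimation pyramid, threshold the
    highest-order data, then extrapolate the peak to the full-resolution
    filtered image data."""
    pyr = [filtered]
    for _ in range(levels):                       # (a) decimate repeatedly
        f = pyr[-1]
        h, w = (f.shape[0] // 2) * 2, (f.shape[1] // 2) * 2
        pyr.append(f[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    top = pyr[-1]
    if top.max() <= threshold:                    # (b) threshold test
        return None                               # no object detected
    y, x = np.unravel_index(top.argmax(), top.shape)  # (c) coarse peak
    for f in reversed(pyr[:-1]):                  # (d) extrapolate the peak
        y, x = 2 * y, 2 * x
        ys = slice(max(y - 1, 0), min(y + 3, f.shape[0]))
        xs = slice(max(x - 1, 0), min(x + 3, f.shape[1]))
        dy, dx = np.unravel_index(f[ys, xs].argmax(), f[ys, xs].shape)
        y, x = ys.start + dy, xs.start + dx
    return y, x
```

The returned peak of the full-resolution data corresponds to the object center from which ROI boundaries are then generated.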
- 28. The method of claim 25, wherein the step of generating the boundaries for the region of interest comprises the steps of:
(a) applying a pattern analysis algorithm referenced to the center of the object to determine boundaries of the object; and (b) establishing the boundaries for the region of interest such that the region of interest encompasses the boundaries of the object.
- 29. The method of claim 25, wherein the step of generating the boundaries for the region of interest comprises the steps of:
(a) applying a pattern analysis algorithm referenced to the center of the object to determine boundaries of the object; (b) rejecting any object that is larger than a predetermined size; and (c) establishing the boundaries for the region of interest such that the region of interest encompasses the boundaries of the object.
- 30. The method of claim 18, wherein the step of applying the two dimensional edge enhancement filter to the low pass filtered image data comprises the step of increasing a magnitude of each pixel in the low pass filtered image data whenever that pixel is in a region characterized as having a substantial amplitude gradient along two orthogonal axes.
- 31. The method of claim 30, further comprising the step of determining if a specific pixel is in a region characterized as having a substantial amplitude gradient along two orthogonal axes by implementing a gradient operator defined by the specific pixel and four adjacent pixels, the gradient operator determining a magnitude by which the specific pixel is to be increased.
- 32. The method of claim 30, wherein the step of defining a region of interest comprises the steps of:
(a) using the filtered image data to generate an amplitude histogram; and (b) comparing the mean of the amplitude histogram to a threshold value, and if the mean of the amplitude histogram exceeds the threshold value, then analyzing each pixel represented by the filtered image data to determine if the pixel is above the threshold, and if so, then including at least the pixel in the region of interest.
- 33. The method of claim 32, wherein if any specific pixel in at least one of a row and a column of the pixelated image exceeds the threshold value, further comprising the steps of:
(a) selecting one of said at least one of the row and the column in which the specific pixel is disposed; and (b) including the one of the row and the column that was selected within the region of interest without analyzing any more pixels in said one of the row and the column that was selected.
- 34. The method of claim 32, wherein if any specific pixel in a row exceeds the threshold value, further comprising the step of including the row in the region of interest without analyzing any more pixels in the row, thereby defining a region of interest that comprises fewer rows than, but the same number of columns as, the pixelated image.
- 35. The method of claim 34, further comprising the step of including within the region of interest a predefined number of rows that do not contain any pixels exceeding the threshold, thereby defining a region of interest that is larger than the object disposed within the region of interest.
- 36. The method of claim 34, further comprising the step of analyzing each pixel in the region of interest that comprises fewer rows than did the pixelated image to determine if the pixel is above the threshold, and if so, then including a column in which the pixel is disposed, in the region of interest, without analyzing any more pixels in that column, thereby defining a region of interest that comprises fewer rows and fewer columns than did the pixelated image.
- 37. The method of claim 32, wherein if any specific pixel in a column exceeds the threshold value, further comprising the step of including the column in the region of interest without analyzing any more pixels in the column, thereby defining a region of interest that comprises fewer columns than, but the same number of rows as, the pixelated image.
- 38. The method of claim 37, further comprising the step of analyzing each pixel in the region of interest that comprises fewer columns than did the pixelated image to determine if the pixel is above the threshold, and if so, then including a row in which the pixel is disposed, in the region of interest without analyzing any more pixels in the row, thereby defining a region of interest that comprises fewer rows and fewer columns than in the pixelated image.
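The row-then-column scan of claims 34-38 reduces the ROI in both dimensions: any row with a pixel above threshold survives, then columns are tested only within the surviving rows. A compact sketch; the vectorized `.any` below stands in for the early-exit per-row/per-column scan ("without analyzing any more pixels") that the claims describe for efficiency.

```python
import numpy as np

def roi_rows_cols(filtered, threshold):
    """Sketch of claims 34 and 36: keep rows containing any pixel above
    threshold, then keep columns likewise within those rows, yielding an
    ROI with fewer rows and fewer columns than the pixelated image."""
    above = filtered > threshold
    rows = np.where(above.any(axis=1))[0]        # rows with a hot pixel
    cols = np.where(above[rows].any(axis=0))[0]  # columns within those rows
    return rows, cols
```

Claim 35's margin can be added by padding `rows` and `cols` with a fixed number of neighboring indices.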
- 39. The method of claim 32, wherein the threshold is determined based on a noisy image substantially free of objects.
- 40. The method of claim 32, further comprising the step of determining the threshold using a noise-filled image.
- 41. The method of claim 40, wherein the step of determining the threshold using a noise-filled image comprises the steps of:
(a) filtering pixelated image data from a noisy image with the low pass filter and the edge enhancement filter, producing filtered noisy image data; (b) generating an amplitude histogram from the filtered noisy image data; (c) determining a mean value of the amplitude histogram; and (d) scaling the mean value by a predetermined scale factor to determine the threshold.
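Steps (b)-(d) of claim 41 can be sketched directly; the bin count and the scale factor below are illustrative assumptions (claim 42 states the scale factor is determined empirically), and the input is assumed to be already-filtered noise-only data from step (a).

```python
import numpy as np

def noise_threshold(noisy_filtered, scale=4.0):
    """Sketch of claim 41 (b)-(d): amplitude histogram of filtered
    noise-only data, its mean, scaled by an empirical factor."""
    hist, edges = np.histogram(noisy_filtered, bins=64)   # (b) histogram
    centers = (edges[:-1] + edges[1:]) / 2
    mean = (hist * centers).sum() / hist.sum()            # (c) mean value
    return scale * mean                                   # (d) scaled mean
```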
- 42. The method of claim 41, further comprising the step of empirically determining a value for the predetermined scale factor, where said value has been shown to remove background noise, while retaining desired image information.
- 43. The method of claim 41, further comprising the step of repeating the steps of claim 41 using a plurality of different noisy images to obtain a plurality of different amplitude histograms, and using the plurality of different amplitude histograms to generate a threshold value that is robust against an object being included within the plurality of different noisy images, where inclusion of said object would undesirably increase the threshold value.
- 44. The method of claim 43, further comprising the step of using weighting coefficients to obtain the amplitude histograms, so that grey scale levels above the noise floor for pixels included in the filtered image data are given less weight than levels near a noise floor.
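Claim 44's weighting makes the histogram robust to a stray object in a nominally noise-only image by down-weighting grey levels far above the noise floor. The claims do not give the weighting function, so the reciprocal falloff below is purely an assumption to illustrate the idea.

```python
import numpy as np

def weighted_histogram(filtered, noise_floor, bins=64):
    """Sketch of claim 44: amplitude histogram in which grey levels above
    the noise floor are given less weight than levels near it.
    The 1/(1+excess) weighting is an illustrative assumption."""
    vals = filtered.ravel().astype(float)
    excess = np.maximum(vals - noise_floor, 0.0)   # amount above noise floor
    weights = 1.0 / (1.0 + excess)                 # far above floor -> small
    return np.histogram(vals, bins=bins, weights=weights)
```

With this weighting, a few object pixels barely shift the histogram mean, so the derived threshold stays near the true noise level.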
- 45. An image signal processing system for detecting an object in an image and segmenting the pixelated image to separate the object from a background, comprising:
(a) a memory in which a plurality of machine instructions defining a signal processing function are stored; and (b) a processor that is coupled to the memory to access the machine instructions, said processor executing said machine instructions and thereby implementing a plurality of functions, including:
(i) detecting the presence of an object included within a pixelated image corresponding to pixelated image data by filtering the pixelated image data, producing filtered image data in which an object is detected in a pixelated image based upon relative amplitude values of pixels corresponding to the filtered image data for said pixelated image; (ii) segmenting the pixelated image in which the object was detected by defining a region of interest from the filtered image data for the pixelated image in which the object was detected, so that the region of interest comprises less than all of the filtered image data for said pixelated image, but includes the object that was detected in said pixelated image; and (iii) determining object boundaries for the object using the filtered image data within the region of interest.
- 46. An image signal processing system for detecting an object in a pixelated image and segmenting the pixelated image to separate the object from a background, by processing pixelated image data to:
(a) detect an object within a pixelated image to which the pixelated image data corresponds, by filtering the pixelated image data, producing filtered image data in which an object is detected in a pixelated image based upon relative amplitude values of pixels corresponding to the filtered image data for said pixelated image; (b) segment the pixelated image in which the object was detected by defining a region of interest from the filtered image data for the pixelated image in which the object was detected, so that the region of interest comprises less than all of the filtered image data for said pixelated image, but includes the object that was detected in said pixelated image; and (c) determine object boundaries for the object using the filtered image data within the region of interest.
- 47. The system of claim 46, further comprising:
(a) a memory in which a plurality of machine instructions defining a signal processing function for carrying out the functions of claim 46 are stored; and (b) a processor that is coupled to the memory to access the machine instructions, said processor executing said machine instructions and thereby implementing said functions.
- 48. The system of claim 47, wherein said memory and said processor comprise a programmed computer.
- 49. The system of claim 47, wherein said memory and said processor comprise an application specific integrated circuit.
- 50. An article of manufacture adapted for use with a computer, comprising:
(a) a memory medium; and (b) a plurality of machine instructions, which are stored on the memory medium, said plurality of machine instructions when executed by a computer, causing the computer to:
(i) detect an object of interest within a pixelated image by filtering an image data signal; (ii) define a region of interest for the pixelated image, such that the region of interest comprises less than the pixelated image and encompasses the object of interest; and (iii) determine boundaries for objects within the region of interest.
- 51. An article of manufacture adapted for use with a processor, comprising:
(a) a memory medium; and (b) a plurality of machine instructions, which are stored on the memory medium, said plurality of machine instructions when executed by a processor, causing the processor to:
(i) detect the presence of an object within a pixelated image corresponding to pixelated image data by filtering the pixelated image data, producing filtered image data in which an object is detected in a pixelated image based upon relative amplitude values of pixels corresponding to the filtered image data for said pixelated image; (ii) segment the pixelated image in which the object was detected by defining a region of interest from the filtered image data for the pixelated image in which the object was detected, so that the region of interest comprises less than all of the filtered image data for said pixelated image, but includes the object that was detected in said pixelated image; and (iii) determine object boundaries for the object using the filtered image data within the region of interest.
RELATED APPLICATIONS
[0001] This application is based on prior copending provisional application Serial No. 60/306,126, filed Jul. 17, 2001, the benefit of the filing date of which is hereby claimed under 35 U.S.C. §119(e), and is a continuation-in-part application of prior copending patent applications, Ser. No. 10/132,059, which was filed on Apr. 24, 2002, Ser. No. 09/939,049, which was filed on Aug. 24, 2001, and Ser. No. 09/939,292, which was filed on Aug. 24, 2001, the benefits of the filing dates of which are hereby claimed under 35 U.S.C. §120.
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60306126 | Jul 2001 | US |
Continuation in Parts (3)
| Parent Number | Parent Date | Country | Child Number | Child Date | Country |
| --- | --- | --- | --- | --- | --- |
| 10132059 | Apr 2002 | US | 10200018 | Jul 2002 | US |
| 09939049 | Aug 2001 | US | 10200018 | Jul 2002 | US |
| 09939292 | Aug 2001 | US | 10200018 | Jul 2002 | US |