1. Field of Invention
The present invention relates to processing of image data. More particularly, the present invention relates to a method and apparatus for identifying objects in an image.
2. Background Information
Historically, reconnaissance information has provided important information used in planning military operations. For example, prior to the advent of photography, scouts would be sent out to collect information regarding natural resources such as lakes and rivers, enemy troop information and the like. With the advent of photography, these scouts would provide reconnaissance information by capturing a scene of enemy installations, battlefields, and the like, using photographs. As technology has advanced, new methods have become available for collecting reconnaissance information. For example, it is quite common today to have reconnaissance planes, manned or remotely controlled, or satellites capture a scene for reconnaissance purposes. In addition to conventional photographic techniques, a scene can be captured using infrared detectors and the like.
Typically, scenes captured by reconnaissance techniques have been analyzed by humans in order to determine the content of the captured scene. For example, a human would analyze a photograph to determine the location of bodies of water, the location of enemy troops, and the location of man-made objects such as buildings and lines of communication. The human who analyzed the photograph would then have to relay the determined information to people in the field, for example, to an airplane pilot, in order to identify targets. However, using humans to analyze photographs is very labor intensive. Further, there can be a considerable delay between the time when a scene is captured and the time when the information in the captured scene is relayed to persons in the field.
In accordance with a first exemplary aspect of the present invention, a method and apparatus for identifying objects in an image are provided. In accordance with this aspect, the image is processed with a gradient operator to produce a gradient magnitude and direction for each pixel. The number of different gradient directions in a portion of the processed image is determined. The portion of the processed image is identified as an object if the number of different gradient directions exceeds a threshold number of gradient directions.
In accordance with another aspect of the present invention, a method and apparatus for identifying objects in an image are provided. In accordance with this aspect, a gradient magnitude is determined for each pixel in the image. A gradient direction is also determined for each pixel in the image, the gradient direction being determined using a lookup table.
Other objects and advantages of the invention will become apparent to those skilled in the art upon reading the following detailed description of preferred embodiments, in conjunction with the accompanying drawings, wherein like reference numerals have been used to designate like elements, and wherein:
In accordance with exemplary embodiments of the present invention, portions of an image are processed to determine the number of different gradient directions present in each portion of the image. Through empirical analysis, it has been determined that closed objects, or nearly closed objects, in an image typically include a predetermined number of different edge directions. For example, if the directions in an edge direction image are quantized to one of eight unique directions, an object will normally comprise six, seven or eight different edge directions. It should be recognized that a quantization of eight edge directions is merely exemplary, and that the present invention is equally applicable to other quantizations, e.g., 16 or 32 edge directions. If other quantizations are employed, the number of different edge directions used for identifying objects in an image can be determined by one of ordinary skill in the art through routine empirical analysis.
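By way of illustration only, the following C sketch counts the distinct quantized directions in a single image window and applies the six-of-eight criterion described above. The 16×16 window size, the array layout, and the function names are assumptions introduced for the example and are not details of the description itself.

```c
/*
 * Illustrative sketch: count how many of the eight quantized edge
 * directions occur inside one image window and flag the window as
 * containing an object when the count meets a threshold.
 * The window size (16x16) and threshold (6) are assumptions.
 */
#include <stdint.h>

#define WIN           16  /* assumed window side, in pixels              */
#define NUM_DIRS       8  /* directions quantized to 8 bins              */
#define DIR_THRESHOLD  6  /* empirical threshold from the passage above  */

/* dir[y][x] holds the quantized direction (0..7) of each pixel;
 * mag[y][x] is nonzero only where the thresholded gradient magnitude
 * survived, so flat areas contribute no direction.                     */
int window_contains_object(const uint8_t dir[WIN][WIN],
                           const uint8_t mag[WIN][WIN])
{
    int seen[NUM_DIRS] = { 0 };
    int distinct = 0;

    for (int y = 0; y < WIN; ++y) {
        for (int x = 0; x < WIN; ++x) {
            if (mag[y][x] && !seen[dir[y][x]]) {
                seen[dir[y][x]] = 1;
                ++distinct;
            }
        }
    }
    return distinct >= DIR_THRESHOLD;
}
```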
For each portion of the image a horizontal binary OR operation 142 is performed, followed by a vertical binary OR operation 144. The results of these operations are input to an edge count lookup table in processing block 146, which outputs a value indicating the number of different edge directions present in the portion of the image processed by processing block 140. Specifically, the output can include the thresholded gradient magnitude and gradient direction image with an indication of the number of different directions present in each portion of the image, or an indication of which portions of the image contain objects. The output can be provided on a display or in printed form. If this processing is part of an automated system, the output can be in the form of coordinates of where objects are located in the images.
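One way to read the OR-and-lookup counting described above is to encode each pixel's quantized direction as a one-hot 8-bit mask, OR the masks horizontally along each row, OR the row results vertically, and then use a 256-entry population-count table as the edge count lookup table. The sketch below follows that reading; the one-hot encoding, window size, and names are illustrative assumptions rather than the specific encoding of processing blocks 142 through 146.

```c
/*
 * Sketch of the OR/LUT counting step, assuming each pixel's direction
 * has already been encoded as a one-hot 8-bit mask (bit d set for
 * direction d, and zero where the magnitude was thresholded away).
 */
#include <stdint.h>

#define WIN 16

static uint8_t edge_count_lut[256];  /* edge count lookup table */

/* Fill the table once; entry m holds the number of set bits in m. */
void build_edge_count_lut(void)
{
    for (int m = 0; m < 256; ++m) {
        int bits = 0;
        for (int b = 0; b < 8; ++b)
            bits += (m >> b) & 1;
        edge_count_lut[m] = (uint8_t)bits;
    }
}

/* Returns the number of distinct directions present in the window. */
int count_directions(const uint8_t dir_mask[WIN][WIN])
{
    uint8_t rows[WIN];
    uint8_t all = 0;

    for (int y = 0; y < WIN; ++y) {          /* horizontal binary OR */
        rows[y] = 0;
        for (int x = 0; x < WIN; ++x)
            rows[y] |= dir_mask[y][x];
    }
    for (int y = 0; y < WIN; ++y)            /* vertical binary OR   */
        all |= rows[y];

    return edge_count_lut[all];              /* lookup of distinct directions */
}
```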
Dx=a+2*d+g−c−2*f−i (1)
A gradient y vector is calculated in accordance with the following equation:
Dy=a+2*b+c−g−2*h−i (2)
Using the gradient x and y vectors, the gradient magnitude and gradient direction are calculated as follows:

Magnitude=sqrt(Dx^2+Dy^2) (3)

Direction=arctan(Dy/Dx) (4)
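For a single pixel, equations 1 through 4 can be evaluated directly, assuming the customary 3×3 neighborhood labeling (a, b, c across the top row, d, e, f across the middle row, and g, h, i across the bottom row, with the pixel of interest at e). The following sketch is illustrative only:

```c
/*
 * Direct evaluation of equations (1) through (4) for one pixel,
 * assuming the 3x3 neighborhood labeling
 *      a b c
 *      d e f
 *      g h i
 */
#include <math.h>

void sobel_full(const double n[3][3], double *magnitude, double *direction)
{
    double a = n[0][0], b = n[0][1], c = n[0][2];
    double d = n[1][0],              f = n[1][2];
    double g = n[2][0], h = n[2][1], i = n[2][2];

    double Dx = a + 2 * d + g - c - 2 * f - i;   /* equation (1) */
    double Dy = a + 2 * b + c - g - 2 * h - i;   /* equation (2) */

    *magnitude = sqrt(Dx * Dx + Dy * Dy);        /* equation (3) */
    /* equation (4); atan2 is used so the Dx = 0 case and all four
     * quadrants are handled, returning the direction in radians.   */
    *direction = atan2(Dy, Dx);
}
```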
Returning now to
Once the number of different gradient directions is determined, a confidence value is generated representing the likelihood that a portion of the image identified as containing an object actually contains an object.
As discussed above, the present invention employs the conventional Sobel operator to determine the gradient directions. However, the conventional Sobel operator, as described by equations 1 through 4 above, requires 11 additions, 6 multiplications, 1 division, 1 square root, and 1 inverse tangent. Conventionally, the number of operations is decreased by performing the Sobel operation in accordance with equations 5 through 7 as follows:
Dx=a+2*(d−f)+g−c−i (5)
Dy=a+2*(b−h)+c−g−i (6)
Magnitude=abs(Dx)+abs(Dy) (7)
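A corresponding sketch of the reduced-operation form in equations 5 through 7 is shown below, again assuming the same 3×3 labeling and integer pixel values; the shared sub-expressions (d−f) and (b−h) are doubled once each, and the magnitude is approximated by |Dx|+|Dy| so that no square root is needed.

```c
/*
 * Reduced-operation form of the Sobel responses, equations (5)-(7),
 * with the same a..i neighborhood labeling as before.
 */
#include <stdlib.h>

void sobel_fast(const int n[3][3], int *Dx, int *Dy, int *magnitude)
{
    int a = n[0][0], b = n[0][1], c = n[0][2];
    int d = n[1][0],              f = n[1][2];
    int g = n[2][0], h = n[2][1], i = n[2][2];

    *Dx = a + 2 * (d - f) + g - c - i;   /* equation (5) */
    *Dy = a + 2 * (b - h) + c - g - i;   /* equation (6) */
    *magnitude = abs(*Dx) + abs(*Dy);    /* equation (7) */
}
```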
As illustrated in
It can be desirable to further reduce the number of operations required to determine the gradient direction. Prior to describing the exemplary technique for reducing the number of operations in accordance with the present invention, a review of the gradient directions of the conventional Sobel operation will be provided in connection with
By rotating the boundaries of the gradient directions 22.5°, the calculations for the Sobel operation can be simplified.
Dx=l+k+[(d−f)<<1] (8)
Dy=l−k+[(b−h)<<1] (9)
wherein l=a+i and k=g−c, and the double less-than sign (<<) represents a one-bit binary shift to the left, i.e., multiplication by two.
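The shift-based form of equations 8 and 9 can be sketched in the same illustrative style; the helper values l and k are computed once and the doubling is the equations' one-bit left shift. (The doubling is written as a multiplication below only because left-shifting a negative value is not portable in standard C; the arithmetic is identical.)

```c
/*
 * Rotated-boundary responses, equations (8) and (9), with
 * l = a + i and k = g - c and the same a..i labeling as above.
 */
void sobel_rotated(const int n[3][3], int *Dx, int *Dy)
{
    int a = n[0][0], b = n[0][1], c = n[0][2];
    int d = n[1][0],              f = n[1][2];
    int g = n[2][0], h = n[2][1], i = n[2][2];

    int l = a + i;
    int k = g - c;

    *Dx = l + k + 2 * (d - f);   /* equation (8): (d - f) << 1 */
    *Dy = l - k + 2 * (b - h);   /* equation (9): (b - h) << 1 */
}
```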
Using the x vector and the y vector, a lookup table in
While the 22.5° rotation described above provides an adequate approximation of the gradient direction, this approximation can be improved. Specifically, equations 10 through 15 below take advantage of the decrease in operations achieved by the 22.5° rotation while compensating for this rotation.
D′x=Dx*15137−Dy*6270 (10)
D′y=Dx*6270+Dy*15137 (11)
d1=[CMP(D′x, 0)]>>1 (12)
d2=CMP(D′y, 0) (13)
d3=[CMP(abs(D′x), abs(D′y))]>>2 (14)
Direction=LUT((d3)!(d2)!(d1)) (15)
where CMP represents a comparison operation, LUT represents a lookup table operation, and an exclamation point represents a binary OR operation.
In equations 10 and 11 the values 15,137 and 6,270 are employed to compensate for the 22.5° rotation. Specifically, the value 15,137 represents the cosine of 22.5° multiplied by a scaling factor (2^14=16,384), and the value 6,270 represents the sine of 22.5° multiplied by the same scaling factor.
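Equations 10 through 15 can be read as a fixed-point rotation followed by a three-bit octant classification. The sketch below follows one plausible reading: each CMP is taken to yield 0 or 1 and is shifted into its own bit of a lookup-table index. The exact bit positions implied by the shift notation in equations 12 through 14, and the table contents, are not specified in the passage above, so both are illustrative assumptions here.

```c
/*
 * Illustrative reading of equations (10)-(15): fixed-point rotation by
 * 22.5 degrees (constants 15137 = cos 22.5 * 16384, 6270 = sin 22.5 * 16384)
 * followed by a three-comparison octant lookup.
 */
#include <stdint.h>
#include <stdlib.h>

/* Maps the 3-bit index to one of eight direction codes; the contents
 * of this table are placeholders, not the patent's actual table.      */
static const uint8_t direction_lut[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };

uint8_t quantized_direction(int32_t Dx, int32_t Dy)
{
    /* Equations (10) and (11): rotate (Dx, Dy) by 22.5 degrees in
     * fixed point; 64-bit intermediates avoid overflow.              */
    int64_t Dxp = (int64_t)Dx * 15137 - (int64_t)Dy * 6270;
    int64_t Dyp = (int64_t)Dx * 6270  + (int64_t)Dy * 15137;

    /* Equations (12)-(14): three comparisons select the octant.
     * Bit positions chosen here (d1 -> bit 0, d2 -> bit 1, d3 -> bit 2)
     * are an assumption about the packing.                            */
    unsigned d1 = (Dxp < 0) ? 1u : 0u;                  /* sign of D'x     */
    unsigned d2 = (Dyp < 0) ? 1u : 0u;                  /* sign of D'y     */
    unsigned d3 = (llabs(Dxp) < llabs(Dyp)) ? 1u : 0u;  /* |D'x| vs |D'y|  */

    /* Equation (15): OR the bits together and look up the direction.  */
    return direction_lut[(d3 << 2) | (d2 << 1) | d1];
}
```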
For ease of understanding, the present invention has been generally described as performing processing and logical operations. The processing and logical operations can be implemented using a variety of mechanisms including, but not limited to, Application Specific Integrated Circuits (ASICs), a microprocessor which executes software code, and hard-wired logic circuits. Moreover, the tables described herein can be stored in a variety of devices including buffers, caches, Random Access Memory (RAM), Read Only Memory (ROM), and the like.
The present invention has been described with reference to several exemplary embodiments. However, it will be readily apparent to those skilled in the art that it is possible to embody the invention in specific forms other than those of the exemplary embodiments described above. This may be done without departing from the spirit of the invention. These exemplary embodiments are merely illustrative and should not be considered restrictive in any way. The scope of the invention is given by the appended claims, rather than the preceding description, and all variations and equivalents which fall within the range of the claims are intended to be embraced therein.
This application is a continuation of application Ser. No. 10/379,909, filed Mar. 6, 2003, now U.S. Pat. No. 6,912,309 which is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4201958 | Ahamed | May 1980 | A |
4396903 | Habicht et al. | Aug 1983 | A |
4618989 | Tsukune et al. | Oct 1986 | A |
4644172 | Sandland et al. | Feb 1987 | A |
4908872 | Toriu et al. | Mar 1990 | A |
5019903 | Dougall et al. | May 1991 | A |
5119324 | Ahsan | Jun 1992 | A |
5355446 | Maayan | Oct 1994 | A |
5357353 | Hirota | Oct 1994 | A |
5412197 | Smith | May 1995 | A |
5867592 | Sasada et al. | Feb 1999 | A |
5936674 | Kim | Aug 1999 | A |
5940539 | Kondo et al. | Aug 1999 | A |
6134353 | Makram-Ebeid | Oct 2000 | A |
6208763 | Avinash | Mar 2001 | B1 |
6289112 | Jain et al. | Sep 2001 | B1 |
6360005 | Aloni et al. | Mar 2002 | B1 |
6366699 | Kuwano et al. | Apr 2002 | B1 |
6377698 | Cumoli et al. | Apr 2002 | B1 |
6408109 | Silver et al. | Jun 2002 | B1 |
6535651 | Aoyama et al. | Mar 2003 | B1 |
6658145 | Silver et al. | Dec 2003 | B1 |
6661842 | Abousleman | Dec 2003 | B1 |
6697537 | Norimatsu | Feb 2004 | B2 |
6775409 | Brunelli et al. | Aug 2004 | B1 |
6807286 | Krumm et al. | Oct 2004 | B1 |
6912309 | Lee | Jun 2005 | B2 |
7149356 | Clark et al. | Dec 2006 | B2 |
7430303 | Sefcik et al. | Sep 2008 | B2 |
7627178 | Suzuki et al. | Dec 2009 | B2 |
20020021365 | Yang et al. | Feb 2002 | A1 |
20030122815 | Deering | Jul 2003 | A1 |
20030223627 | Yoshida et al. | Dec 2003 | A1 |
Number | Date | Country
---|---|---
20060062458 A1 | Mar 2006 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 10379909 | Mar 2003 | US
Child | 11166374 | | US