The present invention relates to a system for observing objects in three-dimensional space using structured light. More particularly, the invention relates to a multi-camera, three-dimensional sensing system providing non-contact gauge measurements of an object, using known object templates and known epi-polar geometry to identify observable projections of structured light on the surface of the object and to reconstruct corrupted portions of images of the projected structured light.
As shown in
First, a single pixel on each stripe in an image from a first camera is selected, as seen in FIG. 2. Given that the position of the first camera lens in space is known and the selected pixel is known, the point corresponding to the selected pixel is known to lie on a known line drawn from the lens center out into space. This line appears as a line in an image from a second camera, and is called the epi-polar line of the selected point in the first image. Since the position of the lens center of the second camera is also known, this epi-polar line can be calculated and drawn in the second image. The epi-polar line, when drawn in the second image, will intersect at least one, and most likely several, of the stripes of the second video image. One of the pixels where the epi-polar line and a stripe intersect represents the selected point in the first image. The actual coordinate location in space of the point corresponding to any of these intersection points is determined by simple triangulation. Since the position in space of each plane of light which created the stripes is also known, the single intersection point which corresponds to the selected point in the first image is ascertained by determining the three-dimensional coordinate of each intersection point and testing whether it lies on one of the known planes of light. The intersection point which lies closest to a known plane of light is taken as the selected point.
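The selection logic described above can be illustrated with a brief sketch. This is a minimal, hypothetical example only: it assumes each camera is modeled by its lens center and a unit ray direction per pixel, and that each plane of light is given by a unit normal and an offset; the function names (triangulate, distance_to_plane, select_matching_point) are illustrative and are not part of the disclosure.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Return the 3-D point nearest to both rays, each given by a camera
    center c and a unit direction d."""
    # Solve for scalars t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|.
    A = np.stack([d1, -d2], axis=1)                        # 3x2 system
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    return 0.5 * ((c1 + t[0] * d1) + (c2 + t[1] * d2))     # midpoint of closest approach

def distance_to_plane(point, plane):
    """Unsigned distance from a 3-D point to a plane (unit normal n, offset d): n.x = d."""
    n, d = plane
    return abs(np.dot(n, point) - d)

def select_matching_point(c1, ray1, c2, candidate_rays, light_planes):
    """Among the epi-polar/stripe intersection candidates in the second image,
    keep the one whose triangulated 3-D point lies closest to a known plane of light."""
    best, best_err = None, np.inf
    for ray2 in candidate_rays:
        p = triangulate(c1, ray1, c2, ray2)
        err = min(distance_to_plane(p, pl) for pl in light_planes)
        if err < best_err:
            best, best_err = p, err
    return best
```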
The surface of many objects includes regions which are shiny or which otherwise have poor reflectivity characteristics. Structured light projected onto these surfaces results in multiple reflections or clutter, causing poor laser stripe identification in the generated images. Furthermore, in some situations, entire image regions which are representative of portions of an object's surface may become corrupted due to noise or interference.
Accordingly, there is a need for a method of reconstructing projected laser stripes in those portions of images which are heavily corrupted due to reflection, noise, or interference.
Briefly stated, the present invention sets forth a method for generating a template structure representative of the surface of an object, and for utilizing the template structure to synthesize projected laser stripe data points in corrupted image regions, thereby improving the accuracy of current laser range finding and measurement systems.
The foregoing and other objects, features, and advantages of the invention as well as presently preferred embodiments thereof will become more apparent from the reading of the following description in connection with the accompanying drawings.
In the accompanying drawings which form part of the specification:
Corresponding reference numerals indicate corresponding parts throughout the several figures of the drawings.
The following detailed description illustrates the invention by way of example and not by way of limitation. The description clearly enables one skilled in the art to make and use the invention, describes several embodiments, adaptations, variations, alternatives, and uses of the invention, including what is presently believed to be the best mode of carrying out the invention.
Using a three-dimensional measurement system such as the type shown in
The template structures represent prior knowledge of the surface of the objects, such that features in subsequent images, such as seen in
Predetermined template structures can be used in various ways to increase the fidelity of the laser stripe localization process. By using the template structures as a guide, a two-dimensional locally matched filter may be generated for each point in an image 10 of projected laser stripes 12A-12H on the surface of an object 13, such as shown in FIG. 3. Next, a flow field is established which defines an orientation for each point in an image. The flow field, which may be either a tangential flow field, as seen in
where R is a curve which emanates from pixel (i,j) and is always tangential to the flow field, r is a measure of arc length along curve R, and image(r) is the image intensity value for a point on curve R. The Gaussian term localizes this one-dimensional filter.
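The first-pass expression itself is not reproduced in the text above. A plausible form consistent with this description, assuming a Gaussian weight of scale σ (the scale is an assumption, not specified here), is a weighted line integral along R:

$$t(i,j) = \int_{R} \mathrm{image}(r)\, e^{-r^{2}/2\sigma^{2}}\, dr$$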
In the second pass, each pixel (i,j) is given the value:
where P is a curve which emanates from pixel (i,j) and is always perpendicular to the flow field, and p is a measure of arc length along curve P. The result of this two-pass approach is a two-dimensional local matched filter responsive to the original image 10. The matched filtering enhances much of the true signal while suppressing unwanted noise. Alternatively, a single-pass approach employing a single two-dimensional filter could be used, or separable one-dimensional filters which are not Gaussian could be employed within the scope of the invention.
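A plausible form of the second-pass value referenced above, under the same assumed Gaussian weighting, integrates the first-pass result t along the perpendicular curve P:

$$v(i,j) = \int_{P} t(p)\, e^{-p^{2}/2\sigma^{2}}\, dp$$

where t(p) denotes the first-pass value at the point of P at arc length p.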
Once the image 10 has been processed with the filters to obtain values of v(i,j) and t(i,j) for each pixel, non-maximal suppression techniques are utilized to identify the center of each laser stripe 12A-12H within the image 10. In one embodiment, each raster line in an image is scanned to identify points where t(i,j) is a local maximum with respect to the raster line. These points represent the centers of the detected laser stripe structures in the raster line. In this way, the laser stripe signal-to-noise ratio is increased, resulting in an increase in measurement accuracy.
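A minimal sketch of this raster-line non-maximal suppression is shown below. It assumes the filter response (t or v in the notation above) is held in a two-dimensional array; the names stripe_centers and min_response are illustrative only.

```python
import numpy as np

def stripe_centers(response, min_response=0.0):
    """Mark, for each raster line of the filter response, the pixels that are
    local maxima along that line; these are candidate laser-stripe centers."""
    centers = np.zeros_like(response, dtype=bool)
    for i in range(response.shape[0]):           # scan one raster line at a time
        row = response[i]
        for j in range(1, row.size - 1):
            if row[j] > min_response and row[j] >= row[j - 1] and row[j] > row[j + 1]:
                centers[i, j] = True             # local maximum along the raster line
    return centers
```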
As seen in
As seen in
In one embodiment, shown in
The position and orientation of the cameras generating the first and second uncorrupted images 14, 16 are known; hence, each point on the laser stripe 12 observed by the two cameras is known to lie on two separate lines. The three-dimensional location of each such point is the intersection of the two lines and can be determined by triangulation. Using the known position of a point in an uncorrupted image which corresponds to a data point of a corrupted image, together with the known position of the camera which generated the corrupted image, the position of a data point representing the target point on the laser stripe in the corrupted image can be accurately synthesized through projection.
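The triangulate-then-project step can be sketched as follows. This assumes each camera is characterized by a 3x4 projection matrix, which is one common way of encoding the known camera positions and orientations but is not specified in the text; the function names synthesize_point and project are hypothetical.

```python
import numpy as np

def project(P, X):
    """Project a 3-D point X into an image using a 3x4 camera projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]                          # pixel coordinates

def synthesize_point(P_a, P_b, P_corrupt, x_a, x_b):
    """Triangulate a stripe point seen at pixels x_a, x_b in two uncorrupted views,
    then project it into the corrupted camera to synthesize the missing data point."""
    # Linear (DLT) triangulation from the two uncorrupted views.
    A = np.vstack([
        x_a[0] * P_a[2] - P_a[0],
        x_a[1] * P_a[2] - P_a[1],
        x_b[0] * P_b[2] - P_b[0],
        x_b[1] * P_b[2] - P_b[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    X = X[:3] / X[3]                             # homogeneous -> 3-D world point
    return project(P_corrupt, X)                 # synthesized pixel in the corrupted image
```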
In view of the above, it will be seen that the several objects of the invention are achieved and other advantageous results are obtained. As various changes could be made in the above constructions without departing from the scope of the invention, it is intended that all matter contained in the above description or shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.