Claims
- 1. A method of controlling an object's motion through a viewed space comprising the steps of:
acquiring a stereo image of said viewed space wherein said stereo image comprises an image set; computing a set of 3D features from said stereo image; filtering said set of 3D features to generate a set of filtered 3D features; computing a trajectory of said set of filtered 3D features; and generating a control signal influencing said object's motion in response to said trajectory.
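For concreteness, a minimal sketch of the claim 1 pipeline as a per-frame loop follows. The callables and their signatures are hypothetical placeholders; the claim does not prescribe any API, only the sequence of steps.

```python
from typing import Callable, Optional
import numpy as np

def control_loop(
    acquire: Callable[[], tuple],                      # returns an image set (e.g. left/right pair)
    features_3d: Callable[[tuple], np.ndarray],        # claim 2 / claim 12 step
    noise_filter: Callable[[np.ndarray], np.ndarray],  # claim 13 step
    trajectory_of: Callable[[np.ndarray, np.ndarray], np.ndarray],  # claim 14 step
    control: Callable[[np.ndarray], None],             # emit the control signal
    frames: int,
) -> None:
    """One pass per frame over the claim 1 steps (all callables are stubs)."""
    prev: Optional[np.ndarray] = None
    for _ in range(frames):
        image_set = acquire()
        feats = noise_filter(features_3d(image_set))
        if prev is not None:
            control(trajectory_of(prev, feats))
        prev = feats
```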
- 2. The method according to claim 1 wherein said step of computing a set of 3D features includes the steps of:
edge-processing said stereo image to generate a plurality of connected edgelets; identifying connected edgelets having length greater than a predetermined threshold as features; matching features generated from different images in said image set to generate disparities; and computing 3D locations of feature points according to said disparities and camera geometry.
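The final step of claim 2 is standard stereo triangulation. A minimal sketch follows, assuming rectified pin-hole cameras with focal length f (in pixels) and a horizontal baseline; the claim only requires that 3D locations follow from "disparities and camera geometry", so this formula is one concrete instance, not the only one.

```python
import numpy as np

def triangulate(x_l, y_l, disparity, f, baseline):
    """Back-project a matched feature point to 3D camera coordinates
    under standard rectified-stereo geometry (an assumption)."""
    Z = f * baseline / disparity   # depth from disparity
    X = x_l * Z / f                # lateral position
    Y = y_l * Z / f                # vertical position
    return np.array([X, Y, Z])

# e.g. a feature at pixel (120, -40) with 8 px disparity,
# f = 800 px, baseline = 0.12 m  ->  [1.8, -0.6, 12.0]
print(triangulate(120, -40, 8, 800, 0.12))
```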
- 3. The method according to claim 2 wherein said step of computing a set of 3D features further comprises the steps of:
merging horizontal and vertical disparities to form a set of selected disparities; wherein said step of computing 3D locations of feature points is performed according to said set of selected disparities and said camera geometry.
- 4. The method according to claim 1 further comprising the step of:
segmenting said 3D features to identify mutually exclusive subsets of boundary points as objects; wherein said set of filtered 3D features is generated by filtering ground plane noise from said objects.
- 5. The method according to claim 2 wherein said edge processing step detects features by performing:
a parabolic smoothing step; a non-integral sub-sampling step at a predefined granularity; a Sobel edge detection step; a true peak detection step; and a chaining step.
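A sketch of the claim 5 front end, assuming NumPy/SciPy. The claim names the steps but not their parameters, so the kernel, zoom factor, and threshold below are illustrative assumptions, and Gaussian smoothing stands in for the claimed parabolic smoothing.

```python
import numpy as np
from scipy import ndimage

def detect_edge_peaks(img: np.ndarray, granularity: float = 1.5) -> np.ndarray:
    """Smoothing -> non-integral sub-sampling -> Sobel -> peak detection.
    Chaining of peak pixels into edgelets would follow as a final step."""
    smoothed = ndimage.gaussian_filter(img.astype(float), sigma=1.0)
    # non-integral sub-sampling at a predefined granularity
    sub = ndimage.zoom(smoothed, 1.0 / granularity, order=1)
    # Sobel edge detection
    gx = ndimage.sobel(sub, axis=1)
    gy = ndimage.sobel(sub, axis=0)
    mag = np.hypot(gx, gy)
    # "true peak" detection: keep only local maxima of gradient magnitude
    peaks = (mag == ndimage.maximum_filter(mag, size=3)) & (mag > mag.mean())
    return peaks
```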
- 6. The method according to claim 2 wherein said matching step includes the steps of:
matching features from a first image to a second image to identify disparities; constraining an initial set of possible matches of said disparities for each feature using an epipolar constraint; characterizing each of said possible matches by an initial strength of match (SOM) by comparing the strength and orientation of said edgelets; and enforcing a smoothness constraint within a preselected allowable disparity gradient.
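A sketch of the claim 6 matching step, assuming rectified images so the epipolar constraint reduces to a scanline check. The feature representation and the exact SOM formula are assumptions; the claim only says the SOM compares edgelet strength and orientation.

```python
def initial_matches(left_feats, right_feats, epipolar_tol=1.0, max_disp=64):
    """Candidate matches constrained to the epipolar band, scored by an
    initial strength of match (SOM). Features are dicts with keys
    x, y, strength, orientation (an assumed representation)."""
    matches = []
    for i, f in enumerate(left_feats):
        for j, g in enumerate(right_feats):
            # epipolar constraint: same scanline in rectified images
            if abs(f["y"] - g["y"]) > epipolar_tol:
                continue
            d = f["x"] - g["x"]
            if not (0 < d <= max_disp):
                continue
            # SOM: penalize strength and orientation differences
            som = 1.0 / (1.0 + abs(f["strength"] - g["strength"])
                             + abs(f["orientation"] - g["orientation"]))
            matches.append((i, j, d, som))
    return matches
```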
- 7. The method according to claim 6 wherein said step of enforcing a smoothness constraint comprises the steps of:
updating the SOM of each correspondence by comparing correspondences of neighboring features to the correspondence under consideration; and enforcing uniqueness by iteratively identifying a match having a maximum matching strength for both of its constituent features and eliminating all other matches associated with each constituent of the identified match.
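The uniqueness portion of claim 7 can be sketched as a greedy winner-take-all pass over the candidate matches; the SOM-update portion is omitted here. The tuple layout follows the claim 6 sketch above.

```python
def enforce_uniqueness(matches):
    """Repeatedly accept the match whose SOM is maximal for both of its
    constituent features, then drop every other match touching either
    feature. `matches` holds (left_id, right_id, disparity, som) tuples."""
    accepted = []
    remaining = sorted(matches, key=lambda m: m[3], reverse=True)
    used_left, used_right = set(), set()
    for m in remaining:                     # descending SOM order
        li, ri = m[0], m[1]
        if li in used_left or ri in used_right:
            continue                        # a constituent was already claimed
        accepted.append(m)                  # maximal for both constituents
        used_left.add(li)
        used_right.add(ri)
    return accepted
```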
- 8. The method according to claim 6 wherein said first and second image comprise a right and left image:
wherein features from said right and left images are matched to identify horizontal disparities; and further matching features from either said right or left image to a top image to identify vertical disparities.
- 9. The method according to claim 3 wherein said merging step includes the steps of multiplexing said disparities by:
selecting said horizontal disparities to be passed along if an orientation of said feature is between 45 and 135 degrees or between 225 and 315 degrees; and selecting said vertical disparities to be passed along if said orientation of said feature is not between 45 and 135 degrees or between 225 and 315 degrees.
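Claim 9's multiplexer reduces to a simple selection function; the angle normalization outside the claimed bands is an assumption.

```python
def select_disparity(orientation_deg, horizontal_disp, vertical_disp):
    """Pass along the horizontal disparity for orientations in the
    claimed bands, otherwise the vertical disparity (claim 9)."""
    o = orientation_deg % 360
    if 45 <= o <= 135 or 225 <= o <= 315:
        return horizontal_disp
    return vertical_disp
```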
- 10. The method according to claim 4 wherein said step of segmenting includes the steps of:
generating initial clusters according to chain organization of said edgelets; breaking chains of features into contiguous segments based on abrupt changes in z between successive points; and merging the two closest clusters based on a minimum distance criterion.
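A sketch of the claim 10 segmentation, assuming features arrive as 3D points tagged with their edgelet chain; the z-jump and merge-distance thresholds are illustrative.

```python
import numpy as np

def segment(points, chain_ids, z_jump=0.3, merge_dist=0.5):
    """Seed clusters from edgelet chains, split a chain where z jumps
    abruptly between successive points, then greedily merge the two
    closest clusters while their gap is under a threshold."""
    clusters = []
    for cid in np.unique(chain_ids):
        chain = points[chain_ids == cid]
        # break on abrupt changes in z between successive points
        breaks = np.where(np.abs(np.diff(chain[:, 2])) > z_jump)[0] + 1
        clusters.extend(np.split(chain, breaks))

    def gap(a, b):   # minimum point-to-point distance between clusters
        return min(np.linalg.norm(p - q) for p in a for q in b)

    merged = True
    while merged and len(clusters) > 1:
        merged = False
        pairs = [(gap(clusters[i], clusters[j]), i, j)
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        d, i, j = min(pairs)
        if d < merge_dist:                  # merge the two closest clusters
            clusters[i] = np.vstack([clusters[i], clusters[j]])
            del clusters[j]
            merged = True
    return clusters
```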
- 11. The method according to claim 4 wherein said segmenting step includes the step of selecting only objects wherein a 2D distance between the objects along a particular plane exceeds a preset spacing threshold.
- 12. The method according to claim 1 wherein said step of computing a set of 3D features includes the steps of:
rectifying right and left images to generate a right and left rectified image; matching features from said right and left rectified image to produce a dense disparity image; edge-processing either said right or said left rectified image to generate a plurality of connected edgelets; identifying connected edgelets having length greater than a predetermined threshold as features; mapping locations of said features into said dense disparity image to generate sparsified disparities; and computing 3D locations of feature points according to said sparsified disparities and camera geometry.
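The sparsification step of claim 12 is, in essence, indexing the dense disparity image at the detected feature locations; a one-function sketch, assuming NumPy arrays.

```python
def sparsify(dense_disparity, feature_rows, feature_cols):
    """Keep disparities only at feature locations (claim 12).
    `dense_disparity` is a 2D array; the two index arrays give the
    row/column of each feature in the rectified image."""
    return dense_disparity[feature_rows, feature_cols]
```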
- 13. The method according to claim 1 wherein said step of filtering further comprises the steps of:
converting said 3D features to a ground plane coordinate system; eliminating features having excessive or insufficient range, excessive lateral distance, excessive height, or insufficient distance from said ground plane; projecting remaining features into said ground plane to generate projected features; converting said projected features to a 2D image; obtaining distinct regions wherein each pixel represents a plurality of feature points; scoring features in said distinct regions using a scoring function to generate region scores; and accumulating said region scores and comparing said accumulated scores to a predetermined threshold to determine if an object is present or absent.
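A sketch of the claim 13 filter chain in ground-plane coordinates (x lateral, y range, z height). All limits, the cell size, and the count-based region scoring are illustrative assumptions; the claim leaves the scoring function open.

```python
import numpy as np
from scipy import ndimage

def object_present(points, max_range=5.0, max_lateral=2.0, max_height=2.5,
                   min_clearance=0.05, cell=0.05, threshold=20.0):
    """Eliminate out-of-bounds features, project survivors to the ground
    plane as a 2D image, score distinct regions, compare to a threshold."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = ((y > 0) & (y < max_range) &        # range limits
            (np.abs(x) < max_lateral) &        # lateral limit
            (z < max_height) &                 # height limit
            (z > min_clearance))               # sufficient ground clearance
    kept = points[keep]
    if len(kept) == 0:
        return False
    # quantize the ground-plane projection into a 2D image, so each
    # pixel represents a plurality of feature points
    ix = ((kept[:, 0] + max_lateral) / cell).astype(int)
    iy = (kept[:, 1] / cell).astype(int)
    image = np.zeros((int(max_range / cell) + 1,
                      int(2 * max_lateral / cell) + 1))
    np.add.at(image, (iy, ix), 1.0)
    # obtain distinct regions and score each one (here: total point count)
    labels, n = ndimage.label(image > 0)
    scores = ndimage.sum(image, labels, index=np.arange(1, n + 1))
    return scores.sum() >= threshold           # accumulated score vs. threshold
```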
- 14. The method according to claim 1 wherein said step of computing a trajectory further comprises the step of correlating segmented features in a first frame with features around an expected object position in a following frame.
- 15. A method of computing an object's trajectory comprising the steps of:
acquiring a stereo image of at least part of said object; computing a set of 3D features from said stereo image; filtering ground plane noise from said set of 3D features to generate a set of filtered 3D features; and computing a trajectory of said set of filtered 3D features by correlating filtered 3D features in an area around a suspected position of said part in a following frame.
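The correlation step of claims 14 and 15 can be sketched as prediction plus a nearest-feature search; the constant-velocity prediction model and search radius are assumptions.

```python
import numpy as np

def update_track(track, new_points, search_radius=0.5):
    """Extrapolate the object's position into the next frame, then
    correlate it with the filtered 3D features that fall inside a
    search window around that prediction. `track` is a list of 3D
    positions; `new_points` is an (n, 3) array from the next frame."""
    if len(track) >= 2:
        predicted = 2 * track[-1] - track[-2]  # constant-velocity extrapolation
    else:
        predicted = track[-1]
    d = np.linalg.norm(new_points - predicted, axis=1)
    nearby = new_points[d < search_radius]
    if len(nearby) == 0:
        return track                           # no correlated features this frame
    track.append(nearby.mean(axis=0))          # new position = centroid of matches
    return track
```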
- 16. A method of determining an object's trajectory comprising the steps of:
viewing an area with stereo cameras; generating a feature when an object enters said viewing area; measuring a height of said feature relative to a ground plane; clustering said features having a height above said ground plane in 3D space to generate objects; and tracking said objects in multiple frames.
- 17. A method for monitoring a passageway comprising:
providing a plurality of 3D images of a passageway over a duration of time, the plurality of 3D images comprising a reference image and a disparity map; calculating at least one motion vector for the duration of time using the reference image and disparity map; and monitoring the passageway using the at least one motion vector.
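A sketch of a claim 17 motion vector, assuming standard rectified-stereo geometry so that a pixel location in the reference image plus its disparity from the disparity map yields a 3D point in each frame; the geometry constants are illustrative.

```python
import numpy as np

def motion_vector(p0, d0, p1, d1, f, baseline, dt):
    """3D motion vector for a tracked point observed at reference-image
    pixel p0 with disparity d0, and dt seconds later at p1 with d1."""
    def to_3d(p, d):
        Z = f * baseline / d                  # depth from the disparity map
        return np.array([p[0] * Z / f, p[1] * Z / f, Z])
    return (to_3d(p1, d1) - to_3d(p0, d0)) / dt

# e.g. a point drifting across the passageway between two frames 0.1 s apart
print(motion_vector((120, -40), 8.0, (118, -40), 8.5, 800.0, 0.12, 0.1))
```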
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] The present application is a continuation-in-part of U.S. application Ser. No. 10/388,925, filed Mar. 14, 2003, which claims the benefit of Provisional Application No. 60/408,266, filed Sep. 5, 2002.
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60/408,266 | Sep 2002 | US |
Continuation in Parts (1)
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 10/388,925 | Mar 2003 | US |
| Child | 10/749,335 | Dec 2003 | US |