a illustrates a top view of the range-cued object detection system incorporated in a vehicle, viewing a plurality of relatively near-range objects, corresponding to
b illustrates a right-side view of a range-cued object detection system incorporated in a vehicle, viewing a plurality of relatively near-range objects, corresponding to
c illustrates a front view of the stereo cameras of the range-cued object detection system incorporated in a vehicle, corresponding to
a illustrates a geometry of a stereo-vision system;
b illustrates an imaging-forming geometry of a pinhole camera;
a illustrates a top-down view of range bins corresponding to locations of valid range measurements for regions of interest associated with the plurality of objects illustrated in
b illustrates a first histogram of valid range values with respect to cross-range location for the range bins illustrated in
c illustrates a second histogram of valid range values with respect to down-range location for the range bins illustrated in
d illustrates a plurality of predetermined two-dimensional clustering bins superimposed upon the top-down view of range bins of
e illustrates the plurality of one-dimensional cross-range cluster boundaries associated with
f illustrates the plurality of one-dimensional down-range cluster boundaries associated with
a illustrates corresponding best-fit rectangles associated with each of the regions of interest (ROI) illustrated in
b illustrates a correspondence between the best-fit ellipse illustrated in
a is a copy of the first histogram illustrated in
b illustrates a first filtered histogram generated by filtering the first histogram of
a is a copy of the second histogram illustrated in
b illustrates a second filtered histogram generated by filtering the second histogram of
a illustrates a half-tone image of a vehicle object upon which is superimposed 30 associated uniformly-spaced radial search paths and associated edge locations of the vehicle object;
b illustrates a profile of the edge locations from
a illustrates a half-tone image of a vehicle object upon which is superimposed 45 associated uniformly-spaced radial search paths and associated edge locations of the vehicle object;
b illustrates a profile of the edge locations from
a-29h illustrate a plurality of different homogeneity regions used by an associated Maximum Homogeneity Neighbor Filter;
i illustrates a legend to identify pixel locations for the homogeneity regions illustrated in
j illustrates a legend to identify pixel types for the homogeneity regions illustrated in
a-33ss illustrate plots of the amplitudes of 45 polar vectors associated with the radial search paths illustrated in
a illustrates the image of the vehicle object from
b illustrates the image of the vehicle object from
c illustrates the image of the vehicle object from
a-1 illustrate plots of the amplitudes of polar vectors for selected radial search paths from
a-b illustrate plots of the amplitudes of polar vectors for selected radial search paths from
a illustrates a half-tone image of a range map of a visual scene, and an edge profile of an object therein, wherein the edge profile is based solely upon information from the range map image;
b illustrates a half-tone image of a visual scene, a first edge profile of an object therein based solely upon information from the range map image, and a second edge profile of the object therein based upon a radial search of a mono-image of the visual scene;
c illustrates the first edge profile alone, as illustrated in
d illustrates the second edge profile alone, as illustrated in
Referring to
The range-cued object detection system 10 incorporates a stereo-vision system 16 operatively coupled to a processor 18 incorporating or operatively coupled to a memory 20, and powered by a source of power 22, e.g. a vehicle battery 22.1. Responsive to information from the visual scene 24 within the field of view of the stereo-vision system 16, the processor 18 generates one or more signals 26 to one or more associated driver warning devices 28, VRU warning devices 30, or VRU protective devices 32 so as to provide for protecting one or more VRUs 14 from a possible collision with the vehicle 12 in one or more of the following ways: 1) by alerting the driver 33 with an audible or visual warning signal from an audible warning device 28.1 or a visual display or lamp 28.2 with sufficient lead time so that the driver 33 can take evasive action to avoid a collision; 2) by alerting the VRU 14 with an audible or visual warning signal—e.g. by sounding a vehicle horn 30.1 or flashing the headlights 30.2—so that the VRU 14 can stop or take evasive action; 3) by generating a signal 26.1 to a brake control system 34 so as to provide for automatically braking the vehicle 12 if a collision with a VRU 14 becomes likely; or 4) by deploying one or more VRU protective devices 32—for example, an external air bag 32.1 or a hood actuator 32.2—in advance of a collision if a collision becomes inevitable. For example, in one embodiment, the hood actuator 32.2—for example, either a pyrotechnic, hydraulic or electric actuator—cooperates with a relatively compliant hood 36 so as to provide for increasing the distance over which energy from an impacting VRU 14 may be absorbed by the hood 36.
Referring also to
r=b·f/d, where d=dl−dr (1)
Referring to
Referring to
Referring to
Referring to
An associated area correlation algorithm of the stereo-vision processor 78 provides for matching corresponding areas of the first 40.1 and second 40.2 stereo image components so as to provide for determining the disparity d therebetween and the corresponding range r thereof. The extent of the associated search for a matching area can be reduced by rectifying the input images (I) so that the associated epipolar lines lie along associated scan lines of the associated first 38.1 and second 38.2 stereo-vision cameras. This can be done by calibrating the first 38.1 and second 38.2 stereo-vision cameras and warping the associated input images (I) to remove lens distortions and alignment offsets between the first 38.1 and second 38.2 stereo-vision cameras. Given the rectified images (C), the search for a match can be limited to a particular maximum number of offsets (D) along the baseline direction, wherein the maximum number is given by the minimum and maximum ranges r of interest. For implementations with multiple processors or distributed computation, algorithm operations can be performed in a pipelined fashion to increase throughput. The largest computational cost is in the correlation and minimum-finding operations, which are proportional to the number of pixels times the number of disparities. The algorithm can use a sliding sums method to take advantage of redundancy in computing area sums, so that the window size used for area correlation does not substantially affect the associated computational cost. The resultant disparity map (M) can be further reduced in complexity by removing extraneous objects, such as road surface returns, using a road surface filter (F).
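By way of illustration, the following is a minimal sketch of area correlation over rectified images using sliding (box-filtered) window sums, so that the window size has little effect on the per-disparity cost. It is not the implementation of the stereo-vision processor 78; the window size, maximum disparity, cost metric (sum of absolute differences) and the use of numpy/scipy are assumptions made only for the example.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def disparity_map(left, right, max_disparity=64, window=7):
    """Winner-take-all disparity from rectified grayscale images (2-D numpy arrays)."""
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    rows, cols = left.shape
    best_cost = np.full((rows, cols), np.inf, dtype=np.float32)
    best_disp = np.zeros((rows, cols), dtype=np.int32)
    for d in range(max_disparity + 1):
        # pixelwise absolute difference between the left image and the right image
        # shifted by the candidate disparity d along the (rectified) scan lines
        diff = np.abs(left - np.roll(right, d, axis=1))
        diff[:, :d] = 255.0  # columns with no valid correspondence get a large penalty
        # sliding-sum (box-filter) aggregation of the correlation cost
        cost = uniform_filter(diff, size=window)
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_disp[better] = d
    return best_disp
```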
The associated range resolution (Δr) is a function of the range r in accordance with the following equation:
Δr=r²·Δd/(b·f)  (2)
The range resolution (Δr) is the smallest change in range r that is discernible for a given stereo geometry, corresponding to a change Δd in disparity (i.e. disparity resolution Δd). The range resolution (Δr) increases with the square of the range r, and is inversely related to the baseline b and focal length f, so that range resolution (Δr) is improved (decreased) with increasing baseline b and focal length f distances, and with decreasing pixel sizes which provide for improved (decreased) disparity resolution Δd.
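For illustration, equation (1) and the above range-resolution relation can be sketched as follows, with hypothetical values for the baseline, focal length (in pixels) and disparity resolution:

```python
def range_from_disparity(d_pixels, baseline_m=0.3, focal_px=800.0):
    """r = b*f/d, per equation (1); disparity in pixels, range in meters."""
    return baseline_m * focal_px / d_pixels

def range_resolution(r_m, baseline_m=0.3, focal_px=800.0, delta_d_px=0.25):
    """Delta-r = r^2 * Delta-d / (b*f): grows with the square of the range."""
    return (r_m ** 2) * delta_d_px / (baseline_m * focal_px)

# e.g. a 12-pixel disparity corresponds to 20 m of range, where a quarter-pixel
# disparity resolution corresponds to roughly 0.42 m of range resolution
print(range_from_disparity(12.0), range_resolution(20.0))
```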
Alternatively, a CENSUS algorithm may be used to determine the range map image 80 from the associated first 40.1 and second 40.2 stereo image components, for example, by comparing rank-ordered difference matrices for corresponding pixels separated by a given disparity d, wherein each difference matrix is calculated for each given pixel of each of the first 40.1 and second 40.2 stereo image components, and each element of each difference matrix is responsive to a difference between the value of the given pixel and a corresponding value of a corresponding surrounding pixel.
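By way of comparison, a minimal sketch of a conventional census-style approach is given below. It substitutes the common bit-string census transform with Hamming-distance matching for the rank-ordered difference matrices described above, so it illustrates the general idea rather than the particular embodiment; the window size and data types are arbitrary.

```python
import numpy as np

def census_transform(img, window=5):
    """Encode each pixel as a bit string of brightness comparisons against its neighbors."""
    img = img.astype(np.float32)
    half = window // 2
    codes = np.zeros(img.shape, dtype=np.uint32)   # a 5x5 window needs 24 bits
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            codes = (codes << 1) | (shifted < img).astype(np.uint32)
    return codes

def hamming(a, b):
    """Per-pixel Hamming distance between two census-code images."""
    x = np.bitwise_xor(a, b)
    bits = np.unpackbits(x.view(np.uint8), axis=None)
    return bits.reshape(*x.shape, -1).sum(axis=-1)
```

The per-disparity matching cost is then the Hamming distance between the left-image codes and the disparity-shifted right-image codes, minimized per pixel in the same winner-take-all fashion as the area-correlation sketch above.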
Referring to
Accordingly, near-range detection and tracking performance based solely on the range map image 80 from the stereo-vision processor 78 can suffer if the scene illumination is sub-optimal or if the object 50 lacks unique structure or texture, because the associated stereo-matching range fill and distribution then fall below the limits needed to ensure relatively accurate object boundary detection. For example, the range map image 80 can generally be used alone for detection and tracking operations if the on-target range fill (OTRF) is greater than about 50 percent.
It has been observed that under some circumstances, the on-target range fill (OTRF) can fall below 50 percent even with relatively benign scene illumination and seemingly good object texture. For example, referring to
Referring to
Referring to
More particularly, referring to
Referring again to
Referring to
wherein the cross-range distance CR is a bi-lateral measurement relative to the axial centerline 110 of the vehicle 12, the axial centerline 98 of the stereo-vision system 16, an associated mounting location of an associated camera, or some other lateral reference, in a direction that is transverse to the axial centerline 110 of the vehicle 12 or the axial centerline 98 of the stereo-vision system 16; and the down-range distance DR is a measurement of the distance from the baseline b of the stereo-vision system 16, the bumper 106 of the vehicle 12, or some other longitudinal reference, forward and away from the vehicle 12.
Referring again also to
Referring also to
A first embodiment of the associated clustering process 1500 provides for clustering the range bins 108 with respect to predefined two-dimensional nominal clustering bins 113, the boundaries of each of which are defined by intersections of corresponding associated one-dimensional nominal cross-range clustering bins 113′ with corresponding associated one-dimensional nominal down-range clustering bins 113″. More particularly, the predefined two-dimensional nominal clustering bins 113 are each assumed to be aligned with a Cartesian top-down space 102 and to have a fixed width of cross-range distance CR, for example, in one embodiment, 3 meters, and a fixed length of down-range distance DR, for example, in one embodiment, 6 meters, which is provided for by the corresponding associated set of one-dimensional nominal cross-range clustering bins 113′, for example, each element of which spans 3 meters, and the corresponding associated set of one-dimensional nominal down-range clustering bins 113″, for example, each element of which spans 6 meters, wherein the one-dimensional nominal cross-range clustering bins 113′ abut one another, the one-dimensional nominal down-range clustering bins 113″ separately abut one another, and the one-dimensional nominal cross-range clustering bins 113′ and the one-dimensional nominal down-range clustering bins 113″ are relatively orthogonal with respect to one another.
In one set of embodiments, the set of one-dimensional nominal cross-range clustering bins 113′ is centered with respect to the axial centerline 110 of the vehicle 12, and the closest edge 113′″ of the one-dimensional nominal down-range clustering bins 113″ is located near or at the closest point on the roadway 99′ that is visible to the stereo-vision system 16. For example, for a 3 meter by 6 meter two-dimensional nominal clustering bin 113, and a top-down space 102 extending 20 meters left and right of the axial centerline 110 of the vehicle 12 and 50 meters from the origin 104 of the stereo-vision system 16, then in one embodiment, there are about 14 one-dimensional nominal cross-range clustering bins 113′, i.e. ((+20 meter right boundary)−(−20 meter left boundary))/(3 meter cross-range distance CR width of a one-dimensional nominal cross-range clustering bin 113′), and with the closest one-dimensional nominal down-range clustering bins 113″ each located 5 meters down range of the origin 104, there are about 8 one-dimensional nominal down-range clustering bins 113″, i.e. (50 meter end point−5 meter start point)/(6 meter down-range distance DR length of a one-dimensional nominal down-range clustering bin 113″). If an odd number of one-dimensional nominal cross-range clustering bins 113′ are used, then there will be a central one-dimensional nominal cross-range clustering bin 113′.0 that is centered about the axial centerline 110 of the vehicle 12.
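For illustration, the mapping of a range-bin location in the top-down space 102 to a 3 meter by 6 meter two-dimensional nominal clustering bin 113, using the example extents above, might be sketched as follows; the helper function, its variable names, and the placement of the grid edges are hypothetical:

```python
def nominal_clustering_bin(cr_m, dr_m, cr_width=3.0, dr_length=6.0,
                           cr_min=-20.0, cr_max=20.0, dr_min=5.0, dr_max=50.0):
    """Return (cross-range index, down-range index) of the nominal clustering bin
    containing the point (CR, DR), or None if the point lies outside the grid."""
    if not (cr_min <= cr_m < cr_max and dr_min <= dr_m < dr_max):
        return None
    return int((cr_m - cr_min) // cr_width), int((dr_m - dr_min) // dr_length)

# e.g. a range bin 2 m right of the centerline and 12 m down range
print(nominal_clustering_bin(2.0, 12.0))   # -> (7, 1)
```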
The two-dimensional nominal clustering bins 113 provide for initially associating the two-dimensional range bins 108 with corresponding coherent regions of interest (ROI) 114, beginning with the closest two-dimensional nominal clustering bin 113.0, and then continuing laterally away from the axial centerline 110 of the vehicle 12 in cross range, and longitudinally away from the vehicle in down range. More particularly, continuing with the associated clustering process 1500 illustrated in
If, in step (1512), the identified two-dimensional nominal clustering bin 113 is sufficiently populated with a sufficient number of range pixels 81 accounted for by the associated two-dimensional range bins 108, then, in step (1514), the two-dimensional range bins 108 located within the boundaries of the identified two-dimensional nominal clustering bin 113 are associated with the currently pointed-to region of interest (ROI) 114, for example, using the above-described linked list to locate the associated two-dimensional range bins 108. Then, in step (1516), those two-dimensional range bins 108 that have been associated with the currently pointed-to region of interest (ROI) 114 are either masked or removed from the associated first 112.1 and second 112.2 histograms, so as to not be considered for subsequent association with other regions of interest (ROI) 114. Then, in step (1518), the cross-range
Alternatively, the centroid 116 of the corresponding ith region of interest (ROI) 114 could be calculated using weighted coordinate values that are weighted according to the number nk of range pixels 81 associated with each range bin 108, i.e. with each bin coordinate weighted by nk and the resulting sums normalized by the total number of associated range pixels 81.
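A minimal sketch of this count-weighted centroid is given below; the data layout (each range bin represented by its cross-range and down-range location and its count nk of associated range pixels) and the variable names are illustrative only:

```python
def weighted_centroid(bins):
    """bins: iterable of (cr_m, dr_m, n_pixels) tuples for one region of interest (ROI).
    Returns the count-weighted (cross-range, down-range) centroid."""
    total = sum(n for _, _, n in bins)
    cr = sum(cr_k * n for cr_k, _, n in bins) / total
    dr = sum(dr_k * n for _, dr_k, n in bins) / total
    return cr, dr

print(weighted_centroid([(1.0, 10.0, 4), (2.0, 11.0, 12), (1.5, 10.5, 8)]))
```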
Then, in step (1520), the next region of interest (ROI) 114 is pointed to and initialized, for example, with a null value for the pointer to the associated two-dimensional range bins 108.
Then, either from step (1520), or from step (1512) if the current two-dimensional nominal clustering bin 113 is insufficiently populated with corresponding two-dimensional range bins 108, then, in step (1522), if all two-dimensional nominal clustering bins 113 have not been processed, then, in step (1524) the next-closest combination of one-dimensional nominal cross-range clustering bins 113′ and one-dimensional nominal down-range clustering bins 113″ is selected, and the process repeats with step (1510).
Otherwise, from step (1522), if all two-dimensional nominal clustering bins 113 have been processed, then, in step (1526), if any two-dimensional range bins 108 remain that haven't been assigned to a corresponding two-dimensional nominal clustering bin 113, then, in step (1528), each remaining two-dimensional range bin 108—for example, proceeding from the closest to the farthest remaining two-dimensional range bins 108, relative to the vehicle 12—is associated with the closest region of interest (ROI) 114 thereto. For example, in one set of embodiments, the location of the corresponding centroid 116 of the region of interest (ROI) 114 is updated as each new two-dimensional range bin 108 is associated therewith. Then, following step (1528), or from step (1526) if no two-dimensional range bins 108 remain that haven't been assigned to a corresponding two-dimensional nominal clustering bin 113, then, in step (1530), the resulting identified regions of interest (ROI) 114 are returned, each of which includes an identification of the two-dimensional range bins 108 associated therewith.
Referring again to step (1512), various tests could be used to determine whether or not a particular two-dimensional nominal clustering bin 113 is sufficiently populated with two-dimensional range bins 108. For example, this could depend upon a total number of associated range pixels 81 in the associated two-dimensional range bins 108 being in excess of a threshold, or upon whether the magnitudes of the peak values of the first 112.1 and second 112.2 histograms are each in excess of corresponding thresholds for the associated one-dimensional nominal cross-range 113′ and down-range 113″ clustering bins corresponding to the particular two-dimensional nominal clustering bin 113.
Furthermore, referring again to step (1524), the selection of the next-closest combination of one-dimensional nominal cross-range 113′ and down-range 113″ clustering bins could be implemented in various ways. For example, in one embodiment, the two-dimensional nominal clustering bins 113 are scanned in order of increasing down-range distance DR from the vehicle 12, and for each down-range distance DR, in order of increasing cross-range distance CR from the axial centerline 110 of the vehicle 12. In another embodiment, the scanning of the two-dimensional nominal clustering bins 113 is limited to only those two-dimensional nominal clustering bins 113 for which a collision with the vehicle 12 is feasible given either the speed alone, or the combination of speed and heading, of the vehicle 12. In another embodiment, these collision-feasible two-dimensional nominal clustering bins 113 are scanned with either greater frequency or greater priority than remaining two-dimensional nominal clustering bins 113. For example, remaining two-dimensional nominal clustering bins 113 might be scanned during periods when no threats to the vehicle 12 are otherwise anticipated.
Referring also to
For example, in
Although the ½ meter by ½ meter range bins 108 illustrated in
The two-dimensional nominal clustering bins 113 associated with the street sign 50.4 and tree 50.6 objects 50, identified as 4 and 6, include portions of an associated central median and a relatively large, different tree. In one set of embodiments, depending upon the associated motion tracking and threat assessment processes, these two-dimensional nominal clustering bins 113 might be ignored because they are too large to be vehicle objects 50′, and they are stationary relative to the ground 99.
As another example, in another set of embodiments, one or more of the associated two-dimensional nominal clustering bins 113 are determined responsive to either situational awareness or scene interpretation, for example, knowledge derived from the imagery or through fusion of on-board navigation and map database systems, for example, a GPS navigation system and an associated safety digital map (SDM), that provide for adapting the clustering process 1500 to the environment of the vehicle 12. For example, when driving on a highway at speeds in excess of 50 MPH, it would be expected to encounter only vehicles, in which case the size of the two-dimensional nominal clustering bin 113 might be increased to, for example, 4 meters of cross-range distance CR by 10 meters of down-range distance DR so as to provide for clustering relatively larger vehicles 12, for example, semi-tractor trailer vehicles 12. In
The clustering process 1500 may also incorporate situation awareness to accommodate relatively large objects 50 that are larger than a nominal limiting size of the two-dimensional nominal clustering bins 113. For example, on a closed highway—as might be determined by the fusion of a GPS navigation system and an associated map database—the range-cued object detection system 10 may limit the scope of associated clustering to include only vehicle objects 50′ known to be present within the roadway. In this case, the vehicle object 50′ will fit into a two-dimensional nominal clustering bin 113 that is 3 meters wide (i.e. of cross-range distance CR) by 6 meters deep (i.e. of down-range distance DR). If the vehicle object 50′ is longer (i.e. deeper) than 6 meters (for example, an 18-wheeled semi-tractor trailer), the clustering process 1500 may split the vehicle object 50′ into multiple sections that are then joined during motion tracking into a unified object based on coherent motion characteristics, with the associated individual parts moving with correlated velocity and acceleration.
Returning to
In accordance with one set of embodiments, the resulting regions of interest (ROI) 114 are further characterized with corresponding elliptical or polygonal boundaries that can provide a measure of the associated instantaneous trajectories or heading angles of the corresponding objects 50 associated with the regions of interest (ROI) 114 so as to provide for an initial classification thereof.
For example, referring to
Referring to
The polygonal boundary is used as a measure of the instantaneous trajectory, or heading angle, of the object 50 associated with the region of interest 114, and provides for an initial classification of the region of interest 114, for example, whether or not the range bins 108 thereof conform to a generalized rectangular vehicle model having a length L of approximately 6 meters and a width W of approximately 3 meters, the size of which may be specified according to country or region.
Then, in step (1210), the regions of interest (ROI) 114 are prioritized so as to provide for subsequent analysis thereof in order of increasing prospect for interaction with the vehicle 12, for example, in order of increasing distance in top-down space 102 of the associated centroid 116 of the region of interest (ROI) 114 from the vehicle 12, for example, for collision-feasible regions of interest (ROI) 114, possibly accounting for associated tracking information and the dynamics of the vehicle 12.
Then, beginning with step (1212), and continuing through step (1228), the regions of interest (ROI) 114 are each analyzed in order of increasing priority relative to the priorities determined in step (1210), as follows for each region of interest (ROI) 114:
In step (1214), with reference also to
Referring to
More particularly, referring to
122.1: {−0.053571,0.267856,0.814286,0.535715,−0.267858,0.053572}
122.2: {0.018589,−0.09295,0.00845,0.188714,0.349262,0.427,0.394328,0.25913,0.064783,−0.10985,−0.15041,0.092948} (12)-(13)
Generally, other types of low-pass spatial filters having a controlled roll-off at the edges may alternatively be used to generate the spatially-filtered first 112.1′ and second 112.2′ histograms provided that the roll-off is sufficient to prevent joining adjacent clustering intervals, so as to provide for maintaining zero-density histogram bins 108′, 108″ between the adjacent clustering intervals, consistent with a real-world separation between vehicles.
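For illustration, applying one of the above coefficient sets to a cross-range or down-range histogram amounts to a one-dimensional convolution; the use of 'same'-mode convolution and the absence of any renormalization are assumptions of this sketch:

```python
import numpy as np

# coefficient set 122.1 from above (the 12-element set 122.2 is applied the same way)
KERNEL_122_1 = np.array([-0.053571, 0.267856, 0.814286, 0.535715, -0.267858, 0.053572])

def smooth_histogram(hist, kernel=KERNEL_122_1):
    """Return the spatially-filtered histogram obtained by convolving H(k) with the kernel."""
    return np.convolve(np.asarray(hist, dtype=float), kernel, mode='same')
```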
Referring to
More particularly, representing the kth element of either of the spatially-filtered first 112.1′ and second 112.2′ histograms as H(k), the spatial derivative thereof, for example, using a central-difference differentiation formula, is given by:
H′(k)=(H(k+1)−H(k−1))/2  (14)
Alternatively, the spatial first derivative may be obtained directly using the Savitzky-Golay Smoothing Filter, as described more fully hereinbelow.
For a unimodal distribution of H(k) bounded by zero-density histogram bins, at the leading and trailing edges thereof the first spatial derivative will be zero and the second spatial derivative will be positive, which, for example, is given by the kth element, for which:
H′(k)≦0 AND H′(k+1)>0 (15)
The peak of the uni-modal distribution is located between the edge locations at the kth element for which:
H′(k)≧0 AND H′(k+1)<0 (16)
Accordingly, the boundaries of the one-dimensional cross-range 121′ and down-range 121″ clustering bins are identified by pairs of zero-crossings of H′(k) satisfying equation (15), between which is located a zero-crossing satisfying equation (16). Given the location of the one-dimensional cross-range 121′ and down-range 121″ clustering bins, the clustering process 1500 then proceeds as described hereinabove to associate the two-dimensional range bins 108 with corresponding two-dimensional clustering bins that are identified from the above one-dimensional cross-range 121′ and down-range 121″ clustering bins for each set of combinations thereof.
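A minimal sketch of this boundary identification follows: the smoothed histogram is differentiated with a central difference, locations satisfying equation (15) are paired, and a pair is kept as a clustering interval only if a location satisfying equation (16) lies between them. The handling of the histogram end points and the pairing of successive equation-(15) locations are assumptions of the sketch.

```python
import numpy as np

def clustering_intervals(hist_smoothed):
    """Return (lower, upper) index pairs bounding uni-modal clusters of the histogram."""
    h = np.asarray(hist_smoothed, dtype=float)
    dh = np.zeros_like(h)
    dh[1:-1] = (h[2:] - h[:-2]) / 2.0           # central-difference H'(k), equation (14)
    rising = [k for k in range(len(dh) - 1) if dh[k] <= 0 and dh[k + 1] > 0]    # (15)
    falling = [k for k in range(len(dh) - 1) if dh[k] >= 0 and dh[k + 1] < 0]   # (16)
    intervals = []
    for lo, hi in zip(rising, rising[1:]):       # successive equation-(15) locations
        if any(lo < p < hi for p in falling):    # with an equation-(16) peak in between
            intervals.append((lo, hi))
    return intervals
```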
Each centroid 116 is then used as a radial search seed 123 in a process provided for by steps (1216) through (1224) to search for a corresponding edge profile 124′ of the object 50 associated therewith, wherein the first 40.1 or second 40.2 stereo image component is searched along each of a plurality of radial search paths 126—each originating at the radial search seed 123 and providing associated polar vectors 126′ of image pixel 100 data—to find a corresponding edge point 124 of the object 50 along the associated radial search paths 126, wherein the edge profile 124′ connects the individual edge points 124 from the plurality of associated radial search paths 126, so as to provide for separating the foreground object 50 from the surrounding background clutter 128.
More particularly, in step (1216), the limits of the radial search are first set, for example, with respect to the width and height of a search region centered about the radial search seed 123. For example, in one set of embodiments, the maximum radial search distance along each radial search path 126 is responsive to the down-range distance DR to the centroid 116 of the associated object 50. For example, in one embodiment, at 25 meters the largest roadway object 50—i.e. a laterally-oriented vehicle 12—spans 360 columns and 300 rows, which would then limit the corresponding radial search length to 235 image pixels 100 as given by half the diagonal distance of the associated rectangular boundary (i.e. √(360²+300²)/2). In another embodiment, the search region defining bounds of the radial search paths 126 is sized to accommodate a model vehicle 12 that is 3 meters wide and 3 meters high. In another set of embodiments, the maximum radial search distance is responsive to the associated search direction—as defined by the associated polar search direction θ—for example, so that the associated search region is within a bounding rectangle in the first 40.1 or second 40.2 stereo image component that represents a particular physical size at the centroid 116. Alternatively, the maximum radial search distance is responsive to both the associated search direction and a priori knowledge of the type of object 50, for example, that might result from tracking the object 50 over time.
Then, referring to
In step (1222), during the search along each radial search path 126, the associated image pixels 100 of the first 40.1 or second 40.2 stereo image component therealong are filtered with an Edge Preserving Smoothing (EPS) filter 132′ so as to provide for separating the foreground object 50 from the surrounding background clutter 128, thereby locating the associated edge point 124 along the radial search path 126. For example, in one set of embodiments, the Edge Preserving Smoothing (EPS) filter 132′ comprises a Maximum Homogeneity Neighbor (MHN) Filter 132—described more fully hereinbelow—that provides for removing intra-object variance while preserving the boundary of the object 50 and the associated edge points 124. Then, from step (1224), step (1218) of the range-cued object detection process 1200 is repeated for each different polar search direction θ, until all polar search directions θ are searched, resulting in an edge profile vector 124″ of radial distances of the edge points 124 from the centroid 116 of the object 50 associated with the region of interest (ROI) 114 being processed. The edge profile 124′ of the object 50 is formed by connecting the adjacent edge points 124 from each associated radial search path 126—stored in the associated edge profile vector 124″—so as to approximate a silhouette of the object 50, the approximation of which improves with an increasing number of radial search paths 126 and corresponding number of elements of the associated edge profile vector 124″, as illustrated by the comparison of
It should be understood that the radial search associated with steps (1218), (1220) and (1224) is not limited to canonical polar search directions θ, but alternatively, could be in a generally outwardly-oriented direction relative to the associated radial search seed 123 at the centroid 116 or, as described more fully hereinbelow, the associated relatively-central location 116. For example, the search could progress randomly outward from the relatively-central location 116 either until the corresponding edge point 124 is found, or until the search is terminated after a predetermined number of iterations. For example, successive image pixels 100 along the associated search path can be found by either incrementing or decrementing the row by one or more image pixels 100, or by either incrementing or decrementing the column by one or more image pixels 100, or both, wherein the rows and columns are modified or maintained independently of one another.
In step (1226), in one set of embodiments, each element of the edge profile vector 124″ is transformed from centroid-centered polar coordinates to the image coordinates (XCOL(i,m),YROW(i,m)) of one of the first 40.1 or second 40.2 stereo image components—i.e. to the mono-image geometry—as follows, so as to provide for detecting the associated ith object 50 in the associated mono-image geometry:
XCOL(i,m)=XCOL0+R(i,m)·cos(θm), and (17.1)
YROW(i,m)=YROW0+R(i,m)·sin(θm); (18.1)
or, for a total of M equi-angularly spaced polar vectors 126′:
wherein m is the search index that ranges from 0 to M, R(i,m) is the mth element of the edge profile vector 124″ for the ith object 50, given by the radius 130 from the corresponding centroid 116 (XCOL0(i), YROW0(i)) to the corresponding edge point 124m of the object 50. For example, in one set of embodiments, the transformed image coordinates (XCOL(i,m), YROW(i,m)) are stored in an associated transformed edge profile vector 124′″.
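For illustration, equations (17.1) and (18.1) can be sketched as follows, assuming the M polar vectors are equi-angularly spaced so that θm = 2π·m/M; the function and variable names are illustrative only:

```python
import math

def edge_profile_to_image(radii, x_col0, y_row0):
    """radii: the M elements R(i, m) of an edge profile vector about the centroid
    (X_COL0, Y_ROW0); returns the corresponding (X_COL, Y_ROW) image coordinates."""
    m_total = len(radii)
    points = []
    for m, r in enumerate(radii):
        theta_m = 2.0 * math.pi * m / m_total                     # assumed equi-angular spacing
        points.append((x_col0 + r * math.cos(theta_m),            # equation (17.1)
                       y_row0 + r * math.sin(theta_m)))           # equation (18.1)
    return points
```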
In step (1228) the process continues with steps (1212) through (1226) for the next region of interest (ROI) 114, until all regions of interest (ROI) 114 have been processed.
Then, or from step (1226) for each object 50 after or as each object 50 is processed, either the transformed edge profile vector 124′″, or the edge profile vector 124″ and associated centroid 116 (XCOL0(i), YROW0(i)), or the associated elements thereof, representing the corresponding associated detected object 134, is/are outputted in step (1230), for example, to an associated object discrimination system 92, for classification thereby, for example, in accordance with the teachings of U.S. patent application Ser. No. 11/658,758 filed on 19 Feb. 2008, entitled Vulnerable Road User Protection System, or U.S. patent application Ser. No. 13/286,656 filed on 1 Nov. 2011, entitled Method of Identifying an Object in a Visual Scene, which are incorporated herein by reference.
For example, referring to
More particularly, referring to
In step (2806), the image pixel 1000 P(XCOLF,YROWF) is filtered using a plurality of different homogeneity regions 138, each comprising a particular subset of plurality of neighboring image pixels 100′ at particular predefined locations relative to the location of the image pixel 1000 P(XCOLF,YROWF) being filtered, wherein each homogeneity region 138 is located around the image pixel 1000 P(XCOLF,YROWF), for example, so that in one embodiment, the image pixel 1000 P(XCOLF,YROWF) being filtered is located at the center 140 of each homogeneity region 138, and the particular subsets of the neighboring image pixels 100′ are generally in a radially outboard direction within each homogeneity region 138 relative to the center 140 thereof. For example, in one embodiment, each homogeneity region 138 spans a 5-by-5 array of image pixels 100 centered about the image pixel 1000 P(XCOLF,YROWF) being filtered.
For example, referring to
which can be simplified to
wherein P0 is the value of the image pixel 1000 being filtered, Pn is the value of a neighboring image pixel 1000 corresponding to the nth active element 142 of the associated homogeneity region 138k for which there are a total of N active elements 142. Alternatively, as illustrated in
wherein ΔXCOLk (n) and ΔYROWk(n) are the relative locations of the nth active element 142 of the kth homogeneity region 138k, i.e. the respective coordinates (i,j) of active element 142 Fk(i,j), so that ΔXCOLk(n)=i and ΔYROWk(n)=j per the illustration of
For example,
Following calculation of the deviation Dk in step (2806), in step (2808), during the first loop of the Maximum Homogeneity Neighbor (MHN) filtering process 2800 for which the associated homogeneity region counter k has a value of 1, or subsequently from step (2810), if the value of the deviation Dk for the kth homogeneity region 138k is less than a previously stored minimum deviation value DMIN, then, in step (2812), the minimum deviation value DMIN is set equal to the currently calculated value of deviation Dk, and the value of an associated minimum deviation index kMIN is set equal to the current value of the homogeneity region counter k. Then, in step (2814), if the current value of the homogeneity region counter k is less than the number NRegions of homogeneity regions 138, then the homogeneity region counter k is incremented in step (2816), and the Maximum Homogeneity Neighbor (MHN) filtering process 2800 continues with steps (2806)-(2814) for the next homogeneity region 138.
Otherwise, from step (2814), after the deviation Dk has been calculated and processed for each of the N Regions homogeneity regions 138, then, in step (2818), in one embodiment, the image pixel 1000 P(XCOLF,YROWF) being filtered is replaced with the average value of the neighboring image pixels 1000 of the homogeneity region 138 having the minimum deviation Dk
In one embodiment, the Maximum Homogeneity Neighbor (MHN) filtering process 2800 utilizes the NRegions=6 homogeneity regions 138 illustrated in
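A minimal sketch of such a filter is given below. The six homogeneity regions 138 of the illustrated embodiment are defined by the referenced figures and are not reproduced here, so the 5-by-5 region masks below are hypothetical stand-ins (half-planes and corner quadrants), and the sum-of-absolute-differences form of the deviation Dk is likewise an assumption; only the minimum-deviation selection and averaging of steps (2806)-(2818) are taken from the description above.

```python
import numpy as np

# Hypothetical 5x5 homogeneity regions, as (row, column) offsets from the filtered pixel.
REGIONS = [
    [(dy, dx) for dy in range(-2, 1) for dx in range(-2, 3) if (dy, dx) != (0, 0)],  # upper half
    [(dy, dx) for dy in range(0, 3) for dx in range(-2, 3) if (dy, dx) != (0, 0)],   # lower half
    [(dy, dx) for dy in range(-2, 3) for dx in range(-2, 1) if (dy, dx) != (0, 0)],  # left half
    [(dy, dx) for dy in range(-2, 3) for dx in range(0, 3) if (dy, dx) != (0, 0)],   # right half
    [(dy, dx) for dy in range(-2, 1) for dx in range(-2, 1) if (dy, dx) != (0, 0)],  # upper-left
    [(dy, dx) for dy in range(0, 3) for dx in range(0, 3) if (dy, dx) != (0, 0)],    # lower-right
]

def mhn_filter_pixel(img, row, col, regions=REGIONS):
    """Replace the pixel value with the mean of its most homogeneous neighbor region,
    i.e. the region whose neighbors deviate least from the pixel being filtered."""
    p0 = float(img[row, col])
    best_mean, d_min = p0, float('inf')
    for region in regions:
        values = [float(img[row + dy, col + dx]) for dy, dx in region
                  if 0 <= row + dy < img.shape[0] and 0 <= col + dx < img.shape[1]]
        if not values:
            continue
        deviation = sum(abs(v - p0) for v in values)   # assumed form of the deviation D_k
        if deviation < d_min:
            d_min, best_mean = deviation, sum(values) / len(values)
    return best_mean
```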
Generally, the Maximum Homogeneity Neighbor (MHN) Filter 132 acts similarly to a low-pass filter. For example, the action of the Maximum Homogeneity Neighbor (MHN) Filter 132 is illustrated in
Returning to
If, from step (2614), the edge point 124 is not found, then, in accordance with one embodiment, the next image pixel 100 to be filtered along the radial search path 126 is located in accordance with steps (2616)-(2626), as follows:
Referring again to
YROW(i+1)=YROW0+(XCOL(i+1)−XCOL0)·tan(θm), (20.1)
which, for integer-valued results, can be effectively rounded to give:
YROW(i+1)=YROW0+INT((XCOL(i+1)−XCOL0)·tan(θm)+0.5). (20.2)
Otherwise, from step (2616), if the absolute value of the angle of the polar search direction θ is between π/4 and 3π/4 radians, so that the associated radial search path 126 is located in a second portion 148 of the image space 136′ within the bounding search rectangle 136, then the next image pixel 100 along the radial search path 126 is advanced to the next row YROW(i+1) along the radial search path 126 further distant from the associated centroid 116, and to the corresponding column XCOL(i+1) along the radial search path 126. More particularly, in step (2622), the next row YROW(i+1) is given by adding the sign of θ to the current row YROW(i), and, in step (2624), the next column XCOL(i+1) is given by:
XCOL(i+1)=XCOL0+(YROW(i+1)−YROW0)·cot(θ), (21.1)
which, for integer-valued results, can be effectively rounded to give:
XCOL(i+1)=XCOL0+INT((YROW(i+1)−YROW0)·cot(θ)+0.5). (21.2)
Then, in step (2626), if the location (XCOL(i+1), YROW(i+1)) of the next image pixel 100 is not outside the bounding search rectangle 136, then the mono-image-based object detection process 2600 continues with steps (2612)-(2626) in respect of this next image pixel 100. Accordingly, if an edge point 124 along the radial search path 126 is not found within the bounding search rectangle 136, then that particular radial search path 126 is abandoned, with the associated edge point 124 indicated as missing or undetermined, for example, with an associated null value. Otherwise, in step (2628), if all polar search directions θ have not been searched, then, in step (2630), the polar search direction θ is incremented to the next radial search path 126, and the mono-image-based object detection process 2600 continues with steps (2610)-(2626) in respect of this next polar search direction θ. Otherwise, in step (2632), if all regions of interest (ROI) 114 have not been processed, then, in step (2634), the next region of interest (ROI) 114 is selected, and the mono-image-based object detection process 2600 continues with steps (2604)-(2626) in respect of this next region of interest (ROI) 114. Otherwise, from step (2632) if all regions of interest (ROI) 114 have been processed, or alternatively, from step (2628) as each region of interest (ROI) 114 is processed, in step (2636), the associated edge profile vector 124″ or edge profile vectors 124″ is/are returned as the detected object(s) 134 so as to provide for discrimination thereof by the associated object discrimination system 92.
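For illustration, the per-step pixel advance of steps (2616) through (2624) can be sketched as follows; the direction of the column step in the first portion 146 (toward increasing columns for |θ|<π/4 and decreasing columns for |θ|>3π/4) and the angle convention are assumptions, since the corresponding details are given with reference to the figures:

```python
import math

def next_pixel(x_col, y_row, x_col0, y_row0, theta):
    """Advance one pixel along the radial search path at polar direction theta,
    moving away from the centroid (X_COL0, Y_ROW0)."""
    if math.pi / 4 < abs(theta) < 3 * math.pi / 4:                        # second portion 148
        y_next = y_row + (1 if theta > 0 else -1)                         # add sign of theta to the row
        x_next = x_col0 + int((y_next - y_row0) / math.tan(theta) + 0.5)  # equation (21.2)
    else:                                                                 # first portion 146
        x_next = x_col + (1 if abs(theta) < math.pi / 2 else -1)          # assumed column-step direction
        y_next = y_row0 + int((x_next - x_col0) * math.tan(theta) + 0.5)  # equation (20.2)
    return x_next, y_next
```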
For example,
Referring to
Referring to
Referring to
Referring to
Referring to
More particularly, referring to
In accordance with one embodiment, for a given polar vector 126′, for example, identified as S and comprising a plurality of elements Sk, a relatively large amplitude shift 152 along S is located by first calculating the spatial first derivative 154, i.e. S′, thereof, for example, the central first derivative, which is given by:
S′(k)=(S(k+1)−S(k−1))/2
This spatial first derivative 154, S′, is then filtered with a low-pass, zero-phase-shifting filter, for example, a Savitzky-Golay Smoothing Filter, so as to generate the corresponding filtered spatial first derivative 156, i.e. S′filt, for example, in accordance with the method described in William H. PRESS, Brian P. FLANNERY, Saul A. TEUKOLSKY and William T. VETTERLING, NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5), Cambridge University Press, 1988-1992, pp. 650-655, which is incorporated by reference herein.
More particularly:
wherein nL and nR are the numbers of elements of S′ before and after location i, respectively, that are used to calculate the filtered value S′filt(i), and the associated coefficients cn are given by:
wherein M is the desired order of the filter, i.e. the order of an associated underlying polynomial approximating the data, and represents the highest conserved order, and:
Aij=i^j, for i=−nL, . . . , nR and j=0, . . . , M. (25)
For example, in one embodiment, the filter order M is set equal to 4, and the symmetric half-width of the associated moving window is 6, i.e. nR=6 and nL=6, resulting in the following associated filter coefficients from equation (19):
c={0.04525,−0.08145,−0.05553,0.04525,0.16043,0.24681,0.27849,0.24681,0.16043,0.04525,−0.05553,−0.08145,0.04525} (26)
The location of a relatively large amplitude shift 152 along the particular radial search path 126 is then identified as the closest location k to the centroid 116 (i.e. the smallest value of k) for which the absolute value of the filtered spatial first derivative 156, S′filt, i.e. S′filt(k), exceeds a threshold value, for example, a threshold value of 10, i.e.
kEdge=k such that |S′filt(k)|>10. (27)
This location is found by searching the filtered spatial first derivative 156, S′filt radially outwards, for example, beginning with k=0, with increasing values of k, to find the first location k that satisfies equation (22). Generally relatively small changes in amplitude relatively closer to the centroid 116 than the corresponding edge point 124 are associated with corresponding image structure of the associated object 50. The resulting edge index kEdge is then saved and used to identify the corresponding edge point 124, for example, saved in the associated edge profile vector 124″, or from which the associated corresponding radial distance R can be determined and saved in the associated edge profile vector 124″. This process is then repeated for each polar vector 126′, S associated with each radial search path 126 so as to define the associated edge profile vector 124″. Alternatively, rather than explicitly calculating the first spatial derivative as in equation (17), and then filtering this with the above-described smoothing variant of the above-described Savitzky-Golay Smoothing Filter, the Savitzky-Golay Smoothing Filter may alternatively be configured to generate a smoothed first spatial derivative directly from the data of the polar vector 126′, S, for example, using a parameter value of ld=1 in the algorithm given in the incorporated subject matter from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING, so as to provide for a convolution of a radial profile with an impulse-response sequence. However, the pre-calculation of the spatial first derivative 154, S′, for example, using equation (17), provides for choosing the associated method, for example, either a central difference as in equation (17), a left difference, a right difference, a second central difference, or some other method. Furthermore, alternatively, some other type of low-pass filter could be used instead of the Savitzky-Golay Smoothing Filter.
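A minimal sketch of this edge location along a single polar vector S follows, using the 13-tap coefficients of equation (26) and the threshold of equation (27); the treatment of the end points of S and the use of 'same'-mode convolution are assumptions of the sketch:

```python
import numpy as np

C_SG = np.array([0.04525, -0.08145, -0.05553, 0.04525, 0.16043, 0.24681, 0.27849,
                 0.24681, 0.16043, 0.04525, -0.05553, -0.08145, 0.04525])  # equation (26)

def edge_index(s, threshold=10.0, coeffs=C_SG):
    """Return k_Edge, the smallest k for which |S'_filt(k)| exceeds the threshold."""
    s = np.asarray(s, dtype=float)
    ds = np.zeros_like(s)
    ds[1:-1] = (s[2:] - s[:-2]) / 2.0               # central first derivative S'
    ds_filt = np.convolve(ds, coeffs, mode='same')  # zero-phase smoothed derivative S'_filt
    hits = np.nonzero(np.abs(ds_filt) > threshold)[0]
    return int(hits[0]) if hits.size else None      # None if no edge point is found
```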
As described hereinabove, in respect of the second 150.2 and fourth 150.4 quadrants, the identification of an associated edge point 124 is also dependent upon the detection of a relatively low variance in the amplitude of the polar vector 126′, S in the associated background and ground 99 regions abutting the object 50 beyond the location of the associated edge point 124, for example, in accordance with the following method:
Having found a prospective edge point 124, the filtered spatial first derivative 156, S′filt, is searched further radially outwards, i.e. for increasing values of k greater than the edge index kEdge, e.g. starting with kEdge+1, in order to locate the first occurrence of a zero-crossing 158 having a corresponding zero-crossing index kZero given by:
kZero=k such that (S′filt(k)≧0) AND (S′filt(k+1)<0). (28)
Then, beginning with index k equal to the zero-crossing index kZero, a corresponding weighted forward moving average Vk is calculated as:
wherein the associated weighting vector wgt is given by:
wgt={0.35,0.25,0.20,0.10,0.10} (30)
If the value of the weighted forward moving average Vk is less than a threshold, for example, if Vk<5, then the index k is incremented, and this test is repeated, and the process is repeated over the remaining relatively distal portion 160 of the polar vector 126′, S as long as the value of the weighted forward moving average Vk is less than the threshold. If for any value of the index k, the value of the weighted forward moving average Vk is greater than or equal to the threshold, then the search is terminated and the corresponding edge point 124 of the associated radial search path 126 is marked as indeterminate. Otherwise, if every value of the weighted forward moving average Vk is less than the threshold for the remaining points of the polar vector 126′, S along the radial search path 126, then the resulting edge point 124 is given from the previously determined corresponding edge index kEdge.
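For illustration, this background check might be sketched as follows. Because equation (29) is not reproduced in the text, the weighted forward moving average Vk is assumed here to be the wgt-weighted average of the absolute filtered derivative over the next five samples; only the zero-crossing search of equation (28), the threshold of 5, and the accept/indeterminate logic are taken from the description above.

```python
import numpy as np

WGT = np.array([0.35, 0.25, 0.20, 0.10, 0.10])   # weighting vector of equation (30)

def edge_confirmed(ds_filt, k_edge, threshold=5.0, wgt=WGT):
    """Confirm a prospective edge by requiring a quiet background beyond the edge."""
    ds_filt = np.asarray(ds_filt, dtype=float)
    # first zero-crossing at or beyond k_edge+1: S'_filt(k) >= 0 and S'_filt(k+1) < 0, per (28)
    k = k_edge + 1
    while k + 1 < len(ds_filt) and not (ds_filt[k] >= 0 > ds_filt[k + 1]):
        k += 1
    # sweep the remaining distal portion; any V_k at or above the threshold rejects the edge
    while k + len(wgt) <= len(ds_filt):
        v_k = float(np.dot(wgt, np.abs(ds_filt[k:k + len(wgt)])))
        if v_k >= threshold:
            return False      # edge point marked indeterminate for this radial search path
        k += 1
    return True
```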
Referring to
Referring to
In accordance with a first aspect, the range-cued object detection system 10, 10′ uses the associated stereo-vision system 16, 16′ alone for both generating the associated range map image 80 using information from both associated stereo-vision cameras 38, and for discriminating the object using one of the associated first 40.1 or second 40.2 stereo image components from one of the associated stereo-vision cameras 38.
Referring to
For example, in one set of embodiments, the ranging system 162 comprises either a radar or lidar system that provides a combination of down-range and cross-range measurements of objects 50 within the field of view 84.3 thereof. In one set of embodiments, when the host vehicle 12 and objects 50 are in alignment with one another relative to the ground 99, either a planar radar or a planar lidar is sufficient. Objects 50 not in the same plane as the host vehicle 12 can be accommodated to at least some extent by some embodiments of planar radar having a vertical dispersion of about 4 degrees, so as to provide for detecting objects 50 within a corresponding range of elevation angles φ. Some planar lidar systems have substantially little or no vertical dispersion. A greater range of elevation angles φ when using either planar radar or planar lidar can be achieved by either vertically stacking individual planar radar or planar lidar systems, or by providing for scanning the associated beams of electromagnetic energy. The range map image 80, 80″ from the ranging system 162 is co-registered with either the first 40.1 or second 40.2 stereo image components of one of the first 38.1, 38.1′ or second 38.2, 38.2′ stereo-vision cameras, or with a mono-image 166 from the separate camera 164, so as to provide for transforming the centroid 116 locations determined from the segmented range map image 80, 80″ to corresponding locations in either the first 40.1 or second 40.2 stereo image components of one of the first 38.1, 38.1′ or second 38.2, 38.2′ stereo-vision cameras, or in a mono-image 166 from the separate camera 164.
As used herein, the term centroid 116 is intended to be interpreted generally as a relatively-central location 116 relative to a collection of associated two-dimensional range bins 108 of an associated region of interest 114, wherein the relatively-central location 116 is sufficient to provide for discriminating an associated object 50 from the associated edge profile vector 124″ generated relative to the relatively-central location 116. For example, in addition to the canonical centroid 116 calculated in accordance with equations (5.1) and (6.1) or (5.2) and (6.2), the relatively-central location 116 could alternatively be given by corresponding median values of associated cross-range CR and down-range DR distances of the associated two-dimensional range bins 108, or average values of the associated maximum and minimum cross-range CR and down-range DR distances of the associated two-dimensional range bins 108, or some other measure responsive to either the associated two-dimensional range bins 108, responsive to an associated two-dimensional nominal clustering bin 113, or responsive to a two-dimensional nominal clustering bin identified from corresponding one-dimensional cross-range 121′ and down-range 121″ clustering bins. More generally, the term relatively-central location 116 is herein intended to mean an on-target seed point that is located within the boundary of the associated object 50, 50′.
Furthermore, the relatively-central location 116 may be different for stages of the associated range-cued object detection process 1200. For example, the relatively-central location 116 calculated in step (1518) and used and possibly updated in step (1528) of the associated clustering process 1500, for purposes of clustering the associated two-dimensional range bins 108 in a top-down space 102, could be recalculated using a different metric either in step (1208) or step (1214), or both, the value from the latter of which is used in step (1220) of the range-cued object detection process 1200. For example, even if from steps (1518), (1528) and (1208) the relatively-central location 116 is not a canonical centroid 116, a corresponding canonical centroid 116 could be calculated in step (1214) with respect to image space 136′ from the corresponding associated range bins 108.
Accordingly, the range-cued object detection system 10 provides for detecting some objects 50 that might not otherwise be detectable from the associated range map image 80 alone. Notwithstanding that the range-cued object detection system 10 has been illustrated in the environment of a vehicle 12 for detecting associated near-range vehicle objects 50′, it should be understood that the range-cued object detection system 10 is generally not limited to this, or any one particular application, but instead could be used in cooperation with any combination of a ranging 162 or stereo-vision 16 system in combination with a co-registered mono-imaging system—for example, one of the stereo-vision cameras 38 or a separate camera 164—so as to facilitate the detection of objects 50, 50′ that might not be resolvable in the associated range map image 80 alone, but for which there is sufficient intensity variation so as to provide for detecting an associated edge profile 124′ from either the first 40.1 or second 40.2 stereo image components of one of the first 38.1, 38.1′ or second 38.2, 38.2′ stereo-vision cameras, or from a mono-image 166 from the separate camera 164.
For a range-cued object detection system 10 incorporating a stereo-vision system 16, notwithstanding that the stereo-vision processor 78, image processor 86, object detection system 88 and object discrimination system 92 have been illustrated as separate processing blocks, it should be understood that any two or more of these blocks may be implemented with a common processor, and that the particular type of processor is not limiting. Furthermore, it should be understood that the range-cued object detection system 10 is not limited in respect of the process by which the range map image 80 is generated from the associated first 40.1 and second 40.2 stereo image components.
It should be understood that the range map image 80 and the associated processing thereof could be with respect to coordinates other than down-range distance DR and cross-range distance CR. More generally, the range map image 80 and the associated processing thereof could generally be with respect to first and second coordinates, wherein the first coordinate either corresponds or is related to a down-range coordinate of the visual scene, and the second coordinate either corresponds or is related to a cross-range coordinate of the visual scene. For example, in an alternative set of embodiments, the first coordinate could be either a range or an associated time-of-flight, and the second coordinate could be a corresponding azimuth angle, for example, as might be obtained from either a radar or lidar ranging system 162. The associated clustering process 1500 could be performed either with respect to the corresponding space of first and second coordinates, or by or after transforming the first and second coordinates to the space of down-range distance DR and cross-range distance CR. Alternatively, some other coordinate system that is transformable from or to a space of down-range and cross-range coordinates might be used.
It should also be understood that the images 90 illustrated herein, and the associated pixel space illustrated in
While specific embodiments have been described in detail in the foregoing detailed description and illustrated in the accompanying drawings, those with ordinary skill in the art will appreciate that various modifications and alternatives to those details could be developed in light of the overall teachings of the disclosure. It should be understood, that any reference herein to the term “or” is intended to mean an “inclusive or” or what is also known as a “logical OR”, wherein when used as a logic statement, the expression “A or B” is true if either A or B is true, or if both A and B are true, and when used as a list of elements, the expression “A, B or C” is intended to include all combinations of the elements recited in the expression, for example, any of the elements selected from the group consisting of A, B, C, (A, B), (A, C), (B, C), and (A, B, C); and so on if additional elements are listed. Furthermore, it should also be understood that the indefinite articles “a” or “an”, and the corresponding associated definite articles “the’ or “said”, are each intended to mean one or more unless otherwise stated, implied, or physically impossible. Yet further, it should be understood that the expressions “at least one of A and B, etc.”, “at least one of A or B, etc.”, “selected from A and B, etc.” and “selected from A or B, etc.” are each intended to mean either any recited element individually or any combination of two or more elements, for example, any of the elements from the group consisting of “A”, “B”, and “A AND B together”, etc. Yet further, it should be understood that the expressions “one of A and B, etc.” and “one of A or B, etc.” are each intended to mean any of the recited elements individually alone, for example, either A alone or B alone, etc., but not A AND B together. Furthermore, it should also be understood that unless indicated otherwise or unless physically impossible, that the above-described embodiments and aspects can be used in combination with one another and are not mutually exclusive. Accordingly, the particular arrangements disclosed are meant to be illustrative only and not limiting as to the scope of the invention, which is to be given the full breadth of the appended claims, and any and all equivalents thereof.
The instant application is a continuation-in-part of U.S. application Ser. No. 13/429,803 filed on Mar. 26, 2012, which is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4802230 | Horowitz | Jan 1989 | A |
5307136 | Saneyoshi | Apr 1994 | A |
5400244 | Watanabe et al. | Mar 1995 | A |
5487116 | Nakano et al. | Jan 1996 | A |
5671290 | Vaidyanathan | Sep 1997 | A |
5835614 | Aoyama et al. | Nov 1998 | A |
5937079 | Franke | Aug 1999 | A |
5987174 | Nakamura et al. | Nov 1999 | A |
6031935 | Kimmel | Feb 2000 | A |
6122597 | Saneyoshi et al. | Sep 2000 | A |
6169572 | Sogawa | Jan 2001 | B1 |
6215898 | Woodfill et al. | Apr 2001 | B1 |
RE37610 | Tsuchiya et al. | Mar 2002 | E |
6456737 | Woodfill et al. | Sep 2002 | B1 |
6477260 | Shimomura | Nov 2002 | B1 |
6771834 | Martins et al. | Aug 2004 | B1 |
6788817 | Saka et al. | Sep 2004 | B1 |
6911997 | Okamoto et al. | Jun 2005 | B1 |
6956469 | Hirvonen et al. | Oct 2005 | B2 |
6961443 | Mahbub | Nov 2005 | B2 |
6963661 | Hattori et al. | Nov 2005 | B1 |
7046822 | Knoeppel et al. | May 2006 | B1 |
7203356 | Gokturk et al. | Apr 2007 | B2 |
7263209 | Camus et al. | Aug 2007 | B2 |
7340077 | Gokturk et al. | Mar 2008 | B2 |
7397929 | Nichani et al. | Jul 2008 | B2 |
7400744 | Nichani et al. | Jul 2008 | B2 |
7403659 | Das et al. | Jul 2008 | B2 |
7493202 | Demro et al. | Feb 2009 | B2 |
7505841 | Sun et al. | Mar 2009 | B2 |
7539557 | Yamauchi | May 2009 | B2 |
7667581 | Fujimoto | Feb 2010 | B2 |
7796081 | Breed | Sep 2010 | B2 |
7812931 | Nichiuchi | Oct 2010 | B2 |
7920247 | Kitano | Apr 2011 | B2 |
8094888 | Satoh et al. | Jan 2012 | B2 |
20010019356 | Takeda et al. | Sep 2001 | A1 |
20030138133 | Nagaoka et al. | Jul 2003 | A1 |
20030169906 | Gokturk et al. | Sep 2003 | A1 |
20030204384 | Owechko et al. | Oct 2003 | A1 |
20050013465 | Southall et al. | Jan 2005 | A1 |
20050018043 | Takeda et al. | Jan 2005 | A1 |
20050024491 | Takeda et al. | Feb 2005 | A1 |
20050169530 | Nakai et al. | Aug 2005 | A1 |
20080240526 | Suri et al. | Oct 2008 | A1 |
20080240547 | Cho et al. | Oct 2008 | A1 |
20080253606 | Fujimaki et al. | Oct 2008 | A1 |
20090010495 | Schamp et al. | Jan 2009 | A1 |
20100074532 | Gordon et al. | Mar 2010 | A1 |
20100208994 | Yao et al. | Aug 2010 | A1 |
20110026770 | Brookshire | Feb 2011 | A1 |
20110208357 | Yamauchi | Aug 2011 | A1 |
20110311108 | Badino et al. | Dec 2011 | A1 |
20120045119 | Schamp | Feb 2012 | A1 |
Number | Date | Country |
---|---|---|
0281725 | Sep 1988 | EP |
1944620 | Jan 2010 | EP |
9254726 | Sep 1997 | JP |
2000207693 | Jul 2000 | JP |
2001052171 | Feb 2001 | JP |
3450189 | Sep 2003 | JP |
2003281503 | Oct 2003 | JP |
Entry |
---|
Kim et al., “Multi-view image and ToF sensor fusion for dense 3D reconstruction” Oct. 4, 2009, 2009 IEEE 12th Int. Conf. on Computer Vision Workshops, p. 1542-1549. |
Zabih, R.; and Woodfill, J.; “Non-parametric Local Transforms for Computing Visual Correspondence,” Proceeding of European Conference on Computer Vision, Stockholm, Sweden, May 1994, pp. 151-158. |
Woodfill, J; and Von Herzen, B.; “Real-time stereo vision on the PARTS reconfigurable computer,” Proceedings of the 5th Annual IEEE Symposium on Field Programmable Custom Computing Machines, (Apr. 1997). |
Konolige, K., “Small Vision Systems: Hardware and Implementation,” Proc. Eighth Int'l Symp. Robotics Research, pp. 203-212, Oct. 1997. |
Das et al., U.S. Appl. No. 60/549,203, Mar. 2, 2004. |
Baik, Y.K; Jo, J.H.; and Lee K.M.; “Fast Census Transform-based Stereo Algorithm using SSE2,” in the 12th Korea-Japan Joint Workshop on Frontiers of Computer Vision, Feb. 2-3, 2006, Tokushima, Japan, pp. 305-309. |
Kim, J.H.; Park, C.O.; and Cho, J.D.; “Hardware implementation for Real-time Census 3D disparity map Using dynamic search range,” Sungkyunkwan University School of Information and Communication, Suwon, Korea (downloaded from vada.skku.ac.kr/Research/Census.pdf on Dec. 28, 2011). |
Unknown Author, “3d Stereoscopic Photography,” downloaded from http://3dstereophoto.blogspot.com/2012/01/stereo-matching-local-methods.html on Oct. 19, 2012. |
Unknown Author, “Stereo Matching,” downloaded from www.cvg.ethz.ch/teaching/2010fall/compvis/lecture/vision06b.pdf on Oct. 19, 2012. |
Press, W. H.; Flannery, B. P.; Teukolsky, S. A.; and Vetterling, W. T., “14.8 Savitzky-Golay Smoothing Filters,” in Numerical Recipes in C: The Art of Scientific Computing (ISBN 0-521-43108-5), Cambridge University Press, 1988-1992, pp. 650-655. |
Wu, H.-S. et al., “Optimal segmentation of cell images”, IEE Proceedings: Vision, Image and Signal Processing, Institution of Electrical Engineers, GB, vol. 145, No. 1, Feb. 25, 1998, pp. 50-56. |
Halir, R.; and Flusser, J., “Numerically stable direct least squares fitting of ellipses.” Proc. 6th International Conference in Central Europe on Computer Graphics and Visualization. WSCG. vol. 98. 1998. |
Morse, B. S., “Lecture 2: Image Processing Review, Neighbors, Connected Components, and Distance,” Brigham Young University, Copyright Bryan S. Morse 1998-2000, last modified on Jan. 6, 2000, downloaded from http://morse.cs.byu.edu/650/lectures/lect02/review-connectivity.pdf on Jul. 15, 2011, 7 pp. |
Garnica, C.; Boochs, F.; and Twardochlib, M., “A New Approach to Edge-Preserving Smoothing for Edge Extraction and Image Segmentation,” International Archives of Photogrammetry and Remote Sensing. vol. XXXIII, Part B3. Amsterdam 2000, pp. 320-325. |
Wang Z et al., “Shape based leaf image retrieval”, IEE Proceedings: Vision, Image and Signal Processing, Institution of Electrical Engineers, GB, vol. 150, No. 1, Feb. 20, 2003, pp. 34-43. |
Grigorescu, C. et al., “Distance sets for shape filters and shape recognition”, IEEE Transactions on Image Processing, IEEE Service Center, Piscataway, NJ, US, vol. 12, No. 10, Oct. 1, 2003, pp. 1274-1286. |
Bernier, T. et al., “A new method for representing and matching shapes of natural objects”, Pattern Recognition, Elsevier, GB, vol. 36, No. 8, Aug. 1, 2003, pp. 1711-1723. |
Darrell, T. et al.: “Integrated person tracking using stereo, color, and pattern detection”, Internet Citation, 2000, XP002198613, Retrieved from the Internet: URL:http://www.ai.mit.edu/-trevor/papers/1998-021/TR-1998-021.pdf, [retrieved by the International Searching Authority/EPO on May 10, 2002]. |
International Search Report and Written Opinion of the International Searching Authority in International Application No. PCT/US2013/033539, Oct. 21, 2013, 17 pages. |
Number | Date | Country | |
---|---|---|---|
20130251194 A1 | Sep 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13429803 | Mar 2012 | US |
Child | 13465059 | US |