The present invention relates to methods for detecting and for tracking objects in motion in scenes observed by optical sensors.
Among the known techniques for detecting objects in images are iterative closest-point search algorithms, i.e. algorithms of the ICP (Iterative Closest Point) type. These ICP algorithms are known for their effectiveness in applications such as range data registration, 3D reconstruction, object tracking and motion analysis. See for example the article “Efficient Variants of the ICP Algorithm”, by S. Rusinkiewicz and M. Levoy, 3rd International Conference on 3D Digital Imaging and Modeling, June 2001, pp. 145-152.
The principle of an ICP algorithm is to use a set of points serving as a model that delimits the contour of the object, and to bring it into correspondence with a set of points that is part of the acquired data. A transformation between the known model set and the set of data points is estimated in order to express their geometrical relationship by minimizing an error function. The tracking of an arbitrary shape can be handled by the ICP technique when a model of this shape is provided.
The article “Iterative Estimation of Rigid Body Transformations Application to robust object tracking and Iterative Closest Point”, by M. Hersch, et al., Journal of Mathematical Imaging and Vision, 2012, Vol. 43, No. 1, pp. 1-9, presents an iterative method for executing the ICP algorithm. In order to determine a rigid spatial transformation T that makes it possible to detect in an image a pattern defined by a set of points {xi} to which points of the image respectively correspond, the classic analytic, closed-form solution, consisting in seeking the transformation T by minimizing an error criterion of the form

minT Σi ∥yi − Txi∥²
where the sum runs over the set of points xi of the pattern, is replaced with an iterative solution wherein an initial estimation of the transformation T is taken, and each iteration consists in randomly taking a point xi from the pattern, finding its corresponding point yi in the image, and updating the transformation T by subtracting a term proportional to the gradient ∇∥yi−Txi∥² with respect to the translation and rotation parameters of the transformation T. When the transformation T becomes stationary from one iteration to the next, the iterations stop and T is retained as the final estimation of the transformation that makes it possible to detect the pattern in the image.
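A minimal sketch of this iterative scheme can be written as follows. The `match` correspondence function, the step sizes and the iteration count are illustrative assumptions, not values taken from the Hersch et al. article; only rotation and translation in the plane are considered.

```python
import numpy as np

def rot(theta):
    """2D rotation matrix of angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def iterative_icp(pattern, match, n_iter=3000, eta_t=0.1, eta_r=0.01, seed=0):
    """Estimate a rigid transformation (theta, T) by per-point gradient descent:
    at each iteration, draw one pattern point x, fetch its corresponding image
    point y = match(x), and step against the gradient of ||y - R_theta x - T||^2
    with respect to T and theta."""
    rng = np.random.default_rng(seed)
    theta, T = 0.0, np.zeros(2)
    for _ in range(n_iter):
        x = pattern[rng.integers(len(pattern))]
        r = match(x) - rot(theta) @ x - T              # residual y - T(x)
        T += eta_t * 2.0 * r                           # -eta * grad_T
        # d(R_theta)/dtheta equals the rotation of angle theta + pi/2
        theta += eta_r * 2.0 * (r @ (rot(theta + np.pi / 2) @ x))
    return theta, T
```

With noise-free correspondences, the updates vanish exactly at the true transformation, so the iterates settle there for small enough steps.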
In conventional vision based on successively acquired images, the frame rate of the camera (about 60 frames per second, for example) is often insufficient for ICP techniques. The repeated calculation of the same information in successive images also limits the real-time performance of ICP algorithms. In practice, they are restricted to the detection of simple shapes that do not move too quickly.
Contrary to conventional cameras that record successive images at regular sampling instants, biological retinas transmit only very little redundant information about the scene to be visualized, and do so asynchronously. Asynchronous event-based vision sensors deliver compressed digital data in the form of events. A presentation of such sensors can be consulted in “Activity-Driven, Event-Based Vision Sensors”, T. Delbrück, et al., Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 2426-2429. Event-based vision sensors have the advantage of removing redundancy, reducing latency and increasing the dynamic range with respect to conventional cameras.
The output of such a vision sensor can consist, for each pixel address, of a sequence of asynchronous events that represent changes in the reflectance of the scene at the time they occur. Each pixel of the sensor is independent and detects changes in intensity greater than a threshold since the emission of the last event (for example a contrast of 15% on the logarithm of the intensity). When the change in intensity exceeds the set threshold, an ON or OFF event is generated by the pixel according to whether the intensity increases or decreases. Certain asynchronous sensors associate the detected events with measurements of light intensity. As the sensor is not sampled on a clock like a conventional camera, it can take the sequencing of events into account with very great time precision (for example of about 1 μs). If such a sensor is used to reconstruct a sequence of images, an image frame rate of several kilohertz can be achieved, compared to a few tens of hertz for conventional cameras.
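The ON/OFF generation rule described above can be illustrated with a simplified single-pixel model. This is a sketch under the assumption of a purely logarithmic threshold rule, sampled from a list of intensity values; real sensors operate continuously and per pixel:

```python
import math

def dvs_events(intensities, timestamps, contrast=0.15):
    """Simulate one DVS pixel: emit (t, +1) ON or (t, -1) OFF events each time
    the log-intensity moves by more than the contrast threshold since the
    reference level of the last emitted event."""
    events = []
    ref = math.log(intensities[0])          # reference log-intensity
    for t, i in zip(timestamps[1:], intensities[1:]):
        li = math.log(i)
        while li - ref > contrast:          # intensity increased -> ON event
            ref += contrast
            events.append((t, +1))
        while ref - li > contrast:          # intensity decreased -> OFF event
            ref -= contrast
            events.append((t, -1))
    return events
```

A steadily brightening pixel thus emits a train of ON events whose density encodes the rate of change, with no output at all when the intensity is constant.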
Event-based vision sensors have promising perspectives, and it is desirable to propose effective methods for tracking objects in motion using signals delivered by such sensors.
In “Fast sensory motor control based on event-based hybrid neuromorphic-procedural system”, ISCAS 2007, New Orleans, 27-30 May 2007, pp. 845-848, T. Delbrück and P. Lichtsteiner describe an algorithm for tracking clusters (cluster tracker) that can be used, for example, for controlling a soccer goalkeeper robot using an event-based vision sensor. Each cluster models a mobile object as a source of events. Events that fall within a cluster change its position. A cluster is considered visible only if it has received a number of events greater than a threshold.
In “Asynchronous event-based visual shape tracking for stable haptic feedback in microrobotics”, Z. Ni, et al., IEEE Transactions on Robotics, 2012, Vol. 28, No. 5, pp. 1081-1089, an event-based version of the ICP algorithm is presented, which is based on minimizing a cost function in analytical form.
There is a need for a method for tracking shapes that is rapid and that has good temporal precision.
A method of tracking a shape in a scene is proposed, comprising:
The updating comprises, following detection of an event:
Matching of the observed points with the model is not carried out in a grouped manner after the acquisition of a complete image, or even of a sufficient number of events relating to the tracked shape in the scene. The tracking of shapes via the iterative algorithm is carried out much more quickly, as the asynchronous events arrive.
Determining the spatial transformation allowing the model to be updated is usually based on minimizing a cost function of the form:
ΣD(p[ev],Ft(A[ev])) (1)
In the proposed method, the approach is different because only the association between the current event and the model point associated with it is taken into account, not prior associations. As the cost function cannot be minimized on this basis alone, each iteration calculates a corrective term, much as in a gradient descent, which is applied to the model in order to make the latter converge toward the solution that correctly tracks the shape in the scene. Such convergence is ensured even when the object is in motion, thanks to the dynamics and to the high number of events that the motion causes.
In order to filter out acquisition noise, one can refrain from updating the model when no point of the model is located at a distance less than a threshold from the pixel from which a detected event originates; in this case the event is not attributed to the object.
An interesting embodiment of the method further comprises:
The properties of the aforesaid “plane of displacement” allow for several useful pieces of processing, in particular in the case where a plurality of objects have respective shapes tracked in the scene, each one of the objects having a respective model updated after detection of events that are attributed thereto and an estimated plane of displacement.
For instance, following detection of an event attributable to at least two of the objects, it is possible to calculate respective distances, in the three-dimensional space, between a point that marks the detected event and the planes of displacement respectively estimated for said objects, and attribute the detected event to the object for which the calculated distance is minimal. This makes it possible to combine spatial and time constraints in order to remove ambiguities between several objects to which a detected event is attributable.
Another possibility is to estimate a statistical distribution of distances between the plane of displacement of the object and the points marking detected events that were attributed to the object, and then, after detection of an event, to take into account the estimated plane of displacement of the object and the estimated statistical distribution in order to decide whether or not to attribute the event to the object. This makes it possible to take into account the possible motion of the background of the scene when the asynchronous sensor is itself in motion. In particular, it is possible to determine an interval of admissible distance values based on the estimated statistical distribution, and not to attribute a detected event to the object if the point marking this detected event in the three-dimensional space has, with respect to the estimated plane of displacement, a distance that falls outside of the interval of admissible distance values.
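One possible way to build such an admissible interval is sketched below. The text does not prescribe a distribution model; summarizing the past distances by their mean and standard deviation, and the width factor `k`, are illustrative assumptions:

```python
import statistics

def admissible_interval(distances, k=3.0):
    """Interval [mu - k*sigma, mu + k*sigma] built from the point-to-plane
    distances of events previously attributed to the object. The mean/std
    summary and the factor k are illustrative choices."""
    mu = statistics.fmean(distances)
    sigma = statistics.pstdev(distances)
    return (mu - k * sigma, mu + k * sigma)

def attribute(distance, interval):
    """Attribute the new event only if its distance to the plane of
    displacement falls inside the admissible interval."""
    lo, hi = interval
    return lo <= distance <= hi
```

Events generated by a moving background tend to sit far from the object's plane of displacement and therefore fall outside the interval, so they are simply not attributed.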
Other features can be provided when a plurality of objects have respective shapes tracked in the scene, each one of the objects having a respective model updated after detection of events attributed thereto.
For instance, if, following detection of an event, only one of the objects satisfies the condition of having in its model a point at a distance less than a threshold from the pixel of the matrix from which the detected event originates, the detected event is attributed to that object.
Following detection of an event attributable to at least two of the objects, it is possible to take spatial constraints into account in order to remove the ambiguity. A possibility is to associate with the detected event, for each object to which the detected event is attributable, a point of the model of this object by minimizing a respective distance criterion with respect to the pixel of the matrix from which the detected event originates, and to attribute the detected event to the object for which the minimized distance criterion is lowest. An alternative consists in assigning the detected event to none of the objects.
Another alternative consists in assigning the detected event to each of the objects to which the detected event is attributable. Updating the models of objects to which the detected event is attributed may be carried out with weightings that depend on the distance criteria respectively minimized for said objects.
Following detection of an event attributable to at least two objects, it is also possible to take time constraints into account in order to remove the ambiguity. A possibility is, for each object, to estimate a rate of events attributed to it and to memorize the instant at which the last event attributed to it was detected. An event attributable to at least two objects is then attributed to the one of the objects for which the product of the estimated rate of events and the time interval between the memorized instant and the instant of detection of said event is closest to 1.
In an embodiment of the method, determining the updated model comprises estimating a spatial transformation defined by a set of parameters, and applying the estimated spatial transformation to the model. Estimating the spatial transformation comprises calculating said parameters as a function of a gradient of a distance, in the plane of the matrix of pixels, between the pixel of the matrix from which the detected event originates and a point obtained by applying the spatial transformation to the point of the model associated with the detected event.
A particular case is the one where the spatial transformation is a rigid transformation, including a translation and a rotation in the plane of the matrix of pixels. A possibility is to take for the translation a vector ΔT equal to −η1·∇Tf(Δθ0, ΔT0) and for the rotation an angle Δθ equal to −η2·∇θf(Δθ0, ΔT0), where η1 and η2 are predefined positive convergence steps and Δθ0 and ΔT0 are particular values of the angle of rotation and of the translation vector. For example, Δθ0 = mĉp and ΔT0 = cp − RΔθ0[cm].
Another case of interest is the one where the spatial transformation is an affine transformation further including application of respective scaling factors according to two axes included in the matrix of pixels. The scaling factors sx, sy along the two axes x, y may be calculated according to sx=1+η3·(|px|−|mx|) and sy=1+η3·(|py|−|my|), respectively, where η3 is a predefined positive convergence step, px and py are the coordinates along the axes x and y of the pixel of the matrix from which the detected event originates, and mx and my are the coordinates along the axes x and y of the point of the model associated with the detected event.
Another aspect of the present invention relates to a device for tracking a shape in a scene, comprising a computer configured to execute a method such as defined hereinabove using asynchronous information received from a light sensor.
Other features and advantages of the present invention will appear in the description hereinafter, in reference to the annexed drawings, wherein:
The device shown in
A computer 20 processes the asynchronous information originating from the sensor 10, i.e. the sequences of events ev(p, t) received asynchronously from the various pixels p, in order to extract therefrom information Ft on certain shapes changing in the scene. The computer 20 operates on digital signals. It can be implemented by programming a suitable processor. A hardware implementation of the computer 20 using specialized logic circuits (ASIC, FPGA, . . . ) is also possible.
For each pixel p of the matrix, the sensor 10 generates an event-based asynchronous signal sequence using the variations of light detected by the pixel in the scene that appears in the field of vision of the sensor.
The asynchronous sensor carries out an acquisition for example according to the principle shown by
The activation threshold Q can be set, as in the case of
By way of example, the sensor 10 can be a dynamic vision sensor (DVS) of the type described in “A 128×128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor”, P. Lichtsteiner, et al., IEEE Journal of Solid-State Circuits, Vol. 43, No. 2, February 2008, pp. 566-576, or in patent application US 2008/0135731 A1. The dynamics of a retina (minimum duration between action potentials), of about a few milliseconds, can be approached with a DVS of this type. The dynamic performance is in any case far superior to that which can be achieved with a conventional video camera having a realistic sampling frequency. Note that the shape of the asynchronous signal delivered for a pixel by the DVS 10, which constitutes the input signal of the computer 20, can be different from a succession of Dirac peaks, as the events shown can have any temporal width, amplitude or waveform in the event-based asynchronous signal.
Another example of an asynchronous sensor that can be used advantageously in the context of this invention is the asynchronous time-based image sensor (ATIS) of which a description is given in the article “A QVGA 143 dB Dynamic Range Frame-Free PWM Image Sensor With Lossless Pixel-Level Video Compression and Time-Domain CDS”, C. Posch, et al., IEEE Journal of Solid-State Circuits, Vol. 46, No. 1, January 2011, pp. 259-275.
In the case where the sensor 10 consists of a two-dimensional matrix of pixels, the events originating from pixels can be placed in a three-dimensional space-time representation such as shown in
via the motion of a star rotating at a constant angular speed as diagrammed in the inset A. The major portion of these points are distributed in the vicinity of a surface with a generally helical shape. Furthermore, the figure shows a certain number of events at a distance from the helical surface which are measured but do not correspond to the actual movement of the star. These events are acquisition noise.
The principle of an ICP algorithm is to use a set of points forming a model representing the shape of an object, for example describing the contour of this object, in order to have it correspond with a set of points provided by acquisition data, then to calculate the geometrical relationship between this set of points and the model by minimizing an error function.
In the case where the scene is observed by an event-based asynchronous sensor rather than by a conventional camera, the events are received with precise time stamping as they occur. An ICP-type algorithm thus does not need to wait for information covering the entire extent of the scene; it can process the events as they arrive.
An event ev(p, t) describes an activity in the space-time domain. In the event-based version of the ICP algorithm shown in
S(t) = {ev(p, t′) : 0 < t − t′ ≤ Δt} (2)
where Δt is a persistence time. After the time interval Δt has elapsed following activation of an event, this event is eliminated from the set S(t).
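The maintenance of the sliding set S(t) of equation (2) can be sketched as follows; the event payload `p` is kept opaque, and the class name is illustrative:

```python
from collections import deque

class EventWindow:
    """Set S(t) of equation (2): the events ev(p, t') such that
    0 < t - t' <= dt, where dt is the persistence time."""
    def __init__(self, dt):
        self.dt = dt
        self.events = deque()          # (t, p) pairs in time order

    def add(self, t, p):
        """Record a new event at time t, then drop expired ones."""
        self.events.append((t, p))
        self.prune(t)

    def prune(self, t):
        """Eliminate events older than dt, as required by (2)."""
        while self.events and t - self.events[0][0] > self.dt:
            self.events.popleft()
```

A deque keeps both insertion of the newest event and expiry of the oldest one in constant time, which matches the event-driven use of S(t).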
Matching the points between the model and the acquisition data constitutes the most demanding step in terms of calculation resources. G(t) denotes the set of positions of the points of the two-dimensional model defining the shape of the object at an instant t. The association between the acquisition points and the points of the model can be carried out sequentially. Each time a new event is activated, it is matched with an element of G(t), preferably with an element of G(t) that has not already been associated with an event of S(t). It is possible to add to this step a verification of a distance criterion in order to reject noise or other points that are not part of the shape sought.
Because the sensor is asynchronous, the number of associated points varies rather substantially. There are typically from a few points to a few hundred points associated during the persistence time Δt. This is very different from the conventional frame-based approach. Immobile objects do not generate any stimulus, so that it is not necessary to update their position. When the scene comprises little motion, only a small part of the calculation resources are used, while in highly dynamic situations, the algorithm requires full capacity in order to update the information.
In the example of
In the version of the algorithm shown in
Following reception of an event ev(p, t) originating from a pixel of position p in the matrix at time t (step 22), two operations are carried out: updating the set S(t) and associating a point of the model G with the detected event. In the loop 23-24, the events older than Δt are eliminated from S(t): test 23 checks whether the time T(a) is greater than t−Δt. If T(a) is not greater than t−Δt, the number a is incremented by one unit in step 24 and test 23 is reiterated. The elimination of too-old events is complete when T(a) > t−Δt at test 23.
The algorithm then proceeds with associating a point of G with the new event in step 25. This associated point is the one whose position m is closest to the pixel p from which the event ev(p, t) originates, among the points of the model that have not already been associated with an event of the set S(t), i.e. among the points of the set G∖M:

m = argminq∈G∖M d(p, q) (3)
The distance criterion d(.,.) used in this step 25 is for example the Euclidean distance in the plane of the matrix. Before deciding whether the event ev(p, t) will be included in the set S(t), the algorithm examines in step 26 whether the minimized distance is less than a threshold dmax. By way of example, the threshold dmax can be chosen as corresponding to 6 pixels. A different threshold value can naturally be retained if tests show that it is better suited to a particular application. If d(p, m) ≥ dmax, the event is set aside and the algorithm returns to the step 21 of waiting for the next event.
If the event is attributed to the object sought (d(p, m)<dmax at the test 26), the index b is incremented by one unit in step 27, then the detection time t, the position p of this event and the point m of the model that has just been associated with it are recorded as T(b), P(b) and M(b) in the step 28. The processing consecutive to the detection of the event ev(p, t) is then terminated and the algorithm returns to the step 21 of waiting for the next event.
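The association of steps 25 and 26 can be sketched as follows; the Euclidean distance and the 6-pixel threshold are the example values given in the text, and the function name is illustrative:

```python
import math

def associate(p, model, already_matched, d_max=6.0):
    """Steps 25-26: return the closest model point to pixel p among the
    points not already associated with an event of S(t), or None if the
    minimized distance reaches the threshold d_max (event set aside)."""
    candidates = [q for q in model if q not in already_matched]
    if not candidates:
        return None
    m = min(candidates, key=lambda q: math.dist(p, q))
    return m if math.dist(p, m) < d_max else None
```

A `None` return corresponds to the noise-rejection branch in which the event is not recorded in S(t) and the algorithm simply waits for the next event.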
At the expiration of the updating period of the spatial transformation, a test 30 is carried out in order to ensure that a sufficient number of events, for example L = 5, are present in the set S(t) to contribute to the updating. Hence, if b < a+L−1 (test 30), no updating is carried out and the algorithm returns to the step 21 of waiting for the next event.
If there are enough events (b≥a+L−1), a minimization operation 31 is carried out in order to choose an angle of rotation Δθ and a vector of translation ΔT in the case where the spatial transformation Ft, that is sought using the model G, is a combination of a rotation RΔθ of angle Δθ and of a translation of vector ΔT.
The minimization operation 31 consists in finding the parameters Δθ, ΔT that minimize a distance criterion such as, for example, a sum of the form:

Σn=a..b ∥cP(n) − RΔθ[cM(n)] − ΔT∥² (4)
which is a particular case of expression (1) where the parameters to be estimated are the angle Δθ of the rotation RΔθ, defined by the matrix

RΔθ = [cos Δθ  −sin Δθ; sin Δθ  cos Δθ],
and the coordinates of the vector ΔT. In the expression (4), the notations cP(n) and cM(n) represent the vectors that have for origin the center c of the rotation RΔθ and pointing respectively to P(n) and M(n). The position of the center c of the rotation RΔθ can be defined in relation to model G(t). For example, it is possible to place the point c at the center of gravity of the points of the model G(t), as shown in
The spatial transformation Ft comprised of the rotation RΔθ and of the translation ΔT is here the one that moves the model G(t) to bring it as close as possible to the pixels where the events recently taken into account were detected, i.e. events of the set S(t). This is what is shown in
The rotation RΔθ and the translation ΔT that minimize the criterion (4) reveal the motion of the shape corresponding to the model G between the updating instant of the spatial transformation and the preceding updating instant. In step 32, the same transformation is applied to the points of the sets G and M in order to update these two sets. Each position X of the model G (or of the set M) is replaced with a position Y such that cY=RΔθ[cX]+ΔT. After step 32, the algorithm returns to the step 21 of waiting for the next event.
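Minimizing a criterion of the form (4) over the matched pairs, and then applying the estimated transformation as in step 32, can be sketched with the standard closed-form SVD (Kabsch) solution; the text later mentions an SVD-based implementation, but the function names and array layout here are illustrative:

```python
import numpy as np

def rigid_fit(M, P, c):
    """Minimize sum ||cP(n) - R[cM(n)] - dT||^2 (criterion (4)) over a
    rotation R and translation dT about center c, by the standard SVD
    (Kabsch) solution. M, P are (n, 2) arrays of matched points."""
    Mc, Pc = M - c, P - c
    mq, pq = Mc - Mc.mean(0), Pc - Pc.mean(0)     # center both clouds
    U, _, Vt = np.linalg.svd(mq.T @ pq)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    dT = Pc.mean(0) - R @ Mc.mean(0)              # optimal translation
    return R, dT

def update_model(G, c, R, dT):
    """Step 32: replace each position X of the model by Y with cY = R[cX] + dT."""
    return (R @ (G - c).T).T + dT + c
```

With exact correspondences the fit recovers the motion exactly; with noisy event positions it returns the least-squares rigid motion.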
The spatial transformations Ft thus characterized by the angles Δθ of the rotations RΔθ estimated successively and by the corresponding translation vectors ΔT represent the motions of the shape tracked in the scene. Their parameters are the outputs of the computer 20 of
The embodiment shown in
The persistence time Δt is set according to the dynamic content of the scene. In an implementation based on SVD calculations, it is desirable that the time interval Δt is long enough so that the set S(t) retains a complete contour of the mobile object sought, in such a way that almost all of the points of this contour can be put into correspondence with events. On the other hand, an excessively long duration Δt increases the calculation load, and does not make it possible to correctly track fast objects. The duration Δt is typically chosen between 10 μs and 20 ms.
Another approach in shape tracking in the scene seen by the asynchronous sensor 10 is shown in
In the embodiment of
In order to initiate the tracking of each object k, its model Gk is initialized (step 40) with a positioning that is rather close to that of this object in the field of vision of the sensor 10. Then, in step 41, the algorithm waits for the new events originating from the sensor 10.
Following reception of an event ev(p, t) originating from a pixel of position p in the matrix at time t (step 42), a step 43 of associating a point mk of the model Gk with the detected event is carried out for each object k (k=1, 2, . . . , K). For each object k, step 43 is identical to step 25 described hereinabove in reference to
In step 44, the event ev(p, t) which was detected in step 42 is attributed to an object k or, failing that, excluded as not being related to the motion of any tracked object in the scene. If the event ev(p, t) is not attributed to any object, the algorithm returns to step 41 of waiting for the next event. In the case of attribution to an object k, the spatial transformation Ft is calculated in step 45 for the model Gk of this object.
Several tests or filtering operations can be carried out in step 44 in order to decide whether or not to attribute the event ev(p, t) to an object k.
The simplest is to proceed as in step 26 described hereinabove in reference to
Another processing operation that can take place in step 44 is taking into account the possible motion of the background. In particular, if the asynchronous sensor 10 is itself in motion, the fixed background is in relative displacement and generates the detection of many events, which are to be excluded from the processing concerning the tracking of objects of interest. A way to take the motion of the background into account will be described hereinafter.
Once the event ev(p, t) has been attributed to an object k, the parameters of the spatial transformation Ft are calculated in step 45 then this transformation Ft is applied to the model Gk in order to update the latter in step 46. Finally, a plane of displacement of the object k, noted as Πk, is estimated in step 47. The algorithm then returns to the step 41 of waiting for the next event.
Limiting oneself to one current event p associated with a point m of the model Gk in order to calculate a spatial transformation Ft amounts to introducing a component f of a cost function:
f=d[p,Ft(m)] (5)
where d[., .] is a measurement of distance in the plane of the matrix of pixels. It can in particular be a quadratic distance.
If rigid spatial transformations are considered for the updating of the model Gk, the angle Δθ of a rotation RΔθ of given center c and the vector ΔT of a translation must be determined. With a quadratic distance, the cost function component is written:
f = ∥cp − RΔθ[cm] − ΔT∥² (6)
where cp and cm designate the vectors that have for origin the center c of the rotation RΔθ and respectively point to the points p and m.
This component f can be minimized for an infinity of pairs (Δθ, ΔT), since for any angle Δθ, the choice ΔT = cp − RΔθ[cm] gives f = 0. The objective is to minimize a global cost function of which f is only a component. However, this component f allows for an estimation of the gradient terms ∇θf, ∇Tf with respect to the angle of rotation θ (or Δθ) and to the translation vector T (or ΔT), in order to carry out a sort of gradient descent during the updating of the model Gk. In other terms, the following parameter values are retained for the spatial transformation Ft:
ΔT=−η1·∇Tf(Δθ0,ΔT0) (7)
Δθ=−η2·∇θf(Δθ0,ΔT0) (8)
where η1 and η2 are predefined positive convergence steps. By way of example, η1=0.25 and η2=0.025 can be taken to obtain good sensitivity. The suitable values of η1 and η2 are to be adjusted for each application, if necessary by performing some simulations or experiments. In (7) and (8), the partial derivatives ∇θf, ∇Tf are taken for suitable values Δθ0, ΔT0 of the angle of rotation and of the translation vector.
The partial derivatives of f have for expression:
∇Tf(Δθ0, ΔT0) = 2(ΔT0 − cp + RΔθ0[cm]) (9)
∇θf(Δθ0, ΔT0) = 2(ΔT0 − cp)TRΔθ0+π/2[cm] (10)
where (.)T represents the operation of transposition and RΔθ0+π/2 denotes the rotation of angle Δθ0 + π/2, which appears because the derivative of the rotation matrix RΔθ with respect to its angle is RΔθ+π/2.
These partial derivatives are to be calculated for the particular values of ΔT0 and Δθ0. The results ∇Tf(Δθ0, ΔT0) and ∇θf(Δθ0, ΔT0) are then injected into (7) and (8) in order to obtain the parameters ΔT and Δθ used in step 46 to update the model Gk.
In an embodiment of the method, the partial derivatives are calculated according to (9) and (10) by taking for Δθ0 the angle mĉp and for ΔT0 the vector cp − cm′, with cm′ = RΔθ0[cm].
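As a numerical illustration of the corrective step (7)-(8), the following sketch evaluates the gradients (9)-(10) at the linearization point Δθ0 = 0, ΔT0 = 0, one of the alternative choices mentioned in the text; η1 = 0.25 and η2 = 0.025 are the example step values:

```python
import numpy as np

def event_correction(p, m, c, eta1=0.25, eta2=0.025):
    """Per-event corrective step: gradients (9)-(10) evaluated at the
    linearization point dtheta0 = 0, dT0 = 0, then updates (7)-(8).
    Returns (dtheta, dT)."""
    cp, cm = np.asarray(p, float) - c, np.asarray(m, float) - c
    grad_T = 2.0 * (cm - cp)                    # (9) at (0, 0)
    r90_cm = np.array([-cm[1], cm[0]])          # rotation of pi/2 applied to cm
    grad_theta = -2.0 * (cp @ r90_cm)           # (10) at (0, 0)
    return -eta2 * grad_theta, -eta1 * grad_T   # (8) and (7)
```

With these values the translation moves the model point halfway toward the event pixel, while the rotation term turns the model in the direction that brings cm toward cp.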
Other choices are possible for the calculation of (9)-(10), for example Δθ0 = 0 and ΔT0 = mp (the simple translation that brings m onto p), or Δθ0 = ΔT0 = 0. As the elementary displacements between two iterations are of low amplitude, the precise point (Δθ0, ΔT0) at which the partial derivatives (9)-(10) are calculated probably has little influence, whether it is chosen at (0, 0) or as a function of the distance between m and p. Furthermore, this choice varies according to the convention chosen for the center c of the rotation. The center of rotation c is typically the center of gravity of the points of the model Gk, but this is not required.
In many applications of the method, the spatial transformation Ft can be represented by a combination of a rotation and of a translation as described above. Alternatives are however possible by allowing for deformations of the model Gk of an object.
In particular, it is possible to take affine transformations Ft into account. This allows three-dimensional motion of the object sought to be taken into account, and not only motion limited to the image plane. The 2D affine matrix stems from a rotation matrix RΔθ by the application of scaling factors sx, sy along the two axes. This amounts to seeking to match the points m and p according to a relation of the form

cp = [sx 0; 0 sy]·RΔθ[cm] + ΔT
where the point c can again be taken at the center of gravity of the points of the model Gk. Through a calculation of the partial derivatives of the cost function component f with respect to the scaling factors sx, sy, the same principle of gradient descent can be applied in order to estimate these scaling factors. As a first approximation, it is possible to use another convergence step η3, and to take:
sx=1+η3·(|px|−|mx|) (11)
sy=1+η3·(|py|−|my|) (12)
in order to complete the estimation of ΔT and Δθ according to (7) and (8). In the expressions (11) and (12), |px| and |py| are the absolute values of the coordinates of the vector cp, and |mx| and |my| are the absolute values of the coordinates of the vector cm.
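The scaling update (11)-(12) is direct to transcribe. No value of η3 is given in the text, so the default below is an illustrative assumption:

```python
def scale_factors(p, m, c, eta3=0.1):
    """Scaling update (11)-(12): sx, sy computed from the coordinates of the
    vectors cp and cm along the x and y axes. eta3 is a hypothetical
    convergence step (no value is prescribed in the text)."""
    px, py = p[0] - c[0], p[1] - c[1]   # coordinates of cp
    mx, my = m[0] - c[0], m[1] - c[1]   # coordinates of cm
    sx = 1 + eta3 * (abs(px) - abs(mx))
    sy = 1 + eta3 * (abs(py) - abs(my))
    return sx, sy
```

A factor greater than 1 stretches the model along the corresponding axis when the event falls farther from the center than its associated model point, and shrinks it otherwise.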
In the case where the object k tracked in the asynchronous signal of the sensor is simply an edge being displaced at a constant speed
In practice, the acquisition noise and the possible errors in attributing events to the object are such that there is a certain dispersion of the events around the plane Πk(t), which extends as a mean plane through the events recently attributed to the object.
The plane Πk(t), or Πk if the time index t is omitted in order to simplify the notations, can be defined by any of its points gk(t), or gk, and a vector nk(t), or nk, giving the direction of its normal. In the representation of
This minimizing calculation is carried out in step 47 to estimate the plane Πk which is representative of the instantaneous displacement of the object k.
For more details on the way to determine the plane of displacement Πk, it is possible to refer to the article “Event-based Visual Flow”, by R. Benosman, et al., IEEE Transaction On Neural Networks and Learning Systems, Vol. 25, No. 2, September 2013, pp. 407-417, or to patent application WO 2013/093378 A1.
In the case where the tracked object is not a simple edge, but an object whose shape seen by the sensor extends in two dimensions, it is also possible to determine the plane of displacement Πk by minimizing the sum of the distances between the recent events attributed to the object k and the plane defined by its parameters nk, gk. In the three-dimensional space-time representation, this plane Πk reveals the local displacement of the object k as a whole.
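A least-squares fit of the plane of displacement can be sketched as follows; a total least-squares fit via SVD is one standard way of minimizing the sum of squared point-to-plane distances, shown here as an illustration of the principle rather than as the prescribed implementation:

```python
import numpy as np

def fit_plane(events):
    """Fit the plane of displacement Pi_k to (x, y, t) event points: return a
    point g (the centroid) and a unit normal n, chosen as the direction of
    least variance of the centered cloud (smallest singular value)."""
    E = np.asarray(events, dtype=float)   # rows are (x, y, t) points
    g = E.mean(axis=0)
    _, _, Vt = np.linalg.svd(E - g)
    n = Vt[-1]                            # unit normal of the mean plane
    return g, n

def plane_distance(e, g, n):
    """Distance of event point e to the plane defined by (g, n)."""
    return abs((np.asarray(e, float) - g) @ n)
```

`plane_distance` then provides the quantity compared against the admissible interval, or used to disambiguate between the planes of several tracked objects.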
The plane of displacement Πk estimated for an object k can be used in several ways in step 44 of
Returning to step 44, it can in particular include, in the case where several objects are tracked (K > 1), the resolution of occlusion cases, or more generally of cases of ambiguity between several objects for the attribution of an event. In step 43, the respective distances d(mk, p) between the event ev(p, t) and the closest points mk of the models Gk of the various tracked objects were calculated. If only one of these distances d(mk, p) is less than a threshold dth, for example dth = 3 pixels, then the event is attributed to this object. Otherwise, it is considered an ambiguous event, i.e. one attributable to several different objects.
This processing is shown in
Spatial constraints can be taken into account in the ambiguity removal 56 according to several strategies:
The very high temporal resolution of the event-based acquisition process provides additional information for the resolution of ambiguous situations. A current event rate rk can be determined for each shape Gk being tracked, which carries information about the object k and partially encodes its dynamics.
Here, tk,0, tk,1, . . . , tk,N(k) denote the time labels of the most recent events attributed to an object k, N(k)+1 in number, during a time window whose length Δt can range from a few milliseconds to several tens of milliseconds, with tk,0<tk,1< . . . <tk,N(k) (the event detected at tk,N(k) is therefore the most recent for the object k). These time labels make it possible to calculate for each object k a moving average of the event rate rk, defined by:
This calculation of the current event rate rk can be carried out as soon as an event has been attributed to the object k (step 44) following the detection of this event at the instant tk,N(k).
Then, when a next event ev(p, t) gives rise to an ambiguity between several tracked objects, the step 44 can comprise the calculation of a score Ck for each object k to which the ambiguous event ev(p, t) can be attributed, according to the expression:
Ck=(t−tk,N(k))rk  (14)
This score Ck makes it possible to evaluate the temporal coherency of the ambiguous event ev(p, t) with each object k. The duration t−tk,N(k) can be expected to be close to the inverse of the current rate rk if the event ev(p, t) belongs to the object k. Taking the time constraint into account in step 44 then consists, after the score Ck has been calculated according to (14), in choosing, among the various objects k to which the event ev(p, t) is attributable, the one whose score is closest to 1. Once this choice has been made, the rate rk can be updated for the chosen object and control can pass to step 45 of
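The rate-based disambiguation above can be sketched as follows. The exact averaging formula for rk is not reproduced in this passage, so the sketch assumes the simple moving average rk = N(k)/(tk,N(k) − tk,0) over the window, which is an assumption; the score follows expression (14) and the candidate whose score is closest to 1 wins.

```python
def event_rate(timestamps):
    """Moving-average event rate r_k over the sorted recent time labels
    t_k,0 < ... < t_k,N(k): N(k) inter-event intervals over the spanned
    duration (a simple estimate; the exact averaging used may differ)."""
    return (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])

def resolve_by_rate(t, candidates):
    """Attribute an ambiguous event detected at time t to the candidate
    object whose score C_k = (t - t_k,N(k)) * r_k is closest to 1, as in
    expression (14). `candidates` maps an object id to its list of
    recent event timestamps."""
    def score(ts):
        return (t - ts[-1]) * event_rate(ts)
    return min(candidates, key=lambda k: abs(score(candidates[k]) - 1.0))
```

An object firing events every millisecond thus claims an ambiguous event arriving one millisecond after its last one, in preference to a slower object.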
This processing, which takes time constraints into account for removing ambiguities and forms one mode of execution of step 56 of
Another manner for removing the ambiguity in step 56 is to combine spatial and time constraints by making reference to planes of displacement Πk(1), . . . , Πk(j) of the various objects k(1), . . . , k(j) attributable to the event ev(p, t).
In particular, it is possible to retain, for the attribution of the event ev(p, t), the object k which, among k(1), . . . , k(j), minimizes the distance, measured in the three-dimensional space-time representation, between the event ev(p, t) and the planes Πk(1), . . . , Πk(j). The event is then attributed to the object k such that:
where “.” designates the scalar product between two vectors in the three-dimensional space, nk(i) is the vector giving the direction of the normal to the plane Πk(i) and egk(i) is the vector pointing from the point e that marks the detected event ev(p, t) in the three-dimensional space to a point gk(i) of the plane Πk(i).
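Since the attribution formula itself is not reproduced in this passage, the sketch below only illustrates the criterion that the surrounding text describes: in the (x, y, t) representation, the ambiguous event is given to the object whose plane of displacement it is closest to, using the normalized point-to-plane distance |nk(i) · egk(i)| / ∥nk(i)∥. The function names are illustrative.

```python
import numpy as np

def plane_distance(e, g, n):
    """Distance from the event point e = (x, y, t) to the plane defined by
    a point g and a normal direction n: |n . (g - e)| / ||n||."""
    n = np.asarray(n, dtype=float)
    eg = np.asarray(g, dtype=float) - np.asarray(e, dtype=float)  # vector e -> g
    return abs(np.dot(n, eg)) / np.linalg.norm(n)

def resolve_by_plane(e, planes):
    """Attribute the ambiguous event e to the object whose displacement
    plane is nearest in space-time; `planes` maps an object id to its
    plane parameters (g_k, n_k)."""
    return min(planes, key=lambda k: plane_distance(e, *planes[k]))
```

Because each plane Πk encodes both the position and the recent velocity of object k, this criterion mixes spatial and temporal coherence in a single distance.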
This processing, which combines space and time constraints for removing ambiguities and forms another embodiment of step 56 of
When the asynchronous sensor 10 is in motion, events are also generated by the fixed background of the scene.
One way to filter out the events originating from the background of the scene then consists in estimating the statistical distribution of the distances between the plane of displacement Πk of the tracked object (estimated in step 47 of
For example, the interval Ik is centered on the value of the mean distance dk, and its width is a multiple of the standard deviation σk.
In order to take into account the possible motion of the background, the step 44 of
The processing described in reference to
Numerical experiments were conducted in order to assess the performance of the method disclosed hereinabove. These experiments are reported in the following examples.
The experiment was conducted in the case of a star drawn on a rotating disk, as indicated hereinabove in reference to
The disk was rotated at a speed of 670 revolutions per minute. The pattern H giving the shape of the model G was generated manually by selecting 6 points per edge of the star from a snapshot. A distance threshold of 6 pixels was used to eliminate the impact of noise and to reduce the calculation burden. As shown by
In order to evaluate the precision of the shape tracking, the mean distance between the model set and the locations of the active events is calculated every 200 μs. The mean errors are respectively 2.43, 1.83 and 0.86 pixels for the curves 85, 86 and 87, with respective standard deviations of 0.11, 0.19 and 0.20 pixels. Taking into account the asynchronous signal of the sensor allows for a notable improvement of the shape tracking method, especially in the case of
The superior time precision leads to more precise tracking. The error curve shows oscillations (inset in
The number of points retained for the model G has an influence on the cost and the precision of the calculations.
In the case of an embodiment according to
In the example, the tracking program was executed on a computer provided with an “Intel Core i5” central processing unit (CPU) clocked at 2.8 GHz, occupying 25% of the capacity of this CPU. In this configuration, it appeared that a model size of 90 points can provide a detection frequency corresponding to an equivalent image rate of 200 kHz. Up to about 2000 points in the model, the latter can be updated with an equivalent image rate of at least 11 kHz. The experiments showed that for a model of 60 to 70 points, the algorithm is able to track in real time a shape displaced at speeds of up to 1250 rpm.
Generally, there is an interest in including the corners of the contour of the object in the model. Along a straight edge of the shape of the object, it is possible however to reduce the number of points without negatively influencing the final precision of the tracking.
When the number of points increases, the tracking error does not tend to zero, but to a value of about 0.84 pixels, linked to the spatial resolution limit of the asynchronous sensor. Logically, the more points the model contains, the better the precision of the tracking is, but with a higher calculation cost. A size of 60 to 100 points for the model is a good compromise to obtain reasonable precision (around 0.90 pixels) by maintaining a high tracking frequency (around 200 kHz).
The experiment was conducted in the case of several shapes (a H shape, a car shape and a star shape) by taking into account an affine spatial transformation calculated using expressions (6), (7), (11) and (12) in an embodiment according to
The method described hereinabove in reference to
In this experiment, automobile traffic data were acquired with the asynchronous sensor. As shown in
Two shapes 95, 96 corresponding respectively to a car and to a truck were tracked by means of the method shown in
The mean tracking error was 0.86 pixels with a standard deviation of 0.19 pixels for the event-based method according to
It is notable that the superior time precision afforded by the method according to the invention is accompanied by better tracking stability than with the conventional frame-based method. In the conventional method, the (expensive) solution consisting in increasing the acquisition frequency is not always sufficient to correctly process occlusion situations. Conversely, the dynamic content of the event-based signal produced by the asynchronous sensor provides more stable input data for the algorithm. Static obstacles do not generate any events and therefore have practically no impact on the tracking process.
Several strategies for removing ambiguity have been tested for the tracking of multiple objects that can have occlusions.
The shapes of a “car” object and of a “truck” object being displaced in the same direction but with different speeds were tracked simultaneously in an actual scene comprising automobile traffic. For a time, the shapes of the truck and of the car are superimposed in the field of vision of the sensor 10, until the truck passes the car. The objects other than the truck and the car are processed as background noise.
Ambiguities are produced when these curves pass through similar values, meaning that the shapes of the two vehicles overlap in the field of vision (between about 2.2 and 2.9 s). In this case, the use of spatial information can be insufficient, unless the size of the common region is very small.
It is therefore generally preferable to combine the time constraint with additional constraints, for example spatial constraints.
It can be seen in
The “Update all” strategy (
The “Weighted update” strategy distributes the dynamics introduced by the ambiguous events between the various objects with weightings that depend on distances.
The “Time constraint based on rate rk” strategy (
The “Combination of space and time constraints using the plane of displacement Πk” strategy was used with a period Δt of 3 s in order to estimate the planes of displacement Π1 of the “truck” object and Π2 of the “car” object (
Two sequences of an asynchronous signal generated by a mobile sensor were tested by applying the method for removing events generated by the background which was described in reference to
In the first sequence, a star shape was displaced in an interior environment while the asynchronous sensor was held in hand and moved simultaneously.
To evaluate the results, the speed of the star calculated by the method according to
The second sequence comes from an exterior scene with again automobile traffic. A car shape is tracked using an asynchronous visual sensor 10 displaced manually.
Globally, event-based tracking as described hereinabove is robust even when the sensor and the tracked objects are both in motion.
The embodiments described hereinabove are illustrations of this invention. Various modifications can be made to them without leaving the scope of the invention which stems from the annexed claims.
Number | Date | Country | Kind |
---|---|---|---|
14 54003 | Apr 2014 | FR | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/FR2015/051129 | 4/24/2015 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2015/166176 | 11/5/2015 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
7750841 | Oswald | Jul 2010 | B2 |
8320618 | Ikenoue | Nov 2012 | B2 |
8335345 | White | Dec 2012 | B2 |
9213902 | Benosman | Dec 2015 | B2 |
9886768 | Regnier | Feb 2018 | B2 |
9934557 | Ji | Apr 2018 | B2 |
9952323 | Deane | Apr 2018 | B2 |
20070086621 | Aggarwal | Apr 2007 | A1 |
20070248244 | Sato | Oct 2007 | A1 |
20080135731 | Lichtsteiner et al. | Jun 2008 | A1 |
20080204322 | Oswald | Aug 2008 | A1 |
20080219509 | White | Sep 2008 | A1 |
20090296989 | Ramesh | Dec 2009 | A1 |
20100296697 | Ikenoue | Nov 2010 | A1 |
20100322516 | Xu | Dec 2010 | A1 |
20110052002 | Cobb | Mar 2011 | A1 |
20120112038 | Hamoir | May 2012 | A1 |
20130113934 | Hotta | May 2013 | A1 |
20130335595 | Lee | Dec 2013 | A1 |
20140286537 | Seki | Sep 2014 | A1 |
20140363049 | Benosman | Dec 2014 | A1 |
20150285625 | Deane | Oct 2015 | A1 |
20160078321 | Wang | Mar 2016 | A1 |
20160086344 | Regnier | Mar 2016 | A1 |
20160203614 | Wang | Jul 2016 | A1 |
20180017853 | Smits | Jan 2018 | A1 |
20180122085 | Regnier | May 2018 | A1 |
Number | Date | Country |
---|---|---|
2013093378 | Jun 2013 | WO |
Entry |
---|
Lagorce, Xavier; Meyer, Cedric; Ieng, Sio-Hoi; Filliat, David; Benosman, Ryad: “Asynchronous Event-Based Multikernel Algorithm for High-Speed Visual Features Tracking”, IEEE Transactions on Neural Networks and Learning Systems, Aug. 2015, vol. 26, Issue 8, pp. 1710-1720. |
Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad: “Visual Tracking Using Neuromorphic Asynchronous Event-Based Cameras”, Neural Computation, 2015, vol. 27, Issue 4, pp. 925-953. |
Delbruck et al.: “Fast sensory motor control based on event-based hybrid neuromorphic-procedural system”, ISCAS 2007, May 27, 2007 (May 27, 2007), pp. 845-848, XP031181393, DOI: doi:10.1109/ISCAS.2007.378038. |
Benosman R. et al.: “Event-Based Visual Flow”, IEEE Transactions on Neural Networks and Learning Systems, IEEE, Piscataway, NJ, USA, vol. 25, No. 2, Feb. 2014 (Feb. 1, 2014), pp. 407-417, XP011536914, ISSN: 2162-237X, [retrieved on Jan. 10, 2014], DOI: 10.1109/TNNLS.2013.2273537. |
Paul B. et al.: “A Method for Registration of 3-D Shapes”, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Computer Society, USA, vol. 14, No. 2, Feb. 1992 (Feb. 1, 1992), pp. 239-256, XP000248481, ISSN: 0162-8828, DOI: 10.1109/34.121791. |
Posch et al.: “A QVGA 143 dB Dynamic Range Frame-Free PWM Image Sensor With Lossless Pixel-Level Video Compression and Time-Domain CDS”, IEEE Journal of Solid-State Circuits, vol. 46, No. 1, Jan. 2011 (Jan. 1, 2011), pp. 259-275, XP055185382, DOI: doi:10.1109/JSSC.2010.2085952. |
Micha H. et al.: “Iterative Estimation of Rigid Body Transformations—Application to robust object tracking and Iterative Closest Point”, Journal of Mathematical Imaging and Vision, vol. 43, No. 1, 2012, pp. 1-9, XP035038476, DOI: doi:10.1007/s10851-011-0279-x. |
Lichtsteiner et al.: “A 128×128 120 dB 15 us Latency Asynchronous Temporal Contrast Vision Sensor”, IEEE Journal of Solid-State Circuits, vol. 43, No. 2, Feb. 2008 (Feb. 1, 2008), pp. 566-576, XP011200748, DOI: doi:10.1109/JSSC.2007.914337. |
Rusinkiewicz et al.: “Efficient Variants of the ICP Algorithm”, 3rd International Conference on 3D Digital Imaging and Modeling, Jun. 2001 (Jun. 1, 2001), pp. 145-152, XP010542858. |
Delbruck et al.: “Activity-Driven, Event-Based Vision Sensors”, Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS, pp. 2426-2429, XP031724396. |
Zhenjiang Ni et al.: “Asynchronous Event-Based Visual Shape Tracking for Stable Haptic Feedback in Microrobotics”, IEEE Transactions on Robotics, IEEE Service Center, Piscataway, NJ, US, vol. 28, No. 5, Oct. 2012 (Oct. 1, 2012), pp. 1081-1089, XP011474264, ISSN: 1552-3098, DOI: 10.1109/TRO.2012.2198930. |
Delbruck et al. “Frame-free dynamic digital vision”, Proceedings of Intl. Symp. on Secure-Life Electronics, Advanced Electronics for Quality Life and Society, Univ. of Tokyo, Mar. 6-7, 2008, pp. 21-26. |
International Search Report, dated Jul. 6, 2015, from corresponding PCT application. |
Number | Date | Country | |
---|---|---|---|
20170053407 A1 | Feb 2017 | US |