METHOD OF OBJECT TRACKING IN 3D SPACE BASED ON PARTICLE FILTER USING ACOUSTIC SENSORS

Information

  • Patent Application
  • 20100316233
  • Publication Number
    20100316233
  • Date Filed
    April 04, 2008
  • Date Published
    December 16, 2010
Abstract
There is provided a method of tracking an object in a three-dimensional (3-D) space by using particle filter-based acoustic sensors, the method comprising selecting two planes in the 3-D space; executing two-dimensional (2-D) particle filtering on the two selected planes, respectively; and associating results of the 2-D particle filtering on the respective planes.
Description
TECHNICAL FIELD

The present invention relates to a method of tracking an object in a three-dimensional space by using a particle filter with passive acoustic sensors, and more particularly, to a method of tracking an object in a three-dimensional space which is capable of reducing computational complexity while accurately executing three-dimensional object tracking, by decomposing a three-dimensional particle filter into simple two-dimensional particle filters instead of directly extending a conventional particle filtering algorithm for bearings-only tracking to a three-dimensional space.


BACKGROUND ART

Locating and tracking an object using passive sensors, both indoors and outdoors, has been of great interest in numerous applications. For tracking an object with passive sensors, several approaches based on time-delay estimation (TDE) methods and beamforming methods have been proposed. The TDE method estimates location based on the time delay of arrival of signals at the receivers [1]. The beamforming method uses the frequency-averaged output power of a steered beamformer. Both the TDE method and the beamforming method attempt to determine the current source location using data obtained at the current time only.


Each method transforms the acoustic data into a function which exhibits a peak at the location corresponding to the source in a deterministic way.


However, the estimation accuracy of these methods is sensitive to noise-corrupted signals. To overcome this drawback, a state-space driven approach based on particle filtering was proposed. Particle filtering is an emerging and powerful tool for sequential signal processing, especially for nonlinear and non-Gaussian problems. The previous work on tracking with particle filters was formulated for source localization. It presented a framework with revised TDE-based or beamforming methods using particle filtering, in which the sensors are positioned at specified locations at a constant height to estimate an object's trajectory in two-dimensional (2-D) space. However, in those methods, the extension to three-dimensional space is quite difficult and inflexible: more microphones than the number already positioned are required to generate another 2-D plane in order to extend to 3-D. In addition, mobility of the sensors cannot be supported due to their fixed positions. To overcome the mobility problem, Direction of Arrival (DOA) based bearings-only tracking has been widely used in many applications.


In this paper, we analyze tracking methods based on passive sensors for flexible and accurate 3-D tracking. Tracking in 3-D has previously been addressed by directly extending the 2-D bearings-only tracking problem to 3-D. Instead of directly extending traditional particle filtering algorithms for bearings-only tracking to 3-D space, we propose to decompose the 3-D particle filter into several simpler particle filters designed for 2-D bearings-only tracking problems. The decomposition and the selection of the 2-D particle filters are based on the characterization of the acoustic sensor operation under a noisy environment. As the passive acoustic localizer model, we use the passive acoustic localizer proposed in M. Stanacevic, G. Cauwenberghs, "Micropower Gradient Flow Acoustic Localizer," in Solid-State Circuits Conf. (ESSCIRC03), pp. 69-72, 2003. The acoustic localizer detects two angle components (azimuth angle θ, elevation angle φ) between a sensor and an object. We extend the approach to multiple particle filter fusion for robust performance. We compare the proposed approach with the directly extended bearings-only tracking method using the Cramér-Rao Lower Bound.


DETAILED DESCRIPTION OF THE INVENTION
Technical Problem

The present invention provides a method of tracking an object in a three-dimensional space by using particle filter-based acoustic sensors capable of increasing accuracy while reducing complexity of calculation.


Technical Solution

According to an aspect of the present invention, there is provided a method of tracking an object in a three-dimensional (3-D) space by using particle filter-based acoustic sensors, the method including: selecting two planes in the 3-D space; executing two-dimensional (2-D) particle filtering on the two selected planes, respectively; and associating results of the 2-D particle filtering on the respective planes.


Preferably, in the selecting of the two planes, the two planes may be selected from planes in a 3-D space formed by a single sensor. In this case, the two selected planes may be determined based on an elevation of three planes in the 3-D space with respect to the single sensor.


On the other hand, in the selecting of the two planes, the two planes may be selected from planes in a 3-D space formed by each of a plurality of sensors. In this case, the two selected planes may be determined based on an azimuth and an elevation of the planes in the 3-D space with respect to each of the plurality of sensors.


Preferably, the selecting of the two planes may be executed by using independent k-multiple sensors. On the other hand, the selecting of the two planes may be executed by using common-resampling k-multiple sensors. Alternatively, the selecting of the two planes may be executed by using merged k-multiple sensors.


Preferably, the associating of the results of the 2-D particle filtering on the respective planes may be performed by regarding the weights with respect to the same factors in two different planes as equal. On the other hand, the associating of the results of the 2-D particle filtering on the respective planes may be performed by adding the weights of each of the same factors in two different planes to each other.


Advantageous Effects

In the method of tracking an object in a three-dimensional space by using particle filter-based acoustic sensors according to an embodiment of the present invention, an object can be accurately tracked in a three-dimensional space while reducing computational complexity, by decomposing a three-dimensional particle filter into several simple two-dimensional particle filters.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates conversion of originally measured angles;



FIGS. 2 and 3 illustrate angle variances in projected yz and zx planes, respectively;



FIG. 4 illustrates a case where xy and yz planes are selected in projection plane selection according to an embodiment of the present invention;



FIG. 5 illustrates a tracking deviation on a yz plane where a combining method according to an embodiment of the present invention is not used;



FIG. 6 illustrates object tracking based on the combining method;



FIG. 7 illustrates a cone-shaped likelihood function for three-dimensionally distributed particle weights;



FIG. 8 illustrates radial error estimation with one angle;



FIG. 9 illustrates coordinate systems with respect to global coordinates, that is, primary sensor coordinate systems;



FIG. 10 illustrates two methods for common resampled particles;



FIG. 11 illustrates a process of resampling of multiple sensors;



FIG. 12 illustrates weight calculation in CRMS-II;



FIG. 13 illustrates particle weight calculation in wn(i) (P) by using all values measured by R sensors selected according to an embodiment of the present invention;



FIG. 14 illustrates performance of multiple sensors using two optimized planes selected in IMS;



FIGS. 15 and 16 illustrate low bounds in every direction, respectively;



FIGS. 17 and 18 illustrate cases of using one sensor and using multiple sensors, respectively.





BEST MODE FOR CARRYING OUT THE INVENTION

To fully understand the advantages of the operations of the present invention and the objects achieved by embodiments of the present invention, reference should be made to the attached drawings illustrating preferable embodiments of the present invention and to the contents shown in the drawings. Hereinafter, the preferable embodiments of the present invention will be described in detail with reference to the attached drawings. The same reference numerals shown in each drawing indicate the same elements.


A three dimensional localizer model and its implementation is described in M. Stanacevic, G. Cauwenberghs, “Micropower Gradient Flow acoustic Localizer,”in Solid-State Circuits Conf. (ESSCIRC03), pp. 69-72, 2003. The localizer is based on gradient flow to determine the Direction of Arrival (DOA) of the acoustic source.



FIG. 1 illustrates the angle conversion process. Based on the two measured angles, azimuth θ and elevation φ (0≦θ≦2π, 0≦φ≦π), from the acoustic localizer, three angles for two-dimensional (2-D) planes are derived: θxy, θyz, θzx. Each of these three angles is used for 2-D tracking using a particle filter. For example, θxy is used in the xy plane, and θyz and θzx are used in the yz plane and zx plane, respectively. The angles are defined as












θxy = θ

θyz = tan⁻¹(sec θ tan φ / tan θ) + β

θzx = tan⁻¹(tan φ sec θ) + γ   (1)







where β=0 for (y≧0, z≧0), β=π for (y<0), β=2π for (y≧0, z<0), and γ=0 for (z≧0, x≧0), γ=π for (x<0), γ=2π for (x≧0, z<0).
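As a sketch of the conversion in equation (1), the plane angles can be computed with atan2, which folds the quadrant offsets β and γ in automatically. The direction convention x = cosφ·cosθ, y = cosφ·sinθ, z = sinφ and the function name are assumptions of this illustration, not the patent's implementation:

```python
import math

def project_angles(theta, phi):
    """Project the measured azimuth theta and elevation phi onto the three
    coordinate planes per equation (1).  A unit direction with
    x = cos(phi)cos(theta), y = cos(phi)sin(theta), z = sin(phi) is
    assumed; atan2 plus a modulo supplies the offsets beta and gamma."""
    x = math.cos(phi) * math.cos(theta)
    y = math.cos(phi) * math.sin(theta)
    z = math.sin(phi)
    two_pi = 2.0 * math.pi
    theta_xy = theta % two_pi             # azimuth is unchanged in x-y
    theta_yz = math.atan2(z, y) % two_pi  # tan(theta_yz) = z / y
    theta_zx = math.atan2(z, x) % two_pi  # tan(theta_zx) = z / x
    return theta_xy, theta_yz, theta_zx
```

For angles in the first quadrant the atan2 form agrees term by term with tan⁻¹(secθ tanφ/tanθ) and tan⁻¹(tanφ secθ) in (1).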


For simplicity, we assume the variances σθ² and σφ² of the originally measured angles θ and φ are identical and denote them by σ². The noise-corrupted measurements θ and φ with variance σ² are conveyed to the projected plane angles θxy, θyz and θzx with variances σxy², σyz², and σzx² as





θxy,n = θ̇xy,n + Enxy,

θyz,n = θ̇yz,n + Enyz,

θzx,n = θ̇zx,n + Enzx   (2)


where θ̇p,n is the true angle in plane p (the x-y, y-z or z-x plane) and Enp is the projected noise with variance σp² in plane p. Note that the original variance σ² is individually represented as σxy², σyz², σzx² through the projection.


Each projected measurement variance is derived from (1). However, the mathematical derivation is difficult to express, since it requires products of variances and the variance of a nonlinear function; moreover, it yields only an approximate solution. FIG. 2 and FIG. 3 represent the projected angle variances in the y-z and z-x planes, obtained empirically with 10,000 trials. FIGS. 2 and 3 illustrate the projected angle variances σyz² and σzx² when σ² of θ and φ are both 1. Note that the projected angle θxy in the x-y plane is the same as the original θ; thus, σxy² is the same as σ².


The projected variances in the y-z and z-x planes change as θ and φ change. In the y-z plane, φ in the range between 45° and 135° results in less variance than the original measurement variance of 1. In addition, as θ approaches 0° or 180°, the variance decreases further. On the other hand, in the z-x plane, the complementary ranges of φ and θ result in less variance than the original measurement variance. Based on these projected measurement variances, we approach an object tracking method in a 3-D space.
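The 10,000-trial experiment behind FIGS. 2 and 3 can be sketched as follows. The zero-mean Gaussian angle noise, the direction convention, and all names are assumptions of this illustration:

```python
import math
import random

def projected_yz_variance(theta_deg, phi_deg, sigma_deg=1.0,
                          trials=10_000, seed=1):
    """Monte-Carlo estimate of the projected y-z plane angle variance
    (degrees squared) when the measured theta and phi carry zero-mean
    Gaussian noise of variance sigma_deg**2, mirroring the 10,000-trial
    experiment behind FIGS. 2 and 3.  The direction convention assumed
    for equation (1) above is used; the function name is illustrative."""
    rng = random.Random(seed)
    vals = []
    for _ in range(trials):
        t = math.radians(theta_deg + rng.gauss(0.0, sigma_deg))
        p = math.radians(phi_deg + rng.gauss(0.0, sigma_deg))
        y = math.cos(p) * math.sin(t)   # projected y component
        z = math.sin(p)                 # projected z component
        vals.append(math.degrees(math.atan2(z, y)))
    mean = sum(vals) / trials
    return sum((v - mean) ** 2 for v in vals) / trials
```

Under this convention the estimate reproduces the qualitative trend stated above: the projected y-z variance drops well below 1 for mid-range φ as θ approaches 0° or 180°, and exceeds 1 outside that region.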


There will be described formulation for three-dimensional space estimation.


Consider an object state vector Xn, which evolves according to






Xn = fn−1(Xn−1) + Qn−1   (3)


where fn is a nonlinear state transition function of the state Xn, and Qn−1 is the non-Gaussian process noise in the time-instant interval between n−1 and n. The measurements of the evolving object state vector are expressed as






Zn = hn(Xn) + En   (4)


where hn is a nonlinear and time-varying function of the object state, and En is the measurement error, referred to as a measurement noise sequence, which is an independent, identically distributed (IID) white noise process. Then, the prediction probability density function (pdf) is obtained as






p(Xn|Z1:n−1)=∫p(Xn|Xn−1)p(Xn−1|Z1:n−1)dXn−1   (5)


where Z1:n represents the sequence of measurements up to time instant n, and p(Xn|Xn−1) is the state transition density with Markov process of order one related to fn(·) and Qn−1 in (3). Note that p(Xn−1|Z1:n−1) is obtained from the previous time-instant n−1, recursively.


For the next time-instant estimation based on Bayes' rule, the posterior pdf involving prediction pdf is obtained as










p(Xn|Z1:n) = p(Zn|Xn) p(Xn|Z1:n−1) / ∫ p(Zn|Xn) p(Xn|Z1:n−1) dXn   (6)







where p(Zn|Xn) is the likelihood or measurement density in (4) related to measurement model hn(·) and noise process En, and the denominator is the normalizing constant. In other words, the measurement Zn is used to modify the prior density (5) to obtain the current posterior density (6).
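The prediction step (5) and the update step (6) can be illustrated on a finite state grid, where the integrals reduce to sums. This toy discretization is for intuition only and is not the patent's filter:

```python
def bayes_recursion(prior, transition, likelihood):
    """One prediction/update cycle of equations (5) and (6) on a finite
    state grid.  `prior` is p(X_{n-1}|Z_{1:n-1}), `transition[i][j]` is
    p(X_n=j|X_{n-1}=i), and `likelihood[j]` is p(Z_n|X_n=j)."""
    n = len(prior)
    # Prediction (5): sum the transition density over the previous posterior.
    predicted = [sum(transition[i][j] * prior[i] for i in range(n))
                 for j in range(n)]
    # Update (6): multiply by the likelihood and normalize.
    unnorm = [likelihood[j] * predicted[j] for j in range(n)]
    norm = sum(unnorm)
    return [u / norm for u in unnorm]
```

With an identity transition and likelihood [0.8, 0.2], a uniform prior is reshaped directly into the normalized likelihood, exactly as (6) prescribes.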


For the consistent denotation of variables, the projected plane angle measurements θxy, θyz, θzx at time-instant n are denoted as Zn(xy), Zn(yz), Zn(zx), and object state vectors in 2-D planes (Xn(xy), Xn(yz), Xn(zx)) and 3-D space (Xn) are defined as






Xn = [xn Vxn yn Vyn zn Vzn]   (7)

Xn(xy) = [xn(xy) Vxn(xy) yn(xy) Vyn(xy)]   (8)

Xn(yz) = [yn(yz) Vyn(yz) zn(yz) Vzn(yz)]   (9)

Xn(zx) = [zn(zx) Vzn(zx) xn(zx) Vxn(zx)]   (10)


where {xn, yn, zn} and {Vxn, Vyn, Vzn} are the true source location and velocity in 3-D Cartesian coordinates, respectively. [xn(xy), yn(xy)] and [Vxn(xy), Vyn(xy)] are the projected true source location and velocity on the x-y plane; the y-z and z-x planes are treated in the same way. Note that xn, xn(xy) and xn(zx) are all different, since xn(xy) and xn(zx) are estimated independently and xn is the final fused value based on xn(xy) and xn(zx); the remaining components are treated similarly. Then, the three posterior pdfs involving the prediction probability density functions p(Xn(xy)|Z1:n(xy)), p(Xn(yz)|Z1:n(yz)) and p(Xn(zx)|Z1:n(zx)) are given as











p(Xn(xy)|Z1:n(xy)) = p(Zn(xy)|Xn(xy)) p(Xn(xy)|Z1:n−1(xy)) / ∫ p(Zn(xy)|Xn(xy)) p(Xn(xy)|Z1:n−1(xy)) dXn(xy)   (11)

p(Xn(yz)|Z1:n(yz)) = p(Zn(yz)|Xn(yz)) p(Xn(yz)|Z1:n−1(yz)) / ∫ p(Zn(yz)|Xn(yz)) p(Xn(yz)|Z1:n−1(yz)) dXn(yz)   (12)

p(Xn(zx)|Z1:n(zx)) = p(Zn(zx)|Xn(zx)) p(Xn(zx)|Z1:n−1(zx)) / ∫ p(Zn(zx)|Xn(zx)) p(Xn(zx)|Z1:n−1(zx)) dXn(zx)   (13)







The objective is to utilize three 2-D estimates from posterior probability density functions and fuse them into a single 3-D estimate.


Equations (11) to (13) are conceptual only; they cannot in general be computed analytically except in special cases such as the linear Gaussian state-space model. In a nonlinear system, the particle filter approximates the posterior distribution using a cloud of particles. Here, sequential importance sampling (SIS) is widely applied to perform nonlinear filtering. The SIS algorithm represents the required posterior pdf by a set of random samples with associated weights and computes estimates based on these samples and weights. From it derives the sequential importance resampling (SIR) particle filter algorithm, which chooses the candidates of the importance density and performs the resampling step at every time instant. The SIR algorithm forms the basis for most of the proposed particle filters. In this paper, we apply the SIR particle filter as a generic particle filtering algorithm for object tracking. Under the assumption that the nonlinear functions fn−1 and hn in (11) to (13) are known, the SIR particle filter method has the advantage that the importance weights are easily evaluated and the importance density can be easily sampled.
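A minimal sketch of one SIR cycle follows, with systematic resampling standing in for the resampling step; the resampling scheme, the function names, and the generic state type are assumptions of this sketch:

```python
import random

def sir_step(particles, likelihood_fn, dynamics_fn, rng=None):
    """One SIR cycle: propagate every particle through the dynamic model,
    weight it with the observation likelihood, then resample with
    replacement in proportion to the weights.  `dynamics_fn` and
    `likelihood_fn` stand in for f_n and p(Z_n|X_n)."""
    rng = rng or random.Random(0)
    propagated = [dynamics_fn(x, rng) for x in particles]
    weights = [likelihood_fn(x) for x in propagated]
    total = sum(weights)
    m = len(propagated)
    # Cumulative distribution of the normalized weights.
    cum, acc = [], 0.0
    for w in weights:
        acc += w / total
        cum.append(acc)
    # Systematic resampling: one uniform draw, m evenly spaced pointers.
    u = rng.random()
    resampled, j = [], 0
    for i in range(m):
        pos = (u + i) / m
        while j < m - 1 and cum[j] < pos:
            j += 1
        resampled.append(propagated[j])
    return resampled
```

Particles with negligible likelihood are dropped by the resampling step, so the surviving cloud concentrates where p(Zn|Xn) is large.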


Several dynamic models have been proposed which aim at time-varying location and velocity. In particular, in bearings-only tracking, a Constant Velocity (CV) model, a Clockwise Coordinated Turn (CT) model, and an Anti-clockwise Coordinated Turn (ACT) model are provided, which are expressed as










Fn(1) = ( 1   Ts   0   0
          0   1    0   0
          0   0    1   Ts
          0   0    0   1 )   (14)

Fn(p) = ( 1   sin(k(p)Ts)/k(p)       0   −(1−cos(k(p)Ts))/k(p)
          0   cos(k(p)Ts)            0   −sin(k(p)Ts)
          0   (1−cos(k(p)Ts))/k(p)   1   sin(k(p)Ts)/k(p)
          0   sin(k(p)Ts)            0   cos(k(p)Ts) )   (15)







where p = 2, 3 and k(p) is called the mode-conditioned turning rate, expressed as follows:

k(2) = α / √(Vx,n² + Vy,n²),   k(3) = −α / √(Vx,n² + Vy,n²)


where α is a factor determining the degree of rotation. In this paper, by modifying (14), the Constant Acceleration (CA) model, which also subsumes the CV model, is written as










Fnxy = ( 1   AxTs²/(2Vx,n−1) + Ts   0   0
         0   AxTs/Vx,n−1 + 1        0   0
         0   0   1   AyTs²/(2Vy,n−1) + Ts
         0   0   0   AyTs/Vy,n−1 + 1 )   (16)







where Ax and Ay denote accelerations in the x-y plane. For the other planes, the y-z and z-x planes, Ax and Ay are replaced according to the target state vector. Furthermore, the CA model becomes the CV model when the values of Ax and Ay are zero.
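Equation (16) can be sketched as follows for the state [x, Vx, y, Vy]; `ca_transition` and `apply` are illustrative names:

```python
def ca_transition(ts, ax, ay, vx_prev, vy_prev):
    """Transition matrix of equation (16) for the state [x, Vx, y, Vy].
    With ax = ay = 0 it reduces to the CV model of equation (14)."""
    return [
        [1.0, ax * ts**2 / (2.0 * vx_prev) + ts, 0.0, 0.0],
        [0.0, ax * ts / vx_prev + 1.0,           0.0, 0.0],
        [0.0, 0.0, 1.0, ay * ts**2 / (2.0 * vy_prev) + ts],
        [0.0, 0.0, 0.0, ay * ts / vy_prev + 1.0],
    ]

def apply(matrix, state):
    """Propagate one step: X_n = F X_{n-1}."""
    return [sum(matrix[i][j] * state[j] for j in range(4)) for i in range(4)]
```

Multiplying the first row into the state gives x + VxTs + AxTs²/2 and the second gives Vx + AxTs, i.e. the usual constant-acceleration update, which is why dividing the acceleration terms by the previous velocity inside the matrix is consistent.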


After a dynamic model propagates a previous set of M particles Xn−1(1:M)(P) in each plane P, the new set of particles Xn(1:M)(P) is generated, and an observation likelihood function p(Zn(P)|Xn(1:M)(P)) is formulated as a Gaussian distribution. The observation likelihood function p(Zn(P)|Xn(1:M)(P)) calculates the weights of the generated particles, and the object state vector Xn(P) is estimated through a resampling process in each plane P.


There will be described a method of selecting projected planes for tracking an object in a three-dimensional space, according to an embodiment of the present invention. Hereinafter, the method of selecting projected planes is referred to as a Projected Planes Selection (PPS) method. First, there will be described plane selection and particle generation in the PPS method, and then, there will be described a combining method considering redundancy.


Planes selection and particles generation: Instead of using a particle filter formulation for 3-D directly, the approach is to use at least two out of the three possible 2-D particle filter formulations in order to estimate the 3-D state information. In the PPS method, we choose the two planes with the smallest variance according to FIG. 2 and FIG. 3. Note that the x-y plane is always chosen, because its variance equals that of the originally measured azimuth angle, making it at worst the second-best plane. The remaining plane is selected based on the measured angles. For example, when both angles are between 45° and 135°, the y-z plane is chosen. Otherwise, the z-x plane is chosen.
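The selection rule above can be sketched as follows, with the 45°/135° thresholds hard-coded as stated; the function name is illustrative:

```python
def select_planes(theta_deg, phi_deg):
    """PPS plane pair: the x-y plane is always kept, and y-z is preferred
    over z-x when both measured angles lie between 45 and 135 degrees."""
    if 45.0 < theta_deg < 135.0 and 45.0 < phi_deg < 135.0:
        return ("xy", "yz")
    return ("xy", "zx")
```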


Once the planes are selected, the 2-D particle filters estimate states independently. FIG. 4 illustrates an example where the x-y and y-z planes are chosen (i.e., the projected measurement variance in the y-z plane is less than the variance in the z-x plane according to the originally measured θ and φ). FIG. 4 illustrates the two selected independent 2-D particle filters for the 3-D state vector estimations. While the particle filters in the chosen planes estimate the state vector, the particle filter in the remaining plane waits for selection. When the observed object moves into the range where the projected measurement variance for the remaining plane becomes smaller, the plane selection is changed.


Regardless of which two planes are selected, there is one redundant component that appears in both planes (i.e., the y component appears in the x-y and y-z planes). However, since the two particle filters estimate the states independently, the y components from the two particle filters may differ. As discussed in (7) through (10), the intermediate 2-D target state vectors are [xn(xy), Vxn(xy), yn(xy), Vyn(xy)] from the x-y plane particle filter and [yn(yz), Vyn(yz), zn(yz), Vzn(yz)] from the y-z plane particle filter. The final 3-D target state vector Xn is determined by combining the two estimations.


Redundancy consideration in combining method: We previously stated that the plane selection method always generates redundancy. For example, when planes x-y and y-z are selected, y direction state vectors are obtained from the particle filters from the two planes. There are two ways to combine redundant information for y direction state vectors: the planes weighted combining and the equal weight combining. The equal weight combining method simply takes an average value imposing equal weight to the redundant component y. On the other hand, in the planes weighted combining method, the redundant component is weighted according to the particles weight-sum Σi=1Mwn(i). The value of the particle weight-sum roughly indicates the reliability of the state estimation. The weight-sums of the two filters Σi=1Mwn(i)(P) are weights on each plane filter for the final estimation where P represents the selected planes. Based on each plane weight, planes weighted combining method is considered. In the case of selected planes x-y and y-z planes, the final 3-D target state vector Xn based on the planes weighted combining method is
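The two combining rules for the redundant y component can be sketched as follows; the weight-sum form corresponds to the planes weighted combining and the averaging form to the equal weight combining, and the names are illustrative:

```python
def combine_y(y_xy, wsum_xy, y_yz, wsum_yz, equal_weight=False):
    """Fuse the redundant y estimates from the x-y and y-z plane filters.
    `wsum_xy` and `wsum_yz` are the particle weight-sums of each plane
    filter, taken as the reliability of its estimate."""
    if equal_weight:
        # Equal weight combining: plain average of the two estimates.
        return 0.5 * (y_xy + y_yz)
    # Planes weighted combining: weight each estimate by its weight-sum.
    return (y_xy * wsum_xy + y_yz * wsum_yz) / (wsum_xy + wsum_yz)
```

A plane whose filter has drifted produces a small weight-sum, so its y estimate is discounted rather than averaged in at full strength.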










Xn = Xn(x|xyz) ( 1 0 0 0 0 0
                 0 1 0 0 0 0 )
   + Xn(y|xyz) ( 0 0 1 0 0 0
                 0 0 0 1 0 0 )
   + Xn(z|xyz) ( 0 0 0 0 1 0
                 0 0 0 0 0 1 )   (17)







where Xn(x|xyz), Xn(y|xyz) and Xn(z|xyz) respectively represent the final x, y and z state vectors composing the final 3-D state vector, expressed as












Xn(x|xyz) = Xn(x|xy)   (18)

Xn(y|xyz) = ( Xn(y|xy) Σi=1M wn(i)(xy) + Xn(y|yz) Σi=1M wn(i)(yz) ) / ( Σi=1M wn(i)(xy) + Σi=1M wn(i)(yz) )   (19)

Xn(z|xyz) = Xn(z|yz)   (20)







where Xn(x|xy) represents the x component of the 2-D state vector in the x-y plane. Note that since the equal weight combining method ignores the weight-sum of each plane, the redundant component y in (19) is, under the equal weight combining method, replaced as











Xn(y|xyz) = ( Xn(y|xy) + Xn(y|yz) ) / 2   (21)







Hereinafter, there will be described a comparison between the PPS method according to an embodiment of the present invention and a method of directly extending to three-dimensional object tracking (hereinafter referred to as "the direct three-dimensional method").


Effectiveness of the Planes Weighted Combining Method: Thus far, we have assumed that the nonlinear dynamic transition matrix fn is known. If the dynamic model fn changes to gn in the middle of tracking without any change information reaching the particle filter, the tracking may diverge. Suppose that an object moves with a dynamic model gn in a plane P, but particle filtering uses a dynamic model fn. Then, the true source state vector is gn(Xn−1(P)), while the particle filter formulates the following observation likelihood function:






p(Zn(P)|Xn(1:M)(P)) = p(Zn(P)|fn(X̂n−1(1:M)(P)))   (22)


Since the dynamic models of the object and the particle filter are different, the estimate inevitably diverges:






Xn(P) ≈ fn(Xn−1(1:M)(P)) ≠ gn(Xn−1(P))   (23)


Furthermore, if the unmatched-model state lasts longer, the estimation may not recover even after the models match again. The planes weighted combining method discards the estimation from the plane with a negligible particle weight-sum based on p(Zn(P)|Xn(1:M)(P)), and thus prevents the estimation from deviating.


The equal weight combining and the planes weighted combining methods may have similar tracking performance if all selected plane-particle filters track an object well. However, if one of the two particle filters tracks an object unreliably, the weighted combining method performs better. Table 1 and Table 2 show the comparison between the two combining methods. 1000 particles are used in generating the results.












TABLE 1

         Equal Weight Combining    Weighted Combining
x error  1.7399                    1.7479
y error  0.5641                    0.5589
z error  1.7334                    1.7408
MSE      1.3458                    1.3492

TABLE 2

         Equal Weight Combining    Weighted Combining
x error  1.7541                    1.7263
y error  15.690                    0.5914
z error  1.4675                    1.7063
MSE      6.3037                    1.3413


Table 1 represents the error rate (%) when all plane-particle filters have good tracking accuracy. As shown in the table, the two methods give almost the same result. However, when the particle filter for one of the two planes is unreliable, the planes weighted combining method compensates, as shown in Table 2 (malfunction of a particle filter may be caused by an abrupt moving trajectory, or by radial error, since only angles are tracked, not the distance from a sensor). Note the error rate of the y direction component. The phenomenon is shown in FIG. 5 and FIG. 6 as a simulation result. The selected planes are the x-y and y-z planes, and the y direction estimated state vectors are modified. FIG. 5 represents the tracking deviation in the y-z plane without the combining method. FIG. 6 represents the tracking based on the combining method. Especially in FIG. 6(b), it is shown that the planes weighted combining method enhances the tracking performance by considering the contribution from each plane accordingly.


Comparison with the Direct 3-D Method: The PPS method approximates the estimate of the direct 3-D method, which has a 3-D target state model. FIG. 7 illustrates the direct 3-D method, which has a cone-shaped likelihood for assigning weights to 3-D distributed particles. The PPS method has the same effect as projecting the two-variable observation likelihood function and the 3-D distributed particles onto the two selected planes. Since the planes are selected for small variance according to the given projected measurement variances, PPS is a better estimator than the direct 3-D method. The performance comparison will be evaluated based on the Cramér-Rao Lower Bound (CRLB), which will be described later in detail.


Hereinafter, there will be described a method of tracking an object in a three-dimensional space by using multiple sensors according to an embodiment of the present invention.


The tracking trajectory deviation due to an unexpected change of the object dynamic model in (23) can be partially resolved by the planes weighted combining method. However, a single particle filter is unable to distinguish the radial error illustrated in FIG. 8. For this reason, multiple-sensor-based particle filtering has been introduced for robust tracking even when the measurement of one of the particle filters is severely corrupted. In this section, we present several approaches and sensor selection methods.


First, there will be described a dynamic model and measurement function transformation.


For an additional measured angle obtained from the kth sensor located at (xn(s)k, yn(s)k), the measurement function is revised as










Znk = h(Xnk) + En   (24)





where












h(Xnk) = tan⁻¹( (yn − yn(s)k) / (xn − xn(s)k) )   (25)







The primary sensor, S1, is assumed to be placed at the origin as shown in FIG. 9. FIG. 9 illustrates the relative coordinate systems with respect to the global coordinate system (i.e., the coordinate system of the primary sensor). Each of the coordinate systems must satisfy (25).
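A sketch of the per-sensor measurement function (25), using atan2 so that all four quadrants are handled; the function name is illustrative:

```python
import math

def sensor_bearing(xn, yn, sensor_x, sensor_y):
    """Measurement function h(X_n^k) of equation (25): the bearing of the
    object at (xn, yn) as seen from the k-th sensor at
    (sensor_x, sensor_y), i.e. the angle in that sensor's translated
    coordinate frame."""
    return math.atan2(yn - sensor_y, xn - sensor_x)
```

The primary sensor at the origin reduces this to atan2(yn, xn), so every additional sensor differs only by the translation of its position.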


The object dynamic model is also transformed for each sensor. Since the additional sensors shown in FIG. 9 each have a different coordinate system, the target dynamic models are transformed with respect to the primary sensor coordinate system as











Fn[S2] = ( 1   −AxTs²/(2Vx,n−1) + Ts   0   0
           0   −AxTs/Vx,n−1 + 1        0   0
           0   0   1   AyTs²/(2Vy,n−1) + Ts
           0   0   0   AyTs/Vy,n−1 + 1 )   (26)

Fn[S3] = ( 1   AxTs²/(2Vx,n−1) + Ts   0   0
           0   AxTs/Vx,n−1 + 1        0   0
           0   0   1   −AyTs²/(2Vy,n−1) + Ts
           0   0   0   −AyTs/Vy,n−1 + 1 )   (27)

Fn[S4] = ( 1   −AxTs²/(2Vx,n−1) + Ts   0   0
           0   −AxTs/Vx,n−1 + 1        0   0
           0   0   1   −AyTs²/(2Vy,n−1) + Ts
           0   0   0   −AyTs/Vy,n−1 + 1 )   (28)








F
n



[

S





4

]


=

(



1






-

A
x




T
s
2



2


V

x
,

n
-
1





+

T
s




0


0




0






-

A
x




T
s



V

x
,

n
-
1




+
1



0


0




0


0


1






-

A
y




T
s
2



2


V

y
,

n
-
1





+

T
s






0


0


0






-

A
y




T
s



V

y
,

n
-
1




+
1




)





(
28
)







where Fn[S2], Fn[S3] and Fn[S4] are the transformed dynamic models with respect to each sensor S2, S3 and S4.


There will be described an independent K-multiple sensors (IMS) method according to a first embodiment of the present invention.


We consider tracking with two sensors out of K sensors. The objective is to select the two most effective sensors for tracking. While the approach can be extended to more than two sensors, we limit the discussion to two. First, the best plane (i.e., the one with the lowest measurement variance) from each sensor is selected. Among the selected K planes, the best two planes are again selected based on the measurements θ and φ. The y-z plane is selected when θ is in the range between 45° and 135° and φ is close to 0° or 180°; this selection is based on FIG. 2. In contrast, the z-x plane is selected when θ is in the range between 0° and 45° or between 135° and 180°, and φ is close to 90° or 270°; this selection is based on FIG. 3. Note that in order to avoid selecting two planes that estimate the same components, the second-best planes must be considered so that different planes are selected.


After the planes selection, each particle filter estimates the target state vector independently. For the estimated 3-D state vector in the combining method, similarly to the planes weighted combining method for a single sensor, a nodes weighted combining method is proposed to weight the selected sensor nodes. As the reliability criterion of each sensor node, the particle weight-sums obtained from the two planes are considered, similarly to the single-sensor case.


In the case of selecting y-z plane from sensor U and z-x plane from sensor V, the nodes weight is derived as










WnU = Σi=1M wnU(i)(yz)   (29)

WnV = Σi=1M wnV(i)(zx)   (30)







where w_n^{k(i)}(P) represents the ith particle weight in plane P of the kth sensor. For this planes selection, the final estimated state vector components Xn(x|xyz), Xn(y|xyz) and Xn(z|xyz) are











$$X_n(x\,|\,xyz)=X_n^V(x\,|\,zx)\qquad(31)$$

$$X_n(y\,|\,xyz)=X_n^U(y\,|\,yz)\qquad(32)$$

$$X_n(z\,|\,xyz)=\frac{X_n^U(z\,|\,yz)\,W_n^U+X_n^V(z\,|\,zx)\,W_n^V}{W_n^U+W_n^V}\qquad(33)$$







Finally, (17) is used to obtain the final 3-D state vectors.
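The node-weighted combination of (29) through (33) can be sketched as follows, for the stated case of the y-z plane from sensor U and the z-x plane from sensor V; the dictionary data structures are illustrative conveniences, not part of the specification.

```python
def combine_nodes(est_U, est_V, weights_U, weights_V):
    """Combine 2-D estimates from two sensor nodes, eqs. (29)-(33).

    est_U: dict with 'y' and 'z' estimates from sensor U's y-z plane.
    est_V: dict with 'z' and 'x' estimates from sensor V's z-x plane.
    weights_U / weights_V: per-particle weights from each plane filter.
    """
    W_U = sum(weights_U)  # node reliability of U, eq. (29)
    W_V = sum(weights_V)  # node reliability of V, eq. (30)
    x = est_V["x"]        # x observed only by V's z-x plane, eq. (31)
    y = est_U["y"]        # y observed only by U's y-z plane, eq. (32)
    # z is estimated by both planes: reliability-weighted average, eq. (33)
    z = (est_U["z"] * W_U + est_V["z"] * W_V) / (W_U + W_V)
    return x, y, z

x, y, z = combine_nodes({"y": 2.0, "z": 3.0}, {"z": 5.0, "x": 1.0},
                        [0.5, 0.5], [0.5, 0.5])
print(x, y, z)  # 1.0 2.0 4.0
```

Note how only the shared z component is actually averaged; x and y each come from the single plane that observes them.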


Next, there will be described a common-resampling K-multiple sensors (CRMS) method according to a second embodiment of the present invention.


The Common-Resampling K-Multiple Sensors (CRMS) algorithm employs redundancy within the selected planes (i.e., multiple x-y, y-z and/or z-x planes). Instead of selecting two planes as in the IMS, the CRMS selects the R planes with the lowest observation input variance. In the CRMS algorithm, the multiple sensors depend on each other. FIG. 10 illustrates the two possible methods for obtaining common resampled particles.


CRMS-I: As shown in FIG. 10(a), each sensor generates particles and computes their weights independently, but the common resampled particles are obtained by integrating all particles. The resampling process over multiple sensors is illustrated in FIG. 11, where the sizes of the circles represent the weights of the particles. Through this resampling process, the plane with the larger particle weight-sum contributes more to the final estimation. The final estimated state vector of CRMS-I is computed as in IMS, but with the new common resampled particles.
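A minimal sketch of the common-resampling step in CRMS-I, assuming multinomial resampling over the pooled particle set; the pooling and normalization details are our reading of FIG. 11, not an exact transcription of it.

```python
import random

def common_resample(particle_sets, weight_sets, M):
    """Pool particles from all sensors' copies of one plane, then
    resample a common set of M particles (CRMS-I, cf. FIG. 11)."""
    pooled = [p for ps in particle_sets for p in ps]
    weights = [w for ws in weight_sets for w in ws]
    total = sum(weights)
    probs = [w / total for w in weights]
    # multinomial resampling: a plane with a larger weight-sum
    # contributes more particles to the common set
    return random.choices(pooled, weights=probs, k=M)

random.seed(0)
common = common_resample([[1.0, 2.0], [10.0, 20.0]],
                         [[0.45, 0.45], [0.05, 0.05]], M=4)
print(common)
```

With the weights above, the first sensor's plane carries 90% of the total weight, so most of the common particles are drawn from it.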


CRMS-II: FIG. 10(b) illustrates an alternative method for generating the common resampled particles. In this method, the particles are generated independently, but the weight computation is performed on all particles at the same time. After the common particle weights are calculated, the resampling process operates in the same way as in CRMS-I. CRMS-II associates the particles of each sensor node at the weight computation, whereas CRMS-I associates them at the resampling process; the rest of the estimation process is the same as in CRMS-I. The CRMS-II weight calculation is illustrated in FIG. 12. Each cluster of particles from the R copies of the same plane is gathered, and the particle weights wn(i)(P) are calculated from the angles measured by the R selected sensors. The particle weights are computed R times, yielding R×M particle weights, as follows:
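The R-fold weight computation of (34) can be sketched as a product of per-sensor likelihoods. The Gaussian bearing likelihood used here is an illustrative choice of p(Zn|Xn), not one mandated by the specification.

```python
import math

def crms2_weight(particle_bearing_preds, measured_bearings, sigma):
    """Weight of one particle given R sensors' bearings, cf. eq. (34).

    particle_bearing_preds: bearing each of the R sensors would observe
                            if the particle were the true state.
    measured_bearings:      the R actually measured bearings.
    sigma:                  bearing measurement standard deviation.
    """
    w = 1.0
    for pred, meas in zip(particle_bearing_preds, measured_bearings):
        # per-sensor likelihood p(Z_n | X_n), Gaussian assumption
        w *= math.exp(-0.5 * ((meas - pred) / sigma) ** 2)
    return w

w_close = crms2_weight([30.0, 60.0], [30.5, 59.5], sigma=3.0)
w_far = crms2_weight([30.0, 60.0], [45.0, 75.0], sigma=3.0)
print(w_close > w_far)  # True
```

A particle consistent with all R measured angles keeps a weight near one, while a particle inconsistent with any sensor is suppressed multiplicatively.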











$$w_n^{(i)}(P)=\prod_{k=1}^{R}w_n^{k(i)}(P)\qquad(34)$$







where w_n^{k(i)}(P) is given below; here p(Zn|Xn) is the likelihood function used to assign the particle weights.











$$w_n^{k(i)}(P)=p\Big(Z_n\,\Big|\,\bigcup_{k=1}^{R}X_n^{k(i)}(P)\Big)\qquad(35)$$







Next, there will be described a One Merged K-Multiple Sensors (OMMS) method according to a third embodiment of the present invention.


The OMMS algorithm also utilizes more than two selected planes. All received angles from the R selected sensors are considered directly; hence, the sensors are associated at the very first stage (i.e., the particle generation stage). All selected particle clusters are gathered in each plane, which then acts as a single sensor node. The difference between OMMS and the single-sensor PF is the number of measured angles. The particle weight calculation is illustrated in FIG. 13, where wn(i)(P) is calculated from all measured angles of the R selected sensors. This weight computation is similar to the CRMS-II weight calculation except for the number of particles. The OMMS particle weight is expressed in (36). The rest of the estimation process is the same as with a single sensor node.











$$w_n^{(i)}(P)=\prod_{k=1}^{R}w_n^{k(i)}(P)\qquad(36)$$







Hereinafter, there will be described sensor nodes association and complexity of the three-dimensional object tracking methods using multiple sensors according to an embodiment of the present invention.


Sensor nodes association: The four multiple-sensors algorithms are classified according to the timing of sensor node association, as shown in Table III. The IMS algorithm implements particle filtering with entirely independent multiple sensor nodes, except for the final state vectors, which are combined by the nodes weighted combining method. The other algorithms, CRMS-I, CRMS-II and OMMS, associate the sensor node data or particles at a different particle filtering step each. Note that a fusion center (FC) estimates the state vector after the sensor nodes are associated.














TABLE 3

                        IMS    CRMS-I    CRMS-II    OMMS
Particles generation     X       X          X
Weight computation       X       X
Resampling               X

(X: step performed independently at each sensor node; the first blank entry in each column marks the step at which the sensor nodes are associated.)

Complexity: In general, algorithm IMS requires 3R 2-D particle filters in total, since a particle filter is implemented independently for each of the three planes of each of the R selected sensors. However, when the scheduling method using only the two best planes is applied, only two 2-D particle filters are required, the same as in the single-sensor PPS method.


Algorithm CRMS requires 3R 2-D particle filters, since each of the R selected sensors generates particles in its three respective planes. In both variants, the R×M particles are resampled down to M particles in each plane. In addition, in CRMS-II, the R×M particles are gathered in each plane before the weights are determined.


Algorithm OMMS requires only two or three 2-D particle filters in total; thus, an FC can simply manage the particle filters with few planes. Except for the R-fold weight computation compared with single-sensor estimation, the entire estimation process has the same complexity as single-sensor estimation. Therefore, the OMMS algorithm has the lowest overall complexity among the proposed algorithms.


Next, there will be described a performance analysis of the three-dimensional object tracking method according to an embodiment of the present invention. The performance analysis uses the Cramer-Rao Lower Bound (CRLB): first the general CRLB is described, and then the analysis based on it. The CRLB has been widely used as a reference in evaluating approximate solutions, as it represents the best achievable performance by identifying the lower bound. The bound expression is derived assuming that the process noise Qn is zero, for both the constant velocity (CV) model and the constant acceleration (CA) model. The error covariance matrix Cn of an unbiased estimator {circumflex over (X)}n of the state vector is bounded by






$$C_n=E\big[(\hat{X}_n-X_n)(\hat{X}_n-X_n)^T\big]\qquad(37)$$


where E is the expectation operator. The CRLB expression is obtained from the inverse of the information matrix, which is defined as






$$J_n=E\big[\nabla_{X_n}\log p(X_n|Z_n)\big]\big[\nabla_{X_n}\log p(X_n|Z_n)\big]^T\qquad(38)$$


where Jn is the information matrix, ∇Xn denotes the gradient operator with respect to the state vector Xn, and the diagonal elements of Jn⁻¹ represent the lower bound.


In the absence of process noise, the evolution of the state vector is deterministic, resulting in






$$J_{n+1}=\big[F_n^{-1}\big]^T J_n F_n^{-1}+H_{n+1}^T R_{n+1}^{-1} H_{n+1}\qquad(39)$$


where Fn is the transition matrix representing CV or CA as shown in (16), Rn+1 is the covariance matrix of the bearing measurement variance σθ², and Hn+1 is the gradient of the measurement function hn+1. Hn+1, referred to as the Jacobian of hn+1, is given by
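The recursion (39) is straightforward to evaluate numerically. The sketch below runs it for a 1-D constant-velocity state with an assumed position-only measurement; all concrete values (Ts, the measurement variance, the identity initial covariance) are illustrative, not taken from the specification.

```python
# CRLB information-matrix recursion of eq. (39):
#   J_{n+1} = [F^{-1}]^T J_n F^{-1} + H^T R^{-1} H
def mul(A, B):  # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):  # 2x2 matrix inverse
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

Ts, var = 0.1, 0.09                    # sample interval, measurement variance
F = [[1.0, Ts], [0.0, 1.0]]            # 1-D CV transition matrix, cf. (16)
HtRH = [[1.0 / var, 0.0], [0.0, 0.0]]  # H^T R^{-1} H for H = [1, 0]
J = [[1.0, 0.0], [0.0, 1.0]]           # J_1 = C_1^{-1}, eq. (41)
Fi = inv(F)
for _ in range(50):
    J = mul(mul(transpose(Fi), J), Fi)  # propagate: [F^{-1}]^T J F^{-1}
    J = [[J[i][j] + HtRH[i][j] for j in range(2)] for i in range(2)]

crlb_pos = inv(J)[0][0]  # a diagonal element of J^{-1} is the bound
print(crlb_pos)
```

As measurements accumulate, the information matrix grows and the position bound shrinks well below the single-measurement variance.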






$$H_{n+1}=\big[\nabla_{X_{n+1}}h_{n+1}^T(X_{n+1})\big]^T\qquad(40)$$


In addition, J1 for the initial bound is derived from the initial covariance matrix of the state estimate as follows:






$$J_1=C_1^{-1}\qquad(41)$$


where C1 for an x-y plane is given by (42); the same form applies to the other planes, such as the y-z and z-x planes.










$$C_1=\begin{pmatrix}\sigma_x^2 & 0 & \sigma_{xy} & 0\\ 0 & \sigma_{V_x}^2+\sigma_{a_x}^2 T_s^2 & 0 & 0\\ \sigma_{xy} & 0 & \sigma_y^2 & 0\\ 0 & 0 & 0 & \sigma_{V_y}^2+\sigma_{a_y}^2 T_s^2\end{pmatrix}\qquad(42)$$







The initial covariance matrix in (42) is given by the prior knowledge of the target, namely the ranges of its initial position, speed and acceleration: initial target range N(r̄, σr²) with bearing N(θ̄, σθ²), and initial speed and acceleration N(s̄, σs²) and N(ᾱ, σα²), where r̄, θ̄, s̄ and ᾱ are the mean values of the initial distance, angle, speed and acceleration, and σr², σθ², σs² and σα² are the corresponding variances.


Through the conversion from polar to Cartesian coordinates, σx², σy² and σxy are derived as follows:





$$\sigma_x^2=\bar r^2\sigma_\theta^2\sin^2\bar\theta+\sigma_r^2\cos^2\bar\theta\qquad(43)$$

$$\sigma_y^2=\bar r^2\sigma_\theta^2\cos^2\bar\theta+\sigma_r^2\sin^2\bar\theta\qquad(44)$$

$$\sigma_{xy}=(\sigma_r^2-\bar r^2\sigma_\theta^2)\sin\bar\theta\cos\bar\theta\qquad(45)$$
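Equations (43) through (45) can be checked numerically. The sketch below converts an assumed range/bearing prior into Cartesian variances; the numeric prior values are illustrative.

```python
import math

def polar_to_cartesian_var(r_mean, theta_mean, var_r, var_theta):
    """Cartesian initial (co)variances from a polar prior, eqs. (43)-(45)."""
    s, c = math.sin(theta_mean), math.cos(theta_mean)
    var_x = r_mean**2 * var_theta * s**2 + var_r * c**2   # (43)
    var_y = r_mean**2 * var_theta * c**2 + var_r * s**2   # (44)
    cov_xy = (var_r - r_mean**2 * var_theta) * s * c      # (45)
    return var_x, var_y, cov_xy

vx, vy, cxy = polar_to_cartesian_var(10.0, math.radians(45.0), 0.25, 0.01)
print(vx, vy, cxy)  # at 45 degrees the x and y variances coincide
```

The negative cross-covariance at 45° reflects that, when the bearing uncertainty dominates the range uncertainty, errors in x and y are anticorrelated along the arc.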


Hereinafter, there will be described CRLB on the PPS method for a single sensor according to an embodiment of the present invention.


In the projection method for 3-D construction, the three information matrices in (39) are generated one per plane. Here, for clarity of notation, we put the plane type as a superscript on the information matrix Jn, so that Jnp stands for Jnxy, Jnyz or Jnzx; the transition matrix, measurement variance and Jacobian of hn are likewise denoted Fnp, Rnp and Hnp. In the following, the dynamic model is assumed to be CV along the x-axis and CA with accelerations Ay and Az along the y- and z-axes. Based on (16), the transition matrices Fnp are derived as










$$F_n^{xy}=\begin{pmatrix}1 & T_s & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & \dfrac{A_y T_s^2/2}{V_{y,n-1}}+T_s\\ 0 & 0 & 0 & \dfrac{A_y T_s}{V_{y,n-1}}+1\end{pmatrix}\qquad(46)$$

$$F_n^{yz}=\begin{pmatrix}1 & \dfrac{A_y T_s^2/2}{V_{y,n-1}}+T_s & 0 & 0\\ 0 & \dfrac{A_y T_s}{V_{y,n-1}}+1 & 0 & 0\\ 0 & 0 & 1 & \dfrac{A_z T_s^2/2}{V_{z,n-1}}+T_s\\ 0 & 0 & 0 & \dfrac{A_z T_s}{V_{z,n-1}}+1\end{pmatrix}\qquad(47)$$

$$F_n^{zx}=\begin{pmatrix}1 & \dfrac{A_z T_s^2/2}{V_{z,n-1}}+T_s & 0 & 0\\ 0 & \dfrac{A_z T_s}{V_{z,n-1}}+1 & 0 & 0\\ 0 & 0 & 1 & T_s\\ 0 & 0 & 0 & 1\end{pmatrix}\qquad(48)$$







The covariance matrix of the measurement variance, Rnp, is σp², the variance of the bearing measurement in the projected plane P (a 1×1 matrix for a single bearing). Here, the measurement variance must be considered in order to enhance estimation performance. As shown for the projected measurement variance in FIGS. 2 and 3, the raw bearings θ and φ are projected onto the three planes with different angle variances depending on the object position. Therefore, plane selection based on small angle variance is expected to increase the estimation accuracy.


In the last stage, the Jacobian Hnp is derived as













$$H_n^{xy}=\big[\nabla_{X_{n+1}(x\text{-}y)}h_{n+1}^T(X_{n+1}(x\text{-}y))\big]^T=\begin{pmatrix}\dfrac{\partial}{\partial x}\tan^{-1}\!\Big(\dfrac{y}{x}\Big) & 0 & \dfrac{\partial}{\partial y}\tan^{-1}\!\Big(\dfrac{y}{x}\Big) & 0\end{pmatrix}=\begin{pmatrix}\dfrac{-y}{x^2+y^2} & 0 & \dfrac{x}{x^2+y^2} & 0\end{pmatrix}\qquad(49)$$










$$H_n^{yz}=\big[\nabla_{X_{n+1}(y\text{-}z)}h_{n+1}^T(X_{n+1}(y\text{-}z))\big]^T=\begin{pmatrix}\dfrac{\partial}{\partial y}\tan^{-1}\!\Big(\dfrac{z}{y}\Big) & 0 & \dfrac{\partial}{\partial z}\tan^{-1}\!\Big(\dfrac{z}{y}\Big) & 0\end{pmatrix}=\begin{pmatrix}\dfrac{-z}{y^2+z^2} & 0 & \dfrac{y}{y^2+z^2} & 0\end{pmatrix}\qquad(50)$$










$$H_n^{zx}=\big[\nabla_{X_{n+1}(z\text{-}x)}h_{n+1}^T(X_{n+1}(z\text{-}x))\big]^T=\begin{pmatrix}\dfrac{\partial}{\partial z}\tan^{-1}\!\Big(\dfrac{x}{z}\Big) & 0 & \dfrac{\partial}{\partial x}\tan^{-1}\!\Big(\dfrac{x}{z}\Big) & 0\end{pmatrix}=\begin{pmatrix}\dfrac{-x}{x^2+z^2} & 0 & \dfrac{z}{x^2+z^2} & 0\end{pmatrix}\qquad(51)$$







Next, there will be described CRLB analysis in a direct 3-D method using a single sensor.


In the direct 3-D method, the information matrix Jn is a 6×6 matrix. Note that the transition matrix, measurement variance and Jacobian of hn for the 3-D state vector carry no plane superscript, in contrast to the 2-D projection method. The lower bound is obtained directly from (39) by extending the 2-D state-vector based matrices. Here, the transition matrix is expressed as










$$F_n=\begin{pmatrix}1 & T_s & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & \dfrac{A_y T_s^2/2}{V_{y,n-1}}+T_s & 0 & 0\\ 0 & 0 & 0 & \dfrac{A_y T_s}{V_{y,n-1}}+1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & \dfrac{A_z T_s^2/2}{V_{z,n-1}}+T_s\\ 0 & 0 & 0 & 0 & 0 & \dfrac{A_z T_s}{V_{z,n-1}}+1\end{pmatrix}\qquad(52)$$







In this method, the measured bearings vector [θ, φ]T is given with variances σθ² and σφ². We note that two-bearings tracking extends simply to multiple-sensors tracking. For the 3-D state vector estimation, only a single sensor physically detects the bearings; however, the bearings measurement can be interpreted as two different sensors detecting one angle each at the same place. Thus, the measurement error covariance Rn and the Jacobian Hn+1 are expressed as in the multiple-sensors case:










$$R_n=\begin{pmatrix}\sigma_\theta^2 & 0\\ 0 & \sigma_\varphi^2\end{pmatrix}\qquad(53)$$








$$H_n=\big[\nabla_{X_{n+1}}\big[\,h_{n+1}^{(1)}(X_{n+1})\;\;h_{n+1}^{(2)}(X_{n+1})\,\big]\big]^T\qquad(54)$$

$$H_n=\begin{pmatrix}\dfrac{\partial h^{(1)}}{\partial x_{n+1}} & \dfrac{\partial h^{(1)}}{\partial V_{x,n+1}} & \dfrac{\partial h^{(1)}}{\partial y_{n+1}} & \dfrac{\partial h^{(1)}}{\partial V_{y,n+1}} & \dfrac{\partial h^{(1)}}{\partial z_{n+1}} & \dfrac{\partial h^{(1)}}{\partial V_{z,n+1}}\\[6pt] \dfrac{\partial h^{(2)}}{\partial x_{n+1}} & \dfrac{\partial h^{(2)}}{\partial V_{x,n+1}} & \dfrac{\partial h^{(2)}}{\partial y_{n+1}} & \dfrac{\partial h^{(2)}}{\partial V_{y,n+1}} & \dfrac{\partial h^{(2)}}{\partial z_{n+1}} & \dfrac{\partial h^{(2)}}{\partial V_{z,n+1}}\end{pmatrix}=\begin{pmatrix}\dfrac{-y}{x^2+y^2} & \dfrac{xz}{(x^2+y^2+z^2)\sqrt{x^2+y^2}}\\[6pt] 0 & 0\\ \dfrac{x}{x^2+y^2} & \dfrac{yz}{(x^2+y^2+z^2)\sqrt{x^2+y^2}}\\[6pt] 0 & 0\\ 0 & \dfrac{-\sqrt{x^2+y^2}}{x^2+y^2+z^2}\\[6pt] 0 & 0\end{pmatrix}^{T}\qquad(55)$$







where hn(1) and hn(2) are the measurement functions of the bearings θ and φ, respectively.
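The entries of (55) can be verified against finite differences. The convention below (θ = tan⁻¹(y/x), φ measured from the z-axis as cos⁻¹(z/r)) is our reading of the measurement model implied by the matrix entries, not a definition given explicitly in this passage.

```python
import math

def bearings(x, y, z):
    theta = math.atan2(y, x)                          # azimuth, h^(1)
    phi = math.acos(z / math.sqrt(x*x + y*y + z*z))   # polar angle, h^(2)
    return theta, phi

def jacobian_row_phi(x, y, z):
    """Analytic partials of phi w.r.t. (x, y, z) from eq. (55)."""
    r2 = x*x + y*y + z*z
    rho = math.sqrt(x*x + y*y)
    return (x*z / (r2 * rho), y*z / (r2 * rho), -rho / r2)

x, y, z, eps = 1.0, 2.0, 3.0, 1e-6
analytic = jacobian_row_phi(x, y, z)
# forward finite differences along each coordinate axis
numeric = tuple((bearings(*(p + eps*d for p, d in zip((x, y, z), e)))[1]
                 - bearings(x, y, z)[1]) / eps
                for e in ((1, 0, 0), (0, 1, 0), (0, 0, 1)))
print(all(abs(a - n) < 1e-4 for a, n in zip(analytic, numeric)))  # True
```

The match confirms the xz and yz terms and the negative √(x²+y²) term in (55) under this angle convention.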


Next, there will be described CRLB analysis in the PPS method for multiple sensors according to an embodiment of the present invention.


For continuous evaluation of the estimation against the lower bound, evaluation with multiple sensors is worth considering under the several proposed fusion methods. However, the fusion methods cannot be applied to the CRLB directly, because the bound depends only on the dynamic model, the measurement function with its error covariance, and the prior knowledge of the initial state vector, in the absence of process noise. Thus, we address the several possible bounds only by formulating the direct 3-D method and our proposed PPS method, the latter offering more choices due to the multiple sensors. Given the possible bounds, we analyze the performance related to the proposed fusion methods indirectly and finally compare with the single-sensor estimation. Note that the direct 3-D method yields only a single bound, while the planes-projection estimation yields several; the reason is explained in this part, and it demonstrates the flexibility of the proposed method, which is an advantage over the direct 3-D method.


In the PPS method with multiple sensors, 6R lower bounds are generally obtained, where R is the number of selected planes. Importantly, all the factors affecting the CRLB, namely Fn, Rn and Hn, are transformed according to each selected sensor. Based on these different factors, we derive the 3R evolving information matrices in general:






$$J_{n+1}^p[k]=\big[F_n^{p\,-1}[k]\big]^T J_n^p[k]\,F_n^{p\,-1}[k]+H_{n+1}^{pT}[k]\,R_{n+1}^{p\,-1}[k]\,H_{n+1}^p[k]\qquad(56)$$


where p denotes a plane (i.e., x-y, y-z or z-x plane) and k denotes a sensor index.


The dynamic models Fnp[k] are transformed with respect to each sensor position as discussed in Section IV-A, where the transformed dynamic models in the view of the sensors are derived in (26) through (28), with k = 1, 2, 3, . . . , R indexing the selected sensors. The 3R dynamic models Fnp[k] are obtained by applying the transformations of (26) through (28) to (46) through (48).


The measurement error covariance Rn+1p−1[k] denotes the variance of the bearing measurement, as explained above. Here the main advantage of using multiple sensors appears: not only is the estimation accuracy increased by multiple measurements taken from different locations, but there is also a wider choice of planes with the smallest bearing variances.


The Jacobians Hn+1pT[k] are extended in the same way as in (54), in which two virtual sensors measure the two bearings. In general, the Jacobian of the measurement function in plane p with R sensors is expressed as






$$H_n^{pT}[k]=\big[\nabla_{X_{n+1}^p}\big[\,h_{n+1}^{p(1)}(X_{n+1}^p)\;\;h_{n+1}^{p(2)}(X_{n+1}^p)\;\cdots\;h_{n+1}^{p(R)}(X_{n+1}^p)\,\big]\big]^T\qquad(57)$$


As an example, in an x-y plane, it is expressed as









$$\begin{pmatrix}\dfrac{\partial h^{(1)}}{\partial x_{n+1}} & \dfrac{\partial h^{(1)}}{\partial V_{x,n+1}} & \dfrac{\partial h^{(1)}}{\partial y_{n+1}} & \dfrac{\partial h^{(1)}}{\partial V_{y,n+1}}\\[6pt] \dfrac{\partial h^{(2)}}{\partial x_{n+1}} & \dfrac{\partial h^{(2)}}{\partial V_{x,n+1}} & \dfrac{\partial h^{(2)}}{\partial y_{n+1}} & \dfrac{\partial h^{(2)}}{\partial V_{y,n+1}}\\ \vdots & \vdots & \vdots & \vdots\\ \dfrac{\partial h^{(R)}}{\partial x_{n+1}} & \dfrac{\partial h^{(R)}}{\partial V_{x,n+1}} & \dfrac{\partial h^{(R)}}{\partial y_{n+1}} & \dfrac{\partial h^{(R)}}{\partial V_{y,n+1}}\end{pmatrix}\qquad(58)$$







Next, there will be described CRLB analysis on a direct 3-D method using multiple sensors.


Similarly to the single-sensor direct 3-D method, the information matrix Jn is a 6×6 matrix. The lower bound recursion is






$$J_{n+1}[k]=\big[F_n^{-1}[k]\big]^T J_n[k]\,F_n^{-1}[k]+H_{n+1}^T[k]\,R_{n+1}^{-1}[k]\,H_{n+1}[k]\qquad(59)$$


The dynamic model Fn[k] is transformed with respect to each sensor position.


Based on the bearings θ1, φ1, θ2, φ2, . . . , θK and φK from the K multiple sensors, the augmented bearings measurement vector is denoted [θ1 φ1 θ2 φ2 . . . θK φK]T, extending the equations for Rn and Hn in (53) and (54) to










$$R_n=\begin{pmatrix}\sigma_{\theta_1}^2 & 0 & 0 & 0 & \cdots & 0 & 0\\ 0 & \sigma_{\varphi_1}^2 & 0 & 0 & \cdots & 0 & 0\\ 0 & 0 & \sigma_{\theta_2}^2 & 0 & \cdots & 0 & 0\\ 0 & 0 & 0 & \sigma_{\varphi_2}^2 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & 0 & \cdots & \sigma_{\theta_R}^2 & 0\\ 0 & 0 & 0 & 0 & \cdots & 0 & \sigma_{\varphi_R}^2\end{pmatrix}\qquad(60)$$










$$H_n^T[k]=\big[\nabla_{X_{n+1}}\big[\,h_{n+1}^{(1)}(X_{n+1})\;\;h_{n+1}^{(2)}(X_{n+1})\;\cdots\;h_{n+1}^{(R)}(X_{n+1})\,\big]\big]^T=\begin{pmatrix}\dfrac{\partial h^{(1)}}{\partial x_{n+1}} & \dfrac{\partial h^{(1)}}{\partial V_{x,n+1}} & \dfrac{\partial h^{(1)}}{\partial y_{n+1}} & \dfrac{\partial h^{(1)}}{\partial V_{y,n+1}} & \dfrac{\partial h^{(1)}}{\partial z_{n+1}} & \dfrac{\partial h^{(1)}}{\partial V_{z,n+1}}\\[6pt] \dfrac{\partial h^{(2)}}{\partial x_{n+1}} & \dfrac{\partial h^{(2)}}{\partial V_{x,n+1}} & \dfrac{\partial h^{(2)}}{\partial y_{n+1}} & \dfrac{\partial h^{(2)}}{\partial V_{y,n+1}} & \dfrac{\partial h^{(2)}}{\partial z_{n+1}} & \dfrac{\partial h^{(2)}}{\partial V_{z,n+1}}\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ \dfrac{\partial h^{(R)}}{\partial x_{n+1}} & \dfrac{\partial h^{(R)}}{\partial V_{x,n+1}} & \dfrac{\partial h^{(R)}}{\partial y_{n+1}} & \dfrac{\partial h^{(R)}}{\partial V_{y,n+1}} & \dfrac{\partial h^{(R)}}{\partial z_{n+1}} & \dfrac{\partial h^{(R)}}{\partial V_{z,n+1}}\end{pmatrix}\qquad(61)$$







The embodiments of the present invention have been described above. Simulation results and their analysis will now be described.


In this section, the performance of the PPS method is demonstrated in comparison with the direct 3-D method in several scenarios. Scenarios 1 and 2 show single-sensor planes selection according to φ. Scenario 3 shows the planes selection changing from the x-y and y-z planes to the x-y and z-x planes according to φ. Scenario 4 shows multiple-sensors planes and sensor selection according to θ and φ.


Scenario 1: An object moves with φ in the range between 45° and 64°. A single sensor is placed at the origin (0, 0, 0). The initial position of the object is (1 m, 1 m, 3 m) with initial velocity (1 m/s, 1 m/s, 1 m/s). The sensor measures the two angles θ and φ at intervals of 0.1 second, each with measurement variance 3. The observed object moves with CV in the x direction and CA in the y and z directions, with 0.1 m/s² and 0.5 m/s², respectively. Since φ is in the range between 45° and 64°, the x-y and y-z planes are selected.


Scenario 2: An object moves with φ in the range between 24° and 32°. As in scenario 1, a single sensor is placed at the origin (0, 0, 0) with the same initial velocity and movement: CV in the x direction, CA in the y and z directions with 0.1 m/s² and 0.5 m/s², respectively. The initial position of the object is (2 m, 1 m, 1 m). Since φ is in the range between 24° and 32°, the x-y and z-x planes are selected.


Scenario 3: An object moves with φ in the range between 40° and 48°, crossing 45°. As in scenarios 1 and 2, a single sensor is placed at the origin (0, 0, 0) with the same initial velocity and movement: CV in the x direction, CA in the y and z directions with 1 m/s² and 0.5 m/s², respectively. The initial position of the object is (2 m, 1 m, 2.5 m). Since φ during the first 13 time-instants is in the range between 45° and 48°, the x-y and y-z planes are selected. In the last 37 time-instants, the x-y and z-x planes are selected, since φ is then in the range between 40° and 45°.


Scenario 4: An object moves exactly as in scenario 3. Here, three sensors are placed at (0, 0, 0) (sensor 1), (10, 0, 0) (sensor 2) and (10, 10, 10) (sensor 3). The measured angle φ differs per sensor, as shown in FIG. 14. Based on the PPS method, the scheduling presented in Section IV-B is possible with the multiple sensors. We show the multiple-sensors performance using only the two best selected planes of IMS, since the other multiple-sensors algorithms focus on the fusion strategy through node association. During the first 13 time-instants, the y-z plane from sensor 1 and the z-x plane from sensor 3 are selected. After time-instant 13, φ at all three sensors leads to selecting a z-x plane. Hence, from time-instants 14 to 27, the z-x plane from sensor 3 is selected, where the φ of that sensor is close to 0, resulting in a small measurement variance; the additional selection is any x-y plane, which is insensitive to the projection. Finally, from time-instants 28 to 50, the z-x plane from sensor 2 is selected together with any x-y plane. Note that the planes selection is based on the projected variance characteristic illustrated in FIGS. 2 and 3.



FIGS. 15 and 16 represent the lower bound in each direction. The selection of the y-z plane with the x-y plane in FIG. 15, and of the z-x plane with the x-y plane in FIG. 16, shows good performance, indicating that the PPS method is a good estimator. Note that all bounds are presented for comparison with the other planes selections. In addition, single-sensor and multiple-sensors based estimations are compared in FIGS. 17 and 18, which use the same scenario except for the number of sensors. In particular, the multiple-sensors estimation uses the scheduling method to find the best two planes among the three sensors. Since the multiple sensors support a broader choice of planes, the performance is better than that of the single-sensor estimation.


As described above, exemplary embodiments have been shown and described. Though specific terms are used herein, they are just used for describing the present invention but do not limit the meanings and the scope of the present invention disclosed in the claims. Therefore, it would be appreciated by those skilled in the art that changes may be made to these embodiments without departing from the principles and spirit of the invention. Accordingly, the technical scope of the present invention is defined by the claims and their equivalents.


INDUSTRIAL APPLICABILITY

The present invention may be applied to the field of 3-D object tracking.

Claims
  • 1. A method of tracking an object in a three-dimensional (3-D) space by using particle filter-based acoustic sensors, the method comprising: selecting two planes in the 3-D space;executing two-dimensional (2-D) particle filtering on the two selected planes, respectively; andassociating results of the 2-D particle filtering on the respective planes.
  • 2. The method of claim 1, wherein, in the selecting two planes, the two planes are selected from planes in a 3-D space formed by a single sensor.
  • 3. The method of claim 2, wherein the two selected planes are determined based on an elevation of three planes in the 3-D space with respect to the single sensor.
  • 4. The method of claim 1, wherein, in the selecting two planes, the two planes are selected from planes in a 3-D space formed by each of a plurality of sensors.
  • 5. The method of claim 4, wherein the two selected planes are determined based on an azimuth and an elevation of the planes in the 3-D space with respect to the each of the plurality of sensors.
  • 6. The method of claim 4, wherein the selecting two planes is executed by using independent k-multiple sensors.
  • 7. The method of claim 4, wherein the selecting two planes is executed by using common-resampling k-multiple sensors.
  • 8. The method of claim 4, wherein the selecting two planes is executed by using merged k-multiple sensors.
  • 9. The method of claim 1, wherein the associating results of the 2-D particle filtering on the respective planes is performed regarding weights as the same with respect to the same factors in two different planes.
  • 10. The method of claim 1, wherein the associating results of the 2-D particle filtering on the respective planes is performed by adding a weight of each of the same factors in two different planes to each other.
Priority Claims (1)
Number Date Country Kind
10-2008-0017936 Feb 2008 KR national
PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/KR08/01916 4/4/2008 WO 00 8/20/2010