The invention pertains to monitoring and particularly to camera-based monitoring. More particularly, the invention pertains to tracking objects across networks of cameras.
The invention includes camera networks for tracking objects across various fields-of-view and a processor for performing the tracking of objects within each camera's field-of-view.
FIGS. 15a and 15b show an example of background subtraction of a static scene;
FIGS. 16a and 16b show an example of background subtraction of a scene involving a non-static object;
FIGS. 18a and 18b are graphs showing an effect of background subtraction from a scene having a target object;
FIGS. 24a and 24b reveal subtraction of background for a selected image patch;
FIGS. 26a and 27a show selected frames from the series of frames in
FIGS. 26b and 27b show a target patch of the frame in
FIGS. 29a, 29b, 30a and 30b show image examples for evaluation of a multi-resolution histogram; and
FIGS. 31a and 31b show images having a histogram and particles shown as rectangles of a tracking task.
Effective use of camera-based monitoring and surveillance systems may require continuous (i.e., temporal) tracking of objects across networks of cameras with overlapping and/or non-overlapping fields-of-view (FOVs). Practical considerations, especially in systems used for object tracking in large areas, may limit the number of cameras that can be deployed. Furthermore, in order to maximize the coverage area of useful tracking, the cameras may be positioned with non-overlapping FOVs. Additionally, strict security requirements favor surveillance systems that can report the location of the object being tracked (e.g., a person at an airport) at all times. The present system may address the issue of continuous object tracking across a network of cameras with or without overlapping fields-of-view.
The present system may incorporate a Bayesian methodology known as a sequential Monte Carlo (SMC) approach. An SMC approach may provide a solution to the problem of image-based tracking through statistical sampling. As a result, this tracking approach may cope with scenarios in which object tracking lasts for as long as the object remains in the FOV of a camera, stops while the object is outside of the FOV, and automatically resumes when the object reappears in the camera's FOV. The present system may use a combination of both color and shape information of the object to be tracked.
Tracking across two or more cameras may be achieved as follows. Tracking may be initiated within a first camera, either manually via a user input or automatically, and may last while the object being tracked is within the first camera's FOV. Object information may be simultaneously communicated to other cameras which are in the topological proximity of the first camera. Tracking tasks in the other cameras may be initiated and put into a mode as if the object had disappeared from the other cameras' FOVs, waiting to resume tracking when the object appears again in their FOVs.
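The hand-off just described can be sketched as follows; the class and method names (CameraNode, start_tracking, expect) are hypothetical, since the text does not specify the system's interfaces.

```python
# Hypothetical sketch of the cross-camera hand-off described above.
from dataclasses import dataclass, field

@dataclass
class CameraNode:
    name: str
    neighbors: list = field(default_factory=list)  # topological proximity
    mode: str = "idle"            # "tracking", "waiting", or "idle"
    target: object = None         # color/shape representation of the object

    def start_tracking(self, target):
        # Begin tracking within this camera's FOV.
        self.mode = "tracking"
        self.target = target
        # Condition the topological neighbors to expect the object.
        for cam in self.neighbors:
            cam.expect(target)

    def expect(self, target):
        # Tracking is spawned in "object disappeared" mode, resuming
        # automatically when the object enters this camera's FOV.
        if self.mode != "tracking":
            self.mode = "waiting"
            self.target = target

cam_a, cam_b = CameraNode("gate"), CameraNode("corridor")
cam_a.neighbors.append(cam_b)
cam_a.start_tracking({"color_hist": [0.2, 0.5, 0.3]})
```

In this sketch, conditioning a neighbor simply copies the object representation forward, mirroring the spawning of a new SMC tracking task described in the text.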
To implement and use the present system, a list of cameras may be arranged according to the potential travels or routes that people or moving objects of interest follow during their typical course of moving activity. Based on the camera arrangement, a notion of topological proximity may thus be ascribed. One or more computers may be deployed for processing camera images. Computing resources per camera may be allocated in a predefined or adaptive way. A tracking task of a moving object may be initiated by a single click within an object's silhouette. A motion detection procedure may be used to derive the color and shape representation of a moving object. If the object is not moving, a rectangle that encompasses the object of interest may be used. A thread implementing SMC tracking may begin within a camera's FOV. As the object moves towards the camera's image boundaries in a particular direction, the camera(s) in the topological neighborhood may be conditioned to expect an arrival of the object being tracked. Camera conditioning may mean that another SMC tracking is spawned using a representation of the object provided by the previous tracking, and so on.
The system may use a combination of color and shape for an object representation. The specific object representation may be embedded on an SMC framework for tracking. Topological arrangement of cameras covering a large area may be based on the potential routes or paths of moving objects. Computing resources allocated to the cameras may be based on a Quality of Service (QoS) concept which is derived from the topological proximity among network cameras.
The user interface 11 may have inputs from a Safeskies™ user interface sub-module 21 and a Safeskies™ test user interface sub-module 22, as shown in
User interface sub-module 22 may be used for quickly testing different modules of system 10. Sub-module 22 may be utilized for exposing the system's capability while minimizing the processing overhead. User interface sub-module 21 may be implemented when the modules are ready and debugged, using a plug-in framework.
The image processor module 13 may have a background subtractor sub-module 23 and a particle filter sub-module 24 connected to it as shown in
The manipulator module 14 may implement an appearance model of the object, such as color and multi-resolution histograms.
The manager module 12 may have a threads sub-module 28 that may implement multiple threads associated with every camera node that composes the camera network, as shown in
The DirectX module 15 may be an interface for connecting to digital cameras. The MFC 6.0 module 16 is for interfacing with certain Microsoft™ software.
The manager module 12 may have a connection to image processor module 13, which in turn has a connection to a MIL 7 module 18. Manager module 12 may include a camera selector for a network of cameras covering a given area. These cameras might or might not have overlapping FOVs. Module 18 is an interface that enables a direct connection with cameras. Image processor module 13 is also connected to manipulator module 14.
Manager module 12 is connected to a video capture module 17. Video capture module 17 may have a video grabber sub-module 31 which facilitates grabbing of image frames for processing. It is for common camera hook-ups. Module 17 may have a MIL grabber 32 which supports the MIL system for analog cameras. Image frames may be captured either by frame grabbers (such as MIL grabbers) or digitally via a USB or FireWire connection. Additionally, the sub-modules of module 17 may facilitate processing of video clips or composite image frames, such as quad video coming from four cameras. A DS video grabber sub-module 34 may be a part of module 17. Sub-module 34 may be a DirectShow connection for a digital interface, in that it will permit the capturing of images from digital media. There may be a quad grabber sub-module 33.
The tracking algorithm of the present system 10 may use histograms. One feature representation of the object may be a color histogram of the object. The color histogram may be computed efficiently and achieve significant image data reduction. These histograms, however, may provide only low-level semantic image information. To further improve tracking capabilities of an object or target, a multi-resolution histogram may be added to obtain texture and shape information of the object. The multi-resolution histogram may be a composite histogram of an image patch (or particle) at multiple resolutions. For example, to compute a multi-resolution histogram of an image patch, a multi-resolution decomposition of the image patch may first be obtained. Image resolution may be decreased with Gaussian filtering. The image patch at each resolution l may give a different histogram hl. A multi-resolution histogram H may then be constructed by concatenating the intensity histograms at the different resolutions, H=[h0, h1, h2, . . . , hJ-1].
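The construction above can be sketched minimally as follows; a 2x2 block average stands in for the Gaussian filtering mentioned in the text, and the bin count and number of levels are illustrative.

```python
# Sketch of a multi-resolution histogram H = [h0, h1, ..., h_{J-1}].

def histogram(img, bins=4, max_val=256):
    # Normalized intensity histogram of a 2-D image patch.
    h = [0] * bins
    for row in img:
        for v in row:
            h[v * bins // max_val] += 1
    total = sum(h)
    return [c / total for c in h]

def downsample(img):
    # Halve the resolution by averaging 2x2 blocks
    # (a stand-in for proper Gaussian filtering).
    return [[(img[i][j] + img[i][j + 1] + img[i + 1][j] + img[i + 1][j + 1]) // 4
             for j in range(0, len(img[0]) - 1, 2)]
            for i in range(0, len(img) - 1, 2)]

def multi_resolution_histogram(img, levels=3, bins=4):
    # Concatenate the histograms taken at successively coarser resolutions.
    H = []
    for _ in range(levels):
        H.extend(histogram(img, bins))
        img = downsample(img)
    return H
```

With 3 levels and 4 bins, the concatenated vector H has 12 entries, and each per-level block sums to 1.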
Multi-resolution histograms may add robustness to noise, rotation, intensity, resolution and scale when tracking an object. These properties may make the multi-resolution histogram a very powerful representation tool for modeling the appearance of people in tracking.
One may note the performance of multi-resolution histograms of a person's body parts (i.e., upper and lower body and head). The same person viewed by a camera at different positions, orientations, scales and illuminations is shown in
From a theoretical point of view, the difference histograms may relate to the generalized Fisher information measures, as described in the following formulas.
where I is the intensity image and I(x) is the intensity value at pixel x; G(l) is a Gaussian filter at resolution l, and I*G(l) is the image filtered at resolution l;
is the difference histogram between consecutive image resolutions; vj is the value of histogram density j, and Jq(I) is the generalized Fisher information, which is proportional to the difference histogram.
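The stated relation can be sketched as follows; the particular integral form of Jq(I) written here is one common definition of the generalized Fisher information and is an assumption, since the formulas themselves do not appear in the surrounding text.

```latex
% Sketch: the difference histogram (the rate of change of histogram
% density j across resolution l) is proportional to a generalized
% Fisher information of the image.
\frac{\mathrm{d}h_j(l)}{\mathrm{d}l} \;\propto\; J_q(I),
\qquad
J_q(I) \;=\; \int \left\lVert \frac{\nabla I(x)}{I(x)} \right\rVert^{2} I(x)^{\,q}\,\mathrm{d}x .
```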
One may continue to build on the representation methodology by discussing the matching process for determining the similarity and ultimately the match between two different object representations. Two steps in particle filter tracking may be, first, a prediction step (that predicts the change of state of the target, i.e., position and size of the target object); and, second, a measurement step, i.e., image measurements that facilitate the process of building confidence about the prediction regarding the target at hand. The object representation and related matching algorithm are essential items of the measurement approach.
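The two-step loop above can be sketched in bare-bones form; the state layout (position and scale) and the placeholder likelihood are assumptions, with the real measurement step given by the histogram matching developed in the text.

```python
# Sketch of the predict/measure cycle of particle filter tracking.
# State of each particle: [x, y, scale].
import random

def predict(particles, sigma=2.0):
    # Prediction step: diffuse each particle's state with a random walk.
    return [[s + random.gauss(0.0, sigma) for s in p] for p in particles]

def measure(particles, weight_fn):
    # Measurement step: weight each predicted particle by image evidence,
    # then normalize the weights to sum to one.
    w = [weight_fn(p) for p in particles]
    total = sum(w)
    return [x / total for x in w]

particles = [[10.0, 20.0, 1.0] for _ in range(100)]
particles = predict(particles)
weights = measure(particles, lambda p: 1.0)  # placeholder likelihood
```

In a full tracker the placeholder likelihood would be replaced by the histogram-matching measurement of the target representation.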
A matching algorithm may first use color information to form object representations. An object representation may be a weighted histogram built using the following formula.
qu=CΣif(xi)δ[b(xi)−u] (1)
The candidate object representation may be given by a similar weighted color histogram as shown in the following formula.
pu(st(n))=CΣif(xi)δ[b(xi)−u] (2)
In target representation (1) and candidate object representation (2), C is a normalization factor, and f(·) may be a kernel function that gives more weight to the center of the region. The matching algorithm may then use a distance function based on the following Bhattacharyya coefficient (3).
m(pu(St(n)),qu) (3)
The smaller the distance between the target model and candidate region, the more similar the regions are. The relationship of the matching distance function in the measurement step in the particle filtering tracking algorithm may be given by the following formula (4),
πt(n)=p(Zt|xt=st(n))=m(pu(St(n)),qu) (4),
where (st(n), πt(n)), n=1, . . . , N, is the weighted sample set and N represents the number of particles used for tracking. Particles may be used to approximate the probability distribution of the target state that the tracking algorithm predicts. A visualization of the respective probability density function is shown in
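Under the definitions in formulas (1) through (4), the weighting and matching may be sketched as follows; the bin mapping b(·), the Epanechnikov-style kernel f(·), and the sample pixel values are illustrative assumptions.

```python
# Sketch of the kernel-weighted histogram and Bhattacharyya matching.
import math

def weighted_histogram(pixels, bins=8, max_val=256):
    # q_u = C * sum_i f(x_i) * delta[b(x_i) - u], where f(.) weights
    # pixels near the patch center more heavily.
    n = len(pixels)
    h = [0.0] * bins
    for i, v in enumerate(pixels):
        r = abs(i - n / 2) / (n / 2)   # normalized distance from center
        f = max(0.0, 1.0 - r * r)      # kernel weight
        h[v * bins // max_val] += f    # b(x_i): uniform bin mapping
    c = sum(h) or 1.0                  # normalization factor C
    return [x / c for x in h]

def bhattacharyya_distance(p, q):
    # m(p, q) = sqrt(1 - sum_u sqrt(p_u * q_u));
    # smaller distance means more similar regions, as in formula (3).
    bc = sum(math.sqrt(pu * qu) for pu, qu in zip(p, q))
    return math.sqrt(max(0.0, 1.0 - bc))

# A particle's weight in the measurement step, per formula (4), is
# derived from this match between candidate and target histograms.
target = weighted_histogram([10, 20, 200, 210, 30])
candidate = weighted_histogram([12, 22, 205, 215, 28])
dissimilar = weighted_histogram([100, 110, 120, 130, 140])
```

A candidate region drawn from pixels close in value to the target scores a smaller distance than a dissimilar region.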
For target representation, one may use a multi-resolution histogram rather than the scheme described in formulae (1) and (2) above. A multi-resolution histogram may include color, shape and texture information in one compact form.
Background subtraction may be shown in
Consider, for example, the histograms hA=(1; 0; 0), hB=(0; 1; 0), and hC=(0; 0; 1). Histogram hA may be more similar to histogram hB, than histogram hA is to histogram hC. The L1 distance, however, between hA and hB is the same as the L1 distance between hA and hC. That is,
|hA−hB|1=|hA−hC|1.
Therefore, the histograms in their original form do not necessarily represent the fact that hA is more similar to hB than hA is to hC. The corresponding cumulative histograms are hcum_A=(1; 1; 1), hcum_B=(0; 1; 1), and hcum_C=(0; 0; 1). The distances between the cumulative histograms satisfy:
|hcum_A−hcum_B|1 < |hcum_A−hcum_C|1,
as one may expect.
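The arithmetic above can be checked directly:

```python
# Numeric check of the example: the L1 distance on the raw histograms
# cannot tell that hA is closer to hB than to hC, while the L1 distance
# on the cumulative histograms can.

def l1(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def cumulative(h):
    out, run = [], 0
    for v in h:
        run += v
        out.append(run)
    return out

hA, hB, hC = [1, 0, 0], [0, 1, 0], [0, 0, 1]
assert l1(hA, hB) == l1(hA, hC) == 2                # raw histograms: a tie
assert l1(cumulative(hA), cumulative(hB)) == 1      # cumulative: hB closer
assert l1(cumulative(hA), cumulative(hC)) == 2
```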
Matching image regions may be important during tracking (although matching is not equivalent to tracking). In a particle filter framework, the matching performance may greatly affect the measurement step.
FIGS. 29a, 29b, 30a and 30b represent a scenario of evaluating performance of a multi-resolution histogram representation on prerecorded video sequences that may represent increasingly complex tracking scenarios. FIG. 29a shows a target patch 103 on a person 104 in a relatively populated area, such as an airport. A color histogram may be taken of the patch 103.
In the present specification, some of the matter may be of a hypothetical or prophetic nature although stated in another manner or tense.
Although the invention has been described with respect to at least one illustrative example, many variations and modifications will become apparent to those skilled in the art upon reading the present specification. It is therefore the intention that the appended claims be interpreted as broadly as possible in view of the prior art to include all such variations and modifications.
Number | Date | Country
---|---|---
20060285723 A1 | Dec 2006 | US