The present application is based on, and claims priority from, British Application Number 0522225.2, filed Oct. 31, 2005, the disclosure of which is hereby incorporated by reference herein in its entirety.
The present invention relates to a method of tracking and/or deleting an object in a video stream, and particularly although not exclusively to a method capable of operating on a real-time video stream in conjunction with a sub-real-time object detector.
In recent years, algorithms designed to detect faces or other objects within a video stream have become much more efficient, to the extent that some are now capable of operating in real-time or near real-time when run on a powerful platform such as a PC. However, there is now an increasing demand for face and object detection to be provided on low powered platforms such as hand-held organisers, mobile telephones, still digital cameras, and digital camcorders. These platforms are typically not sufficiently high powered to allow real-time operation using some of the better and more robust face/object detectors. There is accordingly a need to speed up object detection/tracking.
There have of course been many advances in recent years designed to speed up object detection. Most of these operate on a frame-by-frame basis. That is, speedup is achieved by designing a faster frame-wise detector. In some cases, detectors are specifically designed for operation on a video stream, in which case some amount of historical information may be propagated to the current frame of interest. This may be done for reasons of speedup, robustness, or sometimes both.
Some examples of object detection and tracking algorithms designed specifically for a video stream are described below. Note that each of these methods presumes the existence of a face/object detector that can operate on a single frame. The detector is generally assumed to give accurate results that may be enhanced or verified by historical data.
US Patent US20040186816 A1.—This is an example of a combined detection/classification algorithm, utilized for mouth tracking in this case. The inventors use a face detector initially to locate the face and mouth, then track the mouth using a linear Kalman filter, with the mouth location and state verified by a mouth detector in each frame. If the mouth is lost in any frame, the face detector is re-run and the mouth location re-initialized.
Keith Anderson & Peter McOwan. “Robust real-time face tracker for cluttered environments”, Computer Vision and Image Understanding 95 (2004), pp 184-200.—The authors describe a face detection and tracking system that uses a number of different methods to determine a probability map for face locations in an initial frame of the video sequence. This probability map is then updated frame by frame using the same detection methods, so that in any given frame the recent history is included in the probability map. This has the effect of making the system more robust.
R. Choudhury Verma, C. Schmid, K. Mikolajczyk, “Face Detection and Tracking in a Video by Propagating Detection Probabilities”, IEEE Trans on Pattern Analysis and Machine Intelligence, Vol. 25, No. 10 pp 1215-1228, 2003.—The authors describe a face detection and tracking system similar to the previous two mentioned. Faces are detected in each frame, and a probability map of face locations in the video stream is updated using the CONDENSATION algorithm. This algorithm is described in Isard & Blake, “Condensation—Conditional Density Propagation for Video Tracking”, Int. J. Computer Vision, Vol. 29, 1998, pp 5-28.
According to a first aspect of the present invention there is provided a method of tracking an object in a video stream comprising a plurality of video frames, the method comprising:

(a) running an object detector at a plurality of sampling locations, the locations defining a first grid spaced across a first frame, and recording a hit at each location where an object of interest is found; and

(b) running the object detector at a further plurality of sampling locations defining a second grid spaced across a second frame, the second grid being offset from the first grid, and running the detector in addition at one or more further locations on the second frame derived from the or each location on the first frame at which a hit was recorded.

The invention further extends to a computer program for operating such a method, and to a computer readable medium bearing such a computer program.

According to a second aspect of the invention there is provided an apparatus for tracking an object in a video stream comprising a plurality of video frames, the apparatus including an object detector comprising a programmed computer for carrying out steps (a) and (b) above.
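By way of illustration only, steps (a) and (b) might be sketched as follows in Python (the frame objects, the single-location routine `detect_object`, the fine-grid spacing `step` and the particular diagonal offset are assumptions made for the sketch, not limitations of the invention):

```python
def two_pass_tracking(frame1, frame2, detect_object, step):
    """Illustrative sketch of steps (a) and (b): scan a coarse grid across the
    first frame and record hits, then scan an offset grid across the second
    frame together with the locations carried over from those hits."""
    height, width = frame1.shape[:2]              # assumed frame geometry

    # Step (a): first grid, anchored at (0, 0) with spacing 2*step; record a
    # hit at each location where the object of interest is found.
    hits = [(i, j)
            for i in range(0, height, 2 * step)
            for j in range(0, width, 2 * step)
            if detect_object(frame1, i, j)]

    # Step (b): second grid, offset by (step, step) from the first, plus the
    # further locations derived from the hits recorded on the first frame.
    locations = [(i, j)
                 for i in range(step, height, 2 * step)
                 for j in range(step, width, 2 * step)]
    locations.extend(hits)
    return [(i, j) for (i, j) in locations if detect_object(frame2, i, j)]
```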
A particular feature of the present invention is that it may be used in conjunction with a variety of standard and well-understood face or object detection algorithms, including algorithms that operate sub-real-time.
Staggered sampling grids may easily be integrated into many existing detection and tracking systems, allowing a significant additional speedup in detection/tracking for a very small computational overhead. Since the preferred method applies only to the run-time operation, there is no need to retrain existing detectors, and well-understood conventional object/face detectors may continue to be used.
It has been found that in some applications the use of a staggered grid may actually outperform the conventional fine-grid (one-pass) approach, both for false negatives and for false positives. This is believed to be because the use of a local search, in some embodiments, allows attention to be directed at locations which do not occur even on a fine grid, thereby reducing the false negative rate. In addition, a coarse sampling grid is likely to locate fewer false positives, which are typically fairly brittle (that is, they occur only in specific locations), and those that are found are unlikely to be successfully propagated.
The invention may be carried into practice in a number of ways, and one specific embodiment will now be described, by way of example, with reference to the accompanying Figures.
In the present embodiment we wish to attempt real-time or near real-time face detection/tracking on a video stream, but using a face detector/tracker which operates only in sub-real-time.
Any convenient face or object detection/tracking algorithm may be used, including the following: Verma, Schmid & Mikolajczyk, "Face Detection and Tracking in a Video by Propagating Detection Probabilities", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 25, No. 10, October 2003, pp 1215-1228; Anderson & McOwan, "Robust real-time face tracker for cluttered environments", Computer Vision and Image Understanding, 95 (2004), pp 184-200; and Isard & Blake (op. cit.).
In a practical embodiment, the face detector may actually operate at a plurality of different scales and may attempt to find a face at a variety of different sizes/resolutions to the right of and below the nominal starting position 16. Thus, the dotted rectangle 14, within which the face 12 is located, may be of a variety of differing sizes depending upon the details of the image being analysed and the details of the face detector. For the purpose of simplicity, however, the following description will assume that we are interested in detecting faces or other objects at a single resolution only. It will of course be understood that the method generalises trivially to operate at multiple resolutions.
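To picture that generalisation, the single-location detection call may simply be wrapped in a loop over candidate window sizes. The sketch below is illustrative only; the base window size, the scale factors and the `detect_object_at_size` routine are assumptions rather than part of the embodiment:

```python
def detect_multiscale(frame, i, j, detect_object_at_size,
                      base_size=24, scales=(1.0, 1.5, 2.25)):
    """Run the single-location detector at several window sizes anchored at
    (i, j). The base window size and scale factors are illustrative only."""
    hits = []
    for s in scales:
        size = int(round(base_size * s))               # candidate object size at this scale
        if detect_object_at_size(frame, i, j, size):   # hypothetical detector call
            hits.append((i, j, size))
    return hits
```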
If the face detector were to be capable of operating sufficiently rapidly, we could simply define a fine grid across the image, and run the face detector at every point on the grid, frame by frame. However, robust and reliable face detectors are computationally intensive, and it may not be possible for the detector to keep up with the incoming video stream if the detector is called at each and every point on a fine grid.
In the present embodiment, the detector is called not at every point of the fine grid but only at the points of a coarser grid, each cell of which covers a 2×2 block of fine-grid cells, as shown by the cells annotated with the numeral 1 in
Once the first frame has been analysed, as shown in
At any pass, if a face is located, the location and scale/size of the face is propagated to the next frame in order to assist detection and/or tracking of the face in that new frame.
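By way of example only, one simple form of such propagation (the one-cell neighbourhood radius and the hit representation used here are assumptions made for the sketch) is to add a small cluster of extra sampling locations around each previous detection, at the previously found scale, to the locations examined on the next frame:

```python
def propagated_locations(previous_hits, step):
    """For each (i, j, size) hit recorded on the preceding frame, generate a
    small neighbourhood of extra sampling locations, at the same scale, to be
    examined on the next frame in addition to that frame's own grid.
    The one-fine-grid-cell radius used here is illustrative only."""
    extra = []
    for (i, j, size) in previous_hits:
        for di in (-step, 0, step):
            for dj in (-step, 0, step):
                extra.append((i + di, j + dj, size))
    return extra
```

The extra locations would simply be appended to the staggered-grid locations before the detector is run on the new frame.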
In the example shown, the first pass of
In the third pass, shown in
The propagation of information from one frame to a subsequent frame may take a variety of forms, including any or all of the following:
Preferably, the method is applied to consecutive frames within the video stream, but given a sufficiently high frame rate the algorithm will still operate even if some frames (for example every other frame) are dropped.
It will of course be understood that the method may equally well be applied using coarse grids having a size other than 2×2, based upon the size of the fine grid which ultimately has to be covered.
If the desired sampling resolution (the cell size of the fine grid) is given by the variable “step” then a staggered algorithm based on a sampling resolution of twice that size may be generated as follows:
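(A minimal sketch, in Python, is given below; the frame dimensions `height` and `width` and the helper `next_frame` are assumptions standing in for the details of the video stream, while `detect_object(i, j)` is the single-location detection procedure described in the following paragraph.)

```python
def staggered_scan(height, width, step, detect_object, next_frame):
    """Cover a fine grid of spacing `step` in four passes, one pass per frame,
    each pass sampling a coarse grid of spacing 2*step whose origin is
    staggered so that, over the four passes, every fine-grid location is
    visited exactly once."""
    for n in range(4):                                # stagger index: one pass per frame
        next_frame()                                  # advance to the frame for this pass
        off_i = (n % 2) * step                        # vertical offset of this pass's grid
        off_j = (n // 2) * step                       # horizontal offset of this pass's grid
        for i in range(off_i, height, 2 * step):      # coarse grid rows
            for j in range(off_j, width, 2 * step):   # coarse grid columns
                detect_object(i, j)                   # detector call at this location
```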
This uses a procedure called "detect_object" operating on a particular image location (i, j), the inner two loops representing a coarser sampling grid that is staggered by the index in the outer loop, so that all of the locations in the original finer sampling grid are covered. It may be noted that, apart from a small overhead, this algorithm requires almost no more computational effort than scanning the finer grid in a single pass.
The method is shown, schematically, in
On completion of the steps 52, 54, these two steps may be repeated (again, in either order) for a sequence of subsequent frames, with each respective sampling grid being offset from the grid used on the preceding frame. That is illustrated schematically in
In a practical implementation, the invention may be embodied within some hardware or apparatus, such as a still or video camera 40, shown schematically in
Number | Date | Country | Kind |
---|---|---|---|
0522225.2 | Oct 2005 | GB | national |
Number | Name | Date | Kind |
---|---|---|---|
4692801 | Ninomiya et al. | Sep 1987 | A |
4853968 | Berkin | Aug 1989 | A |
4873573 | Thomas et al. | Oct 1989 | A |
5150207 | Someya | Sep 1992 | A |
5247353 | Cho et al. | Sep 1993 | A |
5359426 | Honda et al. | Oct 1994 | A |
5611000 | Szeliski et al. | Mar 1997 | A |
5617482 | Brusewitz | Apr 1997 | A |
5691769 | Kim | Nov 1997 | A |
5892849 | Chun et al. | Apr 1999 | A |
5910909 | Purcell et al. | Jun 1999 | A |
5917953 | Ausbeck, Jr. | Jun 1999 | A |
6009437 | Jacobs | Dec 1999 | A |
6064749 | Hirota et al. | May 2000 | A |
6215890 | Matsuo et al. | Apr 2001 | B1 |
6278803 | Ohashi | Aug 2001 | B1 |
6289050 | Ohtani et al. | Sep 2001 | B1 |
6456340 | Margulis | Sep 2002 | B1 |
6480615 | Sun et al. | Nov 2002 | B1 |
6546117 | Sun et al. | Apr 2003 | B1 |
6671321 | Ohtani et al. | Dec 2003 | B1 |
6760465 | McVeigh et al. | Jul 2004 | B2 |
6782132 | Fogg | Aug 2004 | B1 |
7133453 | Arita et al. | Nov 2006 | B2 |
7433497 | Chen | Oct 2008 | B2 |
7440587 | Bourdev | Oct 2008 | B1 |
7473495 | Tanaka et al. | Jan 2009 | B2 |
7522748 | Kondo et al. | Apr 2009 | B2 |
7616780 | Bourdev | Nov 2009 | B2 |
7817717 | Malayath et al. | Oct 2010 | B2 |
7885329 | Nakamura et al. | Feb 2011 | B2 |
20020114525 | Bolle et al. | Aug 2002 | A1 |
20020141615 | Mcveigh et al. | Oct 2002 | A1 |
20020159749 | Kobilansky | Oct 2002 | A1 |
20020167537 | Trajkovic | Nov 2002 | A1 |
20030053661 | Magarey | Mar 2003 | A1 |
20030067988 | Kim et al. | Apr 2003 | A1 |
20030198385 | Tanner et al. | Oct 2003 | A1 |
20030227552 | Watanabe | Dec 2003 | A1 |
20040186816 | Lienhart et al. | Sep 2004 | A1 |
20050069037 | Jiang | Mar 2005 | A1 |
20050089196 | Gu et al. | Apr 2005 | A1 |
20050089768 | Tanaka et al. | Apr 2005 | A1 |
20050117804 | Ida et al. | Jun 2005 | A1 |
20050286637 | Nakamura et al. | Dec 2005 | A1 |
20060140446 | Luo et al. | Jun 2006 | A1 |
20070047834 | Connell | Mar 2007 | A1 |
20070097112 | Greig | May 2007 | A1 |
20070279494 | Aman et al. | Dec 2007 | A1 |
Number | Date | Country |
---|---|---|
0955605 | May 1998 | EP |
Number | Date | Country | |
---|---|---|---|
20070097112 A1 | May 2007 | US |