The present application is related to U.S. patent application entitled “Multispectral Detection of Personal Attributes for Video Surveillance,” identified by Ser. No. 12/845,121 and filed concurrently herewith, the disclosure of which is incorporated by reference herein in its entirety.
Additionally, the present application is related to U.S. patent application entitled “Facilitating People Search in Video Surveillance,” identified by Ser. No. 12/845,116, and filed concurrently herewith, the disclosure of which is incorporated by reference herein in its entirety.
Also, the present application is related to U.S. patent application entitled “Semantic Parsing of Objects in Video,” identified by Ser. No. 12/845,095, and filed concurrently herewith, the disclosure of which is incorporated by reference herein in its entirety.
Embodiments of the invention generally relate to information technology, and, more particularly, to video surveillance.
Tracking people across multiple cameras with non-overlapping fields of view poses a challenge. Existing approaches include using face recognition technology or relying on soft-biometric features such as clothing color or person height to identify and track a person over different camera views. However, face recognition techniques are very sensitive to illumination changes, face pose variations, and low-resolution imagery (typical conditions in surveillance scenes). Also, general features like clothing color are subject to ambiguity (for example, two different people may be wearing clothes of the same color). Additionally, the color and appearance of a person can change dramatically from camera to camera due to lighting changes, different camera sensor responses, etc.
Principles and embodiments of the invention provide techniques for attribute-based person tracking across multiple cameras. An exemplary method (which may be computer-implemented) for tracking an individual across two or more cameras, according to one aspect of the invention, can include steps of detecting an image of one or more individuals in each of two or more cameras, tracking each of the one or more individuals in a field of view in each of the two or more cameras, applying a set of one or more attribute detectors to each of the one or more individuals being tracked by the two or more cameras, and using the set of one or more attribute detectors to match an individual tracked in one of the two or more cameras with an individual tracked in one or more other cameras of the two or more cameras.
One or more embodiments of the invention or elements thereof can be implemented in the form of a computer product including a tangible computer readable storage medium with computer useable program code for performing the method steps indicated. Furthermore, one or more embodiments of the invention or elements thereof can be implemented in the form of an apparatus including a memory and at least one processor that is coupled to the memory and operative to perform exemplary method steps.
Yet further, in another aspect, one or more embodiments of the invention or elements thereof can be implemented in the form of means for carrying out one or more of the method steps described herein; the means can include (i) hardware module(s), (ii) software module(s), or (iii) a combination of hardware and software modules; any of (i)-(iii) implement the specific techniques set forth herein, and the software modules are stored in a tangible computer-readable storage medium (or multiple such media).
These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.
Principles of the invention include attribute-based people tracking across multiple cameras (for example, video cameras). One or more embodiments of the invention can be used, for example, for smart video surveillance to note trajectories for each of multiple people in a three-dimensional (3D) space. This can be useful, by way of example, for tracking individuals across cameras, observing common shopper patterns in retail stores, etc.
As described herein, one or more embodiments of the invention can address the issue of, for example, given a tracked person in camera A, and a person in camera B, determining whether they correspond to the same person (so that the person can be unified into a single video track). As such, one or more embodiments of the invention include tracking people across multiple cameras based on fine-grained body parts and attribute detectors.
As detailed herein, images of people can be matched based on a set of fine-grained attributes such as, for example, the presence of a beard, moustache, eyeglasses, sunglasses, or hat, absence of hair (bald people), shape of features (long nose, eye shape, short/long shirt sleeves), color and texture of clothing, etc. These attribute detectors are learned from large amounts of training images, in multiple levels of resolution. Further, one or more embodiments of the invention can also consider non-visual attributes from other sensors such as, for example, odor and temperature to improve the matching process.
By way of example and not limitation, consider a set of surveillance cameras with non-overlapping fields of view. In each camera, people are detected and tracked in the local field of view using standard computer vision algorithms. Now suppose a person moves from a place that is monitored by camera A to another place that is monitored by camera B. One or more embodiments of the invention include assigning a unique “track ID” (track identifier) for the trajectories of the person locally tracked in cameras A and B, so that tracking in a global coordinate system can be performed. In one or more embodiments of the invention, a “track ID” might simply include a number to identify the track, such as, for example, track ID 1, track ID 2, etc.
This implies a need to match a person tracked in one camera with other people tracked in other cameras. In one or more embodiments of the invention, this matching can be performed by taking into account the geometric configuration of the cameras, the time information (that is, the time usually taken by a person to walk from camera A to B), and a set of fine-grained person attributes as described herein.
One or more embodiments of the invention can be implemented in connection with a set of surveillance cameras with non-overlapping fields of view. As described herein, a unique track identifier (ID) can be generated and used for the trajectories of a person locally tracked in cameras. For each person that enters the scene, a “track” is started. The “track” ends when the person leaves the scene (that is, leaves the camera field of view). Each track has a unique track ID to identify the track. Also, by way of example, this can be generated incrementally—track 1, track 2, . . . , track n.
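By way of illustration only, the following is a minimal sketch of how such incrementally generated track IDs and per-camera tracks might be represented in code. The Track structure and the helper functions shown here are illustrative assumptions introduced for clarity, not elements of the embodiments described above.

```python
from dataclasses import dataclass, field
from itertools import count
from typing import List, Tuple

# Illustrative only: track IDs are issued incrementally (track 1, track 2, ..., track n),
# one per person entering a camera's local field of view.
_next_track_id = count(1)

@dataclass
class Track:
    track_id: int
    camera_id: str
    # (frame index, bounding box) pairs recorded while the person stays in view
    frames: List[Tuple[int, Tuple[int, int, int, int]]] = field(default_factory=list)

def start_track(camera_id: str) -> Track:
    """Begin a new local track when a person enters the camera's field of view."""
    return Track(track_id=next(_next_track_id), camera_id=camera_id)

def update_track(track: Track, frame_index: int, bbox: Tuple[int, int, int, int]) -> None:
    """Record the person's location for one more frame; the track ends when the person leaves the scene."""
    track.frames.append((frame_index, bbox))

t1 = start_track("camera A")   # track ID 1
t2 = start_track("camera B")   # track ID 2
```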
Further, one or more embodiments of the invention use a feature for matching a person tracked in one camera with other people (for example, the same person) tracked in other cameras using a set of fine-grained person attributes. Additionally, as further detailed herein, a technique and/or algorithm for matching people across cameras based on a set of fine-grained parts and attributes detectors is used, wherein the detectors are learned from large amounts of training data (for example, using Adaptive Boosting (Adaboost) learning), thus being robust to lighting and viewpoint changes.
In one or more embodiments of the invention, as noted herein, matching people across cameras can also include a matching algorithm using the geometric configuration of the cameras (including time information) and a set of fine-grained person attributes. Also, one or more embodiments of the invention can include using a human parsing process and/or a methodology applied for both tracking and human parsing. Further, a weighted vector distance metric and a threshold can be used, for example, in conjunction with a comparison method to determine if the person in camera A corresponds to the person in camera B.
By way of example and not limitation, consider again a person moving from camera A to camera B. One or more embodiments of the invention could proceed as follows. The person is tracked in camera A, for example, using standard tracking techniques. In addition, a set of fine-grained Adaboost detectors (including, for example, detectors for beard, glasses, hat, baldness, etc.) is applied to each image of the person. By way of example only, techniques for generating such detectors can be found in U.S. patent application Ser. No. 12/845,121 entitled “Multispectral Detection of Personal Attributes for Video Surveillance,” filed concurrently herewith, the disclosure of which is incorporated by reference herein in its entirety. This process can be referred to herein as human parsing.
Using the human parsing process, one or more embodiments of the invention can include obtaining a feature vector Fa=[a1, a2, a3, . . . , an] for the person in camera A, corresponding to the max confidence values of each Adaboost detector. Note that each “track” corresponds to a set of image frames with the location of the person in each frame. For each frame, one or more embodiments of the invention apply the set of attribute detectors (beard, eyeglasses, etc.) and obtain a confidence value for each detector in each frame. As such, each detector will have a set of confidence values (one for each frame of the track). For each detector, one or more embodiments of the invention take the maximum confidence value over the frames (that is, the “max confidence value”) and store it as one element in the Fa vector. Accordingly, the first element of the Fa vector will contain the max confidence value, for example, for beard, the second element, for example, for eyeglasses, and so on.
By way of example and not limitation, consider that a person has been tracked over five frames. All detectors are applied in each one of these five frames, generating confidence values. For example, the beard detector may generate five confidence values [0.2, 0.5, 0.9, 0.2, 0.7]. The “max confidence value” for the beard attribute is therefore 0.9 (the maximum value of the five confidence numbers).
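By way of illustration only, the following sketch shows the max-confidence computation described above. The function name, the detector ordering, and the eyeglasses confidence values are assumptions added for the example; only the beard values come from the text.

```python
from typing import Dict, List

def build_feature_vector(per_frame_confidences: Dict[str, List[float]],
                         detector_order: List[str]) -> List[float]:
    """Fa = [a1, a2, ..., an]: one entry per attribute detector, equal to that
    detector's maximum confidence value over all frames of the track."""
    return [max(per_frame_confidences[name]) for name in detector_order]

# Values from the example above for the beard detector; the eyeglasses values are assumed.
confidences = {
    "beard":      [0.2, 0.5, 0.9, 0.2, 0.7],   # max confidence value = 0.9
    "eyeglasses": [0.1, 0.0, 0.3, 0.2, 0.1],
}
Fa = build_feature_vector(confidences, ["beard", "eyeglasses"])
print(Fa)   # [0.9, 0.3]
```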
The same methodology of tracking and human parsing can be applied when the person is moving within the field of view of camera B, resulting in one or more embodiments of the invention obtaining the feature vector Fb=[b1, b2, . . . , bn].
One or more embodiments of the invention can additionally determine the amount of time taken by the person to leave camera A and appear in camera B (given the geometric arrangement of cameras) to see if it is consistent with an average or customary time calculated previously based on earlier data collected, for example, by camera A and camera B. One or more embodiments of the invention include specifying a range of time periods calculated based on earlier collected data, as well as defining a priori if it is possible for one person to move from one camera to another based on the spatial location of cameras.
If the amount of time is consistent, one or more embodiments of the invention can include computing a weighted vector distance between Fa and Fb and comparing the weighted vector distance to a threshold to see if the person in camera A corresponds to the person in camera B. Both the threshold and the set of weights can be obtained from a standard learning process (for example, using artificial neural networks). The learning process can be seen as a black box to which training data is provided (including input feature vectors and an output variable indicating whether it is the same person or not); the output of the black box includes the weights and the threshold. If there is a match (that is, the person in camera A corresponds to the person in camera B), then both trajectories are unified so that they correspond to a single track.
Also, as noted herein, in one or more embodiments of the invention, the weights wi are obtained offline according to the reliability of each detector. As an example, if a “beard detector” is more reliable than a “hat detector” then the beard detector confidence value will be associated with a larger weight.
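By way of illustration only, the following sketch shows one plausible form of the weighted vector distance and threshold comparison described above. The exact distance form, the weight values, and the threshold are assumptions; the five-attribute feature vectors reuse the illustrative values given later in this description.

```python
def weighted_distance(Fa, Fb, weights):
    """One plausible form of the weighted vector distance: a weighted sum of absolute
    per-attribute differences (a weighted Euclidean distance would work analogously)."""
    return sum(w * abs(a - b) for w, a, b in zip(weights, Fa, Fb))

def same_person(Fa, Fb, weights, threshold):
    """Declare a match when the weighted distance falls below the learned threshold."""
    return weighted_distance(Fa, Fb, weights) < threshold

# Five attributes (beard, bald, eyeglasses, hat, striped shirt), as in the example given
# later in this description; the weights and threshold below are assumed values.
Fa = [0.4, 0.9, 0.0, 0.9, 0.2]
Fb = [0.2, 1.0, 0.0, 0.1, 0.1]
weights = [0.3, 0.2, 0.2, 0.2, 0.1]   # larger weights for more reliable detectors
print(same_person(Fa, Fb, weights, threshold=0.5))   # distance = 0.25, so True here
```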
As detailed herein, the task of a person detection module includes, given a specific video frame, determining whether a person being sought is present and, if so, determining his/her size and position in the image. The task of a person tracking module includes, given that a person was detected at frame N, localizing the same person at frame N+1, N+2, etc., until the person leaves the scene. Also, in one or more embodiments of the invention, a feature vector is created by the human parsing process via applying attribute detectors such as, for example, beard, etc., and taking the max confidence values as explained herein.
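By way of illustration only, the following sketch shows how the person detection and person tracking responsibilities described above might fit together for a single local track. The function names and signatures are assumptions, with detect and track standing in for the detection and tracking modules.

```python
from typing import Callable, Iterable, List, Optional, Tuple

BoundingBox = Tuple[int, int, int, int]   # (x, y, width, height), in pixels

def run_local_track(frames: Iterable,
                    detect: Callable[[object], Optional[BoundingBox]],
                    track: Callable[[object, BoundingBox], Optional[BoundingBox]]) -> List[BoundingBox]:
    """Detection finds the sought person (size and position) in a frame; tracking then
    localizes the same person at frames N+1, N+2, ... until he/she leaves the scene."""
    locations: List[BoundingBox] = []
    last: Optional[BoundingBox] = None
    for frame in frames:
        last = detect(frame) if last is None else track(frame, last)
        if last is None and locations:
            break            # the person left the field of view; the local track ends
        if last is not None:
            locations.append(last)
    return locations
```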
Additionally, if the configuration and timing of the person being surveilled is not consistent with the pre-determined range, the process ends at step 130. If the configuration and timing of the person being surveilled is consistent with the pre-determined range, step 122 includes comparing feature vector 1 with feature vector 2 via use of a matching module, as well as receiving input from a learning module 124 in the form of determinations of weights and thresholds for matching. Step 126 includes determining whether the matching of feature vector 1 and feature vector 2 is acceptable. If not, the process ends at step 130. If so, the tracks are unified at step 128, indicating that the two tracks correspond to the same person (that is, track 1=track 2). Additionally, in one or more embodiments of the invention, the matching score, that is, the distance between the two feature vectors, is compared to a user-defined threshold to determine whether the persons are the same or not.
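By way of illustration only, the following sketch shows one possible way the learning module's "black box" could produce weights and a threshold from labeled training pairs. The description above mentions artificial neural networks as one example; logistic regression is used here instead as a stand-in, and the training values are assumed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training pairs: per-attribute absolute differences |ai - bi| between two tracks,
# labeled 1 if both tracks show the same person and 0 otherwise (values are assumed).
X = np.array([
    [0.1, 0.0, 0.2, 0.1, 0.0],   # same person: small differences
    [0.2, 0.1, 0.0, 0.1, 0.1],
    [0.8, 0.7, 0.9, 0.6, 0.5],   # different people: large differences
    [0.7, 0.9, 0.6, 0.8, 0.7],
])
y = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X, y)

# The negated coefficients play the role of the per-attribute weights wi and the
# intercept plays the role of the threshold, since the model predicts "same person"
# exactly when sum_i wi * |ai - bi| < threshold.
weights = -model.coef_[0]
threshold = model.intercept_[0]
print(weights, threshold)
```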
Step 204 includes tracking each of the one or more individuals in a (for example, local) field of view in each of the two or more cameras. This step can be carried out, for example, using a person tracking module. Tracking each of the individuals in a local field of view in each of the two or more cameras can include using one or more computer vision algorithms.
Step 206 includes applying a set of one or more attribute detectors to each of the one or more individuals being tracked by the two or more cameras. This step can be carried out, for example, using a human parsing module. The set of attribute detectors can include, for example, a set of one or more fine-grained Adaptive Boosting attribute detectors. Further, the attribute detectors can include one or more attribute detectors for presence of a beard, presence of a moustache, presence of eyeglasses, presence of sunglasses, presence of a hat, absence of hair, shape of body features, shape of clothing features, clothing color, clothing texture, etc. Also, the attribute detectors can be learned from a set of training images in one or more levels of resolution.
Step 208 includes using the set of one or more attribute detectors to match an individual tracked in one of the two or more cameras with an individual tracked in one or more other cameras of the two or more cameras. This step can be carried out, for example, using a matching module. Using the set of attribute detectors to match an individual tracked in one of the cameras with an individual tracked in one or more other cameras can include, for example, using a geometric configuration of the cameras, time information, and/or a set of individual attributes.
Using the set of one or more attribute detectors to match an individual tracked in one of the two or more cameras with an individual tracked in one or more other cameras of the two or more cameras can also include steps 210, 212 and 214. Step 210 includes using a maximum confidence value of each attribute detector to generate a feature vector for each of the one or more individuals. Step 212 includes calculating a weighted vector distance between the feature vectors. Also, step 214 includes comparing the distance to a threshold to determine if the individual tracked in one of the two or more cameras is the same individual as the individual tracked in one or more other cameras of the two or more cameras.
In one or more embodiments of the invention, the geometric configuration can be acquired before the system starts. By way of example and not limitation, consider camera A, and assume that a person can walk from camera A to camera B, but cannot walk from camera A to camera C (for example, because there is a physical wall, etc.). The geometric configuration will contain this information, indicating which cameras are adjacent.
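By way of illustration only, the following sketch shows how such an a priori geometric configuration and the previously calculated range of transit times might be consulted. The adjacency map, the time ranges, and the function name are assumptions introduced for the example.

```python
from typing import Dict, Set, Tuple

# Assumed, illustrative configuration: which cameras are reachable from which, and the
# range of walking times (in seconds) previously observed between adjacent cameras.
ADJACENT: Dict[str, Set[str]] = {"A": {"B"}, "B": {"A"}}               # a wall blocks A <-> C
TRANSIT_RANGE: Dict[Tuple[str, str], Tuple[float, float]] = {("A", "B"): (5.0, 30.0)}

def transition_plausible(cam_from: str, cam_to: str, elapsed_seconds: float) -> bool:
    """Check the a priori camera adjacency and whether the elapsed time falls within
    the range calculated from earlier data for this camera pair."""
    if cam_to not in ADJACENT.get(cam_from, set()):
        return False
    low, high = TRANSIT_RANGE.get((cam_from, cam_to), (0.0, float("inf")))
    return low <= elapsed_seconds <= high

print(transition_plausible("A", "B", 12.0))   # True under the assumed configuration
print(transition_plausible("A", "C", 12.0))   # False: camera C is not reachable from A
```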
Additionally, with respect to the matching process, consider (for illustrative purposes) the following example. A person shows up in camera A. He or she is detected, tracked, and the attribute detectors are applied. By taking the max confidence values of each detector, a feature vector Fa is formed. Assume, also, that there are five attributes (for example, beard, bald, eyeglasses, hat, and striped shirt). Accordingly, the feature vector Fa is 5-dimensional, for example, Fa=[0.4, 0.9, 0.0, 0.9, 0.2]. Subsequently, another person shows up in camera B. The same process is carried out and a second feature vector is obtained, for example, Fb=[0.2, 1.0, 0.0, 0.1, 0.1]. The distance between the two vectors (that is, the matching score) is calculated, as described herein, using a weighted vector distance between Fa and Fb, and that distance is compared to a threshold to determine if the persons are the same or not.
The techniques depicted in the figures can also include determining an amount of time taken by the individual to leave a view of the first camera and appear within a view of the second camera, to determine whether the amount of time is consistent with an established range of time. Further, one or more embodiments of the invention can include computing, if the amount of time is consistent with the established range of time, a weighted vector distance between the feature vectors obtained from the first camera and the second camera, and comparing the weighted vector distance to a threshold to determine if the individual in the view of the first camera corresponds to the individual in the view of the second camera.
A trajectory of the first camera and the second camera can be unified to correspond to a single track if the individual in the view of the first camera corresponds to the individual in the view of the second camera. In one or more embodiments of the invention, a trajectory refers to the path taken by a person. When tracking the person in one camera, it can be determined, for example, that the person is moving right or left. Accordingly, his/her path is known within the field of view of a camera. In another camera, the person may follow another trajectory. When it is determined that the same person is present in different camera views, the trajectories are linked. A single track, as used herein, refers to a single trajectory, rather than two different trajectories. One or more embodiments of the invention utilize this information to discover, for example, the path that a person-of-interest took, where he/she went, etc. Additionally, one or more weights used in the weighted vector distance can be obtained according to the reliability of each attribute detector. The reliability of each attribute detector can be determined using an artificial neural network (as well as, for example, other standard machine learning techniques).
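By way of illustration only, the following sketch shows one possible bookkeeping scheme for linking local trajectories into a single global track. The data structures and the unify helper are hypothetical and are introduced only for clarity.

```python
from typing import Dict, List

# Hypothetical bookkeeping for unifying trajectories: local track IDs from individual
# cameras are linked under a single global track when they show the same person.
global_tracks: Dict[int, List[int]] = {}   # global track ID -> local track IDs
local_to_global: Dict[int, int] = {}       # local track ID -> global track ID

def unify(track_id_a: int, track_id_b: int) -> int:
    """Link two local tracks judged to show the same person into one global track
    (simplified: assumes at most one of the two tracks is already unified)."""
    gid = local_to_global.get(track_id_a) or local_to_global.get(track_id_b)
    if gid is None:
        gid = len(global_tracks) + 1
        global_tracks[gid] = []
    for tid in (track_id_a, track_id_b):
        if tid not in local_to_global:
            local_to_global[tid] = gid
            global_tracks[gid].append(tid)
    return gid

unify(1, 2)            # track 1 (camera A) and track 2 (camera B) form global track 1
print(global_tracks)   # {1: [1, 2]}
```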
The techniques depicted in the figures can also, as described herein, include providing a system, wherein the system includes distinct software modules, each of the distinct software modules being embodied on a tangible computer-readable recordable storage medium. The modules can include, for example, a person detection module, a person tracking module, a human parsing module, and a matching module that can run, for example, on one or more hardware processors.
The method steps can then be carried out using the distinct software modules of the system, as described above, executing on the one or more hardware processors. Further, a computer program product can include a tangible computer-readable recordable storage medium with code adapted to be executed to carry out one or more method steps described herein, including the provision of the system with the distinct software modules.
Additionally, the techniques depicted in the figures can be implemented via a computer program product that can include computer useable program code stored in a computer readable storage medium in a data processing system, wherein the computer useable program code is downloaded over a network from a remote data processing system.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
One or more embodiments of the invention, or elements thereof, can be implemented in the form of an apparatus including a memory and at least one processor that is coupled to the memory and operative to perform exemplary method steps.
One or more embodiments can make use of software running on a general purpose computer or workstation. With reference to the figures, such an implementation might employ, for example, a processor 302, a memory 304, and an input/output interface formed, for example, by a display 306 and a keyboard 308.
Accordingly, computer software including instructions or code for performing the methodologies of the invention, as described herein, may be stored in one or more of the associated memory devices (for example, ROM, fixed or removable memory) and, when ready to be utilized, loaded in part or in whole (for example, into RAM) and implemented by a CPU. Such software could include, but is not limited to, firmware, resident software, microcode, and the like.
A data processing system suitable for storing and/or executing program code will include at least one processor 302 coupled directly or indirectly to memory elements 304 through a system bus 310. The memory elements can include local memory employed during actual implementation of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during implementation.
Input/output or I/O devices (including but not limited to keyboards 308, displays 306, pointing devices, and the like) can be coupled to the system either directly (such as via bus 310) or through intervening I/O controllers (omitted for clarity).
Network adapters such as network interface 314 may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
As used herein, including the claims, a “server” includes a physical data processing system (for example, system 312 as shown in the figures) running a server program.
As noted, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Media block 318 is a non-limiting example. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, component, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that any of the methods described herein can include an additional step of providing a system comprising distinct software modules embodied on a computer readable storage medium; the modules can include, for example, any or all of the components shown in the figures.
In any case, it should be understood that the components illustrated herein may be implemented in various forms of hardware, software, or combinations thereof; for example, application specific integrated circuit(s) (ASICS), functional circuitry, one or more appropriately programmed general purpose digital computers with associated memory, and the like. Given the teachings of the invention provided herein, one of ordinary skill in the related art will be able to contemplate other implementations of the components of the invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
At least one embodiment of the invention may provide one or more beneficial effects, such as, for example, tracking people, in video surveillance, across multiple cameras based on fine-grained body parts and attribute detectors.
It will be appreciated and should be understood that the exemplary embodiments of the invention described above can be implemented in a number of different fashions. Given the teachings of the invention provided herein, one of ordinary skill in the related art will be able to contemplate other implementations of the invention. Indeed, although illustrative embodiments of the present invention have been described herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various other changes and modifications may be made by one skilled in the art.
Other References:

Ronfard et al., Learning to Parse Pictures of People, Jan. 1, 2002, pp. 1-15.
Jiang et al., View Synthesis from Infrared-Visual Fused 3D Model for Face Recognition, Fifth Intl. Conference on Information, Communications and Signal Processing, pp. 177-180, 2005.
Gundimada et al., Feature Selection for Improved Face Recognition in Multisensor Images, in R. Hammoud, B. Abidi and M. Abidi, editors, Face Biometrics for Personal Identification, Signals and Communication Technology, pp. 109-120, Springer Berlin Heidelberg, 2007.
Viola et al., Rapid Object Detection Using a Boosted Cascade of Simple Features, 2001.
Viola and Jones, Robust Real-Time Object Detection, 2001.
Tseng et al., Mining from Time Series Human Movement Data, 2006.
Raskar et al., Image Fusion for Context Enhancement and Video Surrealism, 2004.
Abidi et al., Survey and Analysis of Multimodal Sensor Planning and Integration for Wide Area Surveillance, 2008.
Kong et al., Recent Advances in Visual and Infrared Face Recognition—A Review, 2005.
Hampapur et al., Multi-scale Tracking for Smart Video Surveillance, IEEE Transactions on Signal Processing, vol. 22, No. 2, Mar. 2005.
Dalal et al., Histograms of Oriented Gradients for Human Detection, CVPR 2005, San Diego, USA.
Comaniciu et al., Real-Time Tracking of Non-Rigid Objects Using Mean Shift, CVPR 2000, Hilton Head, SC, USA.
U.S. Appl. No. 12/845,116, filed Jul. 28, 2010, titled Facilitating People Search in Video Surveillance.
U.S. Appl. No. 12/845,121, filed Jul. 28, 2010, titled Multispectral Detection of Personal Attributes for Video Surveillance.
Mittal et al., M2Tracker: A Multi-View Approach to Segmenting and Tracking People in a Cluttered Scene, International Journal of Computer Vision, vol. 51, Issue 3, pp. 189-203, Feb. 2003.
Park et al., Simultaneous Tracking of Multiple Body Parts of Interacting Persons, Computer Vision and Image Understanding, vol. 102, Issue 1, pp. 1-21, 2006.
Chibelushi et al., Facial Expression Recognition: A Brief Tutorial Overview, pp. 1-5, 2002.
Xu et al., Pedestrian Detection with Local Feature Assistant, Control and Automation (ICCA 2007), IEEE International Conference, May 30-Jun. 1, 2007, pp. 1542-1547.
Mohan et al., Example-Based Object Detection in Images by Components, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, No. 4, pp. 349-361, Apr. 2001.
Felzenszwalb et al., Pictorial Structures for Object Recognition, International Journal of Computer Vision (IJCV), pp. 1-42, Jan. 2005.
Ramanan et al., Strike a Pose: Tracking People by Finding Stylized Poses, Computer Vision and Pattern Recognition (CVPR), San Diego, CA, Jun. 2005.
Tran et al., Configuration Estimates Improve Pedestrian Finding, Neural Information Processing Systems Foundation, 2007.
Tsochantaridis et al., Large Margin Methods for Structured and Interdependent Output Variables, Journal of Machine Learning Research (JMLR), Sep. 2005.
Naive Bayes Classifier, Wikipedia, http://en.wikipedia.org/wiki/Naive_Bayes_classifier, Jul. 27, 2010, 7 pages.
KaewTraKulPong et al., A Real Time Adaptive Visual Surveillance System for Tracking Low-Resolution Colour Targets in Dynamically Changing Scenes, Image and Vision Computing, vol. 21, Issue 10, pp. 913-929, 2003.
Schmidt, Automatic Initialization for Body Tracking Human Upper Body Motions, 3rd International Conference on Computer Vision Theory and Applications (VISAPP), Jan. 22, 2008.
Lao et al., Fast Detection and Modeling of Human-Body Parts from Monocular Video, F.J. Perales and R.B. Fisher (Eds.): AMDO 2008, LNCS 5098, pp. 380-389, 2008.
U.S. Appl. No. 12/845,095, filed Jul. 28, 2010, titled Semantic Parsing of Objects in Video.
International Search Report for PCT/EP2011/062910, dated Oct. 6, 2011.
Ronfard et al., Learning to Parse Pictures of People, Lecture Notes in Computer Science (LNCS), vol. 2353, Jan. 1, 2002, pp. 700-714.
Li-Jia Li et al., Towards Total Scene Understanding: Classification, Annotation and Segmentation in an Automatic Framework, Computer Vision and Pattern Recognition (CVPR 2009), IEEE, Piscataway, NJ, USA, Jun. 20, 2009, pp. 2036-2043.
Szeliski, Computer Vision: Algorithms and Applications, Jan. 1, 2011, Springer, pp. 615-621.
Marr, Vision, Jan. 1, 1982, Freeman, pp. 305-313.
Ramanan, Part-Based Models for Finding People and Estimating Their Pose, in: Thomas B. Moeslund et al., Visual Analysis of Humans, Jan. 1, 2011, Springer, pp. 1-25.
Zhu et al., A Stochastic Grammar of Images, Jan. 1, 2007, Now Publishers, pp. 259-362.
Vaquero et al., Chapter 14: Attribute-Based People Search, in: Yunqian Ma et al., Intelligent Video Surveillance: Systems and Technology, Jan. 1, 2009, pp. 387-405.
Feris, Chapter 3, Case Study: IBM Smart Surveillance System, in: Yunqian Ma et al., Intelligent Video Surveillance: Systems and Technology, Jan. 1, 2009, pp. 47-76.
Nowozin et al., Structured Learning and Prediction in Computer Vision, Jan. 1, 2011, Now Publishers, pp. 183-365.
Wu, Integration and Goal-Guided Scheduling of Bottom-up and Top-Down Computing Processes in Hierarchical Models, UCLA, Jan. 1, 2011.
Lin et al., A Stochastic Graph Grammar for Compositional Object Representation and Recognition, Pattern Recognition, Elsevier, GB, vol. 42, No. 7, Jul. 1, 2009, pp. 1297-1307.
Yang et al., Evaluating Information Contributions of Bottom-up and Top-down Processes, Computer Vision, 2009, IEEE, Piscataway, NJ, USA, Sep. 29, 2009, pp. 1042-1049.
Tan et al., Enhanced Pictorial Structures for Precise Eye Localization Under Uncontrolled Conditions, Computer Vision and Pattern Recognition (CVPR 2009), IEEE, Piscataway, NJ, USA, Jun. 20, 2009, pp. 1621-1628.
Ioffe et al., Probabilistic Methods for Finding People, International Journal of Computer Vision, Kluwer Academic Publishers, Norwell, US, vol. 43, No. 1, Jun. 1, 2001, pp. 45-68.
Samangooei et al., The Use of Semantic Human Description as a Soft Biometric, Biometrics: Theory, Applications and Systems (BTAS 2008), 2nd IEEE International Conference, 2008.
Park et al., Multiresolution Models for Object Detection, in Proceedings of the 11th European Conference on Computer Vision: Part IV, pp. 241-254, 2010.
Yokokawa et al., Face Detection with the Union of Hardware and Software, Technical Report of IEICE, The Institute of Electronics, Information and Communication Engineers, Jan. 10, 2007.