Claims
- 1. A system, comprising:
a computer configured to determine a position and shape of an object of interest from video images and characterize activity of said object of interest based on analysis of changes in said position and said shape over time.
- 2. The system of claim 1, further comprising:
a video camera coupled to said computer for providing said video images.
- 3. The system of claim 2, further comprising:
a video digitization unit coupled to said video camera and said computer for converting said video images provided by said video camera from analog to digital format.
- 4. The system of claim 3, further comprising:
a storage/retrieval unit coupled to said video digitization unit, said video camera, and said computer, for storing said video images and standard object video images.
- 5. The system of claim 1, wherein said computer includes an object identification and segregation module receiving said video images.
- 6. The system of claim 5, wherein said object identification and segregation module operates using a background subtraction algorithm in which a plurality of said video images are grouped into a set, a standard deviation map of the set of video images is created, a bounding box where a variation is greater than a predetermined threshold is removed from said set of video images, and the set of images less said bounding boxes is averaged to produce a background image.
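The background subtraction algorithm of claim 6 (restated as a method in claim 36) can be sketched in a few lines. This is an illustrative simplification, not the claimed implementation: high-variation pixels are masked directly rather than by bounding box, and the function name and threshold value are assumptions.

```python
import numpy as np

def estimate_background(frames, threshold=20.0):
    """Sketch of claims 6/36: group frames, build a per-pixel standard
    deviation map, mask the high-variation region occupied by the moving
    object, and average the remaining pixels into a background image."""
    stack = np.stack([f.astype(float) for f in frames])  # (N, H, W)
    std_map = stack.std(axis=0)                  # per-pixel variation
    moving = std_map > threshold                 # region occupied by object
    mask = np.broadcast_to(moving, stack.shape).copy()
    masked = np.ma.masked_array(stack, mask)
    bg = masked.mean(axis=0)                     # average unmasked pixels
    # where every frame was masked, fall back to the plain mean
    return np.where(np.ma.getmaskarray(bg), stack.mean(axis=0), bg.filled(0.0))
```

In practice the threshold would be tuned to the camera noise level, and per claim 21 the estimate would be refreshed periodically.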
- 7. The system of claim 6, wherein said computer further includes a behavior identification module for characterizing activity of said object, said behavior identification module being coupled to said object identification and segregation module.
- 8. The system of claim 7, wherein said computer further includes an object tracking module for tracking said object from one frame of said video images to another frame, and an object shape and location change classifier for classifying the activity of said object, coupled to each other, said object identification and segregation module, and said behavior identification module.
- 9. The system of claim 8, wherein said computer further includes a standard object behavior storage module that stores information about known behavior of a predetermined standard object for comparing the activity of said object, said standard object behavior storage module being coupled to said behavior identification module, and a standard object classifier module coupled to said standard object behavior module.
- 10. The system of claim 5, wherein said computer further includes a standard object behavior storage module that stores information about known behavior of a predetermined standard object for comparing the activity of said object, said standard object behavior storage module being coupled to said behavior identification module.
- 11. The system of claim 1, wherein said object is a living object.
- 12. The system of claim 1, wherein said object is an animal.
- 13. The system of claim 1, wherein said object is a mouse.
- 14. The system of claim 1, wherein said object is a human.
- 15. The system of claim 1, wherein said object is a man-made machine.
- 16. A method of determining and characterizing activity of an object using computer processing of video images, comprising the steps of:
detecting a foreground object of interest in said video images; tracking changes to said foreground object over a plurality of said video images; identifying and classifying said changes to said foreground object; and characterizing said activity of said foreground object based on comparison to activity of a standard object.
- 17. The method of claim 16, wherein said step of characterizing said activity includes the steps of:
describing a sequence of postures as behavior primitives; and aggregating behavior primitives into actual behavior over a range of images.
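The two-stage characterization of claim 17 (postures to primitives, primitives to behavior) can be sketched for the first stage as follows. The posture labels and transition rules here are illustrative assumptions, not the claimed rule set.

```python
def behavior_primitives(postures):
    """Sketch of claim 17, first stage: collapse a per-frame posture
    sequence into behavior primitives by inspecting posture transitions.
    All labels are illustrative assumptions."""
    prims = []
    for prev, cur in zip(postures, postures[1:]):
        if prev == cur:
            p = "stationary"
        elif (prev, cur) == ("horizontal", "vertical"):
            p = "standing up"
        elif (prev, cur) == ("vertical", "horizontal"):
            p = "falling down"
        else:
            p = "moving"
        if not prims or prims[-1] != p:   # merge repeated primitives
            prims.append(p)
    return prims
```

The second stage, aggregating primitives into actual behavior over a range of images, is addressed in claims 33 and 34.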
- 18. The method of claim 16, wherein said foreground object detection includes the step of generating a background image from an average of a set of individual frames of said video images.
- 19. The method of claim 18, wherein said step of generating a background image includes the step of determining variation in intensity of pixels within said individual frames to identify a region where said foreground object is located.
- 20. The method of claim 19, wherein said step of generating a background image further includes the step of using non-variant pixels of the video images to generate said background image.
- 21. The method of claim 20, wherein said step of generating a background image is performed periodically to correct for changes in background objects and small movements of a camera capturing said video images.
- 22. The method of claim 16, wherein said detecting a foreground object includes using a background subtraction method comprising the steps of:
multiplying frames in a neighborhood of a current image; applying a lenient threshold to a difference between said current image and a background so as to determine a broad region of interest; classifying by intensity various pixels in said region of interest to obtain said foreground object; and applying edge information to refine contours of said foreground object image.
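The lenient-threshold-then-classify sequence of claim 22 can be sketched as below. This is a simplified illustration: the edge-based contour refinement of the final step is omitted, and the function name and threshold value are assumptions.

```python
import numpy as np

def extract_foreground(frame, background, lenient=10.0):
    """Sketch of claim 22: a lenient threshold on the frame/background
    difference yields a broad region of interest; pixels inside its
    bounding box are then classified by intensity."""
    frame = frame.astype(float)
    diff = np.abs(frame - background.astype(float))
    roi = diff > lenient                      # broad region of interest
    mask = np.zeros_like(roi)
    if not roi.any():
        return mask
    ys, xs = np.where(roi)                    # bounding box of the ROI
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    box = frame[y0:y1, x0:x1]
    # classify pixels in the box by intensity: nearer to the mean of the
    # lenient hits (object) than to the mean background level
    fg_mean = frame[roi].mean()
    bg_mean = background[roi].mean()
    mask[y0:y1, x0:x1] = np.abs(box - fg_mean) < np.abs(box - bg_mean)
    return mask
```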
- 23. The method of claim 16, wherein said step of detecting said foreground object includes the step of manually identifying foreground objects to be tracked and characterized.
- 24. The method of claim 17, wherein said posture determination and description includes using statistical and contour-based shape information.
- 25. The method of claim 24, wherein said step of identifying and classifying changes to said foreground object includes using statistical shape information selected from the group consisting of:
area of the foreground object; centroid of the foreground object; bounding box and its aspect ratio of the foreground object; eccentricity of the foreground object; and a directional orientation of the foreground object relative to an axis as generated with a Principal Component Analysis.
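The statistical shape descriptors of claim 25 can be computed directly from a binary foreground mask. The sketch below is illustrative: the function name is an assumption, and eccentricity is derived here from the PCA eigenvalue ratio, one common convention.

```python
import numpy as np

def shape_features(mask):
    """Sketch of claim 25: area, centroid, bounding-box aspect ratio,
    eccentricity, and a directional orientation from Principal Component
    Analysis of the foreground pixel coordinates."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    cx, cy = xs.mean(), ys.mean()             # centroid
    w = xs.max() - xs.min() + 1               # bounding box width
    h = ys.max() - ys.min() + 1               # bounding box height
    aspect = w / h
    # PCA of pixel coordinates: the dominant eigenvector of the
    # covariance matrix gives the orientation axis
    coords = np.stack([xs - cx, ys - cy])
    cov = coords @ coords.T / area
    evals, evecs = np.linalg.eigh(cov)        # eigenvalues ascending
    major = evecs[:, -1]
    angle = np.degrees(np.arctan2(major[1], major[0]))
    ecc = np.sqrt(1.0 - evals[0] / evals[-1]) if evals[-1] > 0 else 0.0
    return area, (cx, cy), aspect, angle, ecc
```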
- 26. The method of claim 24, wherein said step of identifying and classifying changes to said foreground object uses contour-based shape information selected from the group consisting of b-spline representation, convex hull representation, and corner points.
- 27. The method of claim 24, wherein said step of identifying and classifying changes to said foreground object includes identifying a set of model postures and their description information, said set of model postures including horizontal posture, vertical posture, eating posture, or sleeping posture.
- 28. The method of claim 27, wherein said step of identifying and classifying changes to said foreground object includes classifying the statistical and contour-based shape information from a current image to assign a best-matched posture.
- 29. The method of claim 17, wherein said step of describing said behavior primitives includes the step of identifying patterns of postures over a sequence of images.
- 30. The method of claim 29, wherein said step of describing said behavior primitives further includes the step of analyzing temporal information selected from the group consisting of direction and magnitude of movement of the centroid, increase and decrease of the eccentricity, increase and decrease of the area, increase and decrease of the aspect ratio of the bounding box, change in the b-spline representation points, change in the convex hull points, and direction and magnitude of corner points.
- 31. The method of claim 29, wherein the step of describing said behavior primitives includes behavior of a standard object such as stationary, moving from left to right and vice versa, standing up, and falling down.
- 32. The method of claim 29, wherein the step of describing said behavior primitives includes the step of providing a means for entering user-defined customized behavior primitives.
- 33. The method of claim 17, wherein said step of determining actual behavior by aggregating behavior primitives includes the step of analyzing temporal ordering of the primitives, such as using information about a transition from a previous behavior primitive to a next behavior primitive.
- 34. The method of claim 33, wherein said temporal analysis is a time-series analysis such as Hidden Markov Models (HMMs).
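The HMM-based aggregation of claim 34 can be sketched with standard Viterbi decoding: hidden states are actual behaviors, observations are per-frame behavior primitives, and the decoder recovers the most likely behavior sequence. All probability values here are illustrative assumptions, not trained model parameters.

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Sketch of claim 34: decode the most likely hidden behavior
    sequence from a sequence of observed behavior primitives."""
    logp = np.log(start) + np.log(emit[:, obs[0]])
    back = []
    for o in obs[1:]:
        # scores[i, j]: best log-probability of ending in state j via i
        scores = logp[:, None] + np.log(trans) + np.log(emit[:, o])[None, :]
        back.append(scores.argmax(axis=0))    # best predecessor per state
        logp = scores.max(axis=0)
    path = [int(logp.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

With sticky transition probabilities, the decoder resists spurious single-frame primitive flips, which is the point of analyzing temporal ordering rather than classifying frames independently.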
- 35. The method of claim 33, wherein said step of determining actual behavior includes identifying actual behavior selected from the group consisting of sleeping, eating, roaming around, grooming, and climbing.
- 36. A method for background subtraction of a video image, comprising the steps of:
grouping a number of images into a set of video images; creating a standard deviation map of the grouped images; removing a bounding box area of said image where variation is above a predetermined threshold to create a partial image; and combining said partial image with an existing set of partial images by averaging the set of images to generate a complete background image devoid of a desired foreground object.
- 37. The method of claim 36, further comprising the step of subtracting said complete background image from a current image so as to obtain said desired foreground object.
- 38. The method of claim 36, wherein said steps are repeated periodically to update said complete background image.
- 39. A system, comprising:
a computer configured to detect and characterize at least a single behavior of an object of interest based on movement of said object, using video image analysis.
- 40. The system of claim 39, wherein said object is an animal and said behavior is detecting when said animal is freezing or touching or sniffing a particular item.
- 41. The system of claim 39, wherein said object is an animal and said detecting and characterizing said behavior is determined by comparing behavior of said animal against a predetermined norm.
- 42. The system of claim 39, wherein said object is an animal and characterizing said behavior is determined by analyzing a daily pattern of said object against a statistical norm so as to detect effects of drugs or genetic manipulations on said animal.
GOVERNMENT RIGHTS NOTICE
[0001] Portions of the material in this specification arose as a result of Government support under contracts MH58964 and MH58964-02 between Clever Sys., Inc. and the National Institute of Mental Health, National Institutes of Health. The Government has certain rights in this invention.
Continuations (1)

|        | Number   | Date     | Country |
| ------ | -------- | -------- | ------- |
| Parent | 09718374 | Nov 2000 | US      |
| Child  | 10666741 | Sep 2003 | US      |