1. Technical Field
Embodiments of the present disclosure relate generally to surveillance technology, and more particularly, to a time of flight camera and a motion tracking method using the time of flight camera.
2. Description of Related Art
Cameras installed on a track system have been used to perform security surveillance by capturing images of a monitored area. A camera installed on a track system can move automatically and at regular intervals along the track, but cannot respond to specific movements.
The disclosure, including the accompanying drawings, is illustrated by way of example and not by way of limitation. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean at least one.
Each of the modules 101-105 may include one or more computerized instructions in the form of one or more programs that are stored in the storage system 13 or a computer-readable medium, and executed by the processor 12 to perform operations of the TOF camera 1. In general, the word “module”, as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions written in a programming language, such as Java, C, or Assembly. One or more software instructions in the modules may be embedded in firmware, such as EPROM. The modules described herein may be implemented as software and/or hardware modules and may be stored in any type of computer-readable medium or other storage device.
Referring to
The driving device 11 may be used to move the TOF camera 1 along the tracks of the track system 3 to track detected motion. In one embodiment, the driving device 11 may be composed of one or more servo motors.
The creation module 101 is operable to capture a plurality of three dimensional (3D) images of people, using the lens 10, and store the images in the storage system 13 to create a 3D image database. In the embodiment, each of the 3D images comprises characteristic human data, such as facial features (e.g., nose, eye, and mouth shape and size) and the general dimensions of a human being.
The capturing module 102 is operable to control the lens 10 to capture scene images of the monitored area in real-time. In one embodiment, the capturing module 102 may control the lens 10 to capture a scene image at regular intervals, such as every one or two seconds. Each of the scene images may include not only image data but also distance information between the lens 10 and objects in the monitored area. As an example, referring to
The detection module 103 is operable to analyze the scene images to check for motion in the monitored area. In the embodiment, the motion may be defined as human movement in the monitored area. The detection module 103 may refer to the 3D images of the database to determine a human presence in the monitored area and to determine motion by a person.
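The detection described above can be illustrated with a minimal sketch. The disclosure does not specify an algorithm, so the following models a scene image as a 2D grid of per-pixel distances and flags motion where the depth changes between two consecutive frames; the size check is a crude stand-in for the 3D-database comparison. All names and thresholds here are assumptions for illustration only.

```python
# Illustrative sketch (not the disclosed implementation): a scene "image" is
# a 2D grid of per-pixel distances (in meters) between the lens and objects.
# Motion is flagged where depth changes between two consecutive frames.

DEPTH_CHANGE_M = 0.3     # hypothetical minimum depth change to count as motion
MIN_REGION_PIXELS = 4    # crude stand-in for "general dimensions of a human"

def changed_pixels(prev, curr, threshold=DEPTH_CHANGE_M):
    """Return (row, col) positions whose depth changed by more than threshold."""
    return [(r, c)
            for r, row in enumerate(curr)
            for c, depth in enumerate(row)
            if abs(depth - prev[r][c]) > threshold]

def motion_detected(prev, curr):
    """True when enough pixels changed to plausibly be a person moving."""
    return len(changed_pixels(prev, curr)) >= MIN_REGION_PIXELS

# Example: a 4x4 depth map where a 2x2 region moves closer to the lens.
frame1 = [[5.0] * 4 for _ in range(4)]
frame2 = [row[:] for row in frame1]
for r in range(2):
    for c in range(2):
        frame2[r][c] = 3.0   # region is now 3 m away instead of 5 m
```

In this sketch, comparing `frame1` with itself reports no motion, while comparing it with `frame2` does, since four pixels moved by 2 m.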
The determination module 104 is operable to determine a movement direction of the motion when the motion is detected in the monitored area. In the embodiment, the determination module 104 may determine the movement direction of the motion by comparing the respective positions of the motion within two scene images of the monitored area that are consecutively captured by the lens 10.
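One simple way to realize this comparison, offered only as a sketch under assumed names, is to reduce the motion's position in each scene image to a centroid and compare the horizontal displacement between the two centroids:

```python
# Illustrative sketch: determine movement direction by comparing the motion's
# centroid in two consecutively captured scene images. The dead_zone value
# and function names are assumptions, not part of the disclosure.

def centroid(pixels):
    """Mean (row, col) of the pixel positions occupied by the motion."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return (sum(rows) / len(rows), sum(cols) / len(cols))

def movement_direction(prev_pixels, curr_pixels, dead_zone=0.5):
    """Compare centroids of the motion in two consecutive scene images."""
    _, prev_col = centroid(prev_pixels)
    _, curr_col = centroid(curr_pixels)
    if curr_col < prev_col - dead_zone:
        return "left"
    if curr_col > prev_col + dead_zone:
        return "right"
    return "stationary"

# The motion occupies columns 2-3 in the first image and columns 5-6 in the
# second, so its centroid shifts to the right.
earlier = [(1, 2), (1, 3), (2, 2), (2, 3)]
later = [(1, 5), (1, 6), (2, 5), (2, 6)]
```

The small `dead_zone` keeps sensor noise from being reported as movement; swapping the two argument lists reverses the reported direction.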
The execution module 105 is operable to control the TOF camera 1 to move along the track system 3 to track the motion according to the movement direction using the driving device 11. For example, if a person moves towards the left hand side of the monitored area, the execution module 105 may control the TOF camera 1 to move correspondingly on the track system 3. If the person moves towards the right hand side of the monitored area, the execution module 105 may control the TOF camera 1 to move accordingly on the track system 3.
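The control step above might be sketched as a simple mapping from the detected direction to a drive command for the servo motors of the driving device 11; the command vocabulary and speed value below are hypothetical:

```python
# Illustrative sketch: translate a movement direction into a track-motor
# command for the driving device. The ("move"/"hold", speed) tuple format
# is an assumption for illustration.

def drive_command(direction, speed=1.0):
    """Map a detected movement direction to a track-motor command."""
    if direction == "left":
        return ("move", -speed)   # negative speed = toward the left end of the track
    if direction == "right":
        return ("move", speed)    # positive speed = toward the right end
    return ("hold", 0.0)          # no motion detected: stay in place
```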
Referring to
In block S01, the creation module 101 captures a plurality of three dimensional (3D) images of people, using the lens 10, and stores the images in the storage system 13 to create a 3D image database. In the embodiment, each of the 3D images comprises characteristic general human data, such as facial features (e.g., the general shape and size of the nose, eyes, and mouth) and the general dimensions of the human outline.
In block S02, the capturing module 102 controls the lens 10 to capture scene images of the monitored area in real-time. In one embodiment, the capturing module 102 may direct the capture of a scene image at regular intervals, such as every one or two seconds.
In block S03, the detection module 103 analyzes the scene images to check for motion in the monitored area. In one embodiment, the motion may be defined as human movement in the monitored area. The detection module 103 may compare each of the scene images with the 3D images in the database to determine a human presence in the monitored area and thereby check for the motion.
In block S04, the detection module 103 determines whether motion is detected in the monitored area. If motion is detected in the monitored area, block S05 is implemented. Otherwise, if no motion is detected in the monitored area, block S03 is repeated.
In block S05, the determination module 104 determines a movement direction of the motion. In the embodiment, the determination module 104 may determine the movement direction of the motion by comparing the respective positions of the motion within two consecutive scene images of the monitored area.
In block S06, the execution module 105 controls the TOF camera 1 to move along the track system 3, using the driving device 11, to track the motion according to the movement direction of the motion. Details of such control have been provided above.
In other embodiments, the detection module 103 further extracts, from a current scene image of the monitored area captured after the TOF camera 1 has been moved, the smallest rectangle that encloses a complete picture of the motion, and determines whether the ratio of the area of that rectangle to the area of the full current scene image is less than a preset value (e.g., 20%). If the ratio is less than the preset value, the execution module 105 controls the TOF camera 1 to pan and/or tilt the lens 10 until the center of the rectangle coincides with the center of the current scene image viewed by the TOF camera 1. To obtain a magnified or zoomed image of the motion, the execution module 105 then directs the TOF camera 1 to increase the magnification of the current scene until the ratio is equal to or greater than the preset value. As an example, referring to
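The bounding-rectangle refinement can be sketched as follows. The 20% figure and all function names are taken as illustrative; the sketch computes the smallest rectangle enclosing the motion pixels, its area ratio relative to the scene image, and the pan/tilt offset needed to center it:

```python
# Illustrative sketch of the pan/tilt/zoom refinement: enclose the motion in
# its smallest bounding rectangle; if the rectangle covers less than a preset
# fraction of the scene image, report the offset to center it and the need to
# zoom in. Names and the preset value are assumptions for illustration.

PRESET_RATIO = 0.20   # rectangle should fill at least 20% of the scene image

def bounding_rectangle(pixels):
    """Smallest axis-aligned rectangle (top, left, bottom, right) enclosing the motion."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return (min(rows), min(cols), max(rows), max(cols))

def area_ratio(rect, image_h, image_w):
    """Fraction of the full scene image covered by the rectangle."""
    top, left, bottom, right = rect
    rect_area = (bottom - top + 1) * (right - left + 1)
    return rect_area / (image_h * image_w)

def adjust_view(pixels, image_h, image_w):
    """Return ((row, col) pan/tilt offset to center the rectangle, zoom needed?)."""
    top, left, bottom, right = bounding_rectangle(pixels)
    rect_center = ((top + bottom) / 2, (left + right) / 2)
    image_center = ((image_h - 1) / 2, (image_w - 1) / 2)
    offset = (rect_center[0] - image_center[0],
              rect_center[1] - image_center[1])
    zoom_in = area_ratio(bounding_rectangle(pixels), image_h, image_w) < PRESET_RATIO
    return offset, zoom_in

# A 2x2 motion region in the top-left of a 10x10 image covers only 4% of it,
# so the camera should pan/tilt up-left and increase the magnification.
motion = [(0, 0), (0, 1), (1, 0), (1, 1)]
```

In this example the offset is negative in both axes (the lens pans/tilts toward the top-left) and zooming in is requested because 4% is below the 20% preset.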
Although certain embodiments of the present disclosure have been specifically described, the present disclosure is not to be construed as being limited thereto. Various changes or modifications may be made to the present disclosure without departing from the scope and spirit of the present disclosure.
Number | Date | Country | Kind |
---|---|---|---|
99134811 | Oct 2010 | TW | national |