Integrated smart camera 1 202 and integrated smart camera 2 204 are visible spectrum fixed mount cameras, which may include lens focal point adjustment, e.g., zoom, capability. Integrated smart camera 3 206 is a visible spectrum camera controllably adjustable in terms of rotary position, tilt position and elevation, and may include zoom capability. Integrated smart camera 4 208 is an infrared (IR) spectrum camera controllably adjustable in terms of rotary position, tilt position and elevation, and may include zoom capability. Integrated smart camera 5 210 is an IR spectrum fixed mount camera which may include a zoom capability. Integrated smart camera 6 212 is a hybrid visible spectrum/IR fixed mount camera and may include zoom capability.
The control center 214, by including two control systems, centralized control center A 220 coupled via link 248 to centralized control center B 222, supports a division of tasks, fault detection, and dynamic reconfiguration in response to detected errors. Guard interface unit 216 includes a display, recording capability, and alarm signaling devices. In some embodiments, the guard interface unit 216 is included as part of the control center 214 or is co-located at the same site as the control center 214. Mobile node 218, e.g., a wireless communications device carried by a guard on patrol or investigating a detected potential threat, or embedded in a security vehicle, includes a GPS module used for determining the accurate position of MN 218 and for directing the guard to a detected target for further investigation.
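As an illustrative aid only, the following sketch shows one way a detected target location might be translated into guidance for a guard carrying MN 218. The flat-earth bearing/distance computation and all function names are assumptions chosen for illustration and are not part of the described system.

```python
import math

def bearing_and_distance(guard_lat, guard_lon, target_lat, target_lon):
    """Direct a guard toward a detected target: compute an approximate
    bearing (degrees clockwise from north) and distance (meters) from the
    mobile node's GPS fix to the target location. A flat-earth
    approximation adequate for a facility-sized surveillance area."""
    earth_radius_m = 6_371_000.0
    d_lat = math.radians(target_lat - guard_lat)
    d_lon = math.radians(target_lon - guard_lon) * math.cos(math.radians(guard_lat))
    north_m = d_lat * earth_radius_m
    east_m = d_lon * earth_radius_m
    distance_m = math.hypot(north_m, east_m)
    bearing_deg = math.degrees(math.atan2(east_m, north_m)) % 360.0
    return bearing_deg, distance_m

# Example: guide a patrolling guard from its GPS fix to a reported target.
bearing, distance = bearing_and_distance(40.7128, -74.0060, 40.7140, -74.0045)
```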
Exemplary smart camera 302 includes a camera housing 312, a camera sensor module 314, a video display path processing module 316, a detection/tracking path processing module 318, a camera positioning/adjustment control module 320 and an Internet Protocol (IP) interface 355. Camera sensor module 314 includes an input module 322, e.g., a lens module, an image sensor 324, e.g., a visible light detector such as a charge coupled device, and a digital output interface 326 which communicates 16 bit/unit digital information signals 350, e.g., pixel value representations using 16 information bits to convey the representation.
Video display path processing module 316 includes a visual processing module 328 and a video compression module 332. In some embodiments, the visual processing module 328 is a non-uniform image compression module. In some such embodiments, an image processing control signal 366 controls the non-uniform image compression module to enhance the visual perceptibility of a detected target. Visual processing module 328 includes a visual gain adjustment/contrast control module 334, a visual equalization adjustment module 344, and an overlay module 339. Visual processing module 328 receives a 16 bit/unit output signal 350 from the digital output interface 326 of the camera sensor module 314. In addition, visual processing module 328 may, and sometimes does, receive one or more of overlay information signals 362, target/background information signals 365 and image processing control signals 366 from detection/tracking path processing module 318. Visual processing module 328 performs various visual processing operations resulting in an 8 bit/unit output signal 354 which is input to the video compression module 332. The video compression module 332 is, e.g., an MPEG module. Video compression module 332 outputs, via IP interface 355, an IP video feed signal 356 to display 304.
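The following sketch illustrates one plausible form of the processing chain just described: 16 bit/unit sensor data passing through gain/contrast adjustment and equalization to yield the 8 bit/unit signal handed to video compression. The function names, parameter values, and the particular histogram-equalization step are illustrative assumptions, not a description of modules 334, 344 or 328 themselves.

```python
import numpy as np

def visual_gain_contrast(frame16, gain=1.0, offset=0.0):
    """Apply a simple gain/offset adjustment to 16 bit/unit sensor data.
    A placeholder for a visual gain adjustment/contrast control stage."""
    adjusted = frame16.astype(np.float64) * gain + offset
    return np.clip(adjusted, 0, 65535)

def equalize(frame):
    """Histogram-equalize the adjusted frame so the display uses the
    full tonal range; stands in for a visual equalization stage."""
    hist, bins = np.histogram(frame.ravel(), bins=65536, range=(0, 65535))
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                       # normalize to [0, 1]
    return np.interp(frame.ravel(), bins[:-1], cdf).reshape(frame.shape)

def to_display_units(frame16, gain=1.2, offset=-500.0):
    """Reduce 16 bit/unit sensor data to an 8 bit/unit signal suitable
    for the video compression stage."""
    adjusted = visual_gain_contrast(frame16, gain, offset)
    equalized = equalize(adjusted)            # values now in [0, 1]
    return (equalized * 255.0).astype(np.uint8)

# Example: a synthetic 16-bit frame reduced to an 8-bit display frame.
sensor_frame = np.random.randint(0, 65536, size=(480, 640), dtype=np.uint16)
display_frame = to_display_units(sensor_frame)
```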
The display 304 includes an interface/decoder module 370 and an output module 372. Interface/decoder module 370 receives the compressed IP video feed signal 356, performs decoding operations and generates a decoded signal which it feeds to output module 372 for display, where it can be viewed by a human, e.g., a guard.
Detection/tracking path processing module 318 includes a digital signal processor 336 and memory 338 coupled together via a bus 364 over which the processor 336 and memory 338 interchange data and information. Memory 338 includes a detection/tracking module 340, target/detection information 342, and an image processing control module 341. DSP 336 executes the modules and uses the data/information in memory 338 to control the operation of the detection/tracking path processing module 318 and implement methods of the present invention. The DSP 336 receives as input 16 bit/unit output signals 350, e.g., 16 bit/unit pixel value representations, from the digital output interface 326 of the camera sensor module 314. The DSP 336 outputs information 360 which includes control information, detected threat information, and/or target information. Information 360 includes information used internally by camera 302, e.g., information directed to camera positioning/adjustment control module 320. Information 360 also includes information directed, via IP interface 355, to external devices, e.g., integrated smart camera 306, central control panel 308, and system control node 310. The DSP 336 also outputs signals directed to the visual processing module 328 of the video display path processing module 316, e.g., to advantageously adjust the visual processing of module 328 in view of detected threat and/or tracking information. Signals directed to visual processing module 328 from detection/tracking path processing module 318 include overlay information 362, target/background information 365, and image processing control signals 366.
Signals 362, 365 and/or 360 communicate at least one of: an indication that a target has been detected, information identifying the target, information characterizing an identified target, a camera positioning control signal, a second camera positioning control signal used to control a second camera, and location information relating to tracking a target.
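A minimal sketch of how such a signal might be represented in software is shown below; the field names are assumptions chosen for illustration and do not correspond to any format defined by the described system.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TargetInfoSignal:
    """Illustrative container for the kinds of information that signals
    360, 362 and 365 may carry; field names are assumptions."""
    target_detected: bool = False
    target_id: Optional[int] = None                  # identifies the target
    target_class: Optional[str] = None               # characterizes the target
    camera_position_cmd: Optional[Tuple[float, float, float]] = None        # pan, tilt, zoom
    second_camera_position_cmd: Optional[Tuple[float, float, float]] = None # for a second camera
    track_location: Optional[Tuple[float, float]] = None                    # tracking location

# Example: a detection report handed to the video display path and IP interface.
report = TargetInfoSignal(target_detected=True, target_id=7,
                          target_class="intruder",
                          track_location=(120.5, 334.0))
```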
The detection/tracking module 340 detects threats, e.g., intruders, in an area of surveillance, and performs tracking operations, e.g., such that the camera is controlled to focus in on and/or follow the intruder as the intruder moves in a surveillance coverage area. Detection/tracking module 340 generates various control, detected threat, and/or target information signals 360, overlay information signals 362, and target/background information signals 365. Image processing control module 341 generates image processing control signals 366. Target/detection information 342 includes information using 16 bit/unit resolution. Detection/tracking path processing module 318, by using the higher 16 bit/unit resolution for its input signals and internal processing operations, can potentially achieve improved detection and tracking capabilities over an embodiment which uses an 8 bit/unit resolution input. Detection/tracking path processing module 318, by using a parallel input from digital output interface 326, is not negatively impacted by particular visual processing operations which are well suited for a human's visual perception but are disadvantageous to computer-based detection and/or tracking operations.
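A small numeric example, assuming for illustration a uniform truncation from 16 to 8 bits, shows why the higher-resolution input can matter for detection: two values that differ by 150 counts at 16 bit/unit can map to the same 8-bit code and become indistinguishable.

```python
import numpy as np

# Two adjacent pixel values: a faint intruder only 150 counts brighter than
# the background at the sensor's 16 bit/unit resolution.
background_16 = np.uint16(41_000)
intruder_16 = np.uint16(41_150)

# A uniform requantization to 8 bit/unit (divide by 256) maps both values
# into the same 8-bit code, so the contrast is lost.
background_8 = background_16 // 256    # 160
intruder_8 = intruder_16 // 256        # 160 -> indistinguishable

# At 16 bit/unit the detection path still sees a usable difference.
print(int(intruder_16) - int(background_16))   # 150
print(int(intruder_8) - int(background_8))     # 0
```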
Overlay information 362 includes, e.g., information to be used by overlay module 339 to put a box around a target or superimpose a symbol on a target. Target/background information 365 includes, e.g., information conveying a determined position associated with a target, information identifying a target or type of target, e.g., from a group of potential targets or types of targets, information associating an index value or identifier with a target, information associating a direction of movement with a target and/or information associating a velocity with a target. In some embodiments, information representative of target/background information 365 is added to the visual surveillance image, e.g., text, symbols, etc. Image processing control signals 366 include, e.g., control signals used by the visual equalization adjustment module 344 and/or the visual gain adjustment/contrast control module 334.
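The sketch below shows one simple way an overlay stage could box a target and superimpose a symbol on an 8 bit/unit display frame. It is a minimal stand-in for overlay module 339, with illustrative function names and styling.

```python
import numpy as np

def overlay_target_box(frame8, top, left, bottom, right, value=255, thickness=2):
    """Draw a rectangular border around a detected target in an 8 bit/unit
    display frame."""
    marked = frame8.copy()
    marked[top:top + thickness, left:right] = value        # top edge
    marked[bottom - thickness:bottom, left:right] = value  # bottom edge
    marked[top:bottom, left:left + thickness] = value      # left edge
    marked[top:bottom, right - thickness:right] = value    # right edge
    return marked

def overlay_symbol(frame8, row, col, value=255, size=5):
    """Superimpose a simple cross-hair symbol at a target location."""
    marked = frame8.copy()
    marked[row, max(col - size, 0):col + size + 1] = value
    marked[max(row - size, 0):row + size + 1, col] = value
    return marked

# Example: box a target reported at rows 100-160, columns 200-260 and mark its centroid.
frame = np.zeros((480, 640), dtype=np.uint8)
frame = overlay_target_box(frame, 100, 200, 160, 260)
frame = overlay_symbol(frame, 130, 230)
```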
Exemplary smart camera 402 includes a camera housing 412, a camera sensor module 414, a video display path processing module 416, a detection/tracking path processing module 418, a camera positioning/adjustment control module 420, and an IP interface 455. Camera sensor module 414 includes an input module 422, an image sensor 424, e.g., an infrared electromagnetic radiation detection sensor, and a digital output interface 426 which communicates 16 bit/unit digital information signals 450, e.g., pixel value representations using 16 information bits to convey the representation.
Video display path processing module 416 includes a visual processing module 428 and a video compression module 452. In some embodiments, the visual processing module 428 is a non-uniform image compression module. In some such embodiments, an image processing control signal 466 controls the non-uniform image compression module to enhance the visual perceptibility of a detected target. Visual processing module 428 includes a visual gain adjustment/contrast control module 434, a visual equalization adjustment module 444, and an overlay module 439. Visual processing module 428 receives a 16 bit/unit output signal 450 from the digital output interface 426 of the camera sensor module 414. In addition, visual processing module 428 may, and sometimes does, receive one or more of overlay information signals 462, target/background information signals 465 and image processing control signals 466 from detection/tracking path processing module 418. Visual processing module 428 performs various visual processing operations resulting in an 8 bit/unit output signal 454 which is input to the video compression module 452. The video compression module 452, e.g., an MPEG module, compresses the signal 454 to generate an IP video feed signal 456, which is output via IP interface 455 to display 404.
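One plausible reading of a non-uniform image compression step driven by an image control signal is sketched below: pixels inside a detected target region keep full tonal detail when reducing to 8 bit/unit, while the background is quantized more coarsely. The approach and parameter values are assumptions for illustration, not the behavior of module 428.

```python
import numpy as np

def non_uniform_reduce(frame16, target_mask, coarse_levels=32):
    """Reduce a 16 bit/unit frame to 8 bit/unit non-uniformly: pixels inside
    the target mask keep the full 256-level output range, while background
    pixels are quantized to fewer levels, making the target region stand out."""
    fine = (frame16 // 256).astype(np.uint8)                 # 256 output levels
    step = 256 // coarse_levels
    coarse = ((fine // step) * step).astype(np.uint8)        # coarser background
    return np.where(target_mask, fine, coarse)

# Example: the image control signal identifies a target region of interest,
# which is rendered with full tonal detail relative to the background.
frame16 = np.random.randint(0, 65536, size=(480, 640), dtype=np.uint16)
mask = np.zeros((480, 640), dtype=bool)
mask[100:160, 200:260] = True            # detected target region
display = non_uniform_reduce(frame16, mask)
```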
The display 404 includes an interface/decoder module 470 and an output module 472. Interface/decoder module 470 receives the compressed IP video feed signal 456, performs decoding operations and generates a decoded signal which it feeds to output module 472 for display, where it can be viewed by a human, e.g., a guard.
Detection/tracking path processing module 418 includes a digital signal processor 436 and memory 438 coupled together via a bus 464 over which the processor 436 and memory 438 interchange data and information. Memory 438 includes a detection/tracking module 440, an image processing control module 437 and target/detection information 442. Detection/tracking module 440 includes an IR based target discrimination module 441. IR based target discrimination module 441 includes a comparator 443, a position determination module 445, a background temperature determination module 447, and a target temperature determination module 449. DSP 436 executes the modules and uses the data/information in memory 438 to control the operation of the detection/tracking path processing module 418 and implement methods of the present invention. The DSP 436 receives as input 16 bit/unit output signals 450, e.g., 16 bit/unit pixel value representations, from the digital output interface 426 of the camera sensor module 414. The DSP 436 outputs information 460 which includes control information, detected threat information, and/or target information. Information 460 includes information used internally by camera 402, e.g., information directed to camera positioning/adjustment control module 420. Information 460 also includes information directed, via IP interface 455, to external devices, e.g., integrated smart camera 406, central control panel 408, and system control node 410. The DSP 436 also outputs signals directed to the visual processing module 428 of the video display path processing module 416, e.g., to advantageously adjust the visual processing of module 428 in view of detected threat and/or tracking information. Signals directed to visual processing module 428 from detection/tracking path processing module 418 include overlay information 462, target/background information 465, and image processing control signals 466.
Signals 462, 465 and/or 460 communicate at least one of: an indication that a target has been detected, information identifying the target, information characterizing an identified target, a camera positioning control signal, a second camera positioning control signal used to control a second camera, and location information relating to tracking a target.
The detection/tracking module 440 detects threats, e.g., intruders, in an area of surveillance, and performs tracking operations, e.g., such that the camera can be controlled to focus in on and/or follow the intruder as the intruder moves in a coverage area. Detection/tracking module 440 generates various control, detected threat and/or target information signals 460, overlay information signals 462, and target/background information signals 465. Image processing control module 437 generates image processing control signals 466. Target/detection information 442 includes information using 16 bit/unit resolution. Detection/tracking path processing module 418, by using the higher 16 bit/unit resolution for its input signals and internal processing operations, can potentially achieve improved detection and tracking capabilities over an embodiment which uses an 8 bit/unit resolution input. Detection/tracking path processing module 418, by using a parallel input from digital output interface 426, is not negatively impacted by particular visual processing operations which are well suited for a human's visual perception but are disadvantageous to computer-based detection and/or tracking operations.
IR based target discrimination module 441 performs various operations related to target discrimination, identification, and/or tracking using infrared related information. Discrimination module 441 discriminates between targets and non-target objects based on detected infrared signals. The discrimination of module 441 is based on first size data units, e.g., 16 bit data units, which provide more information than second size data units, e.g., 8 bit size data units. In some embodiments, IR target discrimination is possible in accordance with the present invention by utilizing the first size data units as input, while IR target discrimination would not have been possible had the second size data units been used as input. Background temperature determination module 447 determines background temperature values in a surveillance area. Target temperature determination module 449 determines the temperature of potential targets, identified targets, and targets being tracked. Comparator module 443 compares potential target temperatures to stored information associated with targets, e.g., a threshold corresponding to an expected target temperature, and/or to background temperature measurements. Position determination module 445 determines target related positions, e.g., the position of a potential target, the position of an identified target, and/or the position of a target being tracked. Position determination module 445 also determines if a possible target changes position with time.
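The following sketch illustrates, under simplifying assumptions (raw IR counts standing in for temperature, illustrative thresholds), how the background temperature, target temperature, comparison, and position-change checks performed by modules 447, 449, 443 and 445 might fit together. The function names and threshold values are not taken from the described embodiment.

```python
import numpy as np

def estimate_background_temp(frame16, target_mask):
    """Background temperature determination: average the 16 bit/unit IR
    counts outside candidate target regions (no radiometric calibration
    is modeled here)."""
    return float(frame16[~target_mask].mean())

def estimate_target_temp(frame16, target_mask):
    """Target temperature determination over the candidate region."""
    return float(frame16[target_mask].mean())

def is_viable_target(target_temp, background_temp,
                     contrast_min=200.0, expected_temp_threshold=42_000.0):
    """Comparator stage: require the candidate to stand out from the
    background and to meet a threshold corresponding to an expected
    target temperature. Threshold values are illustrative assumptions."""
    return (target_temp - background_temp) >= contrast_min and \
           target_temp >= expected_temp_threshold

def has_moved(prev_centroid, curr_centroid, min_shift_pixels=2.0):
    """Position determination: decide whether a candidate changed position."""
    return float(np.hypot(curr_centroid[0] - prev_centroid[0],
                          curr_centroid[1] - prev_centroid[1])) >= min_shift_pixels

# Example: evaluate one candidate region in a synthetic IR frame.
frame = np.full((480, 640), 41_000, dtype=np.uint16)
mask = np.zeros((480, 640), dtype=bool)
mask[100:160, 200:260] = True
frame[mask] = 42_500                                   # warm candidate
bg = estimate_background_temp(frame, mask)
tgt = estimate_target_temp(frame, mask)
viable = is_viable_target(tgt, bg)                     # True
moving = has_moved((130.0, 230.0), (133.0, 231.0))     # True
```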
Overlay information 462 includes, e.g., information to be used by overlay module 439 to put a box around a target. Target/background information 465 includes, e.g., information conveying a determined temperature associated with the background, information conveying a determined temperature associated with a target, information conveying a determined position associated with a target, information identifying a target or type of target, e.g., from a group of potential targets or types of targets, information associating an index value or identifier with a target, information associating a direction of movement with a target and/or information associating a velocity with a target. In some embodiments, information representative of target/background information 465 is added to the visual surveillance image, e.g., text, symbols, etc. Image processing control signals 466 include, e.g., control signals used by the visual equalization adjustment module 444 and/or the visual gain adjustment/contrast control module 434.
The exemplary method starts in step 502, where the integrated camera assembly is powered on and initialized. Operation proceeds from start step 502 to step 504. In step 504, the integrated camera assembly operates a sensor module to generate first fixed size data units. In some embodiments, the sensor module senses visible light. In some embodiments, the sensor module senses infrared electromagnetic radiation. In some embodiments, the sensor module senses both visible light and infrared electromagnetic radiation. Operation proceeds from step 504 to step 506 and step 508. In step 506, the camera assembly performs an image processing operation, e.g., a non-uniform image compression operation, on at least some of said first fixed size data units to produce second size data units, said second size data units being of a second fixed size which is smaller than said first fixed size. Operation proceeds from step 506 to step 510. In step 510, the camera assembly generates an image output signal from said second size data units.
Returning to step 508, in step 508, the camera assembly processes at least some of said first fixed size data units, which were also processed by said image processing operation (step 506), to perform at least one of a target detection and a target tracking operation. Step 508 includes one or both of sub-step 512 and sub-step 513, along with sub-step 528 and sub-step 530. For example, if the exemplary camera assembly is a visible spectrum camera assembly, sub-steps 513, 528 and 530 are included; if the camera assembly is an infrared camera assembly, sub-steps 512, 528 and 530 are included; if the camera is a hybrid visible/IR camera, sub-steps 512, 513, 528 and 530 are included.
In sub-step 512, the camera assembly discriminates between targets and non-targets based on detected infrared signals. Sub-step 512 includes sub-steps 514, 516, 518, 520, 522, 524, and 526. In sub-step 514, the camera assembly generates a background temperature from sensed first size data units, while in sub-step 516, the camera assembly generates a possible target temperature from sensed first size data units. Operation proceeds from sub-steps 514 and 516 to sub-step 518. In sub-step 518, the camera assembly compares a temperature of a possible target to a background temperature, and then in sub-step 520 operation proceeds depending upon the result of the comparison. If the comparison of sub-step 518 indicates that the possible target should still be considered a possible target, operation proceeds from sub-step 520 to sub-step 522; otherwise the possible target is discarded.
In sub-step 522, the camera assembly compares a temperature of a possible target to a detection temperature threshold corresponding to an expected target temperature, and then in sub-step 524 operation proceeds depending on the result of the comparison. If the comparison of sub-step 522 determines that the potential target considered is a viable target, operation proceeds to sub-step 526; otherwise the potential target is discarded.
In sub-step 526, the camera assembly determines if the possible target has changed position with time.
Returning to sub-step 513, in sub-step 513, the camera assembly discriminates between targets and non-targets based on detected visible spectrum signals. The discrimination of sub-step 512 and/or sub-step 513 is based on first size data units which provide more information than second size data units.
Operation proceeds from sub-step 512 and/or sub step 513 to sub-step 528 and sub-step 530. In sub-step 528, the camera assembly generates a target information signal. In some embodiments, the target information signal communicates at least one of: an indication that a target has been detected, information identifying the target, information characterizing a detected target, a camera positioning control signal, a second camera control signal used to control a second camera, and location information relating to tracking the target. In sub-step 530, the camera assembly generates an image control signal used to control said image processing operation. In some embodiments, the image processing operation is a non-uniform image compression operation, and the non-uniform image compression operation is adjusted as a function of said image control signal to enhance the visual perceptibility of a detected target.
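The sketch below ties the method steps together in one pass: the same 16 bit/unit frame feeds both the image processing operation and the detection/tracking operation, and the resulting target information and image control signals steer how the output image is produced. It is a schematic illustration under assumed names and thresholds, not an implementation of the described method.

```python
import numpy as np

def run_surveillance_cycle(frame16, target_mask):
    """One pass through the exemplary method: the same 16 bit/unit frame
    feeds both the image processing operation (steps 506/510) and the
    detection/tracking operation (step 508), whose outputs (steps 528/530)
    steer the image processing. Function and field names are illustrative."""
    # Step 508: detection/tracking on first (16 bit) size data units.
    background = float(frame16[~target_mask].mean()) if (~target_mask).any() else 0.0
    candidate = float(frame16[target_mask].mean()) if target_mask.any() else 0.0
    detected = candidate - background >= 200.0

    # Step 528: target information signal.
    target_info = {"target_detected": detected,
                   "track_location": tuple(np.argwhere(target_mask).mean(axis=0))
                   if detected else None}

    # Step 530: image control signal used to adjust the image processing operation.
    image_control = {"enhance_mask": target_mask if detected else None}

    # Steps 506/510: produce second (8 bit) size data units and the output image.
    frame8 = (frame16 // 256).astype(np.uint8)
    if image_control["enhance_mask"] is not None:
        # crude visibility boost inside the detected region
        region = frame8[image_control["enhance_mask"]].astype(np.int32) + 40
        frame8[image_control["enhance_mask"]] = np.clip(region, 0, 255).astype(np.uint8)

    return frame8, target_info, image_control

# Example: one cycle on a synthetic frame with a single warm candidate region.
frame = np.full((480, 640), 30_000, dtype=np.uint16)
mask = np.zeros((480, 640), dtype=bool)
mask[100:160, 200:260] = True
frame[mask] = 60_000
out_frame, target_info, image_control = run_surveillance_cycle(frame, mask)
```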
In various embodiments elements described herein are implemented using one or more modules to perform the steps corresponding to one or more methods of the present invention. Thus, in some embodiments various features of the present invention are implemented using modules. Such modules may be implemented using software, hardware or a combination of software and hardware. Many of the above described methods or method steps can be implemented using machine executable instructions, such as software, included in a machine readable medium such as a memory device, e.g., RAM, floppy disk, etc., to control a machine, e.g., a general purpose computer with or without additional hardware, to implement all or portions of the above described methods, e.g., in one or more nodes. Accordingly, among other things, the present invention is directed to a machine-readable medium including machine executable instructions for causing a machine, e.g., a processor and associated hardware which may be part of a test device, to perform one or more of the steps of the above-described method(s).
Numerous additional variations on the methods and apparatus of the present invention described above will be apparent to those skilled in the art in view of the above description of the invention. Such variations are to be considered within the scope of the invention.