The present invention relates to wireless telecommunications in general, and, more particularly, to an integrated wireless location and surveillance system.
Video and audio surveillance systems are being deployed in increasing numbers, in both public and private venues, for security and counter-terrorism purposes.
The present invention comprises an integrated wireless location and surveillance system that provides distinct advantages over video and audio surveillance systems of the prior art. In particular, the integrated system comprises (i) a surveillance system comprising a plurality of cameras, each covering a respective zone, and (ii) a wireless location system that is capable of providing to the surveillance system, at various points in time, an estimate of the location of a wireless terminal that is associated with a person or item of interest. The surveillance system intelligently selects the video feed from the appropriate camera, based on the estimated location of the wireless terminal, and delivers the selected video feed to a display. As a person of interest moves from one zone to another, the surveillance system is capable of dynamically updating which video feed is delivered to the display.
In accordance with the first illustrative embodiment of the present invention, each camera is a digital pan-zoom-tilt (PZT) closed-circuit television camera that is automatically and dynamically controlled to photograph the current estimated location of a particular wireless terminal, following its movement within the zone. In addition, a microphone is paired with each camera, such that movements of the camera keep the microphone pointing to the estimated location of the wireless terminal.
The second illustrative embodiment also employs digital pan-zoom-tilt (PZT) closed-circuit television cameras; however, rather than the system automatically controlling the selected camera to track the wireless terminal, the selected camera is subjected to the control of a user, who can manipulate the camera via an input device such as a mouse, touchscreen, and so forth.
In accordance with the third illustrative embodiment, each camera is a fixed, ultra-high-resolution digital camera with a fisheye lens that is capable of photographing simultaneously all of the locations within the associated zone. In this embodiment, rather than the camera being manipulated to track the estimated location of the wireless terminal, a sub-feed that comprises the estimated location is extracted from the video feed, and a magnification of the extracted sub-feed is delivered to a display.
The illustrative embodiments comprise: receiving, by a data-processing system: (i) an identifier of a wireless terminal, and (ii) an estimate of a location that comprises the wireless terminal; and transmitting, from the data-processing system, a signal that causes a camera to photograph the location.
For the purposes of this specification, the following terms and their inflected forms are defined as follows:
Wireless location system 101 is a system that is capable of estimating the location of a plurality of wireless terminals (not shown in
Surveillance system 102 is a system that is capable of delivering video and audio feeds from a plurality of zones, of transmitting location queries to wireless location system 101, of receiving location estimates of wireless terminals from wireless location system 101, and of providing the functionality of the present invention. Surveillance system 102 is described in detail below and with respect to
Surveillance apparatus 201-i, where i is an integer between 1 and N inclusive, is a system that is capable of providing video and audio feeds from a respective zone. Surveillance apparatus 201-i is described in detail below and with respect to
Surveillance data-processing system 202 is a system that is capable of receiving video and audio feeds from surveillance apparatuses 201-1 through 201-N, of transmitting command signals to surveillance apparatuses 201-1 through 201-N, of receiving location estimates of wireless terminals from wireless location system 101, and of performing the pertinent tasks of the methods of
Camera 301-i is capable of photographing locations in zone i, of forwarding images to transceiver 303-i, of receiving command signals via transceiver 303-i, and of performing the received commands, in well-known fashion. In accordance with the first and second illustrative embodiments of the present invention, camera 301-i is a digital pan-zoom-tilt (PZT) closed-circuit television camera that is capable of photographing every location within its associated zone i. In accordance with the third illustrative embodiment of the present invention, camera 301-i is a fixed, ultra-high-resolution digital camera with a fisheye lens capable of photographing simultaneously all locations within zone i. As will be appreciated by those skilled in the art, some other embodiments of the present invention might employ a different type of camera, and it will be clear to those skilled in the art, after reading this disclosure, how to make and use such alternative embodiments.
Microphone 302-i is capable of receiving sound pressure waves from locations in zone i, of converting these waves into electromagnetic signals, of forwarding the electromagnetic signals to transceiver 303-i, and of receiving command signals via transceiver 303-i, in well-known fashion. In accordance with the first and second illustrative embodiments of the present invention, microphone 302-i is mounted on camera 301-i such that panning movements of camera 301-i accordingly change the direction in which microphone 302-i is pointed. In accordance with the third illustrative embodiment of the present invention, microphone 302-i is capable of changing its orientation directly in response to command signals received via transceiver 303-i, rather than indirectly via camera 301-i, because camera 301-i is fixed in the third illustrative embodiment.
Transceiver 303-i is capable of receiving electromagnetic signals from surveillance data-processing system 202 and forwarding these signals to camera 301-i and microphone 302-i, and of receiving electromagnetic signals from camera 301-i and microphone 302-i and transmitting these signals to surveillance data-processing system 202, in well-known fashion.
As will be appreciated by those skilled in the art, in some other embodiments of the present invention surveillance apparatus 201-i might comprise other sensors or devices in addition to, or in lieu of, camera 301-i and microphone 302-i, such as an infrared (IR)/heat sensor, a motion detector, a Bluetooth monitoring/directional antenna, a radio frequency identification (RFID) reader, a radio electronic intelligence gathering device, etc. Furthermore, in some other embodiments of the present invention surveillance apparatus 201-i might also comprise active devices that are capable of being steered or triggered based on location information, such as electronic or radio jammers, loudspeakers, lasers, tasers, guns, etc., as well as active radio sources that are designed to fool and elicit information from wireless terminals (e.g., fake cell sites, etc.). In any case, it will be clear to those skilled in the art, after reading this disclosure, how to make and use embodiments of the present invention that employ such variations of surveillance apparatus 201-i.
Surveillance server 401 is a data-processing system that is capable of receiving video and audio feeds from surveillance apparatuses 201-1 through 201-N and forwarding these feeds to surveillance client 403, of generating command signals and transmitting the generated command signals to surveillance apparatuses 201-1 through 201-N, of receiving command signals from surveillance client 403 and transmitting the received command signals to surveillance apparatuses 201-1 through 201-N, of receiving location estimates of wireless terminals from wireless location system 101, of reading from and writing to database 402, and of performing the pertinent tasks of the methods of
Database 402 is capable of providing persistent storage of data and efficient retrieval of the stored data, in well-known fashion. In accordance with the illustrative embodiments of the present invention, database 402 is a relational database that associates user identifiers (e.g., social security numbers, service provider customer account numbers, etc.) with wireless terminal identifiers (e.g., telephone numbers, etc.). As will be appreciated by those skilled in the art, in some other embodiments of the present invention database 402 might store other data in addition to, or instead of, that of the illustrative embodiment, or might be some other type of database (e.g., an object-oriented database, a hierarchical database, etc.), or both, and it will be clear to those skilled in the art, after reading this disclosure, how to make and use such alternative embodiments of the present invention.
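By way of example, the association that database 402 maintains between user identifiers and wireless terminal identifiers might be realized as in the following minimal sketch, which assumes a SQLite relational store; the table name, column names, and sample values are illustrative rather than prescribed by the illustrative embodiments:

    import sqlite3

    # Minimal sketch of database 402 as a relational store; schema and
    # identifiers below are illustrative only.
    conn = sqlite3.connect("surveillance.db")
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS terminal_registry (
            user_id     TEXT NOT NULL,   -- e.g., service-provider customer account number
            terminal_id TEXT NOT NULL,   -- e.g., telephone number
            PRIMARY KEY (user_id, terminal_id)
        )
        """
    )
    conn.execute("INSERT OR IGNORE INTO terminal_registry VALUES (?, ?)",
                 ("ACCT-0001", "+15551230000"))
    conn.commit()

    # Retrieve the wireless terminal identifier(s) associated with a person of interest.
    rows = conn.execute("SELECT terminal_id FROM terminal_registry WHERE user_id = ?",
                        ("ACCT-0001",)).fetchall()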
Surveillance client 403 is a data-processing system that is capable of receiving video and audio feeds via surveillance server 401, of receiving command signals from a user for remotely manipulating surveillance apparatuses 201-1 through 201-N and transmitting these command signals to surveillance server 401, of receiving command signals from a user for locally manipulating the display of the received video feeds, and of performing the pertinent tasks of the methods of
Processor 501 is a general-purpose processor that is capable of receiving information from transceiver 503, of reading data from and writing data into memory 502, of executing instructions stored in memory 502, and of forwarding information to transceiver 503, in well-known fashion. As will be appreciated by those skilled in the art, in some alternative embodiments of the present invention processor 501 might be a special-purpose processor, rather than a general-purpose processor.
Memory 502 is capable of storing data and executable instructions, in well-known fashion, and might be any combination of random-access memory (RAM), flash memory, disk drive, etc. In accordance with the illustrative embodiments, memory 502 stores executable instructions corresponding to the pertinent tasks of the methods of
Transceiver 503 is capable of receiving signals from surveillance apparatuses 201-1 through 201-N, database 402, and surveillance client 403, and forwarding information encoded in these signals to processor 501; and of receiving information from processor 501 and transmitting signals that encode this information to surveillance apparatuses 201-1 through 201-N, database 402, and surveillance client 403, in well-known fashion.
Processor 601 is a general-purpose processor that is capable of receiving information from transceiver 603, of reading data from and writing data into memory 602, of executing instructions stored in memory 602, and of forwarding information to transceiver 603, in well-known fashion. As will be appreciated by those skilled in the art, in some alternative embodiments of the present invention processor 601 might be a special-purpose processor, rather than a general-purpose processor.
Memory 602 is capable of storing data and executable instructions, in well-known fashion, and might be any combination of random-access memory (RAM), flash memory, disk drive, etc. In accordance with the illustrative embodiments, memory 602 stores executable instructions corresponding to the pertinent tasks of the methods of
Transceiver 603 is capable of receiving signals from surveillance server 401 and forwarding information encoded in these signals to processor 601, and of receiving information from processor 601 and transmitting signals that encode this information to surveillance server 401, in well-known fashion.
Display 604 is an output device such as a liquid-crystal display (LCD), cathode-ray tube (CRT), etc. that is capable of receiving electromagnetic signals encoding images and text from processor 601 and of displaying the images and text, in well-known fashion.
Speaker 605 is a transducer that is capable of receiving electromagnetic signals from processor 601 and of generating corresponding acoustic signals, in well-known fashion.
Input device 606 is a device such as a keyboard, mouse, touchscreen, etc. that is capable of receiving input from a user and of transmitting signals that encode the user input to processor 601, in well-known fashion.
At task 710, variable k is initialized to zero by surveillance system 102.
At task 720, an identifier of a wireless terminal T is received by surveillance system 102 and forwarded to wireless location system 101, in well-known fashion.
At task 730, an estimated location L of wireless terminal T is received by surveillance system 102 from wireless location system 101, in well-known fashion.
At task 740, surveillance system 102 selects a surveillance apparatus 201-i based on location L, where i is an integer between 1 and N inclusive, such that location L is within the zone i monitored by surveillance apparatus 201-i. If location L is not within any of zones 1 through N, then variable i is set to zero.
At task 750, surveillance system 102 tests whether i equals zero; if so, execution continues back at task 730, otherwise execution proceeds to task 755.
At task 755, surveillance system 102 tests whether i equals k; if not, execution proceeds to task 760, otherwise execution continues at task 790.
At task 760, surveillance system 102 tests whether k equals zero; if not, execution proceeds to task 770, otherwise execution continues at task 780.
At task 770, surveillance system 102 de-selects the audio/video feed from surveillance apparatus 201-k, in well-known fashion.
At task 780, surveillance system 102 selects the audio/video feed from surveillance apparatus 201-i, in well-known fashion.
At task 790, relevant actions are performed, depending on the particular embodiment. The actions for the first, second, and third illustrative embodiments are described in detail below and with respect to
At task 795, variable k is set to the value of i. After task 795, execution continues back at task 730.
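The control flow of tasks 710 through 795 can be summarized in the following minimal sketch, in which the helper callables (get_location_estimate, select_feed, deselect_feed, perform_embodiment_actions) and the zone objects' contains() test are illustrative stand-ins for the interactions with wireless location system 101 and surveillance apparatuses 201-1 through 201-N; each commented line corresponds to the like-numbered task above:

    def surveillance_loop(terminal_id, zones, get_location_estimate,
                          select_feed, deselect_feed, perform_embodiment_actions):
        # terminal_id is the identifier received at task 720.
        k = 0                                          # task 710: no apparatus selected yet
        while True:
            location = get_location_estimate(terminal_id)      # task 730
            # Task 740: find the zone i (1..N) whose coverage area contains location L.
            i = next((z for z in range(1, len(zones) + 1)
                      if zones[z - 1].contains(location)), 0)
            if i == 0:                                 # task 750: location outside all zones
                continue                               # return to task 730
            if i != k:                                 # task 755: selected apparatus changes
                if k != 0:                             # tasks 760 and 770
                    deselect_feed(k)
                select_feed(i)                         # task 780
            perform_embodiment_actions(i, location)    # task 790 (embodiment-specific)
            k = i                                      # task 795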
At subtask 810, surveillance data-processing system 202 transmits a signal based on location L to surveillance apparatus 201-i that causes camera 301-i to photograph location L and microphone 302-i to capture sound from location L. As will be appreciated by those skilled in the art, in some other embodiments of the present invention, the signal transmitted by surveillance data-processing system 202 at subtask 810 might also be based on a predicted future location for wireless terminal T (e.g., a predicted future location based on the direction and speed of travel of wireless terminal T, etc.).
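As one example of such prediction, a constant-velocity (dead-reckoning) estimate might be computed from the two most recent location estimates, as in the following sketch; the function name and the constant-velocity assumption are illustrative and not prescribed by the embodiment:

    def predict_location(prev_xy, curr_xy, seconds_between, seconds_ahead):
        """Project wireless terminal T forward, assuming constant velocity."""
        vx = (curr_xy[0] - prev_xy[0]) / seconds_between
        vy = (curr_xy[1] - prev_xy[1]) / seconds_between
        return (curr_xy[0] + vx * seconds_ahead,
                curr_xy[1] + vy * seconds_ahead)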
At subtask 820, the video feed of camera 301-i is output on display 604 and the audio feed from microphone 302-i is output on speaker 605, in well-known fashion. After subtask 820, execution continues at task 795 of
As will be appreciated by those skilled in the art, in some other embodiments of the present invention, one or more other actions might be performed at subtask 820 in addition to, or instead of, outputting the audio/video feed. For example, in some other embodiments of the present invention, the feed might be archived for future retrieval. As another example, in some other embodiments of the present invention in which surveillance client 403 comprises N displays, the feed might be labeled, thereby enabling a user to conveniently select one of the displays. In any case, it will be clear to those skilled in the art, after reading this disclosure, how to make and use such alternative embodiments.
At subtask 910, the video feed of camera 301-i is output on display 604 and the audio feed from microphone 302-i is output on speaker 605, in well-known fashion. As noted above with respect to subtask 820, in some other embodiments of the present invention, one or more additional actions might be performed at subtask 910 (e.g., archiving the feed, etc.), and it will be clear to those skilled in the art, after reading this disclosure, how to make and use such alternative embodiments.
At subtask 920, camera 301-i and microphone 302-i are subjected to remote manipulation by a user of surveillance client 403, via input device 606. Subtask 920 is described in detail below and with respect to
At subtask 930, the video feed from camera 301-i is subjected to manipulation by a user of surveillance client 403, via input device 606. Subtask 930 is described in detail below and with respect to
After subtask 930, execution continues at task 795 of
At subtask 1010, user input for manipulating camera 301-i and microphone 302-i is received via input device 606. For example, if input device 606 is a mouse, side-to-side movements of the mouse might correspond to lateral panning of camera 301-i and microphone 302-i, up-and-down movements of the mouse might correspond to vertical panning of camera 301-i and microphone 302-i, and rotation of a wheel on the mouse might correspond to zooming of camera 301-i's lens. As another example, if display 604 and input device 606 are combined into a touchscreen, then touching a particular pixel area of the video feed might indicate that camera 301-i should photograph the location corresponding to the pixel area.
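A minimal sketch of such a mapping follows; the gain constants and the command fields are illustrative, as the specification does not prescribe a particular command format for transceiver 303-i:

    PAN_GAIN = 0.10    # degrees of lateral pan per unit of side-to-side mouse movement
    TILT_GAIN = 0.10   # degrees of vertical pan per unit of up-and-down mouse movement
    ZOOM_GAIN = 0.05   # zoom steps per unit of wheel rotation

    def mouse_to_camera_command(dx, dy, wheel):
        """Translate raw mouse deltas into a pan/tilt/zoom command for camera 301-i."""
        return {
            "pan_degrees":  dx * PAN_GAIN,
            "tilt_degrees": -dy * TILT_GAIN,   # moving the mouse up tilts the camera upward
            "zoom_steps":   wheel * ZOOM_GAIN,
        }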
At subtask 1020, surveillance data-processing system 202 transmits to surveillance apparatus 201-i a signal that causes manipulation of camera 301-i and microphone 302-i in accordance with the user input. After subtask 1020, execution continues at subtask 930 of
At subtask 1110, user input is received via input device 606 for extracting from the video feed of camera 301-i a sub-feed that contains location L. For example, in some embodiments where input device 606 is a mouse, a user might use the mouse to define a rectangular sub-feed for extraction as follows:
Alternatively, in some other embodiments of the present invention, a user might position the cursor on the person of interest (i.e., the person associated with wireless terminal T) and click on the mouse button, thereby defining the center of a rectangular sub-feed for extraction. As will be appreciated by those skilled in the art, in some such embodiments there might be a pre-defined width and length of the rectangular sub-feed (e.g., 400 pixels by 300 pixels, etc.) while in some other embodiments the user might specify these dimensions (e.g., via text input, via one or more mouse gestures, etc.).
As will further be appreciated by those skilled in the art, in some such embodiments where the user clicks on the person of interest, the coordinates of the mouse click might be used to generate an azimuth measurement from camera 301-i, which could then be fed back to wireless location system 101 to improve the location estimate for wireless terminal T. Moreover, once the user has identified the person of interest (or “target”) in this manner, such embodiments might employ image-processing software that is capable of continuously tracking the target, thereby enabling surveillance system 102 to continuously generate azimuth measurements and provide the measurements to wireless location system 101. Still further, such continuous target tracking could be incorporated into the method of
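For example, under a simple pinhole-camera assumption with a known camera heading and horizontal field of view, the clicked pixel column might be converted to an azimuth measurement as in the following sketch; the function name, parameters, and camera model are illustrative:

    import math

    def click_to_azimuth(click_x, frame_width, camera_heading_deg, hfov_deg):
        """Return an absolute azimuth (degrees) from camera 301-i toward the clicked column."""
        # Offset of the click from the optical axis, as a fraction of half the frame width.
        frac = (click_x - frame_width / 2.0) / (frame_width / 2.0)
        offset_deg = math.degrees(math.atan(frac * math.tan(math.radians(hfov_deg / 2.0))))
        return (camera_heading_deg + offset_deg) % 360.0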
At subtask 1120, a magnification of the sub-feed is output on display 604, in well-known fashion. After subtask 1120, execution continues at task 795 of
At subtask 1210, a sub-feed that contains location L is extracted from the video feed of camera 301-i. For example, in some embodiments, a rectangular sub-array of pixels that is centered on location L might be extracted from the full rectangular array of pixels of the video feed, in well-known fashion.
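A minimal sketch of such an extraction, together with the magnification delivered downstream, follows; it assumes the frame is a NumPy array of shape (height, width, 3) and that the pixel coordinates corresponding to location L have already been determined, and the window size and zoom factor are illustrative:

    import numpy as np

    def extract_and_magnify(frame, center_x, center_y, win_w=400, win_h=300, zoom=2):
        """Crop a rectangle centered on location L's pixel coordinates and enlarge it."""
        height, width = frame.shape[:2]
        left = min(max(center_x - win_w // 2, 0), width - win_w)
        top = min(max(center_y - win_h // 2, 0), height - win_h)
        sub = frame[top:top + win_h, left:left + win_w]
        # Crude integer magnification by pixel repetition; a deployed system would interpolate.
        return np.repeat(np.repeat(sub, zoom, axis=0), zoom, axis=1)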
At subtask 1220, surveillance data-processing system 202 transmits to surveillance apparatus 201-i a signal that causes microphone 302-i to capture sound from location L (e.g., by aiming microphone 302-i in the direction of location L, etc.).
At subtask 1230, a magnification of the video sub-feed is output on display 604 and the audio feed from microphone 302-i is output on speaker 605, in well-known fashion. After subtask 1230, execution continues at task 795 of
As noted above with respect to subtasks 820 and 910, in some other embodiments of the present invention, one or more other actions might be performed at subtask 1230 in addition to, or instead of, outputting a magnification of the sub-feed (e.g., archiving the sub-feed, etc.), and it will be clear to those skilled in the art, after reading this disclosure, how to make and use such alternative embodiments.
As will further be appreciated by those skilled in the art, in some other embodiments of the present invention, there might be a location that can be photographed by two or more cameras. For example, in some such embodiments, such a location might be situated at the border of two adjacent zones (e.g., a street intersection, a corner inside a building, etc.), while in some other such embodiments, a zone might contain a plurality of cameras, rather than a single camera.
As will be appreciated by those skilled in the art, the manner in which feeds are handled in such embodiments is essentially a design and implementation choice. For example, in some such embodiments, all feeds that photograph the estimated location of wireless terminal T might be delivered to surveillance data-processing system 202, while in some other such embodiments, one of the feeds might be selected (e.g., based on which feed has the clearest picture of the person of interest, etc.). In any case, it will be clear to those skilled in the art, after reading this disclosure, how to modify the flowcharts of the illustrative embodiments to enable such functionality, and how to make and use embodiments of the present invention that implement the modified flowcharts.
It is to be understood that the disclosure teaches just one example of the illustrative embodiment and that many variations of the invention can easily be devised by those skilled in the art after reading this disclosure and that the scope of the present invention is to be determined by the following claims.
This application claims the benefit of U.S. Provisional Patent Application No. 61/351,622, filed Jun. 4, 2010, entitled “Wireless Location System Control of Surveillance Cameras,” (Attorney Docket: 465-066us) and U.S. Provisional Patent Application No. 61/363,777, filed Jul. 13, 2010, entitled “Wireless Location System Control of Surveillance Cameras,” (Attorney Docket: 465-067us), which are also incorporated by reference.