1. Field of the Invention
The present invention generally relates to the field of video surveillance systems, and more specifically, to software-controlled video surveillance systems.
2. Description of the Related Art
Traditional video surveillance systems include one or more video cameras and may include, for example, associated motion detectors and other components. The video cameras, which may be networked together, are usually coupled to a central monitoring station. Many of these systems, however, can be difficult and costly to install, reducing their practicality in many markets. Each security camera must be individually mounted to a surface, such as a ceiling or wall, and usually requires wiring to provide electrical power to the camera as well as wiring to transmit the video signal from the camera to the central monitoring location. For example, installing a security system with a plurality of cameras in a typical home can require a full day's work for two technicians.
In addition to the difficulty and cost of setting up a traditional video surveillance system, transmission of streaming video over a conventional network generally exhibits some undesirable properties resulting from the unpredictability of the transport delay through the transport medium. For example, network transport times may vary from a few milliseconds to several seconds, depending on network congestion, the network route involved, and other factors. To compensate for this network unpredictability, typical video streaming viewers, usually designed for viewing over the Internet (e.g., Microsoft Windows® Media Player, Apple QuickTime Player®, RealOne® Player, and others), incorporate a substantial buffer between the network connection and the video being viewed, so that the video can be extracted from the buffer at a steady rate that yields high video quality. Because of this buffering, however, the video being viewed is several seconds behind the source of the video stream. Even when an advanced user reduces the buffering delay to the shortest possible value, transport delays and connection times remain quite high.
For viewing pre-recorded video, the delay means that a user must wait for several seconds before the video begins. For a system intended for viewing live-camera images, there are two significant drawbacks to this delayed approach. First, each time a connection is established between a camera and the central monitoring station, which may be, for example, a computing device, several seconds elapse before the first image appears (e.g., typically 9-15 seconds for Microsoft® Media Player). This can be confusing to a user, and the penalty for network errors is many seconds of lost video. Second, even after video is finally visible, the temporal lag between the live camera scene and the viewed scene is very disconcerting to a viewer who sees both the live scene and the video image, as is common with a video surveillance system.
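The latency-versus-smoothness trade-off described above can be modeled in a few lines. The following Python sketch (the class and parameter names are illustrative assumptions, not part of any disclosed system or player) shows a viewer-side jitter buffer that refuses to release frames until a fixed depth has accumulated, which is exactly why the first image takes several seconds to appear and the display then lags the live scene:

```python
from collections import deque

class JitterBuffer:
    """Holds incoming frames until `depth` are queued before playback starts,
    trading startup latency and live lag for smooth playback (sketch)."""

    def __init__(self, depth):
        self.depth = depth
        self.queue = deque()
        self.started = False

    def push(self, frame):
        """Accept a frame from the network; start playback once depth is reached."""
        self.queue.append(frame)
        if len(self.queue) >= self.depth:
            self.started = True

    def pop(self):
        """Return the next frame for display, or None while still pre-buffering."""
        if self.started and self.queue:
            return self.queue.popleft()
        return None
```

With a depth of 3, the first two frames produce no output at all; only after the third frame arrives does playback begin, and from then on every displayed frame trails the camera by the buffer depth.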
Furthermore, most traditional video surveillance systems require a dedicated computer system to handle all of the compute-intensive tasks associated with handling multiple simultaneous video streams. Typical tasks include digitizing analog video, compressing digitized video before storage, performing motion detection, and rendering video images to a monitoring screen. These tasks can consume the available central processing unit (CPU) processing power on most modern computing devices (PCs), reducing the resources available for other normal computing tasks such as word processing, spreadsheets, budgeting, and other common applications.
Thus, there is a need for a low-cost video surveillance system that is user friendly, responsive to real-time viewing requirements, and light enough on CPU usage to avoid interfering with other applications, and that offers multiple functionalities, including, for example, live viewing, recording, and search/playback, that are easily customizable by the user.
The present invention includes systems and methods for video surveillance including: cameras that capture digital streaming full color video, and a highly user friendly control system that displays and stores video data transmitted by the cameras.
An exemplary embodiment of a video surveillance system includes: a dual use medium; a control system; a first camera; and a camera transceiver communicatively coupled to the dual use medium via a low latency video connection and configured to send video data from the first camera over the dual use medium and to receive control signals over the dual use medium. The control system has a control transceiver communicatively coupled to the dual use medium, the control transceiver being configured to receive video data from the first camera via the dual use medium and to send control signals to the first camera over the dual use medium. In an exemplary embodiment, the dual use medium is the electrical power wiring of a building, which provides power to the cameras as well as a secure communication channel through which video data is transmitted to the control system. The control system includes a software application running on a computing device. Various methods and systems for initializing and operating the video surveillance system, for example, in a live viewing mode, a record mode, a search/playback mode, or a setup mode, are also a part of the present invention.
An exemplary embodiment of the first camera includes a housing and an image capture system, which is supported by the housing and generates an analog video signal. The first camera also includes a processor enclosed in the housing and coupled to the image capture system. The processor transforms the analog video signal into a video data stream, and the processor includes a motion detection module for indicating a segment of the video data stream during which a motion-based event occurred. The first camera includes a transceiver coupled to the processor for sending the video data stream over a dual use medium. In particular, in exemplary embodiments of the first camera, some of the video processing tasks are performed by the camera to reduce use of the processing resources on the computing device, allowing other applications to operate normally and seamlessly.
The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawings will be provided by the Office upon request and payment of the necessary fee.
The invention has other advantages and features which will be more readily apparent from the following detailed description of the invention and the appended claims, when taken in conjunction with the accompanying drawings, in which:
Reference will now be made in detail to several embodiments of the present invention, examples of which are illustrated in the accompanying figures. Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. Wherever practicable, similar or like reference numbers may be used in the figures and may indicate similar or like functionality. The figures depict embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
In particular, systems and methods for video surveillance are described. The description of the present invention is in the context of a video surveillance system used in a home and includes a system for viewing, recording, searching, and playing video data. The system is also responsive to events associated with the video surveillance process. It will be apparent, however, to one skilled in the art that the invention can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form to avoid obscuring the invention. The present invention applies to any video data processing system, such as medical video image processing, video monitoring of testing centers, test subjects, and businesses, or other video data processing systems for other purposes; home video surveillance is used here only by way of example of the application of the principles of the present invention.
Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
Moreover, the present invention claimed below operates on, or works in conjunction with, an information or computing system. Such a computing system as claimed may be an entire video surveillance system or only portions of such a system. For example, the present invention can operate with a computing system that need only be a digital camera in the simplest sense to process and store video data. Thus, the present invention is capable of operating with any computing system from those with minimal functionality to those providing all the functionality disclosed herein.
System Overview
Also coupled to the dual use medium 110 is a control system 112. The control system 112 includes a transceiver 114 to receive the video data for processing by a computing device 116 running a software application to control the video surveillance system 100. The transceiver 114 may encrypt outgoing data and decrypt incoming data. The transceiver 114 includes, for example, a USB Receiver Module with built-in surge protection that plugs directly into a wall outlet near the PC 116. A USB cable connects the USB Receiver Module to an available USB port on the PC 116.
The control unit 220A may comprise an arithmetic logic unit, a microprocessor, a general purpose computer, a personal digital assistant, or some other information appliance equipped to provide electronic display signals to the display device 210. In one embodiment, the control unit 220A comprises a general purpose computer having a graphical user interface, which may be generated by, for example, a program written in Java running on top of an operating system like WINDOWS® or UNIX® based operating systems. In one embodiment, one or more application programs are executed by the control unit 220A including, without limitation, word processing applications, electronic mail applications, financial applications, and web browser applications.
The control unit 220A is shown including a processor 202A, a main memory 204A, and a data storage device 206A, all of which are communicatively coupled to a system bus 208A.
The processor 202A processes data signals and may comprise various computing architectures including a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, or an architecture implementing a combination of instruction sets. Although only a single processor is shown in
The main memory 204A stores instructions and/or data that may be executed by the processor 202A. The instructions and/or data may comprise code for performing any and/or all of the techniques described herein. The main memory 204A may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, or some other memory device known in the art. The main memory 204A is described in more detail below with reference to
The data storage device 206A stores data and/or instructions for the processor 202A and comprises one or more devices including a hard disk drive, a floppy disk drive, a CD-ROM device, a DVD-ROM device, a DVD-RAM device, a DVD-RW device, a flash memory device, or some other mass storage device known in the art. The data storage device 206A may include a database for storing video data electronically.
The system bus 208A represents a shared bus for communicating information and data throughout the control unit 220A. The system bus 208A may represent one or more buses including an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, a universal serial bus (USB), or some other bus known in the art to provide similar functionality. Additional components coupled to the control unit 220A through the system bus 208A include the display device 210, the keyboard 212, the cursor control 214, the network controller 216A, and the I/O audio device(s) 218.
The display device 210 represents any device equipped to display electronic images and data. The display device 210 may be, for example, a cathode ray tube (CRT), liquid crystal display (LCD), or any other similarly equipped display device, screen, or monitor.
The keyboard 212 represents an alphanumeric input device coupled to the control unit 220A to communicate information and command selections to the processor 202A.
The cursor control 214 represents a user input device equipped to communicate positional data as well as command selections to the processor 202A. The cursor control 214 may include a mouse, a trackball, a stylus, a touch screen, cursor direction keys, or other mechanisms to cause movement of a cursor.
The network controller 216A links the control unit 220A to a network that may include multiple processing systems. The network of processing systems may comprise a local area network (LAN), a wide area network (WAN) (e.g., the Internet), and/or any other interconnected data path across which multiple devices may communicate. The control unit 220A also has other conventional connections to other systems such as a network for distribution of data using standard network protocols such as TCP/IP, http, and SMTP as will be understood to those skilled in the art. The network controller 216A can be used to couple the video surveillance system 100 to a video data storage device, and/or other computing systems.
One or more I/O devices 218 are coupled to the system bus 208A. For example, the I/O audio device 218 may include a microphone for audio input and speakers for audio output. Optionally, the I/O audio device 218 may contain one or more analog-to-digital or digital-to-analog converters, and/or one or more digital signal processors (DSP) to facilitate audio processing.
Like the control unit 220A of the computing device 116, the control unit 220B of the image capture system 106N may comprise an arithmetic logic unit, a microprocessor, a microcontroller, or some other information appliance equipped to provide electronic signals to, and to receive electronic signals from, the input video device 222. In one embodiment, the control unit 220B comprises an integrated circuit DSP to provide control signals to the input video device 222.
The control unit 220B is shown including a processor 202B, a main memory 204B, and a data storage device 206B, all of which are communicatively coupled to the system bus 208B. Like the processor 202A of
The input video device 222 is coupled to the system bus 208B for input and transmission of video data. Optionally, the input video device 222 may include one or more analog-to-digital or digital-to-analog converters, and/or one or more digital signal processors to facilitate video processing.
Like the network controller 216A of
It should be apparent to one skilled in the art that the control units 220A, 220B may include more or fewer components than those shown in
Software Architecture
As shown in
The operating system 302A is preferably one of a conventional type such as, WINDOWS®, MAC®, SOLARIS® or LINUX® based operating systems. Although not shown, the memory 204A may also include one or more application programs including, without limitation, word processing applications, electronic mail applications, financial applications, and web browser applications.
The system setup module 304A is for initializing the video surveillance system 100 in accordance with the present invention. The system setup module 304A is responsive to the control environment and to input to the video surveillance system 100 and in response determines initial system parameters for the video surveillance system 100. The system setup module 304A is coupled to the camera discovery module 306A to determine the presence of the cameras 102A-102N, and it communicates with the live viewing module 310A, the record module 312A, and the search/playback module 314A to provide initial system setup parameters. The system setup module 304A preferably includes at least one wizard for automatically detecting and setting the operating parameters of the camera(s) 102N and the control system 112. The operation of the system setup module 304A will be described in more detail below with reference to
The camera discovery module 306A is coupled to the system setup module 304A and detects the presence of the cameras 102A-102N in the video surveillance system 100. The camera discovery module 306A also facilitates reestablishing connection to a particular camera 102A-102N when its connection 108A-108N is broken. The operation of the camera discovery module 306A will be described in more detail below with reference to
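The discovery and reconnection behavior described above might be realized as follows. This Python sketch assumes a hypothetical JSON announcement packet (the actual wire protocol of the system is not specified here), and the function and class names are illustrative only:

```python
import json

def encode_announcement(camera_id, firmware):
    """Serialize a camera-presence announcement (hypothetical wire format)."""
    return json.dumps({"type": "announce", "id": camera_id, "fw": firmware}).encode()

def parse_announcement(packet):
    """Return the announcement fields, or None if the packet is not a valid announcement."""
    try:
        msg = json.loads(packet.decode())
    except (UnicodeDecodeError, json.JSONDecodeError):
        return None
    return msg if isinstance(msg, dict) and msg.get("type") == "announce" else None

class CameraRegistry:
    """Tracks which cameras have announced themselves, distinguishing first
    discovery from a reconnection after a broken connection (sketch)."""

    def __init__(self):
        self.cameras = {}  # camera_id -> last announcement seen

    def handle_packet(self, packet):
        msg = parse_announcement(packet)
        if msg is None:
            return None  # not a discovery packet; ignore
        cam_id = msg["id"]
        rediscovered = cam_id in self.cameras
        self.cameras[cam_id] = msg
        return ("reconnected" if rediscovered else "discovered", cam_id)
```

A second announcement from the same camera id is treated as a reconnection, mirroring the module's role in reestablishing a broken connection 108A-108N.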
The receive data module 308A processes video data received from the cameras 102A-102N over the dual use medium 110. The receive data module 308A converts the video data signal from the format used to transmit over the dual use medium 110 into a format proper for processing by the control unit 220A. In particular, the receive data module 308A interfaces with the live viewing module 310A and the record module 312A, both of which process the video data. The receive data module 308A may also decrypt the video data signal if encryption is being used. The operation of the receive data module 308A will be described in more detail below with reference to
The live viewing module 310A works in conjunction with the receive data module 308A to provide live viewing of the video data received by the receive data module 308A. The live viewing module 310A provides a graphical user interface that allows a user to interact with the video surveillance system 100. In particular, the live viewing module 310A facilitates activation and deactivation of the cameras 102A-102N, changing of the viewing window format, changing of system parameters, access to the record mode, and access to the search/playback mode. The operation of the live viewing module 310A will be described in more detail below with reference to
The record module 312A works in conjunction with the receive data module 308A to record the video data received by the receive data module 308A. The record module 312A is responsive to user input to set the recording schedule for each camera 102A-102N, to set motion detection zones, and to allow recording in panic mode. The operation of the record module 312A will be described in more detail below with reference to
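The recording decision described above, combining a per-camera schedule, motion-only recording, and a panic-mode override, can be sketched as a single predicate. The parameter names and the hour-set representation of the schedule are assumptions for illustration, not the module's actual interface:

```python
def should_record(hour, schedule_hours, motion_only, motion, panic=False):
    """Decide whether a camera should record at this moment (sketch).

    hour           -- current hour of day, 0-23
    schedule_hours -- set of hours during which recording is scheduled
    motion_only    -- if True, record during scheduled hours only when motion is seen
    motion         -- whether motion is currently detected
    panic          -- panic mode forces recording regardless of schedule
    """
    if panic:
        return True  # panic mode overrides everything
    if hour not in schedule_hours:
        return False  # outside the recording schedule
    return motion if motion_only else True
```

For example, with a 9:00-10:00 schedule in motion-only mode, a frame at 09:30 is recorded only when motion is detected, while panic mode records unconditionally.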
The search/playback module 314A is coupled to the data storage device 206A to allow searching and playback of previously recorded video data. The search/playback module 314A provides a graphical user interface that allows a user to interact with the video surveillance system 100. In particular, the search/playback module 314A facilitates searching through previously recorded video data segments, playback of particular selected video segments, changing of the viewing window format, changing of system parameters, access to the record mode, and access to the live viewing mode. The operation of the search/playback module 314A will be described in more detail below with reference to
The remote viewing module 316A works in conjunction with the network controller 218 and the receive data module 308A to send the video data received by the receive data module 308A to a remote location to facilitate remote viewing of the video data. The remote viewing module 316A may include several functionalities. For example, the remote viewing module 316A captures video frames from the video pipeline. It may perform conversion from the current video pipeline frame rate to a frame rate suitable for remote streaming, which may be higher or lower than the video pipeline frame rate. The remote viewing module 316A may perform resampling of the video data format (i.e., pixel resolution) from the video pipeline video format to a data format suitable for remote streaming. This data format usually has a lower resolution than the video pipeline format, but not necessarily. For one-camera view modes, the remote viewing module 316A may perform selection of which camera, out of N, is to be streamed for remote viewing at a particular moment in time. This can be either a fixed selection, or the remote viewing module 316A can cycle through the N cameras, or through M selected cameras out of N, one at a time. For multi-camera view modes, the remote viewing module 316A may assemble mosaic formats, such as a 2×2 mosaic, of multiple camera images into a single video stream for remote viewing. Lastly, the remote viewing module 316A may communicate with a remote viewing server to provide status of the video surveillance system 100 and/or the cameras 102A-102N. Those of skill in the art will appreciate that this list of functionalities is not exclusive and that not all of these functionalities will be used under all conditions.
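Three of the remote viewing functionalities above, round-robin camera cycling, frame-rate reduction by decimation, and 2×2 mosaic assembly, can be sketched in Python. Frames are modeled as lists of pixel rows; the helper names are illustrative assumptions, not the module's actual interface:

```python
from itertools import cycle

def make_cycler(camera_ids):
    """Round-robin selector over the M cameras chosen for single-camera remote view."""
    it = cycle(camera_ids)
    return lambda: next(it)

def decimate(frames, src_fps, dst_fps):
    """Drop frames to convert a higher pipeline rate down to a streaming rate."""
    step = src_fps / dst_fps
    count = int(len(frames) * dst_fps / src_fps)
    return [frames[int(i * step)] for i in range(count)]

def mosaic_2x2(frames):
    """Tile four equally sized frames (each a list of pixel rows) into one
    2x2 composite frame for single-stream remote viewing."""
    a, b, c, d = frames
    top = [ra + rb for ra, rb in zip(a, b)]
    bottom = [rc + rd for rc, rd in zip(c, d)]
    return top + bottom
```

Decimating a 30 fps pipeline to 10 fps keeps every third frame, and the mosaic doubles both pixel dimensions of a single camera image while carrying all four views in one stream.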
The external applications module 318A allows the video surveillance system 100 to provide video and control interfaces to other associated applications. The external applications module 318A works in conjunction with the receive data module 308A to facilitate sending of the video data received by the receive data module 308A to the external applications. The external applications module 318A may also work in conjunction with the network controller 218 to send the data to remote applications. As an example, a second computing device, such as a PC running the Windows XP® Media Center Edition (MCE) operating system, may be connected to the user's television or another video display system. The MCE PC can be interfaced to the video surveillance system 100 over a LAN or other network. A software module running on the MCE PC provides a user interface for the user to control the video surveillance system 100 remotely from the MCE PC, to view video data from the cameras 102A-102N, and/or to be notified of motion events, among other functionalities. For example, if the user is watching a television program using the MCE PC and a large screen TV, the external applications module 318A would allow a message to pop up saying “Camera 3 has detected motion. Do you wish to see this video?” Alternatively, the external applications module 318A would enable the video data to appear in a picture-in-picture window for a period of time. Thus, the MCE PC provides a mechanism to watch and control the video surveillance system 100 using the TV and the MCE PC. Those of skill in the art will appreciate that this example of an external application communicating with the video surveillance system 100 via the external applications module 318A to provide expanded system-wide functionality is merely illustrative, and other scenarios are possible.
The error handling and diagnostics module 320A works in conjunction with several of the preceding modules to handle and diagnose errors, for example, regarding data transmission or communication. For example, the error handling and diagnostics module 320A may work with the camera discovery module 306A in the event of a lost connection 108A-108N to a camera 102A-102N. As another example, the error handling and diagnostics module 320A may work with the receive data module 308A in the event of an incomplete data stream.
As shown in
The real time executive 302B is a conventional type known to those skilled in the art and controls interaction among the other modules of memory 204B.
The system setup module 304B is for initializing the camera 102N. The system setup module 304B is responsive to the environment and determines initial system parameters for the camera 102N. The system setup module 304B is coupled to the camera discovery module 306B to trigger an announcement of the presence of the camera 102N, and it communicates with the record module 312B to provide initial system setup parameters. Additionally, the system setup module 304B includes an update capability for receiving updated system parameters and distributing them to the other modules in memory 204B. Furthermore, in an alternative embodiment, a user can independently interact with the system setup module 304B to alter system parameters. As will be apparent to one skilled in the art, the operation of the system setup module 304B is similar to that described below with reference to
The camera discovery module 306B is coupled to the system setup module 304B and signals the presence of the camera 102N in the video surveillance system 100. The camera discovery module 306B also facilitates re-announcing the presence of the camera 102N when its connection 108N is broken. As will be apparent to one skilled in the art, the operation of the camera discovery module 306B is similar to that described below with reference to
The motion detection module 322 is responsible for identifying video frames in which motion is detected. The motion detection module 322 is coupled to the send data module 308B to send the motion detection signal along with the transmitted video data signal. In another embodiment, the motion detection module 322 works in conjunction with the send data module 308B described below to control transmission of the video data so that the video data is only sent from the camera 102N to the computing device 116 over the dual use medium 110 when there is motion detected, allowing the video surveillance system 100 to conserve bandwidth when there is no motion detected. As will be apparent to one skilled in the art, the operation of the motion detection module 322 enables the motion detection feature described below with reference to
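One common way such a motion detection module could be implemented is simple frame differencing: count the pixels that change significantly between consecutive frames and flag motion when enough of them do. The sketch below also gates transmission so frames are emitted only when motion is present, matching the bandwidth-conserving behavior described above. The thresholds and the flat-list frame representation are illustrative assumptions, not the camera's actual algorithm:

```python
def motion_detected(prev, curr, pixel_threshold=25, count_threshold=10):
    """Flag motion when at least count_threshold pixels differ from the
    previous frame by more than pixel_threshold (sketch)."""
    changed = sum(1 for p, c in zip(prev, curr) if abs(p - c) > pixel_threshold)
    return changed >= count_threshold

def motion_gated(frames, pixel_threshold=25, count_threshold=10):
    """Yield only the frames in which motion was detected, conserving
    bandwidth when the scene is static."""
    prev = None
    for frame in frames:
        if prev is not None and motion_detected(prev, frame,
                                                pixel_threshold, count_threshold):
            yield frame
        prev = frame
```

A static scene produces no output at all; only frames that differ enough from their predecessor are passed on for transmission.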
The image compression module 324 is responsible for compressing the video data signal prior to transmitting it over the dual use medium 110. The operation of the image compression module 324 will be described in more detail below with reference to
The image management module 326 controls image quality of the video data. For example, the image management module 326 may affect such parameters as automatic brightness, resolution, and bit rate. The operation of the image management module 326 will be described in more detail below with reference to
The send data module 308B is responsible for network communication, for example, using an internet protocol (IP) stack, to transmit the video data signal over the dual use medium 110. In particular, the send data module 308B encrypts the video data signal if encryption is being used. The send data module 308B also interfaces with the record module 312B to provide the video data to the record module 312B for local recording, if necessary. The operation of the send data module 308B will be described in more detail below with reference to
The record module 312B locally records the video data received by the image capture system 106N, if necessary. The record module 312B is coupled to the system setup module 304B to receive such parameters as the default record mode and motion detection zones. As will be apparent to one skilled in the art, the operation of the record module 312B is similar to that described below with reference to
The external applications module 318B works in conjunction with the send data module 308B to send the data to remote applications, such as applications that may be located in the memory 204A of the computing device 116. Another example of an external application might be Windows® Media Player running on an external PC, in which a user enters a Uniform Resource Locator (URL) that identifies one of the cameras 102A-102N to view video data from the identified camera 102A-102N.
Methods are described below, particularly with respect to the flowcharts of
Video data from a camera 102N is presented to a network socket 414, for example, via an Ethernet network IP socket connection. The network socket 414 accomplishes the transfer of video data from the camera 102N to the rest of the data flow 400, using, for example, either TCP/IP or UDP/IP Ethernet packets. The network socket 414 also implements a retry and recovery mechanism in the event of network failures or errors.
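The retry and recovery mechanism mentioned above might take the following shape: wrap the receive operation and retry with exponential backoff when the connection drops. This is a generic sketch with an injectable receive callable rather than a real socket, and the names are illustrative, not the network socket 414's actual interface:

```python
import time

def recv_with_retry(recv_fn, retries=3, backoff=0.0):
    """Call recv_fn and return its result, retrying up to `retries` times on
    ConnectionError with exponential backoff between attempts (sketch)."""
    for attempt in range(retries + 1):
        try:
            return recv_fn()
        except ConnectionError:
            if attempt == retries:
                raise  # recovery failed; surface the error to the caller
            time.sleep(backoff * (2 ** attempt))
```

In practice `recv_fn` would read a packet from the TCP/IP or UDP/IP socket; here any callable that may raise `ConnectionError` can stand in for it.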
A DirectX custom source filter 416 receives the video data stream from the network socket 414. The video data from the camera 102N is received as standard Ethernet packets. This packet data is combined into video frames, where each frame has a header, plus the video information for each frame. The header contains time stamp information, a frame type, and information about motion detection, if any, for that frame.
The frame type may be, for example, a Key frame or I frame. Most modern video compression schemes that achieve very high compression rates use a combination of Key frames and I frames. A Key frame is a stand-alone video frame, which can be rendered without any other information from previous frames. On the other hand, an I frame contains primarily information about how this particular I frame differs from the previous frame. Consequently, I frames are typically much smaller than Key frames, resulting in greater data compression. There are typically several I frames between Key frames, resulting in significant data reduction.
The output of the DirectX custom source filter 416 is DirectX video frames, which are transmitted to a record queue 402, to a DirectX RTP render filter 418, or to both. In one embodiment, the frames of the video data stream (i.e. the sequence of video frames) are encoded using the Microsoft Windows® Media 9 compression format; however, the frames can also be encoded in any popular video format such as MJPEG, MPEG-2, MPEG-4, or other formats.
The DirectX RTP render filter 418 receives video frames as input data and repackages these video frames into an RTP data stream and sends the data stream via the Internal RTP data bus 420. The DirectX RTP render filter 418 sends video data as RTP data packets via bus 420 to any registered destinations, such as the DirectX RTP source filters 422, 426, and 430.
If live viewing is active, the DirectX RTP source filter 422 registers itself as a destination for the RTP render filter 418 and then receives video frames via the RTP data bus 420. The DirectX RTP source filter 422 receives the RTP data packets, extracts the individual video frames from the RTP stream, and passes these video frames to the DirectX live viewing graph filter 424. If live viewing is not active, no data is sent to the DirectX RTP source filter 422 and subsequent blocks.
The DirectX live viewing graph filter 424 processes the video frames and prepares them for presentation to the DirectX video mixing renderer (VMR) 410. The VMR 410 includes the Windows® Media 9 decoder function, which creates full video frames from the compressed sequence of Key frames and I frames. It also superimposes text and graphics information over the video images. The resultant displayable image is then rendered onto the surface of a designated display window 412. Each camera 102A-102N has a designated display window 412.
Video data from the camera 102N that is received by the DirectX custom source filter 416 is also sent to the record queue 402. The record queue 402 is used to deal with the video compression format, which reduces network bandwidth by using a combination of Key frames and I frames. For example, a user might wish to start recording at the moment motion is detected in the camera 102N. But, due to the Key/I frame composition of the compressed video data stream, a new recording must begin with a Key frame, since I frames cannot be rendered without the previous sequence of frames, back to the previous Key frame. The record queue 402 stores the most recent set of frames, back to the most recent Key frame, or perhaps back a multiple number of Key frames if more information is stored in the record queue 402. Thus, when recording is to start, the recording can begin at a Key frame prior to the trigger point. The temporary storage performed by the record queue 402 may be organized as a software queue.
The DirectX writer 404 receives video data from the record queue 402 until the record queue 402 is empty, and thereafter, the DirectX writer 404 receives video data directly from the DirectX custom source filter 416. When recording is initiated, the processor 202 supplies a filename to the DirectX writer 404, which then writes a standard Windows® Media 9 (.wmv) data file under the designated disk filename in the disk storage 406. A particular feature of the present invention is that recording can start and stop as required, without disturbing the flow of video frames to the live viewing data path if live viewing is active.
A significant benefit derived from storing the recorded video data as standard Windows® Media 9 (.wmv) files is that the recorded video files can be played using the standard Windows® Media Player, and they can be viewed as thumbnail images in the Windows® Explorer. The recorded video files do not require the video surveillance system 100 for viewing. Thus, if a video clip is sent via email to some other location, it can be viewed using standard Windows® software components without requiring the video surveillance system 100 to be installed as a viewer.
When in search mode, the DirectX playback graph filter 408 receives a filename corresponding to the file the user has selected for playback. The DirectX playback graph filter 408 opens the file and begins playing the file by sending video frames to the VMR 410, which renders the displayable video image to the designated display window 412, similarly to the process used for live viewing. The user can specify a playback file position within the file, which is translated by the DirectX playback graph filter 408 from an absolute playback time to a time relative to the start of the particular recorded file.
The DirectX playback graph filter 408 also supports playback at rates other than normal (1×) playback speed. The DirectX playback graph filter 408 is responsible for sending each frame on to the VMR 410 at the correct time, according to the time stamp included with each video frame at the time it was acquired in the camera 102N, and according to the current playback rate (i.e., speed).
The internal RTP data bus 420 provides a flexible means of distributing video samples from the camera 102N to multiple destinations. These destinations might include the live viewing display window 412, a remote viewing connection, or another external viewing application. If remote viewing is active, the DirectX RTP render filter 418 sends the video frames via the data bus 420 to the DirectX source filter 426, which sends the video data to a remote viewing data socket 428 to transmit the data to a remote viewing application. If video data is intended for other external applications, the DirectX RTP render filter 418 sends the video frames via the data bus 420 to the DirectX source filter 430, which sends the video data to an external viewing data socket 432 to transmit the data to an external application such as a Microsoft Media Center PC.
As an example of remote viewing, the remote viewing data socket 428 of the video surveillance system 100 facilitates monitoring of nearly-live video data feeds from the cameras 102A-102N over the Internet. A user can specify one or more remote viewing locations, for example, Windows® Mobile enabled cell phones, handheld devices, Internet browsers on remote computing devices at a second home or office, and other devices that support Windows® Media 9 video. Examples of compatible cell phones include the Anextek SP230, Palm Treo 700w, and HP iPAQ hw6500 series. Examples of compatible wireless handled devices include the Asus MyPal A730W and Toshiba e805. Examples of compatible Internet browsers include Microsoft® Internet Explorer. Several such remote viewing locations may be enabled. When remote viewing is enabled, the computing device 116 acts as a video server ready to publish video from the secure environment created using the dual use network 110, over the Internet, to the remote viewing location.
One important consideration with the implementation of the RTP data bus 420 and the RTP render filter 418 is that destinations can be added or deleted without disturbing the operation of other destinations. For example, the DirectX RTP source filters 426, 430 can register themselves as destinations for the RTP render filter 418 without disrupting other operations of the data flow 400. In other words, the live viewing and/or recording do not have to temporarily halt while a remote viewing connection or external application destination is added or deleted. If remote or external viewing are not active, no data is sent to the DirectX RTP source filters 426, 430 and subsequent blocks.
Initialization and Operation of the Video Surveillance System
A significant advantage of the video surveillance system 100 of the present invention is ease of installation, which is accomplished in part using two wizards to help users make simple choices. When the memory 204A is first configured, for example, by installation via compact disk (CD), an installation wizard handles conventional tasks such as installing device drivers and copying required files to their proper destinations. When the video surveillance system 100 is operated for the first time, another wizard examines 802 the user's computer environment and sets up the remaining required items that are machine-dependent. This includes, for example, determining video disk storage location, and setting up parameters for a power line network, in the case where the dual use medium 110 is a building power line. Unless the user wishes to change a setting from the defaults suggested by the installation wizards, no user action is required other than to simply accept each suggestion.
Another way in which the video surveillance system 100 is characterized by ease of installations is through use of the dual use medium 110 to create a separate dedicated environment for the video surveillance system 100. Traditional networked cameras and computers can be difficult to set up properly due to the need to co-exist with other networked devices. These difficulties are avoided in the video surveillance system 100 through use of the dual use medium 110. The cameras 102A-102N operate in their own separate dedicated environment and can co-exist with conventional network devices. For example, where the dual use medium 110 is a building power line system, few homes will have pre-existing power line networks, which means the examination 802 process can determine address assignments and settings without worrying about compatibility with other devices. A separate network interface connection (NIC) is created on the computing device 116 to service the environment of the video surveillance system 100.
Another consideration addressed during the examination step 802 of the initialization process 800 is firewall handling, which also contributes to the ease of installation. Many computers contain built-in firewalls, which present a difficult issue for computer peripheral components used in networked systems, such as the video surveillance system 100. Many users may not know what firewall(s) are present or how to configure them. During the examination 802 of the computer environment, special test functions are used to detect and display helpful information to the user regarding firewalls. Such information includes (1) whether any firewall is preventing proper operation of the video surveillance system 100, and (2) what type of traffic is currently being blocked (e.g., UDP broadcast, UDP P-P, TCP P-P, and Universal Plug and Play). For the most popular firewall programs, a message is displayed to the user, notifying the user of the presence of the particular firewall.
For some common firewall programs, for example, the built-in Windows XP® firewall, the installation wizard used in the examination 802 step can automatically reconfigure the firewall to allow the video surveillance system 100 to operate normally. If such automatic reconfiguration is not possible, the installation wizard invokes a help system that displays information telling the user how to reconfigure the firewall to permit operation of the video surveillance system 100. This directed troubleshooting process performs the most difficult parts of the task for the user—determining that there is a firewall problem and what needs to be changed in the firewall setup—and provides appropriate information to the user.
The initialization process 800 also includes a system to automatically detect 804 cameras 102A-102N. The video surveillance system 100 employs the industry standard Universal Plug and Play (UPnP) protocol to establish a connection between the cameras 102A-102N and the control system 112, in particular the memory 204A. The UPnP protocol provides reliable discovery and control between units operating on a common network segment (e.g., network 110).
When the cameras 102A-102N are first turned on, each announces its presence over the dual use medium 110 with a UPnP “notify” message. The cameras 102A-102N continue to do so periodically, according to the UPnP protocols. Similarly, as part of the initialization process 800, UPnP “search” messages are sent out by the control system 112, requesting that any cameras 102A-102N announce their presence. This UPnP discovery process provides a very reliable means of automatically detecting 804 the presence of the cameras 102A-102N in the video surveillance system 100. The user simply plugs in a camera 102A-102N to a power outlet and connects the PC 116 to the dual use medium 110 (e.g., a power line) through the transceiver 114 (e.g., a USB power line adapter).
Once a camera 102A-102N is detected, the initialization process 800 establishes 806 a low latency video connection (e.g., connection 108A-108N) with the camera 102A-102N. The architecture combines DirectX components with custom software components to achieve the low latency video connection 108A-108N as the interface between the camera 102A-102N and control system 112. Connection times are generally about one second, and typical steady-state latency times are on the order of one-third to one-half second. The connection time is longer than the steady-state latency because the control system 112 must wait for the next Key frame to come from the camera 102A-102N, which may occur about every one second. In one embodiment, the control system 112 may request the camera 102A-102N to send a Key frame on demand, so that no waiting is required, reducing the connection time. The reduced connection and steady-state latency times provide the feel of a “real-time” video connection, which is possible due to elimination of the conventional network buffer. Elimination of the conventional buffer is feasible because the video surveillance system 100 employs a dedicated communication environment via the dual use medium 110, which allows a much tighter control of latency than traditional networks such as the Internet can provide.
If a connection 108A-108N to a camera 102A-102N is “lost” 812 due to some temporary problem with the connection 108A-108N, the detect 804 camera step sends out new search messages to attempt to reestablish the connection 108A-108N to the camera 102A-102N. This particular portion of the initialization process 800 remains active throughout the operation of the video surveillance system 100 to address lost camera connections that may occur at any time during operation.
A user can accept the default configuration suggested by the installation wizards during the examine environment and configure step 802. Alternatively, a user can choose to modify parameters via a manual system setup 808, which includes a graphical user interface described in more detail below with reference to
The initialization process 800 receives 810 video data from all cameras 102A-102N detected 804 on the dual use medium 110. The video data from the cameras 102A-102N is sent as a special digitally-encoded data stream over the dual use medium 110 to the control system 112. To enhance security for the video data, a system password entered by the user, as described above, is used as an encryption key for the video data on the dual use medium 110. Without this encryption key, the video data cannot be decrypted or viewed by another party, even if they gain physical access to the user's dual use medium 110, which may be a power line, and can “see” the video data.
The initialization process 800 of
As shown in
Under the Maintenance menu option, the Rebuild Video Segment option examines all video files in the video surveillance system 100 directories and rebuilds the segment list. This action is appropriate if a user suspects an error in the list of video segments. The Rescan For Cameras option initiates the detect cameras 804 step to rescan the dual use medium 110 to detect and show all cameras 102A-102N connected to the dual use medium 110. The Camera Network option allows a user to view and modify settings for the camera network, e.g., the power line network established via the dual use medium 110. For example, the user can monitor data rates and view the power line network nodes (e.g., cameras 102A-102N and USB adapters for the control system 112). The Set Password option allows a user to specify a password. A password may be used to access certain maintenance functions and to prevent inadvertent changes to, or shutdown of, the video surveillance system 100. The password also serves as an encryption key, which will be discussed further below.
Under the How Do I? menu option, various options provide instructions regarding how to Add A Camera to the network, Assign A Camera To A Window, Manually Record A Camera, Schedule Video Recording For A Camera, and Review Recorded Video. Under the Help menu option, the About option displays company and system version information.
The setup program tab screens 506A of the graphical user interface 500A provide access to features through various tab screens, such as Cam Properties, Cam Statistics, Recording Schedule, Video Segments, Video Adjust, Motion Detection, Disk Usage, eMail, and Advanced. As shown in
On the Recording Schedule tab screen, the user can change the recording mode and edit the recording schedule for the cameras 102A-102N. On the Video Segments tab screen, the user can view a list of recorded video events near the selected time. On the Video Adjust tab screen, the user can adjust video settings, such as brightness, contrast, quality, and sharpness, for the selected camera 102A-102N. An Auto checkbox, if selected, allows the selected camera 102A-102N to activate a built-in algorithm to seek automatic brightness and contrast settings, based on the camera's environment.
On the Motion Detection tab screen, the user can create new motion detection zones, delete old motion detection zones, or modify existing motion detection zones for the cameras 102A-102N. The user can also modify settings relating to the capturing and recording of video segments. For example, the default setting for motion detection sensitivity is pre-programmed into the firmware on the cameras 102A-102N; however, the user can modify the Sensitivity to influence how sensitive the motion triggering algorithm is to motion. A low Sensitivity setting will require more motion to trigger recording.
On the Disk Usage tab screen, the user can view statistics on the disk allocation for the saved video files, such as video path, free space, maximum allocation, current usage, and discard date. The user can indicate how much disk space to allow the video surveillance system 100 to use on the computer's hard drive. For example, a user with a 100 GB drive may wish to allocate 20 GB to video storage. The video files are stored under a common root directory, which can be designated by the user. Each camera 102A-102N has its own subdirectory under the root video directory, where the video files are stored. There may be hundreds or thousands of video files in each camera's directory. As a default, each file is given a unique filename according to a sequential numbering algorithm (e.g., F—000001, F—000002, etc.).
When the user's designated disk quota is reached, existing video files are deleted to make room for new video files, starting with the oldest file among all of the cameras 102A-102N. The user can designate any video file as “protected,” which will make that file read-only in the operating system and also set a flag in the internal file structure to prevent automatic deletion of the protected video file if the disk quota is reached. Such protected files are shown as red in the search mode timeline, discussed in detail below, instead of the normal green segment indication.
On the eMail tab screen, the user can specify local eMail account information and desired eMail destinations. On the Advanced tab screen, the user can view and edit advanced properties for the selected camera 102A-102N, such as horizontal and vertical offset, resolution (Res) amounts, and the bit rate type and amount.
The options described with respect to the graphical user interfaces 500A and 500C are available initially during the system setup 808 of the initialization process 800. The options are also available to the user, however, via the same graphical user interfaces 500A and 500C, during regular operation of the video surveillance system 100 for further customization of the video surveillance system 100, as will be discussed with respect to
To improve reliability, the video surveillance system 100 can be configured to terminate and restart its operation at a set time each day. The video surveillance system 100 can typically shut down and be fully operational in less than fifteen seconds, minimizing any down time, while providing a fresh start each day. This feature can be disabled or enabled by the user, and the time of restart can be set by the user.
The video surveillance system 100 Watchdog Timer is an additional reliability mechanism. The operating process 900 is continually monitoring and executing its normal functions with a background executive scheduler. If, for some reason, the executive scheduler becomes inactive for a period of time (e.g., 30 seconds), the Watchdog Timer will terminate and restart the video surveillance system 100 using the same restart mechanism that is used for the daily program restart. Thus, the video surveillance system 100 can detect problems in its own execution and recover without user intervention.
Operation of the video surveillance system 100 includes four main modes: system setup, live viewing, record, and search/playback. If the user desires to change the system setup 904, the operating process 900 enters the system setup 808 described previously with respect to
Those of skill in the art will appreciate that the modules described in the operating process 900 of
Those of skill in the art will appreciate that, with minor modifications, operating process 900 of
Live Viewing Mode
The live viewing mode 1000 of the video surveillance system 100 receives 1002 video data from all of the cameras 102A-102N over the dual use medium 110. The video data received may be encrypted for additional system security as described above with respect to receiving video data 810 of
The live viewing mode 1000 includes steps necessary to process the received video data, including formatting 1004 a viewing window, displaying 1006 the video data, controlling camera activation 1008, and displaying a timestamp 1010 associated with the received data. Each of these steps is described in greater detail with reference to
The live viewing mode 1000 also provides support for sending 1012 video data, for example, by eMail, to alternative destinations under certain events or conditions. As discussed above with respect to system setup 808 of
If the user wishes to change modes 1012, the live viewing mode 1000 terminates, returning the user to the operating process 900 of
The live viewing mode GUI 600A is straightforward and intuitive, with a minimum of unnecessary or advanced controls, and includes title bar application options 604, main screen feature buttons 608, a viewing window 602A, multiple camera view selectors 606, a message window 610, a date/time stamp window 612, a current time clock 614, and a camera activation panel 616. The GUI 600B and the GUI 600C are identical to the GUI 600A, except for differently configured viewing windows 602B and 602C, respectively.
The title bar application options 604 includes four buttons to allow the user to control various application options of the video surveillance system 100. Selection, for example, by clicking, of the help button, marked with a question mark, opens a help system for the video surveillance system 100. The help system allows the user to access topics by clicking through a Table of Contents, searching using a Search option, or browsing through a comprehensive Index. Selection of the minimize button, marked with an underscore, collapses the GUI 600A to the task bar of the computing device 116. Selection of the full screen button, marked with a box, allows the user to view the GUI 600A without the GUI controls, i.e., showing only the video images. Selection of the exit button, marked with an “X,” closes the GUI 600A, but does not terminate the operation of the video surveillance system 100. In particular, clicking on the exit button displays a system message, for example, “The application will continue to run and record to the DVR. If you wish to exit the application, right-click on the icon in the system tray and select the Exit option.”
The main screen feature buttons 608 include three buttons to allow the user to change the operating mode of the video surveillance system 100. Selection of the setup button, the small leftmost button marked with an S, ends the live viewing mode 1000 and enters the system setup 808 as shown in
The viewing window 602A displays 1006 the video data feed coming from the camera 102A. A message, shown here in the upper left corner of the viewing window 602A, displays the name of the camera 102A being monitored, for example, “C1 Camera 1,” and the current status of the camera 102A, for example, “(Live).” If the camera 102A is instead recording, the message will display “(Rec)” in place of “(Live).” The location of the message text can be set by the user.
The multiple camera view selectors 606 include three buttons that allow the user to select a format for the viewing window 602A. The GUI for the live viewing mode 1000 can display 1006 video data from the cameras 102A-102N in three different formats-one, four, or six images tiled together. As cameras 102A-102N are discovered via the UPnP protocol, the live viewing mode 1000 automatically selects the layout for the viewing window 602A-602C that accommodates that number of cameras 102A-102N. The user can override this selection manually, using the multiple camera view selectors 606 to select another layout mode. Selection, for example, by clicking, of the leftmost button, marked with a single box, selects the GUI 600A of
Various additional viewing options exist in the four- or six-camera mosaic modes. Using the GUI 600B or the GUI 600C, the user can click on an image from any camera 102A-102N to expand it to fill the viewing window 602B or 602C. Clicking again returns to the multi-image mosaic for the viewing window 602B or 602C. This feature provides a simple mechanism to take a quick, more detailed look at the image from a particular camera 102A-102N. Additionally, the user can right-click on the image from any camera 102A-102N to bring up a context-sensitive menu of possible actions for that particular camera. For example, the user can print the image, eMail the image, look at image statistics, change the name of the camera, or change the camera number for the camera, among other options. In particular, the user can change the order of the cameras 102A-102N in the four- and six-screen view modes by choosing the “change camera order” option and selecting a new number, e.g., 1-6, for the particular camera. That particular camera will then be assigned to the image position associated with the new number, and the camera previously in that position will swap camera numbers with the camera just changed. The order of the cameras 102A-102N can also be changed via system setup 808 using, for example, the Camera tab of the setup program tab screens 506C of the graphical user interface 500C, by changing the camera order number as described above.
Returning to the GUI 600A of
The date/time stamp window 612 displays the current date and time. The format of the display is day of the week, calendar date, running time in the format hours:minutes:seconds, and AM or PM. The current time clock 614 displays the current time in an analog clock format.
The camera activation panel 616 includes a column of easily viewable status indicators for each camera 102A-102N. The GUI 600A shows six such columns for six cameras 102A-102F. In each column, an active camera indicator 618 is highlighted if the respective camera is active. A blue highlight indicates that the camera is active, but eMail alerts have been disabled for that camera. A red highlight indicates that the camera is active, and eMail alerts are set up and enabled. Left-clicking on the active camera indicator 618 toggles the enabling or disabling of eMail alerts for the particular camera. A representation of a green light emitting diode (LED) 620 is illuminated if the camera connection is good and the camera is sending video data. A representation of a red LED 622 is illuminated if the video data is currently being recorded to disk. An on/off button 624 allows the user to activate or deactivate the camera. As shown in
Record Mode
If the record mode 1100 is triggered in a panic mode 1104, for example, by user selection of the panic button in the main screen feature buttons 608 area of the GUI 600A of
The record mode 1100 allows the user to view and set the record mode and schedule 1108 for each camera 102A-102N. The video surveillance system 100 provides multiple modes for triggering recording of video files, including, for example, motion-based, continuous, and off. By default, the entire recording schedule for all cameras 102A-102N is initially set to the motion-based mode. Under motion-based recording, each time a particular camera 102A-102N sends a motion detection signal to the control system 112, a minimum of a few seconds, for example, five seconds, of video are recorded for that camera. The recording continues as long as motion is detected and for a small time, for example, five seconds, after motion is no longer detected. As discussed previously, in another embodiment, each of the cameras 102A-102N may only send video data to the computing device 116 over the dual use medium 110 for recording when there is actual motion detected, allowing the video surveillance system 100 to conserve bandwidth when there is no motion detected. Under continuous recording, video data from a camera 102A-102N is recorded during the time periods designated in the recording schedule for that camera, regardless of whether motion is detected or not. Under the off recording mode, no recording of video data from a camera 102A-102N will occur, even if motion is sensed by that camera.
Using the Recording Schedule tab screen of the graphical user interface 500A for performing system setup 808, described previously, the user can independently set the recording mode for each camera 102A-102N. Some cameras 102A-102N may be in motion-based recording mode, while others are in the off or continuous modes. Using the GUI 500A, the user can also independently set the recording schedules for each camera 102A-102N in the off or continuous recording modes by designating certain periods of time during a weekly schedule to be off or to be continuously recording.
The record mode 1100 allows the user to set motion detection zones 1110 for each camera 102A-102N for use in the motion-based recording mode. For each camera 102A-102N, the video surveillance system 100 initially defaults to having the entire camera image serve as an active zone. The user can further refine the motion detection system, however, by designating one or more “zones” for each camera 102A-102N in which motion detection is to be active. In particular, the user can use the Motion Detection tab screen of the GUI 500A of
As part of setting motion detection zones 1110 for each camera 102A-102N, the video surveillance system 100 may employ an automatic image quality system to automatically adjust video quality within the user defined motion detection zones. Video image parameters that may be adjusted include, for example, focus, contrast, brightness, color, exposure time, and other parameters. Thus, if a user specifies a motion detection zone in a camera 102N, a video-quality-enable property is attached to that motion detection zone that, if enabled, commands the camera 102N to focus on the video quality of that particular motion detection zone. The video-quality-enable feature may be enabled or disabled independently for each user designated motion detection zone of each of the cameras 102A-102N.
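A per-zone video-quality-enable property, as described above, might be modeled as follows. This is a hypothetical data model for illustration only; the zone geometry, field names, and helper function are assumptions not taken from the specification.

```python
# Hypothetical data model for user-designated motion detection zones,
# each carrying an independent video-quality-enable flag that tells the
# camera to prioritize image quality within that zone.
from dataclasses import dataclass

@dataclass
class MotionZone:
    x: int        # top-left corner of the zone, in pixels (assumed layout)
    y: int
    width: int
    height: int
    video_quality_enabled: bool = False  # per-zone quality adjustment flag

def quality_zones(zones):
    """Return the zones the camera should use when adjusting focus,
    contrast, brightness, color, and exposure."""
    return [z for z in zones if z.video_quality_enabled]
```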
As an example, a camera 102N may be installed in a fixed location to monitor a specific area that has lighting conditions that change over the course of the day and night, or with objects moving into and out of the area. Although the camera 102N may be designed to automatically adjust contrast, brightness, exposure, and color in response to the varying lighting conditions on a continuous basis, it defaults to seeking a specific average luminance in the overall picture. Seeking an overall average may be acceptable where the lighting conditions change uniformly throughout the camera image, but it is not necessarily useful where the lighting conditions vary in different regions of the camera image.
In an alternative embodiment, each particular user-designated motion detection zone can be assigned one or more specific attributes by the user. Each attribute designates that that motion detection zone is used, or not used, in a particular operation, such as focus, brightness, contrast, color adjustment, etc.
Returning to
A significant advantage of the background recording mode 1112 is the resultant low central processing unit (CPU) load. The background recording mode 1112 provides recording capability even while the computing device 116 is being used to perform other tasks. When the user is actively viewing video images, for example, in the live viewing mode 1000 or the search/playback mode 1200, the video rendering can consume a substantial portion of the CPU cycles, depending on the number of cameras 102A-102N being viewed, and the speed of the computing device 116. But when images are not being viewed, such as when the application is minimized or operating in the background recording mode 1112, the CPU usage drops to a very low level, leaving the computing device 116 free to perform other tasks. The video surveillance system 100, on the other hand, continues normal operation: the cameras 102A-102N respond to motion detection events and send video data, the control system 112 receives video data from the cameras 102A-102N, and the computing device 116 records 1114 video data to disk based on motion detection or on the recording schedule calendar.
The computing device 116 controls the recording 1114 of video data to disk according to the mode of each particular camera 102A-102N. Under the motion-based mode, when motion is detected in a particular motion detection zone of a camera 102A-102N, recording begins at the last Key frame, for example, 0-2 seconds before the motion detection event. This is possible through the use of the record queue 402. Under the continuous mode, recording begins at the last Key frame prior to the start time indicated in the recording schedule calendar, also accomplished through use of the record queue 402.
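The record queue 402 that makes pre-trigger recording possible can be sketched as a short rolling buffer of recent frames: when a trigger arrives, recording starts from the most recent key frame already held in the buffer. This sketch is illustrative only; the buffer length and frame representation are assumptions.

```python
# Sketch of a record queue: a rolling buffer of recent frames that allows
# recording to begin at the last key frame before a trigger event.
from collections import deque

class RecordQueue:
    def __init__(self, maxlen=60):
        # Each entry is (timestamp, is_key_frame, frame_data).
        self.frames = deque(maxlen=maxlen)

    def push(self, timestamp, is_key, data):
        self.frames.append((timestamp, is_key, data))

    def frames_from_last_key(self):
        """All buffered frames starting at the most recent key frame,
        giving the 0-2 seconds of pre-trigger video described above."""
        frames = list(self.frames)
        for i in range(len(frames) - 1, -1, -1):
            if frames[i][1]:
                return frames[i:]
        return frames  # no key frame buffered; fall back to everything
```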
The record mode 1100 optionally displays 1116 recording statistics. If the video surveillance system 100 is simultaneously operating in the record mode 1100 and also either the live viewing mode 1000 or the search/playback mode 1200, the user can view recording statistics by right-clicking in the viewing window, for example, viewing window 602A-602C, and selecting “Recording Statistics” to initiate display 1116 of the recording statistics. If the record mode 1100 is operating in the background recording mode 1112, however, no recording statistics are displayed. Recording statistics may include, for example, record start time, record trigger, recording duration, the number of bytes transferred to a particular file during recording, and/or the average bitrate, among other statistics.
The record mode 1100 also provides support for sending 1118 a notification or video data, for example, by email, to alternative destinations under certain events or conditions. As discussed above with respect to system setup 808 of
If the user wishes to change modes 1120, the record mode 1100 terminates, returning the user to the operating process 900 of
Those skilled in the art will recognize that any number of the steps of the method of
Search/Playback Mode
At its most basic, the search/playback mode 1200 includes processes necessary to format 1202 a viewing window, retrieve 1208 video data, and display 1212 that video data in the viewing window. Retrieval 1208 of the video data is facilitated through search mode calendar 1204 and search navigation 1206 modules. Display 1212 of the video data is controlled by a video playback navigation 1210 module. Each of these steps is described in greater detail with reference to
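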
The search/playback mode 1200 also optionally displays 1214 playback statistics. The user can view playback statistics by right-clicking in a viewing window, discussed below with reference to
The search/playback mode 1200 also provides support for sending 1216 video data, for example, by email, to alternative destinations under certain events or conditions. As discussed above with respect to system setup 808 of
If the user wishes to change modes 1218, the search/playback mode 1200 terminates, returning the user to the operating process 900 of
Those of skill in the art will appreciate that, with minor modifications, the steps of
The search/playback mode 1200 GUI 700 is straightforward and intuitive, with a minimum of unnecessary or advanced controls, and includes title bar application options 604, main screen feature buttons 708, a viewing window 702, multiple camera view selectors 606, a message window 610, a calendar icon 712, a calendar 710, a search navigation panel 716, a date/time stamp window 612, and a video playback navigation panel 714.
The title bar application options 604, the multiple camera view selectors 606, and the message window 610 were discussed previously with respect to the live viewing mode 1000 GUI 600A of
The main screen feature buttons 708, similar to the main screen feature buttons 608 of the live viewing mode 1000 GUI 600A of
The viewing window 702 displays the video data retrieved from the disk storage 406. A message in the upper left corner of the viewing window 702 displays the name of the camera 102A that captured the video data, for example, “C1 Camera 1,” and the current status of the system, for example, “(Search).”
The calendar icon 712 at the lower right corner of the message window 610 allows the user to open or close the calendar 710. The calendar 710, displayed in the message window 610, is one mechanism to simplify the potentially tedious process of reviewing recorded video segments, which may include many clips stored over a long period of time. The calendar 710 may be implemented as a drop-down menu to select from the various months of the year and includes arrow buttons at the top of the calendar 710 to navigate to previous or following months. The current date is circled in red. Any dates with recorded video data available are shown bolded. When a particular date is selected, for example, by clicking on it, the playback time is set to the start of the first video clip for that date.
The search navigation panel 716 shows a search timeline displaying the time span of each recorded video clip, for all cameras 102A-102N, at a glance, for the particular date selected on the calendar 710. As an example, the search navigation panel 716 shows available video clips from six cameras as bolded line segments. The user can change the time scale across the top of the search navigation panel 716, with the scale varying from five seconds per division, to four hours per division. This permits the user to see a large time span at a glance, for example, more than a full day, and then to focus in on a particular window of time. With the time line magnified, the user can easily see the start and stop of each recorded video segment. Clicking anywhere in the search navigation panel 716 selects a particular recorded video segment or point in time.
The user can also preview a particular recorded video segment by simply clicking on the timeline to set the playback time, and then dragging the timeline cursor to review the recorded video segment. This provides a mechanism to drag the cursor through a video clip quickly to see what it contains, without actually using the video playback navigation panel 714. Dragging the cursor to the left edge or the right edge of the timeline scale slews the time line in the corresponding direction, making the timeline a virtual endless timeline, with a window of that timeline shown on the search navigation panel 716.
Right clicking in the search navigation panel 716 brings up a context-sensitive menu of possible actions for the recorded video segment. For example, the user can protect or unprotect the video segment, save it under a different filename, delete the video segment, email the video segment or a particular frame, or print the video frame, among other options.
As discussed above, clicking on a particular recorded video segment in the search navigation panel 716 selects that video segment. The date/time stamp window 612 displays the date and time at which the segment of recorded video was captured. The format of the display is day of the week, calendar date, running time in the format hours:minutes:seconds, and AM or PM.
As an intuitive aid to communicate the recording time at a glance, the GUI 700 additionally provides an analog clock 718. To enhance the analog time readout, and to resolve the ambiguity resulting from the 12-hour clock face, the clock face changes color for day or night recording times. Video captured between 6 AM and 6 PM results in a light-colored clock face, while video captured between 6 PM and 6 AM results in a dark clock face. The day/night analog clock 718 provides an enhanced human-factors interface to the user while searching through recorded video segments.
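The day/night clock-face rule above reduces to a simple mapping from the capture hour to a face color. A minimal sketch, with the function name and return values assumed for illustration:

```python
# Sketch of the day/night clock-face rule: capture times between 6 AM and
# 6 PM get a light face; times between 6 PM and 6 AM get a dark face.
def clock_face_color(hour):
    """Return 'light' for daytime capture hours (6-17), else 'dark'."""
    return "light" if 6 <= hour < 18 else "dark"
```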
The video playback navigation panel 714 enhances the video search functionality with an easy-to-use shuttle control interface, which controls display 1212 of the recorded video data. The video playback navigation panel 714 allows standard play/pause functionality. In one embodiment, the play button toggles between showing play and pause. The video segment can be played in either the forward or reverse direction. In addition, the panel includes a single-frame-advance button, as well as buttons to advance to the next video clip, to return to the start of the current clip, or to return to the start of the previous clip.
Of particular interest is the ability to vary the playback speed of a recorded video segment. The standard Microsoft® DirectX Reader used for viewing Media 9 video streams can only play back video streams at a rate of 1.0×, i.e., normal viewing speed. To overcome this shortcoming, the video surveillance system 100 employs a custom stream reader that provides support for playback speeds slower and faster than 1.0×. Although the custom stream reader can support a variety of rates, in one embodiment, the video playback navigation panel 714 of the GUI 700 provides a range of playback speeds from ⅛× to 8× of normal speed. The playback speed can be varied by rotating, for example, by dragging, the shuttle wheel of the video playback navigation panel 714 clockwise or counter-clockwise.
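The shuttle wheel's ⅛× to 8× range could be mapped to discrete speed steps as sketched below. The discrete step table and clamping behavior are assumptions for illustration; the specification only states the endpoint speeds.

```python
# Hypothetical mapping from a shuttle-wheel step index to a playback rate
# spanning the 1/8x-8x range described above. The specific steps are an
# assumption; only the range endpoints come from the text.
PLAYBACK_SPEEDS = [0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]

def speed_for_step(step):
    """Clamp a shuttle step index into the supported speed table."""
    step = max(0, min(step, len(PLAYBACK_SPEEDS) - 1))
    return PLAYBACK_SPEEDS[step]
```

Rotating the wheel clockwise would increment the step index; counter-clockwise would decrement it.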
Of additional interest is the ability of the video surveillance system 100 to maintain temporal synchronization among multiple cameras 102A-102N during playback of recorded video segments. If standard Microsoft® Readers were used to view the video data from the multiple cameras, it would be difficult to ensure that the several readers maintained temporal synchronization over long periods of playback. One embodiment of the video surveillance system 100, however, includes six cameras, and the viewing window 702 of the GUI 700 for the search/playback mode 1200 includes a six-image mosaic format, similar to the format of the viewing window 602C of the GUI 600C of
Video Surveillance Camera
The camera 102N can be placed on any flat and stable surface, such as a window sill, bookshelf, or on top of a bureau or desk or other item of furniture. Alternatively, a suction cup (not shown) can be attached to the back or front of the camera 102N to facilitate mounting the camera 102N to, for example, a window. If the suction cup is mounted to the back of the camera 102N, the camera's lens 1306 can be directed into the interior of the building or house. Mounting the suction cup on the front of the camera 102N allows directing the camera lens 1306 to monitor a zone outside of the building through a window, for example. Alternatively, a wall or ceiling mount may be provided for the camera 102N.
A video sensor 1414, for example, a CMOS or CCD device, captures video frames from a camera optics system 1416, including the lens 1306, for processing by a main camera board 1412. The video sensor 1414 and the camera optics system 1416 form the basis for the input video device 222 of
The main camera board 1412 includes, for example, the processor 202B and the main memory 204B, which are used to control operation of the camera 102N via the modules in the main memory 204B shown in
The transceiver 104N couples the camera 102N to the dual use medium 110. In the embodiment of
The camera 102N also includes a motion detector 1408, which indicates each frame in which there is motion. An infrared emitter 1410 facilitates infrared reflection operation. A set of status LEDs 1420 shows the status of the camera 102N, for example, whether the connection 108N to the dual use medium 110 is active, whether the camera 102N is functioning, and whether the camera 102N is recording. For example, if the connection 108N is active, green and yellow LEDs are on, and if the camera 102N is recording, a red LED is on.
The camera 102N also optionally includes a back-up battery pack 1404 to provide power to the camera 102N in the event of failure of the 120V power supply. In particular, the back-up battery pack 1404 provides the basis for a back-up system for fault tolerance. If the 120V power to the camera 102N fails or is turned off, the camera 102N hibernates, but the motion detector 1408 continues to operate. A motion-based event triggers the camera 102N to power up, using the back-up battery pack 1404, and to transmit video to the control system 112, either wirelessly via the wireless modem 1418 or through power line communication via the power line adapter 1422 of the transceiver 104N. Alternatively, the data storage 206B, for example, a FLASH memory device, may be used to store, and later download, video segments recorded during power-down conditions. This ensures reliable surveillance under all conditions.
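The fault-tolerance behavior above amounts to a small power state machine: hibernate on mains failure, wake on battery when motion is detected. This is an illustrative sketch only; the state names and method names are not taken from the specification.

```python
# Sketch of the back-up battery fault-tolerance logic: on mains failure the
# camera hibernates (motion detector still active); a motion event wakes it
# on battery power to transmit or locally buffer video.
class CameraPowerManager:
    def __init__(self):
        self.state = "normal"

    def on_mains_lost(self):
        self.state = "hibernate"  # motion detector continues to operate

    def on_mains_restored(self):
        self.state = "normal"

    def on_motion(self):
        if self.state == "hibernate":
            # Power up on battery and transmit video (wirelessly or via
            # power line), or buffer to local FLASH storage.
            self.state = "battery_recording"
        return self.state
```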
Those of skill in the art will appreciate that alternative embodiments of the camera 102N may not include all of the components 1400 shown in
The video surveillance system of the present invention preserves the advantages of traditional video surveillance while overcoming many of its deficiencies by providing a low-cost, user-friendly, multi-functional video surveillance system that is responsive to real-time viewing requirements, yet requires low CPU processing resources, thereby preventing interference with other computer tasks.
Upon reading this disclosure, those of skill in the art will appreciate additional alternative structural and functional designs for systems and processes for video surveillance through the disclosed principles of the present invention. Thus, while particular embodiments and applications of the present invention have been illustrated and described, the invention is not limited to the precise construction and components disclosed herein and various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation, and details of the methods and apparatus of the present invention disclosed herein without departing from the spirit and scope of the invention as defined in the appended claims.
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 60/641,392, titled “Video Surveillance System” to Thomas R. Rohlfing, et al., and filed Jan. 4, 2005; and to U.S. Provisional Patent Application No. 60/661,305, titled “Security Camera With Adaptable Connector For Coupling To Track Lighting And Back-Up System For Fault Tolerance” to Andrew Hartsfield, et al., filed Mar. 10, 2005; and to U.S. Provisional Patent Application No. 60/681,003, titled “Modular Design For A Security System” to Andrew Hartsfield, et al., filed May 12, 2005, the contents of each are herein incorporated by reference.
Number | Date | Country
---|---|---
60641392 | Jan 2005 | US
60661305 | Mar 2005 | US
60681003 | May 2005 | US