1. Field of the Invention
The present invention relates generally to surveillance systems and more particularly to an image sensor, a motion image sensor, and improved, cost-effective image analysis and motion image analysis methods.
2. Description of the Prior Art
Generally, in many imaging systems implemented for detecting moving objects in an image field, there is a cost associated with sampling parts of the image. This cost includes, but is not limited to, the computational effort of analyzing each sample and the power consumed to illuminate the scene so that a usable sample can be taken.
In any moving image, there are typically areas of interest and areas that are not of interest. Traditional imaging systems expend the same cost to sample every area of the image. When the moving image comes to be stored on a non-volatile medium, it will typically be compressed. Compression reduces the amount of data to be stored, but it does not remove the cost of capturing the image in the first place.
Jonas Nilsson, in the work entitled "Visual Landmark Selection and Recognition for Autonomous Unmanned Aerial Vehicle Navigation" (Master's degree project, The Royal Institute of Technology, Sweden, 2005; hereinafter Nilsson), discloses image analysis methods aimed at enhancing the performance of a navigation system for an autonomous unmanned aerial vehicle. Nilsson investigates algorithms for the selection, tracking and recognition of landmarks based on the use of local scale-invariant features. For example, Nilsson discloses an affine tracker algorithm combined with Kalman filters to track an object in an image plane. Nilsson further discloses a landmark recognition algorithm that is capable of recognizing landmarks despite the presence of noise and significant variation in scale and rotation. The performance of the landmark recognition algorithm allows for a substantially reduced sampling rate, but the uncertainty of the landmark location results in larger image search areas and therefore an increase in the computational burden. This means the landmark recognition algorithm is more suitable for stable (i.e. unchanging) image regions than for unstable (i.e. changing) image regions.
Therefore, it would be highly desirable to provide a system and method for minimizing the cost in monitoring a visual space (i.e. everything that a camera sees).
The present invention is directed to a system and method for minimizing image sampling costs in monitoring an area, e.g. visual space. The system and method utilize motion detection and estimation techniques within a feedback loop to control the subsequent sampling method.
In the method of the invention, the following steps are performed:
a. Obtaining image frames and feeding the image frames to a computer to be analyzed.
b. Analyzing the frames on the computer with image analysis software to identify an area of interest and areas that might become of interest. This may be done by comparing a number of frames against a threshold value or against a reference frame. These could be, for example, areas within the visual space where movement has been detected.
c. Providing information from the computer to the imaging system that allows it to optimize how future frames are sampled so as to reduce the cost.
d. Repeating steps a to c (a minimal sketch of this loop follows the list).
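By way of illustration only, the feedback loop of steps a to d might be sketched as follows in Python; the `camera` object with `read()` and `set_parameters()` methods is a hypothetical stand-in for whatever imaging system is used, and the threshold value is arbitrary:

```python
import numpy as np

def monitoring_loop(camera, threshold=25.0):
    """Steps a-d of the method: capture, analyze, feed back, repeat.

    `camera` is a hypothetical device object exposing read() and
    set_parameters(); it stands in for the actual imaging system.
    """
    reference = None
    while True:
        # Step a: obtain an image frame and pass it to the analysis code.
        frame = np.asarray(camera.read(), dtype=np.float32)
        if reference is None:
            reference = frame
            continue

        # Step b: identify areas of interest by comparing the frame
        # against a reference frame and a configurable threshold.
        delta = np.abs(frame - reference)
        interest_mask = delta > threshold

        # Step c: feed information back so future frames can be sampled
        # at lower cost (e.g. only in the regions flagged as interesting).
        camera.set_parameters(regions_of_interest=interest_mask)
        reference = frame
        # Step d: repeat from step a.
```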
In one embodiment, the present invention includes a system to reduce the cost of sampling a moving image.
In another embodiment, the present invention includes a method for minimizing the costs associated with the use of an image sensor device.
Both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
The accompanying drawings are included to provide a further understanding of the present invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
Images are captured by an image capture device 110 at step 120 and passed on to a central processing device (not shown). In a surveillance system, for instance, this may be a continuous datastream of images; where other technology such as motion detectors is deployed, image capture may instead be disabled and (re)enabled as appropriate. In the latter case, when image capture is (re)enabled, a set of images may first need to be collected at startup to bootstrap the process.
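A hedged sketch of this capture stage, assuming a hypothetical `motion_detector` trigger, `camera` object and `image_buffer` (the actual interfaces depend on the deployed hardware):

```python
def capture_stage(camera, motion_detector, image_buffer, bootstrap_count=5):
    """Step 120: capture images and pass them on for processing.

    When an external motion detector gates capture, collect a few frames
    after (re)enabling so that later frame-to-frame comparisons have history.
    """
    if not motion_detector.triggered():
        camera.disable()      # no activity: suspend capture to save cost
        return

    if not camera.enabled():
        camera.enable()
        # Bootstrap: gather an initial set of frames before analysis starts.
        for _ in range(bootstrap_count):
            image_buffer.append(camera.read())

    image_buffer.append(camera.read())
```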
As images are captured and passed to the central processing device, they are stored in an image buffer at step 130 so that they can be compared with each other, as detailed below, by the image processing modules 140 to 160. At module 140, object priority and sensitivity are established; at module 150, frame-to-frame changes within a datastream (i.e., the Nth frame compared with the (N−1)th frame) are checked; and at module 160, motion is identified and/or predicted.
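One possible realization of the image buffer at step 130, using only the Python standard library and NumPy, is a bounded deque that module 150 can query for the Nth and (N−1)th frames; the capacity chosen here is illustrative:

```python
from collections import deque
import numpy as np

class ImageBuffer:
    """Step 130: hold the most recent frames for frame-to-frame comparison."""

    def __init__(self, capacity=16):
        # Oldest frames are discarded automatically once capacity is reached.
        self.frames = deque(maxlen=capacity)

    def append(self, frame):
        self.frames.append(np.asarray(frame, dtype=np.float32))

    def latest_pair(self):
        """Return (frame N-1, frame N), or None until two frames exist."""
        if len(self.frames) < 2:
            return None
        return self.frames[-2], self.frames[-1]
```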
Any surveillance system using visual imaging will be able to establish what the background is and to maintain variants of this background to accommodate changes such as those caused by weather and time of day. Using this background information, it is possible to establish, through image subtraction, areas in the image that need more attention because they show priority additions to the image.
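A minimal sketch of this image-subtraction step, assuming the background variants (for example one per time of day or weather condition) are already available as NumPy arrays of the same size as the frame; the threshold is an illustrative value:

```python
import numpy as np

def attention_mask(frame, backgrounds, threshold=30.0):
    """Flag pixels that differ markedly from every stored background variant.

    `backgrounds` is a list of background images covering conditions such
    as weather and time of day; a pixel needs more attention only if it
    differs from all of them (image subtraction against each variant).
    """
    frame = np.asarray(frame, dtype=np.float32)
    mask = np.ones(frame.shape[:2], dtype=bool)
    for bg in backgrounds:
        diff = np.abs(frame - np.asarray(bg, dtype=np.float32))
        if diff.ndim == 3:          # collapse colour channels to one value
            diff = diff.mean(axis=-1)
        mask &= diff > threshold
    return mask
```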
More particularly, the image is automatically reviewed at step 140 to identify whether there are animated objects in the frame, or objects which may be particularly sensitive to light or radiation of any type. This may be performed using one or more motion detection techniques, including but not limited to standard edge detection and texture analysis techniques. The edge detection establishes which parts of the visual space belong to the same object. Texture detectors are used to establish the texture of the object (such as smooth, rough, soft, or hard). Based on the edge detection and texture analysis, the object is identified in rough terms as, for example, a bird, a person, a car, etc. Based on this rough identification, a determination is made as to whether the object is animate or inanimate. A decision on whether the sensitivity of surveillance needs to be increased is then made based on the animate/inanimate distinction (e.g., is more light necessary? can radiation be applied?). At this stage, information is retained and used in conjunction with the next stages to modify default parameter reset actions 170.
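By way of illustration, step 140 could combine a standard edge detector with a simple texture statistic as sketched below (assuming OpenCV and an 8-bit grayscale frame); the thresholds and rough labels are placeholders, not the classifier the specification contemplates:

```python
import cv2
import numpy as np

def analyze_object(gray_frame):
    """Step 140: rough object characterization from edges and texture."""
    # Edge detection groups the pixels that belong to the same object outline.
    edges = cv2.Canny(gray_frame, 50, 150)
    edge_density = edges.mean() / 255.0

    # A simple texture measure: variance of the Laplacian. Low values
    # suggest a smooth surface, high values a rough or detailed one.
    texture = cv2.Laplacian(gray_frame, cv2.CV_64F).var()

    # Illustrative rough identification (placeholder thresholds).
    if edge_density > 0.10 and texture > 500:
        label, animate = "person-or-animal", True   # detailed, irregular outline
    elif edge_density > 0.05:
        label, animate = "vehicle", False           # strong but regular edges
    else:
        label, animate = "background", False

    # Animate objects may call for more light, whereas inanimate objects may
    # tolerate radiation, so the surveillance sensitivity is set accordingly.
    return label, animate
```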
At step 150, a number of frames (N, N+1, ..., N+M) are used to establish whether there are any frame-to-frame changes within a datastream by comparing the frames captured during current monitoring. At the simplest level, this may be a simple delta calculation averaged over the M frames: values above a configurable threshold are identified as containing significant change. The process is then repeated on localized areas of the images, using one or more motion detection techniques, to localize the part or parts of the image detected as changing. The amounts of change, as well as their locations, are stored and passed on to modify the parameter reset at step 170.
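A minimal sketch of this comparison, assuming the buffered frames are grayscale NumPy arrays of equal size; the block size and threshold below are illustrative configuration values:

```python
import numpy as np

def detect_changes(frames, threshold=20.0, block=32):
    """Step 150: average the frame-to-frame delta over M frames, then localize.

    Returns (changed, blocks): `changed` says whether the averaged delta
    exceeds the threshold anywhere, and `blocks` lists the (row, col) origins
    of the block-sized regions responsible for the change.
    """
    deltas = [np.abs(frames[i + 1].astype(np.float32) -
                     frames[i].astype(np.float32))
              for i in range(len(frames) - 1)]
    mean_delta = np.mean(deltas, axis=0)       # delta averaged over the M frames
    changed = mean_delta.max() > threshold

    blocks = []
    if changed:
        h, w = mean_delta.shape[:2]
        for r in range(0, h, block):           # repeat the test on localized areas
            for c in range(0, w, block):
                if mean_delta[r:r + block, c:c + block].mean() > threshold:
                    blocks.append((r, c))
    return changed, blocks
```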
Finally, at step 160, movement is determined and flagged with speed and direction. This can be done by looking at the localized differences determined at step 150 where general shape integrity has been maintained between images. Comparing successive images and observing the progression of a similar shape across them indicates the speed and direction of object movement.
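One simple way to obtain speed and direction, sketched below, is to track the centroid of the changed region between successive frames; the frame interval and the centroid-of-mask choice are assumptions for illustration:

```python
import numpy as np

def estimate_motion(mask_prev, mask_curr, frame_interval_s=1.0 / 25.0):
    """Step 160: speed and direction from the movement of a tracked shape.

    `mask_prev` and `mask_curr` are boolean masks of the same object in
    frames N-1 and N (e.g. taken from the localized changes of step 150).
    """
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return None
        return np.array([xs.mean(), ys.mean()])

    p0, p1 = centroid(mask_prev), centroid(mask_curr)
    if p0 is None or p1 is None:
        return None                              # shape lost: no estimate

    displacement = p1 - p0                       # pixels moved between frames
    speed = np.linalg.norm(displacement) / frame_interval_s   # pixels per second
    direction = np.degrees(np.arctan2(displacement[1], displacement[0]))
    return speed, direction
```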
From these analyses, characteristics concerning the nature of objects in the image are determined at module 140; how and where the images are changing is determined at module 150; and the rate and direction of the change are determined at module 160. Taking these together, the process then determines how subsequent image capture should be adjusted.
Factoring these determinations together enables a re-configuring and/or resetting of the parameters 170 at the image capture device 110 before image capture 120 continues.
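How the parameter reset at 170 might look in code is sketched below; the parameter names (`roi`, `frame_rate`, `illumination`) are assumptions for illustration rather than an actual camera interface:

```python
def reset_parameters(camera, animate, change_blocks, speed):
    """Step 170: reconfigure image capture from the results of modules 140-160.

    All parameter names here are illustrative; a real device would expose its
    own controls for region of interest, frame rate and lighting.
    """
    if not change_blocks:
        # Nothing of interest: sample sparsely to minimize cost.
        camera.set_parameters(roi=None, frame_rate=1, illumination="off")
        return

    # Concentrate sampling on the changing regions, raise the frame rate in
    # proportion to the observed speed, and light the scene only when an
    # animate object makes a clearer image worthwhile.
    camera.set_parameters(
        roi=change_blocks,
        frame_rate=min(30, max(5, int(speed / 10))),
        illumination="on" if animate else "off",
    )
```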
In an example implementation, the present invention is used as a security camera system: a camera monitors a visual space for security purposes. During the night the image is not so clear, so an infrared lamp is provided to illuminate the visual space. It is expensive to illuminate the visual space all night. The present invention can be used to monitor a dark image for movement. The system can be programmed to ignore certain kinds of movement, for instance traffic on a distant road, and to detect abnormal movements. When an abnormal movement is detected, the infrared light is automatically switched on. If the light beam can be steered, it is pointed at the area of movement. This provides a clearer image of the area where the movement occurred or is occurring. The system may additionally alert a security guard to the availability of an image of probable significance. Should the movement simply have come from an animal, for example, then a picture of the animal is recorded. Should the image be of an unidentified intruder, then the image recorded is well lit and will be of use in identifying the intruder. However, the present invention achieves this without lighting the whole area for the entire night.
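The night-time behaviour just described could be expressed roughly as follows; the `ignore_mask`, `infrared_lamp` and `alert_fn` names are hypothetical stand-ins for whatever hardware and alerting path are present:

```python
def handle_night_movement(change_blocks, ignore_mask, infrared_lamp, alert_fn):
    """Switch the infrared lamp on (and aim it) only for abnormal movement."""
    # Discard movement inside the ignore region, e.g. traffic on a distant road.
    abnormal = [(r, c) for (r, c) in change_blocks if not ignore_mask[r, c]]
    if not abnormal:
        infrared_lamp.off()          # nothing abnormal: keep the lamp dark
        return

    infrared_lamp.on()
    if infrared_lamp.steerable:
        # Point a steerable beam at the centre of the abnormal movement.
        rows = [r for r, _ in abnormal]
        cols = [c for _, c in abnormal]
        infrared_lamp.aim(sum(rows) / len(rows), sum(cols) / len(cols))

    # Optionally alert a security guard that a usable image is available.
    alert_fn("Abnormal movement detected; illuminated image recorded.")
```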
Although the embodiments of the present invention have been described in detail, it should be understood that various changes and substitutions can be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Variations described for the present invention can be realized in any combination desirable for each particular application. Thus, particular limitations and/or embodiment enhancements described herein, which may have particular advantages for a particular application, need not be used for all applications. Also, not all limitations need be implemented in methods, systems and/or apparatus including one or more concepts of the present invention.
The present invention can be realized in hardware, software, or a combination of hardware and software. A typical combination of hardware and software could be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein. The present invention can also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which—when loaded into a computer system—is able to carry out these methods.
Computer program means or computer program in the present context include any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after conversion to another language, code or notation, and/or reproduction in a different material form.
Thus the invention includes an article of manufacture which comprises a computer usable medium having computer readable program code means embodied therein for causing a function described above. The computer readable program code means in the article of manufacture comprises computer readable program code means for causing a computer to effect the steps of a method of this invention. Similarly, the present invention may be implemented as a computer program product comprising a computer usable medium having computer readable program code means embodied therein for causing a function described above. The computer readable program code means in the computer program product comprises computer readable program code means for causing a computer to effect one or more functions of this invention. Furthermore, the present invention may be implemented as a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for causing one or more functions of this invention.
It is noted that the foregoing has outlined some of the more pertinent objects and embodiments of the present invention. This invention may be used for many applications. Thus, although the description is made for particular arrangements and methods, the intent and concept of the invention is suitable and applicable to other arrangements and applications. It will be clear to those skilled in the art that modifications to the disclosed embodiments can be effected without departing from the spirit and scope of the invention. The described embodiments ought to be construed to be merely illustrative of some of the more prominent features and applications of the invention. Other beneficial results can be realized by applying the disclosed invention in a different manner or modifying the invention in ways known to those familiar with the art.