1. Field of the Invention
The present invention relates to a surveillance method and a camera system, and more particularly, to a surveillance method and a camera system capable of monitoring a full view of an environment and providing a high resolution image of a part of the environment.
2. Description of the Prior Art
A surveillance system is extensively applied to public places, such as train stations, supermarkets, streets, etc. A fisheye camera or a pan-tilt-zoom (PTZ) camera is applied in the conventional surveillance system. The fisheye camera is able to capture a wide-angle (wide-range) image of an environment. The PTZ camera can be panned, tilted and zoomed in/out to capture a high resolution image of a narrow range of the environment. However, a capturing orientation of the fisheye camera is fixed and a resolution of the fisheye camera is relatively low, so it is difficult for the fisheye camera to provide a clear vision of an object of interest in the environment. In addition, a field of view of the PTZ camera is narrow compared to that of the fisheye camera, and thus the object of interest is often beyond the field of view of the PTZ camera. Therefore, it is necessary to improve the prior art.
It is therefore a primary objective of the present invention to provide a surveillance method and a camera system capable of monitoring a full view of an environment and providing a high resolution image of a part of the environment, to improve over disadvantages of the prior art.
An embodiment of the present invention discloses a surveillance method, utilized in a camera system, the camera system comprising a display device, a controller, a first camera disposed fixedly on a base of the camera system and constantly facing toward a first direction, and at least a second camera disposed on the base and controlled by the controller to rotate around the first camera, the surveillance method comprising the display device displaying a wide-angle image captured by the first camera; the controller receiving at least a directional instruction corresponding to at least a specific part of the wide-angle image; and the controller generating a plurality of control signals to steer the at least a second camera toward at least a second direction according to the at least a directional instruction.
An embodiment of the present invention further discloses a camera system comprising a base; a first camera, disposed on the base, constantly facing toward a first direction, and configured to capture a wide-angle image; at least a second camera, disposed on the base, adjustably facing toward at least a second direction, and controlled to rotate around the first camera; a display device, coupled to the first camera and the at least a second camera, configured to display the wide-angle image; and a controller, coupled to the display device, the first camera and the at least a second camera, configured to generate a plurality of control signals to steer the at least a second camera toward at least a second direction according to at least a directional instruction.
An embodiment of the present invention further discloses a surveillance method, utilized in a camera system, the camera system comprising a controller, a first camera disposed fixedly on a base of the camera system and constantly facing toward a first direction, and at least a second camera disposed on the base and controlled to rotate around the first camera, the surveillance method comprising the first camera capturing a wide-angle image; the controller identifying at least an image object in the wide-angle image, where the at least an image object corresponds to at least a moving object in an environment; and the controller generating a plurality of control signals to steer the at least a second camera such that the at least a moving object is within at least a field of view of the at least a second camera.
An embodiment of the present invention further discloses a camera system, comprising a base; a first camera, disposed on the base, constantly facing toward a first direction, and configured to capture a wide-angle image; at least a second camera, disposed on the base, and controlled to rotate around the first camera; and a controller, coupled to the first camera and the at least a second camera, configured to identify at least an image object in the wide-angle image and generate a plurality of control signals to steer the at least a second camera such that at least a moving object is within at least a field of view of the at least a second camera; wherein the at least an image object corresponds to the at least a moving object in an environment.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
Please refer to
Furthermore, the base 12 includes a central portion 24 and a rotary portion 26. The central portion 24 may be an upright structure, and the rotary portion 26 may be an annular holder or an arc holder that moves along a track encircling the upright structure. The first camera 14 is disposed on the central portion 24 without rotary/shift movement. The second camera 16 is disposed on the rotary portion 26, and may revolve around the first camera 14 as the rotary portion 26 rotates about the central portion 24. Preferably, the rotary portion 26 is a tray with a central hole 261, and the central portion 24 passes through the central hole 261 and is encircled by the rotary portion 26.
In an embodiment, the first camera 14 captures the wide-angle image I1, and the display device 22 displays the wide-angle image I1 captured by the first camera 14. After the user perceives the wide-angle image I1 through the display device 22, if the user is interested in a specific part of the wide-angle image I1 corresponding to an object of interest in the environment, the user may input a directional instruction to the camera system 10. The controller 18 may generate a pan signal and a tilt signal to the rotating mechanism 20 and the second camera 16, such that the second camera 16 is steered to capture the interested image I2 of the object of interest in the environment.
Operations of the camera system 10 steering the second camera 16 to capture the interested image I2 of the object of interest in the environment may be referred to
Step 400: Start.
Step 402: The display device 22 displays the wide-angle image I1 captured by the first camera 14.
Step 404: The controller 18 receives a directional instruction corresponding to a specific point within the wide-angle image I1.
Step 406: The controller 18 obtains a Cartesian coordinate (x1,y1) of the specific point within the wide-angle image I1.
Step 408: The controller 18 transfers the Cartesian coordinate (x1,y1) into a polar coordinate (r1,θ1).
Step 410: The controller 18 generates a pan signal PS and a tilt signal TS according to the polar coordinate (r1,θ1) to steer the second camera 16 toward the second direction D2.
Step 412: End.
According to the surveillance process 40, the camera system 10 is able to steer the second camera 16 so as to capture the interested image I2 of the object of interest in the environment according to the directional instruction. Specifically, in Step 402, the display device 22 displays the wide-angle image I1 captured by the first camera 14, where the wide-angle image I1 may be a circular image of the environment. In Step 404, the directional instruction, inputted by the user, may be a mouse click command pointing at the specific point within the wide-angle image I1 on the display device 22, where the mouse click command is inputted by the user via a mouse coupled to the display device 22. The directional instruction may also be a touch command pointing at the specific point within the wide-angle image I1, where the touch command is inputted via a finger of the user, if the display device 22 is a touch panel with touch sensing capability.
After the controller 18 receives the directional instruction, in Step 406 and Step 408, the controller 18 obtains the Cartesian coordinate (x1,y1) of the specific point within the wide-angle image I1 and transfers the Cartesian coordinate (x1,y1) into the polar coordinate (r1,θ1) by computing r1=√(x1²+y1²) and θ1=tan⁻¹(x1/y1).
In Step 410, the controller 18 generates the pan signal PS and the tilt signal TS according to the polar coordinate (r1,θ1) to steer the second camera 16 toward the second direction D2. The pan signal PS represents an angle by which the second camera 16 should be rotated with respect to the central portion 24. The pan signal PS may be generated by computing PS=θ1+θ0, where θ0 is a default value. The tilt signal TS represents an angle between the first direction D1 and the second direction D2 in a vertical plane. The tilt signal TS may be determined by r1 of the polar coordinate (r1,θ1) and a distortion curve. The distortion curve represents an amount of distortion caused by the wide-angle lens, and an exemplary distortion curve is illustrated in
After the pan signal PS and the tilt signal TS are generated, the pan signal PS may be delivered to the rotating mechanism 20 and the tilt signal TS may be delivered to the second camera 16, such that the second camera 16 is steered toward the second direction D2 to capture the interested image I2 of the object of interest in the environment.
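The computation of Steps 406 through 410 can be sketched as follows. The function names, the use of a quadrant-aware arctangent, the default offset θ0, and the equidistant fisheye model standing in for the lens-specific distortion curve are illustrative assumptions, not part of the disclosure:

```python
import math

def tilt_from_distortion(r1, image_radius):
    # Stand-in for the lens-specific distortion curve: an assumed
    # equidistant fisheye model in which the tilt angle grows linearly
    # with radial distance r1, reaching 90 degrees at the image edge.
    return 90.0 * (r1 / image_radius)

def pan_tilt_signals(x1, y1, image_radius, theta0=0.0):
    """Convert a clicked point (x1, y1), measured from the center of the
    wide-angle image I1, into a pan signal PS and a tilt signal TS
    (both in degrees)."""
    r1 = math.hypot(x1, y1)                    # r1 = sqrt(x1^2 + y1^2)
    theta1 = math.degrees(math.atan2(x1, y1))  # quadrant-aware tan^-1(x1/y1)
    ps = theta1 + theta0                       # PS = theta1 + theta0
    ts = tilt_from_distortion(r1, image_radius)
    return ps, ts

# A click on the diagonal, halfway out in a 200-pixel-radius circular image
ps, ts = pan_tilt_signals(100.0, 100.0, image_radius=200.0)
```

The pan signal here is delivered to the rotating mechanism 20 and the tilt signal to the second camera 16, matching the signal routing described above.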
Notably, the user is not limited to the mouse click command or the touch command pointing at the specific point within the wide-angle image I1. The user may select a specific rectangle via the mouse within the wide-angle image I1 displayed on the display device 22. The controller 18 may interpret a relative location of the specific rectangle within the wide-angle image I1 as the directional instruction. Meanwhile, the controller 18 may also interpret a size of the specific rectangle as a zooming instruction. According to the zooming instruction, the controller 18 may generate a zoom signal for the second camera 16. The second camera 16 may adjust a focal length thereof, such that a field of view (FOV) of the second camera 16 corresponds to the specific rectangle within the wide-angle image I1, i.e., the interested image I2 captured by the second camera 16 represents a high resolution image corresponding to the specific rectangle.
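A minimal sketch of interpreting the specific rectangle as both a directional instruction and a zooming instruction; the (x, y, w, h) rectangle representation and the zoom-factor formula are assumptions for illustration:

```python
def rectangle_instructions(rect, image_width, image_height):
    """Interpret a user-selected rectangle (x, y, w, h) within the
    wide-angle image: the rectangle's center gives the directional
    instruction, and its size gives the zooming instruction."""
    x, y, w, h = rect
    # Directional instruction: offset of the rectangle's center from
    # the center of the wide-angle image.
    cx = x + w / 2.0 - image_width / 2.0
    cy = y + h / 2.0 - image_height / 2.0
    # Zooming instruction: ratio of the full image extent to the
    # rectangle extent, so a smaller rectangle requests a longer
    # focal length (larger zoom factor).
    zoom = min(image_width / w, image_height / h)
    return (cx, cy), zoom

# A 100x50 rectangle selected in an 800x600 wide-angle image
direction, zoom = rectangle_instructions((150, 100, 100, 50), 800, 600)
```

The returned direction would then feed the same pan/tilt path as a point click, while the zoom factor drives the zoom signal.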
Notably, the user is not limited to selecting the specific rectangle so that the controller 18 interprets the size of the specific rectangle as the zooming instruction and the second camera 16 zooms in or zooms out accordingly. The user may input a mouse scrolling up/down command via a scrolling wheel of the mouse as the zooming instruction. The user may also input an extending gesture or a shrinking gesture on the display device 22, if the display device 22 has touch sensing capability. Operations of the camera system 10 controlling the second camera 16 to zoom in or zoom out can be summarized as a zooming process 60, which is illustrated in
Step 600: Start.
Step 602: The controller 18 obtains the zooming instruction.
Step 604: The controller 18 generates a zoom signal according to the zooming instruction.
Step 606: The second camera 16 zooms in or zooms out according to the zoom signal.
Step 608: End.
Detailed operations of the zooming process 60 may be referred to the paragraphs stated above and are not narrated herein. According to the zooming process 60, the interested image I2 captured by the second camera 16 would be a clear, high resolution vision of the object of interest in the environment.
Furthermore, in another embodiment, the camera system 10 may track a moving object in the environment. Operations of the camera system 10 tracking the moving object in the environment may be referred to
Step 700: Start.
Step 702: The first camera 14 captures the wide-angle image I1.
Step 704: The controller 18 identifies an image object OBJ in the wide-angle image I1, wherein the image object OBJ in the wide-angle image I1 corresponds to a moving object OBm in the environment.
Step 706: The controller 18 obtains a Cartesian coordinate (x2,y2) of the image object OBJ.
Step 708: The controller 18 transfers the Cartesian coordinate (x2,y2) into a polar coordinate (r2,θ2).
Step 710: The controller 18 generates the pan signal PS and the tilt signal TS according to the polar coordinate (r2,θ2) to steer the second camera 16 such that an image of the moving object OBm captured by the second camera 16 is at a center of the interested image I2.
Step 712: End.
According to the surveillance process 70, the camera system 10 is able to steer the second camera 16 to track the moving object OBm in the environment. Specifically, in Step 704, the image object OBJ may be identified by the controller 18 via an object recognition technique. The object recognition technique is known by those skilled in the art and is not narrated herein. In Step 706, the Cartesian coordinate (x2,y2) of the image object OBJ is a representative of the image object OBJ, e.g., the Cartesian coordinate (x2,y2) may be a Cartesian coordinate of a center of the image object OBJ.
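Since the disclosure leaves the object recognition technique unspecified, a simple frame-differencing stand-in can illustrate Steps 704 and 706; the nested-list frame representation and the intensity threshold are assumptions:

```python
def moving_object_centroid(prev_frame, frame, threshold=30):
    """Locate the image object OBJ by frame differencing: pixels whose
    intensity changed by more than `threshold` between consecutive
    wide-angle frames are treated as the moving object, and the
    representative Cartesian coordinate (x2, y2) is the centroid of
    those pixels. This stands in for the unspecified object
    recognition technique."""
    xs, ys = [], []
    for y, (row_a, row_b) in enumerate(zip(prev_frame, frame)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(a - b) > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # no motion detected in this frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Two tiny grayscale frames; two pixels change in the second row
prev = [[0, 0, 0], [0, 0, 0]]
curr = [[0, 0, 0], [0, 255, 255]]
centroid = moving_object_centroid(prev, curr)
```

The resulting (x2, y2) would then be converted to (r2, θ2) exactly as in Steps 406 through 410 of the surveillance process 40.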
In Step 710, the controller 18 generates the pan signal PS and the tilt signal TS, so as to steer the second camera 16 such that the image of the moving object OBm captured by the second camera 16 is substantially at the center of the interested image I2. In other words, the controller 18 generates the pan signal PS and the tilt signal TS to steer the second camera 16 such that the moving object OBm is within the FOV of the second camera 16 and substantially at a center of the FOV. The remaining steps of the surveillance process 70 are similar to those of the surveillance process 40, which may be referred to the paragraphs stated above and are not narrated herein.
Furthermore, if the moving object OBm is substantially at the center of the FOV of the second camera 16 but a part of the moving object OBm is out of the FOV of the second camera 16, the camera system 10 may generate a zoom signal to control the second camera 16 to zoom out, so as to capture the image of the moving object OBm entirely. In addition, the camera system 10 may generate the zoom signal to control the second camera 16 to zoom in or zoom out, such that a size of the image of the moving object OBm is substantially kept at a specific portion of the interested image I2, where the specific portion may be specified by system requirements or by the user. Operations of the camera system 10 controlling the second camera 16 to zoom in or zoom out to track the moving object OBm can be summarized as a zooming process 80, which is illustrated in
Step 800: Start.
Step 802: The controller 18 obtains the zooming instruction.
Step 804: The controller 18 generates a zoom signal according to the zooming instruction.
Step 806: The second camera 16 zooms in or zooms out according to the zoom signal, such that the moving object OBm is within the FOV of the second camera 16.
Step 808: End.
The zooming process 80 is similar to the zooming process 60, and detailed operations of the zooming process 80 may be referred to the paragraphs stated above and are not narrated herein. According to the zooming process 80, the focal length of the second camera 16 is adjusted according to the zoom signal, such that the moving object OBm is within the FOV of the second camera 16.
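The behavior of keeping the moving object's image at a specific portion of the interested image I2 can be sketched as a focal-length adjustment; the target fraction, the tolerance, and the assumed linear relation between image size and focal length are illustrative, since the disclosure leaves the specific portion to system requirements or the user:

```python
def zoom_adjustment(obj_height, frame_height, target_fraction=0.5, tolerance=0.05):
    """Return a multiplicative focal-length factor (>1 zooms in, <1 zooms
    out) that keeps the moving object's image at target_fraction of the
    height of the interested image I2."""
    fraction = obj_height / frame_height
    if abs(fraction - target_fraction) <= tolerance * target_fraction:
        return 1.0  # close enough: keep the current focal length
    # Image size scales roughly linearly with focal length (small-angle
    # assumption), so scale focal length by the desired/current ratio.
    return target_fraction / fraction

factor_out = zoom_adjustment(obj_height=300, frame_height=400)  # object too large: zoom out
factor_in = zoom_adjustment(obj_height=100, frame_height=400)   # object too small: zoom in
```

A factor below 1 also covers the case where part of the moving object falls outside the FOV, since zooming out enlarges the captured extent.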
Notably, the embodiments stated above are utilized for illustrating the concept of the present invention. Those skilled in the art may make modifications and alterations accordingly, which are not limited herein. For example, the rotating mechanism 20 is not limited to be the slide rail mechanism or the gear mechanism. The rotating mechanism 20 may be any mechanical mechanism capable of stably rotating the second camera 16, which conforms to the scope of the present invention. In addition, the wide-angle image I1 captured by the first camera 14 is not limited to be the circular image. The wide-angle image I1 may also be a 360° panorama image. Notably, when the wide-angle image I1 is the 360° panorama image, the controller 18 may obtain the polar coordinate (r1,θ1) of the specific point directly, i.e., there is no need for the controller 18 to transform the Cartesian coordinate into the polar coordinate, which conforms to the scope of the present invention.
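For the 360° panorama case, the direct mapping from the specific point to a pan angle can be sketched as follows; the linear pixel-to-angle mapping, the modular wrap, and the default offset θ0 are assumptions for illustration:

```python
def panorama_pan(x1, panorama_width, theta0=0.0):
    """For a 360-degree panorama image I1, the horizontal pixel position
    x1 maps linearly onto the pan angle, so no Cartesian-to-polar
    transform is needed. theta0 is the same assumed default offset used
    for the circular-image case."""
    return (360.0 * x1 / panorama_width + theta0) % 360.0

# A point one quarter of the way across a 1600-pixel-wide panorama
ps = panorama_pan(400, 1600)
```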
Furthermore, the camera system of the present invention may comprise a plurality of second cameras. For example, please refer to
In summary, the camera system of the present invention is able to monitor a full view of an environment via the first camera and to provide high resolution images of objects of interest or moving objects in the environment via the second cameras.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
This application is a continuation-in-part application of U.S. application Ser. No. 14/487,108 filed on Sep. 16, 2014.
| | Number | Date | Country |
|---|---|---|---|
| Parent | 14487108 | Sep 2014 | US |
| Child | 14927489 | | US |