The present invention relates generally to a method and apparatus for choosing a desired field of view from a wide-angle image or video, and in particular, to a method and apparatus for choosing a desired field of view from a wide-angle wearable camera.
Wearable cameras are being deployed on law enforcement personnel to help capture video evidence. These cameras, which are typically mounted at the temple, shoulder, or chest, are subject to field-of-view variations due to deviations in mounting the camera. Additionally, user activity can result in field-of-view variations. Variations in the field of view of the camera can have the unintended result of shifting the field of view of the camera away from a desired position (e.g., pointing in front of the wearer) resulting in important video images being missed or only partially captured. Therefore, a need exists for a method and apparatus for choosing a desired field of view from an image, and in particular for choosing a desired field of view from a wearable camera.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions and/or relative positioning of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments of the present invention. It will further be appreciated that certain actions and/or steps may be described or depicted in a particular order of occurrence while those skilled in the art will understand that such specificity with respect to sequence is not actually required. Those skilled in the art will further recognize that references to specific implementation embodiments such as “circuitry” may equally be accomplished via replacement with software instruction executions either on general purpose computing apparatus (e.g., CPU) or specialized processing apparatus (e.g., DSP). It will also be understood that the terms and expressions used herein have the ordinary technical meaning as is accorded to such terms and expressions by persons skilled in the technical field as set forth above except where different specific meanings have otherwise been set forth herein.
In order to alleviate the above-mentioned need, a method and apparatus for choosing a desired field of view from a wide-angle image or video is provided herein. During operation, a wide-angle camera will collect wide-angle images. A portion of the wide-angle image (desired field of view) will be selected based on accelerometer readings. More particularly, to keep a desired field of view of a camera in the appropriate position, a correction is made to the tilt and roll of the desired field of view by using motion sensors to determine the horizon based on measuring the direction of gravity. A correction is also made to the yaw of the desired field of view using motion sensors to determine the forward facing position when the user is in motion. Because the desired field of view is corrected for variations resulting from user activity, any image collected from the camera is more likely to be pointed at a desired position.
The present invention encompasses a method for choosing a desired field of view from an image or video. The method comprises the steps of collecting an image or video from image collecting circuitry, determining a direction of motion from an accelerometer, and choosing the desired field of view from the image or video based on the direction of motion determined from the accelerometer.
The present invention additionally encompasses a method for choosing a desired field of view from an image or video. The method comprises the steps of collecting an image or video from wide-angle image collecting circuitry having a first field of view, determining a direction of motion from an accelerometer, and determining a direction of gravity from the accelerometer. Finally, a desired field of view is chosen from the image or video based on the direction of motion and the direction of gravity determined from the accelerometer. The desired field of view is smaller than the first field of view.
Finally, the present invention encompasses an apparatus for choosing a desired field of view from an image or video. The apparatus comprises image collecting circuitry collecting an image or video, and logic circuitry determining a direction of motion from an accelerometer and choosing the desired field of view from the image or video based on the direction of motion determined from the accelerometer.
Turning now to the drawings, where like numerals designate like components,
For example, the desired portion may be wirelessly transmitted to a dispatch center where it may be viewed in real time or stored as evidence. Similarly, the desired portion may be output to local storage where it may be later retrieved.
Capturing only a portion of the wide-angle image (typically a 40-60 degree horizontal field of view) serves to increase resolving power for 640×480 images output from camera 102. For example, there is a 3-4× reduction in the horizontal linear resolution of the full wide-angle image (e.g., a 180 degree horizontal field of view) when compared to an image having a 50 degree horizontal field of view at the same output size. For evidentiary purposes it is desirable to capture sufficient resolution in the scene to be able to clearly identify objects (e.g., weapons) or people of interest.
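By way of a hypothetical illustration only, the following Python sketch works through the resolution arithmetic above: a 640-pixel-wide output spread over a 180 degree capture versus the same 640 pixels over a 50 degree desired field of view, which yields a ratio of roughly 3.6, consistent with the 3-4× figure.

    # Horizontal pixels per degree for a 640-pixel-wide output image.
    output_width_px = 640
    wide_fov_deg = 180.0      # full wide-angle capture
    desired_fov_deg = 50.0    # selected desired field of view

    wide_res = output_width_px / wide_fov_deg        # ~3.6 pixels per degree
    desired_res = output_width_px / desired_fov_deg  # ~12.8 pixels per degree

    print(desired_res / wide_res)  # ~3.6x gain in horizontal linear resolution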
Logic circuitry 301 comprises a digital signal processor (DSP), a general-purpose microprocessor, a programmable logic device, or an application-specific integrated circuit (ASIC) and is utilized to access and control collection circuitry 302, to select a desirable field of view from images or video fed to it from circuitry 302, and to output the desired field of view. The desired field of view can be stored, transmitted, or both.
Optional storage 303 comprises standard random access memory or flash memory and is used to store the desired field of view selected by logic circuitry 301. It should be noted that storage 303 may exist internal to, or external to, camera 102.
Accelerometer 304 comprises a standard micromachined accelerometer used to measure the proper acceleration it experiences relative to freefall. Preferably, accelerometer 304 comprises a multi-axis accelerometer that is capable of detecting the magnitude and direction of acceleration as a vector quantity, and can be used to sense orientation, vibration, motion, and shock.
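As a hypothetical sketch only, and not as part of the described apparatus, the following Python fragment shows one common way a stream of multi-axis accelerometer samples can be separated into a quasi-static gravity component and a residual motion component using a simple low-pass filter; the function name and the filter constant are assumptions made for illustration.

    def separate_gravity(samples, alpha=0.9):
        """Split 3-axis accelerometer samples into gravity and motion parts.
        alpha is an assumed low-pass filter constant."""
        gravity = list(samples[0])
        motion = []
        for sample in samples:
            # Low-pass filter isolates the slowly varying gravity vector.
            gravity = [alpha * g + (1.0 - alpha) * a for g, a in zip(gravity, sample)]
            # The remainder approximates acceleration due to user motion.
            motion.append([a - g for a, g in zip(sample, gravity)])
        return gravity, motion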
Finally, transmitter 305 comprises common circuitry known in the art for communication utilizing a well-known communication protocol, and serves as a means for transmitting images or video. Possible transmitters include, but are not limited to, transmitters utilizing Bluetooth, IEEE 802.11, or HyperLAN protocols, or any cellular communication system protocol (e.g., CDMA, TDMA, GSM, WCDMA, . . . , etc.).
As discussed above, wearable cameras are subject to field-of-view variations due to deviations in mounting the camera or user activity. Variations in the field of view of the camera can have the unintended result of shifting the field of view of the camera away from a desired position (e.g., pointing in front of the wearer) resulting in important video images being missed or only partially captured. This is illustrated in
As shown in
In order to correct the improper alignment of camera 102, a desired field of view will be selected based on accelerometer 304 readings. More particularly, to select the desired field of view of camera 102, a correction is made by logic circuitry 301 to the tilt and roll of field 404 by using accelerometer 304 to determine the horizon based on measuring the direction of gravity. The tilt correction is given by τ, the angular difference between a1 and the horizon, where a1 is the direction in which the camera is pointing. The roll correction is given by ρ, the angular difference between a2 and the horizon, where a2 is a direction perpendicular to a1. Ideally, both a1 and a2 are perpendicular to the direction of gravity. This is illustrated in
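One possible way to compute the tilt correction τ and the roll correction ρ from a measured gravity vector is sketched below in Python; the axis convention (x along a1, y along a2) and the sign choices are assumptions made for illustration, not a definitive implementation of logic circuitry 301.

    import math

    def tilt_and_roll(gx, gy, gz):
        """Tilt tau: angle between the pointing direction a1 and the horizon.
        Roll rho: angle between the perpendicular direction a2 and the horizon.
        Assumed camera axes: x = a1 (pointing), y = a2 (lateral); gravity is
        reported in camera coordinates, e.g. (0, 0, -9.81) when the camera is level."""
        norm = math.sqrt(gx * gx + gy * gy + gz * gz)
        tau = math.degrees(math.asin(gx / norm))  # positive when a1 dips below the horizon
        rho = math.degrees(math.asin(gy / norm))  # positive when a2 dips below the horizon
        return tau, rho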
A correction ψ is also made to the yaw of field of view 404 using accelerometer 304 to determine the forward-facing direction when the user is in motion (e.g., walking or running). In particular, field of view 404 is positioned to point parallel to the direction of motion. Because the desired field of view is adjusted to point perpendicular to the direction of gravity, and parallel to the direction of motion, any image collected from the camera is more likely to be pointed at a desired position.
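A corresponding sketch for the yaw correction ψ is given below; it assumes that the gravity-free motion component (for example, from the filter sketched earlier) is integrated over a short window so that its horizontal direction approximates the direction of travel. The integration approach, window, and drift handling are illustrative assumptions only.

    import math

    def yaw_correction(motion_samples, dt):
        """Yaw psi: angle between the pointing direction a1 (x axis) and the
        horizontal direction of motion, in degrees.  Estimated by integrating
        gravity-free acceleration to a short-term velocity (drift ignored)."""
        vx = vy = 0.0
        for ax, ay, _az in motion_samples:
            vx += ax * dt
            vy += ay * dt
        return math.degrees(math.atan2(vy, vx))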
The above technique is further illustrated with reference to
In
The image is created on the image sensor using a lens as shown in
The distance t by which the center of the field of view must be shifted on the image sensor to correct a tilt τ is given by the following expression, where f is the focal length of the lens and α is the angle between the original light ray and the optical axis:
t=f*tan(τ+α)−f*tan(α)
If the light ray was originally centered on the image sensor (i.e. α=0), then this expression simplifies to:
t=f*tan(τ)
This expression is valid for an angular deviation from the center of the image independent of whether it occurs in the vertical (tilt) or horizontal (yaw) direction. The same mathematical relationship can be used to determine the distance y (assuming α=0):
y=f*tan(ψ)
The above expressions are valid for an ideal projection lens. Other lens types will have different mathematical relationships between image sensor distances and angular changes in the camera direction. Also, the maximum corrections that can be obtained will be limited by the angle of capture of the lens and the dimensions of the image sensor.
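As a hypothetical sketch of the expressions above, the following Python function computes the sensor translations t and y from the tilt and yaw corrections for an ideal projection lens (α=0), with the focal length expressed in pixels for convenience, and clamps the result to the sensor dimensions; the clamping limits and units are assumptions made for illustration.

    import math

    def sensor_translation(f_px, tau_deg, psi_deg, sensor_w_px, sensor_h_px):
        """Translation of the field-of-view center on the image sensor for an
        ideal projection lens, in pixels, assuming alpha = 0."""
        t = f_px * math.tan(math.radians(tau_deg))   # vertical shift for tilt
        y = f_px * math.tan(math.radians(psi_deg))   # horizontal shift for yaw
        # The correction cannot move the window past the edge of the sensor.
        t = max(-sensor_h_px / 2.0, min(sensor_h_px / 2.0, t))
        y = max(-sensor_w_px / 2.0, min(sensor_w_px / 2.0, y))
        return t, y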
To obtain the desired image, it is necessary to translate the center of the uncorrected field of view 602 by the distances t and y, and to roll it by the angle ρ so that it aligns with the direction of gravity.
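As a sketch only, assuming the OpenCV library and that the wide-angle frame is available as an image array, the desired field of view could be extracted by rotating the frame about the shifted center by ρ and then cropping a window of the desired output size; the variable names, the use of OpenCV, and the omission of boundary handling are assumptions made for illustration.

    import cv2

    def extract_desired_fov(frame, t, y, rho_deg, out_w=640, out_h=480):
        """Translate the field-of-view center by (y, t) pixels, counter-rotate
        by the roll angle rho, and crop a window of the desired output size.
        Boundary clamping is omitted for brevity."""
        h, w = frame.shape[:2]
        cx, cy = w / 2.0 + y, h / 2.0 + t   # corrected center of the desired view
        # Rotate the frame about the corrected center so the horizon is level.
        M = cv2.getRotationMatrix2D((cx, cy), rho_deg, 1.0)
        rotated = cv2.warpAffine(frame, M, (w, h))
        # Crop the desired field of view around the corrected center.
        x0 = int(round(cx - out_w / 2.0))
        y0 = int(round(cy - out_h / 2.0))
        return rotated[y0:y0 + out_h, x0:x0 + out_w]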
The corrections described above can be carried out in real time, or may be carried out at certain intervals or implemented manually. One reason that real-time corrections may not be desired is that occasionally a user may bend over to look down or pick something up. It would be desirable to capture an image or video of this downward-looking scene. However, if the corrections are made in real time, the corrected image will only be that of the horizon. To overcome this problem, the corrections can be implemented at selected intervals. For example, the logic circuitry can detect from the accelerometer that the user is in motion, and can therefore trigger a correction to be made whenever the user is walking. Another alternative is to apply the correction only when the user manually instructs the device to make a correction. For example, the user could press a button or issue a voice command to make the correction.
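A minimal sketch of such a trigger, assuming that walking is detected from the variance of the acceleration magnitude over a short window, is shown below; the threshold value and the manual-trigger flag are illustrative assumptions only.

    WALK_VARIANCE_THRESHOLD = 0.5  # (m/s^2)^2, assumed illustrative value

    def should_apply_correction(accel_magnitudes, manual_trigger=False):
        """Apply the correction when the user is walking (periodic acceleration)
        or when a manual trigger (button press or voice command) is received."""
        if manual_trigger:
            return True
        n = len(accel_magnitudes)
        mean = sum(accel_magnitudes) / n
        variance = sum((m - mean) ** 2 for m in accel_magnitudes) / n
        return variance > WALK_VARIANCE_THRESHOLD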
While the invention has been particularly shown and described with reference to particular embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention. For example, although the above description was given with regard to determining a desired field of view based on the direction of motion and the direction of gravity, it is intended that the desired field of view may be determined based solely on the direction of motion or solely on the direction of gravity. It is intended that such techniques come within the scope of the following claims: