The present disclosure is generally related to gun scopes, and more particularly to rifle scopes configured to track a target.
Conventionally, a telescopic device uses lenses to focus light, magnifying at least a portion of a view area in front of the telescope. Jitter (human movements and twitches) is magnified by the telescopic device. Such jitter, in combination with movement of a target within the view area, makes tracking the target difficult at almost any level of magnification. Further, once the target leaves the view area, it can be very difficult to reacquire using the telescopic device.
In an embodiment, a gun scope includes at least one optical sensor configured to capture a video of a view area, a display, a processor coupled to the display and to the at least one optical sensor, and a memory accessible to the processor. The memory stores instructions that, when executed, cause the processor to receive user input that identifies a target within the video, apply a visual tag to the target within the video, and adjust the visual tag to track the target within a sequence of frames. The memory further stores instructions that, when executed, cause the processor to provide the video including the visual tag to the display.
In another embodiment, a method includes capturing a video using a circuit of a gun scope, receiving a user input at the circuit that identifies a target within the video, and automatically processing the video to track the target, frame-by-frame, within the video. The method further includes providing the video to a display of the gun scope.
In still another embodiment, a circuit includes an input interface configured to receive a sequence of frames of a video corresponding to a view area of a telescopic device, a user interface configured to receive user inputs, and a processor coupled to the input interface and to the user interface. The circuit further includes a memory coupled to the processor. The memory is configured to store instructions that, when executed by the processor, cause the processor to receive a user input to select a target within the video, automatically apply a visual tag to the target in response to receiving the user input, adjust the visual tag, frame-by-frame, to track the target, and provide the video including the visual tag to a display.
In the following discussion, the same reference numbers are used in the various embodiments to indicate the same or similar elements.
Embodiments of a telescopic device, circuits, and methods are described below that include circuitry configured to process video to track a target. The telescopic device can be a rifle scope, a binocular display device, a spotting scope, or another type of telescopic device. In an example, the circuitry is configured to apply a visual tag to a target in a frame of a video, to detect localized movement of the target relative to visual elements within the video, frame-by-frame, and to adjust a location of the visual tag to track the target as it moves within and through the view area. In a particular example, the circuitry includes or is coupled to a display configured to display the video including the visual tag.
Telescopic device 100 includes user-selectable buttons 110 and 112 on the outside of housing 106 that allow the user to interact with circuitry 108 to select between operating modes, to adjust settings, and so on. In some instances, the user may interact with at least one of the user-selectable buttons 110 and 112 to select a target within the view area. Further, telescopic device 100 includes thumbscrews 114, 116, and 118, which allow for manual adjustment of telescopic device 100. In an example, thumbscrews 114, 116, and 118 can be turned, individually, to adjust the crosshairs within a view area of telescopic device 100. In some instances, thumbscrews 114, 116, and 118 can be omitted. Alternatively or in addition, user-selectable buttons may be provided on a device or component that can be coupled to telescopic device 100 (such as through a universal serial bus (USB) interface, another wired or wireless interface, or a connection to buttons on a grip of a firearm) to allow the user to interact with circuitry 108 and/or to select a target.
Housing 106 includes a removable battery cover 120, which secures one or more batteries (or other charge storage device) within housing 106 for supplying power to circuitry 108. Housing 106 is coupled to a mounting structure 122, which is configured to mount to a surface using fasteners 124 and 126. In a particular example, mounting structure 122 can be secured to a portable structure, such as a tripod, a rifle, an air gun, or another structure. In some instances, mounting structure 122 may be omitted, and a handle or strap may be provided to assist the user in holding the telescopic device 100 in his/her hand.
Circuitry 108 is configured to capture video of a view area in front of optical element 104 and to generate a reticle that is superimposed on the video provided to the display. The user may interact with buttons 110 and/or 112, in conjunction with the reticle, to select a target within the view area. Further, circuitry 108 is configured to receive the user input and to generate and apply a visual tag to the selected target. The visual tag may be a square, a circle, an "X", or some other visual indicator that can be superimposed on the target within the video. Circuitry 108 superimposes the visual indicator on the target in the video and provides the video to the display. Further, circuitry 108 is configured to track the target from frame-to-frame within the video and to adjust the visual tag within the video to stay on the target even as the target moves, independent of the position of the reticle.
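For illustration only, superimposing a square tag on a frame might look like the following minimal sketch, assuming grayscale frames stored as NumPy arrays; the function name and parameters are hypothetical and not part of this disclosure:

```python
import numpy as np

def draw_square_tag(frame: np.ndarray, cx: int, cy: int,
                    half: int = 12, value: int = 255) -> np.ndarray:
    """Superimpose a square visual tag centered on pixel (cx, cy)."""
    out = frame.copy()
    top, bot = max(cy - half, 0), min(cy + half, out.shape[0] - 1)
    lef, rig = max(cx - half, 0), min(cx + half, out.shape[1] - 1)
    out[top, lef:rig + 1] = value      # top edge
    out[bot, lef:rig + 1] = value      # bottom edge
    out[top:bot + 1, lef] = value      # left edge
    out[top:bot + 1, rig] = value      # right edge
    return out
```

A circle or "X" indicator could be drawn in the same manner by writing different pixel patterns around the tag center.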
While the above example depicts a telescopic device that could be used as a gun scope, a spotting scope, or a telescope, circuitry 108 may also be incorporated in other optical devices. An example of a binocular device that incorporates circuitry 108 is described below with respect to FIG. 2.
In this example, circuitry 108 includes optical sensors configured to capture video associated with a view area that is observed through at least one of the optical elements 204. Circuitry 108 aligns visual elements within a frame to corresponding visual elements within a previous frame, frame-by-frame, stabilizing background visual elements within the sequence of frames, and tracks the selected target whether it remains stationary or moves from frame-to-frame. Circuitry 108 places a visual tag on the target to visually track the target. Examples of the view area showing a target and a visual tag are described below with respect to FIG. 3.
In the above discussion, it is assumed that the target 304 has moved. In one example, prior to application of the tag or visual marker to the target, the reticle is a view reticle that is centered within the view area of the scope. Upon selection of the target, the reticle can become a ballistics reticle that is aligned to the impact location of a shot if it were taken at that instant. In particular, in response to selection of the target, circuitry 108 can calculate the impact location of the shot based on range information, ballistic information, orientation of the rifle, and so on. The resulting impact location can then be reflected by a ballistics reticle, which corresponds to the calculated impact location. In some instances, circuitry 108 may cause the view area to scroll until the reticle is centered on the calculated impact location. In other instances, circuitry 108 may display an indicator to direct the user to adjust the orientation of the rifle, for example, to realign the ballistics reticle to the previously selected target. In general, the ballistics reticle will reflect, for example, temperature, wind, elevation, range, bullet drop, and other factors that can affect the trajectory of the bullet.
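For illustration only, a greatly simplified impact-location calculation is sketched below. It models bullet drop under a flat-fire vacuum approximation and ignores drag, wind, temperature, and the other factors listed above; the helper names are hypothetical and do not reflect the actual ballistics solver:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def bullet_drop_m(range_m: float, muzzle_velocity_mps: float) -> float:
    """Vacuum flat-fire approximation: drop = g * t^2 / 2 over the time of flight."""
    t = range_m / muzzle_velocity_mps   # time of flight, ignoring drag
    return 0.5 * G * t * t

def drop_mrad(range_m: float, muzzle_velocity_mps: float) -> float:
    """Convert linear drop at the target into an angular holdover in milliradians."""
    return bullet_drop_m(range_m, muzzle_velocity_mps) / range_m * 1000.0

# Example: a 300 m shot at 850 m/s muzzle velocity gives
# t = 0.353 s, drop = 0.61 m, or roughly a 2.0 mrad holdover.
```

The calculated angular offset is the kind of quantity that could drive the placement of the ballistics reticle within the displayed video.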
While the views 300 and 300′ in FIG. 3 depict one example of the visual tag, other types of visual indicators, such as those described above, may also be used.
Circuitry 108 receives a second frame 504 including visual elements 506′, which are shifted relative to visual elements 506 in previous frame 502. Circuitry 108 compresses the second frame 504 through a compression operation to produce a compressed frame 504′ having compressed visual elements. Circuitry 108 further compresses the compressed frame 504′ through one or more compression operations to produce compressed frame 504″ having compressed visual elements.
As shown in frame 518, when the visual elements from compressed frames 502″ and 504″ are combined, the relative positions of visual elements 506 and 506′ within their respective frames are shifted, which shift may be caused by movement of telescopic device 100 by the user. However, visual elements 506 and 506′ represent background objects that have not moved relative to one another within their respective frames 502 and 504. Circuitry 108 aligns visual elements 506 and 506′ as depicted in frame 520 to determine alignment information. It should be noted, however, that target 508′ has moved relative to the other visual elements 506′ and relative to target 508 in frame 502. Circuitry 108 uses the alignment information determined from frame 520 and further refines it with respect to visual elements within frame 522 relative to frame 502′. Circuitry 108 uses the refined alignment information from frame 522 to align visual elements with those within frame 502, and further refines that alignment information to produce an adjusted frame 524, which can be presented to a display device as a second frame in a sequence of frames, providing frame-to-frame video stabilization. Further, the relative movement of target 508′ can be calculated as a motion vector, as generally indicated by arrow 526.
By aligning visual elements 506 and 506′ from frame-to-frame, circuitry 108 stabilizes the images to reduce or eliminate jitter. Further, by utilizing compression, pixel alignment can be performed at various levels of compression (various levels of granularity) to produce aligned frames at a desired level of resolution. Additionally, localized movement of one or more visual elements (particularly a selected target) can be detected by circuitry 108 and can be used to reposition or relocate the visual tag to track the selected target from frame-to-frame.
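A minimal sketch of such coarse-to-fine alignment follows, assuming 2x2 block averaging as the compression operation and an exhaustive shift search at each level. This is illustrative only; among other simplifications, np.roll wraps pixels around the frame edges, where a production implementation would crop borders:

```python
import numpy as np

def downsample(frame: np.ndarray) -> np.ndarray:
    """Halve resolution by 2x2 block averaging (the 'compression' step)."""
    h, w = frame.shape[0] // 2 * 2, frame.shape[1] // 2 * 2
    f = frame[:h, :w]
    return (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2]) / 4.0

def best_shift(ref: np.ndarray, cur: np.ndarray, radius: int) -> tuple:
    """Exhaustive search for the (dy, dx) that best aligns cur to ref."""
    best, best_err = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            err = np.mean((np.roll(cur, (dy, dx), axis=(0, 1)) - ref) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def align(ref: np.ndarray, cur: np.ndarray, levels: int = 3) -> tuple:
    """Coarse-to-fine: estimate the shift at the most compressed level,
    then refine it at each successively finer level."""
    if levels == 0:
        return best_shift(ref, cur, radius=2)
    dy, dx = align(downsample(ref), downsample(cur), levels - 1)
    dy, dx = 2 * dy, 2 * dx                 # scale the coarse estimate up one level
    rdy, rdx = best_shift(ref, np.roll(cur, (dy, dx), axis=(0, 1)), radius=1)
    return dy + rdy, dx + rdx
```

Searching a small radius at each level keeps the cost low while still recovering large shifts, since each coarse-level pixel spans several full-resolution pixels.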
In the example of FIG. 6, circuitry 108 determines an alignment vector 622 that aligns background visual elements between adjacent frames and a motion vector 626 that represents movement of the selected target between the frames.
While the example of FIG. 6 depicts one technique for determining the alignment vector 622 and the motion vector 626, other techniques may also be used.
The motion vector 626 can be used to adjust the location of the visual tag so that the visual tag tracks the movement of the target from frame-to-frame. Further, the circuitry 108 utilizes the differences between the alignment vector 622 and the motion vector 626 to differentiate between the background and the target.
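One plausible sketch of deriving the target's local motion vector, and of separating it from the background alignment vector, is shown below. The window size and search radius are hypothetical, the search window is assumed to lie inside the frame, and global_shift would come from a frame-alignment step such as the one sketched above:

```python
import numpy as np

def local_motion(prev, cur, tag_yx, half, global_shift, search=4):
    """Estimate the target's shift within a window around the tag, then
    subtract the global alignment vector to get the target's motion
    relative to the (stabilized) background."""
    y, x = tag_yx
    ref = prev[y - half:y + half, x - half:x + half]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[y + dy - half:y + dy + half, x + dx - half:x + dx + half]
            err = np.mean((win - ref) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    gy, gx = global_shift
    return best[0] - gy, best[1] - gx   # motion of target vs. background
```

A nonzero result indicates localized target movement; a result near zero indicates the apparent shift was camera motion, which the stabilization already removes.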
While the above description sets out one possible method of tracking a target within a view area, other methods may also be used. One possible example of a circuit configured to track a target within a view area is described below with respect to FIG. 7.
Circuitry 108 includes a field programmable gate array (FPGA) 712 including one or more inputs coupled to outputs of image (optical) sensors 710. FPGA 712 further includes an input/output interface coupled to a memory 714, which stores data and instructions. FPGA 712 includes a first output coupled to a display 716 for displaying video and/or text. FPGA 712 is also coupled to a digital signal processor (DSP) 730 and a micro-controller unit (MCU) 734 of an image processing circuit 718. DSP 730 is coupled to a memory 732 and to MCU 734. MCU 734 is coupled to a memory 736. Memories 714, 732, and 736 are computer-readable and/or processor-readable data storage media capable of storing instructions that are executable (by FPGA 712, DSP 730, and/or MCU 734, respectively) to perform various operations.
Circuitry 108 also includes sensors 720 configured to measure one or more environmental parameters (such as wind speed and direction, humidity, temperature, and other environmental parameters), to measure motion of the telescopic device, and/or to capture optical measurements, such as reflected laser range-finding data, and to provide the measurement data to MCU 734. In one example, sensors 720 include inclinometers 750, gyroscopes 752, accelerometers 754, and other motion detection circuitry 756.
FPGA 712 is configured to process image data from image (optical) sensors 710. FPGA 712 processes the image data to stabilize the video by aligning adjacent frames. Further, FPGA 712 enhances image quality through digital focusing and gain control. In some instances, FPGA 712 also performs image registration and cooperates with DSP 730 to perform visual target tracking. FPGA 712 further cooperates with MCU 734 to mix the video data with reticle information and provides the resulting video data to display 716.
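The gain-control step might, for example, be realized as a percentile-based histogram stretch; the following is one plausible reading in sketch form, not the FPGA implementation itself, and the percentile cutoffs are hypothetical:

```python
import numpy as np

def auto_gain(frame: np.ndarray, lo_pct: float = 1.0, hi_pct: float = 99.0) -> np.ndarray:
    """Stretch the 1st..99th percentile intensity range to the full 8-bit scale,
    clipping outliers, so dim scenes use the display's dynamic range."""
    lo, hi = np.percentile(frame, [lo_pct, hi_pct])
    if hi <= lo:                      # flat frame: nothing to stretch
        return frame.astype(np.uint8)
    out = (frame.astype(np.float32) - lo) * (255.0 / (hi - lo))
    return np.clip(out, 0, 255).astype(np.uint8)
```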
While the example of FIG. 7 depicts circuitry 108 as including FPGA 712, DSP 730, and MCU 734, other circuit implementations may also be used.
In the illustrated example, memory 732 stores reticle generator instructions 742 that, when executed by DSP 730, cause DSP 730 to generate a reticle that can be superimposed or otherwise provided within a view area of the video stream. Further, memory 732 stores visual tag generator instructions 744 that, when executed, cause DSP 730 to generate a visual tag that can be applied to a selected target within the view area.
Further, memory 736 stores target selection instructions 746 that, when executed, cause MCU 734 to receive user input corresponding to selection of a target within the video stream. Further, when executed, target selection instructions 746 cause MCU 734 to communicate target selection information to FPGA 712 and to DSP 730 for use in processing the video.
Memory 714 stores localized motion detection instructions 758 that, when executed, cause FPGA 712 to determine a local motion vector for a selected target, and target tracking instructions 760 that, when executed, cause FPGA 712 to track a target from frame-to-frame within the video and to move the visual tag or marker to visually track the target within the video. Memory 714 also stores edge detection instructions 762 that, when executed, cause FPGA 712 to detect edges of a selected target to disambiguate the selected target from background portions of the video. Memory 714 further stores texture detection instructions 764 that, when executed, cause FPGA 712 to use texture within the frame to differentiate or isolate a target. Memory 714 may also include other detection instructions 766 that, when executed, cause FPGA 712 to differentiate between background and target information through some other algorithm or data point. In some instances, memory 714 may include algorithm selection instructions that, when executed, cause FPGA 712 to select one or more algorithms to detect the target. In one instance, such instructions cause FPGA 712 to apply multiple algorithms. In another instance, such instructions cause FPGA 712 to select algorithms having higher selectivity in low-contrast environments (for example, to enhance target acquisition and tracking) and lower selectivity in high-contrast environments (for example, to conserve power).
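Such algorithm selection might, for example, key off a crude contrast measure such as the standard deviation of pixel intensities; the sketch below is illustrative only, and the threshold value is hypothetical:

```python
import numpy as np

HIGH_CONTRAST_STD = 40.0  # hypothetical threshold on 8-bit intensity spread

def select_algorithms(frame: np.ndarray) -> list:
    """Use cheaper edge detection alone in high-contrast scenes; add the more
    selective (and costlier) texture analysis when contrast is low."""
    contrast = float(np.std(frame))
    if contrast >= HIGH_CONTRAST_STD:
        return ["edge_detection"]             # lower selectivity, conserves power
    return ["edge_detection", "texture"]      # higher selectivity for low contrast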
Circuitry 108 is configured to initially stabilize the entire view area. In high-contrast environments, circuitry 108 may utilize edge detection algorithms to automatically differentiate a selected target from background aspects of the view area. Alternatively, circuitry 108 may identify contrasts or color changes near the location selected by the user input to detect an outline or edge of a potential target. If the object is moving, relative movement may be used to automatically detect or to refine detection of the selected target. In some instances, texture detection or other types of optical detection (or a combination of detection algorithms, infrared input, light detection and ranging (LIDAR), acoustic detection, and data from other types of sensors) can be used to isolate a potential target from the stabilized background. In an example, texture of an image can be analyzed as a function of the spectral content of the pixels after application of one or more filters.
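An edge-based isolation step consistent with the above might be sketched as follows, using a Sobel gradient magnitude and a threshold applied near the user-selected point; the window size and threshold are illustrative assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def edge_magnitude(frame: np.ndarray) -> np.ndarray:
    """Sobel gradient magnitude; strong responses outline high-contrast objects."""
    gx = convolve2d(frame, SOBEL_X, mode="same", boundary="symm")
    gy = convolve2d(frame, SOBEL_X.T, mode="same", boundary="symm")
    return np.hypot(gx, gy)

def edge_mask_near(frame: np.ndarray, seed_yx: tuple,
                   half: int = 24, thresh: float = 80.0) -> np.ndarray:
    """Threshold edges in a window around the user-selected point to obtain a
    candidate outline of the target, separated from the stabilized background."""
    y, x = seed_yx
    window = edge_magnitude(frame)[y - half:y + half, x - half:x + half]
    return window > thresh
```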
To define the texture content of a target, circuitry 108 can be configured to construct an energy vector for the target within the view area. A camouflaged target within a view area can present a different texture than surrounding or neighboring pixel areas, making it possible to identify a camouflaged target within the view area even when little light contrast is available. Once the target is identified by the user, circuitry 108 can track the target over time. In a first example where the target is moving within the view area, the changes in the pixel area over time can be used to track the selected target.
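One plausible construction of such an energy vector uses a Laws-style texture filter bank: filter the pixel area with each two-dimensional kernel and record the mean absolute response per filter. The sketch below is illustrative, not the disclosed implementation:

```python
import numpy as np
from scipy.signal import convolve2d

# Laws' 1-D kernels (level, edge, spot); outer products give 2-D texture filters.
L5 = np.array([1, 4, 6, 4, 1], dtype=float)
E5 = np.array([-1, -2, 0, 2, 1], dtype=float)
S5 = np.array([-1, 0, 2, 0, -1], dtype=float)
BANK = [np.outer(a, b) for a in (L5, E5, S5) for b in (L5, E5, S5)]

def texture_energy_vector(patch: np.ndarray) -> np.ndarray:
    """One 'energy' value per filter: mean absolute filter response over the patch."""
    return np.array([np.mean(np.abs(convolve2d(patch, k, mode="valid")))
                     for k in BANK])
```

A camouflaged target could then be separated by comparing its energy vector against those of neighboring patches, even when little intensity contrast is available.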
As the user orients telescopic device 100 to change the view area, telescopic device 100 captures neighboring view areas, and circuitry 108 calculates texture similarities to stitch together adjacent view areas as part of a smoothing function that aligns adjacent frames. When the optical view area is changed, one possible way to reacquire a target in the current view area includes searching neighboring regions for a pixel intensity distribution similar to that of the selected target, which can include minimizing a distance between the target pixel distribution and that of the candidate target area. However, when the target has low contrast relative to the background, the target model is not necessarily updated each time the target is localized. To enhance target acquisition, circuitry 108 updates the shape/outline/model of the selected target when the correlation of the identified model with the selected object exceeds a threshold. In other words, when movement of the selected target and/or the background contrast provides sufficient information to enhance the model of the selected target, circuitry 108 can enhance the selected target information with the additional information.
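The reacquisition and model-update logic described above might be sketched as follows, using intensity histograms as the pixel distribution and the Bhattacharyya coefficient as the similarity measure; the bin count, stride, window size, and update threshold are hypothetical:

```python
import numpy as np

def intensity_hist(patch: np.ndarray, bins: int = 32) -> np.ndarray:
    """Normalized pixel-intensity distribution of a patch (8-bit range)."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def bhattacharyya(p: np.ndarray, q: np.ndarray) -> float:
    """Similarity of two distributions in [0, 1]; 1 means identical."""
    return float(np.sum(np.sqrt(p * q)))

def reacquire(frame, model_hist, half=16, stride=8, update_thresh=0.95):
    """Scan candidate windows for the distribution closest to the stored target
    model; update the model only when similarity exceeds the threshold."""
    best, best_sim = None, -1.0
    for y in range(half, frame.shape[0] - half, stride):
        for x in range(half, frame.shape[1] - half, stride):
            sim = bhattacharyya(
                intensity_hist(frame[y - half:y + half, x - half:x + half]),
                model_hist)
            if sim > best_sim:
                best, best_sim = (y, x), sim
    if best_sim > update_thresh:           # confident match: refine the model
        y, x = best
        model_hist = intensity_hist(frame[y - half:y + half, x - half:x + half])
    return best, model_hist
```

Gating the model update on a high similarity score prevents a low-contrast mislocalization from corrupting the stored target model.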
In a particular example, as the target moves within the view area and continues outside of the original view area, the user may adjust the orientation of telescopic device 100 to follow the target. In response to such movement, the view area will shift, and circuitry 108 operates to smooth the movement on the display to provide a relatively seamless display image, continuing to provide the visual marker or tag on the previously selected target.
The above examples can be used with any telescopic device, including, but not limited to, telescopes, rifle scopes, spotting scopes, binoculars, microscopes, and the like. An example of a telescopic device used in conjunction with a rifle is described below with respect to FIG. 8.
In this example, circuitry 108 allows the user to select a target and automatically tracks the target, over time, assisting the user to continue to view the target and/or engage or shoot the target being tracked. Further, circuitry 108 can include logic configured to determine the orientation and motion of rifle 802 relative to a selected target and to prevent discharge until the rifle 802 is aligned to the target within an acceptable margin of error.
Circuitry 108 is configured to track a selected target within a view area in response to user selection of the target. One possible example of a method of tracking the selected target is described below with respect to FIG. 9.
Continuing to 908, circuitry 108 determines local motion of the target relative to a background within the video. Proceeding to 910, circuitry 108 selectively adjusts a position of the visual tag within the video to visually track the target in the view area. In an example, the visual tag is presented as if it were physically attached to the target as the target moves. Moving to 912, circuitry 108 provides the video stream, including the visual tag, to a display.
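Tying these steps together, a per-frame sketch might read as follows, reusing the hypothetical local_motion and draw_square_tag helpers from the earlier sketches:

```python
def process_frame(prev, cur, tag_yx, global_shift):
    """Steps 908-912 in sketch form: estimate the target's local motion,
    move the visual tag with it, draw the tag, and emit the frame."""
    dy, dx = local_motion(prev, cur, tag_yx, half=16, global_shift=global_shift)
    tag_yx = (tag_yx[0] + dy, tag_yx[1] + dx)          # 910: adjust tag position
    out = draw_square_tag(cur, tag_yx[1], tag_yx[0])   # tag appears attached to target
    return out, tag_yx                                 # 912: provide to display
```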
It should be understood that the method 900 in FIG. 9 is provided for illustrative purposes only and is not intended to be limiting.
In conjunction with the systems, circuits, and methods described above with respect to FIGS. 1-9, a telescopic device includes circuitry configured to receive a user selection of a target within a video, to apply a visual tag to the selected target, and to adjust the visual tag, frame-by-frame, to track the target within the video.
Although the present invention has been described with reference to preferred embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the scope of the invention.