The present disclosure is generally related to telescopic devices, and more particularly, to rifle scopes and methods of providing embedded training.
Conventionally, training with gun scopes requires the firearm owner to take the firearm with the gun scope to a firing range or a field to shoot at targets and to make adjustments to the gun scope settings. Training with other firearm users, including military or police personnel, may include simulated firing and/or paint ball training exercises.
In some instances, military and police personnel may use situational training systems involving actuated targets and/or simulated targets to train to improve aim and shooting skills for firing rifles, shotguns, handguns, air guns, and other weapons. Such systems may display targeting environments on a screen and may include sensors configured to detect signals corresponding to the discharge of the training device and to determine the aim point of the training device. The determination of the aim point allows the system to determine whether a target was hit and to adapt the targeting environment to reflect the result of the shot.
However, such training systems rely on specialized equipment, and the user trains with that specialized equipment rather than with his or her own weapon. Unfortunately, such specialized equipment can differ from the user's actual weapon in significant ways and may have different aim point characteristics. Further, such training systems can be expensive and require facilities designed to house such systems.
In an embodiment, a method includes providing a visual representation of a targeting environment to a display of a rifle scope, where the visual representation includes a target and a reticle. The method further includes receiving a trigger pull signal at a processor coupled to the display, determining an impact location of a virtual shot in response to receiving the trigger pull signal, and dynamically adjusting the target within the visual representation in response to determining the impact location.
In another embodiment, a rifle scope includes a display, an input interface configured to receive a user input, a processor coupled to the display and the input interface, and a memory coupled to the processor. The memory is configured to store instructions that, when executed by the processor, cause the processor to provide a visual representation of a targeting environment to the display, where the visual representation includes a target. The memory further includes instructions that, when executed, cause the processor to receive a trigger pull signal from the input interface, determine an impact location of a virtual shot in response to receiving the trigger pull signal, and dynamically adjust the target within the visual representation based on determining the impact location.
In still another embodiment, a rifle scope having embedded training includes a display, a processor coupled to the display, and a memory accessible to the processor. The memory is configured to store instructions that, when executed, cause the processor to provide a visual representation of a targeting environment to the display, determine a trigger pull, and provide training results to the display by selectively adjusting at least one target within the visual representation in response to the trigger pull.
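For purposes of illustration only, the following is a minimal sketch of the basic training flow recited in the above embodiments (provide a visual representation, receive a trigger pull, determine an impact location, and adjust the target). The names (Target, compute_impact, adjust_target), the jitter model, and the hit test are assumptions introduced for explanation and are not elements of the disclosed device.

```python
# Minimal sketch of the training loop: all names and the jitter model
# are illustrative assumptions, not elements of the disclosed device.
import random
from dataclasses import dataclass


@dataclass
class Target:
    x: float
    y: float
    radius: float
    hit: bool = False


def compute_impact(aim_x: float, aim_y: float, jitter: float = 0.5) -> tuple:
    """Perturb the aim point to model user jitter and environment."""
    return (aim_x + random.uniform(-jitter, jitter),
            aim_y + random.uniform(-jitter, jitter))


def adjust_target(target: Target, impact: tuple) -> None:
    """Dynamically adjust the target based on the impact location."""
    dx, dy = impact[0] - target.x, impact[1] - target.y
    if (dx * dx + dy * dy) ** 0.5 <= target.radius:
        target.hit = True       # e.g., depict the target falling
    else:
        target.x += 10.0        # e.g., depict the target fleeing


def on_trigger_pull(target: Target, reticle_x: float, reticle_y: float) -> tuple:
    """Handle a trigger pull signal for a virtual shot."""
    impact = compute_impact(reticle_x, reticle_y)
    adjust_target(target, impact)
    return impact
```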
In the following discussion, the same reference numbers are used in the various embodiments to indicate the same or similar elements.
Embodiments of systems and methods are described below that provide embedded training. In an example, a rifle scope includes a display and a controller coupled to the display and configured to provide a visual representation of a targeting environment to the display. The controller is configured to detect a trigger pull and to determine an impact location of a virtual shot relative to a target based on the movement and angle of the firearm to which the telescopic device is attached when the trigger is pulled. The controller is further configured to adjust the position of the target and/or to cause the target to move or respond to the virtual shot based on the impact location. An example of a telescopic device that can be implemented as a rifle scope and that is configured to provide embedded training is described below with respect to
Telescopic device 100 includes an eyepiece 102 and an optical element 104 coupled to a housing 106. Housing 106 defines an enclosure sized to receive embedded training circuit 108. Optical element 104 includes an objective lens and other components configured to receive light and to direct and focus the light toward optical sensors associated with embedded training circuit 108.
Telescopic device 100 further includes user-selectable buttons 110 and 112 on the outside of housing 106 that allow the user to interact with embedded training circuit 108 to select between operating modes, to adjust settings, and so on. In some instances, the user may interact with at least one of the user-selectable buttons 110 and 112 to select a target within the view area, to initiate laser range finder operations, and so on. In another instance, target selection may be performed by selecting a button on a grip of the firearm, which may be coupled to embedded training circuit 108 through a wired or wireless connection. Further, telescopic device 100 includes thumbscrews 114, 116, and 118, which allow for manual adjustments.
Housing 106 includes a removable battery cover 120, which secures a battery within housing 106 for supplying power to embedded training circuit 108. Housing 106 is coupled to a mounting structure 122, which is configured to mount to a surface of a portable structure (such as a rifle or other firearm) and which includes fasteners 124 and 126 that can be tightened to secure the housing to the portable structure.
In an example, telescopic device 100 is mounted to a firearm as a rifle scope and configured to detect a trigger pull and/or to receive user inputs. A user may view a visual representation of a view area of telescopic device 100. In a first mode, the visual representation may correspond to optical data captured by optical element 104 and provided to optical sensors. In a second mode, the visual representation may correspond to a training environment including one or more targets, which can be presented on a display that is coupled to or part of embedded training circuit 108. Embedded training circuit 108 detects user interactions, such as button presses and trigger pulls, and makes adjustments to the visual representation according to the context.
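As an illustrative sketch of the mode selection just described, the following assumes a simple enumeration of the telescope and training modes; the enum and function names are placeholders rather than disclosed elements.

```python
# Illustrative mode selection; the enum and function are placeholders.
from enum import Enum, auto


class ScopeMode(Enum):
    TELESCOPE = auto()   # pass through data from the optical sensors
    TRAINING = auto()    # present a generated targeting environment


def build_frame(mode: ScopeMode, optical_frame: bytes, training_frame: bytes) -> bytes:
    """Select the source of the visual representation for the display."""
    return optical_frame if mode is ScopeMode.TELESCOPE else training_frame
```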
In one example, the user may interact with a button (such as buttons 110 and 112 or a button on a grip or trigger assembly of an associated firearm) to cause a processor of telescopic device 100 to provide a visual representation of a targeting environment to a display within telescopic device 100. The user may then aim and fire at selected targets within the targeting environment, and embedded training circuit 108 is configured to determine the impact location of the virtual shot based on the visual representation and to selectively alter the position of the target or, in the event of a virtual animal target, the target's response to the impact location. For example, in the event that the impact location is determined to have missed the target, embedded training circuit 108 may determine that the target would flee from the impact location and may show the target fleeing the view area. In another example, if the impact location is determined to have hit the target, embedded training circuit 108 may alter a position of the target within the view area, for example, by displaying an exploding bottle (if the target is a bottle) or by showing an animal target falling to the ground. In general, embedded training circuit 108 determines an appropriate response for the target based on the determined impact location and adjusts the visual representation accordingly.
The above example describes a telescopic device 100 that can be implemented as a rifle scope or as another optical device that provides magnification of a view area. Telescopic device 100 can be implemented as a digital device that can communicate with smart phones, other telescopic devices, and other circuitry. One possible example of a system including embedded training circuit 108 is described below with respect to
Embedded training circuit 108 includes a processor 202 coupled to a display 204 and to a memory 206. Processor 202 is also coupled to one or more input interfaces 208, to sensors 214, and to optical sensors 240. Optical sensors 240 receive directed light from optical element 104 to sense visual elements, for example, when telescopic device 100 is in a telescope mode as opposed to a training mode. Optical sensors 240 provide optical data corresponding to a view area of telescopic device 100 to processor 202.
Sensors 214 include one or more gyroscopes 216, one or more inclinometers 218, one or more accelerometers 220, other motion/orientation sensors 222, or any combination thereof. Sensors 214 communicate motion, incline, and orientation data associated with an orientation of the telescopic device 100 (assuming telescopic device 100 is aligned to the longitudinal axis of the corresponding firearm) to processor 202.
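A hedged sketch of how readings from sensors 214 might be combined into an orientation estimate is shown below; the complementary-filter blend, units, and field names are assumptions for illustration, not the disclosed sensor processing.

```python
# Hypothetical orientation estimate from gyroscope and inclinometer data;
# the blend constant and units (degrees, seconds) are assumptions.
from dataclasses import dataclass


@dataclass
class Orientation:
    pitch: float  # degrees
    yaw: float    # degrees


def update_orientation(prev: Orientation,
                       gyro_rate: tuple,        # (pitch_rate, yaw_rate) in deg/s
                       incline_pitch: float,    # inclinometer reading in degrees
                       dt: float,               # time step in seconds
                       alpha: float = 0.98) -> Orientation:
    """Blend integrated gyroscope rates with the inclinometer reading."""
    pitch = alpha * (prev.pitch + gyro_rate[0] * dt) + (1 - alpha) * incline_pitch
    yaw = prev.yaw + gyro_rate[1] * dt
    return Orientation(pitch=pitch, yaw=yaw)
```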
Input interfaces 208 include a first interface coupled to a trigger assembly 210 of the firearm for receiving a signal corresponding to movement of the trigger shoe. Input interfaces 208 further include a second interface configured to receive one or more signals from a target selection interface 211, such as buttons (on telescopic device 100, on a grip of the firearm, or in another location), a touch screen, or another user interface. Input interfaces 208 also include a third interface configured to communicate with a computing device or another training system 212 through a wired or wireless interface. In an example, input interfaces 208 include one or more transceivers configurable to communicate bi-directionally with the computing device or training system 212.
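For illustration, the following sketch assumes a simple callback mechanism by which a trigger pull signal from trigger assembly 210 could be delivered through an input interface to processing code; the class and method names are hypothetical.

```python
# Hypothetical callback mechanism for delivering trigger pull signals.
from typing import Callable, List


class TriggerInterface:
    """Routes trigger-shoe movement signals to registered handlers."""

    def __init__(self) -> None:
        self._listeners: List[Callable[[float], None]] = []

    def on_trigger_pull(self, callback: Callable[[float], None]) -> None:
        self._listeners.append(callback)

    def signal(self, timestamp: float) -> None:
        """Called when trigger-shoe movement is detected."""
        for callback in self._listeners:
            callback(timestamp)
```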
Processor 202 executing instructions stored in memory 206 operates as a controller configured to provide a visual representation of a targeting environment to a display. Memory 206 is a computer- or processor-readable storage medium configured to store data and processor-executable instructions. Memory 206 stores a visual representation generator 224 that, when executed, causes processor 202 to provide a visual representation and a reticle to display 204. The visual representation includes one or more targets. In one instance, the visual representation can be a combination of captured optical information from optical element 104 and optical sensors 240 and overlay information, such as laser range finding data, a reticle, a visual marker or tag visibly attached to a selected target, and the like. In another instance, the visual representation provided to display 204 can include a generated visual representation of a target environment plus the reticle and other information. Using movement and orientation data from sensors 214, processor 202 can adjust the visual representation to reflect the orientation information.
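A minimal sketch of composing such a visual representation (scene data plus a reticle and tag overlay) is shown below; the Overlay structure and the character-grid scene are assumptions used only for explanation.

```python
# Illustrative overlay composition on a simple character-grid "scene";
# the Overlay structure and markers are assumptions for explanation.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Overlay:
    reticle_center: Tuple[int, int]
    range_text: str = ""
    tags: List[Tuple[int, int]] = field(default_factory=list)


def compose_frame(scene: List[List[str]], overlay: Overlay) -> List[List[str]]:
    """Return a copy of the scene with reticle and tag markers drawn in."""
    frame = [row[:] for row in scene]
    cx, cy = overlay.reticle_center
    if 0 <= cy < len(frame) and 0 <= cx < len(frame[cy]):
        frame[cy][cx] = "+"            # reticle center
    for tx, ty in overlay.tags:
        if 0 <= ty < len(frame) and 0 <= tx < len(frame[ty]):
            frame[ty][tx] = "T"        # visible tag on a selected target
    return frame
```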
Memory 206 further includes trigger pull detection instructions 226 that, when executed, cause processor 202 to detect a trigger pull based on a signal received from trigger assembly 210. Memory 206 also includes impact location calculator instructions 228 that, when executed, cause processor 202 to calculate an impact location of a virtual shot within the visual representation based on orientation and movement information from sensors 214. Memory 206 also includes visual impact result calculator instructions 230 that, when executed, cause processor 202 to calculate a change in the visual representation based on the impact location. Whether a shot hits or misses the target, the impact of the shot should be reflected in the visual representation, for example, by leaving a hole in the target or by kicking up a cloud of dust at the impact location. Additionally, memory 206 includes target position adjustment instructions 232 that, when executed, cause processor 202 to adjust the visual representation to reflect a change in the position of the target based on the impact location. For example, if the selected target is a can or bottle and the impact location indicates that the shot was successful, the can or bottle should move based on the impact location, and target position adjustment instructions 232 are executed by processor 202 to determine a location where the target comes to rest after impact.
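As one hedged illustration of the target position adjustment described above, the following sketch displaces a rigid target (such as a can or bottle) away from the impact point to estimate where it comes to rest; the displacement model and names are assumptions.

```python
# Hypothetical displacement model for a rigid target that is hit.
def resting_position(target_x: float, target_y: float,
                     impact_x: float, impact_y: float,
                     knockback: float = 2.0) -> tuple:
    """Push the target along the line from the impact point through its center."""
    dx, dy = target_x - impact_x, target_y - impact_y
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    return (target_x + knockback * dx / norm,
            target_y + knockback * dy / norm)
```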
Memory 206 further includes target reaction simulator instructions 234 that, when executed, cause processor 202 to determine a reaction by the target (in the event that the target is a live target) to the impact location. In particular, if the shot misses, an animal may be startled by the sound of the impact and may flee the view area. Similarly, if an animal is hit, but the shot is not a “kill shot”, the animal may react to the impact and flee or take evasive action, such as ducking into a nearby hole or hiding in tall grass. Target reaction simulator instructions 234 are used by processor 202 to generate a likely reaction by the target, and the resulting information can be used to update the target position within the visual representation.
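An illustrative sketch of a target reaction model consistent with this paragraph follows; the reaction categories and the use of a random choice for a miss are assumptions rather than disclosed behavior.

```python
# Assumed reaction categories for a live (animal) target.
import random


def simulate_reaction(hit: bool, kill_shot: bool, cover_nearby: bool) -> str:
    """Return a likely reaction of the target to the virtual shot."""
    if hit and kill_shot:
        return "fall"
    if hit:
        return "take_cover" if cover_nearby else "flee"
    # A miss may startle the animal, or it may simply look around.
    return random.choice(["flee", "look_around"])
```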
Memory 206 also includes environmental parameter generator instructions 236 that, when executed, cause processor 202 to calculate environmental parameters, such as wind speed and direction, rain, humidity, barometric pressure, or other environmental conditions. In some instances, such information can be used to adjust the visual representation such as by causing visual elements within the visual representation to bend or move, for example, to make the visual representation more realistic for the user. Further, environmental parameter generator instructions 236 may include randomness functions to simulate variability of environmental parameters, which information can be included within the impact location calculations to predict an impact location, which may be a hit or a miss, depending on the particular shot. Memory 206 may further include shot delay logic 238 that, when executed, causes processor 202 to delay discharge of the associated firearm (after detecting a trigger pull from trigger assembly 210) until a selected target is aligned to the center of the reticle within the visual representation. In an example, a user may interact with target selection interface 211 to select a target and to visually mark the target. In one example, the user selects the target by pressing a target selection button when the target is at a center of the reticle. In another example, the user selects the target by pressing the target selection button, aligning the center of the reticle to the desired target in the visual representation, and releasing the target selection button when the center of the reticle is aligned to the target. In one instance, shot delay logic 238 causes processor 202 to delay the virtual shot until the center of the reticle is aligned to the target; however, user jitter, random environmental parameters, and other variables may cause impact location calculator 228 to determine that the target is missed, in which case target reaction simulator instructions 234 and visual representation generator instructions 224 cooperate to provide a relatively realistic visual representation including a likely reaction by the target to the impact location of the missed shot.
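The following is a minimal sketch, under assumed units and a placeholder drift model, of randomized environmental parameters and their effect on a computed impact point, consistent with the randomness functions described above.

```python
# Assumed environmental model: random wind contributing horizontal drift.
import math
import random
from dataclasses import dataclass


@dataclass
class Environment:
    wind_speed: float      # meters per second
    wind_direction: float  # degrees; 0 = full-value crosswind


def generate_environment() -> Environment:
    """Generate randomized environmental parameters for a virtual shot."""
    return Environment(wind_speed=random.uniform(0.0, 8.0),
                       wind_direction=random.uniform(0.0, 360.0))


def wind_offset(env: Environment, time_of_flight: float) -> float:
    """Horizontal drift (meters) accumulated over the shot's time of flight."""
    return env.wind_speed * math.cos(math.radians(env.wind_direction)) * time_of_flight
```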
As discussed above, processor 202 executes visual representation generator instructions 224 that can produce a visual representation of a targeting environment and a reticle configured to overlay the visual representation. The visual representation of the targeting environment is adjusted automatically by processor 202 executing visual representation generator instructions 224 such that, as the user changes the orientation of telescopic device 100, the visual representation is adjusted to reflect the changing orientation. An example of a visual representation of a view area that may be generated by embedded training circuit 108 for presentation to display 204 of telescopic device 100 is described below with respect to
It should be appreciated that visual representation generator instructions 224 may be configured to cause processor 202 to provide a variety of different visual representations and corresponding targets, including a savannah environment with corresponding animal targets, a jungle environment with corresponding animal targets, a woodland environment with corresponding animal targets, a field with various targets, a target range, a mountainous environment, and the like. In police and military contexts, visual representation generator instructions 224 may be configured to cause processor 202 to provide cityscape environments, jungle environments, mountainous or cavernous environments, and other training environments, including residential scenarios, hostage situation scenarios, and various other training environments, including human or animal targets.
While the above examples have depicted and described a telescopic device that can be used as a gun scope and that includes embedded training, it should be appreciated that the functionality described above can be extended to other telescopic environments that require user training, including microscope environments that could present a visual scenario to a user and then adjust the visual representation based on the user's interactions with the microscope controls to train the user. Further, gaming-type scenarios may also be presented to allow the user to receive firearm training against surreal or imaginary foes. Additionally, though the above-described device and circuitry includes a display, in some instances, the training environment may be presented to a display of a smart phone or tablet computer, to an attached display, or to another telescopic device through a wireless communication channel. Alternatively, telescopic device 100 may communicate with a helmet, glasses, or goggles configured to receive data corresponding to the embedded training environment and to display the data.
In an example, telescopic device 100 is configured to calculate or estimate an impact location corresponding to a ballistics reticle when the user selects a target and to estimate an impact location of a shot relative to the ballistics reticle in response to a trigger pull. The user may interact with the training environment presented on a display of the scope. One possible example of a method of providing embedded training using a telescopic device is described below with respect to
Continuing to 406, an impact location of a shot is determined in response to receiving the trigger pull signal. As mentioned above, processor 202 executes impact location calculator instructions 228 to determine the impact location as a function of the orientation and movement of the gun scope and the environmental parameters calculated by environmental parameter generator instructions 236 at the time the shot was taken.
Continuing to 408, the target is dynamically adjusted within the visual representation in response to determining the impact location. In an example, processor 202 executes visual representation generator instructions 224, target position adjustment instructions 232, and target reaction simulator instructions 234 to determine the result of the shot with respect to the visual representation of the target. If the shot misses, the target may flee, or an object struck by the shot may reflect the impact (such as through the display of a gash or hole). The target reaction and/or the effect of the shot may be calculated and used to adjust the visual representation.
Method 400 represents one possible flow diagram of a method of providing feedback to the user (as part of the embedded training) to reflect the user's shot. Another example of a method is described below with respect to
Moving to 508, orientation information associated with the rifle scope is determined. It should be appreciated that movement and changes in orientation of the rifle scope impact the visual representation, and that processor 202 continuously adjusts the visual representation to reflect movement and orientation of the rifle scope.
Proceeding to 510, processor 202 receives a trigger pull signal. Advancing to 512, timing of a virtual shot is delayed in response to the trigger pull signal until a center of the reticle is aligned to the visible tag (which was applied to the target in 506). In an example, processor 202 tracks movement of the target and adjusts the position of the visible tag within the visual representation to remain attached to the target independent of the position of the reticle. Continuing to 514, an impact location of the virtual shot is calculated with respect to the visual representation based on the orientation information, the timing, and ballistic data. Further, as mentioned above, the impact location may be influenced by movement of the user or the target, by generated environmental parameters, and the like.
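As a hedged sketch of the delay and ballistic calculation described in 512 and 514, the following assumes a simple alignment tolerance and a drag-free gravity-drop estimate; both are placeholders, not the disclosed ballistic data.

```python
# Assumed alignment tolerance and drag-free drop estimate.
def shot_released(reticle: tuple, tag: tuple, tolerance: float = 1.0) -> bool:
    """Release the delayed virtual shot once the reticle reaches the tag."""
    dx, dy = reticle[0] - tag[0], reticle[1] - tag[1]
    return (dx * dx + dy * dy) ** 0.5 <= tolerance


def drop_at_range(range_m: float, muzzle_velocity: float = 850.0,
                  g: float = 9.81) -> float:
    """Approximate gravity drop (meters) over the range, ignoring drag."""
    time_of_flight = range_m / muzzle_velocity
    return 0.5 * g * time_of_flight ** 2
```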
Moving to 516, if the target is hit, method 500 advances to 518 and a visual appearance of the target is altered within the visual representation. For example, if the target is a static target, such as a bull's eye or a tree, the visual representation may be updated to depict a hole in the bull's eye or the tree representing the impact of the shot. If the target is an animal, the target may be updated to depict a wound or to reflect the animal falling to the ground.
At 516, if the target is not hit, method 500 advances to 520 and a response is determined based on the impact location, where the response represents at least one possible reaction by the target in response to the impact location of the virtual shot. For example, if the shot hits a nearby tree, the target may be startled and may flee. Alternatively, the target may look around without moving. The target reaction may include at least some randomness to provide variability in the target's response. Continuing to 522, a position of the target is altered based on determining the response. In other words, the calculated reaction of the target to the miss may be used with the visual representation generator instructions 224 to cause processor 202 to reposition the target within the visual representation to reflect the calculated reaction. In some instances, the target may flee the view area and/or hide.
While the above discussion of
In an example, each telescopic device 100, 100′, and 100″ includes visual representation generator instructions 224 within an embedded training circuit 108 that is configured to provide a visual representation. The visual representation may represent a pre-defined training scenario, and telescopic devices 100, 100′, and 100″ may be configured to share timing information and virtual shot trajectory (impact location data) to synchronize the display of the visual representations, though each telescopic device 100, 100′, and 100″ displays a portion of the visual representation that corresponds to the orientation and movement of that particular telescopic device independent of the others. To the extent that two telescopic devices, such as telescopic devices 100 and 100′, are oriented toward the same view area, the resulting visual representations on the displays of those devices should be synchronized as well, such that their users see the same visual representation. In other instances, telescopic device 100 may transmit the visual representation to the other telescopic devices 100′ and 100″ to allow a shared training experience. In either instance, virtual shot information may be shared between the telescopic devices 100, 100′, and 100″ to update the visual representations on each of their respective displays. In one example, a group of military or police personnel may train with one another on a shared training exercise through a coordinated visual representation.
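For illustration, the following sketch assumes a simple JSON message by which one telescopic device could share a virtual shot event with its peers; the message fields and transport are assumptions, not a protocol defined by this disclosure.

```python
# Assumed JSON message for sharing a virtual shot event between scopes.
import json
import time
from typing import Optional


def make_shot_message(device_id: str, impact: tuple,
                      shot_time: Optional[float] = None) -> str:
    """Serialize a virtual-shot event so peer scopes can update their views."""
    return json.dumps({
        "device": device_id,
        "time": shot_time if shot_time is not None else time.time(),
        "impact": {"x": impact[0], "y": impact[1]},
    })


def apply_shot_message(raw: str, shared_state: dict) -> None:
    """Record a peer's shot event for use in updating the local view."""
    message = json.loads(raw)
    shared_state.setdefault("impacts", []).append(message["impact"])
```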
In conjunction with the systems, circuits, and methods described above with respect to
Although the present invention has been described with reference to preferred embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the scope of the invention.