The present disclosure relates generally to firearms, and more particularly to a firearm and system employing enhanced optics.
Particular embodiments of the disclosure are related to systems and methods for improving training of marksmanship.
Law enforcement and military training, operations and logistics applications all require weapon capabilities which deliver: an accurate measurement of the weapon orientation and point of aim; a recognition of “shot break” and the intended target; a linkage of weapon telemetry data to the individual pulling the trigger; and the ability to move telemetry data off of the weapon to local and/or central data repositories. Successfully supporting the four imperatives above requires advanced technologies as disclosed herein, which tailor form to functional requirements and leverage foundational capabilities to create new, previously unachievable capabilities.
The firearm and system of the present disclosure addresses a range of foundational challenges. For example, with respect to optics, law enforcement and military personnel require optics which can “recognize” multiple threat/targets at varying physical distances and orientations. Once the threat/target is identified, the capability must support “indexing” which assigns priority to each target in terms of risk. The optics must be able to deal with threats/targets presented on the oblique, partially obscured and in low light conditions.
With respect to moving target engagement, weapons skill development (e.g., basic rifle marksmanship) has traditionally relied on training a static shooter to hit a static target at a known distance. Unfortunately, threats do not generally stand still in a fight. Real world weapon engagements frequently occur with both the law enforcement/military personnel moving, as well as the threat actor. To be fully effective, a weapon system must support recognition of threat/target movement (direction and speed), but current systems do not.
With regard to predictive metrics related to marksmanship training, it is currently extremely expensive for law enforcement and military to transport weapon operators to qualification events. Unfortunately, significant numbers of personnel fail to qualify with their assigned weapons. Law enforcement and the military lack an objective mechanism to screen personnel to evaluate their likelihood of qualifying prior to being transported to qualification events.
With regard to optic zeroing, it is well understood that, when installed on a weapon, the optical unit must align with the bore of the weapon. Adjustment must occur in a manner which is easily executed by the operator and must remain fixed/stable during prolonged use during live fire.
Form factor is another issue, as the service weapon for law enforcement is typically a pistol. Adding an optical unit to a pistol requires a form factor which 1) will still fit inside a standard holster; and 2) avoids displacing the pistol flashlight which is typically mounted on the Picatinny rail attachment point. Further, weight must be considered, as attaching an external optical unit, particularly one requiring on-board power, such as a battery having sufficient longevity, must not add so much weight that it affects usage. In addition, the optical unit must be rugged enough to work on pistols, rifles, carbines and automatic rifles.
In addition to the above, law enforcement and the military have limited resources for capability acquisition. However, it is common to see unique, unrelated systems procured to separately support training, operational use and logistics applications. As a result, training may not accurately reflect operational environments (negative training), operational capabilities may not be regularly exercised, and the logistics community may not receive accurate information relevant to supporting law enforcement/military personnel in the field.
By supporting the movement of telemetry data to a wide range of local devices, training and operational data become useful for a wide range of applications, including logistics.
In order to address the problem of identifying targets at varying distances, the presently disclosed device and method integrates multiple optical units/sensors which are optimized for target identification and/or engagement at varying distances from the weapon. The image processor associated with the present system can advantageously interleave video streams from each optical unit to incorporate concurrent target identification at multiple ranges/distances to present the operator with unified target identification within the field of view. In the background, object recognition capability can preferably evaluate the individual video streams, applying object recognition applications to recognize and identify threats/targets within the effective range of each optical unit/image sensor. This enables real-time, rapid target identification across multiple distances and prioritization based upon the object recognition algorithm.
Preferably, the object recognition algorithm is based upon a machine learning algorithm.
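By way of non-limiting illustration, the following sketch shows one way the interleaving and per-range recognition described above could be organized. The class names, the round-robin interleaving policy and the stub detector are hypothetical; the stub merely stands in for a trained machine learning model and is not part of the disclosure.

```python
# Illustrative sketch only: interleave frames from two optical units and
# apply a per-range recognizer to each stream. All names are hypothetical.
from dataclasses import dataclass, field
from itertools import cycle

@dataclass
class Detection:
    label: str
    confidence: float
    range_m: float

@dataclass
class OpticalUnit:
    name: str
    effective_range_m: float
    frames: list = field(default_factory=list)  # stand-in for a live stream

def detect(frame, unit):
    """Stub for an object-recognition model tuned to this unit's
    effective range; a real system would run a trained network here."""
    return [Detection("target", 0.9, unit.effective_range_m)]

def interleaved(units):
    """Round-robin interleave of the units' frames into one unified feed,
    tagging each frame with detections from its own recognizer."""
    streams = [iter(u.frames) for u in units]
    for unit, stream in cycle(zip(units, streams)):
        frame = next(stream, None)
        if frame is None:   # stop when any stream runs out
            return
        yield unit.name, frame, detect(frame, unit)

short = OpticalUnit("short-range", 25.0, frames=["s0", "s1"])
long_ = OpticalUnit("long-range", 80.0, frames=["l0", "l1"])
for name, frame, dets in interleaved([short, long_]):
    print(name, frame, [(d.label, d.range_m) for d in dets])
```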
Advantageously, the presently disclosed device, system and method leverage video generated from each optical unit/sensor to define threat/target movement. The system handles relative movement of the threat/target by quantifying (frame-by-frame) that movement in relation to the operator's point of aim. This onboard capability provides the operator with information on the direction and speed of movement of the threat/target, thereby enabling calculation of an appropriate “lead” within the point of aim to support accurate target engagement.
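A minimal sketch of such frame-by-frame quantification is given below. The angular units, the constant-velocity assumption and all names are illustrative rather than taken from the disclosure.

```python
# Hedged sketch: estimate target velocity relative to the point of aim
# from a per-frame track, and derive a simple constant-velocity "lead".
def movement_and_lead(track, fps, bullet_tof_s):
    """track: list of (target_x, target_y, aim_x, aim_y) per frame, in a
    common angular unit (e.g., mils). Returns the relative velocity per
    second and the aim offset ("lead") for a bullet time of flight
    bullet_tof_s, assuming the target keeps constant velocity."""
    (tx0, ty0, ax0, ay0) = track[0]
    (tx1, ty1, ax1, ay1) = track[-1]
    dt = (len(track) - 1) / fps
    # Relative displacement of the target with respect to the point of aim.
    vx = ((tx1 - ax1) - (tx0 - ax0)) / dt
    vy = ((ty1 - ay1) - (ty0 - ay0)) / dt
    return (vx, vy), (vx * bullet_tof_s, vy * bullet_tof_s)

track = [(0.0, 0.0, 0.0, 0.0), (0.5, 0.0, 0.1, 0.0), (1.0, 0.0, 0.2, 0.0)]
velocity, lead = movement_and_lead(track, fps=30, bullet_tof_s=0.12)
print(velocity, lead)  # target drifting right; lead offset to the right
```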
In addressing marksmanship training, the presently disclosed device, system and method preferably leverage the weapon operator's dry-fire performance to accurately predict the outcome during live-fire qualification events. Preferably, the system can accurately predict qualification scores on existing ranges based on as few as ten dry-fire shots.
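The disclosure does not specify a prediction model; purely as a hypothetical example, a simple regression from dry-fire features to historical qualification scores could look as follows. The feature set and data are invented for illustration.

```python
# Hypothetical sketch: fit a linear predictor from ten-shot dry-fire
# features (e.g., group radius, aim-box area, trigger disturbance) to
# past live-fire qualification scores. Not the disclosed method.
import numpy as np

def fit_predictor(dryfire_features, live_scores):
    """dryfire_features: (n_shooters, n_features); live_scores: (n_shooters,).
    Returns a function mapping dry-fire features to a predicted score."""
    X = np.hstack([dryfire_features, np.ones((len(live_scores), 1))])
    w, *_ = np.linalg.lstsq(X, np.asarray(live_scores), rcond=None)
    return lambda f: float(np.append(np.asarray(f), 1.0) @ w)

# Invented historical data: [group radius (mrad), aim-box area, jerk]
X_hist = np.array([[1.2, 4.0, 0.3], [0.6, 1.5, 0.1], [2.0, 6.5, 0.5]])
y_hist = np.array([31.0, 38.0, 24.0])  # past qualification scores (of 40)
predict = fit_predictor(X_hist, y_hist)
print(round(predict([0.9, 2.5, 0.2]), 1))  # predicted qualification score
```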
Preferably, the device, system and method of the invention provide a novel external and/or a unique internal optic zeroing capability. Both capabilities deliver the ability to align the unit with the bore of the weapon. Independently and/or together, they deliver a rugged, enduring alignment.
Advantageously, the device, system and method of the invention constrain the form factor of the device to ensure that it fits within a standard pistol (handgun) holster. To avoid displacing the pistol flashlight, the present device incorporates both visible and invisible illumination and pointers. Further, the device and system accommodate batteries providing sufficient continuous usage and are rugged enough to work on rifles, carbines and automatic rifles.
Advantageously, the system, device and method according to the invention create a common hardware/firmware/software solution which supports training, operations and logistics applications. Preferably, the capture and interpretation of telemetry data is “agnostic” and is moved from the optical unit to a local device, such as one or more smart phones or other computing devices, one or more head mounted displays (e.g., augmented reality and/or Head-Up display) and/or local data stores which can communicate with the enterprise. By configuring the optical unit for “permanent” attachment to a weapon, weight/balance becomes part of the expectation during training and operations.
Object recognition can be based on recognition of predetermined shaped targets/threats under training conditions (i.e., the training uses predetermined scenarios wherein the trainer organization knows the targets/threats up front). Alternatively, it can be based upon body recognition, such as, for example, described in the article by Ke Sun et al., “Deep high-resolution representation learning for human pose estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5693-5703, 2019. In this latter case, depending on the part of the body at the point of aim, a lethality estimation can be performed in real-time. This estimation can, for example, be used to enable or disable the possibility of shooting, for instance in less-lethal weapons used by law enforcement officials.
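As a hypothetical sketch of such a lethality estimation, the following maps the body part nearest the point of aim to a lethality zone and inhibits the shot for a less-lethal weapon. The zones, the search radius and the inhibit rule are illustrative assumptions; a pose estimator such as the HRNet of Sun et al. would supply the keypoints.

```python
# Illustrative only: body-part lethality estimate at the point of aim.
# Keypoints are hard-coded here; a pose-estimation model would supply them.
LETHALITY = {"head": "lethal", "torso": "lethal",
             "arm": "less-lethal", "leg": "less-lethal"}

def body_part_at(aim_xy, keypoints, radius=20.0):
    """keypoints: {part: (x, y)} in image pixels. Returns the nearest
    body part within `radius` pixels of the point of aim, else None."""
    best, best_d = None, radius
    for part, (x, y) in keypoints.items():
        d = ((aim_xy[0] - x) ** 2 + (aim_xy[1] - y) ** 2) ** 0.5
        if d < best_d:
            best, best_d = part, d
    return best

def may_fire(aim_xy, keypoints, weapon_mode="less-lethal"):
    """For a less-lethal weapon, inhibit the shot when the point of aim
    rests on a zone estimated as lethal."""
    part = body_part_at(aim_xy, keypoints)
    if part is None:
        return True
    return not (weapon_mode == "less-lethal"
                and LETHALITY.get(part) == "lethal")

kp = {"head": (100, 40), "torso": (100, 120), "leg": (95, 220)}
print(may_fire((102, 45), kp))  # False: aim on the head, shot inhibited
print(may_fire((96, 215), kp))  # True: aim on a leg
```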
Advantageously, the present device, system and method can take advantage of data streams from the weapon system (e.g., fire control, optics and other sensors) to collect large volumes of data on every shot taken (e.g., who is taking the shot (biometrics), shooter location, speed of magazine change, position, meteorological data, target type, speed/direction of target, type of weapon, type of round, shot trace, hit/miss, score, lethality calculation, etc.). Preferably, the presently disclosed device and system can support emerging technological advancements which require integration of new sensors, a head mounted display (e.g., HMD/goggles) and/or smart phones/personal digital assistants used by law enforcement and the military. Integration of these capabilities in association with current and next generation squad weapons enables new training, operational and logistics applications.
Embodiments of the present disclosure employ enhanced optics on a firearm to support training and effective use. In various embodiments, hardware elements include advanced optics to support multi-target identification and recognition, moving target engagement for training (with dry fire and/or live fire), augmented reality training, and external and internal optical device zeroing.
Embodiments of the present disclosure assist with law enforcement marksmanship training using their service pistol, enabling practice of drawing the weapon, engaging targets and returning the weapon to the holster.
Embodiments of the present disclosure further assist with supporting the use of pistols in conventional and special operations of military environments. In various embodiments, the same system used for the pistol form factor can be employed to support close quarters battle and marksmanship applications, using a rifle or carbine, at ranges up to eighty meters, for example. Embodiments of the present disclosure further support weapon tracking to enable augmented reality marksmanship training. Embodiments of the present disclosure further enable the exchange of data with a central data repository, creating a persistent, detailed record of a shooter's performance over time. Embodiments of the present disclosure further provide instructional tools to measure shooter performance and recommend strategies and drills to enhance skill. These tools can reduce the requirement for skilled staff, enhance consistency of performance measures and provide leadership with objective reports on efforts to improve marksmanship.
Embodiments of the presently disclosed design utilize a camera-based imaging device to recognize a target, provide accurate measurement of shooter performance and enable intelligent tutoring through the use of built-in tools. Camera-based technologies enable moving target engagement, as well as target engagement at extremely close range. This provides a dry-fire close quarters battle capability.
The present invention discloses a method for analyzing a firearm shot, comprising the steps of:
According to preferred embodiments, the method of the present invention comprises one or more of the following features:
Another aspect of the invention is related to a firearm shot analysis system comprising:
According to preferred embodiments, the system of the present invention comprises one or more of the following features:
Another aspect of the invention is related to a firearm comprising the system according to the invention.
Preferably, the firearm is a handgun fitting in a standard handgun holster.
In a preferred alternative, the firearm is an automatic rifle.
The presently disclosed subject matter now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the presently disclosed subject matter are shown. Like numbers refer to like elements throughout. The presently disclosed subject matter may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Indeed, many modifications and other embodiments of the presently disclosed subject matter set forth herein will come to mind to one skilled in the art to which the presently disclosed subject matter pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the presently disclosed subject matter is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims.
More specifically, the present disclosure encompasses any suitable combinations of embodiments described herein separately.
Example embodiments such as disclosed herein can incorporate a controller having a processor and an associated memory storing instructions that, when executed by the processor, cause the processor to perform operations as described herein. It will be appreciated that reference to “a”, “an” or other indefinite article in the present disclosure encompasses one or more than one of the described element. Thus, for example, reference to a processor encompasses one or more processors, reference to a memory encompasses one or more memories, reference to an optic device encompasses one or more optic devices and so forth.
In order to reduce the dimensions of the electronics, the PCB 50 can be configured as represented in the drawings.
It will be appreciated that the housing 35 can be formed with different external shapes while accommodating the necessary internal components in order to have sufficient form and weight to enable the operation, storage and features described herein. The image processor stored in the housing 35 can interleave video streams from each optical unit 25, 30 to present the operator with a unified field of view. This field of view can be provided to the operator by means of a heads-up display 60, a handheld computing device 65 or a separate user interface on another computing device 70, for example.
The target identification and recognition algorithm can either be directly integrated into the device 15 by means of fixing onto the firearm 20, or integrated in a wirelessly connected device such as a heads-up display 60, a handheld computing device 65, or a separate user interface on another computing device 70. In this latter case, interleaved video streams or individual video streams are wirelessly sent to the remote device, which performs the analysis.
In various embodiments, the weapon mounted imaging device 15 is small enough to be mounted on a pistol and rugged enough to operate on machine guns. Further, the device 15 can capture weapon position (GPS) and orientation (e.g., nine degrees of freedom (9 DOF)) and limited meteorological data. Additionally, the device 15 incorporates a camera capable of: 1) capturing shooter point of aim; 2) object (target) recognition; and 3) (live) moving target engagement metrics. The device 15 can capture large volumes of actual shooter performance data streams to enable development of intelligent tutoring capable of delivering tailored recommendations for training approaches for individuals, teams, and squads, thereby rapidly increasing training proficiency.
Examples of tailored recommendations include, for example, identifying the size of a shooter's “aim box,” measuring the shooter's stability in real-time. Such identification can enable the shooter to clearly quantify what he or she is doing incorrectly. By specifically identifying the symptom of a shooter's instability (e.g., triggering, breathing, sight alignment, etc.), the system enables the shooter to focus training on the specific weakness which is resulting in poor accuracy.
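For illustration, the “aim box” can be understood as the bounding box of the point-of-aim trace over a window before shot break; a minimal sketch follows, with milliradian units assumed.

```python
# Minimal sketch of an "aim box" stability measure: the bounding box of
# the point-of-aim trace. Units and sample data are illustrative.
def aim_box(trace):
    """trace: list of (x, y) point-of-aim samples (e.g., in mrad).
    Returns (width, height, area) of the box enclosing the trace."""
    xs = [p[0] for p in trace]
    ys = [p[1] for p in trace]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return w, h, w * h

trace = [(0.0, 0.1), (0.3, -0.2), (-0.1, 0.0), (0.2, 0.25)]
w, h, area = aim_box(trace)
print(f"aim box {w:.2f} x {h:.2f} mrad, area {area:.2f}")
# A growing box across a session suggests instability (e.g., breathing
# or triggering issues) for the tutoring tools to flag.
```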
In various embodiments, the device integrates “artificial intelligence” into the chipset within the weapon system. This capability enables the weapon to “learn” how the shooter engages targets, provide training, and enhance accuracy in combat. The device can further integrate “computer vision” into the chipset within the weapon system. This capability enables the weapon to identify (e.g., via object recognition) targets, estimate range, and estimate speed of targets. This enables moving target engagement training, as well as supporting operational use. Such artificial intelligence can be accomplished through suitable programming stored in memory and operable by the processor maintained within the housing 35.
In various embodiments, a six-degrees-of-freedom (6 DOF) accelerometer/inertial measurement unit (IMU) is secured within the housing 35 to measure and track weapon position in relation to the target. It will be appreciated that the present device and system are interoperable with the Army IVAS head mounted display supporting augmented reality, and can include position and location tracking for the weapon (real or surrogate) to provide position and location within the training environment. Additionally, the present system can track location using the Nett Warrior & Marine Common handheld communications devices.
Precise inertial measurement can, for example, be used to detect a movement of the firearm before the occurrence of a shot, and to wake up the electronics at an appropriate time before the shot occurs. For example, the precise inertial measurement can be used to detect when a handgun is removed from a holster, or when a rifle is set or positioned in a probable shot position. This makes it possible to record the entire sequence of the firearm's use, thereby improving training capability.
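A simplified sketch of such wake-on-motion logic is shown below; the acceleration threshold and debounce interval are assumptions that a real device would tune per weapon and holster.

```python
# Illustrative wake-on-motion logic for holster-draw detection. The
# 2.0 g threshold and debounce window are assumed values, not taken
# from the disclosure.
import math, time

WAKE_THRESHOLD_G = 2.0   # acceleration magnitude signalling a draw
DEBOUNCE_S = 0.5         # ignore repeated triggers within this window

class WakeOnDraw:
    def __init__(self):
        self._last_wake = -float("inf")

    def sample(self, ax, ay, az, now=None):
        """Feed one accelerometer sample (in g). Returns True when the
        electronics should be woken ahead of a probable shot."""
        now = time.monotonic() if now is None else now
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > WAKE_THRESHOLD_G and now - self._last_wake > DEBOUNCE_S:
            self._last_wake = now
            return True
        return False

imu = WakeOnDraw()
print(imu.sample(0.0, 0.0, 1.0, now=0.0))  # at rest in holster: False
print(imu.sample(1.8, 1.2, 1.5, now=0.2))  # sharp draw motion: True
```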
As described elsewhere herein, the present system can leverage video generated from each optical unit/sensor to define threat/target movement. In specific embodiments, this is done by quantifying (frame-by-frame) relative movement of the threat/target in relation to the operator's point of aim. Such quantification can be accomplished through suitable programming stored in memory and operable by the processor maintained within the housing 35. This onboard capability provides the operator with the direction and speed of movement of the threat/target, enabling an appropriate “lead” within the point of aim to support accurate target engagement.
While static target training teaches good shooter mechanics, gunfighting or close combat engagement typically involves moving targets. The present system teaches users to successfully engage realistic moving targets (e.g., in dry-fire prior to moving to live training and/or operational engagements).
In various embodiments, the present system can be employed to assist with static shooter/moving target and moving shooter/moving target environments. Programming stored on the device 15 can apply algorithms that measure weapon orientation, cant, stability and shot break, for example. Another application can measure a shooter's ability to engage moving and static targets and can report specific shooter issues, generate performance metrics, and provide coaching tools.
By tracking operator dry-fire performance, the presently disclosed device can accurately predict outcomes during live-fire qualification events. Specifically, the device can accurately predict qualification scores on existing ranges based upon as few as ten dry-fire shots. Further, the system can support individualized training based on the identity of the shooter and their role within a given unit. This allows the system to “tune” expectations for novice (e.g., recruits, new personnel, CSS units), intermediate (e.g., riflemen, CS, support elements), and expert (e.g., SDM, sniper, competitive shooter, SNCOs) shooters. This ability to tailor metrics provides foundational support to intelligent tutoring as well as the ability to “roll up” individual performance within a team/squad to assess collective engagement skills.
It will be appreciated that the present device can be used dry-fire and live-fire. As a result, “reps & sets” in the company area can flow into qualification and other live fire events. This enables the quick diagnosis of what the shooter is doing differently on the range (as opposed to in the company area). Integration of the sensor at qualification enables intelligent tutoring and prescriptive remedial training without regard to how well the shooter scores. As a result, the focus becomes improvement and sustainment, rather than Go/No-Go qualification, for example.
With regard to the form, fit and function of the Pistol/Close Quarters Battle (PV/CQB) embodiment, the present device is modular, facilitates aiming and targeting in a dynamic environment and minimizes the amount of support equipment that is necessary to operate and sustain the product. Wireless communications between the device and an external user interface can be accommodated via Bluetooth, WiFi or another wireless communication protocol. It will be appreciated that the device's form factor can be constrained to ensure that it fits within a standard pistol holster. To avoid displacing the pistol flashlight, the device can incorporate both visible and invisible illumination and pointers. In various embodiments, the device is shock resistant to a one-meter drop and can withstand temperatures from −40 to +71° C. The device can be embodied so as to survive cleaning related to NRBC (nuclear, radiological, biological and chemical) decontamination, using typical chemical treatment.
It will be appreciated that the present device enables delivery of new training modalities (e.g., mixed reality/augmented reality) without altering the hardware or software architecture. In various embodiments, a common software package is employed as a common baseline for Windows™, Android™ and iOS™ variants.
In various embodiments, the device can function with dry fire, blank fire, live fire, simunition and simulated recoil during training without adjustment to the system. The system can function for dry fire with “red” reset trigger training pistols such as the FN509 Red and Glock 17R.
The system can work on devices using drop-in bolt(s), recoil simulators, UTM and Simunition. The system can further operate on surrogate/simulated weapons (e.g., Cybergun, Airsoft) using battery or CO2 power. The system can detect the presence of a magazine and magazine changes. The system can detect the user's position, and can calculate the user's time to draw, engage a target and fire the device. The system can further calculate the time it takes a user to complete a magazine change. The system can further detect when the device is removed from the holster.
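By way of illustration only, such timing metrics could be derived from a stream of timestamped events as sketched below; the event names are hypothetical, since the disclosure states only that the system detects these conditions.

```python
# Hypothetical sketch: derive draw-to-shot and magazine-change times
# from timestamped detector events.
def session_metrics(events):
    """events: list of (timestamp_s, name) with names such as
    'holster_exit', 'target_acquired', 'shot', 'mag_out', 'mag_in'."""
    t = {name: ts for ts, name in events}  # last occurrence wins
    metrics = {}
    if "holster_exit" in t and "shot" in t:
        metrics["draw_to_shot_s"] = t["shot"] - t["holster_exit"]
    if "mag_out" in t and "mag_in" in t:
        metrics["mag_change_s"] = t["mag_in"] - t["mag_out"]
    return metrics

events = [(0.00, "holster_exit"), (0.85, "target_acquired"),
          (1.30, "shot"), (3.10, "mag_out"), (5.05, "mag_in")]
print(session_metrics(events))  # draw-to-shot 1.3 s, mag change ~1.95 s
```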
As disclosed herein, the system can recognize targets based on predefined target images, such as may be stored in memory. In various embodiments, target identification is accomplished via software programming employed with the optical device. If needed, QR Code or other identification elements on targets can be provided as passive elements (e.g., stickers).
The system permits targets to be added or removed from the stored library. In combination with eyewear such as goggles, the system supports eye tracking. The system can digitally adjust the zoom automatically to the target and report shooter eye movement during target engagement. In various embodiments, the system incorporates an embedded image stabilization algorithm. The system can operate in standalone mode or in streaming mode and can record full streaming video. In various embodiments, the device streams the video to an end device and the operator can manually delimit the shape of the target.
The system is operable with an advanced combat optical gunsight (ACOG) and can capture the shooter's perspective when one is employed.
Augmented reality (AR) technologies can be employed with embodiments of the device of the present disclosure. In various embodiments, a Heads-Up Display (HUD/goggles) and/or the Nett Warrior device are integrated with the device. It will be appreciated that integration of these capabilities enables new training, operational, and logistics applications. For example, shooter marksmanship performance during live fire and dry fire can be shared and viewed.
Operator movement associated with acquiring the target can be recorded. The user interface can measure this movement and present a graphic showing, for example, whether a correct and quick position is taken before shooting.
In various embodiments, the device can be provided with multiple LED lights. For example, two bicolor (red/green) LED lights can be provided, wherein a first light turns green when power is ON but the device is not recording or streaming, and turns red when recording or streaming. The second light can indicate the level of battery charge remaining, where green indicates good charge and power remaining, amber indicates less than thirty minutes of remaining power and red indicates less than ten minutes of remaining power. To preserve battery life, the system can detect user inactivity and go into “sleep mode”.
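A minimal sketch of this two-LED logic, using the thresholds stated above, follows; the battery estimation itself is assumed to exist elsewhere in the system.

```python
# Sketch of the two-LED status logic described above. The colour
# thresholds follow the text; function names are illustrative.
def status_led(recording_or_streaming: bool) -> str:
    """First light: green when powered but idle, red when active."""
    return "red" if recording_or_streaming else "green"

def battery_led(minutes_remaining: float) -> str:
    """Second light: red under 10 min, amber under 30 min, else green."""
    if minutes_remaining < 10:
        return "red"
    if minutes_remaining < 30:
        return "amber"
    return "green"

print(status_led(False), battery_led(45))  # green green: idle, good charge
print(status_led(True), battery_led(12))   # red amber: recording, <30 min
```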
In embodiments, the video of the training can start from an event called “Start”, which can be triggered in streaming mode when the operator initiates the “start session”. This can occur, for example, when the pistol is coming out of the holster after selecting this option in the software at the session level, or when the rifle/carbine is at an angle at which targets are typically engaged after selecting this option in the software at the session level.
The Start can be triggered in standalone mode when the pistol is coming out of the holster, when the user pushes the pushbutton on the side of the device, or when the rifle/carbine is at an angle at which targets are typically engaged after selecting this option in the software at the session level. The operator can engage a target from the oblique (viewing angle 40° to 160°), and the user can adjust the oblique angle of engagement in the software.
In various embodiments, a shot trace is overlaid on a video of the target engagement and can register the movement three seconds before the shot break and one second following shot break in order to see the shooter's performance (e.g., breathing, holding, aiming, triggering). This can be performed by, for example, continuously recording the video streams, keeping only the last three seconds in volatile memory, and storing those three seconds in permanent memory upon detection of a shot.
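A minimal sketch of this bounded pre-shot buffer follows; the three-second pre-shot and one-second post-shot windows come from the text, while the frame rate and the storage call are illustrative stand-ins.

```python
# Sketch of pre/post-shot capture: frames are kept in a bounded
# in-memory buffer (the "last three seconds"); on shot detection, that
# buffer plus one further second is written to permanent storage.
from collections import deque

FPS = 30
PRE_SHOT_S, POST_SHOT_S = 3, 1

class ShotRecorder:
    def __init__(self):
        self._buffer = deque(maxlen=PRE_SHOT_S * FPS)  # volatile memory
        self._post_frames_left = 0
        self._clip = None

    def on_frame(self, frame):
        self._buffer.append(frame)
        if self._post_frames_left > 0:
            self._clip.append(frame)
            self._post_frames_left -= 1
            if self._post_frames_left == 0:
                self._persist(self._clip)

    def on_shot(self):
        # Freeze the pre-shot history, then collect one more second.
        self._clip = list(self._buffer)
        self._post_frames_left = POST_SHOT_S * FPS

    def _persist(self, clip):
        print(f"stored clip of {len(clip)} frames")  # stand-in for flash write

rec = ShotRecorder()
for i in range(200):
    rec.on_frame(i)
    if i == 150:
        rec.on_shot()  # stores 90 pre-shot + 30 post-shot frames
```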
In embodiments, the 200 milliseconds prior to the shot represent triggering, the one second prior to shot break indicates point of aim, and the preceding two seconds represent the stability of the shooter while engaging the target. The system can also generate an “aim box” using the same algorithms. The system can also capture one or more seconds of follow through. It will be appreciated that various modes of operation can affect the trace duration. For example, in a dynamic mode, the trace could be less than three seconds.
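These windows can be expressed as a simple labelling of trace samples by their time relative to shot break (t = 0), as sketched below.

```python
# Label trace samples by phase, using the windows stated in the text:
# final 200 ms = triggering, the second before that = point of aim,
# the preceding two seconds = stability, after shot break = follow-through.
def phase(t_rel_s: float) -> str:
    if t_rel_s >= 0:
        return "follow-through"
    if t_rel_s >= -0.2:
        return "triggering"
    if t_rel_s >= -1.0:
        return "point of aim"
    return "stability"

for t in (-2.5, -0.6, -0.1, 0.4):
    print(t, phase(t))
```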
The system can record each shot that has been fired within a certain time frame (e.g., less than 100 ms). The system can also “recognize” targets based on imagery captured, such as, for example, an indicator such as a number or code printed on the target. The system can operate with targets of different size through all ranges. For example, “small” targets may be defined as NTM10 and/or 5×5 cm targets and “big” targets can be defined as human size to two humans separated by at least three cm.
Based upon aggregated shooter data, the system can identify specific drills and/or corrective action to improve shooter mechanics. The system can support the export of all shooter metrics and recommended drills/corrective action to a database.
In embodiments, based on multiple (e.g., ten) shots in one or more shooting positions, the system can predict the shooter's probable qualification score and likely level of qualification. Based upon shots taken, and the grouping defined within the system, if appropriate, the system can recommend specific adjustments to mechanically zero the device.
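The disclosure does not state how a zeroing recommendation is computed; one plausible sketch converts the group's mean point of impact offset into minutes of angle and sight clicks. The 1/4 MOA click value and the sign convention are assumptions about a typical optic.

```python
# Hypothetical sketch: recommend a mechanical zero adjustment from a
# shot group, by converting the mean point of impact offset to MOA.
import math

def zero_adjustment(impacts_cm, range_m, moa_per_click=0.25):
    """impacts_cm: list of (x, y) offsets of impacts from the point of
    aim, in cm at the target. Returns (clicks_x, clicks_y) to re-centre."""
    n = len(impacts_cm)
    mean_x = sum(p[0] for p in impacts_cm) / n
    mean_y = sum(p[1] for p in impacts_cm) / n
    # One MOA subtends ~2.91 cm at 100 m; scale to the actual range.
    cm_per_moa = math.tan(math.radians(1 / 60)) * range_m * 100
    clicks_x = round(-mean_x / cm_per_moa / moa_per_click)
    clicks_y = round(-mean_y / cm_per_moa / moa_per_click)
    return clicks_x, clicks_y

group = [(3.1, -4.2), (2.7, -3.8), (3.4, -4.6)]  # ten shots in practice
print(zero_adjustment(group, range_m=25))
# (-17, 23): 17 clicks left, 23 clicks up under this sign convention
```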
In addition to the above, the system can support, score and “grade” predefined scenarios and events, such as multiple position, timed events, and magazine changes (these directly support qualification training events). The system can also support a user's ability to build a “script” which moves the shooter through various shooting positions and/or targets. For example, a script may provide a qualification scenario reflecting position and distance changes. The script may be saved and selected by the shooter for future training.
In various embodiments, the system is constructed so as to have a form resembling or similar in shape and size to the Streamlight TLR-1 or Surefire X300 weapon mounted flashlights, with the intent of fitting inside a flashlight compatible law enforcement retention holster.
The system can be mounted on a firearm and is compatible with pistols, rifles and carbines.
In various embodiments, the zeroing retention performance shall (separately) withstand four hundred rounds of ammunition (e.g., five and seven mm, and/or nine mm), while keeping accuracy within 0.5 minutes of angle (MOA). The optical unit can have an embedded laser pointer which can be independently turned off/on, wherein the pointer is not harmful to the eye. In various embodiments, the laser is capable of pointing to a target at fifty meters in sunlight conditions (i.e., >35 mW).
The optical unit camera can be provided with an IR/night vision capability and can be provided with an illuminator such as a 300 lumen “flashlight” which can be independently turned off and on.
In various embodiments, the system supports up to four concurrent users on a single workstation, such that four different optical units can be in communication with a single user interface. The system can sense downrange imagery and weapon orientation with at least 6 DOF accuracy, according to various embodiments. The system can also sense temperature and barometric pressure through appropriate sensors. The system can further sense orientation of the device and can be provided with user adjustable settings to avoid false shot detection (e.g., by limiting shot recognition when the device is not oriented consistent with target engagement). The system can contain one or more push button(s) located on one side, for example, to control mechanical operations. It will be appreciated that buttons can be placed ergonomically in a manner that enables a shooter to touch buttons without removing his or her trigger hand and/or removing the device from his or her shoulder. It will further be appreciated that the device is operable when the user is wearing gloves.
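As a brief illustration of such orientation gating, a shot event could be accepted only when the device's pitch lies within a user-adjustable window; the window below is an assumed default, not a value from the disclosure.

```python
# Illustrative false-shot guard: accept a shock-detected shot only when
# the IMU pitch is consistent with target engagement.
def accept_shot(shock_detected: bool, pitch_deg: float,
                min_pitch=-30.0, max_pitch=30.0) -> bool:
    """pitch_deg: barrel elevation from horizontal, from the IMU."""
    return shock_detected and (min_pitch <= pitch_deg <= max_pitch)

print(accept_shot(True, 5.0))    # True: level weapon, plausible shot
print(accept_shot(True, -80.0))  # False: muzzle down (e.g., weapon dropped)
```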
In various embodiments, the weapon mounted sensor is designed to provide shooter telemetry data in support of training (e.g., augmented reality training with IVAS/HMD), operations and logistics applications. In embodiments, machine gun training (live and virtual) can be enabled by the weapon system as disclosed herein.
As noted elsewhere herein, the presently disclosed system creates a common hardware/firmware/software solution which supports training, operations and logistics applications. In specific embodiments, the capture and interpretation of telemetry data is “agnostic” and will be moved from the optical unit to local smart phone(s), computer(s), Head Mounted (augmented reality) Displays and/or local data stores which can communicate with the enterprise. By configuring the optical unit for “permanent” attachment to a weapon, weight/balance becomes part of the expectation during training and operations. By supporting the movement of the telemetry data to a wide range of local devices, training and operational data become useful for a wide range of applications, including logistics. Data streams from the weapon system (fire control, optics, and other sensors) assist with collecting large volumes of data on every shot taken (who is taking the shot (biometrics), shooter location, speed of magazine change, position, meteorological data, target type, speed/direction of target, type of weapon, type of round, shot trace, hit/miss, score, lethality calculation, etc.). The present device supports emerging technological advancements which require integration of new sensors, the Head Mounted Display (HMD/goggles) and/or smart phones/personal digital assistants used by law enforcement and the military. Integration of these capabilities in association with current and next generation squad weapons enables new training, operational and logistics applications.
In various embodiments, the local user interface enables operations and training in environments where an augmented reality head mounted display is not available. In addition, the user interface can support local coaching and observation of shooter performance. It will be appreciated that the present system can support individual training, small group coaching and competitive training by providing a local user interface that displays and stores shot-by-shot shooter performance. The user interface enables live training in a barracks, conference room, basketball court, etc., and enables shooter skills assessment, training level validation, and provides a “virtual gate” to higher levels of training as quantified by Army Integrated Weapon Training Strategy (IWTS), for example. In embodiments, the sensor connects with a local user interface to provide immediate shooter feedback, support intelligent tutoring, and provide coaching tools. The system can also share shooter data with an enterprise training management system, for example. The sensor promotes the evolution from individual weapon skills development to collective tasks, which, in turn, enhances squad lethality.
The presently disclosed system can employ an open development architecture that enables interoperability with parallel development by others, for example. In various embodiments, the presently disclosed system uses the Unity development platform, which provides immediate interoperability with Augmented Reality Head Mounted Display capabilities and other critical systems. The present system can generate data and user interfaces accessible in Windows, Android, and iOS operating environments. In addition, the present system fully embraces integration of the Army Nett Warrior and/or Marine Common Handheld programs. Weapon platforms in accordance with the present disclosure can capture and prioritize information relevant to shooter engagement, situational awareness, weapon status (serviceability), and other applications relevant to reducing the burden on team/squad leaders.
Coaching tools embedded within the present system can automatically assess shooter mechanics (stability, point of aim, triggering, etc.) and identify where the user needs coaching. Leveraging the massive volume of shooter data generated by use of the system, individually tailored intelligent tutoring can be delivered in real time during marksmanship training. As described above, examples of tailored recommendations include identifying the size of a shooter's “aim box” to measure stability in real-time and isolating the specific symptom of instability (e.g., triggering, breathing, sight alignment) which is resulting in poor accuracy.
In various embodiments, two communication paths are supported between the weapon system and the local network: 1) wireless (a shorter-distance wireless protocol such as Bluetooth, or a longer-distance wireless protocol such as WiFi); and 2) a USB direct wired connection.
The above-described embodiments of the present disclosure may be implemented in accordance with or in conjunction with one or more of a variety of different types of systems, such as, but not limited to, those described below.
The present disclosure contemplates a variety of different systems each having one or more of a plurality of different features, attributes, or characteristics. A “system” as used herein refers to various configurations of: (a) one or more central servers, central controllers, or remote hosts; (b) one or more imaging devices with integrated optics and components as described herein; and/or (c) one or more personal computing devices, such as desktop computers, laptop computers, tablet computers or computing devices, personal digital assistants, mobile phones, and other mobile computing devices. A system as used herein may also refer to: (d) one or more imaging devices in combination with one or more central servers, central controllers, or remote hosts; (e) a single imaging device; (f) a single central server, central controller, or remote host; and/or (g) a plurality of central servers, central controllers, or remote hosts in combination with one another.
In such embodiments as described above, the device is configured to communicate with a central server, a central controller, a remote host or another device (such as a heads-up display or portable communications device) through a data network or remote communication link.
The machine learning component 108 enables the weapon to “learn” how the shooter engages targets, provide training, and enhance accuracy in combat. The location tracking component 109 enables the location of the device to be tracked. The body/form tracking component 110 detects and records the user's setup and positioning during training and operation. The temperature and/or pressure component 111 records temperature and/or pressure during operation. The IMU 112 detects relative positioning of the device. Communications component 113 facilitates communication between the device and external devices such as a heads-up display, portable communications device, and/or a central or remote computing device, for example. The moving target engagement component 114 executes algorithms to assist the user in engaging moving targets.
It will thus be appreciated that embodiments of the present disclosure provide, in part, a method, device and system for recognizing multiple targets at varying distances and orientations, comprising some or all of:
a firearm;
a camera-based imaging device secured to the firearm, wherein the imaging device comprises multiple optical units/sensors;
a computer processor; and
a computer-readable memory and program instructions encoded by the computer-readable memory for causing the processor, when executed, to perform steps comprising:
recognizing one or more targets via the imaging device; assigning priority to the one or more targets based on risk; and
interleaving video streams from the multiple optical units/sensors.
It will further be appreciated that the embodiments of the present disclosure provide, in part, a method, device and system for weapons skill development, comprising some or all of:
a firearm;
a camera-based imaging device secured to the firearm, wherein the imaging device comprises multiple optical units/sensors;
a computer processor; and
a computer-readable memory and program instructions encoded by the computer-readable memory for causing the processor, when executed, to perform steps comprising:
recognizing one or more targets via the imaging device; and
quantifying relative movement, such as frame-by-frame, of the target in relation to the point of aim of a firearm operator.
It will be appreciated that the embodiments of the present disclosure further provide, in part, a method, device and system for marksmanship training, comprising some or all of:
a firearm;
a camera-based imaging device secured to the firearm, wherein the imaging device comprises multiple optical units/sensors;
a computer processor; and
a computer-readable memory and program instructions encoded by the computer-readable memory for causing the processor, when executed, to perform steps comprising:
tracking operator dry-fire performance of the firearm;
measuring shooter performance and recommending one or more strategies or drills to enhance skill of the operator; and
accurately predicting outcomes for the operator during live-fire qualification events.
It will be appreciated that the embodiments of the present disclosure further provide, in part, a method, device and system for optic zeroing, comprising some or all of:
a firearm;
a camera-based imaging device secured to the firearm, wherein the imaging device comprises multiple optical units/sensors;
internal or external optic zeroing components as described herein;
a computer processor; and
a computer-readable memory and program instructions encoded by the computer-readable memory for causing the processor, when executed, to perform steps comprising:
enabling an operator to adjust the point of aim of the firearm to coincide with sights on the firearm.
In certain embodiments in which the system includes a firearm and imaging device in combination with a central server, central controller, or remote host, the central server, central controller, or remote host is any suitable computing device (such as a server) that includes at least one processor and at least one memory device or data storage device. As further described herein, the imaging device can include at least one device processor configured to transmit and receive data or signals representing events, messages, commands, or any other suitable information between the imaging device and other devices, which may include a central server, central controller, or remote host. The imaging device processor can be configured to execute the events, messages, or commands represented by such data or signals in conjunction with the operation of the imaging device. Moreover, the processor of the additional device, central server, central controller, or remote host is configured to transmit and receive data or signals representing events, messages, commands, or any other suitable information between the central server, central controller, or remote host and the additional device. One, more than one, or each of the functions of the central server, central controller, remote host or other devices may be performed by the processor of the imaging device. Further, one, more than one, or each of the functions of the imaging device processor may be performed by the at least one processor of the central server, central controller, remote host or other device.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in an implementation combining software and hardware, all of which may generally be referred to herein as a “circuit,” “module,” “component,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python or the like, conventional procedural programming languages, such as the “C” programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS).
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that when executed can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions when stored in the computer readable medium produce an article of manufacture including instructions which when executed, cause a computer to implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable instruction execution apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatuses or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
In various embodiments, the display devices include, without limitation: a monitor, a television display, a plasma display, a liquid crystal display (LCD), a display based on light emitting diodes (LEDs), a display based on a plurality of organic light-emitting diodes (OLEDs), a display based on polymer light-emitting diodes (PLEDs), a display based on a plurality of surface-conduction electron-emitters (SEDs), a display including a projected and/or reflected image, or any other suitable electronic device or display mechanism. In certain embodiments, as described above, the display device includes a touch-screen with an associated touch-screen controller. The display devices may be of any suitable sizes, shapes, and configurations.
The at least one wireless communication component 1056 includes one or more communication interfaces having different architectures and utilizing a variety of protocols, such as (but not limited to) 802.11 (WiFi); 802.15 (including Bluetooth™); 802.16 (WiMax); 802.22; cellular standards such as CDMA, CDMA2000, and WCDMA; Radio Frequency (e.g., RFID); infrared; and Near Field Magnetic communication protocols. The at least one wireless communication component 1056 transmits electrical, electromagnetic, or optical signals that carry digital data streams or analog signals representing various types of information.
The at least one geolocation module 1076 is configured to acquire geolocation information from one or more remote sources and use the acquired geolocation information to determine information relating to a relative and/or absolute position of the device. For example, in one implementation, the at least one geolocation module 1076 is configured to receive GPS signal information for use in determining the position or location of the device. In another implementation, the at least one geolocation module 1076 is configured to receive multiple wireless signals from multiple remote devices (e.g., devices, servers, wireless access points, etc.) and use the signal information to compute position/location information relating to the position or location of the device.
The at least one user identification module 1077 is configured to determine the identity of the current user or current owner of the device. For example, in one embodiment, the current user performs a login process at the device in order to access one or more features. Alternatively, the device is configured to automatically determine the identity of the current user based on one or more external signals, such as an RFID tag or badge worn by the current user and that provides a wireless signal to the device that is used to determine the identity of the current user. In at least one embodiment, various security features are incorporated into the device to prevent unauthorized users from accessing confidential or sensitive information.
Filing Document: PCT/EP2020/075390 | Filing Date: 9/10/2020 | Country: WO
Priority Application: 62898260 | Date: Sep 2019 | Country: US