SYSTEM AND METHOD FOR SHOOTER IMAGERY AND TARGET SHOOTING ANALYTICS

Information

  • Patent Application
  • Publication Number
    20250237482
  • Date Filed
    January 16, 2025
  • Date Published
    July 24, 2025
Abstract
An electronic weapon training system is described. The system features one or more targets and a computing device that is configured to concurrently process (i) data associated with shots detected to hit the target to generate metrics associated with these shots and a training session in which the shots occurred and (ii) imagery associated with a shooter captured by a camera integrated as part of the computing device. The shooter imagery is used to identify visual cues that may be used in the selection of drills or exercises performed during the training session or during a future training session.
Description
FIELD

Embodiments of the disclosure relate to the field of real-time data analytics for training systems. More specifically, one embodiment of the disclosure relates to an electronic weapon training system configured to concurrently capture and analyze target shooting results and shooter imagery.


BACKGROUND

Firing ranges are specialized facilities that provide individuals using weapons (hereinafter, “shooters”) with a hands-on opportunity to safely practice the handling of their weapons. These facilities are commonly used by individuals employed by law enforcement, military, or other governmental agencies for weapons training and qualification. Such training and qualification routines are designed to improve combat marksmanship and tactical weapon handling skills, which can be achieved through the development of appropriate neural pathways via task specific repetition over time.


Commonly, firing ranges include paper targets that are positioned a few yards from the shooter. After firing the weapon a few times, the shooter may retrieve the target and analyze the bullet holes to determine where the bullets struck the target. This requires measurements to be undertaken by a person (e.g., shooter, instructor, etc.) to determine compliance with the training/qualification criteria. However, due to the costs of ammunition and replacement targets, such training is expensive. Given this expense, shooters tend to refrain from performing the amount of shooting repetition needed to maintain or improve their weapon handling skills. This lack of training has resulted in a steady decline in shooting performance and weapon safety by law enforcement and individuals at large. An electronic weapon training system is needed to provide a low-cost practice system that may be installed on or off the firing range to facilitate increased expertise and safety in the handling of a weapon.


SUMMARY

An electronic weapon training system is described. The system features one or more targets and a computing device that is configured to concurrently process (i) data associated with shots detected to hit the target to generate metrics associated with these shots and a training session in which the shots occurred and (ii) imagery associated with a shooter captured by a camera integrated as part of the computing device. The shooter imagery is used to identify visual cues that may be used in the selection of drills or exercises performed during the training session or during a future training session.


In an exemplary embodiment, an electronic weapon training system comprises: a computing device including one or more cameras; and an electronic target communicatively coupled to the computing device, wherein the electronic target is configured to (i) detect a plurality of shots provided from a training weapon in which each shot of the plurality of shots corresponds to a light beam coming into contact with a front surface of the electronic target, (ii) compute metrics associated with each shot of the plurality of shots, and (iii) transmit the metrics to the computing device, wherein the computing device is configured to concurrently process (i) the metrics to generate additional metrics associated with a training session during which the plurality of shots are provided from the training weapon and (ii) imagery of a shooter of the training weapon captured by a first camera of the one or more cameras to identify visual cues that may be used in the selection of drills or exercises performed during the training session or during a future training session.


In another exemplary embodiment, the metrics computed for each shot of the plurality of shots, including a second shot of the plurality of shots, include a geographic location of the second shot of the plurality of shots. In another exemplary embodiment, the metrics computed for each shot of the plurality of shots including the second shot include an identification as to whether the second shot is detected to be positioned within a predetermined region of a silhouette image rendered on the front surface of the electronic target. In another exemplary embodiment, the metrics computed for each shot of the plurality of shots including the second shot include a timestamp of the second shot identifying a time of contact with the front surface of the electronic target by the light beam corresponding to the second shot.


In another exemplary embodiment, the computing device includes visual cue analytics logic configured to identify visual cues associated with the imagery of the shooter captured by the first camera by parsing the imagery and conducting analytics on positioning or movement of a body part associated with the shooter against known body part positionings or movements that detrimentally influence shooting accuracy or weapon safety. In another exemplary embodiment, the visual cue analytics logic operates in combination with one or more machine-learning models in conducting analytics on a visual cue directed to the positioning or movement of the body part associated with the shooter to identify whether positioning or movement of the body part needs adjustment or calibration to improve shooting accuracy or weapon safety.


In an exemplary embodiment, an electronic weapon training system comprises: a training weapon to emit a light beam to represent a shot fired from the training weapon; a target made of a light reflecting material or a photoluminescent material; and a computing device including a plurality of cameras including a first camera and a second camera, a processor, and a non-transitory storage medium including shot analytics logic, visual cue analytics logic, and training session control logic, wherein the shot analytics logic is configured to receive imagery associated with the target from the first camera, detect a plurality of shots from the training weapon during a current training session in which each shot of the plurality of shots corresponds to a light beam coming into contact with a front surface of the target, and compute metrics associated with each shot of the plurality of shots including a geographic location of each shot, wherein the visual cue analytics logic is configured to concurrently receive imagery associated with a shooter of the training weapon captured by the second camera and identify visual cues associated with positioning or movement of body parts by the shooter that may be used by the training session control logic in the selection of drills or exercises performed during the current training session or during a future training session.


In another exemplary embodiment, the metrics computed for each shot of the plurality of shots, including a second shot of the plurality of shots, include the geographic location of the second shot of the plurality of shots. In another exemplary embodiment, the metrics computed for each shot of the plurality of shots including the second shot include an identification as to whether the second shot is detected to be positioned within a predetermined region of a silhouette image placed on the front surface of the target. In another exemplary embodiment, the metrics computed for each shot of the plurality of shots including the second shot include a timestamp of the second shot identifying a time of contact with the front surface of the target by the light beam corresponding to the second shot.


In another exemplary embodiment, the visual cue analytics logic of the computing device is configured to operate with one or more machine-learning (ML) models to identify the visual cues associated with the imagery of the shooter captured by the second camera by at least parsing the imagery of the shooter and conducting analytics on positioning or movement of one of more of the body parts associated with the shooter against known body part positionings or movements analyzed by the one or more ML models that detrimentally influence shooting accuracy or weapon safety.


In an exemplary embodiment, a method for an electronic weapon training system comprises: providing a computing device including one or more cameras; coupling an electronic target communicatively to the computing device; configuring the electronic target to (i) detect a plurality of shots provided from a training weapon in which each shot of the plurality of shots corresponds to a light beam coming into contact with a front surface of the electronic target, (ii) compute metrics associated with each shot of the plurality of shots, and (iii) transmit the metrics to the computing device; and configuring the computing device to concurrently process (i) the metrics to generate additional metrics associated with a training session during which the plurality of shots are provided from the training weapon and (ii) imagery of a shooter of the training weapon captured by a first camera of the one or more cameras to identify visual cues that may be used in the selection of drills or exercises performed during the training session or during a future training session.


In another exemplary embodiment, configuring the electronic target includes configuring the electronic target to compute the metrics for each shot of the plurality of shots, including a second shot of the plurality of shots, to include a geographic location of the second shot of the plurality of shots. In another exemplary embodiment, configuring the electronic target includes configuring the electronic target to compute the metrics for each shot of the plurality of shots including the second shot to include an identification as to whether the second shot is detected to be positioned within a predetermined region of a silhouette image rendered on the front surface of the electronic target. In another exemplary embodiment, configuring the electronic target includes configuring the electronic target to compute the metrics for each shot of the plurality of shots including the second shot to include a timestamp of the second shot identifying a time of contact with the front surface of the electronic target by the light beam corresponding to the second shot.


In another exemplary embodiment, providing the computing device includes configuring visual cue analytics logic to identify visual cues associated with the imagery of the shooter captured by the first camera by parsing the imagery and conducting analytics on positioning or movement of a body part associated with the shooter against known body part positionings or movements that detrimentally influence shooting accuracy or weapon safety. In another exemplary embodiment, configuring the visual cue analytics logic includes configuring the visual cue analytics logic to operate in combination with one or more machine-learning models in conducting analytics on a visual cue directed to the positioning or movement of the body part associated with the shooter to identify whether positioning or movement of the body part needs adjustment or calibration to improve shooting accuracy or weapon safety.


These and other features of the concepts provided herein may be better understood with reference to the drawings, description, and appended claims.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:



FIG. 1A is a block diagram of a first exemplary embodiment of an electronic weapon training system that is configured to concurrently capture and analyze shot metrics and shooter imagery.



FIG. 1B is a detailed embodiment of a computing device deployed within the electronic weapon training system of FIG. 1A to process metrics associated with the target shooting results and the shooter imagery.



FIG. 2A is a block diagram of a second exemplary embodiment of an electronic weapon training system that is configured to concurrently capture and analyze target shooting imagery and the shooter imagery.



FIG. 2B is a detailed embodiment of a computing device deployed within the electronic weapon training system of FIG. 2A to capture and analyze both the target shooting imagery and the shooter imagery.



FIG. 3 is an exemplary embodiment of a split-screen illustration performed by the computing device of FIG. 1B or the computing device of FIG. 2B illustrating target shooting results and shooter imagery captured by the computing device.



FIG. 4A is an exemplary embodiment of an interactive screen display illustrating target shooting results including geographic locations of a cluster of shots detected to strike the target during a training session and the aggregate metrics associated with the cluster of shots.



FIG. 4B is an exemplary embodiment of a screen display of metrics associated with a selected shot from the cluster of shots illustrated in FIG. 4A.



FIG. 5A is an exemplary embodiment of an interactive screen display illustrating historical metrics of the target shooting results for a particular shooter during multiple training sessions.



FIG. 5B is an exemplary embodiment of a scatter plot representation of the historical metrics based on deviation in delay times between shots within the training sessions of FIG. 5A.





While each inventive aspect of the disclosure may be subject to various modifications, equivalents, and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will now be described in detail. It should be understood that each inventive aspect is not limited to the particular embodiments disclosed. On the contrary, the intention is to cover modifications, equivalents, and alternative forms of the inventive aspects, as each inventive aspect may be implemented in different combinations of the illustrated embodiments as well as in embodiments different from those illustrated.


DETAILED DESCRIPTION

Embodiments of the disclosure generally relate to an electronic weapon training system configured to concurrently (i.e., at the same time or in an overlapping manner) analyze both shooter imagery and target shooting results such as shot metrics and/or target shooting imagery. According to a first embodiment of the disclosure, the electronic weapon training system features a computing device communicatively coupled to one or more electronic targets. Herein, for this embodiment, an “electronic target” constitutes an electronic device that features one or more components, which are adapted to (a) detect one or more shots (light beams emitted from a training weapon) at a silhouette image with at least one predetermined region during a shooting training session and (b) compute metrics associated with each shot (emitted light beam) detected by the electronic target. The computed shot metrics may include, but are not limited or restricted to, (i) a location of the shot (light beam) striking an area within the silhouette image, (ii) an identification as to whether the detected shot resides within the predetermined region identifying a successful shot (i.e., a “hit”) or outside the predetermined region identified as an unsuccessful shot (i.e., a “miss”), and/or (iii) an assigned time at which the shot occurred (timestamp). The shot metrics may be computed for each shot conducted during a multiple-shot training session.
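

By way of a non-limiting illustration, the per-shot metrics described above may be represented as a simple record. The following Python sketch is not part of the disclosure; the field names and units are assumptions for illustration only.

```python
# A minimal sketch of the per-shot metrics (location, hit/miss, timestamp);
# field names and units are illustrative assumptions, not disclosed format.
from dataclasses import dataclass

@dataclass
class ShotMetrics:
    x_mm: float          # horizontal location of the beam strike on the target
    y_mm: float          # vertical location of the beam strike on the target
    hit: bool            # True if the strike lies within the predetermined region
    timestamp_s: float   # time at which the light beam contacted the target
```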


For the first embodiment of the electronic weapon training system, the component(s) implemented with the electronic target may include logic to detect the shot, logic to compute metrics associated with the shot, and logic to generate and transmit a message to the computing device, where the message includes the shot metrics. The computing device is configured to process the contents of the message, and based on the shot metrics associated with each shot, determine further metrics directed to the shot and the training session during which the shot occurred. These further metrics may include a delay time between successive shots during the training session and/or a distance (spread) from the determined location of the shot to a reference point within the predetermined region corresponding to the intended (aimed) location for the shot. Collectively, shot progression (order) may be further computed based on the shot timestamps.
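

For illustration only, the further metrics described above (shot progression, delay times between successive shots, and spread from the reference point) might be derived from the per-shot metrics along the following lines; the function name and the `ShotMetrics` record from the prior sketch are assumptions rather than the disclosure's implementation.

```python
import math

def derive_further_metrics(shots, ref_x_mm, ref_y_mm):
    """Sketch: order shots by timestamp (shot progression), compute delay
    times between successive shots, and compute each shot's spread from
    the reference point (the intended shot location)."""
    ordered = sorted(shots, key=lambda s: s.timestamp_s)   # shot progression
    delays = [later.timestamp_s - earlier.timestamp_s      # delay between shots
              for earlier, later in zip(ordered, ordered[1:])]
    spreads = [math.hypot(s.x_mm - ref_x_mm, s.y_mm - ref_y_mm)
               for s in ordered]
    return ordered, delays, spreads
```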


According to a second embodiment of the disclosure, the electronic weapon training system features a computing device configured to capture imagery associated with a shot (light beam) contacting a target (e.g., a sheet of light reflective material, a sheet of photoluminescent material, etc.). This imagery, referred to as “target shooting imagery,” may constitute one or more images or a video captured when the shot hits the target. The content of the target shooting imagery may be processed to compute the shot metrics and the multiple-shot training session metrics. The shot metrics may be based, in part, on a first camera deployed within the computing device capturing an image of the light beam (shot) being reflected from the target or an image of a portion of the target illuminated in response to the light beam (shot) emitted from the training weapon contacting the target. The multiple-shot training session metrics may be based, at least in part, on the computed shot metrics.


For both of these embodiments, the computing device features a processor, a non-transitory storage medium, a display, one or more cameras, and one or more communication interfaces. For this architecture, the computing device may be configured to (i) render an image of the detected shots and their locations within the silhouette image formed on the target as an overlay or (ii) render the image of the detected shots and their locations integrated as part of the silhouette image. It is contemplated that light beams (shots) fired by the training weapon may miss the target completely, and as an optional feature, the training weapon may provide a count value of the number of shots fired during a training session to the electronic target for routing to the computing device and/or to the computing device directly to account for errant shots.


Additionally, for both embodiments, the electronic weapon training system further includes a camera to capture and record imagery of the shooter during the training session (hereinafter, “shooter imagery”). The shooter imagery may be analyzed by logic, stored within the non-transitory storage medium and executed by the processor, to identify, in real-time, visual cues associated with the shooter. These “visual cues” may be directed to positioning or movement of a body part of the shooter that may influence shooting accuracy, as well as to certain tactical actions conducted by the shooter. The visual cues may be determined based on real-time analytics by the logic, and in response, the visual cues may be used to initiate drills and/or exercises during the current training session or to initiate specific drills and/or exercises for a subsequent training session conducted by the shooter.


As an illustrative example, the hand positioning on the training weapon, head movement, arm movement, and/or shoulder placement (posture) may constitute visual cues, where these visual cues may signify desired or undesired handling of the training weapon during the training session that may affect shooting performance or safety. Moreover, the raising or lowering of the training weapon, perceived nervousness by the shooter (e.g., shaking hands, legs, etc.), the immediate firing of the weapon in response to external prompts (e.g., startling, or loud audio, visual prompts displayed behind the target, etc.) may identify correct or incorrect tactical actions undertaken by the shooter. The undesired handling of the training weapon and/or incorrect tactical actions by the shooter may cause a decrease in shooting accuracy, unwanted or unintended shot delay, unsafe retention of the training weapon, or the like. Hence, the visual cues may be analyzed by the logic to (i) identify shortcomings in the handling of the training weapon or tactical actions by the shooter that may cause a reduction in shooting accuracy, and (ii) select additional training sessions or real-time changes to the current training session (e.g., drills, exercises, etc.) to improve these shortcomings.
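

As a purely illustrative sketch of how one such visual cue might be quantified, the following function scores shoulder placement (posture) from 2D pose keypoints. The keypoint names, the chosen cue, and the scoring scheme are assumptions, and an upstream pose-estimation model is presumed to supply the keypoints.

```python
def score_shoulder_alignment(keypoints):
    """Toy scoring of one visual cue (shoulder posture) from named 2D
    keypoints, each an (x, y) pixel pair from a pose-estimation model."""
    left = keypoints["left_shoulder"]
    right = keypoints["right_shoulder"]
    # Vertical offset between shoulders relative to their horizontal
    # separation; a large tilt may signify posture that detrimentally
    # influences shooting accuracy.
    tilt = abs(left[1] - right[1]) / max(abs(left[0] - right[0]), 1e-6)
    return max(0.0, 100.0 * (1.0 - min(tilt, 1.0)))   # 0 (poor) to 100 (level)
```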


More specifically, the analytics logic of the computing device may utilize machine learning (ML) models, accessible within a cloud service or stored on-premises, to conduct analytics of certain visual cues associated with the shooter in order to compute a shooter activity score. The shooter activity score may be used by the computing device in the selection of drills and/or exercises stored as training material within the cloud service to be utilized in the current training session or in an upcoming training session for the shooter.
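

A minimal sketch of how such scores might drive drill selection follows; the mapping of cue names to drill identifiers and the selection policy are illustrative assumptions, not a format defined by the cloud service.

```python
def select_drills(cue_scores, drill_library, limit=3):
    """Sketch: pick drills targeting the lowest-scoring visual cues.

    `cue_scores` maps a cue name (e.g., "hand_position") to a 0-100 score;
    `drill_library` maps the same cue names to lists of drill identifiers
    maintained as training material (e.g., within a cloud service).
    """
    weakest = sorted(cue_scores, key=cue_scores.get)[:limit]
    return [drill for cue in weakest for drill in drill_library.get(cue, [])]
```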


In summary, as on-demand combat marksmanship and tactical weapon handling skills are achieved through the development of appropriate neural pathways via task specific repetition over time, analytics logic deployed within the computing device (shot analytics logic and/or visual cue analytics logic) is configured to conduct concurrent, real-time analytics on the target shooting results and shooter imagery to provide more effective training in the use and safe handling of a weapon. Such training includes the following:


Speed and Accuracy Data—For every shot fired, the analytics logic may be configured to record and log precise time data (time since start signal, split times between shots, total time, etc.) and accuracy.


Video Capture—The analytics logic may be configured to use a second camera on a computing device to record video of shooter performance for self-analysis or coaching by an instructor.


Training Log—The analytics logic may be configured to record all training sessions for review and analysis, and to automatically keep a detailed log of each session, accessible for replay, including any associated video.


Performance Analysis—The analytics logic may be configured to provide an ability to analyze shooter time and accuracy data over time, to track improvement and optimize training sessions.


Practice Notification—The analytics logic may be configured to notify the student and/or instructor when a training session is due and/or past due, as scheduled by the student and/or instructor.


Drill and Evaluation Library—The analytics logic may be configured to access an extensive drill library of proprietary and industry standard practice drills and evaluation exercises, where the drill library may be maintained within a storage repository on-premises or as part of a cloud service.


Visual Cues—The analytics logic may be configured to use programmable visual cues to initiate drills and exercises, and to indicate additional real-time tactical actions during a given iteration.


Target Library—The analytics logic may be configured to populate and access an extensive library of available targets, including common standard civilian, law enforcement, and military qualification silhouettes.


Electronic Targets—The electronic targets may be configured to communicate with the computing device (e.g., the analytics logic deployed therein) via wireless connectivity, to provide both time and accuracy data from the targets, and optionally to render images associated with drills and/or exercises for the shooter.


Shot Timer with Video—The analytics logic may be configured to function as a live-fire range shot timer, with video of both the shooter and targets downrange.


Internet Upload—The analytics logic may be configured to upload shooter sessions, including video, to a selected website, social media, and/or specified email and/or smart phone addresses.


Training Weapon Compatibility—The analytics logic may be configured to support a specific training weapon provided as an accessory to the electronic weapon training system or any other light-emitting training weapons.


I. Terminology

In the following description, certain terminology is used to describe aspects of the invention. In certain situations, the terms “logic” and “component” are representative of hardware, firmware, and/or software that is configured to perform one or more functions. As hardware, the logic (or component) may include circuitry having data processing and/or storage functionality. Examples of such circuitry may include, but are not limited or restricted to a hardware processor (e.g., microprocessor, digital signal processor, programmable gate array, microcontroller, an application specific integrated circuit, etc.), wired or wireless receiver/transmitter/transceiver circuitry, semiconductor memory, or combinatorial logic.


Alternatively, or in combination with the hardware circuitry described above, the logic (or component) may be software in the form of one or more software modules, which may be configured to operate as its counterpart circuitry. For instance, a software module may be a software instance that operates as a processor, namely a virtual processor whose underlying operations are based on a physical (hardware) processor. Additionally, a software module may include an executable application, a daemon application, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, a shared library/dynamic load library, or one or more instructions. The software module may be coded in any of a variety of programming languages, such as a lower-level programming language associated with particular hardware or an operating system (e.g., assembly) or a higher-level programming language (e.g., source code). Other programming languages, such as scripts, shell or command languages, or query or search languages, may be used.


The software module(s) may be stored in any type of a suitable non-transitory storage medium, or a transitory storage medium (e.g., electrical, optical, acoustical, or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of a non-transitory storage medium may include, but are not limited or restricted to a programmable circuit, a semiconductor memory, non-persistent storage such as volatile memory (e.g., any type of random-access memory “RAM”), persistent storage such as non-volatile memory (e.g., read-only memory “ROM,” power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device.


The term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software and/or firmware.


The symbol “(s)” represents one or more quantities of an item. For example, the term “component(s)” represents one or more components.


The term “target” is directed to any object positioned within the aim of a shooter or other marksperson, intended to receive an incoming light beam representative of a shot fired by a training weapon. For example, the target may include a light reflective or photoluminescent material placed on a front surface of the target. Additionally, or in the alternative, the target may operate as an electronic target, which features components to (i) detect an occurrence and location of the incoming light beam pertaining to a shot conducted by a shooter (independent of or in relation to a reference point operating as an intended shot location), (ii) compute shot metrics, and (iii) transmit these metrics as a message to a computing device. A silhouette image may be featured on the front surface of the target persistently or may be rendered as a programmable, interchangeable, displayed image by the electronic target.


The term “computing device” may constitute a commercial electronics device such as a laptop, a smartphone, a tablet, a wearable (e.g., smart glasses, headset, etc.), or the like. According to one embodiment, the “training weapon” may be construed as a physical device that resembles a handgun or rifle, but emits a light beam (e.g., a laser beam, etc.) upon trigger activation. Alternatively, the training weapon may be construed as an actual weapon (e.g., handgun, rifle, etc.) with an accessory adapted to the weapon that causes a light beam to be emitted proximate to a muzzle opening in response to trigger activation of the weapon.


A “message” generally refers to information in a prescribed format and transmitted in accordance with a suitable wired or wireless connectivity scheme such as wireless peer-to-peer communications (e.g., Bluetooth™, etc.), wireless networks (e.g., Wireless Fidelity “WiFi” networks such as WLANs, etc.), cellular, or the like. Each message may be in the form of one or more packets, frames, or any other series of bits having the prescribed format.


The term “transmission medium” generally refers to a physical or logical communication link (or path) between the target and a computing device. For example, as a physical communication path, transmission medium may be in the form of electrical wiring, optical fiber, or a cable. As a logical communication link, the transmission medium may be a wireless channel established between components within the target and/or computing device that support wireless transmissions such as Bluetooth™, radio frequency (RF) or other wireless signaling.


Finally, the terms “or” and “and/or” as well as the symbol “/” positioned between two elements are to be interpreted as inclusive or meaning any one or any combination. As an example, “A or B,” “A and/or B,” or “A/B” mean “any of the following: A; B; A and B.” Likewise, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps, or acts are in some way inherently mutually exclusive.


As this invention is susceptible to embodiments of many different forms, it is intended that the present disclosure is to be considered as an example of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described.


Various embodiments of the disclosed invention will be described in reference to one or more accompanying drawings. Herein, the references to an “illustrative embodiment” and an “exemplary embodiment” are used as examples indicating that particular features, characteristics, and/or structures may be included in at least the described embodiment and may be included in other embodiments or non-illustrated versions of an embodiment. The embodiments are directed to articles of manufacture, systems, software modules, methods, or the like.


Herein, embodiments of the disclosure provide advancements in the field of electronic weapon training systems by providing an automated feedback system that concurrently monitors both the target shooting results and the visual cues of the shooter to improve overall safety and performance for the shooter. Additionally, the electronic weapon training system provides real-time feedback to the shooter and/or instructors for each shot, the entire training session, and even multiple training sessions, in which historical target shooting results and visual cues may be accessed and computations made to show performance trends. The real-time feedback provides shooters with information for conducting adjustments and calibrations to their weapon handling and/or tactical actions.


II. Electronic Weapon Training System

Referring to FIG. 1A, a block diagram of a first exemplary embodiment of an electronic weapon training system 100 is shown, where the electronic weapon training system 100 is configured to concurrently capture and analyze target shooting results and shooter imagery. Herein, for this illustrative embodiment, the electronic weapon training system 100 features a computing device 110, which is communicatively coupled to one or more electronic targets (hereinafter, “electronic target(s)” 120) and a cloud service 130. The cloud service 130 is adapted to operate as a repository for storage of (i) target shooting results 132 associated with one or more training sessions conducted by a particular shooter (determined and categorized by a shooter identifier (ID) 140 received as input by the computing device 110 prior to commencing a training session), (ii) visual cues 134 associated with the shooter (or shooter imagery), (iii) training session materials 136, and/or (iv) machine-learning (ML) models 137. The training session materials 136 include a collection of proprietary and industry standard practice drills and evaluation exercises that are available for selection by the computing device 110 based on the target shooting results 132 and/or the visual cues 134.


As further shown in FIG. 1A, a training weapon 150 may be adapted with a trigger that, upon activation (e.g., depressed by a shooter 145), causes the training weapon 150 to transmit a light beam 160. The light beam 160, if accurately aimed, is intended to contact the electronic target 120, namely contact a portion of a silhouette image 170 operating to define a boundary for an entity targeted by the shooter 145. According to one embodiment of the disclosure, the electronic target 120 may include a light detecting component 122, such as a layer of photo-sensitive material or a layer of photoluminescent material for example, positioned across a front surface 123 of the entire electronic target 120 or positioned within the silhouette image 170. As shown, the silhouette image 170 features at least one predetermined region 172 towards which the shooter 145 aims and fires the training weapon 150 (i.e., emits the light beam 160). The light beam (shot) 160, upon coming into contact with the light detecting component 122, produces a point of contact 174 on the electronic target 120. Also, upon detection of the light beam 160, electrical signaling from that contact point 174 is provided to components 124 deployed within the electronic target 120.


According to one embodiment of the disclosure, the components 124 may include circuitry 126 configured to assist in (a) detecting the light beam 160 in contact with the electronic target 120, (b) determining a geographic location of the contact point 174 on the silhouette image 170, and (c) determining whether the contact point 174 resides within the predetermined region 172. The circuitry 126 may include a processor 128 to compute metrics associated with the detected light beam 160 (hereinafter “shot metrics”). These shot metrics may include, but are not limited or restricted to (i) a geographic location 175 of the shot (light beam 160) striking the electronic target 120, inside or outside the silhouette image 170, (ii) an identification 176 as to whether the detected shot resides within the predetermined region 172 representing a successful shot (i.e., a “hit” 176) or resides outside the predetermined region 172 representing an unsuccessful shot (i.e., a “miss”), and/or (iii) a time of the shot (timestamp) 177. Herein, the shot metrics 175-177 may be computed for each shot fired by the training weapon 150 during a training session and hitting the electronic target 120 (i.e., multiple light beam strikes on the electronic target 120), where each of the shot metrics may be included within a message 180 provided over a transmission medium established between the electronic target 120 and the computing device 110.
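

One plausible encoding of the message 180 is sketched below; the JSON layout and field names are assumptions for illustration only, as the disclosure prescribes no particular wire format.

```python
import json
import time

# Illustrative payload for message 180 carrying the shot metrics 175-177
# (geographic location, hit/miss identification, and timestamp).
message_180 = json.dumps({
    "shot": {
        "location": {"x_mm": 12.5, "y_mm": -3.0},   # geographic location 175
        "hit": True,                                 # identification 176
        "timestamp_s": time.time(),                  # timestamp 177
    },
})
```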


The computing device 110 is configured to receive the message 180 and utilize these shot metrics 175-177 to generate additional metrics 178 associated with each particular shot such as a distance (spread) from the location of the shot to a reference point 173 corresponding to an intended location of the shot within the predetermined region 172. The computing device 110 is further configured to generate additional metrics 179 associated with the multiple-shot training session such as a delay time between successive shots or average delay time between shots, accuracy metrics (#hits/total shots (hits & misses)), maximum spread range, or the like. Also, the computing device 110 is configured to generate historical metrics associated with multiple training sessions for the shooter 145 taking into account the metrics 175-179 for each of the shots conducted during the current training session to identify improvement or deterioration in shooting performance.
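

For illustration, the session-level metrics named above might be aggregated as follows; this builds on the earlier sketches and is not code from the disclosure.

```python
def aggregate_session_metrics(shots, delays, spreads):
    """Sketch: aggregate accuracy (#hits/total shots), average delay time
    between shots, and maximum spread range for one training session."""
    hits = sum(1 for s in shots if s.hit)
    return {
        "accuracy": hits / len(shots) if shots else 0.0,
        "avg_delay_s": sum(delays) / len(delays) if delays else 0.0,
        "max_spread_mm": max(spreads, default=0.0),
    }
```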


The computing device 110 further includes at least one camera 185, which is configured to capture imagery associated with the shooter 145, such as video associated with the shooter 145 or one or more images associated with the shooter 145 (e.g., a single image, a series of successive images, etc.). Upon processing this shooter imagery, the computing device 110 is configured to identify visual cues 184, such as characteristics associated with the handling of the training weapon 150 by the shooter 145 for example, which may require adjustment or calibration. The visual cues 184 may be used by the computing device 110 to identify specific drills and/or exercises to be conducted during the current training session or during a future training session to improve shooting performance by reinforcing or correcting certain activities by the shooter 145. The drills and/or exercises may be retrieved by the computing device from the cloud service 130.


The computing device 110 is configured to provide an aggregate 182 of the metrics (provided from the message 180 and generated by the computing device 110) and the visual cues 184 (with or without the shooter imagery in its entirety) to the cloud service 130. The shooter ID 140 would be included with the aggregate metrics 182 and the visual cues 184 to properly store the data as historical data associated with the shooter 145. The data may be retrieved by the computing device 110 to perform computations for rendering training session metrics, to determine future drills and/or exercises that address shortcomings in the target shooting results and/or shooter activity, or to support shooter/instructor review.


Referring to FIG. 1B, a detailed embodiment of the computing device 110 deployed within the electronic weapon training system 100 of FIG. 1A is shown. According to this embodiment, the computing device 110 features a processor 190, a non-transitory storage medium 191, a plurality of interfaces 192 (e.g., Bluetooth™ interface 192A and a wireless network interface 192B), one or more cameras (e.g., the first camera 185 and an optional second camera 193), a display screen 194, and/or a data store 199. Herein, the Bluetooth™ interface 192A is configured to receive the message 180 from the electronic target 120 and provide content within the message 180 (e.g., metrics 175-177) to analytics logic 195, which includes shot analytics logic 196, visual cue analytics logic 197, and training session control logic 198.


More specifically, as shown in both FIGS. 1A-1B, the analytics logic 195 is stored within the non-transitory storage medium 191 and executable by the processor 190. Herein, the shot analytics logic 196 is configured to (i) conduct further analytics on the content of the message 180 (e.g., shot metrics 175-177) and (ii) generate the aggregate metrics 182. The shot analytics logic 196 is configured to provide the aggregate metrics 182 to the training session control logic 198, where the aggregate metrics 182 and/or scoring 183 associated with the visual cues 184 (determined by the visual cue analytics logic 197) may be used to determine drills and/or exercises to be acquired from the proprietary and industry standard practice drills and evaluation exercises stored as part of the training session materials 136 within the cloud service 130. The training session control logic 198 may be further configured to upload the aggregate metrics 182 and/or visual cues 184 (and scoring 183 thereof) to the cloud service 130 for storage, retrieve specific stored aggregate metrics 182 and/or visual cues 184 for selected training sessions for analysis by an instructor (or shooter), and/or retrieve the stored aggregate metrics 182 and/or visual cues 184 to compute historical metrics for rendering on the display screen 194.


The visual cue analytics logic 197 is configured to conduct analytics on the incoming imagery captured by the camera 185 (“shooter imagery”) to identify the visual cues 184 from the captured shooter imagery. From these visual cues 184, the visual cue analytics logic 197 may be configured to operate with the ML models 137 within the cloud service 130 to conduct analytics of certain visual cues 184 associated with the shooter 145 in order to generate scores associated with various body part positionings and movements that are considered to influence shot accuracy more than a prescribed amount. The ML models 137 may be configured to identify known body part positionings and/or movements that detrimentally influence shooting accuracy or weapon safety and score visual cues associated with such positioning and/or movement to prompt adjustment or calibration. These scores may be utilized by the training session control logic 198 to select drills and/or exercises stored within the cloud service 130 to adjust the current training session or customize a future training session with the goal of adjusting positioning and/or movement of the shooter 145 to improve shooting performance. Also, the visual cues 184 may be analyzed to interpret tactical actions by the shooter, and these tactical actions may be scored to identify recommended and non-recommended (or unsafe) tactics.


As an illustrative example, the visual cue analytics logic 197 and the ML models 137 may conduct analytics on the shooter's hand position and assign a score thereto. If the score falls below a prescribed value, the hand position may be deemed unacceptable or unsafe. Upon receiving the visual cue and its assigned score from the visual cue analytics logic 197, the training session control logic 198 may determine that certain drills and/or exercises maintained within the cloud service 130 should be conducted in the current training session or a future training session to encourage the shooter to adjust her or his current hand position to achieve better shooting accuracy and/or better weapon safety. For instance, drills and/or exercises associated with a future training session may be selected from the cloud service 130, where the selected drills and/or exercises are designed to encourage adjustment or calibration of the shooter's hand position (e.g., drills and/or exercises where shooting performance is likely diminished by the current hand position and more favorable with the recommended hand position). Similar scoring and training session selection may be conducted for different types of visual cues.


In general, according to this embodiment of the disclosure, the analytics logic 195 may be configured to perform tasks in a plurality of operating modes: (1) computation mode, (2) display mode, and/or (3) storage mode. When operating in “computation” mode, the analytics logic 195 generates the aggregate metrics 182 based on metrics provided from the electronic target 120. These computations are performed by the analytics logic 195 to offload processing from the processor 128 implemented within the electronic target 120, such as the computing of spread metrics 178 or delay time metrics 179, for example.


Additionally, when operating in “computation” mode, the analytics logic 195 is configured to parse the shooter imagery captured by the camera 185 to identify visual cues that may warrant subsequent re-training to adjust and calibrate the activities underlying these visual cues. As an illustrative example, hand positioning on the training weapon captured as part of the shooter imagery may warrant adjustment or calibration for improved safety and/or improved shooting accuracy. Also, certain body placement or movement (e.g., head/arm/shoulder placement or movement) captured by the camera 185 as part of the shooter imagery may warrant adjustment or calibration to achieve improved shooting accuracy.


When operating in “display” mode, the analytics logic 195 is configured to generate screen layouts to be rendered on the display screen 194. Various types of screen layouts may display the aggregate metrics 182 (or portions thereof) associated with the current training session, the metrics associated with a particular shot conducted by the shooter 145 during the current training session, or historical metrics associated with one or more prior training sessions or the collective metrics for multiple training sessions. Further illustrative embodiments of the screen layouts are shown in FIGS. 4A-4B and FIGS. 5A-5B. Additionally, as shown in more detail in FIG. 3, the analytics logic 195 may be configured to simultaneously render the shooter imagery captured by the first camera 185 and imagery of the electronic target 120 as captured by an optional, second camera 193 on the display screen 194 as a split-screen illustration.
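

A minimal sketch of composing such a split-screen frame follows, assuming OpenCV and two already-captured frames; the common display height is an arbitrary choice.

```python
import cv2

def compose_split_screen(target_frame, shooter_frame, height=480):
    """Sketch: place target imagery and shooter imagery side by side,
    resized to a common height, for rendering on the display screen."""
    def fit(frame):
        h, w = frame.shape[:2]
        return cv2.resize(frame, (int(w * height / h), height))
    return cv2.hconcat([fit(target_frame), fit(shooter_frame)])
```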


When operating in “storage” mode, the analytics logic 195 is configured to generate a message, including the aggregate metrics 182 and/or visual cues 184 (or portions of the shooter imagery) for transmission via the wireless network interface 192B. The message may include the shooter ID 140 for the cloud service 130 to properly categorize the content to the appropriate shooter, as the cloud service 130 may maintain target shooting results and shooter imagery for a number of shooters.
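

The upload performed in “storage” mode might resemble the following sketch; the endpoint URL and JSON layout are illustrative assumptions only.

```python
import json
import urllib.request

def upload_session(shooter_id, aggregate_metrics, visual_cues,
                   url="https://example.invalid/sessions"):
    """Sketch: bundle the aggregate metrics and visual cues under the
    shooter ID so the cloud service can categorize the content."""
    body = json.dumps({
        "shooter_id": shooter_id,
        "metrics": aggregate_metrics,
        "visual_cues": visual_cues,
    }).encode("utf-8")
    request = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return response.status   # e.g., 200 on success
```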


The computing device 110 further includes the data store 199 as internal storage for metrics, imagery, and other information. For example, the data store 199 is configured to retain the target shooting results 132 (e.g., aggregate metrics 182, etc.), visual cues 184, content associated with the drills and/or exercises for training sessions, optionally different silhouette images for uploading from the computing device 110 to the electronic target 120 via one of the interfaces 192, or the like.


Referring to FIG. 2A, a block diagram of a second exemplary embodiment of an electronic weapon training system 200, which is configured to concurrently capture and analyze target shooting results and shooter imagery, is shown. Herein, the electronic weapon training system 200 features a computing device 210 and at least one target 220. In contrast with the electronic target(s) 120 of FIG. 1A, the target 220 is made of one or more sheets of light reflective and/or photoluminescent material, without being implemented with circuitry to capture the geographic location of a detected shot 255 (i.e., the light beam 250, emitted in response to activation of a trigger of the training weapon 150, contacting the target 220). Herein, the target 220 includes a silhouette image 222, which features one or more predetermined regions 224 and 226. Each predetermined region 224 and 226 includes a reference point 225 and 227, respectively. The reference points 225 and 227 identify the precise area at which a shot (light beam emitted from a training weapon) is intended.


As further shown in FIG. 2A, the computing device 210 includes a first camera 230 and a second camera 240. In lieu of the target 220 featuring components to establish wireless connectivity with the computing device 210 and provide shot metrics thereto, the computing device 210 utilizes the first camera 230 to capture imagery 290 of the light beam 250 contacting the target 220 (e.g., the silhouette image 222 presented on the target 220) as a shot fired from the training weapon 150 by the shooter 145. This imagery (hereinafter “target shooting imagery” 290) may constitute video or a series of images (e.g., one image or multiple images in succession). From the target shooting imagery 290, the computing device 210 is configured to determine shot metrics 260 and multiple-shot training session metrics 265.


More specifically, from the imagery captured by the first camera 230, the computing device 210 is configured to determine the shot metrics 260. As an illustrative example, the shot metrics 260 may include the geographic location 261 of the shot (light beam 250) striking the target 220, inside or outside the silhouette image 222. Where the target 220 is made of light reflective material, the computing device 210 is adapted to compute the geographic location 261 of the shot 255 based on the point of contact from which the light beam 250 is reflected. Where the target 220 is made of a photoluminescent material, the shot 255 leaves a mark at the point of contact with the material, and the computing device 210 computes the geographic location 261 based on the location of the mark.
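

As an illustrative sketch of the reflective-target case, the bright point of contact might be located in a camera frame as follows, using OpenCV; the brightness threshold is an arbitrary assumption.

```python
import cv2

def locate_beam_strike(frame_bgr, threshold=240):
    """Sketch: threshold the grayscale frame to isolate the saturated
    bright spot produced by the light beam and return its pixel centroid."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    moments = cv2.moments(mask)
    if moments["m00"] == 0:
        return None   # no strike visible in this frame
    return (moments["m10"] / moments["m00"],    # x centroid (pixels)
            moments["m01"] / moments["m00"])    # y centroid (pixels)
```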


Besides the geographic location 261, the shot metrics 260 may further include (i) an identification 262 as to whether the detected shot resides within a predetermined region 224 or 226 representing a successful shot (i.e., a “hit”) or resides outside the predetermined region 224 or 226 representing an unsuccessful shot (i.e., a “miss”), (ii) a time of the shot (timestamp) 263, and/or (iii) a distance 264 (spread) from a location of the reference point 225 or 227 corresponding to an intended location of the shot directed to the predetermined region 224 or 226, respectively.


The computing device 210 is further configured to generate the multiple-shot training session metrics 265, such as a delay time 266 between successive shots, an average delay time 267 between shots, shot accuracy 268 (number of successful shots divided by total shots, successful and unsuccessful), a maximum spread range 269, or the like. An aggregate of at least the shot metrics 260 and the multiple-shot training session metrics 265 is referred to as “aggregate metrics” 270.


The computing device 210 is further configured to utilize the second camera 240 to capture shooter imagery 292, such as a video, an image, or a series of images associated with the shooter. From the captured shooter imagery 292, the computing device 210 is configured to identify visual cues 275, which may cause selection of different drills and exercises for future training sessions to adjust or calibrate different body part placement or movement by the shooter 145 as well as tactical actions (e.g., when to raise/lower the weapon, taking of head or center-mass shots, etc.) interpreted from the shooter imagery 292.


Referring to FIG. 2B, a detailed embodiment of the computing device 210 deployed within the electronic weapon training system 200 of FIG. 2A is shown. Herein, the computing device 210 features a processor 280, a non-transitory storage medium 281, a wireless network interface 282, a plurality of cameras 283 (e.g., a first camera 230 and a second camera 240), a display screen 284, and/or a data store 299. The wireless network interface 282 provides a communication path with the cloud service 130 to provide the aggregate metrics 270 and the visual cues 275 thereto.


As shown, the non-transitory storage medium 281 provides storage for analytics logic 285, including shot analytics logic 286, visual cue analytics logic 287 and training session control logic 288, which are accessible and executable by the processor 280. The shot analytics logic 286 is configured, when executed by the processor 280, to conduct analytics on the target shooting imagery 290 captured by the first camera 230, namely video or a series of images of the shot (light beam) 250 contacting the target 220. From the target shooting imagery 290, the shot analytics logic 286 is configured to determine the shot metrics 260, such as the geographic location 261 of the shot (light beam 250) striking the target 220, the identification 262 as to whether the detected shot was a “hit” (within the predetermined region 224 or 226) or a “miss” (outside the predetermined region 224 or 226), the shot timestamp 263, and/or the distance (spread) 264 from the shot to the closest reference point 225 or 227. The shot metrics 260 may be displayed on a representation of the target 220 or as part of screen layout selected to convey the shot metrics 260. For example, the geographic location 261 of the shot 250 may be illustrated on the display screen 284 as a contact point 330 on the silhouette image 222 of the target 220 as shown in FIG. 3.


Additionally, from the target shooting imagery 290, the shot analytics logic 286 is further configured, when executed by the processor 280, to conduct analytics on the target shooting imagery to determine and generate the multiple-shot training session metrics 265, such as the shot delay times 266, the average shot delay time 267, the shot accuracy 268, the maximum spread range 269 between the shots conducted during the training session, or the like. The multiple-shot training session metrics 265 may be represented in a selected screen layout as shown in FIG. 4A.


The visual cue analytics logic 287 is configured to conduct analytics on imagery captured by the second camera 240, namely a video or image(s) of the shooter 145 (hereinafter “shooter imagery” 292). Herein, the visual cue analytics logic 287 is configured to parse the shooter imagery 292 and identify the visual cues 275 that, with adjustment and/or calibration, may improve shooting performance or safety. As an illustrative example, hand positioning on the training weapon 150 captured as visual cues 275 from the shooter imagery 292 may warrant adjustment or calibration for improved safety and/or improved shooting accuracy. Additionally, other body part placement or movement (e.g., head movement, arm movement, shoulder placement, etc.) may be captured as one of the visual cues 275 from the shooter imagery 292. The visual cues 275 (and scores associated with these visual cues 275 as described below) may be provided to the training session control logic 288 for selecting drills and/or exercises to adjust or calibrate placement or movement of the body part, for rendering on the display screen 284, and/or for uploading for storage within the cloud service 130 for subsequent retrieval.


Referring still to FIG. 2B, when executed by the processor 280, the visual cue analytics logic 287 may be configured to operate in cooperation with machine-learning (ML) models stored as part of the training session materials 136 within the cloud service 130 to conduct analytics of certain visual cues 275 associated with the shooter 145 in order to generate scores associated with various body part positioning and movement that significantly influence shot accuracy. These scores may be received and analyzed by the training session control logic 288 to select exercises and/or drills to be performed in the current training session or a future training session. The exercises and/or drills are stored within the cloud service 130, where these exercises and/or drills are categorized toward adjusting or calibrating different visual cue types to improve shooting performance. Also, the visual cues 275 may be analyzed to interpret tactical actions by the shooter 145, and these tactical actions may be scored to identify recommended and non-recommended (or unsafe) tactics.


As another illustrative example, the visual cue analytics logic 287 and the ML models may conduct analytics on the shooter's arm position and assign a score thereto. If the score falls below a prescribed value, the arm position may be deemed less effective or unsafe, and further training sessions may be directed to include exercises and/or drills to adjust the current arm position toward positions more consistent with acceptable (or safer) industry practices during weapon discharge. For instance, exercises and/or drills may be selected from the collection of training materials within the cloud service 130 to cause the shooter to adjust her or his arm position to maintain or improve shot accuracy. Similar scoring and training session selection may be conducted for different types of visual cues 275.


The training session control logic 288 is configured to receive the aggregate metrics 270 from the shot analytics logic 286 and the scores/visual cues from the visual cue analytics logic 287. Based on this information, the training session control logic 288 may be used to determine particular drills and/or exercises to be acquired from the cloud services 130 directed to improving target shooting results and/or adjusting or calibrating shooter positioning and movement, which may assist in achieving improved target shooting results or increased safety. The training session control logic 288 may be further configured to upload the aggregate metrics 270 and/or visual cues 275 (and scoring thereof) to the cloud services 130 for storage, retrieve specific stored aggregate metrics 270 and/or visual cues 275 for selected training sessions for analysis by an instructor (or the shooter), and/or retrieve the stored aggregate metrics 270 and/or visual cues 275 to compute historical metrics for rendering on the display screen 284.
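As a rough sketch of this control flow only (the cloud client and payload shape are invented for illustration, reusing the hypothetical select_drills helper from the sketch above):

```python
# Assumption-laden sketch; cloud_client and the payload keys are placeholders.
class TrainingSessionControl:
    def __init__(self, cloud_client):
        self.cloud = cloud_client  # hypothetical wrapper over the cloud services 130

    def process(self, aggregate_metrics: dict, cue_scores: dict[str, float]) -> list[str]:
        """Select the next drills, then persist the session record for later review."""
        drills = select_drills(cue_scores)  # from the sketch above
        self.cloud.upload_session({
            "aggregate_metrics": aggregate_metrics,
            "visual_cue_scores": cue_scores,
            "selected_drills": drills,
        })
        return drills
```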


Also, the training session control logic 288 may be configured to receive data from a calendar software module installed within the computing device 210 (e.g., tablet, smartphone, etc.) and receive a calendar notification when a scheduled practice session is to occur. Upon receipt of the calendar notification, the training session control logic 288 may generate a message notifying the shooter, or an individual associated with the shooter (e.g., instructor, administrative assistant, etc.), of the scheduled practice session. Also, the training session control logic 288 may display a timer on the display screen 284 to identify a total time and/or the time between detected shots (light beams and actual gunfire) so that the computing device 210 further operates as a live-fire range shot timer along with capturing imagery of the target 220 and the shooter 145.
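The shot-timer behavior might reduce to arithmetic like the following sketch; millisecond timestamps are an assumption.

```python
# Sketch of the on-screen shot timer readout; units are assumed milliseconds.
def timer_readout(shot_timestamps_ms: list[int]) -> dict:
    """Compute total elapsed time and split times between detected shots."""
    if not shot_timestamps_ms:
        return {"total_ms": 0, "splits_ms": []}
    ordered = sorted(shot_timestamps_ms)
    return {
        "total_ms": ordered[-1] - ordered[0],                        # total time
        "splits_ms": [b - a for a, b in zip(ordered, ordered[1:])],  # per-shot splits
    }
```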


The computing device 210 further includes the data store 299 as internal storage for metrics, imagery, and other information. For example, the data store 299 is configured to retain the target shooting results (e.g., aggregate metrics 270, target shooting imagery 290, etc.), the visual cues 275, content associated with the drills and/or exercises for training sessions, or the like.


III. METRIC REPRESENTATION

Referring now to FIG. 3, an exemplary embodiment of a split-screen illustration 300 generated by the computing device 110 of FIGS. 1A-1B or the computing device 210 of FIGS. 2A-2B is shown. For clarity, the discussion of the split-screen illustration 300 will be based on its generation by the computing device 210 of FIGS. 2A-2B, although the computing device 110 of FIGS. 1A-1B may be configured to generate the split-screen illustration 300 provided the second camera 193 is utilized.


Herein, the split-screen illustration 300 includes a first screen display 310 illustrating the target shooting imagery 290 captured by the first camera 230 of FIG. 2A and a second screen display 320 illustrating the shooter imagery 292 captured by the second camera 240 of FIG. 2A. In response to the shot 250 (e.g., the light beam 250 emitted from the training weapon 150 of FIG. 2A) coming into contact with the silhouette image 222 of the target 220, in particular the predetermined region 224 within the silhouette image 222, the shot analytics logic 286 computes its shot metrics 260. This includes determining the shot geographic location, which may be illustrated as a contact point 330 on the first screen display 310. The contact point 330 identifies the computed geographic location of the shot 250 on the target 220. With every shot, as shown in FIGS. 2A-2B, the shot metrics 260 and the multiple-shot training session metrics 265 are computed by the shot analytics logic 286 of the computing device 210.
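One plausible way to map a shot's computed geographic location on the target to a contact point on the first screen display is a simple scale transform, sketched below; the target dimensions and display resolution are assumed values, not ones given by the disclosure.

```python
# Sketch only: target size and display resolution are assumptions.
def to_contact_point(
    x_mm: float, y_mm: float,
    target_w_mm: float = 450.0, target_h_mm: float = 600.0,   # assumed target face
    display_w_px: int = 540, display_h_px: int = 720,         # assumed display pane
) -> tuple[int, int]:
    """Scale target-plane coordinates into display pixel coordinates."""
    return (
        round(x_mm / target_w_mm * display_w_px),
        round(y_mm / target_h_mm * display_h_px),
    )
```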


Upon receiving the shot metrics 260 for each shot detected and processed from the target shooting imagery 290 by the shot analytics logic 286, the training session control logic 288 is configured to generate contact points 330-332, which correspond to the geographic locations of the shots hitting the silhouette image 222, as part of the first screen display 310. According to one illustrative embodiment of the disclosure, the contact points 330-332 may be rendered as an overlay over a rendering of the silhouette image 222 captured by the first camera 230 and displayed as part of the first screen display 310. As another illustrative embodiment of the disclosure, the contact points 330-332 may be integrated within the rendering of the silhouette image 222.


As further shown, the second screen display 320 illustrates the shooter imagery 292 captured by the second camera 240 of FIG. 2A. Based on the shot timestamp 263, the visual cue analytics logic 287 may parse the shooter imagery 292 to identify one or more images of the shooter 145 at the time the shot was fired from the training weapon 150. From the image(s), the visual cue analytics logic 287 may identify the visual cues 275 and conduct analytics on the visual cues 275 (e.g., scoring) to identify shooter positioning and movement that, if adjusted or calibrated, would improve shooting accuracy. However, the training session control logic 288 may be configured to render the shooter imagery 292, as captured, within the second screen display 320.
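Pulling the shooter-imagery frame nearest the shot timestamp 263 could be done with a time-based seek, as in this sketch; OpenCV is an assumed implementation choice, not one named by the disclosure.

```python
# Assumed OpenCV-based frame extraction keyed to the shot timestamp.
import cv2

def frame_at_timestamp(video_path: str, shot_timestamp_ms: float):
    """Return the shooter-imagery frame closest to the given shot timestamp."""
    capture = cv2.VideoCapture(video_path)
    try:
        capture.set(cv2.CAP_PROP_POS_MSEC, shot_timestamp_ms)  # seek by time
        ok, frame = capture.read()
        return frame if ok else None
    finally:
        capture.release()
```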


Referring now to FIG. 4A, an exemplary embodiment of an interactive screen display 400 inclusive of a portion of the aggregate metrics 182/270 of the target shooting results conducted during a training session is shown. The interactive screen display 400 includes a rendering of the silhouette image 170/222 including contact points 410 representing the geographic locations of the shots conducted during the training session. The aggregate metrics 182/270 identify the shot accuracy (average accuracy for the training session) 420 and the average delay 430 between shots conducted during the training session. Additionally, the aggregate metrics 182/270 may include a "maximum error" 440, namely a maximum distance between a shot in the training session and a reference point 173/225 associated with the silhouette image 170/222. The aggregate metrics 182/270 may further include an "average error" 450, namely the average distance between shots conducted during the training session and the reference point 173/225 for the silhouette image 170/222.
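The "maximum error" 440 and "average error" 450 reduce to distances from each contact point to the reference point, as in this sketch; the coordinate units are assumptions.

```python
# Sketch of the maximum/average error computations; units are assumptions.
import math

def error_metrics(
    contact_points: list[tuple[float, float]],
    reference_point: tuple[float, float],
) -> dict:
    """Distance of each shot from the reference point, plus the max and average."""
    distances = [math.dist(p, reference_point) for p in contact_points]
    return {
        "max_error": max(distances, default=0.0),
        "average_error": sum(distances) / len(distances) if distances else 0.0,
    }
```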


Referring to FIG. 4B, an exemplary embodiment of a screen display of metrics associated with a selected shot, represented by contact point 412 of the contact points 410 within the target shooting results illustrated in FIG. 4A, is shown. Upon selection of the contact point 412, which is associated with a second shot conducted during the training session, a portion of the shot metrics is rendered. These shot metrics include shot placement 460, time 470, and spread 480. The shot placement 460 illustrates the identification 176/262 as to whether the second shot resides within the predetermined region 172/224 (representing a "hit") or resides outside the predetermined region 172/224 (representing a "miss"). The time 470 illustrates the delay time produced as part of the shot metrics, which represents the amount of time between the first shot in the training session and this second shot. Lastly, the spread 480 illustrates the distance between the contact point 412 (second shot) in the training session and the reference point 173/225 associated with the silhouette image 170/222.


Referring now to FIG. 5A, an exemplary embodiment of an interactive screen display 500 featuring historical metrics 510 of target shooting results for a particular shooter during the training session and prior training sessions is shown. Herein, a statistics field 520 identifies the total number of training sessions represented (metric 521), the total number of shots taken during the training sessions (metric 522), the shot accuracy for these training sessions (metric 523), and the average delay between shots over these training sessions (metric 524). These metrics may be computed by the analytics logic 195 of the computing device 110 of FIGS. 1A-1B and/or the analytics logic 285 of the computing device 210 of FIGS. 2A-2B. The particulars of each training session 530-534, namely the shot accuracy 535-539, the number of shots 540-544 during the training session, and the average delay time 545-549 between shots during the training session, are shown.
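The statistics field 520 might be populated by rolling per-session results together, as in this sketch; the per-session record shape is an assumption.

```python
# Sketch only; the per-session record keys are invented for illustration.
def historical_stats(sessions: list[dict]) -> dict:
    """Aggregate totals, accuracy, and average delay across training sessions."""
    total_shots = sum(s["num_shots"] for s in sessions)
    total_hits = sum(s["num_shots"] * s["accuracy_pct"] / 100.0 for s in sessions)
    delays = [d for s in sessions for d in s["shot_delays_ms"]]
    return {
        "total_sessions": len(sessions),
        "total_shots": total_shots,
        "accuracy_pct": 100.0 * total_hits / total_shots if total_shots else 0.0,
        "average_delay_ms": sum(delays) / len(delays) if delays else 0.0,
    }
```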


Referring to FIG. 5B, an exemplary embodiment of a scatter plot representation 550 of the historical metrics based on deviation in delay times between shots within the training sessions 530-534 of FIG. 5A is shown. Herein, while the average delay times between shots within the training sessions 530-534 range from 724 milliseconds (ms) to 1186 ms, a majority of the shots fall between 800 ms and 900 ms. Other scatter plot representations may be generated based on spread distance or other shot metrics.
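A scatter plot in the spirit of FIG. 5B could be produced as sketched below; matplotlib is an assumed charting choice.

```python
# Assumed matplotlib rendering of per-session delay-time scatter, per FIG. 5B.
import matplotlib.pyplot as plt

def plot_delay_scatter(delays_by_session: dict[str, list[float]]) -> None:
    """Plot each session's inter-shot delay times along its own x position."""
    for i, delays in enumerate(delays_by_session.values()):
        plt.scatter([i] * len(delays), delays)
    plt.xticks(range(len(delays_by_session)), list(delays_by_session))
    plt.ylabel("Delay between shots (ms)")
    plt.title("Deviation in shot delay times per training session")
    plt.show()
```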


In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. However, it will be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims.

Claims
  • 1. An electronic weapon training system comprising: a computing device including one or more cameras; and an electronic target communicatively coupled to the computing device, the electronic target is configured to (i) detect a plurality of shots provided from a training weapon in which each shot of the plurality of shots corresponds to a light beam coming into contact with a front surface of the electronic target, (ii) compute metrics associated with each shot of the plurality of shots, and (iii) transmit the metrics to the computing device, wherein the computing device is configured to concurrently process (i) the metrics to generate additional metrics associated with a training session during which the plurality of shots are provided from the training weapon and (ii) imagery of a shooter of the training weapon captured by a first camera of the one or more cameras to identify visual cues that may be used in selection of drills or exercises performed during the training session or during a future training session.
  • 2. The electronic weapon training system of claim 1, wherein the metrics computed for each shot of the plurality of shots, including a second shot of the plurality of shots, include a geographic location of the second shot of the plurality of shots.
  • 3. The electronic weapon training system of claim 2, wherein the metrics computed for each shot of the plurality of shots including the second shot include an identification as to whether the second shot is detected to be positioned within a predetermined region of a silhouette image rendered on the front surface of the electronic target.
  • 4. The electronic weapon training system of claim 3, wherein the metrics computed for each shot of the plurality of shots including the second shot include a timestamp of the second shot identifying a time of contact with the front surface of the electronic target by the light beam corresponding to the second shot.
  • 5. The electronic weapon training system of claim 1, wherein the computing device includes a visual cue analytics logic configured to identify visual cues associated with the imagery of the shooter captured by the first camera by parsing the imagery and conducting analytics on positioning or movement of a body part associated with the shooter against known body part positionings or movements that detrimentally influence shooting accuracy or weapon safety.
  • 6. The electronic weapon training system of claim 5, wherein the visual cue analytics logic operates in combination with one or more machine-learning models in conducting analytics on a visual cue directed to the positioning or movement of the body part associated with the shooter to identify whether positioning or movement of the body part needs adjustment or calibration to improve shooting accuracy or weapon safety.
  • 7. An electronic weapon training system comprising: a training weapon to emit a light beam to represent a shot fired from the training weapon; a target made of a light reflecting material or a photoluminescent material; and a computing device including a plurality of cameras including a first camera and a second camera, a processor, and a non-transitory storage medium including shot analytics logic, visual cue analytics logic, and training session control logic, wherein the shot analytics logic is configured to receive imagery associated with the target from the first camera, detect a plurality of shots from the training weapon during a current training session in which each shot of the plurality of shots corresponds to a light beam coming into contact with a front surface of the target, and compute metrics associated with each shot of the plurality of shots including a geographic location of each shot, wherein the visual cue analytics logic is further configured to concurrently receive imagery associated with a shooter of the training weapon captured by the second camera and identify visual cues associated with positioning or movement of body parts by the shooter that may be used by the training session control logic to select drills or exercises performed during the current training session or during a future training session.
  • 8. The electronic weapon training system of claim 7, wherein the metrics computed for each shot of the plurality of shots, including a second shot of the plurality of shots, include the geographic location of the second shot of the plurality of shots.
  • 9. The electronic weapon training system of claim 8, wherein the metrics computed for each shot of the plurality of shots including the second shot include an identification as to whether the second shot is detected to be positioned within a predetermined region of a silhouette image placed on the front surface of the target.
  • 10. The electronic weapon training system of claim 9, wherein the metrics computed for each shot of the plurality of shots including the second shot include a timestamp of the second shot identifying a time of contact with the front surface of the target by the light beam corresponding to the second shot.
  • 11. The electronic weapon training system of claim 7, wherein the visual cue analytics logic of the computing device is configured to operate with one or more machine-learning (ML) models to identify the visual cues associated with the imagery of the shooter captured by the second camera by at least parsing the imagery of the shooter and conducting analytics on positioning or movement of one or more of the body parts associated with the shooter against known body part positionings or movements analyzed by the one or more ML models that detrimentally influence shooting accuracy or weapon safety.
  • 12. A method for an electronic weapon training system, comprising: providing a computing device including one or more cameras; coupling an electronic target communicatively to the computing device; configuring the electronic target to (i) detect a plurality of shots provided from a training weapon in which each shot of the plurality of shots corresponds to a light beam coming into contact with a front surface of the electronic target, (ii) compute metrics associated with each shot of the plurality of shots, and (iii) transmit the metrics to the computing device; and configuring the computing device to concurrently process (i) the metrics to generate additional metrics associated with a training session during which the plurality of shots are provided from the training weapon and (ii) imagery of a shooter of the training weapon captured by a first camera of the one or more cameras to identify visual cues that may be used in selection of drills or exercises performed during the training session or during a future training session.
  • 13. The method of claim 12, wherein configuring the electronic target includes configuring the electronic target to compute the metrics for each shot of the plurality of shots, including a second shot of the plurality of shots, to include a geographic location of the second shot of the plurality of shots.
  • 14. The method of claim 13, wherein configuring the electronic target includes configuring the electronic target to compute the metrics for each shot of the plurality of shots including the second shot to include an identification as to whether the second shot is detected to be positioned within a predetermined region of a silhouette image rendered on the front surface of the electronic target.
  • 15. The method of claim 14, wherein configuring the electronic target includes configuring the electronic target to compute the metrics for each shot of the plurality of shots including the second shot to include a timestamp of the second shot identifying a time of contact with the front surface of the electronic target by the light beam corresponding to the second shot.
  • 16. The method of claim 12, wherein providing the computing device includes configuring a visual cue analytics logic to identify visual cues associated with the imagery of the shooter captured by the first camera by parsing the imagery and conducting analytics on positioning or movement of a body part associated with the shooter against known body part positionings or movements that detrimentally influence shooting accuracy or weapon safety.
  • 17. The method of claim 16, wherein configuring the visual cue analytics logic includes configuring the visual cue analytics logic to operate in combination with one or more machine-learning models in conducting analytics on a visual cue directed to the positioning or movement of the body part associated with the shooter to identify whether positioning or movement of the body part needs adjustment or calibration to improve shooting accuracy or weapon safety.
PRIORITY

This application claims the benefit of and priority to U.S. Provisional Application, entitled “System And Method For Shooter Imagery And Target Shooting Analytics,” filed on Jan. 22, 2024, and having application Ser. No. 63/623,575, the entirety of said application being incorporated herein by reference.

Provisional Applications (1)
Number      Date           Country
63/623,575  Jan. 22, 2024  US