Embodiments of the disclosure relate to the field of real-time data analytics for training systems. More specifically, one embodiment of the disclosure relates to an electronic weapon training system configured to concurrently capture and analyze target shooting results and shooter imagery.
Firing ranges are specialized facilities that provide individuals using weapons (hereinafter, “shooters”) with a hands-on opportunity to safely practice the handling of their weapons. These facilities are commonly used by individuals employed by law enforcement, military, or other governmental agencies for weapons training and qualification. Such training and qualification routines are designed to improve combat marksmanship and tactical weapon handling skills, which can be achieved through the development of appropriate neural pathways via task-specific repetition over time.
Commonly, firing ranges include paper targets that are positioned a few yards from the shooter. After firing the weapon a few times, the shooter may retrieve the target and analyze the bullet holes to determine where the bullets struck the target. This requires measurements to be undertaken by a person (e.g., shooter, instructor, etc.) to determine compliance with the training/qualification criteria. However, due to the costs of ammunition and replacement targets, such training is expensive. Given this expense, shooters tend to refrain from performing the amount of shooting repetition needed to maintain or improve their weapon handling skills. This lack of training has resulted in a steady decline in shooting performance and weapon safety among law enforcement and individuals at large. An electronic weapon training system is needed to provide a low-cost practice system that may be installed on or off the firing range to facilitate increased expertise and safety in the handling of a weapon.
An electronic weapon training system is described. The system features one or more targets and a computing device that is configured to concurrently process (i) data associated with shots detected to hit the target to generate metrics associated with these shots and a training session in which the shots occurred and (ii) imagery associated with a shooter captured by a camera integrated as part of the computing device. The shooter imagery is used to identify visual cues that may be used in selection of drills or exercises performed during the training session or during a future training session.
In an exemplary embodiment, an electronic weapon training system comprises: a computing device including one or more cameras; and an electronic target communicatively coupled to the computing device, the electronic target being configured to (i) detect a plurality of shots provided from a training weapon in which each shot of the plurality of shots corresponds to a light beam coming into contact with a front surface of the electronic target, (ii) compute metrics associated with each shot of the plurality of shots, and (iii) transmit the metrics to the computing device, wherein the computing device is configured to concurrently process (i) the metrics to generate additional metrics associated with a training session during which the plurality of shots are provided from the training weapon and (ii) imagery of a shooter of the training weapon captured by a first camera of the one or more cameras to identify visual cues that may be used in selection of drills or exercises performed during the training session or during a future training session.
In another exemplary embodiment, the metrics computed for each shot of the plurality of shots, including a second shot of the plurality of shots, include a geographic location of the second shot of the plurality of shots. In another exemplary embodiment, the metrics computed for each shot of the plurality of shots including the second shot include an identification as to whether the second shot is detected to be positioned within a predetermined region of a silhouette image rendered on the front surface of the electronic target. In another exemplary embodiment, the metrics computed for each shot of the plurality of shots including the second shot include a timestamp of the second shot identifying a time of contact with the front surface of the electronic target by the light beam corresponding to the second shot.
In another exemplary embodiment, the computing device includes a visual cue analytics logic configured to identify visual cues associated with the imagery of the shooter captured by the first camera by parsing the imagery and conducting analytics on positioning or movement of a body part associated with the shooter against known body part positionings or movements that detrimentally influence shooting accuracy or weapon safety. In another exemplary embodiment, the visual cue analytics logic operates in combination with one or more machine-learning models in conducting analytics on a visual cue directed to the positioning or movement of the body part associated with the shooter to identify whether positioning or movement of the body part needs adjustment or calibration to improve shooting accuracy or weapon safety.
In an exemplary embodiment, an electronic weapon training system comprises: a training weapon to emit a light beam to represent a shot fired from the training weapon; a target made of a light reflecting material or a photoluminescent material; and a computing device including a plurality of cameras including a first camera and a second camera, a processor, and a non-transitory storage medium including shot analytics logic, visual cue analytics logic, and training session control logic, wherein the shot analytics logic is configured to receive imagery associated with the target from the first camera, detect a plurality of shots from the training weapon during a current training session in which each shot of the plurality of shots corresponds to a light beam coming into contact with a front surface of the target, and compute metrics associated with each shot of the plurality of shots including a geographic location of each shot, wherein the visual cue analytics logic is further configured to concurrently receive imagery associated with a shooter of the training weapon captured by the second camera and identify visual cues associated with positioning or movement of body parts by the shooter that may be used by the training session control logic to select drills or exercises performed during the current training session or during a future training session.
In another exemplary embodiment, the metrics computed for each shot of the plurality of shots, including a second shot of the plurality of shots, include the geographic location of the second shot of the plurality of shots. In another exemplary embodiment, the metrics computed for each shot of the plurality of shots including the second shot include an identification as to whether the second shot is detected to be positioned within a predetermined region of a silhouette image placed on the front surface of the target. In another exemplary embodiment, the metrics computed for each shot of the plurality of shots including the second shot include a timestamp of the second shot identifying a time of contact with the front surface of the target by the light beam corresponding to the second shot.
In another exemplary embodiment, the visual cue analytics logic of the computing device is configured to operate with one or more machine-learning (ML) models to identify the visual cues associated with the imagery of the shooter captured by the second camera by at least parsing the imagery of the shooter and conducting analytics on positioning or movement of one or more of the body parts associated with the shooter against known body part positionings or movements analyzed by the one or more ML models that detrimentally influence shooting accuracy or weapon safety.
In an exemplary embodiment, a method for an electronic weapon training system comprises: providing a computing device including one or more cameras; coupling an electronic target communicatively to the computing device; configuring the electronic target to (i) detect a plurality of shots provided from a training weapon in which each shot of the plurality of shots corresponds to a light beam coming into contact with a front surface of the electronic target, (ii) compute metrics associated with each shot of the plurality of shots, and (iii) transmit the metrics to the computing device; and configuring the computing device to concurrently process (i) the metrics to generate additional metrics associated with a training session during which the plurality of shots are provided from the training weapon and (ii) imagery of a shooter of the training weapon captured by a first camera of the one or more cameras to identify visual cues that may be used in selection of drills or exercises performed during the training session or during a future training session.
In another exemplary embodiment, configuring the electronic target includes configuring the electronic target to compute the metrics for each shot of the plurality of shots, including a second shot of the plurality of shots, to include a geographic location of the second shot of the plurality of shots. In another exemplary embodiment, configuring the electronic target includes configuring the electronic target to compute the metrics for each shot of the plurality of shots including the second shot to include an identification as to whether the second shot is detected to be positioned within a predetermined region of a silhouette image rendered on the front surface of the electronic target. In another exemplary embodiment, configuring the electronic target includes configuring the electronic target to compute the metrics for each shot of the plurality of shots including the second shot to include a timestamp of the second shot identifying a time of contact with the front surface of the electronic target by the light beam corresponding to the second shot.
In another exemplary embodiment, providing the computing device includes configuring a visual cue analytics logic to identify visual cues associated with the imagery of the shooter captured by the first camera by parsing the imagery and conducting analytics on positioning or movement of a body part associated with the shooter against known body part positionings or movements that detrimentally influence shooting accuracy or weapon safety. In another exemplary embodiment, configuring the visual cue analytics logic includes configuring the visual cue analytics logic to operate in combination with one or more machine-learning models in conducting analytics on a visual cue directed to the positioning or movement of the body part associated with the shooter to identify whether positioning or movement of the body part needs adjustment or calibration to improve shooting accuracy or weapon safety.
These and other features of the concepts provided herein may be better understood with reference to the drawings, description, and appended claims.
Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
While each inventive aspect of the disclosure may be subject to various modifications, equivalents, and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will now be described in detail. It should be understood that each inventive aspect is not limited to the particular embodiments disclosed. On the contrary, the intention is to cover modifications, equivalents, and alternative forms of the inventive aspects, as each inventive aspect may be implemented in embodiments different from those illustrated.
Embodiments of the disclosure generally relate to an electronic weapon training system configured to concurrently (i.e., at the same time or in an overlapping manner) analyze both shooter imagery and target shooting results such as shot metrics and/or target shooting imagery. According to a first embodiment of the disclosure, the electronic weapon training system features a computing device communicatively coupled to one or more electronic targets. Herein, for this embodiment, an “electronic target” constitutes an electronic device that features one or more components, which are adapted to (a) detect one or more shots (light beams emitted from a training weapon) at a silhouette image with at least one predetermined region during a shooting training session and (b) compute metrics associated with each shot (emitted light beam) detected by the electronic target. The computed shot metrics may include, but are not limited or restricted to (i) a location of the shot (light beam) striking an area within the silhouette image, (ii) an identification as to whether the detected shot resides within the predetermined region identifying a successful shot (i.e., a “hit”) or outside the predetermined region identified as an unsuccessful shot (i.e., a “miss”), and/or (iii) an assigned time at which the shot occurred (timestamp). The shot metrics may be computed for each shot conducted during a multiple-shot training session.
For the first embodiment of the electronic weapon training system, the component(s) implemented with the electronic target may include logic to detect the shot, logic to compute metrics associated with the shot, and logic to generate and transmit a message to the computing device, where the message includes the shot metrics. The computing device is configured to process the contents of the message, and based on the shot metrics associated with each shot, determine further metrics directed to the shot and the training session during which the shot occurred. These further metrics may include a delay time between successive shots during the training session and/or a distance (spread) from the determined location of the shot to a reference point within the predetermined region corresponding to the intended (aimed) location for the shot. Collectively, shot progression (order) may be further computed based on the shot timestamps.
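By way of a non-limiting illustration only, the following sketch shows one way the target-side flow described above might be organized. The field names, the JSON message format, and the `region_contains` helper are hypothetical assumptions; the disclosure does not prescribe a particular schema or encoding.

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class ShotMetrics:
    """Per-shot metrics computed by the electronic target (hypothetical schema)."""
    x: float          # horizontal location of the light-beam contact point
    y: float          # vertical location of the contact point
    hit: bool         # True if the contact point lies within the predetermined region
    timestamp: float  # assigned time at which the shot was detected


def on_shot_detected(x: float, y: float, region_contains) -> bytes:
    """Compute metrics for one detected shot and serialize them as a message.

    `region_contains` is a callable testing whether a point lies within the
    predetermined region of the silhouette image.
    """
    metrics = ShotMetrics(x=x, y=y, hit=region_contains(x, y), timestamp=time.time())
    # The message body would be transmitted to the computing device over a
    # wired or wireless transmission medium (e.g., Bluetooth, WiFi).
    return json.dumps(asdict(metrics)).encode("utf-8")
```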
According to a second embodiment of the disclosure, the electronic weapon training system features a computing device configured to capture imagery associated with a shot (light beam) contacting a target (e.g., a sheet of light reflective material, a sheet of photoluminescent material, etc.). This imagery, referred to as “target shooting imagery,” may constitute one or more images or a video captured when the shot hits the target. The content of the target shooting imagery may be processed to compute the shot metrics and the multiple-shot training session metrics. The shot metrics may be based, in part, on a first camera deployed within the computing device capturing an image of the light beam (shot) being reflected from the target or an image of a portion of the target illuminated in response to the light beam (shot) emitted from the training weapon contacting the target. The multiple-shot training session metrics may be based, at least in part, on the computed shot metrics.
For both of these embodiments, the computing device features a processor, a non-transitory storage medium, a display, one or more cameras, and one or more communication interfaces. For this architecture, the computing device may be configured to (i) render an image of the detected shots and their locations within the silhouette image formed on the target as an overlay or (ii) render the image of the detected shots and their locations integrated as part of the silhouette image. It is contemplated that light beams (shots) fired by the training weapon may miss the target completely, and as an optional feature, the training weapon may provide a count value of the number of shots fired during a training session to the electronic target for routing to the computing device and/or to the computing device directly to account for errant shots.
Additionally, for both embodiments, the electronic weapon training system further includes a camera to capture and record imagery of the shooter during the training session (hereinafter, “shooter imagery”). The shooter imagery may be analyzed by logic, stored within the non-transitory storage medium and executed by the processor, to identify, in real-time, visual cues associated with the shooter. These “visual cues” may be directed to positioning or movement of a body part of the shooter that may influence shooting accuracy, as well as to certain tactical actions conducted by the shooter. The visual cues may be determined based on real-time analytics by the logic, and in response, the visual cues may be used to initiate drills and/or exercises during the current training session or to initiate specific drills and/or exercises for a subsequent training session conducted by the shooter.
As an illustrative example, the hand positioning on the training weapon, head movement, arm movement, and/or shoulder placement (posture) may constitute visual cues, where these visual cues may signify desired or undesired handling of the training weapon during the training session that may affect shooting performance or safety. Moreover, the raising or lowering of the training weapon, perceived nervousness by the shooter (e.g., shaking hands, legs, etc.), or the immediate firing of the weapon in response to external prompts (e.g., startling or loud audio, visual prompts displayed behind the target, etc.) may identify correct or incorrect tactical actions undertaken by the shooter. The undesired handling of the training weapon and/or incorrect tactical actions by the shooter may cause a decrease in shooting accuracy, unwanted or unintended shot delay, unsafe retention of the training weapon, or the like. Hence, the visual cues may be analyzed by the logic to (i) identify shortcomings in the handling of the training weapon or tactical actions by the shooter that may cause a reduction in shooting accuracy, and (ii) select additional training sessions or real-time changes to the current training session (e.g., drills, exercises, etc.) to improve these shortcomings.
More specifically, the analytics logic of the computing device may utilize machine learning (ML) models, accessible within a cloud service or stored on-premises, to conduct analytics of certain visual cues associated with the shooter in order to compute a shooter activity score. The shooter activity score may be used by the computing device in the selection of drills and/or exercises stored as training material within the cloud service to be utilized in the current training session or in an upcoming training session for the shooter.
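By way of a non-limiting illustration, a shooter activity score might be assembled as a weighted combination of per-cue scores returned by the ML models. The cue names, weights, and 0-100 scoring scale below are hypothetical assumptions, not features prescribed by this disclosure.

```python
def shooter_activity_score(cue_scores: dict[str, float],
                           weights: dict[str, float]) -> float:
    """Combine per-cue scores (e.g., from ML models) into one activity score.

    Hypothetical aggregation: a weighted average over whatever visual cues
    were scored for this session (hand position, head movement, posture, ...).
    """
    total_weight = sum(weights.get(cue, 1.0) for cue in cue_scores)
    weighted = sum(score * weights.get(cue, 1.0) for cue, score in cue_scores.items())
    return weighted / total_weight if total_weight else 0.0


# Example: three cues scored on a hypothetical 0-100 scale by the ML models,
# with hand position weighted most heavily.
score = shooter_activity_score(
    {"hand_position": 62.0, "head_movement": 88.0, "shoulder_posture": 75.0},
    {"hand_position": 2.0, "head_movement": 1.0, "shoulder_posture": 1.0},
)
```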
In summary, as on-demand combat marksmanship and tactical weapon handling skills are achieved through the development of appropriate neural pathways via task-specific repetition over time, analytics logic deployed within the computing device (shot analytics logic and/or visual cue analytics logic) is configured to conduct concurrent, real-time analytics on the target shooting results and shooter imagery to provide more effective training in the use and safe handling of a weapon. Such training includes the following:
Speed and Accuracy Data—For every shot fired, the analytics logic may be configured to record and log precise time data (time since start signal, split times between shots, total time, etc.) and accuracy.
Video Capture—The analytics logic may be configured to use a second camera on a computing device to record video of shooter performance for self-analysis or coaching by an instructor.
Training Log—The analytics logic may be configured to record all training sessions for review and analysis, and to automatically keep a detailed log of each session, accessible for replay, including any associated video.
Performance Analysis—The analytics logic may be configured to provide an ability to analyze shooter time and accuracy data over time, to track improvement and optimize training sessions.
Practice Notification—The analytics logic may be configured to notify the student and/or instructor when a training session is due and/or past due, as scheduled by the student and/or instructor.
Drill and Evaluation Library—The analytics logic may be configured to access an extensive drill library of proprietary and industry standard practice drills and evaluation exercises, where the drill library may be maintained within a storage repository on-premises or as part of a cloud service.
Visual Cues—The analytics logic may be configured to use programmable visual cues to initiate drills and exercises, and to indicate additional real-time tactical actions during a given iteration.
Target Library—The analytics logic may be configured to populate and access an extensive library of available targets, including common standard civilian, law enforcement, and military qualification silhouettes.
Electronic Targets—The electronic targets may be configured to communicate with the computing device (e.g., the analytics logic deployed therein) via wireless connectivity, to provide both time and accuracy data from the targets, and optionally to render images associated with drills and/or exercises for the shooter.
Shot Timer with Video—The analytics logic may be configured to function as a live-fire range shot timer, with video of both the shooter and targets downrange.
Internet Upload—The analytics logic may be configured to upload shooter sessions, including video, to a selected website, social media, and/or specified email and/or smart phone addresses.
Training Weapon Compatibility—The analytics logic may be configured to support a specific training weapon provided as an accessory to the electronic weapon training system or any other light-emitting training weapons.
In the following description, certain terminology is used to describe aspects of the invention. In certain situations, the terms “logic” and “component” are representative of hardware, firmware, and/or software that is configured to perform one or more functions. As hardware, the logic (or component) may include circuitry having data processing and/or storage functionality. Examples of such circuitry may include, but are not limited or restricted to a hardware processor (e.g., microprocessor, digital signal processor, programmable gate array, microcontroller, an application specific integrated circuit, etc.), wired or wireless receiver/transmitter/transceiver circuitry, semiconductor memory, or combinatorial logic.
Alternatively, or in combination with the hardware circuitry described above, the logic (or component) may be software in the form of one or more software modules, which may be configured to operate as its counterpart circuitry. For instance, a software module may be a software instance that operates as a processor, namely a virtual processor whose underlying operations are based on a physical (hardware) processor. Additionally, a software module may include an executable application, a daemon application, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, a shared library/dynamic load library, or one or more instructions. The software module may be coded in any of a variety of programming languages, such as a lower-level programming language associated with particular hardware or an operating system (e.g., assembly) or a higher-level programming language (e.g., source code). Other programming languages, such as scripts, shell or command languages, or query or search languages may be used.
The software module(s) may be stored in any type of a suitable non-transitory storage medium, or a transitory storage medium (e.g., electrical, optical, acoustical, or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of a non-transitory storage medium may include, but are not limited or restricted to a programmable circuit, a semiconductor memory, non-persistent storage such as volatile memory (e.g., any type of random-access memory “RAM”), persistent storage such as non-volatile memory (e.g., read-only memory “ROM,” power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device.
The term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software and/or firmware.
The symbol “(s)” represents one or more quantities of an item. For example, the term “component(s)” represents one or more components.
The term “target” is directed to any object positioned within the aim of a shooter or other marksperson, intended to receive an incoming light beam representative of a shot fired by a training weapon. For example, the target may include a light reflective or photoluminescent material placed on a front surface of the target. Additionally, or in the alternative, the target may operate as an electronic target, which features components to (i) detect an occurrence and location of the incoming light beam pertaining to a shot conducted by a shooter (independent of or in relation to a reference point operating as an intended shot location), (ii) compute shot metrics, and (iii) transmit these metrics as a message to a computing device. A silhouette image may be featured on the front surface of the target persistently or may be rendered as a programmable, interchangeable, displayed image by the electronic target.
The term “computing device” may constitute a commercial electronics device such as a laptop, a smartphone, a tablet, a wearable (e.g., smart glasses, headset, etc.), or the like. According to one embodiment, the “training weapon” may be construed as a physical device that resembles a handgun or rifle, but emits a light beam (e.g., a laser beam, etc.) upon trigger activation. Alternatively, the training weapon may be construed as an actual weapon (e.g., handgun, rifle, etc.) with an accessory adapted to the weapon that causes a light beam to be emitted proximate to a muzzle opening in response to trigger activation of the weapon.
A “message” generally refers to information in a prescribed format and transmitted in accordance with a suitable wired or wireless connectivity scheme such as wireless peer-to-peer communications (e.g., Bluetooth™, etc.), wireless networks (e.g., Wireless Fidelity “WiFi” networks such as WLANs, etc.), cellular, or the like. Each message may be in the form of one or more packets, frames, or any other series of bits having the prescribed format.
The term “transmission medium” generally refers to a physical or logical communication link (or path) between the target and a computing device. For example, as a physical communication path, the transmission medium may be in the form of electrical wiring, optical fiber, or a cable. As a logical communication link, the transmission medium may be a wireless channel established between components within the target and/or computing device that support wireless transmissions such as Bluetooth™, radio frequency (RF), or other wireless signaling.
Finally, the terms “or” and “and/or” as well as the symbol “/” positioned between two elements are to be interpreted as inclusive or meaning any one or any combination. As an example, “A or B,” “A and/or B,” or “A/B” mean “any of the following: A; B; A and B.” Likewise, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps, or acts are in some way inherently mutually exclusive.
As this invention is susceptible to embodiments of many different forms, it is intended that the present disclosure is to be considered as an example of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described.
Various embodiments of the disclosed invention will be described in reference to one or more accompanying drawings. Herein, the references “illustrative embodiment” and “exemplary embodiment” indicate that particular features, characteristics, and/or structures may be included in at least the described embodiment and may be included in other embodiments or non-illustrated versions of an embodiment. The embodiments are directed to articles of manufacture, systems, software modules, methods, or the like.
Herein, embodiments of the disclosure provide advancements in the field of electronic weapon training systems by providing an automated feedback system that concurrently monitors both the target shooting results along with the visual cues of the shooter to improve overall safety and performance for the shooter. Additionally, the electronic weapon training system provides real-time feedback to the shooter and/or instructors for each shot, the entire training session, and even multiple training sessions in which historical target shooting results and visual cues may be accessed and computations made to show performance trends. The real-time feedback provides shooters with information for conducting adjustments and calibrations to their weapon handling and/or tactical actions.
Referring to
As further shown in
According to one embodiment of the disclosure, the components 124 may include circuitry 126 configured to assist in (a) detecting the light beam 160 in contact with the electronic target 120, (b) determining a geographic location of the contact point 174 on the silhouette image 170, and (c) determining whether the contact point 174 resides within the predetermined region 172. The circuitry 126 may include a processor 128 to compute metrics associated with the detected light beam 160 (hereinafter “shot metrics”). These shot metrics may include, but are not limited or restricted to (i) a geographic location 175 of the shot (light beam 160) striking the electronic target 120, inside or outside the silhouette image 170, (ii) an identification 176 as to whether the detected shot resides within the predetermined region 172 representing a successful shot (i.e., a “hit”) or resides outside the predetermined region 172 representing an unsuccessful shot (i.e., a “miss”), and/or (iii) a time of the shot (timestamp) 177. Herein, the shot metrics 175-177 may be computed for each shot fired by the training weapon 150 during a training session and hitting the electronic target 120 (i.e., multiple light beam strikes on the electronic target 120), where each of the shot metrics may be included within a message 180 provided over a transmission medium established between the electronic target 120 and the computing device 110.
The computing device 110 is configured to receive the message 180 and utilize these shot metrics 175-177 to generate additional metrics 178 associated with each particular shot such as a distance (spread) from the location of the shot to a reference point 173 corresponding to an intended location of the shot within the predetermined region 172. The computing device 110 is further configured to generate additional metrics 179 associated with the multiple-shot training session such as a delay time between successive shots or average delay time between shots, accuracy metrics (#hits/total shots (hits & misses)), maximum spread range, or the like. Also, the computing device 110 is configured to generate historical metrics associated with multiple training sessions for the shooter 145 taking into account the metrics 175-179 for each of the shots conducted during the current training session to identify improvement or deterioration in shooting performance.
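By way of a non-limiting illustration, the additional metrics described above might be derived from the per-shot metrics as in the following sketch, which reuses the hypothetical ShotMetrics record from the earlier sketch. The function name, field names, and reference-point convention are illustrative assumptions only.

```python
from statistics import mean

# ShotMetrics is the hypothetical per-shot record from the earlier sketch.

def session_metrics(shots: list["ShotMetrics"],
                    ref_x: float, ref_y: float) -> dict:
    """Derive multiple-shot training session metrics from per-shot metrics.

    `ref_x`/`ref_y` is the reference point marking the intended shot location
    within the predetermined region.
    """
    ordered = sorted(shots, key=lambda s: s.timestamp)  # shot progression by timestamp
    splits = [b.timestamp - a.timestamp for a, b in zip(ordered, ordered[1:])]
    spreads = [((s.x - ref_x) ** 2 + (s.y - ref_y) ** 2) ** 0.5 for s in ordered]
    hits = sum(1 for s in ordered if s.hit)
    return {
        "accuracy": hits / len(ordered) if ordered else 0.0,  # hits / total shots
        "avg_split_time": mean(splits) if splits else 0.0,    # delay between successive shots
        "max_spread": max(spreads) if spreads else 0.0,       # worst distance from reference
    }
```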
The computing device 110 further includes at least one camera 185, which is configured to capture imagery associated with the shooter 145, such as video associated with the shooter 145 or one or more images associated with the shooter 145 (e.g., a single image, a series of successive images, etc.). Upon processing this shooter imagery, the computing device 110 is configured to identify visual cues 184, such as characteristics associated with the handling of the training weapon 150 by the shooter 145, which may require adjustment or calibration. The visual cues 184 may be used by the computing device 110 to identify specific drills and/or exercises to be conducted during the current training session or during a future training session to improve shooting performance by reinforcing or correcting certain activities by the shooter 145. The drills and/or exercises may be retrieved by the computing device 110 from the cloud service 130.
The computing device 110 is configured to provide an aggregate 182 of the metrics (provided from the message 180 and generated by the computing device 110) and the visual cues 184 (with or without the shooter imagery in its entirety) to the cloud service 130. The shooter ID 140 would be included with the aggregate metrics 182 and the visual cues 184 to properly store the data as historical data associated with the shooter 145. The data may be retrieved by the computing device 110 to render training session metrics, to determine future drills and/or exercises that address shortcomings in the target shooting results and/or shooter activity, or to support shooter/instructor review.
Referring to
More specifically, as shown in both
The visual cue analytics logic 197 is configured to conduct analytics on the incoming imagery captured by the camera 185 (“shooter imagery”) to identify the visual cues 184 from the captured shooter imagery. From these visual cues 184, the visual cue analytics logic 197 may be configured to operate with ML models 137 within the cloud service 130 to conduct analytics of certain visual cues 184 associated with the shooter 145 in order to generate scores associated with various body part positionings and movements that are considered to influence shot accuracy more than a prescribed amount. The ML models 137 may be configured to identify known body part positionings and/or movements that detrimentally influence shooting accuracy or weapon safety and score visual cues associated with such positioning and/or movement to prompt adjustment or calibration. These scores may be utilized by the training session control logic 198 to select drills and/or exercises stored within the cloud service 130 to adjust the current training session or customize a future training session with the goal of adjusting positioning and/or movement of the shooter 145 to improve shooting performance. Also, the visual cues 184 may be analyzed to interpret tactical actions by the shooter, and to score these tactical actions to identify recommended and non-recommended (or unsafe) tactics.
As an illustrative example, the visual cue analytics logic 197 and the ML models 137 may conduct analytics on the shooter's hand position and assign a score thereto. If the score falls below a prescribed value, the hand position may be deemed unacceptable or unsafe. Upon receiving the visual cue and its assigned score from the visual cue analytics logic 197, the training session control logic 198 may determine that certain drills and/or exercises maintained within the cloud service 130 should be conducted in the current training session or a future training session to encourage the shooter to adjust her or his current hand position to achieve better shooting accuracy and/or better weapon safety. For instance, drills and/or exercises associated with a future training session may be selected from the cloud service 130, where the selected drills and/or exercises are designed to encourage adjustment or calibration of the shooter's hand position (e.g., drills and/or exercises where shooting performance is likely diminished by the current hand position and more favorable with the recommended hand position). Similar scoring and training session selection may be conducted for different types of visual cues.
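By way of a non-limiting illustration, the selection of corrective drills from scored visual cues might resemble the following sketch. The drill identifiers, the score threshold, and the scoring scale are hypothetical; the actual drill library and scoring are maintained by the cloud service.

```python
# Hypothetical mapping from low-scoring visual cues to corrective drills.
DRILLS_BY_CUE = {
    "hand_position": ["grip-pressure-drill", "dry-fire-trigger-press"],
    "arm_position": ["extension-and-lockout-drill"],
}
SCORE_THRESHOLD = 70.0  # scores below this prescribed value prompt corrective drills


def select_drills(cue_scores: dict[str, float]) -> list[str]:
    """Map low-scoring visual cues to drills/exercises for the current or a
    future training session."""
    selected = []
    for cue, score in cue_scores.items():
        if score < SCORE_THRESHOLD:
            selected.extend(DRILLS_BY_CUE.get(cue, []))
    return selected


# Example: a low hand-position score selects grip-focused drills.
print(select_drills({"hand_position": 58.0, "head_movement": 91.0}))
```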
In general, according to this embodiment of the disclosure, the analytics logic 195 may be configured to perform tasks in a plurality of operating modes: (1) computation mode, (2) display mode, and/or (3) storage mode. When operating in “computation” mode, the analytics logic 195 generates the aggregate metrics 182 based on metrics provided from the electronic target 120. The computations producing the aggregate metrics 182 are performed by the analytics logic 195 to offload processing otherwise conducted by the processor 128 implemented within the electronic target 120, such as the computation of spread metrics 178 or delay time metrics 179, for example.
Additionally, when operating in “computation” mode, the analytics logic 195 is configured to parse the shooter imagery captured by the camera 185 to identify visual cues that may warrant identification and subsequent re-training to adjust and calibrate these visual cues. As an illustrative example, hand positioning on the training weapon captured as part of the shooter imagery may warrant adjustment or calibration for improved safety and/or improved shooting accuracy. Also, certain body placement or movement (e.g., head/arm/shoulder placement or movement) captured by the camera 185 as part of the shooter imagery may warrant adjustment or calibration to achieve improved shooting accuracy.
When operating in “display” mode, the analytics logic 195 is configured to generate screen layouts to be rendered on the display screen 194. Various types of screen layouts may display the aggregate metrics 182 (or portions thereof) associated with the current training session, the metrics associated with a particular shot conducted by the shooter 145 during the current training session, or historical metrics associated with one or more prior training sessions or the collective metrics for multiple training sessions. Further illustrative embodiments of the screen layouts are shown in
When operating in “storage” mode, the analytics logic 195 is configured to generate a message, including the aggregate metrics 182 and/or visual cues 184 (or portions of the shooter imagery) for transmission via the wireless network interface 192B. The message may include the shooter ID 140 for the cloud service 130 to properly categorize the content to the appropriate shooter, as the cloud service 130 may maintain target shooting results and shooter imagery for a number of shooters.
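By way of a non-limiting illustration, a storage-mode message might be packaged as in the following sketch. The JSON encoding and field names are hypothetical assumptions; the disclosure does not prescribe a wire format for the upload to the cloud service.

```python
import json


def build_storage_message(shooter_id: str, aggregate_metrics: dict,
                          visual_cues: list[dict]) -> bytes:
    """Package session results for upload to the cloud service.

    Including the shooter ID lets the cloud service categorize the content
    under the correct shooter's historical data, since the service may
    maintain results for a number of shooters. Field names are hypothetical.
    """
    payload = {
        "shooter_id": shooter_id,
        "aggregate_metrics": aggregate_metrics,
        "visual_cues": visual_cues,
    }
    return json.dumps(payload).encode("utf-8")
```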
The computing device 110 further includes the data store 199 as internal storage for metrics, imagery, and other information. For example, the data store 199 is configured to retain the target shooting results 132 (e.g., aggregate metrics 182, etc.), the visual cues 184, content associated with the drills and/or exercises for training sessions, and optionally, images for different electronic targets to be uploaded to the electronic target 120 from the computing device 110 via one of the interfaces 192, or the like.
Referring to
As further shown in
More specifically, from the imagery captured by the first camera 230, the computing device 210 is configured to determine the shot metrics 260. As an illustrative example, the shot metrics 260 may include the geographic location 261 of the shot (light beam 250) striking the target 220, inside or outside the silhouette image 222. Where the target 220 is made of light reflective material, the computing device 210 is adapted to compute the geographic location 261 of the shot 255 based on the point at which the light beam 250 is reflected from the light reflective material. Where the target 220 is made of a photoluminescent material, the shot 255 leaves a mark at its point of contact with the material, and the computing device 210 computes the geographic location 261 based on the location of that mark.
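By way of a non-limiting illustration, the contact point might be estimated from a single grayscale camera frame by locating the cluster of near-saturated pixels produced by the reflected beam or the photoluminescent mark. The threshold value and the use of NumPy here are illustrative assumptions, not requirements of the disclosure.

```python
import numpy as np


def locate_contact_point(frame: np.ndarray, threshold: int = 240):
    """Estimate the shot's contact point from one grayscale camera frame.

    Works for either target type: the reflected laser spot (light reflective
    material) or the persistent glow mark (photoluminescent material) appears
    as a cluster of near-saturated pixels. Returns (x, y) pixel coordinates
    of the cluster centroid, or None if no spot is visible in the frame.
    """
    ys, xs = np.nonzero(frame >= threshold)  # pixels bright enough to be the spot
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())  # centroid of the bright cluster
```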
Besides the geographic location 261, the shot metrics 260 may further include (i) an identification 262 as to whether the detected shot resides within a predetermined region 224 or 226 representing a successful shot (i.e., a “hit”) or resides outside the predetermined region 224 or 226 representing an unsuccessful shot (i.e., a “miss”), (ii) a time of the shot (timestamp) 263, and/or (iii) a distance 264 (spread) from a location of the reference point 225 or 227 corresponding to an intended location of the shot directed to the predetermined region 224 or 226, respectively.
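By way of a non-limiting illustration, the hit/miss identification and the spread might be computed against a predetermined region as in the following sketch, which assumes for simplicity a circular region whose center serves as the reference point; real silhouette regions may be arbitrary shapes.

```python
def classify_shot(x: float, y: float,
                  region_cx: float, region_cy: float, region_r: float):
    """Classify a contact point against one predetermined region.

    Returns the hit/miss identification and the spread (distance) from the
    region's reference point, here assumed to be the region center.
    """
    spread = ((x - region_cx) ** 2 + (y - region_cy) ** 2) ** 0.5
    return ("hit" if spread <= region_r else "miss"), spread
```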
The computing device 210 is further configured to generate the multiple-shot training session metrics 265, such as a delay time 266 between successive shots, average delay time between shots 267, shot accuracy 268 (#successful hits/#total shots (successful & unsuccessful)), maximum spread range 269, or the like. An aggregate of at least the shot metrics 260 and the multiple-shot training session metrics 265 is referred to as the “aggregate metrics” 270.
The computing device 210 is further configured to utilize the second camera 240 to capture the shooter imagery 292, such as a video, an image, or a series of images associated with the shooter. From the captured shooter imagery 292, the computing device 210 is configured to identify visual cues 275, which may cause selection of different drills and exercises for future training sessions to adjust or calibrate different body part placements or movements by the shooter 145 as well as tactical actions (e.g., when to raise/lower the weapon, taking of head or center-mass shots, etc.) interpreted from the shooter imagery 292.
Referring to
As shown, the non-transitory storage medium 281 provides storage for analytics logic 285, including shot analytics logic 286, visual cue analytics logic 287 and training session control logic 288, which are accessible and executable by the processor 280. The shot analytics logic 286 is configured, when executed by the processor 280, to conduct analytics on the target shooting imagery 290 captured by the first camera 230, namely video or a series of images of the shot (light beam) 250 contacting the target 220. From the target shooting imagery 290, the shot analytics logic 286 is configured to determine the shot metrics 260, such as the geographic location 261 of the shot (light beam 250) striking the target 220, the identification 262 as to whether the detected shot was a “hit” (within the predetermined region 224 or 226) or a “miss” (outside the predetermined region 224 or 226), the shot timestamp 263, and/or the distance (spread) 264 from the shot to the closest reference point 225 or 227. The shot metrics 260 may be displayed on a representation of the target 220 or as part of screen layout selected to convey the shot metrics 260. For example, the geographic location 261 of the shot 250 may be illustrated on the display screen 284 as a contact point 330 on the silhouette image 222 of the target 220 as shown in
Additionally, from the target shooting imagery 290, the shot analytics logic 286 is further configured, when executed by the processor 280, to conduct analytics on the target shooting imagery to determine and generate the multiple-shot training session metrics 265, such as the shot delay times 266, the average shot delay time 267, the shot accuracy 268, the maximum spread range 269 between the shots conducted during the training session, or the like. The multiple-shot training session metrics 265 may be represented in a selected screen layout as shown in
The visual cue analytics logic 287 is configured to conduct analytics on imagery captured by the second camera 240, namely a video or image(s) of the shooter 145 (hereinafter “shooter imagery” 292). Herein, the visual cue analytics logic 287 is configured to parse the shooter imagery 292 and identify the visual cues 275 that, with adjustment and/or calibration, may improve shooting performance or safety. As an illustrative example, hand positioning on the training weapon 150 captured as visual cues 275 from the shooter imagery 292 may warrant adjustment or calibration for improved safety and/or improved shooting accuracy. Additionally, other body part placement or movement (e.g., head movement, arm movement, shoulder placement, etc.) may be captured as one of the visual cues 275 from the shooter imagery 292. The visual cues 275 (and scores associated with these visual cues 275 as described below) may be provided to the training session control logic 288 for selecting drills and/or exercises to adjust or calibrate placement or movement of the body part, for rendering on the display screen 284, and/or for uploading for storage within the cloud service 130 for subsequent retrieval.
Referring still to
As another illustrative example, the visual cue analytics logic 287 and the ML models may conduct analytics on the shooter's arm position and assign a score thereto. If the score falls below a prescribed value, the arm position may be deemed less effective or unsafe, and further training sessions may be directed to include exercises and/or drills to adjust the current arm position to one more consistent with acceptable (or safer) industry practices during weapon discharge. For instance, exercises and/or drills may be selected from the collection of training materials within the cloud service 130 to cause the shooter to adjust her or his arm position to maintain or improve shot accuracy. Similar scoring and training session selection may be conducted for different types of visual cues 275.
The training session control logic 288 is configured to receive the aggregate metrics 270 from the shot analytics logic 286 and scores/visual cues from the visual cue analytics logic 287. Based on this information, the training session control logic 288 may be used to determine particular drills and/or exercises to be acquired from the cloud service 130 directed to improving target shooting results and/or adjusting or calibrating shooter positioning and movement that may assist in achieving improved target shooting results or increased safety. The training session control logic 288 may be further configured to upload the aggregate metrics 270 and/or visual cues 275 (and scoring thereof) to the cloud service 130 for storage, retrieve specific stored aggregate metrics 270 and/or visual cues 275 for selected training sessions for analysis by an instructor (or shooter), and/or retrieve the stored aggregate metrics 270 and/or visual cues 275 to compute historical metrics for rendering on the display screen 284.
Also, the training session control logic 288 may be configured to receive data from a calendar software module installed within the computing device 210 (e.g., tablet, smartphone, etc.) and receive a calendar notification when a scheduled practice session is to occur. Upon receipt of the calendar notification, the training session control logic 288 may generate a message to the shooter or an individual associated with the shooter (e.g., instructor, administrative assistant, etc.) regarding the scheduled practice session. Also, the training session control logic 288 may illustrate a timer on the display screen 284 to identify a total time and/or time between detected shots (light-beam and actual live-fire shots) so that the computing device 210 further operates as a live-fire range shot timer along with capturing imagery of the target 220 and the shooter 145.
The computing device 210 further includes the data store 299 as internal storage for metrics, imagery, and other information. For example, the data store 299 is configured to retain the target shooting results (e.g., aggregate metrics 270, target shooting imagery 290, etc.), visual cues 275, content associated with the drills and/or exercises for training sessions, or the like.
Referring now to
Herein, the split-screen illustration 300 includes a first screen display 310 illustrating the target shooting imagery 290 captured by the first camera 230 of
Upon receiving the shot metrics 260 for each shot detected and processed from the target shooting imagery 290 by the shot analytics logic 286, the training session control logic 288 is configured to generate contact points 330-332, which correspond to the geographic locations of the shots hitting the silhouette image 222, as part of the first screen display 310. According to one illustrative embodiment of the disclosure, the contact points 330-332 may be rendered as an overlay over a rendering of the silhouette image 222 captured by the first camera 230 and displayed as part of the first screen display 310. As another illustrative embodiment of the disclosure, the contact points 330-332 may be integrated within the rendering of the silhouette image 222.
As further shown, the second screen display 320 illustrates the shooter imagery 292 captured by the second camera 240 of
Referring now to
Referring to
Referring now to
Referring to
In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. However, it will be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims.
This application claims the benefit of and priority to U.S. Provisional Application, entitled “System And Method For Shooter Imagery And Target Shooting Analytics,” filed on Jan. 22, 2024, and having application Ser. No. 63/623,575, the entirety of said application being incorporated herein by reference.