Field of Invention
The present invention relates to target training and education systems and more particularly to devices, systems and methods for providing feedback on aiming accuracy during shooting activities. Furthermore, this invention relates to improvements in the effectiveness of systems used to provide feedback on a user's performance with a weapon (e.g. firearm, bow/arrow device, crossbow, or other device used to shoot projectile(s) at targets) while engaging in these shooting activities.
Related Art
Within this field of invention, there are many approaches to achieve improvements in the user's shooting performance. Many of these approaches use simulated environments or, in other applications, special devices that simulate the weapon itself. These simulating devices, environments and training aids that employ such simulations introduce limitations to the training experience, as they do not fully replicate the shooting experience. U.S. Pat. No. 8,496,480, to Guissin, discloses a video camera integrated with a weapon to provide video recording in operational training. The simulation aspects of this approach limit the effects realized from actual shooting conditions (e.g. environment, physical recoil).
Still other approaches do not re-create or develop the training experience in an efficient manner that enables a fully beneficial experience. It is generally accepted that, for maximum benefit from a training experience, the user needs to experience and re-experience the training repetitively to understand, remember, and absorb the lessons from the training. Given this point, the efficiency of the training experience can significantly add to the user's benefit from the training. A commercially available system, Laserport, sold by the English company Powercom (UK) Ltd, utilizes a laser reflected off a simulated target to indicate hits or misses. This approach provides an indication of hits and misses, but does not enable replays of the user's past performance for training purposes. Furthermore, this system does not provide a realistic simulation experience of the user's swing profile, which would enhance the training experience. The efficient delivery of the playback from the training experience is an important aspect of the training aid used. This aspect has been found lacking in prior inventions and approaches previously employed.
Still other approaches employ specialized apparatus attached to the weapon to enable the training experience. This approach can be a preferred means to effectively teach the user, as it can more closely replicate the user's actual shooting experience. There are drawbacks to these approaches, however, in that the added apparatus increasingly alters the user experience. As the added apparatus becomes more intrusive, it is more difficult for the user to have an unaffected shooting experience. U.S. Pat. No. 5,991,043, to Andersson and Ahlen, discloses a video camera with a gyroscope mounted on a weapon to determine impact position on a moving target; the means employed to calculate position, distance and image size introduce error that is not present when other methods are utilized. Furthermore, Andersson's approach yields only the impact point, after calculations and estimates are made as to the future position of the clay and the aim of the weapon.
As an approach becomes less invasive to the user's natural shooting routine, it is able to deliver a more effective training experience. Therefore, new means to deliver a more effective training experience can be found by incorporating less intrusive means into a typical shooting experience.
Furthermore, other devices are employed to improve shooting performance during the shooting experience itself, i.e. in “real time.” U.S. Pat. No. 7,836,828, to Martikainen, is an example of such devices employed in shotgun shooting sports, in which high-visibility wads are used in some shotgun loads to provide improved visual tracking of the shot stream immediately after the weapon is fired. These approaches are limited in several ways, not the least of which is that they only provide feedback at the time of discharge of the weapon. Furthermore, these approaches do not provide any user feedback on swing characteristics of the weapon prior to discharge.
Additionally, an observer can also provide training instruction for the user. This instruction is usually in the form of advice to the user as to how they can improve their aim, shooting form, or delivery of the shot itself during the training experience. There are numerous shortcomings of these real-time approaches that are readily apparent to someone trained in this field, but for the purposes of this background, these shortcomings will be limited to a brief discussion. These devices and other observers introduce error through interpretation, assumptions and estimation, which is further complicated by the very short timeframe during which the information is available during and after the shot. Furthermore, the event can only be experienced once and must then be remembered after it occurs, which makes it difficult to recall multiple training instances.
Such systems as briefly outlined above, however, fail to provide a complete, effective training system with adequate precision of target tracking, efficient delivery of review methods, and minimal artificial additions to the user's shooting experience—inclusive of apparatus, processes, or constraints.
The present invention is a system and method for target training and education that will improve shooting performance in shooting situations, such as, but not limited to, clay pigeon shooting, archery target shooting, or rifle target shooting. It is therefore an object of the invention to provide a system to, either removably or permanently, attach a recording device to a weapon for the purposes of providing means to record the environment during a shooting event.
This invention will also provide a method, not attached to the weapon, for processing the recorded shooting event, which enables the following objectives of this invention.
Another object of the invention is a method to analyze the record of the shooting event that enables efficient playback and effective training after the shooting event. It is a further object of the invention that this method employs automated analysis techniques that remove unnecessary and non-relevant video from the training playback record. A part of this automated analysis process detects the occurrence of shot(s) in the recorded video and uses these shot event(s) to trigger actions on the video. Specifically, a useful action of such shot detection is to identify specific portion(s) of the recorded video, so that it, and similar events, can be aggregated to create a condensed set of video(s) of the selected events relevant to the training feedback.
A still further object of this invention is that the audio portion of the video record can be used in a novel way to detect the presence of the shot(s) by using digital processing techniques to determine the presence of a shot in a video record. A further object of this invention is that the analysis method applies video processing techniques that use characteristics of pixels in the frame(s) of the video, such as, but not limited to, color channel, hue, focus, clarity, jitter, or rate of change of these characteristics, to further improve the detection of a shot in the video record.
Another object of this invention is to describe an effective training system and method, with and without the recording mechanism attached to the weapon, that provides the user or observer with the ability to view the training event as many times as desired. This ability to repeatedly review the training event with the invention's annotated video and analysis methods enables a superior training experience.
This invention has a further objective to provide techniques that locate the aimpoint of the weapon on the training record. One use of this recorded aimpoint is to provide the relative location between the aimpoint of the weapon and the target(s) locations during training feedback from the shooting events. The aimpoint is determined by referencing specific region(s) of the video frame. Since the recording device is held firmly in place on the weapon, this frame region is a repeatable reference location during the video playback. More specifically, a particular point within that region can be determined to be coincident with the user's aimpoint. As a point of illustration, this specific point will remain ‘X’ pixels on the horizontal video frame axis and ‘Y’ pixels on the vertical video frame axis. Thereby, the sequence of these locations (denoted by the X, Y coordinates recorded for each frame) will form the aimpoint path of the user's weapon throughout the duration of the video. By recording this aimpoint location for each frame during video playback, the user's aiming path is recorded and is coherently maintained as part of the video record. This user aimpoint path provides added value during the training playback, since an observer can see how the weapon was moved throughout the shooting event(s).
Yet another object of the invention is to provide further analysis of the record that detects and tracks targeted object(s) during the shooting event. Specifically, the analysis method tracks the object(s) before, during and after the weapon's shot. In the same manner as the method records the X,Y location for the aimpoint, this method records the frame location coincident with the location of the targeted object(s) for each frame in the relevant portion(s) of the video. As with the aimpoint path described earlier in this document, the X,Y location pairs for the target are recorded and are coherently maintained as part of the video record. This target path(s) provides added value during the training playback, since an observer can see each of the target's flight path(s) during the shooting event.
This method's ability to show the user (or other observers) this target(s) path simultaneously synced in time with the path of the aimpoint(s) provides time-based feedback of the training record. This record is not available with other training aids and methods, which do not provide such synchronized feedback of both the aimpoint and the target(s) throughout the shooting event.
Still another objective of this invention is to provide techniques to analyze the training experience that will permit the user to understand the reasons for hitting or not hitting the targeted object(s) (i.e. a ‘hit’ or ‘miss,’ respectively). These analysis techniques will use information provided on the specific weapon (e.g. shotgun, rifle, bow-arrow, slingshot, spear, or other object used as a weapon) in use, the targets (e.g. a clay pigeon, ball, silhouette, paper target, animal, or other object), environmental conditions, and data derived from the invention's analysis (e.g. target and/or weapon aimpoint motion, travel path(s), trajectories, relative locations) to interpret if the target should be considered a hit or miss after a given event occurs (e.g. a shot).
As an illustrative example, the time when the projectile(s) from the weapon reaches the target can be determined through the invention's analysis and is noted as part of the training record (e.g. video). This time is when the projectile(s) can come into contact with the target (e.g. point of impact). This point of impact is referenced to a specific frame in the video, as part of the invention's analysis. In this ‘point of impact’ frame, the aimpoint X,Y coordinates and the target X,Y coordinates are evaluated. If the coordinates are in sufficiently close proximity to each other, this condition is considered a ‘hit.’ The corresponding video playback enables confirmation of the ‘hit’ based on the results seen in the video (e.g. once the shotgun pellets contact the clay pigeon target, this target would become broken and fly in pieces, which is visible on the video). Although the specific illustration in this description references clay pigeons shot by a shotgun, other weapons and targets used with this invention will benefit from the same training record approach.
The invention will be more clearly understood and additional advantages will become apparent after a review of the accompanying description of figures, drawings and more detailed description of the preferred embodiment.
In order to more fully illustrate the present invention, the following will describe a particular embodiment of the present invention with reference to the figures. While these figures will describe a specific set of configurations in this embodiment, it should be understood that this description is for illustrative purposes only. A person skilled in the relevant art will easily be able to recognize that other configurations, weapons, or arrangements can be used without departing from the concept, scope and spirit of this invention. It will be further evident that the invention's analysis processes can be incorporated in other structural forms without deviating from the scope and spirit of this invention.
Referring now to the invention in more detail, in
In
The viewable recording area shown by the dashed lines 4 emanating from the recording device is shown relatively coaxial with the axis of the clamp mechanism 3. An important aspect of the invention shown by these lines 4 is that the invention does not need to be coaxially aligned with the direction of the firearm's projectile. The invention is able to achieve a successful recording, and therefore result in a successful analysis, as long as the target remains in the viewing area captured by the recording device.
As a means to further illustrate that the specific configuration of the fixture 5 could be achieved by other configurations,
When a training event is to be recorded for analysis by the invention, the recording system 1 is attached to the firearm 2 as shown in
Since the camera 6 is firmly mounted on the shotgun 2, the shotgun's 2 aimpoint does not change with respect to coordinates on the video frame. More specifically, a location (X aimpoint, Y aimpoint pixels on the video's frame) that is consistent with the shotgun's 2 aimpoint at a given frame in the video will remain consistent with the shotgun's 2 aimpoint throughout the other frames in the video. This condition remains true as long as the camera 6 does not move relative to the shotgun 2.
Optionally, the invention's accuracy can be improved with a calibration process to precisely identify this aimpoint location on the video. This calibration between the shotgun 2 and the recording device 6, e.g. camera, can be conducted in many ways and still remain consistent with the scope of the invention. For the purposes of further illustration, the following details a specific calibration routine. Before the user begins the trap round, the camera 6 is activated to start recording. The user points the shotgun 2 at a placard that is used for calibrating the aimpoint.
The placard used in this illustration, as shown in a black-and-white image on
The camera 6 now has video file(s) 16 stored on the memory card 7 as a video record of the trap round. Next, the video record is analyzed by means of a computing device. As
Once transferred to the computing device 17, two operations are conducted to complete the analysis portion of the calibration process. First, the first thirty (30) seconds of the video file(s) 16 are searched for an image shape that is consistent with a known shape, e.g. the placard. To identify this shape, each frame of this thirty-second video segment is reviewed by the invention's analysis algorithm. More specifically, within a given video frame, the Red-Green-Blue pixel values are transformed into the Hue-Saturation-Value (HSV) domain using well-known equations. The resulting images are thresholded for pixels having H within 20-30, S within 100-255, and V within 100-255, which is a yellow color in the HSV domain. Each pixel meeting these criteria is set to a white value in this thresholded image, while the rest are left dark.
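By way of a non-limiting sketch, this thresholding step could be implemented with the OpenCV library as shown below. OpenCV's HSV convention (H in 0-180, S and V in 0-255) matches the ranges stated above; the function and variable names are illustrative assumptions rather than the required implementation.

```python
import cv2
import numpy as np

def threshold_yellow(frame_bgr):
    """Return a binary image: white (255) where a pixel falls in the yellow
    HSV range (H 20-30, S 100-255, V 100-255), dark (0) elsewhere."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([20, 100, 100], dtype=np.uint8)
    upper = np.array([30, 255, 255], dtype=np.uint8)
    return cv2.inRange(hsv, lower, upper)
```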
The resulting image from this thresholding process is shown in
Second, a contour map of this region is computed as the final part of this process. The region 18 with the largest area (i.e. the placard) is contoured by the algorithm to identify its shape. From these contours, the algorithm identifies the location of the corners of the placard, shown in
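One possible sketch of this contouring step, again using OpenCV, is shown below. The use of cv2.approxPolyDP to reduce the largest contour to corner points is an assumption for illustration, since the description above does not name a specific corner-finding routine.

```python
def placard_corners(binary_mask):
    """Contour the thresholded image, keep the region with the largest area
    (the placard), and approximate its corner locations."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                # no candidate region found
    largest = max(contours, key=cv2.contourArea)   # region 18: the placard
    perimeter = cv2.arcLength(largest, True)
    corners = cv2.approxPolyDP(largest, 0.02 * perimeter, True)
    return corners.reshape(-1, 2)                  # (N, 2) array of (x, y) corners
```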
Motion between successive frames that contain a placard is computed (motion detection is described later when
If no geometry is located as a result of this process, the calibration process ends and the aimpoint will be identified manually during the analysis process. This calibration, detailed in this illustration, provides a means to improve the accuracy of the aimpoint identification. The scope and performance of the invention does not require this or other calibration means.
The algorithm's audio evaluation, described in section ‘B’ of
To further illustrate the output of this algorithm, the values for these ten groups 23, per frame, are graphed in
A possible ‘shot fired’ event is determined when the levels of energy in these groups 23 exceed given limits determined by an analysis characterization. Each frame that meets the conditions set by the characterization is identified as a possible ‘shot fired’ event 24. This analysis characterization refers to a frame-by-frame review of each frame's stored energy values for a representative video file 16. The results of this review set the levels that are the conditions used to determine possible ‘shot fired’ conditions for other video files 16. As a means of illustration for this preferred embodiment, this analysis characterization for shotgun shots determined that the frequency groups between 6,000 Hz and 15,600 Hz had most of the energy recorded on the video during ‘shot-fired’ events. After reviewing a control group of shotgun shots recorded on video using this analysis characterization approach, the lowest levels measured for these frequency groups became the limits used to determine possible shot-fired frames 24 for other video files.
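As a hedged illustration only, the sketch below shows one way the audio accompanying each video frame could be grouped into frequency bands and compared against characterized limits. The evenly spaced band edges, the per-frame FFT, and the grouping of audio samples by video frame are assumptions; the description above specifies ten groups but not how they are derived.

```python
import numpy as np

def band_energies(audio, sample_rate, fps, n_groups=10):
    """Split the 1-D audio signal accompanying each video frame into n_groups
    frequency bands and return the per-frame energy in each band."""
    samples_per_frame = int(sample_rate / fps)
    n_frames = len(audio) // samples_per_frame
    edges = np.linspace(0.0, sample_rate / 2.0, n_groups + 1)   # assumed evenly spaced bands
    energies = np.zeros((n_frames, n_groups))
    for i in range(n_frames):
        chunk = audio[i * samples_per_frame:(i + 1) * samples_per_frame]
        spectrum = np.abs(np.fft.rfft(chunk)) ** 2               # power spectrum for this frame
        freqs = np.fft.rfftfreq(len(chunk), d=1.0 / sample_rate)
        for g in range(n_groups):
            in_band = (freqs >= edges[g]) & (freqs < edges[g + 1])
            energies[i, g] = spectrum[in_band].sum()
    return energies

def possible_shot_frames(energies, limits, groups):
    """Flag frame indices whose energy exceeds the characterized limits in the
    chosen groups (e.g. those spanning roughly 6,000-15,600 Hz)."""
    return [i for i, row in enumerate(energies)
            if all(row[g] > limits[g] for g in groups)]
```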
Referencing section ‘C’ of
The graphs shown in
The Translation Distance, shown in
Referring now to
The offset position of the regions 28 and 29 within Frames F and G, respectively, must be the same. Δx and Δy are computed by minimizing $\sum_{r=1}^{R}\sum_{c=1}^{C}\left(F(r,c)-G(r-\Delta y,\,c-\Delta x)\right)^{2}$. For algorithmic efficiency, Δx and Δy can instead be computed by applying a Hanning window to each region and calculating the phase correlation. A potential problem arises when the images in regions 28 and 29 are featureless (for example, solid color, blue skies). For this or similar cases, other regions on the video frames F and G will be used to estimate motion. To avoid these potential problem conditions, regions towards the bottom of the video frame tend to be better candidate areas, as this part of the frame typically stays below the horizon and enables a greater probability of features within the measurement frame.
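A minimal sketch of this Hanning-window phase-correlation step, assuming OpenCV's phaseCorrelate function and two equally sized grayscale regions, is:

```python
import cv2
import numpy as np

def region_shift(region_f, region_g):
    """Estimate (dx, dy) between two equally sized grayscale regions taken
    from Frames F and G, using a Hanning window and phase correlation."""
    a = np.float32(region_f)
    b = np.float32(region_g)
    window = cv2.createHanningWindow(a.shape[::-1], cv2.CV_32F)  # size given as (cols, rows)
    (dx, dy), response = cv2.phaseCorrelate(a, b, window)
    return dx, dy, response   # response indicates how reliable the estimate is
```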
The focus metric, shown in
The Laplacian of the pixel values, $L(f)=\frac{\partial^{2}f}{\partial x^{2}}+\frac{\partial^{2}f}{\partial y^{2}}$, may be approximated in discrete time, using x for pixel distance in columns, y for pixel distance in rows, and f as the pixel value. Averaging |L(ƒ)| over a region of video yields a focus metric for that region. This focus metric can vary significantly based on the content of a video frame; however, a large reduction in its value, typically greater than 30%, from one frame to the next can detect a sudden blurriness event, as the content of the video is substantially the same. Typical focus metric values tend to be in the 10-25 range for videos captured on sunny days, while videos shot in the snow tend toward a metric of 5-15. Regardless of the weather conditions, a >30% drop in the focus metric from one video frame to another is generally indicative of a shot-fired event. When this decision threshold is combined with the audio and motion cues, shot-fired events are extracted with high accuracy, with the shot-fired frame correctly identified.
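A brief sketch of one way this focus metric could be computed, assuming OpenCV's discrete Laplacian as the approximation, follows; the helper names and region handling are illustrative assumptions.

```python
import cv2
import numpy as np

def focus_metric(gray_region):
    """Average absolute Laplacian over a grayscale region: higher values mean
    a sharper image; a sudden large drop suggests recoil-induced blur."""
    lap = cv2.Laplacian(np.float32(gray_region), cv2.CV_32F)
    return float(np.mean(np.abs(lap)))

def sudden_blur(previous, current, drop=0.30):
    """Flag a greater-than-30% drop in the metric between consecutive frames."""
    return current < (1.0 - drop) * previous
```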
Individually, changes in the translation distance or the focus metric cannot reliably be used to detect a shot. But when both parameters are evaluated together, a shot can be reliably determined. Therefore, this evaluation of both parameters, as a collection, becomes the detector used to identify ‘shot-fired’ conditions.
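As a sketch only, such a detector might be expressed as below; the motion threshold is an assumed, characterized value, and the exact combination rule is stated qualitatively above rather than as code.

```python
def detect_shot_frames(translation_distance, focus_values, audio_candidates,
                       motion_threshold, focus_drop=0.30):
    """Flag frame indices where the translation distance jumps, the focus
    metric drops sharply, and the audio analysis also flagged the frame."""
    shots = []
    for i in range(1, len(focus_values)):
        recoil = translation_distance[i] > motion_threshold
        blur = focus_values[i] < (1.0 - focus_drop) * focus_values[i - 1]
        loud = i in audio_candidates
        if recoil and blur and loud:
            shots.append(i)
    return shots
```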
To illustrate this detector,
After this, the algorithm creates multiple video segments around each video frame where a shot-fired event is detected. As an example,
Referencing section ‘D’ of
Then, as shown in
Motion estimation between successive video frames can be efficiently accomplished in multiple ways. As part of this illustration, motion between a frame and the last-stable frame 32 is calculated by summing up the motion between each pair of successive frames between that frame and the last-stable frame 32. More specifically, motion estimation was calculated using phase correlation techniques applied to a rectangular portion of the image at the center of each video frame. The resulting data was checked for sanity: if the calculated motion estimate exceeded a specified threshold, other regions on the two frames being compared were used to estimate motion until a reasonable value was computed. This sanity check guards against erroneous motion estimation when the center of the video lacks any features that would enable the phase correlation to detect motion (e.g. blue skies or uniform snow).
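A hedged sketch of this accumulation and sanity check, reusing the region_shift() helper sketched earlier and assuming that the region coordinates and the maximum plausible per-frame motion are characterized values, is:

```python
def motion_since_last_stable(frames_gray, last_stable_idx, current_idx,
                             center_region, fallback_regions, max_step):
    """Sum successive-frame motion from the last-stable frame up to the
    current frame, preferring a central region and falling back to other
    regions whenever the estimate fails the sanity check."""
    total_dx = total_dy = 0.0
    for i in range(last_stable_idx, current_idx):
        prev_f, next_f = frames_gray[i], frames_gray[i + 1]
        dx = dy = 0.0
        for (r0, r1, c0, c1) in [center_region] + fallback_regions:
            dx, dy, _ = region_shift(prev_f[r0:r1, c0:c1], next_f[r0:r1, c0:c1])
            if abs(dx) <= max_step and abs(dy) <= max_step:   # sanity check passed
                break
        total_dx += dx
        total_dy += dy
    return total_dx, total_dy
```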
Continuing further with the description in section ‘D’ of
For this hit/miss determination, the algorithm checks if the location of the target in the shot-fired frame 31 is within a specified region of the aimpoint of the shotgun. If this is the case, the shot is registered as a hit; otherwise, it is registered as a miss. The region can be adjusted to reflect the width of the shotgun's pellet pattern radius. This region is the effective impact zone of the shotgun's projectile(s) (e.g. pellet pattern).
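A minimal sketch of this check, assuming a circular impact zone whose radius in pixels reflects the pellet pattern, is:

```python
def hit_or_miss(aimpoint_xy, target_xy, pattern_radius_px):
    """Register a hit when the target in the shot-fired frame lies within the
    pellet-pattern radius of the aimpoint; otherwise register a miss."""
    ax, ay = aimpoint_xy
    tx, ty = target_xy
    distance = ((tx - ax) ** 2 + (ty - ay) ** 2) ** 0.5
    return distance <= pattern_radius_px
```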
To end this portion, as detailed in D.4 of
In the last phase of the algorithm, referencing section ‘E’ of
The first strategy is implemented as follows. To locate a target object 34 within these noisy images, this invention adjusts the threshold value, and only considers objects, shown as other white regions in
A second strategy is implemented as follows. The locations of the brightest point(s), e.g. possible object(s) 34, that meet a specified minimum size are aggregated from each of the thresholded frames (as described above). All of these objects 34 meeting these requirements, that are also located within an area defined by a region (e.g. oval or other shaped mask) centered on a chosen point of the video frame (e.g. center of frame, aimpoint, or other specific point), are selected as possible target locations.
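One hedged sketch of this second strategy, assuming an elliptical mask and OpenCV contour moments for the candidate centroids (the mask dimensions and minimum size are illustrative parameters), is:

```python
import cv2
import numpy as np

def candidate_targets(thresholded, center_xy, axes, min_area):
    """Keep bright regions that meet a minimum size and fall inside an oval
    mask centered on a chosen point of the frame (e.g. the aimpoint)."""
    mask = np.zeros(thresholded.shape, dtype=np.uint8)
    cv2.ellipse(mask, center_xy, axes, 0, 0, 360, 255, -1)        # filled oval mask
    masked = cv2.bitwise_and(thresholded, mask)
    contours, _ = cv2.findContours(masked, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        m = cv2.moments(c)
        if cv2.contourArea(c) < min_area or m["m00"] == 0:
            continue
        points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))  # centroid of candidate
    return points
```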
A third strategy is implemented by modifying the second strategy as follows. This strategy exploits the fact that the target usually appears red. The hue of a pixel is indicative of the dominant color of that pixel. The saturation of the pixel is indicative of how much color is in the pixel. To illustrate this point, a red pixel and a gray pixel can both contain the same amount of red, but the hue and saturation values of the red pixel (e.g. 10 and greater than 70, respectively) will differ significantly from the gray pixel's (e.g. any hue, but less than 10 saturation). In the third method, therefore, in addition to searching for the brightest points, a check is made on the second of the original two images that were subtracted in [0085] above. If the hue of the pixel value falls within 0-15 or within 165-180, and the saturation is greater than 70, only then is the brightest point considered as a trajectory candidate.
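A short sketch of this hue/saturation check is given below; OpenCV's HSV convention (H in 0-180) is assumed, which matches the 0-15 and 165-180 ranges stated above.

```python
def is_red(hsv_pixel, min_saturation=70):
    """Accept a candidate only if the corresponding pixel in the second of the
    two subtracted images is red: hue 0-15 or 165-180 with saturation > 70."""
    h, s, _v = hsv_pixel
    return (h <= 15 or h >= 165) and s > min_saturation
```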
As a result of restricting the algorithm's focus to these masked region(s), this process excludes peripheral artifacts caused by reflections from unintended (e.g. non-target) objects, such as the shotgun barrel, the trap house, or other objects. Therefore, the algorithm processes the video frames much more efficiently and effectively, which results in much faster detection of possible target objects 34.
An illustration of a resulting cluster of points from this process is shown on
From this set of points, only points that yield a reasonable curve fit are considered as candidate points. The count of such points is recorded. This process is repeated for all possible point pairs. The pair that produces the highest count of fitted points (e.g. winning combination) is selected as the set of target points. The trajectory traced by these points is selected as the seed trajectory. In this example, the trajectory 38 depicted in
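A hedged sketch of this pairwise search follows, assuming a straight-line model through each candidate pair (the description above does not specify the curve form) and a pixel tolerance for counting fitted points; the structure is RANSAC-like, keeping the pair with the largest consensus.

```python
from itertools import combinations
import numpy as np

def seed_trajectory(points, tolerance=5.0):
    """For every pair of candidate points, count how many other points lie
    close to the line through the pair; the pair with the highest count is
    the winning combination and its fitted points seed the trajectory."""
    pts = np.asarray(points, dtype=float)
    best_pair, best_inliers = None, np.empty(0, dtype=int)
    for i, j in combinations(range(len(pts)), 2):
        p, q = pts[i], pts[j]
        dx, dy = q - p
        norm = np.hypot(dx, dy)
        if norm == 0.0:
            continue
        # perpendicular distance of every point to the line through p and q
        dist = np.abs(dx * (pts[:, 1] - p[1]) - dy * (pts[:, 0] - p[0])) / norm
        inliers = np.where(dist <= tolerance)[0]
        if len(inliers) > len(best_inliers):
            best_pair, best_inliers = (i, j), inliers
    return best_pair, pts[best_inliers]
```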
In
Furthermore, to further develop the annotation concept outlined in
To enhance the training experience further, this invention enables the aggregated data from the training experiences (whether specific shot occurrences, parts of training records, or entire training records) to be combined into one view or a set of views.
The advantage of the present invention, without limitation, is that these combined views, relevant video selection, determination of target(s) and aimpoint tracking, and derivation of relative velocities of said target(s) and aimpoint(s), coupled with the video record 16 of the actual training experience, deliver far greater value to the user or observer because they illustrate shooting experiences (e.g. habits) that are either desired or not desired much more efficiently and effectively than prior methods or systems. Furthermore, since the annotated experience illustrated in
In a broad embodiment, the present invention is a system and method for target training and education that will improve user performance in shooting situations. While the foregoing detailed description of the present invention enables one of ordinary skill to make and use what is presently considered to be the best embodiment of the present invention, those of ordinary skill will appreciate and understand that other variations, combinations and equivalents of the specific embodiment, method, system and examples could be created and would still be within the scope of the invention. Therefore, the invention should not be limited to the embodiment described above, but extends to all embodiments and methods within the breadth and scope of the invention.
This application claims priority from U.S. Provisional Patent Application No. 62/277,150, filed Jan. 11, 2016, which is incorporated herein by reference in its entirety.