Automatic Target Scoring Machine

Information

  • Patent Application Publication Number
    20210133441
  • Date Filed
    September 09, 2020
  • Date Published
    May 06, 2021
Abstract
An image of a target is obtained and processed to find changes in the target which represent projectiles hitting the target, and the target is then scored.
Description
BACKGROUND

Target shooting is typically done by shooting at paper targets with a projectile-firing item, such as a gun. The locations where the projectiles hit the target can be seen by the user in different ways. The locations can be found using magnifying devices such as binoculars. Sometimes the targets are on pulleys and can be rolled back toward the shooter.


Users who target shoot want ready confirmation of the location where the target was hit.


SUMMARY

The inventors recognized the desirability of an indoor architecture for a shooting target system that automatically identifies the location of shots on a target. The system allows the shooters to know their score by looking at their phone or tablet, without needing to view the target. This can be used when competing with other shooters.


The system as described can be used with any existing indoor target range systems, as it is independent of target size and color.


Embodiments describe a system that uses an image acquisition device such as a camera to obtain images of the shooting target and uses a processing device to compute the location of the shots on the target. In embodiments, the users can share both the real-time and historic details as part of a competition. Unlike other systems, users who are competing need not be in the same physical location.





BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings:


The drawings show embodiments of the invention, and specifically:



FIG. 1 shows a block diagram of the camera recognizing parts of the target;



FIG. 2 illustrates a screen shown on a user's personal device;



FIG. 3 illustrates exemplary datum in an embodiment;



FIG. 4 shows the datum on a target;



FIG. 5 shows identified shots on a target;



FIG. 6 shows additional identified shots on the target;



FIG. 7 shows a flowchart of operation to subtract colors;



FIG. 8 shows a functional operation chart;



FIG. 8B shows a flow of the quantization path;



FIG. 8C shows the optical detection path;



FIG. 8D represents the flow of the naïve Bayes processing;



FIG. 8E represents the spiking neural network;



FIG. 8F represents hardcoded feature detection;



FIG. 8G shows the hierarchical temporal memory or HTM method; and



FIG. 8H represents the object detection path;



FIG. 9 shows a flowchart of registration;



FIG. 10 shows an alternative flowchart of registration without using the datum references; and



FIG. 11 shows a flowchart of shot filtering.





DETAILED DESCRIPTION

An embodiment is described herein with reference to the figures.



FIG. 1 illustrates a basic silhouette target with datums on the target. The target can be any kind of target. The datums are unique identifiers which can be of any form, but which can be recognized by a camera. In this embodiment, the datums are shown in FIG. 3, formed by squares having a hollow round center at each of the four corners of the target. By using these kinds of datums, the processing system can readily recognize the perimeter of the target.


In an alternate embodiment, the processing system extracts the white border of any target without the datums to recognize the perimeter of the target. FIG. 10 illustrates this alternative embodiment, and is described herein.


Before the shooter starts shooting, the camera 110 and processor may identify the optimal digital and optical zoom values.


The camera 110 images the target at all times. Each time a projectile marks the target, the camera sends a labeled version of the target showing the locations of shots fired, to the user's device shown in FIG. 2. In this way, the user receives on their personal device an image of the target with the different shots thereon, also showing the total score of their shots.


In one embodiment, the datum that is used can be the one shown in FIG. 3; however, any unique pattern can be used in this way. The image from the camera 110 is sent to a processor 120, which can be either a processor in an off-site processing system, or the processor of a smartphone.


The processor is connected to the Internet, shown as 125, so that the results can be communicated over the Internet in real time. A central server 130 can accumulate all of the information.


The processor 120 uses a processing system to determine the regions of the image which have been shot, in order to score the target.


The processor 120 can carry out the following steps. First, the system takes the target as shown in FIG. 1, after it has been shot at by a user. The target itself may have score lines 400, such as shown in FIG. 4 so that the target 100 not only includes the datum 105, but also score lines 400 which indicate the score which should be received for each shot.


The processor transforms the image of the target to a common reference frame, thus transforming the input image into that standardized reference frame. FIG. 5 represents the target in the standardized reference frame, showing the processed target relative to the datums 105, and showing a number of different shots such as 500, 502 on the target. In this image, each of the shots is identified by machine vision, and also numbered. The numbering can be carried out sequentially, so that each shot gets a number, and each time a new shot is recognized using machine vision techniques, that new shot gets a new sequential number. For example, each image at each time can be correlated against a previous image, and when a difference is detected, that difference is compared against different images of what a shot through a paper target looks like. If the difference matches a shot, then the new item is recognized as a new shot and numbered.
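
As one non-limiting illustration of the frame-to-frame comparison described above, the following sketch differences two registered grayscale frames and returns the changed regions as candidate shots. It assumes OpenCV; the function name and threshold value are illustrative and are not taken from the application.

    import cv2

    def find_new_shot_candidates(prev_frame, curr_frame, diff_threshold=30):
        """Compare two registered grayscale frames and return contours of new marks."""
        # Absolute per-pixel difference highlights anything that changed between frames.
        diff = cv2.absdiff(curr_frame, prev_frame)
        # Binarize: only keep changes stronger than the threshold.
        _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
        # Each connected changed region is a candidate shot, to be matched later
        # against what a projectile hole through a paper target looks like.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return contours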


For each of the numbered target shots, as shown in FIG. 6, the system has identified all of: the location of the shot, the number of the shot, and the score associated with the shot.
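
The score for each shot can then be read from the score lines 400 in the common reference frame. The following is a minimal sketch assuming the score lines are concentric rings of known radii around the bullseye; the radii and point values shown are purely illustrative.

    import math

    # Hypothetical ring radii (in warped-image pixels) paired with point values,
    # ordered from the innermost (highest-scoring) ring outward.
    SCORE_RINGS = [(40, 10), (80, 9), (120, 8), (160, 7), (200, 6)]

    def score_shot(shot_xy, center_xy, rings=SCORE_RINGS):
        """Return the point value for a shot based on its distance from the bullseye."""
        distance = math.dist(shot_xy, center_xy)
        for radius, points in rings:
            if distance <= radius:
                return points
        return 0  # outside the outermost score line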


A simplified version of the annotated target image is sent to the user's phone, as shown in FIG. 2, which shows four numbered target hits (“shots”) and the score associated with those four numbered shots.


This uses a non-rigid transformation to align the target, which is in an indeterminate reference frame based on the location of the camera, to the common reference frame, the common reference frame being defined by the specific locations of the datums.


In an embodiment where no specific datum is being used, the common reference frame is defined by the specific location of the machine identified perimeter of the target.


Then, the target image is processed by a technique which is agnostic to target color to distinguish the shots on the target, as described herein. False positives are removed by removing any spots on the target which are less than a specified size or of the wrong shape.
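
One way to implement the size and shape filtering just described is to test each candidate contour's area and circularity. The sketch below assumes OpenCV contours; the numeric limits are illustrative only.

    import math
    import cv2

    def filter_false_positives(contours, min_area=20, max_area=400, min_circularity=0.6):
        """Keep only contours whose size and shape are plausible for a bullet hole."""
        kept = []
        for c in contours:
            area = cv2.contourArea(c)
            perimeter = cv2.arcLength(c, True)
            if perimeter == 0 or not (min_area <= area <= max_area):
                continue
            # Circularity is 1.0 for a perfect circle and falls toward 0 for tears or streaks.
            circularity = 4 * math.pi * area / (perimeter ** 2)
            if circularity >= min_circularity:
                kept.append(c)
        return kept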


An embodiment follows the flowchart of FIG. 7 to carry out all these items. The session is started at 700, which begins with an initialization phase at 705, in which multiple different image frames are captured of the target and its datums. At 710, the system determines the different colors which are present in the frames. This is done in order to maintain the color agnostic features of the invention. Once the target has been initialized, the flow at 720 is carried out for multiple different frames. At 725, an image frame is captured. At 730 the datums are used to register the frame to a baseline template.


Colors are initially found during the initialization at 710. These colors are then subtracted at 735 to form a color agnostic frame. The resulting image at 740 is a map of “candidate” shots, which are items that are likely to represent shots. This is then processed to remove false positives. For example, as described above, anything that is not the right size and/or shape to be properly classified as a target hit can be removed. Moreover, anything that is not visible for more than a specified time, such as 2 seconds, is similarly removed.
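
A minimal sketch of the color subtraction at 735, assuming the palette built at 710 is a list of BGR colors and that pixels within a small tolerance of any palette color are treated as belonging to the blank target; the tolerance value is illustrative.

    import numpy as np
    import cv2

    def subtract_palette(frame_bgr, palette_bgr, tolerance=25):
        """Zero out every pixel close to a known target color, leaving only new marks."""
        remaining = np.full(frame_bgr.shape[:2], 255, dtype=np.uint8)
        for color in palette_bgr:
            lower = np.clip(np.array(color, dtype=int) - tolerance, 0, 255).astype(np.uint8)
            upper = np.clip(np.array(color, dtype=int) + tolerance, 0, 255).astype(np.uint8)
            # Pixels inside the tolerance band belong to the known target colors.
            known = cv2.inRange(frame_bgr, lower, upper)
            remaining = cv2.bitwise_and(remaining, cv2.bitwise_not(known))
        # "remaining" is a binary map of pixels matching none of the known colors,
        # i.e. candidate shot marks, regardless of the target's own colors.
        return remaining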


The output image of FIG. 2 is returned at 745 with labeled and numbered shots.



FIG. 8 shows additional details of the process as described previously herein. The initial sampling is carried out at 800 by capturing a camera image at 805, registering the image at 806, sampling all the color pixels in the image at 807, and running them through a quantization algorithm at 808. An example quantization algorithm is a K-means-like algorithm, although other techniques are also possible. These colors are added to the palette of existing colors at 809.
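
As a non-limiting illustration of the K-means-like quantization at 808, the sketch below clusters sampled pixels from the initialization frames into a small palette using scikit-learn (an assumed library; any clustering routine could be substituted). The sample counts and cluster count are illustrative.

    import numpy as np
    from sklearn.cluster import KMeans

    def build_color_palette(frames_bgr, n_colors=8, sample_per_frame=5000):
        """Cluster sampled pixels from the initialization frames into a small palette."""
        samples = []
        rng = np.random.default_rng(0)
        for frame in frames_bgr:
            pixels = frame.reshape(-1, 3)
            idx = rng.choice(len(pixels), size=min(sample_per_frame, len(pixels)), replace=False)
            samples.append(pixels[idx])
        samples = np.vstack(samples).astype(np.float32)
        # The cluster centers act as the palette of colors belonging to the blank target.
        kmeans = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(samples)
        return kmeans.cluster_centers_.astype(np.uint8)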


At the same time as the quantization, the system normalizes and formats the frame data at 810 and retains averages of the frame normalization at 811. After initializing, control passes, at 812, to the active processing carried out at 815. The active processing again receives a camera image at 820. Image registration is carried out at 821, followed by eliminating the known pixels at 822 from the palette. The remainder is converted to binary at 823, and dilated at 824. A time map is used at 825 to repeatedly increment and decrement each location at 826 and 827. This is used to find newly aged candidates at 828. Any newly aged candidate is taken as a new possible shot. At 829, it is determined whether this matches in time with a known shot. Machine learning techniques are utilized to confirm that a candidate shot is, in fact, a shot and to distinguish between nearby shots. Machine learning techniques are also used to determine if an area contains multiple shots that have merged together. If so, its location is extracted at 830, and again it is determined if this was a shot at 831. If so, the shot is scored at 832, and the local data structures are updated at 833 with the new scoring.
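
The time map at 825-828 can be kept as a per-pixel persistence counter that is incremented where a candidate change is present and decremented where it is not; a location is "newly aged" once its counter first reaches a persistence threshold, which keeps short-lived noise from becoming a shot. A minimal sketch under those assumptions (the threshold value is illustrative):

    import numpy as np

    class TimeMap:
        """Per-pixel persistence counter used to age candidate shot locations."""

        def __init__(self, shape, age_threshold=10):
            self.counts = np.zeros(shape, dtype=np.int32)
            self.age_threshold = age_threshold

        def update(self, candidate_mask):
            """Increment persisting candidate pixels, decrement everything else."""
            self.counts[candidate_mask > 0] += 1
            self.counts[candidate_mask == 0] -= 1
            np.clip(self.counts, 0, self.age_threshold + 1, out=self.counts)
            # Newly aged pixels have just reached the threshold on this frame.
            return self.counts == self.age_threshold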


The determination of whether a shot has been fired at 831 is further explained herein with reference to FIG. 11.


At 1100, each candidate shot is processed. First, the system checks at 1110 if the shot is close to any other shots that we have already found, using a distance threshold. If it is close to any other shots, control passes to 1120 to determine if the area of the shot has expanded. If so, then this candidate shot is processed at 1130 to determine whether it is a new shot that is close to an existing shot. If not, this candidate shot is classified at 1140 as just a repeat of an old shot.


If the candidate shot is not close to any other shots that we have found, 1130 is used to check if the candidate is a shot. This uses multiple different machine vision techniques as a quorum, including but not limited to machine learning techniques, candidate shot characteristics (such as circularity, dimensions, area, arc length, and latitude/longitude deltas), decision trees, and artificial neocortical neuron modelling utilizing sparsified network inputs. Each of these methods in tandem contributes to an overall confidence score as to whether the candidate is a shot or not.
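
The geometric characteristics named above can be computed directly from a candidate's contour, as in the illustrative sketch below (assuming OpenCV contours; the learned features, decision trees, and neocortical modelling that also contribute to the quorum are outside this snippet).

    import math
    import cv2

    def candidate_shot_features(contour):
        """Geometric characteristics of one candidate shot, for use by the quorum."""
        area = cv2.contourArea(contour)
        arc_length = cv2.arcLength(contour, True)
        x, y, width, height = cv2.boundingRect(contour)
        # Circularity is 1.0 for a perfect circle and drops toward 0 for tears or streaks.
        circularity = 4 * math.pi * area / (arc_length ** 2) if arc_length else 0.0
        return {
            "area": area,
            "arc_length": arc_length,
            "width": width,
            "height": height,
            "circularity": circularity,
        }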


Based on the confidence score, the candidate shot is characterized as a shot at 1150, or not as a shot at 1151.


The registration, shown at 806, is carried out as shown in FIG. 9. At 900, the pixels with the datum color are found. For each found region, the regions are checked for the presence of an inner shape at 905. This inner shape should be roughly the size and shape of the projectile whose shot is being scored. The center point of the shape is found at 910. The points are sorted at 915 and their perspective is translated at 916 based on the datums. This provides the registered view shown in FIG. 5.
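
A minimal sketch of this registration, assuming OpenCV, a binary mask of datum-colored pixels, and an illustrative template size: the four datum centers are sorted into a consistent corner order and a perspective transform maps them onto the corners of the standardized frame.

    import numpy as np
    import cv2

    def register_to_template(frame_bgr, datum_mask, out_size=(800, 1000)):
        """Warp the camera frame into the common reference frame using the four datums."""
        contours, _ = cv2.findContours(datum_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        centers = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 0:
                centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        if len(centers) != 4:
            return None  # registration needs exactly the four corner datums
        # Sort the datum centers into top-left, top-right, bottom-right, bottom-left order.
        centers = sorted(centers, key=lambda p: (p[1], p[0]))
        top = sorted(centers[:2], key=lambda p: p[0])
        bottom = sorted(centers[2:], key=lambda p: p[0])
        src = np.float32([top[0], top[1], bottom[1], bottom[0]])
        w, h = out_size
        dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        # The perspective transform maps the datums onto the corners of the template.
        matrix = cv2.getPerspectiveTransform(src, dst)
        return cv2.warpPerspective(frame_bgr, matrix, (w, h))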



FIG. 8B shows a flow of the quantization path. The image frame is received at 851, and registered at 852 using the techniques described above. At 853, the known pixels are eliminated, and the remaining pixels are converted to binary at 854. The remainders are then dilated at 855, and applied to a time map at 856. The found locations are incremented at 857 and then decremented at 858. If there are any newly aged candidates at 859, 860 determines whether this is a known shot. If there are no newly aged candidates at 859, then flow returns to 851 to look for another image frame. If the candidate is a known shot, flow likewise returns to find the next image frame. If there is no known shot at 860, however, the location of the shot is extracted at 861, and processed at 862 to determine if it is a shot using a weighted average of naïve Bayes, hardcoded features, a spiking neural network, and hierarchical temporal memory. If this scores as yes, the candidate is scored as a shot at 863 and the local data structures are updated at 864.
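
A minimal sketch of the weighted-average decision at 862; the per-path weights and cutoff are illustrative, and each probability is assumed to come from the corresponding path of FIGS. 8D through 8G.

    def weighted_shot_decision(probabilities, weights=None, cutoff=0.5):
        """Weighted average of the four classifier outputs from FIGS. 8D-8G."""
        # `probabilities` maps each path name to its probability that the candidate
        # is a shot; the default weights below are illustrative only.
        weights = weights or {"naive_bayes": 0.30, "hardcoded": 0.25,
                              "spiking": 0.25, "htm": 0.20}
        total = sum(weights.values())
        confidence = sum(weights[name] * probabilities[name] for name in weights) / total
        return confidence >= cutoff

    # Example: three paths lean yes, one leans no.
    print(weighted_shot_decision({"naive_bayes": 0.9, "hardcoded": 0.8,
                                  "spiking": 0.7, "htm": 0.2}))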



FIG. 8C shows the optical detection path. The image frame at 865 is normalized at 866 and the normalized image is passed through a trained object detection model at 867. 868 determines if this is a known shot, and if not, extracts the location at 869. The shot is scored at 870 and the data structures are updated at 871.



FIG. 8D represents the flow of the naïve Bayes processing. The candidate shot at 875 has its features extracted at 876. The features are passed through a naïve Bayes filter to get the probability of a shot at 877. If the probability exceeds the threshold at 877, the shot is scored at 878 and the data structures are updated at 879. Otherwise, the candidate is labeled as not being a shot at 880.
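
As a non-limiting illustration of this path, the sketch below uses a Gaussian naïve Bayes model from scikit-learn (an assumed library) trained on a tiny, made-up set of geometric features; real training data, feature choices, and thresholds would be supplied in practice.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    # Illustrative training data: rows of [area, arc_length, circularity],
    # labeled 1 for genuine bullet holes and 0 for non-shot artifacts.
    X_train = np.array([[60, 30, 0.84], [75, 33, 0.87], [400, 120, 0.35], [15, 40, 0.12]])
    y_train = np.array([1, 1, 0, 0])

    model = GaussianNB().fit(X_train, y_train)

    def naive_bayes_is_shot(features, threshold=0.5):
        """Return (probability, is_shot) for one candidate's feature vector."""
        prob = model.predict_proba(np.array([features]))[0][1]
        return prob, prob >= threshold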



FIG. 8E represents the spiking neural network. The candidate shot at 881 has its features extracted at 882 and its features are fed into the trained spiking neural network at 883. If the neuron spikes at 883, this shot is scored at 884 and the data structures are updated at 885. If not, this is not a shot at 886.
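
A heavily simplified, illustrative sketch of the spiking path: a single leaky integrate-and-fire neuron integrates a weighted sum of the candidate's features, and the candidate is accepted if the neuron spikes. The weights, leak, threshold, and step count are stand-ins for the trained network described above.

    def lif_neuron_spikes(feature_currents, weights, threshold=1.0, leak=0.9, steps=20):
        """Integrate weighted feature currents over time; report whether the neuron spikes."""
        potential = 0.0
        drive = sum(w * f for w, f in zip(weights, feature_currents))
        for _ in range(steps):
            # Leak a fraction of the membrane potential, then add the input drive.
            potential = potential * leak + drive
            if potential >= threshold:
                return True  # the neuron fired: the candidate is accepted as a shot
        return False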



FIG. 8F represents hardcoded feature detection. The candidate shot at 886 has its features extracted at 887. 888 determines whether the shot area, arc length, and dimensions are within threshold values. If so, the shot is scored at 889 and the data structures are updated at 890. If not, this is determined at 891 not to be a shot. The hierarchical temporal memory or HTM method is shown in FIG. 8G.


At 891 the candidate shot is received, and its features are extracted at 892, including the shot area, arc length, and intensity values. 893 determines which temporal group from the HTM model the features align with. If the features align with a shot group at 893, the shot is scored at 894 and the data structures are updated at 895. If not, the candidate is taken as not being a shot at 896.



FIG. 8H represents the object detection path. At 1151, the system initializes the object detection model using a number of different layers and parameters. At 1152, this object detection model is trained on data sets of images containing bullet holes. 1153 determines if the inference time is less than 100 ms, and if so, 1154 initializes a second object detection model with a smaller number of parameters so that the inference time will be less than a thousand milliseconds. 1155 then trains the second object detection model on the outputs of the first object detection model. The model is deployed at 1156 in this way.


In addition to the technique described above for finding shots, another technique that we have used is lightweight object detection models that can determine the location of shots on an image in between 10 ms and 1000 ms. These lightweight object detection models are neural networks that utilize millions of parameters across convolutional layers, linear layers, dropout layers, pooling layers, residual layers, region proposal networks, and/or other layers. With this, they are able to identify and locate the shots on a given image fed into the network. These lightweight object detection models can be trained from scratch. They also can be trained using a technique called knowledge distillation, in which the lightweight object detection model is trained from the outputs of another machine learning model that is more accurate, but slower.
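
A minimal sketch of the knowledge-distillation idea, written in PyTorch as an assumption (the application names no framework): a small "student" detector is trained to reproduce the outputs of a larger, slower "teacher" on the same images. The toy layer sizes and the four-value output are illustrative only.

    import torch
    import torch.nn as nn

    # Toy stand-ins: the teacher is larger and slower, the student smaller and faster.
    teacher = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 4))
    student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4))

    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    def distill_step(images):
        """One training step: the student learns to match the teacher's predictions."""
        with torch.no_grad():
            target = teacher(images)  # slow but accurate "soft labels"
        optimizer.zero_grad()
        loss = loss_fn(student(images), target)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Example usage with a random batch of small RGB frames.
    print(distill_step(torch.randn(2, 3, 64, 64)))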



FIG. 10 illustrates an alternative embodiment, that allows the system to be used with any kind of target, without needing the datums on the target to identify and register the target.


In FIG. 10, a frame is input at 1000, representing the information being imaged by the camera. At 1010, the border is extracted, and the corners of the border are obtained at 1020. These corners form the edges of the image. The different points in the image are sorted at 1030. The image is then changed in perspective to determine new points on the image, and the frame information is returned at 1050. By doing this, no datums are needed for the registration.
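
A minimal sketch of this datum-free registration, assuming OpenCV: the bright border of the target is thresholded, its largest contour is approximated to four corners, the corners are sorted, and the frame is warped into the standardized reference frame. The threshold and output size are illustrative.

    import numpy as np
    import cv2

    def register_without_datums(frame_bgr, out_size=(800, 1000)):
        """Register a plain target by its white border instead of printed datums."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        _, white = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(white, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        border = max(contours, key=cv2.contourArea)
        # Approximate the border outline with four corner points.
        approx = cv2.approxPolyDP(border, 0.02 * cv2.arcLength(border, True), True)
        if len(approx) != 4:
            return None
        pts = approx.reshape(4, 2).astype(np.float32)
        # Order the corners: top-left, top-right, bottom-right, bottom-left.
        pts = sorted(pts, key=lambda p: (p[1], p[0]))
        top = sorted(pts[:2], key=lambda p: p[0])
        bottom = sorted(pts[2:], key=lambda p: p[0])
        src = np.float32([top[0], top[1], bottom[1], bottom[0]])
        w, h = out_size
        dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        return cv2.warpPerspective(frame_bgr, cv2.getPerspectiveTransform(src, dst), (w, h))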


Although only a few embodiments have been disclosed in detail above, other embodiments are possible and the inventors intend these to be encompassed within this specification. The specification describes certain technological solutions to solve the technical problems that are described expressly and inherently in this application. This disclosure describes embodiments, and the claims are intended to cover any modification or alternative or generalization of these embodiments which might be predictable to a person having ordinary skill in the art.

Claims
  • 1. An automated target shot identifying system, comprising: a processing part, receiving an image of an area including a target; the processing part operating to compare a first image of the target at a first time prior to a shot being fired, with a second image of the target at a second time subsequent to the shot being fired, the processing part determining changes on the target between the first time and the second time, and processing said changes to determine if each change represents a target hit at a location where the target is changed; and the processing part sending information indicative of the target hits to a user's personal device.
  • 2. The system as in claim 1, wherein the processing part labels the target hits with numerical values according to an order in which the target hits appeared on the target as part of the information sent to the personal device.
  • 3. The system as in claim 1, wherein the processing part determines colors that are present in the image, and subtracts the color to form a color agnostic image and processes the target hits on the color agnostic image.
  • 4. The system as in claim 1, wherein the processing part uses a time map to determine new changes to the image, and to process any new changes as a possible location of a possible target hit.
  • 5. The system as in claim 4, wherein each of the changes is processed to determine whether the change in the image is a right size and a right shape to be properly classified as a target hit.
  • 6. The system as in claim 1, wherein the processing part identifies target hits on the target, and sends an image of the target with target hits identified thereon to a user's personal device as the information sent to the personal device.
  • 7. The system as in claim 6, wherein the processing part determines a score for each of the target hits and also sends a total score for the target hits to the user's device as part of the information sent to the personal device.
  • 8. The system as in claim 5, wherein the processing part determines if a change to the target is within a distance threshold to other target hits that have already been found, and determines if an area of the target hit has expanded, and classifies this as a new target hit if the area has expanded.
  • 9. The system as in claim 1, wherein the processing part uses machine vision techniques to create a confidence score as to whether a change in the image is classified as a target hit, or not a target hit.
  • 10. The system as in claim 1, wherein the processing part finds a reference location on the target, and processes the changes according to the reference location.
  • 11. The system as in claim 10, wherein the reference location is a datum which is a specified image on the target.
  • 12. The system as in claim 10, wherein the reference location is a white perimeter of the target.
  • 13. A method of automated target shot identifying, comprising: receiving an image of an area including a target; operating to compare a first image of the target at a first time prior to a shot being fired, with a second image of the target at a second time subsequent to the shot being fired, determining changes on the target between the first time and the second time; processing said changes to determine if each change represents a target hit at a location where the target is changed or to determine that each change does not represent a target hit at the location; and sending information indicative of the target hits to a user's personal device.
  • 14. The method as in claim 13, wherein the processing part labels the target hits with numerical values according to an order in which the target hits appeared on the target.
  • 15. The method as in claim 13, wherein each of the changes is processed to determine whether the change in the image is a right size and a right shape to be properly classified as a target hit.
  • 16. The method as in claim 13, further comprising sending an image of the target with target hits identified thereon to a user's personal device as part of the information sent to the user's personal device.
  • 17. The method as in claim 13, further comprising sending a score for each of the target hits and also sending a total score for the target hits to the user's device as part of the information sent to the user's personal device.
  • 18. The method as in claim 13, wherein the determining changes uses machine vision techniques to create a confidence score as to whether a change in the image is classified as a target hit, or not a target hit.
Parent Case Info

This application claims priority to Provisional application No. 62/899,519, filed Sep. 12, 2019, the entire contents of which are herewith incorporated by reference.

Provisional Applications (1)
Number Date Country
62899519 Sep 2019 US