SYSTEM AND METHOD FOR ESTIMATING WHETHER AN OBJECT HIT OR MISSED A TARGET

Information

  • Patent Application
  • Publication Number
    20200348110
  • Date Filed
    May 04, 2020
  • Date Published
    November 05, 2020
  • Inventors
    • FRUCHTNIS; YAAKOV
Abstract
The subject matter discloses a method for estimating hit or miss of an object directed towards a target, comprising collecting an audio signal over time by an audio sensor located on, inside or near the target, said audio signal being defined by a set of frequencies, identifying changes in the set of frequencies over time, predicting a minimal distance between the object and the audio sensor based on the changes in the set of frequencies over time, and emitting an indication according to the minimal distance between the object and the audio sensor.
Description
FIELD

The present invention relates to estimating whether an object hit or missed a target.


BACKGROUND

Objects such as bullets, projectiles, balls, plastic objects, and others are often shot at a target. Such a target may be part of a firing training session, or part of a game in which a player uses a toy gun and fires, shoots, or throws physical objects at a target. The target may be static or movable, for example mounted on a player's upper arm.


It is often desirable to estimate whether or not the object hit the target, in order to decide how to reward a player in the game, or to evaluate a training session when firing real projectiles at a target. Current methods for estimating hit or miss include placing a disposable layer on the target, such as cardboard or paper, and marking the hits on the disposable layer. This method cannot be used when playing with toy guns, or when using objects that do not leave a mark on the target.


SUMMARY

The subject matter discloses a computerized system and method configured to estimate whether an object hits or misses a target based on audio signals. The object may be a projectile shot from a weapon. The weapon may be a firearm, a training weapon configured to fire non-lethal bullets, a toy gun, and the like.


The system comprises one or more audio sensors, such as microphones. The audio sensor is electrically coupled to a processing module configured to process the audio signals, as elaborated below, to a digital processor, and to an indicator configured to indicate whether the object hit or missed the target. The indicator may be configured to emit the indication in one or more manners, including emitting light, audio, wireless transmissions, and vibrations. The indicator emits an indication according to a command from the processing module; the command depends on whether the object hit or missed the target. The system may also comprise one or more audio-signal amplifiers.


The system detects objects moving towards the target by sampling sonic and/or ultrasonic and/or infrasonic signals at the processing module's input. The system is configured to be placed in or on the target. In case the system is located inside the target, the audio sensor may be external to the target or on its surface, to allow the audio signals to reach it without attenuation. The system tracks the sound emitted during the object's movement and detects whether the object hit the target. The sound may be emitted due to the flow of air or another medium through and around the object, while the object moves through the air or other medium. The system may estimate hit or miss within a sphere-like volume surrounding the system. The radius of the target around the system may be adjusted. Once a hit is detected, the system provides indications using the indicator.


The audio sensor may be configured to collect audio signals in a frequency range that matches a predefined frequency signature of the audio emitted due to the object's movement. The frequency of the audio emitted due to the object's movement may be stored in the memory unit of the system. When the system is configured to determine hits or misses of multiple objects having multiple frequency signatures of the audio emitted due to the objects' movements, a specific frequency signature associated with a specific object or object type may be inputted into the system by a user of the system, for example via an input unit of the system or by a control interface receiving commands from another device.


The present invention also discloses a method for determining hits or misses of an object shot at a target. The method comprises collecting the audio signals by the audio sensor. The audio signals are sampled by the sensor, for example at a sampling rate of 10,000 samples per second. The audio signals are sent to the processing module. The processing module obtains multiple samples of the audio signals emitted due to and during the object's movement. The processing module may calculate a rate of change in the frequency signature of the audio signals emitted due to the object's movement.
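A minimal sketch of how such a rate of change could be derived from the sampled audio is shown below; it is not the patented implementation, and the group size, overlap, and Hann window are illustrative assumptions.

```python
# Sketch only: track the dominant whistle frequency over successive groups of
# samples and estimate its rate of change. The 10,000 samples/second rate comes
# from the example above; group size, overlap and windowing are assumptions.
import numpy as np

SAMPLE_RATE = 10_000   # samples per second, as in the example above
GROUP_SIZE = 128       # samples per analysis group (assumed)
HOP = 64               # step between groups, i.e. 50% overlap (assumed)

def dominant_frequency(group: np.ndarray) -> float:
    """Return the strongest frequency (Hz) in one group of samples."""
    spectrum = np.abs(np.fft.rfft(group * np.hanning(len(group))))
    freqs = np.fft.rfftfreq(len(group), d=1.0 / SAMPLE_RATE)
    return float(freqs[np.argmax(spectrum)])

def frequency_rate_of_change(signal: np.ndarray) -> np.ndarray:
    """Estimate d(frequency)/dt in Hz/sec between consecutive sample groups."""
    starts = range(0, len(signal) - GROUP_SIZE + 1, HOP)
    freqs = np.array([dominant_frequency(signal[s:s + GROUP_SIZE]) for s in starts])
    return np.diff(freqs) / (HOP / SAMPLE_RATE)
```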


The invention may be used in the field of toy darts, which emit whistling noises while flying through the air. The invention may also be used to analyze movement of objects in other mediums, such as water. The device can be made small enough to be worn on the user's arm, shoulder, neck, or torso, or even pinned or clamped to a piece of clothing. The device detects flying darts, and when they hit the wearer inside a certain sphere, circle, or point of impact, the device indicates a hit, for example by flashing an LED, emitting specific noises through a small speaker, emitting vibrations, and sending wireless signals to other devices, such as smart phones and any other devices participating in the game.


The invention may also be used for military applications, where such devices can be mounted on vehicles, portable targets, or personnel, capturing whistles of projectiles during combat training. The collected signals may be shared or sent to a remote server. The server may create a map of precise hit points of incoming shells, bullets, and other projectiles.


Another military use case is using the information collected about projectiles as they fly to create a map of the origin of the shots, allowing the enemy's location to be revealed.


It is another object of the subject matter to disclose a method for estimating hit or miss of an object directed towards a target, comprising collecting an audio signal over time by an audio sensor located on, inside or near the target, said audio signal being defined by a set of frequencies, identifying changes in the set of frequencies over time, predicting a minimal distance between the object and the audio sensor based on the changes in the set of frequencies over time, and emitting an indication according to the minimal distance between the object and the audio sensor.


In some cases, the method further comprises identifying that the object is directed towards the target by comparing a pattern from the audio signal with a list of predefined patterns stored in a computerized memory.


In some cases, each of the predefined patterns provides information as to the frequencies in which audio is emitted due to the movement of a specific object, wherein the object is identified based on a frequency pattern associated with the object in the computerized memory.


In some cases, the method further comprises estimating hit or miss of a plurality of different objects having different frequencies.


In some cases, the method further comprises computing a distance between the object's hit location and the target and determining a hit or miss of the object in the target by comparing the computed distance with a threshold distance.


In some cases, computing the distance between the object's hit location and the target is based on the time at which the object ended its movement, said end of the object's movement being detected when the audio signal ceases to include the object's frequency signature.


In some cases, the method further comprises defining a range of distances from the audio sensor that qualifies as a hit, and determining a hit in case the distance between the hit location and the audio sensor is smaller than the range.


In some cases, the method further comprises defining multiple radii from the audio sensor and determining two radii that define the distance between the object and the audio sensor.


In some cases, the method further comprises defining a distance between the object and the audio sensor that qualifies as a hit.


In some cases, the method further comprises adjusting the distance between the object and the audio sensor that qualifies as a hit based on environmental measurements.


In some cases, the method further comprises determining the distance between the location in which the object ended its movement and the target.


In some cases, the method further comprises detecting that the audio signal ceases to include the object's frequency signature; computing a time duration elapsing between a time stamp in which the object was at the minimal distance from the audio sensor and the time of detecting that the audio signal ceases to include the object's frequency signature; and multiplying the time duration by a known or calculated velocity of the object.


In some cases, the indication is emitted by at least one of light, sound, RF, and vibration.


In some cases, the method further comprises creating a tracking log, said tracking log comprises the object's location and the detected frequencies over time.


In some cases, predicting the minimal distance between the object and audio sensor is performed before the object reaches the minimal distance.





BRIEF DESCRIPTION OF THE DRAWINGS

Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.


In the drawings:



FIG. 1 depicts the system and its components, in relation to a whistling projectile and the system's whistle-detection sphere-like volume, according to exemplary embodiments of the present invention;



FIG. 2 depicts a projectile entering the hit detection sphere, and ending its flight inside that sphere, resulting in a hit indication, according to exemplary embodiments of the present invention;



FIG. 3 depicts a projectile passing inside the hit detection sphere, and ending its flight outside that sphere, resulting in a miss indication, according to exemplary embodiments of the present invention;



FIG. 4 depicts a projectile passing outside the hit detection sphere, resulting in a miss indication, according to exemplary embodiments of the present invention;



FIG. 5 depicts a person wearing the device in two different configurations, according to exemplary embodiments of the present invention;



FIG. 6 depicts a dart hit trajectory, according to exemplary embodiments of the present invention;



FIG. 7 depicts a dart miss trajectory, according to exemplary embodiments of the present invention;



FIG. 8 depicts a 2D triangulation of hit location from several microphones on the same system, according to exemplary embodiments of the present invention;



FIG. 9 depicts a 2D triangulation of hit location from information shared by several devices, sharing information wirelessly, according to exemplary embodiments of the present invention;



FIG. 10 depicts a special case of a projectile hit and miss, according to exemplary embodiments of the present invention;



FIG. 11 shows a graph that shows a shift from the whistle frequency as a function of the distance from the target, according to exemplary embodiments of the present invention;



FIG. 12 shows a graph having the system's microphone in the center of the graph, showing the percentage of frequency change versus the distance between the object and the system, according to exemplary embodiments of the present invention;



FIG. 13 schematically shows an object hitting the target, according to exemplary embodiments of the present invention;



FIG. 14 shows a system for estimating hit or miss of an object based on audio signals, according to exemplary embodiments of the present invention;



FIG. 15 shows a method for estimating hit or miss of an object based on audio signals, according to exemplary embodiments of the present invention;



FIG. 16 shows a method for estimating a change in frequency of the audio emitted due to the object's movement, according to exemplary embodiments of the present invention.





DETAILED DESCRIPTION

The term “audio signal”, also defined as “frequency spectrum” and “frequency of the audio signal emitted due to and during the object's movement”, refers to a physical representation of the audio signal generated inside a medium, such as air or water, due to and during an object's movement through that medium, in the sonic and/or ultrasonic and/or infrasonic range.



FIG. 1 depicts the system and its components, in relation to a moving object and the target, according to exemplary embodiments of the present invention. The system is located inside a target, or on a surface of a target, or very close to the target, for example up to 20 centimeters from the target.


The system comprises one or more audio sensors 1, such as microphones, configured to collect audio signals. The audio signals may be in the sonic and/or ultrasonic and/or infrasonic range. The system may further comprise filters and amplifiers 2 configured to enhance the audio signal collected by the audio sensor, for example to filter the audio signal such that only the frequency range likely to be emitted by the object during its movement is processed. The system also comprises a processing module 3 and an indicator 7. The indicator is configured to indicate whether the object hit or missed the target. The indication may be provided in one or more ways, including emitting light, audio, wireless transmissions, and vibrations.


In some exemplary cases, the audio sensor provides a digital output, such as a microelectromechanical systems (MEMS) microphone. In such a case, the audio sensor may be directly coupled to the digital input of the processing module 3. In case the audio sensor comprises analog microphones, the microphones' output is filtered and amplified, requiring an analog filter and analog amplifier 2. The filtered analog signal may be converted to digital format by a standard analog-to-digital converter (ADC).


Capturing of the digital audio signals may be performed by a DSP or processor, via a digital interface. The DSP or processor performs digital signal processing of the audio signal, in groups of samples, and analyzes the signal's frequency spectrum and other characteristics.


The system stores the characteristics, for example the frequency spectrum, of the audio signal. This way, the system is configured to perform a frequency analysis of the collected signals to determine whether the object is moving towards the system, has stopped moving, or has moved past the system, and to calculate the object's distance from the system.


By tracking the object's frequency spectrum over time, group by group of samples, the system determines whether the object has ended its movement within or outside a certain radius from the audio sensor. Changes in the audio signal are used to predict the minimal distance between the object and the microphone during the object's movement, as detailed below. After calculating the minimal distance between the object and the target, the processing module can extract the time stamp at which the object is closest to the target. The processing module tracks the audio signal until the audio generated due to the object's movement ends. Then, the processing module extracts the time elapsed between the time stamp at which the object is closest to the target and the time stamp at which the object's movement ended, and calculates the distance between the target and the location where the object ended its movement.
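As an illustration of this end-of-movement calculation, the sketch below assumes the object's velocity is known (for example, the initial firing velocity) and that the two time stamps have already been extracted from the audio analysis; the right-triangle combination of the two distances is an assumption of the sketch, not language from the description.

```python
# Sketch of the hit/miss decision described above. The velocity and time stamps
# are assumed inputs; combining the minimal distance and the overshoot with a
# right-triangle approximation is an assumption of this sketch.

def classify_hit(t_closest: float,      # time stamp of minimal distance (sec)
                 t_whistle_end: float,  # time the whistle disappears (sec)
                 min_distance: float,   # predicted minimal distance (meters)
                 velocity: float,       # known or calculated object velocity (m/s)
                 hit_radius: float) -> str:
    """Estimate where the object ended its movement and compare to the hit radius."""
    # Distance traveled past the closest point before the whistle stopped.
    overshoot = velocity * (t_whistle_end - t_closest)
    # Approximate straight-line distance from the sensor to the end point.
    end_distance = (min_distance ** 2 + overshoot ** 2) ** 0.5
    return "hit" if end_distance <= hit_radius else "miss"

# Example: closest approach of 0.3 m at t=1.20 s, whistle ends at t=1.22 s, 17 m/s dart.
print(classify_hit(1.20, 1.22, 0.30, 17.0, hit_radius=0.5))  # -> "hit" (about 0.45 m)
```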


It is possible to increase the signal-to-noise ratio (SNR) of the object's movement noise by using more than one microphone, and/or by applying noise-cancellation techniques to the captured audio signals. This improves the chance of detection in noisy environments.



FIG. 2 depicts a projectile entering the hit detection sphere, and ending its flight inside that sphere, resulting in a hit indication, according to exemplary embodiments of the present invention.



FIG. 3 depicts a projectile passing inside the hit detection sphere, and ending its flight outside that sphere, resulting in a miss indication, according to exemplary embodiments of the present invention.



FIG. 4 depicts a projectile passing outside the hit detection sphere, resulting in a miss indication, according to exemplary embodiments of the present invention.



FIG. 5 depicts a person wearing the device in two different ways, according to exemplary embodiments of the present invention. In one optional embodiment the system is located on the user's wrist 502, and in another embodiment the system is located on the user's chest 501. The system may be embedded in a wearable device, such as a watch, or inside a garment, such as a shirt, hat, or jacket. The system may be pinned or clamped to a piece of clothing. Placing multiple systems on the same target further enhances the collected audio signals and improves the accuracy of processing the projectile's movement. Such multiple systems may communicate with each other over a wired or wireless communication channel.



FIG. 6 depicts a dart hit trajectory, according to exemplary embodiments of the present invention. The figure shows the dart in a first position 601, prior to hitting the target area defined by a sphere 603. The figure shows the dart in a second position 602, immediately before hitting the person. The figure also shows the system 604 worn by the person. The volume of the sphere 603 defines the target. The sphere's volume may be defined by the user and/or the manufacturer of the system, for example by setting how far from the system is considered a hit. One user may define 5 centimeters from the system as a successful hit, and another user may define 25 centimeters.



FIG. 7 depicts a dart miss trajectory, according to exemplary embodiments of the present invention. The figure shows a dart in a first position 1, approaching the person wearing the system. The figure also shows the same dart in a second position 2, passing in front of the person. When the dart is in the second position, the minimal distance between the dart and the system is lower than the system's threshold of a “hit”. However, the dart does not end its trajectory inside the sphere and continues its movement towards the third position 3, in which the dart is outside the “hit” distance. The minimal distance of the dart from the system is within the “hit” distance, but the “hit” indication is determined according to the distance between the object and the system at the end of the object's movement, which in this case is outside the “hit” range. The system may provide a “hit” indication only, a “miss” indication only, or both a “hit” indication and a “miss” indication.



FIG. 8 depicts a 2D representation of a triangulation of the hit location from several microphones on the same system, according to exemplary embodiments of the present invention. The two-dimensional triangulation uses two or more audio sensors 801 and 802. The analysis of the audio signal captured by the two audio sensors 801 and 802 defines two hit radii 803 and 804, each surrounding the respective audio sensor. Combining the two circles resulting from hit radii 803 and 804 produces two points of intersection, 805 and 806, which indicate the possible hit locations of the object. These points, if inside the defined hit-detection sphere, will result in a hit indication. Otherwise, the result may be a miss indication. Furthermore, this triangulation can determine the exact hit location on the target, in addition to the hit detection itself.
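A geometric sketch of this two-microphone intersection is given below; the coordinate convention and the numeric radii are assumptions for illustration.

```python
# Sketch only: given each microphone's position and its computed hit radius,
# find the two candidate hit points described above (points 805 and 806).
import math

def circle_intersections(p0, r0, p1, r1):
    """Return the 0 or 2 intersection points of two circles in the plane."""
    (x0, y0), (x1, y1) = p0, p1
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []                                  # no usable intersection
    a = (r0**2 - r1**2 + d**2) / (2 * d)           # distance from p0 to the chord
    h = math.sqrt(max(r0**2 - a**2, 0.0))          # half the chord length
    xm, ym = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    dx, dy = h * (y1 - y0) / d, h * (x1 - x0) / d
    return [(xm + dx, ym - dy), (xm - dx, ym + dy)]

# Microphones 10 cm apart, hit radii of 25 cm and 20 cm (illustrative values).
candidates = circle_intersections((0.0, 0.0), 0.25, (0.10, 0.0), 0.20)
# Each candidate point is then tested against the hit-detection sphere.
```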



FIG. 9 depicts a 2D representation of a triangulation of hit location from information shared by several devices, sharing information wirelessly, according to exemplary embodiments of the present invention. In FIG. 9, the audio sensors 902 and 904 define two hit radii 901 and 905.


Combining the two circles resulting from hit radii 901 and 905 produces two points of intersection, 903 and 906, which indicate the possible hit locations of the object. These points, if inside the defined hit-detection sphere, will result in a hit indication. Otherwise, the result may be a miss indication. Furthermore, this triangulation can determine the exact hit location on the target, in addition to the hit detection itself.


When the system comprises multiple audio sensors, the method of calculating the distance from the target may be as follows:


In the first step, the system processes the audio signals from all of the multiple audio sensors simultaneously. In the second step, the object's audio signal is detected by each microphone, and a hit distance is calculated for each microphone separately. Then, each hit distance calculated for each microphone is used to simulate a sphere of the hit radius around that microphone's location relative to the other microphones. The microphone architecture, including the location of each microphone relative to the other microphones, is stored in the memory used by the system. The system then calculates intersection points of the multiple simulated spheres surrounding the microphones. Using the intersection points, the system identifies an exact hit location of the object relative to the system, for example by computing the distance from each microphone and using the known distances and angles between the microphones. In many cases, more precise results are obtained as the number of microphones increases.


With two microphones, the possible hit locations form a circle, which represents the intersection of two spheres. With three microphones, the hit location is narrowed to one of two possible points. With four or more microphones, the hit location is a single point. Similarly, this can be accomplished by sharing hit-radii information from several separate systems, each having its own set of one or more microphones, with the information shared either by wire or wirelessly.
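One standard way to realize the four-or-more-microphone case is linear least-squares multilateration, sketched below; the microphone coordinates, distances, and the least-squares formulation are assumptions for illustration rather than the patented computation.

```python
# Sketch only: resolve a single 3D hit location from per-microphone hit
# distances, assuming the relative microphone positions are known.
import numpy as np

def multilaterate(mic_positions: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """mic_positions: (N, 3) array, distances: (N,) array, with N >= 4."""
    p0, d0 = mic_positions[0], distances[0]
    # Subtracting the first sphere equation from the others linearizes the system.
    A = 2.0 * (mic_positions[1:] - p0)
    b = (d0**2 - distances[1:]**2
         + np.sum(mic_positions[1:]**2, axis=1) - np.sum(p0**2))
    solution, *_ = np.linalg.lstsq(A, b, rcond=None)
    return solution

mics = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]], dtype=float)
true_hit = np.array([0.05, 0.20, 0.10])
dists = np.linalg.norm(mics - true_hit, axis=1)
print(multilaterate(mics, dists))   # approximately [0.05, 0.20, 0.10]
```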



FIG. 10 shows a graph representing the frequency change in Hz versus time in milliseconds, as the object moves relative to the microphone, according to exemplary embodiments of the present invention. The X axis shows the elapsed time and the Y axis shows the change in frequency of the audio signal. The graph shows the system's microphone in the center of the graph, where the value on the Y axis is 0.0.


The graph shows two plots of two different objects moving near the system and their audio signals collected by the system. One plot is of a hit (orange) and one is of a miss (blue). Both plots are associated with an object flying at 17 m/sec. Both objects emit a sinusoidal audio signal at about 3000 Hz during their flight. The hit occurs at a distance of 50 cm from the microphone, and the miss occurs at a distance greater than 150 cm from the microphone.



FIG. 11 shows a graph that shows a shift from the whistle frequency as a function of the distance from the target, according to exemplary embodiments of the present invention.


The shift from the whistle frequency has a positive offset when the dart is located in one direction, defined as a positive distance, and has a negative offset when the dart is located in the opposite direction, defined as a negative distance.



FIG. 12 shows a graph having the system's microphone in the center of the graph, showing the percentage of frequency change versus the distance between the object and the microphone, according to exemplary embodiments of the present invention.


The Y axis shows the minimal distance of the object from the microphone. The X axis shows the advancement of the object from left to right, relative to the microphone.


The graph shows the association between the rate of frequency change, caused by the Doppler effect due to the velocity of the object relative to the microphone, and the location of the object at every point in time. The graph also shows that the closer the minimum distance of the object to the microphone (along the Y axis), the higher the rate of frequency change. Since the total amount of frequency change is fixed, the main portion of the frequency change occurs over a shorter duration the closer the object's path is to the microphone.


In some exemplary cases, when the change in the frequency of the audio signal is higher than a predefined threshold, the minimal distance between the object and the target is smaller than a predefined value. Hence, when analyzing the audio signals over time, for example over 0.05 seconds, the processing module may predict the minimal distance between the target and the object based on the change in the frequency of the audio signal over time.
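The relation between the minimal distance and the frequency behavior can be illustrated with a simple Doppler model of a constant-velocity, straight-line pass, sketched below; the model and its numeric values (including the 343 m/s speed of sound) are assumptions for illustration and are not taken from the description.

```python
# Sketch only: observed whistle frequency over time for an object flying past
# the microphone in a straight line at constant speed. A closer pass (smaller
# miss distance) produces a faster frequency transition around closest approach.
import numpy as np

C = 343.0    # assumed speed of sound in air (m/s)
F0 = 3000.0  # whistle frequency at the source (Hz), as in the examples
V = 17.0     # object speed (m/s), as in the examples

def observed_frequency(t: np.ndarray, miss_distance: float) -> np.ndarray:
    """Observed frequency when the object passes closest to the mic at t = 0."""
    x = V * t                               # position along the flight path
    r = np.sqrt(miss_distance**2 + x**2)    # range from the object to the microphone
    dr_dt = V * x / r                       # radial velocity (negative while approaching)
    return F0 * C / (C + dr_dt)             # Doppler shift for a moving source

t = np.linspace(-0.5, 0.5, 1001)
close_pass = observed_frequency(t, 0.15)    # 15 cm minimal distance
far_pass = observed_frequency(t, 0.40)      # 40 cm minimal distance
# The maximum |df/dt| of close_pass exceeds that of far_pass, which is the cue
# the processing module uses to predict the minimal distance.
```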



FIG. 13 schematically shows an object hitting the target, according to exemplary embodiments of the present invention. The figure shows a physical body 1301 functioning as the target. A microphone 1302, or another audio sensor, is located inside, on, or otherwise physically or electrically connected to the physical body 1301. Line 1303 represents the minimal distance between the object's closest point 1307 and the microphone 1302. The object moves from origin 1304, along movement path 1305, in the general direction of the physical body 1301. Line 1306 represents the distance between the hitting point 1308 of the object and the microphone 1302. The frequency change is greatest, and occurs most abruptly, when the object moves directly towards the microphone 1302; the frequency change is diminished, and its rate of change decreases, as the minimal distance to the microphone 1302 increases.



FIG. 14 shows a system for estimating hit or miss of an object based on audio signals, according to exemplary embodiments of the present invention. The system comprises an audio sensor 101 configured to collect audio signals. In some cases, the system comprises multiple audio sensors. The system also comprises a processing module 102 configured to process the collected audio signals and to determine whether or not the object hit the target. The processing module 102 may also determine a minimal distance between the object and the target. The system also comprises an indicator 104 for emitting an indication according to the processor's determination of whether or not the object hit or missed the target. The system also comprises a memory module 103 configured to store a set of rules used by the processor to determine whether or not the object hit the target. The set of rules may be stored in executable instructions accessed by the processing module when receiving a request to determine if the object missed or hit the target.



FIG. 15 shows a method for estimating hit or miss of an object based on audio signals, according to exemplary embodiments of the present invention.


Step 110 discloses collecting an audio signal by a sensor. The audio signals are collected by sampling a frequency range likely to include the object's characteristic audio signal. For example, objects with different physical forms generate audio signals with different frequency spectrums as they move through a medium such as air.


Step 120 discloses sending the received signal to the processor. Sending may be implemented by enabling access for the processor to a memory address in which the received signal is stored. When the audio sensor is not directly connected to the processor, sending the received signal may be implemented via a wired or wireless connection. In some cases, the collected signals are stored in a memory address known to the processor and accessed by the processor.


Step 130 discloses receiving the signal by the processor. The signals may be received by a memory module inside the processor, or accessed by the processor when stored in a memory module in the system.


Step 140 discloses the processor loading predefined patterns of the frequency signature from the memory unit of the system or from a remote device, such as an online server. The predefined patterns may be associated with a specific object. The predefined patterns provide information as to the frequencies in which the audio is emitted due to the movement of a specific object. The memory may comprise multiple frequency patterns, each pattern of the multiple frequency patterns being associated with a specific object. The processor may convert the audio signals to the frequency domain using known methods, such as the Discrete Fourier transform, its FFT implementation, wavelet functions, and the like.


Step 150 discloses the processor checking whether the received signal includes at least one predefined pattern associated with the specific object. The processor may compare the pattern or patterns stored in the memory module of the system, or in a memory module of a remote device, with the patterns extracted from the audio signals. The comparison may be updated after every set of signals sampled by the audio sensor. In some exemplary cases, the system is configured to estimate hit or miss of a plurality of different objects having different frequency signatures. In such a case, the multiple frequency signatures are stored in the memory module of the system, and the system compares the collected audio signals with the multiple frequency signatures in order to identify the specific object type among the multiple objects.
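A sketch of this pattern check is shown below; the signature format (a stored reference magnitude spectrum per object type) and the cosine-similarity score are assumptions for illustration.

```python
# Sketch only: compare the spectrum of the latest sample group with stored
# frequency signatures and return the best-matching object type, if any.
import numpy as np

def match_object_type(group: np.ndarray,
                      signatures: dict,
                      threshold: float = 0.8):
    """Return the best-matching object type name, or None if nothing matches."""
    spectrum = np.abs(np.fft.rfft(group * np.hanning(len(group))))
    spectrum = spectrum / (np.linalg.norm(spectrum) or 1.0)
    best_type, best_score = None, threshold
    for obj_type, reference in signatures.items():
        ref = reference / (np.linalg.norm(reference) or 1.0)
        score = float(np.dot(spectrum, ref))   # cosine similarity of the spectra
        if score > best_score:
            best_type, best_score = obj_type, score
    return best_type
```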


Step 155 discloses tracking the predefined pattern over time. That is, the frequency spectrum of the audio signals is collected over time, to identify and compute changes in the frequency, as disclosed in step 160. Step 165 discloses predicting a minimal distance between the object and the target based on the rate of change in the frequency of the audio emitted due to the object's movement. In one exemplary case, an object is directed towards a target with an initial velocity of 17 meters per second and generates an audio signal at about 3000 Hz due to its movement. When the maximum rate of change is 8300 Hz/sec, the minimal distance between the object and the target would be 15 centimeters. When the maximum rate of change is 3700 Hz/sec, the minimal distance between the object and the target would be 40 centimeters. The next step discloses emitting an indication according to the minimal distance between the object and the target. The indication may be audio, light, vibration, smell, and the like.
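A minimal sketch of this mapping, using the two example values above as calibration points and linear interpolation between them (an assumption of the sketch), is shown below.

```python
# Sketch only: map the observed maximum rate of frequency change to a predicted
# minimal distance using the example values above as calibration points.
# Linear interpolation between stored points is an assumption of this sketch.
import numpy as np

# (maximum rate of frequency change in Hz/sec, minimal distance in meters)
CALIBRATION = np.array([[3700.0, 0.40],
                        [8300.0, 0.15]])

def predict_minimal_distance(max_rate_hz_per_sec: float) -> float:
    """Interpolate the predicted minimal distance from the observed peak rate."""
    rates, distances = CALIBRATION[:, 0], CALIBRATION[:, 1]
    return float(np.interp(max_rate_hz_per_sec, rates, distances))

print(predict_minimal_distance(8300.0))   # 0.15 m
print(predict_minimal_distance(6000.0))   # between 0.15 m and 0.40 m
```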


It should be noted that the system may define multiple radii for a specific target, each of the multiple radii being associated with a different rate of change in the frequency of the audio signal emitted due to movement of a specific object or object type. Such radii may be 0.2 m, 0.4 m, 0.6 m, and 0.8 m. This way, the system may compute whether the object is within a range of 0.4 m but outside the range of 0.2 m, and thereby determine a range of distances from the audio sensor. For example, the range of distances may be from 0.2 meters to 0.4 meters.
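The multiple-radii idea can be sketched as a simple range lookup, shown below with the radii listed above; the function name is an assumption.

```python
# Sketch only: place the predicted minimal distance into the range bounded by
# two of the predefined radii (values taken from the paragraph above).
RADII = [0.2, 0.4, 0.6, 0.8]   # meters

def distance_range(minimal_distance: float):
    """Return the (inner, outer) radii bounding the predicted minimal distance."""
    inner = 0.0
    for outer in RADII:
        if minimal_distance <= outer:
            return inner, outer
        inner = outer
    return RADII[-1], float("inf")   # beyond the outermost radius

print(distance_range(0.3))   # (0.2, 0.4): within 0.4 m but outside 0.2 m
```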


In some exemplary cases, the system is configured to estimate the distance from the target for multiple object types having different frequency signatures. This way, the system identifies the object type and then estimates whether that object type hit or missed the target. The system then logs the hit or miss in a memory module. The system may also estimate a range of distances from the target, as explained above. The system may then send a performance report to one or more electrical devices based on the object types and the hit/miss and/or distance from the target. In some cases, the system may generate multiple reports, based on the object types, and send the reports to electrical devices associated with each object type.



FIG. 16 shows a method for estimating a change in frequency of the audio signal emitted due to and during the object's movement based on audio signals, according to exemplary embodiments of the present invention.


Step 210 discloses defining a distance between the object and the audio sensor that qualifies as a hit. Such defining may be performed using information inputted via a user interface accessed by a user of the system, for example by manipulating buttons or a touch screen on the system, or via a mobile phone, a personal computer, a tablet, a laptop, and the like. The user may control an array of targets and assign a different distance to be defined as a hit radius for each target, or for all targets at once. For example, in a training assembly, 5 targets may have a range of 5 centimeters that qualifies as a hit and 3 targets may have a range of 25 centimeters that qualifies as a hit. The distance defined as a hit may be adjusted according to environmental measurements or information, such as the user's height, weight, the terrain in which the system is placed, weather, light, and the like. In some exemplary cases, the distance defined as a hit may be adjusted automatically based on measurements, such as environmental measurements, audio signals, and the like.


Step 220 discloses converting audio signals to the frequency domain. Such conversion may be performed using the Fourier transform. The audio sensor may sample 2,000-50,000 samples per second and send 20-120 samples to the processor. The audio sensor samples the audio signals in response to the processor's command and terminates sampling upon a termination command from the processor.


Step 230 discloses performing a time-domain analysis or a frequency analysis to match the object type with the collected audio signal. The frequency analysis determines the frequency pattern of the audio emitted due to the object's movement, for example which local-maximum frequencies are detected, the amplitude at each frequency, and the like.


Step 240 discloses updating the object's frequency analysis in subsequent groups of samples. For example, in the first run the processor analyzes samples 1-120 and in the second run the processor analyzes samples 121-240. There can be an overlap between groups; for example, in the first run the processor analyzes samples 1-128 and in the second run the processor analyzes samples 65-193.
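A small sketch of generating such overlapping groups is shown below (0-based indices; the group size and overlap follow the 128-sample example above).

```python
# Sketch only: generate (start, end) index pairs for successive analysis groups
# with a configurable overlap, matching the 1-128 / 65-193 example above
# (0-based indexing is used here).
def sample_groups(total_samples: int, group_size: int = 128, overlap: int = 64):
    """Yield (start, end) index pairs for overlapping groups of samples."""
    hop = group_size - overlap
    for start in range(0, total_samples - group_size + 1, hop):
        yield start, start + group_size

print(list(sample_groups(320)))   # [(0, 128), (64, 192), (128, 256), (192, 320)]
```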


Step 245 discloses identifying the rate of frequency change of the audio signal's frequency spectrum, thereby computing the minimal distance of the object from the audio sensor.


Step 250 discloses determining the distance between the location in which the object ended its movement and the target. Such determination is based on the time at which the object ended its movement. This process comprises computing the time at which the object ended its movement; such time is detected when the audio signal ceases to include the object's frequency signature. The system then has the time duration between the time stamp at which the object was at the minimal distance from the target and the time when the object stopped moving. Determining the distance may be performed by multiplying the time duration by a known velocity of the object, for example the object's initial velocity in case the object was fired at a known initial velocity.


Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.


All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.

Claims
  • 1. A method for estimating hit or miss of an object directed towards a target, comprising: collecting an audio signal over time by an audio sensor located on, inside or near the target, said audio signal is defined by a set of frequencies;identifying changes in the set of frequencies over time;predicting a minimal distance between the object and the audio sensor based on the changes in set of frequencies over time;emitting an indication according to the minimal distance between the object and the audio sensor.
  • 2. The method of claim 1, further comprising identifying that the object is directed towards the target by comparing a pattern from the audio signal with a list of predefined patterns stored in a computerized memory.
  • 3. The method of claim 2, wherein each of the predefined patterns provides information as to frequencies in which audio is emitted due to the movement of a specific object, wherein identifying the object based on a frequency pattern associated with the object in the computerized memory.
  • 4. The method of claim 1, further comprising estimating hit or miss of a plurality of different objects having different frequencies.
  • 5. The method of claim 1, further comprising computing a distance between the object's hit location and the target and determining a hit or miss of the object in the target by comparing the computed distance and a threshold distance.
  • 6. The method of claim 5, wherein computing the distance between the object's hit location and the target is based on the timing in which the object ended its movement, said end of the object's movement is detected when the audio signal ceases to include the object's frequency signature.
  • 7. The method of claim 1, further comprises defining a range of distances from the audio sensor that qualifies as a hit, and determining a hit in case the distance between the hit location and the audio sensor is smaller than the range.
  • 8. The method of claim 7, further comprises defining multiple radii from the audio sensor and determining two radii that define the distance between the object and the audio sensor.
  • 9. The method of claim 1, further comprises defining a distance between the object and the audio sensor that qualifies as a hit.
  • 10. The method of claim 9, further comprises adjusting the distance between the object and the audio sensor that qualifies as a hit based on environmental measurements.
  • 11. The method of claim 1, further comprises determining the distance between the location in which the object ended its movement and the target.
  • 12. The method of claim 11, further comprises detecting that the audio signal ceases to include the object's frequency signature; computing a time duration elapsing between a time stamp in which the object was at the minimal distance from the audio sensor and the time of detecting that the audio signal ceases to include the object's frequency signature;multiplying the time duration with a known or calculated velocity of the object.
  • 13. The method of claim 1, wherein the indication is emitted by at least one of light, sound, RF, and vibration.
  • 14. The method of claim 1, further comprises creating a tracking log, said tracking log comprises the object's location and the detected frequencies over time.
  • 15. The method of claim 1, wherein predicting the minimal distance between the object and audio sensor is performed before the object reaches the minimal distance.
Provisional Applications (1)
Number Date Country
62841838 May 2019 US