The present invention relates to estimating whether an object hit or missed a target.
Objects, such as bullets, projectiles, balls, plastic objects, and others, are often shot at a target. Such a target may be part of a firing training session or part of a game in which a player uses a toy gun and fires, shoots, or throws physical objects at a target. The target may be static or movable, for example mounted on a player's upper arm.
It is often desirable to estimate whether or not the object hit the target, in order to decide how to reward a player in a game or to evaluate a training session when firing real projectiles at a target. Current methods to estimate a hit or a miss include placing a disposable layer, such as cardboard or paper, on the target and marking the hits on the disposable layer. This method cannot be used when playing with toy guns or when using objects that do not leave a mark on the target.
The subject matter discloses a computerized system and method configured to estimate whether an object hits or misses a target based on audio signals. The object may be a projectile shot from a weapon. The weapon may be a firearm, a training weapon configured to fire non-lethal bullets, a toy, and the like.
The system comprises one or more audio sensors, such as microphones. The audio sensor is electrically coupled to a processing module, which comprises a digital processor and is configured to process the audio signals as elaborated below, and to an indicator configured to indicate whether the object hit or missed the target. The indicator may be configured to emit the indication in one or more manners, including emitting light, audio, wireless transmissions, and vibrations. The indicator emits an indication according to a command from the processing module, the command depending on whether the object hit or missed the target. The system may also comprise one or more audio-signal amplifiers.
The system detects objects moving towards the target by sampling sonic and/or ultrasonic and/or infrasonic signals at the processing module's input. The system is configured to be placed in or on the target. In case the system is located inside the target, the audio sensor may be external to the target or on its surface, to allow the audio signals to reach it without attenuation. The system tracks the sound emitted during the object's movement and detects whether the object hit the target. The sound may be emitted due to the flow of air or another medium through and around the object while the object moves through that medium. The system may estimate a hit or a miss within a sphere-like volume surrounding the system. The radius of this volume around the system may be adjusted. Once a hit is detected, the system provides an indication using the indicator.
The audio sensor may be configured to collect audio signals in a frequency range that matches a predefined frequency signature of the audio emitted due to the object's movement. The frequency of the audio emitted due to the object's movement may be stored in the memory unit of the system. When the system is configured to determine hits or misses of multiple objects having multiple frequency signatures of the audio emitted due to the objects' movements, a specific frequency signature associated with a specific object or object type may be inputted into the system by a user of the system, for example via an input unit of the system or by a control interface receiving commands from another device.
The present invention also discloses a method for determining hits or misses of an object shot at a target. The method comprises collecting the audio signals by the audio sensor. The audio signals are sampled by the sensor, for example at a sampling rate of 10,000 samples per second. The audio signals are sent to the processing module. The processing module obtains multiple samples of the audio signals emitted due to and during the object's movement. The processing module may calculate a rate of change in the frequency signature of the audio signals emitted due to the object's movement.
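As a non-limiting illustration of this calculation, the following Python sketch tracks the dominant frequency of consecutive sample frames and computes its rate of change; the frame length, the windowing, and the peak-picking method are assumptions made for the example and are not mandated by the disclosure.

```python
import numpy as np

FS = 10_000   # example sampling rate from the description above (samples/sec)
FRAME = 100   # 10 ms frames; an illustrative choice

def dominant_frequency(frame, fs=FS):
    """Return the strongest frequency (Hz) in one frame of samples."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

def frequency_rate_of_change(samples, fs=FS, frame=FRAME):
    """Estimate the frame-to-frame rate of change (Hz/sec) of the tracked frequency."""
    n_frames = len(samples) // frame
    peaks = [dominant_frequency(samples[i * frame:(i + 1) * frame], fs)
             for i in range(n_frames)]
    dt = frame / fs                 # time between consecutive frame estimates
    return np.diff(peaks) / dt      # Hz per second
```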
The invention may be used in the field of toy darts, which emit whistling noises while flying through the air. The invention may also be used to analyze movement of objects in other mediums, such as water. The device can be made small enough to be worn on the user's arm, shoulder, neck, or torso, or even pinned or clamped to a piece of clothing. The device detects flying darts, and when a dart hits the wearer inside a certain sphere, circle, or point of impact, the device indicates a hit, for example by flashing an LED, emitting specific noises through a small speaker, emitting vibrations, or sending wireless signals to other devices, such as smart phones and any other devices participating in the game.
The invention may also be used in military applications, where such devices can be mounted on vehicles, portable targets, or personnel, capturing the whistles of projectiles during combat training. The collected signals may be shared or sent to a remote server. The server may create a map of precise hit points of incoming shells, bullets, and other projectiles.
Another military use case is to use the information about projectiles as they fly to create a map of the origin of the shots, thereby revealing the enemy's location.
It is another object of the subject matter to disclose a method for estimating a hit or a miss of an object directed towards a target, comprising collecting an audio signal over time by an audio sensor located on, inside, or near the target, said audio signal being defined by a set of frequencies, identifying changes in the set of frequencies over time, predicting a minimal distance between the object and the audio sensor based on the changes in the set of frequencies over time, and emitting an indication according to the minimal distance between the object and the audio sensor.
In some cases, the method further comprises identifying that the object is directed towards the target by comparing a pattern from the audio signal with a list of predefined patterns stored in a computerized memory.
In some cases, each of the predefined patterns provides information as to the frequencies in which audio is emitted due to the movement of a specific object, wherein the object is identified based on a frequency pattern associated with the object in the computerized memory.
In some cases, the method further comprises estimating a hit or a miss of a plurality of different objects having different frequencies.
In some cases, the method further comprising computing a distance between the object's hit location and the target and determining a hit or miss of the object in the target by comparing the computed distance and a threshold distance.
In some cases, computing the distance between the object's hit location and the target is based on the timing in which the object ended its movement, said end of the object's movement being detected when the audio signal ceases to include the object's frequency signature.
In some cases, the method further comprises defining a range of distances from the audio sensor that qualifies as a hit, and determining a hit in case the distance between the hit location and the audio sensor is smaller than the range.
In some cases, the method further comprises defining multiple radii from the audio sensor and determining two radii that define the distance between the object and the audio sensor.
In some cases, the method further comprises defining a distance between the object and the audio sensor that qualifies as a hit.
In some cases, the method further comprises adjusting the distance between the object and the audio sensor that qualifies as a hit based on environmental measurements.
In some cases, the method further comprises determining the distance between the location in which the object ended its movement and the target.
In some cases, the method further comprises detecting that the audio signal ceases to include the object's frequency signature;
computing a time duration elapsing between a time stamp in which the object was at the minimal distance from the audio sensor and the time of detecting that the audio signal ceases to include the object's frequency signature;
multiplying the time duration with a known or calculated velocity of the object.
In some cases, the indication is emitted by at least one of light, sound, RF, and vibration.
In some cases, the method further comprises creating a tracking log, said tracking log comprises the object's location and the detected frequencies over time.
In some cases, predicting the minimal distance between the object and audio sensor is performed before the object reaches the minimal distance.
Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
The term “audio signal”, also defined as “frequency spectrum” and “frequency of the audio signal emitted due to and during the object's movement”, refers to a physical representation of the audio signal generated inside a medium, such as air or water, due to and during an object's movement through that medium, in the sonic and/or ultrasonic and/or infrasonic range.
The system comprises one or more audio sensors 1, such as microphones, configured to collect audio signals. The audio signals may be in the sonic and/or ultrasonic and/or infrasonic range. The system may further comprise filters and amplifiers 2 configured to enhance the audio signal collected by the audio sensor, for example to filter the audio signal in a manner that only the frequency range likely to be emitted by the object during its movement is processed. The system also comprises a processing module 3 and an indicator 7. The indicator is configured to indicate whether the object hit or missed the target. The indication may be provided in one or more ways, including emitting light, audio, wireless transmissions, and vibrations.
In some exemplary cases, the audio sensor provides a digital output, such as a microelectromechanical systems (MEMS) microphone. In such a case, the audio sensor may be directly coupled to the digital input of the processing module 3. In case the audio sensor comprises analog microphones, the microphones' output is filtered and amplified, requiring an analog filter and analog amplifier 2. The filtered analog signal may be converted to digital format by a standard analog-to-digital converter (ADC).
Capturing of the digital audio signals may be performed by a DSP or processor, via a digital interface. The DSP or processor performs digital signal processing of the audio signal, in groups of samples, and analyzes the signal's frequency spectrum and other characteristics.
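As one possible illustration of this per-group analysis, the sketch below extracts the local-maximum frequencies and their amplitudes from a single group of samples; the group size, windowing, and number of reported peaks are assumptions of the example rather than features of the disclosure.

```python
import numpy as np

def spectral_peaks(group, fs, top_n=3):
    """Return up to top_n (frequency, amplitude) pairs for one group of samples."""
    windowed = group * np.hanning(len(group))
    mag = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(group), d=1.0 / fs)
    # Local maxima: bins larger than both of their neighbours.
    local_max = np.where((mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]))[0] + 1
    strongest = local_max[np.argsort(mag[local_max])[::-1][:top_n]]
    return [(float(freqs[i]), float(mag[i])) for i in strongest]
```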
The system stores the characteristics, for example the frequency spectrum, of the audio signal. This way, the system is configured to perform a frequency analysis of the collected signals to determine whether the object is moving towards the system, or has stopped moving or has moved past the system, and to calculate the object's distance from the system.
By tracking the object's frequency spectrum over time, group by group of samples, the system determines whether the object has ended its movement within or without a certain radius from the audio sensor. Changes in the audio signal are used to predict the minimal distance of the object during its movement from the microphone, as detailed below. After calculating the minimal distance between the object and the target, the processing module can extract the time stamp in which the object is closest to the target. The processing module tracks the audio signal until the audio generated due to the object's movement ends. Then, the processing module extracts the time elapsed between the time stamp in which the object is closest to the target and the time stamp in which the object's movement ended and calculates the distance between the target and the location where the object ended its movement.
It is possible to increase the signal-to-noise ratio (SNR) of the object's movement noise by using more than one microphone and/or by applying noise-cancelation techniques to the captured audio signals. This improves the chance of detection in noisy environments.
Combining two such circles, resulting from hit radii 901 and 905, produces two points of intersection, 903 and 906, which indicate the possible hit locations of the object. These points, if inside the defined hit-detection sphere, will result in a hit indication. Otherwise, a miss indication may result. Furthermore, this triangulation can determine the exact hit location on the target, in addition to the hit detection itself.
When the system comprises multiple audio sensors, the method of calculating the distance from the target may be as follows:
In the first step, the system processes the audio signals from all of the multiple audio sensors simultaneously. In the second step, the object's audio signal is detected by each microphone, and a hit distance is calculated for each microphone separately. Then, each hit distance calculated for each microphone is used to simulate a sphere having the calculated hit radius around that microphone's location relative to the other microphones. The arrangement of the microphones is stored in the memory used by the system, including the location of each microphone relative to the other microphones. The system then calculates intersection points of the multiple simulated spheres surrounding the microphones. Using the intersection points, the system identifies an exact hit location of the object relative to the system, for example by computing the distance from each microphone and knowing the distances and angles between the microphones. In many cases, the results become more precise as the number of microphones increases.
With two microphones, the possible hit locations form a circle, which represents the intersection of two spheres. With three microphones, the hit location is one of two possible points. With four or more microphones, the hit location is a single point. Similarly, this can be accomplished by sharing hit-radius information among several separate systems, each having its own set of one or more microphones, with the information shared either by wire or wirelessly.
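For the two-microphone case described above, the candidate hit locations can be computed as the intersection of two circles, one around each microphone with its calculated hit radius. The sketch below is a minimal planar version; the microphone spacing, radii, and hit-sphere size are assumed values for the example.

```python
import math

def circle_intersections(p0, r0, p1, r1):
    """Return the candidate intersection points of two circles (center, radius)."""
    (x0, y0), (x1, y1) = p0, p1
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []                                   # no usable intersection
    a = (r0**2 - r1**2 + d**2) / (2 * d)            # distance from p0 to the chord
    h = math.sqrt(max(r0**2 - a**2, 0.0))           # half-length of the chord
    xm, ym = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    return [(xm + h * (y1 - y0) / d, ym - h * (x1 - x0) / d),
            (xm - h * (y1 - y0) / d, ym + h * (x1 - x0) / d)]

# Two microphones 0.3 m apart, each with its own calculated hit radius (assumed values).
candidates = circle_intersections((0.0, 0.0), 0.25, (0.3, 0.0), 0.20)
# A candidate inside an assumed 0.4 m hit-detection radius is reported as a hit.
hit = any(math.hypot(x, y) <= 0.4 for x, y in candidates)
```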
The graph shows two plots of two different objects moving near the system and their audio signals collected by the system. One plot on the graph is of a hit (orange) and one of a miss (blue). Both plots are associated with an object flying at 17 m/sec. Both objects emit a sinusoidal audio signal at about 3000 Hz during their flight. The hit occurs at a distance of 50 cm from the microphone, and the miss occurs at a distance greater than 150 cm from the microphone.
The shift from the whistle frequency has a positive offset when the dart is located in one direction, defined as a positive distance, and has a negative offset when the dart is located in the opposite direction, defined as a negative distance.
The Y axis shows the minimal distance of the object from the microphone. The X axis shows the advancement of the object from left to right, relative to the microphone.
The graph shows the rate of frequency change due to the Doppler effect, which is caused by the velocity of the object relative to the microphone and depends on the location of the object at every point in time. The graph also shows that the closer the object's minimum distance to the microphone (along the Y axis), the higher the rate of frequency change. Since the total amount of frequency change is fixed, the main portion of the frequency change occurs over a smaller amount of time the closer the object's path is to the microphone.
In some exemplary cases, when the rate of change in the frequency of the audio signal is higher than a predefined threshold, the minimal distance between the object and the target is smaller than a predefined value. Hence, when analyzing the audio signals over time, for example over 0.05 seconds, the processing module may predict the minimal distance between the target and the object based on the rate of change in the frequency of the audio signal over time.
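One way to relate the rate of frequency change to the minimal distance is the idealized Doppler model sketched below, which assumes a straight-line pass at constant speed and the nominal speed of sound in air; the closed-form inversion is given only as an illustration, and the numerical thresholds used by an actual system may instead be obtained from empirical calibration.

```python
import numpy as np

C = 343.0   # assumed speed of sound in air (m/s)

def doppler_frequency(t, f0, v, d, c=C):
    """Observed frequency of a source with base frequency f0 moving in a straight
    line at speed v, passing the microphone at minimal distance d (t = 0 at the
    closest approach). Idealized model for illustration only."""
    radial_v = v**2 * t / np.sqrt(d**2 + (v * t)**2)   # positive when receding
    return f0 * c / (c + radial_v)

def estimate_min_distance(f0, v, max_rate, c=C):
    """Invert the model: near closest approach |df/dt| is about f0*v^2/(c*d)."""
    return f0 * v**2 / (c * max_rate)
```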
Step 110 discloses collecting an audio signal by a sensor. The audio signals are collected by sampling a frequency range likely to include the object's characteristic audio signal. For example, objects with different physical forms generate audio signals with different frequency spectrums as they move through a medium such as air.
Step 120 discloses sending the received signal to the processor. Sending may be implemented by enabling access for the processor to a memory address in which the received signal is stored. When the audio sensor is not directly connected to the processor, sending the received signal may be implemented via a wired or wireless connection. In some cases, the collected signals are stored in a memory address known to the processor and accessed by the processor.
Step 130 discloses receiving the signal by the processor. The signals may be received by a memory module inside the processor or accessed by the processor when stored in a memory module in the system.
Step 140 discloses the processor loading predefined patterns of the frequency signature from the memory unit of the system or from a remote device, such as an online server. The predefined patterns may be associated with a specific object. The predefined patterns provide information as to the frequencies in which the audio is emitted due to the movement of a specific object. The memory may comprise multiple frequency patterns, each pattern of the multiple frequency patterns being associated with a specific object. The processor may convert the audio signals to the frequency domain using known methods, such as the Discrete Fourier Transform, its FFT implementation, wavelet functions, and the like.
Step 150 discloses the processor checking if the received signal includes at least one predefined pattern associated with the specific object. The processor may compare the pattern or patterns stored in the memory module of the system, or in a memory module of a remote device, with the patterns extracted from the audio signals. The comparison may be updated after every set of signals sampled by the audio sensor. In some exemplary cases, the system is configured to estimate a hit or a miss of a plurality of different objects having different frequency signatures. In such a case, the multiple frequency signatures are stored in the memory module of the system, and the system compares the collected audio signals with the multiple frequency signatures in order to identify the specific object type among the multiple objects.
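A sketch of one possible form of this comparison, assuming each stored signature is reduced to a characteristic peak frequency and a tolerance; the table contents and matching rule are illustrative assumptions, and the stored patterns of the actual system may be richer.

```python
# Hypothetical signature table: object type -> (characteristic frequency Hz, tolerance Hz)
SIGNATURES = {
    "toy_dart": (3000.0, 250.0),
    "foam_ball": (1200.0, 150.0),
}

def match_object_type(peak_frequencies, signatures=SIGNATURES):
    """Return the object type whose stored signature matches a detected peak,
    or None when no predefined pattern is present in the collected signal."""
    for obj_type, (center, tolerance) in signatures.items():
        if any(abs(f - center) <= tolerance for f in peak_frequencies):
            return obj_type
    return None
```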
Step 155 discloses tracking the predefined pattern over time. That is, the frequency spectrum of the audio signals is collected over time, to identify and compute changes in the frequency, as disclosed in step 160. Step 165 discloses predicting a minimal distance between the object and the target based on the rate of change in the frequency of the audio emitted due to the object's movement. In one exemplary case, an object is directed towards a target with an initial velocity of 17 meters per second and with an audio signal at about 3000 Hz generated due to the object's movement. When the maximum rate of change is 8300 Hz/sec, the minimal distance between the object and the target would be 15 centimeters. When the maximum rate of change is 3700 Hz/sec, the minimal distance between the object and the target would be 40 centimeters. The next step discloses emitting an indication according to the minimal distance between the object and the target. The indication may be audio, light, vibration, smell, and the like.
It should be noted that the system may define multiple radii for a specific target, each radius of the multiple radii being associated with a different rate of change in the frequency of the audio signal emitted due to movement of a specific object or object type. Such radii may be 0.2 m, 0.4 m, 0.6 m, and 0.8 m. This way, the system may compute whether the object is within a range of 0.4 m but outside the range of 0.2 m and determine a range of distances from the audio sensor. For example, the range of distances may be from 0.2 meters to 0.4 meters.
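The multiple radii can be kept as a small ordered table, and the estimated minimal distance then selects the band it falls into; the band-selection logic below is a sketch using the example radii above.

```python
RADII = (0.2, 0.4, 0.6, 0.8)   # meters, the example radii above

def distance_band(min_distance, radii=RADII):
    """Return the (inner, outer) radii bounding the estimated minimal distance,
    or None when the object passed outside the largest radius."""
    inner = 0.0
    for outer in radii:
        if min_distance <= outer:
            return (inner, outer)
        inner = outer
    return None

# distance_band(0.33) -> (0.2, 0.4): within 0.4 m but outside 0.2 m
```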
In some exemplary cases, the system is configured to estimate the distance from the target for multiple object types having different frequency signatures. This way, the system identifies the object type and then estimates whether the object hit or missed the target. The system then logs the hit or miss in a memory module. The system may also estimate a range of distances from the target as explained above. The system may then send a performance report to one or more electrical devices based on the object types and the hit/miss and/or distance from the target. In some cases, the system may generate multiple reports, based on the object types, and send the reports to electrical devices associated with each object type.
Step 210 discloses defining a distance between the object and the audio sensor that qualifies as a hit. Such defining may be performed using information input via a user interface accessed by a user of the system, for example by manipulating buttons or a touch screen on the system, or via a mobile phone, a personal computer, a tablet, a laptop, and the like. The user may control an array of targets and assign a different distance to be defined as a hit radius for each target or for all targets at once. For example, in a training assembly, 5 targets may have a range of 5 centimeters that qualifies as a hit and 3 targets may have a range of 3 centimeters that qualifies as a hit. The distance defined as a hit may be adjusted according to environmental measurements or information, such as the user's height, weight, the terrain in which the system is placed, weather, light, and the like. In some exemplary cases, the distance defined as a hit may be adjusted automatically based on measurements, such as environmental measurements, audio signals, and the like.
Step 220 discloses converting audio signals to the frequency domain. Such conversion may be performed using the Fourier transform. The audio sensor may sample 2,000-50,000 samples per second and send 20-120 samples to the processor. The audio sensor samples the audio signals in response to the processor's command and terminates sampling upon a termination command from the processor.
Step 230 discloses performing a time domain analysis or a frequency analysis to match the object type with the collected audio signal. The frequency analysis determines the frequency pattern of the audio emitted due to the object's movement, for example which local-maxima frequencies are detected, the amplitude at each frequency, and the like.
Step 240 discloses updating the object's frequency analysis in subsequent groups of samples. For example, in the first run, the processor analyzes samples 1-120 and in the second run the processor analyses samples 121-240. There can be an overlap between groups, for example, in the first run, the processor analyzes samples 1-128 and in the second run the processor analyzes samples 65-193.
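The grouping with overlap described above behaves like a sliding window; a generic sketch, with the group size and hop chosen to mirror the 128-sample example:

```python
def sample_groups(samples, group_size=128, hop=64):
    """Yield successive groups of samples; hop < group_size gives overlapping groups."""
    for start in range(0, len(samples) - group_size + 1, hop):
        yield samples[start:start + group_size]

# With group_size=128 and hop=64 the second group starts 64 samples after the
# first and overlaps it by half, as in the overlapping-runs example above.
```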
Step 245 discloses identifying the rate of frequency change of the audio signal's frequency spectrum, thereby computing the minimum distance of the object from the audio sensor.
Step 250 discloses determining the distance between the location in which the object ended its movement and the target. Such determination is based on the timing in which the object ended its movement. This process comprises computing the time at which the object ended its movement. Such time is detected when the audio signal ceases to include the object's frequency signature. Then, the system has the time duration between the time stamp in which the object was at the minimal distance from the target and the time when the object stopped moving. Determining the distance may be performed by multiplying the time duration by a known velocity of the object, for example the object's initial velocity in case the object was fired at a known initial velocity.
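A minimal sketch of this final calculation, assuming the two time stamps have already been extracted and the object's velocity is known (for instance, the initial firing velocity):

```python
def distance_from_target(t_closest, t_signature_lost, velocity):
    """Approximate the distance between the point where the object stopped and the
    target as velocity multiplied by the time elapsed after closest approach."""
    elapsed = t_signature_lost - t_closest   # seconds
    return velocity * elapsed                # meters, assuming roughly constant speed

# Example: a dart at 17 m/s whose whistle disappears 0.02 s after closest approach
# is estimated to have stopped about 0.34 m from the target.
```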
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.
Related U.S. application: No. 62/841,838, filed May 2019 (US).