This application claims priority to foreign European patent application No. EP 23307049.9, filed on Nov. 24, 2023, the disclosure of which is incorporated by reference in its entirety.
The present invention relates to a method and a system for shooting training with a shooting device on a target. In particular, the invention relates to a method and a system used in live exercise shooting training or games.
In live exercise shooting training, it is necessary to be able to provide an assessment, reliably and in real time, of the projectile shots fired, whether blank or simulated. Such requirements demand certain features from the analysis systems conceived. They must have a precision comparable to that which would be obtained in real situations, while being unobtrusive, i.e. requiring the least possible additional equipment mounted on both the shooting device and the target.
These requirements must address a number of practical operational obstacles such as the weight of the additional equipment installed, the performance (precision and latency), and the autonomy of the kit, as well as technological obstacles which are mainly the precision of the data recorded and analyzed, the reliability of the image analysis, the minimum computing power installed in the device, and the speed and consumption of the wireless link.
Currently, there are a number of methods for simulating projectiles in shooting training. The technique most commonly used is the a posteriori observation of the accuracy of the shot: the target serves as a support for checking the shot fired, and the accuracy of the shot is analyzed by way of the impact of the ammunition (blank or via a paintball, for example). Another technique is to film the target via an external device, allowing the user to check the accuracy of the shot. A final approach is to use a laser system coupled to the firing of the shot and analyzed by an external device, for example by means of multiple markers mounted on different parts of the potential targets which transmit to a central system whether or not there is an impact.
The following references are an illustration of various devices of the prior art:
EP 0985899 A1 proposes a compact device for recording video images which may be mounted on a gun and used to record video images before and after the firing of the gun. The recording device comprises a camera comprising a lens and a video image sensor. The video recording device is mounted on the gun such that the viewing area of the camera comprises the target area of the gun. The video image sensor generates an electronic signal representative of a video image impinging on the respective sensor. The output of the image sensor is processed and generally employed to produce successive frame data which are sequentially stored in locations of a semiconductor memory organized as a circular buffer memory while the video recording device is in an active state. When the gun is fired, additional frames are stored in the buffer memory for a short period of time and a portion of the buffer memory is employed to keep a video record of the shooting both before and after the event. Additional frames are successively stored in the unused portion of the buffer memory.
U.S. Pat. No. 8,022,986 by Jekel provides a shooting device orientation measurement device which comprises a processor configured to receive first location information indicative of the locations of a first and a second point on a shooting device, the first and second points being a known distance apart in a direction parallel to a pointing axis of the shooting device, and to receive second location information indicative of the locations of the first and second points on the shooting device. The processor is further configured to receive information indicative of a first terrestrial orientation and to determine a second terrestrial orientation corresponding to the shooting device based on the first and second location information and the information indicative of the first terrestrial orientation. The first location information represents a location relative to a first sensor at a first location and the second location information represents a location relative to a second sensor at a second location, and the first and second sensors are separated by a given distance.
Patent application US 2012/0178053 A1 by D'Souza et al. relates to a method and system for a shooting training system which automatically predicts the ballistics based on automatically gathered meteorological and distance information. The projectile shooting training system also confirms that manual efforts performed by an operator to adjust the sight turrets would or would not result in hitting the target after firing a shot. Both adjustment of the turrets and target settings are used to distinguish between the following states after firing a shot: hit; kill; miss; near miss. A light or other signal is sent from the shooting device to the target to indicate that a shot was fired by the shooting device.
A general drawback of the existing methods is that shooting training requires an assessment of the shot fired that is as close as possible to real ballistics while being free from the associated dangers. As a result, the analysis of a shot may be seen as a marking problem in which it is necessary to be able to label a target through certain opaque obstacles and fuzzy obstacles, or even via a curved trajectory.
A method known for more than 20 years for tackling this problem consists in equipping the potential targets with photosensitive sensors that are able to send information when they are illuminated by a laser. This method has several drawbacks: attenuation of the laser over great distances, the inability to shoot through fuzzy obstacles (e.g. foliage), and the need to equip the target with enough photosensitive sensors, among others. The number of photosensitive sensors necessary adds weight that is not representative of real field equipment, as well as a high cost. Additionally, laser light is emitted in a straight line, which does not precisely represent the trajectory of a projectile.
To be usable, digital marking must be able to simulate a shot by assigning the impact of the projectile a random distribution close to that of a real shot. However, the techniques currently proposed do not allow this problem to be solved in a satisfactory manner.
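As an illustration only (the description does not specify a dispersion model), a digital marking system could assign the impact point a random offset drawn from a distance-scaled Gaussian; the function name and the 2 MOA default below are assumptions:

```python
import random

def sample_impact_offset(distance_m: float, dispersion_moa: float = 2.0) -> tuple[float, float]:
    """Sample a random (x, y) impact offset in metres at the given range.

    Dispersion is modelled as an isotropic 2D Gaussian whose standard
    deviation grows linearly with distance; dispersion_moa is an
    illustrative value that would be calibrated per weapon and ammunition.
    """
    sigma = distance_m * dispersion_moa * 0.000291  # 1 MOA subtends ~0.291 mrad
    return random.gauss(0.0, sigma), random.gauss(0.0, sigma)
```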
These problems are solved or mitigated by the claimed method and system.
A first aspect of the invention provides a method for improving a shooting training with a shooting device on a target, the shooting device comprising a laser having an initial position and the target comprising one or more laser receivers, the method comprising: acquiring image or video data relating to the target; analyzing the acquired image or video data of the target to determine pixels corresponding to at least one part of the target; determining a distance between the shooting device and the target based on the analysis and determining a position of one of the laser receivers; shooting a virtual or a real projectile with the shooting device; determining the impact zone of the projectile on the target by calculating a trajectory from the shooting device to the target based on the distance determination and ballistic data of the projectile; deflecting the laser from the initial position to a new position based on the determined position of said one laser receiver; and transmitting, with the laser at the new position, to the laser receiver data relating to the determined impact zone and relating to the shot on the target.
The invention enables a simultaneous and realistic estimation of the projectile trajectory, which engenders a more precise determination of the impact zone. Additionally, the system does not require placing a large number of sensors on the users' equipment, which reduces the weight, thereby enabling realistic training conditions, and reduces the costs.
Additionally, the invention enables using the laser for transmitting data instead of for determining the impact zone, reducing the number of laser sensors. In particular, this means of communication enables determining the position of the receiver target in the shooting device frame using the image analysis. This in turn enables communicating with the target by pointing the laser at it without knowing its “identification” or network address.
Advantageously, it is a point-to-point communication between the shooting device and the target, which does not rely on an external infrastructure network. Therefore, the communication between the shooting device and the target using the laser enables lowering the cost and ensuring an effective use of bandwidth.
Optionally, the data is transmitted if the projectile hits the target.
Optionally, the shooting device comprises a trigger to shoot the projectile on the target, the acquisition of the image or video data being triggered when the trigger is activated.
Advantageously, the steps of the method are performed automatically when a user pulls the trigger.
Optionally, the target is moving and the analysis step comprises: analyzing the motion of the target and determining a velocity of the target, and wherein the impact zone determination step comprises predicting the target position when the projectile reaches the target.
Advantageously, if the target is moving, it is possible to measure a future position of the target.
Optionally, the method comprises acquiring further image or video data relating to the target prior to the step of transmitting data relating to the shot on the target.
Advantageously, if the target is moving, the further image or video data enable determining the kinematics of the target and thus its position at the transmitting time to ensure that the data relating to the shot is transmitted.
Optionally, the data relating to the shot on the target are chosen from a group consisting of a type of ammunition, identification information about a first user shooting the projectile, a time stamp and/or target coordinates, and a combination thereof.
Advantageously, the data relating to the shot can further be used for training purposes, damage assessment on the target side, and optional reporting to an exercise monitoring station.
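As a sketch of how such shot data might be serialized for the laser link (the frame layout, field widths and function names below are assumptions, not taken from the description):

```python
import struct
import time

# Hypothetical fixed-size frame for the laser link: ammunition type,
# shooter identifier, millisecond timestamp and target coordinates.
SHOT_FRAME = struct.Struct("<B H Q f f")  # little-endian, 19 bytes

def encode_shot(ammo_type: int, shooter_id: int, x: float, y: float) -> bytes:
    """Pack one shot report into bytes ready for laser modulation."""
    return SHOT_FRAME.pack(ammo_type, shooter_id, int(time.time() * 1000), x, y)

def decode_shot(frame: bytes) -> tuple[int, int, int, float, float]:
    """Unpack a received frame back into its fields."""
    return SHOT_FRAME.unpack(frame)
```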
A second aspect of the invention provides a system for improving a shooting training with a shooting device on a target, the system comprising means to implement the method of any one of claims 1 to 6, the means comprising: a camera configured to acquire the image or video data relating to the target; a laser having an initial position; one or more laser receivers configured to receive the data relating to the determined impact zone and relating to the shot on the target; a deflecting system configured to aim the laser at the one or more laser receivers; and a processing unit.
Optionally, the deflecting system comprises micro-electromechanical systems.
Optionally, the diameter of the laser is adjustable based on the analysis of the video or image data.
Advantageously, it ensures that the receiver of the correct user is hit in case a group of users are nearby.
A third aspect of the invention provides a computer program product, said computer program product comprising code instructions for performing, when executed by a processing unit, the steps of: analyzing the acquired image or video data of the target to determine pixels corresponding to at least one part of the target; determining a distance between the shooting device and the target based on the analysis and determining a position of one of the laser receivers; determining, after a shooting of a projectile, the impact zone of the projectile on the target by calculating a trajectory from the shooting device to the target based on the distance determination and ballistic data of the projectile; deflecting the laser from the initial position to a new position based on the determined position of said one laser receiver; and transmitting, with the laser at the new position, to the laser receiver data relating to the determined impact zone and relating to the shot on the target.
The following description presents several examples of embodiment of the device of the invention: these examples are not limiting of the scope of the invention. These exemplary embodiments present both the essential characteristics of the invention as well as additional characteristics linked to the embodiments considered.
Embodiments of the invention will now be described with reference to the accompanying drawings, in which:
The invention relates to a method 1000 for improving a shooting training with a shooting device on a target.
The method 1000 may be used for live exercise training in which multiple players are training. In another example, the method 1000 may be used in a shooting game, such as a laser game. For example, each user may hold a shooting device and be a target at the same time. In another example, some users may hold a shooting device and others may be a target. The target T corresponds to a user or a body area of a user.
The shooting device W may be a firearm, such as a gun, that launches projectiles such as ammunition or bullets. In another example, the device W is not required to fire any ammunition; the shot may be calculated virtually.
At block 1002, the method 1000 comprises acquiring image or video data relating to the target. For example, the shooting device W of a user may comprise a camera C which takes images and/or videos of a scene comprising another user. The camera C may be sensitive to infrared light in order to take images or videos in the dark.
As illustrated in
The shooting device W comprises a trigger Trig to shoot a projectile on the target T. In an example, the acquisition of the image or video data is triggered when the trigger is activated. In particular, when the first user U1 pulls the trigger, the photo and/or video is acquired automatically. In an example, the camera stream is activated as soon as the trigger Trig is activated, the activation being determined using an inertial measurement unit.
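A minimal sketch of such trigger detection from the inertial measurement unit, assuming the trigger or recoil event shows up as an acceleration spike (the threshold and the function name are illustrative, not taken from the description):

```python
def trigger_detected(accel_samples: list[float], threshold_g: float = 3.0) -> bool:
    """Return True when an acceleration spike consistent with a trigger/recoil
    event appears in the IMU stream (threshold_g is an illustrative value
    that would be tuned to the actual shooting device)."""
    return any(abs(a) > threshold_g for a in accel_samples)
```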
At block 1004, the method comprises analyzing the acquired image or video data of the target T to determine pixels corresponding to at least one part of the target T. For example, the analysis may comprise using a stance calculation algorithm based on neural networks.
The determination of pixels corresponding to at least one part of the target T is illustrated in
The camera C has previously been aligned and calibrated. For example, a database listing standard distances between different body key points may be used, such as the NASA standard human body model, together with the detected human body stance.
At block 1006, the method comprises determining a distance between the shooting device and the target based on the analysis and determining a position of one of the laser receivers.
The standard distances between different body key points, together with the camera parameters (such as the optical parameters of the lens) determined previously, may be used to determine the distance between the shooting device and the target T, i.e. the second user U2.
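A minimal sketch of this range estimation under a pinhole camera model, assuming a calibrated focal length in pixels and a standard inter-keypoint distance taken from an anthropometric database (the 0.41 m shoulder width mentioned in the docstring is illustrative):

```python
def estimate_distance(focal_px: float,
                      keypoint_a: tuple[float, float],
                      keypoint_b: tuple[float, float],
                      real_length_m: float) -> float:
    """Estimate target range with the pinhole model.

    focal_px      : calibrated focal length in pixels
    keypoint_a/b  : detected body key points (e.g. the two shoulders) in the image
    real_length_m : standard distance between those key points taken from an
                    anthropometric database (e.g. shoulder width ~0.41 m)
    """
    pixel_length = ((keypoint_a[0] - keypoint_b[0]) ** 2 +
                    (keypoint_a[1] - keypoint_b[1]) ** 2) ** 0.5
    # Similar triangles: distance = f * L_real / L_pixels
    return focal_px * real_length_m / pixel_length
```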
Additionally, the distance may be determined based on a GPS position of the users. In particular, each user may hold a GPS transmitter, for example comprised in the processing unit PU, which enables continuous tracking of the GPS position of each user.
Additionally or alternatively, the first user U1 may hold a laser telemeter to detect the distance between the shooting device and the second user U2.
If multiple users are detected in the image or video data, the distance is calculated per user and the subsequent processing is performed on a per-user basis, rejecting users that do not cross the ammunition's ballistic path.
At block 1008, the method comprises shooting a virtual or a real projectile with the shooting device.
In particular, the first user U1 may shoot blank ammunition, aiming at the target T, which is the second user U2 or a body area of the second user U2.
As indicated before, the previous steps may be triggered when the first user U1 pulls the trigger Trig. When the first user U1 pulls the trigger Trig, a non-lethal projectile may also be shot from the shooting device W. Therefore, the previous steps and the shooting of the projectile may be completed simultaneously.
At block 1010, the method comprises determining the impact zone of the projectile on the target T by calculating a trajectory from the shooting device to the target T based on the distance determination and ballistic data from the projectile.
The ballistic data may include the type of ammunition and the type of shooting device W.
The trajectory of the projectile may be calculated by the processing unit PU of the user U1, in which the ballistic data of the projectile are stored. In particular, the shooting device W and the ammunition used may be known, and therefore the ballistic properties. Using the distance determined previously, the projectile drop is calculated. The theoretical hit point is then determined analogously to the dynamic hit determination.
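A first-order sketch of the projectile drop calculation, assuming flat fire and neglecting air drag (a real implementation would use table-driven ballistic data for the known weapon and ammunition):

```python
G = 9.81  # gravitational acceleration, m/s^2

def projectile_drop(distance_m: float, muzzle_velocity_ms: float) -> float:
    """First-order drop estimate: time of flight under a flat-fire
    assumption, then free-fall drop over that time (air drag ignored)."""
    time_of_flight = distance_m / muzzle_velocity_ms
    return 0.5 * G * time_of_flight ** 2
```

For example, at 200 m with an 800 m/s muzzle velocity this gives a time of flight of 0.25 s and a drop of roughly 0.31 m.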
For example, the processing unit PU of user U1 may calculate if the projectile has hit the target T, and/or which area of the target has been hit, corresponding to the impact zone on the target.
Additionally, at block 1012, the method 1000 comprises deflecting the laser L from the initial position to a new position based on the determined position of said one laser receiver.
At block 1014, the method 1000 comprises transmitting with the laser at the new position to the laser receiver data relating to the determined impact zone and relating to the shot on the target.
The processing unit PU of the user U1 calculates the position of the receiver of the second user U2 in the shooting device coordinates frame. Further, the processing unit PU then deflects the shooting device mounted steerable laser to aim at the target receiver. Data are then transmitted to the laser receiver of the second user U2 for further processing.
For example, as shown in
Further, damage assessment information may be sent by the processing unit of the second user U2 to an exercise monitoring station for higher-level scenario control and later debriefing analysis.
The target may comprise only one laser receiver R, for example on the helmet H. Alternatively, the target may comprise several receivers R, and the processing unit PU may select one of them. For example, the processing unit PU may select the receiver which is visible on the image or video data, or randomly pick one of the receivers. The receiver R may be comprised in the processing unit PU or may be separate. For example, the receiver R may be placed on the helmet of the second user U2 and the data may be transmitted via Bluetooth to the processing unit PU. Alternatively, the shooting device may comprise a laser L, and the data may be transmitted to the receiver R via a laser beam B.
The laser beam B is deflected to aim at the receiver R in order to transmit the data. For example, the shooting device W may comprise a system for deflecting the laser L, such as micro-electromechanical systems.
The receiver R is placed in the field of view F of the camera C, which enables the deflection of the laser L. In particular, the image and/or video data may be analyzed to calculate the deflection of the laser needed to reach the receiver R.
Using the determined body stance and its dimensions, the position of the laser receiver R on the target T can be determined. The laser L is deflected to a position (dx_ptp, dy_ptp) using the MEMS mirrors and transmits the data to the receiver.
The following formulas may be used to define the deflection of the laser:
y_ml_offset represents the distance between the spot on the target seen through the gun sights and the point on the target where the projectile hits the second user U2.
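The exact deflection formulas are not reproduced here; the following sketch shows one plausible geometry, assuming the laser is boresighted with the camera so that a pixel offset maps to mirror angles through the pinhole model (all names are assumptions):

```python
import math

def deflection_angles(receiver_px: tuple[float, float],
                      principal_point: tuple[float, float],
                      focal_px: float) -> tuple[float, float]:
    """Convert the receiver's pixel position into the pair of MEMS mirror
    angles (dx_ptp, dy_ptp), in radians, that steers the beam onto it,
    assuming the laser is boresighted with the camera."""
    dx = math.atan((receiver_px[0] - principal_point[0]) / focal_px)
    dy = math.atan((receiver_px[1] - principal_point[1]) / focal_px)
    return dx, dy
```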
The method 1000 may further comprise: acquiring further image or video data relating to the target T prior to the step of transmitting data relating to the shot on the target T. In particular, the target T being hit by the projectile may engender a different position or stance of the second user U2. For example, the second user U2 may fall on the ground or run away. Therefore, the position of the receiver R may be different from that at the time the image or video data was acquired.
The diameter of the laser beam B may be adapted (increased or decreased) at the laser L depending on the estimated distance, for example to enable close-combat training when several users are in the same area, or to compensate for an elliptical laser spot.
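A minimal sketch of this diameter adaptation, assuming the goal is a spot size at the target roughly matching the receiver aperture at the estimated range (the exit diameter and names are illustrative):

```python
def required_beam_divergence(receiver_size_m: float,
                             distance_m: float,
                             exit_diameter_m: float = 0.002) -> float:
    """Full divergence angle (rad) so that the spot at the target roughly
    matches the receiver aperture: spot = exit diameter + divergence * range.
    All sizes here are illustrative assumptions."""
    return max(0.0, (receiver_size_m - exit_diameter_m) / distance_m)
```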
As illustrated in
The tracking of the body stance of the user U2 enables determining the instantaneous velocity v_human of the tracked user and its position in the immediate future in the potentially moving coordinate frame of the shooting device. Using the determined velocity, the distance estimation between the shooting device W and the target T, as well as the ballistic properties of the ammunition, the time-delayed impact point may be calculated when shooting with lead.
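A minimal sketch of this lead calculation, assuming a constant target velocity over the time of flight (the names are assumptions):

```python
def predicted_impact_point(target_pos: tuple[float, float],
                           target_vel: tuple[float, float],
                           distance_m: float,
                           muzzle_velocity_ms: float) -> tuple[float, float]:
    """Predict where the target will be when the projectile arrives.

    target_pos and target_vel come from tracking the body stance across
    frames, expressed in the (possibly moving) shooting-device frame."""
    time_of_flight = distance_m / muzzle_velocity_ms
    return (target_pos[0] + target_vel[0] * time_of_flight,
            target_pos[1] + target_vel[1] * time_of_flight)
```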
In an example, the data is transmitted if the projectile hits the target. For example, if the theoretical impact point coincides with the body, this information may be transmitted to the receiver R.
The invention relates to a system 100 for improving a shooting training with a shooting device on a target.
As illustrated in
The camera C may comprise a system for analysing the images or videos and thereby detecting the stance of the second user U2. To that end, the camera may be implemented using hardware, software, and/or a combination thereof in order to analyze the image and video data.
Alternatively, the camera acquires the image and video data and transmits the data to a processing unit PU for further analysis. In that case, the camera may comprise a Bluetooth module to communicate with the processing unit PU. In particular, the camera C may transmit the image and video data to the processing unit PU via Bluetooth or local Wi-Fi. The camera C may comprise a power supply. For example, WO 2020/079157 describes a camera that may be used.
The camera C comprises a laser source L for transmitting data. The laser source L may use a MEMS deflector and may be steerable to any position in the image.
The system 100 further comprises one or more receivers for receiving the data relating to the determined impact zone and relating to the shot on the target.
The shooting device further comprises a deflecting system configured to aim the laser at the one or more receivers.
In another example, the system 100 comprises multiple receivers R. For example, the system 100 may comprise one receiver which is placed on the helmet H of the user and a second receiver placed on the chest of the user. This is advantageous as it enables transferring data when several users are close to each other or in case a particular sensor is hidden by an obstacle.
In an example, the deflecting system comprises micro-electromechanical systems (MEMS). MEMS enable compact and fast solutions to deflect the laser.
In an example, the diameter of the laser L is adjustable based on the analysis of the video or image data.
The system 100 further comprises a processing unit PU comprising code instructions for: analyzing the acquired image or video data of the target to determine pixels corresponding to at least one part of the target; determining a distance between the shooting device and the target based on the analysis and determining a position of one of the laser receivers; determining, after a shooting of a projectile, the impact zone of the projectile on the target by calculating a trajectory from the shooting device to the target, based on the distance determination and ballistic data of the projectile; deflecting the laser from the initial position to a new position based on the determined position of said one laser receiver; and transmitting, with the laser at the new position, to the laser receiver data relating to the determined impact zone and relating to the shot on the target.
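Tying the above together, a hypothetical end-to-end pipeline for the processing unit could look as follows; the camera, imu, laser and ballistics objects, as well as detect_body_keypoints and hits_target, are assumed interfaces, and the other helpers are the sketches given earlier:

```python
def process_shot(camera, imu, laser, ballistics):
    """End-to-end sketch of the processing-unit pipeline; every object and
    the detect_body_keypoints/hits_target helpers are assumed interfaces."""
    if not trigger_detected(imu.read_accel()):
        return
    frame = camera.capture()                                  # block 1002
    keypoints = detect_body_keypoints(frame)                  # block 1004 (hypothetical)
    dist = estimate_distance(camera.focal_px,
                             keypoints["l_shoulder"],
                             keypoints["r_shoulder"], 0.41)   # block 1006
    drop = projectile_drop(dist, ballistics.muzzle_velocity)  # block 1010
    if not hits_target(keypoints, drop):                      # hypothetical hit test
        return
    dx, dy = deflection_angles(keypoints["receiver"],
                               camera.principal_point,
                               camera.focal_px)
    laser.deflect(dx, dy)                                     # block 1012
    laser.transmit(encode_shot(ammo_type=1, shooter_id=42,
                               x=dx, y=dy))                   # block 1014
```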
The processing unit PU of the first user U1 may comprise an artificial intelligence controller implementing image detection algorithms. Additionally, the processing unit PU of the second user U2 may comprise an acoustic indicator and an optical indicator or a vibration device to warn the second user U2 that they have been hit.
The processing unit PU may be implemented using hardware, software, and/or a combination thereof. For example, hardware devices may be implemented using processing circuitry such as, but not limited to, a processor, Central Processing Unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a System-on-Chip (SoC), a programmable logic unit, a microprocessor, or any other device capable of responding to and executing instructions in a defined manner. Software may include a computer program, program code, instructions, or some combination thereof, for independently or collectively instructing or configuring a hardware device to operate as desired. The computer program and/or program code may include program or computer-readable instructions, software components, software modules, data files, data structures, and/or the like, capable of being implemented by one or more hardware devices, such as one or more of the hardware devices mentioned above. When a hardware device is a computer processing device (e.g. CPU, a controller, an ALU, a digital signal processor, a microcomputer, a microprocessor, etc.), the computer processing device may be configured to carry out program code by performing arithmetical, logical, and input/output operations, according to the program code. Each unit may also include one or more storage devices. The one or more storage devices may be tangible or non-transitory computer-readable storage media, such as random access memory (RAM), read only memory (ROM), a permanent mass storage device (such as a disk drive), solid state (e.g., NAND flash) device, and/or any other like data storage mechanism capable of storing and recording data. The one or more storage devices may be configured to store computer programs, program code, instructions, or some combination thereof, for one or more operating systems and/or for implementing the example embodiments described herein. The computer programs, program code, instructions, or some combination thereof, may also be loaded from a separate computer readable storage medium into the one or more storage devices and/or one or more computer processing devices using a drive mechanism. Such separate computer readable storage medium may include a Universal Serial Bus (USB) flash drive, a memory stick, a Blu-ray/DVD/CD-ROM drive, a memory card, and/or other like computer readable storage media.
The processing unit PU may further comprise a Bluetooth and Wi-Fi module to receive the image and video data from the camera and the shot information from the laser receiver. The processing unit PU may further comprise a power supply.
In an example, the processing unit PU may be a standard smartphone with graphic processing unit functionality. The phone may run an Android operating system. Alternatively, the processing unit PU may be a dedicated Linux platform.
In an example, the processing unit may be the device for shot analysis described in WO 2020/079157.
The invention also concerns a computer program product, said computer program product comprising code instructions for performing, when executed by a processing unit, the steps of: analyzing the acquired image or video data of the target to determine pixels corresponding to at least one part of the target; determining a distance between the shooting device and the target based on the analysis and determining a position of one of the laser receivers; determining, after a shooting of a projectile, the impact zone of the projectile on the target by calculating a trajectory from the shooting device to the target based on the distance determination and ballistic data of the projectile; deflecting the laser from the initial position to a new position based on the determined position of said one laser receiver; and transmitting, with the laser at the new position, to the laser receiver data relating to the determined impact zone and relating to the shot on the target.
These steps may be implemented by the processing unit PU described above.
The present description illustrates one embodiment of the invention but is not limiting. The example was chosen to allow a good understanding of the principles of the invention and one specific application, but it is not exhaustive, and the description should allow a person skilled in the art to provide modifications and implementation variants while keeping the same principles. Additionally, although the system has been illustrated in the examples with one receiver placed on a helmet, the receiver may be placed on the chest, for example, or on any other body part of the users, and several receivers may be used. Further, although the invention has been illustrated in shooting training, it may also be used in games, such as laser games.