The present invention relates to the field of autonomous robots. In particular, it relates to an intelligent tactical engagement trainer.
Combat personnel undergo training in which human players spar with trainers or an opposing force (OPFOR) to practice a desired tactical response (e.g. take cover and fire back). In such tactical and shooting practice, a trainer or OPFOR could be replaced by an autonomous robot. The robot has the advantage that it is not subject to fatigue or emotional factors; however, it must exhibit intelligent movement and reactions, such as shooting back, in an uncontrolled environment, i.e. it could be a robotic trainer acting as an intelligent target that reacts to the trainees. Conventionally, systems have used human look-alike targets mounted and run on fixed rails, giving only fixed motion effects. In another example, mobile robots act as targets operating in a live firing range setting; however, shoot-back capabilities in such systems are not defined. In yet another example, a basic shoot-back system is provided, but the system lacks mobility and intelligence and does not address human-like behaviours in the response. Conventionally, a barrage array of lasers is used without any aiming.
In accordance with a first aspect of an embodiment, there is provided a simulation-based Computer Generated Force (CGF) system for tactical training in a training field, including a receiver for receiving information on the training field; a database for storing a library of CGF behaviours for one or more robots in the training field; a CGF module, coupled with the receiver and the database, for processing the information on the training field and selecting a behaviour for each of the one or more robots in the training field from the library of CGF behaviours stored in the database; and a controller, coupled with the CGF module, for sending commands based on the selected behaviours to the one or more robots in the training field. The information on the training field includes locations of one or more trainees, and the commands include shooting at the one or more trainees.
In accordance with a second aspect of an embodiment, there is provided a method for conducting tactical training in a training field, including receiving information on the training field, processing the information on the training field, selecting a behaviour for each of one or more robots in the training field from a library of CGF behaviours stored in a database, and sending commands based on the selected behaviours to the one or more robots in the training field. The information on the training field includes locations of one or more trainees, and the commands include shooting at the one or more trainees.
The accompanying figures serve to illustrate various embodiments and to explain various principles and advantages in accordance with the present embodiments.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been depicted to scale. For example, the dimensions of some of the elements in the block diagrams or flowcharts may be exaggerated relative to other elements to help improve understanding of the present embodiments.
In accordance with an embodiment, there is provided a robot solution that would act as a trainer/OPFOR, with which players can practice tactical manoeuvres and target engagements.
In accordance with an embodiment, there is provided a simulation system backend that provides the scenario and behaviours for the robotic platform and its payload. The robotic platform carries a computer vision-based shoot-back system for tactical and target engagement using a laser engagement system (e.g. MILES2000).
These embodiments advantageously enable at least the following. The solutions are versatile in that they can be easily reconfigured onto different robot bases such as wheeled, legged or flying bases. In particular, a collective realization of the following features is provided:
(1) Simulation-based computer generated force (CGF) behaviours and actions as a controller for the robotic shoot back platform.
(2) A computer vision-based intelligent laser engagement shoot-back system.
(3) A voice procedure processing and translation system for two-way voice interaction between instructors/trainees and the robotic shoot-back platform.
The target sensing and shoot-back part 206 in each of the one or more autonomous platforms 104 includes an optical-based electromagnetic transmitter and receiver, camera(s) covering the infra-red to visible colour spectrum, range sensors, imaging depth sensors and sound detectors. The optical-based electromagnetic transmitter and receiver may function as a laser engagement transmitter and detector, which is discussed further below.
The one or more autonomous platforms 104 further include computing processors coupled to the optical based electromagnetic transmitter and receiver, cameras, and sensors for executing their respective algorithms and hosting the system data. The processors may be embedded processors, CPUs, GPUs, etc.
The one or more autonomous platforms 104 further include communication and networking devices 210 such as WIFI, 4G/LTE, RF radios, etc. These communication and networking devices 210 are arranged to work with the computing processors.
The one or more autonomous platforms 104 could be legged, wheeled, aerial, underwater, surface craft, or in any transport vehicle form so that the one or more autonomous platforms 104 can move around regardless of conditions on the ground.
The appearance of the one or more autonomous platforms 104 is configurable as an adversary opposing force (OPFOR) or as a non-participant (e.g. a civilian). Depending on the training situation, the one or more autonomous platforms 104 can be flexibly configured to fit that situation.
The target sensing and shoot back part 206 may include paint-ball, blank cartridges or laser pointers to enhance effectiveness of training. Also, the target sensing and shoot-back part 206 can be applied to military and police training as well as sports and entertainment.
In an embodiment, an image machine learning part in a remote station may work with the vision-based target engagement system 206 in the autonomous platform 104 to enhance the target engagement function, as shown at 106.
Also, a Simulation System 102 is provided, the functionalities and components of which are described below.
The modules of a standard CGF rational/cognitive model cannot directly control a robot with sensing and control feedback from the robot, as these high-level behavioural models do not necessarily translate into robotic actions/movements and vice versa. Conventionally, this indirect relationship is a key obstacle that makes the direct integration of the modules challenging. As such, it has been tedious in conventional systems to design a robot's autonomous actions as part of the training scenarios.
In accordance with the present embodiment, the pre-recorded path of the actual robot under remote control is used to set up a training scenario. Furthermore, in contrast to the tedious set-up issues highlighted above, a computer running a 3D game engine is used to provide a more intuitive method for designing the robot movements.
In accordance with an embodiment, a CGF middleware (M-CGF) that is integrated into a standard CGF behavioural model is provided, as shown at 204.
The functionalities and components of this simulation system include a CGF middleware. The CGF middleware 308 receives as inputs 3D action parameters of the robots, planned mission parameters, CGF behaviours and robot-specific dynamic parameters such as maximum velocity, acceleration, payload, etc.
The CGF middleware 308 processes the multi-variable and multi-modal inputs (both discrete and continuous data in the spatial-temporal domain) into meaningful real-time signals to command the robot. The atomic real-time signals also command the robot emulator for visualization in the graphics engine.
In the CGF middleware 308, a robot emulator is used for virtual synthesis of the shoot-back robot for visualization. Also, the CGF middleware 308 could be in the form of a software application or dedicated hardware such as an FPGA.
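As a purely illustrative sketch of this translation step (not the claimed middleware itself), the following Python fragment turns a discrete behaviour plan, expressed as waypoints, together with robot-specific dynamic limits such as maximum velocity and acceleration, into a stream of atomic real-time velocity commands; all names and numeric values are assumptions.

```python
# Illustrative sketch only: discrete CGF behaviour plan (waypoints) plus
# robot-specific dynamic limits -> atomic real-time velocity commands.
import math


def middleware_commands(waypoints, v_max, a_max, dt=0.1):
    """Yield (vx, vy) velocity commands, limited by v_max and a_max during
    acceleration, that step through the waypoints of a planned behaviour."""
    x, y, speed = waypoints[0][0], waypoints[0][1], 0.0
    for wx, wy in waypoints[1:]:
        while True:
            dx, dy = wx - x, wy - y
            dist = math.hypot(dx, dy)
            if dist < 1e-3:
                break
            # Accelerate toward v_max but never overshoot the waypoint this step.
            speed = min(v_max, speed + a_max * dt, dist / dt)
            vx, vy = speed * dx / dist, speed * dy / dist
            x, y = x + vx * dt, y + vy * dt
            yield vx, vy


if __name__ == "__main__":
    # e.g. a patrol leg with assumed limits v_max = 2 m/s, a_max = 1 m/s^2
    commands = list(middleware_commands([(0.0, 0.0), (5.0, 0.0)], v_max=2.0, a_max=1.0))
    print(len(commands), "atomic velocity commands generated")
```

Each yielded command would be sent at the control rate to the real robot or, equivalently, to the robot emulator for visualization.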
The simulation system further includes Computer Generated Force (CGF) cognitive components. In the CGF cognitive components, robotic behaviours are designed like CGF behaviours and may reside on the robot, on the remote server or on both.
The CGF behaviours imaged onto the robotic platform can drive the robotic actions directly and thus result in desired autonomous behaviours to enable the training outcomes as planned.
In the CGF cognitive components, machine learning is used to adjust and refine the behaviours. Also, the CGF cognitive components use information on simulation entities and weapon models to refine the CGF behaviours.
Furthermore, the CGF cognitive components enable the robot (autonomous platform) to interact with other robots for collaborative behaviours such as training for military operations.
The CGF cognitive components also enable the robot to interact with humans, such as trainers and trainees. The components generate action-related voice procedures and behaviour-related voice procedures, preferably in multiple languages, so that the robot can give instructions to the trainees. The components also include voice recognition components so that the robot can receive and process instructions from the trainers.
The simulation system further includes a terrain database 304. The data obtained from the terrain database 304 enables 3D visualization of the field which refines autonomous behaviours.
Based on computer vision algorithms, the simulation system generates data sets of virtual image data for machine learning, and these data sets are refined through machine learning.
The system further includes a library of CGF behaviours. One or more CGF behaviours are selected from the library of CGF behaviours based on training objectives.
In the simulation system, a pedagogical engine automatically selects behaviours and difficulty levels based on the actions of trainees detected by computer vision. For example, if trainees are not able to engage the robotic targets well, the robotic targets detect the poor trainee performance and, in response, lower the difficulty level from expert to novice. Alternatively, the robotic targets can change behaviours, such as slowing down their movements, to make the training more progressive.
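A minimal sketch of such a difficulty-adjustment rule is given below, assuming a hypothetical hit-rate metric derived from computer vision; the thresholds, level names and one-step adjustment are illustrative only.

```python
# Illustrative sketch of adapting difficulty from trainee performance.
DIFFICULTY_LEVELS = ["novice", "intermediate", "expert"]


def adjust_difficulty(current: str, trainee_hit_rate: float) -> str:
    """Lower the level when trainees engage targets poorly, raise it when they
    engage well; otherwise keep the current level."""
    i = DIFFICULTY_LEVELS.index(current)
    if trainee_hit_rate < 0.3 and i > 0:
        return DIFFICULTY_LEVELS[i - 1]
    if trainee_hit_rate > 0.8 and i < len(DIFFICULTY_LEVELS) - 1:
        return DIFFICULTY_LEVELS[i + 1]
    return current


def behaviour_speed_scale(level: str) -> float:
    """Slow down robot movements at lower levels to make training more progressive."""
    return {"novice": 0.5, "intermediate": 0.8, "expert": 1.0}[level]


print(adjust_difficulty("expert", trainee_hit_rate=0.2))  # -> "intermediate"
```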
Gestures by humans are mapped to commands with feedback control such as haptic feedback or tactile feedback. In the simulation system, the gesture recognition is trained to enhance its precision. Gesture control for single or multiple robot entities is carried out in the simulation system. If the gesture control in the simulation system is successful, it is mirrored onto the robot's mission controller.
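One way such a mapping could look, sketched under assumed gesture names, a confidence threshold and haptic feedback patterns (none of which are specified in the present description), is:

```python
# Illustrative mapping of recognised human gestures to robot commands,
# with a stub haptic acknowledgement; all names are assumptions.
GESTURE_COMMANDS = {
    "halt": "stop",
    "move_forward": "advance",
    "rally": "regroup_at_leader",
}


def handle_gesture(gesture: str, confidence: float, haptic_ack=print):
    """Map a recognised gesture to a command and acknowledge via haptic feedback."""
    if confidence < 0.8 or gesture not in GESTURE_COMMANDS:
        haptic_ack("double_buzz")   # ask the human to repeat the gesture
        return None
    haptic_ack("single_buzz")       # confirm the gesture was accepted
    return GESTURE_COMMANDS[gesture]


print(handle_gesture("halt", 0.95))
```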
The mission controller 204 in the shoot back robot may execute computer implemented methods that manage all the functionality in the shoot back robot and interface with the remote system. For example, the mission controller 204 can receive scenario plans from the remote system. The mission controller 204 can also manage behaviour models.
The mission controller 204 further disseminates tasks to other modules and monitors the disseminated tasks.
Furthermore, the mission controller 204 manages coordination between the shoot back robots for collaborative behaviours such as training for military operations.
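A highly simplified, assumption-based sketch of the dissemination and monitoring role of the mission controller 204 follows; the module names and the callback interface are illustrative, not the claimed design.

```python
# Illustrative sketch: receive a scenario plan from the remote system,
# disseminate per-module tasks, and monitor the disseminated tasks.
class MissionController:
    def __init__(self, modules):
        self.modules = modules      # module name -> callable accepting a task dict
        self.task_status = {}

    def receive_scenario(self, scenario):
        """Break a scenario plan into per-module tasks and disseminate them."""
        for task in scenario["tasks"]:
            handler = self.modules[task["module"]]
            self.task_status[task["id"]] = "dispatched"
            handler(task)

    def report(self, task_id, status):
        """Modules call back here so the controller can monitor their tasks."""
        self.task_status[task_id] = status


controller = MissionController({"navigation": lambda t: None, "shoot_back": lambda t: None})
controller.receive_scenario({"tasks": [{"id": 1, "module": "navigation"},
                                       {"id": 2, "module": "shoot_back"}]})
controller.report(1, "done")
print(controller.task_status)
```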
During the training, data such as robot behaviours, actions and navigation are recorded and compressed in an appropriate format.
For a robotic shoot-back system, a robot needs to see and track a target (a trainee) in line of sight with a weapon before the target (the trainee) hits the robot. After the robot shoots at a target, it needs to know how accurately it has hit the target. Also, in any such system, the target sensing and shooting modules have to be aligned.
The shooter 402 includes a target engagement platform, a processor and a laser transmitter. The target engagement platform detects a target 404 by a camera with computer vision functions and tracks the target 404. The target engagement platform is coupled to the processor, which executes a computer implemented method for receiving information from the target engagement platform. The processor is further coupled to the laser transmitter, preferably together with an alignment system. The processor further executes a computer implemented method for sending an instruction to the laser transmitter to emit a laser beam 406 with a specific power output in a specific direction.
The target 404 includes a laser detector 408 and a target accuracy indicator 410. The laser detector 408 receives the laser beam 406 and identifies the location where the laser beam reaches on the target 404. The distance between a point where the laser beam 406 is supposed to reach and the point where the laser beam 406 actually reaches is measured by the target accuracy indicator 410. The target accuracy indicator 410 sends hit accuracy feedback 412 including the measured distance to the processor in the shooter 402. In an embodiment, the target accuracy indicator 410 instantaneously provides hit-accuracy feedback 412 to the shooter in the form of coded RF signals. The target accuracy indicator 410 may provide hit-accuracy feedback 412 in the form of visual indicators. The processor in the shooter 402 may receive commands from the CGF in response to the hit-accuracy feedback 412.
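The following sketch illustrates, under assumed units and message fields (the actual feedback is described as coded RF signals or visual indicators), how the hit-accuracy measurement of the target accuracy indicator 410 and the shooter's reaction to it might be expressed.

```python
# Illustrative sketch of the hit-accuracy feedback loop between shooter 402
# and target 404; field names, units and thresholds are assumptions.
import math


def hit_accuracy_feedback(aim_point, hit_point):
    """Target accuracy indicator 410: distance between intended and actual hit."""
    miss = math.hypot(hit_point[0] - aim_point[0], hit_point[1] - aim_point[1])
    return {"miss_distance_m": miss, "hit": miss < 0.1}


def shooter_response(feedback):
    """Shooter 402: decide (e.g. under a CGF command) how to react to the feedback."""
    if feedback["hit"]:
        return "select_next_target"
    return "re_aim_and_fire_again"


fb = hit_accuracy_feedback(aim_point=(0.0, 0.0), hit_point=(0.05, 0.02))
print(fb, shooter_response(fb))
```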
On the shooter side, at least one camera and at least one laser beam transmitter are mounted on the rotational target engagement platform. The camera and transmitter may also be rotated independently. If the target is detected in 502, the functional activity flow moves forward to target tracking 506. The target detection and tracking are carried out by the computer vision-based methods hosted on the processor.
In 508, the position difference between the bounding box of the tracked target and the crosshair is used for rotating the platform in 510 until the bounding-box centre and the crosshair are aligned. Once the tracking is considered stable, the laser is triggered in 512.
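A minimal sketch of this aiming loop is given below, assuming a proportional pan/tilt correction, a pixel tolerance and a frame-based stability criterion that are not specified in the present description; track_target, rotate_platform and fire_laser stand in for the vision tracker, the platform actuator and the laser trigger.

```python
# Illustrative sketch of the aiming loop in 508-512: compute the pixel offset
# between the tracked bounding-box centre and the crosshair, rotate the
# pan-tilt platform until they align, then trigger the laser.
def bbox_centre(bbox):
    x, y, w, h = bbox
    return (x + w / 2.0, y + h / 2.0)


def aim_and_fire(track_target, rotate_platform, fire_laser,
                 crosshair=(320, 240), tol_px=5, stable_frames=10, k=0.05):
    """Proportional pan/tilt correction from the pixel error; fire once stable."""
    stable = 0
    while stable < stable_frames:
        bbox = track_target()                # (x, y, w, h) from the vision tracker
        if bbox is None:                     # target lost: abort this engagement
            return False
        cx, cy = bbox_centre(bbox)
        err_x, err_y = cx - crosshair[0], cy - crosshair[1]
        if abs(err_x) <= tol_px and abs(err_y) <= tol_px:
            stable += 1                      # count consecutive aligned frames
        else:
            stable = 0
            rotate_platform(pan=-k * err_x, tilt=-k * err_y)
    fire_laser()
    return True
```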
On the target side, upon detection of a laser beam/cone, the target would produce a hit-accuracy feedback signal through (i) a visual means (blinking light) or (ii) a coded and modulated signal of RF media which the “shooter” is tuned to.
The shooter waits for the hit-accuracy feedback from the target side in 504. Upon receiving the hit-accuracy feedback, the system decides whether to continue with the same target.
In 602, the target is not aligned to the crosshair 606. Thus, the platform is rotated until the crosshair 606 is at the centre of the tracker bounding box before the laser is fired, as shown in 604.
In one example, a system for automatic computer vision-based detection and tracking of targets (humans, vehicles, etc.) is provided. By using an adaptive cone of laser-ray shooting based on image tracking, the system aligns the aiming of the laser shoot-back transmitter to the tracked targets to enhance preciseness.
Use of computer vision resolves the issues of unknown or imprecise target location and of target occlusion in uncontrolled scenes. Without computer vision, detection and tracking of the target may not be successful.
In an example, the computer vision algorithm is assisted by an algorithm using information from geo-location and a geo-database. Also, the computer vision may use single or multiple cameras, multiple views, or a 360-degree view.
The system includes target engagement laser(s)/transmitter(s), and detector(s). The system further includes range and depth sensing such as LIDAR, RADAR, ultrasound, etc.
The target engagement lasers have self-correction for misalignment through computer vision methods. For example, the self-correction function provides fine adjustment on top of the coarse physical mounting. Further, adaptive cone-of-fire laser shooting could also be used for alignment and zeroing.
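As an illustration only (the averaging scheme and pixel units are assumptions, not the claimed method), such a self-correction could estimate a fixed boresight offset from observed hit points and apply it to subsequent aim points:

```python
# Illustrative zeroing sketch: estimate the camera-to-laser boresight offset
# from observed hits and correct subsequent aim points accordingly.
def estimate_boresight_offset(observed_hits, aim_points):
    """Mean pixel offset between where the laser was aimed and where it hit."""
    n = len(observed_hits)
    dx = sum(h[0] - a[0] for h, a in zip(observed_hits, aim_points)) / n
    dy = sum(h[1] - a[1] for h, a in zip(observed_hits, aim_points)) / n
    return dx, dy


def corrected_aim(target_px, offset):
    """Shift the aim point to compensate for coarse physical mounting error."""
    return (target_px[0] - offset[0], target_px[1] - offset[1])


offset = estimate_boresight_offset([(324, 244), (326, 242)], [(320, 240), (320, 240)])
print(corrected_aim((400, 300), offset))
```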
As a mode of operation, live image data is collected and appended to the system's own image database for future training of the detection and tracking algorithm.
In an example, robots share information such as imaging and target data which may contribute to collective intelligence for the robots.
In an example, a combat voice procedure may be automatically generated during target engagement. The target engagement is translated into audio for local communication and for modulated transmission.
Furthermore, the audio and voice system receives and interprets demodulated radio signals from human teammates, thereby facilitating interaction with them. In addition, the system may react to collaborating humans and/or robots with audible voice output through a speaker system or through the radio communication system. The system also outputs the corresponding audible weapon effects.
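A trivial sketch of generating such a voice procedure from an engagement event follows; the phrase template, call sign and fields are purely illustrative assumptions.

```python
# Illustrative sketch: turn a target engagement event into a combat voice
# procedure phrase for speech synthesis or modulated radio transmission.
def engagement_voice_procedure(callsign, bearing_deg, range_m, action):
    return (f"{callsign}, contact bearing {bearing_deg:03d}, "
            f"range {range_m} metres, {action}, over.")


print(engagement_voice_procedure("Bravo Two", 45, 150, "engaging"))
```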
In addition to the above discussed features, the system may have adversary mission-based mapping, localization and navigation, with real-time sharing and updating of mapping data among collaborative robots. Furthermore, distributed planning functionalities may be provided in the system.
Power systems may also be provided in the system. The system may be powered by battery systems or other state-of-the-art power systems, e.g. hybrid or solar systems. The system has a return-home mode that is triggered when the power level becomes low relative to the distance to the home charging location.
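A minimal sketch of such a return-home trigger, assuming a straight-line distance estimate, an energy-per-metre figure and a safety margin that are not specified here, is:

```python
# Illustrative sketch: trigger return-home when the remaining battery energy
# is only marginally above what the trip back to the charger would need.
import math


def should_return_home(battery_wh, position, home, wh_per_metre=0.5, margin=1.3):
    """Return True when remaining energy barely covers the distance home."""
    distance_home = math.hypot(position[0] - home[0], position[1] - home[1])
    return battery_wh <= margin * wh_per_metre * distance_home


print(should_return_home(battery_wh=120.0, position=(150.0, 80.0), home=(0.0, 0.0)))
```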
An exemplary shoot-back payload is shown as 704. The shoot-back payload includes a camera, a pan-tilt actuator and a laser emitter. Data detected by the camera actuates the pan-tilt actuator to align the laser emitter so that the laser beam emitted by the laser emitter precisely hits the target.
Exemplary propulsion bases are shown as 706. The exemplary propulsion bases include 2-wheeler bases and 4-wheeler bases. Both types of base have LIDAR and other sensors, and on-board processors are embedded.
Information on the training field received in step 802 includes location information of one or more robots in the training field. The information on the training field also includes terrain information of the training field so that one or more robots can move around without any trouble. The information further includes location information of trainees so that the behaviour of each of the one or more robots is determined in view of the location information of the trainees.
In step 804, the received information is processed so that the behaviour for each of the one or more robots can be selected based on the results of the processing.
In step 806, behaviour for each of the one or more robots in the training field is selected from a library of CGF behaviours stored in a database. The selection of behaviour may include selection of collaborative behaviour with other robots and/or with one or more trainees so that the one or more robots can conduct organizational behaviours. The selection of behaviour may also include communicating in audible voice output through a speaker system or through a radio communication system.
The selection of behaviour may further include not only outputting voice through the speaker but also inputting voice through a microphone for the communication.
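The following is a minimal, assumption-laden sketch of this flow of steps 802-806 and the subsequent sending of commands; the behaviour names, the distance rule and the data layout are illustrative only and not the claimed method.

```python
# Illustrative sketch of the method flow: receive field information, process it,
# select a behaviour per robot from the CGF library, and send commands.
import math

BEHAVIOUR_LIBRARY = {
    "engage": "move into line of sight and shoot back",
    "take_cover": "move behind the nearest cover",
    "patrol": "follow a pre-recorded path",
}


def receive_information():
    """Step 802: field information including robot and trainee locations."""
    return {"robots": {"r1": (0.0, 0.0)}, "trainees": {"t1": (10.0, 5.0)}}


def select_behaviours(info, library=BEHAVIOUR_LIBRARY):
    """Steps 804-806: process the information and select a behaviour per robot.
    Toy rule: engage when a trainee is within 50 m, otherwise patrol."""
    selected = {}
    for rid, (rx, ry) in info["robots"].items():
        near = any(math.hypot(tx - rx, ty - ry) < 50.0
                   for tx, ty in info["trainees"].values())
        selected[rid] = "engage" if near else "patrol"
    return selected


def send_commands(selected):
    """Commands based on the selected behaviours (e.g. shoot at trainees)."""
    return [(rid, "shoot_at_nearest_trainee" if b == "engage" else b)
            for rid, b in selected.items()]


print(send_commands(select_behaviours(receive_information())))
```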
In accordance with an embodiment, the method 900 further includes receiving a feedback with regard to accuracy of the laser beam emission from the laser beam transmitter.
In step 902, the detecting includes range and depth sensing including any one of LIDAR and RADAR for precisely locating the target.
In step 906, the computing includes computing a positional difference of geo-location information in a geo-database.
In step 908, the adjusting the alignment includes rotating a platform of the laser beam transmitter.
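As a sketch of steps 906 and 908 only (a flat-earth approximation with assumed local x-y-z coordinates, not the claimed computation), the positional difference between the geo-locations of the laser beam transmitter and the target can be converted into pan and tilt angles for rotating the platform:

```python
# Illustrative sketch: positional difference between shooter and target
# geo-locations -> pan (bearing) and tilt (elevation) angles for the platform.
import math


def pan_tilt_to_target(shooter_xyz, target_xyz):
    """Return (pan_deg, tilt_deg) pointing from the shooter to the target."""
    dx = target_xyz[0] - shooter_xyz[0]
    dy = target_xyz[1] - shooter_xyz[1]
    dz = target_xyz[2] - shooter_xyz[2]
    pan = math.degrees(math.atan2(dy, dx))
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return pan, tilt


print(pan_tilt_to_target((0.0, 0.0, 1.0), (30.0, 40.0, 2.5)))
```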
In summary the present invention provides a robot solution that would act as a trainer/OPFOR with which players can practice tactical manoeuvres and target engagement.
In contrast to conventional systems, which lack mobility, intelligence and human-like behaviours, the present invention provides simulation-based computer generated force (CGF) behaviours and actions as a controller for the robotic shoot-back platform.
In particular, the present invention provides a computer vision based intelligent laser engagement shoot-back system which brings about a more robust representative target engagement experience to the trainees at different skill levels.
Many modifications and other embodiments of the invention set forth herein will come to the mind of one skilled in the art to which the invention pertains having the benefit of the teachings presented in the foregoing description and the associated drawings. Therefore, it is to be understood that the invention is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
Number | Date | Country | Kind
---|---|---|---
10201605705P | Jul 2016 | SG | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/SG2017/050006 | 1/5/2017 | WO | 00