The present invention claims priority from the Indian patent application titled “Self-Learning Command and Control Module (GENISYS)”, application number 202221024830, post-dated to 27 Jul. 2022.
The present invention relates to controlling and managing the maneuvering of a vehicle. Particularly, the present invention pertains to maneuver control and management of a ground, marine or aviation vehicle. More particularly, the present invention relates to maneuver control and management in situations of loss of communication with the base station of the vehicle.
Ground, marine as well as aviation vehicles are known to be manned as well as unmanned. The routes and en-route unknowns of such vehicles may be predictable as well as unpredictable.
Unmanned vehicles are controlled by sensors and communication. CN3067575 discloses self-learning autonomous navigation, particularly for an unmanned underwater vehicle, based on an improved recurrent neural network. CN109521454 discloses a navigation method based on a cubature Kalman filter. CN102778237 provides a map self-learning system in vehicle navigation, wherein the self-learning function of an electronic navigation map in a vehicle navigation system is achieved by curve fitting and a prediction algorithm, so that the vehicle positioning precision is enhanced and the electronic navigation map is perfected.
There are challenges for vehicles which are manned at times but unmanned in other situations. Such challenges become complex when communication breaks down.
The present invention attempts to address this significant industrial requirement.
It is an object of the present invention to provide a device, and a method thereof, that captures discrete instructions during operation with an operator on-board and integrates such instructions as self-learning.
It is another object of the present invention to provide a device, and a method thereof, that captures instructions during operation by a remote operator and integrates such instructions as self-learning.
It is yet another object of the present invention to provide a device, and a method thereof, that captures macro as well as micro instructions by operators, on-board as well as remote, and integrates such instructions as self-learning.
It is yet another object of the present invention to provide a device that is retro-fittable in any vehicle to make such vehicle suitable for autonomous operation, particularly in the absence of a communication network.
The present invention is a navigation system having a navigation module, named GENISYS, that is retro-fittable in any manned or unmanned road/aviation/marine/submarine vehicle. GENISYS has the capability to capture all operator instructions and commands, including audio, manual, electrical and radio commands, and convert them into executable data/algorithms with time and position co-ordinates. GENISYS is connected, by hard wiring or wirelessly, with a plurality of sensors and the intelligence of the vehicle. GENISYS integrates the intrinsic capabilities of the vehicle with the executable data and algorithms developed from operator instructions and commands.
A drone, hereinafter termed a platform or a vehicle, disposed with the navigation module is capable of self-navigating to an obstructed unknown destination or an obstructed unknown target even in the absence of any communication network, while an obstruction hinders a direct view of the target. In the description below, the inventive hardware and software of the navigation module and the navigation system are described step by step, resulting in such capability.
A navigation system around the vehicle comprises a Remote Control Workstation, a Command Control Unit, a Vehicle Direct Control and a plurality of perception sensors.
Three types of controls deployed for the navigation module are Manual Control, Mission Planning and Tactical Control.
Manual Control implies direct human control, including throttle and steering control, either by being physically present in the tactical platform of the vehicle, particularly when the vehicle is a road-worthy vehicle or an airplane or a human-carrying drone or a ship or a marine vessel or a submarine, or remotely.

In the Mission Planning type of control, complete planning is executed before operation initialization. This planning includes how the vehicle should perform and how it must behave in each scenario; all if-else commands and controlling factors are preplanned and fed to the RCW system. Data from multiple sensors is taken and graphed or shown or processed or used to generate a user interface to perform multiple jobs like control/monitor/eject/disturb/explode with the help of the system/vehicle/platform.

Tactical Control includes a wider guidance, navigation and controlling system, including handshaking with mission planning for decision-making in critical scenarios such as obstacle avoidance and reactive guidance, with multiple inputs and outputs. It maps the system's actual trajectory against the desired trajectory. The difference is calculated and the offset is fed to the system for learning, which helps the system to progressively better match the actual trajectory with the desired trajectory.
The Command Control Unit (CCU) comprises a Self-Learning Command Control Unit, an Unmanned Control and an Assessment & Correction Platform. Manual Control controls the rudder and throttle of the engine directly, without autonomous operation. The vehicle control gives the signal to the VDC (Vehicle Direct Control), which manages all the controlling of the vehicle. Here, mixed-signal type controlling operates, where the Command Control Unit processes the data, which is carried forward to the Vehicle Direct Control. The Self-Learning Command Control Unit consists of sub-systems including Vehicle Guidance and Course Control.
Unmanned Control primarily relies on a plurality of perception sensors whose data is processed, assessed for precision and used to provide reactive guidance to mission planning.
With respect to autonomous maritime platforms, tactical platforms enable various mission sets to be exercised, such as sea denial, escort, survey, logistics, targeted counter abilities, etc. Such a platform conducts assets planning through Mission Objectives, Data Collection and Analysis, Asset Availability and Capabilities, Optimization Algorithms, Resource Allocation, Dynamic Adaptation, and Communication and Coordination. By combining data analysis, optimization algorithms and adaptive planning strategies, an autonomous maritime tactical platform can efficiently conduct assets planning for autonomous sea-going vessels. It optimizes the utilization of assets, enhances mission performance and enables effective decision-making in dynamic maritime environments. The Assessment & Correction Platform implements corrective measures and actions taken to ensure that the tactical platform fulfills its objectives.
A plurality of perception sensors connected to the reactive guidance system include an accelerometer, gyroscope, compass, magnetic heading sensor, barometric sensor, multiple GNSS, vision sensors including camera, stereo camera, omni camera and IR camera, ultrasonic sensor, laser range finder, LiDAR, sonar, radar, optical sensor and depth sensor. Sensor fusion algorithms applied to such sensor data give appropriate data. Sensor fusion is referred to as a computational methodology that aims at combining measurements from multiple sensors such that they jointly give more precise information on the measured system than any of the sensors alone.
The data from such sensors, along with data from a human operator or a third-party user or device, helps to perform the critical tasks at the Remote Control Workstation (RCW) with the human operator in-loop. This data is used for course control of the platform or vehicle. Course control includes multiple movements of the platform or vehicle, such as roll, pitch, yaw, right, left, port, starboard, altitude gain, altitude decrease, thrust, throttle, gear shift, etc.
The data from the sensors is also used to obtain feedback from the system in a closed control loop, instead of an open loop, to monitor, control and accept the data from the system. Further, the present system is platform agnostic, which makes it able to work with a closed-loop as well as an open-loop system. In an open-loop system, the data is managed by the perception/fusion sensor data management, and the decision/control/monitoring/management is completed at the Remote Control Workstation (RCW).
The vehicle or the tactical platform has a vehicle direct control including vehicle actuators maneuverable under manual control. Vehicle dynamics, including speed, inclination, etc., are impacted by the type of vehicle, weight, acceleration capabilities, night vision, turning radii, etc., and by such navigational payloads, and are at the same time impacted by the environmental scenario, including terrain parameters, which are applied to the outputs of the sensors to give feedback to Mission Planning.
In the Manual Control mode, any command given by the human operator causing a detected change in throttle or steering is saved in the self-learning module for subsequent autonomous operations. Importantly, although the perception sensors are not actively involved in navigational support in Manual Control, they continue to generate a dataset for self-learning in real time. Even when the vehicle is manually controlled, the perception sensors non-exhaustively collect information about the path, surroundings including buildings and the distances between them, terrain, drift and atmospheric pressure, and build a training dataset for use in Mission Planning and Tactical Control situations.
In the Mission Planning mode, the vehicle guidance module generates a path for the vehicle towards completing an assigned task of reaching the prescribed destination with respect to a home position, by taking feedback from navigational payloads and perception sensors. The course control module generates commands for the vehicle actuators for the assigned task. The perception sensors continue to generate a dataset for self-learning in real time with respect to the assigned task, while the vehicle guidance module updates the dataset in real time.
In the Mission Planning mode with auto maneuvering with situational awareness and tactical control with deliberative collision avoidance, the vehicle guidance module generates a path for the vehicle towards completing an assigned task of reaching the prescribed destination with respect to a home position, by taking feedback from navigational payloads and perception sensors. The course control module generates commands for the vehicle actuators for the assigned task. The navigation system deploys deliberative payloads like AIS, the environmental scenario and the Assessment & Correction module for precision navigation. The navigation system gets the collision-related data from prior information like map data, including tree, mountain and building information, and terrain information. The perception sensors continue to generate a dataset for self-learning in real time with respect to the assigned task, while the vehicle guidance module updates the dataset in real time.
In the Mission Planning mode with auto maneuvering with situational awareness and tactical control with responsive collision avoidance, the navigation system uses data from cameras, LIDAR, SONAR and ADSB, besides AIS, which provide real-time data on which the reactive guidance performs course control (through the assessment and correction module) for collision avoidance. The perception sensors continue to generate a dataset for self-learning in real time with respect to the assigned task, while the vehicle guidance module updates the dataset in real time.
The vehicle, having been assigned a known destination with respect to a home position, calculates its path, of which there could be more than one. The path prediction is made with prescribed preferences and limits, based on a cognitive algorithm over a model previously trained on earlier real-time datasets. The flight may use a responsive collision avoidance technique or a deliberative collision avoidance technique.
Importantly, all the above embodiments are with the availability of a communication network, including internet communication or satellite communication. The present invention is equally capable in the absence of any or all kinds of communication network. Illustratively, the vehicle is assigned the task of reaching an unknown destination, which is a drowning person needing help. The unknown destination is hidden behind a big obstruction, which is a ship. In this scenario, navigation is done on the basis of a localized frame of reference, in the form of an imaginary cuboid of situationally prescribed X, Y and Z dimensions created by a deep learning grid algorithm. The X, Y and Z dimensions are situationally prescribed based on a judgement that the home position, the unknown destination and the obstruction are well within the cuboid. The vehicle moves within the cuboid and knows its own relative position with respect to the home position with the help of a compass, gyroscope, accelerometer, sonic sensors, cameras and other non-network devices and perception sensors. The vehicle, here a drone, predicts a safe height based on the task type, weather conditions and the location information inputted.
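By way of a non-limiting illustration only, a minimal sketch of such a localized frame of reference follows; the class name, dimensions and dead-reckoning update rule are hypothetical assumptions introduced for illustration and are not asserted to be the specific implementation of the invention.

```python
import numpy as np

class CuboidFrame:
    """Localized frame of reference: an imaginary cuboid of situationally
    prescribed X, Y, Z dimensions, with the home position at its center."""

    def __init__(self, dims_m=(500.0, 500.0, 120.0)):
        self.dims = np.array(dims_m)   # situationally prescribed X, Y, Z
        self.position = np.zeros(3)    # relative position w.r.t. home
        self.velocity = np.zeros(3)

    def dead_reckon(self, accel_body, heading_rad, dt):
        """Update the relative position from accelerometer and compass only
        (no GNSS, no network). accel_body: (forward, lateral, vertical)."""
        c, s = np.cos(heading_rad), np.sin(heading_rad)
        # Rotate body-frame acceleration into the cuboid (home) frame.
        accel = np.array([c * accel_body[0] - s * accel_body[1],
                          s * accel_body[0] + c * accel_body[1],
                          accel_body[2]])
        self.velocity += accel * dt
        self.position += self.velocity * dt
        return self.inside()

    def inside(self):
        """True while the vehicle remains within the prescribed cuboid."""
        return bool(np.all(np.abs(self.position) <= self.dims / 2.0))
```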
The present invention shall now be described with the help of the drawings. The detailed description is of a preferred embodiment, around which several variations can be developed, and therefore it should not be construed to limit the invention in any manner whatsoever.
The present invention is a navigation module, named GENISYS, that is retro-fittable in any manned or unmanned road/aviation/marine/submarine vehicle, and a system thereof. The preferred embodiment is described with the figures of an unmanned aviation vehicle, commonly known as a drone.
GENISYS and the system around it have the capability to capture all operator instructions and commands, including audio, manual, electrical and radio commands, and convert them into executable data/algorithms with time and position co-ordinates.
GENISYS is connected, by hard wiring or wirelessly, with a plurality of sensors and the intelligence of the vehicle. GENISYS integrates the intrinsic capabilities of the vehicle with the executable data and algorithms developed from operator instructions and commands.
The vehicle (302) refers to a platform or a transport system that provides a specific capability and effect on an action field. It is essentially a platform or a means of delivering various assets, weapons or equipment to the action space. Such vehicles/tactical platforms (302/302A) vary in size, mobility and function, depending on the operational requirements.
Remote Control Workstation or RCW (301): The RCW (301), or Ground Control Station (GCS), is the section whereby all command-and-control options are initiated, including initial commands and controls by a human Operator (305) responsible for all the operations to be executed on a manual or a semi-autonomous or an autonomous system platform. The command-and-control includes the complete control and monitoring of the system data or system video or payload data or payload control or both.
The Human Operator (305) is a person who is responsible for controlling or monitoring, or controlling and monitoring, of the system data or system video or payload data or payload control or both. The Human Operator (305) performs multiple tasks like Manual Operation (310), Mission Planning (330) and Tactical Control (360) through the Remote Control Workstation (RCW) or the Ground Control Station (GCS) (301).
Three types of controls deployed for the navigation module (100) are Manual Control (310), Mission Planning (330) and Tactical Control (360).
Manual Control (310) implies direct human control, including throttle and steering control, either by being physically present in the tactical platform of the vehicle (302), particularly when the vehicle is a road-worthy vehicle or an airplane or a human-carrying drone or a ship or a marine vessel or a submarine, or remotely. Here, the complete operation responsibility lies with the human operator (305). The human operator (305) needs to take care of each and every parameter or setting or state or decision or tactical decision of the system.
In the Mission Planning (330) type of control, complete planning is executed before operation initialization. This planning includes how the vehicle should perform and how it must behave in each scenario; all if-else commands and controlling factors are preplanned and fed to the RCW system. Once fed, after initialization the vehicle/command control performs functions as configured, and the RCW would not bypass the planned operations. Mission Planning (330) is the formal definition, for any type of unmanned platform or system or vehicle, whereby the complete mission or planning or path planning and the complete mission preparation and execution are done. Mission Planning (330) and real-time data monitoring/controlling/management are completely integrated into the system or platform. This helps to manage the complete path/plan/mission of the system/platform/vehicle. Data from multiple sensors is taken and graphed or shown or processed or used to generate a user interface to perform multiple jobs like control/monitor/eject/disturb/explode with the help of the system/vehicle/platform.
Tactical Control (360): Tactical control includes a wider guidance, navigation and controlling system, including handshaking with mission planning (330) for decision-making in critical scenarios such as obstacle avoidance and reactive guidance, with multiple inputs and outputs. The wider guidance navigation contains a hardware-software co-related system which determines the desired path of travel, called the trajectory, from the platform or vehicle's current location to a designated target location. It maps the system's actual trajectory against the desired trajectory. The difference is calculated and the offset is fed to the system for learning, which helps the system to progressively better match the actual trajectory with the desired trajectory.
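A minimal sketch of this offset computation, under the assumption that both trajectories are sampled at matching instants, is given below; the function names and the learning store are hypothetical placeholders for the self-learning module's dataset.

```python
def trajectory_offsets(desired, actual):
    """desired, actual: lists of (x, y) waypoints sampled at the same times.
    Returns the per-sample (dx, dy) differences between the trajectories."""
    return [(dx - ax, dy - ay) for (dx, dy), (ax, ay) in zip(desired, actual)]

learning_store = []  # stands in for the self-learning module's dataset

def feed_offsets(desired, actual):
    """Calculate the offsets and feed them to the system for learning."""
    offsets = trajectory_offsets(desired, actual)
    learning_store.extend(offsets)  # the system learns from these offsets
    return offsets
```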
It is generally divided into two lateral dimensions:
Command Control Unit or CCU (311): Manual Control (310) works with the Command Control Unit (311). The Command Control Unit (CCU) comprises a Self-Learning Command Control Unit (314), an Unmanned Control (320) and an Assessment & Correction Platform (320). Manual Control controls the rudder and throttle of the engine directly, without autonomous operation. The vehicle control gives the signal to the VDC (Vehicle Direct Control), which manages all the controlling of the vehicle. Here, mixed-signal type controlling operates, where the Command Control Unit processes the data, which is carried forward to the Vehicle Direct Control.
Self-Learning Command Control Unit (314) consists of the following sub-systems:
The Vehicle Guidance system (316) is part of the control structure of the vehicle and consists of a path generator, a motion planning algorithm and a sensor fusion module. A stereo vision sensor and a GPS sensor are used as position sensors. The trajectory for the vehicle motion is generated in the first step by using only information from a digital map. Object-detecting sensors such as the stereo vision sensor, three laser scanners and a radar sensor observe the vehicle environment and report detected objects to the sensor fusion module. This information is used to dynamically update the planned vehicle trajectory into the final vehicle motion. This helps the vehicle guidance system (316) to track corrections and complete the navigation guidance.
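The following is a simplified, hypothetical sketch of dynamically updating a map-only trajectory with fused object detections; the clearance-based nudging rule is an assumption for illustration and is not asserted to be the specific motion planning algorithm of the invention.

```python
def plan_and_update(map_path, detected_objects, clearance_m=5.0):
    """map_path: (x, y) waypoints generated from the digital map only.
    detected_objects: (x, y) positions reported by the sensor fusion module.
    Returns the dynamically updated trajectory for the final vehicle motion."""
    final = []
    for (x, y) in map_path:
        for (ox, oy) in detected_objects:
            dx, dy = x - ox, y - oy
            dist = (dx * dx + dy * dy) ** 0.5
            if 0.0 < dist < clearance_m:
                # Push the waypoint out to the clearance radius.
                scale = clearance_m / dist
                x, y = ox + dx * scale, oy + dy * scale
        final.append((x, y))
    return final
```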
The Course Control system (317) derives a model for vehicle path-following and course control with the management of throttle and heading, wherein the pitch and yaw angles, together with the surge velocity, can be treated as control inputs.
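A minimal sketch of such a course control model is given below, assuming simple proportional control; the gains and the exact treatment of pitch, yaw and surge velocity as control inputs are illustrative assumptions only.

```python
class CourseController:
    """Hypothetical path-following sketch: heading and throttle are managed
    with proportional control; pitch and yaw angles together with the surge
    velocity are treated as the control inputs."""

    def __init__(self, k_yaw=0.8, k_pitch=0.5, k_surge=0.3):
        self.k_yaw, self.k_pitch, self.k_surge = k_yaw, k_pitch, k_surge

    def step(self, heading_err, depth_err, speed_err):
        yaw_cmd = self.k_yaw * heading_err        # steer toward the course
        pitch_cmd = self.k_pitch * depth_err      # altitude/depth correction
        throttle_cmd = self.k_surge * speed_err   # surge velocity correction
        return yaw_cmd, pitch_cmd, throttle_cmd
```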
Unmanned Control (320) primarily relies on a plurality of perception sensors (318) whose data is processed, assessed for precision and used to provide reactive guidance (319) to mission planning (330).
With respect to autonomous maritime platforms, tactical platforms enable various mission sets to be exercised, such as sea denial, escort, survey, logistics, targeted counter abilities, etc.
Such a platform will conduct assets planning by Mission Objectives, Data Collection and Analysis, Asset Availability and Capabilities, Optimization Algorithms, Resource Allocation, Dynamic Adaptation, and Communication and Coordination.
By combining data analysis, optimization algorithms, and adaptive planning strategies, an autonomous maritime tactical platform can efficiently conduct assets planning for autonomous sea-going vessels. It optimizes the utilization of assets, enhances mission performance, and enables effective decision-making in dynamic maritime environments.
Based on its perception and awareness of the environment, the given tasks/objectives and the onboard payloads, the platform continuously assesses the objectives, the effects achieved, and whether or not corrections are needed in its execution. Thereafter, corrective measures are implemented and action is taken to ensure that the tactical platform fulfills its objectives.
The plurality of perception sensors (318) connected to the reactive guidance system (319) include an accelerometer, gyroscope, compass, magnetic heading sensor, barometric sensor, multiple GNSS, vision sensors including camera, stereo camera, omni camera and IR camera, ultrasonic sensor, laser range finder, LiDAR, sonar, radar, optical sensor and depth sensor.
Sensor fusion algorithms applied to such sensor data give appropriate data. Sensor fusion is referred to as a computational methodology that aims at combining measurements from multiple sensors such that they jointly give more precise information on the measured system than any of the sensors alone.
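One common fusion rule, inverse-variance weighting, is sketched below purely as an illustration of how measurements from multiple sensors may jointly give more precise information than any sensor alone; it is not asserted to be the specific fusion algorithm of the invention.

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of scalar measurements.
    measurements: list of (value, variance) pairs from different sensors.
    The fused variance is never larger than that of the best single sensor."""
    weights = [1.0 / variance for _, variance in measurements]
    fused_value = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

# Example: barometric altitude (variance 4.0) fused with GNSS altitude (1.0)
# yields a fused variance of 0.8, i.e. more precise than either sensor alone.
altitude, variance = fuse([(102.0, 4.0), (100.5, 1.0)])
```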
The data from such sensors, along with data from a human operator or a third-party user or device, helps to perform the critical tasks at the Remote Control Workstation (RCW) with the human operator in-loop. This data is used for course control of the platform or vehicle (302). Course control includes multiple movements of the platform or vehicle (302), such as roll, pitch, yaw, right, left, port, starboard, altitude gain, altitude decrease, thrust, throttle, gear shift, etc.
The data from the sensors is also used to obtain feedback from the system in a closed control loop, instead of an open loop, to monitor, control and accept the data from the system. Further, the present system is platform agnostic, which makes it able to work with a closed-loop as well as an open-loop system. In an open-loop system, the data is managed by the perception/fusion sensor data management, and the decision/control/monitoring/management is completed at the Remote Control Workstation (RCW).
The self-learning command control unit (314) comprises multiple FRAM and NVRAM memory storages which store the main control code, which resides in the directory of the boot controller.
Importantly, all the above embodiments are with the availability of a communication network (390), including internet communication or satellite communication. The present invention is equally capable in the absence of any or all kinds of communication network (390).
The grid is based on a deep learning algorithm which configures a grid of coarser or finer pitch based on the prescribed task. Illustratively, to locate a human, the grid would be of a pitch of a foot, while the grid would be of a far longer pitch when the destination is a ship or a building.
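A minimal sketch of such task-based pitch selection follows; the task names and pitch values are hypothetical assumptions used only to illustrate coarser versus finer grids over the imaginary cuboid.

```python
# Hypothetical mapping from the prescribed task to grid pitch: finer for a
# small target like a human, coarser for a large one like a ship or building.
TASK_PITCH_M = {
    "human": 0.3,      # roughly a foot, as in the illustration above
    "vehicle": 3.0,
    "ship": 50.0,
    "building": 50.0,
}

def grid_pitch(task, default_m=5.0):
    return TASK_PITCH_M.get(task, default_m)

def grid_cells(cuboid_dims_m, task):
    """Number of grid cells along each axis of the imaginary cuboid."""
    pitch = grid_pitch(task)
    return tuple(int(d / pitch) for d in cuboid_dims_m)
```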
Search direction (380): The home position (81) is in the center (375) of the cuboid (90) when the search direction is unascertained, or the home position is at a corner (374) of the cuboid (90) if the search direction is ascertained.
As a variation, the imaginary cuboid (90) is an electromagnetic field, or any other energy field, locally generated by the vehicle (302) or the tactical platform (302A) around itself. The imaginary cuboid (90) thus effectively navigates the vehicle (302) or the tactical platform (302A), particularly with the aid of the sensor fusion algorithm based self-learning module (314), prudently ensuring precision and redundancy. As mentioned earlier, the sensor fusion algorithm is a computational methodology deployed to analyze and compare relatable sensor data for mismatch and to use the “fused” data for training the AI model (377), instead of merely using the captured data.
Relatable data illustratively implies comparing a linear measure from more than one sensor, directly or indirectly. Thus, a plurality of orthogonal linear measures would be applied with geometrical rules to compute a non-orthogonal measure.
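Illustratively, orthogonal linear measures may be combined by the Pythagorean rule to compute a non-orthogonal measure, as in the following sketch; the function name and values are hypothetical.

```python
import math

def non_orthogonal_measure(x_m, y_m, z_m=0.0):
    """Hypothetical illustration: orthogonal linear measures combined by a
    geometrical rule (Pythagoras) to compute a non-orthogonal measure, e.g.
    the straight-line range to a target from axis-wise offsets."""
    return math.sqrt(x_m ** 2 + y_m ** 2 + z_m ** 2)

# Example: offsets of 30 m (X) and 40 m (Y) give a 50 m slant range.
assert non_orthogonal_measure(30.0, 40.0) == 50.0
```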
It is known that stand-alone electromechanical non-network-based sensors are generally less accurate, and the vehicle (302) therefore takes regular feedback from the previously trained cognitive navigation algorithm-based AI model (377).
Algorithms and computer programs in multiple layers are present in the working of the self-learning command control unit (314), notably:
The User Interface is the interaction medium between the human operator and the system. All the control and monitoring parameters and the control analogy are available on the User Interface of the system.
The intra-communication layer is the layer where communication between the computing system and multiple payloads, like the sensory system, input channels and output channels, is managed. The most important part of this layer is to ensure that, if one channel of the communication medium gets affected, communication is managed and initiated using another communication channel, including switching navigation from one communication network (390) to another communication network (390). Thus, all monitoring and controlling is done through this layer.
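A minimal sketch of such channel switching is given below; the channel names and the health check are hypothetical placeholders for the actual link-quality logic of this layer.

```python
class IntraComm:
    """Hypothetical sketch of the intra-communication layer's failover rule:
    if one channel is affected, initiate communication over another."""

    def __init__(self, channels):
        self.channels = channels  # e.g. ["radio", "satellite", "internet"]
        self.active = 0

    def send(self, message):
        for _ in range(len(self.channels)):
            channel = self.channels[self.active]
            if self._healthy(channel):
                return self._transmit(channel, message)
            # Channel affected: switch to the next communication channel.
            self.active = (self.active + 1) % len(self.channels)
        raise RuntimeError("all communication channels affected")

    def _healthy(self, channel):
        return True  # placeholder for a real link-quality check

    def _transmit(self, channel, message):
        return f"sent via {channel}: {message}"
```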
All the threading-related multiple vehicle operation control is managed by these libraries. These libraries enable the hardware to understand which type of vehicle a specific hardware and its corresponding software is going to control. For example, if a car is to be controlled from a specific hardware, the Control and Thread Management layer will use the car-type vehicle libraries from the set of data, so that the software and hardware can understand which types of commands will be required for that specific vehicle.
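A minimal sketch of such vehicle-type library selection follows; the vehicle types and command sets are hypothetical examples, not the actual library contents.

```python
# Hypothetical sketch: the Control and Thread Management layer loads the
# command library matching the type of vehicle the hardware will control.
VEHICLE_LIBRARIES = {
    "car":   {"commands": ["throttle", "steer", "gear_shift"]},
    "drone": {"commands": ["throttle", "roll", "pitch", "yaw"]},
    "ship":  {"commands": ["throttle", "rudder"]},
}

def load_vehicle_library(vehicle_type):
    try:
        return VEHICLE_LIBRARIES[vehicle_type]
    except KeyError:
        raise ValueError(f"no command library for vehicle type {vehicle_type!r}")

# Example: controlling a car selects the car-type command set.
lib = load_vehicle_library("car")
```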
This layer enables the processor to control the clock and timing of the complete system so that the multiple threads/multiple processes can work synchronously.
In this layer, hardware is assigned: for example, the initialization of software drivers and DLLs required for specific hardware peripherals, so that compatibility issues can be resolved.
The payload and navigation sensors, like accelerometers, vision system, compass/heading sensor, gyroscope, camera system, etc., are assigned in this layer.
It is an isolated testing environment that enables users to run and evaluate the program through which the controlling can be done, without affecting the application and the system or platform on which it runs.
The navigation system (300), deploying the navigation module (100) with the aid of a plurality of perception sensors (318), controls the tactical platform or vehicle (302) with micro-management, wherein all operator instructions and commands, including audio, manual, electrical and radio commands, are captured by a navigation algorithm with a deep learning based Command & Control Unit.
All previously recorded position and relative time information is retrieved and compared against the position and relative time information occurring during the automatic maneuver. The difference between the previously recorded information and the actual information is used to generate error signals, which are converted into command signals as well as used to learn/self-learn.
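A minimal sketch of this comparison is given below, assuming recorded and actual records share matching relative times; the gain and record format are hypothetical assumptions introduced for illustration.

```python
def error_signals(recorded, actual, gain=0.5):
    """Compare previously recorded position/time records against the records
    occurring during the automatic maneuver; the differences become error
    signals converted to command signals and kept as self-learning samples.
    recorded, actual: lists of (t, x, y) with matching relative times."""
    commands, samples = [], []
    for (t0, x0, y0), (t1, x1, y1) in zip(recorded, actual):
        ex, ey = x0 - x1, y0 - y1                 # position error
        commands.append((gain * ex, gain * ey))   # command signal
        samples.append((t1, ex, ey))              # self-learning sample
    return commands, samples
```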
Number | Date | Country | Kind
---|---|---|---
202221024830 | Jul 2022 | IN | national