CONTROL METHOD OF UNDERWATER ROBOT EQUIPPED WITH A MULTI-DEGREE-OF-FREEDOM ROBOT ARM

Information

  • Patent Application
  • 20250042520
  • Publication Number
    20250042520
  • Date Filed
    July 16, 2024
  • Date Published
    February 06, 2025
  • Inventors
    • LEE; Kooksun
  • Original Assignees
    • KALMAN INC
Abstract
Proposed is a control method of an underwater robot equipped with a multi-degree-of-freedom robot arm according to an exemplary embodiment, including: a) a (1-1)-th step of obtaining a propulsive force prediction value by predicting a propulsive force of the underwater robot based on an artificial neural network and configuring a sensorless propulsion controller equipped with a propulsion system to control a speed of the underwater robot; and b) a (1-2)-th step of configuring an underwater robot manipulator (URM) controller that obtains a torque prediction value by predicting an output torque of an actuator constituting the multi-degree-of-freedom robot arm provided in the underwater robot based on the artificial neural network.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the priority of Korean Patent Application No. 10-2023-0099849 filed on Jul. 31, 2023, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference.


BACKGROUND
Field

The present disclosure relates to a control method of an underwater robot equipped with a multi-degree-of-freedom robot arm, and more particularly, to a control method of an underwater robot equipped with a multi-degree-of-freedom robot arm capable of precisely controlling a speed of the underwater robot and the multi-degree-of-freedom robot arm based on an artificial neural network and controlling the underwater robot by estimating the position of the underwater robot in water, where a GPS signal cannot be received.


Description of the Related Art

In general, underwater robots are divided into autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs) according to the operation method, and are used for underwater work such as deep-sea resource surveying, underwater structure construction, underwater survey observation, ship bottom cleaning, port cleaning, and fisheries, as well as military applications such as mine search/clearance, unmanned port surveillance, and underwater reconnaissance. To perform such work, a position estimation technology for determining the position of the underwater robot is more important than anything else.


Conventionally, to estimate the position of the underwater robot, a position estimation system such as long base line (LBL), short base line (SBL), ultra short base line (USBL), or GPS intelligent buoys (GIB) was used.


However, the conventional position estimation systems have a problem in that they do not accurately reflect the height of the sea level, which changes due to various factors such as tidal ebb and flow, rainfall, and river water inflow, or the distance from the underwater robot to the sea floor, owing to factors such as the inability to receive a GPS signal in water.


In this way, there was a problem in that the position estimation performance of the underwater robot deteriorated because errors included in the height of the sea level, the distance from the underwater robot to the sea floor, etc., were not accurately reflected.


Meanwhile, the speed of the underwater robot is one of the most important performance factors in selecting an underwater robot. However, manufacturers of underwater robots measure the speed of the underwater robot in very limited ways. Since radio waves do not propagate in the underwater environment, speed measurement methods used on land, such as GPS, may not be used.


In the related art, the speed of the underwater robot was calculated either by a method in which the underwater robot is exposed to a current whose speed is similar to a target speed and, when the underwater robot holds its position against the current without being washed away, the flow speed is taken as the speed of the underwater robot, or by a method in which the speed is calculated from the time it takes the underwater robot to travel a certain distance in a restricted area.


However, when measuring the speed of the underwater robot using the calculation method of the related art as described above, the speed of the underwater robot may not be measured quantitatively, and as a result, there is a problem in that many errors in the speed of the underwater robot occur.


In addition, a sensorless algorithm has been disclosed that measures the propulsive force generated from the propeller of the underwater robot without a sensor that measures the speed and position.


This sensorless algorithm uses curve fitting to calculate a curve (CP curve) that describes the relationship between the rotational speed and propulsive force of the propeller, and may measure the propulsive force of the underwater robot by assuming that the relationship between the rotational speed and the propulsive force of the propeller is a polynomial.
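As an illustration only (and not part of the disclosure), the curve-fitting approach described above can be sketched in Python. The speed/force samples below are hypothetical, and a common quadratic CP model T = a·n² is fitted by least squares:

```python
# Hypothetical measurements: propeller rotational speed n (rev/s) and the
# propulsive force T (N) measured at that speed.
speeds = [10.0, 20.0, 30.0, 40.0]
forces = [2.1, 8.3, 18.9, 33.5]

# Closed-form least-squares fit of the single-coefficient model T = a * n**2:
# minimizing sum((T_i - a * n_i**2)**2) gives a = sum(T_i*n_i**2) / sum(n_i**4).
a = sum(t * n**2 for n, t in zip(speeds, forces)) / sum(n**4 for n in speeds)

def predicted_force(n: float) -> float:
    """CP-curve estimate of the propulsive force at rotational speed n."""
    return a * n**2
```

A higher-order polynomial could be fitted in the same way; the point is that the CP curve maps rotational speed to propulsive force without a dedicated force sensor.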


However, the sensorless algorithm has a problem in that the propulsive force generated from the propeller shows a large measurement error due to various causes such as the speed measurement error, the characteristics of the propeller, and the measurement error of the sensorless algorithm itself.


In addition, an underwater robot for underwater work is equipped with multi-joint robot arms composed of underwater actuators. Unlike actuators used on land, an underwater actuator uses a seal with strong friction for underwater waterproofing.


The strong friction of the seal used in the underwater actuator had the problem of making the precise control of the multi-joint robot arm difficult.


Related Art Document and Patent Document

(Patent Document 0001) Korean Patent Publication No. 10-0969878


(Patent Document 0002) Korean Patent Publication No. 10-1615210


(Patent Document 0003) Korean Patent Publication No. 10-2530048


SUMMARY

An object to be achieved by the present disclosure is to provide a control method of an underwater robot equipped with a multi-degree-of-freedom robot arm capable of precisely controlling a speed of an underwater robot by predicting a propulsive force of an underwater robot based on an artificial neural network to estimate the propulsive force.


Another object to be achieved by the present disclosure is to provide a control method of an underwater robot equipped with a multi-degree-of-freedom robot arm capable of precisely controlling the multi-degree-of-freedom robot arm by predicting an output torque of an actuator based on an artificial neural network.


Still another object to be achieved by the present disclosure is to provide a control method of an underwater robot equipped with a multi-degree-of-freedom robot arm capable of tracking a position of an underwater robot in water and on a water surface by estimating the position of the underwater robot through dead reckoning based on a Doppler velocity log (DVL) in water and calibrating the position of the underwater robot through real-time kinematic (RTK)-based position correction when a GPS signal of the underwater robot is detected on the water surface.


However, objects of the present disclosure are not limited to the above-described objects. That is, other objects that are not described may be obviously understood by those skilled in the art to which the present disclosure pertains from the following description.


According to an aspect of the present disclosure, a control method of an underwater robot equipped with a multi-degree-of-freedom robot arm includes: a) a (1-1)-th step of obtaining a propulsive force prediction value by predicting a propulsive force of the underwater robot based on an artificial neural network and configuring a sensorless propulsion controller equipped with a propulsion system to control a speed of the underwater robot; b) a (1-2)-th step of configuring a URM controller that obtains a torque prediction value by predicting an output torque of an actuator constituting the multi-degree-of-freedom robot arm provided in the underwater robot based on the artificial neural network; c) a (1-3)-th step of estimating a position of the underwater robot using dead reckoning based on a Doppler velocity log (DVL) in water, and configuring a navigation system that corrects the position of the underwater robot through wireless-based real-time kinematic (RTK) position correction when the underwater robot moves to a water surface and a GPS signal of the underwater robot is detected; and d) a second step of configuring a robust attitude controller that adapts the sensorless propulsion controller, the URM controller, and the navigation system to internal influences of the underwater robot, including the propulsive force of the underwater robot and the output torque of the multi-degree-of-freedom robot arm, and to influences of the underwater environment and external disturbances.
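The claimed steps can be summarized, purely as a hypothetical sketch (all function names are placeholders, not from the disclosure), as the following control-flow skeleton:

```python
def configure_sensorless_propulsion_controller() -> str:
    # (1-1)-th step: neural-network thrust prediction plus a propulsion
    # system to control the robot's speed.
    return "propulsion"

def configure_urm_controller() -> str:
    # (1-2)-th step: predict the output torque of the robot-arm actuators.
    return "urm"

def configure_navigation_system() -> str:
    # (1-3)-th step: DVL dead reckoning in water, RTK correction on the surface.
    return "navigation"

def configure_robust_attitude_controller(subsystems: list) -> dict:
    # Second step: adapt the three subsystems against internal influences
    # and external disturbances.
    return {"robust_attitude_controller": subsystems}

# The first-step controllers may be configured sequentially or simultaneously.
system = configure_robust_attitude_controller([
    configure_sensorless_propulsion_controller(),
    configure_urm_controller(),
    configure_navigation_system(),
])
```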


According to an exemplary embodiment of the present disclosure, by estimating the propulsive force of the underwater robot by predicting the propulsive force of the underwater robot based on the artificial neural network, it is possible to precisely control the speed of the underwater robot.


According to an exemplary embodiment of the present disclosure, by predicting the output torque of the actuator based on the artificial neural network, it is possible to precisely control the actuator-based multi-degree-of-freedom robot arm.


According to an exemplary embodiment of the present disclosure, by estimating the position of the underwater robot through dead reckoning based on a Doppler velocity log (DVL) in water and calibrating the position of the underwater robot through real-time kinematic (RTK)-based position correction when a GPS signal of the underwater robot is detected on the water surface, it is possible to track the position of the underwater robot both in water and on the water surface.


However, effects which can be achieved by the present disclosure are not limited to the above-described effects. That is, other effects that are not described may be obviously understood by those skilled in the art to which the present disclosure pertains from the following description.





BRIEF DESCRIPTION OF THE DRAWINGS

The above and other aspects, features and other advantages of the present disclosure will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:



FIG. 1 is a block diagram illustrating an overall system of an underwater robot equipped with a multi-degree-of-freedom robot arm according to an exemplary embodiment of the present disclosure.



FIG. 2 is a flowchart illustrating a control method of an underwater robot equipped with a multi-degree-of-freedom robot arm according to an exemplary embodiment of the present disclosure.



FIG. 3 is a flowchart illustrating a detailed process of a step of configuring a sensorless propulsion controller illustrated in FIG. 2.



FIG. 4 is a diagram for describing a step of training a propeller simulation neural network illustrated in FIG. 3.



FIG. 5 is a diagram illustrating an example of applying a propeller simulation neural network illustrated in FIG. 4.



FIG. 6 is a diagram illustrating another example of applying the propeller simulation neural network illustrated in FIG. 4.



FIG. 7 is a diagram illustrating an example of a step of applying the propeller simulation neural network illustrated in FIG. 3.



FIG. 8 is a flowchart illustrating a detailed process of a step of configuring an underwater robot manipulator illustrated in FIG. 2.



FIG. 9 is a diagram for describing a step of training an actuator simulation neural network illustrated in FIG. 8.



FIG. 10 is a diagram illustrating an example of applying the actuator simulation neural network illustrated in FIG. 9.



FIG. 11 is a diagram for describing a step of configuring a waterproof actuator illustrated in FIG. 8.



FIG. 12 is a diagram illustrating an example of applying an angle estimator according to an exemplary embodiment of the present disclosure.



FIG. 13 is a flowchart illustrating a detailed process of a step of configuring a navigation system illustrated in FIG. 2.



FIG. 14 is a diagram for describing a method of measuring a position of an underwater robot of the navigation system illustrated in FIG. 1.



FIG. 15 is a block diagram illustrating components for implementing dead reckoning based on a Doppler velocity log according to an exemplary embodiment of the present disclosure.



FIG. 16 is a block diagram illustrating models constituting an inertial navigation system model illustrated in FIG. 15.



FIG. 17 is a diagram illustrating an example of a step of configuring a robust attitude controller illustrated in FIG. 2.



FIG. 18 is a diagram illustrating another example of the step of configuring a robust attitude controller illustrated in FIG. 2.



FIG. 19 is a diagram illustrating a coordinate system and parameters of an underwater robot according to an exemplary embodiment of the present disclosure.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art to which the present disclosure pertains may easily practice the present disclosure. However, the description of the present disclosure is only an exemplary embodiment for structural or functional description, and therefore the scope of the present disclosure should not be construed as limited to exemplary embodiments described in the text. That is, since the exemplary embodiments may be variously modified and may have various forms, the scope of the present disclosure should be construed as including equivalents capable of realizing the technical idea. In addition, a specific exemplary embodiment is not construed as including all the objects or effects presented in the present disclosure or only the effects, and therefore the scope of the present disclosure should not be understood as being limited thereto.


The meaning of the terms described in the present disclosure should be understood as follows.


Terms such as “first” and “second” are intended to distinguish one component from another component, and the scope of the present disclosure should not be limited by these terms. For example, a first component may be named a second component and the second component may also be similarly named the first component. It is to be understood that when one element is referred to as being “connected to” another element, it may be connected directly to or coupled directly to another element or be connected to another element, having the other element intervening therebetween. On the other hand, it is to be understood that when one element is referred to as being “connected directly to” another element, it may be connected to or coupled to another element without the other element intervening therebetween. Meanwhile, other expressions describing a relationship between components, that is, “between”, “directly between”, “neighboring to”, “directly neighboring to” and the like, should be similarly interpreted.


It should be understood that the singular expression includes the plural expression unless the context clearly indicates otherwise, and it will be further understood that the terms “comprises” or “have” used in this specification specify the presence of stated features, numerals, steps, operations, components, parts, or a combination thereof, but do not preclude the presence or addition of one or more other features, numerals, steps, operations, components, parts, or a combination thereof.


Unless defined otherwise, all the terms used herein including technical and scientific terms have the same meaning as meanings generally understood by those skilled in the art to which the present disclosure pertains. It should be understood that the terms defined by the dictionary are identical with the meanings within the context of the related art, and they should not be ideally or excessively formally defined unless the context clearly dictates otherwise.


Overall System


FIG. 1 is a block diagram illustrating an overall system of an underwater robot equipped with a multi-degree-of-freedom robot arm according to an exemplary embodiment of the present disclosure.


Referring to FIG. 1, an appearance of an underwater robot 1 for underwater work includes a main body 2 and a multi-degree-of-freedom robot arm 3, and the overall system 10 of the underwater robot 1 includes a robot system 100 for autonomous driving of the underwater robot 1 and a mission system 200 for setting an autonomous driving path, works, etc., of the underwater robot 1.


In an exemplary embodiment, the robot system 100 includes a sensorless propulsion controller 110, a URM controller 120, a navigation system 130, a robust attitude controller 140, a position controller 150, a GPS unit 160, an IMU 170, and a device management unit 180, and among these components, the sensorless propulsion controller 110, the position controller 150, the GPS unit 160, and the IMU 170 may be disposed on the underwater robot 1.


In an exemplary embodiment, the sensorless propulsion controller 110 may obtain a propulsive force prediction value of the underwater robot 1 by estimating the propulsive force of the underwater robot 1 for precise control of propellers 2a to 2d.


In addition, the sensorless propulsion controller 110 may obtain a propulsive force prediction value 111a by predicting the propulsive force of the underwater robot 1 based on the artificial neural network. In this case, the artificial neural network may be a propeller simulation neural network 111.


This sensorless propulsion controller 110 may be provided with a propulsion system 115 for generating the propulsive force of the underwater robot 1 or controlling the speed of the underwater robot 1.


In an exemplary embodiment, the propulsion system 115 may include a first speed controller 115a, a first current controller 115b, a force sensor 115c, and a first environmental sensor 115d that transmit data to the propeller simulation neural network 111.


Here, the first speed controller 115a may control a rotational speed of the propellers 2a to 2d.


In addition, the first current controller 115b may control a current to operate the propellers 2a to 2d.


The force sensor 115c may be coupled to one side of the propellers 2a to 2d and measure an output value Y and a repulsion force F of the propellers 2a to 2d.


In addition, the first environmental sensor 115d may measure first environmental data including a temperature and voltage of the propellers 2a to 2d.


In an exemplary embodiment, the URM controller 120 may precisely control a multi-degree-of-freedom robot arm 3, which is an underwater robot manipulator (URM), during autonomous driving of the underwater robot 1.


The URM controller 120 may precisely control the multi-degree-of-freedom robot arm 3 based on the artificial neural network. In this case, the artificial neural network may be an actuator simulation neural network 121 that accurately predicts the output torque of the actuator 122 which is a waterproof actuator.


Here, since the actuator 122 is applied to the underwater robot 1, the actuator 122 may be a waterproof actuator that is sealed airtight so that it can be used underwater, and may be a hollow actuator with an empty space for routing a cable 125 through an inner space of a joint of the multi-degree-of-freedom robot arm 3.


That is, in this specification, both the waterproof actuator and the hollow actuator refer to the actuator 122; the two terms may be used interchangeably and should be understood to have the same meaning.


In the present disclosure, the data obtained by the actuator simulation neural network 121 predicting the output torque of the actuator 122 may be the torque prediction value 121a.


In an exemplary embodiment, the navigation system 130 may control (or support) the underwater robot 1 to autonomously drive to the target position along the driving path by accurately measuring the current position of the autonomously driving underwater robot 1.


In addition, the navigation system 130 may enable the underwater robot 1 to identify obstacles in water and quickly calculate a relative distance from the obstacles, and may set an optimal movement direction and driving path for the underwater robot 1.


To track the position of the underwater robot 1, the navigation system 130 may estimate the position of the underwater robot 1 through dead reckoning based on a Doppler velocity log (DVL) in water, and calibrate the position of the underwater robot through wireless-based real-time kinematic (RTK) position correction when the underwater robot 1 moves to the water surface and the GPS signal of the underwater robot 1 is detected. For convenience of description, rather than describing the various devices involved, this position measurement is referred to as the wireless-based GPS error-correction position in this specification.
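A minimal sketch of the DVL-based dead reckoning and surface RTK correction described above, assuming a 2-D world frame, a body-frame DVL velocity (forward, starboard), and a heading angle measured clockwise from north (all names and conventions here are illustrative, not from the disclosure):

```python
import math

def dead_reckon(position, heading_rad, dvl_velocity, dt):
    """One dead-reckoning update: rotate the body-frame DVL velocity into
    the world frame and integrate it over the time step dt."""
    vx, vy = dvl_velocity  # forward and starboard speeds from the DVL
    east = position[0] + (vx * math.sin(heading_rad) + vy * math.cos(heading_rad)) * dt
    north = position[1] + (vx * math.cos(heading_rad) - vy * math.sin(heading_rad)) * dt
    return (east, north)

def correct_with_rtk(estimate, rtk_fix):
    """At the surface, a GPS/RTK fix replaces the drifted dead-reckoning
    estimate; in water (no fix available), the estimate is kept."""
    return rtk_fix if rtk_fix is not None else estimate
```

Moving straight north (heading 0) at 1 m/s for 2 s shifts the estimate 2 m north; once an RTK fix becomes available at the surface, the estimate snaps to the corrected position.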


In an exemplary embodiment, the robust attitude controller 140 may adapt the sensorless propulsion controller 110, the URM controller 120, and the navigation system 130 to the internal influences of the underwater robot 1, including the propulsive force of the underwater robot 1 and the output torque of the multi-degree-of-freedom robot arm 3, and to the influences of the complex underwater environment and external disturbances.


In addition, the robust attitude controller 140 may include a simulator 141 that performs the position tracking simulation of the underwater robot 1 based on parameters reflecting the propulsive force prediction value 111a and torque prediction value 121a so that the propulsion controller 110 and the URM controller 120 may control the speed of the underwater robot 1 and the output torque of the multi-degree-of-freedom robot arm 3 without error from the internal influence of the underwater robot 1.


The robust attitude controller 140 may include an environment module 142 that trains the simulator 141 by applying changes in parameters and external disturbances to the simulator 141 so that the propulsion controller 110 and the URM controller 120 may control the speed of the underwater robot 1 and the output torque of the multi-degree-of-freedom robot arm 3 without error from the influences of the complex underwater environment and external disturbances.


The sensorless propulsion controller 110 may predict the propulsive force of the underwater robot 1, the URM controller 120 may predict the output torque of the multi-degree-of-freedom robot arm 3, and the navigation system 130 may track the position of the underwater robot 1.


In an exemplary embodiment, the position controller 150 may be a position control system for controlling the rotation angle of the multi-degree-of-freedom robot arm 3.


In addition, the position controller 150 may include a position controller 150a that transmits data to the actuator simulation neural network 121, a second speed controller 150b, a second current controller 150c, and a second environmental sensor 150d.


Here, the position controller 150a may control the rotation angle of the multi-degree-of-freedom robot arm 3.


In addition, the second speed controller 150b may control the rotational speed of the multi-degree-of-freedom robot arm 3.


In addition, the second current controller 150c may control the current for rotating the multi-degree-of-freedom robot arm 3.


In addition, the second environmental sensor 150d may be coupled to one side of the actuator 122 and measure the second environmental data including the temperature, voltage, etc., of the actuator 122.


The GPS unit 160 is a global positioning system (GPS) that receives signals from satellites and calculates a user's current position.


The GPS is composed of a satellite section, a ground control section, and a user section. Here, the satellite section refers to the GPS satellite, the ground control section refers to a control station located on the ground, and the user section refers to a GPS receiver.


There are 30 GPS satellites orbiting the Earth. Among them, 24 satellites are distributed in 6 orbital planes orbiting the Earth, allowing at least 6 GPS satellites to be observed from anywhere in the world. The remaining six satellites serve as backup when a problem occurs with the 24 satellites.


The GPS satellites run on solar energy and have a lifespan of about 8 to 10 years. The control station is divided into a main control station located in Colorado Springs, USA, and five sub-control stations distributed around the world. Each sub-control station tracks the GPS satellites passing through the sky, measures the distance and rate of change, and sends the measured distance and rate of change to the main control station. The main control station collects information and processes the collected information to keep the satellite in orbit. The GPS receiver is composed of an antenna that receives signals from GPS satellites, a clock, software that processes the signals, an output device that outputs them, and the like.


The GPS unit 160 according to the present disclosure may include a moving base GPS 161 and a Rover GPS 162.


Here, the moving base GPS 161 may be used to measure the position of the underwater robot 1.


Next, the Rover GPS 162 may be used to measure a heading angle of the underwater robot 1.


In an exemplary embodiment, the IMU 170 is an inertial measurement unit that measures the degree of inclination of the underwater robot 1, and is preferably provided in the main body 2 of the underwater robot 1.


This IMU 170 may be a 6-axis sensor composed of a gyroscope and accelerometer, or a 9-axis sensor composed of a gyroscope, an accelerometer, and a geomagnetic sensor.


In an exemplary embodiment, the device management unit 180 may control operations of an LED 181, a camera 182, and a sonar 183 for the autonomous driving and underwater work of the underwater robot 1.


In this case, since the LED 181, the camera 182, and the sonar 183, the operations of which are controlled through the device management unit 180, are common components, a detailed description thereof will be omitted for convenience.


In an exemplary embodiment, although not illustrated in the drawings, the robot system 100 may include a wireless communication unit (not illustrated) including one or more modules that enable wireless communication between the underwater robot 1 and a wireless communication system or between the underwater robot 1 and the network where another device is located.


In an exemplary embodiment, the wireless communication unit may communicate with an external device through short range communication or long range communication.


Here, the short range communication may include ANT, Bluetooth, radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB), and ZigBee technologies.


In addition, the long range communication may include code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA), and single carrier frequency division multiple access (SC-FDMA).


In an exemplary embodiment, although not illustrated in the drawings, the robot system 100 may include a driving unit (not illustrated) for driving the underwater robot 1.


In an exemplary embodiment, the driving unit may provide driving force to move the underwater robot 1 based on components such as a motor and inverter.


In an exemplary embodiment, although not illustrated in the drawings, the robot system 100 may include a braking unit (not illustrated) that provides a braking function to stop the movement of the underwater robot 1.


In an exemplary embodiment, the braking unit may be composed of a braking force generation device that generates the force necessary to brake underwater robot 1, a braking device that uses the force generated by the braking force generation device to reduce the speed of the underwater robot 1 or directly stop the underwater robot 1, an ancillary device that transmits the force generated from the braking force generation device to the braking device, etc.


The braking force generation device may include auxiliary power sources such as vacuum, hydraulic, and air brakes, master cylinders, boosters, etc.; the braking device may include a drum brake, a disc brake, etc.; and the ancillary device may include a vacuum pump, an air compressor, etc.


In an exemplary embodiment, the robot system 100 may include a controller 190 that typically controls the overall operation of the underwater robot 1.


In this specification, the controller 190 may be referred to as a host (HOST).


In addition, the controller 190 may communicate with a first base station 300a, a second base station 300b, and a third base station 300c, which support a Network Transport of RTCM via Internet Protocol (NTRIP) signal, through the wireless communication unit.


Here, NTRIP (Network Transport of RTCM via Internet Protocol) refers to a network method of receiving a GPS correction signal (RTCM).


In an exemplary embodiment, the mission system 200 may include a user interface, vision AI, driving AI, etc., so that a user may control the autonomous driving of the underwater robot 1 through remote control.


Underwater Robot Control Method

Hereinafter, an underwater robot control method (S1) will be described in detail based on the sensorless propulsion controller 110, the URM controller 120, the navigation system 130, and the robust attitude controller 140.



FIG. 2 is a flowchart illustrating a control method of an underwater robot equipped with a multi-degree-of-freedom robot arm according to an exemplary embodiment of the present disclosure.


Referring to FIG. 2, the underwater robot control method (S1) of the present disclosure may include a first step (S10) of configuring the sensorless propulsion controller 110, the URM controller 120, and the navigation system 130, and a second step (S20) of configuring the robust attitude controller 140 for the control of the underwater robot 1.


In an exemplary embodiment, the first step (S10) may include a (1-1)-th step (S11) of configuring the sensorless propulsion controller 110, a (1-2)-th step (S12) of configuring the URM controller 120, and a (1-3)-th step (S13) of configuring the navigation system 130.


In this first step (S10), the sensorless propulsion controller 110, the URM controller 120, and the navigation system 130 may be configured sequentially or simultaneously.



FIG. 3 is a flowchart illustrating a detailed process of a step of configuring a sensorless propulsion controller illustrated in FIG. 2.


Referring to FIG. 3, the (1-1)-th step (S11) may include a propeller simulation neural network training step (S11a) and a propulsion system mounting step (S11b) for configuring the sensorless propulsion controller 110.


In the propeller simulation neural network training step (S11a), the propeller simulation neural network 111 may be trained to predict the propulsive force of the underwater robot 1 based on the data received from the propulsion system 115, as illustrated in FIG. 4. FIG. 4 is a diagram for describing a step of training a propeller simulation neural network illustrated in FIG. 3.


Referring to FIG. 4, in the propeller simulation neural network training step (S11a), the propeller simulation neural network 111 may be trained based on data including the propeller speed control error of the first speed controller 115a, the first current measured value measured by the first current controller 115b, the output value Y and the repulsion force F of the propellers 2a to 2d measured by the force sensor 115c connected to the propellers 2a to 2d, and the first environmental data, such as the temperature and voltage of the propellers 2a to 2d, measured by the first environmental sensor 115d.


In addition, the propeller simulation neural network 111 may obtain the propulsive force prediction value 111a by predicting the propulsive force of underwater robot 1 through training.


This propeller simulation neural network 111 may be trained to simulate a function representing the output value Y of the propellers 2a to 2d.


That is, in the present disclosure, the propulsive force prediction value 111a may be generated from the propeller simulation neural network 111 in the form of the function representing the output value Y of the propellers 2a to 2d, and an example of a function form representing the output value (Y) is as shown in [Equation 1] below.









y = F(x1, x2)   [Equation 1]







In the above [Equation 1], y may be the output value of the propellers 2a to 2d, F may be the repulsion force of the propellers 2a, 2b, 2c, and 2d, x1 may be the propeller speed control error of the first speed controller 115a, and x2 may be the first current measured value measured by the first current controller 115b.


In this case, including the speed control error x1 in the propulsive force prediction value 111a is to reflect not only the speed measurement error measured by the first speed controller 115a but also the effect of time delay due to the controller bandwidth.


In addition, including the first current measured value x2 in the propulsive force prediction value 111a reflects the fact that the propulsive force is proportional to the square of the rotational speed and to the torque (power), and that the torque is related to the measured current through the torque constant; [Equation 1] is intended to capture this relationship.


In addition, the torque constant varies when the temperature fluctuates significantly. To compensate for this variation, the first environmental data including the temperature information of the propellers 2a to 2d may be reflected in the propulsive force prediction value 111a.
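As a concrete illustration, the mapping y = F(x1, x2) can be approximated with a small fully connected network. The sketch below trains such a network on synthetic data with NumPy; the data-generating relation, layer sizes, and learning rate are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the first environmental data (hypothetical units):
# x1 = propeller speed control error, x2 = first current measured value.
# The data-generating relation below only mimics thrust growing with
# current (thrust ~ torque ~ Kt * I) and shrinking with speed error.
X = rng.uniform(0.0, 1.0, size=(256, 2))
y = (1.8 * X[:, 1] - 0.4 * X[:, 0] + 0.05 * rng.normal(size=256)).reshape(-1, 1)

# Two-layer fully connected network: y_hat = relu(X W1 + b1) W2 + b2.
W1 = rng.normal(0, 0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    H = np.maximum(0.0, X @ W1 + b1)   # hidden activations
    return H, H @ W2 + b2              # prediction of the thrust y

losses = []
lr = 0.05
for _ in range(500):
    H, y_hat = forward(X)
    err = y_hat - y
    losses.append(float(np.mean(err ** 2)))
    # Manual backpropagation of the mean-squared-error loss.
    dy = 2 * err / len(X)
    dW2 = H.T @ dy; db2 = dy.sum(0)
    dH = dy @ W2.T; dH[H <= 0] = 0.0
    dW1 = X.T @ dH; db1 = dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

In a deployment such as the MCU case of FIG. 6, only the trained `forward` pass would need to run on the controller.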


As illustrated in FIGS. 5 and 6, the propeller simulation neural network 111 of the present disclosure may be applied to the higher control system 1150, which controls the propulsion system 115, or to the propulsion system 115 itself, to generate the propulsive force of the underwater robot 1 or control the speed of the underwater robot 1.



FIG. 5 is a diagram illustrating an example of applying the propeller simulation neural network illustrated in FIG. 4, and FIG. 6 is a diagram illustrating another example of applying the propeller simulation neural network illustrated in FIG. 4.


Referring to FIG. 5, the higher control system 1150 connects the trained propeller simulation neural network 111 to the first speed controller 115a, the first current controller 115b, and the first environmental sensor 115d of the propulsion system 115, and controls the propulsion system 115 based on the obtained propulsive force prediction value 111a, thereby generating the propulsive force of the underwater robot 1 or controlling the speed of the underwater robot 1.


In this way, the reason for applying the propeller simulation neural network 111 to the high-performance higher control system 1150 is that the propeller simulation neural network 111 may require a deep or complex neural network such as a long short-term memory (LSTM) network, a recurrent neural network (RNN), or a convolutional neural network (CNN).


Referring to FIG. 6, the propulsion system 115 may generate the propulsive force of the underwater robot 1 or control the speed of the underwater robot 1 based on the propulsive force prediction value 111a acquired through the trained propeller simulation neural network 111 connected to the first speed controller 115a, the first current controller 115b, and the first environmental sensor 115d.


In this way, the reason for applying the propeller simulation neural network 111 to the propulsion system 115 is that, in this case, the network is implemented as a simple fully connected network (FCN), unlike complex neural networks such as a deep LSTM, RNN, or CNN.


In addition, the propulsion system 115 of FIG. 6 may be implemented as a microcontroller (MCU) to control the propellers 2a to 2d of the underwater robot 1.



FIG. 7 is a diagram illustrating an example of a step of applying the propeller simulation neural network illustrated in FIG. 3.


Referring to FIG. 7, in the propulsion system mounting step (S11b), the propulsion controller 4 may be manufactured in the form of a printed circuit board (PCB) including the propulsion system 115 for controlling the propellers 2a to 2d of the underwater robot 1.


This propulsion controller 4 may be mounted on the propulsion controller module 5 to generate the propulsive force in the propellers 2a to 2d or control the speed.


In an exemplary embodiment, the propulsion controller module 5 may have one or more propulsion controllers 4 inserted into or combined with it, and may be provided on the main body 2 to generate the propulsive force in the plurality of propellers 2a to 2d or control the speed.


In an exemplary embodiment, the underwater robot 1 performs autonomous driving based on the vector sum of the propulsive forces generated by the propellers 2a to 2d, each of which generates its propulsive force through the propulsion controller module 5.


In addition, due to the structure of the underwater robot 1, the vector sum of the propulsive forces may be divided into the vector sum of the front propulsive forces generated from the front propellers and the vector sum of the rear propulsive forces generated from the rear propellers among the propellers 2a to 2d.


In other words, the underwater robot 1 may precisely generate the front and rear vector sums through the propulsion controller module 5 based on the propeller simulation neural network 111, thereby enabling precise autonomous driving.
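The vector-sum computation described above can be sketched as follows; the thruster direction vectors and thrust magnitudes are hypothetical placeholders, not the actual layout of the propellers 2a to 2d.

```python
import numpy as np

# Hypothetical thruster layout: unit direction vectors (body frame) and
# commanded thrust magnitudes for four propellers.
directions = np.array([
    [1.0, 0.0, 0.0],   # forward-facing thruster
    [1.0, 0.0, 0.0],   # forward-facing thruster
    [0.0, 1.0, 0.0],   # lateral thruster
    [0.0, 0.0, 1.0],   # vertical thruster
])
thrusts = np.array([5.0, 5.0, 2.0, 1.0])  # thrust magnitudes in N (hypothetical)

# Net propulsive force is the vector sum F = sum_i f_i * d_i.
net_force = (directions * thrusts[:, None]).sum(axis=0)
```

Splitting the rows into front and rear groups before summing yields the front and rear vector sums mentioned above.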



FIG. 8 is a flowchart illustrating a detailed process of a step of configuring an underwater robot manipulator illustrated in FIG. 2.


Referring to FIG. 8, step 1-2th (S12) may include an actuator simulation neural network training step (S12a) and a waterproof actuator configuration step (S12b).


In the actuator simulation neural network training step (S12a), as illustrated in FIG. 9, the actuator simulation neural network 121 may be trained to predict the output torque of the actuator 122 constituting the multi-degree-of-freedom robot arm 3 based on the data transmitted from the position controller 150.



FIG. 9 is a diagram for describing a step of training an actuator simulation neural network illustrated in FIG. 8.


Referring to FIG. 9, in the actuator simulation neural network training step (S12a), the actuator simulation neural network 121 may be trained based on second environmental data that includes a rotation angle and speed control error of the multi-degree-of-freedom robot arm 3 of the position controller 150a and the second speed controller 150b, a second current measured value measured by the second current controller 150c, a torque calculated based on the second current measured value, a temperature and voltage of the actuator 122 measured by the second environmental sensor 150d, etc.


In addition, the actuator simulation neural network 121 may acquire the torque prediction value 121a by predicting the output torque of the actuator 122 through training.


As illustrated in FIG. 10, the actuator simulation neural network 121 may be linked to the simulator 141 of the robust attitude controller 140 to be described later, which simulates the operation of the multi-degree-of-freedom robot arm 3.



FIG. 10 is a diagram illustrating an example of applying the actuator simulation neural network illustrated in FIG. 9.


Referring to FIG. 10, the actuator simulation neural network 121 simulates physical phenomena that are difficult to model in the performance simulation of the multi-degree-of-freedom robot arm 3, such as the uncertainty of the motor, the friction due to the sealing of the actuator 122, and the measurement uncertainty of the rotation angle measuring device 127.


Accordingly, when the actuator simulation neural network 121 is added, the simulator 141 may be implemented as a hybrid simulator based on a simulation to real (SIM-TO-REAL) methodology that precisely simulates the complex physical phenomenon of the multi-degree-of-freedom robot arm 3.


In addition, the simulator 141 may be linked to the position controller 150 to control the position of the actuator 122.


Accordingly, the position controller 150 may be implemented as a SIM-TO-REAL-based controller that may precisely control the position of the actuator 122 despite the influence of these complex physical phenomena.


In an exemplary embodiment, the actuator 122 may constitute the multi-degree-of-freedom robot arm 3, whose operation is controlled by the URM controller 120 based on the actuator simulation neural network 121.


This actuator 122 is preferably kept airtight as illustrated in FIG. 11 so that it may be operated in water.



FIG. 11 is a diagram for describing a step of configuring a waterproof actuator illustrated in FIG. 8.


Referring to FIG. 11, the actuator 122 may be kept airtight by being coupled to the link 124 through a pogo pin 123 having a hollow shaft 123a.


In this case, the pogo pin 123 may simultaneously transmit the power and signal of the position controller 150 to the motor for operating the actuator 122.


In an exemplary embodiment, the actuator 122 has a structure in which the cable 125 for operation from the position controller 150 passes through the inside of the link 124, so a separate cable for external connection may be omitted.


In addition, the actuator 122 has a rotation angle measuring device 127 (the 'encoder' in FIG. 12) that may measure the rotation angle of the actuator output shaft, transmit the measured rotation angle to an angle estimator 126, and be attached to or detached from the output shaft illustrated in FIG. 12.


In an exemplary embodiment, the rotation angle measuring device 127 may be attached to or detached from the output shaft of the actuator 122 through a circular housing 127a.


In an exemplary embodiment, the rotation angle measuring device 127 may have a plurality of magnetic sensors 1270 provided on (or built into) the housing 127a.


In the present disclosure, the plurality of magnetic sensors 1270 may include a first magnetic sensor 1270-1, a second magnetic sensor 1270-2, a third magnetic sensor 1270-3, and a fourth magnetic sensor 1270-4.


That is, in the present disclosure, the plurality of magnetic sensors 1270 may include four magnetic sensors 1270-1 to 1270-4 arranged at 90° intervals.


In an exemplary embodiment, the URM controller 120 may include the angle estimator 126, which receives the values measured by each magnetic sensor 1270 of the rotation angle measuring device 127 and obtains the processed rotation angle of the actuator output shaft by calibrating and fusing the received measured values.



FIG. 12 is a diagram illustrating an example of applying an angle estimator according to an exemplary embodiment of the present disclosure.


Referring to FIG. 12, the angle estimator 126 may accurately measure the rotation angle of the actuator output shaft based on the correction and fusion of the measured values received from the rotation angle measuring device 127 disposed on the output shaft of the actuator 122.


In an exemplary embodiment, the angle estimator 126 may include a correction unit 126a that corrects the measured value of each magnetic sensor 1270 to an actual value by removing external disturbances included in the measured value received from the plurality of magnetic sensors 1270.


Here, the measured value that the correction unit 126a receives from the plurality of magnetic sensors 1270 may be a measurement signal that contains the actual value together with external disturbances that cause measurement errors.


In addition, the angle estimator 126 may include a signal conditioner unit 126b based on an N-channel extended Kalman filter algorithm, which obtains the processed rotation angle of the actuator output shaft by fusing the measured values of the magnetic sensors 1270 while removing the harmonic distortion and noise included in each measured value.


Here, the processed rotation angle of the actuator output shaft means the rotation angle of the actuator output shaft measured as the actual value by removing the external disturbance through the correction unit 126a and removing the harmonic distortion and noise by the signal conditioner unit 126b.



FIG. 13 is a flowchart illustrating a detailed process of a step of configuring a navigation system illustrated in FIG. 2.


Referring to FIG. 13, the navigation system 130 may be configured through step 1-3th (S13) including the following steps (S13a to S13q) to accurately measure the current position of the underwater robot 1 and provide the current position to the user.


First, the controller 190, which acts as a host (HOST), may request access to a plurality of surrounding base stations 300 through a wireless communication unit to measure the position of the underwater robot 1.


As a specific example, the controller 190 requests access to a first base station 300a based on an NTRIP signal (S13a), requests access to a second base station 300b based on another NTRIP signal (S13b), and requests access to a third base station 300c based on a radio frequency (RF) signal (S13c).


After the access requests (S13a to S13c), each base station 300a to 300c may provide, to the controller 190, a GPS correction signal (RTCM) to be transmitted to the moving base GPS 161 via the controller 190 (S13d to S13f).


Next, the controller 190 may select the GPS correction signal (RTCM) of the base station closest to the underwater robot 1 to be transmitted to the moving base GPS 161 (S13g).


As a specific example, the controller 190 may select the GPS correction signal (RTCM) of an adjacent base station using RTCM Type 3.0, which may include the position information of each base station to which the controller 190 requests access.


That is, the controller 190 may select a nearby base station using RTCM Type 3.0, which already includes information such as the position of the base station antenna; in an exemplary embodiment, the base station closest to the underwater robot 1 may be the first base station 300a.
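Selecting the nearest base station from antenna positions such as those carried in RTCM messages might be sketched as follows; the station names and coordinates are hypothetical.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points in km."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_base(robot, bases):
    """Return the name of the base station closest to the robot.

    bases maps a station name to its (lat, lon) antenna position, as
    could be extracted from the base-station position messages.
    """
    return min(bases, key=lambda k: haversine_km(*robot, *bases[k]))
```

The correction stream of the station returned here would then be forwarded to the moving base GPS 161 (S13h).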


Next, the controller 190 may transmit the information on the first base station 300a, which is the nearest server based on the NTRIP signal, to the moving base GPS 161 (S13h).


In this case, the information on the first base station 300a is RTCM stream 1, and in this specification, the RTCM stream 1 means the RTCM stream of the nearest server selected by the controller 190.


In addition, the moving base GPS 161 may calibrate its position information based on the information on the first base station 300a.


In other words, the information related to the nearest server (base station) may be the information on the difference between the position of the nearest server (base station) confirmed through the signal received from at least one satellite and the actual position of the nearest server (base station), and the moving base GPS 161 may calibrate errors caused by the separation distance between the at least one satellite and the moving base GPS 161 based on the difference information.


Next, the moving base GPS 161 may transmit the corrected first position information (reference position information) to the Rover GPS 162 (S13i).


Next, the Rover GPS 162 may calculate the heading angle information of the underwater robot 1 using the corrected first position information of the moving base GPS 161 and its own position information (second position information) acquired by the Rover GPS 162.


Here, the heading angle information may be the angle information in which the position information acquired by the Rover GPS 162 rotates based on the position information of the corrected moving base GPS 161 and the preset separation distance between the moving base GPS 161 and the rover GPS 162.


In addition, the Rover GPS 162 may receive the information related to the nearest server (base station) from the moving base GPS 161 and calculate the heading angle information after calibrating the position information.


In other words, the Rover GPS 162 may finally calculate the heading angle information of the underwater robot 1 using the position information of the moving base GPS 161 and the position information of the corrected Rover GPS 162.
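The heading computed from the two antennas reduces to the bearing of the rover antenna relative to the moving-base antenna. A minimal sketch, assuming both positions are already expressed in a local east-north plane in metres (a simplification; RTK receivers report the relative vector directly):

```python
import math

def heading_deg(base_en, rover_en):
    """Heading from the moving-base antenna to the rover antenna.

    base_en and rover_en are (east, north) positions in metres in a
    local plane.  The heading is measured clockwise from north, as is
    conventional in navigation.
    """
    de = rover_en[0] - base_en[0]   # east offset of the rover
    dn = rover_en[1] - base_en[1]   # north offset of the rover
    return math.degrees(math.atan2(de, dn)) % 360.0
```

Because the two antennas are rigidly mounted a known distance apart on the robot, this bearing is the heading angle of the underwater robot 1 itself.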


In this case, the first position information (reference position information) transmitted to the Rover GPS 162 is RTCM stream 2, and in the reference position information transmitting step (S13i), the RTCM stream 2 refers to the RTCM stream transmitted from moving base GPS 161 to the Rover GPS 162.


Next, the moving base GPS 161 may transmit first position information and PVT information related to position, speed, and time to the controller 190 (S13j).


Next, the Rover GPS 162 may transmit a relative PVT value for measuring (or deriving) the heading value of the underwater robot 1 to the controller 190 (S13k).


The controller 190 may apply the first position information, the PVT information, and the relative PVT value received from the GPS unit 160 in steps (S13j to S13k) to an INS algorithm of an inertial navigation device model 191 configured in the controller 190 (S13n).


In this case, the inertial navigation device model 191 may measure the position value and heading value of the underwater robot 1 that is autonomously driving on the water surface through the INS algorithm.


Thereafter, the controller 190 broadcasts its position to the RTCM stream (S13o).


In the broadcast step (S13o), the RTCM stream 2 refers to the RTCM stream that transmits the position value and heading value of the underwater robot 1 measured by the INS algorithm to the control center 132 of the navigation system 130 (S13p).


As the control center 132 receives the position value and heading value of the underwater robot 1 that is autonomously driving on the water surface, it is possible to track the position of the controller 190 provided in the underwater robot 1 in real time and thus acquire the position information of the controller 190.


In addition, when a plurality of underwater robots 1 operate to autonomously drive on the water surface, the control center 132 may track the position of the controller 190 provided in each underwater robot 1 in real time through broadcasting, as the RTCM stream, by the controller 190 provided in each of the plurality of underwater robots 1.


Furthermore, the control center 132 transmits the position information of the controller 190 provided in the underwater robot 1 that is autonomously driving on the water surface to the MAP server 133 side of the navigation system 130 (S13q), thereby allowing the update of the MAP server 133 to proceed.


In this case, the MAP server 133 may provide the user with the map information in which the position information of the controller 190 is updated, and thus, the user may determine the current position of the underwater robot 1 that is autonomously driving on the water surface.


In this way, in step 1-3th (S13) of the present disclosure, while the underwater robot 1 is autonomously driving on the water surface where the GPS signal can be detected, as illustrated in FIG. 14, the position of the underwater robot 1 is tracked while being calibrated through the RTK-based position correction, so that the user may be provided with the navigation system 130 that can accurately measure the position of the underwater robot 1.


On the other hand, in step 1-3th (S13) of the present disclosure, because the GPS signal cannot be received while the underwater robot 1 is autonomously driving underwater, some steps using the GPS signal (S13a to S13j) may be omitted.


When the underwater robot 1 of the present disclosure is autonomously driving in water, the measurement device 131 may measure the altitude, attitude, and speed of the underwater robot 1 that is autonomously driving in water, and then transmit data about the altitude, attitude, and speed of the underwater robot 1 to the controller 190 (S13l).


Here, the measurement device 131 is a component attached to or detached from one side of the underwater robot 1 and, as illustrated in FIG. 15, may include a Doppler velocity log 131a for measuring the speed of the underwater robot 1 autonomously driving in water, a vision sensor 131b for estimating the position of the underwater robot 1 by capturing underwater images at high density, and a depth meter 131c capable of measuring a three-dimensional position by measuring the depth value.


In addition, the depth value measured by the depth meter 131c means the altitude of the underwater robot 1 measured when the water surface (sea level) is set to 0.


The Doppler velocity log 131a can estimate the position based on the fact that when an electromagnetic wave signal is reflected by a moving object, the frequency of the signal changes in proportion to the speed of the object due to the Doppler effect.
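This two-way Doppler relation, delta_f = 2·v·f_tx/c, can be inverted for the speed; the transmit frequency and the propagation speed c below are hypothetical example values.

```python
def speed_from_doppler(f_tx, f_rx, c):
    """Relative speed of the reflector from the two-way Doppler shift.

    A signal transmitted at f_tx and reflected back by an object moving
    at speed v returns at f_rx = f_tx + 2 * v * f_tx / c, so
    v = c * (f_rx - f_tx) / (2 * f_tx).  Units follow c; f_tx, f_rx,
    and c here are illustrative, not values from the disclosure.
    """
    return c * (f_rx - f_tx) / (2.0 * f_tx)
```

A Doppler velocity log measures this shift along several beams and combines the per-beam speeds into a velocity vector.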


In addition, since it takes considerable time for the vision sensor 131b to estimate the position and attitude of the underwater robot 1 by comparing the current image with the previous image, although not illustrated in the drawings, the vision sensor 131b may be fused with a gyro sensor (not illustrated) that can accurately estimate the attitude of the underwater robot 1 through angular velocity integration when the initial position is accurately known.


Next, the IMU 170 provided in the underwater robot 1 may be composed of, for example, a 6-axis sensor including a gyroscope and an accelerometer, and may measure the acceleration and angular velocity of the underwater robot 1 autonomously driving in water and then transmit the data on the acceleration and angular velocity to the controller 190 (S13m).


Next, the controller 190 may apply the data transmitted in the steps (S13l, S13m) to the INS algorithm of the inertial navigation device model 191 configured in the controller 190 (S13n).


Referring to FIGS. 15 and 16, the inertial navigation device model 191 derives the system model of the extended Kalman filter 192 through linearization of the non-linear navigation equations. The prediction model 192a, which is the process model constituting the system model of the extended Kalman filter 192, derives the equations for the altitude, attitude, and speed errors of the underwater robot 1 through a first-order Taylor series expansion and may estimate the errors in the altitude, attitude, and speed of the underwater robot 1 measured by the measurement device 131 using the Kalman filter.


Next, the update model 192b, which is the measurement model constituting the system model of the extended Kalman filter 192, may calibrate the error in the altitude and speed of the underwater robot 1 estimated by the prediction model 192a through the altitude and speed data of the underwater robot 1 measured by the measuring device 131.


The inertial navigation device model 191 may measure the position value of the underwater robot 1 autonomously driving in water by deriving and using the system model of the extended Kalman filter 192 described above.
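The predict and update steps of the Kalman filter structure described above can be illustrated for a single linear channel (altitude and vertical speed corrected by depth-meter readings). The state, noise covariances, and measurements below are hypothetical; the disclosure's extended Kalman filter additionally linearizes the non-linear navigation equations at each step.

```python
import numpy as np

# One channel of the error model: state x = [altitude, vertical speed].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # process model: constant-velocity
H = np.array([[1.0, 0.0]])              # depth meter observes altitude only
Q = np.diag([1e-4, 1e-3])               # process noise (hypothetical)
R = np.array([[0.05]])                  # measurement noise (hypothetical)

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial covariance

def kf_step(x, P, z):
    # Predict: propagate the state and grow the covariance.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct with the depth measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [-1.0, -1.02, -1.01, -1.03]:   # simulated depth readings (m)
    x, P = kf_step(x, P, np.array([[z]]))
```

After a few updates the altitude estimate settles near the measured depth and the covariance shrinks, which is the correction behavior the update model 192b performs on the errors estimated by the prediction model 192a.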


Thereafter, the controller 190 broadcasts the position information of underwater robot 1, including the position value of the underwater robot 1 measured through the inertial navigation device model 191, to the RTCM stream (S13o).


In the broadcast step (S13o), the RTCM stream 2 refers to the RTCM stream that transmits the position information of the underwater robot 1 measured by the INS algorithm to the control center 132 of the navigation system 130 (S13p).


As the control center 132 receives the position information of the underwater robot 1 that is autonomously driving in water, it is possible to track the position of the controller 190 provided in the underwater robot 1 in real time, and thus, acquire the position information of the controller 190.


In addition, when a plurality of underwater robots 1 operate to autonomously drive in water, the position of the controller 190 provided in each underwater robot 1 may be tracked in real time through broadcasting, as the RTCM stream, by the controller 190 provided in each of the plurality of underwater robots 1.


Furthermore, the control center 132 transmits the position information of the controller 190 provided in the underwater robot 1 that is autonomously driving in water to the MAP server 133 side of the navigation system 130 (S13q), thereby allowing the update of the MAP server 133 to proceed.


In this case, the MAP server 133 may provide the user with the map information in which the position information of the controller 190 is updated, and thus, the user may determine the current position of the underwater robot 1 that is autonomously driving in water.


In the present disclosure, the navigation system 130 is configured through the steps (S13a to S13q) of step 1-3th (S13), so that it may measure the position value of the underwater robot 1 autonomously driving on the water surface and in water through the two GPSs 161 and 162, the measurement devices 131a, 131b, and 131c, and the IMU 170, and enable the autonomous driving of the underwater robot 1 based on the measured position value.


The present disclosure may provide the user with the navigation system 130 that accurately measures the position of the underwater robot 1 during autonomous driving on the water surface and in water through step 1-3th (S13), thereby improving the convenience of the underwater work of the underwater robot 1.



FIG. 17 is a diagram illustrating an example of a step of configuring a robust attitude controller illustrated in FIG. 2, and FIG. 18 is a diagram illustrating another example of the step of configuring a robust attitude controller illustrated in FIG. 2.


Referring to FIG. 17, in the second step (S20), the robust attitude controller 140 may be configured to make the sensorless propulsion controller 110, the URM controller 120, and the navigation system 130 robust against the internal influences of the underwater robot 1, including the propulsive force of the underwater robot 1 and the output torque of the multi-degree-of-freedom robot arm 3.


In the second step (S20), the robust attitude controller 140 may include the simulator 141, which serves as a system approximation model and performs a simulation based on parameters reflecting the propulsive force prediction value 111a of the propeller simulation neural network 111 received from the propulsion controller 110 and the torque prediction value 121a of the actuator simulation neural network 121 received from the URM controller 120.


The simulator 141 may perform a position tracking simulation of the underwater robot 1 in which the propulsive force prediction value 111a and the torque prediction value 121a are reflected in the parameters.


In addition, the navigation system 130 is linked to the simulator 141 and may track the position of the underwater robot 1 based on the parameters of the position tracking simulation of the underwater robot 1.


The robust attitude controller 140 may configure an optimal control algorithm that controls the output torque so that the multi-degree-of-freedom robot arm 3 generates the set force while the underwater robot 1 is controlled at the set speed at the position of the underwater robot 1 tracked through the navigation system 130.


Through this, the propulsion controller 110 and the URM controller 120 may control the speed of the underwater robot 1 and the output torque of the multi-degree-of-freedom robot arm 3 against the internal influences of the underwater robot 1 during autonomous driving based on the optimal control algorithm.


Referring to FIG. 18, in the second step (S20), the robust attitude controller 140 may be configured to make the propulsion controller 110, the URM controller 120, and the navigation system 130 robust against complex underwater environments and external disturbances.


Here, the robust attitude controller 140 may be configured with an environment module 142 that trains the simulator 141 by applying changes in parameters and external disturbances to the simulator 141, so that the propulsion controller 110, the URM controller 120, and the navigation system 130 are robustly controlled against changes in the complex underwater environment and external disturbances.


The environment module 142 may derive a reinforcement learning control algorithm that allows the changes in parameters and external disturbances to be reflected in the position tracking simulation of the underwater robot 1 by applying those changes to the simulator 141.
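Applying changes in parameters and disturbances to the simulator before each training episode is commonly called domain randomization; a minimal sketch is shown below. The parameter names and ranges are hypothetical illustrations, not values from the disclosure.

```python
import random

def randomized_episode_params(base, rng=random):
    """Perturb simulator parameters for one training episode.

    base holds nominal values (e.g. {"mass": ..., "drag": ...}); each
    episode samples a variation so the controller learned in simulation
    stays robust to the real underwater environment.  The ranges are
    hypothetical.
    """
    return {
        "mass": base["mass"] * rng.uniform(0.9, 1.1),   # payload change
        "drag": base["drag"] * rng.uniform(0.8, 1.2),   # fouling, trim
        "current": (rng.uniform(-0.5, 0.5),             # tidal current (m/s)
                    rng.uniform(-0.5, 0.5)),
    }
```

A reinforcement learning loop would call this once per episode, reset the simulator 141 with the sampled parameters, and train the controller across the resulting variations.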


Through this, the propulsion controller 110 and the URM controller 120 may control the speed of the underwater robot 1 and the output torque of the multi-degree-of-freedom robot arm 3 without errors under the influences of the complex underwater environment and external disturbances based on the reinforcement learning control algorithm.


In other words, the robust attitude controller 140 secures control performance and stability of the sensorless propulsion controller 110, the URM controller 120, and the navigation system 130 against the influences of the complex underwater environments and external disturbances.


Meanwhile, the robust attitude controller 140 uses the system approximation model and may perform the position tracking simulation of the underwater robot 1 based on the coordinate system and parameters of the underwater robot 1 illustrated in FIG. 19.



FIG. 19 is a diagram illustrating a coordinate system and parameters of an underwater robot according to an exemplary embodiment of the present disclosure.


Referring to FIG. 19, the robust attitude controller 140 performs the position tracking simulation of the underwater robot 1 by the simulator 141 based on the coordinate system and parameters of the underwater robot 1 described in [Equation 2] to [Equation 5].










ṗbody = vbody   [Equation 2]

v̇body = (1/mbody)(ΣFi + Farm + Fd)   [Equation 3]

Θ̇body = J(Θ)ωbody   [Equation 4]

ω̇body = I⁻¹(Σ(ri × Fi) + Parm × Farm + Fd)   [Equation 5]




In the above [Equation 2] to [Equation 5], Θbody denotes an Euler angle expressing the rotation angle of the main body 2 of the underwater robot 1, pbody denotes a position vector indicating the position of the main body 2, Fi and ri denote the propulsive force of each of the propellers 2a to 2d and the distance between that propeller and the center of gravity of the main body 2, respectively, Fd denotes the external disturbance applied to the system approximation model by the external environment (tide, etc.), and Parm and Farm denote the distance to the gripper installed at the end of the multi-degree-of-freedom robot arm 3 and the direction of the force applied from the gripper, respectively.
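As an illustration, one explicit-Euler integration step of the translational part of the model ([Equation 2] and [Equation 3]) can be sketched as follows; the mass, forces, and time step are hypothetical placeholders.

```python
import numpy as np

m_body = 20.0                          # mass of the main body in kg (hypothetical)
p = np.zeros(3)                        # position vector p_body
v = np.zeros(3)                        # velocity vector v_body
dt = 0.01                              # integration time step in s

F_i = np.array([[4.0, 0.0, 0.0]] * 4)  # thruster forces F_i (hypothetical)
F_arm = np.array([0.0, -1.0, 0.0])     # reaction from the arm's gripper, F_arm
F_d = np.array([0.2, 0.0, 0.0])        # external disturbance F_d (tide)

for _ in range(100):                   # simulate 1 s of motion
    # [Equation 3]: v_dot = (1/m_body) * (sum(F_i) + F_arm + F_d)
    a = (F_i.sum(axis=0) + F_arm + F_d) / m_body
    v = v + a * dt                     # integrate acceleration into velocity
    p = p + v * dt                     # [Equation 2]: p_dot = v_body
```

The rotational equations ([Equation 4] and [Equation 5]) would be integrated the same way, with the cross products ri × Fi and Parm × Farm supplying the moments.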


In other words, when the underwater robot 1 performs underwater works such as cutting or grasping using the multi-degree-of-freedom robot arm 3, the position tracking simulation of the underwater robot 1 may reflect the external disturbance caused by the reaction occurring at the end of the multi-degree-of-freedom robot arm 3 equipped with the gripper and acting on the main body 2 of the underwater robot 1.


In addition, because the navigation system 130 tracks the position of the underwater robot 1 based on the parameters of the position tracking simulation of the underwater robot 1, the tracked position reflects changes in the external disturbances acting on the main body 2 of the underwater robot 1.


A detailed description of preferred exemplary embodiments of the invention disclosed as described above is provided to enable a person skilled in the art to implement or practice the invention. Although exemplary embodiments of the present disclosure have been disclosed above, it may be understood by those skilled in the art that the present disclosure may be variously modified and changed without departing from the scope of the present disclosure. For example, a person skilled in the art may use each configuration described in the above-described exemplary embodiments by combining them with each other. Accordingly, the present disclosure is not intended to be limited to the exemplary embodiments illustrated herein but is to be given the widest scope consistent with the principles and novel features disclosed herein.


The present disclosure may be implemented in another specific form without departing from the spirit and the essential feature of the present disclosure. Therefore, the above-mentioned detailed description is to be interpreted as being illustrative rather than being restrictive in all aspects. The scope of the present disclosure is to be determined by reasonable interpretation of the claims, and all modifications within an equivalent range of the present disclosure fall in the scope of the present disclosure. The present disclosure is not intended to be limited to the exemplary embodiments illustrated herein but is to be given the widest scope consistent with the principles and novel features disclosed herein. In addition, claims that do not have an explicit reference relationship in the patent claims can be combined to form an exemplary embodiment or can be included as a new claim through amendment after filing.


DETAILED DESCRIPTION OF MAIN ELEMENTS






    • 1: Underwater robot
    • 2: Underwater robot main body
    • 2a, 2b: Front propeller
    • 2c, 2d: Rear propeller
    • 3: Multi-degree-of-freedom robot arm
    • 4: Propulsion controller
    • 5: Propulsion controller module
    • 10: Overall system
    • 100: Robot system
    • 110: Sensorless propulsion controller
    • 111: Propeller simulation neural network
    • 111a: Propulsive force prediction value
    • 115: Propulsion system
    • 115a: First speed controller
    • 115b: First current controller
    • 115c: Force sensor
    • 115d: First environmental sensor
    • 120: URM controller
    • 121: Actuator simulation neural network
    • 121a: Torque prediction value
    • 122: Actuator
    • 123: Pogo pin
    • 123a: Hollow shaft
    • 124: Link
    • 125: Cable
    • 126: Angle estimator
    • 127: Encoder
    • 130: Navigation system
    • 131: Measuring device
    • 131a: Doppler velocity log
    • 131b: Vision sensor
    • 131c: Depth meter
    • 132: Control center
    • 133: Map server
    • 140: Robust attitude controller
    • 141: Simulator
    • 142: Environment module
    • 150: Position controller
    • 160: GPS unit
    • 161: Moving base GPS
    • 162: Rover GPS
    • 170: IMU
    • 180: Device management unit
    • 181: LED
    • 182: Camera
    • 183: Sonar
    • 190: Controller
    • 191: INS algorithm
    • 200: Mission system
    • 300a: First base station
    • 300b: Second base station
    • 300c: Third base station
    • 1150: Higher control system




Claims
  • 1. A control method of an underwater robot equipped with a multi-degree-of-freedom robot arm, comprising: a) a (1-1)-th step of obtaining a propulsive force prediction value by predicting a propulsive force of the underwater robot based on an artificial neural network and configuring a sensorless propulsion controller equipped with a propulsion system to control a speed of the underwater robot; b) a (1-2)-th step of configuring a URM controller that obtains a torque prediction value by predicting an output torque of an actuator constituting the multi-degree-of-freedom robot arm provided in the underwater robot based on the artificial neural network; c) a (1-3)-th step of estimating a position of the underwater robot using dead reckoning based on a Doppler velocity log (DVL) in water; and d) a second step of configuring a robust attitude controller that adapts the sensorless propulsion controller, the URM controller, and a navigation system from an internal influence of the underwater robot including the propulsive force of the underwater robot and the output torque of the multi-degree-of-freedom robot arm or influences of an underwater environment and external disturbance, wherein the sensorless propulsion controller includes: a propeller simulation neural network that predicts the propulsive force of the underwater robot; a first speed controller that controls a rotational speed of a propeller provided in the underwater robot; a first current controller that controls a current to operate the propeller; a force sensor for measuring an output value and repulsion force of the propeller; and a first environmental sensor that measures first environmental data including a temperature and voltage of the propeller.
  • 2. The control method of claim 1, wherein in step a), the propeller simulation neural network is trained to predict the propulsive force of the underwater robot based on a propeller rotational speed control error of the first speed controller, a first current measured value measured by the first current controller, the output value and repulsion force of the propeller measured by the force sensor, and the first environmental data measured by the first environmental sensor to obtain the propulsive force prediction value.
  • 3. The control method of claim 2, wherein in step a), the propeller simulation neural network is trained to simulate a function representing the output value of the propeller using the following equation: y = F(x1, x2). In the above equation, y denotes the output value of the propeller, F denotes the repulsion force of the propeller, x1 denotes the propeller speed control error of the first speed controller, and x2 denotes the first current measured value.
  • 4. A control method of an underwater robot equipped with a multi-degree-of-freedom robot arm, comprising: a) a (1-1)-th step of obtaining a propulsive force prediction value by predicting a propulsive force of the underwater robot based on an artificial neural network and configuring a sensorless propulsion controller equipped with a propulsion system to control a speed of the underwater robot; b) a (1-2)-th step of configuring a URM controller that obtains a torque prediction value by predicting an output torque of an actuator constituting the multi-degree-of-freedom robot arm provided in the underwater robot based on the artificial neural network; c) a (1-3)-th step of estimating a position of the underwater robot using dead reckoning based on a Doppler velocity log (DVL) in water; and d) a second step of configuring a robust attitude controller that adapts the sensorless propulsion controller, the URM controller, and a navigation system from an internal influence of the underwater robot including the propulsive force of the underwater robot and the output torque of the multi-degree-of-freedom robot arm or influences of an underwater environment and external disturbance, wherein the URM controller includes an actuator simulation neural network that predicts an output torque of the actuator based on data transmitted from a position controller for controlling a rotation angle of the multi-degree-of-freedom robot arm, a second speed controller, a second current controller, and a second environmental sensor.
  • 5. The control method of claim 4, wherein in step b), the actuator simulation neural network is trained to predict the output torque of the actuator, to obtain a torque prediction value, based on a rotation angle and speed control error of the multi-degree-of-freedom robot arm from the position controller and the second speed controller, a second current measured value measured by the second current controller, a torque calculated based on the second current measured value, and second environmental data including a temperature and voltage of the actuator measured by the second environmental sensor.
  • 6. A control method of an underwater robot equipped with a multi-degree-of-freedom robot arm, comprising: a) a (1-1)-th step of obtaining a propulsive force prediction value by predicting a propulsive force of the underwater robot based on an artificial neural network and configuring a sensorless propulsion controller equipped with a propulsion system to control a speed of the underwater robot; b) a (1-2)-th step of configuring a URM controller that obtains a torque prediction value by predicting an output torque of an actuator constituting the multi-degree-of-freedom robot arm provided in the underwater robot based on the artificial neural network; c) a (1-3)-th step of estimating a position of the underwater robot using dead reckoning based on a Doppler velocity log (DVL) in water, and configuring a navigation system that corrects the position of the underwater robot through wireless-based real-time kinematic (RTK) position correction when the underwater robot moves to a water surface and a GPS signal of the underwater robot is detected; and d) a second step of configuring a robust attitude controller that adapts the sensorless propulsion controller, the URM controller, and the navigation system from an internal influence of the underwater robot including the propulsive force of the underwater robot and the output torque of the multi-degree-of-freedom robot arm or influences of an underwater environment and external disturbance, wherein the underwater robot includes a GPS unit composed of a moving base GPS for calculating the position of the underwater robot and a Rover GPS for measuring a heading angle of the underwater robot, and the navigation system enables autonomous driving of the underwater robot based on the position and heading values of the underwater robot derived through the GPS unit.
  • 7. A control method of an underwater robot equipped with a multi-degree-of-freedom robot arm, comprising: a) a (1-1)-th step of obtaining a propulsive force prediction value by predicting a propulsive force of the underwater robot based on an artificial neural network and configuring a sensorless propulsion controller equipped with a propulsion system to control a speed of the underwater robot; b) a (1-2)-th step of configuring a URM controller that obtains a torque prediction value by predicting an output torque of an actuator constituting the multi-degree-of-freedom robot arm provided in the underwater robot based on the artificial neural network; c) a (1-3)-th step of estimating a position of the underwater robot using dead reckoning based on a Doppler velocity log (DVL) in water; and d) a second step of configuring a robust attitude controller that adapts the sensorless propulsion controller, the URM controller, and a navigation system from an internal influence of the underwater robot including the propulsive force of the underwater robot and the output torque of the multi-degree-of-freedom robot arm or influences of an underwater environment and external disturbance, wherein the robust attitude controller includes: a simulator that adapts the sensorless propulsion controller, the URM controller, and the navigation system from the internal influence of the underwater robot, including the propulsive force of the underwater robot and the output torque of the multi-degree-of-freedom robot arm, by configuring an optimal control algorithm that allows the navigation system to track the position of the underwater robot through a position tracking simulation of the underwater robot performed based on parameters reflecting the propulsive force prediction value and the torque prediction value and controls the output torque so that the multi-degree-of-freedom robot arm generates a set force while controlling the underwater robot at a set speed at the position of the underwater robot tracked by the navigation system; and an environment module that applies changes in the underwater environment and external disturbance to the simulator to estimate a reinforcement learning control algorithm that reflects changes in parameters and external disturbance in the position tracking simulation of the underwater robot, and controls the sensorless propulsion controller and the URM controller to control the speed of the underwater robot and the output torque of the multi-degree-of-freedom robot arm without error from the influences of the underwater environment and external disturbance based on the reinforcement learning control algorithm.
Priority Claims (1)
Number Date Country Kind
10-2023-0099849 Jul 2023 KR national