The present disclosure relates to a following system, and more particularly, to a following system in which a mobile robot follows a moving object.
In a following system, a mobile robot may autonomously follow a moving object such as a human (e.g., a soldier) or a vehicle. The following system may be of a global positioning system (GPS) type or of a camera type.
In a GPS type following system, a mobile robot acquires a following path according to GPS information of a moving object. In such a following system, an appropriate following path may be obtained only when the GPS has high accuracy. Hereinafter, an appropriate following path denotes a path along which a mobile robot may avoid obstacles while at the same time following a moving object over the shortest distance.
Therefore, according to the GPS type following system, the appropriate following path may not be obtained in an area where the GPS has low accuracy.
In addition, a camera type following system has also been used. In the camera type following system, a camera for observing a scene in front of a mobile robot is provided on the mobile robot, and images from the camera are analyzed to acquire a following path. In this following system, an appropriate following path may be obtained even in an area where a GPS has low accuracy, but the following problems may occur.
First, the mobile robot has no information about the circumstances in front of the moving object, and thus an optimal following path may not be obtained from a long-term perspective.
Second, if the camera of the mobile robot cannot capture images of the moving object due to a long distance between the moving object and the mobile robot, the mobile robot may not be able to acquire the following path.
Third, if captured images have low visibility at night or in a dark place, the mobile robot may not obtain an appropriate following path because it cannot recognize an obstacle in front of it.
The information in the background art described above was obtained by the inventors for the purpose of developing the present disclosure or was obtained during the process of developing the present disclosure. As such, it is to be appreciated that this information did not necessarily belong in the public domain before the patent filing date of the present disclosure.
One or more embodiments of the present disclosure provide a camera type following system capable of obtaining an optimal following path of a mobile robot from a long-term perspective.
One or more embodiments of the present disclosure provide a camera type following system capable of acquiring a following path of a mobile robot even in a case where a camera of the mobile robot is unable to capture images of a moving object because a distance between the moving object and the mobile robot increases.
One or more embodiments of the present disclosure provide a camera type following system capable of obtaining an appropriate following path of a mobile robot even in a case where captured images have low visibility at night or in a dark place.
A following system according to an embodiment, in which a mobile robot follows a moving object, includes a first camera and a mobile robot.
The first camera is worn on the moving object to photograph a scene in front of the moving object.
The mobile robot includes a second camera for photographing a scene in front of the mobile robot, and obtains a following path according to a first front image from the first camera and a second front image from the second camera.
According to a following system of one or more embodiments, a mobile robot acquires a following path according to a first front image from a first camera worn on a moving object and a second front image from a second camera included in the mobile robot.
Accordingly, the following system according to the embodiments may have the following effects compared with a following system according to the related art.
First, the mobile robot may identify the circumstances in front of the moving object, and may obtain an optimal following path from a long-term perspective.
Second, if the second camera of the mobile robot is unable to photograph the moving object because the distance between the moving object and the mobile robot has increased, the mobile robot extracts a past frame corresponding to its current location from among a series of frames of the first front image from the first camera, and obtains a following path whose following target is the image of the extracted frame.
Therefore, even when the second camera of the mobile robot is unable to photograph the moving object due to the increased distance between the moving object and the mobile robot, the mobile robot may obtain the following path.
Third, when the photographed image has low visibility at night or in a dark place, the mobile robot may extract a past frame corresponding to its current location from among a series of frames of the first front image, and may combine an image of the extracted frame with an image of a current frame to obtain a following path according to the combined image.
Therefore, even when the photographed image has low visibility at night or in a dark place, the mobile robot may obtain an appropriate following path.
The following description and the attached drawings are provided for better understanding of the disclosure, and descriptions of techniques or structures related to the present disclosure which would be obvious to one of ordinary skill in the art will be omitted.
The specification and drawings should be considered in a descriptive sense only and not for purposes of limitation. Therefore, the scope of the present disclosure is defined by the appended claims. The terms and words which are used in the present specification and the appended claims should not be construed as being confined to common meanings or dictionary meanings but should be construed as meanings and concepts matching the technical spirit of the present disclosure in order to describe the present disclosure in the best fashion.
Hereinafter, one or more embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Referring to the drawings, in the following system according to the present embodiment, a mobile robot 102 obtains a following path according to a first front image from a first camera 103 worn on a moving object 101 and a second front image from a second camera 102a provided in the mobile robot 102.
Accordingly, the following system according to the present embodiment may have the following effects compared with a following system according to the related art.
First, the mobile robot 102 may identify the circumstances in front of the moving object 101, and may obtain an optimal following path from a long-term perspective.
Second, if the second camera 102a of the mobile robot 102 is unable to photograph the moving object 101 because the distance between the moving object 101 and the mobile robot 102 has increased, the mobile robot 102 extracts a past frame corresponding to its current location from among a series of frames of the first front image from the first camera 103, and obtains a following path whose following target is the image of the extracted frame, through panning and tilting of the second camera 102a.
Therefore, even when the second camera 102a of the mobile robot 102 is unable to photograph the moving object 101 due to the increased distance between the moving object 101 and the mobile robot 102, the mobile robot 102 may obtain the following path.
Third, when the photographed image has low visibility at night or in a dark place, the mobile robot 102 may extract a frame at a past time point corresponding to its current location, from among a series of frames of the first front image, and may combine an image of the extracted frame with an image of a current frame to obtain a following path according to the combined image.
Therefore, even when the photographed image has low visibility at night or in a dark place, the mobile robot 102 may obtain an appropriate following path.
A user controls operations of the mobile robot 102 and transmits the first front image from the first camera 103 to the mobile robot 102, by using a remote control device 104 worn on the moving object 101.
Referring to the drawings, the mobile robot 102 extracts a frame 1Fs at a past time point tS corresponding to its current location, from among a series of frames 1F1 to 1FN of the first front image, as follows.
First, the mobile robot 102 searches for the past location where the mobile robot 102 was located at the past time point tS, that is, the time point at which the moving object 101 was located at the current location of the mobile robot 102.
Next, the mobile robot 102 calculates an estimated arrival time Ta taken for the mobile robot 102 to reach the current location from the past location.
In addition, the mobile robot 102 extracts the frame 1Fs at the past time point corresponding to the estimated arrival time Ta, that is, the frame captured the time Ta before the current time point, from among the series of frames 1F1 to 1FN of the first front image.
The extracted frame 1Fs at the past time point of the first front image may be used as follows.
First, the mobile robot 102 combines an image of the frame 1Fs at the past time point of the first front image with an image of a frame 2FN at a current time point tN of the second front image, and obtains a following path according to the combined image.
Therefore, even when the photographed image has low visibility at night or in a dark place, the mobile robot 102 may obtain an appropriate following path.
Second, when the second camera 102a of the mobile robot 102 is unable to photograph the moving object 101 due to the increased distance between the moving object 101 and the mobile robot 102, the mobile robot 102 obtains a following path, a following target of which is the image of the frame 1Fs at the past time point in the first front image, via panning and tilting of the second camera 102a.
Therefore, even when the second camera 102a of the mobile robot 102 is unable to photograph the moving object 101 due to the increased distance between the moving object 101 and the mobile robot 102, the mobile robot 102 may obtain the following path.
Referring to the drawings, the remote control device 104 includes a microphone 301, a user input unit 302, a wireless communication interface 303, and a controller 304.
The microphone 301 generates an audio signal.
The user input unit 302, e.g., a joystick, generates an operation control signal for controlling operations of the mobile robot 102 according to manipulation of the user.
The wireless communication interface 303 relays communication with the mobile robot 102.
The controller 304 outputs the first front image from the first camera 103 and the audio signal from the microphone 301 to the wireless communication interface 303 (S401). Accordingly, the wireless communication interface 303 transmits the first front image and the audio signal from the controller 304 to the mobile robot 102.
When the operation control signal is input from the user input unit 302 (S403), the controller 304 outputs the operation control signal to the wireless communication interface 303 (S405). Accordingly, the wireless communication interface 303 transmits the operation control signal from the controller 304 to the mobile robot 102.
The above processes S401 to S405 are repeatedly performed until a termination signal, e.g., a power-off signal, is generated (S407).
Accordingly, the mobile robot 102 may obtain the following path according to the first front image, the second front image, and the audio signal.
Here, the audio signal may include an audio signal regarding the duty of the mobile robot 102. In this case, the mobile robot 102 executes the duty according to the audio signal.
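The transmit loop of the remote control device 104 (S401 to S407) may be summarized as the following minimal sketch, in which the camera, microphone, joystick, and radio helper objects are hypothetical stand-ins for the first camera 103, the microphone 301, the user input unit 302, and the wireless communication interface 303.

```python
# Minimal sketch of the remote control device's transmit loop (S401 to S407).
# All helper objects here are hypothetical stand-ins, not an actual API.

def remote_control_loop(camera, microphone, joystick, radio):
    """Relay the first front image, audio, and operation commands to the robot."""
    while not radio.termination_requested():      # S407: run until power-off
        frame = camera.capture_frame()            # first front image
        audio = microphone.read_chunk()           # audio signal
        radio.send(image=frame, audio=audio)      # S401: forward to the robot

        command = joystick.poll()                 # S403: check for user input
        if command is not None:
            radio.send(control=command)           # S405: forward control signal
```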
The mobile robot 102 includes a wireless communication interface 501, an image combiner 502, an audio amplifier 503, a first ultrasound sensor 504 (S1), a second ultrasound sensor 505 (S2), a following path generator 506, a tool portion 507, a driver 508, and a controller 509.
The wireless communication interface 501 receives the first front image, the audio signal, and the operation control signal from the remote control device 104.
The image combiner 502 generates a combined image by combining the first front image IM1 from the wireless communication interface 501 and the second front image IM2 from the second camera 102a.
The audio amplifier 503 amplifies an audio signal Sau from the wireless communication interface 501.
The first ultrasound sensor 504 (S1) generates a ground state signal of a front left portion of the mobile robot 102.
The second ultrasound sensor 505 (S2) generates a ground state signal of a front right portion of the mobile robot 102.
The following path generator 506 obtains the following path according to the combined image from the image combiner 502, the audio signal from the audio amplifier 503, the ground state signal of the front left portion from the first ultrasound sensor 504 (S1), and the ground state signal of the front right portion from the second ultrasound sensor 505 (S2). Therefore, an appropriate following path may be obtained more rapidly compared with the related-art following system using a single camera.
The tool portion 507 is provided to operate the mobile robot 102.
The driver 508 drives the tool portion 507.
The controller 509 controls the driver 508 according to the following path from the following path generator 506 or the operation control signal Sco from the wireless communication interface 501.
Hereinafter, operations of the image combiner 502, the following path generator 506, and the controller 509 will be described in detail.
The image combiner 502 stores a series of frames 1F1 to 1FN of the first front image IM1 (S601).
Also, the image combiner 502 searches for the past location where the mobile robot 102 was located at the past time point tS, that is, the time point at which the moving object 101 was located at the current location of the mobile robot 102 (S602).
Next, the image combiner 502 calculates an estimated arrival time Ta taken for the mobile robot 102 to reach the current location from the past location (S603). The estimated arrival time Ta may be calculated by the following Equation 1:

Ta = dp / Vm   (Equation 1)

In Equation 1, "dp" denotes the distance between the past location and the current location, and "Vm" denotes the average moving velocity applied to moving from the past location to the current location.
Next, the image combiner 502 extracts a frame 1Fs at the past time point corresponding to the estimated arrival time Ta from among the series of frames 1F1 to 1FN of the first front image IM1 (S604).
Next, the image combiner 502 combines the image of the frame 1Fs at the past time point in the first front image IM1 and the image of the frame 2FN at the current time point tN in the second front image IM2 (S605).
In addition, the image combiner 502 provides the combined image to the following path generator 506 (S606).
The above operations S601 to S606 are repeatedly performed until a termination signal is generated.
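Operations S602 to S606 may be illustrated by the following minimal sketch, assuming that frames are timestamped numpy arrays of equal shape; the function names and the simple weighted blend are hypothetical stand-ins for whatever registration and fusion the image combiner 502 actually performs.

```python
# Minimal sketch of the image combiner's operations S602 to S606.
# Frames are assumed to be equal-shape numpy arrays with known timestamps.

import numpy as np

def estimated_arrival_time(dp, vm):
    """Equation 1: time for the robot to cover the distance dp at velocity vm."""
    return dp / vm

def extract_past_frame(frames, timestamps, t_now, ta):
    """S604: pick the first-camera frame captured about Ta before the current time."""
    t_past = t_now - ta
    index = int(np.argmin(np.abs(np.asarray(timestamps) - t_past)))
    return frames[index]

def combine(past_frame, current_frame, alpha=0.5):
    """S605: blend the past first-camera frame with the current second-camera frame."""
    blended = alpha * past_frame.astype(float) + (1.0 - alpha) * current_frame.astype(float)
    return blended.astype(past_frame.dtype)
```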
The following path generator 506 obtains the following path according to the combined image from the image combiner 502, the audio signal from the audio amplifier 503, the ground state signal of the front left portion from the first ultrasound sensor 504 (S1), and the ground state signal of the front right portion from the second ultrasound sensor 505 (S2) (S701).
Examples of how the following path generator 506 sets the following path according to the audio signal will be described as follows.
First, the following path generator 506 may estimate the state of the ground where the moving object 101 is located, by analyzing a sound pattern of a footstep of the moving object 101, e.g., a soldier, as illustrated in the sketch following these examples. For example, if the sound pattern is a "splashing sound", the following path generator 506 obtains the following path taking into account that there is a deep waterway at the location of the moving object 101. If the sound pattern is a "dabbling sound", the following path generator 506 obtains the following path taking into account that there is a shallow waterway at the location of the moving object 101. If the sound pattern is a "trotting sound", the following path generator 506 obtains the following path taking into account that the moving object 101 is walking fast. If the sound pattern is a "sinking sound" in mud, the following path generator 506 obtains the following path taking into account that the moving object 101 is walking on a muddy road.
Second, the following path generator 506 may obtain the following path based on a voice command of the moving object 101. Examples of the voice commands of the moving object 101 may include "watch your right," "come slow," "be quiet," and "hurry up."
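A minimal sketch of the footstep-pattern heuristic might look as follows, assuming some classifier that labels an audio chunk with one of the named patterns; the labels, the GROUND_HINTS table, and the classify() helper are all hypothetical.

```python
# Minimal sketch of the sound-pattern heuristic: map a recognized footstep
# pattern to a ground-state hint that the path generator can take into account.
# The pattern labels and the classify() helper are hypothetical.

GROUND_HINTS = {
    "splashing": "deep_waterway",    # route around the water
    "dabbling": "shallow_waterway",  # crossing may be acceptable
    "trotting": "fast_walking",      # raise the following speed
    "sinking": "muddy_road",         # slow down on the muddy road
}

def ground_hint(audio_chunk, classify):
    """Return a ground-state hint for the moving object's current location."""
    label = classify(audio_chunk)    # e.g., returns "splashing"
    return GROUND_HINTS.get(label, "unknown")
```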
The following path generator 506 provides the following path to the controller 509 (S702).
Also, when an audio signal related to the duty of the mobile robot 102 is input (S703), the following path generator 506 relays the audio signal to the controller 509 (S704).
Examples of the audio signal related to the duty will be described as follows.
First, as a peripheral sound of the moving object 101, a "clank sound" is the sound of the moving object 101 operating a gun.
Second, as a peripheral sound of the moving object 101, a "boom sound" is the sound of an exploding bomb.
Third, as a peripheral sound of the moving object 101, a "whistling sound" is the sound of a strong wind.
Fourth, a certain voice command of the moving object 101 may be an audio signal related to the duty.
In addition, when the image of the moving object 101 disappears from the second front image (S705), the following path generator 506 performs an emergency following mode (S706).
The above operations S701 to S706 are repeatedly performed until a termination signal is generated (S707).
In the emergency following mode, the following path generator 506 searches for the image of the moving object 101 by panning and tilting the second camera 102a (S801).
When the image of the moving object 101 is found (S802), the following path generator 506 obtains a following path, a following target of which is the image of the moving object 101 (S803). Also, the following path generator 506 provides the controller 509 with the acquired following path (S804).
When the image of the moving object 101 is not found (S802), the following path generator 506 performs operations S805 to S807, and then performs operation S802 and the subsequent operations again.
In operation S805, the following path generator 506 controls the image combiner 502 to extract the frame 1Fs at the past time point tS corresponding to the current location, from among the series of frames 1F1 to 1FN in the first front image IM1 from the first camera 103. This operation S805 is performed through operations S602 to S604 described above.
Next, the following path generator 506 obtains the following path, the following target of which is the image of the extracted frame, by panning and tilting the second camera 102a (S806).
Next, the following path generator 506 provides the controller 509 with the acquired following path (S807), and performs operation S802 and the subsequent operations again.
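The emergency following mode (S801 to S807) may be sketched as the following loop; the helper objects and method names are hypothetical stand-ins for the second camera 102a, the image combiner 502, the following path generator 506, and the controller 509.

```python
# Minimal sketch of the emergency following mode (S801 to S807): search for
# the moving object by panning and tilting, and fall back to a past
# first-camera frame while the object remains out of view.

def emergency_following_mode(camera, combiner, planner, controller, terminated):
    """Keep producing a following path even while the moving object is lost."""
    while not terminated():
        target = camera.pan_tilt_search()            # S801: scan for the object
        if target is not None:                       # S802: object found again
            path = planner.path_to(target)           # S803: follow the object itself
        else:
            past_frame = combiner.extract_past_frame()   # S805: frame at time tS
            path = planner.path_to_scene(past_frame)     # S806: follow the past scene
        controller.apply(path)                       # S804/S807: hand off the path
```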
The controller 509 controls the driver 508 according to the following path provided from the following path generator 506 (S901).
Next, when the audio signal related to the duty is input from the following path generator 506 (S902), the controller 509 controls the driver 508 according to the audio signal from the following path generator 506 (S903).
Also, when an operation control signal is input from the wireless communication interface 501 (S904), the controller 509 controls the driver 508 according to the operation control signal from the wireless communication interface 501 (S905).
The above operations S901 to S905 are repeatedly performed until a termination signal is generated (S906).
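The controller's dispatch logic (S901 to S905) may be sketched as follows; the helper objects and their methods are hypothetical stand-ins for the following path generator 506, the wireless communication interface 501, and the driver 508.

```python
# Minimal sketch of the controller's dispatch loop (S901 to S905): drive along
# the planned following path, but let duty-related audio and explicit
# operation control signals from the user take over when they arrive.

def controller_loop(path_source, duty_audio, radio, driver, terminated):
    """Dispatch driver commands from the path, duty audio, or remote control."""
    while not terminated():
        driver.follow(path_source.latest())       # S901: default behavior

        duty = duty_audio.poll()                  # S902: duty-related audio input?
        if duty is not None:
            driver.execute(duty)                  # S903: act on the duty

        command = radio.poll_control()            # S904: remote operation command?
        if command is not None:
            driver.execute(command)               # S905: user override
```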
In addition, as another embodiment of the present disclosure, a following system in which the mobile robot 102 follows a plurality of moving objects 101 may be provided. This following system includes a plurality of first cameras 103 and the mobile robot 102.
The first cameras 103 are respectively worn on the moving objects 101 to photograph a scene in front of the moving objects 101.
The mobile robot 102 includes the second camera 102a for photographing a scene in front of the mobile robot 102, and acquires a following path according to first front images from the first cameras 103 and the second front image from the second camera 102a.
Here, the above descriptions of the foregoing embodiment may also be applied to the present embodiment.
Referring to the drawings, the mobile robot 102 generates a panorama image 1004 by combining first front images 1001 and 1003 from the first cameras 103 with the second front image 1002 from the second camera 102a, and obtains the following path according to the panorama image 1004.
Since the following path is obtained according to the panorama image 1004 as described above, the mobile robot 102 may obtain a more appropriate following path.
Here, the panorama image 1004 is generated based on the second front image 1002 from the second camera 102a. The first images 1001 and 1003 from the first cameras 103 are extracted through the execution of the operations S602 to S604 described above.
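A minimal sketch of composing the panorama image 1004 is given below, assuming the three inputs are equal-height numpy arrays and using plain horizontal concatenation as a hypothetical stand-in for real image stitching; placing the second front image 1002 at the center is likewise an assumption.

```python
# Minimal sketch of building the panorama image 1004: past frames from the
# first cameras (extracted as in S602 to S604) are placed on either side of
# the current second-camera frame. Plain concatenation stands in for real
# stitching and assumes equal-height inputs.

import numpy as np

def build_panorama(first_image_left, second_image, first_image_right):
    """Compose the panorama with the second front image at the center."""
    return np.hstack([first_image_left, second_image, first_image_right])
```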
As described above, according to the following system of the present embodiment, the mobile robot obtains the following path according to the first front image from the first camera worn on the moving object and the second front image from the second camera provided in the mobile robot.
Accordingly, the following system according to the present embodiment may have the following effects compared with a following system according to the related art.
First, the mobile robot may identify the circumstances in front of the moving object, and thus an optimal following path may be obtained from a long-term perspective.
Second, in a case where the second camera of the mobile robot is unable to photograph the moving object because the distance between the moving object and the mobile robot has increased, the mobile robot extracts a frame at a past time point corresponding to its current location, from among a series of frames in the first front image of the first camera, and may obtain a following path, a following target of which is the image of the extracted frame.
Therefore, even when the second camera of the mobile robot cannot capture images of the moving object due to the increased distance between the moving object and the mobile robot, the mobile robot may obtain the following path.
Third, when the photographed image has low visibility at night or in a dark place, the mobile robot may extract a past frame corresponding to its current location from among a series of frames of the first front image, and may combine an image of the extracted frame with an image of a current frame to obtain a following path according to the combined image.
Therefore, even when the photographed image has low visibility at night or in a dark place, the mobile robot may acquire an appropriate following path.
While this disclosure has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims. The preferred embodiments should be considered in a descriptive sense only and not for purposes of limitation. Therefore, the scope of the disclosure is defined not by the detailed description of the disclosure but by the appended claims, and all differences within the scope will be construed as being included in the disclosure.
The present disclosure may be used in various following systems.
Number | Date | Country | Kind
---|---|---|---
10-2015-0120550 | Aug 2015 | KR | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/KR2015/009215 | 9/2/2015 | WO | 00