This application claims priority to and the benefit of Japanese Patent Application No. 2022-060614 filed on Mar. 31, 2022, the entire disclosure of which is incorporated herein by reference.
The present invention relates to a mobile object and a control method therefor.
Autonomous mobile bodies, such as compact mobility vehicles and robots, are known that travel in the vicinity of a user to guide the user to a destination or to carry baggage for the user. International Publication No. 2017/115548 proposes a mobile object that moves to an appropriate position relative to a user, based on information on the user's trunk and legs. In addition, Japanese Patent Laid-Open No. 2021-22108 proposes, for a mobile object deployed in an airport, a technique of leading a user while maintaining a certain distance to the user, thereby reducing the possibility that the user cannot reach a designated place by a boarding time.
However, the user does not always move along an assumed path, and there is a sufficient possibility that the user will suddenly change the path. When the user deviates from the predicted path, there is a possibility that the mobile object will lose sight of the user (“lost”) due to an obstacle or the like. Once the user has been lost, it is difficult for the mobile object to search for the particular user. Hence, it is important to promptly determine such a path change made by the user, and to modify the path of the mobile object in accordance with the path change.
An object of the present invention is to provide a mobile object that suitably modifies a path in accordance with a path change made by a user. Another object is to suitably conduct re-authentication of the user when the mobile object has lost the user.
According to one aspect of the present invention, there is provided a mobile object comprising: a first sensor configured to detect a target object in surroundings; a setting unit configured to recognize and set a user; a path generation unit configured to acquire a predicted path indicating a future path of the user, based on a movement of the user that has been set by the setting unit and an output from the first sensor, and configured to generate a path of the mobile object to lead the user on a forward side of the user, based on the predicted path; and a travel control unit configured to cause the mobile object to travel in accordance with the path that has been generated, wherein upon detection of a change in a moving direction of the user, based on the output from the first sensor, the path generation unit modifies the generated path to cause the mobile object to move to the forward side of the user in accordance with the moving direction that has been changed.
According to another aspect of the present invention, there is provided a control method for a mobile object including a first sensor configured to detect a target object in surroundings, the control method comprising: a setting step of recognizing and setting a user; a path generation step of acquiring a predicted path indicating a future path of the user that has been set in the setting step, based on a movement of the user and an output from the first sensor, and generating a path of the mobile object to lead the user on a forward side of the user, based on the predicted path; and a travel control step of causing the mobile object to travel in accordance with the path that has been generated, wherein upon detection of a change in a moving direction of the user, based on the output from the first sensor, the path generation step modifies the generated path to cause the mobile object to move to the forward side of the user in accordance with the moving direction that has been changed.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note that the following embodiments are not intended to limit the scope of the claimed invention, and not all combinations of the features described in the embodiments are necessarily essential to the invention. Two or more of the multiple features described in the embodiments may be combined as appropriate. Furthermore, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
The mobile bodies 100 are arranged in various facilities such as shopping malls, parks, stations, airports, and parking lots, and provide various services to users that have been set (hereinafter, each of such users will be referred to as a “set user”). For example, the mobile object 100 is capable of leading, following, and guiding the set user to be supported, and is capable of making a delivery in response to a request of an authenticated user who has been registered beforehand. The service provided by the mobile object 100 can be changed in accordance with a drive mode of the mobile object, and the drive mode will be described later with reference to
The server 200 monitors a plurality of mobile bodies 100, causes the mobile bodies to move to respective areas to enhance convenience of users, and controls their arranged positions and the like. Specifically, the server 200 causes the plurality of mobile bodies 100 to move to locations where the probability that the mobile bodies will be used is higher, within areas of a building or the like in which the plurality of mobile bodies 100 are arranged. For example, control is conducted such that a mobile object is caused to move to the vicinity of a location where a crowd of people is present, and the number of mobile bodies 100 in the area is increased in accordance with the degree of the crowd. In addition, in registering a user, the server 200 may acquire information such as veins from the mobile object 100, and may register and authenticate the user. Note that whether the authentication via the server 200 is necessary may be determined separately in accordance with the drive mode of the mobile object 100 used by the user. For example, in the case of use in the delivering mode, only a user who has been registered beforehand via the server 200 may be authenticated. On the other hand, in the leading, following, and guiding modes, users do not have to be registered beforehand, and a user may be allowed to use the mobile object 100 simply by setting himself or herself as the user with the mobile object 100. Identification information that has been acquired (vein information, feature information obtained by use of a captured image, and the like) is used for confirmation processing (re-authentication) when, for example, the user and the mobile object 100 are separated from each other by a predetermined distance or more, that is, when the user has been lost and then rejoins the mobile object.
The mobile object 100 and the server 200 are capable of communicating bidirectionally through a network 300. More specifically, the mobile object 100 accesses the network 300 via an access point 301 or 302 in the vicinity, and becomes capable of bidirectionally communicating with the server 200 through the network 300. For example, in a case where the mobile object 100 is installed in a building such as a shopping mall or on its premises, the server 200 is capable of identifying a rough position of the mobile object 100 by use of the access point 301 or 302 that has been accessed by the mobile object 100. That is, the access points 301 and 302 each hold position information of the place where the access point is installed, and it is possible to identify a rough position of the mobile object 100 in accordance with the position information. Further, according to the position information of the access point, it is possible to easily recognize on which floor in the building the mobile object 100 is located (altitude information). Furthermore, the server 200 is capable of identifying a detailed position by use of position information output from a GNSS, to be described later, provided in the mobile object 100, or the like. In addition, by combining these pieces of information, the server 200 is capable of acquiring position information indicating, for example, that the mobile object 100 is located in the vicinity of an elevator of an underground parking lot. When the position information output from the GNSS includes altitude information, the altitude information may be used instead of the position information of the access point.
Next, a configuration example of the mobile object 100 according to the present embodiment will be described with reference to
As illustrated in
In addition, the mobile object 100 is an electrically autonomous mobile object with a battery 106, to be described later, used as a main power supply. The traveling unit corresponds to a three-wheeled vehicle including the front wheel 20 and the pair of left and right rear wheels 21a and 21b. The traveling unit may be in another form, such as a four-wheeled vehicle. In addition, a seat, not illustrated, can be provided in the mobile object 100.
The housing unit 26 indicates a space in which user's baggage or the like can be loaded. When vein authentication is conducted by a vein sensor 107, to be described later, and the user setting is conducted, the lock of a door (not illustrated) of the housing unit 26 is released, and the user is able to load the baggage. Then, after a predetermined time elapses or when the set user moves away from the mobile object 100, the door is locked. By conducting the vein authentication for the user again, it is possible to unlock the door. Therefore, the vein information of the set user is held in a memory or the like provided in the mobile object 100.
As illustrated in
The detection unit 108 is a 360-degree camera, and is capable of acquiring an image of 360 degrees in the horizontal direction at a time with the mobile object 100 as the center. Note that this is not intended to limit the present embodiment; for example, a camera in which the detection unit 108 is provided so as to be rotatable in the horizontal direction and images captured in a plurality of directions are combined to acquire a 360-degree image may be adopted. As another configuration, a plurality of detection units may be provided to capture images in respectively different directions, and the individual images may be analyzed. By analyzing the 360-degree image that has been captured by the detection unit 108, the mobile object 100 is capable of detecting a target object, such as a human or an object, in the surroundings of the mobile object 100.
The operation panel 109 is a liquid crystal display of a touch panel type having a display unit and an operation unit. In the present invention, the display unit and the operation unit may be configured to be provided individually. The operation panel 109 displays various types of information such as a setting screen for setting the drive mode of the mobile object 100 and map information for giving current position information and the like to the user.
A detailed configuration of each apparatus included in the present system will be described with reference to
The server 200 serves as an information processing apparatus such as a personal computer, and includes a control unit 210, a storage unit 220, and a communication unit 230. The control unit 210 includes a registration authentication unit 211 and a monitoring unit 212. By reading and executing a control program stored in the storage unit 220, the control unit 210 achieves various processes. In addition to the above control program, the storage unit 220 stores various data, setting values, registration information of users, and the like. The registration information of users includes authentication information including vein information and feature information of the users. The communication unit 230 controls communication with the mobile object 100 through the network 300.
The registration authentication unit 211 registers users, and authenticates the users that have been registered beforehand. The user registration may be conducted via the mobile object 100, or may be conducted by another device, such as a smartphone or a PC. In a case where the user registration is conducted via the mobile object 100, the vein information that has been acquired as the authentication information by the vein sensor 107 and the feature information of the user that has been extracted from the image acquired by the detection unit 108 are registered in association with the identification information of the user. In addition, the authentication information may include information of a password that has been set by the user. For the identification information, a user's name, a registration number, or the like can be used.
The monitoring unit 212 monitors a plurality of mobile bodies 100 arranged in a predetermined area, and controls a standby position, a move-around area, and the like of the mobile object 100 in accordance with a situation of a facility or the like where the plurality of mobile bodies 100 are arranged. Regarding the situation of the facility where the plurality of mobile bodies 100 are arranged, for example, the degree of the crowd may be acquired by conducting an image analysis on a captured image from each mobile object 100. In such a case, for example, an image that has been captured by the mobile object 100 in a moving-around mode is transmitted to the server 200, and is used. The standby position is provided at a predetermined place, and denotes a position where the mobile object 100 with no user setting stops. Note that even in a case where the user setting has been made, it is possible to temporarily stop at the standby position. For example, when the set user enters a place where the mobile object 100 is not capable of entering together, the mobile object 100 can wait at a standby position in the vicinity until the set user conducts re-authentication. In addition, the move-around area indicates an area where the mobile object 100 with no user setting moves around in the moving-around mode to be described later. The monitoring unit 212 monitors the positions of the plurality of mobile bodies 100, and for example, causes the mobile object 100 waiting on standby in the vicinity of a place where there are not many people to move to the vicinity of a place where many people are gathered. Accordingly, a more highly convenient system can be provided. Further, the monitoring unit 212 may grasp the battery remaining amount of each mobile object 100, and may plan a schedule for charging each mobile object 100 for efficient charging at a charging station.
The mobile object 100 includes a control unit 101, a microphone 102, a speaker 103, a GNSS 104, a communication unit 105, a battery 106, and a storage unit 110, in addition to the configuration described with reference to
The control unit 101, such as an electronic control unit (ECU), controls each device connected by a signal line. By reading and executing a program stored in the storage unit 110, the control unit 101 performs various processes. In addition to the control program, the storage unit 110 includes an area for storing various data, setting values, and the like, and a work area for the control unit 101. Note that the storage unit 110 does not have to be configured with a single device, and can be configured to include at least one of memory devices such as a ROM, a RAM, an HDD, and an SSD, for example.
The operation panel 109 denotes a device including an operation unit and a display unit, and may be achieved by, for example, a liquid crystal display of a touch panel type. In addition, the operation unit and the display unit may be individually provided. Various operation screens, map information, notification information to the user, inquiry information, and the like are displayed on the operation panel 109. Further, in addition to the operation panel 109, the mobile object 100 is capable of interacting with the user via the microphone 102 and the speaker 103.
A global navigation satellite system (GNSS) 104 receives a GNSS signal, and detects the current position of the mobile object 100. The communication unit 105 accesses the network 300 through the access point 301 or 302, and bidirectionally communicates with the server 200, which is an external apparatus. The battery 106 is, for example, a secondary battery such as a lithium ion battery, and the mobile object 100 is capable of traveling by itself on the above traveling unit with electric power supplied from the battery 106. In addition, the electric power from the battery 106 is supplied to each load.
A control configuration of the control unit 101 will be described. The control unit 101 includes, as the control configuration, a voice recognition unit 121, an interaction unit 122, an image analysis unit 123, a user setting unit 124, a position determination unit 125, a path generation unit 126, and a travel control unit 127. The voice recognition unit 121 receives sounds in the surroundings of the mobile object 100 through the microphone 102, and recognizes and interprets, for example, a voice from the user. In interacting with the user by voice, the interaction unit 122 generates a question or an answer, and causes the question or the answer to be output as voice through the speaker 103. Note that, regarding the interaction with the user, conversations of the user that have been subjected to voice recognition, and an answer, a question, a warning, and the like from the mobile object 100 may be displayed on the operation panel 109 in accordance with the voice output or voice recognition.
The image analysis unit 123 analyzes an image that has been captured by the 360-degree camera that is the detection unit 108. Specifically, the image analysis unit 123 recognizes a target object including a human or an object from the captured image, and analyzes the image to extract a feature of the user. The feature of the user includes, for example, various features such as a color of clothes, baggage, and a behavioral habit.
The user setting unit 124 sets a user who uses the mobile object 100. Specifically, the user setting unit 124 sets the user by storing the vein information of the user that has been acquired by the vein sensor 107, in the storage unit 110. In addition, the user setting unit 124 may store the feature information of the set user that has been extracted by the image analysis unit 123 in association with the above vein information. After the user is set by the user setting unit 124, the vein information and the feature information stored in the storage unit 110 are used for reconfirming (re-authenticating) the user, when the user has been lost or the like. Here, “lost” denotes that the mobile object 100 has lost sight of the set user for a predetermined time or more. When the user has been lost, the mobile object 100 moves to, for example, a place in the vicinity where the mobile object 100 can stop, temporarily stops, and waits on standby until the vein sensor 107 or the detection unit 108 confirms the set user.
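As a purely illustrative sketch, the information held for the set user can be thought of as a small record kept in the storage unit 110 and consulted for re-authentication; the field and method names below are assumptions made for illustration and are not part of the embodiment.

```python
from dataclasses import dataclass, field
from typing import List, Optional
import time

@dataclass
class SetUserRecord:
    """Hypothetical record for the set user held in the storage unit 110."""
    vein_template: bytes                           # acquired by the vein sensor 107 at setup
    feature_vector: Optional[List[float]] = None   # appearance features from the image analysis unit 123
    last_seen_at: float = field(default_factory=time.time)

    def mark_seen(self) -> None:
        """Called each time the detection unit 108 confirms the set user."""
        self.last_seen_at = time.time()

    def unseen_for(self) -> float:
        """Seconds since the set user was last confirmed; used to judge a 'lost' state."""
        return time.time() - self.last_seen_at
```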
The position determination unit 125 determines a position relative to the set user as a position where the mobile object 100 travels. For example, in leading the user, the position determination unit 125 determines at which position relative to the set user the mobile object leads the set user, in accordance with information on the movement of the user and the surrounding environment. The leading position is desirably a position from which it is easy for the user to recognize the mobile object 100 and at which the mobile object is less likely to come into contact with an obstacle, including people in the surroundings. Note that the leading position is basically determined to be a position on the predicted path of the user while a predetermined distance to the user is maintained.
The path generation unit 126 generates a path along which the mobile object 100 moves in accordance with the current drive mode of the mobile object 100. The generation of the path here is not a path to the destination but a path for a short distance, for example, five meters or so. Therefore, the path generation unit 126 repeatedly generates a path until the mobile object 100 reaches a destination or the user stops. In addition, when the user deviates from the path, the generated path is modified in accordance with a user's movement. Further, in the leading mode, the path generation unit 126 predicts a movement of the set user from an analysis result of the image analysis unit 123 that has analyzed the captured image of the detection unit 108, and obtains a predicted path indicating a future path of the user. Furthermore, the path generation unit 126 generates a path of the mobile object so as to lead the user on a forward side of the user, based on the predicted path that has been obtained. Details of path generation will be described later.
The travel control unit 127 controls traveling of the mobile object 100 so as to maintain the leading position, in accordance with the path that has been generated by the path generation unit 126. Specifically, the travel control unit 127 causes the mobile object 100 to move along the generated path, and controls its movement while adjusting the positional relationship with the set user by use of the captured image of the detection unit 108. For example, when the distance to the user becomes a predetermined distance or more, the speed is decreased, and when the set user deviates to the left side of the path, the mobile object 100 similarly moves to the left side to maintain the leading position. On the other hand, when it is determined that the path has been changed, the path is regenerated (modified).
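The speed and lateral adjustment described above could, for example, look like the following sketch; the gains and increments are hypothetical values chosen only for illustration.

```python
def adjust_leading_motion(distance_to_user: float,
                          target_distance: float,
                          user_lateral_offset: float,
                          current_speed: float) -> tuple[float, float]:
    """Return (new_speed, lateral_shift) that keep the leading position (illustrative).

    distance_to_user:    current distance between the mobile object and the set user [m]
    target_distance:     desired leading distance [m]
    user_lateral_offset: sideways deviation of the user from the path [m]
    """
    # Decelerate when the gap to the user reaches the predetermined distance or more.
    if distance_to_user >= target_distance:
        new_speed = max(0.0, current_speed - 0.2)
    else:
        new_speed = current_speed + 0.1
    # Move to the same side as the user so that the leading position is maintained.
    lateral_shift = 0.5 * user_lateral_offset
    return new_speed, lateral_shift
```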
Next, a drive mode of the mobile object 100 according to the present embodiment will be described with reference to
The mobile object 100 includes, as the drive modes, for example, at least one of a leading mode, a following mode, a guiding mode, a delivering mode, a moving-around mode, and an emergency mode. The leading mode is a mode in which the mobile object 100 is controlled to travel on a forward side of the user in accordance with a moving speed of the user, in a state in which no destination is set. The following mode is a mode in which the mobile object 100 is controlled to travel on a rearward side of the user in accordance with a moving speed of the user in a state in which no destination is set. The guiding mode is a mode in which the mobile object 100 is controlled to travel in accordance with a predetermined speed or a moving speed of the user on a forward side of the user toward a destination in a state in which the destination is set by the user.
The delivering mode is a mode in which the mobile object 100 is controlled to travel at high speed toward a destination in a state in which the destination is set by the user. In addition, the delivering mode is a mode in which a package is loaded in the housing unit 26 so as to be delivered to the destination. The moving-around mode is a mode in which the mobile object 100 is controlled to travel at low speed toward a predetermined station (a standby station or a charging station) as a destination. In addition, the moving-around mode is a mode in which no user is set and a user is searched for. In the moving-around mode, the mobile object 100 travels while monitoring the surrounding environment by the detection unit 108. For example, upon detection of a human approaching the mobile object 100 while raising a hand, the mobile object 100 determines that such a human is a user who desires to use the mobile object 100, moves to a forward side of the user, and then stops. The emergency mode is a mode in which the mobile object 100 is controlled to travel at high speed toward a predetermined station as a destination. The emergency mode is used, for example, to cause the mobile object 100 to move to a charging station when the charge amount of the battery 106 becomes lower than a predetermined value, or to deliver a package to a lost-and-found station that stores lost articles when the set user forgets the package.
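For illustration only, the drive modes could be represented by an enumeration that records whether a destination is set and the speed policy; the attribute names below are assumptions and are not defined in the embodiment.

```python
from enum import Enum

class DriveMode(Enum):
    #               destination               speed policy
    LEADING       = ("none",                  "user speed, forward side of user")
    FOLLOWING     = ("none",                  "user speed, rearward side of user")
    GUIDING       = ("set by user",           "predetermined or user speed, forward side")
    DELIVERING    = ("set by user",           "high speed")
    MOVING_AROUND = ("predetermined station", "low speed, no set user")
    EMERGENCY     = ("predetermined station", "high speed")

    def __init__(self, destination: str, speed_policy: str):
        self.destination = destination
        self.speed_policy = speed_policy

# Example: querying a mode's properties
print(DriveMode.LEADING.speed_policy)   # -> "user speed, forward side of user"
```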
Next, an operation overview of the present system will be described with reference to
For example, as illustrated in
Note that, in such a facility, the user often gets interested in a shop, a product, or the like that comes into view, and thus may suddenly change the path. In such cases, for example, there is a high possibility that the mobile object 100b will lose sight (“lost”) of the user B, who is the target for leading and is subject to tracking (“tracking”). In addition, there are many objects such as people and walls that obstruct the tracking of the set user. For example, someone who cuts across between the mobile object 100 and the set user, or an object such as the wall of a corner when the mobile object 100 turns the corner, may prevent the detection unit 108 from catching sight of the set user. Hereinafter, a service in the leading mode from among the various services provided by the present system will be described.
In the following description, a basic processing flow of the leading mode in the mobile object according to the present embodiment will be described with reference to
First, a processing procedure in the leading mode of the mobile object 100 according to the present embodiment will be described with reference to
First, in S101, the control unit 101 sets a user. In the mobile object 100 that is moving around or stopped and for which no user has been set, the detection unit 108 conducts an image analysis of the surrounding environment as needed. In such a situation, for example, upon detection of a human approaching the mobile object 100 while raising a hand, the control unit 101 causes the travel control unit 127 to move toward that human as a target and then stop the mobile object 100. Then, when the human inserts a hand into the detection range of the vein sensor 107, the vein information is acquired. The control unit 101 causes the user setting unit 124 to store the acquired vein information in the storage unit 110 and to set the human as a set user.
Subsequently, in S102, the control unit 101 displays a mode selection screen on the operation panel 109 to prompt the set user to select the drive mode of the mobile object 100. In addition, in S103, the control unit 101 causes the detection unit 108 to capture an image of the set user, and causes the image analysis unit 123 to extract a feature point of the user. The extracted features include, for example, a color of clothes, baggage, a behavioral habit, and the like. These features are used at all times for recognizing the user when leading the user or the like. Note that S102 and S103 do not have to be performed in this order, and may be performed in the reverse order or in parallel.
Next, in S104, the control unit 101 starts movement control of the mobile object 100 in the drive mode that has been selected by the user via the mode selection screen displayed on the operation panel 109. In the present embodiment, a case where the leading mode is selected will be described. When the movement control in the leading mode is started, the control unit 101 starts monitoring the movement of the set user in accordance with an analysis result of the image analysis unit 123 that has analyzed a captured image. Specifically, the control unit 101 monitors at least a current position and an orientation of the user, and predicts the subsequent moving direction and moving speed of the user. The current position may be acquired, for example, as a relative position by use of a distance from the mobile object 100. The orientation of the user is determined from, for example, the line of sight (face) and the orientation of the body (trunk). According to the present embodiment, the direction in which the user is predicted to move in the future is determined in accordance with the line of sight, and the current moving direction is determined in accordance with the trunk orientation. This is because a human does not always face or look in its moving direction, and the trunk is more likely to be directed in the moving direction than the face or the line of sight. On the other hand, an object in which the user is interested may be present in the direction of the line of sight, and in such a case, that direction can become a future path of the user. Note that the method by which the detection unit 108 acquires the line-of-sight direction from the captured image of the set user is not particularly limited, and various methods may be used. For example, the determination may be made from the recognized positions of the pupil and the iris in an eye of the set user. In a case where the position of the pupil or the iris is not recognizable, the orientation of the face may be used instead. In addition, it is possible to acquire the moving speed of the user, based on time-series data related to the current position of the user.
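As a concrete illustration of the last point, the moving direction and moving speed can be derived from time-series positions of the user; the sketch below assumes the positions are already available as (x, y) coordinates relative to the mobile object 100, which is an assumption made only for this example.

```python
import math
from typing import List, Tuple

def moving_direction_and_speed(positions: List[Tuple[float, float]],
                               dt: float) -> Tuple[float, float]:
    """Estimate the user's heading [rad] and speed [m/s] from recent positions (illustrative).

    positions: user positions (x, y) sampled every dt seconds, oldest first.
    """
    if len(positions) < 2:
        raise ValueError("at least two position samples are needed")
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    elapsed = dt * (len(positions) - 1)
    heading = math.atan2(y1 - y0, x1 - x0)          # current moving direction
    speed = math.hypot(x1 - x0, y1 - y0) / elapsed  # average speed over the window
    return heading, speed
```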
Next, in S105, the control unit 101 causes the position determination unit 125 and the path generation unit 126 to perform path generation processing of the mobile object 100. In the path generation processing, a predicted path indicating the future movement of the user is acquired, and a path indicating the moving route of the mobile object 100 is further generated. Details of the path generation processing will be described later with reference to
Next, in S107, the control unit 101 determines whether it is necessary to regenerate a path during traveling. In a case where it is determined that the path has to be regenerated, the processing returns to S105, and in the other cases, the processing proceeds to S108. There are two cases where it is necessary to regenerate the path: a case where a change of a predetermined amount or more in the moving direction of the user is detected within a predetermined time, and a case where the user approaches within a predetermined distance of an end point of the path of the mobile object 100 generated in S105, at which point the next path is generated. Details of the above change in the moving direction of the user will be described later with reference to
In S108, the control unit 101 determines whether the current drive mode (here, the leading mode) has been ended by the set user. For example, the user is able to give an instruction to end the leading mode via the operation panel 109. In addition, the user is able to give an instruction to end the leading mode by voice via the microphone 102. In a case where it is determined that the mode has been ended, the processing of the present flowchart is ended, and in the other cases, the processing returns to S106.
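Taken together, S101 to S108 can be read as the control loop sketched below; the method names are placeholders for the processing of the respective steps and are not taken from the embodiment.

```python
def run_leading_mode(mobile) -> None:
    """Illustrative outline of the leading-mode flow (S101-S108)."""
    user = mobile.set_user()                       # S101: vein authentication and user setting
    mobile.select_drive_mode(user)                 # S102: mode selection on the operation panel 109
    features = mobile.extract_user_features(user)  # S103: feature extraction for tracking
    mobile.start_monitoring(user, features)        # S104: start monitoring the user's movement
    path = mobile.generate_path(user)              # S105: path generation processing
    while not mobile.mode_ended(user):             # S108: end instruction from the set user
        mobile.travel_along(path, user)            # S106: travel control along the generated path
        if mobile.needs_path_regeneration(user, path):   # S107: direction change or path end approached
            path = mobile.generate_path(user)            # return to S105
```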
Subsequently, a detailed processing procedure of the path generation processing (S105) in the leading mode of the mobile object 100 according to the present embodiment will be described with reference to
First, in S201, the control unit 101 acquires a future movement (predicted path) of the set user from the image that has been captured by the detection unit 108. A specific method for acquiring the predicted path will be described later with reference to
Next, in S203, the control unit 101 generates the path of the mobile object 100, based on the predicted path of the set user and the surrounding environment information. Specifically, the control unit 101 generates the path of the mobile object 100 to travel through a location on a forward side by a predetermined distance from the set user in accordance with the predicted path acquired in S201 and not to come into contact with the target object that has been detected from the surrounding environment information acquired in S202. When the path is generated, the processing of the present flowchart is ended, and the processing proceeds to S106 of
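A minimal sketch of the generation in S203 is shown below; it assumes the predicted path is reduced to a heading from the user's position and that detected target objects are given as points, and the lead distance, spacing, and clearance values are hypothetical parameters.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def generate_lead_path(user_pos: Point,
                       predicted_heading: float,
                       obstacles: List[Point],
                       lead_distance: float = 2.0,
                       spacing: float = 1.0,
                       clearance: float = 0.8,
                       num_waypoints: int = 4) -> List[Point]:
    """Generate a short path on the forward side of the user along the predicted path (illustrative)."""
    ux, uy = user_pos
    dx, dy = math.cos(predicted_heading), math.sin(predicted_heading)
    px, py = -dy, dx                                    # unit vector perpendicular to the heading
    path: List[Point] = []
    for i in range(num_waypoints):
        d = lead_distance + i * spacing                 # stay a lead distance ahead of the user
        wx, wy = ux + d * dx, uy + d * dy
        for ox, oy in obstacles:
            if math.hypot(wx - ox, wy - oy) < clearance:
                # Nudge the waypoint sideways, away from the detected target object.
                side = 1.0 if (wx - ox) * px + (wy - oy) * py >= 0 else -1.0
                wx += side * clearance * px
                wy += side * clearance * py
        path.append((wx, wy))
    return path
```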
Next, a method for acquiring a predicted path of the user according to the present embodiment will be described with reference to
As described above, the line-of-sight direction of the user is acquired by detecting the positions of the pupil and the iris in an eye (for example, positions with respect to the inner corner of the eye) of the set user recognized from the captured image of the user captured by the detection unit 108. In addition, in a case where the position of the pupil or the iris is not recognizable, the orientation of the face may be used instead. As a method for estimating the orientation of the face, various image recognition algorithms can be used; for example, a method of normalizing the face by detecting face attributes (eyes, nose, or mouth), a method of extracting a translation vector and a rotation vector of the face, or the like can be used. Similarly, various image recognition algorithms can be used for the trunk orientation. Further, the moving direction of the user can be acquired, based on time-series data related to the current position of the user.
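As one purely illustrative realization of the pupil-based method, the pupil offset between the two eye corners can be mapped to a horizontal gaze angle; the maximum-angle constant is an assumption, and the embodiment leaves the concrete algorithm open.

```python
def horizontal_gaze_angle(inner_corner_x: float,
                          outer_corner_x: float,
                          pupil_x: float,
                          max_angle_deg: float = 45.0) -> float:
    """Map the pupil position within the eye to a horizontal gaze angle [deg] (illustrative).

    0 deg means looking straight ahead; positive values mean looking toward the
    outer corner of the eye. Inputs are pixel x-coordinates from the captured image.
    """
    eye_width = outer_corner_x - inner_corner_x
    if eye_width == 0:
        raise ValueError("degenerate eye landmarks")
    # An offset ratio of 0 corresponds to the pupil centred between the two corners.
    offset_ratio = (pupil_x - inner_corner_x) / eye_width - 0.5
    angle = 2.0 * offset_ratio * max_angle_deg
    return max(-max_angle_deg, min(max_angle_deg, angle))
```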
According to the present embodiment, it is determined that the user will change the path at any of the timings in
Hence, according to the present embodiment, when the moving direction changes by a predetermined amount or more within a predetermined time, it is determined that the user has changed the path, and the path of the mobile object 100b is also modified. In the example of
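The determination just described, a change in the moving direction of a predetermined amount or more within a predetermined time, could be sketched as follows; the 60-degree threshold and the 1-second window are illustrative assumptions, not values fixed by the embodiment.

```python
import math
from collections import deque

class PathChangeDetector:
    """Detect a path change from the user's heading history (illustrative sketch)."""

    def __init__(self, threshold_rad: float = math.radians(60), window_s: float = 1.0):
        self.threshold_rad = threshold_rad
        self.window_s = window_s
        self.history: deque[tuple[float, float]] = deque()   # (timestamp, heading) samples

    def update(self, t: float, heading: float) -> bool:
        """Add a sample and return True if the heading changed enough within the window."""
        self.history.append((t, heading))
        while self.history and t - self.history[0][0] > self.window_s:
            self.history.popleft()
        oldest_heading = self.history[0][1]
        # Wrapped angular difference in (-pi, pi]
        diff = math.atan2(math.sin(heading - oldest_heading),
                          math.cos(heading - oldest_heading))
        return abs(diff) >= self.threshold_rad
```

When the detector reports a change, the processing corresponds to returning to S105 and regenerating (modifying) the path of the mobile object.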
Subsequently, a detailed processing procedure of acquisition processing (S201) of the predicted path of the user according to the present embodiment will be described with reference to
First, in S301, the control unit 101 causes the image analysis unit 123 to acquire the line of sight of the user from the captured image of the set user that has been captured by the detection unit 108. Subsequently, in S302, the control unit 101 causes the image analysis unit 123 to acquire the trunk orientation of the user from the captured image of the set user that has been captured by the detection unit 108.
Subsequently, in S303, the control unit 101 causes the image analysis unit 123 to acquire the position of the user at each timing from the time-series data of the captured image of the set user that has been captured by the detection unit 108, and acquires the moving direction of the user. Similarly, in S304, the control unit 101 acquires the moving speed of the user, based on the user position at each timing in the time-series data acquired in S303.
Finally, in S305, the control unit 101 generates a predicted path of the user in accordance with the information acquired in S301 to S304, ends the processing of the present flowchart, and the processing proceeds to S202. Although the direction of the generated predicted path desirably follows the line-of-sight direction, this is not intended to limit the present invention. For example, in addition to the line-of-sight direction, the trunk orientation and the current moving direction of the user may be considered. In a case where the moving direction of the user and the trunk orientation are substantially the same, it may be determined that there is no further path change, and the moving direction and the trunk orientation may be set as the direction of the predicted path. Alternatively, an intermediate direction between the line-of-sight direction and the moving direction of the user may be set as the direction of the predicted path. In addition, in a case where the line-of-sight direction cannot be acquired, the direction of the predicted path may be determined by use of only the moving direction and the trunk orientation.
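The options described for S305 could, as a sketch only, be combined as below; the tolerance used to judge "substantially the same" is a hypothetical value.

```python
import math
from typing import Optional

def predicted_path_direction(gaze: Optional[float],
                             trunk: float,
                             moving: float,
                             same_tolerance: float = math.radians(15)) -> float:
    """Choose the direction [rad] of the user's predicted path (illustrative).

    gaze:   line-of-sight direction, or None when it could not be acquired
    trunk:  trunk orientation
    moving: current moving direction from the time-series positions
    """
    def mean_direction(a: float, b: float) -> float:
        return math.atan2(math.sin(a) + math.sin(b), math.cos(a) + math.cos(b))

    def angle_diff(a: float, b: float) -> float:
        return abs(math.atan2(math.sin(a - b), math.cos(a - b)))

    if gaze is None:
        # Line of sight unavailable: use only the moving direction and trunk orientation.
        return mean_direction(moving, trunk)
    if angle_diff(moving, trunk) <= same_tolerance:
        # Moving direction and trunk orientation substantially agree: no further path change assumed.
        return moving
    # Otherwise take an intermediate direction between the line of sight and the moving direction.
    return mean_direction(gaze, moving)
```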
Next, tracking control according to the present embodiment will be described with reference to
Subsequently, a processing procedure of the tracking control according to the present embodiment will be described with reference to
First, in S401, the control unit 101 tracks the set user in the captured image that has been captured by the detection unit 108, by use of the feature point of the user acquired in S103. The tracking information acquired here is also used for the travel control of the mobile object 100 in the above S106. Subsequently, in S402, the control unit 101 determines whether the tracking in S401 has caught sight of the user. In a case where the tracking has caught sight of the user, the processing returns to S401, and the tracking continues. On the other hand, in a case where the tracking has not caught sight of the set user, and in a case where the set user is not included in the captured image, for example, as illustrated in
In S403, the control unit 101 determines whether the tracking has not caught sight of the set user for more than a first time length (for example, five seconds or so) that is a predetermined time. In a case where the tracking does not catch sight of the set user for more than the first time length, the processing proceeds to S404, and in a case where it has not exceeded the first time length, the processing returns to S402.
In S404, when the tracking has not caught sight of the set user for more than the first time length, the control unit 101 determines that the user has been lost, and temporarily stops traveling in accordance with the path that is currently generated. Furthermore, the control unit 101 notifies the set user of the position of the mobile object 100 itself via the speaker 103, makes an utterance to prompt re-authentication, and transitions to a state of conducting the re-authentication. In addition, a predetermined warning sound may be output from the speaker 103, instead of the utterance.
Subsequently, in S405, the control unit 101 determines whether the re-authentication has been conducted successfully. Here, the re-authentication indicates that, for example, the set user inserts its hand into the detection range of the vein sensor 107, and confirms the identity with the vein information that has been already registered in the above S101. In a case where the re-authentication is successful, the processing returns to S401, the driving is resumed, and the tracking is conducted. On the other hand, in a case where the re-authentication is not conducted or in a case where the re-authentication has been conducted but failed, the processing proceeds to S406.
In S406, the control unit 101 determines whether it has exceeded a second time length (for example, five minutes or so) longer than the first time length, since the set user is no longer in sight. In a case where it has not exceeded the second time length, the processing returns to S405. On the other hand, in a case where it has exceeded the second time length, the processing proceeds to S407, and the control unit 101 determines that the set user has been completely lost, moves the mobile object 100 to a predetermined standby place, and ends the processing of the present flowchart.
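The handling from S402 to S407 amounts to a small state machine driven by the two time lengths; the sketch below uses the example values of about five seconds and five minutes given above, and the state and function names are placeholders.

```python
import time
from enum import Enum, auto

class TrackState(Enum):
    TRACKING = auto()    # S401/S402: the set user is in sight
    LOST = auto()        # S404/S405: stopped, prompting re-authentication
    GIVEN_UP = auto()    # S407: completely lost, retreat to the standby place

FIRST_TIME_S = 5.0       # first time length (example value from the embodiment)
SECOND_TIME_S = 300.0    # second time length (example value from the embodiment)

def update_track_state(state: TrackState,
                       last_seen: float,
                       user_in_sight: bool,
                       reauth_success: bool,
                       now: float | None = None) -> tuple[TrackState, float]:
    """Advance the lost/re-authentication state machine by one step (illustrative)."""
    now = time.time() if now is None else now
    if user_in_sight or reauth_success:
        return TrackState.TRACKING, now                       # resume driving and tracking (S401)
    if state is TrackState.TRACKING and now - last_seen > FIRST_TIME_S:
        return TrackState.LOST, last_seen                     # S403/S404: stop and prompt re-authentication
    if state is TrackState.LOST and now - last_seen > SECOND_TIME_S:
        return TrackState.GIVEN_UP, last_seen                 # S406/S407: move to the predetermined standby place
    return state, last_seen
```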
The above embodiments disclose at least the following embodiments.
1. A mobile object (100) in the above embodiment, includes:
According to this embodiment, it is possible to provide the mobile object that suitably modifies the path in accordance with the path change that has been made by the user.
2. In the above embodiment, upon detection of at least a predetermined change in the moving direction of the user within a predetermined time, the path generation unit modifies the path (S107, S201).
According to this embodiment, the path of the mobile object can be modified promptly in accordance with the change in the moving direction of the user, and loss of the set user can be reduced.
3. In the above embodiment, the path generation unit acquires a line-of-sight direction of the user, based on the output from the first sensor, and acquires the predicted path with the line-of-sight direction as a direction in which the user will move in the future (S301, S305).
According to this embodiment, the future moving direction of the user can be suitably predicted.
4. In the above embodiment, in a case where the path generation unit is not capable of acquiring the line-of-sight direction of the user, based on the output from the first sensor, the path generation unit acquires the predicted path with a trunk orientation of the user as the direction in which the user will move in the future (S302, S305).
According to this embodiment, even though the line-of-sight direction of the user cannot be acquired, it is possible to estimate the direction in which the user will move in the future and to generate the predicted path.
5. In the above embodiment, the path to be modified, upon detection of the change in the moving direction of the user based on the output from the first sensor, is modified to shorten a distance to the user, as compared with the distance before the detection of the change in the moving direction of the user.
According to this embodiment, the possibility of losing the set user can be reduced.
6. In the above embodiment, the travel control unit causes the mobile object to travel in accordance with the path that has been generated by the path generation unit, tracks the user in accordance with the output from the first sensor, and maintains a distance to the user at a predetermined distance (S106, S401).
According to this embodiment, it is possible to lead the set user suitably, and also to reduce the occurrence of loss of the set user.
7. In the above embodiment, a second sensor (107) configured to conduct vein authentication of the user is further included, in which
According to this embodiment, even in a case where the user is temporarily lost, the provision of the service can be suitably resumed.
8. In the above embodiment, there is further included a voice output unit configured to make an utterance to prompt re-authentication of the user in a case where a first time length elapses since the track of the user is lost (S404).
According to this embodiment, even in a case where the user is lost, the possibility of re-authentication by the user can be increased.
9. In the above embodiment, the travel control unit causes the mobile object to move to a predetermined standby position, in a case where a second time length longer than the first time length elapses since the track of the user is lost (S406, S407).
According to this embodiment, in a case where the user is completely lost, it is possible to retreat the mobile object to a predetermined place so as not to obstruct any other passersby or the like.
10. In the above embodiment, a control method for a mobile object (100) including a first sensor (108) configured to detect a target object in surroundings, the control method comprising:
According to this embodiment, it is possible to provide a method for controlling the mobile object that suitably modifies the path in accordance with the path change that has been made by the user.
The invention is not limited to the foregoing embodiments, and various variations/changes are possible within the spirit of the invention.
Number | Date | Country | Kind |
---|---|---|---|
2022-060614 | Mar 2022 | JP | national |