The present technology relates to, for example, an agent applicable to a robot, an existence probability map creation method, an agent action control method, and a program.
In recent years, the number of agent devices such as robots equipped with sensing devices such as cameras and depth sensors has increased. A robot can autonomously move inside a house by reconstructing a space map using a simultaneous localization and mapping (SLAM) technology. For a robot that performs an autonomous motion, such as a walking motion according to an external state of the surroundings and an internal state of the robot itself, it has been proposed that the robot detect an external obstacle and plan a walking route that avoids the obstacle. The robot creates an obstacle occupancy probability table indicating the relative distance between the position of the robot and the obstacle, and determines the walking route on the basis of the table. In a case of finding an obstacle on the walking route, the robot searches for an area where no obstacle exists and plans a new walking route. When the robot sequentially searches the area around itself and finds an obstacle, the robot starts a re-search for creating a new obstacle occupancy probability table. However, the efficiency of this re-search is poor, calculation of the walking route takes time, and the walking motion is delayed. For example, Patent Document 1 describes solving such a problem.
Since the robot is located at a predetermined arrangement position, the user cannot be imaged and captured unless the user approaches a position where the user can be sensed. Even if the robot can autonomously move, the robot cannot capture the user early unless the robot is at the right position at the right time. For example, if the robot is not at the entrance at the time when the user returns home, the robot cannot notice that the user has come home.
Therefore, an object of the present technology is to provide an agent, an existence probability map creation method, an agent action control method, and a program for enabling early imaging and capture of a user when the user appears in a certain space.
The present technology is an agent including:
a sensing device configured to sense an object in a real space;
an existence probability map creation means configured to define the real space as a group of voxels, and create, every predetermined time, an existence probability map on which information of an existence probability of the object is recorded for each of the voxels; and
an arrangeable position storage unit configured to store information of an arrangeable position.
The present technology is an agent including:
an evaluation value calculation unit configured to calculate an existence probability on the basis of an existence probability map and obtain an evaluation value at an arrangeable position at a predetermined time; and
a control unit configured to determine an arrangeable position according to an evaluation value obtained by the evaluation value calculation unit, and control a drive system for moving to the determined arrangeable position.
The present technology is an existence probability map creation method including:
sensing, by a sensing device, an object in a real space;
defining the real space as a group of voxels, and creating, every predetermined time, an existence probability map on which information of an existence probability of the object is recorded for each of the voxels; and
storing information of an arrangeable position.
The present technology is a program for causing a computer to execute an existence probability map creation method including:
sensing, by a sensing device, an object in a real space;
defining the real space as a group of voxels, and creating, every predetermined time, an existence probability map on which information of an existence probability of the object is recorded for each of the voxels; and
storing information of an arrangeable position.
The present technology is an agent action control method including:
calculating an existence probability on the basis of an existence probability map and obtaining an evaluation value at an arrangeable position at a predetermined time;
determining an arrangeable position according to the obtained evaluation value; and
controlling a drive system for moving to the determined arrangeable position.
The present technology is a program for causing a computer to execute an agent action control method including:
calculating an existence probability on the basis of an existence probability map and obtaining an evaluation value at an arrangeable position at a predetermined time;
determining an arrangeable position according to the obtained evaluation value; and
controlling a drive system for moving to the determined arrangeable position.
According to at least one embodiment, the present technology enables an agent such as a pet robot to move to an appropriate position at an appropriate time by using an existence probability map generated from a user's life pattern. Thereby, it becomes possible to provide a service such as life support by the robot. Furthermore, the user can be imaged and captured early when the user appears in a certain space. Note that the effects described here are not necessarily limited, and any of the effects described in the present technology may be exhibited.
Hereinafter, embodiments and the like of the present technology will be described with reference to the drawings. Note that the description will be given in the following order.
<1. First Embodiment>
<2. Second Embodiment>
<3. Third Embodiment>
<4. Modification>
Embodiments and the like described below are favorable specific examples of the present technology, and the contents of the present technology are not limited by these embodiments and the like.
A first embodiment of the present technology will be described. The first embodiment is to control an action of an agent using an existence probability map. The agent is a user interface technology for autonomously determining and executing processing. The agent is a technology in which recognition and determination functions are added to an object that is a combination of data and processing for the data. In the present description, an electronic device in which software behaving as an agent is installed, such as a pet robot, is referred to as an agent.
More specifically, the agent is moved to an optimum position at an optimum time using the existence probability map. By such control, the robot can greet a user at an entrance when the user returns home, for example. The robot has three elemental technologies: various sensors, an intelligence/control system, and a drive system.
“Creation of Existence Probability Map”
In step ST1, the sensing device 1 senses the environment for a certain period (time T1).
In step ST2, a space information processing unit 4 obtains space map information of the environment and arrangeable coordinate (arrangeable position) information, records the space map information in a space map information storage unit 5, and records the arrangeable coordinate information in an arrangeable position information storage unit (arrangeable coordinate information storage unit) 6.
The space map is created by using a technology capable of creating an environment map, such as SLAM. SLAM is a technology for simultaneously estimating a self-position and creating a map from information acquired from the sensing device 1. The autonomous mobile robot (agent) needs to create a whole map on the basis of information obtained while moving in an unknown environment and to know its own position; therefore, a technology like SLAM is required. The arrangeable coordinate information is created by, for example, sampling and recording, at appropriate time intervals, a history of the places to which the robot has actually moved. Moreover, in a case where the floor plan of a space such as a room is known in advance, an area of the floor other than the area in front of a door may be set as an arrangeable place.
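As one non-limiting illustration, the sampling of arrangeable positions from the movement history may be sketched in Python as follows; the position-history format and the one-minute sampling interval are assumptions made here for illustration only.

```python
from datetime import datetime, timedelta

def sample_arrangeable_positions(position_history, interval_sec=60.0):
    """Thin a time-stamped movement history into arrangeable positions.

    position_history: list of (timestamp, (x, y, z)) tuples, ordered by time.
    Returns a list of (x, y, z) coordinates sampled roughly every interval_sec.
    """
    arrangeable = []
    last_sample_time = None
    for timestamp, position in position_history:
        if last_sample_time is None or (timestamp - last_sample_time).total_seconds() >= interval_sec:
            arrangeable.append(position)
            last_sample_time = timestamp
    return arrangeable

# Example: positions recorded every 10 seconds are thinned to one per minute.
t0 = datetime(2019, 4, 10, 9, 0, 0)
history = [(t0 + timedelta(seconds=10 * i), (0.1 * i, 0.0, 0.0)) for i in range(60)]
print(len(sample_arrangeable_positions(history)))  # -> 10
```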
In step ST3, whether or not the condition (sensing time > time T1) is satisfied is determined. The time T1 is the time for sensing the environment as described above. In a case where this condition is not satisfied, the processing returns to step ST1 (sensing the environment). When the condition is satisfied, the processing proceeds to step ST4 (definition of the voxel space).
In step ST4, voxel space information indicating the space (environment) divided using a grid is defined on the basis of the space map information. The space grid division is performed such that the space (environment) is divided using cubes having a side of 50 cm, for example. A voxel represents a value of a regular grid unit in a three-dimensional space. The voxel space definition information is stored in a voxel space definition storage unit 7.
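A minimal sketch of such a grid division is shown below; the axis-aligned bounding-box representation of the space map and the helper names are assumptions for illustration and are not prescribed by the present technology.

```python
import numpy as np

VOXEL_SIZE = 0.5  # cubes having a side of 50 cm

def define_voxel_space(space_min, space_max, voxel_size=VOXEL_SIZE):
    """Divide an axis-aligned bounding box of the space map into a regular
    grid of voxels and return the grid shape and the origin used for indexing."""
    space_min = np.asarray(space_min, dtype=float)
    space_max = np.asarray(space_max, dtype=float)
    shape = np.ceil((space_max - space_min) / voxel_size).astype(int)
    return tuple(int(n) for n in shape), space_min

def position_to_voxel(position, origin, voxel_size=VOXEL_SIZE):
    """Convert a 3D position in metres into an integer voxel index."""
    index = (np.asarray(position, dtype=float) - origin) // voxel_size
    return tuple(int(i) for i in index)

# Example: a 6 m x 4 m room with a 2.5 m ceiling becomes a 12 x 8 x 5 grid.
shape, origin = define_voxel_space((0, 0, 0), (6, 4, 2.5))
print(shape)                                        # -> (12, 8, 5)
print(position_to_voxel((1.2, 0.3, 1.0), origin))   # -> (2, 0, 2)
```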
In step ST5, the sensing device 1 senses an object. Objects to be sensed are, for example, humans, animals such as dogs and cats, and other pet robots. Not only a specific object but also a plurality of objects may be made recognizable. The plurality of objects can be individually recognized, and a respective existence probability map is created for each of them. Specifically, in a case where the agent is a pet robot, the owner user is set as the object, and a habit (life pattern) of the object is learned. The object in the environment is sensed for a certain period (time T2). An example of a sensing method is to identify the planar position of an object by RGB-based human body part recognition and general object recognition, and then apply a distance sensor value to the identified position to convert it into a position in the three-dimensional space.
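One hedged sketch of this conversion from an identified pixel position and a distance (depth) value into a three-dimensional position is shown below; the pinhole camera model, the intrinsic parameters, and the pose values are assumptions for illustration only.

```python
import numpy as np

def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a detected pixel (u, v) and its depth reading into a 3D
    point in the camera coordinate frame (pinhole camera model)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def camera_to_world(point_cam, rotation, translation):
    """Transform a camera-frame point into the space-map (world) frame using
    the agent's estimated pose (for example, from SLAM)."""
    return rotation @ point_cam + translation

# Example: a head detected at pixel (400, 220) with a measured depth of 2.0 m.
R = np.eye(3)                    # camera axes assumed aligned with the world
t = np.array([1.0, 0.0, 0.4])    # camera assumed at (1.0, 0.0, 0.4) in the map
p_world = camera_to_world(pixel_to_3d(400, 220, 2.0, 525.0, 525.0, 320.0, 240.0), R, t)
print(np.round(p_world, 3))      # approx. [1.305, -0.076, 2.4]
```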
In step ST6, the spatiotemporal position information obtained by sensing the object is recorded for each object. For example, the voxel space information is prepared for each of 288 time intervals, which are obtained by dividing 24 hours into 5-minute intervals. When the user is actually observed, a vote is cast for the voxel corresponding to the position in the voxel space at which the user is observed in the corresponding time interval. An object information recording unit 8 performs this recording.
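The voting into per-time-interval voxel grids could look like the following sketch; the array layout and helper names are assumptions for illustration.

```python
import numpy as np
from datetime import datetime

SLOTS_PER_DAY = 288  # 24 hours divided into 5-minute intervals

def time_slot(timestamp):
    """Map a timestamp to one of the 288 five-minute intervals of the day."""
    return (timestamp.hour * 60 + timestamp.minute) // 5

def vote(vote_grid, timestamp, voxel_index):
    """Cast one vote for the voxel in which the object was observed, in the
    grid corresponding to the time interval of the observation."""
    vote_grid[time_slot(timestamp)][voxel_index] += 1

# One integer voxel grid per 5-minute slot, for a single object.
vote_grid = np.zeros((SLOTS_PER_DAY, 12, 8, 5), dtype=np.int32)
vote(vote_grid, datetime(2019, 4, 10, 18, 3), (2, 0, 2))
print(int(vote_grid[216, 2, 0, 2]))  # -> 1 (18:03 falls in slot 216)
```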
In step ST7, whether or not the condition (sensing time > time T2) is satisfied is determined. In a case where this condition is not satisfied, the processing returns to step ST5 (sensing the object). When this condition is satisfied, the processing proceeds to step ST8 (creation of an existence probability map), which is performed by an existence probability map creation unit 10.
The existence probability map creation processing is processing of creating an existence probability map from the number of votes cast for the object in the voxel space information. For example, a value obtained by dividing the number of votes cast for each voxel by the number of observation days is adopted as the existence probability of the object for the voxel. In this way, an existence probability map is created for each object.
As illustrated in the drawings, an existence probability map is created for each of the 288 five-minute intervals of the day. Each existence probability map is associated with a time of the day, and the created existence probability maps are stored in an existence probability map storage unit 11.
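A minimal sketch of turning the accumulated votes into existence probabilities (votes divided by the number of observation days, as described above) follows; the array shapes are assumptions carried over from the earlier sketches.

```python
import numpy as np

def create_existence_probability_maps(vote_grid, observation_days):
    """Divide per-voxel vote counts by the number of observation days to
    obtain an existence probability map for each of the 288 time intervals."""
    return vote_grid.astype(float) / float(observation_days)

# Example: a voxel voted on 21 out of 30 observation days -> probability 0.7.
votes = np.zeros((288, 12, 8, 5), dtype=np.int32)
votes[216, 2, 0, 2] = 21
prob_maps = create_existence_probability_maps(votes, observation_days=30)
print(prob_maps[216, 2, 0, 2])  # -> 0.7
```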
“Action Control of Agent Using Existence Probability Map”
Next, action control of the agent, for example, the pet robot, based on the created existence probability maps will be described. This control is processing mainly performed by an evaluation value calculation unit 12 and a mechanical control unit 3. In step ST11, the current time T is obtained (the overall flow of steps ST11 to ST22 is sketched after the step descriptions).
In step ST12, the evaluation value calculation unit 12 extracts the existence probability map at the time T from a time series of the existence probability maps stored in the existence probability map storage unit 11.
In step ST13, one arrangeable position is obtained from arrangeable positions stored in an arrangeable position information storage unit 6.
In step ST14, one imaging condition is obtained from the imaging condition information stored in the imaging condition state storage unit 14.
In step ST15, the evaluation value is calculated. The imaging condition is comprehensively simulated for each arrangeable position, and the evaluation value is calculated from an existence probability value of a voxel within a sensing range.
In step ST16, whether or not the imaging condition is the last imaging condition in the changing imaging conditions is determined. In a case where the imaging condition is not the last imaging condition, the processing returns to step ST14 (obtaining one imaging condition from the imaging condition information). In a case where the imaging condition is the last imaging condition, the processing proceeds to step ST17.
In step ST17, whether or not the arrangeable position is the last one among the plurality of arrangeable positions is determined. In a case where the arrangeable position is not the last arrangeable position, the processing returns to step ST13 (obtaining one arrangeable position). In a case where the arrangeable position is the last arrangeable position, the processing proceeds to step ST18.
In step ST18, the evaluation value MAX_VAL having the highest evaluation is acquired. In step ST19, the highest evaluation value MAX_VAL is compared with a predetermined threshold value VAL_TH. In a case of (MAX_VAL ≤ VAL_TH), the processing returns to step ST11 (obtaining the time T). That is, it is determined that the highest evaluation value MAX_VAL is not high enough to cause an action, and processing for causing the agent to take an action is not performed. In a case of (MAX_VAL > VAL_TH), the processing proceeds to step ST20.
In step ST20, the arrangeable position and the imaging condition corresponding to the highest evaluation value MAX_VAL are acquired.
In step ST21, the drive system is controlled via the mechanical control unit 3 to move the agent to the acquired arrangeable position.
In step ST22, the drive system is controlled via the mechanical control unit 3 to adjust the agent to the acquired imaging condition.
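The flow of steps ST11 to ST22 can be summarized by the following non-limiting sketch; the callback functions (evaluate, move_to, apply_imaging_condition) and the threshold value are placeholders, not an actual implementation of the units described above.

```python
def time_slot(timestamp):
    """Map a timestamp to one of the 288 five-minute intervals of the day."""
    return (timestamp.hour * 60 + timestamp.minute) // 5

def control_action(current_time, prob_maps, arrangeable_positions,
                   imaging_conditions, evaluate, move_to,
                   apply_imaging_condition, val_th=1.0):
    """Sketch of steps ST11 to ST22: evaluate every combination of
    arrangeable position and imaging condition on the map for time T, and
    move only when the best value exceeds the threshold VAL_TH."""
    prob_map = prob_maps[time_slot(current_time)]             # ST11/ST12
    max_val, best = float("-inf"), None
    for position in arrangeable_positions:                    # ST13, ST17
        for condition in imaging_conditions:                  # ST14, ST16
            value = evaluate(prob_map, position, condition)   # ST15
            if value > max_val:
                max_val, best = value, (position, condition)  # ST18
    if max_val <= val_th:                                     # ST19: too low to act
        return None
    position, condition = best                                # ST20
    move_to(position)                                         # ST21
    apply_imaging_condition(condition)                        # ST22
    return best
```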
The action control processing is schematically described with reference to the drawings.
Calculation of the evaluation value will be described. When, for example, 50% or more of the volume of a voxel is included in the sensing area of the camera, the voxel is set as a voxel for which the evaluation value is calculated. Specifically, the sensing area is the area to be imaged. Then, the sum of the existence probabilities of all the voxels for which the evaluation value is calculated is used as the evaluation value for the arrangeable position and the imaging condition.
Note that, in the case of calculating the evaluation value, the final evaluation value may be obtained by multiplying by a weighting coefficient set for each of the following elements related to the user as the object (see the sketch following this list):
user preference, a distance to the user, a user part type (head, face, torso, or the like), or a sensor type (camera, microphone, IR sensor, polarization sensor, depth sensor, or the like).
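The evaluation described above (summing the existence probabilities of voxels at least 50% inside the sensing area, optionally scaled by weighting coefficients) can be sketched as follows; the geometric sensing-area test and the weighting function are assumed placeholders rather than the method prescribed by the present technology.

```python
import numpy as np

def evaluation_value(prob_map, voxel_centers, fraction_in_sensing_area,
                     position, condition, weight=lambda voxel_index: 1.0):
    """Sum the existence probabilities of every voxel whose volume is at least
    50% inside the sensing area for the given arrangeable position and imaging
    condition, scaled by an optional weighting coefficient (user preference,
    distance, body part type, sensor type, and so on)."""
    total = 0.0
    for voxel_index, center in voxel_centers.items():
        if fraction_in_sensing_area(center, position, condition) >= 0.5:
            total += weight(voxel_index) * prob_map[voxel_index]
    return total

# Toy sensing-area test: treat a voxel as fully inside the sensing area when
# its centre lies within 3 m of the camera position (a real test would also
# check the field of view given by the imaging condition).
def toy_fraction(center, position, condition):
    return 1.0 if np.linalg.norm(np.asarray(center) - np.asarray(position)) <= 3.0 else 0.0

prob_map = {(2, 0, 2): 0.7, (10, 7, 0): 0.4}
centers = {(2, 0, 2): (1.25, 0.25, 1.25), (10, 7, 0): (5.25, 3.75, 0.25)}
print(evaluation_value(prob_map, centers, toy_fraction, (1.0, 0.5, 0.0), None))  # -> 0.7
```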
The imaging condition information is information of sensor angles obtained from a sensor 103 provided on the nose of the agent 101, and of states of the agent, as illustrated in the drawings.
As described above, by controlling the action of the agent on the basis of the created existence probability map, the pet robot as the agent can, for example, move to the entrance and greet the user in accordance with the time when the user as the object returns home. Furthermore, the position at which the pet robot greets the user can be a place easily noticed by the user, and the pet robot's face can be turned toward the user.
“Action Control of Agent Based on Online Action Prediction”
A second embodiment of the present technology is to predict an action of a user as an object, and to move the agent on the basis of the action prediction. An outline of the second embodiment will be described with reference to the drawings.
Next, the agent 101 creates an existence probability map regarding future actions by using an action prediction technology that learns from the actions of the currently visible user, taking those actions as inputs. For example, the existence probability map regarding future actions can be created using a database formed by observing the daily actions of the user.
Next, the agent 101 makes an action plan on the basis of the existence probability map regarding future actions. This action plan enables the agent 101 to take actions such as running in parallel with the object 102, or going around ahead of the object 102 and cutting into its route. Conventionally, an agent could only follow an object from behind.
While the existence probability map according to the above-described first embodiment is a static existence probability map, the existence probability map according to the second embodiment is a dynamic existence probability map updated according to an actual action of the user. The second embodiment can be implemented by replacing the static existence probability map in the first embodiment with a dynamic existence probability map. Note that the static existence probability map and the dynamic existence probability map may be combined.
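One simple way to combine the two maps is a weighted blend, as in the following sketch; the blend factor alpha is an assumption made here for illustration and is not specified by the present technology.

```python
import numpy as np

def combine_maps(static_map, dynamic_map, alpha=0.7):
    """Blend a static (life-pattern) map with a dynamic (prediction-based) map.

    alpha weights the dynamic map; both maps are arrays of the same shape
    holding existence probabilities in [0, 1]."""
    return alpha * dynamic_map + (1.0 - alpha) * static_map

# Example: the dynamic map dominates when a fresh prediction is available.
static_map = np.full((12, 8, 5), 0.1)
dynamic_map = np.zeros((12, 8, 5))
dynamic_map[2, 0, 2] = 0.9
combined = combine_maps(static_map, dynamic_map)
print(round(float(combined[2, 0, 2]), 3))  # -> 0.66
```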
The second embodiment will be specifically described with reference to the drawings.
Next, suppose that the object 102 walks a little further into the room and walks toward the kitchen, as illustrated in the drawings. In this case, the dynamic existence probability map is updated according to the observed action of the object 102, and the agent 101 plans its action on the basis of the updated map.
“Existence Probability Map with Direction Vector”
Configuring the dynamic existence probability map as an existence probability map with a direction vector will be described. As illustrated in the drawings, in addition to the existence probability, a probability of a vector in the direction of the object, for example, the direction of the user's face, is recorded for each voxel.
When calculating an evaluation value on the basis of the existence probability map with a direction vector, the evaluation value of an arrangeable coordinate and an imaging condition under which the face of the user can be imaged becomes high by considering the direction vector, as illustrated in the drawings.
The existence probability map with a direction vector also enables the agent to act so as to align the direction of its own face with the direction of the user's face. As illustrated in the drawings, when there is an object at which the user's face is directed, the agent can be controlled to look at the same object.
When there is no particular object to look at in the direction in which the user's face is directed, the agent is simply controlled to look in the same direction. For example, the agent looks at the garden with the user. The agent is controlled such that the evaluation value becomes high when the agent looks in the same direction as the user from a position close to the user.
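A hedged sketch of how the direction vector could enter the evaluation is given below; the cosine-based scoring and the two modes (imaging the user's face versus looking in the same direction as the user) are illustrative assumptions.

```python
import numpy as np

def direction_bonus(face_direction, camera_direction, mode="image_face"):
    """Score how well the imaging condition matches the user's facing direction.

    face_direction: unit vector of the user's face recorded in the voxel.
    camera_direction: unit vector along the agent camera's optical axis.
    mode "image_face": highest when the camera looks against the face direction.
    mode "look_together": highest when the agent looks in the same direction."""
    cosine = float(np.dot(face_direction, camera_direction))
    if mode == "image_face":
        return max(0.0, -cosine)   # camera facing the user's face
    return max(0.0, cosine)        # agent looking the same way as the user

# Example: the user faces +x; a camera looking along -x can image the face.
face = np.array([1.0, 0.0, 0.0])
print(direction_bonus(face, np.array([-1.0, 0.0, 0.0])))                        # -> 1.0
print(direction_bonus(face, np.array([1.0, 0.0, 0.0]), mode="look_together"))   # -> 1.0
```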
“Action Control of Plurality of Agents Using Existence Probability Map”
Since there is a limit to the space that can be sensed by one agent, in a case where there is a plurality of agents, the agents share an existence probability map and complement each other. One or both of the static existence probability map and the dynamic existence probability map may be shared.
An example in which a plurality of agents shares an existence probability map and complements each other is illustrated in the drawings.
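One possible way for agents to complement each other is to merge their per-agent vote counts before computing probabilities, as in the following sketch; the message format and array shapes are assumptions made here for illustration.

```python
import numpy as np

def merge_vote_grids(vote_grids):
    """Merge vote counts collected by several agents into one shared grid.

    vote_grids: iterable of arrays of identical shape (e.g., (288, nx, ny, nz)),
    one per agent; areas an agent cannot sense simply remain zero in its grid."""
    grids = [np.asarray(g) for g in vote_grids]
    return np.sum(grids, axis=0)

# Example: agent A only covers the entrance, agent B only the living room.
agent_a = np.zeros((288, 12, 8, 5), dtype=np.int32)
agent_b = np.zeros((288, 12, 8, 5), dtype=np.int32)
agent_a[216, 0, 0, 0] = 5
agent_b[216, 10, 7, 0] = 3
shared = merge_vote_grids([agent_a, agent_b])
print(shared[216, 0, 0, 0], shared[216, 10, 7, 0])  # -> 5 3
```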
Note that the functions of the processing device in the above-described embodiments can be recorded in a recording medium such as a magnetic disk, a magneto-optical disk, or a ROM, as a program. Therefore, the functions of the agent can be implemented by reading the program from the recording medium by a computer and executing the program by a micro processing unit (MPU), a digital signal processor (DSP), or the like.
The embodiments of the present technology have been specifically described. However, the present technology is not limited to the above-described embodiments, and various modifications based on the technical idea of the present technology can be made. Furthermore, the configurations, methods, steps, shapes, materials, numerical values, and the like given in the above-described embodiments are merely examples, and different configurations, methods, steps, shapes, materials, numerical values, and the like from the examples may be used as needed. For example, the present technology can be applied not only to VR games but also to fields such as educational and medical applications.
Note that the present technology can also have the following configurations.
(1)
An agent including:
a sensing device configured to sense an object in a real space;
an existence probability map creation means configured to define the real space as a group of voxels, and create, every predetermined time, an existence probability map on which information of an existence probability of the object is recorded for each of the voxels; and
an arrangeable position storage unit configured to store information of an arrangeable position.
(2)
The agent according to (1), in which, in a case of sensing the object while moving in a real space, the arrangeable position is a position on a locus of the movement.
(3)
The agent according to (1) or (2), in which the real space is indoors and the object is a person.
(4)
The agent according to any one of (1) to (3), in which an existence probability map based on prediction of a future action of the object is created.
(5)
The agent according to any one of (1) to (4), in which a probability of a vector in a direction of the object is included.
(6)
An agent including:
an evaluation value calculation unit configured to calculate an existence probability on the basis of an existence probability map and obtain an evaluation value at an arrangeable position at a predetermined time; and
a control unit configured to determine an arrangeable position according to an evaluation value obtained by the evaluation value calculation unit, and control a drive system for moving to the determined arrangeable position.
(7)
The agent according to (6), further including:
a sensing device configured to sense an object in a real space;
an existence probability map creation means configured to define the real space as a group of voxels, and create, every predetermined time, the existence probability map on which information of the existence probability of the object is recorded for each of the voxels; and
an arrangeable position storage unit configured to store information of an arrangeable position.
(8)
The agent according to (6) or (7), in which the evaluation value calculation unit calculates the evaluation value, for each of a plurality of imaging conditions.
(9)
The agent according to any one of (6) to (8), in which the existence probability map is shared with another agent.
(10)
An existence probability map creation method including:
sensing, by a sensing device, an object in a real space;
defining the real space as a group of voxels, and creating, every predetermined time, an existence probability map on which information of an existence probability of the object is recorded for each of the voxels; and
storing information of an arrangeable position.
(11)
A program for causing a computer to execute an existence probability map creation method including:
sensing, by a sensing device, an object in a real space;
defining the real space as a group of voxels, and creating, every predetermined time, an existence probability map on which information of an existence probability of the object is recorded for each of the voxels; and
storing information of an arrangeable position.
(12)
An agent action control method including:
calculating an existence probability on the basis of an existence probability map and obtaining an evaluation value at an arrangeable position at a predetermined time;
determining an arrangeable position according to the obtained evaluation value; and
controlling a drive system for moving to the determined arrangeable position.
(13)
A program for causing a computer to execute an agent action control method including:
calculating an existence probability on the basis of an existence probability map and obtaining an evaluation value at an arrangeable position at a predetermined time;
determining an arrangeable position according to the obtained evaluation value; and
controlling a drive system for moving to the determined arrangeable position.
Number | Date | Country | Kind
---|---|---|---
2018-136370 | Jul 2018 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2019/015544 | 4/10/2019 | WO | 00