This application is based upon and claims the benefit of priority under 35 USC 119 of Japanese Patent Application No. 2023-158316, filed on Sep. 22, 2023, the entire disclosure of which, including the description, claims, drawings, and abstract, is incorporated herein by reference in its entirety.
The present disclosure relates generally to a robot, a control method, and a recording medium.
Electronic devices that imitate living creatures such as pets, human beings, and the like are known in the related art. For example, Unexamined Japanese Patent Application Publication No. 2003-159681 describes a robot device that, when a specific input is provided, exhibits a specific action associated with the specific input.
A robot according to an embodiment of the present disclosure includes:
A more complete understanding of this application can be obtained when the following detailed description is considered in conjunction with the following drawings, in which:
FIG. 4 is a block diagram illustrating the configuration of a terminal device according to Embodiment 1;
Hereinafter, embodiments of the present disclosure are described while referencing the drawings. Note that, in the drawings, identical or corresponding components are denoted with the same reference numerals.
The robot 200 according to Embodiment 1 includes an exterior 201, decorative parts 202, bushy fur 203, a head 204, a coupler 205, a torso 206, a housing 207, a touch sensor 211, an acceleration sensor 212, a microphone 213, an illuminance sensor 214, and a speaker 231 identical to those of the robot 200 disclosed in Unexamined Japanese Patent Application Publication No. 2023-115370 and, as such, description thereof is omitted here.
The robot 200 according to Embodiment 1 also includes a twist motor 221 and a vertical motor 222 identical to those of the robot 200 disclosed in Unexamined Japanese Patent Application Publication No. 2023-115370, and these motors operate in the same manner as in that publication; as such, description thereof is omitted here.
The robot 200 includes a gyrosensor 215. By using the acceleration sensor 212 and the gyrosensor 215, the robot 200 can detect a change in its own attitude, and can detect that it has been picked up, reoriented, thrown, or the like by the user.
Note that at least one of the acceleration sensor 212, the microphone 213, the illuminance sensor 214, the gyrosensor 215, and the speaker 231 is not limited to being provided on the torso 206, and may be provided on the head 204 or on both the torso 206 and the head 204.
Next, the functional configuration of the robot 200 is described while referencing
The control device 100 is a device that controls the robot 200. The control device 100 includes a controller 110 that is an example of control means, a storage 120 that is an example of storage means, and a communicator 130 that is an example of communication means.
The controller 110 includes a central processing unit (CPU). In one example, the CPU is a microprocessor or the like that executes a variety of processing and computations. In the controller 110, the CPU reads out a control program stored in the ROM and controls the behavior of the entire robot 200 while using the RAM as working memory. Additionally, while not illustrated in the drawings, the controller 110 is provided with a clock function, a timer function, and the like, and can measure the date and time, and the like. The controller 110 may also be called a “processor.”
The storage 120 includes read-only memory (ROM), random access memory (RAM), flash memory, and the like. The storage 120 stores an operating system (OS), application programs, and other programs and data used by the controller 110 to perform the various processes. Moreover, the storage 120 stores data generated or acquired as a result of the controller 110 performing the various processes.
The communicator 130 includes an interface for communicating with external devices of the robot 200. In one example, the communicator 130 communicates with external devices including the terminal device 50 in accordance with a known communication standard such as a wireless local area network (LAN), Bluetooth Low Energy (BLE, registered trademark), Near Field Communication (NFC), or the like.
The sensor 210 includes the touch sensor 211, the acceleration sensor 212, the gyrosensor 215, the illuminance sensor 214, and the microphone 213 described above. The sensor 210 is an example of detection means that detects an external stimulus.
The touch sensor 211 includes, for example, a pressure sensor and a capacitance sensor, and detects contact by some sort of object. The controller 110 can, on the basis of detection values of the touch sensor 211, detect that the robot 200 is being petted, is being struck, and the like by the user.
The acceleration sensor 212 detects an acceleration applied to the torso 206 of the robot 200. The acceleration sensor 212 detects acceleration in each of the X axis direction, the Y axis direction, and the Z axis direction. That is, the acceleration sensor 212 detects acceleration on three axes.
In one example, the acceleration sensor 212 detects gravitational acceleration when the robot 200 is stationary. The controller 110 can detect the current attitude of the robot 200 on the basis of the gravitational acceleration detected by the acceleration sensor 212. In other words, the controller 110 can detect whether the housing 207 of the robot 200 is inclined from the horizontal direction on the basis of the gravitational acceleration detected by the acceleration sensor 212. Thus, the acceleration sensor 212 functions as an incline detection means that detects the inclination of the robot 200.
Additionally, when the user picks up or throws the robot 200, the acceleration sensor 212 detects, in addition to the gravitational acceleration, acceleration caused by the movement of the robot 200. Accordingly, the controller 110 can detect the movement of the robot 200 by removing the gravitational acceleration component from the detection value detected by the acceleration sensor 212.
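As an illustration of this separation of gravity from motion, the following sketch estimates the gravity component with a simple low-pass filter and subtracts it from the raw reading; the filter constant and variable names are assumptions for illustration, not the actual processing of the controller 110:

```python
# Illustrative sketch: separating gravity from motion-induced acceleration.
ALPHA = 0.8  # low-pass filter constant (assumed value)

gravity = [0.0, 0.0, 0.0]  # running estimate of the gravity component (X, Y, Z)

def linear_acceleration(raw_xyz):
    """Return the motion-induced acceleration with the gravity estimate removed."""
    global gravity
    # Low-pass filter the raw reading to track the slowly varying gravity vector.
    gravity = [ALPHA * g + (1.0 - ALPHA) * a for g, a in zip(gravity, raw_xyz)]
    # Subtracting the gravity estimate leaves the acceleration caused by movement.
    return [a - g for a, g in zip(raw_xyz, gravity)]
```

When the result stays near zero, the robot 200 is effectively stationary; a large residual suggests that it has been picked up or thrown.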
The gyrosensor 215 detects the angular velocity when rotation is applied to the torso 206 of the robot 200. Specifically, the gyrosensor 215 detects the angular velocity about three axes of rotation, namely rotation about the X axis, rotation about the Y axis, and rotation about the Z axis. The movement of the robot 200 can be detected more accurately by combining the detection value detected by the acceleration sensor 212 and the detection value detected by the gyrosensor 215.
Note that, at a synchronized timing (for example every 0.25 seconds), the touch sensor 211, the acceleration sensor 212, and the gyrosensor 215 respectively detect the strength of contact, the acceleration, and the angular velocity, and output the detection values to the controller 110.
The microphone 213 detects ambient sound of the robot 200. The controller 110 can, for example, detect, on the basis of a component of the sound detected by the microphone 213, that the user is speaking to the robot 200, that the user is clapping their hands, and the like.
The illuminance sensor 214 detects the illuminance of the surroundings of the robot 200. The controller 110 can detect that the surroundings of the robot 200 have become brighter or darker on the basis of the illuminance detected by the illuminance sensor 214.
The controller 110 acquires, via the bus line BL and as an external stimulus, detection values detected by the various sensors of the sensor 210. The external stimulus is a stimulus that acts on the robot 200 from outside the robot 200. Examples of the external stimulus include “there is a loud sound”, “spoken to”, “petted”, “picked up”, “turned upside down”, “became brighter”, “became darker”, and the like.
In one example, the controller 110 acquires the external stimulus of “there is a loud sound” or “spoken to” by the microphone 213, and acquires the external stimulus of “petted” by the touch sensor 211. Additionally, the controller 110 acquires the external stimulus of “picked up” or “turned upside down” by the acceleration sensor 212 and the gyrosensor 215, and acquires the external stimulus of “became brighter” or “became darker” by the illuminance sensor 214.
Note that a configuration is possible in which the sensor 210 includes sensors other than the touch sensor 211, the acceleration sensor 212, the gyrosensor 215, the illuminance sensor 214, and the microphone 213. The types of external stimuli acquirable by the controller 110 can be increased by increasing the types of sensors of the sensor 210.
The driver 220 includes the twist motor 221 and the vertical motor 222, and is driven by the controller 110. The twist motor 221 is a servo motor for rotating the head 204, with respect to the torso 206, in the left-right direction (the width direction) with the front-back direction as an axis. The vertical motor 222 is a servo motor for rotating the head 204, with respect to the torso 206, in the up-down direction (height direction) with the left-right direction as an axis. The robot 200 can express movements of turning the head 204 to the side by using the twist motor 221, and can express movements of lifting/lowering the head 204 by using the vertical motor 222.
The outputter 230 includes the speaker 231, and sound is output from the speaker 231 as a result of sound data being input into the outputter 230 by the controller 110. For example, the robot 200 emits a pseudo-animal sound as a result of the controller 110 inputting animal sound data of the robot 200 into the outputter 230.
A configuration is possible in which, instead of the speaker 231, or in addition to the speaker 231, a display such as a liquid crystal display, a light emitter such as a light emitting diode (LED), or the like is provided as the outputter 230, and emotions such as joy, sadness, and the like are displayed on the display, expressed by the color and brightness of the emitted light, or the like.
The operator 240 includes an operation button, a volume knob, or the like. In one example, the operator 240 is an interface for receiving user operations such as turning the power ON/OFF, adjusting the volume of the output sound, and the like.
A battery 250 is a rechargeable secondary battery, and stores power to be used in the robot 200. The battery 250 is charged when the robot 200 has moved to a charging station.
A position information acquirer 260 includes a position information sensor that uses a global positioning system (GPS), and acquires current position information of the robot 200. Note that the position information acquirer 260 is not limited to GPS, and a configuration is possible in which the position information acquirer 260 acquires the position information of the robot 200 by a common method that uses wireless communication, or acquires the position information of the robot 200 through an application/software of the terminal device 50.
The controller 110 functionally includes an action information acquirer 111 that is an example of action information acquiring means, a state parameter acquirer 112 that is an example of state controlling means, an action controller 113 that is an example of action controlling means, and a degree of familiarity setter 114 that is an example of degree of familiarity setting means. In the controller 110, the CPU reads the program stored in the ROM out to the RAM and executes that program, thereby functioning as the various components described above.
Additionally, the storage 120 stores action information 121, a state parameter 122, log information 123, a coefficient table 124, and a degree of familiarity table 125.
Next, the configuration of the terminal device 50 is described while referencing
The controller 510 includes a CPU. In the controller 510, the CPU reads out a control program stored in the ROM and controls the operations of the entire terminal device 50 while using the RAM as working memory. The controller 510 may also be called a “processor.”
The storage 520 includes a ROM, a RAM, a flash memory, and the like. The storage 520 stores programs and data used by the controller 510 to perform various processes. Moreover, the storage 520 stores data generated or acquired as a result of the controller 510 performing the various processes.
The operator 530 includes an input device such as a keyboard, a mouse, a touch pad, a touch panel, and the like, and receives operation inputs from the user.
The display 540 includes a display device such as a liquid crystal display or the like, and displays various images on the basis of control by the controller 510. The display 540 is an example of display means.
The communicator 550 includes a communication interface for communicating with external devices of the terminal device 50. In one example, the communicator 550 communicates with external devices including the robot 200 in accordance with a known communication standard such as a wireless LAN, BLE (registered trademark), NFC, or the like.
The controller 510 functionally includes an action information creator 511 that is an example of action information creating means. In the controller 510, the CPU reads the program stored in the ROM out to the RAM and executes that program, thereby functioning as the component described above.
Returning to
The “movement” refers to a physical motion of the robot 200, executed by driving of the driver 220. Specifically, the “movement” corresponds to moving the head 204 relative to the torso 206 by the twist motor 221 or the vertical motor 222. The “sound output” refers to outputting of various sounds such as animal sounds or the like from the speaker 231 of the outputter 230.
The action information 121 defines, by combinations of such movements and sound outputs (animal sounds), the actions that the robot 200 is to execute. A configuration is possible in which the action information 121 is incorporated into the robot 200 in advance, and it is also possible for the user to freely create the action information 121 by operating the terminal device 50.
Returning to
Specifically, the user operates the operator 530 to start up a programming application/software installed in advance on the terminal device 50. As a result, the action information creator 511 displays an action information 121 creation screen such as illustrated in
The execution order and execution timing of a movement and a sound output (animal sound) that the robot 200 is to be caused to execute can respectively be set in a movement field and a sound field of the creation screen. The user can, by selecting and combining movements and sounds from menus while viewing this creation screen, freely program an action that the robot 200 is to be caused to execute.
Specifically, in the example of
The action information 121 created by such user operations more specifically has the configuration illustrated in
The trigger is a condition for the robot 200 to execute the action. Upon the trigger defined for a certain action being met, that action is executed by the robot 200. In the example of
Here, “Upon speech recognition” corresponds to a case in which the action name is recognized, by a speech recognition function of the robot 200, from speech of the user detected by the microphone 213. Additionally, “Upon head being petted” corresponds to a case in which petting of the head 204 of the robot 200 by the user is detected by the touch sensor 211.
Note that the trigger is not limited to the examples described above, and various conditions may be used. For example, the trigger may be a case in which “There is a loud sound” is detected by the microphone 213, a case in which “Picked up” or “Turned upside down” is detected by the acceleration sensor 212 and the gyrosensor 215, or a case in which “Became brighter” or “Became darker” is detected by the illuminance sensor 214. These can be called triggers based on external stimuli detected by the sensor 210. Alternatively, the trigger may be “A specific time arrived” or “The robot 200 moved to a specific location.” These can be called triggers not based on external stimuli.
Note that a configuration is possible in which, when an execution command is received from the terminal device 50, each action is executed regardless of the trigger defined in the action information 121.
The action control parameters are parameters for causing the robot 200 to execute each action. The action control parameters include various items, namely a movement, an animal sound, an execution start timing, a movement parameter, and an animal sound parameter.
The movement item defines the types and order of the movements constituting each action. The animal sound item defines the types and order of the sound outputs constituting each action. The execution start timing defines a timing at which to execute each of the movements or animal sounds constituting each action. Specifically, the execution start timing defines, for each movement or animal sound, a timing that is an origin point for execution, and an amount of execution time.
The movement parameter defines, for each of the movements constituting each action, an amount of movement time and a movement distance of the twist motor 221 or the vertical motor 222 when executing that movement. The animal sound parameter defines, for each animal sound constituting each action, a volume of the sound output from the speaker 231 when executing that animal sound.
The execution count is a cumulative number of times that each action has been executed by the robot 200. An initial value of the execution count is 0. The execution count of each action is increased by 1 each time the robot 200 executes that action. The previous execution date and time is the date and time at which the robot 200 last executed each action.
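The data configuration described above can be pictured roughly as in the following sketch; the field names and types are illustrative assumptions and do not represent the actual storage format of the action information 121:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class ActionElement:
    """One element of an action: a movement or a sound output (animal sound)."""
    kind: str                     # "movement" or "animal sound"
    name: str                     # type of movement or animal sound
    start_offset_s: float         # execution start timing relative to the action start
    duration_s: float             # amount of execution time
    parameter: dict = field(default_factory=dict)  # movement parameter (time, distance)
                                                   # or animal sound parameter (volume)

@dataclass
class ActionEntry:
    """One action defined in the action information 121 (illustrative only)."""
    action_name: str
    trigger: str                              # e.g. "Upon speech recognition"
    elements: list[ActionElement] = field(default_factory=list)
    execution_count: int = 0                  # cumulative number of executions
    previous_execution: Optional[datetime] = None
    degree_of_familiarity: float = 0.0
```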
The action information creator 511 creates, on the basis of user commands, the action information 121 having the data configuration described above. When the action information 121 is created, the action information creator 511 communicates with the robot 200 via the communicator 550, and sends the created action information 121 to the robot 200. In the robot 200, the action information acquirer 111 communicates with the terminal device 50 via the communicator 130, and acquires and saves, in the storage 120, the action information 121 created in the terminal device 50.
Returning to
The emotion parameter is a parameter that represents a pseudo-emotion of the robot 200. The emotion parameter is expressed by coordinates (X, Y) on an emotion map 300.
As illustrated in
The emotion parameter represents a plurality (in the present embodiment, four) of mutually different pseudo-emotions. In
Note that, in
The state parameter acquirer 112 calculates emotion change amounts, which are amounts by which the X value and the Y value of the emotion parameter are increased or decreased. The emotion change amounts are expressed by the following four variables: DXP and DXM respectively increase and decrease the X value of the emotion parameter, and DYP and DYM respectively increase and decrease the Y value of the emotion parameter.
The state parameter acquirer 112 updates the emotion parameter by adding or subtracting a value, among the emotion change amounts DXP, DXM, DYP, and DYM, corresponding to the external stimulus to or from the current emotion parameter. For example, when the head 204 is petted, the pseudo-emotion of the robot 200 becomes relaxed and, as such, the state parameter acquirer 112 adds DXP to the X value of the emotion parameter. Conversely, when the head 204 is struck, the pseudo-emotion of the robot 200 becomes worried and, as such, the state parameter acquirer 112 subtracts DXM from the X value of the emotion parameter. Which emotion change amount is associated with which external stimulus can be set as desired. An example is given below.
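As noted above, the association between external stimuli and emotion change amounts can be set as desired; the following sketch assumes one such association and clamps the result to the current bounds of the emotion map 300 (the stimulus labels and the clamping helper are illustrative):

```python
def clamp(value, minimum, maximum):
    return max(minimum, min(maximum, value))

def update_emotion(x, y, stimulus, dxp, dxm, dyp, dym, map_min, map_max):
    """Apply the emotion change amount associated with an external stimulus (illustrative)."""
    if stimulus == "head petted":        # relaxed -> X value increases
        x += dxp
    elif stimulus == "head struck":      # worried -> X value decreases
        x -= dxm
    elif stimulus == "spoken to":        # excited -> Y value increases (assumed association)
        y += dyp
    elif stimulus == "ignored":          # disinterested -> Y value decreases (assumed association)
        y -= dym
    # The emotion parameter stays within the settable range defined by the emotion map 300.
    return clamp(x, map_min, map_max), clamp(y, map_min, map_max)
```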
The sensor 210 acquires a plurality of external stimuli of different types by a plurality of sensors. The state parameter acquirer 112 derives various emotion change amounts in accordance with each individual external stimulus of the plurality of external stimuli, and sets the emotion parameter in accordance with the derived emotion change amounts.
The initial value of these emotion change amounts DXP, DXM, DYP, and DYM is 10, and the amounts increase to a maximum of 20. The state parameter acquirer 112 updates the various variables, namely the emotion change amounts DXP, DXM, DYP, and DYM in accordance with the external stimuli detected by the sensor 210.
Specifically, when the X value of the emotion parameter is set to the maximum value of the emotion map 300 even once in one day, the state parameter acquirer 112 adds 1 to the DXP, and when the Y value of the emotion parameter is set to the maximum value of the emotion map 300 even once in one day, the state parameter acquirer 112 adds 1 to the DYP. Additionally, when the X value of the emotion parameter is set to the minimum value of the emotion map 300 even once in one day, the state parameter acquirer 112 adds 1 to the DXM, and when the Y value of the emotion parameter is set to the minimum value of the emotion map 300 even once in one day, the state parameter acquirer 112 adds 1 to the DYM.
Thus, the state parameter acquirer 112 changes the emotion change amounts in accordance with a condition based on whether the value of the emotion parameter reaches the maximum value or the minimum value of the emotion map 300 (first condition based on external stimulus). As an example, assume that all of the initial values of the various variables of the emotion change amount are set to 10. The state parameter acquirer 112 increases the various variables to a maximum of 20 by updating the emotion change amounts described above. Due to this updating processing, each emotion change amount, that is, the degree of change of emotion, changes.
For example, when only the head 204 is petted multiple times, only the emotion change amount DXP increases and the other emotion change amounts do not change. As such, the robot 200 develops a personality with a tendency to be relaxed. When only the head 204 is struck multiple times, only the emotion change amount DXM increases and the other emotion change amounts do not change. As such, the robot 200 develops a personality with a tendency to be worried. Thus, the state parameter acquirer 112 changes the emotion change amounts in accordance with various external stimuli.
The personality parameter is a parameter expressing the pseudo-personality of the robot 200. The personality parameter includes a plurality of personality values that express degrees of mutually different personalities. The state parameter acquirer 112 changes the plurality of personality values included in the personality parameter in accordance with external stimuli detected by the sensor 210.
Specifically, the state parameter acquirer 112 calculates four personality values on the basis of (Equation 1) below. A value obtained by subtracting 10 from DXP, which expresses a tendency to be relaxed, is set as a personality value (chipper); a value obtained by subtracting 10 from DXM, which expresses a tendency to be worried, is set as a personality value (shy); a value obtained by subtracting 10 from DYP, which expresses a tendency to be excited, is set as a personality value (active); and a value obtained by subtracting 10 from DYM, which expresses a tendency to be disinterested, is set as a personality value (spoiled).
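Restated in equation form (a reconstruction from the description above; the original Equation 1 is not reproduced here):

```latex
% Personality values during the juvenile period (reconstruction of Equation 1)
\mathrm{chipper} = DXP - 10,\quad
\mathrm{shy} = DXM - 10,\quad
\mathrm{active} = DYP - 10,\quad
\mathrm{spoiled} = DYM - 10
```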
As a result, as illustrated in
Since the initial value of each of the personality values is 0, the personality at the time of birth of the robot 200 is expressed by the origin of the personality value radar chart 400. Moreover, as the robot 200 grows, the four personality values change, with an upper limit of 10, due to the external stimuli and the like (the manner in which the user interacts with the robot 200) detected by the sensor 210. Therefore, 11 to the fourth power, that is, 14,641 types of personalities can be expressed.
Thus, the robot 200 assumes various personalities in accordance with the manner in which the user interacts with the robot 200. That is, the personality of each individual robot 200 is formed differently on the basis of the manner in which the user interacts with the robot 200.
These four personality values are fixed when the juvenile period elapses and the pseudo-growth of the robot 200 is complete. In the subsequent adult period, the state parameter acquirer 112 adjusts four personality correction values (chipper correction value, active correction value, shy correction value, and spoiled correction value) in order to correct the personality in accordance with the manner in which the user interacts with the robot 200.
The state parameter acquirer 112 adjusts the four personality correction values in accordance with a condition based on which area of the emotion map 300 the emotion parameter has remained in the longest. Specifically, the four personality correction values are adjusted as in (A) to (E) below.
(A) When the longest existing area is the relaxed area on the emotion map 300, the state parameter acquirer 112 adds 1 to the chipper correction value and subtracts 1 from the shy correction value.
(B) When the longest existing area is the excited area on the emotion map 300, the state parameter acquirer 112 adds 1 to the active correction value and subtracts 1 from the spoiled correction value.
(C) When the longest existing area is the worried area on the emotion map 300, the state parameter acquirer 112 adds 1 to the shy correction value and subtracts 1 from the chipper correction value.
(D) When the longest existing area is the disinterested area on the emotion map 300, the state parameter acquirer 112 adds 1 to the spoiled correction value and subtracts 1 from the active correction value.
(E) When the longest existing area is the center area on the emotion map 300, the state parameter acquirer 112 reduces the absolute value of all four of the personality correction values by 1.
Note that the various areas of relaxed, excited, worried, disinterested, and center are examples and, for example, a configuration is possible in which the emotion map 300 is divided into more detailed areas such as happy, excited, upset, sad, peaceful, normal, and the like.
When setting the four personality correction values, the state parameter acquirer 112 calculates the four personality values in accordance with (Equation 2) below.
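A sketch of the adjustment in (A) to (E) above, together with one plausible form of Equation 2 in which the correction values are simply added to the personality values fixed at the end of the juvenile period and the result is kept within the 0 to 10 range, is given below; the additive form and the clamping are assumptions, not a reproduction of the original Equation 2:

```python
def adjust_corrections(longest_area, corr):
    """corr maps 'chipper', 'active', 'shy', 'spoiled' to their correction values."""
    if longest_area == "relaxed":
        corr["chipper"] += 1; corr["shy"] -= 1
    elif longest_area == "excited":
        corr["active"] += 1; corr["spoiled"] -= 1
    elif longest_area == "worried":
        corr["shy"] += 1; corr["chipper"] -= 1
    elif longest_area == "disinterested":
        corr["spoiled"] += 1; corr["active"] -= 1
    elif longest_area == "center":
        # Reduce the absolute value of every correction value by 1 (toward zero).
        for key, value in corr.items():
            corr[key] = value - 1 if value > 0 else value + 1 if value < 0 else 0
    return corr

def corrected_personality(base, corr):
    """Assumed form of Equation 2: fixed base value plus correction, clamped to 0..10."""
    return {key: max(0, min(10, base[key] + corr[key])) for key in base}
```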
The battery level is the remaining amount of power stored in the battery 250, and is a parameter expressing a pseudo degree of hunger of the robot 200. The state parameter acquirer 112 acquires information about the current battery level from a power supply controller that controls charging and discharging of the battery 250.
The current location is the location at which the robot 200 is currently positioned. The state parameter acquirer 112 acquires information about the current position of the robot 200 from the position information acquirer 260.
More specifically, the state parameter acquirer 112 references past position information of the robot 200 stored in the log information 123. The log information 123 is data in which past action data of the robot 200 is recorded. Specifically, the log information 123 includes the past position information, emotion parameters, and personality parameters of the robot 200, data expressing changes in the state parameters 122 such as the battery level, and sleep data expressing past wake-up times, bed times, and the like of the robot 200.
The state parameter acquirer 112 determines that the current location is home when the current location matches a position where the record frequency is the highest. When the current location is not the home, the state parameter acquirer 112 determines, on the basis of the past record count of that location in the log information 123, whether the current location is a location visited for the first time, a frequently visited location, a location not frequently visited, or the like, and acquires determination information thereof. For example, when the past record count is five times or greater, the state parameter acquirer 112 determines that the current location is a frequently visited location, and when the past record count is less than five times, the state parameter acquirer 112 determines that the current location is a location not frequently visited.
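A minimal sketch of this determination, assuming the relevant part of the log information 123 can be summarized as a record count per location (the function and data names are illustrative):

```python
def classify_location(current_location, record_counts):
    """record_counts: mapping from location to the number of times it appears in the log information 123."""
    home = max(record_counts, key=record_counts.get)  # the most frequently recorded position
    if current_location == home:
        return "home"
    count = record_counts.get(current_location, 0)
    if count == 0:
        return "location visited for the first time"
    if count >= 5:
        return "frequently visited location"
    return "location not frequently visited"
```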
The current time is the time at present. The state parameter acquirer 112 acquires the current time by a clock provided to the robot 200. Note that, as with the acquisition of the position information, the acquisition of the current time is not limited to this method.
More specifically, the state parameter acquirer 112 references a present day wake-up time and a past average bed time recorded in the log information 123 to determine whether the current time is immediately after the wake-up time of the present day or immediately before the bed time.
The log information 123 includes sleep data. While not illustrated in the drawings, the sleep data includes a sleep log and compiled sleep data. The past wake-up time and bed time of the robot 200 are recorded every day in the sleep log. The compiled sleep data is data compiled from the sleep log, and the average wake-up time and the average bed time for every day are recorded in the compiled sleep data.
In one example, when the current time is within 30 minutes after the wake-up time of the present day, the state parameter acquirer 112 determines that the current time is immediately after the wake-up time of the present day. Additionally, when the current time is within 30 minutes before the past average bed time, the state parameter acquirer 112 determines that it is immediately before the bed time.
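The two 30-minute windows described above can be checked roughly as follows; the use of datetime objects and the function name are illustrative assumptions:

```python
from datetime import datetime, timedelta

def time_of_day_state(now, todays_wake_up, average_bed_time):
    """Classify the current time relative to the wake-up and bed times (illustrative)."""
    if todays_wake_up <= now <= todays_wake_up + timedelta(minutes=30):
        return "immediately after waking up"
    if average_bed_time - timedelta(minutes=30) <= now <= average_bed_time:
        return "immediately before bed time"
    return "other"
```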
While not illustrated in the drawings, the past nap times of the robot 200 are recorded in the sleep data. The state parameter acquirer 112 references the past nap times recorded in the log information 123 to determine whether the current time corresponds to the nap time.
The growth days count expresses the number of days of pseudo-growth of the robot 200. The robot 200 is pseudo-born at the time of first start up by the user after shipping from the factory, and grows from a juvenile to an adult over a predetermined growth period. The growth days count corresponds to the number of days since the pseudo-birth of the robot 200.
An initial value of the growth days count is 1, and the state parameter acquirer 112 adds 1 to the growth days count for each passing day. In one example, the growth period in which the robot 200 grows from a juvenile to an adult is 50 days, and the 50-day period that is the growth days count since the pseudo-birth is referred to as a “juvenile period (first period).” When the juvenile period elapses, the pseudo-growth of the robot 200 ends. A period after the completion of the juvenile period is called an “adult period (second period).”
During the juvenile period, each time the pseudo growth days count of the robot 200 increases by one day, the state parameter acquirer 112 expands the emotion map 300 by increasing its maximum value by 2 and decreasing its minimum value by 2. Regarding the initial size of the emotion map 300, as illustrated by a frame 301, the maximum value of both the X value and the Y value is 100 and the minimum value is −100. When the growth days count exceeds half of the juvenile period (for example, 25 days), as illustrated by a frame 302, the maximum value of the X value and the Y value is 150 and the minimum value is −150. When the juvenile period elapses, the pseudo-growth of the robot 200 ends. At this time, as illustrated by a frame 303, the maximum value of the X value and the Y value is 200 and the minimum value is −200. Thereafter, the size of the emotion map 300 is fixed.
A settable range of the emotion parameter is defined by the emotion map 300. Thus, as the size of the emotion map 300 expands, the settable range of the emotion parameter expands. Due to the settable range of the emotion parameter expanding, richer emotion expression becomes possible and, as such, the pseudo-growth of the robot 200 is expressed by the expanding of the size of the emotion map 300.
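The relationship between the growth days count and the settable range can be sketched as follows, using the values given above (±100 initially, expanding by 2 per day, fixed at ±200 after the 50-day juvenile period); the handling of the first day is an assumption:

```python
JUVENILE_PERIOD_DAYS = 50   # pseudo-growth period (days)
INITIAL_BOUND = 100         # emotion map 300 starts at +/-100 (frame 301)
DAILY_EXPANSION = 2         # maximum and minimum each expand by 2 per day

def emotion_map_bounds(growth_days_count):
    """Return (minimum, maximum) of the emotion map 300 for a given growth days count."""
    days_grown = min(max(growth_days_count - 1, 0), JUVENILE_PERIOD_DAYS)
    bound = INITIAL_BOUND + DAILY_EXPANSION * days_grown   # 100 -> 150 -> 200 (frames 301 to 303)
    return -bound, bound
```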
Returning to
The action controller 113 determines, on the basis of detection results and the like from the sensor 210, whether any trigger among the plurality of triggers defined in the action information 121 is met. For example, the action controller 113 determines whether speech of the user is recognized, whether the head 204 of the robot 200 is petted, whether a specific time has arrived, whether the robot 200 has moved to a specific location, and so on, that is, whether any predetermined trigger in the action information 121 is met. When, as a result of the determination, any trigger is met, the robot 200 is caused to execute the action corresponding to the met trigger.
When any trigger is met, the action controller 113 references the action information 121 and identifies the action control parameters set for the action corresponding to the met trigger. Specifically, the action controller 113 identifies, as the action control parameters, a combination of movements or animal sounds that are elements constituting the action corresponding to the met trigger, the execution start timing of each element, and the movement parameter or the animal sound parameter that is the parameter of each element. Then, on the basis of the identified action control parameters, the action controller 113 drives the driver 220 or outputs the sound from the speaker 231 to cause the robot 200 to execute the action corresponding to the met trigger.
More specifically, the action controller 113 corrects, on the basis of the state parameters 122 acquired by the state parameter acquirer 112, the action control parameters identified from the action information 121. By doing this, it is possible to add changes to the actions in accordance with the current state of the robot 200, and it is possible to realistically imitate a living creature.
The action controller 113 references the coefficient table 124 to correct the action control parameters. As illustrated in
The correction coefficients are coefficients for correcting the action control parameters identified from the action information 121. Specifically, each correction coefficient is defined by an action direction and a weighting coefficient for each of a speed and an amplitude of a vertical movement by the vertical motor 222, a speed and an amplitude of a left-right movement by the twist motor 221, and a movement start time lag.
More specifically, the action controller 113 determines, for the following (1) to (5), the state to which the current state of the robot 200, expressed by the state parameters 122 acquired by the state parameter acquirer 112, corresponds. Then, the action controller 113 corrects the action control parameters using the correction coefficients corresponding to the current state of the robot 200.
(1) Is the current emotion parameter of the robot 200 happy, upset, excited, sad, disinterested, or normal? In other words, are the coordinates (X, Y) expressing the emotion parameter positioned in the area labeled “happy”, “upset”, “excited”, “sad”, “disinterested”, or “normal” on the emotion map 300 illustrated in
(2) Is the current personality parameter of the robot 200 chipper, active, shy, or spoiled? In other words, which of the four personality values of chipper, active, shy, and spoiled is the greatest?
(3) Is the current battery level of the robot 200 70% or greater, between 70% and 30%, or 30% or less?
(4) Is the current location of the robot 200 the home, a frequently visited location, a location not frequently visited, or a location visited for the first time?
(5) Is the current time immediately after waking up, a nap time, or immediately before bed time?
As an example, in the coefficient table 124 illustrated in
In the coefficient table 124 illustrated in
In addition to (5) the current time described above, the action controller 113 identifies, for each state, namely (1) the emotion parameter, (2) the personality parameter, (3) the battery level, and (4) the current location, the correction coefficients of the corresponding state from the coefficient table 124. Then, the action controller 113 corrects the action control parameters using the sum total of the correction coefficients corresponding to all of (1) to (5).
Next, a specific example is described in which (1) the current emotion parameter corresponds to happy, (2) the current personality parameter corresponds to chipper, (3) the current battery level corresponds to 30% or less, (4) the current location corresponds to location visited for the first time, and (5) the current time corresponds to immediately after waking up. In this case, when referencing the coefficient table 124 illustrated in
The sum total of the correction coefficients of the movement start time lag is calculated as “+0+0+0.3+0.2+0.2=+0.7.” As such, the action controller 113 delays the execution start timing by 70% relative to the values acquired from the action information 121.
Note that, while omitted from the drawings, the coefficient table 124 defines a correction coefficient for the animal sound in the same manner as for the movement. Specifically, the action controller 113 uses the correction coefficient corresponding to the state parameter 122 acquired from the state parameter acquirer 112 to correct the volume. Here, the volume is the animal sound parameter set for the action corresponding to the met trigger in the action information 121.
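The way these summed coefficients act on the action control parameters can be sketched as follows; the multiplicative form base × (1 + sum) and the parameter key names are assumptions made for illustration, chosen so that a summed movement start time lag coefficient of +0.7 delays the execution start timing by 70%, as in the example above:

```python
def apply_correction(base_params, coefficient_sums):
    """Correct action control parameters with summed correction coefficients (illustrative).

    Assumed keys: "vertical_speed", "vertical_amplitude", "twist_speed",
    "twist_amplitude", "start_time_lag", "volume".
    """
    return {
        key: base * (1.0 + coefficient_sums.get(key, 0.0))
        for key, base in base_params.items()
    }
```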
Thus, the action controller 113 corrects the action control parameters on the basis of the state parameters 122 acquired by the state parameter acquirer 112. Then, the action controller 113 causes the robot 200 to execute the action corresponding to the met trigger by causing the driver 220 to drive or outputting a sound from the speaker 231 on the basis of the corrected action control parameters.
More specifically, when causing the robot 200 to execute the action corresponding to the met trigger, the action controller 113 performs one of (1A) a first control for causing the robot 200 to correctly execute that action, and (1B) a second control for causing the robot 200 to incorrectly execute that action or for causing the robot 200 to not execute that action.
(1A) In the first control, causing the robot 200 to correctly execute the action means to control the robot 200 in accordance with the sequence defined for that action. Specifically, the first control corresponds to driving the driver 220 or outputting the sound from the speaker 231 correctly in accordance with the action control parameters corrected with the correction coefficients, when causing the robot 200 to execute the action corresponding to the met trigger.
(1B) In contrast, in the second control, causing the robot 200 to incorrectly execute the action means to control the robot 200 in accordance with a sequence that differs at least in part from the sequence defined for that action or, in other words, to control the robot 200 so as to execute at least a portion of that action incorrectly. Specifically, the second control corresponds to driving the driver 220 or outputting the sound from the speaker 231 not correctly in accordance with the action control parameters corrected with the correction coefficients, when causing the robot 200 to execute the action corresponding to the met trigger.
Here, incorrectly executing the action or, in other words, executing at least a portion of the action incorrectly means executing by a sequence that deviates from the sequence defined for that action. More specifically, incorrectly executing the action corresponds to omitting the executing of at least one element of the plurality of elements (movements or animal sounds) constituting that action, switching the execution order of at least one element with that of another element, or changing the action control parameters of at least one element.
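One way to picture the second control is the following sketch, in which a portion of the elements is chosen at random and one of the three deviations above is applied to each chosen element; the selection may instead follow a specific rule, and the data representation is illustrative:

```python
import random

def plan_second_control(elements, num_incorrect=1):
    """Return an execution plan in which a portion of the elements is executed incorrectly.

    `elements` is the ordered list of movements / animal sounds constituting the action.
    """
    plan = list(elements)
    targets = random.sample(range(len(plan)), k=min(num_incorrect, len(plan)))
    for i in targets:
        deviation = random.choice(["omit", "swap", "alter"])
        if deviation == "omit":
            plan[i] = None                           # omit execution of this element
        elif deviation == "swap" and len(plan) > 1:
            j = (i + 1) % len(plan)
            plan[i], plan[j] = plan[j], plan[i]      # switch execution order with another element
        else:
            plan[i] = ("altered", plan[i])           # execute with changed action control parameters
    return [element for element in plan if element is not None]
```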
As a specific example, for the action of “Test 1” illustrated in
Thus, the action controller 113 executes the first control or the second control in accordance with the situation and, as such, does not simply execute the action correctly every time, but sometimes mistakes or omits a portion of the action depending on the situation. Due to this, the actions of the robot 200 are not uniform, which improves the lifelikeness of the robot 200.
Returning to
Specifically, the degree of familiarity setter 114 calculates, in accordance with a predetermined rule, the degree of familiarity of each of the plurality of actions executable by the robot 200, and stores the calculated degrees of familiarity in the storage 120 as a degree of familiarity table 125. As illustrated in
When the robot 200 executes any of the actions, the degree of familiarity setter 114 calculates, in accordance with Equation 3 below, a new degree of familiarity of the executed action. Then, the degree of familiarity setter 114 updates, to the calculated new degree of familiarity, the degree of familiarity of the executed action included in the action information 121.
New degree of familiarity = Current degree of familiarity + Product of degree of familiarity coefficients + Correction value based on external stimulus (Equation 3)
In Equation 3, the degree of familiarity coefficient is a coefficient for calculating the degree of familiarity of each action. As illustrated in
In Equation 3, the product of degree of familiarity coefficients is a product of the degree of familiarity coefficients corresponding to the current state of the robot 200 in each of (1) the emotion parameter, (2) the personality parameter, (3) the battery level, (4) the current location, and (5) the current time. The degree of familiarity setter 114 references the degree of familiarity table 125 to calculate the product of degree of familiarity coefficients corresponding to the current state of the robot 200.
Next, as an example similar to that described above, in a case in which (1) the current emotion parameter corresponds to happy, (2) the current personality parameter corresponds to chipper, (3) the current battery level corresponds to 30% or less, (4) the current location corresponds to location visited for the first time, and (5) the current time corresponds to immediately after waking up, the degree of familiarity coefficients corresponding to each state in the degree of familiarity table 125 are defined as 1.2, 1.2, 0.6, 0.7, and 0.6. As such, the degree of familiarity setter 114 calculates the product of degree of familiarity coefficients as 1.2×1.2×0.6×0.7×0.6≈0.36.
The degree of familiarity setter 114 adds the product of degree of familiarity coefficients calculated in this manner to the degree of familiarity every time the robot 200 executes the action. Since the product of degree of familiarity coefficients is added to the degree of familiarity of the action every time that action is executed, the degree of familiarity of that action increases as the execution count of that action increases.
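Putting Equation 3 together with the coefficient lookup, a minimal sketch of the update follows; the function signature and the list-based representation of the matched coefficients are illustrative assumptions:

```python
from math import prod

def update_familiarity(current_familiarity, coefficients, external_correction=0.0):
    """Equation 3: new value = current value + product of coefficients + correction.

    `coefficients` holds the degree of familiarity coefficients matching the current state
    for (1) emotion, (2) personality, (3) battery level, (4) location, and (5) time.
    """
    coefficient_product = prod(coefficients)   # e.g. 1.2 * 1.2 * 0.6 * 0.7 * 0.6 ~= 0.36
    return current_familiarity + coefficient_product + external_correction
```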
Furthermore, for the degree of familiarity coefficient, as illustrated in the degree of familiarity table 125 (2) of
For example, in the degree of familiarity table 125 (2) illustrated in
Furthermore, in the degree of familiarity table 125 (2) illustrated in
In Equation 3, the correction value based on external stimulus is a correction value for correcting the degree of familiarity on the basis of an external stimulus relative to the action executed by the robot 200. When the sensor 210 detects an external stimulus during execution of the first control, or when the sensor 210 detects an external stimulus within a predetermined amount of time after the execution of the first control, the degree of familiarity setter 114 corrects the degree of familiarity on the basis of the external stimulus.
Specifically, the degree of familiarity setter 114 detects, by the sensor 210, a user response relative to the executed action as an external stimulus. In one example, the user demonstrates, as a response to the action executed by the robot 200, a positive response such as petting, praising, or the like, or a negative response such as striking, getting angry, or the like. The degree of familiarity setter 114 detects, by the various types of sensors of the sensor 210, such user responses while the robot 200 is being caused to execute the action and for a predetermined amount of time (for example, 1 minute) after the robot 200 is caused to execute the action.
Specifically, the degree of familiarity setter 114 uses the touch sensor 211 to detect the strength of contact of the user on the robot 200 and, on the basis of the strength of contact, determines whether the user is petting or striking the robot 200, that is, whether the user response is positive (petting) or negative (striking). Additionally, the degree of familiarity setter 114 detects the speech of the user by the microphone 213 and performs speech recognition of the detected speech to determine whether the user is praising or is angry at the robot 200, that is, whether the user response is positive or negative. Moreover, a configuration is possible in which the degree of familiarity setter 114 detects the speech of the user by the microphone 213 and senses a volume of the detected speech, and determines that the user is praising (is positive) when the volume is less than a predetermined value, and is angry (is negative) when the volume is greater than or equal to the predetermined value. Furthermore, a configuration is possible in which the degree of familiarity setter 114 determines that the robot 200 is rocked gently, rocked forcefully, hugged, turned upside down, or the like on the basis of detection values of the acceleration sensor 212 or the gyrosensor 215. Moreover, a configuration is possible in which the degree of familiarity setter 114 determines that the user response is positive when the robot 200 is rocked gently or hugged, and determines that the user response is negative when the robot 200 is rocked forcefully or turned upside down.
The degree of familiarity setter 114 sets the correction value based on external stimulus of Equation 3 when a user response such as described above is detected as an external stimulus. Then, the action controller 113 corrects the degree of familiarity with the set correction value. For example, when the user demonstrates a positive response to an action executed by the robot 200, the degree of familiarity setter 114 sets the correction value based on external stimulus to a positive value. Meanwhile, when the user demonstrates a negative response to an action executed by the robot 200, the degree of familiarity setter 114 sets the correction value based on external stimulus to a negative value.
Thus, the degree of familiarity setter 114 increases the degree of familiarity when the user response is positive, and decreases the degree of familiarity when the user response is negative. In other words, the degree of familiarity setter 114 increases the amount of increase of the degree of familiarity when the sensor 210 detects a positive user response as the external stimulus compared to when the sensor 210 detects a negative user response as the external stimulus.
Additionally, a configuration is possible in which the degree of familiarity setter 114 adjusts the correction value based on external stimulus of Equation 3 on the basis of a brightness detected by the illuminance sensor 214 when the action is executed. Specifically, a configuration is possible in which the degree of familiarity setter 114 increases the amount of increase of the degree of familiarity when the illuminance sensor 214 senses a brightness greater than or equal to a predetermined threshold compared to when the illuminance sensor 214 senses a brightness less than the predetermined threshold.
Thus, the degree of familiarity setter 114 increases the degree of familiarity of each action as the execution count of that action increases, and updates the degree of familiarity of each action in accordance with the current state of the robot 200 and the user response to that action.
When causing the robot 200 to execute an action corresponding to a met trigger, the action controller 113 determines the frequency at which to perform the first control, among the first control and the second control, on the basis of the degree of familiarity set for that action by the degree of familiarity setter 114. In other words, the frequency at which the first control is to be performed when the action controller 113 causes the robot 200 to execute the action corresponding to the met trigger changes in accordance with the degree of familiarity to that action.
When any trigger among the triggers of the plurality of actions defined in the action information 121 is met, before executing the action corresponding to the met trigger, the action controller 113 derives, in accordance with the current degree of familiarity of that action, the frequency at which the first control is to be performed.
For example, the action controller 113 derives 0.5 as the frequency when the current degree of familiarity of the action to be executed is 0 or greater and less than 5, derives 0.8 as the frequency when the current degree of familiarity of the action to be executed is 5 or greater and less than 10, and derives the frequency as 1.0 when the current degree of familiarity of the action to be executed is 10 or greater. Thus, as the frequency at which the first control is to be performed, the action controller 113 derives a frequency that increases as the current degree of familiarity of that action increases.
When the frequency is derived, the action controller 113 determines, on the basis of the derived frequency, whether to perform the first control or to perform the second control when causing the robot 200 to execute the action corresponding to the met trigger. For example, when the derived frequency is 0.5, the action controller 113 determines to perform the first control with a probability of 50%, and determines to perform the second control with a probability of the remaining 50%. When the derived frequency is 0.8, the action controller 113 determines to perform the first control with a probability of 80%, and determines to perform the second control with a probability of the remaining 20%. When the derived frequency is 1.0, the action controller 113 determines to perform the first control with a probability of 100%, and determines to not perform the second control. Thus, the action controller 113 determines to perform the first control with a probability that increases as the derived frequency increases.
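A minimal sketch of this derivation and of the probabilistic choice between the first control and the second control, using the example thresholds and frequencies given above (the thresholds are the example values, not a fixed specification):

```python
import random

def derive_first_control_frequency(familiarity):
    """Map the current degree of familiarity to the frequency of the first control."""
    if familiarity < 5:
        return 0.5
    if familiarity < 10:
        return 0.8
    return 1.0

def choose_control(familiarity):
    """Return 'first' with probability equal to the derived frequency, otherwise 'second'."""
    return "first" if random.random() < derive_first_control_frequency(familiarity) else "second"
```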
In a case in which a determination to perform the first control is made, the action controller 113, when causing the robot 200 to execute the action corresponding to the met trigger, drives the driver 220 or outputs sounds from the speaker 231 correctly for all of the plurality of elements (movement or animal sounds) constituting that action, in accordance with the action control parameters corrected with the correction coefficients.
In contrast, in a case in which a determination to perform the second control is made, the action controller 113, when causing the robot 200 to execute a portion of the elements of the plurality of elements (movement or animal sounds) constituting the action corresponding to the met trigger, drives the driver 220 or outputs sounds from the speaker 231 not correctly in accordance with the action control parameters corrected with the correction coefficients. Specifically, the action controller 113 omits execution, switches the execution order, changes the action control parameters, or the like of a portion of the elements of the plurality of elements constituting the action corresponding to the met trigger.
Note that, when causing the robot 200 to execute an element other than the portion of elements of the plurality of elements constituting the action corresponding to the met trigger, the action controller 113 drives the driver 220 or outputs the sound from the speaker 231 correctly in accordance with the action control parameters corrected with the correction coefficients.
Here, the portion of elements not correctly executed of the plurality of elements constituting the action corresponding to the met trigger may be randomly selected, or may be selected in accordance with a specific rule.
Next, the flow of robot control processing is described while referencing
When the robot control processing starts, the controller 110 sets the state parameters 122 (step S101). When the robot 200 is started up for the first time (the time of the first start up by the user after shipping from the factory), the controller 110 sets the various parameters, namely the emotion parameter, the personality parameter, and the growth days count to initial values (for example, 0). Meanwhile, at the time of starting up for the second and subsequent times, the controller 110 reads out the values of the various parameters stored in step S106, described later, of the robot control processing to set the state parameter 122. However, a configuration is possible in which the emotion parameters are all initialized to 0 each time the power is turned ON.
When the state parameters 122 are set, the controller 110 communicates with the terminal device 50 and acquires the action information 121 created on the basis of user operations performed on the terminal device 50 (step S102). Note that, when the action information 121 is already stored in the storage 120, step S102 may be skipped.
When the action information 121 is acquired, the controller 110 determines whether any trigger among the triggers of the plurality of actions defined in the action information 121 is met (step S103).
When any trigger is met (step S103; YES), the controller 110 causes the robot 200 to execute the action corresponding to the met trigger (step S104). Details about the action control processing of step S104 are described while referencing the flowchart of
When the action control processing illustrated in
When the state parameters 122 are updated, the controller 110 references the action information 121 and acquires the action control parameters of the action corresponding to the met trigger (step S202). Specifically, the controller 110 acquires, from the action information 121, a combination of movements or animal sounds that are elements constituting the action corresponding to the met trigger, the execution start timing of each element, and the movement parameter or the animal sound parameter that is the parameter of each element.
When the action control parameters are acquired, the controller 110 corrects the action control parameters on the basis of the correction coefficients defined in the coefficient table 124 (step S203). Specifically, the controller 110 calculates the sum total of the correction coefficients corresponding to the state parameters 122 updated in step S201 among the correction coefficients defined in the coefficient table 124 for each of (1) the emotion parameter, (2) the personality parameter, (3) the battery level, (4) the current location, and (5) the current time. Then, the controller 110 corrects the movement parameter, the animal sound parameter, and the execution start timing with the calculated sum total of the correction coefficients.
When the action control parameters are corrected, the controller 110 determines whether to correctly execute the action corresponding to the met trigger (step S204). Specifically, the controller 110 references the degree of familiarity of the action corresponding to the met trigger in the action information 121, and derives, in accordance with the degree of familiarity, the frequency at which the first control is to be performed. Then, the controller 110 determines, on the basis of the derived frequency, whether to perform the first control in which the action is executed correctly, or to perform the second control in which the action is executed incorrectly.
When the action is to be executed correctly (step S204; YES), the controller 110 causes the robot 200 to correctly execute the action corresponding to the met trigger (step S205). Specifically, the controller 110 causes the driver 220 to drive or outputs the sound from the speaker 231 correctly in accordance with the action control parameters corrected in step S203.
When the robot 200 is caused to correctly execute the action, the controller 110 determines whether a user response is detected during the execution of the action or within a predetermined amount of time after the execution of the action (step S206). Specifically, the controller 110 determines whether a response by the user, such as contact, a call, or the like, is detected by the sensor 210 as an external stimulus.
When a user response is detected (step S206; YES), the controller 110 sets, on the basis of the user response, the correction value for the degree of familiarity of the executed action (step S207). For example, when the user demonstrates a positive response, such as petting or praising, to the action executed by the robot 200, the controller 110 sets a positive value as the correction value. Meanwhile, when the user demonstrates a negative response, such as striking the robot 200 or getting angry, the controller 110 sets a negative value as the correction value.
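Step S207 can be illustrated as a small mapping from the detected response type to a signed correction value; the concrete values (+1 and -1) and the response labels are placeholders.

```python
def correction_value_for_response(response):
    """Map a detected user response to a familiarity correction value (step S207)."""
    positive = {"petting", "praising"}         # placeholder labels for positive responses
    negative = {"striking", "getting_angry"}   # placeholder labels for negative responses
    if response in positive:
        return +1   # placeholder positive correction value
    if response in negative:
        return -1   # placeholder negative correction value
    return 0        # other responses leave the degree of familiarity unchanged
```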
When a user response is not detected (step S206; NO), the controller 110 skips the processing of step S207.
In contrast, when the action is not to be executed correctly (step S204; NO), the controller 110 determines, of the action corresponding to the met trigger, the movement or the animal sound to not execute correctly (step S208). Specifically, the controller 110 determines, randomly or in accordance with a specific rule, which portion of the elements (movements or animal sounds) constituting the action corresponding to the met trigger is not to be executed correctly.
Next, the controller 110 causes the robot 200 to incorrectly execute the action corresponding to the met trigger (step S209). Specifically, for the movement or the animal sound determined in step S208, the controller 110 omits the execution, switches the execution order, changes the action control parameters, or the like. Then, for the other movements or animal sounds, the controller 110 drives the driver 220 or outputs the sound from the speaker 231 correctly in accordance with the action control parameters.
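The second control of steps S208 and S209 might be organized as in the sketch below, which illustrates only two of the possible mistakes (omitting an element and altering a parameter); switching the execution order is another option not shown. The element layout, the `speed` parameter, and the `robot.execute` interface are assumptions.

```python
import random

def execute_incorrectly(elements, robot):
    """Second control: execute the action with deliberate mistakes (steps S208-S209).

    `elements` is assumed to be the ordered list of movements/animal sounds,
    each represented as a dict of (already corrected) action control parameters.
    """
    if not elements:
        return
    # Step S208: randomly choose which elements will not be executed correctly.
    flawed = set(random.sample(range(len(elements)), k=max(1, len(elements) // 3)))

    for i, element in enumerate(elements):
        if i in flawed:
            if random.random() < 0.5:
                continue  # omit the execution of this movement or animal sound
            element = {**element, "speed": element.get("speed", 1.0) * 0.5}  # alter a parameter
        robot.execute(element)  # the remaining elements are executed correctly
```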
When the action is executed, the controller 110 updates the degree of familiarity of the executed action (step S210). Specifically, the controller 110 calculates, on the basis of the state parameters 122 updated in step S201, the product of the degree of familiarity coefficients. Then, in accordance with Equation 3, the controller 110 calculates a new degree of familiarity from the current degree of familiarity, the calculated product of degree of familiarity coefficients, and the correction value set in step S207. The controller 110 updates the degree of familiarity of the executed action in the degree of familiarity table 125 to the new degree of familiarity.
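Because Equation 3 itself is not reproduced in the passage above, the update below only assumes a plausible form, namely multiplying the current degree of familiarity by the product of the degree of familiarity coefficients and then adding the correction value; the actual equation may differ, and the clamping range is likewise an assumption.

```python
def update_familiarity(current, coefficients, correction=0.0, lower=0.0, upper=100.0):
    """Sketch of step S210 under an assumed form of Equation 3.

    `coefficients` are the degree of familiarity coefficients selected on the
    basis of the state parameters 122; `correction` is the value set in step S207.
    """
    product = 1.0
    for c in coefficients:
        product *= c
    new_value = current * product + correction  # assumed form, not the actual Equation 3
    return max(lower, min(upper, new_value))    # keep the value within an assumed range
```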
When the action is executed, the controller 110 updates the action information 121 (step S211). Specifically, the controller 110 adds 1 to the execution count of the executed action in the action information 121, and updates the previous execution date and time of the executed action in the action information 121 to the current date and time. Thus, the action control processing ends.
Returning to the robot control processing, the controller 110 next determines whether to end the processing (step S105). For example, when the operator 240 receives a power OFF command for the robot 200 from the user, the processing is ended. When ending the processing (step S105; YES), the controller 110 stores the current state parameters 122 in the non-volatile memory of the storage 120 (step S106), and ends the robot control processing.
When not ending the processing (step S105; NO), the controller 110 uses the clock function to determine whether the date has changed (step S107). When the date has not changed (step S107; NO), the controller 110 executes step S103.
When the date has changed (step S107; YES), the controller 110 updates the state parameters 122 (step S108). Specifically, when the robot 200 is in the juvenile period (for example, 50 days from birth), the controller 110 changes the values of the emotion change amounts DXP, DXM, DYP, and DYM in accordance with whether the emotion parameter has reached the maximum value or the minimum value of the emotion map 300. Additionally, when in the juvenile period, the controller 110 increases both the minimum value and the maximum value of the emotion map 300 by a predetermined increase amount (for example, 2). In contrast, when in the adult period, the controller 110 adjusts the personality correction values.
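The daily update of step S108 might be arranged as in the sketch below. The 50-day juvenile period and the increase amount of 2 follow the example values above, whereas the specific rules for adjusting DXP, DXM, DYP, and DYM are placeholders, since the passage only states that they change when the emotion parameter reaches a bound of the emotion map 300.

```python
def daily_state_update(state, emotion_map, juvenile_days=50, increase=2):
    """Sketch of step S108; the concrete adjustment rules are placeholders."""
    if state["growth_days"] < juvenile_days:
        # Juvenile period: change the emotion change amounts when a bound is reached.
        if state["emotion"]["X"] >= emotion_map["max"]:
            state["DXP"] = state.get("DXP", 1) + 1      # placeholder adjustment
        if state["emotion"]["X"] <= emotion_map["min"]:
            state["DXM"] = state.get("DXM", 1) + 1      # placeholder adjustment
        if state["emotion"]["Y"] >= emotion_map["max"]:
            state["DYP"] = state.get("DYP", 1) + 1      # placeholder adjustment
        if state["emotion"]["Y"] <= emotion_map["min"]:
            state["DYM"] = state.get("DYM", 1) + 1      # placeholder adjustment
        # Juvenile period: raise both bounds of the emotion map 300.
        emotion_map["max"] += increase
        emotion_map["min"] += increase
    else:
        # Adult period: adjust the personality correction values (not detailed here).
        pass
```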
When the state parameters 122 are updated, the controller 110 adds 1 to the growth days count (step S109), and executes step S103. Then, as long as the robot 200 is operating normally, the controller 110 repeats the processing of steps S103 to S109.
As described above, when executing an action, the robot 200 according to Embodiment 1 performs one of the first control in which the action is correctly executed, and the second control in which the action is incorrectly executed. Moreover, the frequency at which the first control is performed changes in accordance with the degree of familiarity of the robot 200 to that action. Thus, whether the robot 200 executes the action correctly or executes the action incorrectly changes in accordance with the degree of familiarity to that action and, as such, it is possible to express the process of the robot 200 learning the action.
In particular, when an external stimulus is detected during the execution of the first control or within a predetermined amount of time after the execution of the first control, the robot 200 according to Embodiment 1 updates the degree of familiarity in accordance with the external stimulus. Due to this, at an early stage when the degree of familiarity is low, even when an external stimulus for executing a specific action is applied, the robot 200 executes that action clumsily, without executing some of the procedures as defined. In contrast, when the user demonstrates a positive response to a correctly executed action, the degree of familiarity increases and the frequency at which the action is correctly executed gradually increases. As a result, it is possible to imitate the growth (development) of a living creature.
Next, Embodiment 2 is described. In Embodiment 2, as appropriate, descriptions of configurations and functions that are the same as those described in Embodiment 1 are foregone.
In Embodiment 1, the degree of familiarity setter 114 updates the degree of familiarity of each action in accordance with the current state of the robot 200 and the user response to that action. In contrast, in Embodiment 2, instead of or in addition to the feature of Embodiment 1, the degree of familiarity setter 114 increases the degree of familiarity in accordance with an elapsed time from a pseudo-birth of the robot 200.
Here, the elapsed time from the pseudo-birth of the robot 200 corresponds, for example, to the growth days count of the state parameters 122. The degree of familiarity setter 114 sets the degree of familiarity to a low value immediately after the pseudo-birth of the robot 200, and increases the degree of familiarity as the growth days count increases. By increasing the degree of familiarity in accordance with the growth days count of the robot 200 in this manner, it is possible to express the process of the robot 200 learning the action as the robot 200 grows.
Furthermore, a configuration is possible in which, for example, the degree of familiarity setter 114 increases the degree of familiarity as the growth days count increases during a juvenile period but, when an adult period is reached, the increasing of the degree of familiarity as the growth days count increases is stopped. Thus, the robot 200 clumsily and incorrectly executes the actions in the juvenile period, and when the adult period is reached, the frequency at which the robot 200 correctly executes the actions increases. As such, it is possible to realistically express lifelikeness.
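One way to picture this variation of Embodiment 2 is the short sketch below; the per-day increment and the 50-day juvenile period are assumptions used only to show how the growth-based increase can be capped at the adult period.

```python
def familiarity_from_growth(base, growth_days, per_day=1.0, juvenile_days=50):
    """Embodiment 2 sketch: the degree of familiarity rises with the growth days
    count, but the growth-based increase stops once the adult period is reached."""
    effective_days = min(growth_days, juvenile_days)  # no further increase after adulthood
    return base + per_day * effective_days
```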
Embodiments of the present disclosure are described above, but these embodiments are merely examples and do not limit the scope of application of the present disclosure. That is, various applications of the embodiments of the present disclosure are possible, and all embodiments are included in the scope of the present disclosure.
For example, in Embodiment 1, the degree of familiarity setter 114 derives the degree of familiarity to the action on the basis of the execution count of the action by the robot 200 and the state of the robot 200. Additionally, in Embodiment 2, the degree of familiarity setter 114 derives the degree of familiarity to the action also on the basis of the elapsed time from the pseudo-birth of the robot 200. However, the method for deriving the degree of familiarity is not limited thereto. In other words, a configuration is possible in which the degree of familiarity setter 114 derives the degree of familiarity to the action on the basis of at least one of the execution count of the action by the robot 200, the elapsed time from the pseudo-birth of the robot 200, and the state of the robot 200.
In the embodiment described above, the control device 100 is installed in the robot 200, but a configuration is possible in which the control device 100 is not installed in the robot 200 but, rather, is a separate device (for example, a server). When the control device 100 is provided outside the robot 200, the control device 100 communicates with the robot 200 via the communicator 130, the control device 100 and the robot 200 send and receive data to and from each other, and the control device 100 controls the robot 200 as described in the embodiments described above.
In the embodiment described above, the exterior 201 is formed in a barrel shape from the head 204 to the torso 206, and the robot 200 has a shape as if lying on its belly. However, the robot 200 is not limited to resembling a living creature that has a shape as if lying on its belly. For example, a configuration is possible in which the robot 200 has a shape provided with arms and legs, and resembles a living creature that walks on four legs or two legs.
Furthermore, the electronic device is not limited to a robot 200 that imitates a living creature. For example, provided that the electronic device is a device capable of expressing individuality by executing various actions, a configuration is possible in which the electronic device is a wristwatch or the like. Even an electronic device other than the robot 200 can be configured in the same manner as in the embodiments described above by providing it with the same configurations and functions as the robot 200 described above.
In the embodiment described above, in the controller 110, the CPU executes programs stored in the ROM to function as the various components, namely, the action information acquirer 111, the state parameter acquirer 112, the action controller 113, and the like. Additionally, in the controller 510, the CPU executes programs stored in the ROM to function as the various components, namely, the action information creator 511 and the like. However, in the present disclosure, the controller 110, 510 may include, for example, dedicated hardware such as an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), various control circuitry, or the like instead of the CPU, and this dedicated hardware may function as the various components, namely the action information acquirer 111 and the like. In this case, the functions of each of the components may be realized by individual pieces of hardware, or the functions of each of the components may be collectively realized by a single piece of hardware. Additionally, the functions of each of the components may be realized in part by dedicated hardware and in part by software or firmware.
It is possible to provide, in advance, a robot 200 or a terminal device 50 with the configurations for realizing the functions according to the present disclosure, but it is also possible to apply a program to cause an existing information processing device or the like to function as the robot 200 or the terminal device 50 according to the present disclosure. That is, a configuration is possible in which a CPU or the like that controls an existing information processing device or the like is used to execute a program for realizing the various functional components of the robot 200 or the terminal device 50 described in the foregoing embodiments, thereby causing the existing information processing device to function as the robot 200 or the terminal device 50 according to the present disclosure.
Additionally, any method may be used to apply the program. For example, the program can be applied by storing the program on a non-transitory computer-readable recording medium such as a flexible disc, a compact disc (CD) ROM, a digital versatile disc (DVD) ROM, and a memory card. Furthermore, the program can be superimposed on a carrier wave and applied via a communication medium such as the internet. For example, the program may be posted to and distributed via a bulletin board system (BBS) on a communication network. Moreover, a configuration is possible in which the processing described above is executed by starting the program and, under the control of the operating system (OS), executing the program in the same manner as other applications/programs.
The foregoing describes some example embodiments for explanatory purposes. Although the foregoing discussion has presented specific embodiments, persons skilled in the art will recognize that changes may be made in form and detail without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. This detailed description, therefore, is not to be taken in a limiting sense, and the scope of the invention is defined only by the included claims, along with the full range of equivalents to which such claims are entitled.