This invention relates to a robot; more particularly to a child-sized expressive humanoid robot with realistic but simplified features; and even more particularly to the hand of such a robot, where the hand comprises a magnet and an RFID sensor, and optionally an FSR sensor, to enable object interaction between a user and the robot.
KASPAR is a child-sized humanoid robot designed to help teachers and parents support children with autism. The robot was developed by the University of Hertfordshire's Adaptive Systems Research Group. KASPAR was designed for use as a social mediator, encouraging and helping children with autism to interact and communicate with adults and other children. KASPAR has the ability to engage in a range of interactive play scenarios, such as turn-taking or shared-gaze activities, which children with autism often find difficult to understand or perform. KASPAR's face is capable of showing a range of simplified expressions but with few of the complexities of a real human face. KASPAR has movable arms, head and eyes, which can be controlled by the teacher or parent but also can respond to the touch of a child. It is desirable to create a robot like KASPAR which is also capable of object interaction between a user and the robot.
Other humanoid robots that could be considered for a therapeutic role in the field of children with autism include NAO and Milo. NAO is a small humanoid robot, similar to KASPAR, that is capable of performing gestures. NAO, however, does not have a human-like face and as a result cannot generate human-like facial expressions in the way that KASPAR can. Milo is a small humanoid robot similar to KASPAR; however, it is not capable of tactile interaction due to the fragility of its joints and the lack of tactile sensors around the body.
According to a first aspect of the invention there is provided a child-sized humanoid robot comprising a magnet and a Radio-Frequency Identification (RFID) sensor.
Preferably the robot further comprises a Force Sensing Resistor (FSR) sensor.
Preferably the robot comprises a hand, wherein the hand comprises the magnet, the RFID sensor and, if provided, the FSR sensor.
Preferably the hand comprises a plastic core; in one alternative the hand comprises a 3D printed plastic core.
Preferably the plastic core is covered with a skin. The skin should be of a sufficient thickness not to break easily; preferably the skin is between about 2 mm and 3 mm thick. The skin thickness should be sufficient to provide good cover and protection, but not so thick that it obstructs the sensory capacity of the components within the hand. In one alternative the skin is formed from a silicone; in another alternative the skin is formed from a vinyl such as PVC.
In one alternative the magnet is a permanent magnet; in another alternative the magnet is an electromagnet. Preferably the magnet is embedded in the plastic core, preferably at the front of the plastic core, preferably where the palm of the hand is located.
Preferably the hand comprises a plurality of FSR sensors. Preferably the FSR sensor(s) are embedded in the front and rear of the plastic core. The FSR sensor(s) are preferably placed under the skin and can detect the approximate amount of pressure being exerted on them.
Preferably the RFID sensor is embedded in the plastic core, preferably at the front of the plastic core, preferably where the palm of the hand is located. Preferably the RFID sensor sits behind the FSR sensor in the plastic core.
In an alternative the RFID sensor may be located in a separate platform rather than in the hand of the robot. In this alternative a platform is provided (which is connected to the robot) upon which objects comprising an RFID tag are to be placed by the child, rather than placing them onto the hand of the robot. This would allow for larger objects to be utilised, such as items of crockery (plates, bowls, and cups), toy models of animals etc., wherein the child has to recognise the correct item to be placed onto the platform.
According to a second aspect of the present invention there is provided an object comprising a magnet and an RFID tag.
In one alternative the magnet and RFID tag are detachably connected to the object, more preferably the magnet and RFID tag are located in a housing which is detachably connected to the object. In another alternative the magnet and RFID tag are embedded in the object.
According to a third aspect of the present invention there is provided an apparatus comprising a child-sized humanoid robot comprising a magnet and an RFID sensor and an object comprising a magnet and an RFID tag wherein when the object is brought into close proximity with the robot the object becomes removably attached to the robot and the RFID tag interacts with the RFID sensor.
Preferably the apparatus further comprises an FSR sensor.
Preferably when the RFID tag interacts with the RFID sensor the robot identifies the object.
Preferably when the robot identifies the object the robot provides the user with a response, the response in one alternative could be a verbal response, in another alternative the response could be a gestural response. Preferably the robot provides the user with both a verbal response and a gestural response. In a further alternative the response is a non-verbal sound such as a beep or a jingle.
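Purely by way of illustration, the identification-and-response behaviour described above may be sketched as a simple lookup from tag identity to a paired verbal and gestural response. The tag identifiers, phrases, gesture names and function names below are hypothetical examples and form no part of the claims:

```python
# Illustrative sketch only: maps a recognised RFID tag to a paired
# (verbal, gestural) response. All identifiers are hypothetical examples.

RESPONSES = {
    "tag_toothbrush": ("This is a toothbrush!", "brush_teeth"),
    "tag_spoon": ("This is a spoon!", "eat_with_spoon"),
    "tag_cup": ("This is a cup!", "drink_from_cup"),
}

def respond_to_object(tag_id):
    """Return the (verbal, gestural) response for an RFID tag read."""
    if tag_id not in RESPONSES:
        # Unrecognised object: fall back to a non-verbal sound
        # such as a beep or a jingle, with no gesture.
        return ("<beep>", None)
    return RESPONSES[tag_id]
```

In this sketch an unrecognised tag yields only a non-verbal sound, reflecting the alternative in which the response is a beep or a jingle rather than speech.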
Preferably the object is selected from: toothbrush, comb, hair brush, cloth, spoon, fork, cup, paintbrush, pencil, crayon, pair of glasses, microphone, food.
Preferably the food is selected from: fruit, vegetable, cake, biscuit, chocolate.
Preferably the verbal response comprises the robot identifying the object.
Preferably where the object is food the verbal response in addition or in the alternative comprises the robot commenting on whether the robot likes the food with phrases such as “that is tasty” or “I don't like this”.
Preferably the gestural response comprises the robot simulating the typical action that the object would be used for.
In one alternative the object is a toothbrush, the verbal response comprises the robot identifying the object as a toothbrush and the gestural response comprises the robot simulating the action for brushing teeth with the toothbrush.
In one alternative the object is a comb, the verbal response comprises the robot identifying the object as a comb and the gestural response comprises the robot simulating the action for brushing hair with the comb.
In one alternative the object is a hair brush, the verbal response comprises the robot identifying the object as a hair brush and the gestural response comprises the robot simulating the action for brushing hair with the hair brush.
In one alternative the object is a cloth, the verbal response comprises the robot identifying the object as a cloth and the gestural response comprises the robot simulating the action for washing the face of the robot with the cloth.
In one alternative the object is a spoon, the verbal response comprises the robot identifying the object as a spoon and the gestural response comprises the robot simulating the action for eating with the spoon.
In one alternative the object is a fork, the verbal response comprises the robot identifying the object as a fork and the gestural response comprises the robot simulating the action for eating with the fork.
In one alternative the object is a cup, the verbal response comprises the robot identifying the object as a cup and the gestural response comprises the robot simulating the action for drinking from the cup.
In one alternative the object is a paintbrush, the verbal response comprises the robot identifying the object as a paintbrush and the gestural response comprises the robot simulating the action for painting with the paintbrush.
In one alternative the object is a pencil, the verbal response comprises the robot identifying the object as a pencil and the gestural response comprises the robot simulating the action for writing with the pencil.
In one alternative the object is a crayon, the verbal response comprises the robot identifying the object as a crayon and the gestural response comprises the robot simulating the action for drawing with the crayon.
In one alternative the object is a pair of glasses, the verbal response comprises the robot identifying the object as a pair of glasses and the gestural response comprises the robot simulating the action for putting on the pair of glasses.
In one alternative the object is a microphone, the verbal response comprises the robot identifying the object as a microphone and the gestural response comprises the robot simulating the action for singing into the microphone.
In one alternative the object is food, the verbal response comprises the robot identifying the object as food and the gestural response comprises the robot simulating the action for eating the food. Preferably the verbal response comprises the robot identifying the object as the particular food that it is such as fruit, vegetable, cake, biscuit, chocolate. Preferably where the food is fruit or vegetable the verbal response comprises the robot identifying the object as the particular food that it is such as carrot, banana, apple, pear etc.
Preferably the robot is configured to give a verbal response when the FSR sensor is activated above a predefined level.
Preferably the robot is configured to give a response when the FSR sensor is activated above about 50% of the sensor's maximum value from baseline for less than 2 seconds. Preferably the response is a verbal response and in one alternative comprises the phrase “please don't hit me”, or a phrase having a similar impact on the user.
Preferably the robot is configured to give a response when the FSR sensor is activated above about 90% of the sensor's maximum value from baseline. Preferably the response is a verbal response and in one alternative comprises the phrase “that hurts”, or a phrase having a similar impact on the user.
Preferably the robot is configured to give a response when the FSR sensor is activated between about 80% and about 90% of the sensor's maximum value from baseline. Preferably the response is a verbal response and in one alternative comprises the phrase “please don't be so rough with me”, or a phrase having a similar impact on the user.
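Purely by way of illustration, the pressure bands described above may be sketched as follows. The percentage thresholds and phrases follow the description; the function name, the ordering of the checks, and the normalisation of the reading are hypothetical choices, not part of the claims:

```python
def fsr_response(level, duration_s):
    """Map a normalised FSR reading to a verbal response.

    level:      fraction (0.0-1.0) of the sensor's maximum value
                from baseline. (Normalisation is an assumption here.)
    duration_s: how long the pressure has been applied, in seconds.

    Bands follow the description; the check order (highest band
    first) is an illustrative choice.
    """
    if level > 0.9:
        return "that hurts"
    if 0.8 <= level <= 0.9:
        return "please don't be so rough with me"
    if level > 0.5 and duration_s < 2:
        return "please don't hit me"
    return None  # below the predefined level: no response
```

Checking the highest band first ensures a hard press is not misreported as a brief tap; the description does not fix this ordering, so it is shown here as one reasonable design choice.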
Other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.
In a typical situation, a child will be placed in close proximity to the robot 10 preferably with a supervising adult. The child will interact with the robot 10 through a number of scenarios which have been programmed into the robot 10. Such scenarios could either be automatically controlled or in the alternative controlled by the supervising adult through means of a control pad.
A typical scenario might include teaching the child to recognise the appropriate piece of cutlery for eating a particular foodstuff. In this scenario, the robot 10 might be programmed to say that it is hungry and wants to eat some soup, and asks the child to give the robot 10 something to eat the soup with. The child might then be provided with a toothbrush 34, a spoon 38, and a fork 36. The child then would have to choose the appropriate object, which in this case would be the spoon 38, and give the spoon 38 to the robot 10. The corresponding magnets located in housing 40 allow the object to be held by the hand 12 of the robot 10, the RFID tag also located in housing 40 communicates with the RFID sensor 22 to allow the robot 10 to determine which object has been given to the robot 10, and the FSR sensor 18 determines how much pressure is being exerted on the hand 12 of the robot 10. The robot 10 will then process this information and verbally give feedback to the child. This might include saying “thank you, the spoon would be perfect”, or that “the fork might not work as the soup will fall out of the gaps”, and “the toothbrush is for brushing teeth, not for eating”, and so on. If the object is given to the robot 10 with too much force, then the robot 10 might say “ow that hurt” or similar so that the child gets feedback that they have been too rough.
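Purely by way of illustration, the soup scenario combines the two sensing channels: object identity from the RFID sensor and applied pressure from the FSR sensor. The following sketch shows one way those inputs could be merged into feedback; the object names, phrases, threshold and function name are hypothetical examples and form no part of the claims:

```python
# Illustrative sketch of the soup-scenario feedback: combines the
# RFID-identified object with the FSR pressure reading. All names,
# phrases and thresholds are hypothetical examples.

FEEDBACK = {
    "spoon": "thank you, the spoon would be perfect",
    "fork": "the fork might not work as the soup will fall out of the gaps",
    "toothbrush": "the toothbrush is for brushing teeth, not for eating",
}
TOO_ROUGH = 0.9  # fraction of the FSR sensor's maximum value (assumed)

def scenario_feedback(object_name, pressure):
    """Return the list of phrases the robot speaks for one hand-over."""
    phrases = [FEEDBACK.get(object_name, "I don't know what this is")]
    if pressure > TOO_ROUGH:
        # The object was pressed into the hand too hard.
        phrases.append("ow that hurt")
    return phrases
```

In this sketch the identity-based feedback and the pressure-based feedback are independent, so the child receives both when they pick the wrong object and hand it over too roughly.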
The robot is configured to give a response when the FSR sensor is activated above a predefined level. The response may be a sound response, such as a beep, a jingle or other sound, or in the alternative the response may be a verbal response.
The robot is configured to give a response when the FSR sensor is activated above about 50% of the sensor's maximum value from baseline for less than 2 seconds. Preferably the response is a verbal response and in one alternative comprises the phrase “please don't hit me”, or a phrase having a similar impact on the user.
The robot is configured to give a response when the FSR sensor is activated above about 90% of the sensor's maximum value from baseline. Preferably the response is a verbal response and in one alternative comprises the phrase “that hurts”, or a phrase having a similar impact on the user.
The robot is configured to give a response when the FSR sensor is activated between about 80% and about 90% of the sensor's maximum value from baseline. Preferably the response is a verbal response and in one alternative comprises the phrase “please don't be so rough with me”, or a phrase having a similar impact on the user.
Below are examples of objects along with the verbal and gestural responses associated therewith.
In one alternative the object is a toothbrush, the verbal response comprises the robot identifying the object as a toothbrush and the gestural response comprises the robot simulating the action for brushing teeth with the toothbrush.
In one alternative the object is a comb, the verbal response comprises the robot identifying the object as a comb and the gestural response comprises the robot simulating the action for brushing hair with the comb.
In one alternative the object is a hair brush, the verbal response comprises the robot identifying the object as a hair brush and the gestural response comprises the robot simulating the action for brushing hair with the hair brush.
In one alternative the object is a cloth, the verbal response comprises the robot identifying the object as a cloth and the gestural response comprises the robot simulating the action for washing the face of the robot with the cloth.
In one alternative the object is a spoon, the verbal response comprises the robot identifying the object as a spoon and the gestural response comprises the robot simulating the action for eating with the spoon.
In one alternative the object is a fork, the verbal response comprises the robot identifying the object as a fork and the gestural response comprises the robot simulating the action for eating with the fork.
In one alternative the object is a cup, the verbal response comprises the robot identifying the object as a cup and the gestural response comprises the robot simulating the action for drinking from the cup.
In one alternative the object is a paintbrush, the verbal response comprises the robot identifying the object as a paintbrush and the gestural response comprises the robot simulating the action for painting with the paintbrush.
In one alternative the object is a pencil, the verbal response comprises the robot identifying the object as a pencil and the gestural response comprises the robot simulating the action for writing with the pencil.
In one alternative the object is a crayon, the verbal response comprises the robot identifying the object as a crayon and the gestural response comprises the robot simulating the action for drawing with the crayon.
In one alternative the object is a pair of glasses, the verbal response comprises the robot identifying the object as a pair of glasses and the gestural response comprises the robot simulating the action for putting on the pair of glasses.
In one alternative the object is a microphone, the verbal response comprises the robot identifying the object as a microphone and the gestural response comprises the robot simulating the action for singing into the microphone.
In one alternative the object is food, the verbal response comprises the robot identifying the object as food and the gestural response comprises the robot simulating the action for eating the food. Preferably the verbal response comprises the robot identifying the object as the particular food that it is such as fruit, vegetable, cake, biscuit, chocolate. Preferably where the food is fruit or vegetable the verbal response comprises the robot identifying the object as the particular food that it is such as carrot, banana, apple, pear etc. Preferably where the object is food the verbal response in addition or in the alternative comprises the robot commenting on whether the robot likes the food with phrases such as “that is tasty” or “I don't like this”.
Number | Date | Country | Kind
---|---|---|---
1614090.7 | Aug 2016 | GB | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/GB2017/052411 | 8/16/2017 | WO | 00