The invention generally relates to a human interaction system for interaction by a user with interrelated physical and virtual representations of a social robot.
Social robots are known which are designed to meet certain requirements in appearance and function for use in human care scenarios. For example, such robots may comprise moveable parts as well as the ability to produce visual and audio outputs that provide a relatable interactive experience for a user—that is, the social robot is designed with a view to encouraging interaction and a feeling of attachment between the user and the social robot. Citations [1]-[5] describe the characteristics desirable in a physical social robot in this regard, for example, under the heading “Our social robot characteristics” of reference [3]. The citations include discussion of features suitable for autism care and care of the elderly, in particular, in relation to care of those with dementia. Such social robots may be said to have their own personality, which typically extends to including a name—that is, the social robot embodies a personality. The design of the social robot provides for this through selection of physical design features and, typically, audible and/or visual design features.
However, existing systems rely solely on a physical robot. Although suitable designs have been found to improve engagement and, therefore, effectiveness in care, further developments are required.
Social robots can provide for a level of engagement and monitoring for people requiring care that can help to take some of the workload off carers. A known social robot is the present Applicant's Matilda robot (reference [7]), which provides human-like engagement and sensory enrichment to users. For example, Matilda has been designed to have a friendly appearance while providing user-friendly interactivity.
According to an aspect of the present invention, there is provided a human interaction system for interaction by a user comprising social systems comprising at least a social robot and one or more virtual robot systems, and a coordination system, wherein: the social robot is controlled by a robot processing system and is configured to provide interaction with a user, said interaction including output means and input means, the one or more virtual robot systems are configured to controllably present an avatar representation of the social robot, and further configured to receive inputs, the coordination system is configured to coordinate operation of the social robot and the one or more virtual robot systems such that, at any one time, either the social robot or one of the virtual robot systems is active, such that, in operation, a user perceives a robot personality associated with the social robot and the avatar as associated with the active social robot or virtual robot system.
The social robot may comprise one or more cameras and/or a microphone as input means, and/or the social robot may comprise a speaker and/or one or more lights as output means.
The coordination system may be in data communication with the social robot and the one or more virtual robot systems.
The social robot may comprise a first portion, such as a head, moveable with respect to a second portion, such as a body.
The coordination system may be configured to: determine a location of the user and to determine a social system corresponding to the location of the user; and communicate a message to the corresponding social system configuring it as active. The coordination system may be further configured to: communicate a message to the one or more other social systems configuring each as inactive. The coordination system may be configured to receive a present communication from each social system, and the present communication may be generated in response to an input means of the social system indicating the presence of the user at a physical location associated with the social system.
The one or more virtual robot systems may be configured to animate the avatar, and at least one animation may be equivalent to a movement of the social robot.
The one or more virtual robot systems may be configured to animate the avatar, and at least one animation may be one that is not equivalent to a movement of the social robot.
The robot processing system may be configured to control, at least in part, the operation of an active virtual robot system. The active virtual robot system may be configured to interpret commands received from the robot processing system and adapt said commands for display on a display of the virtual robot system.
At least one virtual robot system may be configured with at least two predefined avatar appearances, and one of said predefined avatar appearances may be selected in dependence on an application. One of the predefined avatar appearances may be a neutral appearance.
At least one virtual robot system may be configured with at least two predefined virtual environments over which the avatar is presented.
The system may further comprise one or more interaction devices in data communication with the social robot and/or virtual robot system(s), the interaction devices enabling the user to provide inputs and receive outputs. At least one virtual robot system may be configured to present a virtual object corresponding to an interaction device.
The social robot and/or at least one virtual robot system may be configured for data communication with one or more auxiliary devices.
The avatar appearance may be controllable in response to a user command to perform a verbal communication and/or a visual action.
According to another aspect of the present invention, there is provided a human interaction system for interaction by a user comprising social systems comprising at least a social robot and one or more virtual robot systems, wherein: the social robot is controlled by a robot processing system and is configured to provide interaction with a control user, said interaction including output means and input means, the one or more virtual robot systems are configured to controllably present an avatar representation of the social robot, and further configured to receive inputs, wherein the, or each, virtual robot system is associated with a user, such that, in operation, the, or each, user perceives a robot personality associated with the social robot and the avatar as associated with the active social robot or virtual robot system.
The social robot may comprise one or more cameras and/or a microphone as input means, and/or the social robot may comprise a speaker and/or one or more lights as output means.
The social robot may comprise a first portion, such as a head, moveable with respect to a second portion, such as a body.
The one or more virtual robot systems may be configured to animate the avatar, and at least one animation may be equivalent to a movement of the social robot.
The one or more virtual robot systems may be configured to animate the avatar, and at least one animation may be one that is not equivalent to a movement of the social robot.
The robot processing system may be configured to control, at least in part, the operation of an active virtual robot system.
At least one virtual robot system may be configured with at least two predefined avatar appearances, and one of said predefined avatar appearances may be selected in dependence on an application. One of the predefined avatar appearances may be a neutral appearance.
At least one virtual robot system may be configured with at least two predefined virtual environments over which the avatar is presented.
The system may further comprise one or more interaction devices in data communication with the social robot and/or virtual robot system(s), the interaction devices enabling the user to provide inputs and receive outputs. At least one virtual robot system may be configured to present a virtual object corresponding to an interaction device.
The social robot and/or at least one virtual robot system may be configured for data communication with one or more auxiliary devices.
The social robot may be configured to receive voice commands from the control user, at least one voice command may correspond to a request for information from a particular virtual robot system, and the social robot may be further configured to: communicate said command to said particular virtual robot system. The social robot may be further configured to: receive a response to said command from the particular virtual robot system. At least one virtual robot system may be further configured to: receive a directed command; undertake an associated action; and communicate a response to the social robot. At least one associated action may comprise obtaining a result from an associated auxiliary device in communication with the associated virtual robot system.
The avatar appearance may be controllable in response to a user command to perform a verbal communication and/or a visual action.
According to another aspect of the present invention, there is provided a human interaction method for allowing interaction by a user with a social system comprising at least a social robot and one or more virtual robot systems, comprising: controlling the social robot to provide interaction with a user, said interaction including output means and input means, controllably presenting an avatar representation of the social robot on one or more displays, such that when an avatar representation is displayed on a display it is active, and coordinating operation of the social robot and the one or more virtual robot systems such that, at any one time, either the social robot or one of the virtual robot systems is active, such that, in operation, a user perceives a robot personality associated with the social robot and the avatar as associated with the active social robot or virtual robot system.
The present disclosure can also be understood as including virtual avatars produced by virtual robot systems and their relationships to a physical robot, thereby providing a common relationship experience. For example, certain aspects disclosed may allow for multiple avatars to be presented at a time, where those avatars are located in different locations such as rooms—for example, virtual avatars may be presented in a hospital room while one or more physical robots are present in a common area, providing the experience that the single personality is present both in a resident's room and in the common area when the resident visits it.
As used herein, the word “comprise” or variations such as “comprises” or “comprising” is used in an inclusive sense, i.e. to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments of the invention.
In order that the invention may be more clearly understood, embodiments will now be described, by way of example, with reference to the accompanying drawing, in which:
Referring to
Referring to
Referring to
Referring to
For example, the robot processing system 20 can be configured to implement techniques for monitoring emotional state changes as described in the present Applicant's earlier PCT publication no. WO 2008/064431 A1. The control interface 124 controls the outputs of the social robot 11. These may vary depending on the particular implementation, but can include, for example, emitting visual and/or audio signals. The social robot 11 also receives input data from sensors of the social robot 11, such as from one or more cameras and/or one or more microphones. Reference is also made to citations [1], [2], [3], [4], and [5] for examples of existing operation of the robot processing system 20, each of which is incorporated herein by reference.
According to an embodiment, as shown in
Examples of local auxiliary devices 15 include portable computing devices such as smart phones and tablets 15a, wearable technology such as activity trackers 15b, and medical monitoring devices 15c. In the latter case, the robot processing system 20 can be configured to obtain medical information relating to a patient in the same room as the robot processing system 20 and/or a virtual robot processing system 21. Generally, such devices 15 may be provided with software to enable communication with the robot processing system 20 and/or a virtual robot processing system 21 or, alternatively, an existing output of such devices 15 can be coupled to the robot processing system 20 and/or a virtual robot processing system 21.
For example, one or more auxiliary devices 15 may be provided for measuring: heart rate; emotional profile; sleep quality; blood pressure; and brain activity (EEG).
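By way of non-limiting illustration, the coupling of an auxiliary device 15 to the robot processing system 20 or a virtual robot processing system 21 can be sketched as follows. The `AuxiliaryDevice` class, its `measure` interface, and the sample reading are hypothetical and do not form part of the specification:

```python
# Illustrative sketch only: an auxiliary device 15 exposes a measurement
# that a processing system can poll. The class name, interface, and
# sample values are hypothetical assumptions for illustration.

class AuxiliaryDevice:
    def __init__(self, kind, reading):
        self.kind = kind        # e.g. "heart_rate", "blood_pressure", "eeg"
        self.reading = reading  # most recent measured value

    def measure(self):
        """Return the latest measurement as a (kind, value) pair."""
        return (self.kind, self.reading)


# A wearable activity tracker 15b reporting a heart-rate reading.
tracker = AuxiliaryDevice("heart_rate", 72)
kind, value = tracker.measure()
```

In a deployment, the processing system would poll such an interface (or subscribe to its output) rather than construct the reading locally.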
Referring to
The social robot 11 may be moveable via a trolley or similar vehicle or via physical lifting. However, both techniques pose problems—for example, a trolley does not readily facilitate movement in a vertical direction (up or down stairs, for example) and it has been found that physical lifting can lead to injury or misplacement of the social robot 11. The latter problem can be significant—for example, if a social robot 11 is placed too close to an edge of an elevated position (e.g. table), it may fall off, risking both physical damage and potential emotional distress for the user.
Unless a distinction is required, for convenience, herein reference to a social robot 11 should be understood to include reference to its robot processing system 20. Similarly, reference to a virtual robot system 12 should be understood to include its virtual robot processing system 21.
At step S100, the coordination system 13 determines that the user is in the room associated with a particular virtual robot system 12a. In an embodiment, the virtual robot processing system 21 is configured to determine the presence of the user based on inputs received from its sensors and to communicate a message to the coordination system 13 indicating said presence. In another embodiment, the virtual robot processing system 21 is configured to communicate said sensor data to the coordination system 13, which determines the presence of the user. According to an embodiment, the virtual robot system 12a identifies the presence of the user via its equipped camera 31 using human recognition algorithms known in the art. Alternatively, or in addition, the user may be provided with a radio frequency identifier that is configured to be readable by a suitably configured scanner interfaced with the virtual robot system 12a.
At step S101, the coordination system 13 communicates messages to the social robot 11 and any other virtual robot systems 12b-12d informing each device that it is to be in an inactive state. The meaning of “inactive state” may vary depending on the particular embodiment and whether the device is a social robot 11 or a virtual robot system 12. However, generally, when in an inactive state, the particular device is configured to not undertake functions corresponding to the robot personality. For example, a display 30 of an inactive virtual robot system 12 can be configured to not display a representation of the avatar 22. In another example, an inactive physical social robot 11 can be configured to limit or entirely halt output functionality such as the illumination of lights, emission of sounds, or movement of parts such as the head 40 with respect to the body 41.
At step S102, the coordination system 13 communicates to the virtual robot system 12a associated with the physical location of the user a message indicating that it is to be in an active state. The meaning of “active state” may vary depending on the particular embodiment. In a general sense, when in an active state, the virtual robot system 12 is configured to present a visual representation of the avatar 22. Similarly, the social robot 11 can be in an active state, in which case it is undertaking functions associated with its robot personality.
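By way of non-limiting illustration, steps S100 to S102 can be sketched as follows. The `CoordinationSystem` class, the location-to-system mapping, and the message format are hypothetical assumptions and do not form part of the specification:

```python
# Illustrative sketch of steps S100-S102: the coordination system 13
# configures the social system at the user's location as active and all
# other social systems as inactive. Names are hypothetical.

class CoordinationSystem:
    def __init__(self, social_systems):
        # Maps a physical location to a social system identifier,
        # covering the social robot 11 and virtual robot systems 12a-12d.
        self.social_systems = social_systems
        self.messages = []  # record of messages communicated

    def on_presence(self, location):
        """Handle a present communication indicating the user's location."""
        # S100: determine the social system corresponding to the location.
        active_id = self.social_systems[location]
        # S101: configure every other social system as inactive.
        for system_id in self.social_systems.values():
            if system_id != active_id:
                self.messages.append({"to": system_id, "state": "inactive"})
        # S102: configure the corresponding social system as active.
        self.messages.append({"to": active_id, "state": "active"})
        return active_id


coordinator = CoordinationSystem({
    "common_area": "social_robot_11",
    "room_a": "virtual_robot_12a",
    "room_b": "virtual_robot_12b",
})
active = coordinator.on_presence("room_a")
```

The sketch reflects the constraint that, at any one time, exactly one social system presents the robot personality.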
Referring to
The embodiments described in reference to
According to an embodiment, the robot processing system 20 is configured for at least partial control of an active virtual robot processing system 21. For example, in such an embodiment, the robot processing system 20 may operate as a server and the virtual robot processing system 21 operates as a client. Accordingly, the active virtual robot processing system 21 is configured to communicate received inputs, for example, from its microphone, camera(s), and/or other input means to the robot processing system 20. The communication can be facilitated by the coordination system 13.
The robot processing system 20 is configured to cause the virtual robot processing system 21 to undertake corresponding functions to those that would otherwise be performed by the social robot 11. For example, the robot processing system 20 can be configured to communicate commands to the virtual robot processing system 21 instructing the virtual robot processing system 21 to implement a certain presentation function.
According to an embodiment, the virtual robot processing system 21 processes received commands to determine the associated presentation function and to, in response, create a corresponding presentation. For example, the command may correspond to the avatar 22 looking in a particular direction (e.g. left or right). In this example, the virtual robot processing system 21 is configured with predefined programming such as to create the appearance of a virtual representation of the avatar 22 looking in the corresponding direction. Therefore, according to this embodiment, the robot processing system 20 is not configured to directly control the outputs of the virtual robot processing system 21—instead, the control is as to what function is to be implemented by the virtual robot processing system 21. The actual task of implementing the function is left to the virtual robot processing system 21. This embodiment may provide an advantage in that relatively low bandwidth communications are required between the virtual robot processing system 21 and the robot processing system 20.
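By way of non-limiting illustration, this command-interpretation arrangement, in which the robot processing system 20 names a function and the virtual robot processing system 21 implements it from predefined programming, can be sketched as follows. The command names and animation routines are hypothetical assumptions:

```python
# Illustrative sketch only: the virtual robot processing system 21 maps a
# short command received from the robot processing system 20 to predefined
# presentation programming, so only low-bandwidth commands cross the link.
# Command names and routines are hypothetical.

PRESENTATION_FUNCTIONS = {
    "look_left":  lambda: "avatar head rotates left",
    "look_right": lambda: "avatar head rotates right",
    "greet":      lambda: "avatar waves and speaks a greeting",
}

def handle_command(command):
    """Interpret a received command and create the corresponding presentation."""
    routine = PRESENTATION_FUNCTIONS.get(command)
    if routine is None:
        return None  # unrecognised commands are ignored in this sketch
    return routine()


result = handle_command("look_left")
```

The design choice illustrated is that the command identifies *what* is to be presented while the local predefined programming determines *how*, consistent with the low-bandwidth advantage noted above.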
According to an embodiment, the virtual robot processing system 21 is configured to communicate to the robot processing system 20 that it has completed implementing a received function. The robot processing system 20 can therefore be configured to maintain in its memory 122 a current state of the virtual robot processing system 21 relevant to operation of the avatar 22. For example, the current state can be determined based upon the received communications from the virtual robot processing system 21.
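By way of non-limiting illustration, the maintenance of a current state of the avatar 22 from completion communications can be sketched as follows; the class and field names are hypothetical assumptions:

```python
# Illustrative sketch only: the robot processing system 20 maintains in its
# memory 122 a current state of the avatar 22, updated from completion
# messages received from the virtual robot processing system 21.

class RobotProcessingSystem:
    def __init__(self):
        self.avatar_state = {"last_completed": None, "pending": []}

    def send_command(self, command):
        """Issue a command; completion is not yet confirmed."""
        self.avatar_state["pending"].append(command)

    def on_completed(self, command):
        """Record a completion message from the virtual robot processing system."""
        if command in self.avatar_state["pending"]:
            self.avatar_state["pending"].remove(command)
        self.avatar_state["last_completed"] = command


rps = RobotProcessingSystem()
rps.send_command("look_left")
rps.on_completed("look_left")
```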
According to an embodiment, with reference to
In the example shown in
The virtual robot processing system 21 is therefore configurable to display a selected virtual representation 33 based upon a current function—an instruction may be communicated, for example, from the robot processing system 20 to the virtual robot processing system 21 to indicate which virtual representation 33 is to be displayed. In an embodiment, where the virtual robot processing system 21 is instructed to change between virtual representations 33, a change animation may be employed.
According to an embodiment, the avatar 22 may be designed such as to express a larger number of movements than the social robot 11. For example, the social robot 11 may be limited to rotational movements of its head 40 with respect to its body 41. However, the avatar may be preconfigured for additional movements—for example, translational movement of the head 40 with respect to the body 41. The avatar may be configured to move with respect to the display 30—for example, from left to right and/or up and down. In general, many different animations are possible. It may be preferred that the avatar 22 retain its visual identity throughout said motions—that is, the user should perceive the avatar 22 to be the same virtual object at all times. Advantageously, the avatar 22, although representing the social robot 11, effectively has available more degrees of freedom in which to move.
Referring to
In the example shown in
Additional virtual environments 34 may be provided, such as office environment 34b. The additional virtual environments 34 correspond to certain activities that may be implemented by the system 10. For example, these may represent such ideas as a school, a kindergarten, an office, a home, a reception, etc. These allow the user to believe that the avatar 22 has moved to one of these locations, which may be triggered when the robot processing system 20 determines to undertake a particular activity with the user (as described in relation to a social robot 11 in the prior art). For example, it may be that a kindergarten application is begun in which the system 10 presents a kindergarten activity to the user. In this case, the virtual environment 34 displayed on the active display 30 can be changed to reflect a kindergarten virtual environment 34.
The virtual robot processing system 21 is therefore configurable to display a selected virtual environment 34 based upon a current function—an instruction may be communicated, for example, from the robot processing system 20 to the virtual robot processing system 21 to indicate which virtual environment 34 is to be displayed. In an embodiment, where the virtual robot processing system 21 is instructed to change between virtual environments 34, a change animation may be employed. For example, the virtual environment 34 of a kindergarten may include a playground, toy(s), table(s), chair(s), etc. The avatar 22 will then be presented within this virtual environment 34, potentially a virtual representation 33 selected also in accordance with the application (e.g. the avatar 22 may be dressed for kindergarten, for example, having a school backpack).
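By way of non-limiting illustration, the selection of a virtual environment 34 and a matching virtual representation 33 when an application begins can be sketched as follows. The environment and representation tables are hypothetical assumptions:

```python
# Illustrative sketch only: when an application (e.g. a kindergarten
# activity) is begun, the virtual robot processing system 21 selects a
# virtual environment 34 and, potentially, a matching virtual
# representation 33 of the avatar 22. Tables are hypothetical.

ENVIRONMENTS = {
    "default": "home scene",
    "kindergarten": "playground with toys, tables, and chairs",
    "office": "office scene",
}

REPRESENTATIONS = {
    "default": "neutral avatar",
    "kindergarten": "avatar with school backpack",
}

def start_application(application):
    """Return the environment and avatar representation for an application."""
    environment = ENVIRONMENTS.get(application, ENVIRONMENTS["default"])
    representation = REPRESENTATIONS.get(application, REPRESENTATIONS["default"])
    return environment, representation


env, rep = start_application("kindergarten")
```

A change animation, as described above, could be triggered whenever the returned selection differs from the currently displayed one.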
Examples of embodiments represented by
Referring to
According to the embodiment of
The virtual environment 34, virtual objects, and/or appearance of the avatar 22 may be determined dynamically depending on the application context. For example, when the user asks to be read the “Old MacDonald Had a Farm” story, the content of the story may be analysed; a virtual farm scene may then be rendered as the virtual environment 34, together with 3D animals (i.e. virtual objects) whose animations and sound effects are created and synchronised with the storytelling progress.
According to an embodiment, with reference to
Social robot 11 is associated with a control user and can be configured in a control mode—this is different to the active mode described above although the control mode may include the functionality of some or all of a social robot 11 in active mode. The robot processing system 20 is configured, in the control mode, to direct commands to particular virtual robot processing systems 21 in response to an instruction issued by the control user. The commands are configured to cause a receiving virtual robot processing system 21 to undertake an action, which may result in a response being communicated to the robot processing system 20. In this way, the control user is enabled to cause actions to occur at particular virtual robot systems 12 which may be remote from the control user.
The social robot 11 can be preconfigured with one or more voice commands. The social robot 11 can further be configured to interpret sensed voiced commands to identify the associated voice command. Furthermore, the voice command can be associated with a virtual robot system 12 identifier also spoken by the control user.
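By way of non-limiting illustration, the matching of a sensed voice command and its associated virtual robot system 12 identifier can be sketched as follows. The phrase format, command set, and identifiers are hypothetical assumptions:

```python
# Illustrative sketch only: in control mode, the social robot 11 matches a
# sensed phrase against preconfigured voice commands and a spoken virtual
# robot system 12 identifier, then directs the command to that system.
# The commands, identifiers, and phrase format are hypothetical.

KNOWN_COMMANDS = {"check heart rate", "start activity"}
SYSTEM_IDS = {"room a": "virtual_robot_12a", "room b": "virtual_robot_12b"}

def route_voice_command(phrase):
    """Match a sensed phrase to a known command and target system."""
    for command in KNOWN_COMMANDS:
        if phrase.startswith(command):
            remainder = phrase[len(command):].strip()
            target = SYSTEM_IDS.get(remainder)
            if target is not None:
                return {"to": target, "command": command}
    return None  # unrecognised phrase or unknown identifier


routed = route_voice_command("check heart rate room a")
```

In the directed-command flow described above, the targeted virtual robot processing system 21 would undertake the associated action (for example, obtaining a result from an associated auxiliary device 15) and communicate a response back to the social robot 11.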
An advantage of providing for a social robot 11 as well as a plurality of virtual robot systems 12 may be that the social robot 11 provides a physical representation of the avatars 22 of the virtual robot systems 12. A patient (in this example) may be aware of the physical social robot 11 present at, in this case, the nurse station 90. The patient may therefore associate the avatar 22 with the nurse station 90 and the nurses occupying the nurse station 90. Through this perceived association, the patient may advantageously be more inclined to treat the avatar 22 as a “real” entity rather than simply a virtual animation.
Another implementation example provides for one or more social robots 11 and a plurality of virtual robot systems 12 within a residential aged care facility. The social robot(s) 11 can be placed within common areas, such as a lounge or dining area, or at a carer's desk. The virtual robot systems 12 can each be placed in the rooms of different residents. Similar to the above example, the residents can learn to associate the virtual avatars 22 with the physical social robots 11. A social robot 11 can be configured to undertake group-based activities in the common area (e.g. bingo games) while the virtual robot systems 12 provide more personalised functions for the specific associated residents, for example, monitoring, therapeutics, and social connectivity services.
More generally, an advantage of one or more embodiments described herein is that a user is encouraged and more likely to form an emotional bond with a physical social robot 11. However, this bond is then transferred to the virtual avatars, which are configured to embody the same “personality” as the social robot 11, thereby appearing to correspond to the same entity. An advantage may be that the present embodiments address the known problem that it is more difficult for users to form bonds with virtual avatars than with physical objects such as social robots 11, for example, as discussed in reference [6].
Further modifications can be made without departing from the spirit and scope of the specification.
Number | Date | Country | Kind |
---|---|---|---
2020903504 | Sep 2020 | AU | national |
202110133608.1 | Feb 2021 | CN | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---
PCT/AU2021/050698 | 6/30/2021 | WO |