The present invention relates to machines interacting with users, such as rehabilitation robots for patients or exoskeleton robots, as well as to components thereof. More particularly, the present invention relates to a physical interface device for providing contact between a user and rigid components of a machine or robot, as well as machines comprising such a physical interface device.
Traditionally, mobilization therapy is performed manually by therapists. This is physically demanding: a leg, for example, weighs about 15% of total body mass and needs to be mobilized for 30 minutes straight. Additionally, therapists have to take care of many patients at a time, up to 16 per day. That means 30 minutes per person without any break, which makes it difficult to provide intensive care to all patients. As a consequence, optimal care is currently not being provided.
Rehabilitation robots have been developed as a way to improve treatment outcomes and reduce the workload of therapists while also reducing healthcare costs. These robots can provide long-term, multi-modal rehabilitation training. By way of illustration, the state of the art is discussed with focus on upper-limb rehabilitation robots. There are two main types of upper limb rehabilitation robots: (i) end-effector robots, such as Amadeo from Tyromotion or Luna EMG from EGZOTech, which are systems with only one attachment to the human body, where no alignment between patient and robot joints is required, and (ii) exoskeleton robots, such as Harmony SHR™ from Harmonic Bionics and Armeo Power from Hocoma, which are systems with multiple connections to the human body, where the joints between the connecting links of the robot align with the joints of the human.
Exoskeletons have the advantage that the multiple joints can be individually controlled, allowing training of specific muscles and force/torque measurements on each body connection, including the estimation of specific human joint torques. However, they require longer setup times, since the device needs to be configured to the specific patient: the joints of the exoskeleton have to be aligned with the joints of the patient. Furthermore, exoskeletons have a more complex structure and a customized design for a specific limb, are unsafe when badly calibrated to the patient, and are difficult to carry and transport.
End-effectors, on the other hand, have a simple structure, are easier to carry or transport, are on average cheaper (mostly because of the simple structure) and provide for a faster setup, resulting in less time wasted for the physiotherapist. In some cases, patients can even strap themselves in. Typical disadvantages are that they are mostly only capable of simple motions, do not provide solutions for functional motion training, and operate in such a way that force generated at a distal interface changes the positions of other joints simultaneously, making isolated movement of single joints difficult.
Both end-effector and exoskeleton devices rely on at least one attachment (i.e. a physical interface) to the human body. These attachments have problems regarding safety and comfort, as reported in the literature, hindering widespread practical use of rehabilitation robots in healthcare. Soft-tissue related adverse events are the most recurring events, preventing the medical field from providing long-lasting and safe therapy. The challenge relates to the pressure distributed on the skin tissues: when attaching a robot to a human body, excessive pressure can lead to skin injuries. A related problem is power transmission between machine and human body. Researchers have observed critical power losses (up to 50%) at the physical interface level, leading to reduced benefits of wearing the devices.
Moreover, inadequate controllability has limited adoption of rehabilitation devices, since it directly limits the range of operations that can be assisted. In essence, this boils down to the ability of the robot to capture the state and intention of the user. A popular biophysical sensor for intention recognition is bipolar electromyography (EMG). This sensor can enable triggered-assistance rehabilitation, allowing the patient to initiate a movement without any assistance: the robot will only start to assist the patient after some performance variables have reached a threshold. This encourages the patient to self-initiate a movement, which is believed to be beneficial in motor learning.
Drawbacks of current robotic rehabilitation devices include:
Set-up time: long set-up times make the machines impractical to use. Placing a patient inside a rehabilitation robot usually requires careful alignment of anatomical joints with the robotic ones. This requires expertise and training to prevent dangerous situations. When a therapy consists of 30 min sessions, clinicians cannot afford to spend one third of that time setting up.
Safety: skin-related injuries are prevalent and are hindering prolonged use.
User intention: Current technologies do not allow intention-based rehabilitation.
Existing solutions often only focus on two-dimensional planar motions and are not able to perform more complex, functional motions (e.g. drinking, reaching tasks).
Complexity of operation:
There is, thus, still a need in the art for devices and methods that address at least some of the above problems.
It is an object of embodiments of the present invention to provide a device offering safety and/or comfort when using a machine, e.g., a rehabilitation robot or exoskeleton.
It is an advantage of embodiments of the present invention that a physical interface device is provided, providing a sensorized interface which can be attached to the human body, can detect intentions to move, and can quantify how much effort is exerted by the wearer during use of the machine, such as during performing tasks using the exoskeleton or during physical therapy.
The interface can be connected to a robotic arm, to further assist the person in achieving functional tasks, such as for example but not limited to brushing teeth, reaching for a glass of water, etc. This is especially useful for neurological patients who require repetitive and intense training to relearn these daily tasks.
By supplementing therapists with this data, the recovery process of the patient can be better monitored and evaluated, allowing a better personalization of therapy, e.g. rehabilitation therapy. By supporting therapists with the robotic device one can reduce their physical workload and allow them to treat more patients at the same time.
It is an advantage of at least some embodiments of the present invention that accurate intention of the patient is captured when using machines, e.g., rehabilitation robots or exoskeletons, so that the machine behaviour can adapt accordingly. It is for example an advantage of embodiments of the present invention that the intention of the patient is captured accurately, so that the patient does not merely trigger a motion assisted by the robot and "ride out" the rest of the movement, but keeps providing effort during the rest of the movement, since slacking on the side of the patient may result in decreased recovery.
It is an advantage of embodiments of the present invention that easy and fast setting up of a machine, e.g., rehabilitation robot or exoskeleton, is provided, since the physical interface device avoids the need for placement of separate sensors.
It is an advantage of embodiments of the present invention that due to the integration of the sensor or sensors inside the physical interface device, the positioning of the sensors with respect to the user is more accurate.
In a first aspect, the present invention relates to a physical interface device for providing a physical interface between a user and rigid components of a machine, e.g., a rehabilitation system or exoskeleton, e.g. the physical interface device also being referred to as a cuff, the physical interface device comprising:
It is an advantage of embodiments of the present invention that a rehabilitation robot and/or corresponding robotic rehabilitation platform may be provided that can replicate the sense of touch of a physiotherapist.
It is an advantage of embodiments of the present invention that use of mechanical power in conjunction with close physical interaction with patients does not result in safety and comfort issues by using a particular physical interface device.
It is an advantage of at least embodiments of the present invention that the physical interface device can track the pressure exerted on the patient's skin, providing additional safety and comfort. It is an advantage of embodiments of the present invention that patients can receive more frequent therapy with higher intensity, particularly in the early mobilization phase.
The device may comprise an output or may be configured to provide data to an output for indicating information regarding the application of the physical interface device to the user or regarding movement or movement intentions by the user, based on data of the one or more sensors integrated in the physical interface device, optionally in combination with data regarding external forces operating on the physical interface device.
It is an advantage of embodiments of the present invention that a physical interface device with integrated pressure sensors allows identifying how the physical interface device is applied to the user, e.g. when straps are tightened, and/or allows data-driven monitoring of the strapping: when the straps are tightened, a pressure increase can be registered.
It is an advantage of embodiments of the present invention that skin-related injuries can be avoided by taking into account the pressure that occurs between the physical interface device and the user.
It is an advantage of embodiments of the present invention that an indication can be given that the strapping pressure is sufficient, or safe, resulting in a decreased need for training to use the device, so that the device can be used more easily and in a safer way.
It is an advantage of embodiments of the present invention that pressure sensors may allow for intention-detection, i.e. detection of the intention to perform a movement by the user.
It is an advantage of embodiments of the present invention that the system allows for detecting where the arm is placed inside the physical interface device, e.g. by determining the center of pressure.
One or more sensors may be configured so as to provide the ability to measure biological signals directly on the human skin.
The one or more sensors may comprise one or more force and/or pressure sensors.
In one set of embodiments, the one or more sensors comprise at least one force sensor, for example multiple force sensors. In some embodiments, forces may be sensed at multiple locations inside the physical interface device, e.g. cuff. The one or more force sensors may be adapted for measuring force in three dimensions, e.g. in three directions. The forces may be measured in an x, y and z direction. In some embodiments, the x, y and z direction may form a Cartesian reference system.
The one or more force sensors may be part of or form a small tactile sensor. In some embodiments, different small tactile sensors may be implemented at different positions. In some embodiments, the small tactile sensor may contain a plurality, e.g. 12 or more, such as 24 or more, force sensor voxels, each voxel being a force sensor. Each of these force sensors may in some embodiments allow for measuring force in three directions, e.g. three dimensions. In this way, shear forces may also be taken into account.
It is an advantage of embodiments of the present invention that novel features can be calculated, such as for example that an accurate centre of pressure (COP) can be determined. The latter may in some embodiments be based on multiple sensor cells.
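By way of illustration only, a minimal sketch of how the centre of pressure could be computed from multiple force sensor cells is given below; the cell layout, array shapes and readings are hypothetical.

```python
import numpy as np

def centre_of_pressure(cell_positions, normal_forces):
    """Estimate the centre of pressure (COP) from multiple force sensor cells.

    cell_positions : (N, 2) array of cell coordinates in the cuff surface frame [m]
    normal_forces  : (N,) array of normal force readings per cell [N]
    Returns the force-weighted mean position (the COP), or None if there is no load.
    """
    total = normal_forces.sum()
    if total <= 0:
        return None  # no contact detected
    return (cell_positions * normal_forces[:, None]).sum(axis=0) / total

# Hypothetical example: 24 voxels on a 4 x 6 grid with 10 mm pitch
positions = np.array([(i * 0.01, j * 0.01) for i in range(4) for j in range(6)])
forces = np.random.rand(24)  # placeholder readings
print(centre_of_pressure(positions, forces))
```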
According to some embodiments of the present invention, multiple force sensors may be positioned around the inside circumference of the physical interface device. The latter may for example allow for performing mechanomyography. To detect muscle deformation, forces typically may be measured at multiple locations. The latter may allow distinguishing between an increased force caused by muscle deformation and an increased force caused by, e.g., an increase of the external machine or robot force.
It is an advantage of embodiments of the present invention that force peaks can be predicted, e.g. accurately predicted. When injuries occur in physical human machine interfaces or physical human robot interfaces, the injury is mostly located at these force peaks. Correctly detecting force peaks means one can make a safer physical interface.
It is an advantage of embodiments of the present invention that by implementing a plurality of sensors, one can more easily identify where the physical interface device is located with respect to the limb, thus allowing to better distinguish between the change of a muscle force or e.g. external forces.
The goal of the physical interface is to attach a machine to the human body. The tighter this connection, the better the energy transmission; however, a tighter strapped cuff will also be less comfortable, so the optimal level of strapping is a trade-off between comfort and energy transmission. The tightness will have an influence on the internal forces that are measured: the tighter the attachment, the higher the internal forces, without any effect on the external forces. This influence may be calibrated for.
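By way of illustration only, a minimal sketch of how the strapping-induced internal forces could be calibrated out is given below, assuming a static baseline is recorded while the limb is relaxed and no external machine force is applied; all names and values are illustrative.

```python
import numpy as np

def calibrate_baseline(resting_samples):
    """Average the internal force readings while the limb is relaxed and the
    machine applies no external force; this captures the strapping tightness."""
    return np.mean(resting_samples, axis=0)

def interaction_force(current_reading, baseline):
    """Subtract the strapping baseline so that only force changes caused by
    muscle deformation or external machine forces remain."""
    return current_reading - baseline

# Hypothetical usage: 100 resting samples from 12 sensors, 3 axes each
rest = np.random.rand(100, 12, 3) * 0.1
baseline = calibrate_baseline(rest)
sample = np.random.rand(12, 3)
print(interaction_force(sample, baseline).shape)  # (12, 3)
```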
Depending on the limb to which the physical interface device is to be attached to, the position of the sensors for measuring forces may be selected as function of the position where typically peak forces may occur, such as for example where the bone is closer to the skin.
The one or more force and/or pressure sensors may be configured for measuring a pressure distribution on skin tissues.
The one or more sensors may be configured for 3 dimensional tactile force sensing.
The one or more sensors may comprise one or more sensors for capturing the intention of the user for performing a certain movement.
The one or more sensors may comprise one or more electromyography (EMG) sensors.
The one or more sensors may combine at least one electromyography sensor, at least one pressure sensor and at least one inertial measurement sensor.
The physical interface device may comprise releasable connections for releasably connecting the physical interface device to components, e.g. a robotic arm, of the machine, e.g., rehabilitation robot or exoskeleton.
It is an advantage of embodiments of the present invention that the machine, e.g., rehabilitation robot or exoskeleton, may be programmed by demonstration. Demonstration is an intuitive robot programming technique. Using the built-in IMU in the sensorized interface, one can program the exercise by off-robot demonstration. The programming may for example be performed when the patient is wearing the cuff, but the cuff is not attached to the machine or robot. One then gets a trajectory from the IMU and one can recreate the desired exercise once the machine or robot is attached to the cuff. In this way, therapists only have to move the patient's limb around (which is easier for them).
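By way of illustration only, a minimal sketch of recording such an off-robot demonstration from the cuff IMU is given below; the IMU driver and its read_pose() method are hypothetical and not part of the original disclosure.

```python
import time

class DemonstrationRecorder:
    """Records an exercise trajectory from the cuff IMU while the therapist
    moves the patient's limb; the trajectory can later be replayed once the
    cuff is attached to the machine or robot."""

    def __init__(self, imu, rate_hz=100):
        self.imu = imu            # hypothetical IMU driver exposing read_pose()
        self.dt = 1.0 / rate_hz
        self.trajectory = []

    def record(self, duration_s):
        """Sample the IMU pose at a fixed rate for the given duration."""
        self.trajectory.clear()
        t_end = time.monotonic() + duration_s
        while time.monotonic() < t_end:
            # read_pose() is assumed to return (position_xyz, orientation_quaternion)
            self.trajectory.append(self.imu.read_pose())
            time.sleep(self.dt)
        return self.trajectory
```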
In one aspect, the present invention also relates to a machine, such as for example a rehabilitation robot for rehabilitation of a user or an exoskeleton, the machine comprising a physical interface device as described above. The machine, e.g. rehabilitation robot or exoskeleton, or the physical interface device may comprise one or more sensors for measuring the external force on the limb, e.g. through external forces on the physical interface device induced by the machine. In some examples the external forces may be induced by a robotic arm in the machine.
The machine, e.g. rehabilitation robot or exoskeleton, may combine internal forces measured with the one or more sensors integrated in/on the physical interface device with external forces induced on the physical interface device, and use these combined force measurements for deriving information regarding the application of the physical interface device to the user or regarding movement or movement intentions by the user and/or for deriving peak forces applied to the limb.
The machine may be an end-effector robot.
The machine may be a rehabilitation robot for upper-body extremities and/or lower-body extremities. The rehabilitation robot may be such that it can be adapted for rehabilitation of upper-body extremities, such as arms, and for rehabilitation of lower-body extremities, such as legs. Depending on the use, e.g. different physical interface devices may be used.
The machine may use a robotic arm as actuation device.
The robotic arm may have 6 spatial degrees of freedom, and its pose can be altered over time.
According to embodiments of the present invention, the rehabilitation robot thus may comprise a robotic arm having 7 degrees of freedom: 6 spatial degrees of freedom plus time. It is an advantage of embodiments of the present invention that any kind of motion possible by the human arm can be achieved, resulting in the possibility of making functional exercises. It is an advantage of embodiments of the present invention that also complex motions can be induced, giving the possibility to train specific muscles when part of the body can be grounded, or such that resistive or assistive forces are provided depending on the requirements of the therapy.
The physical interface device or machine may comprise or may be configured to communicate with a user interface, for giving feedback to the therapist and/or to the patient.
The physical interface device or machine may be adapted for performing data analytics, including providing data or processing data, the data comprising one or more of pressure readings, EMG readings, robot peak force, robot current position and programmed reference trajectory, amount of transferred work between patient and robot, level of active participation, cuff strapping pressure and performance of the patient (for example whether the patient/user reaches goals regarding the range of motion and/or the force).
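By way of illustration only, one of the listed metrics, the amount of transferred work between patient and robot, could be approximated by integrating the interaction power (force times end-effector velocity) over time; a minimal sketch under this assumption is given below, with the sign convention chosen for illustration.

```python
import numpy as np

def transferred_work(forces, velocities, dt):
    """Approximate the mechanical work exchanged at the physical interface.

    forces     : (T, 3) interaction force samples at the end-effector [N]
    velocities : (T, 3) end-effector velocity samples [m/s]
    dt         : sample period [s]
    The sign convention (robot-on-patient positive) is assumed for illustration.
    """
    power = np.einsum('ij,ij->i', forces, velocities)  # instantaneous F . v
    return float(np.sum(power) * dt)                   # rectangular integration

# Hypothetical 10 s recording at 100 Hz
forces = np.random.randn(1000, 3)
velocities = np.random.randn(1000, 3) * 0.1
print(transferred_work(forces, velocities, dt=0.01))
```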
Rehabilitation is a repetitive process that requires coordinated movements for effective treatment. However, patient compliance and motivation can be challenging due to the monotony, intensity, and cost of most therapy routines.
It is an advantage of at least some embodiments of the present invention that an intention-based rehabilitation approach that uses bio-signals, such as surface electromyography (sEMG), can be provided. It is an advantage of at least some embodiments of the present invention that performance can be more easily quantified than with state-of-the-art techniques, that engagement can be stimulated and that cost-effective treatment can be provided. It is an advantage of embodiments of the present invention that patient compliance and functional outcomes can be good, e.g. improved with respect to at least some of the state-of-the-art techniques.
In one aspect, the present invention relates to an exoskeleton comprising a physical interface device according to embodiments of the first aspect of the present invention. Indeed, the present invention is not strictly limited to applications in rehabilitation but may be applied in any type of exoskeleton for supporting a user, e.g., during a physical activity. It is an advantage of embodiments of the present invention that good use of and good support by the exoskeleton may be achieved.
Particular and preferred aspects of the invention are set out in the accompanying independent and dependent claims. Features from the dependent claims may be combined with features of the independent claims and with features of other dependent claims as appropriate and not merely as explicitly set out in the claims.
Although there has been constant improvement, change and evolution of devices in this field, the present concepts are believed to represent substantial new and novel improvements, including departures from prior practices, resulting in the provision of more efficient, stable and reliable devices of this nature.
The above and other characteristics, features and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of the invention. This description is given for the sake of example only, without limiting the scope of the invention. The reference figures quoted below refer to the attached drawings.
The present invention will be described with respect to particular embodiments and with reference to certain drawings and a particular exemplary illustrative embodiment, but the invention is not limited thereto but only by the claims. The drawings described are only schematic and are non-limiting. In the drawings, the size of some of the elements may be exaggerated and not drawn on scale for illustrative purposes. The dimensions and the relative dimensions do not correspond to actual reductions to practice of the invention.
Furthermore, the terms first, second, third and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a sequence, either temporally, spatially, in ranking or in any other manner. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other sequences than described or illustrated herein.
Moreover, the terms top, bottom, over, under and the like in the description and the claims are used for descriptive purposes and not necessarily for describing relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other orientations than described or illustrated herein.
It is to be noticed that the term “comprising”, used in the claims, should not be interpreted as being restricted to the means listed thereafter; it does not exclude other elements or steps. It is thus to be interpreted as specifying the presence of the stated features, integers, steps or components as referred to, but does not preclude the presence or addition of one or more other features, integers, steps or components, or groups thereof. The term “comprising” therefore covers the situation where only the stated features are present and the situation where these features and one or more other features are present. The word “comprising” according to the invention therefore also includes as one embodiment that no further components are present. Thus, the scope of the expression “a device comprising means A and B” should not be interpreted as being limited to devices consisting only of components A and B. It means that with respect to the present invention, the only relevant components of the device are A and B.
Similarly, it is to be noticed that the term “coupled”, also used in the claims, should not be interpreted as being restricted to direct connections only. The terms “coupled” and “connected”, along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Thus, the scope of the expression “a device A coupled to a device B” should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means. “Coupled” may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment, but may. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Furthermore, some of the embodiments are described herein as a method or combination of elements of a method that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or element of a method forms a means for carrying out the method or element of a method. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
The invention will now be described by a detailed description of several embodiments of the invention. It is clear that other embodiments of the invention can be configured according to the knowledge of persons skilled in the art without departing from the technical teaching of the invention, the invention being limited only by the terms of the appended claims.
In a first aspect, the present invention relates to a physical interface device for providing a physical interface between a user and rigid components of a machine, e.g., a rehabilitation system or an exoskeleton, e.g. a cuff, the physical interface device comprising:
The soft component may be a component that is softer than the rigid components of the machine, e.g., rehabilitation system or exoskeleton. It may be, e.g., foam-based, although embodiments are not limited thereto. Further standard and optional components may be as set out in the summary and in the claims. Standard and optional features are also shown in the following drawing and description.
According to embodiments of the present invention the physical interface device is a physical Human-Robot Interface, here also referred to as the 'cuff'. It is the accessory of the device that allows capturing data about the patient, which provides benefits. Specifically, the physical interface device may in one embodiment have integrated EMG electrodes, which measure muscular activity, and pressure sensors, which measure internal cuff pressures. Muscular activity can be used to detect intention of motion and to estimate fatigue.
An image of an exemplary physical interface device is shown in
According to embodiments, the physical interface device may provide for:
In some embodiments, when using force sensors in the interface, e.g. cuff, mechanomyography can be performed. In other words, based on the deformation of the limb, the muscle activity can be measured. This is possible for muscles and limbs inside the cuff.
The physical interface device or the machine, e.g., rehabilitation robot or exoskeleton, may comprise or be configured to communicate with a user interface, for giving feedback to the therapist and/or to the patient. The machine or device or robot may be adapted for performing data analytics, although embodiments are not limited thereto. Data that may be used in the data analytics may include one or more of: pressure readings, EMG readings, robot peak force, robot current position and programmed reference trajectory, amount of transferred work between patient and robot, level of active participation, cuff strapping pressure and performance of patient (for example whether the patient/user reaches goals regarding the range of motion and/or the force).
As indicated above, the physical interface device may be combined with or part of a machine, e.g., rehabilitation robot or exoskeleton, operating like an end-effector. It is an advantage of such a machine, e.g., rehabilitation robot or exoskeleton, that one is not bound by the limits of kinematic alignment between the joints. This significantly reduces set-up time.
According to some embodiments, physical interface devices comprise a sensorized interface having 3 integrated sensor modalities: electromyography, pressure and an inertial measurement unit. In some embodiments, EMG data can be fused with pressure to increase the accuracy of motion prediction. By measuring the pressure inside the cuff, the muscle force can be estimated, since the volume of the limb changes with exerted muscle force. Such data may for example be processed in a local processor, which may be part of the physical interface device, may be processed in a processor being part of the machine, e.g., rehabilitation robot or exoskeleton, or may be processed externally to the system. Combining the data of these sensors with the data measured by the machine or robot, such as for example with measured external forces, torques and positions, one can more accurately model the intention of the patient. The latter information may be used for increasing the motivation of the patient.
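By way of illustration only, a minimal sketch of fusing windowed EMG and pressure features for motion-intention prediction is given below; the window length, feature set and classifier are illustrative choices rather than the exact pipeline of the embodiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(signal, fs, win_s=1.0):
    """Split a 1-D signal into non-overlapping windows and compute simple
    statistical features (mean, std, min, max) per window."""
    n = int(fs * win_s)
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    return np.column_stack([windows.mean(1), windows.std(1),
                            windows.min(1), windows.max(1)])

def fuse(emg, pressure, fs):
    """Concatenate EMG and pressure features into one feature matrix."""
    return np.hstack([window_features(emg, fs), window_features(pressure, fs)])

# Hypothetical training data: 60 s of 1 kHz EMG and pressure, labels per window
fs = 1000
emg, pressure = np.random.randn(60 * fs), np.random.randn(60 * fs)
X = fuse(emg, pressure, fs)
y = np.random.randint(0, 2, len(X))  # placeholder intention labels
clf = LogisticRegression(max_iter=1000).fit(X, y)
```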
In some embodiments, a strain sensor could be used, allowing to measure how much stretching or bending occurs, which also may allow for detecting pressure or force changes.
In a second aspect, the present invention relates to a rehabilitation robot for rehabilitation of a user, the rehabilitation robot comprising a physical interface device as described in the first aspect. Further standard and optional components may be as set out in the summary and in the claims. Standard and optional features are also shown in
In one aspect, the present invention relates to an exoskeleton comprising a physical interface device according to embodiments of the first aspect of the present invention.
The present invention thus may also relate to a machine comprising a physical interface device according to embodiments of the first aspect of the present invention.
The machine, e.g., rehabilitation robot, according to an exemplary embodiment of the present invention may comprise one or more of the following components (the example not being limiting for the present invention):
According to some embodiments:
The rehabilitation robot uses programming by demonstration.
The cobot of an exemplary embodiment of this invention may be composed of 7 joints and can carry loads of 14 kg. These 7 joints mean that the workspace of the robot is large, and that many analytical motions can be performed as well as functional tasks. It is an advantage that the system may provide active assistance for functional tasks.
The robotic arm may be supported by a trolley. The trolley may be a cart with 4 rotative wheels, and it may possess an active lifting system. This means that electric actuators may lift the trolley to provide mechanical grounding. The platform can be moved around, similarly to a shopping cart.
SAFeR may be used to assist the legs, for example to provide mobilization in the post-operative phase, as well as for the upper body. Specific cuffs may be provided for each body segment, which are easy to swap and are available in different sizes.
According to some embodiments, the system may comprise an insert used for attaching the sensors thereto, allowing an efficient installation and positioning of the sensors.
According to some embodiments, the system also may comprise a protection part for protecting the electronics in the system, e.g. made of any suitable material, such as e.g. plastics.
It is to be understood that although preferred embodiments, specific constructions and configurations, as well as materials, are discussed herein for devices according to the present invention, various changes or modifications in form and detail may be made without departing from the scope of this invention. For example, any formulas given above are merely representative of procedures that may be used. Steps may be added or deleted to methods described within the scope of the present invention.
In one example, the rehabilitation platform comprises a physical interface device used to attach the patient to the robot, being the part of the platform that makes it possible to mimic the physiotherapist's sense of touch. The physical interface device is designed for safety and comfort. By modeling human bodies, the physical interface device can be shaped in such a way that it better fits one or all patients. Additionally, using precise strapping mechanisms, patients can be strapped more accurately for increased comfort. Furthermore, embedding sensors directly into the physical interface device results in easier and faster strapping of patients (a reduced workload for the physiotherapist results in a reduced financial load). The sensors capture biological signals (i.e. internal cuff pressures and muscle activity) directly on the human body, which can be used as feedback for the physiotherapist to optimize the rehabilitation strategy.
On the Data level, sensor data may be interpreted and visualized in a way that is understandable to the therapist. Soft-tissue adverse events during therapy (e.g. skin injury) can be counteracted by giving feedback about the strapping pressure distribution inside the cuff (integrated pressure sensors). Moreover, in order to increase the outcome after rehabilitation, the intention of the patient can be recognized by interpreting all data streams measured in the physical interface device and in the robot.
The robot responsible for actuating the mobilization of the patient is a crucial element in the platform. Since the platform is in continuous contact with a patient, the safety framework is of first importance on the Robot level. Operation limits within which the robot can freely operate have been defined, outside of which the robot will go into a safety mode. For normal operation modes (i.e. mobilization of the patient), the focus has been on easy programming of new exercise motions (decreasing the time a therapist has to set up the system). Programming is done using demonstrations, where the therapist records a motion by physically moving the robot and patient along the exercise trajectory. Afterwards, these exercises are replayed with different robot control modes that can be chosen by the physiotherapist. Early mobilization can rely on a passive controller, while later stages can rely more on an active controller (using proxy-based sliding mode control) or an adaptive controller (variable assistance based on the level of motivation of the patient).
Finally, all these achievements enabled the development of the robotic rehabilitation platform SAFeR, developed in the context of the KUKA Innovation Award challenge. The platform is partly evaluated on the Medical level: testing has focused on evaluating the platform with healthy subjects, and functional testing has been performed, evaluating the platform in a hospital setting.
By way of illustration, embodiments of the present invention not being limited thereby, a further exemplary embodiment is discussed below, showing standard and optional elements and features of some embodiments of the present invention and showing advantages thereof.
Although below, a specific robot or machine for upper limb rehabilitation is described, the machine may be any machine, e.g., any type of rehabilitation robot or may be an exoskeleton.
Robotics is gaining more and more attention in the medical field. In particular, robot-aided rehabilitation proved to be an effective tool for providing high-intensity treatments to patients suffering from neurological and musculoskeletal disorders.
Despite the fact that continuous-passive motion machines can effectively improve the range of motion of specific anatomical districts, more complex robotic platforms may play a paramount role in increasing patient active participation during therapy since they can induce neural plasticity to speed up motor recovery. Indeed, robotic architecture can include different features that aim at engaging the patient in interacting with the machine. Assist-as-needed controllers are designed to provide minimal assistive forces to the patient in such a way that the robot should intervene only if the patient is not capable of performing the task autonomously. Furthermore, the inclusion of the patient's intention in the control loop, i.e. to trigger the initiation of movement, results in successful clinical outcomes. Lastly, the gamification of the rehabilitation process thanks to the virtual reality technologies allows engaging the patients while they are doing motor exercises.
Although all these works stress the importance of involving and engaging patients undergoing robot-aided treatments, a methodology to continually estimate to what extent the patient is actively participating in performing the motor exercise and leveraging such metrics inside the robot control loop has not been addressed in the scientific literature.
The works in J. Wagner, T. Solis-Escalante, P. Grieshofer, C. Neuper, G. Muller-Putz, and R. Scherer, “Level of participation in robotic-assisted treadmill walking modulates midline sensorimotor eeg rhythms in able-bodied subjects,” Neuroimage, vol. 63, no. 3, pp. 1203-1211, 2012 and G. Tacchino, M. Gandolla, S. Coelli, R. Barbieri, A. Pedrocchi, and A. M. Bianchi, “Eeg analysis during active and assisted repetitive movements: evidence for differences in neural engagement,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 6, pp. 761-771, 2016 evidenced that some distinctive features of the electroencephalogram (EEG) significantly change between active and passive walking with a lower-limb exoskeleton and in performing upper limb repetitive motions. The experiment carried out in E. Koyas, E. Hocaoglu, V. Patoglu, and M. Cetin, “Detection of intention level in response to task difficulty from eeg signals,” in 2013 IEEE International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2013, pp. 1-6 found that EEG signals can be used to extract the intention level of the subjects in response to task difficulty. Moreover, the authors evidenced that some correlations between cortical and muscular activity exist when the participants exert different levels of participation. However, the instrumentation used in these works is wearable but requires extensive calibration procedures and can be considered invasive, and wearing an EEG helmet is not feasible for daily rehabilitation therapy.
Wearable monitoring seems to be a valuable alternative for monitoring user physiological signals. Muscular activity is measured in S. Pareek, H. Manjunath, E. T. Esfahani, and T. Kesavadas, "Myotrack: Realtime estimation of subject participation in robotic rehabilitation using semg and imu," IEEE Access, vol. 7, pp. 76 030-76 041, 2019. Healthy subjects were enrolled to undergo an experiment to develop a binary classifier to predict whether a participant is passively or actively tracking a displayed trajectory with a haptic device. Wearable surface electromyography (sEMG) proved to be an optimal estimator of patient participation. On the other hand, only sEMG is taken into account in the developed model. Moreover, that model allows the prediction of only the discrete class between active and passive movement.
Some efforts have been made on robot control to close the loop on the patients themselves in so-called biocooperative systems. Metrics can be computed during the rehabilitation session in order to tune the parameters of the controller in real-time. For instance, kinematic performance as well as patient physiological parameters can be used in gait and in upper-limb robot-aided rehabilitation. On the other hand, all these controllers aim at involving the participants more and more without an explicit quantification of patient participation.
To overcome this limitation, the present invention aims at proposing a novel technique to quantify the patient's active level of participation (ALP) during upper-limb robot-aided rehabilitation and a control strategy that takes into account the estimated ALP. An unobtrusive multimodal interface was developed and integrated into an end-effector rehabilitation robot to monitor sEMG and the pressure exchanged between the robot and the user. The computation of the ALP enables the development of increasingly human-centred robotic platforms capable of real-time decision-making through patient multimodal monitoring. The proposed ALP-adapting robot was compared with a state-of-the-art impedance controller to assess the effect of closing the control loop on such human-centred metrics.
Below, details are first provided of an exemplary embodiment of the present invention, and the experiments for its validation are described. Then, the results obtained during these experiments are presented. Lastly, the main conclusions from these experiments are drawn.
These quantities are capable of providing a picture of the human-robot interaction to quantify to what extent the user is participating in accomplishing the motor task. Machine learning algorithms can be trained to identify the two extreme conditions, e.g., the patient not participating at all and the active participation. In order to return a continuous estimate of the active level of participation (ALP), model calibration is needed. Thus, an experiment is needed to collect data to capture the behavior of healthy participants to model the ALP. The computed ALP can be used to adapt the robot behavior.
In particular, the proposed approach adapts the task execution speed according to the ALP. The human-robot interaction designed in this example along with the developed multimodal monitoring interface and the active participation model, presented in
The interaction between the robot and the user is a conventional Cartesian impedance controller around a set point. The robot motion dynamics along with the implemented control law are given by:
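The display equations are not reproduced here; by way of illustration only, a reconstruction consistent with the symbols defined below, assuming a standard gravity-compensated Cartesian stiffness law (additional damping or feedforward terms may be present in the actual implementation), could read:

$$B(q)\,q'' + C(q, q')\,q' + F_v\,q' + F_s\,\mathrm{sign}(q') + g(q) = \tau_c, \qquad \tau_c = y,$$

$$y = g(q) + J^{T}(q)\,K\,\tilde{x}, \qquad \tilde{x} = x_d - x_a.$$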
where B(q) is the robot inertia matrix, C(q, q′) accounts for centrifugal and Coriolis effects, Fv is the viscous friction torque, Fs sign(q′) is the static friction torque, g(q) is the gravity contribution, q, q′ and q″ are the robot joint position, angular velocity and acceleration, respectively, τc is the torque supplied by the actuators and y is the control law. In particular, J† = J^T (J J^T)^{-1} is the right pseudo-inverse of the robot geometric Jacobian, K is the stiffness matrix and x̃ = xd − xa represents the pose error between the desired pose xd and the current pose xa. The desired speed and acceleration (x′d and x″d) are not considered inside the control law presented in the above equations, since the controller of this example is adapted to provide a set point without an explicit time law.
In order to acquire the specific trajectory to be replayed by the robot in a specific session, the robot can be set transparent by defining the current stiffness matrix as K=diag{0, 0, 0, 0, 0, 0} N/m. Once the recording starts, the users can freely move their arm attached to the robot end-effector and the Cartesian position and orientation are saved in the reference demonstration xdemo. The set-point xd introduced in the control law in the above equations will be xd∈xdemo.
The human-robot interface developed for this example is composed of two sensing modalities: pressure and EMG. The development of the interface is described in detail in K. Langlois, E. Roels, G. Van De Velde, C. Espadinha, C. Van Vlerken, T. Verstraten, B. Vanderborght, and D. Lefeber, "Integration of 3d printed flexible pressure sensors into physical interfaces for wearable robots," Sensors, vol. 21, no. 6, p. 2157, 2021 and K. Langlois, J. Geeroms, G. Van De Velde, C. Rodriguez-Guerrero, T. Verstraten, B. Vanderborght, and D. Lefeber, "Improved motion classification with an integrated multimodal exoskeleton interface," Frontiers in Neurorobotics, p. 140, 2021. The EMG electrodes are made of conductive textile (EconTex NW170-PI-20).
As already indicated above, assessing the ALP of patients during robot-aided rehabilitation is preferred in order to engage them more and more in exercising with the robot. ALP is a measure that encompasses different areas related to both the physical and cognitive spheres. In particular, we may define ALP as a combination of i) physical workload, ii) intention, iii) performance and iv) engagement.
From a physical point of view, if the patients are actively performing the rehabilitation task, they exert a certain physical workload. On the other hand, if the patient slacks, the robot has to assume the leading role in accomplishing the task, since the patient is not providing any contribution. The interaction forces can be useful to assess the amount of workload exchanged between the user and the machine. Moreover, another ALP feature is the intention to move. The participation of patients in exercising themselves also represents the will to perform the task rather than being guided along it. The patient intention relates to the EMG.
Moving towards the cognitive sphere, participation appears to be closely linked with performance. A subject who wants to perform well actively performs the assigned exercise. The performance during rehabilitation is related to the accuracy of tracking the desired motion. At last, engagement in what one is doing and the perception of the interaction also play a paramount role in ALP.
To this purpose, the ALP can be predicted starting from the measures coming from the multimodal interface. In particular, a structured data acquisition campaign is typically needed to train a machine learning model. Given a temporal window of 1 second, statistical features such as the mean, standard deviation, minimum, maximum, and mean value of the first and second derivative can be computed from the four channels of the measured pressures. From the raw EMG signal, both time and frequency domain features were computed as described in K. Langlois, J. Geeroms, G. Van De Velde, C. Rodriguez-Guerrero, T. Verstraten, B. Vanderborght, and D. Lefeber, "Improved motion classification with an integrated multimodal exoskeleton interface," Frontiers in Neurorobotics, p. 140, 2021 and F. Leone, C. Gentile, F. Cordella, E. Gruppioni, E. Guglielmelli, and L. Zollo, "A parallel classification strategy to simultaneous control elbow, wrist, and hand movements," Journal of NeuroEngineering and Rehabilitation, vol. 19, no. 1, pp. 1-17, 2022. In particular, the root mean square, the average amplitude change, variance, integrated EMG, average energy, wavelength, mean absolute deviation, and logarithmic difference of absolute mean values are extracted as time-domain features. Moreover, mean and median frequencies are taken into account. To sum up, 10 and 24 features are extracted from the EMG and pressures, respectively, for a total of 34 features. Once the dataset has been collected, machine learning models can be fed with the data to classify the two different labels, i.e. passive and active participation. Since, in this example, the aim is to provide a framework to estimate the ALP in a continuous manner and not simply the binary class participating or not, a machine learning model calibration step is preferred (see B. Zadrozny and C. Elkan, "Transforming classifier scores into accurate multiclass probability estimates," in Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, 2002, pp. 694-699). Calibration is an essential step for a machine learning model used in a clinical setting since it can provide the probability that its advice is right, e.g., to what extent the participant is actively participating or not. In particular, the isotonic regression calibration approach was used in the present example, as described in A. Niculescu-Mizil and R. Caruana, "Predicting good probabilities with supervised learning," in Proceedings of the 22nd international conference on Machine learning, 2005, pp. 625-632. It maps the posterior probability of a supervised learning model to a monotonically increasing function. Given the predictions of the model fi and the true targets yi, isotonic regression assumes that the mapping m from fi to yi is a monotonically non-decreasing (isotonic) function, which is fitted by minimizing the squared error between m(fi) and yi.
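By way of illustration only, a minimal sketch of such a calibrated classification pipeline is given below, using scikit-learn's isotonic calibration option; the dataset shapes and labels are placeholders.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical dataset: 34 features per 1 s window, binary labels (0 = passive, 1 = active)
X = np.random.randn(500, 34)
y = np.random.randint(0, 2, 500)

# LDA classifier whose scores are mapped to calibrated probabilities with
# isotonic regression; the calibrated probability of the "active" class is
# then used as the continuous ALP estimate.
model = CalibratedClassifierCV(LinearDiscriminantAnalysis(),
                               method="isotonic", cv=5)
model.fit(X, y)
alp = model.predict_proba(X)[:, 1]
```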
Once an estimation of the ALP is available, it is possible to take it into account to adapt the robot behavior. The present exemplary approach aims at modifying the task execution speed according to the ALP of the participants. Given the recorded trajectory xdemo, composed of T samples, at each iteration the ALP is used to compute the number of samples Δt to skip in the recorded trajectory xdemo in order to assign a reference set-point to the robot.
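By way of illustration only, a minimal sketch of such a sample-skipping rule is given below; the mapping from the ALP to Δt is assumed to be linear and saturating at the end of the trajectory, which is an illustrative choice rather than the exact rule of the embodiment.

```python
def next_setpoint(x_demo, index, alp, dt_min=1, dt_max=8):
    """Advance the reference set-point along the recorded trajectory.

    x_demo : sequence of T recorded poses
    index  : current sample index in x_demo
    alp    : estimated active level of participation in [0, 1]
    The number of samples to skip grows with the ALP (mapping assumed linear),
    and the index saturates at the last recorded sample.
    """
    delta_t = int(round(dt_min + alp * (dt_max - dt_min)))
    index = min(index + delta_t, len(x_demo) - 1)
    return x_demo[index], index
```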
In order to validate the proposed approach, an experiment was designed and carried out by enrolling 15 healthy participants (10 males and 5 females, mean age 34.5±14.18 years). All of them signed a written consent to participate in this experiment. In particular, two experimental sessions were carried out. The first one aimed at recording data about the human-robot interaction to train the ALP machine-learning estimation model. Once the model was validated offline, the second experiment was carried out in order to assess the impact of the ALP-adapting robot on the participants with respect to a non-adapting one.
In the first experiment, five participants are asked to perform shoulder flexion/extension (sFE) exercise with the robot aid.
The volunteers are asked to sit comfortably near the rehabilitation robotic platform and to slip their right arm into the human-robot interface so that it fits properly. As explained above, when the participants are comfortably attached to the robot end-effector, they record the sFE movement with the robot transparent to collect xdemo. When the demonstration is recorded, the replay of the recorded motion takes place. In both experimental conditions, the stiffness matrix of the robot is set at K=diag{500, 500, 500, 300, 300, 300} N/m. These values are in fact typically used in end-effector robotic platforms for upper limb rehabilitation. The participants are asked to perform 30 repetitions of the recorded motion in two experimental conditions:
Passive Participation (PP): the users are slacking and let the robot guide their arm in accomplishing the motor task. They passively interact with the robot.
Active Participation (AP): the volunteers actively attempt to follow the trajectory. They participate in following the reference path.
Linear Discriminant Analysis (LDA), linear Support Vector Machine (SVM), Logistic Regression (LR), and k-Nearest Neighbours (kNN, with k=1) classifiers were compared.
In the second experimental validation phase, the ALP estimation module is used in real-time to adapt the robot behavior. To prove the effectiveness of the proposed ALP-adapting robot, two groups of five participants each were enrolled. In particular, all the participants were asked to perform 30 repetitions of the sFE task without providing further information to avoid bias in the groups. At the beginning of the experiment, 15 repetitions are performed with fixed Δt=4 samples in order to measure the participant baseline. In this phase, the aforementioned subject-specific threshold ALPcal is computed as the mean of the ALP collected during the first 15 repetitions of the rehabilitation treatment.
The rest of the experiments were performed following a double-blinded strategy: neither the participants nor the researcher knows which controller is provided to the participants. The control group (CG) interacts with the non-adapting robot, i.e. with Δt fixed at 4 samples. The experimental group (EG) undergoes the rehabilitation session with the ALP-adapting robot. A schematic representation of the second experimental session is provided in
The first experimental phase assesses the capability of the machine learning models in estimating the ALP. K-fold cross-validation (k=5) is carried out on the collected dataset to compute the performance of the implemented classifiers in accurately predicting both the label and the score. In particular, the training set for each fold is composed of 80% of the dataset and the i-th fold (20%) is split up into a validation set (10%), used to calibrate the classifiers, and a test set (10%), on which the following metrics are computed:
The best performing model among the tested ones, i.e. LDA, SVM, LR, and kNN, will be run in the second experimental phase as the real-time ALP model. In the second experimental phase, the ALP-adapting robot is compared with the not-adapting condition by means of a set of performance indicators assessing the biomechanics of the human-robot interaction as well as the subjective perception of the controllers.
In order to highlight the impact of the robot adaptation on the participants, the aforementioned performance indicators computed during the second 15 repetitions were normalized with respect to the values observed in the calibration phase, where X denotes one among {ALP, WR, TE, iEMG} and Xcal is the mean value of the metric computed in the calibration phase. Moreover, the subjective perception of the controllers by the participants is assessed by means of questionnaires. The engagement in interacting with the robot was assessed by means of the Self-Assessment Manikin (SAM) questionnaire. The SAM allows the participants to declare their Valence of the response (from positive to negative), perceived Arousal (from high to low levels), and perceived Dominance (degree of control that a person has over the situation) evoked by rehabilitation robot use (see T.-M. Bynion and M. T. Feldner, Self-Assessment Manikin. Cham: Springer International Publishing, 2017, pp. 1-3. [Online]. Available: https://doi.org/10.1007/978-3-319-28099-877-1). Moreover, the NASA-TLX was administered to assess the perceived workload in interacting with the robot in the two experimental conditions (see L. M. A. La Bara, L. Meloni, D. Giusino, and L. Pietrantoni, "Assessment methods of usability and cognitive workload of rehabilitative exoskeletons: A systematic review," Applied Sciences, vol. 11, no. 15, p. 7146, 2021). In particular, the participants were asked to rate, from 0 to 10, their experience in terms of Mental Demand (MD), Physical Demand (PD), Temporal Demand (TD), Performance (PER), Effort (EF), and Frustration (FR).
To assess the impact of using the ALP-adapting robot with respect to the simple impedance controller, a statistical analysis has been carried out on the collected data. In particular, the Wilcoxon rank-sum test is performed on the aforementioned performance indicators for the two groups of participants, CG and EG. This test assesses whether a significant difference exists between the two investigated conditions. The significance level is set at p-value ≤ 0.05.
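By way of illustration only, a minimal sketch of the described statistical comparison using SciPy's rank-sum test is given below; the group data are placeholders.

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder per-subject performance indicators for the two groups
cg = np.array([0.95, 1.02, 0.98, 1.05, 0.99])   # control group (non-adapting robot)
eg = np.array([1.30, 1.45, 1.25, 1.38, 1.50])   # experimental group (ALP-adapting robot)

stat, p_value = ranksums(cg, eg)
significant = p_value <= 0.05  # significance level used in the example
print(f"p = {p_value:.3f}, significant: {significant}")
```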
It is worth highlighting that the LDA outperformed the other approaches, since it returned the highest accuracy and the lowest Brier score. Consequently, LDA was selected as the ALP estimation model in the second experimental phase.
The ΔALP estimated during the last 15 repetitions of the experiment for the two groups is reported in
The ΔALP along with the other performance indicators, introduced above, are reported in
As already shown in
Muscle activity was also affected by the robot adaptation. Indeed, the iEMG metric significantly increased in the experimental condition for the EG participants with respect to the calibration phase. The overall increased muscular activation is a key factor for assessing the user's intention to participate in the rehabilitation training. Although the CG showed only a slight increase with respect to the calibration phase, the ΔiEMG of the EG increased by about 130% (p-value=0.01).
Finally, the results on the level of engagement and perceived workload are shown in
The present example has provided a methodology in accordance with embodiments of the present invention to objectively estimate the ALP of participants interacting with a robot during robot-aided rehabilitation sessions and test it inside the control loop of an ALP-adapting robot. The estimation model is fed by a multimodal monitoring interface that senses the electromyographic activity of the biceps and the pressure exchanged at the human-robot interface. It is worth noticing that the proposed method is validated on a specific robotic platform and movement of the upper limb, i.e. the shoulder flexion/extension, but the procedure described herein can be extended to different robots and motions. Indeed, the ALP estimation method relies only on the measurements collected at the human-robot interface, where the interaction itself takes place.
At first, a machine-learning model calibration procedure was performed in order to improve the probability predictions of commonly used classifiers, starting from data acquired from five healthy participants. LDA turned out to be the most suitable in terms of classification accuracy and goodness of calibration, i.e. the lowest Brier score.
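A hedged sketch of such a calibration step is shown below: a pre-trained LDA model is calibrated on the validation split and scored with the Brier score on the test split. The use of a sigmoid (Platt-style) mapping and binary labels are assumptions made for illustration only.

```python
# Hedged sketch of probability calibration: the decision values of a
# pre-trained LDA model are mapped to calibrated probabilities by a sigmoid
# (Platt-style) model fitted on the validation set, and the quality of the
# calibrated probabilities is scored with the Brier score on the test set.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

def calibrated_brier(X_train, y_train, X_val, y_val, X_test, y_test):
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    # Fit the sigmoid mapping on the validation set only (the 10% split).
    platt = LogisticRegression().fit(
        lda.decision_function(X_val).reshape(-1, 1), y_val)
    p_test = platt.predict_proba(
        lda.decision_function(X_test).reshape(-1, 1))[:, 1]
    return brier_score_loss(y_test, p_test)   # lower is better calibrated
```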
The trained model was then used to estimate in real time the ALP of two groups of five participants each interacting with the rehabilitation robot, namely the CG and the EG. The participants who interacted with the ALP-adapting robot exhibited a significantly higher physical workload, muscular activity, and level of perceived excitement with respect to the CG. This demonstrates that closing the robot control loop on the participants effectively enhances the interaction, stimulating the users to put more effort into the exercise.
The proposed approach could be applied during robot-aided rehabilitation sessions of pathological subjects to assess the capability of the ALP estimation model to identify the ALP of patients with limited motor functions. Moreover, the ALP model could exploit different sensing modalities to take into account measures that relate to the cognitive workload of the users.
In a second example, the possibilities of embodiments of the present invention are shown with respect to the detection of human intention in wearable robots, such as for example exoskeletons. Exoskeleton controllers often assume full transmission of the actuators' torque to the human body, neglecting torque losses. According to some embodiments of the present invention, 3D force sensing is applied. An advantage thereof is that the analysis of arm movements is less restricted compared to measuring only the normal force with a 1D force sensor. 3D force sensing can be very sensitive and can be integrated in the physical interaction device according to embodiments of the present invention to verify, in real time, the efficiency of force and torque transmission. The additional advantages of assessing shear forces are discussed, as well as a classification algorithm for touch and motion. In the present example a setup is chosen wherein twelve 3D force sensors are used. These are integrated in a cuff, forming the physical interface device between a robot and the user. The 3D force sensors used may be any suitable 3D force sensors. The translation and rotation frame of each 3D force sensor is thereby referenced to the center-of-mass frame of the physical interface device. The sensors in the present example are arranged as three parallel lines of four 3D sensors (each set of four 3D sensors thus being aligned on a single line), although other arrangements may also be used.
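Purely by way of illustration, the sketch below shows how the twelve local 3D force readings might be expressed in the center-of-mass frame of the cuff and combined into a net interaction force and torque. The cuff geometry, sensor poses and readings are placeholder assumptions, not the actual design values.

```python
# Illustrative sketch (not the actual implementation): readings of the twelve
# 3D force sensors are expressed in the center-of-mass (COM) frame of the cuff
# and combined into a net interaction force and torque. Geometry is assumed.
import numpy as np

N_ROWS, N_COLS, RADIUS, LENGTH = 3, 4, 0.04, 0.12   # assumed cuff geometry [m]

def sensor_poses():
    """Placeholder position and rotation of each sensor w.r.t. the COM frame."""
    poses = []
    for i in range(N_ROWS):
        phi = 2 * np.pi * i / N_ROWS           # three parallel lines of sensors
        for j in range(N_COLS):
            z = (j / (N_COLS - 1) - 0.5) * LENGTH
            p = np.array([RADIUS * np.cos(phi), RADIUS * np.sin(phi), z])
            # Rotation aligning the sensor normal with the local radial axis.
            n = np.array([np.cos(phi), np.sin(phi), 0.0])
            t = np.array([-np.sin(phi), np.cos(phi), 0.0])
            R = np.column_stack([n, t, np.array([0.0, 0.0, 1.0])])
            poses.append((p, R))
    return poses

def net_wrench(readings):
    """Net force/torque in the COM frame from 12 local 3D force readings."""
    F, T = np.zeros(3), np.zeros(3)
    for (p, R), f_local in zip(sensor_poses(), readings):
        f = R @ f_local                         # force expressed in the COM frame
        F += f
        T += np.cross(p, f)                     # torque contribution about the COM
    return F, T

F, T = net_wrench(np.random.randn(12, 3) * 0.5)  # dummy readings [N]
print("net force [N]:", F, "\nnet torque [Nm]:", T)
```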
Controlling of the robot movement could be performed in two ways using the data obtained with the 3D force sensing. One way is referred to as admittance control, whereby the velocity of the end-effector is controlled. The mathematical model behind this, used in combination with a virtual spring, can be expressed by the following equation:
The mathematical meaning of this equation is known to the person skilled in the art, but can for example also be found in Sharkawy et al., "Human-Robot Interaction: A Review and Analysis on Variable Admittance Control, Safety, and Perspectives" (2022) Machines 10(7):591, 1-26. The admittance control takes as input the forces obtained from the force sensors and provides as output the movement of the end-effector. Using data-based control, for example a neural network or fuzzy logic, allows for variable admittance control.
It was found that using force sensing a more sensitive result could be obtained, allowing for example the detection of forces stemming from contact with a sheet of paper. The system showed that good accuracy could be obtained in the movement of the robot.
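A minimal sketch of such an admittance law is given below, assuming the standard mass-damper-spring model M·ẍ + D·ẋ + K·x = F_ext discussed in the cited Sharkawy et al. review; the virtual parameters and loop rate are illustrative values, not the parameters used in the example.

```python
# Minimal admittance-control sketch: the measured interaction force from the
# cuff drives the commanded end-effector velocity through a virtual
# mass-damper-spring system. All parameter values are assumptions.
import numpy as np

M = np.diag([2.0, 2.0, 2.0])     # virtual mass [kg]
D = np.diag([20.0, 20.0, 20.0])  # virtual damping [Ns/m]
K = np.diag([50.0, 50.0, 50.0])  # virtual spring stiffness [N/m]
DT = 0.002                       # control period [s]

x = np.zeros(3)                  # virtual displacement from the rest pose [m]
x_d = np.zeros(3)                # commanded end-effector velocity [m/s]

def admittance_step(f_ext):
    """One control cycle: integrate the admittance dynamics and return the
    velocity command to send to the robot's velocity controller."""
    global x, x_d
    x_dd = np.linalg.solve(M, f_ext - D @ x_d - K @ x)
    x_d = x_d + x_dd * DT
    x = x + x_d * DT
    return x_d

# Example: a constant 5 N push along the first axis measured at the interface.
for _ in range(5):
    v_cmd = admittance_step(np.array([5.0, 0.0, 0.0]))
print("velocity command [m/s]:", v_cmd)
```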
A second way of controlling is referred to as impedance control, whereby the torque of the joints is controlled. The latter is applicable in the case of small velocities and accelerations. The mathematical model behind this can be expressed as
The mathematical meaning of this equation is known to the person skilled in the art, but can for example also be found in Lynch et al., Video Supplements for Modern Robotics, Cambridge University Press (chapter 11.5, force control), hyperlink https://modernrobotics.northwestern.edu/nu-gm-book-resource/11-5-force-control/#department. The impedance control takes the movement of the arm and the variation in the force sensor values as input and, using a PI controller, provides the joint torque required to return to the initial force value, thus allowing control of the robot. So, starting from a reference value of the force sensor, the movement required to return to the reference value is determined with a PID controller and used for controlling the robot. This allows for good control, whereby stiffness and overdamping can be taken into account. It further allows differentiating between internal and external forces.
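The sketch below illustrates, under stated assumptions, the force-regulation idea described above: a PI controller acts on the deviation of the measured interface force from its reference value, and the corrective force is mapped to joint torques through the transposed Jacobian. The gains, the Jacobian and the reference force are placeholders, not values from the example.

```python
# Hedged sketch of PI-based force regulation mapped to joint torques.
import numpy as np

KP, KI = 8.0, 2.0        # PI gains on the force error (assumed)
DT = 0.002               # control period [s]

class ForcePIController:
    def __init__(self, f_ref):
        self.f_ref = np.asarray(f_ref, dtype=float)  # reference interface force [N]
        self.e_int = np.zeros_like(self.f_ref)       # integrated force error

    def joint_torques(self, f_meas, jacobian):
        """Joint torques that drive the interface force back to its reference."""
        e = self.f_ref - np.asarray(f_meas, dtype=float)
        self.e_int += e * DT
        f_corr = KP * e + KI * self.e_int            # corrective Cartesian force
        return jacobian.T @ f_corr                   # map to joint space

# Example with a dummy 3x3 Jacobian and a disturbed force measurement.
ctrl = ForcePIController(f_ref=[0.0, 0.0, 5.0])
tau = ctrl.joint_torques(f_meas=[0.2, -0.1, 4.0], jacobian=np.eye(3))
print("commanded joint torques [Nm]:", tau)
```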
Since the sensing according to embodiments of the present invention was performed in direct contact with the limb, there is no need to take the environment into account, so that the results are not affected by environmental noise.
In the example, neural network classification mechanisms were used to estimate the importance of the shear component and to identify the relevance of the sensor positions used. Furthermore, neural network algorithms were also used to classify the performed motions in real time. It was found that accurate classification was possible.
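An illustrative sketch of such a classification and relevance analysis is given below: a small neural network is trained on the 36 force components (12 sensors × 3 axes) and permutation importance is used to gauge the contribution of the shear components and sensor positions. The data files, labels, network size and column ordering are assumptions, not the actual study configuration.

```python
# Illustrative sketch: motion classification from the cuff's 3D force data and
# permutation-importance analysis of the input components. Data are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X = np.load("cuff_forces.npy")      # shape (n_samples, 36), placeholder file
y = np.load("motion_labels.npy")    # motion class per sample, placeholder file

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))

# Importance of each input: columns are assumed ordered sensor-by-sensor as
# (normal, shear_1, shear_2), so grouping them per axis or per sensor
# indicates the relevance of the shear components and sensor positions.
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
per_axis = imp.importances_mean.reshape(12, 3).mean(axis=0)
print("mean importance per axis (normal, shear_1, shear_2):", per_axis)
```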
Furthermore, shear force was also used to detect relative motion between the arm and the physical interface device. Such motion is illustrated in
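A very simple illustrative rule for such slip detection (an assumption made here, not the method of the example) is to flag relative motion when the shear-force magnitude changes faster than a threshold:

```python
# Assumed, simplified slip-detection rule based on the rate of change of the
# shear-force magnitude measured at the physical interface.
import numpy as np

SHEAR_RATE_THR = 5.0   # [N/s], assumed threshold
DT = 0.002             # sampling period [s]

def slip_detected(shear_prev, shear_now):
    """True if the shear-force magnitude changes faster than the threshold."""
    rate = (np.linalg.norm(shear_now) - np.linalg.norm(shear_prev)) / DT
    return abs(rate) > SHEAR_RATE_THR

print(slip_detected(np.array([0.5, 0.1]), np.array([0.9, 0.2])))  # -> True
```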
It is to be understood that although preferred embodiments, specific constructions and configurations, as well as materials, have been discussed herein for devices according to the present invention, various changes or modifications in form and detail may be made without departing from the scope of this invention. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks.
Number | Date | Country | Kind |
---|---|---|---
23174131.5 | May 2023 | EP | regional |