The present disclosure relates to rehabilitation robots, and more particularly, to a rehabilitation robot assisted motion system and method based on motion intention.
The rehabilitation robot is considered to be a “wearable device” for special environments, with functions such as helping the disabled walk, providing rehabilitation treatment, and reducing labor intensity. The rehabilitation robot is a high-level rehabilitation medical technology developed in recent years. As a product of the combination of robot technology and medical technology, it helps disabled patients regain their motion function and brings them hope of returning to society.
Rehabilitation robots are currently mainly applied to upper or lower limb motion dysfunction caused by stroke, brain injury, spinal injury, neurological injury, muscle injury, and orthopedic diseases. They help patients reshape brain motor nerves and restore the brain's control of limb movement, thereby improving patients' ability to carry out activities of daily living.
The rehabilitation robot can be divided into rehabilitation treatment/training robots, auxiliary terminal robots, and intelligent robots combined with health care, according to functional classification. According to the body parts involved, it can be divided into upper limb robots and lower limb robots. According to the manner of the human-machine combination, it can also be divided into an exoskeleton type and an embedded type.
It should be pointed out that, in the process of using a conventional rehabilitation robot for rehabilitation training, the training is passive: the patient can only perform rehabilitation training according to the preset functions of the rehabilitation equipment, which cannot be matched to the user's motion intention, so the effects of rehabilitation training are limited.
In view of the above-described technical problems, an object of the present disclosure is to provide a rehabilitation robot assisted motion system and method based on a patient's motion intention, capable of acquiring an active motion intention of a user, capable of controlling a motion of an actuator based on the active motion intention of the user, and capable of improving the effects of rehabilitation training of the user.
In order to achieve the above object, the present disclosure provides a rehabilitation robot assisted motion method based on an active motion intention of a user, comprising:
In some embodiments, the disclosure further provides a rehabilitation robot assisted motion system based on an active motion intention of a user, comprising:
Hereinafter, the above features, technical features, advantages, and implementations of the present disclosure will be further described in a clear and easily understood manner with reference to the embodiment and the accompanying drawings.
In order to more clearly explain the embodiments of the present disclosure or the technical solutions in the prior art, specific embodiments of the present disclosure will be described below with reference to the accompanying drawings. It is obvious that the drawings in the following description show only some embodiments of the present disclosure, and those skilled in the art can obtain other drawings and other embodiments from them without creative effort.
For the sake of simplicity of the drawings, only the parts related to the disclosure are schematically shown in the drawings, and they do not represent the actual structure of the product. In addition, in order to make the drawings simple and easy to understand, in some drawings only one of the components having the same structure or function is schematically illustrated or labeled. Herein, “a/an” means not only “one” but also “more than one”.
It should be further understood that the term “and/or” as used in the specification of this disclosure and the appended claims refers to any combination and all possible combinations of one or more of the associated listed items, and comprises such combinations.
In the present specification, it should be noted that, unless otherwise specified and defined, the terms “mounted”, “attached”, and “connected” are to be understood in a broad sense: a connection can be fixed, detachable, or integral; it can be mechanical or electrical; and it can be direct, indirect through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present disclosure will be understood by those skilled in the art.
In addition, in the description of the present disclosure, the terms “first”, “second”, and the like are used only to distinguish the description, and cannot be understood as indicating or implying relative importance.
Recent studies have shown that the human nervous system retains plasticity throughout life; that is, the nervous system adapts to changes in the external environment through “re-learning”, and its structure and function are constantly modified and reorganized when it is damaged. As an important foundation of modern medical rehabilitation, neural plasticity is also central to rehabilitation training. Reorganization and compensation are important features of the nerve reorganization process. Studies have shown that the important factors affecting nerve reorganization in rehabilitation training include an appropriate training prescription and the active participation of patients. Even if a user has no ability to move actively, active motion intention is necessary to ensure the training effect. Effectively identifying the active motion intention of the user is therefore one of the keys for rehabilitation robots to ensure the rehabilitation training effects for patients.
Human eye movement information is related to the thinking content of the brain. The quantity of information that the brain can process at the same time is limited, so, in order to decide which information needs to be processed in time, humans and many other animals have evolved an information selection mechanism, usually called focus. Focus is the pointing and concentration of psychological activity on a certain object. It enables people to selectively process some stimuli and ignore others, so as to avoid information overload in the brain. A large number of studies have confirmed that the position of the eye is usually related to the thing being focused on and considered, especially when observing an object with a goal in mind. This is known as the eye-brain consistency hypothesis. According to this hypothesis, the present disclosure incorporates eye movement information into rehabilitation training and effectively improves the rehabilitation training effects.
101, acquiring eye movement information of the user;
102, processing the eye movement information to generate a user focus level coefficient; and
103, in response to the user focus level coefficient satisfying a preset condition, based on the eye movement information, generating an execution control signal, for controlling an actuator to assist the user in motion.
In the rehabilitation robot assisted motion method based on the active motion intention of the user according to the present disclosure, the active motion intention of the user can be evaluated based on the eye movement information of the user, the eye movement information can be merged into the rehabilitation training, and the effects of the rehabilitation training can thus be improved.
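For illustration only, the flow of steps 101 to 103 can be sketched as follows; the eye_sensor and actuator interfaces, the dictionary-style eye data, and the simple threshold gate are assumptions made for this sketch and are not part of the disclosed embodiments.

```python
def assisted_motion_step(eye_sensor, actuator, compute_focus, threshold):
    """One control iteration covering steps 101-103 (illustrative only)."""
    eye_info = eye_sensor.read()         # step 101: acquire eye movement information
    c_focus = compute_focus(eye_info)    # step 102: generate user focus level coefficient
    if c_focus > threshold:              # step 103: preset condition satisfied
        signal = eye_info["gaze_point"]  # derive the execution control signal from the gaze
        actuator.assist(signal)          # control the actuator to assist the user in motion
```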
In some embodiments, in the above step 101, the eye movement information of the user comes from an eye movement sensor 100. The eye movement sensor 100 can locate the pupil position by image processing technology, acquire its coordinates, and calculate the point at which the eye is fixed or gazing by a certain algorithm. In some embodiments, the eye movement sensor 100 uses a “non-invasive” technology based on VOG (video oculography). The basic principle is that a ray of light (near-infrared light) and a camera are directed at the subject's eye, and the direction of the subject's gaze is inferred from the light through back-end analysis, with the camera recording the interaction process. In addition to monitoring the gaze, the eye movement sensor can also provide other useful measurements, including pupil size and blink rate.
In some embodiments, in the above step 102, the user focus level coefficient is calculated by the following formula:
where Cfocus is the user focus level coefficient, tstart is the time when the motion target appears, tcurr is the current time, and feye is the sampling frequency of the eye movement information.
ffocus is the number of eye movement samples satisfying the condition |Peye−Ptarg|<|rtarg| during the period from tstart to tcurr, where Ptarg is the coordinate of the motion target, Peye is the coordinate of the gaze target acquired by mapping the sensed eye movement information of the user, and rtarg is a preset distance.
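The formula itself is not reproduced here; the definitions above suggest a ratio of on-target samples to the total samples expected in the window, i.e., Cfocus = ffocus/((tcurr − tstart)·feye). A minimal sketch under that assumption:

```python
import math

def focus_coefficient(gaze_points, target, r_targ, f_eye, t_start, t_curr):
    """Assumed reading of C_focus: the fraction of gaze samples P_eye lying
    within the preset distance r_targ of the target P_targ, out of the
    (t_curr - t_start) * f_eye samples expected between t_start and t_curr."""
    f_focus = sum(
        1
        for (x, y) in gaze_points
        if math.hypot(x - target[0], y - target[1]) < abs(r_targ)
    )
    total = (t_curr - t_start) * f_eye  # total samples in the period
    return f_focus / total if total > 0 else 0.0
```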
In some embodiments, the above step 103 comprises:
In some embodiments, the preset operating parameter comprises a preset target velocity parameter Vtarg and a preset target force parameter Ftarg. In the above step 103a, in a case where Cfocus > Ccons, where Ccons is a preset focus threshold under a constant assist mode, the user focus level coefficient is considered to satisfy the first preset condition, and the rehabilitation robot assists the user in moving to the target with the preset target velocity parameter Vtarg and the preset target force parameter Ftarg. The above-described step 103a can be defined as the constant assist mode.
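A minimal sketch of the constant assist mode described above; the command layout (a velocity/force pair) and the zero-assistance fallback are assumptions for illustration:

```python
def constant_assist_command(c_focus, c_cons, v_targ, f_targ):
    """Constant assist mode (step 103a): once C_focus exceeds the preset
    focus threshold C_cons, command the preset target velocity and force."""
    if c_focus > c_cons:                    # first preset condition satisfied
        return {"velocity": v_targ, "force": f_targ}
    return {"velocity": 0.0, "force": 0.0}  # no assistance until focus is reached
```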
In some embodiments, the above step 103 comprises:
In the above step 103A, the first operating parameter is a certain proportion of the preset operating parameter. In the above step 103B, in a case where the focus level coefficient of the user increases, the proportion of the first operating parameter relative to the preset operating parameter is adaptively increased based on the adaptive coefficient, until the preset operating parameter is reached. The above steps 103A and 103B adaptively adjust the operating parameters, and thus the exercise intensity, which effectively increases the motivation and participation level of users in training. The above-described steps 103A and 103B can be defined as an adaptive assist mode.
In the above step 103B, the adaptive coefficient μadap is calculated by the following formula:
where Cadpa is a preset focus threshold and ϕ is a focus scaling coefficient.
For example, according to the preset target velocity parameter Vtarg, the adaptive velocity parameter Vadap=μadapVtarg is calculated. According to the preset target force parameter Ftarg, the adaptive force parameter Fadap=μadapFtarg is calculated. The rehabilitation robot assists the user in motion with the adaptive velocity parameter Vadap and the adaptive force parameter Fadap.
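A minimal sketch of the adaptive assist mode; since the formula for μadap is not reproduced above, a linear ramp in the focus margin, ϕ(Cfocus − Cadpa), clipped to [0, 1], is assumed here for illustration:

```python
def adaptive_assist_command(c_focus, c_adpa, phi, v_targ, f_targ):
    """Adaptive assist mode (steps 103A/103B): scale the preset operating
    parameters by mu_adap; the clipped linear ramp is an assumption."""
    mu_adap = min(max(phi * (c_focus - c_adpa), 0.0), 1.0)
    return {
        "velocity": mu_adap * v_targ,  # V_adap = mu_adap * V_targ
        "force": mu_adap * f_targ,     # F_adap = mu_adap * F_targ
    }
```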
In some embodiments, the rehabilitation robot assisted motion method based on the active motion intention of the user, before acquiring the eye movement information of the user, further comprises:
The above step 103 further comprises:
In some embodiments, the above-described steps 1031, 1032, and 1033 can be defined as an adaptive assist mode.
In some embodiments, the initial operating parameter comprises an initial motion velocity Vinit and a target force parameter Ftarg, and the assistance parameter comprises an adaptive velocity parameter Vadap determined by the following formula:
where Cadpa is a preset focus threshold, ϕ is the focus scaling coefficient, and Vtarg is the preset target velocity parameter.
In some embodiments, in a case where the user's current operating parameter reaches or exceeds the merged operating parameter, the actuator is controlled with the target force parameter Ftarg, to increase the resistance to the user in motion.
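A minimal sketch of the comparison against the merged operating parameter, assuming scalar velocity parameters and the same command layout as above:

```python
def assist_or_resist(v_current, v_merged, v_adap, f_targ):
    """Assist while the user's current velocity falls short of the merged
    operating parameter; apply the target force F_targ as resistance once
    the merged operating parameter is reached or exceeded."""
    if v_current < v_merged:
        return {"velocity": v_adap, "force": 0.0}  # assist the user's movement
    return {"velocity": 0.0, "force": f_targ}      # resist to increase load
```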
The present disclosure further provides a rehabilitation robot assisted motion system based on an active motion intention of a user, comprising a signal acquisition module 10, a signal processing module 20, and a motion intention evaluation module 30. The signal acquisition module 10 is configured to acquire eye movement information of the user; the signal processing module 20 is configured to process the eye movement information to generate a user focus level coefficient; and the motion intention evaluation module 30 is configured to, in response to the user focus level coefficient satisfying a preset condition, generate, based on the eye movement information, an execution control signal for controlling a motion of the actuator 40.
In some embodiments, the actuator 40 is a driver, such as a motor or the like, capable of assisting the user in performing a rehabilitation exercise.
In some embodiments, the signal processing module 20 comprises a timing unit 21, a frequency acquisition unit 22, a coordinate acquisition unit 23, a counting unit 24, and a calculation unit 25. The timing unit 21 is configured to record the time tstart when the motion target appears and the current time tcurr. The frequency acquisition unit 22 is configured to acquire the sampling frequency feye at which the eye movement information of the user is acquired. The coordinate acquisition unit 23 is configured to acquire the coordinate Ptarg of the preset motion target and the coordinate Peye of the gaze target by mapping the sensed eye movement information of the user. The counting unit 24 is configured to count the number of times ffocus of eye movements that satisfy the condition |Peye−Ptarg|<|rtarg| within the time period from tstart to tcurr, where rtarg is a preset distance. The calculation unit 25 is configured to determine the user focus level coefficient Cfocus based on the following formula:
In some embodiments, the motion intention evaluation module 30 comprises a determination unit 31 and a control signal generation unit 32. The determination unit 31 is configured to determine whether the user focus level coefficient satisfies a first preset condition. In a case where the user focus level coefficient satisfies the first preset condition, the control signal generation unit 32 is configured to generate a first execution control signal based on the user focus level coefficient, for controlling the actuator to move according to a preset operating parameter.
In the above embodiment, when the user focus level coefficient satisfies the first preset condition, it can be determined that the user has reached a certain focus level for the motion target, and then the rehabilitation robot assists the user in reaching the motion target according to the preset operating parameter. On the contrary, if the user focus level coefficient does not satisfy the first preset condition, it means that the user has not reached the required focus level on the motion target, and the rehabilitation robot does not assist the user in motion.
In some embodiments, the determination unit 31 is further configured to determine whether the user focus level coefficient satisfies a second preset condition. In a case where the user focus level coefficient satisfies the second preset condition, the control signal generation unit 32 is configured to generate a second execution control signal based on the user focus level coefficient, for controlling the actuator to move according to a first operating parameter. Optionally, the intensity of the first operating parameter is less than that of the preset operating parameter; in other words, the first operating parameter is only a certain proportion of the preset operating parameter.
The motion intention evaluation module 30 further comprises an adaptive coefficient calculation unit 33. Within a preset period, in response to an increment of the user focus level coefficient over time, the adaptive coefficient calculation unit 33 is configured to determine an adaptive coefficient based on the increment of the user focus level coefficient. The control signal generation unit 32 is configured to adjust the first operating parameter based on the adaptive coefficient and the preset operating parameter. Optionally, as the user focus level coefficient gradually increases, the proportion of the first operating parameter in the preset operating parameter is increased based on the adaptive coefficient, until the first operating parameter is equal to the preset operating parameter.
With reference to
The state evaluation module 80 is further configured to compare the merged operating parameter with the user's current operating parameter. If the user's current operating parameter does not reach (is less than) the merged operating parameter, the actuator is controlled to assist the user's movement with a certain force (that is, controlled to operate with an assistance parameter); if the user's current operating parameter exceeds (is greater than) the merged operating parameter, the actuator is controlled to block the user's movement with a certain force.
With reference to
In some embodiments, the rehabilitation robot assisted motion system based on the active motion intention of the user further comprises a planning module 60. The planning module 60 is configured to process the training target, converting single or abstract training targets into fixed motions, force parameters, or the like that are executable by the rehabilitation robot.
In some embodiments, the rehabilitation robot assisted motion system based on the active motion intention of the user further comprises a target communication module 70, for converting the training target into a visual signal, an auditory signal, a tactile signal, or the like, to guide the user to perform training. For example, the target communication module 70 is a display capable of converting the training target into an image signal for display, or playing it back by sound.
It should be noted that all of the above-described embodiments can be freely combined as necessary. The above are only embodiments of the present disclosure, and it should be pointed out that, for those skilled in the art, several improvements and modifications can be made without departing from the principles of the present disclosure, and these improvements and modifications should also be regarded as falling within the scope of protection of the present disclosure.
Foreign Application Priority Data: Application No. 202311528072.9, filed Nov. 2023, China (national).
This application is a continuation of International Application PCT/CN2024/108142, filed on Jul. 29, 2024, which claims the benefit of Chinese Patent Application No. 202311528072.9, filed on Nov. 16, 2023, the contents of which are incorporated herein by reference in their entirety.
Related U.S. Application Data: Parent application PCT/CN2024/108142 (WO), filed Jul. 2024; child application No. 19023082 (US).