The present invention generally relates to hand function assessment. More specifically, the present invention relates to a machine vision-based method for determining the range of motion of the joints of a hand of a subject and a system for implementing the same.
The hand is one of the most complex and versatile anatomical structures of the human body. It serves a crucial function in interacting with the external environment and plays a significant role in human life. Functional disorders of the hand can limit individual independence and affect the quality of life. The determination of range of motion (ROM) is an essential component of hand function assessment. In clinical settings, hand joint angle measurement methods are commonly used to obtain the ROM of hand joints for the following reasons. Firstly, hand joint angles reflect hand function and motor ability. By measuring hand joint angles, one can understand the ROM, flexibility, and coordination of the hand joints, aiding in the assessment of hand function recovery and the extent of damage. Secondly, measuring hand joint angles can provide important evidence for clinical diagnosis and assist in rehabilitation therapy. Joint angle measurements help physicians quickly locate the site of lesions, determine the degree of damage, and formulate appropriate treatment plans. Lastly, in the fields of medicine and sports science, hand joint measurements can reveal the dynamic characteristics and movement patterns of hand motion. This is crucial for understanding the biomechanical features of hand motion, optimizing motor skills, and improving sports training techniques.
Medical professionals often use universal goniometers or inclinometers to manually measure the declination angles of finger joints to assess the joint movement range. However, one significant challenge in manual assessment is limited intra- and inter-rater reliability. Methods involving manual assessment are also inefficient and expensive. In other approaches, electronic wearable devices have been introduced to sense hand movement. However, one of the challenges in these approaches is that the wearable devices require physical contact with the fingers to achieve the best accuracy. Injuries such as burns, wounds, and lacerations, or even dermatological conditions, can make such contact-based measurement difficult.
Adopting optical measurement systems or computer vision-based approaches provides a non-contact form of measurement. For example, Chinese patent application publication no. CN106355598A discloses an automatic wrist and finger joint motion degree measurement method based on Kinect depth images. It is still desirable to have a more versatile vision-based hand joint angle measurement system which can measure hand joint angles for various hand movements and provide a more comprehensive hand function assessment.
According to one aspect of the present invention, a machine vision-based method for determining a range of motion of a joint of a hand of a subject is provided. The provided method comprises: facilitating the subject to perform a hand movement at a preset position; capturing at least two prior-movement images of the hand when the hand is in a neutral posture before performing the hand movement; capturing at least two post-movement images of the hand when the hand is in an assessment posture after performing the hand movement; processing the captured prior-movement images to obtain a plurality of prior-movement key point positions; processing the captured post-movement images to obtain a plurality of post-movement key point positions; and calculating the range of motion of the joint based on the plurality of prior-movement key point positions and the plurality of post-movement key point positions.
According to another aspect of the present invention, a machine vision-based system for determining a ROM of a joint of a hand of a subject is provided. The provided system comprises: a supporter configured to facilitate the subject to perform a hand movement at a preset position; at least two cameras configured to: capture at least two prior-movement images of the hand when the hand is in a neutral posture before performing the hand movement; and capture at least two post-movement images of the hand when the hand is in an assessment posture after performing the hand movement; and a processor configured to: process the captured prior-movement images to obtain a plurality of prior-movement key point positions; process the captured post-movement images to obtain a plurality of post-movement key point positions; and calculate the range of motion of the joint based on the plurality of prior-movement key point positions and the plurality of post-movement key point positions.
According to a further aspect of the present invention, a non-transitory computer-readable storage medium is provided to store a program including instructions for performing a machine vision-based method for determining a range of motion of a joint of a hand of a subject. The method comprises: facilitating the subject to perform a hand movement at a preset position; capturing at least two prior-movement images of the hand when the hand is in a neutral posture before performing the hand movement; capturing at least two post-movement images of the hand when the hand is in an assessment posture after performing the hand movement; processing the captured prior-movement images to obtain a plurality of prior-movement key point positions; processing the captured post-movement images to obtain a plurality of post-movement key point positions; and calculating the range of motion of the joint based on the plurality of prior-movement key point positions and the plurality of post-movement key point positions.
The present invention introduces an unconstrained and non-invasive machine vision-based approach, eliminating the need for direct hand contact. Unlike conventional manual measurement techniques, the present invention avoids human bias, reducing errors and producing more dependable results. The provided measurement method expedites hand joint angle measurements while concurrently assessing multiple hand joints. This efficiency not only saves time and resources but also facilitates comprehensive hand mobility assessments. It preserves the natural fluidity of movement by enabling unrestricted hand motion, setting it apart from wearables, gloves, or external devices. The machine vision technology closely mirrors real hand activity, enhancing measurement authenticity and accuracy.
The invention also offers timely rehabilitation progress feedback, aiding therapy efforts. Furthermore, its automated data processing generates detailed angle ranges, statistical insights, and visual representations, simplifying data interpretation and analysis for healthcare professionals. It also offers automated tracking of finger and upper limb movements, delivering precise measurements and insights into an individual's fine motor skills. This capability proves invaluable for therapists assessing patient recovery and evaluating fine motor skills in professions that demand such precision. Beyond its immediate applications, this advancement holds significant promise for research in biomechanics, neuroscience, and human-computer interaction.
Aspects of the present disclosure may be readily understood from the following detailed description with reference to the accompanying figures. The illustrations may not necessarily be drawn to scale. That is, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion. There may be distinctions between the artistic renditions in the present disclosure and the actual apparatus due to manufacturing processes and tolerances. Common reference numerals may be used throughout the drawings and the detailed description to indicate the same or similar components.
In the following description, preferred examples of the present invention will be set forth as embodiments which are to be regarded as illustrative rather than restrictive. Specific details may be omitted so as not to obscure the present disclosure; however, the disclosure is written to enable one skilled in the art to practice the teachings herein without undue experimentation.
Referring to
The hand supporter 101 is configured to support the hand of the subject to facilitate the subject to perform a hand movement at a preset position.
The two or more cameras 102 are configured to capture images of the hand from two or more perspectives respectively. More specifically, the cameras 102 may be configured to: capture prior-movement images of the hand when the hand is in a neutral posture before performing the hand movement; and capture post-movement images of the hand when the hand is in an assessment posture after performing the hand movement.
In some embodiments, each camera may be calibrated with respect to a three-dimensional space to obtain a respective camera projection matrix for reconstructing the 3D coordinates of joints from the 2D pixel coordinates in the image.
The camera projection matrix represents the projection of points in the 3D space onto the pixel coordinate system. The process essentially involves taking the global coordinates of a 3D point P (Xw, Yw, Zw), applying a rigid body transformation to convert them into the camera coordinate system, and mapping them through the pinhole imaging principle into the 2D pixel coordinates p(u, v) in the image. The mathematical expression for the transformation is as follows:
s[u, v, 1]^T = K[R t][Xw, Yw, Zw, 1]^T,
where K is the camera intrinsic matrix, R and t are the rotation matrix and the translation vector of the rigid body transformation, and s is a scale factor.
The camera projection matrix P is given by: P=K[R t].
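By way of illustration only, a minimal numerical sketch of this projection is given below; the intrinsic matrix K, rotation R, and translation t are placeholder values rather than parameters of any camera in the described system.

import numpy as np

# Illustrative intrinsic and extrinsic parameters (placeholder values only).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])      # intrinsic matrix: focal lengths and principal point
R = np.eye(3)                              # rotation of the rigid body transformation
t = np.array([[0.0], [0.0], [500.0]])      # translation of the rigid body transformation

P = K @ np.hstack((R, t))                  # camera projection matrix P = K[R t], shape 3 x 4

Pw = np.array([10.0, 20.0, 0.0, 1.0])      # homogeneous global coordinates (Xw, Yw, Zw, 1)
x = P @ Pw                                 # homogeneous pixel coordinates s * (u, v, 1)
u, v = x[0] / x[2], x[1] / x[2]            # divide out the scale factor s
print(u, v)                                # 2D pixel coordinates p(u, v)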
The camera calibration is then a task of determining the camera projection matrix P given the global coordinates (X, Y, Z) of some 3D points and their corresponding image pixel coordinates (u, v), which are denoted as (Xi, Yi, Zi) and (ui, vi) for i = 1, 2, …, n.
Each pair of 2D and 3D counterpart points provides two linear equations about the camera projection matrix P; that is, n pairs of 2D and 3D counterpart points provide 2n linear equations. When n is greater than or equal to 6, a direct linear transformation (DLT) can be used to solve this problem. Therefore, the camera projection matrix can be obtained by capturing an image containing at least 6 non-coplanar points with known 3D global coordinates and identifiable 2D pixel counterparts.
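A compact sketch of such a direct linear transformation is given below, assuming the global coordinates of at least six non-coplanar points and their pixel coordinates are already available; the function name and array layout are illustrative and not part of the claimed system.

import numpy as np

def calibrate_dlt(world_pts, pixel_pts):
    """Estimate the 3 x 4 camera projection matrix from n >= 6 non-coplanar
    3D points (world_pts, shape n x 3) and their 2D pixel coordinates
    (pixel_pts, shape n x 2) using the direct linear transformation."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, pixel_pts):
        # Each 2D-3D correspondence contributes two linear equations in the entries of P.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The least-squares solution is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    return P / P[-1, -1]   # fix the overall scale for readability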
The processor 103 is configured to process the captured images and calculate the range of motion of the joint of the hand. More specifically, the processor 103 may be configured to: process the captured prior-movement images to obtain a plurality of prior-movement key point positions; process the captured post-movement images to obtain a plurality of post-movement key point positions; and calculate the range of motion of the joint based on the plurality of prior-movement key point positions and the plurality of post-movement key point positions.
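The manner of detecting the 2D key points is not limited to any particular model. Purely as an illustration, and assuming an off-the-shelf 2D hand landmark detector such as MediaPipe Hands is acceptable, the key point extraction step might be sketched as follows; the function name and parameter choices are assumptions, not the claimed implementation.

import cv2
import mediapipe as mp

def detect_hand_keypoints_2d(image_bgr):
    """Return the 21 hand landmarks as (u, v) pixel coordinates, or None if no
    hand is detected. MediaPipe is used here only as an example detector; any
    2D hand key point model could serve the same role."""
    h, w = image_bgr.shape[:2]
    with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    landmarks = result.multi_hand_landmarks[0].landmark
    # Landmarks are normalized to [0, 1]; rescale to pixel coordinates.
    return [(lm.x * w, lm.y * h) for lm in landmarks]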
Preferably, the machine vision-based system 100 further comprises a display 104 configured to demonstrate the hand movement to the subject.
Referring to
Referring to
Similarly, the plurality of post-movement key point positions is obtained by mapping 2D positions of the plurality of sampling points of the hand in the post-movement images into the 3D space through the camera projection matrices to obtain a plurality of 3D post-movement key point positions; and projecting the plurality of 3D post-movement key point positions on the projection plane to obtain the plurality of post-movement key point positions.
Referring to
In practical computations, the projection lines of the two cameras might not exactly intersect. Therefore, for multi-view images captured by multiple cameras respectively, the 2D to 3D mapping can be formulated as an optimization problem of minimizing the reprojection error:
min_X Σ_{i=1}^{n} ‖P_i X − x_i‖²,
where P_i is the camera projection matrix of the i-th camera, x_i is the observed 2D pixel position of the key point in the i-th image (in homogeneous coordinates), and X is the 3D key point position to be estimated.
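A minimal sketch of this step is given below; it uses the standard linear (DLT-style) triangulation, which approximately minimizes the reprojection error above, and assumes the projection matrices P_i from the calibration step and one 2D observation x_i of the same key point per camera.

import numpy as np

def triangulate(proj_mats, pixel_pts):
    """Linear multi-view triangulation: recover the 3D point X from its 2D
    observations (u, v) in several cameras with projection matrices P_i.
    This standard DLT-style solution approximately minimizes the
    reprojection error of the expression above."""
    A = []
    for P, (u, v) in zip(proj_mats, pixel_pts):
        # u * (third row of P) - (first row of P) = 0, and likewise for v.
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenise to (Xw, Yw, Zw)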
In some embodiments, the hand movement is designed with respect to a kinematic model of the hand. By analyzing the anatomical model (as shown in
The metacarpophalangeal (MCP) joint 503, also known as the knuckle joint, is a condyloid joint, allowing flexion/extension and abduction/adduction movements with two DOF. However, in clinical assessments, based on the assumption that the abduction/adduction of the MCP joint of the thumb is associated with the abduction/adduction of the carpometacarpal (CMC) joint 504 of the thumb, only flexion and extension evaluations (one DOF) are usually performed for the MCP joint of the thumb.
The CMC joint 504 is the carpometacarpal joint of the thumb, also known as TM joint, and it is a saddle joint, enabling flexion, extension, abduction, adduction, and circumduction movements with two DOF. Therefore, in the present invention, a 26 DOF kinematic model for the hand as shown in
Movement A (as shown in
Movement B (as shown in
Movements C and D (as shown in
Movements E and F (as shown in
Movement G (as shown in
These specific movements are designed to assess the various ranges of motion of different joints in the hand, allowing therapists to evaluate the flexibility, stability, and functionality of each joint in the hand. This assessment is common in clinical practice, especially for patients undergoing hand rehabilitation after injuries, hand diseases, or hand surgeries. Such assessments can also be used in research involving patients with hand diseases or movement disorders to understand how different conditions affect the range of motion of hand joints, providing a more scientific basis for treatment and rehabilitation strategies.
Referring to
For example, for movements A, B and E, the plurality of 3D post-movement key point positions is projected on the plane perpendicular to the palm plane to obtain the plurality of post-movement key point positions for calculating the ROM. For movements C, D, F and G, the plurality of 3D post-movement key point positions is projected on the plane in parallel with the palm plane to obtain the plurality of post-movement key point positions for calculating the ROM.
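As an illustrative sketch of this projection and the subsequent angle calculation (the choice of key points and the palm-plane normal below are placeholders that depend on the movement being assessed), the ROM of a joint can be taken as the change in the projected inter-segment angle between the neutral posture and the assessment posture.

import numpy as np

def project_onto_plane(points, plane_normal):
    """Project 3D key points onto a plane with the given normal (e.g. the palm
    plane or a plane perpendicular to it). Only the normal direction matters
    for the subsequent angle computation, so a plane through the origin is used."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    points = np.asarray(points, dtype=float)
    return points - np.outer(points @ n, n)

def joint_angle(proximal, joint, distal):
    """Angle (in degrees) at `joint` between the two bone segments defined by
    the projected key points on either side of it."""
    v1 = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v2 = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# ROM of a joint: difference between the angle in the assessment posture and
# the angle in the neutral posture, each computed from the corresponding
# projected key points, e.g.:
# rom = abs(joint_angle(*post_pts) - joint_angle(*prior_pts))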
The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated. While the methods disclosed herein have been described with reference to particular operations performed in a particular order, it will be understood that these operations may be combined, sub-divided, or re-ordered to form an equivalent method without departing from the teachings of the present disclosure. Accordingly, unless specifically indicated herein, the order and grouping of the operations are not limitations. While the apparatuses disclosed herein have been described with reference to particular structures, shapes, materials, composition of matter and relationships, etc., these descriptions and illustrations are not limiting. Modifications may be made to adapt a particular situation to the objective, spirit and scope of the present disclosure. All such modifications are intended to be within the scope of the claims appended hereto.