The present disclosure relates to devices, systems, and methods for feature-based joint range of motion capture that can, for example, analyze features of one or more images including a joint and determine at least one associated range of motion metric.
Joint range of motion (ROM) can be an important metric used clinically as an indication of post-operative recovery following a surgical procedure, e.g., a joint replacement or arthroscopic repair surgery. Present options for measuring and monitoring post-operative ROM, however, can be restricted in frequency and accuracy, and can involve a time-consuming process for the patient. For example, for patients who undergo a knee or hip replacement or arthroscopic repair surgery (e.g., ligament or meniscus repair), measurements of joint ROM following the surgery can be an important indication of recovery progress or lack thereof. For the knee, ROM is most commonly assessed using a hand-held goniometer. In order to achieve a meaningful measurement, the operator of the goniometer must have strong knowledge of anatomy such that the goniometer can be properly aligned with anatomical landmarks. Accordingly, ROM measurements can be assessed only in the presence of a trained clinician. As such, assessing joint ROM can require a post-operative follow-up visit to a doctor or a physical therapy appointment, which can limit and constrain the frequency with which such measurements can be taken. Moreover, the accuracy of goniometers can be limited. For example, a short arm goniometer can have a standard deviation of approximately 9 degrees, while a long arm goniometer can have a standard deviation of approximately 5 degrees. Accuracy can also be subject to operator error, as a goniometer must be properly placed relative to patient anatomy to achieve a meaningful reading.
Another current option for assessing a joint ROM can include sending a patient to a dedicated motion lab for movement analysis to assess the joint ROM. As with goniometer measurements, this process can require making a post-operative appointment and can only be done in the presence of a trained operator or clinician with specialized equipment, such as an array of cameras to view the joint from different angles and specialty software to analyze the captured images. Accordingly, assessing joint ROM at a motion lab can also restrict the frequency with which such measurements can be made, can be time-consuming for the patient, and can be costly due to the significant specialty equipment utilized.
In both cases, measurements of joint ROM can be infrequent and irregular, which can result in sparse data being made available to an orthopedic surgeon or other medical professional for post-operative assessment. Given the limitations on joint ROM measurement, the available joint ROM data may not have sufficient quality or quantity to adequately reveal abnormal patterns in joint recovery, which can limit the clinical relevance of such an assessment as a useful indication of patient recovery. Moreover, known solutions for joint ROM assessment can be burdensome on a patient, requiring time and travel to a specific location to undergo the ROM measurement.
Accordingly, there is a need for improved systems, methods, and devices that provide an accessible way to measure joint ROM with greater frequency and accuracy for determining and monitoring post-operative patient recovery.
Feature-based joint range of motion capture systems and related methods are disclosed herein for increasing frequency, accuracy, and ease of assessing ROM of a joint of a patient, e.g., following a surgical procedure. Embodiments described herein can provide an accessible, low-cost solution for capturing joint ROM that can be used by patients at home on a daily basis, or as frequently as desired, without the need for a trained operator or clinician. For example, systems in accordance with the present disclosure can include a first pattern that can be coupled to a first portion of patient anatomy on a first side of a joint and a second pattern that can be coupled to a second portion of patient anatomy on a second side of the joint opposite the first. An image capture device, e.g., a mobile device such as a smartphone or a tablet, can capture at least one image including the joint, the first pattern, and the second pattern. Importantly, the patient can couple the first and second patterns to the appropriate patient anatomy and capture the at least one image without assistance from a trained professional and from almost any location. The at least one image can be transmitted to a processor, which can detect the first pattern and the second pattern in the image, calculate axes of the first and second portions of anatomy based on the detected patterns, calculate an angle between the axes in the at least one image, and calculate a range of motion metric based on the calculated angle between the axes. Accordingly, the systems and methods provided herein can provide for an increased frequency of accurate measurements of a joint ROM, without requiring patient appointments with, or travel to, a trained operator to administer the measurement.
In one aspect, a system for measuring joint range of motion can include a first pattern, a second pattern, an image sensor, and a processor. The first pattern can be configured to be coupled to a first portion of anatomy of a patient on a first side of a joint and the second pattern can be configured to be coupled to a second portion of patient anatomy on a second side of the joint opposite the first side. The image sensor can be configured to capture at least one image containing the joint, the first pattern, and the second pattern. The processor can be configured to recognize the first pattern and the second pattern in the at least one image, calculate axes of the first and second portions of anatomy to which the first and second patterns are coupled, calculate an angle between the axes, and calculate at least one range of motion metric of the joint based on the calculated angle between the axes in the at least one image.
The devices and methods described herein can have a number of additional features and/or variations, all of which are within the scope of the present disclosure. In some embodiments, for example, the image sensor and the processor can be contained within a smartphone or a tablet. The range of motion metric can be a full range of motion and the at least one image can include a first image where the joint is at a maximum extension and a second image where the joint is at maximum flexion. The processor can be configured to calculate the full range of motion as a difference between the angle between the axes when the joint is at maximum extension and the angle between the axes when the joint is at maximum flexion. In some embodiments, the range of motion metric can be a maximum extension angle and the at least one image can include an image where the joint is at maximum extension. The processor can be configured to calculate the maximum extension angle as the angle between the axes when the joint is at maximum extension. Alternatively, the range of motion metric can be a maximum flexion angle and the at least one image can include an image where the joint is at maximum flexion. The processor can be configured to calculate the maximum flexion angle based on the angle between the axes when the joint is at maximum flexion.
The first pattern can be coupled to a first elastic band configured to be placed over the first portion of anatomy and the second pattern can be coupled to a second elastic band configured to be placed over the second portion of anatomy. In other embodiments, the first pattern can be disposed on a proximal portion of an elastic sleeve and the second pattern can be disposed on a distal portion of the elastic sleeve, with the elastic sleeve configured to be placed over the joint, the first portion of anatomy, and the second portion of anatomy. In some instances, the first portion of anatomy can be a shin, the second portion of anatomy can be a thigh, and the joint can be a knee. Optionally, at least one of the first pattern and the second pattern can include at least one patient-specific marker and the processor can be further configured to identify the patient-specific marker with a particular patient. The processor can communicate with an associated database based on the identified patient.
In another aspect, a method for measuring joint range of motion can include coupling a first pattern to a first portion of anatomy of a patient on a first side of a joint, coupling a second pattern to a second portion of anatomy of the patient on a second side of the joint opposite the first side, and capturing at least one image containing the joint, the first pattern, and the second pattern. The method can include detecting the first pattern and the second pattern in the at least one image, calculating axes of the first and second portions of anatomy based on the detected patterns in the at least one image, calculating an angle between the axes in the at least one image, and calculating a range of motion metric based on the calculated angle between the axes in the at least one image.
The step of detecting the first and second patterns in the at least one image can be performed using a feature-based image recognition algorithm. A smartphone or a tablet can be used to capture the at least one image. The smartphone or tablet can be used to detect the first and second patterns in the at least one image, calculate the axes of the first and second portions of anatomy, calculate the angle between the axes, and calculate the range of motion metric. In some embodiments, capturing the at least one image can include capturing a video segment using the smartphone or tablet. In some embodiments, the first portion of anatomy can be a shin, the second portion of anatomy can be a thigh, and the joint can be a knee.
In some embodiments, capturing the at least one image can further include capturing a first image in which the joint is at maximum extension and a second image in which the joint is at maximum flexion. In some such embodiments, the range of motion metric can be a full range of motion and calculating the range of motion metric can further include calculating an angle between the axes in the first image, calculating an angle between the axes in the second image, and calculating the full range of motion as a difference between the angle between the axes when the joint is at maximum extension and the angle between the axes when the joint is at maximum flexion. In some embodiments, the at least one image can include an image in which the joint is at maximum extension and the range of motion metric can be a maximum extension. Calculating the maximum extension can include calculating the maximum extension as the angle between the axes when the joint is at maximum extension. The at least one image can include an image in which the joint is at maximum flexion and the range of motion metric can be a maximum flexion. Calculating the maximum flexion can include calculating the maximum flexion as the angle between the axes when the joint is at maximum flexion.
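By way of non-limiting illustration of the arithmetic just described, the following minimal sketch (written in Python for illustration only; the function name and angle values are hypothetical and not drawn from the disclosure) computes a full range of motion from the two per-image angles:

```python
def full_range_of_motion(extension_angle: float, flexion_angle: float) -> float:
    """Full ROM as the difference between the angle between the axes at
    maximum extension and the angle between the axes at maximum flexion."""
    return abs(extension_angle - flexion_angle)

# Hypothetical values: thigh and shin axes nearly collinear at maximum
# extension (~175 degrees), closing to ~65 degrees at maximum flexion.
print(full_range_of_motion(175.0, 65.0))  # 110.0
```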
In another aspect, a system for capturing a joint range of motion can include one or more processors of a range of motion platform on a network. The one or more processors can be configured to receive at least one image taken with a smartphone or tablet, the image including a joint, a first pattern coupled to a first portion of anatomy on a first side of the joint, and a second pattern coupled to a second portion of anatomy on a second side of the joint opposite the first side. The one or more processors can detect the first pattern and the second pattern in the at least one image, calculate axes of the first and second portions of anatomy based on the detected patterns in the at least one image, calculate an angle between the axes in the at least one image, and calculate a range of motion metric based on the calculated angle between the axes in the at least one image.
In some embodiments, the range of motion metric can be a full range of motion and the at least one image can include a first image where the joint is at a maximum extension and a second image where the joint is at maximum flexion. The one or more processors can calculate the full range of motion as a difference between the angle between the axes when the joint is at maximum extension and the angle between the axes when the joint is at maximum flexion. The first pattern and the second pattern can include at least one patient-specific marker. In some embodiments, the one or more processors can identify the patient-specific marker with a particular patient and communicate with an associated database based on the identified particular patient.
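The disclosure does not prescribe a particular server implementation for such a range of motion platform. As one hypothetical sketch, assuming a simple HTTP upload interface (the route name, form field name, and `analyze_image` helper below are illustrative assumptions, not the disclosed platform), the one or more processors could receive images from the smartphone or tablet as follows:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def analyze_image(image_bytes: bytes) -> float:
    """Hypothetical stand-in for the pattern detection, axis fitting,
    and angle calculation steps described elsewhere in this disclosure."""
    raise NotImplementedError

@app.route("/rom", methods=["POST"])
def rom_endpoint():
    # The smartphone or tablet uploads the captured image; the platform
    # detects the patterns and returns the calculated axis angle.
    image_bytes = request.files["image"].read()
    return jsonify({"axis_angle_degrees": analyze_image(image_bytes)})

if __name__ == "__main__":
    app.run()
```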
Any of the features or variations described above can be applied to any particular aspect or embodiment of the present disclosure in a number of different combinations. The absence of explicit recitation of any particular combination is due solely to the avoidance of repetition in this summary.
Feature-based joint range of motion (ROM) capture systems and related methods are disclosed herein that can improve the quality and quantity of joint ROM data to allow for more accurate and effective assessment of a post-operative condition of a patient, e.g., by providing systems and methods that a patient can easily use from home with inexpensive and accessible hardware to capture relevant data and assess one or more range of motion metrics based on the captured data.
Joint ROM systems of the present disclosure can include a first pattern, a second pattern, an image sensor, and a processor. The first and second patterns can include one or more elements, such as geometric shapes, images, lettering, etc., such that features in the pattern can be detected in an image to locate the pattern, e.g., with a feature-based image recognition algorithm. The first pattern can be coupled to a first portion of patient anatomy on a first side of a joint and the second pattern can be coupled to a second portion of patient anatomy on a second side of the joint opposite the first side. The first pattern and the second pattern can be provided in a manner that can allow a patient to securely and removably couple the first and second patterns to the first and second portions of anatomy, respectively, i.e., on opposite sides of a joint. The image sensor can capture at least one image containing the joint, the first pattern coupled to the first portion of anatomy, and the second pattern coupled to the second portion of anatomy. The image sensor can be included in a mobile device, such as a smartphone or a tablet, that can capture the at least one image without requiring professional or specialized instruments or assistance. In other words, the patient can self-sufficiently capture the at least one image containing the joint, the first pattern coupled to the first portion of anatomy, and the second pattern coupled to the second portion of anatomy. The image can be transmitted to the processor, which can be locally accessible (e.g., the processor of the mobile device that includes the image sensor) or remotely accessible over a network (e.g., a processor included in a server accessible to the mobile device via a network connection). The processor can recognize the first pattern and the second pattern in the frame of the image, e.g., using the feature-based recognition algorithm, calculate axes of the first and second portions of anatomy to which the first and second patterns are coupled in the image, and calculate an angle between the axes. At least one range of motion metric of the joint can be calculated based, at least in part, on the calculated angle between the axes in the at least one image. As used herein, a range of motion metric can include an angle of maximum extension of a joint, an angle of maximum flexion of a joint, and/or a full range of motion of a joint, which can be calculated as a difference between the angles of maximum extension and maximum flexion, each of which can be used by a surgeon or other medical professional to assess a recovery and/or functioning of a joint. For example, in some instances a surgeon may be interested primarily in the maximum flexion angle of a joint, while in other cases the surgeon may wish to determine the full range of motion of a joint. Accordingly, the systems and related methods disclosed herein can enable the patient to capture joint ROM data with additional frequency and flexibility, e.g., from the patient's home, without requiring assistance of a trained professional or travel to a specialized location.
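As a concrete, non-limiting sketch of the angle calculation referenced above (Python; the axis direction vectors shown are illustrative assumptions, e.g., as might be derived from detected pattern features in image coordinates):

```python
import math

def angle_between(axis1: tuple, axis2: tuple) -> float:
    """Unsigned angle, in degrees, between two 2D axis direction
    vectors (dx, dy) expressed in image coordinates."""
    dot = axis1[0] * axis2[0] + axis1[1] * axis2[1]
    norms = math.hypot(*axis1) * math.hypot(*axis2)
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

# Illustrative axis directions for a thigh and a shin in one image.
print(round(angle_between((0.95, 0.31), (0.17, 0.99)), 1))  # ~62.2
```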
Certain exemplary embodiments will now be described to provide an overall understanding of the principles of the structure, function, manufacture, and use of the devices, systems, and methods disclosed herein. One or more examples of these embodiments are illustrated in the accompanying drawings. The devices, systems, and methods specifically described herein and illustrated in the accompanying drawings are non-limiting embodiments. The features illustrated or described in connection with one embodiment may be combined with the features of other embodiments. Such modifications and variations are intended to be included within the scope of the present disclosure.
Additionally, to the extent that linear or circular dimensions are used in the description of the disclosed devices and methods, such dimensions are not intended to limit the types of shapes that can be used in conjunction with such devices and methods. Equivalents to such linear and circular dimensions can be determined for different geometric shapes. Further, in the present disclosure, like-numbered components of the embodiments generally have similar features. Still further, sizes and shapes of the devices, and the components thereof, can depend at least on the anatomy of the subject in which the devices will be used, the size and shape of objects with which the devices will be used, and the methods and procedures in which the devices will be used. To the extent features, sides, components, steps, or the like are described as being “first,” “second,” “third,” etc. such numerical ordering is generally arbitrary, and thus such numbering can be interchangeable.
In the illustrated embodiment of
As used herein, “opposite sides” of a joint can refer to a first location and a second location, with the joint falling between the first and second locations, such that there is relative motion of the first and second locations upon flexing or extending the joint. For example, where the joint 112 is a knee, as illustrated in
The image capture device 104 can be used to take at least one image containing the first pattern 108 coupled to the first portion 114 of anatomy, the second pattern 110 coupled to the second portion 116 of anatomy, and the joint 112. The phrase “an image” or “the image,” as used herein in conjunction with embodiments of the present disclosure, refers to an image captured with the image capture device 104 containing at least the first pattern 108 coupled to the first portion 114 of patient anatomy on the first side of the joint 112, the second pattern 110 coupled to the second portion 116 of the patient anatomy on the second side of the joint opposite the first side, and the joint. The image capture device 104 can be any device with photo and/or video capture capabilities. In some embodiments, the image capture device 104 can transmit the at least one image to the ROM analyzer 106 for further analysis. By way of non-limiting example, the image capture device 104 can be a mobile device, such as a smartphone or tablet, a laptop, a traditional computer, etc. The at least one image taken by the image capture device 104 can include one or more picture images, one or more video segments, or a combination of the two.
The image capture device 104 can be placed such that the first pattern 108, the second pattern 110, and the joint 112 fall within a viewing range 118 of the image capture device for image capture. In some embodiments, the image capture device 104 can be held by a person at an appropriate distance to place at least the first pattern 108, the second pattern 110, and the joint 112 within the viewing range 118. As noted above, the systems and methods described herein do not require specialized training or assistance to capture an image that can be used to determine at least one range of motion metric. Accordingly, a person holding the image capture device 104 can be, for example, a friend, relative, or caregiver of the patient 101 who does not need to possess any formal training or skills in the medical or imaging fields. Alternatively, the image capture device 104 can be mounted on a tripod 105 (see
In use, the ROM analyzer 106 can receive at least one image 100′ (see
The processor 120 can use a feature-based image recognition algorithm, e.g., from the Computer Vision Toolbox™ by MATLAB®, to detect and locate the first and second patterns 108′, 110′ within the frame of the captured image 100′. As an example, the features representing the object can be derived using the Speeded-Up Robust Features (SURF) algorithm. Once the object features are determined, the object can be detected within the captured image 100′ using, for example, the Random Sample Consensus (RANSAC) algorithm. Each detected pattern 108′, 110′ can be analyzed using, for example, a pattern-recognition algorithm, which can locate centroids of one or more known shapes or elements within the patterns 108, 110. The centroids, or other detected features, can then be used to calculate an orientation of the longitudinal axes A1, A2 of the respective portions 114′, 116′ of the patient anatomy. The angle α1 between the axes A1, A2 can be calculated, which can represent an angle associated with the joint 112 in a position as captured in the image 100′. A range of motion metric, such as a maximum flexion angle of the joint 112, a maximum extension angle of the joint, and/or a full range of motion, can be determined based on the calculated angle between the longitudinal axes of the first and second portions of patient anatomy.
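The disclosure references SURF and RANSAC as implemented in, e.g., MATLAB's Computer Vision Toolbox. The following Python sketch illustrates the same feature-based detect-and-locate step using OpenCV, with ORB substituted for SURF (SURF resides in OpenCV's contrib xfeatures2d module and is not available in default builds); the axis heuristic at the end is an illustrative assumption rather than the disclosed centroid-based method:

```python
import cv2
import numpy as np

def locate_pattern(reference: np.ndarray, scene: np.ndarray) -> np.ndarray:
    """Locate a known pattern in a captured frame via local features.
    ORB stands in for the SURF features named in the disclosure, and
    RANSAC rejects outlier matches while estimating the pattern's
    homography. Returns the pattern's four corners in scene coordinates."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)
    kp_scene, des_scene = orb.detectAndCompute(scene, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_ref, des_scene)

    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_scene[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = reference.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(corners, homography).reshape(-1, 2)

def segment_axis(corners: np.ndarray) -> np.ndarray:
    """Illustrative heuristic: assuming the pattern is worn with its long
    dimension along the limb, take the vector between the midpoints of
    its two short edges as the segment's longitudinal axis."""
    return (corners[2] + corners[3]) / 2.0 - (corners[0] + corners[1]) / 2.0
```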
The ROM analyzer 106 can use the calculated angle α1 to determine a range of motion metric of the joint 112, such as the maximum flexion angle α1Flex, a maximum extension angle α1Ext, and/or a full range of motion of the joint. The maximum flexion angle α1Flex can be identified by calculating the angle α1 between the first portion 114 and the second portion 116 from the image 100′ captured when the joint 112 is at maximum flexion, e.g., as shown in
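Continuing the illustrative Python sketches above, and assuming the angle α1 has been calculated for each frame of a captured video segment (with an axis orientation convention, assumed here, under which the angle between the axes is smallest at maximum flexion), the three metrics can be reduced as follows:

```python
def rom_metrics(per_frame_angles: list) -> dict:
    """Reduce per-frame axis angles from a flexion/extension video
    (or from a pair of still images) to the maximum flexion angle,
    maximum extension angle, and full range of motion."""
    max_flexion = min(per_frame_angles)      # smallest angle: max flexion
    max_extension = max(per_frame_angles)    # largest angle: max extension
    return {
        "max_flexion_angle": max_flexion,
        "max_extension_angle": max_extension,
        "full_range_of_motion": max_extension - max_flexion,
    }
```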
As introduced above,
The patient-wearable device 202 can include an information fiducial 216 disposed on the sleeve 204. By way of non-limiting example, the information fiducial 216 can be a two-dimensional barcode, e.g., a QR (quick response) code, that can include a unique product information number to identify the particular patient-wearable device 202. In some embodiments, such as the embodiment illustrated in
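By way of non-limiting illustration, a 2D barcode information fiducial such as a QR code can be decoded with widely available tooling; the sketch below uses OpenCV's built-in QR detector (the payload format, e.g., a product information number, is an assumption):

```python
import cv2

def read_information_fiducial(image_path: str):
    """Decode a QR-code information fiducial from a captured image so
    that the wearable device, and hence the patient, can be identified.
    Returns the decoded payload string, or None if no code is found."""
    detector = cv2.QRCodeDetector()
    payload, points, _ = detector.detectAndDecode(cv2.imread(image_path))
    return payload or None
```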
As can be seen from the various embodiments of patient-wearable devices 102, 202, 202′ illustrated herein, first and second patterns of the present disclosure can have varying configurations, both with respect to one another (i.e., a first pattern and a second pattern associated with a particular patient-wearable device can be distinct from one another) and/or across patient-wearable devices (i.e., a first pattern of a first wearable device can have a different configuration than a first pattern of a second wearable device). The systems and methods disclosed herein can use a feature-based approach to detect and locate the first and second patterns within an image. Features that can be recognized in an image to locate patterns within an image can include, for example, points, edges, objects, shapes defined in terms of curves or boundaries between different image regions, etc. Accordingly, the first and second patterns 108, 110 can be designed based, at least in part, on striking a balance between a number of recognizable features, space constraints on the patient-wearable device, and/or cost of producing the pattern(s). Patterns with certain elements or features, such as sharp edges and/or high contrast, can improve the effectiveness and ease with which the pattern can be identified with the feature-based algorithm.
In the embodiment illustrated in
In some embodiments, a pattern can include one or more abstract elements, pictures, or symbols. For example, a pattern 312 can include an abstract design that can have a desired aesthetic and/or style, while incorporating certain features recognizable by the ROM analyzer 106. A logo or other branding mark can be used as a stand-alone pattern 312, e.g., a DePuy Synthes Companies logo, or can be incorporated among other elements to form a pattern 314, e.g., inclusion of a Johnson & Johnson logo with additional elements to form the pattern. As discussed above, an identification fiducial 316 can be used as a pattern, such as a two-dimensional (2D) barcode. The identification fiducial 316 can contain information particular to the specific wearable device and/or the patient to whom the wearable device is given. For example, the identification fiducial can include unique identification data specific to a particular wearable device. In this manner, the identification fiducial can be scanned, for example by a healthcare professional at the time a wearable device is given to a patient, to link the device to a profile or identification number of the patient in a digital health platform. Accordingly, in some embodiments, the ROM analyzer 106 can associate one or more ROM metrics calculated from an image of a particular wearable device having the identification fiducial 316 with the appropriate patient. Finally,
The feature-based approach to image recognition can be more robust as compared to template matching or image cross-correlation. For example, the feature-based approach can have improved occlusion tolerance, as compared to other image recognition constructs, and can be invariant to scale and/or rotation of an image.
Although specific embodiments are described above, changes may be made within the spirit and scope of the concepts described. For example, the above embodiments describe a knee joint range-of-motion application. While this is one contemplated use, the methods and devices of the present disclosure can be equally adapted for use in other areas of a patient's body, e.g., an elbow joint, wrist joint, ankle joint, etc. As such, the devices described herein can be formed in a variety of sizes and materials appropriate for use in various areas of a patient's body. Accordingly, it is intended that this disclosure not be limited to the described embodiments, but that it have the full scope defined by the language of the claims. All publications and references cited herein are expressly incorporated herein by reference in their entirety.
This application claims the benefit of U.S. Provisional Application No. 62/899,876, filed Sep. 13, 2019, and entitled “Feature-Based Joint Range of Motion Capturing System,” which is hereby incorporated by reference in its entirety.