This application relates to the field of bone mineral density (BMD) estimation and, more particularly, relates to a method, an electronic device, and a computer program product for estimating BMD from plain film hip X-ray images for osteoporosis screening.
Osteoporosis is a common skeletal disorder characterized by decreased bone mineral density (BMD) and deteriorated bone strength, leading to an increased risk of fragility fracture. Fragility fractures of all types affect the elderly with multiple morbidities, reduced quality of life, increased dependence, and increased mortality. A fracture risk assessment tool, FRAX, has been clinically relied on for assessing bone fracture risks by integrating clinical risk factors and BMD. While some clinical risk factors such as age, gender, and body mass index (BMI) can be obtained from electronic medical records, the current gold standard for measuring BMD is dual-energy X-ray absorptiometry (DEXA). However, due to the limited availability of DEXA devices, especially in developing countries, osteoporosis is often under-diagnosed and under-treated. Other methods aim to use imaging obtained for other indications, such as CT scans; however, CT scans involve a higher radiation dose, longer acquisition times, and higher costs. Therefore, alternative BMD evaluation protocols using more accessible medical imaging examinations, e.g., X-ray plain films, can provide a lower-cost imaging tool for osteoporosis screening.
One aspect of the present disclosure provides a method for estimating bone mineral density (BMD). The method includes obtaining an image and cropping one or more regions-of-interest (ROIs) in the image, taking the one or more ROIs as input to a network model for estimating BMDs, training the network model on the labeled one or more ROIs with one or more loss functions to obtain a pre-trained model in a supervised pre-training stage, and fine-tuning the pre-trained model on a first plurality of data representing the labeled one or more ROIs and a second plurality of data representing unlabeled regions to determine a fine-tuned network model for estimating BMDs in a semi-supervised self-training stage. The one or more loss functions include a specific adaptive triplet loss (ATL) configured to encourage distances between one or more feature embedding vectors to correlate with differences among the BMDs.
Another aspect of the present disclosure provides an electronic device for estimating bone mineral density (BMD). The electronic device includes a memory for storing a computer program and a processor coupled to the memory. When the computer program is executed, the computer program causes the processor to obtain an image and crop one or more regions-of-interest (ROIs) in the image, take the one or more ROIs as input to a network model for estimating BMDs, train the network model on the labeled one or more ROIs with one or more loss functions to obtain a pre-trained model in a supervised pre-training stage, and fine-tune the pre-trained model on a first plurality of data representing the labeled one or more ROIs and a second plurality of data representing unlabeled regions to determine a fine-tuned network model for estimating BMDs in a semi-supervised self-training stage. The one or more loss functions include a specific adaptive triplet loss (ATL) configured to encourage distances between one or more feature embedding vectors to correlate with differences among the BMDs.
Another aspect of the present disclosure provides a computer program product for estimating bone mineral density (BMD). The computer program product includes a non-transitory computer-readable storage medium and program instructions. When executed, the program instructions cause a computer to obtain an image and crop one or more regions-of-interest (ROIs) in the image, take the one or more ROIs as input to a network model for estimating BMDs, train the network model on the labeled one or more ROIs with one or more loss functions to obtain a pre-trained model in a supervised pre-training stage, and fine-tune the pre-trained model on a first plurality of data representing the labeled one or more ROIs and a second plurality of data representing unlabeled regions to determine a fine-tuned network model for estimating BMDs in a semi-supervised self-training stage. The one or more loss functions include a specific adaptive triplet loss (ATL) configured to encourage distances between one or more feature embedding vectors to correlate with differences among the BMDs.
Other aspects of the present disclosure can be understood by those skilled in the art in light of the description, the claims, and the drawings of the present disclosure.
The following describes the technical solutions in the embodiments of the present disclosure with reference to the accompanying drawings. Wherever possible, the same reference numbers are used throughout the drawings to refer to the same or like parts. The described embodiments are merely some, but not all, of the embodiments of the present disclosure. Other embodiments obtained by a person skilled in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure. Certain terms used in this disclosure are first explained in the following.
Various embodiments provide a method, an electronic device, and a computer program product for estimating BMD from plain film hip X-ray images for osteoporosis screening. The various embodiments are based on the assumption that hip X-ray images contain sufficient visual cues for BMD estimation.
As used herein, the term “hip X-ray” refers to X-ray imaging results and/or X-ray examinations that can help to detect bone cysts, tumors, infection of the hip joint, or other diseases in the bones of the hips.
In some embodiments, a convolutional neural network (CNN) architecture is implemented for regressing BMD from hip X-ray images. For example, paired hip X-ray images and DEXA-measured BMDs are collected as labeled data for supervised regression learning. In some embodiments, the hip X-ray image and the DEXA measurement are taken within six months of each other. However, it can be difficult to obtain a large number of hip X-ray images paired with DEXA-measured BMDs.
A semi-supervised learning method may be implemented to exploit large-scale hip X-ray images without ground-truth BMDs. This makes image collection easier than pairing hip X-ray images with DEXA-measured BMDs. Due to the continuity of BMD values, the model can be formulated as a regression model. In some embodiments, to improve regression accuracy, a novel adaptive triplet loss (ATL) may be implemented such that the model can better distinguish samples with dissimilar BMDs in a feature space.
According to the embodiments of the present disclosure, training a model for estimating BMDs includes a supervised pre-training stage and a semi-supervised self-training stage.
In the self-training stage, the model may be fine-tuned on labeled data and pseudo-labeled data. The pseudo labels may be updated when the model achieves higher performance on the validation set.
Step 301: Obtaining a hip X-ray image and cropping one or more regions-of-interest (ROIs) around the femoral neck to take the one or more ROIs as input to a convolutional neural network (CNN).
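For illustration of step 301, the following is a minimal Python sketch of ROI cropping, assuming the femoral-neck center has already been localized (e.g., by a separate landmark detector). The function name, the 256-pixel crop size, and the coordinate convention are assumptions for illustration, not part of the disclosed embodiments:

```python
import numpy as np

def crop_hip_roi(image: np.ndarray, center_xy, size=256):
    """Crop a square ROI around the femoral neck.

    image: 2-D grayscale hip X-ray array.
    center_xy: (x, y) femoral-neck center, e.g., from a landmark detector.
    size: side length of the square crop in pixels (illustrative value).
    """
    x, y = int(center_xy[0]), int(center_xy[1])
    half = size // 2
    # Clamp the crop window to the image boundary.
    x0, y0 = max(x - half, 0), max(y - half, 0)
    x1 = min(x0 + size, image.shape[1])
    y1 = min(y0 + size, image.shape[0])
    return image[y0:y1, x0:x1]
```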
Step 302: Obtaining one or more embedding feature vectors representing the labeled one or more ROIs by replacing the last two fully-connected (FC) layers of a backbone with a global average pooling (GAP) layer.
In some embodiments, VGG-11 may be used as the backbone. In one example, VGG-11 with batch normalization and a squeeze-and-excitation (SE) layer may be adopted as the backbone. The VGG-11 with batch normalization and the SE layer may outperform other VGG networks and ResNets. The last two fully-connected (FC) layers of VGG-11 may be replaced by a global average pooling (GAP) layer such that one or more embedding feature vectors may be obtained. In one example, the embedding feature vector is a 512-dimensional vector.
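A minimal PyTorch sketch of such a backbone is shown below. The GAP head and the 512-dimensional embedding follow the description above; the placement of the SE layer, its reduction ratio, and the single-linear regression head are assumptions for illustration:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg11_bn

class SEBlock(nn.Module):
    """Squeeze-and-excitation over the channel dimension (reduction ratio assumed)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # squeeze: global average over H, W
        return x * w[:, :, None, None]       # excite: channel-wise re-weighting

class BMDNet(nn.Module):
    """VGG-11 (batch norm) backbone with SE, GAP in place of the FC layers."""
    def __init__(self):
        super().__init__()
        # Grayscale X-rays are assumed replicated to 3 channels before input.
        self.features = vgg11_bn(pretrained=True).features
        self.se = SEBlock(512)
        self.gap = nn.AdaptiveAvgPool2d(1)   # GAP replaces the FC layers
        self.regressor = nn.Linear(512, 1)   # scalar BMD regression head

    def forward(self, x):
        f = self.gap(self.se(self.features(x))).flatten(1)  # 512-d embedding
        return self.regressor(f).squeeze(1), f  # predicted BMD and embedding
```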
Step 303: Training the network model on the labeled one or more ROIs with one or more loss functions to obtain a pre-trained model in a supervised pre-training stage, the network model including the one or more embedding feature vectors.
After the one or more embedding feature vectors representing the labeled ROI image are obtained, one or more loss functions may be implemented to train a model on the one or more embedding feature vectors.
Step 304: Fine-tuning the trained model on a first plurality of data representing the labeled ROI image and a second plurality of data representing unlabeled regions.
Step 401: Determining a mean square error (MSE) loss between an estimated BMD and a ground-truth (GT) BMD.
After the one or more embedding feature vectors representing the labeled ROI image are obtained, one or more loss functions may be implemented to train a model on the one or more embedding feature vectors. In some embodiments, in the supervised pre-training stage, the loss functions used for training on labeled ROI images may be, for example, a mean square error (MSE) loss and an adaptive triplet loss.
The MSE loss may be determined by:
L_mse = (y′ − y)^2,  (1)
where y′ denotes a predicted BMD, y denotes a GT BMD, and L_mse denotes the MSE loss.
According to formula (1), the MSE loss decreases as the predicted value y′ approaches the GT value y; minimizing the MSE loss therefore drives the network model toward accurate regression.
According to the embodiments of the present disclosure, BMD is a continuous value, and embeddings of the hip ROIs are accordingly continuous in the feature space. In some embodiments, the distance between the embeddings of two samples in the feature space can be correlated with their BMD discrepancy. Based on this characteristic, a novel ATL can be formulated to discriminate samples with different BMDs in the feature space.
Step 402: Determining an adaptive triplet loss (ATL) for discriminating multiple samples having different BMDs in a feature space.
For a triplet including an anchor sample, a near sample whose BMD is close to that of the anchor, and a far sample whose BMD is dissimilar to that of the anchor, the ATL encourages the following constraint to be satisfied:
‖F_a − F_n‖_2^2 + m < ‖F_a − F_f‖_2^2,  (2)
where F_a, F_n, and F_f are the embeddings of the anchor sample, the near sample, and the far sample, respectively, and m represents a margin that separates the near sample from the far sample. The margin accounts for the relative BMD differences between the near sample and the far sample. As such, the ATL encourages the distances between feature embeddings of samples to correlate with their BMD differences.
Therefore, the ATL may be defined as:
L_triplet = [‖F_a − F_n‖_2^2 − ‖F_a − F_f‖_2^2 + α·m]_+,  (3)
where [·]_+ denotes the hinge function max(0, ·), and α is the adaptive coefficient based on the BMD differences, which can be defined by:
α = ‖y_a − y_f‖_2^2 − ‖y_a − y_n‖_2^2,  (4)
where y_a, y_n, and y_f are the GT BMD values of the anchor, near, and far samples, respectively. A larger BMD gap between the far sample and the anchor thus enforces a larger effective margin α·m.
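A minimal PyTorch sketch of Equations (3) and (4) is shown below. How triplets are mined within a batch is not specified above and is left to the caller; the function name and batch-mean reduction are assumptions:

```python
import torch

def adaptive_triplet_loss(f_a, f_n, f_f, y_a, y_n, y_f, m=0.5):
    """Adaptive triplet loss of Equation (3).

    f_a, f_n, f_f: embeddings of anchor, near, and far samples, shape (B, 512).
    y_a, y_n, y_f: corresponding GT BMD values, shape (B,).
    m: preset margin; 0.5 gave the best results in the ablation study below.
    """
    d_near = (f_a - f_n).pow(2).sum(dim=1)              # ||F_a - F_n||_2^2
    d_far = (f_a - f_f).pow(2).sum(dim=1)               # ||F_a - F_f||_2^2
    alpha = (y_a - y_f).pow(2) - (y_a - y_n).pow(2)     # adaptive coefficient, Eq. (4)
    return torch.clamp(d_near - d_far + alpha * m, min=0).mean()  # [.]_+ hinge
```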
Step 403: Combining the MSE loss with the ATL.
For network training, the MSE loss may be combined with the ATL, with a weight applied to balance the two terms. For example, the combined loss may be determined by:
L = L_mse + λ·L_triplet,  (5)
where λ represents a weight for the ATL. For example, λ can be 0.5 according to various embodiments of the present disclosure.
Step 404: Training the network model on the one or more embedding feature vectors with the combined MSE loss and the ATL.
The combined MSE loss and ATL may be used to train a network model on the one or more embedding feature vectors corresponding to the labeled ROI image. Because of the ATL, the trained model may learn more discriminative feature embeddings for images with different BMDs, thus improving the regression accuracy of the network.
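As a brief illustration, Equation (5) may be sketched as follows, reusing the adaptive_triplet_loss sketch above; the function name and argument layout are assumptions:

```python
import torch.nn.functional as F

def pretraining_loss(pred, gt, f_a, f_n, f_f, y_a, y_n, y_f, lam=0.5):
    """Combined supervised loss of Equation (5): MSE plus lambda-weighted ATL."""
    return F.mse_loss(pred, gt) + lam * adaptive_triplet_loss(
        f_a, f_n, f_f, y_a, y_n, y_f)
```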
When only limited images paired with GT BMDs are available, a network model can easily overfit the training data and yield poor performance on unseen test data. To overcome this barrier, a semi-supervised self-training algorithm can be implemented to leverage both labeled and unlabeled data, boosting the BMD estimation accuracy by exploiting unlabeled hip X-ray images. In one exemplary implementation, 1,090 hip X-ray images with associated DEXA-measured BMD values may be collected from 819 patients, together with 8,219 unlabeled hip X-ray images.
Step 601: Using the obtained pre-trained model to estimate pseudo GT BMDs on unlabeled images to obtain additional supervision.
The pre-trained model obtained from step 404 may be used to estimate pseudo GT BMDs on unlabeled images to obtain additional supervision. The model may then be fine-tuned on two groups of data: a first plurality of data representing the labeled ROI images and a second plurality of data representing unlabeled regions. Accordingly, the trained model can predict pseudo GT BMDs for the unlabeled images, such that the unlabeled images with pseudo GT BMDs can subsequently be combined with labeled images to fine-tune the model.
Step 602: Combining the unlabeled images having pseudo GT BMDs with labeled images to fine-tune the network model.
The unlabeled images with pseudo GT BMDs may be combined with labeled images to fine-tune the network model. To improve the quality of the estimated pseudo GT BMDs, the present disclosure provides a method for fine-tuning the network model. According to various embodiments of the present disclosure, a fine-tuned model can achieve higher performance on a validation set than the network model without fine-tuning, and can therefore produce more accurate and more reliable pseudo GT BMDs for the unlabeled images.
Step 603: Evaluating the network model performance on a validation set by determining a Pearson correlation coefficient and the MSE.
Two metrics may be implemented for evaluating the proposed method and all compared methods: the Pearson correlation coefficient (R-value) and the MSE or root mean square error (RMSE). In some embodiments, after each self-training stage, model performance on the validation set may be evaluated using the Pearson correlation coefficient and the MSE.
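These two metrics may be computed as sketched below; the helper name is illustrative:

```python
import numpy as np
from scipy.stats import pearsonr

def evaluate(pred, gt):
    """Return the R-value and RMSE between predicted and GT BMD arrays."""
    r, _ = pearsonr(pred, gt)                          # Pearson correlation coefficient
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))   # root mean square error
    return r, rmse
```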
Step 604: In response to a current network model generating a higher R-value and a lower MSE than a previous network model, determining the current network model to be the fine-tuned network model for re-generating the estimated pseudo GT BMDs corresponding to the unlabeled images.
If a fine-tuned model achieves both a higher correlation coefficient and a lower MSE than a previous model, the fine-tuned model may be used to re-generate pseudo GT BMDs for the unlabeled images during self-training.
Step 605: Using the current network model to re-generate pseudo GT BMDs to complete the self-training.
The fine-tuning process, using the Pearson correlation coefficient and the MSE as evaluation factors, may be repeated until a preset total number of self-training stages is reached.
In one exemplary implementation, the semi-supervised self-training algorithm can be determined by the following process:
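The algorithm listing itself is not reproduced here; the following is a minimal Python-style sketch of the process described in steps 601-605, in which fine_tune, predict, the dataset attributes, and num_stages are hypothetical helpers and values, and evaluate is the metric sketch above:

```python
def self_train(model, labeled_set, unlabeled_set, val_set, num_stages=10):
    """Self-training sketch following steps 601-605 (num_stages is illustrative)."""
    best_r, best_rmse = evaluate(model.predict(val_set.images), val_set.bmds)
    pseudo_bmds = model.predict(unlabeled_set.images)      # step 601: pseudo GT BMDs
    for _ in range(num_stages):
        # Step 602: fine-tune on labeled data plus pseudo-labeled data.
        fine_tune(model, labeled_set, unlabeled_set, pseudo_bmds)
        # Step 603: evaluate on the validation set (R-value and MSE/RMSE).
        r, rmse = evaluate(model.predict(val_set.images), val_set.bmds)
        if r > best_r and rmse < best_rmse:                # step 604: both improve
            best_r, best_rmse = r, rmse
            # Step 605: re-generate pseudo GT BMDs with the better model.
            pseudo_bmds = model.predict(unlabeled_set.images)
    return model
```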
During semi-supervised learning, an optimization algorithm may be applied for training the learning models. For example, an Adam optimizer with a learning rate of 10^-4 and a weight decay of 4×10^-4 may be implemented to train the network on labeled images for 200 epochs. The learning rate may be decayed to 10^-5 after 100 epochs. In one instance, the learning rate of 10^-5 may be maintained for another 100 epochs during the fine-tuning process. After each training and fine-tuning epoch, the network model may be evaluated on the validation set, and the checkpoint with the highest Pearson correlation coefficient may be selected for testing. All models may be implemented using PyTorch 1.7.1 and trained on a workstation with an Intel(R) Xeon(R) CPU, 128 GB RAM, and a 12 GB NVIDIA Titan V GPU, and a batch size may be set to 16.
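This training configuration may be set up as sketched below; model and train_one_epoch are hypothetical placeholders:

```python
import torch

# Adam with lr 1e-4 and weight decay 4e-4; lr decayed by 10x to 1e-5
# after 100 of the 200 epochs (PyTorch 1.7.1-era API).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=4e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100], gamma=0.1)

for epoch in range(200):
    train_one_epoch(model, optimizer)   # hypothetical training helper
    scheduler.step()                    # lr becomes 1e-5 from epoch 100 onward
```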
Further, to regularize the network model and avoid being misled by inaccurate pseudo labels, each image may be augmented twice, and consistency constraints may be employed both between the features of the two augmentations and between the predicted BMDs. In one exemplary implementation, the consistency loss can be determined by:
L_c = ‖F_1 − F_2‖_2^2 + ‖y′_1 − y′_2‖_2^2,  (6)
where F_1 and F_2 denote the feature embeddings of the two augmentations of an image, and y′_1 and y′_2 denote the corresponding predicted BMDs.
Based on the self-training network model provided in various embodiments, the total loss can be determined by:
L = L_mse + λ·L_triplet + λ′·L_c,  (7)
where λ′ represents a consistency loss weight. In various embodiments, λ′ may be set to 1.0.
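Equations (6) and (7) may be sketched together as follows; the function name and the mean reductions over the batch are assumptions:

```python
import torch.nn.functional as F

def total_loss(pred1, pred2, f1, f2, gt, atl_value, lam=0.5, lam_prime=1.0):
    """Total self-training loss of Equation (7).

    pred1, pred2: predicted BMDs for two augmentations of each image, shape (B,).
    f1, f2: corresponding feature embeddings, shape (B, 512).
    atl_value: adaptive triplet loss term computed as sketched above.
    """
    l_mse = F.mse_loss(pred1, gt)                                        # supervised term
    l_c = (f1 - f2).pow(2).sum(dim=1).mean() + F.mse_loss(pred1, pred2)  # Eq. (6)
    return l_mse + lam * atl_value + lam_prime * l_c
```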
According to the embodiments of the present disclosure, different backbones may affect the baseline performance without ATL or self-training. The compared backbones include VGG-11, VGG-13, VGG-16, ResNet-18, ResNet-34, and ResNet-50. As shown in Table 1 below, VGG-11 achieves the best R-value of 0.8520 and RMSE of 0.0831. The lower performance of other VGG networks and ResNets may be attributed to overfitting from more learnable parameters.
The present disclosure further provides comparison results between the semi-supervised self-training method according to the embodiments of the present disclosure and three existing semi-supervised learning (SSL) methods: the Π-model, temporal ensembling, and the mean teacher. The Π-model is trained to encourage consistent network outputs between two augmentations of the same input image. Temporal ensembling produces pseudo labels by calculating the exponential moving average of predictions after every training epoch, and the pseudo labels are then combined with labeled images to train the model. The mean teacher uses an exponential moving average of model weights, instead of directly ensembling predictions, to produce pseudo labels for unlabeled images.
Regression MSE loss between predicted and GT BMDs can be used on labeled images for all SSL methods. All the SSL models may be fine-tuned from weights pre-trained on labeled images. As shown in Table 2, the semi-supervised self-training method according to the embodiments of the present disclosure achieves the best R-value of 0.8805 and RMSE of 0.0758. The Π-model outperforms the baseline by enforcing output consistency as a regularization. While both temporal ensembling and the mean teacher obtain improvements from the additional pseudo-label supervision, averaging labels or weights can accumulate errors over time. In contrast, the semi-supervised self-training method according to the embodiments of the present disclosure is more effective because it only updates pseudo labels when the model performs better on the validation set.
The predicted BMDs obtained according to the embodiments of the present disclosure are more evenly distributed in the medium range than at the two ends of the range.
According to the embodiments of the present disclosure, the effectiveness of using the ATL in training the network model is demonstrated by comparing the model using the ATL with non-adaptive counterparts. To assess the importance of various components to the estimated BMDs, the collected data may be grouped and different parameters may be applied to evaluate the impact of each component on the BMDs. For example, some hyper-parameters may be varied while other hyper-parameters remain fixed across the groups of data. In one exemplary implementation, the effectiveness of using the ATL in training the model is compared with non-adaptive counterparts at various preset margins. Table 3 below illustrates an ablation study of the ATL.
As shown in Table 3, the non-adaptive counterpart deteriorates the model's regression accuracy; the adaptive coefficient is therefore necessary for achieving high regression accuracy. Because BMD differences vary across triplets, it may be unreasonable to use a fixed margin to uniformly separate samples with dissimilar BMDs. As shown in Table 3, the groups of data using the ATL achieve higher R-values than the baseline regardless of the margin value m. Specifically, when m = 0.5, the model produces the best R-value of 0.8670 and RMSE of 0.0806.
In another exemplary implementation, one group of data uses the MSE loss only for fine-tuning the pre-trained model, and the other group uses the combination of the MSE loss and the ATL. Table 4 illustrates an ablation study of the adaptive triplet loss (ATL) and the corresponding self-training algorithm.
As shown in Table 4, in the first group of data, the R-value and RMSE are evaluated with baseline components only (denoted as “Baseline”) versus baseline components plus the ATL (denoted as “Baseline+ATL”) in the pre-trained model. In the second group of data, the R-value and RMSE are evaluated with the SSL loss (denoted as “SSL”) versus the combination of the SSL loss and the ATL (denoted as “SSL+ATL”). The third group of data illustrates the contribution of the consistency loss in Equation (6): the consistency loss is removed during the self-training stage, and the R-value and RMSE are evaluated with and without the consistency loss.
Moreover, as shown in Table 4, implementing a straightforward SSL strategy in the self-training stage is effective in increasing the R-value and decreasing the RMSE. In one example, the SSL increases the baseline R-value to 0.8605 and decreases the RMSE to 0.0809. Further, the pre-trained model using both the MSE loss and the ATL can further increase the R-value and decrease the RMSE. In addition, while using pseudo labels of unlabeled images is effective in the self-training stage, the R-value can be further increased and the RMSE further decreased when the pseudo labels are updated during fine-tuning. The consistency loss, in turn, regularizes model training by encouraging consistent outputs and features. Without the consistency loss, the performance improvement becomes marginal, and the model may be prone to overfitting to inaccurate pseudo labels and may deteriorate: as shown in Table 4, the R-value improves only from 0.8772 to 0.8776 without the consistency loss, even if the pseudo labels are updated multiple times during fine-tuning. Accordingly, when the self-training algorithm implements both the ATL and the consistency loss, a desirable R-value and RMSE can be achieved, thus improving the regression accuracy of the network. For example, as shown in Table 4, with the combined ATL and consistency loss applied to the self-training algorithm, a maximum R-value of 0.8805 and a minimum RMSE of 0.0758 are achieved. Compared to the baseline, the R-value is improved by 3.35% and the RMSE is reduced by 8.78%.
Therefore, according to various embodiments of the present disclosure, a method of obtaining BMD from hip X-ray images, instead of relying on DEXA measurement, is provided. A CNN may be employed to estimate BMDs from preprocessed hip ROIs. Further, to improve the regression accuracy of the network model, a novel ATL may be combined with the MSE loss for training the network on hip X-ray images with paired ground-truth BMDs, thus demonstrating the feasibility of X-ray-based BMD estimation and providing a potential opportunistic osteoporosis screening tool with greater accessibility at reduced cost.
In various embodiments, the method for estimating BMDs provided by the present disclosure may be applied to one or more electronic devices.
In various embodiments, the electronic device is capable of automatically performing numerical calculation and/or information processing according to an instruction configured or stored in advance, and hardware of the electronic device can include, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, etc. The electronic device can be any electronic product that can interact with users, such as a personal computer, a tablet computer, a smart phone, a desktop computer, a notebook, a palmtop computer, a personal digital assistant (PDA), a game machine, an Internet Protocol television (IPTV), a smart wearable device, etc. The electronic device can perform human-computer interaction with a user through a keyboard, a mouse, a remote controller, a touch panel, or a voice control device. The electronic device can also include a network device and/or a user device. The network device can include, but is not limited to, a cloud server, a single network server, a server group composed of a plurality of network servers, or a cloud computing system composed of a plurality of hosts or network servers. The electronic device can be in a network. The network can include, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), and the like.
Referring to the accompanying drawings, an electronic device according to various embodiments includes a memory 810 for storing a computer program and a processor 820 coupled to the memory 810. When executed by the processor 820, the computer program causes the processor 820 to implement the disclosed method for estimating BMDs.
The memory 810 may include volatile memory such as random-access memory (RAM), and non-volatile memory such as flash memory, a hard disk drive (HDD), or a solid-state drive (SSD). The memory 810 may also include combinations of the above-described memories. The processor 820 may include a central processing unit (CPU), an embedded processor, a microcontroller, and a programmable device such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), and a programmable logic device (PLD), etc.
The present disclosure also provides a computer-readable storage medium storing a computer program. The computer program may be loaded to a computer or a processor of a programmable data processing device, such that the computer program is executed by the computer or the processor of the programmable data processing device to implement the disclosed method.
Various embodiments also provide a computer program product. The computer program product includes a non-transitory computer-readable storage medium and program instructions stored therein. The program instructions may be configured to be executable by a computer to cause the computer to implement the method for estimating BMDs using hip X-rays.
Although the principles and implementations of the present disclosure are described by using exemplary embodiments in the specification, the foregoing descriptions of the embodiments are only intended to help understand the method and core idea of the method of the present disclosure. Meanwhile, a person of ordinary skill in the art may make modifications to the specific implementations and application range according to the idea of the present disclosure. In conclusion, the content of the specification should not be construed as a limitation to the present disclosure.
This application claims the priority of U.S. Provisional Patent Application No. 63/165,223, filed on Mar. 24, 2021, the entire content of which is incorporated herein by reference.