The present disclosure relates to physical parameter prediction using machine learning, and more specifically, to methods and devices for predicting a physical parameter using prior information of the physical parameter as a constraint, such as predicting a fractional flow reserve (FFR) value of a blood vessel.
Machine learning has become an essential tool for modeling complex functions across many domains, such as insurance (insurance premium prediction), healthcare (medical diagnosis, development, and growth), agriculture (plant growth), etc. As the complexity of a learning model increases, its ability to make predictions for various complex problems in real practice improves. However, since the learning model is mainly configured to deduce a mapping function (as a black box) from the input physical information to the output physical parameter based on training data, the predicted results may not obey the fundamental rules that govern the physical parameters. As an example, the insurance premium predicted by the learning model may decrease with age (which contradicts the fundamental rule that the insurance premium increases with age). As another example, the height of a child predicted by the learning model may decrease as the child grows up (which contradicts the fundamental rule that the height of a child should be increasing). As another example, the pressure of blood flow predicted by the learning model may increase from upstream to downstream in vessel trees (which contradicts the fundamental rule that the pressure of blood flow decreases from upstream to downstream in vessel trees).
To compensate for the fact that learning models usually ignore the fundamental rule governing the physical parameter to be predicted, some conventional methods account for the related information of the fundamental rule through post-processing steps. However, these methods require additional steps, and these steps degrade the performance of the learning model. Other methods add loss term(s) to the loss function during the training stage, designed to penalize predictions that contradict the fundamental rule. Taking the monotonic profile of a sequence of physical parameters as an example, an additional loss term designed to penalize non-monotonic predictions is adopted in the loss function during the training stage. However, a low non-monotonic loss on the training data does not necessarily mean a low non-monotonic loss on all testing data, especially when the model is overfitting the training data. More importantly, it does not guarantee that the predictions are strictly monotonic. A sketch of such a penalty term is given below.
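Purely for illustration, the following Python (PyTorch-style) sketch shows what such a conventional auxiliary penalty might look like; the function name and the assumption that the sequence should be non-increasing are hypothetical choices, not taken from the disclosure:

```python
import torch

def non_monotonic_penalty(y_pred: torch.Tensor) -> torch.Tensor:
    """Hypothetical auxiliary loss term of the kind criticized above:
    penalize any rise in a sequence that should be non-increasing."""
    steps = y_pred[..., 1:] - y_pred[..., :-1]  # step-to-step changes
    return torch.relu(steps).mean()             # zero only if the sequence never rises
```

As the passage above notes, driving such a penalty to a low value on the training data does not guarantee monotonic predictions on unseen data.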
There is still room to improve learning models, especially those intended to model complex functions with prior information.
The present disclosure is provided to solve the above-mentioned problems existing in the prior art. There is a need for methods and devices for predicting a physical parameter based on input physical information by means of a learning model, and for computer-readable media, which may enforce the prior information of the physical parameter as a constraint function within the architecture of the learning model, without requiring additional loss terms or post-processing steps. Accordingly, the prediction result can be forced to substantially comply with the fundamental rule, and thus the model performance can be improved.
According to a first aspect of the present disclosure, a method for predicting a physical parameter based on input physical information is provided. The method may include predicting, by a processor, an intermediate variable based on the input physical information with an intermediate sub-model, which incorporates a constraint on the intermediate variable according to prior information of the physical parameter. The method may also include transforming, by the processor, the intermediate variable predicted by the intermediate sub-model to the physical parameter with a transformation sub-model.
According to a second aspect of the present disclosure, a device for predicting a physical parameter based on input physical information is provided. The device may include a storage and a processor. The storage may be configured to load or store an intermediate sub-model and a transformation sub-model. The processor may be configured to predict an intermediate variable based on the input physical information with the intermediate sub-model, which incorporates a constraint on the intermediate variable according to prior information of the physical parameter, and transform the intermediate variable predicted by the intermediate sub-model to the physical parameter with the transformation sub-model.
According to a third aspect of the present disclosure, a non-transitory computer-readable medium is provided with computer-executable instructions stored thereon. The computer-executable instructions, when executed by a processor, may perform a method for predicting a physical parameter based on input physical information. The method may comprise predicting an intermediate variable based on the input physical information with an intermediate sub-model, which incorporates a constraint on the intermediate variable according to prior information of the physical parameter. The method may further comprise transforming the intermediate variable predicted by the intermediate sub-model to the physical parameter with a transformation sub-model.
The above method and device, as well as the medium, may enforce the prior information of the physical parameter as a constraint function within the architecture of the learning model, without requiring additional loss terms or post-processing steps, to guarantee that the prediction result substantially complies with the fundamental rule and to improve the model performance.
The foregoing general description and the following detailed description are only exemplary and illustrative, and are not intended to limit the claimed invention.
In the drawings, which are not necessarily drawn to scale, similar reference numbers may describe similar components in different views. Similar reference numbers with letter suffixes, or with different letter suffixes, may indicate different examples of similar components. The drawings generally show various embodiments by way of example and not limitation, and together with the description and claims, are used to explain the disclosed embodiments. Such embodiments are illustrative and exemplary, and are not intended to be exhaustive or exclusive embodiments of the method, system, or non-transitory computer-readable medium having instructions for implementing the method thereon.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the drawings.
In this disclosure, “physical information” may be any information that may be collected or acquired in various technical domains and that is governed by certain physical rules. The physical information may be acquired in various formats, such as but not limited to a sequence of data, vectors, image patches, lists, etc. Correspondingly, the “physical parameter” to be predicted may be a physical parameter related to the physical information in the corresponding technical domain. For example, in the technical domain of insurance, the age and healthcare information of the insured may be adopted as the physical information, and the insurance premium of the insured may be set as the physical parameter to be predicted. As another example, in the technical domain of healthcare, such as coronary artery stenosis diagnosis, the sequence of image patches in a coronary artery tree may be adopted as the physical information, and the sequence of fractional flow reserve (FFR) or instantaneous wave-free ratio (iFR) values in the coronary artery tree may be set as the physical parameter to be predicted. In this disclosure, “prior information of the physical parameter” may comprise known or confirmed knowledge related to the predicted physical parameters, such as the fundamental rule(s) that govern the physical parameter, or its transformed parameter, according to a physical principle or theory. In the exemplary technical domain of insurance, an example of the prior information may be that the insurance premium has to increase as the insured's age increases and the health condition worsens. In the exemplary technical domain of coronary artery stenosis diagnosis, an example of the prior information may be that FFR values downstream should not be higher than those upstream in the coronary artery tree.
As shown in
In some embodiments, the prior information of the physical parameter(s) may include a profile tendency (especially for physical parameters forming a sequence) and/or a bound range of the magnitude (e.g., positive, negative, or within a range defined by a lower limit and/or an upper limit) in the temporal domain and/or the spatial domain. In some embodiments, the profile tendency may include any one of monotonicity of the profile change (e.g., increasing, decreasing, non-increasing, or non-decreasing), periodicity of the profile change, a convex shape of the profile, and a concave shape of the profile for the sequence of physical parameters.
In some embodiments, the intermediate variable may be determined based on the prior information of the physical parameter(s), so that the prior information may be mathematically expressed on the intermediate variable as the constraint function 103b. Based on the prior information of the physical parameter(s), the intermediate variable may be pre-defined to model an intermediate function of the input physical information, and the transformation sub-model 104 may be a function constructed from the intermediate function and the target function, so that they collectively model the target function. As an example, when the prior information is increasing monotonicity of the profile change of the sequence of physical parameters, the derivative of the physical parameter may be set as the intermediate variable, and a function mapping the derivatives of the physical parameter to non-negative values, such as but not limited to ReLU, may be adopted as the constraint function 103b as part of the intermediate sub-model 103. Accordingly, the transformation sub-model 104 may be set as an integral function (or based on the integral function). A minimal sketch of this monotonicity-enforcing construction is given below.
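For illustration only, the following is a minimal Python (PyTorch-style) sketch of this construction; the module name, the MLP chosen as the unconstrained intermediate sub-model 103a, and the tensor shapes are hypothetical assumptions, and a discrete cumulative sum stands in for the integral function of the transformation sub-model 104:

```python
import torch
import torch.nn as nn

class MonotonicPredictor(nn.Module):
    """Hypothetical sketch: predict derivatives as the intermediate variable,
    constrain them with ReLU, then integrate to obtain a monotonic output."""

    def __init__(self, feature_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Unconstrained intermediate sub-model (cf. 103a): an MLP is used here
        # purely for illustration; any regressor could take its place.
        self.unconstrained = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, feature_dim), the input physical information.
        raw = self.unconstrained(x).squeeze(-1)  # unconstrained derivative estimates
        derivatives = torch.relu(raw)            # constraint function (cf. 103b): non-negative
        y = torch.cumsum(derivatives, dim=-1)    # transformation sub-model (cf. 104): discrete integral
        return y                                 # non-decreasing along the sequence by construction
```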
In some embodiments of the present disclosure, for each prediction of the physical parameter(s), the intermediate variable(s) of the physical parameter(s) is first predicted without constraint conditions and is then processed directly by means of the constraint function 103b to satisfy the prior information. After that, an inverse operation, with respect to the operation for obtaining the intermediate variable(s) from the physical parameter(s), may be performed on the predicted intermediate variable as the transformation sub-model 104. As a result, the prediction result of the physical parameter(s), e.g., the output 102, can be ensured to be consistent with the prior information. The resulting physical parameter prediction model 100 may achieve accurate prediction performance on the physical parameter(s) in an end-to-end manner (i.e., post-processing steps may not be needed), while efficiently suppressing unrealistic predictions (those contradicting the prior information) and preventing the model from overfitting the training data.
In some embodiments, for a sequence of physical parameters, the prior information may govern the whole sequence, partial segments, or sporadic locations/points in the sequence, or individual samples in scalar prediction problems.
In some embodiments, the unconstrained intermediate sub-model 103a may be generated in various manners, including but not limited to, as a linear model, a curve model (e.g., a polynomial model), a learning model (such as a machine learning model or a deep learning model), etc. In some embodiments, the unconstrained intermediate sub-model 103a may be configured as a learning model, such as a decision tree, a support vector machine, a Bayesian forecast model, a CNN, or an MLP, to model the hidden and complex mapping functions between the physical information (e.g., input 101) and the intermediate variable(s).
Generally, the present disclosure may relate to two phases: a prediction phase and a training phase. The training phase can be performed to train the physical parameter prediction model 100, and the prediction phase can be performed to apply the trained physical parameter prediction model 100 to make predictions of the physical parameter based on input physical information. Each of the prediction phase and the training phase can be performed online (e.g., in real time) or offline (e.g., in advance). In some embodiments, the training phase may be performed offline and the prediction phase may be performed online.
As shown in
For example, the technical domain may be the medical field, and the physical information may be medical information, such as clinical information of the disease history, image(s) (or patches), and/or feature vectors (either explicitly defined or hidden feature information) extracted therefrom. The physical parameter(s) may accordingly be medical parameter(s). For example, the medical parameter(s) may include a medical index, a physiological status parameter, the disease type, etc. The medical image(s) may be acquired via any of the following imaging modalities: functional MRI (e.g., fMRI, DCE-MRI, and diffusion MRI), cone beam CT (CBCT), spiral CT, positron emission tomography (PET), single-photon emission computed tomography (SPECT), X-ray, optical tomography, fluorescence imaging, ultrasound imaging, and radiotherapy portal imaging, etc., or a combination thereof.
The details of each of the intermediate sub-model, the constraint function, and the transformation sub-model have already been described above with reference to the corresponding drawings.
In some embodiments, the constrained intermediate sub-model 103 may be a learning model (e.g., a machine learning model or a deep learning model), and the transformation sub-model 104 may be a preset function. In some embodiments, the constrained intermediate sub-model 103 and the transformation sub-model 104 may be collectively trained with a training dataset of the physical information annotated with the physical parameter(s). In this manner, the lack of ground truth labels for the intermediate variable(s) may be overcome; instead, the abundance of ground truth labels for the physical parameter(s) may be utilized to train the physical parameter prediction model 100 as a whole. Training the physical parameter prediction model 100 effectively trains the constrained intermediate sub-model 103 as a learning model.
For a physical parameter prediction model with a predefined configuration, i.e., where each of the intermediate variable(s), the transformation sub-model, and the constraint function is predefined, and the configuration of the unconstrained intermediate sub-model is predetermined (such as a CNN), the training process may be performed as follows.
The training process may begin with step 301, where training data, including physical information and the corresponding ground truth labels of the physical parameter(s), are received. The training data is input into the physical parameter prediction model with the predefined framework described above.
At step 302, from the physical information in the training data, intermediate variable(s) may be predicted by the constrained intermediate sub-model with the current model parameters. At step 303, the predicted intermediate variables are then transformed to the prediction result of the physical parameter(s) by the transformation sub-model. At step 304, the loss function may be calculated by comparing the prediction result of the physical parameter(s) and the ground truth labels thereof. At step 305, the calculated loss is compared to a stopping criterion, e.g., a nominal threshold value. If the calculated loss is below the stopping criterion (step 305: YES), the current model parameters are sufficiently optimized and no more iteration is necessary. Accordingly, the method proceeds to step 306, to output the physical parameter prediction model with the current model parameters of the unconstrained intermediate sub-model. Otherwise (step 305: NO), further optimization is needed. At step 307, the model parameters of the unconstrained intermediate sub-model may be optimized based on the calculated loss function. Then the method iterates steps 302-305 based on the updated unconstrained intermediate sub-model with the current model parameters, until the loss is less than the stopping criterion.
In some embodiments, the optimization of the model parameters may be performed by various algorithms, such as but not limited to the stochastic gradient descent method, Newton's method, the conjugate gradient method, quasi-Newton methods, and the Levenberg-Marquardt algorithm. A minimal sketch of the training loop is given below.
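Purely as an illustration, a minimal Python (PyTorch-style) sketch of this training loop (steps 301-307) might look as follows; the function name, the choice of SGD with a mean-squared-error loss, and the threshold-based stopping rule are assumptions made for the sketch, not requirements of the disclosure:

```python
import torch

def train(model, loader, loss_threshold=1e-3, max_epochs=100, lr=1e-3):
    """Hypothetical training loop: the constrained intermediate sub-model and
    the transformation sub-model are assumed to be wrapped in `model`, so only
    ground truth labels of the physical parameter are needed."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = torch.nn.MSELoss()
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for x, y_true in loader:              # step 301: physical info + ground truth labels
            y_pred = model(x)                 # steps 302-303: predict intermediate variable, then transform
            loss = criterion(y_pred, y_true)  # step 304: compare prediction with ground truth
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                  # step 307: optimize model parameters
            epoch_loss += loss.item()
        if epoch_loss / len(loader) < loss_threshold:  # step 305: stopping criterion
            break                             # step 306: current parameters are kept
    return model
```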
Since the prior information is enforced explicitly, through the constraint applied to the intermediate variable, the physical parameter prediction model does not require additional loss terms with respect to the prior information in the training process. Moreover, the training process may guarantee that the prediction results comply with the prior information, with a workload comparable to that of training another physical parameter prediction model that attempts to avoid overfitting without enforcing the prior information.
In some embodiments, the sequence of physical parameters may include vessel parameters at a sequence of positions in a vessel structure, such as a vessel tree or a vessel path.
Hereinafter, fractional flow reserve (FFR) is described as an example of the physical parameter(s). Two examples of prior information, i.e., the monotonicity of the profile change of a sequence of physical parameters and the bound range of a single physical parameter, are used to illustrate how various kinds of prior information can be explicitly enforced in the physical parameter prediction model. However, the exemplary methods described for the prediction of FFR may be applied or adapted to predict other medical or physiological parameters in the medical field, or physical parameters in other technical fields. Moreover, these methods may also be adapted to accommodate other types of prior information.
Fractional flow reserve (FFR) is considered a reliable index for the assessment of cardiac ischemia, and learning models have been used to predict FFR values in the coronary artery tree. FFR is defined as the ratio between the pressure after a stenosis (or the pressure at any position within the vessel tree) and the pressure at the ostial point (the inlet of the coronary artery tree). Following the physics, in a sequence of FFR values within the coronary artery tree, FFR values downstream should not be higher than those upstream.
In some embodiments, instead of predicting FFR values directly, the methods and devices of the present disclosure can be used to model the drop of FFR at the current point relative to the adjacent upstream point. The drop of FFR values may be defined as the derivative of the FFR along the sequence. Based on the monotonicity of the profile change of the sequence of FFR values along the vessel structure, the intermediate variable may be defined based on the derivative of the sequence of FFR values (such as the drop of each FFR value relative to its adjacent upstream FFR value); correspondingly, the constraint function may be defined to map into the non-negative range, and the transformation sub-model may be defined based on an integral function, to obtain the sequence of FFR values from the non-negative derivatives of the sequence of FFR values. Similarly, for other physical parameters whose prior information includes the monotonicity of the profile change of the sequence of physical parameters, the intermediate variable can be defined based on the derivative of the sequence of physical parameters. A minimal sketch of this drop-based construction for FFR is given below.
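For illustration only, the following Python (PyTorch-style) sketch shows one hypothetical way to realize this drop-based construction; the ordering of points from the ostium downstream, the ostial FFR value of 1.0, and the network architecture are assumptions made for the sketch:

```python
import torch
import torch.nn as nn

class FFRDropPredictor(nn.Module):
    """Hypothetical sketch: predict per-point FFR drops (the intermediate
    variable), constrain them to be non-negative, and recover the FFR sequence
    by subtracting the accumulated drops from the ostial value."""

    def __init__(self, feature_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.drop_net = nn.Sequential(   # unconstrained derivative unit (illustrative MLP)
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_points, feature_dim), ordered from the ostium downstream;
        # each drop is taken relative to the adjacent upstream point.
        drops = torch.relu(self.drop_net(x).squeeze(-1))  # constraint: non-negative drops
        ffr = 1.0 - torch.cumsum(drops, dim=-1)           # transformation: FFR never rises downstream
        return ffr
```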
As shown in
The constrained derivative sub-model 403 aims to model the derivatives of the sequence of FFR values. Based on the predicted derivatives of the sequence of FFR values, the transformation sub-model 404 may map the constrained derivatives to the FFR values in the target domain.
As shown in
The final predicted FFR values y(t) may be calculated recursively from the output of the activation function, i.e., the non-negative derivatives of the sequence of FFR values (essentially the non-negative drops of the sequence of FFR values along the vessel trees/paths), using the transformation sub-model 404. The final predicted FFR values y(t) may then be provided as the output 402.
As a result, the FFR prediction model does not require additional loss terms to penalize non-monotonic predictions, since monotonicity is enforced explicitly within the model.
In some embodiments, the FFR prediction model is designed to model a target function, i.e., the true underlying function F(x(t)). For example, the FFR prediction model can be expressed as a function ϕ(x(t)), which is built to model the target function F(x(t)) with an intermediate function f(x(t)) (corresponding to the trained unconstrained derivative unit 403a). For example, the intermediate function f(x(t)) may be the derivative function of F(x(t)), wherein t denotes the position or index in the sequence (e.g., the position may move toward the downstream as t increases). As an example, the intermediate function f(x(t)) may be defined as Formula (1) below:

f(x(t))=dF(x(t))/dt (1)

or as some other transform function.
Based on the intermediate function f(x(t)), a function ϕ(x(t)) (corresponding to the trained FFR prediction model) may be built which tries to model and approximate the true underlying function F(x(t)).
As shown in
The loss function L may be computed by comparing the yielded prediction result y(t) and the ground truth FFR values. For a training set D, the parameter θ may be optimized by minimizing the loss function L. Methods such as stochastic gradient descent may be used for the optimization.
Without limiting the scope of the disclosure, one type of prior information on FFR, i.e., non-decreasing monotonicity, may be used as an example throughout the description. For example, the function ϕ(x(t)) may be made a monotonic function by using the derivative as the intermediate variable together with the non-negative constraint function 403b, mapping the input x(t) 401 to the output y(t) 402 such that y(t2)≥y(t1) for any t2>t1. For different prediction problems, the input x(t) may be an image or a feature vector. The constrained derivative sub-model 403, Ø(·;θ), may model the derivative function defined by Formula (1), instead of the underlying function F(x(t)). ϕ(x(t)) may be easily constrained to be monotonic by enforcing the constrained derivative sub-model Ø(·;θ) to be non-negative (i.e., ensuring that the predicted FFR values are non-decreasing from downstream to upstream). In some embodiments, if the prior information requires the predicted values to be non-increasing, the constrained derivative sub-model Ø(·;θ) may be enforced to be non-positive; if the prior information requires the predicted values to be strictly increasing, the constrained derivative sub-model Ø(·;θ) may be enforced to be positive; if the prior information requires the predicted values to be strictly decreasing, the constrained derivative sub-model Ø(·;θ) may be enforced to be negative. The constrained derivatives so predicted may be fed into the transformation sub-model 404, yielding the final prediction result y(t), e.g., according to Formula (2) as follows:
y(t)=∫Ø(x(t);θ)dt (2)
If the prediction result y(t0) at a position t0 is given (either predefined or determined by a machine learning model), i.e., y(t0)=y0, the prediction result y(t) may be computed by the following Formula (3):
y(t)=y0+∫_{t0}^{t}Ø(x(τ);θ)dτ (3)
Finally, a value of the loss function L may be computed by comparing the generated prediction result y(t) with the ground truth FFR values. In some embodiments, the loss function L may be a difference measure (e.g., an L1 or L2 distance) between the generated prediction result y(t) and the ground truth FFR values. A discrete sketch of Formula (3), together with these loss choices, is given below.
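As a brief illustration, a discrete counterpart of Formula (3) together with the loss choices mentioned above might be sketched in Python (PyTorch-style) as follows; the function name and the interpretation of the inputs as per-step increments after position t0 are assumptions:

```python
import torch

def predict_from_known_point(increments: torch.Tensor, y0: float) -> torch.Tensor:
    """Hypothetical discrete version of Formula (3): accumulate the constrained
    derivative outputs, given as per-step increments for the positions after t0,
    onto the known value y0 = y(t0)."""
    return y0 + torch.cumsum(increments, dim=-1)  # y(t) for t = t0+1, ..., T

# Illustrative loss choices (L1 or L2 distance to the ground truth):
l1_loss = torch.nn.L1Loss()
l2_loss = torch.nn.MSELoss()
```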
In some embodiments, for the prediction of FFR, the input x(t) may be the images, image patches, masks, or features for points along the coronary artery tree. In some embodiments, various learning models, such as a CNN, FCN, or MLP, or another method, may be applied by the unconstrained derivative unit 403a to encode the input information. In some embodiments, the intermediate variable may be defined as the derivative function of the FFR, or simply the drop of the FFR relative to the previous upstream location along the vessel tree. A hypothetical sketch of such a patch encoder is given below.
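For illustration only, a hypothetical patch encoder for points along the coronary artery tree might be sketched in Python (PyTorch-style) as follows; the layer sizes and the single-channel 2-D patches are assumptions made for the sketch:

```python
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Hypothetical sketch: encode one 2-D image patch per centerline point
    into a feature vector for the unconstrained derivative unit to consume."""

    def __init__(self, out_dim: int = 32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),   # pool each patch to a fixed 4x4 grid
        )
        self.fc = nn.Linear(16 * 4 * 4, out_dim)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, n_points, 1, H, W), one patch per point along the tree.
        b, n = patches.shape[:2]
        feats = self.conv(patches.flatten(0, 1))         # (b*n, 16, 4, 4)
        return self.fc(feats.flatten(1)).view(b, n, -1)  # (b, n, out_dim)
```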
As shown in
In some embodiments, the input x(t) 501, which may be image patch(es), feature vector(s), etc., may be fed into a first constrained subtraction sub-model 503a and a second constrained subtraction sub-model 503b. In some embodiments, the first constrained subtraction sub-model 503a may include a first unconstrained subtraction unit 503a1 and a ReLU 503a2 as the corresponding constraint function (also working as the activation function at the end of the learning model). The first unconstrained subtraction unit 503a1 may be built based on any one of CNN, MLP, etc., and may be configured to model and determine the difference between the FFR value and the lower limit (e.g., 0). The difference may then be mapped by the ReLU 503a2 into a non-negative range, to enforce the prior information associated with the lower limit. The ReLU 503a2 may output and feed the non-negative difference between the FFR value and the lower limit into a first transformation sub-model 504a. The first transformation sub-model 504a may be built based on a subtraction, e.g., an inverse operation to that performed by the first unconstrained subtraction unit 503a1, to obtain the FFR value therefrom as a first output y1(t) 502a.
Similarly, in the right branch for the upper limit, the second constrained subtraction sub-model 503b may include a second unconstrained subtraction unit 503b1 and a ReLU 503b2 as the corresponding constraint function (also working as the activation function at the end of the learning model). The second unconstrained subtraction unit 503b1 may be built based on any one of CNN, MLP, etc., and may be configured to model and determine the difference between the upper limit (e.g., 1) and the FFR value. The difference may then be mapped by the ReLU 503b2 into a non-negative range, to enforce the prior information associated with the upper limit. The ReLU 503b2 may output and feed the non-negative difference between the upper limit and FFR value into a second transformation sub-model 504b. Like the first transformation sub-model 504a, the second transformation sub-model 504b may also be built based on a subtraction, e.g., an inverse operation to that performed by the second unconstrained subtraction unit 503b1, to obtain the FFR value therefrom as a second output y2(t) 502b.
Both the first output y1(t) 502a and the second output y2(t) 502b may be utilized to obtain the final output y(t) 502c as the final predicted FFR value. As an example, an averaging operation may be performed by an averaging unit 502d on the first output y1(t) 502a and the second output y2(t) 502b to obtain the final output y(t) 502c. In some embodiments, other operations, such as a minimum operation, may be adopted to take both the first output y1(t) 502a and the second output y2(t) 502b into account to obtain the final predicted FFR value. A minimal sketch of this two-branch construction is given below.
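Purely for illustration, the two-branch bound-range construction might be sketched in Python (PyTorch-style) as follows; the branch networks and the averaging combination follow the description above, while the module and attribute names are hypothetical:

```python
import torch
import torch.nn as nn

class BoundedFFRPredictor(nn.Module):
    """Hypothetical sketch: one branch enforces the lower limit (0), the other
    the upper limit (1), and the two FFR estimates are averaged."""

    def __init__(self, feature_dim: int, hidden_dim: int = 64):
        super().__init__()
        def make_branch():
            return nn.Sequential(
                nn.Linear(feature_dim, hidden_dim),
                nn.Tanh(),
                nn.Linear(hidden_dim, 1),
            )
        self.lower_branch = make_branch()  # models FFR minus the lower limit (cf. 503a1)
        self.upper_branch = make_branch()  # models the upper limit minus FFR (cf. 503b1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        lower, upper = 0.0, 1.0
        diff_low = torch.relu(self.lower_branch(x))  # constraint (cf. 503a2): FFR - 0 >= 0
        diff_up = torch.relu(self.upper_branch(x))   # constraint (cf. 503b2): 1 - FFR >= 0
        y1 = lower + diff_low                        # first transformation sub-model (cf. 504a)
        y2 = upper - diff_up                         # second transformation sub-model (cf. 504b)
        return 0.5 * (y1 + y2)                       # averaging unit (cf. 502d)
```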
Although
In some embodiments, prior information that the profile of the sequence of physical parameters has a convex shape may be adopted and enforced in the learning model. Accordingly, the intermediate variable may be defined based on the second-order derivative, an activation function (such as but not limited to ReLU) may be adopted at the end of the learning model, and the transformation function may be based on indefinite integration (applied twice), to recover the physical parameters to be predicted from the output of the intermediate sub-model, i.e., the predicted second-order derivatives of the sequence of physical parameters. A brief sketch of this convexity-enforcing construction is given below.
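As a brief illustration, the convexity-enforcing variant might be sketched in Python (PyTorch-style) as follows; treating two cumulative sums as a discrete double integration is an assumption made for the sketch:

```python
import torch

def convex_profile(raw_second_derivs: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: constrain predicted second-order derivatives to be
    non-negative and integrate twice so the recovered profile is convex."""
    d2 = torch.relu(raw_second_derivs)   # constraint: second derivative >= 0
    d1 = torch.cumsum(d2, dim=-1)        # first discrete integration
    return torch.cumsum(d1, dim=-1)      # second discrete integration: convex sequence
```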
In the above embodiments, the coronary artery is used as an example of a vessel; however, it is contemplated that the vessel may be any one of the coronary artery, carotid artery, abdominal aorta, cerebral vessels, ocular vessels, and femoral artery, etc.
The storage 601 may be configured to load or store the intermediate sub-model(s) according to any one or more embodiments of present disclosure, including, e.g., the constrained intermediate sub-models and transformation sub-models. The processor 602 may be configured to predict an intermediate variable based on the input physical information with the intermediate sub-model; and transform the intermediate variable predicted by the intermediate sub-model to the physical parameter with the transformation sub-model.
In some embodiments, the processor 602 may be a processing device including one or more general processing devices, such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), and so on. More specifically, the processor may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor that runs a combination of instruction sets. The processor may also be one or more dedicated processing devices, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a system on a chip (SoC), etc.
The storage 601 may be a non-transitory computer-readable medium, such as read only memory (ROM), random access memory (RAM), phase change random access memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), electrically erasable programmable read-only memory (EEPROM), other types of random access memory, flash disks or other forms of flash memory, cache, register, static memory, compact disc read only memory (CD-ROM), digital versatile disk (DVD) or other optical memory, cassette tape or other magnetic storage devices, or any other possible non-transitory medium used to store information or instructions that can be accessed by computer equipment, etc. The instructions stored on the storage 601, when executed by the processor 602, may perform the method for predicting a physical parameter based on the input physical information according to any embodiment of the present disclosure. In some embodiments, the physical parameter prediction device 600 may also perform the model training function; accordingly, the storage 601 may be configured to load the training dataset of the physical information annotated with the physical parameter, and the processor 602 may be configured to collectively train the intermediate sub-model and the transformation sub-model based on the loaded training dataset.
In some embodiments, physical parameter prediction device 600 may further include a memory 601′, which may be configured to load the intermediate sub-model(s) according to any one or more embodiments of present disclosure. The processor 602 may be communicatively coupled to the memory 601′ and configured to execute computer executable instructions stored thereon, to perform a method for predicting a physical parameter based on the input physical information according to any embodiment of present disclosure.
In some embodiments, the memory 601′ may be a non-transitory computer-readable medium, such as read only memory (ROM), random access memory (RAM), phase change random access memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), electrically erasable programmable read-only memory (EEPROM), other types of random access memory, flash disks or other forms of flash memory, cache, register, static memory, or any other possible medium used to store information or instructions that can be accessed and executed by computer equipment, etc.
In some embodiments, the physical parameter prediction device 600 may further include a communication interface 603. In some embodiments, the communication interface 603 may include any one of a network adapter, a cable connector, a serial connector, a USB connector, a parallel connector, a high-speed data transmission adapter (such as optical fiber, USB 3.0, a Thunderbolt interface, etc.), a wireless network adapter (such as a WiFi adapter), a telecommunication (3G, 4G/LTE, 5G, etc.) adapter, etc.
Specifically, the image acquisition device 701 may include any one of normal CT, normal MRI, functional magnetic resonance imaging (such as fMRI, DCE-MRI, and diffusion MRI), cone beam computed tomography (CBCT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), X-ray imaging, optical tomography, fluorescence imaging, ultrasound imaging, and radiotherapy portal imaging, etc.
In some embodiments, the model training device 700 may be configured to train the physical parameter prediction model (for example, the unconstrained intermediate sub-model therein) and transmit the trained physical parameter prediction model to the physical parameter prediction device 600, so that the physical parameter prediction device 600 may use the trained model to predict the physical parameter based on the input physical information according to any embodiment of the present disclosure. In some embodiments, the model training device 700 and the physical parameter prediction device 600 may be implemented by a single computer or processor.
In some embodiments, the physical parameter prediction device 600 may be a special purpose computer or a general-purpose computer. For example, the physical parameter prediction device 600 may be a computer customized for a hospital to perform image acquisition and image processing tasks, or may be a server in the cloud.
The physical parameter prediction device 600 may be connected to the model training device 700, the image acquisition device 701, and other components through the communication interface 603. In some embodiments, the communication interface 603 may be configured to receive a trained physical parameter prediction model from the model training device 700, and may also be configured to receive medical images from the image acquisition device 701, such as a set of images of vessels.
In some embodiments, the storage 601 may store a trained model, prediction result of the physical parameter, or the intermediate information generated during the training phase or the prediction phase, such as feature information generated while executing a computer program. In some embodiments, the memory 601′ may store computer-executable instructions, such as one or more image processing (such as physical parameter prediction) programs. In some embodiments, each unit, function, sub-model, and model may be implemented as applications stored in the storage 601, and these applications can be loaded to the memory 601′, and then executed by the processor 602 to implement corresponding processes.
In some embodiments, the model training device 700 may be implemented using hardware specially programmed by software that executes the training process. For example, the model training device 700 may include a processor and a non-transitory computer readable medium similar to the physical parameter prediction device 600. The processor implements training by executing executable instructions for the training process stored in a computer-readable medium. The model training device 700 may also include input and output interfaces to communicate with the training database, network, and/or user interface. The user interface may be used to select training data sets, adjust one or more parameters in the training process, select or modify the framework of the learning model, etc.
Another aspect of the disclosure is directed to a non-transitory computer-readable medium storing instructions which, when executed, cause one or more processors to perform the methods, as discussed above. The computer-readable medium may include volatile or non-volatile, magnetic, semiconductor-based, tape-based, optical, removable, non-removable, or other types of computer-readable medium or computer-readable storage devices. For example, the computer-readable medium may be the storage device or the memory module having the computer instructions stored thereon, as disclosed. In some embodiments, the computer-readable medium may be a disc or a flash drive having the computer instructions stored thereon.
Various modifications and changes can be made to the disclosed methods, devices, and systems. In view of the description and practice of the disclosed system and related methods, other embodiments can be derived by those skilled in the art. Each claim of the present disclosure can be understood as an independent embodiment, and any combination of the claims can also serve as an embodiment of the present disclosure; all such embodiments are considered to be encompassed by the present disclosure.
It is intended that the description and examples are to be regarded as exemplary only, with the true scope being indicated by the appended claims and their equivalents.
This application is based on and claims the benefit of priority of U.S. Provisional Application No. 63/081,279, filed on Sep. 21, 2020, which is incorporated herein by reference in its entirety.