The disclosed subject matter is directed to methods and systems for calculating heart parameters. Particularly, the methods and systems can calculate heart parameters, such as ejection fraction, from a series of two-dimensional images of a heart.
Left ventricle (“LV”) analysis can play a crucial role in research aimed at alleviating human diseases. The metrics revealed by LV analysis can enable researchers to understand how experimental procedures are affecting the animals they are studying. LV analysis can provide critical information on one of the key functional cardiac parameters—ejection fraction—which measures how well the heart is pumping out blood and can be key in the diagnosis and staging of heart failure. LV analysis can also determine volume and cardiac output. Understanding these parameters can help researchers to produce valid, valuable study results.
Ejection fraction (“EF”) is a measure of how well the heart is pumping blood. The calculation is based on volume at diastole (when the heart is completely relaxed and the LV and right ventricle (“RV”) are filled with blood) and systole (when the heart contracts and blood is pumped from the LV and RV into the arteries). The equation for EF is shown below:
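For example, with EDV denoting the end-diastolic volume (the volume when the heart is completely relaxed) and ESV denoting the end-systolic volume (the volume after contraction), ejection fraction can be expressed as:

EF = (EDV - ESV) / EDV × 100%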
Ejection fraction is often required for point-of-care procedures. Ejection fraction can be computed using a three-dimensional (“3D”) representation of the heart. However, computing ejection fraction based on 3D representations requires a 3D imaging system with cardiac gating (e.g., MRI, CT, 2D ultrasound with a 3D motor, or a 3D array ultrasound transducer), which is not always available.
Accordingly, there is a need for methods and systems for calculating heart parameters, such as ejection fraction, for point-of-care procedures.
The purpose and advantages of the disclosed subject matter will be set forth in and apparent from the description that follows, as well as will be learned by practice of the disclosed subject matter. Additional advantages of the disclosed subject matter will be realized and attained by the methods and systems particularly pointed out in the written description and claims hereof, as well as from the appended figures. To achieve these and other advantages and in accordance with the purpose of the disclosed subject matter, as embodied and broadly described, the disclosed subject matter is directed to methods and systems for calculating heart parameters, such as ejection fraction, using two-dimensional (“2D”) images of a heart, for example in real time. The ability to display heart parameters in real time can enable medical care providers to make a diagnosis more quickly and accurately during ultrasound interventions, without needing to stop and take measurements manually or to send images to specialists, such as radiologists.
In one example, a method for calculating a heart parameter includes receiving, by one or more computing devices, a series of two-dimensional images of a heart, the series covering at least one heart cycle, and identifying, by one or more computing devices, a first systole image from the series of images associated with systole of the heart and a first diastole image from the series of images associated with diastole of the heart. The method also includes calculating, by one or more computing devices, an orientation of the heart in the first systole image and an orientation of the heart in the first diastole image, and calculating, by one or more computing devices, a segmentation of the heart in the first systole image and a segmentation of the heart in the first diastole image. The method also includes calculating, by one or more computing devices, a volume of the heart in the first systole image based on the orientation of the heart in the first systole image and the segmentation of the heart in the first systole image, and a volume of the heart in the first diastole image based at least on the orientation of the heart in the first diastole image and the segmentation of the heart in the first diastole image. The method also includes determining, by one or more computing devices, the heart parameter based at least on the volume of the heart in the first systole image and the volume of the heart in the first diastole image, and determining, by one or more computing devices, a confidence score of the heart parameter. The method also includes displaying, by one or more computing devices, the heart parameter and the confidence score.
In accordance with the disclosed subject matter, the method can include determining areas for the heart including an area for the heart for each image in the series of images, wherein identifying the first systole image can be based on identifying a smallest area among the areas, the smallest area representing a smallest heart volume. The method can include determining areas for the heart including an area for the heart for each image in the series of images, wherein identifying the first diastole image can be based on identifying a largest area among the areas, the largest area representing a largest heart volume.
Calculating the orientation of the heart in the first systole image and the orientation of the heart in the first diastole image can be based on a deep learning algorithm. The method can include identifying a base and an apex of the heart in each of the first systole image and the first diastole image, wherein calculating the orientation of the heart in the first systole image and the orientation of the heart in the first diastole image can be based on the base and the apex in the respective image. Calculating the segmentation of the heart in the first systole image and the segmentation of the heart in the first diastole image can be based on a deep learning algorithm. The method can include determining a border of the heart in each of the first systole image and the first diastole image, wherein calculating the segmentation of the heart in the first systole image and the segmentation of the heart in the first diastole image can be based on the orientation of the heart in the respective image and the border of the heart in the respective image.
The method can include generating a wall trace of the heart including a deformable spline connected by a plurality of nodes, and displaying the wall trace of the heart in one of the first systole image and the first diastole image. The method can include receiving a user adjustment of at least one node to modify the wall trace. The method can further include modifying the wall trace of the heart in the other of the first systole image and the first diastole image, based on the user adjustment. The heart parameters can include ejection fraction. Determining the heart parameter can be in real time. The method can include determining a quality metric of the images in the series of two-dimensional images, and confirming that the quality metric is above a threshold.
In accordance with the disclosed subject matter, a method for calculating heart parameters includes receiving, by one or more computing devices, a series of two-dimensional images of a heart, the series covering a plurality of heart cycles, and identifying, by one or more computing devices, a plurality of systole images from the series of images, each associated with systole of the heart and a plurality of diastole images from the series of images, each associated with diastole of the heart. The method also includes calculating, by the one or more computing devices, an orientation of the heart in each of the systole images and an orientation of the heart in each of the diastole images, and calculating, by one or more computing devices, a segmentation of the heart in each of the systole images and a segmentation of the heart in each of the diastole images. The method also includes calculating, by one or more computing devices, a volume of the heart in each of the systole images based on the orientation of the heart in the respective systole image and the segmentation of the heart in the respective systole image, and a volume of the heart in each of the diastole images based at least on the orientation of the heart in the respective diastole image and the segmentation of the heart in the respective diastole image. The method also includes determining, by one or more computing devices, the heart parameter based at least on the volume of the heart in each systole image and the volume of the heart in each diastole image, and determining, by one or more computing devices, a confidence score of the heart parameter. The method also includes displaying, by one or more computing devices, the heart parameter and the confidence score.
The series of images can cover six heart cycles, and the method can include identifying six systole images and six diastole images. The method can include generating a wall trace of the heart including a deformable spline connected by a plurality of nodes, and displaying the wall trace of the heart in at least one of the systole images and the diastole images. The method can include receiving a user adjustment of at least one node to modify the wall trace. The method can include modifying the wall trace of the heart in one or more other images, based on the user adjustment. The heart parameter can include ejection fraction.
In accordance with the disclosed subject matter, one or more computer-readable non-transitory storage media embodying software are provided. The software is operable when executed to receive a series of two-dimensional images of a heart, the series covering at least one heart cycle, and identify a first systole image from the series of images associated with systole of the heart and a first diastole image from the series of images associated with diastole of the heart. The software is operable when executed to calculate an orientation of the heart in the first systole image and an orientation of the heart in the first diastole image, and calculate a segmentation of the heart in the first systole image and a segmentation of the heart in the first diastole image. The software is operable when executed to calculate a volume of the heart in the first systole image based on the orientation of the heart in the first systole image and the segmentation of the heart in the first systole image, and a volume of the heart in the first diastole image based at least on the orientation of the heart in the first diastole image and the segmentation of the heart in the first diastole image. The software is operable when executed to determine the heart parameter based at least on the volume of the heart in the first systole image and the volume of the heart in the first diastole image, and determine a confidence score of the heart parameter. The software is operable when executed to display the heart parameter and the confidence score.
In accordance with the disclosed subject matter, a system is provided including one or more processors and a memory coupled to the processors, the memory including instructions executable by the processors. The processors are operable when executing the instructions to receive a series of two-dimensional images of a heart, the series covering at least one heart cycle, and identify a first systole image from the series of images associated with systole of the heart and a first diastole image from the series of images associated with diastole of the heart. The processors are operable when executing the instructions to calculate an orientation of the heart in the first systole image and an orientation of the heart in the first diastole image, and calculate a segmentation of the heart in the first systole image and a segmentation of the heart in the first diastole image. The processors are operable when executing the instructions to calculate a volume of the heart in the first systole image based on the orientation of the heart in the first systole image and the segmentation of the heart in the first systole image, and a volume of the heart in the first diastole image based at least on the orientation of the heart in the first diastole image and the segmentation of the heart in the first diastole image. The processors are operable when executing the instructions to determine the heart parameter based at least on the volume of the heart in the first systole image and the volume of the heart in the first diastole image, and determine a confidence score of the heart parameter. The processors are operable when executing the instructions to display the heart parameter and the confidence score.
Reference will now be made in detail to various exemplary embodiments of the disclosed subject matter, exemplary embodiments of which are illustrated in the accompanying figures. For purpose of illustration and not limitation, the methods and systems are described herein with respect to determining parameters of a heart (human or animal); however, the methods and systems described herein can be used for determining parameters of any organ having varying volumes over time, for example, a bladder. As used in the description and the appended claims, the singular forms, such as “a,” “an,” “the,” and singular nouns, are intended to include the plural forms as well, unless the context clearly indicates otherwise. Accordingly, as used herein, the term image can be a medical image record and can refer to one medical image record, or a plurality of medical image records. For example, and with reference to
Referring to
Workstation 60 can take the form of any known client device. For example, workstation 60 can be a computer, such as a laptop or desktop computer, a personal data or digital assistant (“PDA”), or any other user equipment or tablet, such as a mobile device or mobile portable media player, or combinations thereof. Server 30 can be a service point which provides processing, database, and communication facilities. For example, the server 30 can include dedicated rack-mounted servers, desktop computers, laptop computers, set top boxes, integrated devices combining various features, such as two or more features of the foregoing devices, or the like. Server 30 can vary widely in configuration or capabilities, but can include one or more processors, memory, and/or transceivers. Server 30 can also include one or more mass storage devices, one or more power supplies, one or more wired or wireless network interfaces, one or more input/output interfaces, and/or one or more operating systems. Server 30 can include additional data storage such as VNA/PACS 50, remote PACS, VNA, or other vendor PACS/VNA.
The Workstation 60 can communicate with imaging modality 90 either directly (e.g., through a hard wired connection) or remotely (e.g., through a network described above) via a PACS. The imaging modality 90 can include an ultrasound imaging device, such as an ultrasound machine or ultrasound system that transmits the ultrasound signals into a body (e.g., a patient), receives reflections from the body based on the ultrasound signals, and generates ultrasound images from the received reflections. Although described with respect to an ultrasound imaging device, imaging modality 90 can include any medical imaging modality, including, for example, x-ray (or x-ray's digital counterparts: computed radiography (“CR”) and digital radiography (“DR”)), mammogram, tomosynthesis, computerized tomography (“CT”), magnetic resonance image (“MRI”), and positron emission tomography (“PET”). Additionally or alternatively, the imaging modality 90 can include one or more sensors for generating a physiological signal from a patient, such as electrocardiogram (“EKG”), respiratory signal, or other similar sensor systems.
A user can be any person authorized to access workstation 60 and/or server 30, including a health professional, medical technician, researcher, or patient. In some embodiments a user authorized to use the workstation 60 and/or communicate with the server 30 can have a username and/or password that can be used to login or access workstation 60 and/or server 30. In accordance with the disclosed subject matter, one or more users can operate one or more of the disclosed systems (or portions thereof) and can implement one or more of the disclosed methods (or portions thereof).
Workstation 60 can include GUI 65, memory 61, processor 62, and transceiver 63. Medical image records 71 (e.g., 71A, 71B) received by workstation 60 can be processed using one or more processors 62. Processor 62 can be any hardware or software used to execute computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer to alter its function to a special purpose computer, application-specific integrated circuit (“ASIC”), or other programmable digital data processing apparatus, such that the instructions, which execute via the processor of the workstation 60 or other programmable data processing apparatus, implement the functions/acts specified in the block diagrams or operational block or blocks, thereby transforming their functionality in accordance with embodiments herein. The processor 62 can be a portable embedded micro-controller or micro-computer. For example, processor 62 can be embodied by any computational or data processing device, such as a central processing unit (“CPU”), digital signal processor (“DSP”), ASIC, programmable logic devices (“PLDs”), field programmable gate arrays (“FPGAs”), digitally enhanced circuits, or comparable device or a combination thereof. The processor 62 can be implemented as a single controller, or a plurality of controllers or processors. The processor 62 can implement one or more of the methods disclosed herein.
Workstation 60 can send and receive medical image records 71 (e.g., 71A, 71B) from server 30 using transceiver 63. Transceiver 63 can, independently, be a transmitter, a receiver, or both a transmitter and a receiver, or a unit or device that can be configured both for transmission and reception. In other words, transceiver 63 can include any hardware or software that allows workstation 60 to communicate with server 30. Transceiver 63 can be either a wired or a wireless transceiver. When wireless, the transceiver 63 can be implemented as a remote radio head which is not located in the device itself, but in a mast. While
Server 30 can include a server processor 31 and VNA/PACS 50. The server processor 31 can be any hardware or software used to execute computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer to alter its function to a special purpose computer, ASIC, or other programmable digital data processing apparatus, such that the instructions, which execute via the processor of the client station or other programmable data processing apparatus, implement the functions/acts specified in the block diagrams or operational block or blocks, thereby transforming their functionality in accordance with embodiments herein. In accordance with the disclosed subject matter, the server processor 31 can be a portable embedded micro-controller or micro-computer. For example, server processor 31 can be embodied by any computational or data processing device, such as a CPU, DSP, ASIC, PLDs, FPGAs, digitally enhanced circuits, or comparable device or a combination thereof. The server processor 31 can be implemented as a single controller, or a plurality of controllers or processors.
As shown in
In operation, system 100 can be used to detect a heart parameter, such as ejection fraction, of the heart 80 depicted in the images 71 (e.g., 71A, 71B) of series 70. The system 100 can automate the process of detecting the heart parameters, which can remove the element of human subjectivity (which can reduce errors) and can facilitate the rapid calculation of the parameter (which reduces the time required to obtain results).
The series 70 of images 71 (e.g., 71A, 71B) can be received by system 100 from imaging modality 90 in real time. The system 100 can identify the images 71 (e.g., 71A, 71B) associated with systole and diastole, respectively. For example, systole and diastole can be determined directly from the images 71 (e.g., 71A, 71B) through computation of the area of the left ventricle 81 in each image 71 (e.g., 71A, 71B). Systole can be the image 71 (e.g., 71B) (or images, where several cycles are provided) associated with a minimum area, and diastole can be the image 71 (e.g., 71A) (or images, where several cycles are provided) associated with a maximum area. The area can be calculated as a summation of the pixels within the segmented region of the left ventricle 81. A model can be trained to perform real-time identification and tracking of the left ventricle 81 in each image 71 (e.g., 71A, 71B) of the series 70. For example, the system 100 can use a 2D segmentation model to generate the segmented region, for example, as shown in images 71A and 71B in
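For purpose of illustration and not limitation, and assuming a binary left ventricle mask is available for each frame (for example, from the 2D segmentation model described above), the systole and diastole frames can be selected by summing the pixels of each mask, as in the following simplified sketch:

    import numpy as np

    def select_systole_diastole(masks):
        # masks: list of 2D arrays with 1 inside the segmented left
        # ventricle 81 and 0 elsewhere, one per image of the series.
        areas = np.array([mask.sum() for mask in masks])  # pixel summation per frame
        systole_idx = int(np.argmin(areas))    # smallest area corresponds to systole
        diastole_idx = int(np.argmax(areas))   # largest area corresponds to diastole
        return systole_idx, diastole_idx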
In another embodiment, the model can be trained to identify diastole and systole directly from a sequence of images, based on image features. For example, using a recurrent neural network (“RNN”), a sequence of images can be used as input, and from that sequence the frames that correspond to diastole and systole can be marked.
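For purpose of illustration and not limitation, a simplified sketch of such a sequence model is shown below; the layer sizes, sequence length, and per-frame label convention (0 = neither, 1 = diastole, 2 = systole) are illustrative assumptions rather than the trained model described herein:

    import tensorflow as tf

    def build_frame_classifier(seq_len=32):
        # Input: a sequence of seq_len single-channel 112x112 frames.
        frames = tf.keras.Input(shape=(seq_len, 112, 112, 1))
        # Per-frame convolutional features.
        x = tf.keras.layers.TimeDistributed(
            tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"))(frames)
        x = tf.keras.layers.TimeDistributed(
            tf.keras.layers.GlobalAveragePooling2D())(x)
        # The recurrent layer sees the whole sequence, so frame-to-frame
        # changes in image features can mark diastole and systole.
        x = tf.keras.layers.LSTM(64, return_sequences=True)(x)
        labels = tf.keras.layers.Dense(3, activation="softmax")(x)
        return tf.keras.Model(frames, labels)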
In accordance with the disclosed subject matter, the system 100 can determine a quality metric of the images in the series of two-dimensional images. The system can confirm that the quality metric is above a threshold. For example, if the quality metric is above the threshold, the system 100 can proceed to calculate the volume; if the quality metric is below the threshold, the images will not be used for determining the volume.
The volume calculation for the left ventricle 81 for each of the images 71 (e.g., 71A, 71B) identified as diastole and systole can be a two-step process including (1) segmentation of the frame; and (2) computation of the orientation. For example, and as shown in
To calculate the segmentation of the frame and the orientation, the system 100 can identify the interior (endocardial) and heart wall boundary. This information can be used to obtain the measurements needed to calculate cardiac function metrics. The system 100 can perform the calculation using a model trained with deep learning. The model can be created using (1) an abundance of labeled input data; (2) a suitable deep learning model; and (3) successful training of the model parameters.
For example, the model can be trained using 2,000 data sets, or another amount, for example, 1,000 data sets or 5,000 data sets, collected in the parasternal long-axis view, and with the inner wall boundaries fully traced over a number of cycles. The acquisition frame rate, which can depend on the transducer and imaging settings used, can vary from 20 to 1,000 frames per second (fps). Accordingly, 30 to 100 individual frames can be traced for each cine loop. As is clear to one skilled in the art, more correctly-labeled training data generally results in better AI models. A collection of over 150,000 unique images can be used for training. Training augmentation can include horizontal flip, noise, rotations, shear transformations, contrast, brightness, and deformable image warp. In some embodiments, Generative Adversarial Networks (“GANs”) can be used to generate additional training data. A model using data organized as 2D or 3D sets can be used; however, a 2D model can provide simpler training. For example, a 3D model taking as input a series of images in sequence through the heart cycle, or a sequence of diastole/systole frames, can be used. A human evaluation data set can include approximately 10,000 images at 112×112 pixels, or other resolutions, for example, 128×128 or 256×256 pixels, with manually segmented LV regions. As one skilled in the art would appreciate, different configurations can balance accuracy with inference (execution) time for the model. In a real-time situation, a smaller image can be beneficial to maintain processing speed at the cost of some accuracy.
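For purpose of illustration and not limitation, the listed augmentations can be applied with an image augmentation library such as albumentations (one possible choice, used here only as an example); the probabilities and ranges below are illustrative assumptions:

    import numpy as np
    import albumentations as A

    augment = A.Compose([
        A.HorizontalFlip(p=0.5),                 # horizontal flip
        A.GaussNoise(p=0.3),                     # noise
        A.Rotate(limit=10, p=0.5),               # rotations
        A.Affine(shear=(-5, 5), p=0.3),          # shear transformations
        A.RandomBrightnessContrast(p=0.5),       # contrast and brightness
        A.ElasticTransform(p=0.2),               # deformable image warp
    ])

    # Placeholder frame and traced LV mask; real training data would be
    # loaded here. Applying the same transform to both keeps the traced
    # boundary aligned with the augmented frame.
    frame = np.zeros((112, 112), dtype=np.uint8)
    lv_mask = np.zeros((112, 112), dtype=np.uint8)
    augmented = augment(image=frame, mask=lv_mask)
    image_aug, mask_aug = augmented["image"], augmented["mask"]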
A U-Net model with an input/output size of 128×128 can be trained on a segmentation map of the inner wall region. Other models can be used, including DeepLab, EfficientDet, or MobileNet frameworks, or other suitable models. The model architecture can be designed anew or can be a modified version of the aforementioned models. One skilled in the art will recognize that the number of parameters in the models can vary; typically, the more parameters, the slower the processing time at inference. However, use of external AI processors, higher-end CPUs, and embedded or discrete GPUs can improve processing efficiency.
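For purpose of illustration and not limitation, a simplified U-Net-style sketch with a 128×128 input/output is shown below; the depth and filter counts are illustrative assumptions and are smaller than a production architecture:

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_unet(input_size=(128, 128, 1)):
        inputs = tf.keras.Input(shape=input_size)

        # Encoder
        c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
        p1 = layers.MaxPooling2D()(c1)            # 64x64
        c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
        p2 = layers.MaxPooling2D()(c2)            # 32x32

        # Bottleneck
        b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

        # Decoder with skip connections
        u2 = layers.UpSampling2D()(b)             # back to 64x64
        u2 = layers.Concatenate()([u2, c2])
        c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
        u1 = layers.UpSampling2D()(c3)            # back to 128x128
        u1 = layers.Concatenate()([u1, c1])
        c4 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)

        # Segmentation map of the inner wall (LV) region
        outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
        return tf.keras.Model(inputs, outputs)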
In one example, an additional model configured to identify the orientation of the heart can identify the apex and base points of the heart, the two outflow points, or a slope/intercept pair. The model can output two or more data points (e.g., a set of xy data pairs) or directly the slope and intercept of the heart orientation. Additionally or alternatively, the model used to compute the LV segmentation can also directly generate this information. For example, the segmentation model can generate as a separate output a set of xy data pairs corresponding to the apex and outflow points or the slope and intercept of the orientation line. Alternatively, the model can, as a separate output channel, encode the points of the apex and outflow as regions from which post-processing can identify these positions.
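For purpose of illustration and not limitation, post-processing of either output style can be sketched as follows; the (x, y) coordinate convention and the heatmap channel ordering are assumptions made only for this example:

    import numpy as np

    def orientation_from_points(apex_xy, base_xy):
        # Derive the slope and intercept of the orientation line from the
        # apex and base points, each given as an (x, y) pair.
        (x1, y1), (x2, y2) = apex_xy, base_xy
        if x2 == x1:                      # vertical orientation line
            return np.inf, x1
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        return slope, intercept

    def points_from_heatmaps(heatmaps):
        # If the model instead encodes the apex and outflow points as
        # regions in separate output channels, take each channel's peak.
        points = []
        for ch in range(heatmaps.shape[-1]):
            row, col = np.unravel_index(np.argmax(heatmaps[..., ch]),
                                        heatmaps[..., ch].shape)
            points.append((int(col), int(row)))   # convert (row, col) to (x, y)
        return points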
Training can be performed, for example, on an NVIDIA V100 GPU and can use a TensorFlow/Keras-based training framework. As one skilled in the art would appreciate, other deep-learning-enabled processors can be used for training. Other model frameworks, such as PyTorch, can also be used for training. Other training hardware and other training/model frameworks will become available and are interchangeable.
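For purpose of illustration and not limitation, a minimal TensorFlow/Keras training sketch is shown below; the Dice loss, optimizer settings, and placeholder arrays are illustrative assumptions, and build_unet() refers to the U-Net sketch above:

    import numpy as np
    import tensorflow as tf

    def dice_loss(y_true, y_pred, smooth=1.0):
        # Overlap-based loss commonly used for segmentation masks.
        inter = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])
        union = tf.reduce_sum(y_true, axis=[1, 2, 3]) + tf.reduce_sum(y_pred, axis=[1, 2, 3])
        return 1.0 - tf.reduce_mean((2.0 * inter + smooth) / (union + smooth))

    # Placeholder arrays; real frames and traced LV masks would be loaded here.
    train_images = np.zeros((8, 128, 128, 1), dtype=np.float32)
    train_masks = np.zeros((8, 128, 128, 1), dtype=np.float32)

    model = build_unet()
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss=dice_loss)
    model.fit(train_images, train_masks, batch_size=4, epochs=5)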
The deep learning approach can use separate models trained to identify segmentation and orientation, respectively, or a combined model trained to identify both features with separate outputs for each data type. Training models separately allows each model to be trained and tested independently. As an example, the separate models can run in parallel, which can improve efficiency. Additionally or alternatively, models used to determine the diastole and systole frames can be the same as the LV segmentation model, which is a simple solution, or different, which can enable optimizations to the diastole/systole detection model.
As an example, the models can be combined as shown in the model architecture 200 of
If the model contains more than one output node, it can be trained in a single pass. Alternatively, it can be trained in two separate passes whereby the segmentation output is trained first, at which point the encoding stage's parameters are locked, and only the parameters corresponding to the orientation output are trained. Using two separate passes is a common approach with models containing two distinct types of outputs which do not share a similar dimension, shape, or type. The training model can be selected based on inference efficiency, accuracy, and implementation simplicity and can be different for different hardware and configurations. Additional models can include sequence networks, RNNs, or networks consisting of embedded LSTM, GRU, or other recurrent layers. Such models can be beneficial in that they can utilize prior frame information rather than the instantaneous snapshot of the current frame. Other solutions can utilize 2D models where the input channels are not just the single input frame but can include a number of previous frames. As an example, instead of providing the previous frame, the previous segmentation region can be provided. Additional information can be layered as additional channels to the input data object.
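For purpose of illustration and not limitation, the second pass of such a two-pass scheme can be sketched as follows, assuming a combined Keras model named combined with “segmentation” and “orientation” outputs, shared encoder layers following an “enc_” naming convention, and training arrays train_images, train_masks, and train_points already in scope (all assumptions made only for this example):

    # Pass 2: lock the encoding stage's parameters, then train only the
    # layers feeding the orientation output.
    for layer in combined.layers:
        if layer.name.startswith("enc_"):
            layer.trainable = False

    combined.compile(
        optimizer="adam",
        loss={"segmentation": "binary_crossentropy", "orientation": "mse"},
        # Zero weight on the segmentation loss so only the orientation
        # output drives the parameter updates in this pass.
        loss_weights={"segmentation": 0.0, "orientation": 1.0},
    )
    combined.fit(train_images,
                 {"segmentation": train_masks, "orientation": train_points},
                 epochs=20)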
Using the segmentation and the orientation, system 100 can calculate the volume using calculus or other approximations such as a “method of disks” or “Simpson's method,” where the volume is the summation of a number of disks using the equation shown below:
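For example, with the segmented region divided into N disks of diameter d_i stacked along the orientation line, the volume can be approximated as:

V ≈ Σ_{i=1}^{N} π (d_i / 2)² (h / N)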
where d_i is the diameter of each disk of the segmentation, N is the number of disks, and h is the height of the left ventricle 81 along its orientation (e.g., the major axis).
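For purpose of illustration and not limitation, this summation can be sketched as follows, assuming a binary segmentation mask, an orientation angle relative to the image axes, and an isotropic pixel size in centimeters (all assumptions made only for this example):

    import numpy as np
    from scipy import ndimage

    def lv_volume_method_of_disks(mask, angle_deg, pixel_size_cm, n_disks=20):
        # Rotate so the LV major axis is vertical in the image, so each
        # image row cuts a disk through the ventricle.
        aligned = ndimage.rotate(mask.astype(float), angle_deg, order=0) > 0.5
        rows = np.where(aligned.any(axis=1))[0]
        h = (rows.max() - rows.min() + 1) * pixel_size_cm   # height along the axis
        # Diameter of the segmentation at each of N equally spaced levels.
        levels = np.linspace(rows.min(), rows.max(), n_disks).astype(int)
        d = np.array([aligned[r].sum() * pixel_size_cm for r in levels])
        # Summation of disk volumes, each disk having thickness h / N.
        return np.sum(np.pi * (d / 2.0) ** 2 * (h / n_disks))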
Multiple pairs of systole and diastole in sequence can be used to improve the overall accuracy of the calculation. For example, in a sequence of systole-diastole frames “S D S D S D S,” six separate ejection fractions can be calculated, which can improve the overall accuracy of the calculation. This approach can also give a measure of accuracy (also referred to herein as a confidence score) to the user by calculation of metrics such as standard deviation or variance. The ejection fraction value, or other metrics, can be presented directly to the user in a real-time scenario. For example, the confidence score can help inform the user whether the detected value is accurate. For instance, a standard deviation measures how much the per-cycle measurements vary. A large variance can indicate that the patient's heart cycle is changing too rapidly and thus the measurements are inaccurate. The metrics can be based on the calculated EF value or other measures such as the heart volume, area, or position. For example, if the heart is consistently in the same position, as measured by an intersection-over-union calculation of the diastolic and systolic segmentation regions, then the confidence that the calculations are accurate increases. The confidence score can be displayed as a direct measure of the variance or interpreted and displayed as a relative measure, for example, “high quality,” “medium quality,” or “poor quality.” In some embodiments, an additional model can be trained to classify good heart views and used to provide additional metrics on the heart view used and its suitability for EF calculations.
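For purpose of illustration and not limitation, the confidence score and relative quality label can be computed from the per-cycle measurements as in the following sketch; the thresholds are illustrative assumptions:

    import numpy as np

    def confidence_from_cycles(ef_values):
        # Summarize per-cycle ejection fraction values into a displayed
        # value, a variance-based confidence score, and a quality label.
        ef = float(np.mean(ef_values))
        spread = float(np.std(ef_values))      # cycle-to-cycle variation
        if spread < 3.0:
            quality = "high quality"
        elif spread < 7.0:
            quality = "medium quality"
        else:
            quality = "poor quality"
        return ef, spread, quality

    def iou(mask_a, mask_b):
        # Intersection-over-union of two segmentation regions (e.g., the
        # diastolic and systolic masks) as a position-consistency check.
        inter = np.logical_and(mask_a, mask_b).sum()
        union = np.logical_or(mask_a, mask_b).sum()
        return inter / union if union else 0.0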
As used herein, “real-time” data acquisition does not need to be 100% in synchronization with image acquisition. For example, acquisition of images can occur at about 30 fps. Although the complete ejection fraction calculation can be slightly delayed, a user can still be provided with relevant information. For example, the ejection fraction value does not change dramatically over a short period of time. Indeed, ejection fraction as a measurement requires information from a full heart cycle (volume at diastole and volume at systole). Additionally or alternatively, a sequence of several systole frames can be batched together before ejection fraction is calculated. Thus, the value for ejection fraction can be delayed by one or more heart cycles. This delay can allow a more complex AI calculation to run than might be able to run at the 30 fps rate of image acquisition. Accordingly, a value delayed by, for example, up to 5 seconds (for example, 1 second) is considered “real time” as used herein. However, it is further noted that not all frames are required to be used for the volume calculation. Rather, one or more frames associated with systole or diastole can be used. In some embodiments, initial results can be displayed immediately after one heart cycle and then updated as more heart cycles are acquired and the calculations repeated. For example, as more heart cycles are acquired, an average EF of the previous heart cycles can be displayed. Additionally or alternatively, out of a set of heart cycles, one or more heart cycles can provide incorrect calculations because of patient motion or temporary incorrect positioning of the probe. The displayed cardiac parameters can exclude these cycles from the final average, improving the accuracy of the calculation.
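For purpose of illustration and not limitation, the displayed value can be updated as heart cycles accumulate, excluding cycles that deviate strongly from the rest, as in the following sketch; the deviation threshold is an illustrative assumption:

    import numpy as np

    def displayed_ef(cycle_efs, outlier_z=2.0):
        # cycle_efs: ejection fraction values from the heart cycles acquired
        # so far. Returns None until at least one full cycle is available.
        if len(cycle_efs) == 0:
            return None
        efs = np.asarray(cycle_efs, dtype=float)
        if len(efs) < 3:
            return float(efs.mean())          # initial result after the first cycle(s)
        z = np.abs(efs - efs.mean()) / (efs.std() + 1e-6)
        kept = efs[z < outlier_z]             # drop cycles affected by motion, etc.
        return float(kept.mean()) if kept.size else float(efs.mean())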
Referring to
Once a user has adjusted the shape of any particular spline object 18, the change can be propagated to neighboring images 71 (e.g., 71A-71E). For example, if the user adjusts the spline object 18 for image 71E, which depicts systole, the spline objects 18 for neighboring images 71 (e.g., 71A-71E) depicting systole can be adjusted using frame adaptation methods. It can be understood that, within a short period of time spanning several heart cycles, all of the systole (or diastole) frames are similar to the other frames depicting systole (or diastole). The similarities between frames can be estimated. If the frames are similar, then the results of one frame can be translated to the other frames using methods such as optical flow. The frame the user adjusted can be warped to neighboring systole frames using optical flow, as the other frames can be understood to require adjustments similar to those applied by the user to the initial frame. In accordance with the disclosed subject matter, a condition can be added that, once a frame is manually adjusted, it is not adjusted in future propagated (automatic) adjustments.
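For purpose of illustration and not limitation, propagation of the user adjustment with dense optical flow can be sketched as follows, using OpenCV's Farneback implementation (one possible choice) and assuming single-channel uint8 frames of the same size:

    import cv2
    import numpy as np

    def propagate_nodes(adjusted_frame, neighbor_frame, nodes_xy):
        # Dense optical flow from the user-adjusted frame to a neighboring
        # systole (or diastole) frame; positional arguments follow the
        # cv2.calcOpticalFlowFarneback signature.
        flow = cv2.calcOpticalFlowFarneback(adjusted_frame, neighbor_frame,
                                            None, 0.5, 3, 15, 3, 5, 1.2, 0)
        moved = []
        for x, y in nodes_xy:
            dx, dy = flow[int(round(y)), int(round(x))]   # displacement at the node
            moved.append((x + dx, y + dy))
        return np.array(moved)   # adjusted node positions for the neighboring frame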
In accordance with the disclosed subject matter, an algorithm configured for real-time computation of ejection fraction (for example, an algorithm that can present ejection fraction while a user is imaging a heart) can be simpler and faster than an algorithm configured for post-processing computation of ejection fraction. For example, during imaging, a real-time computation of ejection fraction can be presented to the user. Upon pausing acquisition of images, the system 100 can run a more complex algorithm and provide a corresponding computation of ejection fraction. Accordingly, the system 100 can generate heart parameters, such as ejection fraction, when traditional systems that merely post-process images are too slow to be useful. Moreover, the system 100 can generate more accurate heart parameters than traditional systems and display indications of that accuracy via a confidence score, as described above, thus reducing operator-induced errors.
Although ejection fraction is calculated based on the volume at systole and diastole, area and volume calculations over an entire heart cycle can be useful. Accordingly, spline objects 18 can be generated for all frames (including systole and diastole). This generation can be done by repeating the processes described above, and can include the following workflow: (1) select a region of a data set to process (for example, part of a heart cycle, all of a heart cycle, or multiple heart cycles); (2) perform segmentation on each frame; (3) perform intra-frame comparisons to remove anomalous inference results; (4) compute edges of each frame; (5) identify apex and outflow points; and (6) generate smooth splines from the edge map. Additionally or alternatively, optical flow can be used to generate frames between the already computed diastole-systole frame pairs. This process can incorporate changes made by the user to the diastole and systole spline objects 18.
The method 1000 can begin at step 1010, where the method includes receiving, by one or more computing devices, a series of two-dimensional images of a heart, the series covering at least one heart cycle. At step 1020, the method includes identifying, by one or more computing devices, a first systole image from the series of images associated with systole of the heart and a first diastole image from the series of images associated with diastole of the heart. At step 1030, the method includes calculating, by one or more computing devices, an orientation of the heart in the first systole image and an orientation of the heart in the first diastole image. At step 1040, the method includes calculating, by one or more computing devices, a segmentation of the heart in the first systole image and a segmentation of the heart in the first diastole image. At step 1050, the method includes calculating, by one or more computing devices, a volume of the heart in the first systole image based on the orientation of the heart in the first systole image and the segmentation of the heart in the first systole image, and a volume of the heart in the first diastole image based at least on the orientation of the heart in the first diastole image and the segmentation of the heart in the first diastole image. At step 1060, the method includes determining, by one or more computing devices, the heart parameter based at least on the volume of the heart in the first systole image and the volume of the heart in the first diastole image. At step 1070, the method includes determining, by one or more computing devices, a confidence score of the heart parameter. At step 1080, the method includes displaying, by one or more computing devices, the heart parameter and the confidence score.
In accordance with the disclosed subject matter, the method can repeat one or more steps of the method of
As described above in connection with certain embodiments, certain components, e.g., server 30 and workstation 60, can include a computer or computers, processor, network, mobile device, cluster, or other hardware to perform various functions. Moreover, certain elements of the disclosed subject matter can be embodied in computer readable code which can be stored on computer readable media (e.g., one or more storage memories) and which when executed can cause a processor to perform certain functions described herein. In these embodiments, the computer and/or other hardware play a significant role in permitting the system and method for calculating a heart parameter. For example, the presence of the computers, processors, memory, storage, and networking hardware provides the ability to calculate a heart parameter in a more efficient manner. Moreover, storing and saving the digital records cannot be accomplished with pen or paper, as such information is received over a network in electronic form.
The subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus.
A computer storage medium can be, or can be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal. The computer storage medium also can be, or may be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
The term processor encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA or an ASIC. The apparatus also can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program can, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA or an ASIC.
Processors suitable for the execution of a computer program can include, by way of example and not by way of limitation, both general and special purpose microprocessors. Devices suitable for storing computer program instructions and data can include all forms of non-volatile memory, media and memory devices, including by way of example but not by way of limitation, semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
Additionally, as described above in connection with certain embodiments, certain components can communicate with certain other components, for example via a network, e.g., a local area network or the internet. To the extent not expressly stated above, the disclosed subject matter is intended to encompass both sides of each transaction, including transmitting and receiving. One of ordinary skill in the art will readily understand that with regard to the features described above, if one component transmits, sends, or otherwise makes available to another component, the other component will receive or acquire, whether expressly stated or not.
In addition to the specific embodiments claimed below, the disclosed subject matter is also directed to other embodiments having any other possible combination of the dependent features claimed below and those disclosed above. As such, the particular features presented in the dependent claims and disclosed above can be combined with each other in other possible combinations. Thus, the foregoing description of specific embodiments of the disclosed subject matter has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosed subject matter to those embodiments disclosed.
It will be apparent to those skilled in the art that various modifications and variations can be made in the method and system of the disclosed subject matter without departing from the spirit or scope of the disclosed subject matter. Thus, it is intended that the disclosed subject matter include modifications and variations that are within the scope of the appended claims and their equivalents.