The present invention relates to a method and apparatus for estimating the gestational age of a fetus, and optionally for assessing some other key aspects of pregnancy evaluation.
Gestational Age (GA) of a fetus is defined as the time elapsed since the start of the Last Menstrual Period (LMP). Knowing GA has numerous clinical applications, including due date estimation and growth surveillance. While in some cases it may be assessed from the mother's recollection of the date of her last period, such recall is often unreliable or unavailable, and so estimating GA from ultrasound data is common practice.
An estimate of fetal gestational age is therefore a key output of routine obstetric scanning and is obtained by taking certain measurements of the fetus. In early pregnancy, this is currently done by measuring Crown Rump Length (CRL), and later it is estimated from a number of measurements, commonly including Head Circumference, Femur Length and Abdominal Circumference.
Measuring CRL is therefore a key component of current routine first trimester scanning. It requires capturing an image in the correct imaging plane, which is characterised by certain properties that are well known to trained sonographers and defined in various imaging standards internationally.
In practice, obtaining such an image is time-consuming, with much time being wasted on waiting for the fetus to be in a suitable position for measurement, and there is a clinical desire for a more time-efficient method of GA estimation. It has been estimated that around a third of the total first trimester scanning time is spent on obtaining a suitable CRL image.
A number of other key clinical assessments are made during fetal screening alongside assessment of GA. While the detail of these will vary across territories and scanning institutions, the following are often assessed: The presence of multiple fetuses, and in this case their chorionicity and amnionicity; demonstrating fetal viability by confirming cardiac activity and estimating the heart rate of the fetus(es); whether the pregnancy is ectopic; presence of fetal activity; risk of trisomy in the fetus(es); estimated fetal weight and body composition; fetal growth and development including of the brain; presence of pathologies such as acrania, gastroschisis, spina bifida or an anterior wall defect; presence of markers for genetic abnormality such as increased nuchal fold thickness or nasal bone hypoplasia; presence of risk factors for poor outcomes such as stillbirth; the sex of the fetus(es); fetal presentation (e.g. breech, cephalic etc.); placental location; categorisation of placental appearance; estimating the amount of amniotic fluid.
Improved methods for assessing these fetal characteristics are therefore also desirable.
Embodiments of the present invention aim to provide a method for obtaining reliable GA estimates throughout the course of gestation that does not require such stringent imaging criteria as has been necessary in previously considered approaches. Further embodiments include assessment of other key fetal characteristics including, for example, those listed above.
The present invention is defined in the attached independent claims, to which reference should now be made. Further preferred features may be found in the sub-claims appended thereto.
According to one aspect of the present invention, there is provided a method for estimating the gestational age of a fetus, the method comprising obtaining at least one ultrasound image of at least a part of the fetus, and calculating an estimate of the gestational age of the fetus, and a corresponding confidence assessment for the estimate, from the image.
The method preferably comprises calculating the gestational age of the fetus and an associated confidence interval from any 2D ultrasound image of the fetus. This is in contrast to previously considered methods which require either 2D images from very specific planes to be acquired, or otherwise require a 3D volume.
The method may comprise calculating the gestational age of the fetus from an ultrasound image obtained at any point during gestation.
In a preferred arrangement, the method does not require an ultrasound machine operator to observe the or each ultrasound image.
Preferably, the method comprises calculating the estimate of the gestational age and/or the corresponding confidence assessment for the estimate using a trained machine learning model.
Preferably, the machine learning model is produced by training it on a set of representative ultrasound (US) images of fetuses, for each of which the GA at the time of imaging is known. Model training is preferably accomplished via a process known as supervised learning (which will be well known to a person skilled in the art), whereby the model is configured to achieve the strongest possible association between each image and its corresponding GA value, whilst remaining robust and generalisable. In the present invention, the model and supervised training method are preferably additionally configured to output a range of values in which the GA is expected to lie; the known GA values preferably fall outside this range with low probability.
Preferably, the supervised machine learning process takes the form of deep learning, in which the model is an artificial neural network. The supervised learning method may consist of optimising the parameters of this network, via stochastic gradient descent, in order to minimise a loss function. Preferably, the loss function shall be constructed such that for regression tasks (including for example estimation of GA and fetal heart rate) it generates improved performance in terms of both the accuracy of the estimate and the likelihood that the range supplied contains the true value. There now follows, by way of example, a class of loss functions which have this property. The description is framed in terms of GA prediction, but the skilled person will recognise that it is operable for other regression tasks.
A class of functions that have the desired properties is described by

L = L_reg + α L_conf

where L_reg is a regression term, L_conf is a confidence term, and α is an algorithm hyperparameter.
In a preferred arrangement, the loss function is defined as:

L = (ln y − ŷ)² + α [max(0, |ln y − ŷ| − f⁺(ẑ))]

where y is the known gestational age, ŷ is the model's estimate of ln y, ẑ is a further model output from which the half-width of the predicted range is derived, and f⁺ is a function constrained to return positive values.
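By way of illustration only, a minimal sketch of a loss of this form is given below in Python (using PyTorch). The function and variable names, and the choice of a softplus as the positive function f⁺, are assumptions made for the purposes of the example rather than features of any particular implementation.

```python
import torch
import torch.nn.functional as F

def ga_regression_loss(pred_log_ga: torch.Tensor,
                       pred_raw_width: torch.Tensor,
                       true_ga: torch.Tensor,
                       alpha: float = 1.0) -> torch.Tensor:
    """Regression loss with a confidence term, of the form
    L = (ln y - y_hat)^2 + alpha * max(0, |ln y - y_hat| - f+(z_hat)).

    pred_log_ga    : network estimate of ln(gestational age), shape (batch,)
    pred_raw_width : raw network output from which the half-width of the
                     predicted range is derived, shape (batch,)
    true_ga        : known gestational age y, shape (batch,)
    alpha          : weighting hyperparameter
    """
    log_y = torch.log(true_ga)
    err = torch.abs(log_y - pred_log_ga)

    # Regression term: squared error in log space.
    l_reg = err ** 2

    # Positive function f+ mapping the raw output to a range half-width
    # (softplus is one possible choice).
    half_width = F.softplus(pred_raw_width)

    # Confidence term: penalise only the amount by which the true value
    # falls outside the predicted range.
    l_conf = torch.clamp(err - half_width, min=0.0)

    return (l_reg + alpha * l_conf).mean()
```

In this sketch the confidence term is non-zero only when the true log-age falls outside the predicted half-width of the point estimate, so images that yield unreliable point estimates are driven towards wider predicted ranges.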
For classification tasks (including, for example, prediction of: chorionicity and amnionicity; the sex of the fetus(es); whether the pregnancy is ectopic; presence of fetal activity; risk of trisomy in the fetus(es); presence of pathologies such as acrania or gastroschisis; fetal presentation; placental location) it is also desirable that the model should both generate a prediction and quantify the uncertainty inherent in it. For such tasks, preferably, the loss function shall be constructed such that it generates improved performance in terms of both the accuracy of the estimate and quantification of the uncertainty inherent in it. We now describe, by way of example, a loss function which has this property. The description is framed in terms of predicting if the pregnancy is singleton or multiple, but the skilled person will recognise that it is readily operable for other categorical classification tasks.
The loss (for a single example) is defined as

L = L_class + α L_conf

where L_class is a classification term, L_conf is a confidence term, ψ denotes the digamma function (which appears in the definitions of these terms), and α is an algorithm hyperparameter.
The loss is, of course, summed over all examples in the dataset. The skilled person will recognise that many other loss formulations may be used to achieve the same objective, including formulations based on mean squared error or Bayes risk, for example, and also ones based on distributions other than Dirichlet.
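The references above to a Dirichlet distribution and to the digamma function are consistent with evidential-style classification losses. Purely by way of illustration, and not as a statement of the exact formulation used, the following Python sketch implements one well-known loss of that kind, in which the network outputs Dirichlet concentration parameters and the confidence term is a Kullback-Leibler divergence towards a uniform Dirichlet. The function and variable names, and the use of a softplus to produce non-negative evidence, are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def dirichlet_classification_loss(logits: torch.Tensor,
                                  targets: torch.Tensor,
                                  alpha_weight: float = 0.1) -> torch.Tensor:
    """Illustrative evidential-style loss L = L_class + alpha * L_conf.

    logits       : raw network outputs, shape (batch, num_classes)
    targets      : one-hot class labels, shape (batch, num_classes)
    alpha_weight : the hyperparameter alpha weighting the confidence term
    """
    # Non-negative "evidence" and Dirichlet concentration parameters a_k.
    evidence = F.softplus(logits)
    conc = evidence + 1.0
    strength = conc.sum(dim=-1, keepdim=True)  # S = sum_k a_k

    # Classification term: expected cross-entropy under the Dirichlet,
    # which is where the digamma function appears.
    l_class = (targets * (torch.digamma(strength) - torch.digamma(conc))).sum(dim=-1)

    # Confidence term: KL divergence between the Dirichlet fitted to the
    # "misleading" evidence (true-class evidence removed) and a uniform
    # Dirichlet, discouraging confident but wrong predictions.
    misleading = targets + (1.0 - targets) * conc
    l_conf = _kl_to_uniform_dirichlet(misleading)

    return (l_class + alpha_weight * l_conf).mean()


def _kl_to_uniform_dirichlet(conc: torch.Tensor) -> torch.Tensor:
    """KL( Dir(conc) || Dir(1, ..., 1) ), computed per example."""
    k = conc.shape[-1]
    strength = conc.sum(dim=-1, keepdim=True)
    term1 = torch.lgamma(strength.squeeze(-1)) - torch.lgamma(conc).sum(dim=-1)
    term1 = term1 - torch.lgamma(torch.tensor(float(k), device=conc.device))
    term2 = ((conc - 1.0) * (torch.digamma(conc) - torch.digamma(strength))).sum(dim=-1)
    return term1 + term2
```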
Having trained such a model it is possible to obtain from it point estimates of the class probabilities by computing, for example, the expectation of the resultant Dirichlet distribution: p̂_k = a_k / S, where a_1, …, a_K are the concentration parameters of that distribution and S = a_1 + … + a_K.

A predicted category may be obtained by identifying the category having the highest class probability: k̂ = argmax_k p̂_k.

The uncertainty inherent in the predicted category may be estimated, for example, by computing the variance associated with it: Var[p_k̂] = a_k̂ (S − a_k̂) / (S² (S + 1)).
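Assuming, as in the sketch above, that the model outputs Dirichlet concentration parameters, these three quantities may be computed as follows; the function name is illustrative only.

```python
import torch

def dirichlet_prediction(conc: torch.Tensor):
    """Point estimates, predicted category and associated variance for a
    Dirichlet with concentration parameters conc of shape (num_classes,)."""
    strength = conc.sum()                 # S
    probs = conc / strength               # expectation: p_k = a_k / S
    predicted = int(torch.argmax(probs))  # category with highest probability
    # Variance of each class probability under the Dirichlet.
    variance = conc * (strength - conc) / (strength ** 2 * (strength + 1.0))
    return probs, predicted, variance[predicted]
```

For the singleton-versus-multiple example discussed earlier, conc would contain two concentration parameters, one per class.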
Preferably the neural network architecture will be a convolutional neural network, a vision transformer, or a variant thereof. Preferably the stochastic gradient descent will be performed using the Adam optimiser.
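Purely as an illustration of how these pieces fit together, a single training step might be sketched as follows, re-using the ga_regression_loss function sketched earlier; the choice of backbone, learning rate and channel handling are assumptions made for the example and not features of the invention.

```python
import torch
import torchvision

# Backbone placeholder: any convolutional network or vision transformer that
# maps an ultrasound frame to two outputs (estimate of ln GA, raw range width).
# Greyscale frames are assumed here to be replicated to three channels.
model = torchvision.models.resnet18(num_classes=2)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, true_ga: torch.Tensor) -> float:
    """One stochastic gradient descent step on a batch of ultrasound images,
    using the ga_regression_loss function sketched earlier."""
    outputs = model(images)                                   # (batch, 2)
    loss = ga_regression_loss(outputs[:, 0], outputs[:, 1], true_ga)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return float(loss)
```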
In a preferred arrangement, the method comprises determining at least one relative dimension of one or more anatomical features of the fetus.
The method preferably comprises determining a confidence value of the estimate of the gestational age. The confidence value may comprise an expression of the confidence that the estimated gestational age of the fetus is accurate. The method may comprise determining a confidence value of the estimate of the gestational age expressed as a range of ages, wherein the narrower the range the higher the confidence value.
The method may comprise obtaining a plurality of ultrasound images of the fetus. In a preferred arrangement, the method comprises determining a GA estimate and corresponding confidence range by applying a machine learning model to a plurality of ultrasound images, then applying numerical processing to the plurality of outputs obtained (including, though not limited to, averaging them).
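One simple form of such numerical processing, given by way of example only, is a confidence-weighted average in which each per-image estimate is weighted by the inverse of its predicted range half-width; the function and variable names below are illustrative assumptions.

```python
from typing import List, Tuple

def combine_estimates(per_image: List[Tuple[float, float]]) -> Tuple[float, float]:
    """Combine (ga_estimate, range_half_width) pairs from several images.

    Each estimate is weighted by the inverse of its half-width, so that
    images yielding narrow (high-confidence) ranges dominate the result.
    Returns the combined estimate and, as a simple proxy for the combined
    confidence, the narrowest individual half-width.
    """
    if not per_image:
        raise ValueError("no estimates supplied")
    weights = [1.0 / max(half_width, 1e-6) for _, half_width in per_image]
    total = sum(weights)
    combined = sum(w * ga for w, (ga, _) in zip(weights, per_image)) / total
    best_half_width = min(half_width for _, half_width in per_image)
    return combined, best_half_width
```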
In a preferred arrangement, one or more of the steps of determining a relative dimension, calculating a gestational age estimate and determining a confidence value is performed by an electronic processor.
Preferably, the step of determining at least one relative dimension within the image comprises determining a scale of one part of the image relative to at least one other part of the same image. Alternatively, or in addition, the step of determining at least one relative dimension within the image may comprise determining a scale of one part of the image relative to at least one part of at least one other image.
The image (or each of the images) may comprise an image of at least one anatomical feature of a fetus.
The method may comprise producing a plurality of estimates, more preferably with a plurality of corresponding confidence values, and filtering the plurality of estimates to select only one or more estimates that meet a predetermined confidence value threshold. The method may comprise producing a plurality of estimates, more preferably with a plurality of corresponding confidence values, and ranking the estimates based on confidence value.
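Filtering and ranking of this kind may, for example, be implemented as in the following sketch; the threshold value and field names are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GaEstimate:
    ga_days: float          # estimated gestational age
    half_width_days: float  # half-width of the predicted range

def filter_and_rank(estimates: List[GaEstimate],
                    max_half_width_days: float = 3.0) -> List[GaEstimate]:
    """Keep only estimates whose predicted range is narrow enough to meet a
    confidence threshold, then rank them from most to least confident."""
    kept = [e for e in estimates if e.half_width_days <= max_half_width_days]
    return sorted(kept, key=lambda e: e.half_width_days)
```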
The method may include directing an operative to obtain one or more specific images of the fetus. In one example, the operative may be instructed from the outset to acquire images of certain regions of the fetus that are known (whether through reasoning derived from clinical expertise or via data analysis) to yield accurate estimates of the GA. In another example the operative may be dynamically instructed, while scanning, to acquire additional images in the event that the images already acquired have wide confidence ranges associated with them.
The method may include arithmetically processing, for example averaging, the estimates of gestational age obtained from a plurality of images.
The images may be selected by an operative during a scan. Alternatively, or in addition, the images may comprise one or more sample frames captured/obtained automatically during a scanning operation.
The method may comprise a method of estimating biometric measurement(s) including, for example, one or more of crown rump length (CRL), head circumference (HC), femur length (FL), abdominal circumference (AC) and trans-cerebellar diameter (TCD).
The invention also includes a method of making one or more clinical assessments of a fetus, the method comprising obtaining an ultrasound image of the fetus and processing the image using a trained machine learning model to determine one or more of the following, including but not limited to: the presence of multiple fetuses, and in this case their chorionicity and amnionicity; demonstrating fetal viability by confirming cardiac activity and estimating the heart rate of the fetus(es); whether the pregnancy is ectopic; presence of fetal activity; risk of trisomy in the fetus(es); estimated fetal weight and body composition; fetal growth and development including of the brain; presence of pathologies such as acrania, gastroschisis, spina bifida or an anterior wall defect; presence of markers for genetic abnormality such as increased nuchal fold thickness or nasal bone hypoplasia; presence of risk factors for poor outcomes such as stillbirth; the sex of the fetus(es); fetal presentation (e.g. breech, cephalic etc.); placental location; categorisation of placental appearance; estimating the amount of amniotic fluid.
According to another aspect of the present invention, there is provided apparatus for estimating the gestational age of a fetus, the apparatus comprising an ultrasound image capturing device for obtaining an image of at least a part of the fetus, and an electronic processing device arranged to process the image, and to calculate an estimate of the gestational age of the fetus, and a corresponding confidence assessment of the estimate.
Preferably, the processing device is arranged to use a trained machine learning model to calculate the estimate and/or the confidence assessment.
The apparatus is preferably arranged to capture and process a plurality of ultrasound images of the fetus. In a preferred arrangement, the apparatus is arranged to determine at least one relative dimension of one or more anatomical features of the fetus.
The apparatus is preferably arranged to determine a confidence value for the, or each, estimate of the gestational age. The confidence value may comprise an expression of the confidence that the estimated gestational age of the fetus is accurate. The apparatus is preferably arranged in use to determine a confidence value of the estimate of the gestational age expressed as a range of ages, wherein the narrower a range the higher the confidence value.
The apparatus may be configured to provide real-time feedback to the user on the confidence range of the GA estimate that has been obtained, in order that they may direct scanning toward more suitable images when necessary.
The processing may be implemented directly on the ultrasound apparatus or else on separate hardware which receives a video feed from the ultrasound apparatus and, optionally displays results on a separate monitor.
The image (or each of the images) may comprise an image of at least one anatomical feature of a fetus.
The invention also includes apparatus for making one or more clinical assessments of a fetus, the apparatus comprising a processor for processing the image using a trained machine learning model to determine one or more of the following, including but not limited to: the presence of multiple fetuses, and in this case their chorionicity and amnionicity; demonstrating fetal viability by confirming cardiac activity and estimating the heart rate of the fetus(es); whether the pregnancy is ectopic; presence of fetal activity; risk of trisomy in the fetus(es); estimated fetal weight and body composition; fetal growth and development including of the brain; presence of pathologies such as acrania, gastroschisis, spina bifida or an anterior wall defect; presence of markers for genetic abnormality such as increased nuchal fold thickness or nasal bone hypoplasia; presence of risk factors for poor outcomes such as stillbirth; the sex of the fetus(es); fetal presentation (e.g. breech, cephalic etc.); placental location; categorisation of placental appearance; estimating the amount of amniotic fluid.
In a further aspect, the invention provides a computer program product on a computer readable medium, comprising instructions that, when executed by a computer, cause the computer to perform a method of estimating the gestational age of a fetus, or of making one or more clinical assessments of a fetus, the method being according to any statement herein.
The invention also comprises a program for causing a device to perform a method of estimating the gestational age of a fetus, or for making one or more clinical assessments of a fetus, according to any statement herein.
The invention may include any combination of the features or limitations referred to herein, except such a combination of features as are mutually exclusive, or mutually inconsistent.
A preferred embodiment of the present invention will now be described, by way of example only, with reference to the accompanying diagrammatic drawings.
In the developing fetus, different structures develop at different rates and change throughout gestation. In embodiments of the present invention, the way in which various structures change over time and, optionally, the way in which different structures change with respect to one another, can be used to estimate the age of the fetus using a trained model.
The current standard approach to estimating GA is to obtain biometric measurements of the fetus and to make use of correlations, published in the clinical literature, between those measurements and GA. These measurements must be taken from imaging planes that have particular properties, and acquiring such images and taking accurate measurements requires a high level of skill from the operator and extensive training. It would be highly beneficial, both to healthcare providers and patients, if it were possible to significantly de-skill the process of accurately estimating GA, such that it could be performed by a much broader population after minimal training. This would enable assessment to be performed in a primary care setting, for instance. It would be further preferable if such a process could be performed without the operator having to look at the ultrasound images at all. This is especially valuable where it is undesirable (or illegal, as in India) to reveal the sex of the fetus.
The required level of operator skill may be reduced somewhat by the use of 3D ultrasound probes which, in principle at least, allow the burden of plane finding and measurement to be shifted from the operator to an algorithm. This is not a complete shift, since acquiring a suitable 3D volume still requires a higher level of skill than the average person has. Also, in practice, the plane-finding and measurement problems are very difficult: all approaches that we are aware of are either less accurate than manual biometric measurement, or impractical for use during routine scanning (where outputs are required in near real time), or both. Additionally, 3D ultrasound probes (and the types of ultrasound machine that they can work with) are significantly more expensive than 2D probes. It is therefore desirable to have a method of estimating GA which takes solely 2D ultrasound images as its inputs.
In some implementations, embodiments of the present invention use deep learning techniques to automatically generate reliable estimates of gestational age from ultrasound images directly, rather than via proxy biometric measurements. The method and associated apparatus are preferably capable of generating accurate estimates from a broad range of images, without specific images of the fetus, or images having particular properties, having to be carefully acquired, and without the operator even having to look at the image. Although the input ultrasound images do not contain information about the real-world scale of the anatomy present within them, they do contain information on the relative scales of different parts of that anatomy, which is informative about fetal growth without requiring knowledge of the real-world size of pixels in the image.
A trained machine learning model is designed to produce an estimate of gestational age from any image of the fetus, rather than requiring images that have certain properties relating to the position of the fetus and the presence of certain anatomical features. Since some images presented to the model may be unsuitable for this purpose, the model also reports its confidence in the prediction that it makes, which may be in the form of a prediction interval, or range, expected to contain the actual age with a stated confidence (expressed, for example, as a percentage). The prediction intervals are dynamic and based on the suitability of the input image for the task: on good, or highly suitable, images the intervals are narrow, and on unsuitable ones they are wide. The model predictions can then be filtered to include only those which date the gestational age with sufficiently high precision.
In some embodiments of the invention, the sonographer/operative may be directed to obtain certain images of the fetus that are known to be highly predictive of gestational age (e.g., mid-sagittal view of the whole fetus, axial cross-section of the head), and these images combined as inputs to the model to improve its predictive accuracy.
The model outputs may be averaged over a number of images (e.g., a collection selected by the sonographer during scanning, and/or a sample of frames obtained in real-time during scanning) where doing so is found to improve the accuracy of the predictions.
While the primary purpose of the invention is the estimation of gestational age, embodiments thereof may also be capable of estimating the biometric measurements (CRL, head circumference, etc.) that are currently in common use as the basis of GA estimation and for growth surveillance.
A further purpose of the invention is to additionally output other predictions that are relevant to key aspects of clinical assessment in early pregnancy, including, though not limited to: the presence of multiple fetuses, and in this case their chorionicity and amnionicity; demonstrating fetal viability by confirming cardiac activity and estimating the heart rate of the fetus(es); whether the pregnancy is ectopic; presence of fetal activity; risk of trisomy in the fetus(es); estimated fetal weight and body composition; fetal growth and development including of the brain; presence of pathologies such as acrania, gastroschisis, spina bifida or an anterior wall defect; presence of markers for genetic abnormality such as increased nuchal fold thickness or nasal bone hypoplasia; presence of risk factors for poor outcomes such as stillbirth; the sex of the fetus(es); fetal presentation (e.g. breech, cephalic etc.); placental location; categorisation of placental appearance; estimating the amount of amniotic fluid.
Outputs which depend upon an analysis of multiple frames of video require a small adjustment to model configuration, as follows. Representation vectors, preferably obtained from a layer immediately prior to the prediction heads of the model, should be cached for each frame. The prediction heads should be adjusted to take as input a concatenation of some subset of these cached representation vectors. These adjustments are preferably made during both model training and in its application.
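A sketch of this adjustment is given below, by way of example only; the division into a feature extractor and prediction head, the cache length and the layer dimensions are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
from collections import deque

class MultiFrameHead(nn.Module):
    """Prediction head that consumes a concatenation of cached per-frame
    representation vectors, for outputs (such as heart-rate estimation)
    that depend on an analysis of multiple frames of video."""

    def __init__(self, feature_dim: int = 256, num_frames: int = 8, out_dim: int = 1):
        super().__init__()
        self.num_frames = num_frames
        self.head = nn.Linear(feature_dim * num_frames, out_dim)
        # Cache of representation vectors taken from the layer immediately
        # prior to the prediction heads, one entry per recent frame.
        self.cache: deque = deque(maxlen=num_frames)

    def add_frame(self, representation: torch.Tensor) -> None:
        """Cache the representation vector for one frame, shape (1, feature_dim)."""
        self.cache.append(representation)

    def forward(self) -> torch.Tensor:
        """Apply the head to a concatenation of the cached vectors."""
        if len(self.cache) < self.num_frames:
            raise RuntimeError("not enough cached frames yet")
        return self.head(torch.cat(list(self.cache), dim=-1))
```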
Embodiments of the invention are primarily concerned with the accurate estimation of gestational age during the first trimester of pregnancy without the need for the sonographer to acquire an image of the CRL in perfect position, with the expected result being a substantial reduction in the time required to complete the scan (e.g., around 7 minutes per scan). However, GA estimates in later stages of pregnancy may also be produced.
Whereas the embodiments described above use the example of gestational age estimation, it will be understood by the skilled person that there may, as an alternative or in addition, be assessments of different kinds, such as (but not limited to): the presence of multiple fetuses, and in this case their chorionicity and amnionicity; demonstrating fetal viability by confirming cardiac activity and estimating the heart rate of the fetus(es); whether the pregnancy is ectopic; presence of fetal activity; risk of trisomy in the fetus(es); estimated fetal weight and body composition; fetal growth and development including of the brain; presence of pathologies such as acrania, gastroschisis, spina bifida or an anterior wall defect; presence of markers for genetic abnormality such as increased nuchal fold thickness or nasal bone hypoplasia; presence of risk factors for poor outcomes such as stillbirth; the sex of the fetus(es); fetal presentation (e.g. breech, cephalic etc.); placental location; categorisation of placental appearance; estimating the amount of amniotic fluid.
Whilst endeavouring in the foregoing specification to draw attention to those features of the invention believed to be of particular importance, it should be understood that the applicant claims protection in respect of any patentable feature or combination of features referred to herein, and/or shown in the drawings, whether or not particular emphasis has been placed thereon.
Number | Date | Country | Kind |
---|---|---|---|
2211036.5 | Jul 2022 | GB | national |