The present disclosure relates to predicting a shape of an anatomical region. A computer-implemented method, a computer program product, and a system are disclosed.
An aneurism is an unusually enlarged region of a blood vessel. Aneurisms are caused by weaknesses in the blood vessel wall. Aneurisms can develop in any blood vessel in the body, and most frequently occur in the brain and in the abdominal aorta. Aneurisms require treatment in order to avoid the risk of rupture and consequent internal bleeding and/or haemorrhagic stroke.
The monitoring of aneurisms, and moreover anatomical regions in general, often involves the acquisition of an initial three-dimensional, i.e. volumetric, image of the anatomical region. Subsequently, two-dimensional images of the anatomical region may be acquired over time during follow-up imaging procedures in order to investigate how the anatomical region evolves. The initial volumetric image provides a clinician with detailed information on the anatomical region, and may for example be generated with a computed tomography "CT", or a magnetic resonance "MR" imaging system. The initial volumetric image may be generated using a contrast agent. CT angiography "CTA", or MR angiography "MRA" images may for example be generated for this purpose. The two-dimensional images that are acquired during the follow-up imaging procedures may be generated periodically, for example every three months, or at different time intervals. The two-dimensional images are often generated using a projection imaging system such as an X-ray imaging system. A patient's exposure to X-ray radiation may be reduced by generating two-dimensional images instead of volumetric images during the follow-up imaging procedures. The two-dimensional images are often generated using a contrast agent. Digital subtraction angiography "DSA" images may, for example, be generated for this purpose. In addition to aneurisms, anatomical regions such as lesions, stenoses, and tumors may also be monitored in this manner.
The ability to accurately evaluate how an anatomical region evolves over time between the acquisition of the initial volumetric image, and the acquisition of the subsequent two-dimensional images at the follow-up imaging procedures, is important since this informs critical decisions such as the follow-up imaging interval, and the need for an interventional procedure.
However, the interpretation of the two-dimensional images is challenging in view of the limited shape information they provide as compared to the initial volumetric image.
Consequently, there is a need for improvements in determining the shape of anatomical regions over time.
According to one aspect of the present disclosure, a computer-implemented method of predicting a shape of an anatomical region includes:
Examples of the present disclosure are provided with reference to the following description and figures. In this description, for the purposes of explanation, numerous specific details of certain examples are set forth. Reference in the specification to "an example", "an implementation" or similar language means that a feature, structure, or characteristic described in connection with the example is included in at least that one example. It is to be appreciated that features described in relation to one example may also be used in another example, and that all features are not necessarily duplicated in each example for the sake of brevity. For instance, features described in relation to a computer-implemented method may be implemented in a computer program product, and in a system, in a corresponding manner.
In the following description, reference is made to computer-implemented methods that involve predicting a shape of an anatomical region. Reference is made to an anatomical region in the form of an aneurism. However, it is to be appreciated that the methods may also be used to predict the shape of other anatomical regions in a similar manner. For example, the methods may be used to predict the shapes of lesions, stenoses, and tumors. Moreover, it is to be appreciated that the anatomical region may be located within the vasculature, or in another part of the anatomy.
It is noted that the computer-implemented methods disclosed herein may be provided in the form of a non-transitory computer-readable storage medium including computer-readable instructions stored thereon, which, when executed by at least one processor, cause the at least one processor to perform the method. In other words, the computer-implemented methods may be implemented in a computer program product. The computer program product can be provided by dedicated hardware, or by hardware capable of running software in association with appropriate software. In a similar manner, the computer-implemented methods disclosed herein may be implemented by a system comprising one or more processors that are configured to carry out the methods. When provided by a processor, the functions of the method features can be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which can be shared. The explicit use of the terms "processor" or "controller" should not be interpreted as exclusively referring to hardware capable of running software, and can implicitly include, but is not limited to, digital signal processor "DSP" hardware, read only memory "ROM" for storing software, random access memory "RAM", a non-volatile storage device, and the like. Furthermore, examples of the present disclosure can take the form of a computer program product accessible from a computer-usable storage medium, or a computer-readable storage medium, the computer program product providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable storage medium or a computer-readable storage medium can be any apparatus that can comprise, store, communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device or propagation medium. Examples of computer-readable media include semiconductor or solid-state memories, magnetic tape, removable computer disks, random access memory "RAM", read-only memory "ROM", rigid magnetic disks, and optical disks. Current examples of optical disks include compact disk-read only memory "CD-ROM", compact disk-read/write "CD-R/W", Blu-Ray™ and DVD.
As mentioned above, the ability to accurately evaluate how an anatomical region evolves over time between the acquisition of an initial volumetric image, and the acquisition of subsequent two-dimensional images at follow-up imaging procedures, is important since this informs critical decisions such as the follow-up imaging interval, and the need for an interventional procedure. However, the interpretation of the subsequent two-dimensional images is challenging in view of the limited shape information they provide as compared to the initial volumetric image.
By way of an example, the monitoring of an aneurism over time often involves generating an initial volumetric CT image of the aneurism, and the subsequent generation of two-dimensional DSA projection images during follow-up imaging procedures. DSA imaging employs a contrast agent that highlights the blood flow within the vasculature.
With reference to the inference-time method, in an operation S110, historic volumetric image data I13D representing an anatomical region at a historic point in time t1 is received.
The historic volumetric image data I13D may be provided by various types of imaging systems, including for example a CT imaging system, an MRI imaging system, an ultrasound imaging system and a positron emission tomography “PET” imaging system. In some examples, a contrast agent may be used to generate the historic volumetric image data I13D. Thus, the historic volumetric image data I13D that is received in the operation S110 may for example include MRI, CT, MRA, CTA, ultrasound, or PET image data.
With continued reference to the inference-time method, in an operation S120, the received historic volumetric image data I13D is inputted into the neural network 110; and in an operation S130, the neural network 110 generates predicted subsequent volumetric image data representing the anatomical region at a subsequent point in time t2, tn.
In general, the training of a neural network involves inputting a large training dataset into the neural network, and iteratively adjusting the neural network's parameters until the trained neural network provides an accurate output. Training is often performed using a Graphics Processing Unit "GPU" or a dedicated neural processor such as a Neural Processing Unit "NPU" or a Tensor Processing Unit "TPU". Training often employs a centralized approach wherein cloud-based or mainframe-based neural processors are used to train a neural network. Following its training with the training dataset, the trained neural network may be deployed to a device for analyzing new input data during inference. The processing requirements during inference are significantly less than those required during training, allowing the neural network to be deployed to a variety of systems such as laptop computers, tablets, mobile phones and so forth. Inference may for example be performed by a Central Processing Unit "CPU", a GPU, an NPU, a TPU, on a server, or in the cloud.
The process of training the neural network 110 therefore includes adjusting its parameters. The parameters, or more particularly the weights and biases, control the operation of activation functions in the neural network. In supervised learning, the training process automatically adjusts the weights and the biases, such that when presented with the input data, the neural network accurately provides the corresponding expected output data. In order to do this, the value of a loss function, or error, is computed based on a difference between predicted output data and the expected output data. The value of the loss function may be computed using functions such as the negative log-likelihood loss, the mean squared error, the Huber loss, or the cross entropy loss. During training, the value of the loss function is typically minimized, and training is terminated when the value of the loss function satisfies a stopping criterion. Sometimes, training is terminated when the value of the loss function satisfies one or more of multiple criteria.
Various methods are known for solving the loss minimization problem, such as gradient descent, Quasi-Newton methods, and so forth. Various algorithms have been developed to implement these methods and their variants, including but not limited to Stochastic Gradient Descent "SGD", batch gradient descent, mini-batch gradient descent, Gauss-Newton, Levenberg-Marquardt, Momentum, Adam, Nadam, Adagrad, Adadelta, RMSProp, and Adamax "optimizers". These algorithms compute the derivative of the loss function with respect to the model parameters using the chain rule. This process is called backpropagation since derivatives are computed starting at the last layer, or output layer, moving toward the first layer, or input layer. These derivatives inform the algorithm how the model parameters must be adjusted in order to minimize the loss function. That is, adjustments to model parameters are made starting from the output layer and working backwards in the network until the input layer is reached. In a first training iteration, the initial weights and biases are often randomized. The neural network then predicts the output data, which is likewise random.
Backpropagation is then used to adjust the weights and the biases. The training process is performed iteratively by making adjustments to the weights and biases in each iteration. Training is terminated when the error, or difference between the predicted output data and the expected output data, is within an acceptable range for the training data, or for some validation data. Subsequently, the neural network may be deployed, and the trained neural network makes predictions on new input data using the trained values of its parameters. If the training process was successful, the trained neural network accurately predicts the expected output data from the new input data.
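By way of illustration, the following is a minimal PyTorch sketch of the generic supervised training loop described above; the network, the random data, the learning rate, and the stopping criterion are placeholders rather than the disclosure's actual configuration:

```python
import torch

# Minimal sketch of the supervised training loop described above.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # one of the optimizers listed above
loss_fn = torch.nn.MSELoss()                               # e.g. the mean squared error

inputs = torch.randn(64, 16)   # training input data
targets = torch.randn(64, 1)   # expected output data

for iteration in range(1000):
    optimizer.zero_grad()
    predicted = model(inputs)            # forward pass: predicted output data
    loss = loss_fn(predicted, targets)   # value of the loss function
    loss.backward()                      # backpropagation of the derivatives
    optimizer.step()                     # adjust the weights and biases
    if loss.item() < 1e-3:               # stopping criterion
        break
```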
Various examples of methods for training the neural network 110 are described below.
With reference to an example training method, in an operation S210, volumetric training image data I13D representing the anatomical region at an initial time step t1 is received; and in an operation S220, two-dimensional training image data I22D, In2D, In+12D representing the anatomical region at a plurality of time steps t2, tn, tn+1 in a sequence after the initial time step t1 is received.
The volumetric training image data I13D that is received in the operation S210 may be provided by any of the imaging systems mentioned above for the historic volumetric image data I13D; i.e. it may be provided by a CT imaging system, or an MRI imaging system, or an ultrasound imaging system, or a positron emission tomography “PET” imaging system. Thus, the volumetric training image data I13D that is received in the operation S210 may for example include MRI, CT, MRA, CTA, ultrasound, or PET image data.
The volumetric training image data I13D that is received in the operation S210 represents the anatomical region at an initial time step t1. The two-dimensional training image data I22D, In2D, In+12D that is received in the operation S220 represents the anatomical region at each of a plurality of time steps t2, tn, tn+1 in a sequence after the initial time step t1. The use of various types of training image data is contemplated for the two-dimensional training image data I22D, In2D, In+12D. In some examples, the two-dimensional training image data I22D, In2D, In+12D is provided by a two-dimensional imaging system, such as for example an X-ray imaging system or a 2D ultrasound imaging system. An X-ray imaging system generates projection data, and therefore the two-dimensional training image data in the former example may be referred to as projection training image data. In accordance with these examples, the two-dimensional training image data I22D, In2D, In+12D 221 that is received in the operation S220 may therefore include two-dimensional X-ray image data, contrast-enhanced 2D X-ray image data, 2D DSA image data, or 2D ultrasound image data. In some examples, however, the two-dimensional training image data I22D, In2D may be generated by projecting volumetric training image data that is generated by a volumetric imaging system, such as a CT, an MRI, an ultrasound, or a PET imaging system, onto a plane. Techniques such as ray casting or other known methods may be used to project the volumetric training image data onto a plane. This may be useful in situations where only volumetric training image data is available.
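For illustration, the following is a minimal sketch of such a projection, assuming an axis-aligned parallel-ray geometry rather than a registered image plane; the function name and the synthetic volume are illustrative:

```python
import numpy as np

def project_volume(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Collapse a 3D volume onto a plane by integrating intensities along
    one axis -- a simple parallel-ray stand-in for the ray casting
    mentioned above."""
    return volume.sum(axis=axis)

# Example: a synthetic 64x64x64 volume projected onto a 64x64 plane.
volume = np.random.rand(64, 64, 64)
projection = project_volume(volume, axis=0)  # shape (64, 64)
```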
The two-dimensional training image data I22D, In2D, In+12D may for example be generated periodically, i.e. at regular intervals after the initial time step t1, for example every three months, or at different intervals after the initial time step t1, i.e. aperiodically.
The volumetric training image data I13D, and the two-dimensional training image data I22D, In2D, In+12D that are received in the respective operations S210 and S220, may be received via any form of data communication, as mentioned above for the historic volumetric image data I13D.
The volumetric training image data I13D that is received in the operation S210, and/or the two-dimensional training image data I22D, In2D that is received in the operation S220, may also be annotated. The annotation may be performed manually by an expert user in order to identify the anatomical region, for example the aneurism. Alternatively, the annotation may be performed automatically. In this respect, the use of various automatic image annotation techniques from the image processing field is contemplated, including for example binary segmentation, a triangular mesh extracted from a binary segmentation for 3D images, and so forth. The use of known image segmentation techniques is contemplated, such as for example: thresholding, template matching, active contour modeling, model-based segmentation, neural networks, e.g., U-Nets, and so forth.
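For illustration, the simplest of the listed techniques, global thresholding, might look as follows; the threshold level is an assumption to be tuned per imaging modality:

```python
import numpy as np

def threshold_segment(image: np.ndarray, level: float) -> np.ndarray:
    """Binary segmentation by global thresholding: voxels or pixels whose
    intensity exceeds `level` are labeled as the anatomical region."""
    return (image > level).astype(np.uint8)

# Example: segment a synthetic volume at an assumed intensity level of 0.5.
mask = threshold_segment(np.random.rand(64, 64, 64), level=0.5)
```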
The operations inputting S230, generating S240, projecting S250, and adjusting S260 that are performed in the above training method are described in more detail below.
The operation of constraining the predicted volumetric shape is therefore implemented by the first loss function 130. Loss functions such as the MSE, the L2 loss, or the binary cross entropy loss, and so forth, may serve as the first loss function 130. The first loss function may, for example, be defined as follows.
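One plausible form, assuming the MSE is chosen, and writing $\hat{I}^{3D}_{2}$ for the predicted volumetric image data for the time step t2, $I^{2D}_{2}$ for the received two-dimensional training image data I22D, and $P(\cdot)$ for the projection onto the image plane of I22D, is:

$$\mathcal{L}_1 = \left\lVert P\!\left(\hat{I}^{3D}_{2}\right) - I^{2D}_{2} \right\rVert_2^2$$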
The value of the first loss function may be determined by: registering the received two-dimensional training image data I22D to either the received volumetric training image data I13D for the initial time step t1, or to the predicted volumetric image data for the time step t2, in order to determine the plane onto which the predicted volumetric image data for the time step t2 is projected; generating the projected predicted volumetric image data for the time step t2; and computing a value representing the difference between the projected predicted volumetric image data for the time step t2 and the received two-dimensional training image data I22D for the time step t2.
In the case where an annotation of the anatomical region is available, the value of the first loss function may be determined by applying a binary mask to the projected predicted volumetric image data for the time step t2, and to the received two-dimensional training image data I22D for the time step t2, and computing a value representing their difference in the annotated region.
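Under the same assumptions as above, with $M$ denoting the binary mask obtained from the annotation and $\odot$ elementwise multiplication, this masked variant may be written as:

$$\mathcal{L}_1^{\text{masked}} = \left\lVert M \odot \left( P\!\left(\hat{I}^{3D}_{2}\right) - I^{2D}_{2} \right) \right\rVert_2^2$$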
After having adjusted the parameters of the neural network 110 in the operation S260, the training method continues by predicting the volumetric image data for the next time step in the sequence, i.e. tn, and likewise constraining this prediction with the two-dimensional training image data from the time step tn, i.e. In2D. This is then repeated for all time steps in the sequence, i.e. up to and including the time step tn+1.
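For illustration, a minimal PyTorch-style sketch of one such constrained pass over a sequence is given below; the model, the per-step projection operators, and the optimizer are placeholders standing in for the disclosure's actual implementation:

```python
import torch

def train_on_sequence(model, volume_t1, projections_2d, projectors, optimizer):
    """One constrained training pass over a sequence (operations S230-S260).
    `model` predicts the volume at the next time step from the volume at the
    current step; `projections_2d` holds the two-dimensional training images
    for the time steps t2..tn+1; `projectors` holds one projection operator
    per step, e.g. derived from the registration described above. All names
    are illustrative."""
    volume = volume_t1
    for project, image_2d in zip(projectors, projections_2d):
        optimizer.zero_grad()
        predicted_volume = model(volume)          # S240: predict the next time step
        predicted_2d = project(predicted_volume)  # S250: project onto the image plane
        loss = torch.nn.functional.mse_loss(predicted_2d, image_2d)  # first loss function 130
        loss.backward()                           # S260: adjust the parameters
        optimizer.step()
        volume = predicted_volume.detach()        # feed the prediction to the next step
    return volume
```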
In so doing, the training method described above trains the neural network 110 to generate, from the volumetric image data representing the anatomical region at a first point in time, predicted volumetric image data representing the anatomical region at a second point in time, such that the predicted volumetric image data is constrained by the projection image data representing the anatomical region at the second point in time.
Whilst the training method was described above for an anatomical region in a single subject, the training may be performed for the anatomical region in multiple subjects. The training image data may for example be provided for more than a hundred subjects across different age groups, genders, body mass indices, abnormalities in the anatomical region, and so forth. Thus, in one example, the received volumetric training image data I13D represents the anatomical region at an initial time step t1 in a plurality of different subjects; and the received two-dimensional training image data I22D, In2D comprises a plurality of sequences, each sequence representing the anatomical region in a corresponding subject at a plurality of time steps t2, tn in a sequence after the initial time step t1 for the corresponding subject; and the inputting S230, the generating S240, the projecting S250, and the adjusting S260, are performed with the received volumetric training image data I13D and the received two-dimensional training image data I22D, In2D for each subject.
As mentioned above, in the projecting operation S250, the image plane of the received two-dimensional training image data I22D, In2D for the time step t2, tn may be determined by i) registering the received two-dimensional training image data I22D, In2D for the time step t2, tn to the received volumetric training image data I13D for the initial time step t1, or by ii) registering the received two-dimensional training image data I22D, In2D for the time step t2, tn to the predicted volumetric training image data for the time step t2, tn. Various known image registration techniques may be used for this purpose.
As mentioned above, anatomical regions are often monitored over time by generating an initial volumetric image, and then generating projection images at subsequent follow-up imaging procedures. This provides a certain amount of training image data that may, as described above, be used to train the neural network 110. In some cases however, additional volumetric image data may also be available from such monitoring procedures, presenting the opportunity for volumetric image data to be used in combination with the two-dimensional training image data I22D, In2D to train the neural network 110. The use of the additional volumetric image data may provide improved, or faster, training of the neural network 110. Thus, in one example, the above-described training method is adapted, and the neural network 110 is trained to predict the volumetric image data representing the anatomical region at the second point in time, by further:
The volumetric training image data I23D, In3D that is used in this example represents the anatomical region at one or more of the time steps t2, tn in the sequence after the initial time step t1.
The second loss function 140 represents a difference between the predicted volumetric image data for a time step t2, tn, and the received volumetric training image data I23D, In3D for that time step. Loss functions such as the MSE, the L2 loss, or the binary cross entropy loss may likewise serve as the second loss function 140.
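The disclosure leaves open how the first loss function 130 and the second loss function 140 are combined during the adjusting; a common choice, stated here as an assumption, is a weighted sum with a weighting hyperparameter $\lambda$, writing $\hat{I}^{3D}_{n}$ for the predicted volumetric image data and $I^{3D}_{n}$ for the received volumetric training image data In3D at a time step tn:

$$\mathcal{L} = \mathcal{L}_1 + \lambda\,\mathcal{L}_2, \qquad \mathcal{L}_2 = \left\lVert \hat{I}^{3D}_{n} - I^{3D}_{n} \right\rVert_2^2$$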
The predictions of the neural network 110 described above may in general be improved by training the neural network to predict the volumetric image data based further on the time difference between when the historic volumetric image data I13D was acquired and the time of the prediction, i.e. the time difference between the historic point in time t1, and the time t2, or tn, or tn+1. This time difference is illustrated in the Figures by the symbols Dt1, Dt2, and Dtn, respectively. In the illustrated example, Dt1 may be zero. Basing the predictions of the neural network 110 on this time difference allows the neural network 110 to learn the association between a length of the time difference, and changes in the anatomical region. Thus, in one example, the neural network 110 is trained to generate the predicted volumetric image data representing the anatomical region at the second point in time, based further on a time difference between the first point in time and the second point in time, and the inference-time method also includes:
In practice, the time difference that is used may depend on factors such as the type of the anatomical region, the rate at which it is expected to evolve, and the severity of its condition. In the example of the anatomical region being an aneurism, follow-up imaging procedures are often performed at three-monthly intervals, and so the time difference may for example be set to three months. In general, however, the time interval may be set to any value, and the time interval may be periodic, or aperiodic.
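By way of illustration only, one simple way to condition a network on the time difference is to append it to the input volume as a constant-valued channel; the class below is a sketch under that assumption, not the disclosure's architecture:

```python
import torch

class TimeConditionedPredictor(torch.nn.Module):
    """Sketch: the time difference Dt is appended to the input volume as a
    constant-valued channel, so the network can associate the length of the
    interval with changes in the anatomical region."""

    def __init__(self, backbone: torch.nn.Module):
        super().__init__()
        self.backbone = backbone  # e.g. a 3D convolutional network expecting one extra channel

    def forward(self, volume: torch.Tensor, dt_months: float) -> torch.Tensor:
        # volume: (batch, channels, depth, height, width)
        dt_channel = torch.full_like(volume[:, :1], dt_months)
        return self.backbone(torch.cat([volume, dt_channel], dim=1))
```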
As mentioned above, in some cases, anatomical regions are monitored by acquiring an initial volumetric image, i.e. the historic volumetric image data I13D, and subsequently acquiring two-dimensional image data, or more specifically, projection image data of the anatomical region over time. The projection image data may be generated by an X-ray imaging system. In accordance with one example, projection image data is used at inference-time to constrain the predictions of the volumetric image data. This constraining is performed in a similar manner to the constrained training operation that was described above. Constraining the predictions of the neural network 110 at inference-time in this manner may provide a more accurate prediction of the volumetric image data.
Additional input data may also be inputted into the neural network 110 during training, and likewise during inference, and used by the neural network to predict the subsequent volumetric image data. For example, the time difference Dt1 between the historic point in time t1 and the subsequent point in time t2, tn may be inputted into the neural network, and the neural network 110 may generate the predicted subsequent volumetric image data based further on the time difference Dt1. In another example, the neural network 110 is further trained to predict the volumetric image data based on patient data 120, and the inference-time method further includes:
Examples of patient data 120 include patient gender, patient age, a patient's blood pressure, a patient's weight, a patient's genomic data (including e.g. genomic data representing endothelial function), a patient's heart health status, a patient's treatment history, a patient's smoking history, a patient's family health history, a type of the aneurism, and so forth. Using the patient data in this manner may improve the predictions of the neural network 110 since this information affects changes in anatomical regions, for instance, the rate of growth of aneurisms.
The inference-time method may additionally include an operation of computing a measurement of the anatomical region represented in the predicted subsequent volumetric image data; and/or an operation of generating one or more clinical recommendations based on the predicted subsequent volumetric image data.
Measurements of the anatomical region, such as its volume, its change in volume since the previous imaging procedure, its rate of change in volume, its diameter, or, in the example of the anatomical region being an aneurism, the aneurism neck diameter, and so forth, may be computed by post-processing the volumetric image data that is predicted by the neural network 110. The clinical recommendations may likewise be computed by post-processing the volumetric image data, or alternatively outputted by the neural network. Example clinical recommendations include the suggested time of a future follow-up imaging procedure, the suggested type of follow-up imaging procedure, and the need for a clinical intervention such as an embolization procedure or a flow-diverting stent in the example of the anatomical region being an aneurism. In the example of the anatomical region being an aneurism, the risk of rupture at a particular point in time may also be calculated and outputted. These recommendations may be based on the predicted measurements, for example based on the predicted volume, or the predicted rate of growth of the anatomical region. For example, the recommendation may be contingent on the predicted volume or the predicted rate of growth of the anatomical region exceeding a threshold value.
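As an illustration of such post-processing, the following sketch computes the volume of a segmented region and its rate of change; the function names and the voxel-count approach are assumptions rather than the disclosure's method:

```python
import numpy as np

def region_volume_mm3(segmentation: np.ndarray, voxel_size_mm: tuple) -> float:
    """Volume of the segmented anatomical region in cubic millimetres:
    the voxel count multiplied by the volume of a single voxel."""
    return float(segmentation.sum()) * float(np.prod(voxel_size_mm))

def growth_rate_mm3_per_month(volume_now: float, volume_prev: float, months: float) -> float:
    """Rate of change in volume between two imaging procedures."""
    return (volume_now - volume_prev) / months
```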
In some cases, during the monitoring of an anatomical region, historic volumetric image data I13D is available for an anatomical region in a patient, together with one or more projection images of the anatomical region that have been acquired at subsequent follow-up imaging procedures. A physician may be interested in the subsequent evolution of the anatomical region at a future point in time. The physician may for example want to predict the volumetric image data in order to propose the time of the next follow-up imaging procedure. In this situation, no projection image data is yet available for the future point in time. However, in one example, the trained neural network 110 may be used to make a constrained prediction of the volumetric image data for one or more time intervals, these constrained predictions being constrained by the projection image data that is available, and to make an unconstrained prediction of the volumetric image data for the future point in time of the proposed follow-up imaging procedure. The unconstrained prediction is possible because, as described above, during inference it is not essential for the trained neural network to constrain its predictions with the projection image data. The projection image data simply improves the predictions of the neural network. The unconstrained prediction can be made by using the trained neural network, which may indeed be a neural network that is trained to make constrained predictions, and making the unconstrained prediction for the future point in time without the use of any projection data.
In the inference-time method, the constrained predictions of the volumetric image data can be made using the projection image data I22D, In2D by projecting the volumetric image data onto the image plane of the projection image data I22D, In2D. Thus, in the inference-time method, the operation of generating S130, using the neural network 110, predicted subsequent volumetric image data representing the anatomical region at the subsequent point in time t2, tn that is constrained by the subsequent projection image data I22D, may include:
The difference between the projected predicted subsequent volumetric image data representing the anatomical region at the subsequent point in time t2, tn, and the subsequent projection image data I22D, In2D, may be computed using a loss function, such as the first loss function 130 described above.
The image plane of the received subsequent projection image data I22D, In2D may be determined by i) registering the received subsequent projection image data I22D, In2D to the received historic volumetric image data I13D, or by ii) registering the received subsequent projection image data I22D, In2D to the predicted subsequent volumetric image data.
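A minimal sketch of this inference-time constraining follows; it refines the network's prediction by gradient steps on the projection difference, which is one plausible reading of the constraining operation rather than a definitive implementation. The `project` callable is assumed to encode the image plane determined by the registration described above, and the step count and learning rate are illustrative:

```python
import torch

def constrained_prediction(model, volume_t1, projection_2d, project, steps=50, lr=1e-2):
    """Refine the network's prediction so that its projection onto the image
    plane of the received projection image matches that image."""
    with torch.no_grad():
        prediction = model(volume_t1)                 # unconstrained prediction
    prediction = prediction.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([prediction], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(project(prediction), projection_2d)
        loss.backward()                               # gradient of the projection difference
        optimizer.step()                              # adjust the prediction, not the network
    return prediction.detach()
```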
In some examples, at inference-time, the anatomical region may also be segmented in the historic volumetric image data I13D and/or in the subsequent projection image data I22D, In2D, prior to, respectively, inputting S120 the received historic volumetric image data I13D into the neural network 110 and/or using the received subsequent projection image data I22D, In2D to constrain the predicted subsequent volumetric image data. The segmentation may improve the predictions made by the neural network. The use of similar segmentation techniques to those used in training the neural network is contemplated, including: thresholding, template matching, active contour modeling, model-based segmentation, neural networks, e.g., U-Nets, and so forth.
In some examples, the inference-time method may also include the operation of generating a confidence estimate of the predicted subsequent volumetric image data. A confidence estimate may be computed based on the quality of the inputted projection image data and/or the quality of the inputted volumetric image data, such as the amount of blurriness in the image caused by movement during image acquisition, the amount of contrast flowing through the aneurism, and so forth. The confidence estimate may be outputted as a numerical value, for example. In examples wherein the predicted subsequent volumetric image data is constrained by subsequent projection image data I22D, the confidence estimate may be based on the difference between a projection of the predicted subsequent volumetric image data for the time step t2, tn onto an image plane of the received subsequent projection image data I22D, In2D for the time step t2, tn, and the subsequent projection image data I22D, In2D for the time step t2, tn. A value of the confidence estimate may be computed from the value of the first loss function 130 described above.
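As an illustration, one simple mapping from the loss value to a numerical confidence estimate is an exponential decay; this particular mapping is an assumption rather than the disclosure's formula:

```python
import math

def confidence_from_loss(loss_value: float) -> float:
    """Map the projection loss to a confidence in (0, 1]: a small difference
    between the projected prediction and the received projection image
    yields a confidence close to 1."""
    return math.exp(-loss_value)
```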
At inference time, or during training, the above methods may be accelerated by limiting the predicted volumetric image data to particular regions. Thus, in some examples, the inference-time method may also include:
Without prejudice to the generality of the above, in one group of Examples, the neural network 110 is trained to generate the predicted volumetric image data representing the anatomical region at the second point in time such that the predicted volumetric image data is constrained by projection image data representing the anatomical region at the second point in time. These Examples are enumerated below:
Example 1. A computer-implemented method of predicting a shape of an anatomical region, the method comprising:
Example 2. The computer-implemented method according to Example 1, wherein the method further comprises:
Example 3. The computer-implemented method according to Example 1 or Example 2, further comprising:
Example 4. The computer-implemented method according to any previous Example, further comprising segmenting the anatomical region in the received historic volumetric image data I13D and/or in the received subsequent projection image data I22D, In2D, prior to, respectively, inputting S120 the received historic volumetric image data I13D into the neural network 110 and/or using the received subsequent projection image data I22D, In2D to constrain the predicted subsequent volumetric image data.
Example 5. The computer-implemented method according to any previous Example, wherein the generating S130, using the neural network 110, predicted subsequent volumetric image data representing the anatomical region at the subsequent point in time t2, tn that is constrained by the subsequent projection image data I22D, comprises:
Example 6. The computer-implemented method according to Example 5, wherein the image plane of the received subsequent projection image data I22D, In2D is determined by i) registering the received subsequent projection image data I22D, In2D to the received historic volumetric image data I13D, or by ii) registering the received subsequent projection image data I22D, In2D to the predicted subsequent volumetric image data.
Example 7. The computer-implemented method according to any previous Example, further comprising:
Example 8. The computer-implemented method according to any previous Example, further comprising:
Example 9. The computer-implemented method according to any previous Example, further comprising:
Example 10. The computer-implemented method according to Example 1, wherein the neural network 110 is trained to generate, from the volumetric image data representing the anatomical region at the first point in time, the predicted volumetric image data representing the anatomical region at the second point in time such that the predicted volumetric image data is constrained by the projection image data representing the anatomical region at the second point in time, by:
Example 11. The computer-implemented method according to Example 10, wherein the neural network 110 is trained to predict the volumetric image data representing the anatomical region at the second point in time, by further:
Example 12. The computer-implemented method according to Example 10 or Example 11, wherein the image plane of the received two-dimensional training image data I22D, In2D for the time step t2, tn is determined by i) registering the received two-dimensional training image data I22D, In2D for the time step t2, tn to the received volumetric training image data I13D for the initial time step t1, or by ii) registering the received two-dimensional training image data I22D, In2D for the time step t2, tn to the predicted volumetric training image data for the time step t2, tn.
Example 13. The computer-implemented method according to Example 10, wherein the received volumetric training image data I13D represents the anatomical region at an initial time step t1 in a plurality of different subjects;
wherein the received two-dimensional training image data I22D, In2D comprises a plurality of sequences, each sequence representing the anatomical region in a corresponding subject at a plurality of time steps t2, tn in a sequence after the initial time step t1 for the corresponding subject; and
Example 14. A computer program product comprising instructions which, when executed by one or more processors, cause the one or more processors to carry out the method according to any one of Examples 1-13.
Example 15. A system for predicting a shape of an anatomical region, the system comprising one or more processors configured to:
The above examples are to be understood as illustrative of the present disclosure, and not restrictive. Further examples are also contemplated. For instance, the examples described in relation to computer-implemented methods, may also be provided by a computer program product, or by a computer-readable storage medium, or by a system, in a corresponding manner. It is to be understood that a feature described in relation to any one example may be used alone, or in combination with other described features, and may be used in combination with one or more features of another of the examples, or a combination of other examples. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims. In the claims, the word “comprising” does not exclude other elements or operations, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that certain features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be used to advantage. Any reference signs in the claims should not be construed as limiting their scope.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2022/064991 | 6/2/2022 | WO |
Number | Date | Country
---|---|---
63208452 | Jun 2021 | US