The present invention relates to a method for the analysis of radiographic images, and in particular lateral-lateral teleradiographic images of the skull, and to a related analysis system.
In greater detail, the invention relates to a computer-implemented method for the analysis of lateral-lateral teleradiographs of the skull in order to detect anatomical points of interest and regions of interest. More specifically, the invention relates to a method that, by means of techniques based on computer vision and artificial intelligence, enables an accurate analysis of radiographs in order to detect the position of anatomical points of interest, which may be used, by way of non-limiting example, to perform cephalometric analyses in orthodontics.
The description below will focus, as mentioned, on the analysis of orthodontic images, but it is evident that the invention must not be considered limited to this specific use.
As is well known, one of the most frequent orthodontic treatments is related to the treatment of malocclusions, which consist in a lack of stability of occlusal contacts and in the absence of correct “functional guides” during masticatory dynamics. Malocclusions can also result in a highly unaesthetic appearance.
The diagnostic process requires the use of radiographic imaging, and in particular the execution of a lateral-lateral teleradiograph followed by a cephalometric analysis thereof.
Cephalometric analyses take on a fundamental importance also in the diagnosis and planning of orthodontic treatments or orthognathic surgical treatments (with the involvement, therefore, of other medical specialists such as a maxillofacial surgeon).
The first step of the analysis consists in the detection of anatomical points of interest in order to be able to define a cephalometric tracing and perform calculations of the angles and distances of the planes passing through the aforesaid anatomical points.
As is well known, in the medical-dental field, the identification of anatomical points of interest on a lateral-lateral teleradiograph of the skull is presently, in most cases, performed by a doctor without any computerised support, apart from the simple display of images and the storage of manually entered information.
Once the aforesaid anatomical points of interest have been identified, various software systems, i.e. computer-implemented programs, exist on the market which make it possible to define a cephalometric tracing and automatically carry out a cephalometric analysis.
However, the detection of anatomical points on a radiograph is a highly time-consuming activity and is influenced by the level of experience and competence of the doctor who analyses the data, as well as his or her level of concentration and fatigue at the time of actually performing the analysis.
Furthermore, inexperience with particular anatomical sections and inattention could lead to incomplete or incorrect diagnoses, or to the prescription of wrong treatments.
It appears evident that the solutions and practices according to the prior art are potentially costly, since they can also cause temporary or permanent harm to the patient.
In the light of the above, therefore, an aim of the present invention is to propose a system and a method for the analysis of lateral-lateral teleradiographs of the skull which overcome the limits of those of the prior art.
Another aim of the present invention is to propose a support system for doctors, radiologists and dentists in particular, which enables the location and detection of anatomical points useful for cephalometric analysis.
A further aim of the present invention is to reduce as much as possible the risk of the dentist providing inaccurate or mistaken diagnoses, therapies and treatments.
Therefore, a specific object of the present invention is a computer-implemented method for the geometric analysis of digital radiographic images, in particular lateral-lateral teleradiographs of the skull, by means of a radiographic system, wherein said radiographic system comprises a display unit, and processing means connected to said display unit, said method comprising the steps of: performing, by means of said processing means, a learning step comprising the following sub-steps: receiving a plurality of digital learning radiographic images, each accompanied by annotations, wherein an annotation comprises a label identifying an anatomical point of interest of each learning radiographic image, and the geometric coordinates of the anatomical point of interest in the plane of the learning radiographic image; executing, by means of said processing means, for each learning radiographic image, a procedure for learning a general model for detecting one or more points of interest from a learning radiographic image, performing a refinement model learning procedure, comprising the sub-steps of: cutting the radiographic image into a plurality of image cutouts, each comprising a respective group of anatomical points of interest; and training a refinement model for each image cutout; and carrying out an inference step by means of said processing means on a digital analysis radiographic image, comprising the following sub-steps: performing on said analysis radiographic image an inference step based on said general model learned in said general model learning procedure, so as to obtain the geometric coordinates of a plurality of anatomical points of interest; cutting the analysis radiographic image into a plurality of image cutouts, in a similar way to said image cutting out step, wherein each image cutout comprises a respective group of anatomical points of interest; and performing on each cutout of the analysis radiographic image an inference through said refinement model 
obtained in said training step of said refinement model learning procedure; and combining the anatomical points of interest of each image cutout so as to obtain the final geometric coordinates of the points relative to the original analysis radiographic image; and displaying said final geometric coordinates of the points relative to the original analysis radiographic image by means of said display unit.
Again according to the invention, said learning step can comprise the sub-step of carrying out, by means of said processing means, for each learning radiographic image, a procedure for learning a radiograph cutout model for cutting out the part of the lateral-lateral teleradiograph of the skull that is relevant for the cephalometric analysis.
Likewise according to the invention, said step of carrying out said inference step can comprise the sub-step of performing, on said analysis radiographic image, an inference step based on said radiograph cutout model learned in said radiograph cutout model learning procedure, so as to obtain a cutout of the part of the lateral-lateral teleradiograph of the skull that is relevant for the cephalometric analysis.
Advantageously according to the invention, said method can comprise a step of performing on said analysis radiographic image an inference step based on said radiograph cutout model, which is carried out before said step of performing on said analysis radiographic image an inference step based on said general model learned in said general model learning procedure.
Furthermore, according to the invention, said general model learning procedure can comprise a first data augmentation step comprising the following sub-steps: random rotation of the radiographic image by a predefined range of angles with predefined probability; random horizontal flip, wherein the annotated acquired radiographic images are randomly flipped horizontally with a predefined probability; random contrast adjustment, wherein the image contrast is adjusted based on a predefined random factor; random brightness adjustment, wherein the brightness of images is adjusted based on a predefined random factor; random resizing and cutting out, wherein the radiographic image is resized with a random scale factor and cut out.
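By way of illustration only, the flip, contrast and brightness sub-steps of this first data augmentation can be sketched in Python as follows; the function name `augment` and the parameter ranges are hypothetical (the patent only requires them to be predefined), and the rotation and resize/crop sub-steps are omitted, since in practice they would use an image library such as OpenCV:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, p_flip=0.5, contrast=(0.8, 1.2), brightness=(-20, 20)):
    """Randomly flip, then adjust contrast and brightness of a grayscale
    radiograph given as a 2-D uint8 array. Parameter ranges are
    illustrative, not taken from the patent."""
    img = image.astype(np.float32)
    if rng.random() < p_flip:          # random horizontal flip
        img = img[:, ::-1]
    c = rng.uniform(*contrast)         # random contrast factor
    b = rng.uniform(*brightness)       # random brightness offset
    return np.clip(img * c + b, 0, 255).astype(np.uint8)

augmented = augment(np.full((64, 64), 128, dtype=np.uint8))
```

When the images are flipped or rotated, the annotated coordinates must of course undergo the same geometric transformation.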
Again according to the invention, said general model learning procedure can comprise, before said general model learning step, a resizing sub-step.
Likewise according to the invention, said refinement model learning procedure can comprise the sub-steps of: performing a second data augmentation step; and executing said general model as obtained from said general model learning sub-step.
Advantageously, according to the invention, said second data augmentation step of said refinement model learning procedure can comprise the following sub-steps: random rotation, wherein each radiographic image and the relative annotations are rotated by a predefined range of angles and/or with a predefined probability, thereby generating a plurality of rotated images; random horizontal flip, wherein the annotated radiographic images are randomly flipped horizontally with a predefined probability; adjusting the contrast of said radiographic images based on a predefined random factor; and adjusting the brightness of said radiographic images based on a predefined random factor.
Furthermore, according to the invention, said step of training a refinement model for each image cutout can comprise the following sub-steps: resizing each cutout of said radiographic image; and performing a feature engineering and refinement model learning procedure; and/or performing a procedure for learning a dimensionality reduction model; and carrying out the refinement model learning.
Preferably, according to the invention, said step of performing a feature engineering and refinement model learning procedure can be based on computer vision algorithms, such as Haar or HOG, or on deep learning approaches, such as CNN or autoencoders.
Again according to the invention, said step of performing a dimensionality reduction model learning procedure can comprise Principal Component Analysis—PCA or Partial Least Squares regression—PLS.
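The PCA option can be illustrated with a minimal, dependency-free sketch via the singular value decomposition; the function name and shapes are illustrative, not taken from the patent:

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """Project the rows of X (n_samples, n_features) onto their first
    principal components: a minimal PCA via SVD."""
    Xc = X - X.mean(axis=0)                    # center the data
    # rows of Vt are the principal directions, ordered by variance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T            # (n_samples, n_components)

X = np.random.default_rng(1).normal(size=(100, 16))
Z = pca_fit_transform(X, n_components=4)
```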
Likewise according to the invention, said step of performing a feature engineering and refinement model learning procedure can comprise the following structure: a feature engineering model or procedure; and a set of regression models with the two-level stacking technique, comprising a first level, comprising one or more models; and a second level comprising the metamodel; and wherein at the output of said refinement model the coordinates of the group of anatomical points or points of interest of each cutout of said radiographic image are obtained.
Furthermore, according to the invention, said one or more models of said set of regression models can comprise at least one of the following models: support vector machine; and/or decision trees; random forest; and/or extra tree; and/or gradient boosting.
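The two-level stacking described above can be sketched as follows. The base models here are deliberately simple stand-ins (a linear model and a mean predictor) for the support vector machine, tree and boosting models listed, and a production stacker would fit the metamodel on out-of-fold predictions rather than on training-set predictions:

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with a bias term; returns the weight vector."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_linear(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=200)

# Level 1: two base regressors, stand-ins for the SVM / random forest /
# extra trees / gradient boosting models of the set described above.
w1 = fit_linear(X, y)
level1 = np.column_stack([predict_linear(w1, X),
                          np.full(len(X), y.mean())])

# Level 2: the metamodel is fitted on the base models' predictions.
w_meta = fit_linear(level1, y)
y_hat = predict_linear(w_meta, level1)
```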
Advantageously according to the invention, said step of pre-processing said analysis radiographic image can comprise the following sub-steps: performing a contrast-limited adaptive histogram equalization, wherein the contrast of the image is modified; and resizing the analysis radiographic image.
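For illustration, a dependency-free sketch of histogram equalization is given below. Note that the pre-processing step described above is the contrast-limited *adaptive* variant (CLAHE, available e.g. as `cv2.createCLAHE` in OpenCV); the plain global equalization shown here is a simplification of it:

```python
import numpy as np

def equalize_hist(image):
    """Global histogram equalization of a uint8 grayscale image: the
    cumulative histogram is used as a lookup table that spreads the
    observed intensities over the full [0, 255] range."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[image]

img = np.tile(np.arange(64, 192, dtype=np.uint8), (32, 1))  # low-contrast strip
eq = equalize_hist(img)
```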
Preferably, according to the invention, said combining step of said inference step can comprise the steps of: aggregating and repositioning the anatomical points of interest, wherein the annotations returned by the refinement models are aggregated together with those of the original model, in such a way that the geometric coordinates of the anatomical points detected are relative to the original analysis radiographic image; reporting the missing anatomical points of interest, wherein it is reported whether there are points that have not been detected; carrying out a cephalometric tracing, wherein, based on the detected points, the tracing lines are defined; performing a cephalometric analysis, wherein, based on the detected points, one or more cephalometric analyses among those known in the scientific literature are performed.
A further object of the present invention is a system for analysing digital radiographic images, comprising a display unit, such as a monitor, and the like, and processing means, connected to said display unit, configured to carry out the analysis method as defined above.
An object of the present invention is also a computer program comprising instructions which, when the program is executed by a computer, cause the computer to execute the steps of the method as defined above.
Finally, an object of the present invention is a computer readable storage medium comprising instructions which, when executed by a computer, cause the computer to execute the steps of the method as defined above.
The present invention will now be described by way of non-limiting illustration according to the preferred embodiments thereof, with particular reference to the figures in the appended drawings, wherein:
In the various figures, similar parts will be indicated with the same numerical references.
In general terms it is possible to distinguish, in the radiographic analysis method according to the present invention, two distinct modes or operating steps in which the system for the analysis of lateral-lateral teleradiographs of the skull works. In particular, also making reference to
In general, when the analysis method is in the learning mode, machine learning models are generated by providing a set of radiographic images accompanied by annotations as input to the learning algorithms (better specified below).
For the sake of clarity in what follows, an annotation related to an element present in an image consists of two main components: a label identifying the element, and the geometric coordinates of the element in the plane of the image.
Again in general terms, once the learning operating step has ended, the models thus trained are used, as mentioned earlier, in the inference operating step, i.e. in the actual utilisation of the analysis system. In fact, in the inference operating step the method for the analysis of radiographs receives as input analysis radiographic images, even if never acquired previously and even in the presence of morphological deviations from typical patterns, and detects the elements present in them, as better defined below, providing the coordinates of anatomical points of interest.
Preferably, said two operating modes are alternated over time, so as to have constantly updated models and to reduce the detection errors that the analysis method could in any case make.
The various steps of the operating method of the system for analysing lateral-lateral teleradiographs of the skull, divided into said two specified operating modes, are discussed below.
Making reference to
The learning operating step, indicated by the reference number 1, acquires as input (step 11) the learning radiographic images and the relative annotations, structured in the terms indicated above and, for every model to be learned, carries out one or more learning procedures.
Again in reference to
For each procedure carried out, various pre-processing and data augmentation operations are carried out, after which the actual learning (or so-called training) of the model takes place.
In an experimental setup for the learning procedures, for learning both the general model and the refinement models, use was made of 488 lateral-lateral teleradiographs of the skull, produced by different X-ray machines on various patients of different ages.
The annotation process was performed manually by a team of two expert orthodontists and consisted in marking the anatomical points of interest on the lateral-lateral teleradiographs of the skull by means of a computerised support.
As mentioned above, this preliminary learning procedure acquires, as input, the learning radiographic images and the annotations, as indicated in step 11, and returns a radiograph cutout model capable of detecting, starting from a lateral-lateral teleradiograph of the skull, the area that is relevant for cephalometric analysis.
Again in reference to
Subsequently, after the data augmentation step 121, a resizing sub-step 122 is carried out wherein, in order to enable the execution of the learning algorithms, it is necessary to resize the images so that all parameters of the models can be contained in the memory. In some embodiments, the images are resized to 256×256.
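When the images are resized in this way, the annotated coordinates must be rescaled by the same factors; a minimal sketch (function name and argument layout are illustrative), with the image itself resized separately, e.g. by `cv2.resize`:

```python
import numpy as np

def resize_annotations(points, orig_size, target_size=(256, 256)):
    """Rescale annotated (x, y) coordinates when the radiograph is resized.

    points: (N, 2) array of pixel coordinates in the original image;
    orig_size / target_size: (width, height)."""
    sx = target_size[0] / orig_size[0]
    sy = target_size[1] / orig_size[1]
    return points * np.array([sx, sy])

pts = np.array([[1024.0, 512.0], [0.0, 2048.0]])
scaled = resize_annotations(pts, orig_size=(2048, 2048))
```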
Finally, a radiograph cutout model learning sub-step is carried out wherein the radiograph cutout model learning algorithms are executed. In this context, the learning consists in suitably setting the parameters of the deep learning model in order to minimize the cost functions.
In a preferred embodiment of the present invention, this model was built using an architecture of the Single Shot Detector (SSD) type (Liu, Reed, Fu, & Berg, 2016).
As mentioned above, this preliminary learning procedure acquires as input the learning radiographic images and annotations, as indicated in step 11, and returns a general model capable of detecting one or more anatomical points of interest, providing the coordinates relative to a reference system. In particular, in one embodiment, there are 60 anatomical points of interest, and they are shown in the following table.
Naturally, the number and type of points of interest can be different according to the preferred embodiment and the system's processing capability. In particular, the points to be detected, also as updated with the scientific literature, could change.
Again in reference to
Subsequently, after the data augmentation step 131, a resizing sub-step 132 is carried out wherein, in order to enable the execution of the learning algorithms, it is necessary to resize the images so that all parameters of the models can be contained in the memory. In some embodiments, the images are resized to 256×256.
Finally, a general model learning sub-step 133 is carried out wherein the general model learning algorithms are executed. In this context, the learning consists in suitably setting the parameters of the general deep learning model in order to minimize the cost functions.
In a preferred embodiment of the present invention, this model was built using an architecture of the CenterNet type (Zhou, Wang, & Krähenbühl, 2019) based on Hourglass-104 (Law & Deng, 2018).
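Keypoint detectors of the CenterNet family return each point as the peak of a per-point heatmap, mapped back to input pixels through the network's output stride. A simplified decoding sketch is given below (the sub-pixel offset head of the real architecture is omitted, and the names are illustrative):

```python
import numpy as np

def decode_heatmaps(heatmaps, stride=4):
    """Recover (x, y) point coordinates from per-point heatmaps.

    heatmaps: (num_points, H, W). Each point is taken at the peak of its
    heatmap and mapped back to input pixels through the output stride."""
    n, h, w = heatmaps.shape
    flat = heatmaps.reshape(n, -1).argmax(axis=1)   # peak per heatmap
    ys, xs = np.unravel_index(flat, (h, w))
    return np.stack([xs, ys], axis=1) * stride      # (num_points, 2)

hm = np.zeros((2, 64, 64))
hm[0, 10, 20] = 1.0   # point 0 peaks at heatmap cell (x=20, y=10)
hm[1, 30, 5] = 1.0
coords = decode_heatmaps(hm)
```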
As mentioned above, the general model, in one embodiment thereof, reduces the images to a size of 256×256 so that all the parameters can be contained in the memory. The image resolution and thus also the precision of the general model are reduced. The purpose of the refinement models is to improve the precision of the points of interest found by the general model, thereby reducing the errors due to the low resolution.
This refinement model learning procedure is composed of two essential steps: a first step common to all the models, which is thus carried out only once, and a model-specific step which is carried out several times, once for each refinement model to be learned.
The common step for all the refinement models comprises a data augmentation step 141, followed by an inference sub-step 142, wherein the general model obtained from the general model learning procedure 13 is exploited, and finally a cutting out sub-step 143, used to create the datasets necessary for the learning of the refinement models.
In particular, said data augmentation sub-step 141 comprises the following sub-steps: random rotation, wherein each radiographic image and the relative annotations are rotated by a predefined range of angles and/or with a predefined probability; random horizontal flip of the annotated radiographic images with a predefined probability; random adjustment of the contrast of the radiographic images based on a predefined random factor; and random adjustment of the brightness of the radiographic images based on a predefined random factor.
As mentioned, an inference step, indicated here as sub-step 142, is subsequently carried out by means of the general model obtained from the general model learning procedure 13. In this case, all the learning radiographic images are provided as input to that model and the 60 anatomical points listed in the above table are detected.
Subsequently, a step 143 of cutting out the learning radiographic image R, the object of processing, is performed wherein the points detected in the previous sub-step are grouped into N groups and, for each group, a cutout of the learning radiographic image R is generated, containing the points belonging to the group starting from the original learning radiographic image R. In one embodiment, N is equal to 10. The output of this sub-step is a plurality of N datasets, where N is the number of refinement models to be learned, one for every group of points.
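The generation of a cutout for one group of points can be sketched as a bounding box around the group plus a margin; the margin value and function name are illustrative, as the patent does not specify the cropping rule:

```python
import numpy as np

def cutout_for_group(image, group_points, margin=32):
    """Crop the region around one group of detected points.

    Returns the cutout and its (x0, y0) offset, which is needed later to
    map refined coordinates back into the original image. group_points
    is a (K, 2) array of (x, y) coordinates."""
    h, w = image.shape[:2]
    x0 = max(int(group_points[:, 0].min()) - margin, 0)
    y0 = max(int(group_points[:, 1].min()) - margin, 0)
    x1 = min(int(group_points[:, 0].max()) + margin, w)
    y1 = min(int(group_points[:, 1].max()) + margin, h)
    return image[y0:y1, x0:x1], (x0, y0)

img = np.zeros((512, 512), dtype=np.uint8)
pts = np.array([[100.0, 200.0], [150.0, 260.0]])
cut, offset = cutout_for_group(img, pts)
```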
In other words, the whole radiographic images are passed on to the general model to obtain the 60 (in the present embodiment) anatomical points of interest. Only after this step is the original image cut.
Given the i-th dataset, one example is composed of a pair &lt;Ri, e&gt;, where Ri is the cutout of the original learning radiographic image containing the points detected by the general model and e is the error vector in the form:

e = (dp1, dp2, …, dpK)

wherein K is the number of anatomical points refined by the i-th refinement model and dpk is the displacement between the coordinates of the k-th anatomical point as detected by the general model and its annotated coordinates.
The learning of the refinement models has the aim of defining models that are able to approximate e starting from the image cutout Ri.
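Under this formulation, the regression target of each refinement model is the general model's residual; a minimal sketch of how the target e could be built from predicted and annotated coordinates (names illustrative):

```python
import numpy as np

def error_vector(predicted, annotated):
    """Regression target e for one refinement model: the flattened
    per-point displacement between the coordinates predicted by the
    general model and the annotated ground truth, both (K, 2) arrays."""
    return (annotated - predicted).ravel()     # shape (2K,)

pred = np.array([[100.0, 200.0], [150.0, 260.0]])
truth = np.array([[103.0, 198.0], [149.0, 265.0]])
e = error_vector(pred, truth)
```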
After the various cutouts have been obtained on the basis of the groups of points, i.e. the cutouts of the learning radiographic image R, for every refinement model 151 . . . 15N to be learned, the following sub-steps are carried out: resizing of each cutout of the learning radiographic image; feature engineering, optionally together with the learning of a dimensionality reduction model; and the actual learning of the refinement model.
The inference operating step, shown and illustrated in
In particular, the main sub-steps of the inference operating procedure 3 are specified below.
Initially, an inference step 31 is carried out for the radiograph cutout model, wherein the original lateral-lateral teleradiograph of the skull R′ is provided as input to the radiograph cutout model, which identifies the area of interest for cephalometric analysis and cuts out the radiograph so as to obtain the image R″.
Subsequently, a pre-processing step 32 for the general model is carried out, which comprises (see
Subsequently, in step 33, an inference step is carried out based on the general model learned in the general model learning procedure 13, wherein the pre-processed analysis radiographic image R″ is input to a general deep learning model obtained from the first learning procedure, which, in an embodiment thereof, returns the geometric coordinates of the 60 points listed in the table above.
Subsequently, a cutting out step 34 is performed; the points obtained in the previous inference step are organized into N groups and, for every group of points detected, a cutout R′1, R′2, . . . , R′N containing the group of points detected is generated from the original analysis radiographic image R′. The width and height of the cutout generated are preferably at least 256 pixels. The grouping of points and image cutouts is similar to that of the cutting out sub-step 143 described above.
For every refinement model, with reference to
From each inference sub-step, by means of the refinement model 361 . . . 36N (which represents the predicted error of the general model), one obtains the points of each group 1 . . . N, relative to each cutout R′1, R′2, . . . , R′N of the analysis radiographic image R′. These groups of points of the cutouts R′1, R′2, . . . , R′N are combined in a post-processing step with the outputs of the general model (combining step 37), in order to have final geometric coordinates of the points relative to the new, original radiographic image R′. In particular, the post-processing for carrying out the combining step comprises the following sub-steps (see
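The repositioning performed in the combining step can be sketched as adding the predicted error back to the general model's coordinates; this is a minimal sketch assuming the correction is expressed in original-image pixels, with `scale` accounting for any resizing of the cutout (names and conventions are illustrative):

```python
import numpy as np

def reposition(general_points, predicted_error, scale=1.0):
    """Add a refinement model's predicted error to the general model's
    coordinates for one group of points. scale converts the correction
    from resized-cutout pixels back to original-image pixels."""
    return general_points + predicted_error.reshape(-1, 2) * scale

gen = np.array([[100.0, 200.0], [150.0, 260.0]])   # general model output
err = np.array([3.0, -2.0, -1.0, 5.0])             # refinement model output
final = reposition(gen, err)
```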
In particular, as may be observed in
In particular,
Finally, making reference to
Furthermore, the system 4 comprises interaction means 42, which can include a keyboard, a mouse or a touchscreen, and display means 43, typically a monitor or the like, to enable the doctor to examine the processed images and read the coordinates of the anatomical points of interest, in order possibly to derive appropriate diagnoses.
By means of the display means 43 it is possible to display the anatomical points of interest after the processing has been performed and to examine the geometric arrangement thereof.
One advantage of the present invention is that of providing a support for doctors, radiologists and dentists in particular, which makes it possible to detect and locate anatomical points of the skull which are useful for cephalometric analysis.
A further advantage of the present invention is that of enabling the practitioner to carry out correct diagnoses and therapies, thus enabling accurate treatments.
Another advantage of the present invention is that of enabling an automatic analysis of the analysis radiographic images, so as to obtain data for in-depth epidemiological studies and for analyses of the success of dental treatments.
The present invention has been described by way of non-limiting illustration according to the preferred embodiments thereof, but it is to be understood that variations and/or modifications may be introduced by the person skilled in the art without for this reason going outside the relevant scope of protection as defined by the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
102022000006905 | Apr 2022 | IT | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/IT2023/050100 | 4/6/2023 | WO | 