The present invention relates to a method of deriving cephalometric parameters for orthodontic diagnosis based on machine learning. More specifically, the present invention relates to a method of deriving cephalometric parameters for orthodontic diagnosis based on machine learning which is capable of obtaining a 3D CBCT image of a patient from cone beam computed tomography (CBCT) image data captured in a natural head position and of precisely and quickly detecting a plurality of cephalometric landmarks on the 3D CBCT image, by applying machine learning algorithms and image analysis processing technology, in order to derive 13 parameters for orthodontic diagnosis.
In general, the condition in which the teeth are not aligned correctly and the upper and lower teeth are misaligned is referred to as malocclusion, and orthodontic treatment may be performed to correct such malocclusion into normal occlusion. Meanwhile, it is required to detect predetermined anatomical cephalometric landmarks in a 3D CBCT image of a patient for precise diagnosis or treatment planning for orthodontic treatment.
Recently, a three-dimensional (3D) cone beam computed tomography (CBCT) image has been obtained from the dental image data of a patient's head captured by a CBCT device, without problems such as the screen distortion or unclearness of the existing X-ray image, and studies are being conducted to detect cephalometric landmarks for deriving parameters for orthodontic diagnosis based on the CBCT image. In particular, as a method of analyzing a cone beam computed tomography (CBCT) image, recent studies have proposed identifying the antero-posterior skeletal relationship and the degree of protrusion of teeth using the nasion true vertical plane (NTVP), which passes through nasion, the most concave point between a frontal bone and a nasal bone, and is perpendicular to the ground, and the true horizontal plane (THP), which passes through nasion (N) and is horizontal to the ground, and the like.
In the related art, a skilled operator, such as a dental practitioner, has to manually detect a large number of cephalometric landmarks, approximately 50 or more, on a plurality of CBCT images in order to derive parameters for orthodontic diagnosis. In this approach, the manner of detecting the cephalometric landmarks and the accuracy of detection vary depending on the operator's proficiency, making it difficult to reach an accurate orthodontic diagnosis, and it takes a long period of time of approximately 30 minutes or more to detect the cephalometric landmarks, reducing the efficiency of orthodontic treatment.
To solve this problem, a machine learning algorithm is introduced into the field of dentistry, specifically orthodontic diagnosis: the 3D CBCT image of a patient is obtained from 3D cone beam computed tomography (CBCT) image data captured in a natural head position, a plurality of cephalometric landmarks on the 3D CBCT image are detected precisely and quickly for precise orthodontic diagnosis, and a smaller number of parameters than in the related art are derived. Accordingly, there is a strong need for a method of deriving cephalometric parameters for orthodontic diagnosis based on machine learning that can improve the efficiency of orthodontic treatment.
The present invention is directed to providing a method of deriving cephalometric parameters for orthodontic diagnosis based on machine learning that is capable of improving the efficiency of orthodontic treatment by obtaining a 3D CBCT image of a patient from cone beam computed tomography (CBCT) image data captured in a natural head position, with a machine learning algorithm applied, and precisely and quickly detecting a plurality of cephalometric landmarks on the 3D CBCT image to derive 13 parameters, fewer than in the related art, for precise orthodontic diagnosis.
To achieve the aforementioned objects, the present invention provides a method of deriving cephalometric parameters for orthodontic diagnosis based on machine learning, which uses a 3D CBCT image for orthodontic diagnosis extracted in a step of obtaining a 3D CBCT image for diagnosis, the 3D CBCT image for diagnosis including a CBCT image in a sagittal plane, a CBCT image in a coronal plane, a dental panoramic image, and an incisor image in a cross-sectional view, respectively, for a patient, from three-dimensional (3D) cone beam computed tomography (CBCT) image data captured of the patient's head at a natural head position, the method comprising: detecting, based on a machine learning algorithm, a plurality of cephalometric landmarks on the 3D CBCT image to derive 13 parameters for orthodontic diagnosis; and deriving 13 parameters corresponding to distances or angles between the detected plurality of cephalometric landmarks.
Here, in order to provide information on the orthodontic diagnosis, the 13 parameters may include a degree of protrusion of a maxilla, a degree of protrusion of a mandible, a degree of protrusion of a chin, a degree of displacement of a center of a mandible, a degree of displacement of the midline of upper central incisors, a degree of displacement of the midline of lower central incisors, a vertical distance from the true horizontal plane (THP) passing through nasion, which is the most concave point between a frontal bone and a nasal bone, to a tip of a right upper canine, a vertical distance from the THP to a tip of a left upper canine, a vertical distance from the THP to a right upper first molar, a vertical distance from the THP to a left upper first molar, a degree of inclination of an upper central incisor, a degree of inclination of a lower central incisor, and a degree of inclination of the mandible with respect to the THP.
In addition, the machine learning algorithm may detect the plurality of cephalometric landmarks to derive the 13 parameters by dividing the CBCT image in the sagittal plane into a region provided between a frontal bone portion and nasion, a region between nasion and upper teeth, a region between lower teeth and the lowest point of the mandible (menton, Me), and a region between menton of the mandible and an articular bone of jaw.
Further, the machine learning algorithm may detect the plurality of cephalometric landmarks to derive the 13 parameters by dividing the CBCT image in the coronal plane into a region between a frontal bone portion and nasion and a mandible region.
In addition, the machine learning algorithm may include: applying a region-based convolutional neural network (R-CNN) machine learning model to the dental panoramic image to detect individual regions of the entire set of teeth; detecting, for each detected individual region of the entire set of teeth, teeth landmarks representing positions of the teeth; analyzing the positions of the detected teeth landmarks to classify the entire set of teeth into upper teeth and lower teeth; numbering each of right upper teeth, left upper teeth, right lower teeth, and left lower teeth sequentially based on a horizontal distance from the midline of a facial portion to the detected teeth landmarks; and analyzing the numbered teeth to detect a plurality of cephalometric landmarks for deriving a parameter from a specific tooth, including an incisor, a canine, and a first molar.
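By way of a non-limiting illustration only, and not as the actual trained model of the present invention, the tooth-detection step described above may be sketched in Python using the Faster R-CNN implementation provided by torchvision as a stand-in R-CNN detector; the class layout, the score threshold, and the box-center placeholder landmark are assumptions introduced for this sketch.

```python
# Hypothetical sketch of the tooth-detection step on the dental panoramic
# image, using torchvision's Faster R-CNN as a stand-in R-CNN detector.
# The model below is randomly initialized; in practice it would be trained
# on annotated panoramic images.
import torch
import torchvision

# Two classes assumed: background and "tooth".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=2)
model.eval()

def detect_tooth_regions(panoramic_image: torch.Tensor, score_threshold: float = 0.5):
    """Return bounding boxes of individual teeth in a panoramic image.

    panoramic_image: float tensor of shape (3, H, W), values in [0, 1].
    """
    with torch.no_grad():
        prediction = model([panoramic_image])[0]
    keep = prediction["scores"] >= score_threshold
    return prediction["boxes"][keep]          # (N, 4) boxes: x1, y1, x2, y2

def tooth_landmark_from_box(box):
    """Rough placeholder landmark: the center of a detected tooth box.
    In the described method, dedicated landmarks are detected per tooth region."""
    x1, y1, x2, y2 = box.tolist()
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```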
Further, the machine learning algorithm may detect nasion, which is the most concave point between a frontal bone and a nasal bone, and A-point (A), which is the deepest portion of a line connecting an anterior nasal spine in the maxilla and a prosthion, in the CBCT image in the sagittal plane, and wherein the degree of protrusion of a maxilla is derived by measuring a distance between the nasion true vertical plane (NTVP), which is a vertical plane passing through nasion, and A-point.
In addition, the machine learning algorithm may detect nasion, which is the most concave point between a frontal bone and a nasal bone, and B-point, which is the deepest point of a line connecting an infradentale and pogonion (Pog), the most prominent point of the chin, in the CBCT image in the sagittal plane, and wherein the degree of protrusion of the mandible is derived by measuring a distance between the nasion true vertical plane (NTVP), which is a vertical plane passing through nasion, and B-point.
In addition, the machine learning algorithm may detect nasion, which is the most concave point between a frontal bone and a nasal bone, and pogonion, which is the most prominent point of chin, in the CBCT image in the sagittal plane, and wherein the degree of protrusion of chin is derived by measuring a distance between the nasion true vertical plane (NTVP), which is a vertical plane passing through nasion, and pogonion.
In addition, the machine learning algorithm may detect nasion, which is the most concave point between a frontal bone and a nasal bone, and menton, which is the lowest point of the mandible, in the CBCT image in the coronal plane, and wherein the degree of displacement of a center of the mandible is derived by measuring a distance between the nasion true vertical plane (NTVP), which is a vertical plane passing through nasion, and menton.
In addition, the machine learning algorithm may detect nasion, which is the most concave point between a frontal bone and a nasal bone in the CBCT image in the coronal plane, and a midpoint of upper central incisors in the dental panoramic image, and wherein the degree of displacement of the midline of the upper central incisors is derived by measuring a distance between the nasion true vertical plane (NTVP), which is a vertical plane passing through nasion, and the midpoint of the upper central incisors.
In addition, the machine learning algorithm may detect nasion, which is the most concave point between a frontal bone and a nasal bone in the CBCT image in the coronal plane, and a central point of lower central incisors in the dental panoramic image, and wherein the degree of displacement of the midline of the lower central incisors is derived by measuring a distance between the nasion true vertical plane (NTVP), which is a vertical plane passing through nasion, and the midpoint of the lower central incisors.
In addition, the machine learning algorithm may detect nasion, which is the most concave point between a frontal bone and a nasal bone in the CBCT image in the coronal plane, and the cusp tip of the right upper canine in the dental panoramic image, and wherein the vertical distance between the true horizontal plane (THP), which is a horizontal plane passing through nasion, and the cusp tip of the right upper canine is derived.
In addition, the machine learning algorithm may detect nasion, which is the most concave point between a frontal bone and a nasal bone in the CBCT image in the coronal plane, and the cusp tip of the left upper canine in the dental panoramic image, and wherein the vertical distance between the true horizontal plane (THP), which is a horizontal plane passing through nasion, and the cusp tip of the left upper canine is derived.
In addition, the machine learning algorithm may detect nasion, which is the most concave point between a frontal bone and a nasal bone in the CBCT image in the coronal plane, and the mesio-buccal cusp tip of a right upper first molar in the dental panoramic image, and wherein, through a distance between the true horizontal plane (THP), which is a horizontal plane passing through nasion, and the mesio-buccal cusp tip of the right upper first molar, the vertical distance from the THP to the right upper first molar is derived.
In addition, the machine learning algorithm may detect nasion, which is the most concave point between a frontal bone and a nasal bone in the CBCT image in the coronal plane, and the mesio-buccal cusp tip of a left upper first molar in the dental panoramic image, and wherein, through a distance between the true horizontal plane (THP), which is a horizontal plane passing through nasion, and the mesio-buccal cusp tip of the left upper first molar, the vertical distance from the THP to the left upper first molar is derived.
In addition, the machine learning algorithm may detect nasion, which is the most concave point between a frontal bone and a nasal bone in the CBCT image in the coronal plane, and the crown tip of the upper incisor and the root tip of the upper incisor in the incisor image in a cross-sectional view, and wherein the degree of inclination of the upper central incisor is derived through an angle between the true horizontal plane (THP), which is a horizontal plane passing through nasion, and a vector connecting the crown tip of the upper incisor and the root tip of the upper incisor.
In addition, the machine learning algorithm may detect menton, which is the lowest point in the mandible, and gonion, which is a point of maximum curvature in the mandibular angle, in the CBCT image in the sagittal plane, and the crown tip of the lower incisor and the root tip of the lower incisor in the incisor image in a cross-sectional view, and wherein the degree of inclination of the lower central incisor is derived through an angle between a MeGo line connecting menton and gonion and a vector connecting the crown tip of the lower incisor and the root tip of the lower incisor.
In addition, the machine learning algorithm may detect nasion, which is the most concave point between a frontal bone and a nasal bone, menton, which is the lowest point in the mandible, and gonion, which is a point of maximum curvature in the mandibular angle, in the CBCT image in the sagittal plane, and wherein, through an angle between the true horizontal plane (THP), which is a horizontal plane passing through nasion, and a MeGo line connecting menton and gonion, the degree of inclination of the mandible with respect to the THP is derived.
The present invention may further include diagnosing a facial profile of the patient or an occlusal state in response to the derived 13 parameters.
Here, when the patient's occlusal state is diagnosed corresponding to the derived 13 parameters, a state in which an antero-posterior occlusal position of the maxilla and mandible is in a relatively normal category, a state in which the maxilla protrudes relative to the mandible, and a state in which the mandible protrudes relative to the maxilla are classified and diagnosed, respectively.
In addition, when a facial profile of the patient is diagnosed corresponding to the derived 13 parameters, a state in which a length of the facial portion is in a normal category, a state in which the length of the facial portion is shorter than the normal category, and a state in which the length of the facial portion is longer than the normal category are classified and diagnosed, respectively.
Furthermore, the present invention provides a program for deriving cephalometric parameters for orthodontic diagnosis that is installed on a computing device or a computable cloud server and is programmed to automatically perform: detecting a plurality of cephalometric landmarks as output data, using the 3D CBCT image for orthodontic diagnosis obtained in the method of deriving cephalometric parameters for orthodontic diagnosis as input data; and deriving 13 parameters corresponding to distances or angles between the detected plurality of cephalometric landmarks.
According to the method of deriving cephalometric parameters for orthodontic diagnosis based on machine learning, in accordance with the present invention, a plurality of 3D CBCT images for diagnosis, including CBCT images in the direction of the sagittal plane and the coronal plane, an incisor image in a cross-sectional view, and a dental panoramic image of a patient, are extracted from image data captured by cone beam computed tomography (CBCT) at a natural head position with the machine learning algorithm being applied. The positions of predetermined cephalometric landmarks for extracting the parameters from the image are automatically detected, thereby enabling the orthodontic diagnosis operation to be processed very quickly, at a level within approximately tens of seconds.
According to the method of deriving cephalometric parameters for orthodontic diagnosis based on machine learning in accordance with the present invention, based on the cephalometric landmarks detected on a cone beam computed tomography (CBCT) image taken in a natural head position, 13 parameters for diagnosis are selected and derived, which are reduced from the related art, in order to smoothly identify the anterior-posterior skeletal relationship and the degree of protrusion of teeth in the sagittal plane. Accordingly, the machine learning algorithm for detecting the cephalometric landmarks can be simplified and the time required for orthodontic diagnosis can be shortened.
Furthermore, according to the method deriving cephalometric parameters for orthodontic diagnosis based on machine learning in accordance with the present invention, a 3D CBCT image for diagnosis is extracted through the machine learning algorithm, and cephalometric landmarks on the image are detected. As a result, 13 parameters are derived, and not only the facial profile of the patient or occlusal state is automatically diagnosed, but also the range of applications can be further expanded, such as automatically designing a customized dental correction device for the patient in response to the derived parameters.
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. However, the present invention is not limited to the exemplary embodiments described below and may be specified as other aspects. On the contrary, the embodiments introduced herein are provided to make the disclosed content thorough and complete, and sufficiently transfer the spirit of the present invention to those skilled in the art. Like reference numerals indicate like constituent elements throughout the specification.
As illustrated in
In addition, the method deriving cephalometric parameters based on machine learning according to the present invention may further include a step (S400) of diagnosing the patient's facial profile or an occlusal state of the patient in response to the derived 13 parameters.
As illustrated in
In step S100 of obtaining the 3D CBCT image for diagnosis, a dental cone beam computed tomography (CBCT) device may be used to obtain three-dimensional (3D) CBCT image data of the entire region of the patient's head. The CBCT image data 10 may conform to the digital imaging and communications in medicine (DICOM) standard so that it can be processed by the machine learning algorithm, and the 3D CBCT image for diagnosis may be obtained from the CBCT image data by an image extraction function incorporated in the machine learning algorithm or by an image extraction function of a general DICOM viewer.
Here, the CBCT image data 10 may be extracted as a 3D CBCT image for diagnosis, including the CBCT image 20 in the sagittal plane, the CBCT image 30 in the coronal plane, the dental panoramic image 40, and the incisor image in a cross-sectional view 50. Among them, the CBCT image 20 in the sagittal plane and the CBCT image 30 in the coronal plane may be extracted by being categorized into a bone mode image 20a in the sagittal plane and a bone mode image 30a in the coronal plane, which are modes in which the inside of the skull bone tissue is projected and represented, respectively, and may be extracted by being categorized into a depth mode image 20b in the sagittal plane and a depth mode image 30b in the coronal plane, which are modes in which the outside of the skull bone tissue is represented in consideration of the depth, density, or the like of the skull bone tissue of the patient.
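By way of a non-limiting illustration of how a DICOM-compliant CBCT series may be assembled into a volume and re-sliced into sagittal and coronal images, the following sketch uses pydicom and numpy; the mid-volume slice selection and the axis ordering are assumptions introduced for this sketch and do not represent the actual extraction function of the machine learning algorithm or of a particular DICOM viewer.

```python
# Minimal sketch: load a CBCT DICOM series and extract mid-sagittal and
# mid-coronal slices. Axis ordering (z, y, x) and mid-volume slice selection
# are illustrative assumptions only.
import glob
import numpy as np
import pydicom

def load_cbct_volume(dicom_dir: str) -> np.ndarray:
    slices = [pydicom.dcmread(path) for path in glob.glob(f"{dicom_dir}/*.dcm")]
    # Sort axial slices by their position along the patient's vertical axis.
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    return np.stack([ds.pixel_array for ds in slices])   # shape (z, y, x)

def extract_planes(volume: np.ndarray):
    z, y, x = volume.shape
    sagittal = volume[:, :, x // 2]   # slice at the mid left-right position
    coronal = volume[:, y // 2, :]    # slice at the mid antero-posterior position
    return sagittal, coronal
```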
Then, in step S200 of detecting a plurality of cephalometric landmarks, the cephalometric landmarks for orthodontic diagnosis may be automatically detected on the 3D CBCT image for diagnosis through the machine learning algorithm. In the related art, a skilled operator such as a dentist manually designated cephalometric landmarks on the 3D CBCT image for diagnosis, but such a method causes deviations in accuracy depending on the operator's proficiency, and it takes a long time of approximately 30 minutes to one hour to detect the cephalometric landmarks, which leads to a decrease in orthodontic treatment efficiency.
Meanwhile, in the present invention, a machine learning algorithm including a facial profile automatic analysis model and a region-based convolutional neural network (R-CNN) may be used to automatically detect a plurality of predetermined cephalometric landmarks on the 3D CBCT image for diagnosis. As a result, in step S300 of deriving the 13 parameters, 13 parameters selected to correspond to distances or angles defined between a plurality of cephalometric landmarks are derived, and the facial state or oral state of the patient is determined through the derived 13 parameters to perform the orthodontic diagnosis for the patient.
In the related art, it is necessary to detect a large number of cephalometric landmarks, approximately 50 or more, in the 3D CBCT image for diagnosis in order to derive the parameters for orthodontic diagnosis, but in the present invention, 13 parameters are strictly selected to efficiently diagnose the patient's sagittal antero-posterior skeletal relationship, occlusal state, and degree of protrusion of teeth, thereby dramatically reducing the number of cephalometric landmarks that need to be detected to derive the parameters. Accordingly, the machine learning algorithm for implementing this is simplified, and the time required for orthodontic diagnosis can be shortened.
For a method of analyzing cephalometric measurements for the purpose of orthodontic diagnosis, instead of using the Wits appraisal, the Ricketts analysis, the McNamara analysis, or the like, which have been widely practiced, the present inventors have devised a method that can easily diagnose the patient's antero-posterior jaw relationship while dramatically reducing the number of cephalometric landmarks, by using the nasion true vertical plane (NTVP) or the true horizontal plane (THP) passing through nasion (N), which is generally the portion from which the nose bridge begins, in an image taken at a natural head position. By applying this method to the machine learning algorithm, the process of detecting a plurality of cephalometric landmarks can be performed quickly and efficiently.
As illustrated in
Here, in order to provide information on the orthodontic diagnosis with respect to the patient, a degree of protrusion of a maxilla, a degree of protrusion of a mandible, a degree of protrusion of a chin, a degree of displacement of a center of a mandible, a degree of displacement of the midline of upper central incisors, a degree of displacement of the midline of lower central incisors, a vertical distance from the true horizontal plane (THP) passing through nasion (N), which is the most concave point between a frontal bone and a nasal bone, to the cusp tip of a right upper canine, a vertical distance from the THP to the cusp tip of a left upper canine, a vertical distance from the THP to the mesio-buccal cusp tip of a right upper first molar, a vertical distance from the THP to the mesio-buccal cusp tip of a left upper first molar, a degree of inclination of an upper central incisor, a degree of inclination of a lower central incisor, and a degree of inclination of the mandible with respect to the THP may be derived as the 13 parameters.
To this end, the 13 parameters may be defined by keypoints of cephalometric landmarks corresponding to distances or angles between the detected plurality of cephalometric landmarks from the 3D CBCT image for diagnosis, as shown in Table 1 below.
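By way of a non-limiting illustration of how such keypoint-based definitions translate into distance and angle computations, the following sketch assumes a coordinate convention in which x is antero-posterior, y is left-right, and z is vertical, with nasion (N) fixing the NTVP and the THP; the function names and the landmark dictionary are introduced for this sketch only and are not the actual implementation of the present invention.

```python
# Illustrative helpers: 3D landmark coordinates are assumed to be stored as
# (x, y, z) tuples, with x antero-posterior, y left-right, and z vertical.
import numpy as np

def protrusion_from_ntvp(landmarks: dict, point: str) -> float:
    """Antero-posterior (x-axis) distance from the nasion true vertical
    plane to the given landmark, e.g. 'A', 'B' or 'Pog'."""
    return landmarks[point][0] - landmarks["N"][0]

def midline_displacement(landmarks: dict, point: str) -> float:
    """Left-right (y-axis) distance from the vertical plane through nasion,
    e.g. for 'Me', 'UDM' or 'LDM'."""
    return landmarks[point][1] - landmarks["N"][1]

def vertical_distance_from_thp(landmarks: dict, point: str) -> float:
    """Vertical (z-axis) distance from the true horizontal plane through
    nasion, e.g. to a canine cusp tip or a first-molar cusp tip."""
    return abs(landmarks[point][2] - landmarks["N"][2])

def angle_between(v1, v2) -> float:
    """Angle in degrees between two vectors (used for the incisor
    inclinations and the mandibular plane angle)."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```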
With reference to
The machine learning algorithm may detect nasion (N), which is the most concave point between a frontal bone and a nasal bone, and A-point (A), which is the most concave point in the maxilla, in the CBCT image 20 in the sagittal plane, and may diagnose the antero-posterior relationship of the maxilla and the mandible of the patient by deriving one of the 13 parameters, which is the degree of protrusion of the maxilla, through measuring a distance in the x-axis direction in the CBCT image 20 in the sagittal plane between the nasion true vertical plane (NTVP), which is a vertical plane passing through nasion (N), and A-point (A).
In addition, the machine learning algorithm may detect nasion (N), which is the most concave point between a frontal bone and a nasal bone, and B-point (B), which is the most concave point in the mandible, in the CBCT image 20 in the sagittal plane, and may diagnose the antero-posterior relationship of the maxilla and the mandible of the patient by deriving one of the 13 parameters, which is the degree of protrusion of the mandible, through measuring a distance in the x-axis direction in the CBCT image 20 in the sagittal plane between the nasion true vertical plane (NTVP), which is a vertical plane passing through nasion (N), and B-point (B).
In addition, the machine learning algorithm may detect nasion (N), which is the most concave point between a frontal bone and a nasal bone, and pogonion (Pog), which is the most prominent point on the chin, in the CBCT image 20 in the sagittal plane, and may diagnose the antero-posterior relationship of the maxilla and the mandible of the patient by deriving one of the 13 parameters, which is the degree of protrusion of the chin, through measuring a distance in the x-axis direction in the CBCT image 20 in the sagittal plane, between the nasion true vertical plane (NTVP), which is a vertical plane passing through nasion (N), and pogonion (Pog).
In addition, the machine learning algorithm may detect nasion (N), which is the most concave point between a frontal bone and a nasal bone in the CBCT image 30 in the coronal plane, and menton (Me), which is the lowest point of the mandible, and may diagnose the left-right occlusal relationship of the patient's maxilla and mandible by deriving the degree of displacement of the center of the mandible, which is one of the 13 parameters, through measuring a distance in the y-axis direction in the CBCT image 30 in the coronal plane between the nasion true vertical plane (NTVP), which is a vertical plane passing through nasion (N), and menton (Me).
In addition, the machine learning algorithm may detect nasion (N), which is the most concave point between a frontal bone and a nasal bone extracted from the CBCT image 30 in the coronal plane, and a vertical line passing through the midline of the upper central incisors (Upper dental midline; UDM) in the dental panoramic image 40, and may diagnose the left-right occlusal relationship of the patient's maxilla and mandible by deriving the degree of displacement of the midline of the upper central incisors, which is one of the 13 parameters, through measuring a distance in the y-axis direction in the CBCT image in the coronal plane between the nasion true vertical plane (NTVP), which is a vertical plane passing through nasion (N), and a vertical line passing through the center of the upper central incisors (UDM).
In addition, the machine learning algorithm may detect nasion (N), which is the most concave point between a frontal bone and a nasal bone extracted from the CBCT image 30 in the coronal plane, and a vertical line passing through the midline of the lower central incisors (Lower dental midline; LDM) in the dental panoramic image 40, and may diagnose the left and right occlusal relationship of the patient's maxilla and mandible by deriving the degree of displacement of the midline of the lower central incisors, which is one of the 13 parameters, through measuring a distance in the y-axis direction in the CBCT image in the coronal plane, between a vertical plane passing through nasion (N) and a vertical line passing through the center of the lower central incisors (LDM).
In addition, the machine learning algorithm may detect nasion (N), which is the most concave point between a frontal bone and a nasal bone extracted from the CBCT image 30 in the coronal plane, and the cusp tip of the right upper canine (Ct (Rt)) in the dental panoramic image 40. A vertical distance between the true horizontal plane (THP), which is a horizontal plane passing through nasion (N), and the cusp tip of the upper right canine (Ct (Rt)) is derived as one of the 13 parameters, so that the machine learning algorithm may identify a distance between the horizontal plane and the right upper canine.
In addition, the machine learning algorithm may detect nasion (N), which is the most concave point between a frontal bone and a nasal bone extracted from the CBCT image 30 in the coronal plane, and the cusp tip of the left upper canine (Ct (Lt)) in the dental panoramic image 40. A vertical distance in the z-axis direction between the true horizontal plane (THP), which is a horizontal plane passing through nasion (N), and the cusp tip of the left upper canine (Ct (Lt)) in the CBCT image 30 in the coronal plane is derived as one of the 13 parameters, so that the machine learning algorithm may identify a distance between the horizontal plane and the left upper canine.
As a result, the distances from the THP to the right and left upper canines should match each other, but when there is a discrepancy therebetween, it can be seen that the maxilla is inclined in the canine portion.
In addition, the machine learning algorithm may detect nasion (N), which is the most concave point between a frontal bone and a nasal bone extracted from the CBCT image 30 in the coronal plane, and the mesio-buccal cusp tip of the right upper first molar (U6 MB (Rt)) in the dental panoramic image 40. A vertical distance between the true horizontal plane (THP), which is a horizontal plane passing through nasion, and the right upper first molar (U6 MB (Rt)) is derived as one of the 13 parameters, so that the machine learning algorithm may identify a distance between the horizontal plane and the right upper first molar.
In addition, the machine learning algorithm may detect nasion (N), which is the most concave point between a frontal bone and a nasal bone extracted from the CBCT image 30 in the coronal plane, and the mesio-buccal cusp tip of the left upper first molar (U6 MB (Lt)) in the dental panoramic image 40. A vertical distance between the true horizontal plane (THP), which is a horizontal plane passing through nasion (N), and the left upper first molar (U6 MB (Lt)) is derived as one of the 13 parameters, so that the machine learning algorithm may identify a distance between the horizontal plane and the left upper first molar.
As a result, the distances from the THP to the right upper first molar and the left upper first molar should match each other, but when there is a discrepancy therebetween, it can be seen that the maxilla is inclined in the molar portion.
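As a non-limiting sketch of the left-right comparison described above, the snippet below compares the vertical distances from the THP for the canines and the first molars; the tolerance value is a placeholder introduced for illustration, not a clinical threshold disclosed by the present invention.

```python
# Hypothetical canting check: compare left/right vertical distances from the
# THP. The 0.5 mm tolerance is an illustrative placeholder only.
def maxilla_canting(thp_distances: dict, tolerance_mm: float = 0.5) -> dict:
    """thp_distances holds keys such as 'canine_right', 'canine_left',
    'molar_right', 'molar_left' with vertical distances in millimetres."""
    return {
        "canine_cant": abs(thp_distances["canine_right"] - thp_distances["canine_left"]) > tolerance_mm,
        "molar_cant": abs(thp_distances["molar_right"] - thp_distances["molar_left"]) > tolerance_mm,
    }
```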
In addition, the machine learning algorithm may detect nasion (N), which is the most concave point between a frontal bone and a nasal bone extracted from the CBCT image 30 in the coronal plane, and the crown tip T1 and the root tip T2 of the upper incisor in the incisor image in a cross-sectional view 50. The degree of inclination of the upper central incisor is derived as one of the 13 parameters through an angle between the true horizontal plane (THP), which is a horizontal plane passing through nasion (N), and a vector connecting the crown tip T1 and the root tip T2 of the upper central incisor, so that the machine learning algorithm may diagnose the occlusal state of the patient.
In addition, the machine learning algorithm may detect menton (Me), which is the lowest point in the mandible, and gonion (Go), which is a point of maximum curvature in the mandibular angle, in the CBCT image 20 in the sagittal plane, and the crown tip T3 of the lower incisor and the root tip T4 of the lower incisor in the incisor image in a cross-sectional view 50, respectively. The degree of inclination of the lower central incisor is derived as one of the 13 parameters through an angle between a MeGo line connecting menton (Me) and gonion (Go) and a vector connecting the crown tip T3 of the lower incisor and the root tip T4 of the lower incisor, so that the machine learning algorithm may diagnose the occlusal state of the patient.
In addition, the machine learning algorithm may detect nasion (N), which is the most concave point between a frontal bone and a nasal bone, menton (Me), which is the lowest point on the mandible, and the gonion (Go), which is the point of maximum curvature on the mandibular angle, in the CBCT image 20 in the sagittal plane. The degree of inclination of the mandible with respect to the true horizontal plane (THP), which is a horizontal plane passing through nasion (N) is derived as one of the 13 parameters through an angle between the true horizontal plane (THP) and the MeGo line connecting menton (Me) and the gonion (Go), so that the machine learning algorithm may diagnose the patient's vertical mandibular and maxillary relationship.
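For the three angular parameters described above, a non-limiting two-dimensional sketch, working in the sagittal plane under the same assumed coordinate convention (x antero-posterior, z vertical), may be given as follows; the landmark names are introduced for this sketch only.

```python
# Hedged 2D sketch in the sagittal (x, z) plane; landmark keys are illustrative.
import numpy as np

def _angle_deg(v1, v2):
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def _sagittal_vector(p_from, p_to):
    # Keep only the antero-posterior (x) and vertical (z) components.
    return np.array([p_to[0] - p_from[0], p_to[2] - p_from[2]], dtype=float)

THP_DIRECTION = np.array([1.0, 0.0])   # horizontal direction inside the sagittal plane

def upper_incisor_inclination(lm):
    """Angle between the THP and the upper incisor axis (crown tip T1 to root tip T2)."""
    return _angle_deg(THP_DIRECTION, _sagittal_vector(lm["U1_crown"], lm["U1_root"]))

def lower_incisor_inclination(lm):
    """Angle between the MeGo line and the lower incisor axis (crown tip T3 to root tip T4)."""
    me_go = _sagittal_vector(lm["Me"], lm["Go"])
    return _angle_deg(me_go, _sagittal_vector(lm["L1_crown"], lm["L1_root"]))

def mandibular_plane_angle(lm):
    """Angle between the THP and the MeGo line (inclination of the mandible)."""
    return _angle_deg(THP_DIRECTION, _sagittal_vector(lm["Me"], lm["Go"]))
```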
As illustrated in
To this end, in order to divide the CBCT image 20 in the sagittal plane into each of the four regions of interest, the machine learning algorithm may divide the CBCT image 20 in the sagittal plane into a facial portion profile region 27, which is indicated by a red line along the front of the patient's facial portion and is configured with the first region 21, the second region 23, and the third region 25, and a jaw profile region, which is indicated by a green line along the patient's mandible region and is configured with the fourth region 29, and may extract the facial portion profile region 27 and the jaw profile region. The CBCT image 20 in the sagittal plane may be divided into a plurality of unit pixels that are horizontal or vertical to the y-axis direction, respectively, for extraction of the facial portion profile region 27 and the jaw profile region configured with the fourth region 29 from the CBCT image 20 in the sagittal plane.
As illustrated in
Here, n is the number of pixels divided in the y-axis direction in the CBCT image in the sagittal plane.
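As a non-limiting illustration of the pixel-wise scan described above, the following sketch traces the facial portion profile by finding, for each of the n rows of the sagittal bone-mode image, the most anterior pixel above a bone threshold; the threshold value and the assumption that the anterior direction corresponds to an increasing column index are introduced for this sketch only.

```python
# Hedged sketch: trace the facial profile on the sagittal bone-mode image by
# scanning each of the n rows for its most anterior bone pixel. The bone
# threshold and the anterior-direction convention are illustrative assumptions.
import numpy as np

def facial_profile(sagittal_bone_image: np.ndarray, bone_threshold: float = 300.0):
    """Return, for each image row, the column index of the most anterior pixel
    above the threshold (or -1 if the row contains no bone pixel)."""
    mask = sagittal_bone_image > bone_threshold
    n_rows, _ = mask.shape
    profile = np.full(n_rows, -1, dtype=int)
    for row in range(n_rows):
        cols = np.flatnonzero(mask[row])
        if cols.size:
            profile[row] = cols.max()        # most anterior bone pixel in this row
    return profile
```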
Next, based on the facial portion profile region 27 obtained from the CBCT image 20 in the sagittal plane, menton (Me) is designated as a starting point to extract the jaw profile region including the fourth region 29. As illustrated in
With reference to
In addition, the machine learning algorithm may detect the root tip of the upper incisor portion in the second region 23 constituting the CBCT image 20 in the sagittal plane as A-point (A). Meanwhile, when the machine learning algorithm has difficulty in recognizing the shape of the upper incisor portion within the second region 23 constituting the CBCT image in the sagittal plane, the shape of the upper incisor portion may be supplemented by gently connecting a boundary region between an acanthion provided in an upper portion of the upper incisor portion and protruding forward to the teeth and the upper incisor portion. Then, the machine learning algorithm may detect a point that is at the lowest position in the x-axis direction within a facial portion profile boundary region, or the point that has the smallest slope within the facial portion profile boundary region, as A-point (A).
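A non-limiting sketch of the A-point rule described above, namely the most posterior point, or alternatively the point of smallest slope, within a given segment of the facial portion profile, is shown below; the row-range arguments delimiting the segment are assumptions introduced for illustration.

```python
# Hedged sketch: pick A-point on the facial profile between two given rows,
# either as the most posterior point (smallest column index) or as the point
# where the profile slope is smallest in magnitude.
import numpy as np

def detect_a_point(profile: np.ndarray, row_ans: int, row_upper_incisor: int,
                   use_slope: bool = False):
    rows = np.arange(row_ans, row_upper_incisor + 1)
    segment = profile[rows]
    if use_slope:
        slope = np.abs(np.gradient(segment.astype(float)))
        idx = int(np.argmin(slope))           # point of smallest slope
    else:
        idx = int(np.argmin(segment))         # deepest (most posterior) point
    return int(rows[idx]), int(segment[idx])  # (row, column) of A-point
```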
In addition, the machine learning algorithm may detect B-point (B), which is the lowest point in the x-axis direction in the mandible, pogonion (Pog), which is the highest point in the x-axis direction in the mandible, and menton (Me), which is the lowest point in the y-axis direction in the mandible, respectively, in the third region 25 constituting the CBCT image 20 in the sagittal plane.
Meanwhile, when the mandible of the patient is recessed inwardly relative to the lower incisor portion, B-point (B) and pogonion (Pog) may not be smoothly detected by the method described above, in which case the machine learning algorithm may detect the most concave point and the most prominent point of the mandible as B-point (B) and pogonion (Pog), respectively, in the third region 25 constituting the CBCT image 20 in the sagittal plane.
In addition, the machine learning algorithm may detect gonion (Go), which is the point of maximum curvature of the mandibular angle, in the fourth region 29 constituting the CBCT image 20 in the sagittal plane. To this end, the machine learning algorithm may detect an intersecting point of a tangent line that passes through menton (Me) and is tangent to the lower portion of the mandible, and the tangent line that passes through the articular bone of jaw (Ar) and is tangent to a left sided portion of the mandible, as gonion (Go).
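The gonion construction described above amounts to intersecting two tangent lines, one passing through menton (Me) and one passing through the articular bone of the jaw (Ar). A non-limiting sketch of such a line intersection, with each tangent line given as an assumed point and direction vector in the sagittal image, is shown below.

```python
# Hedged sketch: intersect the tangent line through menton (Me) with the
# tangent line through articulare (Ar); the resulting point is taken as
# gonion (Go). Inputs are 2D points and direction vectors (assumed given).
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersection of the lines p1 + t*d1 and p2 + s*d2 in 2D."""
    p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
    a = np.column_stack([d1, -d2])            # solve t*d1 - s*d2 = p2 - p1
    t, _ = np.linalg.solve(a, p2 - p1)
    return p1 + t * d1

# Example with assumed inputs:
# gonion = line_intersection(me_point, lower_border_direction, ar_point, ramus_direction)
```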
As illustrated in
Meanwhile, since nasion (N) detected in the CBCT image 30 in the coronal plane and nasion (N) detected in the CBCT image 20 in the sagittal plane share the same z-axis position coordinate, the y-axis position coordinate in the CBCT image 30 in the coronal plane may serve as a major factor in the process of detecting nasion (N). In the CBCT image 30 in the coronal plane, the y-axis position coordinate yN of nasion (N) may be detected by the following expression 2 by detecting a left end coordinate value Ti and a right end coordinate value Ti′, in the nasal bone region, from a plurality of unit pixels included between
(where ZN is the z-axis position coordinate of an N point, and n is a natural number).
In the process of detecting the sixth region 33 in the CBCT image 30 in the coronal plane, the machine learning algorithm detects intersection points S1 and S2 where a vertical line passing through the midpoint (ZA + ZB)/2 of the z-axis position coordinate (ZA) of A-point (A) and the z-axis position coordinate (ZB) of B-point (B), which are detected in the CBCT image 20 in the sagittal plane, meets the left mandible and the right mandible of the CBCT image 30 in the coronal plane, respectively, and the machine learning algorithm may designate a region below the detected pair of intersection points S1 and S2 as the sixth region 33 in the CBCT image 30 in the coronal plane.
Further, the machine learning algorithm may detect a region of a convex shape in the z-axis direction in the sixth region 33 of the CBCT image 30 in the coronal plane, and detect a point with the largest x-axis coordinate value in the region as menton (Me).
As a statistical technique for the classification above, a two-dimensional position coordinate corresponding to each of the positions of the detected plurality of teeth landmarks 42 may be set according to a linear regression method, and a quadratic function 43 passing through the coordinates may be generated. The teeth landmarks 42 may be divided into upper teeth landmarks 42a and lower teeth landmarks 42b by detecting the position of the detected teeth landmarks 42 in
Through the numbering process above, the entire set of teeth of the patient appearing in the dental panoramic image 40 may be numbered sequentially in order of shortest horizontal distance from the midline 44 of the facial portion to the detected teeth landmarks 42 (see
In addition, through the numbering process above, a missing tooth 40m may be detected by detecting an abnormal deviation in the distance from the midline 44 of the facial portion to the teeth landmarks detected on each of two neighboring teeth. In the embodiment in
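As a non-limiting, hedged sketch of the panoramic post-processing described above, fitting a quadratic curve through the detected teeth landmarks to separate upper from lower teeth, numbering each quadrant by horizontal distance from the facial midline, and flagging an abnormal gap as a possible missing tooth, the following may be considered; the gap criterion, the side convention, and all names are assumptions introduced for illustration.

```python
# Hedged sketch of the panoramic post-processing: quadratic split of upper and
# lower teeth, per-quadrant numbering, and a simple missing-tooth gap check.
import numpy as np

def split_upper_lower(landmarks_xy: np.ndarray):
    """Fit y = ax^2 + bx + c through the tooth landmarks (N x 2 array of x, y
    in image coordinates, y increasing downward) and split points by the curve."""
    coeffs = np.polyfit(landmarks_xy[:, 0], landmarks_xy[:, 1], deg=2)
    curve_y = np.polyval(coeffs, landmarks_xy[:, 0])
    upper = landmarks_xy[landmarks_xy[:, 1] < curve_y]    # above the curve
    lower = landmarks_xy[landmarks_xy[:, 1] >= curve_y]   # below the curve
    return upper, lower

def number_quadrant(landmarks_xy: np.ndarray, midline_x: float, right_side: bool):
    """Number the teeth of one quadrant in order of increasing distance from
    the facial midline (1 = central incisor, 2 = lateral incisor, 3 = canine, ...).
    Assumes the patient's right side appears on the left of the panoramic image."""
    side = landmarks_xy[(landmarks_xy[:, 0] < midline_x) == right_side]
    order = np.argsort(np.abs(side[:, 0] - midline_x))
    return side[order]                                     # row i -> tooth number i + 1

def flag_missing_tooth(numbered_xy: np.ndarray, gap_factor: float = 1.8):
    """Flag a possible missing tooth where the gap between neighbouring teeth
    deviates abnormally from the median gap (the factor is a placeholder)."""
    gaps = np.abs(np.diff(numbered_xy[:, 0]))
    median_gap = np.median(gaps)
    return [i + 1 for i, gap in enumerate(gaps) if gap > gap_factor * median_gap]
```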
With reference to
As illustrated in
Here, the three teeth landmarks 47 detected at each of a plurality of teeth 45 to be detected are configured with a left teeth landmark P1, a center teeth landmark P2, and a right teeth landmark P3 in a tooth enamel site of a dental crown constituting the tooth. Consequently, a plurality of cephalometric landmarks may be detected in the dental panoramic image 40 for deriving the parameters from each of the teeth 45 to be detected based on position coordinates 48 for each of the three teeth landmarks detected from the teeth 45 to be detected that have been trained from the CNN model.
For example, the machine learning algorithm may define the midpoint between the center teeth landmarks P2 detected on the upper central incisors as the midline point of the upper central incisors (UDM), and the midpoint between the center teeth landmarks P2 detected on the lower central incisors as the midline point of the lower central incisors (LDM), among the three teeth landmarks detected on each of the upper incisors and the lower incisors in the dental panoramic image 40 illustrated in
Meanwhile, the three teeth landmarks P1, P2, and P3 are detected from each of the four anterior teeth 45 detected through the dental panoramic image 40, and the incisor image in a cross-sectional view 50 may be obtained from the dental panoramic image 40 based on the three teeth landmarks P1, P2, and P3 detected from each of the four anterior teeth 45 (see
The machine learning algorithm may measure the angle between the vector connecting the crown tip T1 and the root tip T2 of the upper incisor detected on the incisor image in a cross-sectional view and the true horizontal plane (THP) passing through nasion (N), and measure the angle between the vector connecting the crown tip T3 and the root tip T4 of the lower incisor and the MeGo line, consequently to evaluate the degree of inclination of the anterior teeth portion and use the degree of inclination as a parameter for orthodontic diagnosis.
As described above, the method deriving cephalometric parameters for orthodontic diagnosis based on machine learning according to the present invention may further include a step S400 diagnosing the facial profile of the patient or occlusal state corresponding to the 13 parameters derived in step S300 deriving the 13 parameters.
Here, when the patient's occlusal state is diagnosed in correspondence with the 13 parameters, the machine learning algorithm may diagnose the patient's occlusal state by classifying the occlusal state into a state in which the antero-posterior relationship of the maxilla and the mandible is in a relatively normal category, a state in which the maxilla is relatively protruding compared to the mandible, and a state in which the mandible is relatively protruding compared to the maxilla, respectively, according to an internally stored reference value of the parameter in order to distinguish between a normal occlusion and a malocclusion.
Further, when the facial profile of the patient is diagnosed in correspondence with the derived 13 parameters, the machine learning algorithm may diagnose the facial profile of the patient by classifying the facial profile of the patient into a state in which a length of the facial portion is in a normal category, a state in which the length of the facial portion is shorter than the normal category, and a state in which the length of the facial portion is longer than the normal category according to the internally stored reference value of the parameter in order to analyze the length of the patient's facial portion to perform the correction diagnosis.
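As a final non-limiting sketch, the classification step described above can be expressed as simple comparisons of derived parameters against stored reference values; the reference ranges below are placeholders chosen for illustration only and are not the reference values stored in the actual system.

```python
# Hedged sketch: classify the sagittal skeletal relationship and the facial
# pattern from derived parameters. All reference ranges are placeholders.
def classify_skeletal_class(maxilla_protrusion_mm: float, mandible_protrusion_mm: float,
                            normal_range_mm=(-2.0, 2.0)):
    """Class I when the antero-posterior difference lies within the (placeholder)
    normal range, Class II when the maxilla is relatively protrusive, and
    Class III when the mandible is relatively protrusive."""
    difference = maxilla_protrusion_mm - mandible_protrusion_mm
    low, high = normal_range_mm
    if difference > high:
        return "Class II"
    if difference < low:
        return "Class III"
    return "Class I"

def classify_facial_pattern(mandibular_plane_angle_deg: float, normal_range_deg=(24.0, 32.0)):
    """Meso-, brachy- or dolicho-cephalic facial pattern from the mandibular
    plane angle; the range is an illustrative placeholder."""
    low, high = normal_range_deg
    if mandibular_plane_angle_deg < low:
        return "Brachy-cephalic facial pattern"
    if mandibular_plane_angle_deg > high:
        return "Dolicho-cephalic facial pattern"
    return "Meso-cephalic facial pattern"
```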
The graphic user interface (GUI) to which the method deriving cephalometric parameters based on machine learning illustrated in
For example, when step S100 obtaining the 3D CBCT image for diagnosis is executed in the method deriving cephalometric parameters based on machine learning, a plurality of 3D CBCT images for diagnosis, including the cone beam computed tomography (CBCT) image data 10 of the facial portion of the patient through a 3D CBCT image generation area 100, and the CBCT image 20 in the sagittal plane, the CBCT image 30 in the coronal plane, the dental panoramic image 40, and the incisor image in a cross-sectional view 50 that are extracted from the CBCT image data 10, may appear on the display screen.
Further, when step S200 detecting a plurality of cephalometric landmarks on the method deriving cephalometric parameters based on machine learning is executed, the cephalometric landmarks for deriving the parameters are automatically detected by the machine learning algorithm of the present invention in a cephalometric landmarks display area 200, or icons such as various symbols, shapes, and the like corresponding to the detected cephalometric landmarks may appear on the display screen to allow a skilled person, such as a dentist, to display the cephalometric landmarks that the skilled person manually and directly detects on the display screen.
As illustrated in
Meanwhile, it has been described above that the method deriving cephalometric parameters based on machine learning according to the present invention may further include step S400 diagnosing the facial profile of the patient or occlusal state in correspondence with the 13 parameters that have been derived in step S300 deriving the 13 parameters.
Accordingly, when step S400 diagnosing the facial profile of the patient or occlusal state in the method deriving cephalometric parameters for orthodontic diagnosis based on machine learning is executed, information 320 that the occlusal state or the facial profile of the patient has been automatically diagnosed in correspondence with the 13 parameters may be displayed on the display screen through the diagnosis result display area 300.
For example, when the antero-posterior position of the mandible and the maxilla of the patient is in the relatively normal category, the diagnosis information may be represented as “Class I”; when the maxilla of the patient relatively protrudes compared to the mandible, the diagnosis information may be represented as “Class II”; and when the mandible relatively protrudes compared to the maxilla, the diagnosis information may be represented as “Class III”, and the like.
In addition, information 320 on the facial profile of the patient may be displayed in the diagnosis result display area 300 of the display screen as a phrase such as “Meso-cephalic facial pattern” for a state in which the length of the patient's facial portion is in the normal category, “Brachy-cephalic facial pattern” for a state in which the length of the patient's facial portion is shorter than the normal category, and “Dolicho-cephalic facial pattern” for a state in which the length of the patient's facial portion is longer than the normal category.
This method of deriving cephalometric parameters may be implemented as a program for deriving cephalometric parameters and installed or stored on a user computing device or a computable cloud server. Such a program may be programmed to automatically perform a step of detecting a plurality of cephalometric landmarks as output data and a step of deriving 13 parameters corresponding to the distances or angles between the detected plurality of cephalometric landmarks, using the 3D CBCT image for diagnosis extracted after the step of obtaining the 3D CBCT image for diagnosis of the aforementioned method of deriving cephalometric parameters as input data.
Of course, the program for deriving cephalometric parameters may be programmed such that the step detecting a plurality of cephalometric landmarks as output data and the step of deriving 13 parameters corresponding to the distances or angles between the detected cephalometric landmarks are performed sequentially or step by step according to the user's selection.
As described above, in the method deriving cephalometric parameters based on machine learning according to the present invention, the 3D CBCT image for diagnosis extracted from a specific angle is obtained from the image data captured by the CBCT for a patient with the machine learning algorithm being applied, and the entire process of deriving 13 parameters corresponding to a plurality of cephalometric landmarks detected from the 3D CBCT image for diagnosis can be performed within a few seconds to tens of seconds, so that the derivation of parameters for orthodontic diagnosis can be performed quickly and consistently with high accuracy.
Further, when the method of deriving cephalometric parameters based on machine learning according to the present invention is combined with a graphic user interface, the results of each step of the method are displayed on a display screen, so that a third party, such as a patient or a dentist, can smoothly grasp the process of deriving cephalometric landmarks and 13 parameters for orthodontic diagnosis and the diagnosis results.
Furthermore, the method of deriving cephalometric parameters based on machine learning according to the present invention has the expected effect of further expanding the scope of application, such as automatically designing a customized dental orthodontic device for a patient in correspondence with the derived 13 parameters, in addition to the effect of automatically diagnosing and indicating the facial profile or occlusal state of the patient.
While the present invention has been described above with reference to the exemplary embodiments, it may be understood by those skilled in the art that the present invention may be variously modified and changed without departing from the spirit and scope of the present invention disclosed in the claims. Therefore, it should be understood that any modified embodiment that essentially includes the constituent elements of the claims of the present invention is included in the technical scope of the present invention.