This invention relates to a process for defining a common reference system in a set of volume data that represents an area of a patient's jaw and that is recorded by means of an x-ray imaging process, and a set of surface data, at least some of which represents the same area of the patient's jaw and which is recorded by means of a process for recording visible surfaces.
Such a “registration” of two sets of data is always necessary if they were generated with different systems and/or at different times and the relevant information is to be transferred between them for further processing. Thus, for example, for planning an operation, for example an implantation, computed tomography (CT) or cone beam x-ray images of a patient's jaw are taken, which contain detailed anatomical information. In contrast, for planning and producing dental prosthetic supplies, 3D surface images are taken directly from the jaw, or from a mold of the jaw, with an optical recording unit, for example a CEREC measuring camera from Sirona Dental Systems GmbH. In contrast to the tomographic images, these surface data contain precise information on the course of the visible surface of the jaw, in particular the surface of the teeth and the mucous membrane. The visible surface is thus captured accurately by this process.
The combination (fusion) of these two sets of data generates valuable additional information, which can be used both in the planning and implementation of the operation and in the planning and production of prosthetic supplies. Up until now, only processes for registering those sets of data that operate using models, markers or similar mechanical aids have been known.
From one publication (Nkenke, E., Zachow, S. et al.: Fusion of Computer Tomography Data and Optical 3D Images of the Dentition for Streak Artifact Correction in the Simulation of Orthognathic Surgery. Dentomaxillofacial Radiology (2004), 33, 226-232), a prototype of the registration of an x-ray data set of the jaw and the surface data set of a corresponding plaster model is known. In this case, first the visible surface, i.e., the surface of the teeth and the mucous membrane, is extracted from the x-ray image of the plaster model before said visible surface is then registered with the surface from the optical image using an ICP algorithm (“iterative closest point”). In practice, however, this process is hardly usable, since the extraction of the surface from the x-ray data set of a real patient is too inaccurate, so that the requirements for a reliable registration of the surfaces are not met.
Moreover, the use of reference elements (markers) is known. Because of the associated problems of attachment and the inconvenience for patients, however, markers are only used when there is no simpler option. Thus, for example, U.S. Pat. No. 5,842,858 discloses a process in which the patient wears a template with markers during the x-ray imaging; the template is then placed on the model, to which a sensor for 3D position detection is attached. Once the positional relationship between the sensor and the markers has been determined, the template can be removed and the optical imaging can be performed. In this case, the 3D sensor makes the registration relative to the patient imaging possible.
The object of the invention is now to provide a process for registering a volume-data set of a real patient and a set of corresponding surface data, which can be carried out by the attending physician directly, simply and conveniently, without additional aids.
This object is achieved by the process according to claim 1. Advantageous embodiments are mentioned in the subclaims.
The essential idea of the invention can be paraphrased as follows: Starting from the tomographically recorded volume data and the surface data, the two sets of data are advantageously depicted together in one image, or optionally in two separate images, on a screen, whereby the mutual orientation of the objects visible therein is initially still relatively insignificant. The objects are ideally teeth, which are readily detectable in both visualizations. On the screen, in a kind of prepositioning, one visualization of a marking object is “manually” placed over the other visualization of the object as well as possible, for example by guiding it with a cursor. Then a marking volume structure extracted from the volume data, formed for example by edges of the object, is brought into maximum overlap, by means of a transformation function, with the corresponding structure in the surface data, referred to below as the surface structure. For this purpose, a measure of the quality of the overlap is defined, and the extracted structure is matched in iterative steps to the surface structure visible in the surface data by optimizing this quality criterion.
The idea that is essential to the invention is thus to extract the complete relevant information from the x-ray volume data set and to convert it into another volume form, namely that of the marking volume structure. This then makes possible the direct comparison with the corresponding location on the surface of the optical image. In a manner according to the invention, the coordinates of the optical image are automatically brought into alignment with the coordinates of the x-ray image by iteration. The invention thus represents a process that makes possible a precise and automated registration of an optical image with an x-ray image of a patient. With this process, the registration, in which the optical image is overlapped in space with the tomographic image, can be performed without using any external reference elements, physical models, such as plaster models, or mechanical devices. This registration is automated to a large extent and can be performed in an amount of time on the order of magnitude of, for example, 15-30 seconds.
The fusion of the two data sets performed in such a way is helpful both for the planning and for the implementation of the operation and can also be used in prosthetic supplies. Thus, for example, in implantation planning, in addition to the anatomical information from the x-ray data set, the exact course of the surface of the mucous membrane can also be examined, which, as already stated, cannot be detected to the desired extent in the volume data recorded by x-ray. Another application is to integrate the prosthetic information when using the CEREC system and to implement a kind of implantation planning that is based on both anatomy and prosthetics. In addition, artifacts in x-ray data sets, caused by, for example, metal fillings, can be reduced. By the fusion of the data set with an optical image of the jaw, which is completely free of any metal artifacts, the outer contours of the patient's data set can be correctly reproduced with simultaneous visualization of the relevant volume information.
For a largely automatic prepositioning, it is especially advantageous if the user defines reference points on the object or objects in the first visualization of the two data sets. These can then be superimposed by the graphics program, so that a first approximation to an overlap of the data sets is already given. The prepositioning at this stage does not need to be especially exact; it merely has to lie within a certain tolerance. For this automatic prepositioning, at least a first reference point is defined on the surface of the object that is depicted in the volume data, in particular on the surface of a tooth, and at least a second reference point is defined at almost the same location on the surface of the object that is visible in the surface data, especially the same tooth. As pointed out, in the automatic prepositioning, the corresponding reference points of the object are placed on top of one another as much as possible by means of an automatically calculated transformation. Depending on the number of reference points, this transformation advantageously corresponds to an analytically determined shift (one reference point), an analytically determined shift with rotation (two reference points), or a shift with rotation determined by means of a least-squares minimization process (three or more reference points; also known as “point-based registration”). Advantageously, the reference points are defined by the user on the screen by means of a cursor that can be moved over the screen, in particular by means of mouse clicks.
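The least-squares variant of this point-based registration can be sketched as the classical Kabsch/Procrustes fit. The following Python sketch is illustrative only and not part of the claimed process; it assumes the user-picked reference points are available as corresponding arrays of 3D coordinates, and the function name is a placeholder:

```python
import numpy as np

def prealign(ref_vol, ref_surf):
    """Least-squares rigid fit (rotation + shift) mapping surface
    reference points onto the matching volume reference points
    (Kabsch / Procrustes solution, assuming >= 3 point pairs).
    Rows of ref_vol and ref_surf are corresponding 3D points."""
    ref_vol = np.asarray(ref_vol, float)
    ref_surf = np.asarray(ref_surf, float)
    c_v, c_s = ref_vol.mean(0), ref_surf.mean(0)
    # cross-covariance of the centered point sets
    H = (ref_surf - c_s).T @ (ref_vol - c_v)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_v - R @ c_s
    return R, t  # maps a surface point p to the volume frame: R @ p + t
```

With one reference point the same idea degenerates to the pure shift `t = c_v - c_s`, matching the analytic cases mentioned above.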
The user can be supported by the software when setting the reference points. Thus, for example, suitable reference points in the surface data can automatically be proposed by the software, whereby it is then the task of the user to mark the corresponding reference points in the volume data set.
Below, the procedure according to the invention is explained in more detail based on
In
In
A similar procedure is employed in the next step, shown in
Since the noise in the original image influences such an edge detection, a 3D smoothing can then be performed with a suitable smoothing filter, such as a Gauss filter or a median filter. As a result, the capture range of the optimization algorithm is enlarged. The brightest areas in the following figure correspond to the most prominent edges in the test volume.
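The edge extraction and subsequent 3D smoothing can be sketched as follows. This is a minimal illustration in Python/numpy under stated assumptions: the patent does not fix a particular edge operator, so a gradient-magnitude edge image is used here, and the separable Gaussian blur stands in for the smoothing filter mentioned above; the function name is a placeholder:

```python
import numpy as np

def edge_volume(vol, sigma=1.0):
    """Gradient-magnitude 'edge image' of an x-ray volume,
    followed by separable 3D Gaussian smoothing that enlarges
    the capture range of the later optimization."""
    gx, gy, gz = np.gradient(vol.astype(float))
    edges = np.sqrt(gx**2 + gy**2 + gz**2)
    # build a normalized 1D Gaussian kernel, radius ~3 sigma
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    # apply the 1D kernel along each axis in turn (separability)
    for axis in range(3):
        edges = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, edges)
    return edges
```

A median filter, as also mentioned above, would simply replace the convolution step with a sliding-window median.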
In the next step according to
Then, a cost function is defined that indicates how well the test points from the surface data agree with the corresponding edge image. The cost function is a function of the six parameters of a rigid body transformation, whereby three parameters are required for the shift and three parameters for the rotation. In this case, the cost function is defined and calculated as follows for a specific transformation T: First, all extracted test points are mapped with the transformation T into the coordinate system of the edge volume. For each point from the optical image, the corresponding value (“brightness”) in the edge image is then determined by interpolation. The sum of the values of all test points, divided by the number of points, gives the overall value of the cost function for the transformation T.
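This cost function can be sketched in a few lines of Python. The sketch makes two labeled simplifications: the rotation is parameterized by three axis angles (the patent does not prescribe a parameterization), and nearest-neighbour lookup stands in for the interpolation mentioned above; names are illustrative:

```python
import numpy as np

def cost(params, surf_pts, edge_img):
    """Mean edge 'brightness' at the optically measured points
    after a rigid transform. params = (tx, ty, tz, rx, ry, rz),
    angles in radians about the coordinate axes."""
    t = np.asarray(params[:3], float)
    rx, ry, rz = params[3:]
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    # map the surface test points into the edge-volume frame
    pts = surf_pts @ (Rz @ Ry @ Rx).T + t
    # nearest-neighbour lookup (trilinear interpolation in practice);
    # points falling outside the volume contribute zero
    idx = np.rint(pts).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(edge_img.shape)), axis=1)
    vals = np.zeros(len(pts))
    vals[inside] = edge_img[tuple(idx[inside].T)]
    return vals.mean()
```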
The cost function can also have other forms that improve the robustness, speed or precision of the optimization or make the optimization insensitive to outliers in the data. Thus, for example, only a portion of the points, which have the highest brightness values, can be included in the calculation.
In the next step, the transformation is sought for which the cost function takes its maximum value. The search is carried out using an iterative optimization algorithm.
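As a minimal sketch of such an iterative search, the following coordinate-wise hill climb with step shrinking maximizes an arbitrary cost function over the six transform parameters. The text only requires “an iterative optimization algorithm”; in practice a standard optimizer (e.g. Powell or downhill simplex) would be used instead, and this simple routine is an illustrative assumption:

```python
import numpy as np

def maximize(cost_fn, x0, step=1.0, shrink=0.5, n_shrinks=6):
    """Coordinate-wise hill climb: try +/- step along each
    parameter, keep any improvement, and halve the step once
    no single move improves the cost any further."""
    x = np.asarray(x0, float)
    best = cost_fn(x)
    for _ in range(n_shrinks):
        improved = True
        while improved:
            improved = False
            for i in range(len(x)):
                for d in (step, -step):
                    trial = x.copy()
                    trial[i] += d
                    v = cost_fn(trial)
                    if v > best:
                        x, best = trial, v
                        improved = True
        step *= shrink
    return x, best
```

Starting the search from the result of the automatic prepositioning keeps the number of iterations small, which is consistent with the run times stated below.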
In this case, the automatic steps take no longer than a few seconds on a commercially available PC.
Number | Date | Country | Kind |
---|---|---|---|
10 2007 001 684 | Jan 2007 | DE | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/EP2007/062510 | 11/19/2007 | WO | 00 | 12/29/2009 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2008/083874 | 7/17/2008 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5568384 | Robb et al. | Oct 1996 | A |
5842858 | Truppe et al. | Dec 1998 | A |
6362821 | Gibson et al. | Mar 2002 | B1 |
6563941 | O'Donnell et al. | May 2003 | B1 |
6628977 | Graumann et al. | Sep 2003 | B2 |
6845175 | Kopelman et al. | Jan 2005 | B2 |
6856310 | Ditt et al. | Feb 2005 | B2 |
7234937 | Sachdeva et al. | Jun 2007 | B2 |
7367801 | Saliger | May 2008 | B2 |
7397934 | Bloch et al. | Jul 2008 | B2 |
7840042 | Kriveshko et al. | Nov 2010 | B2 |
8113829 | Sachdeva et al. | Feb 2012 | B2 |
20010029334 | Graumann et al. | Oct 2001 | A1 |
20030083759 | Ditt et al. | May 2003 | A1 |
20030169913 | Kopelman et al. | Sep 2003 | A1 |
20030216631 | Bloch et al. | Nov 2003 | A1 |
20040015327 | Sachdeva et al. | Jan 2004 | A1 |
20050031176 | Hertel et al. | Feb 2005 | A1 |
20060057534 | Saliger | Mar 2006 | A1 |
20070012101 | Rottger et al. | Jan 2007 | A1 |
20070207437 | Sachdeva et al. | Sep 2007 | A1 |
20080064949 | Hertel et al. | Mar 2008 | A1 |
Number | Date | Country |
---|---|---|
199 63 440 | Jul 2001 | DE |
100 49 942 | Apr 2002 | DE |
101 49 795 | Apr 2003 | DE |
102 50 006 | May 2004 | DE |
10 2005 024 949 | Dec 2006 | DE |
10 18 709 | Jul 2000 | EP |
08 131403 | May 1996 | JP |
2002 528215 | Sep 2002 | JP |
2003 517361 | May 2003 | JP |
2003 532125 | Oct 2003 | JP |
2005 521502 | Jul 2005 | JP |
2006 204330 | Aug 2006 | JP |
WO-01 80761 | Nov 2001 | WO |
Entry |
---|
Ayoub, A. F. et al., “Towards building a photo-realistic virtual human face for craniomaxillofacial diagnosis and treatment planning,” Int. J. Oral Maxillofac. Surg., 2007, vol. 36, pp. 423-428. |
Cizek, J. et al., “Brain Studies—image co-registration and template creation,” Nuc. Med. Rev., 2001, vol. 4, No. 1, pp. 43-45. |
English Translation of Office Action for Related Japanese Patent Application No. 2009 545120 dated Oct. 23, 2012. |
Hitachi Medical Corp., “Image Display Device,” Patent Abstracts of Japan, Publication Date Aug. 10, 2006; English Abstract of JP-2006 204330. |
International Search Report for PCT/EP2007/062510, Date of Completion: Mar. 6, 2008, Date of Mailing: Jul. 1, 2008. |
Khambay, B. et al., “3D stereophotogrammetric image superimposition onto 3D CT scan images: the future of orthognathic surgery. A pilot study,” International Journal Adult Orthodon Orthognath Surg., 2002, vol. 17, No. 4, pp. 331-341. |
Method and system for scanning a surface and generating a three-dimensional object, Espacenet, Publication Date: Oct. 28, 2003; English Abstract of JP-2003 532125. |
Nkenke, E. et al., “Fusion of computed tomography data and optical 3D images of the dentition for streak artifact correction in the simulation of orthognathic surgery,” Dentomaxillofacial Radiology, 2004, vol. 33, pp. 226-232. |
Uechi, J. et al., “A novel method for the 3-dimensional simulation of orthognathic surgery by using a multimodal image-fusion technique,” Am. J. Orthod Dentofacial Orthop, 2006, vol. 130, pp. 786-798. |
Wolf, Henning, “Volume and surface measurement method used in forensic medicine for corpses involves determining volume data set and surface data set independently from same body and determining additional surface from volume data set,” Espacenet, Publication Date: Apr. 18, 2002; English Abstract of DE-100 49 942. |
Toshiba Medical Eng Co Ltd., “Medical Image Processor,” Thomson Innovation, Publication Date: May 28, 1996; English Abstract of JP-08 131403. |
Number | Date | Country | |
---|---|---|---|
20100124367 A1 | May 2010 | US |