Claims
- 1. A computer-implemented method for use in creating a plan to reposition a patient's teeth from a set of initial tooth positions to a set of final tooth positions, the method comprising:
receiving an initial digital data set representing the teeth at the initial positions, wherein receiving the initial digital data set comprises receiving data obtained by scanning the patient's teeth or a physical model thereof; generating a set of intermediate positions toward which the teeth will move while moving from the initial positions to the final positions; and generating a plurality of appliances having cavities and wherein the cavities of successive appliances have different geometries shaped to receive and reposition teeth from the initial positions to the final positions.
- 2. The method of claim 1, wherein receiving the initial digital data set comprises receiving data obtained by scanning a physical model of the patient's teeth.
- 3. The method of claim 2, further comprising scanning the physical model with a destructive scanning system.
- 4. The method of claim 3, further comprising scanning the physical model with a laser scanning system before scanning the model with the destructive scanning system.
- 5. The method of claim 2, further comprising scanning physical models of the patient's upper and lower teeth in occlusion.
- 6. The method of claim 1, wherein receiving the initial digital data set comprises receiving data obtained by scanning two physical models of the patient's teeth, one representing a positive impression of the teeth and one representing a negative impression of the teeth.
- 7. The method of claim 6, further comprising scanning the positive impression and the negative impression while interlocked with each other.
- 8. The method of claim 1, wherein the initial digital data set includes volume image data of the patient's teeth and the method includes converting the volume image data into a 3D geometric model of the tooth surfaces.
- 9. The method of claim 8, wherein converting the volume image data comprises detecting volume elements in the image data between which a large transition in image value occurs.
- 10. The method of claim 1, further comprising applying a set of predefined rules to segment the initial data set into 3D models of individual dentition components of the patient's mouth.
- 11. The method of claim 10, further comprising deriving the rules from a database of information indicating how a typical data set is segmented into individual tooth models.
- 12. The method of claim 10, wherein the rules include information about the cusp structure of typical teeth.
- 13. The method of claim 1, further comprising applying rules of orthodontic relevance to reduce the amount of data in the initial data set associated with less important orthodontic features.
- 14. The method of claim 1, further comprising modifying the initial data set to include data representing a hidden tooth surface.
- 15. The method of claim 14, wherein the hidden tooth surfaces include tooth roots.
- 16. The method of claim 14, wherein the data representing the hidden tooth surfaces comprises image data representing the hidden surfaces of the patient's teeth.
- 17. The method of claim 16, wherein the image data comprises at least one of the following: X-ray data, CT scan data, MRI data.
- 18. The method of claim 14, wherein the data representing the hidden tooth surfaces comprises data representing the hidden surfaces of typical teeth.
- 19. The method of claim 14, further comprising extrapolating visible surfaces of the patient's teeth to derive the data representing the hidden tooth surfaces.
- 20. The method of claim 1, further comprising receiving information indicating whether the patient's teeth are moving as planned and, if not, using the information to revise the set of intermediate positions.
- 21. The method of claim 1, wherein generating the set of intermediate positions comprises generating more than one candidate set of intermediate positions for each tooth and providing a graphical display of each candidate set to a human user for selection.
- 22. The method of claim 1, further comprising applying a set of rules to detect any collisions that will occur between teeth as the patient's teeth move toward the set of final positions.
- 23. The method of claim 22, wherein detecting collisions comprises calculating distances between a first tooth and a second tooth by:
establishing a neutral projection plane between the first tooth and the second tooth, establishing a z-axis that is normal to the plane and that has a positive direction and a negative direction from each of a set of base points on the projection plane, computing a pair of signed distances comprising a first signed distance to the first tooth and a second signed distance to the second tooth, the signed distances being measured on a line through the base points and parallel to the z-axis, and determining that a collision occurs if any of the pair of signed distances indicates a collision.
- 24. The method of claim 23, wherein the positive direction for the first distance is opposite the positive direction for the second distance and a collision is detected if the sum of any pair of signed distances is less than or equal to zero.
- 25. The method of claim 1, further comprising applying a set of rules to detect any improper bite occlusions that will occur as the patient's teeth move toward the set of final positions.
- 26. The method of claim 25, further comprising calculating a value for a malocclusion index and displaying the value to a human user.
- 27. The method of claim 1, wherein generating the set of intermediate positions includes receiving data indicating restraints on movement of the patient's teeth and applying the data to generate the intermediate positions.
- 28. The method of claim 1, wherein generating the set of intermediate positions includes determining the minimum amount of transformation required to move each tooth from the initial position to the final position and creating the intermediate positions to require the minimum amount of movement.
- 29. The method of claim 1, wherein generating the set of intermediate positions includes generating intermediate positions for at least one tooth between which the tooth undergoes translational movements of equal sizes.
- 30. The method of claim 1, further comprising rendering a representation of the teeth at the set of positions corresponding to a selected data set.
- 31. The method of claim 1, further comprising providing a user interface with an input component that allows a human user to control an animation of the movement of the teeth.
- 32. The method of claim 30, further comprising using only a portion of the data in the selected data set to render the graphical representation of the teeth.
- 33. The method of claim 30, further comprising applying level-of-detail compression to the data set to render the graphical representation of the teeth.
- 34. The method of claim 30, further comprising receiving an instruction from a human user to modify the graphical representation of the teeth and modifying the graphical representation in response to the instruction.
- 35. The method of claim 34, further comprising modifying the selected data set in response to the instruction from the user.
- 36. The method of claim 30, further comprising allowing a human user to select a tooth in the graphical representation and, in response, displaying information about the tooth.
- 37. The method of claim 36, wherein the information relates to the forces that the tooth will experience while moving toward the set of final positions.
- 38. The method of claim 36, wherein the information indicates a linear distance between the tooth and another tooth selected in the graphical representation.
- 39. The method of claim 30, wherein rendering the graphical representation comprises rendering the teeth at a selected one of multiple orthodontic-specific viewing angles.
- 40. The method of claim 30, further comprising providing a user interface through which a human user can provide text-based comments after viewing the graphical representation of the patient's teeth.
- 41. The method of claim 30, wherein rendering the graphical representation comprises downloading data to a remote computer.
- 42. The method of claim 30, further comprising receiving an input signal from a 3D input device controlled by a human user and using the input signal to alter the orientation of the teeth in the graphical representation.
- 43. The method of claim 1, further comprising delivering data identifying the intermediate treatment positions to an appliance fabrication system for use in fabricating at least one orthodontic appliance structured to move the patient's teeth toward the final positions.
- 44. The method of claim 43, further comprising including in the data a digital model of an orthodontic attachment that the appliance must accommodate.
- 45. The method of claim 44, wherein the digital model represents an attachment to be placed on one of the patient's teeth.
- 46. The method of claim 44, wherein the digital model represents an anchor to be placed in the patient's mouth and against which the appliance must pull.
- 47. The method of claim 43, further comprising receiving data indicating material properties of the appliance to be fabricated and using the data to generate the set of intermediate positions.
- 48. A computer program, residing on a tangible storage medium, for use in creating a plan to reposition a patient's teeth from a set of initial tooth positions to a set of final tooth positions, the program comprising executable instructions operable to cause a computer to:
receive an initial digital data set representing the teeth at the initial positions, wherein receiving the initial digital data set comprises receiving data obtained by scanning the patient's teeth or a physical model thereof; generate a set of intermediate positions toward which the teeth will move while moving from the initial positions to the final positions; and generate a plurality of appliances having cavities and wherein the cavities of successive appliances have different geometries shaped to receive and reposition teeth from the initial positions to the final positions.
- 49. The program of claim 48, wherein the initial digital data set includes data obtained by scanning a physical model of the patient's teeth.
- 50. The program of claim 48, wherein the initial digital data set includes data obtained by scanning a positive impression and a negative impression of the patient's teeth interlocked together.
- 51. The program of claim 48, wherein the initial digital data set includes volume image data of the patient's teeth and the computer converts the volume image data into a 3D geometric model of the tooth surfaces by detecting volume elements in the image data between which a large transition in image value occurs.
- 52. The program of claim 48, wherein the computer applies a set of predefined rules to segment the initial data set into 3D models of the individual teeth.
- 53. The program of claim 48, wherein the computer modifies the initial digital data set to include data representing hidden tooth surfaces.
- 54. The program of claim 48, wherein the computer applies a set of rules to detect any collisions that will occur as the patient's teeth move toward the final positions.
- 55. The program of claim 54, wherein the computer detects collisions by calculating distances between a first tooth and a second tooth by:
establishing a neutral projection plane between the first tooth and the second tooth, establishing a z-axis that is normal to the plane and that has a positive direction and a negative direction from each of a set of base points on the projection plane, computing a pair of signed distances comprising a first signed distance to the first tooth and a second signed distance to the second tooth, the signed distances being measured on a line through the base points and parallel to the z-axis, and determining that a collision occurs if any of the pair of signed distances indicates a collision.
- 56. The program of claim 48, wherein the computer applies a set of rules to detect any improper bite occlusions that will occur as the patient's teeth move toward the final positions.
- 57. The program of claim 48, wherein the computer renders a 3D graphical representation of the teeth at the positions corresponding to a selected data set.
- 58. The program of claim 57, wherein the computer animates the graphical representation of the teeth to provide a visual display of the movement of the teeth toward the final positions.
- 59. The program of claim 48, wherein the computer applies level-of-detail compression to the selected data set to render the graphical representation of the teeth.
- 60. The program of claim 48, wherein the computer receives an instruction from a human user to modify the graphical representation of the teeth and, in response to the instruction, modifies the graphical representation and the selected data set.
- 61. The program of claim 48, wherein the computer delivers data identifying the intermediate treatment positions to an appliance fabrication system for use in fabricating at least one orthodontic appliance structured to move the patient's teeth toward the final positions.
- 62. The program of claim 61, wherein the computer includes in the data a digital model of an orthodontic attachment that the appliance must accommodate.
- 63. A system for repositioning a patient's teeth from a set of initial tooth positions to a set of final tooth positions, the system comprising:
an input component that receives an initial digital data set representing the teeth at the initial positions, wherein receiving the initial digital data set comprises receiving data obtained by scanning the patient's teeth or a physical model thereof; a path-generating component that generates a set of intermediate positions toward which the teeth will move while moving from the initial positions to the final positions; and a component to generate a plurality of appliances having cavities and wherein the cavities of successive appliances have different geometries shaped to receive and reposition teeth from the initial positions to the final positions.
- 64. The system of claim 63, wherein the initial digital data set includes data obtained by scanning a physical model of the patient's teeth.
- 65. The system of claim 63, wherein the initial digital data set includes data obtained by scanning a positive impression and a negative impression of the patient's teeth interlocked together.
- 66. The system of claim 63, wherein the initial digital data set includes volume image data of the patient's teeth and the system includes a component that converts the volume image data into a 3D geometric model of the tooth surfaces by detecting volume elements in the image data between which a large transition in image value occurs.
- 67. The system of claim 63, further comprising a segmentation component that applies a set of predefined rules to segment the initial data set into 3D models of the individual teeth.
- 68. The system of claim 63, wherein the input component modifies the initial digital data set to include data representing hidden tooth surfaces.
- 69. The system of claim 63, further comprising a collision-detection component that applies a set of rules to detect any collisions that will occur as the patient's teeth move towards the final positions.
- 70. The system of claim 69, wherein the collision-detection component detects collisions by calculating distances between a first tooth and a second tooth by:
establishing a neutral projection plane between the first tooth and the second tooth, establishing a z-axis that is normal to the plane and that has a positive direction and a negative direction from each of a set of base points on the projection plane, computing a pair of signed distances comprising a first signed distance to the first tooth and a second signed distance to the second tooth, the signed distances being measured on a line through the base points and parallel to the z-axis, and determining that a collision occurs if any of the pair of signed distances indicates a collision.
- 71. The system of claim 63, further comprising an occlusion-monitoring component that applies a set of rules to detect any improper bite occlusions that will occur as the patient's teeth move towards the final positions.
- 72. The system of claim 63, further comprising a display element that renders a 3D graphical representation of the teeth at the positions corresponding to a selected data set.
- 73. The system of claim 72, wherein the display element animates the graphical representation of the teeth to provide a visual display of the movement of the teeth toward the final positions.
- 74. The system of claim 72, wherein the display element applies level-of-detail compression to the selected data set to render the graphical representation of the teeth.
- 75. The system of claim 72, wherein the display element receives an instruction from a human user to modify the graphical representation of the teeth and, in response to the instruction, modifies the graphical representation and uses the instruction to modify the selected data set.
- 76. The system of claim 63, further comprising an output component that delivers data identifying the intermediate treatment positions to an appliance fabrication system for use in fabricating at least one orthodontic appliance structured to move the patient's teeth toward the final positions.
- 77. The system of claim 76, wherein the output component includes in the data a digital model of an orthodontic attachment that the appliance must accommodate.
- 78. A computer-implemented method for use in generating three-dimensional (3D) models of a patient's teeth, the method comprising:
receiving an initial data set that contains a 3D representation of a group of the patient's teeth, wherein receiving the initial data set comprises receiving data obtained by scanning the patient's teeth or a physical model thereof; identifying points in the initial data set corresponding to each individual tooth; segmenting the initial data set into multiple data sets, each containing the points identified for one of the teeth; storing each data set as a 3D geometric model representing the visible surfaces of the corresponding tooth; and modifying each 3D model to include hidden surfaces of the corresponding tooth.
- 79. The method of claim 78, wherein the initial data set contains digital volume image data, and the method includes converting the volume image data into a 3D geometric model by detecting volume elements in the image data between which a sharp transition in digital image value occurs.
- 80. A computer-implemented method for use in determining whether a patient's teeth can be moved from a first set of positions to a second set of positions, the method comprising:
receiving digital data sets representing the teeth at the first set of positions and the second set of positions, wherein receiving the digital data sets comprises receiving data obtained by scanning the patient's teeth or a physical model thereof; determining whether any of the teeth will collide with each other while moving to the second set of positions; and generating a plurality of appliances having cavities and wherein the cavities of successive appliances have different geometries shaped to receive and reposition teeth from the initial positions to the final positions.
- 81. The method of claim 80, wherein determining whether any of the teeth will collide comprises calculating distances between a first tooth and a second tooth by:
establishing a neutral projection plane between the first tooth and the second tooth, establishing a z-axis that is normal to the plane and that has a positive direction and a negative direction from each of a set of base points on the projection plane, computing a pair of signed distances comprising a first signed distance to the first tooth and a second signed distance to the second tooth, the signed distances being measured on a line passing through the base points and parallel to the z-axis, and determining that a collision will occur if any of the pair of signed distances indicates a collision.
- 82. The method of claim 81, wherein the positive direction for the first distance is opposite the positive direction for the second distance and a collision is detected if the sum of any pair of signed distances is less than or equal to zero.
- 83. A computer-implemented method for use in determining final positions for an orthodontic patient's teeth, the method comprising:
receiving a digital data set representing the teeth at recommended final positions, rendering a three-dimensional (3D) graphical representation of the teeth at the recommended final positions, receiving an instruction to reposition one of the teeth in response to a user's manipulation of the tooth in the graphical representation, and in response to the instruction, modifying the digital data set to represent the teeth at the user-selected final positions.
- 84. A computer-implemented method for use in analyzing a recommended treatment plan for an orthodontic patient's teeth, the method comprising:
receiving a digital data set representing the patient's upper teeth after treatment, receiving a digital data set representing the patient's lower teeth after treatment, orienting the data in the data sets to simulate the patient's bite occlusion, manipulating the data sets in a manner that simulates motion of human jaws, and detecting collisions between the patient's upper teeth and lower teeth during the simulation of motion; and rendering a graphical representation of the teeth.
- 85. The method of claim 84, wherein manipulating the data sets comprises applying a set of animation instructions based on the observed motion of typical human jaws.
- 86. The method of claim 84, wherein manipulating the data sets comprises applying a set of animation instructions based on the observed motion of the patient's jaws.
- 87. The method of claim 1, further comprising generating a final data set representing the teeth at the final positions.
- 88. The method of claim 1, further comprising generating a series of orthodontic devices for repositioning the patient's teeth from the initial positions to the final positions.
- 89. The method of claim 88, further comprising:
using the appliances to treat the patient's teeth; receiving an in-course digital data set representing actual positions of the patient's teeth after treatment has begun; and displaying a graphical representation of the patient's teeth at the actual positions.
- 90. The method of claim 1, further comprising generating treatment paths among the intermediate positions along which the teeth will move from the initial positions to the final positions.
- 91. The method of claim 1, further comprising generating an alternative set of intermediate treatment positions.
- 92. The method of claim 91, further comprising displaying at least two different sets of intermediate treatment positions to a user and allowing the user to select one of the sets for use in treating the patient's teeth.
- 93. The method of claim 1, further comprising generating, for each tooth at each tooth position, a transformation representing a translational position of the tooth and a rotational position of the tooth with respect to an origin.
- 94. The method of claim 1, wherein generating the intermediate positions comprises representing the teeth in a configuration space.
- 95. The method of claim 1, further comprising generating a renderable model of the patient's teeth at the final positions.
- 96. The method of claim 95, further comprising making the renderable model available on a computer accessible by the treating clinician.
- 97. The method of claim 96, further comprising generating a graphical representation of the patient's teeth at the final positions when the clinician accesses the renderable model.
- 98. The method of claim 95, further comprising making the renderable model available on a computer accessible by the patient.
- 99. The method of claim 98, further comprising generating a graphical representation of the patient's teeth at the final positions when the patient accesses the renderable model.
- 100. The method of claim 1, wherein generating the intermediate treatment positions comprises receiving information about a material property of a device that will be used to treat the patient's teeth and deriving from the information a constraint on the movement of at least one of the teeth.
- 101. The method of claim 2, wherein receiving the initial data set includes receiving image data obtained directly by imaging the patient's teeth.
- 102. The method of claim 101, wherein the image data is digital.
- 103. The method of claim 101, wherein the image data includes at least one of the following: 2D x-ray data, 3D x-ray data, CT scan data, and MRI data.
- 104. The method of claim 2, further comprising analyzing the data obtained by scanning the physical model to determine physical characteristics of a material used in the model.
- 105. The method of claim 5, wherein scanning the physical models of the patient's upper and lower teeth comprises scanning the physical models with a laser scanning system.
- 106. The method of claim 10, wherein one of the dentition components comprises at least a portion of an individual tooth.
- 107. The method of claim 10, wherein one of the dentition components comprises gum in the patient's mouth.
- 108. The method of claim 10, wherein applying the set of predefined rules comprises applying a rule for recognizing noise in a tooth cast from which the initial data set is derived.
- 109. The method of claim 14, further comprising:
receiving image data containing an image of the patient's teeth; analyzing the image data to identify a particular feature of at least one of the patient's teeth; and using the identified feature to guide the inclusion of the hidden tooth surface.
- 110. The method of claim 109, wherein the image data is digital.
- 111. The method of claim 109, wherein the image data comprises at least one of the following: 2D x-ray data, 3D x-ray data, CT scan data, and MRI data.
- 112. The method of claim 28, wherein each set of intermediate positions is created to require, in addition to the minimum amount of movement, any movement that is needed to satisfy an orthodontic restraint that applies to the corresponding tooth.
- 113. The method of claim 30, further comprising subsequently rendering a graphical representation of the teeth at the set of positions corresponding to another of the data sets to illustrate how the patient's teeth will move during treatment.
- 114. The method of claim 113, wherein the graphical representation includes a three dimensional representation of the teeth.
- 115. The method of claim 30, further comprising:
receiving data indicating two positions in the graphical representation that a user has selected with a pointing device; calculating the distance between the two positions; and displaying the distance in the graphical representation.
- 116. The method of claim 31, wherein the input component allows the user to take any of the following actions: view the animation at a normal frame rate, step through the animation one frame at a time, select a particular frame in the animation for viewing, and stop the animation.
- 117. The method of claim 42, wherein the 3D input device comprises a gyroscopic pointing device.
- 118. The program of claim 48, wherein the computer generates a final data set representing the teeth at the final positions.
- 119. The program of claim 48, wherein the computer generates data for use in creating a series of orthodontic devices for repositioning the patient's teeth from the initial positions to the final positions.
- 120. The program of claim 48, wherein the computer generates treatment paths among the intermediate positions along which the teeth will move from the initial positions to the final positions.
- 121. The system of claim 63, further comprising an output component that generates a final data set representing the teeth at the final positions.
- 122. The system of claim 63, further comprising an output component that generates data for use in creating a series of orthodontic devices for repositioning the patient's teeth from the initial positions to the final positions.
- 123. The system of claim 63, wherein the path-generating component generates treatment paths among the intermediate positions along which the teeth will move from the initial positions to the final positions.
- 124. A computer-implemented method for use in generating a digital model of an orthodontic patient's teeth, the method comprising:
receiving image data obtained by collecting an image of the teeth; analyzing the image data to identify a particular feature of an individual tooth; and selecting an attribute of the digital model of the teeth based on the identified feature; wherein using the identified feature to select an attribute of the digital model includes selecting at least a portion of an ideal tooth model for incorporation in the digital model of the patient's teeth.
- 125. A computer-implemented method for use in generating a digital model of an orthodontic patient's teeth, the method comprising:
receiving image data obtained by collecting an image of the teeth; using the image data to build a three dimensional (3D) geometric model of the patient's teeth for manipulation in a computer; and analyzing the image data to identify a particular feature of an individual tooth and, based on the identified feature, selecting an attribute of the 3D geometric model of the patient's teeth; wherein selecting an attribute comprises specifying the appearance of a hidden tooth surface to be incorporated in the 3D geometric model.
- 126. The method of claim 125, wherein the image data is digital.
- 127. The method of claim 125, wherein the image data comprises CT scan data.
- 128. The method of claim 125, wherein the image data comprises 2D x-ray data.
- 129. The method of claim 125, wherein the image data comprises 3D x-ray data.
- 130. The method of claim 125, wherein the image data comprises MRI data.
- 131. The method of claim 125, further comprising manipulating the 3D geometric model to select corrected positions for the patient's teeth following a course of orthodontic treatment.
- 132. The method of claim 131, further comprising generating a digital data set for use in creating a series of orthodontic appliances to move the patient's teeth to the corrected positions.
- 133. The method of claim 30, wherein the representation includes a three-dimensional (3D) graphical representation of the teeth.
- 134. The method of claim 31, wherein the user interface includes a graphical user interface.
- 135. A computer-implemented method for use in creating a plan to reposition a patient's teeth from a set of initial tooth positions to a set of final tooth positions, the method comprising:
receiving an initial digital data set representing the teeth at the initial positions, wherein receiving the initial digital data set comprises receiving data obtained by scanning a physical model of the patient's teeth; generating a set of intermediate positions toward which the teeth will move while moving from the initial positions to the final positions; rendering a graphical representation of the teeth at the set of positions corresponding to a selected data set; and animating the graphical representation of the teeth to provide a visual display of the movement of the teeth toward the set of final positions.
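The following is a minimal sketch, in Python, of the volume-to-surface step recited in claims 9, 51, 66 and 79: surface voxels are located by finding neighboring volume elements between which a large transition in image value occurs. The function name, the NumPy representation of the CT volume, and the threshold value are illustrative assumptions, not details from the disclosure; a production system would pass the detected boundary to an isosurface algorithm to obtain a triangulated 3D geometric model.

```python
# Minimal sketch (not the patented implementation): locate voxels that sit on a
# sharp image-value transition in a CT-style volume, as in claims 9, 51, 66, 79.
import numpy as np

def boundary_voxels(volume, threshold):
    """Return indices of volume elements adjacent to a large transition in image value."""
    mask = np.zeros(volume.shape, dtype=bool)
    for axis in range(volume.ndim):
        # Absolute difference between each voxel and its neighbor along this axis.
        diff = np.abs(np.diff(volume, axis=axis)) > threshold
        lo = [slice(None)] * volume.ndim
        hi = [slice(None)] * volume.ndim
        lo[axis] = slice(0, -1)    # voxel on the "low" side of the transition
        hi[axis] = slice(1, None)  # voxel on the "high" side of the transition
        mask[tuple(lo)] |= diff
        mask[tuple(hi)] |= diff
    return np.argwhere(mask)

if __name__ == "__main__":
    vol = np.zeros((4, 4, 4))
    vol[1:3, 1:3, 1:3] = 100.0  # a dense block ("tooth") surrounded by air
    print(len(boundary_voxels(vol, threshold=50.0)))  # voxels on the tooth/air boundary
```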
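A minimal sketch of the signed-distance collision test recited in claims 23-24 (and repeated in claims 55, 70 and 81-82): a neutral projection plane is established between two teeth, and at each base point on the plane a signed distance to each tooth is measured along the plane normal, with the positive direction for the first tooth opposite that for the second; the teeth are reported as colliding when any pair of signed distances sums to a value less than or equal to zero. Representing each tooth as a point cloud, and the function and parameter names, are assumptions made only for illustration.

```python
# Minimal sketch (not the patented implementation) of the signed-distance
# collision test of claims 23-24: teeth are point clouds, base points lie on a
# neutral projection plane, and distances are measured along the plane normal.
import numpy as np

def signed_distance(points, base, direction):
    """Smallest signed distance from `base`, along `direction`, to any point of a tooth."""
    return np.min((points - base) @ direction)

def teeth_collide(tooth_a, tooth_b, plane_normal, base_points):
    """Report a collision if, at any base point, the pair of signed distances sums to <= 0.

    `plane_normal` points from tooth A toward tooth B, so the positive direction
    used for tooth A (-n) is opposite the positive direction used for tooth B (+n).
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    for base in base_points:
        d_a = signed_distance(tooth_a, base, -n)  # clearance of tooth A from the plane
        d_b = signed_distance(tooth_b, base, +n)  # clearance of tooth B from the plane
        if d_a + d_b <= 0.0:                      # the test recited in claim 24
            return True
    return False

if __name__ == "__main__":
    tooth_a = np.array([[0.0, 0.0, -1.0], [0.5, 0.0, -0.2]])  # toy surface points
    tooth_b = np.array([[0.0, 0.0,  0.3], [0.5, 0.0,  1.0]])
    bases = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])      # base points on z = 0
    print(teeth_collide(tooth_a, tooth_b, np.array([0.0, 0.0, 1.0]), bases))  # False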
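A minimal sketch of the path generation of claims 28-29 and 93, assuming each tooth position is stored as a translation vector plus a rotation: the tooth is moved from its initial to its final transformation in a fixed number of equal-sized steps, so the path requires no more than the movement between the two end positions. The use of SciPy's Rotation and Slerp, the step count, and the example magnitudes are illustrative assumptions; constraint handling, collision checking and clinically derived step limits are omitted.

```python
# Minimal sketch (not the patented implementation) of generating equal-sized
# intermediate positions for one tooth, per claims 28-29 and 93: translation is
# interpolated linearly and rotation by spherical linear interpolation.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def intermediate_positions(t_init, r_init, t_final, r_final, n_steps):
    """Yield (translation, rotation) pairs at equally spaced fractions of the path."""
    key_rotations = Rotation.from_quat(np.vstack([r_init.as_quat(), r_final.as_quat()]))
    slerp = Slerp([0.0, 1.0], key_rotations)
    for k in range(1, n_steps + 1):
        f = k / n_steps
        yield (1.0 - f) * t_init + f * t_final, slerp(f)

if __name__ == "__main__":
    t0, t1 = np.zeros(3), np.array([1.5, 0.0, 0.0])      # e.g. a 1.5 mm translation
    r0, r1 = Rotation.identity(), Rotation.from_euler("z", 10, degrees=True)
    for t, r in intermediate_positions(t0, r0, t1, r1, n_steps=3):
        print(np.round(t, 3), np.round(r.as_euler("xyz", degrees=True), 3))
```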
CROSS-REFERENCES TO RELATED APPLICATIONS
[0001] This application is a continuation of U.S. application Ser. No. 09/686,190 (Attorney Docket No. 018563-004810-AT-00105.1), filed Oct. 10, 2000, which is a continuation of U.S. application Ser. No. 09/169,276 (Attorney Docket No. 18563-004800-AT-00105), filed Oct. 8, 1998 (now abandoned), which is a continuation-in-part of PCT Application No. US98/12681 (Attorney Docket No. 18563-000120PC-AT-00003PC), filed Jun. 19, 1998, which claims priority from U.S. patent application Ser. No. 08/947,080 (Attorney Docket No. 18563-000110-AT-00002), filed Oct. 8, 1997 (now U.S. Pat. No. 5,975,893), which claims priority from U.S. Provisional Application No. 60/050,342 (Attorney Docket No. 18563-000100-AT-0001), filed Jun. 20, 1997, the full disclosures of which are incorporated in this application by reference.
[0002] This application is related to U.S. patent application Ser. No. 09/169,036 (Attorney Docket No. 18563-004900-AT-00106), filed Oct. 8, 1998 (now U.S. Pat. No. 6,450,807), and U.S. patent application Ser. No. 09/169,034 (Attorney Docket No. 18563-005000-AT-00107), filed Oct. 8, 1998 (now U.S. Pat. No. 6,471,511), the full disclosures of which are incorporated herein by reference.
Provisional Applications (1)

| Number | Date | Country |
| --- | --- | --- |
| 60050342 | Jun 1997 | US |
Continuations (2)

| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 09686190 | Oct 2000 | US |
| Child | 10718779 | Nov 2003 | US |
| Parent | 09169276 | Oct 1998 | US |
| Child | 09686190 | Oct 2000 | US |
Continuation in Parts (1)

| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | PCT/US98/12681 | Jun 1998 | US |
| Child | 09169276 | Oct 1998 | US |