All publications and patent applications mentioned in this specification are herein incorporated by reference in their entirety to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference.
This disclosure relates generally to methods and apparatuses for the analysis of dental images, including methods and apparatuses for capturing dental images and methods and apparatuses for remotely pre-screening a patient for an orthodontic treatment.
In dental and/or orthodontic treatment, a set of 2D facial and dental photos is often taken. Traditional dental photography uses a camera, for example, a digital single-lens reflex (DSLR) camera with a lens having a focal length of 90-100 mm and a circular flash, as shown in
Thus, there is a need for new and useful methods and apparatuses for obtaining high quality dental images.
Described herein are methods, apparatuses (including devices and systems, such as non-transitory, computer-readable storage media and systems including these) for capturing dental images and for using these dental images to determine if a patient is a candidate for an orthodontic procedure. In general, the methods and apparatuses described herein may obtain an image of a patient's teeth for therapeutic use, which may include viewing the patient's teeth, for example, on a screen of a mobile telecommunications device (such as a mobile phone or other hand-held personal computing device, e.g., smartwatch, pad, laptop, etc.).
Any of the methods described herein may include guiding or assisting in taking a predetermined set of images of the patient's teeth from specified viewing angles. The specified viewing angles (views) may be used to manually or automatically determine if a patient is a candidate for a particular orthodontic treatment. Often, it may be necessary or helpful that the views are taken with the proper resolution and focus, particularly when automatically analyzing the images later. Thus, in any of the methods and apparatuses described herein, an overlay may be shown over a camera image while preparing to take a picture of the teeth. The overlay may guide the user (e.g., dental technician, including dentist, orthodontist, dental assistant, nurse, etc.) in taking the images. Further, the overlay may be used to aid in focusing and illuminating the teeth during collection of the images. For example, the methods described herein may include displaying, on a screen (e.g., of a mobile communications device such as a smartphone, etc.), an overlay comprising an outline of teeth in a predetermined view. The overlay may be displayed atop the view from the camera, which may show a real-time display from the camera. As the camera is used to image the patient's teeth, the mobile communications device may be moved so that the overlay approximately matches the patient's teeth in the view of the patient's teeth. The method can further comprise capturing an image of the view of the patient's teeth.
For example, a method of obtaining an image of a patient's teeth for therapeutic use is described herein. The method can comprise viewing the patient's teeth, for example, on a screen of a mobile telecommunications device having a camera (e.g., smartphone, mobile phone, etc.). The method can further comprise displaying, on the screen, an overlay comprising an outline of teeth in a predetermined view, wherein the overlay is displayed atop the view of the patient's teeth. The method can comprise moving the mobile telecommunications device relative to the patient's teeth and triggering an indicator when the overlay approximately matches with the patient's teeth. The method can comprise capturing an image of the view of the patient's teeth when the indicator is triggered.
A method to obtain an image of a patient's teeth for therapeutic use may include viewing, on a screen of a mobile telecommunications device, the patient's teeth. The method can further comprise displaying, on the screen, an overlay comprising a cropping frame and an outline of teeth in one of an anterior view, a buccal view, an upper jaw view, or a lower jaw view, wherein the overlay is displayed atop the view of the patient's teeth. The method can further comprise moving the mobile telecommunications device so that the overlay approximately matches the patient's teeth in the view of the patient's teeth. The method can comprise capturing an image of the view of the patient's teeth. The method can further comprise reviewing the captured image on the mobile telecommunications device and indicating on the screen of the mobile telecommunications device if the captured image is out of focus. The method can further comprise automatically cropping the captured image as indicated by the cropping frame.
For example, the overlay can comprise a generic overlay in some embodiments. For another example, the overlay can comprise a patient-specific overlay derived from the patient's teeth in some other embodiments.
For example, the method can further comprise automatically triggering an indicator when the overlay approximately matches with the patient's teeth. Triggering the indicator when the overlay approximately matches with the patient's teeth can comprise estimating an indicator of the distance between an edge of the patient's teeth in the view of the patient's teeth and the outline of teeth in the overlay. For another example, triggering the indicator can comprise estimating an indicator of the distance between an edge of the patient's teeth at two or more regions and the outline of teeth, and comparing that indicator to a threshold value. For example, the indicator can be a visual indicator, such as a change of color. In some variations, the indicator can be another form of indicator, such as a voice indicator.
The method can further comprise automatically capturing an image of the patient's teeth when the overlay approximately matches with the patient's teeth.
The method can further comprise checking the image quality of the captured image and displaying on the screen if the image quality is below a threshold for image quality.
The method can further comprise cropping the captured image based on a cropping outline displayed as part of the overlay. Cropping may be manual or automatic.
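For illustration only, automatic cropping of this kind may be sketched as follows; the function name and the (top, left, height, width) frame layout are assumptions for illustration, not part of any fixed interface.

```python
import numpy as np

def crop_to_frame(image, frame):
    """Crop a captured image to the region indicated by the overlay's
    cropping frame. `frame` is (top, left, height, width) in pixels;
    this tuple layout is an illustrative assumption."""
    top, left, h, w = frame
    return image[top:top + h, left:left + w]

# Example: crop a synthetic 480x640 grayscale capture to a 240x320 window.
capture = np.zeros((480, 640), dtype=np.uint8)
cropped = crop_to_frame(capture, (100, 160, 240, 320))
```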
Any of these methods can further comprise evaluating the captured image for use in medical treatment. For example, the method can further comprise transmitting the captured image to a remote server.
The predetermined view can comprise an anterior view, a buccal view, an upper jaw view, or a lower jaw view. The predetermined view can comprise a set of dental images according to orthodontic standards. For example, the method can further comprise repeating the steps of viewing, displaying, moving and capturing to capture anterior, buccal, upper jaw and lower jaw images of the patient's teeth.
The method can further comprise imaging a patient's identification using the mobile telecommunications device and automatically populating a form with user identification information based on the imaged identification.
Any of these methods can further comprise displaying instructions about positioning the patient's teeth on the screen of the mobile telecommunications device prior to displaying the overlay.
Also described herein are apparatuses adapted to perform any of the methods described herein, including in particular software, firmware, and/or hardware adapted to perform one or more of these methods. Specifically, described herein are non-transitory, computer-readable storage media storing a set of instructions capable of being executed by a processor (e.g., of a mobile telecommunications device), that, when executed by the processor, causes the processor to display real-time images of the patient's teeth on a screen of the mobile telecommunications device, display an overlay comprising an outline of teeth in a predetermined view atop the images of the patient's teeth, and enable capturing of an image of the patient's teeth.
For example, described herein are non-transitory, computer-readable storage media storing a set of instructions capable of being executed by a processor of a mobile telecommunications device, that, when executed by the processor, causes the processor to display real-time images of the patient's teeth on a screen of the mobile telecommunications device, display an overlay comprising an outline of teeth in a predetermined view atop the images of the patient's teeth, trigger an indicator when the overlay approximately matches with the patient's teeth, and enable capturing of an image of the patient's teeth when the indicator is triggered.
Also described herein are non-transitory, computer-readable storage media storing a set of instructions capable of being executed by a processor of a mobile telecommunications device, that, when executed by the processor, causes the processor to display real-time images of the patient's teeth on a screen of the mobile telecommunications device, display an overlay comprising a cropping frame and an outline of teeth in one of an anterior view, a buccal view, an upper jaw view, or a lower jaw view, wherein the overlay is displayed atop the images of the patient's teeth, and enable capturing of an image of the patient's teeth. The set of instructions, when executed by the processor, can further cause the processor to review the captured image, indicate on the screen if the captured image is out of focus, and automatically crop the captured image as indicated by the cropping frame.
The set of instructions, when executed by the processor, can further cause the processor to display a generic overlay. For another example, the set of instructions can cause the processor to display a patient-specific overlay derived from the patient's teeth.
The set of instructions, when executed by the processor, can further cause the processor to automatically trigger an indicator when the overlay approximately matches with the patient's teeth. For example, the set of instructions can further cause the processor to estimate an indicator of the distance between an edge of the patient's teeth in the view of the patient's teeth and the outline of teeth in the overlay, and to trigger the indicator when that distance is less than or equal to a threshold value. The set of instructions, when executed by the processor, can further cause the processor to estimate an indicator of the distance between an edge of the patient's teeth at two or more regions in the view of the patient's teeth and the outline of teeth in the overlay, and to trigger the indicator when that distance is less than or equal to a threshold value. The set of instructions can cause the processor to trigger the indicator by displaying a visual indicator on the screen. Any appropriate visual indicator may be displayed, including a color, intensity (e.g., changing the color and/or intensity of the outline of the teeth overlay, cropping window, etc.), a textual/character indicator, or some combination thereof. Alternatively or additionally the indicator may be audible (beeping, tonal, etc.) and/or tactile (a vibration, buzzing, etc.).
The set of instructions, when executed by the processor, can further cause the processor to check the image quality of the captured image and to display an indication on the screen if the image quality is below a threshold for image quality. The quality check may automatically assess focus, lighting (dark/light), etc. of the image and may alert the user and/or automatically reject or accept the image. The apparatus may further process the image (e.g., sharpen, lighten/darken, etc., including cropping). For example, the set of instructions, when executed by the processor, can further cause the processor to automatically crop the captured image based on a cropping outline displayed as part of the overlay.
The set of instructions, when executed by the processor, can further cause the processor to transmit the captured image to a remote server. Transmission may be automatic or manual.
The apparatus (e.g., including the non-transitory set of instructions) can further cause the processor to display an overlay comprising an outline of teeth in a predetermined view such as an anterior view, a buccal view, an upper jaw view, or a lower jaw view. The apparatus may be configured to take a full or partial set of views. For example, the set of instructions, when executed by the processor, can further cause the processor to repeat the steps of viewing, displaying, moving and capturing to capture anterior, buccal, upper jaw and lower jaw images of the patient's teeth.
In addition to taking one or more views (e.g., anterior, buccal, upper jaw and lower jaw) the apparatuses described herein may be configured to automatically determine patient-specific information on the identity and other patient characteristics, and to associate this information with the images taken. For example, the set of instructions can cause the processor to capture an image of a patient's identification (e.g., driver's license) using the mobile telecommunications device and automatically populate a form with user identification information based on the imaged identification.
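For illustration only, populating a form from OCR'd identification text may be sketched as follows (the OCR step itself is omitted); the field labels and patterns are hypothetical and would vary by jurisdiction and ID layout.

```python
import re

def populate_form(ocr_text):
    """Parse OCR'd text from a patient identification document (e.g., a
    driver's license) into form fields. The label patterns below are
    illustrative assumptions; a real ID would need layout-specific parsing."""
    fields = {}
    patterns = {
        "name": r"NAME[:\s]+([A-Z ,'.-]+)",
        "dob": r"DOB[:\s]+(\d{2}/\d{2}/\d{4})",
        "id_number": r"ID[:\s]+([A-Z0-9-]+)",
    }
    for field_name, pattern in patterns.items():
        match = re.search(pattern, ocr_text)
        if match:
            fields[field_name] = match.group(1).strip()
    return fields

# Hypothetical OCR output from an imaged ID.
sample = "NAME: JANE DOE\nDOB: 01/02/1990\nID: D1234567"
form = populate_form(sample)
```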
In any of these apparatuses the set of instructions can further cause the processor to display instructions on positioning the patient's teeth on the screen of the mobile telecommunications device prior to displaying the overlay.
Any of the methods described herein may be methods to obtain a series of images of a patient's teeth. These methods may include: displaying, on a screen of a mobile telecommunications device having a camera, a real-time image from the camera; and guiding a user in taking a series of predetermined views of the patient's teeth by sequentially, for each predetermined view: displaying, superimposed over the real-time image on the screen, an overlay comprising an outline of teeth in a predetermined view from the plurality of predetermined views; and triggering the capture of an image of the patient's teeth when the overlay approximately matches with the image of the patient's teeth in the display on the screen.
For example, a method to obtain a series of images of a patient's teeth may include: displaying, on a screen of a mobile telecommunications device having a camera, the patient's teeth; and guiding a user in taking a series of predetermined views of the patient's teeth by sequentially, for each predetermined view: displaying, on the screen, an overlay comprising an outline of teeth in a predetermined view, wherein the overlay is displayed atop the view of the patient's teeth; automatically adjusting the camera to focus within a region of the screen that is within the overlay; automatically adjusting a light emitted by the camera based on a level of light within the region of the screen that is within the overlay; and triggering the capture of an image of the patient's teeth when the overlay approximately matches with the patient's teeth in the display of the patient's teeth, wherein the predetermined views include at least one each of: an anterior view, a buccal view, an upper jaw view, and a lower jaw view.
A series of images of a patient's teeth may include a set, collection, or grouping. A series may be organized or ordered, e.g., in a predefined order based on the predefined views (e.g., viewing angles). The series of images may be collected or linked together, and may include identifying information, including information identifying the corresponding viewing angle (e.g., anterior view, buccal view, an upper jaw view, a lower jaw view, etc.). In any of these variations additional information may be included, such as the user and/or patient's chief dental concern (e.g., crowding, spacing, smile width, arch width, smile line, horizontal overjet, vertical overbite, cross bite, bite relationship, etc.). In general, the set of images may refer to a series of predetermined views. The predetermined views may refer to predetermined viewing angles for visualizing the teeth. Viewing angles may refer to the view of the upper and/or lower dental arch, and may include, for example: anterior (e.g., upper and lower anterior, typically with a closed bite), anterior open bite, right buccal (typically with a closed bite), right buccal open bite, left buccal (typically with a closed bite), left buccal open bite, upper jaw (e.g., viewed from an occlusal surface), and lower jaw (e.g., viewed from an occlusal surface). The predetermined views may also include views that include the entire head, e.g., a head profile, a facial view (typically with the mouth closed, unsmiling), as well as a facial view with the patient smiling. A dental mirror may be used to take the upper and lower jaw images. The systems and methods described herein may automatically determine if a mirror is used, and may orient the image accordingly.
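For illustration only, a series of this kind may be represented by a simple data structure such as the following sketch; the field names and schema are illustrative assumptions, not part of any fixed interface.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DentalImage:
    """One captured image plus identifying metadata."""
    pixels: bytes
    view: str                 # e.g., "anterior", "right buccal open bite"
    mirrored: bool = False    # True if taken via a dental mirror

@dataclass
class ImageSeries:
    """An ordered series of predetermined views, optionally linked to the
    patient's chief dental concern (e.g., crowding, spacing)."""
    images: List[DentalImage] = field(default_factory=list)
    chief_concern: Optional[str] = None

    def views_taken(self):
        return [img.view for img in self.images]

# Example: build a series with two views and a chief dental concern.
series = ImageSeries(chief_concern="crowding")
series.images.append(DentalImage(pixels=b"", view="anterior"))
series.images.append(DentalImage(pixels=b"", view="upper jaw", mirrored=True))
```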
Any of the methods and apparatuses described herein may guide a user in taking a series. The method or apparatus may provide audible and/or visual instructions to the user. In particular, as mentioned above, any of these apparatuses may include an overlay on the display (screen) of the mobile telecommunications device showing an outline that may be matched to guide the user in taking the image(s). The overlay may be shown as an outline in a solid and/or semi-transparent color. An overlay may be shown for each predetermined view. The user may observe the screen and, once the image shows the patient's anatomy approximately matching within the overlay, the image may be captured. Image capture may be manual (e.g., manually triggered for capture by the user activating a control, such as pushing a button to take the image) and/or automatic (e.g., detected by the system and automatically triggered to take the image when the overlay is matched with the corresponding patient anatomy). In general, capturing or triggering the capture of the image of the patient's teeth (and/or the patient's head) when the overlay approximately matches with the image of the patient's teeth in the display may refer to automatic capturing/automatic triggering, semi-automatic capturing/semi-automatic triggering, or manual capturing/manual triggering. Automatic triggering (e.g., automatic capturing) may refer to automatic capture of the image, e.g., taking one or more images when the patient's anatomy (e.g., teeth) show on the screen matches the overlay on the screen. Semi-automatic triggering (e.g., semi-automatic capturing) may refer to producing a signal, such as an audible sound and/or visual indicator (e.g., flashing, color change, etc.) when the patient's anatomy (e.g., teeth) shown on the screen matches the overlay on the screen. 
Manual triggering (e.g., manual capturing) may refer to the user manually taking the image, e.g., taking one or more images when the patient's anatomy (e.g., teeth) is shown on the screen to match the overlay.
As described in greater detail herein, automatic or semi-automatic triggering (e.g., automatic or semi-automatic capture of images) may be accomplished by a variety of well-known image processing techniques. For example, detection of a match between the patient's anatomy (e.g., teeth) and the overlay on a screen may be achieved by edge detection; the edge of the patient's teeth may be compared to the overlay region, and a match may be detected if two or more regions (e.g., two opposite regions, etc.) are within a defined distance (e.g., +/−1 mm, +/−2 mm, +/−3 mm, +/−4 mm, +/−5 mm, +/−6 mm, +/−7 mm, +/−8 mm, +/−10 mm, etc., or +/−a corresponding number of pixels for the image, or +/−a percentage, such as 1%, 2%, 3%, 5%, 7%, 10%, etc., of the screen diameter). The automatic detection of a match may be determined by machine learning, e.g., training a machine to recognize matching of the patient anatomy (e.g., teeth) within the overlay with an acceptable percentage of match.
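For illustration only, the edge-distance match test described above may be sketched as follows; a pixel tolerance stands in for the mm or percent-of-screen thresholds, and the function name and point-list inputs are illustrative assumptions.

```python
import numpy as np

def overlay_matches(edge_points, outline_points, tolerance_px, min_regions=2):
    """Decide whether a detected tooth edge approximately matches the
    overlay outline: for each sampled outline point, find the nearest
    detected edge point; the match succeeds when at least `min_regions`
    outline points lie within `tolerance_px` of the edge."""
    edge = np.asarray(edge_points, dtype=float)
    within = 0
    for pt in np.asarray(outline_points, dtype=float):
        nearest = np.min(np.linalg.norm(edge - pt, axis=1))
        if nearest <= tolerance_px:
            within += 1
    return within >= min_regions

# Synthetic example: the detected edge nearly coincides with the outline
# at three of four sampled points.
outline = [(0, 0), (10, 0), (10, 10), (0, 10)]
edge = [(1, 0), (10, 1), (11, 10), (0, 50)]   # last point is far off
matched = overlay_matches(edge, outline, tolerance_px=2.0)
```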
Any of these methods may include displaying, on a screen of the mobile telecommunications device, images, and particularly real-time images, from the camera of the mobile telecommunications device. Real-time may refer to the current, or approximately concurrent, display of images detected by the camera on a screen or screens, e.g., of the mobile telecommunications device.
In general, the overlay may also be used to improve the image quality of the image(s) being taken. For example, any of these methods and apparatuses may automatically focus the imaging only within the region defined by the overlay. For example, any of these methods and apparatuses may disable or modify the autofocusing of the camera of the mobile telecommunications device (e.g., mobile phone) and may autofocus on just the region within the overlay, or a sub-region within the overlay (e.g., on the anterior teeth, the incisor, canine, bicuspid, molars, etc.).
The overlay may also control the illumination (e.g., lighting) of the images based on the region within all or a portion of the overlay. For example, the apparatus or method may detect and adjust the light level based on the light level within the overlay or a sub-region within the overlay (e.g., on the incisors, canines, bicuspids, molars, etc.). The illumination may generally be provided by the mobile telecommunications device, which may include a flash or LED light source that can be adjusted for continuous and/or discrete illumination.
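For illustration only, adjusting the light level based on the region within the overlay may be sketched as follows; the target level, deadband, and return values are illustrative assumptions.

```python
import numpy as np

def torch_adjustment(frame, overlay_mask, target=128, deadband=20):
    """Suggest how to adjust the device light based only on the mean
    brightness inside the overlay region (ignoring the rest of the frame).
    `frame` is an 8-bit grayscale image; `overlay_mask` is a boolean mask."""
    region = frame[overlay_mask]
    mean_level = float(region.mean())
    if mean_level < target - deadband:
        return "increase"
    if mean_level > target + deadband:
        return "decrease"
    return "hold"

# Synthetic frame: dark everywhere except a bright band outside the mask,
# so default whole-frame metering would be misled.
frame = np.full((100, 100), 40, dtype=np.uint8)
frame[:10, :] = 250                      # bright band at the top
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 30:70] = True                # overlay region is the dark center
```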
In any of the methods and apparatuses described herein, the images taken for particular views (e.g., anterior, anterior open bite, right buccal, right buccal open bite, left buccal, left buccal open bite, upper jaw, and lower jaw, etc.) may be labeled with the corresponding view, either manually or automatically. Further, the view may be detected and identified by the method or apparatus. In variations in which the overlay for a particular view is provided before taking the image, the view shown in the overlay may determine the label for the resulting image. As mentioned herein, in some variations, automatic detection of the nearest view may be performed on the image, and the view (viewing angle) may be detected automatically. Additionally or alternatively, mirror images may be detected or identified, and the resulting images flipped/rotated and/or labeled to indicate that a dental mirror was used to take the image.
In any of the methods and apparatuses described herein, the overlay displayed over an image on the screen of the mobile telecommunication device may be selected automatically, e.g., by identifying the closest match to one of the predetermined viewing angles. The overlay having the closest match may then be used to take an image for the set of images. Alternatively or additionally, the overlay may be provided first, and the user may then move the camera portion of the mobile telecommunications device to fit the patient's anatomy into the overlay.
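For illustration only, selecting the overlay with the closest match may be sketched as an intersection-over-union comparison between a segmented teeth region and candidate overlay masks; mask-based scoring here is an illustrative stand-in for, e.g., a trained classifier.

```python
import numpy as np

def best_matching_view(teeth_mask, overlay_masks):
    """Pick the predetermined view whose overlay mask best overlaps the
    segmented teeth region, scored by intersection-over-union (IoU)."""
    def iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0
    return max(overlay_masks, key=lambda view: iou(teeth_mask, overlay_masks[view]))

# Synthetic masks: the "anterior" overlay overlaps the teeth region best.
teeth = np.zeros((50, 50), dtype=bool)
teeth[20:30, 10:40] = True
overlays = {
    "anterior": np.zeros((50, 50), dtype=bool),
    "upper jaw": np.zeros((50, 50), dtype=bool),
}
overlays["anterior"][18:32, 8:42] = True
overlays["upper jaw"][0:10, 0:10] = True
best = best_matching_view(teeth, overlays)
```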
As mentioned, any of the images for predetermined viewing angles (views) described herein may be taken with the use of a cheek retractor. An apparatus instructing the user to take the images may include written, pictorial (visual) and/or audible instruction on the use of the cheek retractor. Any of the methods and apparatuses described herein may automatically detect a cheek retractor; this may aid in automatic labeling and interpretation of the resulting image(s). In some variations the apparatus and/or method may detect one or more markers on the cheek retractor and use this information to identify a view, to identify a match between an image and an overlay, etc.
Also described herein are methods and apparatuses for remotely pre-screening a patient for an orthodontic treatment. Any orthodontic treatment may be attempted, particularly orthodontic treatments for aligning the patient's teeth. Typically such methods may include taking a series of predetermined views of the patient's teeth, and optionally collecting information about the patient, such as one or more chief dental concerns (e.g., a chief patient dental concern such as, e.g., crowding, spacing, smile width, arch width, smile line, horizontal overjet, vertical overbite, cross bite, bite relationship, etc.). This additional information may be linked to the series of images, and may be used, along with the series of images, to determine if a patient is, or is not, a good candidate for an orthodontic treatment.
For example, described herein are methods for remotely pre-screening a patient for an orthodontic treatment, the method comprising: guiding a user, with a mobile telecommunications device having a camera, to take a series of images of the patient's teeth in a plurality of predetermined views; transmitting the series of images from the mobile telecommunications device to a remote location to determine if the patient is, or is not, a candidate for the orthodontic treatment based on the series of images; and displaying, on a screen of the mobile telecommunications device, an indicator that the patient is, or is not, a candidate for the orthodontic treatment.
In any of these methods, guiding may refer to sequentially, for each predetermined view, displaying, on the screen, an overlay comprising an outline of teeth in one of the predetermined views from the plurality of predetermined views, wherein the overlay is displayed atop an image of the patient's teeth. The overlay may be provided first, or the overlay may be selected from the set of overlay viewing angles that best matches the current view being imaged by the mobile telecommunications device. Alternatively, the predetermined views may be presented in a fixed order.
As mentioned, guiding the user may include, for each predetermined view, capturing an image of the patient's teeth when the image of the patient's teeth approximately matches an overlay corresponding to a predetermined view. Capturing may be manual, automatic or semi-automatic, as discussed above. For example, any of these methods may include automatically determining when the image of the patient's teeth approximately matches the overlay by detecting an edge of the patient's teeth and comparing the detected edge to the overlay.
Any of these methods and apparatuses may include automatically adjusting the camera to focus the camera within a region of the screen that is within the overlay. This region may include all of the region within the overlay, or a sub-set (e.g., corresponding to the anterior teeth, the posterior teeth, etc.).
Any of these methods and apparatuses may include selecting the overlay based on one or more images of the patient's teeth. This may include selecting the overlay corresponding to the particular viewing angle, as mentioned above, and/or it may include customizing the overlay based on the patient's specific anatomy. For example, the overlay may be selected to match the shape, size, and arrangement of the patient's dentition.
Any of these methods and apparatuses may include automatically adjusting the light emitted by the camera based on a level of light within a region of the screen that is within the overlay. The light may be continuous or intermittent (e.g., flash). Thus, the apparatus or method may first disable the default light sensing for the mobile telecommunications device, and may instead use the region (or a sub-section of the region) within the overlay to set the light level for adjusting the flash/applied light from the mobile telecommunications device.
As mentioned, any of these methods and apparatuses may be configured to capture the image of the patient's teeth when the image of the patient's teeth approximately matches the overlay corresponding to the predetermined view, e.g., by automatically capturing the image. Similarly, any of these methods and apparatuses may capture the image of the patient's teeth when the image of the patient's teeth approximately matches the overlay corresponding to the predetermined view by semi-automatically capturing, e.g., triggering a visual, audible, or visual and audible indicator that permits the user to take the image. In some variations a plurality of images may be taken and averaged or used to select the best image.
Transmitting the series of images from the mobile telecommunications device to the remote location may generally include receiving, in the mobile telecommunications device, an indication that the patient is, or is not, a candidate within a fixed period of time (e.g., 10 minutes, 15 minutes, 20 minutes, etc.) from transmitting the series of images. In general, the initial decision that a patient is a good candidate for the orthodontic treatment may use the set of images transmitted, and may also include the chief concern. The decision may be made at the remote location (e.g., a remote server, etc.) either manually or automatically. Automatic decisions may be based on the amount of movement required to position the teeth in order to address the chief concern and/or meet a standard of orthodontic positioning. The methods and apparatuses described herein may provide images with sufficient clarity so that individual tooth positions may be determined relative to the dental arch and used to at least roughly approximate the complexity of an orthodontic procedure. Cases in which the amount and/or type of movement is complex may be indicated as not candidates. Cases in which the amount and/or type of movement is not complex may be indicated as candidates. Complex dental movements may include movements of greater than a minimum threshold (e.g., greater than 3 mm distal/proximal movement, greater than 4 mm distal/proximal movement, greater than 5 mm distal/proximal movement, greater than 6 mm distal/proximal movement, greater than 7 mm distal/proximal movement, etc.), and/or rotation of greater than a minimum threshold (e.g., greater than 5, 10, 15, 20, 25, 30, 35, 45, etc., degrees), and/or extrusion of one or more teeth greater than a minimum threshold (e.g., greater than 0.5 mm, 1 mm, 2 mm, 3 mm, 4 mm, etc.).
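For illustration only, a threshold-based candidacy decision of this kind may be sketched as follows; the default thresholds are drawn from the illustrative ranges above and are not a clinical standard.

```python
def is_candidate(movements, max_translation_mm=3.0, max_rotation_deg=15.0,
                 max_extrusion_mm=1.0):
    """Rough pre-screen: a case is flagged as not a candidate if any tooth
    requires movement beyond a threshold. `movements` is a list of dicts,
    one per tooth, with estimated translation, rotation, and extrusion."""
    for tooth in movements:
        if (tooth.get("translation_mm", 0) > max_translation_mm or
                tooth.get("rotation_deg", 0) > max_rotation_deg or
                tooth.get("extrusion_mm", 0) > max_extrusion_mm):
            return False
    return True

# Example cases: a mild crowding case vs. a large rotation.
simple_case = [{"translation_mm": 1.5, "rotation_deg": 5.0}]
complex_case = [{"translation_mm": 1.0, "rotation_deg": 40.0}]
```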
As mentioned, in general, any of these methods and apparatuses may include automatically identifying each image of the series of images to indicate which view of the plurality of views each image includes (e.g., anterior, anterior open bite, right buccal, right buccal open bite, left buccal, left buccal open bite, upper jaw, lower jaw, etc.). Any of these methods may also include automatically determining if one or more of the series of images was taken using a mirror. For example, the image may be automatically examined to identify reflections (e.g., a plane or mirror), and/or to determine if the orientation of the teeth within the image are reversed (mirrored) in the image. Mirrored images may be reversed for display as a non-mirrored image. Alternatively or additionally, duplicate mirrored regions may be cropped from the image.
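For illustration only, reversing a mirrored capture for display may be sketched as follows; the function name is an illustrative assumption.

```python
import numpy as np

def unmirror(image, taken_with_mirror):
    """Reverse a horizontally mirrored capture (e.g., an occlusal view
    taken via a dental mirror) so it displays as a non-mirrored image."""
    return image[:, ::-1] if taken_with_mirror else image

# A 2x3 image whose values encode left-to-right order.
img = np.array([[1, 2, 3],
                [4, 5, 6]])
flipped = unmirror(img, taken_with_mirror=True)
```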
Any of the methods and apparatuses described herein may include receiving, in the mobile telecommunications device, an indication of the patient's chief dental concern and aggregating the patient's chief dental concern with the series of images. Transmitting the series of images may comprise transmitting the aggregated series of images and the patient's chief dental concern.
As mentioned, any of the methods and apparatuses described herein may be configured to include instructing the user to retract the patient's cheek with a cheek retractor. A marker on the cheek retractor may be used to automatically identify the image to indicate which view of the plurality of views it includes based on the identified cheek retractor.
Although the terms “user” and “patient” are used separately herein, the user may be the patient. For example, the person taking the images using the methods and apparatuses described herein may be the patient. Thus, in any of these methods, the user may be the patient. Alternatively, a separate user (e.g., dentist, orthodontist, dental technician, dental assistant, etc.) may act as the user, taking the images as described herein on a patient.
A method for remotely pre-screening a patient for an orthodontic treatment may include: guiding a user, with a mobile telecommunications device having a camera, to take a series of images of the patient's teeth in a plurality of predetermined views by sequentially, for each predetermined view: displaying, on the screen, an overlay comprising an outline of teeth in one of the predetermined views from the plurality of predetermined views, wherein the overlay is displayed atop an image of the patient's teeth; and capturing the image of the patient's teeth when the overlay approximately matches the patient's teeth in the view of the patient's teeth; transmitting the series of images to a remote location to determine if the patient is a candidate for the orthodontic treatment based on the series of images; and displaying, on the screen of the mobile telecommunications device, an indicator that the patient is, or is not, a candidate for the orthodontic treatment.
A method for remotely pre-screening a patient for an orthodontic treatment may include: guiding a user, with a mobile telecommunications device having a camera, to take a series of images of the patient's teeth from a plurality of predetermined views by sequentially displaying, on a screen of the mobile telecommunications device, an overlay comprising an outline of teeth in each of the predetermined views; receiving, in the mobile telecommunications device, an indication of the patient's chief dental concern; aggregating, in the mobile telecommunications device, the series of images and the chief dental concern; transmitting the aggregated series of images and the chief dental concern to a remote location to determine if the patient is a candidate for the orthodontic treatment based on the series of images; and displaying, on the screen of the mobile telecommunications device, an indicator that the patient is, or is not, a candidate for the orthodontic treatment.
Any of the methods (and method steps) described herein may be performed by an apparatus configured to perform the method(s). For example, described herein are systems for remotely pre-screening a patient for an orthodontic treatment. A system for remotely pre-screening a patient (e.g., remote to the patient) may include: a non-transitory, computer-readable storage medium storing a set of instructions capable of being executed by a processor of a mobile telecommunications device having a camera, that, when executed by the processor, causes the processor to: guide a user to take a series of images of the patient's teeth in a plurality of predetermined views with the camera; transmit the series of images from the mobile telecommunications device to a remote location to determine if the patient is a candidate for the orthodontic treatment based on the series of images; and display, on a screen of the mobile telecommunications device, an indicator that the patient is, or is not, a candidate for the orthodontic treatment. The non-transitory, computer readable storage medium may cause the processor to guide the user to take the series of images of the patient's teeth by: displaying, on the screen of the mobile telecommunications device, an image from the camera and an overlay comprising an outline of teeth in one of the predetermined views from the plurality of predetermined views, wherein the overlay is displayed atop the image from the camera. The non-transitory, computer readable storage medium may cause the processor to guide the user to take the series of images of the patient's teeth by automatically adjusting the camera to focus the camera within a region of the screen that is within the overlay. The non-transitory, computer readable storage medium may cause the processor to guide the user to take the series of images of the patient's teeth by selecting the overlay based on one or more images of the patient's teeth.
The non-transitory, computer readable storage medium may cause the processor to guide the user to take the series of images of the patient's teeth by automatically adjusting a light emitted by the camera based on a level of light within a region of the screen that is within the overlay.
The non-transitory, computer readable storage medium may cause the processor to guide the user to take the series of images of the patient's teeth by: indicating when an overlay comprising an outline of teeth in one of the predetermined views from the plurality of predetermined views aligns with a view of the patient's teeth from the camera, wherein the overlay is displayed atop the view from the camera. The non-transitory, computer readable storage medium may cause the processor to guide the user to take the series of images of the patient's teeth by: automatically taking an image of the patient's teeth when an overlay comprising an outline of teeth in one of the predetermined views from the plurality of predetermined views aligns with a view of the patient's teeth from the camera, wherein the overlay is displayed atop the view from the camera.
The non-transitory, computer readable storage medium may cause the processor to guide the user to take the series of images of the patient's teeth by guiding the user to take at least one each of: an anterior view, a buccal view, an upper jaw view, and a lower jaw view.
Any of these systems may be configured so that the non-transitory, computer readable storage medium further causes the processor to receive, in the mobile telecommunications device, an indication that the patient is, or is not, a candidate within 15 minutes of transmitting the series of images. The non-transitory, computer readable storage medium may cause the processor to automatically identify each image of the series of images to indicate which view of the plurality of views each image includes. The non-transitory, computer readable storage medium may cause the processor to automatically determine if one or more of the series of images was taken using a mirror.
The non-transitory, computer readable storage medium may further cause the processor to receive, in the mobile telecommunications device, an indication of the patient's chief dental concern and to aggregate the patient's chief dental concern with the series of images, further wherein the non-transitory, computer readable storage medium may be configured to transmit the series of images as the aggregated series of images and the patient's chief dental concern.
The non-transitory, computer readable storage medium may further cause the processor to instruct the user to retract the patient's cheek with a cheek retractor, and/or to identify a marker on the cheek retractor and to mark an image from the plurality of predetermined views to indicate which view of the plurality of views it includes based on the identified cheek retractor.
Any of the systems described herein may include a remote processor configured to receive the transmitted series of images and to transmit an indicator that the patient is, or is not, a candidate for the orthodontic treatment based on the series of images back to the non-transitory, computer-readable storage medium.
For example, a system for remotely pre-screening a patient for an orthodontic treatment may include a non-transitory, computer-readable storage medium storing a set of instructions capable of being executed by a processor of a mobile telecommunications device having a camera, that, when executed by the processor, causes the processor to: guide a user to take a series of images of the patient's teeth in a plurality of predetermined views by sequentially, for each predetermined view: displaying, on a screen of the mobile telecommunications device, an image from the camera and an overlay comprising an outline of teeth in one of the predetermined views from the plurality of predetermined views, wherein the overlay is displayed atop the image from the camera; and capturing the image when the overlay approximately matches the patient's teeth on the screen; and transmit the series of images to a remote location.
A system for remotely pre-screening a patient for an orthodontic treatment may include: a non-transitory, computer-readable storage medium storing a set of instructions capable of being executed by a processor of a mobile telecommunications device having a camera, that, when executed by the processor, causes the processor to: guide a user to take a series of images of the patient's teeth in a plurality of predetermined views by sequentially, for each predetermined view: displaying, on a screen of the mobile telecommunications device, an image from the camera and an overlay comprising an outline of teeth in one of the predetermined views from the plurality of predetermined views, wherein the overlay is displayed atop the image from the camera; and capturing the image when the overlay approximately matches the patient's teeth on the screen; and transmit the series of images to a remote location; and a remote processor configured to receive the transmitted series of images and to transmit an indicator that the patient is, or is not, a candidate for the orthodontic treatment based on the series of images back to the non-transitory, computer-readable storage medium.
The novel features of the invention are set forth with particularity in the claims that follow. A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the invention are utilized, and the accompanying drawings of which:
The following description of the various embodiments of the invention is not intended to limit the invention to these embodiments, but rather to enable any person skilled in the art to make and use this invention.
Described herein are methods and apparatuses (including devices, systems, non-transitory, computer-readable storage medium storing instructions capable of being executed by a processor, etc.) for capturing high quality dental images, including obtaining an image or set of images at predetermined positions of a patient's teeth and/or face for therapeutic use. In particular, described herein are methods and apparatuses for remotely pre-screening a patient for an orthodontic treatment that includes taking a defined set of images of the patient's teeth at known angles, and transmitting them as a set for remote analysis. The images must be at the predetermined angles and must be sufficiently well focused and illuminated, as will be described herein, despite being taken with the built-in camera found in most mobile telecommunications devices. The methods (and apparatuses for performing them) described herein guide a user in taking the images of the patient's teeth, for example, by displaying an overlay comprising an outline of teeth in a predetermined view (or sequence of predetermined views), on a screen of a mobile telecommunications device, wherein the overlay is displayed atop the view of the patient's teeth (or other body parts, such as face, head, etc.). The user may then move the mobile telecommunications device so that the overlay approximately matches the patient's teeth in the view of the patient's teeth and capture (e.g., manually or automatically by means of the apparatus) an image of the view of the patient's teeth (and in some instances, face and head).
Mobile telecommunications devices can be used to capture dental images instead of using expensive and bulky SLR cameras.
It may be particularly helpful to adapt a traditional handheld consumer electronics device, such as a phone (e.g., smartphone, smartwatch, pad, tablet, etc.) to take one or, more preferably, a series of images of the teeth at sufficient clarity, e.g., focus, magnification, resolution, lighting, etc. so that these images may be used to track a patient's progress and/or pre-screen the patient for a dental or orthodontic procedure. Thus, such images (and image series) when taken from the proper orientations and with sufficient clarity (focus, magnification, resolution, lighting, etc.) may be used in one or more of: planning a dental/orthodontic procedure, determining the feasibility of a dental/orthodontic procedure for a particular patient, tracking patient progress during a dental/orthodontic procedure, determining the effectiveness of a dental/orthodontic procedure, etc. For example, any of the methods and apparatuses described herein may be used to specifically track a portion of an orthodontic procedure, including one or more phases of treatment (such as palatal expansion).
In general, these methods and apparatuses may improve on existing technology by guiding a user through the process of collecting relevant patient images (e.g., a predetermined set of images), enhancing and/or confirming the image quality, associating patient information and transmitting the set of images so that they may be used in a dental/orthodontic procedure.
For example, the methods and apparatuses described herein may use a user's own handheld electronics apparatus having a camera (e.g., smartphone) and adapt it so that the user's device guides the user in taking high-quality images (e.g., at the correct aspect ratio/sizing, magnification, lighting, focus, etc.) of a predetermined sequence of orientations. In particular, these apparatuses and methods may include the use of an ‘overlay’ on a real-time image of the screen, providing immediate feedback on each of the desired orientations, which may also be used to adjust the lighting and/or focus, as described herein.
An overlay may include an outline (e.g., a perspective view outline) of a set of teeth that may be used as a guide to assist in placement of the camera to capture the patient image. The overlay may be based on a generic image of teeth, or it may be customized to the user's teeth, or to a patient-specific category (by patient age and/or gender, and/or diagnosis, etc.). The overlay may be shown as partially transparent, or it may be solid, and/or shown in outline.
High quality dental images are usually required to submit a dental case or request a case assessment. The overlay on the screen of the mobile telecommunications device can increase the quality of dental images by increasing the accuracy of the alignment of the camera lens with the teeth of the patient. The on-screen overlay with the outline of teeth can help doctors take quality dental photos using the mobile telecommunications device.
The dental photos captured by the mobile telecommunications device can be uploaded automatically. In general, the methods described herein can increase the efficiency of the user. Taking dental photos by using the method disclosed herein can be much faster than using digital cameras. The dental images captured by the method can be of a higher quality and consistency than simply using a default camera application of a mobile telecommunications device.
As shown in
The overlay with an outline of teeth in a predetermined view can further provide information such as required visible teeth in the predetermined view. For example,
An overlay can be an average (generic) overlay obtained from an artificial model. The average overlay may not require specific information from a patient. For example, the average overlay can be obtained from an artificial model which approximately fits most patients. Users can be guided to approximately match the average overlay with the teeth of a patient. In some other embodiments, the average overlay can comprise a plurality of overlays with a plurality of sizes and types based on a plurality of artificial models, for example, overlays for different age groups, overlays for female and male patients, overlays for patients having an overbite, underbite, etc. In some variations, the users can manually select an overlay or family of overlays that may fit the teeth of the patient and may then be guided to match the selected overlay with the teeth of the patient. In some variations the method or apparatus may automatically select an overlay or family of overlays for the patient. For example, the system may be trained to recognize (e.g., using machine learning) images corresponding to a particular type of overlay or family of overlays. Selection may be based on patient information provided by the user (e.g., patient age, gender, etc.) or based on prior dental record information specific to the patient.
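The categorical selection described above could be implemented as a simple lookup from patient attributes to an overlay family, falling back to the generic (average) overlay when no category matches. The category keys and overlay file names below are purely illustrative assumptions:

```python
# Hypothetical overlay catalog; keys and file names are illustrative only.
OVERLAY_CATALOG = {
    ("child", "any"): "overlay_child.svg",
    ("adult", "female"): "overlay_adult_f.svg",
    ("adult", "male"): "overlay_adult_m.svg",
}
GENERIC_OVERLAY = "overlay_generic.svg"

def select_overlay(age_group: str, gender: str) -> str:
    """Pick a categorical overlay, falling back to the average overlay."""
    return (OVERLAY_CATALOG.get((age_group, gender))
            or OVERLAY_CATALOG.get((age_group, "any"))
            or GENERIC_OVERLAY)
```

A machine-learned selector, as mentioned above, would replace this lookup with a classifier over one or more images of the patient's teeth; the fallback-to-generic behavior would likely be kept either way.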
In some other embodiments, the overlay can be a patient-specific overlay derived from the patient's teeth. An example of a customized overlay is shown in
Aligning may be done manually by the user, semi-automatically, or automatically. For example, any of the methods and apparatuses described herein may be used to indicate (e.g., by a visual and/or audible and/or tactile signal) that the teeth are aligned in the frame with the overlay. For example,
As mentioned, in general, the overlay may be generic, categorical (e.g., appropriate for all patients having one or more category-related characteristics) or specific to the patient. The overlay can be a treatment-specific overlay. The teeth of a patient can change. For example, the teeth of the patient can change over time, including with treatment. The overlay can be modified according to the predicted or actual changes of the teeth of the patient with treatment (e.g., sequential alignment treatment), thus matching the teeth of the patient more precisely. The patient-specific overlay or treatment-specific overlay can give users real-time insight into the treatment progress.
During the imaging procedure, users can view the patient's teeth, for example, on the screen of the mobile telecommunications device. Users can further display, on the screen, the overlay comprising the outline of teeth in a predetermined view, wherein the overlay is displayed atop the view of the patient's teeth. The overlay can be an average overlay, or a patient specific overlay, or a treatment specific overlay. Users can move the mobile telecommunications device relative to the patient's teeth so that the overlay approximately matches the patient's teeth in the view of the patient's teeth. Users can then capture an image of the view of the patient's teeth.
As mentioned, the method can further comprise estimating the quality of contour matching between the outline of the overlay and the teeth of the patient in order to help take high quality photos. When the patient's teeth are located in the expected place and at the right angle, the overlay approximately matches the patient's teeth in the view of the patient's teeth. The methods and apparatuses described herein can help prevent accidentally confusing the view or angle, and may comprise a real-time reaction to the image on the screen of the mobile telecommunications device. Interactive contour-matching estimation can enable capturing high quality dental images. The method can comprise real-time, interactive estimation of contour matching between the outline of the overlay and the teeth of the patient, for example, by using an indicator. In some embodiments, the methods and apparatuses can automatically detect matching between the patient's teeth on the screen and the overlay to confirm the position of the teeth.
Thus, the methods and apparatuses disclosed herein may trigger an indicator when the overlay approximately matches with the patient's teeth. The method or apparatus can automatically capture (or enhance manual capture of) an image of the view of the patient's teeth when the indicator is triggered as shown in
The apparatus (e.g., system) described herein may also provide guidance by indicating when the image is too far from or too close to the camera. For example, the camera output may be analyzed to determine the approximate distance to the patient's mouth, and this distance compared to an expected distance for taking an optimal image. For example, in any of these variations, image processing may be used to identify the patient's face (e.g., facial recognition), mouth (e.g., using machine-learned feature recognition) or other features, such as the nose, eyes, etc. to determine the distance from the camera and/or what regions to focus on. Any of the methods and apparatuses described herein may include the use (either with an overlay or without an overlay) of tooth recognition. For example, an apparatus may be configured to automatically detect one or more teeth and trigger an alert to take an image (or automatically take the image) when the identified tooth or teeth are in the desired orientation, size and/or location on the screen.
As mentioned, the indicator can be triggered by estimating an indicator of the distance between an edge of the patient's teeth in the view of the patient's teeth and the outline of teeth in the overlay. For another example, the indicator can be triggered by estimating an indicator of the distance between an edge of the patient's teeth at two or more regions and the outline of teeth and comparing that indicator to a threshold value. For yet another example, the indicator can be triggered by estimating an indicator of an average deviation of the outline from a contour of the teeth of the patient. The indicator can be triggered by a variety of ways of estimating a match between the outline of the overlay and the teeth of the patient, not being limited to the examples illustrated herein. In general, in any of the examples described herein, images taken may be associated (including marking, labeling, etc.) with an indicator of the predetermined view and/or user identification information and/or date information. For example, an image such as shown in 3B may be taken and marked “frontal”.
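The average-deviation variant of the trigger just described could be sketched as below. This assumes the overlay outline and the detected tooth edge are both available as 2-D point sets in screen pixels, and the 5-pixel threshold is an illustrative assumption, not a value from the disclosure:

```python
import numpy as np

def mean_contour_deviation(outline: np.ndarray, tooth_edge: np.ndarray) -> float:
    """Average distance from each outline point to its nearest tooth-edge point."""
    # Pairwise distances between the two point sets (N outline x M edge points).
    d = np.linalg.norm(outline[:, None, :] - tooth_edge[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def indicator_triggered(outline: np.ndarray, tooth_edge: np.ndarray,
                        threshold_px: float = 5.0) -> bool:
    """Trigger the match indicator when the average deviation is small enough."""
    return mean_contour_deviation(outline, tooth_edge) <= threshold_px
```

The two-region variant mentioned above would apply the same comparison to distances sampled at two or more selected regions rather than over the whole contour.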
The indicator can be a visual indicator, such as a color change as illustrated in
In some variations the apparatus and methods may include providing one or more visual or audible clues to the user in aligning the images. For example, one or more arrows may be shown on the screen to indicate that the user should move the mobile telecommunications device in a particular direction (or by a particular amount) to align the patient's teeth with the overlay.
For another example, the indicator can be a sound (e.g., tone or voice) which is triggered when the overlay matches the teeth of the patient. For example, the phone can generate a beeping sound to indicate that the overlay matches the teeth and that it is the moment to capture the image. The method can comprise automatically capturing an image of the patient's teeth when the indicator is triggered and/or permitting the user to manually take one or more images.
Automatically capturing an image of the patient's teeth when the overlay approximately matches with the patient's teeth may be triggered by the apparatus. For example, the shutter of the camera of the mobile telecommunications device can be triggered by an internal indicator.
Generally, the methods and apparatuses described herein are configured to take a series of predetermined specific views of the teeth. Thus, the apparatus may be an application software (“app”) that guides a user in taking a sequence of specific images. The specific images may be a set that has particular clinical and/or therapeutic significance, such as frontal/anterior (mouth closed), frontal/anterior (mouth open), left buccal (mouth closed/open), right buccal (mouth closed/open), upper jaw, lower jaw, profile, face, etc.
For example, a method to obtain an image of the teeth may include displaying, on the screen, an overlay comprising an outline of teeth in one of an anterior view, a buccal view, an upper jaw view, or a lower jaw view, wherein the overlay is displayed atop the view of the patient's teeth. The method and apparatus may walk the user through taking a complete or partial set of these images. For example,
In
The method can include displaying, on a user interface, a plurality of overlays for each of a plurality of dental images in a plurality of predetermined views. As mentioned above, the user can select one of the overlays. For example, there can be three different facial images and eight different dental images as shown in
The method can further allow the users to take more images of their choice. For example, the method can have a user interface to allow a user to input an angle and a distance of his or her choice, thus giving the user freedom to capture custom images. In some embodiments, the method can further enable the user to take several photos in motion to reconstruct the 3D structure of the teeth.
Any of these methods and apparatuses may further include reviewing the captured image on the mobile telecommunications device to confirm image quality and/or automatically accept/reject the image(s). For example, a method or apparatus may be configured to check the image quality of the captured image and to display a warning on the screen if the image quality is below a threshold for image quality.
In
The method can comprise evaluating the image quality and warning doctors if the image might be rejected. For example, if the image is too dark, blurry, or out of focus, a warning message is shown on the screen to warn the doctors to retake the photo. The quality of the image can be evaluated and reviewed by a variety of methods. For example, the method can comprise evaluating the focus status of the image. For another example, the method can comprise analyzing the image using a library. For yet another example, the method can comprise analyzing the brightness of the image. In some embodiments, the method can further comprise using image recognition algorithms to ensure that all required teeth are visible.
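The focus and brightness checks described above might be sketched as follows. The variance-of-Laplacian sharpness proxy is one common technique for focus evaluation (the disclosure does not mandate a specific one), and both threshold values here are illustrative assumptions:

```python
import numpy as np

def image_quality_warnings(gray: np.ndarray,
                           blur_threshold: float = 100.0,
                           dark_threshold: float = 40.0) -> list:
    """Return warning messages for a grayscale image (0-255 intensity scale).

    Blur is estimated by the variance of a Laplacian response (low variance
    suggests few sharp edges); darkness by the mean pixel intensity.
    Threshold values are illustrative, not taken from the disclosure.
    """
    g = gray.astype(float)
    # 3x3 Laplacian via finite differences, evaluated on interior pixels.
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    warnings = []
    if lap.var() < blur_threshold:
        warnings.append("image may be blurry or out of focus")
    if g.mean() < dark_threshold:
        warnings.append("image is too dark")
    return warnings
```

In the flow described above, a non-empty warning list would trigger the on-screen message prompting the doctor to retake the photo.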
Often doctors have to wait one or more days to crop and edit the dental images before uploading them. The method described herein can further comprise displaying the overlay with the cropping frame and the outline of teeth in the predetermined view, and automatically cropping the captured image.
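The automatic cropping step could amount to slicing the captured image to the rectangle of the displayed cropping frame; a minimal sketch, assuming the frame is given as pixel coordinates mapped from the on-screen overlay:

```python
import numpy as np

def crop_to_frame(image: np.ndarray, frame: tuple) -> np.ndarray:
    """Crop a captured image to the on-screen cropping frame.

    `frame` is (top, left, height, width) in pixel coordinates; in practice
    these values would be derived from the overlay's cropping frame as
    displayed during capture.
    """
    top, left, h, w = frame
    return image[top:top + h, left:left + w]
```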
In general, a method for obtaining a series of images of a patient's teeth can include viewing, on a screen of the mobile telecommunications device (in real time), the patient's teeth as shown in step 803. The method can also include displaying, on the screen, an overlay comprising a predetermined view and (optionally) a cropping frame. The overlay of the predetermined view may be, for example: an anterior view (jaws open or closed), a buccal view (left, right, and jaws open/closed), an upper jaw view, or a lower jaw view, etc. The overlay may be displayed atop the view of the patient's teeth as shown in step 805. As described herein, the method or apparatus performing the method may be configured to include a selection step for determining which overlay to use (e.g., by automatic identification of the teeth/face shown on the screen and/or by the user manually selecting from a menu, etc.). For example, the method can further comprise enabling the user to choose the predetermined view. A user interface may show the user a predetermined view and a plurality of overlays for a plurality of predetermined photo views (for example, anterior (open/closed), left buccal (open/closed), right buccal (open/closed), occlusal maxillary (upper jaw), occlusal mandibular (lower jaw), etc.). The user can select one of the plurality of overlays to capture the corresponding dental images. For each predetermined view, the overlay with an outline is shown on the screen of the mobile telecommunications device. The method can further comprise moving the mobile telecommunications device so that the overlay approximately matches the patient's teeth in the view of the patient's teeth as shown in step 806. For example, the method can further comprise displaying instructions about positioning the patient's teeth on the screen of the mobile telecommunications device prior to displaying the overlay.
Optionally, as described herein, the method may include using the overlay region to adjust the focus 811, lighting, exposure 813, etc.
In some embodiments, the method can further comprise triggering an indicator when the overlay approximately matches with the patient's teeth as in (optional) step 807. For example, if the overlay is not matched, it can be displayed in red; if the overlay matches the teeth, the color of the outline changes to green. The method can further comprise capturing an image of the view of the patient's teeth as shown in step 808. Steps 803 through 808 can be performed for each predetermined photo view 815. The user may take several photos for each view for a more accurate treatment progress estimation. The method can further comprise repeating the steps of viewing, displaying, moving and capturing to capture anterior, buccal, upper jaw and lower jaw images of the patient's teeth. In addition, the apparatus may check the image to be sure that the quality is sufficiently high (e.g., in focus, etc.); if not, it may repeat the step for that view.
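The per-view loop with the quality-based retake just described can be sketched as a simple control flow. The `capture_fn` and `quality_ok` callables below are placeholders for the device-specific overlay-guided capture and the quality check, and the view names are illustrative:

```python
def capture_series(views, capture_fn, quality_ok):
    """Guide through each predetermined view, retaking low-quality shots.

    `views` is the ordered list of predetermined views; `capture_fn(view)`
    stands in for the overlay-guided capture of one image for that view;
    `quality_ok(img)` stands in for the image-quality check.
    """
    images = {}
    for view in views:
        while True:
            img = capture_fn(view)
            if quality_ok(img):  # e.g., in focus, bright enough
                images[view] = img
                break  # proceed to the next predetermined view
    return images
```

In the method above, the resulting set of images would then be transmitted as a series (step 810) once all predetermined views have passed the quality check.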
The method can further comprise transmitting the captured image to a remote server as in step 810 and/or evaluating the captured image for medical treatment by using the set of images collected 809. The captured dental images can be transferred to the server for a more precise estimation of treatment progress and/or for pre-screening a patient. For example, the captured dental images can be used for case evaluation before starting aligner treatment to evaluate if the patient is a candidate for the treatment as described in
In
Targeted Focus
Any of the methods and apparatuses described herein may include tooth-specific focusing. In general, the camera of the mobile telecommunications device may be configured so that the camera automatically focuses on a tooth or teeth. In variations in which the system is configured to detect a patient's teeth within an image, the apparatus may then focus on the teeth automatically. Alternatively or additionally, the apparatus or method may be configured to use the overlay, or a region within the overlay, to focus on the patient's teeth. For example, the apparatus (e.g., a non-transitory, computer-readable storage medium storing a set of instructions capable of being executed by a processor of a mobile telecommunications device having a camera) may be configured to focus on a subset of the region within the overlay on the display, and automatically focus within this region or sub-region. Within this region or sub-region, the camera may be controlled to perform any appropriate type of autofocusing, including but not limited to: contrast-detection auto-focus, phase-detection auto-focus, and laser auto-focus.
For example, the apparatus or method may first disable the camera's native autofocusing, which may default to a particular region (e.g., the central region), motion detection, and/or object (e.g., face) recognition, or some variation of these. The native autofocusing, if used with the overlay and method of matching the overlay to a patient's teeth, may instead focus on the lips, cheek retractors, tongue, etc., rather than the teeth or a portion of the teeth. By instead restricting the autofocusing to a region that is limited to the overlay or a portion of the overlay, the apparatus and method may properly focus on the teeth.
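Restricting the focus region to the overlay might look like the following sketch, which reduces the overlay to a bounding rectangle in the normalized screen coordinates that camera APIs commonly accept for focus regions. The mask representation is an assumption for illustration:

```python
import numpy as np

def focus_roi_from_overlay(mask: np.ndarray) -> tuple:
    """Bounding box of the overlay mask, normalized to [0, 1] screen units.

    `mask` is a boolean H x W array marking where the overlay is drawn; the
    returned (x, y, w, h) rectangle would be handed to the camera API as its
    focus region instead of the native (e.g., center-weighted) autofocus area.
    """
    ys, xs = np.nonzero(mask)
    h_px, w_px = mask.shape
    x0, x1 = xs.min(), xs.max() + 1
    y0, y1 = ys.min(), ys.max() + 1
    return (x0 / w_px, y0 / h_px, (x1 - x0) / w_px, (y1 - y0) / h_px)
```

A sub-region variant, as mentioned above, would shrink this rectangle further (e.g., to the central portion of the overlay) before passing it to the camera.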
For example,
Adapted Lighting Mode
Any of the apparatuses and methods described herein may also be configured to automatically and accurately adjust the lighting and/or exposure time so that the teeth are optimally illuminated for imaging, so that details may be apparent. As described above for the autofocusing within the image, the illumination (lighting) may be similarly adjusted by using the overlay or a sub-region of the overlay to set the intensity of the applied lighting, such as the flash.
In general, a camera of a mobile communications device may include a light source providing illumination when taking an image. The camera may have one or more lighting modes for operation, including, for example, a bust flash (a pulse or flash of light correlating with image capture), torch (a continuously on light), or no flash/no illumination (e.g., not providing illuminated). Any of the methods and apparatuses described herein may improve the images of the patient's teeth that are captured by automatically selecting and controlling a particular lighting mode for each image captured (e.g., each predetermined view). For example, in particular, the apparatus may be configured by adjusting or controlling the lighting mode so that no flash is used when taking the facial images (e.g., profile facial images, frontal facial images, with or without smile, etc.); the lighting mode may be set to burst flash when taking the occlusal images (e.g., upper occlusal/upper jaw and lower occlusal/lower jaw); and torch illumination may be automatically selected when taking intra oral photos (e.g., anterior views, buccal views etc.). The intensity of the flash may also be adjusted. For example, the intensity of the light applied may be adjusted based on a light level detected from the region of an image within an overlay on the screen. In some variations the choice to use any additional illumination at all may be made first, based on the light intensity within the overlay region; if the light level is below a threshold (e.g., within the lower 5%, 10%, 15%, 20%, 25%, 30%, etc. of the dynamic range for intensity of light for the camera of the mobile telecommunications device) within all or a portion of the overly, then then lighting mode may be selected based on the type of predetermined view for the overlay. For example, if the overlay shows an intra oral view (e.g., anterior, anterior mouth open, buccal, buccal mouth open, etc.) 
then the lighting mode may be set to torch, and in some cases the level of brightness of the torch may be adjusted based on the light level detected within the overlay. If the overlay corresponds to an occlusal view (e.g., upper jaw, lower jaw) then the lighting mode may be set to burst flash, and in some cases the brightness of the flash may be adjusted based on the light level detected within the overlay.
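The view-dependent lighting-mode selection described above may be sketched as follows. This is an illustrative sketch only: the view names, the normalized light scale, the 20% threshold default, and the mode constants are assumptions chosen for illustration, not a required implementation.

```python
# Illustrative sketch of selecting a lighting mode per predetermined view.
# An actual implementation would query the device camera for the measured
# light level within the overlay region; here it is passed in directly.

FACIAL_VIEWS = {"profile", "frontal", "frontal_smile"}
OCCLUSAL_VIEWS = {"upper_jaw", "lower_jaw"}
INTRAORAL_VIEWS = {"anterior", "anterior_open", "buccal", "buccal_open"}

def select_lighting_mode(view: str, overlay_light_level: float,
                         threshold: float = 0.20) -> str:
    """Return a lighting mode ('none', 'torch', or 'burst_flash') for the
    given predetermined view. overlay_light_level is the normalized
    (0.0-1.0) light intensity measured within the overlay region."""
    if view in FACIAL_VIEWS:
        return "none"            # no flash for facial images
    if overlay_light_level >= threshold:
        return "none"            # ambient light is sufficient
    if view in INTRAORAL_VIEWS:
        return "torch"           # continuous light for intra oral views
    if view in OCCLUSAL_VIEWS:
        return "burst_flash"     # flash pulse for occlusal views
    return "none"
```

Note that the light-level check is applied first for non-facial views, matching the variation in which the decision to add any illumination at all precedes the choice of mode.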
Thus, any of the methods and apparatuses described herein may automatically adjust the illumination provided by switching on or off the automatic flash and/or by setting the light level, or allowing the user to adjust the light level(s). For example, the method or apparatus may toggle between automatic flash, no flash/illumination, user-adjusted light level ("torch mode") or automatically adjusted light level, based on one or more of user preferences and the image to be taken. For example, when taking a profile or face image, the apparatus may be configured so that the flash is turned on, e.g., defaulting to the camera's auto-flash function, if one is present. When taking intraoral images (e.g., an anterior view, a buccal view, an upper jaw view, or a lower jaw view, etc.) the flash may be turned off, and instead the apparatus may be adjusted to use a "torch" mode, in which the light is continuously on, particularly when imaging. The level of the light may be set automatically, as mentioned above, or it may be adjusted by the user. For example, when taking intraoral images of the teeth, the torch function of the camera light may be set to be on at a level that is approximately 10-50% of the peak intensity (e.g., between 15-40%, between 20-30%, etc.). In some variations the torch intensity may be adjusted in real time based on the image quality, and in particular, based on the light level of a region within the overlay (e.g., centered on or near the molars). Alternatively or additionally, the user may manually adjust the light intensity in the torch mode, e.g., by adjusting a slider on the screen of the device or by one or more buttons on the side of the device.
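A minimal sketch of the real-time torch-level adjustment described above, assuming the measured light level and the torch intensity are both normalized to 0.0-1.0. The linear mapping (darker overlay region produces a brighter torch) is an illustrative choice; only the 10-50% band comes from the description above.

```python
def torch_level(overlay_light_level: float,
                lo: float = 0.10, hi: float = 0.50) -> float:
    """Map the normalized light level within the overlay region to a
    torch intensity, clamped to the approximately 10-50% band of peak
    intensity described above. The linear mapping is illustrative."""
    level = hi - (hi - lo) * overlay_light_level
    return max(lo, min(hi, level))
```

A user-operated slider could simply override this value, corresponding to the manual adjustment variation.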
Similarly, the exposure time for the camera may be adjusted based on the amount of light within all or a portion of the overlay region of the imaging field. Controlling the image exposure and/or the depth of field scope for the image may be guided by the region within the overlay. For example, the apparatus (e.g., a non-transitory, computer-readable storage medium storing a set of instructions capable of being executed by a processor of a mobile telecommunications device having a camera) may control the camera so that any auto-exposure function of the camera is modified to base the exposure on a targeted point or region within the overlay. The depth of field may also be adjusted by the apparatus.
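The idea of basing focus and exposure on targeted sub-regions within the overlay, which may differ by predetermined view, can be sketched as a lookup from view to metering region. All region coordinates (normalized x, y, width, height) and the view names below are hypothetical placeholders for illustration.

```python
# Hypothetical per-view metering regions; coordinates are normalized
# (x, y, width, height) within the camera frame and purely illustrative.
VIEW_REGIONS = {
    "anterior": {"focus": (0.40, 0.45, 0.20, 0.20),     # front (incisor) teeth
                 "exposure": (0.30, 0.40, 0.40, 0.30)},
    "buccal":   {"focus": (0.55, 0.45, 0.25, 0.20),     # toward the molars
                 "exposure": (0.40, 0.40, 0.40, 0.30)},
}

def metering_region(view: str, purpose: str) -> tuple:
    """Return the overlay sub-region used for the given purpose
    ('focus' or 'exposure'), falling back to the full frame."""
    return VIEW_REGIONS.get(view, {}).get(purpose, (0.0, 0.0, 1.0, 1.0))
```

Keeping focus, exposure, and light-metering regions separate per view reflects the point, made below, that these regions may differ from one another and between views.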
The exposure time may be set automatically based on a region within the overlay, as mentioned above. The region within the overlay used for setting the exposure may be different from a region used to set either the focus or the light intensity. In addition, these regions may be different for different views. For example, in anterior images, the focus may be set using a region within the overlay that is on the front (e.g. incisor) teeth, as shown in
Cheek Retractor Detection
Any of the methods and apparatuses (e.g., systems) described herein may include a cheek retractor and/or the automatic detection of a cheek retractor. For example, the apparatuses and methods described herein may avoid having a user take a patient's images of the predetermined views without using a cheek retractor, particularly when the predetermined views benefit from the use of a cheek retractor.
The apparatuses and methods described herein may remind the user to use cheek retractors before taking intra-oral photos, as described above (e.g., using an alert window, etc.). However, the user (who may be the patient) may choose not to, leading to images for the intra-oral predetermined views in particular that are not optimal, and may be harder to process. In some variations, the apparatus or method may automatically detect if a cheek retractor is present and may warn the user when one is not detected. For example, machine learning may be used to train the apparatus to automatically detect a cheek retractor. Alternatively or additionally, the cheek retractor may include one or more markings on the cheek retractor in visible regions that may be readily detected. The markings (e.g., dots, bar codes, QR codes, text, patterns, icons, etc.) may be detected by the apparatus and may also be used to help automatically identify the position of the patient, and therefore which predetermined view (e.g., which predetermined view overlay) should be used or is being taken.
In the exemplary device shown in
The markings on the retractor may also be used to aid in automatically cropping the images.
Any of the apparatuses and methods described herein may also estimate the distance between the patient and the camera of the mobile telecommunications device. For example, any of these methods and apparatuses may use facial detection on the image to identify the patient's face; once identified, the size and position of the face (and/or of any landmarks from the patient's face, such as eyes, nose, ears, lips, etc.) may be used to determine the approximate distance to the camera. This information may be used to guide the user in positioning the camera; e.g., instructing the user to get closer or further from the patient in order to take the images as described above. For example, in
The distance from the camera may be approximated using other identified features, as mentioned, including the eye separation distance (pupil separation distance, etc.) when these features are visible. Any of these distance estimates may be stored with the image for later use, e.g., in estimating size or projecting three-dimensional features.
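Estimating distance from a feature of known physical size, such as the pupil separation distance, can follow the standard pinhole-camera relation. The 63 mm default below is a commonly cited adult average interpupillary distance, used here purely as an illustrative assumption.

```python
def estimate_distance_mm(focal_length_px: float,
                         feature_size_px: float,
                         real_feature_size_mm: float = 63.0) -> float:
    """Estimate camera-to-face distance using the pinhole-camera relation
    distance = focal_length * real_size / image_size.

    focal_length_px: camera focal length expressed in pixels.
    feature_size_px: measured size of the feature (e.g., pupil
    separation) in the image, in pixels.
    """
    return focal_length_px * real_feature_size_mm / feature_size_px
```

For example, with a 1000-pixel focal length and pupils 126 pixels apart, the face would be estimated at roughly 500 mm from the camera; such an estimate could be stored with the image as described above.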
Also described herein is the use of continuous imaging (shooting) of the teeth. For example, rather than taking individual images, e.g., one at a time, the apparatus or method may be configured to generate patient photos by using a continuous shooting mode. A rapid series of images may be taken while moving the mobile device. Movement can be guided by the apparatus, and may be from left to right, upper to lower, etc. From the user's perspective it may be similar to taking a video, but a series of images (still images) may be extracted by the apparatus. For example, the apparatus may automatically review the images and match (or approximately match) views to the predetermined views, e.g., using the overlays as described above. The apparatus may select only those images having a sufficiently high quality. For example, blurry, dark or not optimally positioned photos may be automatically rejected. Similarly, multiple photos may be combined (by stitching, averaging, etc.).
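Automatically rejecting blurry frames from a continuous-shooting burst typically relies on a focus measure. The sketch below uses a crude gradient-energy score (real implementations often use the variance of the Laplacian); the frame representation as a 2D list of grayscale values and the score threshold are assumptions for illustration.

```python
def sharpness_score(gray) -> float:
    """Crude sharpness metric: mean squared horizontal and vertical
    intensity differences over a 2D grayscale frame. Sharp frames have
    strong local gradients; blurred frames do not."""
    total, count = 0.0, 0
    for y in range(len(gray) - 1):
        for x in range(len(gray[0]) - 1):
            dx = gray[y][x + 1] - gray[y][x]
            dy = gray[y + 1][x] - gray[y][x]
            total += dx * dx + dy * dy
            count += 1
    return total / count if count else 0.0

def select_frames(frames, min_score: float):
    """Keep only frames sharp enough to be matched against the
    predetermined views; blurry frames are automatically rejected."""
    return [f for f in frames if sharpness_score(f) >= min_score]
```

A similar per-frame score for brightness or overlay alignment could feed the same filter, and the surviving frames could then be matched, stitched, or averaged as described above.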
In some variations, the user may be the patient, and the apparatus may be configured to allow the user to take continuous "selfies" in this manner. For example, a continuous shooting mode may allow the patient to take photos of their smile alone with the camera (e.g., a back-facing camera). The apparatus may use face and/or smile detection to guide the patient and indicate if the camera is well positioned. In variations in which the screen is not facing the user, the apparatus may be configured to provide a user-detectable output (e.g., a flash, sound or the like) indicating that the patient should start moving the device around their head/mouth and may indicate that they should move the camera closer or further, to the right/left, up/down, etc. For example, an indicator such as light (e.g., flash) or sound (a voice or tone, etc.) may be used as an indicator. For example, a flash and/or sound may be used to indicate to the patient when to start moving the mobile device; the apparatus may then begin taking photos in the continuous mode (also referred to as a burst mode) while the patient moves the mobile device in the indicated direction.
As mentioned above, any of these variations may include detection of the teeth automatically, e.g., by machine learning. Detection of the patient's teeth automatically may improve photo quality. In some variations, the machine learning (e.g., the machine learning framework provided by Apple with iOS 11) may be used to detect the presence of teeth when photos are taken, and to further guide the user. For example, the apparatus may alert the user when the teeth are not visible, automatically select which predetermined view overlay to use, indicate if the angle is not correct, indicate that the user is too close or too far from the patient, etc.
In general, in addition to the images captured by the apparatus, additional information, including the orientation and location of the camera relative to the patient, may be extracted from the sequence of images taken from a continuous shooting described above. For example, positional information, including the relative distance and/or angle of the camera relative to the patient's mouth, may be extracted from the time sequence. Additional information, such as from motion sensors (e.g., accelerometers, etc.) in the mobile telecommunications device, may be used as well. This additional sensor information (e.g., accelerometer, angle, etc.) may be provided with the images. Such information may be helpful both in guiding the user, e.g., in instructing the user to get closer/farther, move more slowly, etc., and in calculating dimensional information (e.g., size, three-dimensional position and/or surface or volumetric information, etc.).
Pre-Screening for Orthodontic Treatment
As mentioned above, in general, the apparatuses and methods described herein may be used to remotely pre-screen a patient for an orthodontic treatment. For example, the methods described herein may include, and/or may be used for, determining if a patient would benefit from an orthodontic correction procedure to orthodontically move and straighten the patient's teeth using, e.g., a series of dental aligners. These methods and apparatuses may be part of a case assessment tool. The user (a dental professional or in some cases the potential patient) may be guided to take a set of predetermined views as described above, and these views may be transmitted (e.g., uploaded) from the mobile telecommunications device to a remote location. At the remote location, which may include a server, the images may be processed manually, automatically or semi-automatically to determine if the patient is a candidate for the orthodontic procedure.
The series of predetermined views described herein may be used to determine if a patient is a good candidate by identifying (manually or automatically) the amount and extent of movement of teeth required in order to straighten the teeth. Patients requiring excessive tooth movement may be indicated as not a candidate. Patients requiring surgical treatment (e.g., patients requiring palatal expansion, etc.) may be indicated as not a candidate. In some variations patients requiring tooth extraction and/or interproximal reduction may be indicated as not a candidate, at least for some orthodontic treatments. In some variations, rather than simply determining that a patient is a candidate or not a candidate for a particular orthodontic treatment, the apparatus or method may instead indicate which type of orthodontic treatment would be best.
These methods may be used to pre-screen for any type of orthodontic treatment, as mentioned, including (for example), teeth straightening using one or a series of temporary aligners (e.g., which may be changed regularly, e.g., weekly). The type of orthodontic treatment may be limited to relatively easy orthodontic straightening procedures, such as orthodontic treatments with aligners that may take less than x months to complete (e.g., 1 month or less, 2 months or less, 3 months or less, 4 months or less, etc.).
Either before, during or after capturing the series of images, any of these methods and apparatuses may be configured to collect information about the patient, as discussed above. In addition to, or instead of, patient-identification information, the apparatus may also include information about the patient and/or user's chief orthodontic concern for the patient's teeth (e.g., tooth crowding, tooth spacing, smile width/arch width, smile line, horizontal overjet, vertical overbite, cross bite, bite relationship, etc.). The apparatus may include a menu of these concerns and may allow the user (dental professional and/or patient) to select one or more of them, or enter their own. The one or more chief dental concerns may be added to the set or series of images from the predetermined views. For example, the chief dental concern(s) may be appended or combined with the set or series of images, and transmitted to the remote site and used to determine if the patient is a good candidate for a particular orthodontic procedure.
In general, the images may be used to quantify and/or model the position of the patient's teeth. The position and orientation of the patient's teeth, relative to the other teeth in the dental arch or the opposite dental arch, may provide an estimate of the movement or procedures necessary to correct (e.g., align) the patient's teeth, or in some variations, the progress of an ongoing treatment.
The methods described herein may also include monitoring a patient undergoing an orthodontic treatment. For example, the steps of guiding the user, with the same or a different mobile telecommunications device having a camera, to take a series of images of the patient's teeth from the plurality of predetermined views (e.g., by sequentially displaying, on a screen of the mobile telecommunications device, an overlay comprising an outline of teeth in each of the predetermined views) may be repeated during an orthodontic treatment in order to monitor the treatment. Images taken prior to treatment may be compared with images taken during or after treatment.
In any of the methods and apparatuses described herein, the images may be uploaded to a remote server, or storage facility, or they may be kept local to the mobile telecommunications device. When maintained locally on the mobile device, any copies sent remotely for analysis may be destroyed within a predetermined amount of time (e.g., after analysis is complete), so that no additional copies are retained. The images and any accompanying information may generally be encrypted.
As mentioned, any of the methods and apparatuses described herein may be configured for automatic detection of a mirror, such as a dental mirror, used to take any of the images. For example, the apparatus may be configured to identify that the image is a reflection, or to identify a marker on the mirror. A reflection may be determined by identifying a discontinuity (e.g., a line) at the edge(s) of the mirror, and/or the mirror-imaged/inverted images of parts of the image, such as the teeth. When a mirror is detected, the apparatus may display a mirror icon (or may indicate the detection on the mirror icon). In some variations the resulting image may be inverted (mirrored) so that the image is in the same orientation as it would be had a mirror not been used.
The images (e.g., the predetermined series of images) may be used to supplement additional information (e.g., scans, 3D models, etc.) of the patient's teeth. Images taken as described herein may provide information on the shape, location and/or orientation of the patient's teeth and gingiva, including information related to the patient's tooth roots. Thus, this information may be used in conjunction with other images or models, including 3D models (e.g., digital models) of the patient's teeth, and may be combined with, or may supplement, this information.
Various embodiments of the disclosure further disclose a non-transitory, computer-readable storage medium storing a set of instructions capable of being executed by a processor of a mobile telecommunications device, that, when executed by the processor, causes the processor to display real-time images of the patient's teeth on a screen of the mobile telecommunications device, display an overlay comprising an outline of teeth in a predetermined view atop the images of the patient's teeth, and enable capturing of an image of the patient's teeth. For example, the non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to display a generic overlay. For another example, the non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to display a patient-specific overlay derived from the patient's teeth.
For example, the non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to automatically trigger an indicator when the overlay approximately matches with the patient's teeth. For example, the non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to estimate the distance between an edge of the patient's teeth in the view of the patient's teeth and the outline of teeth in the overlay, and to trigger the indicator when the distance is less than or equal to a threshold value. For example, the non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to estimate the distance between an edge of the patient's teeth and the outline of teeth in the overlay at two or more regions in the view of the patient's teeth and to trigger the indicator when the distances are less than or equal to a threshold value. For example, the non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to trigger the indicator by displaying a visual indicator on the screen.
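The overlay-matching trigger described above can be sketched as a simple threshold test over the measured edge-to-outline distances at one or more sampled regions. The pixel units and the threshold default are illustrative assumptions.

```python
def overlay_matches(edge_distances_px, threshold_px: float = 15.0) -> bool:
    """Return True when the distance between the detected tooth edges and
    the overlay outline is within the threshold at every sampled region,
    i.e., when the capture indicator should be triggered.

    edge_distances_px: distances (in pixels) measured at one or more
    regions between the patient's teeth and the overlay outline.
    """
    return all(d <= threshold_px for d in edge_distances_px)
```

Sampling two or more regions, as described above, guards against a match at one edge while the opposite edge of the overlay is still far from the teeth.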
In general, various embodiments of the disclosure further disclose a non-transitory, computer-readable storage medium storing a set of instructions capable of being executed by a processor of a mobile telecommunications device, that, when executed by the processor, causes the processor to display real-time images of the patient's teeth on a screen of the mobile telecommunications device and display an overlay comprising a cropping frame and an outline of teeth in one of an anterior view, a buccal view, an upper jaw view, or a lower jaw view, wherein the overlay is displayed atop the images of the patient's teeth, and enable capturing of an image of the patient's teeth. The non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to review the captured image and indicate on the screen if the captured image is out of focus and automatically crop the captured image as indicated by the cropping frame.
For example, the non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to check the image quality of the captured image and to display an indication on the screen if the image quality is below a threshold for image quality. For example, the non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to automatically crop the captured image based on a cropping outline displayed as part of the overlay. For example, the non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to transmit the captured image to a remote server.
For example, the non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to display an overlay comprising an outline of teeth in a predetermined view such as an anterior view, a buccal view, an upper jaw view, or a lower jaw view. For example, the non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to repeat the steps of viewing, displaying, moving and capturing to capture anterior, buccal, upper jaw and lower jaw images of the patient's teeth. For example, the non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to capture an image of a patient's identification using the mobile telecommunications device and automatically populate a form with user identification information based on the imaged identification. For example, the non-transitory, computer-readable storage medium, wherein the set of instructions, when executed by the processor, can further cause the processor to display instructions on positioning the patient's teeth on the screen of the mobile telecommunications device prior to displaying the overlay.
The systems, devices, and methods of the preferred embodiments and variations thereof can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions are preferably executed by computer-executable components preferably integrated with the system including the computing device configured with software. The computer-readable medium can be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (e.g., CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component is preferably a general or application-specific processor, but any suitable dedicated hardware or hardware/firmware combination can alternatively or additionally execute the instructions.
In one example, described herein is a photo uploading mobile app (control software) that may be installed on a mobile device such as a smartphone and may control the smartphone camera to take dental images of a patient in a particular manner. The particular manner may include a specific sequence of photos, as well as controlling the image quality and/or imaging characteristics in order to more accurately plan or track a therapeutic treatment, e.g., of a series of dental aligners.
For example, the application ("app") may be configured to require a login (username, password) in order to be used. Alternative logins (e.g., fingerprint, passcode, etc.) may be used to login. Once logged in, the app may present a list of patients and/or may allow patients to be added. The user may select or add a patient, e.g., from a selectable list that is displayed. Adding a patient may include entering identifying information about the patient (e.g., first name, last name, date of birth, location such as country, city, state, zip code, etc., and gender). Thereafter, the app may guide the doctor or clinician in taking a predetermined series of images, such as those shown in
For existing patients, the app may allow the user to view photos that were already taken and/or begin taking new photos. Thus, any of the apparatuses (including apps) described herein may allow the user to take the requested series of images (which may depend on the patient identity, treatment plan, etc.) and may include taking additional images (e.g., 3 or more, 4 or more, 5 or more, 6 or more, 7 or more, 8 or more, 9 or more, 10 or more, 11 or more, 12 or more, etc.). For each patient, photos may be taken at different times, showing the progression of treatment. For example, a series of photographs (such as those shown in
Images/photos taken may be uploaded by the apparatus into a remote server or site, and stored for later retrieval by the physician and/or transmission to a third party along with patient and/or physician identifying information, and/or information about when they were taken. The physicians may approve or review the images using the app, or using another computer accessing the remote site. Images may be individually uploaded, or they may be uploaded as a composite of multiple separate images.
A physician may also comment and/or append comments, on the images or to accompany the images. In any of these apparatuses, the app may also provide access to patient case management software including modeling and 3D images/rendering of the patient's teeth.
The app may also include instructions (e.g., a frequently asked questions portion, a tutorial, etc.).
In some variations, the physician may mark or otherwise signify that a particular patient or case be processed by a remote server, e.g., for clinical assessment, and/or to prepare a series of aligners. As mentioned above, the app may be used on any mobile device having or communicating with a camera, such as a smartphone, smartwatch, pad, etc.
Although the examples described herein are specifically described as being for use with a mobile telecommunication device (such as a smartphone, pad, etc.), in some variations these methods, and apparatuses implementing them, may be performed with devices that include a display and a processor and that are not limited to mobile telecommunications devices. For example, the methods and apparatuses may be configured for use with a virtual reality (VR)/augmented reality (AR) headset or any other imaging device that may transmit images (e.g., photos) directly to a computer via a direct connection (e.g., cable, dedicated wireless connection, etc.), and/or may save the images to removable or transferrable memory (e.g., an SD card); the image data could then be uploaded to a remote server via another device. Thus, any of the methods and apparatuses described herein which recite or refer to a mobile telecommunications device may be performed with an imaging device.
For example, a method for remotely pre-screening a patient for an orthodontic treatment may include: guiding a user, with an imaging device having a camera, to take a series of images of the patient's teeth in a plurality of predetermined views, transmitting the series of images from the imaging device to a remote location to determine if the patient is, or is not, a candidate for the orthodontic treatment based on the series of images; and displaying, on a screen, an indicator that the patient is, or is not, a candidate for the orthodontic treatment.
Similarly, a method for remotely pre-screening a patient for an orthodontic treatment may include: guiding a user, with an imaging device having a camera, to take a series of images of the patient's teeth from a plurality of predetermined views by sequentially displaying, on a screen of the imaging device, an overlay comprising an outline of teeth in each of the predetermined views; receiving, in the imaging device, an indication of the patient's chief dental concern; aggregating, in the imaging device, the series of images and the chief dental concern; transmitting the aggregated series of images and the chief dental concern to a remote location to determine if the patient is a candidate for the orthodontic treatment based on the series of images; and displaying, on the screen of the imaging device or a device in communication with the imaging device, an indicator that the patient is, or is not, a candidate for the orthodontic treatment.
As another example, a system for remotely pre-screening a patient for an orthodontic treatment, may include: a non-transitory, computer-readable storage medium storing a set of instructions capable of being executed by a processor of an imaging device having a camera, that, when executed by the processor, causes the processor to: guide a user to take a series of images of the patient's teeth in a plurality of predetermined views with the camera; transmit the series of images from the imaging device to a remote location to determine if the patient is a candidate for the orthodontic treatment based on the series of images; and display, on a screen of the imaging device or a device in communication with the imaging device, an indicator that the patient is, or is not, a candidate for the orthodontic treatment.
When a feature or element is herein referred to as being “on” another feature or element, it can be directly on the other feature or element or intervening features and/or elements may also be present. In contrast, when a feature or element is referred to as being “directly on” another feature or element, there are no intervening features or elements present. It will also be understood that, when a feature or element is referred to as being “connected”, “attached” or “coupled” to another feature or element, it can be directly connected, attached or coupled to the other feature or element or intervening features or elements may be present. In contrast, when a feature or element is referred to as being “directly connected”, “directly attached” or “directly coupled” to another feature or element, there are no intervening features or elements present. Although described or shown with respect to one embodiment, the features and elements so described or shown can apply to other embodiments. It will also be appreciated by those of skill in the art that references to a structure or feature that is disposed “adjacent” another feature may have portions that overlap or underlie the adjacent feature.
Terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. For example, as used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items and may be abbreviated as “/”.
Spatially relative terms, such as “under”, “below”, “lower”, “over”, “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is inverted, elements described as “under” or “beneath” other elements or features would then be oriented “over” the other elements or features. Thus, the exemplary term “under” can encompass both an orientation of over and under. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. Similarly, the terms “upwardly”, “downwardly”, “vertical”, “horizontal” and the like are used herein for the purpose of explanation only unless specifically indicated otherwise.
Although the terms “first” and “second” may be used herein to describe various features/elements (including steps), these features/elements should not be limited by these terms, unless the context indicates otherwise. These terms may be used to distinguish one feature/element from another feature/element. Thus, a first feature/element discussed below could be termed a second feature/element, and similarly, a second feature/element discussed below could be termed a first feature/element without departing from the teachings of the present invention.
As used herein in the specification and claims, including as used in the examples and unless otherwise expressly specified, all numbers may be read as if prefaced by the word “about” or “approximately,” even if the term does not expressly appear. The phrase “about” or “approximately” may be used when describing magnitude and/or position to indicate that the value and/or position described is within a reasonable expected range of values and/or positions. For example, a numeric value may have a value that is +/−0.1% of the stated value (or range of values), +/−1% of the stated value (or range of values), +/−2% of the stated value (or range of values), +/−5% of the stated value (or range of values), +/−10% of the stated value (or range of values), etc. Any numerical range recited herein is intended to include all sub-ranges subsumed therein.
Although various illustrative embodiments are described above, any of a number of changes may be made to various embodiments without departing from the scope of the invention as described by the claims. For example, the order in which various described method steps are performed may often be changed in alternative embodiments, and in other alternative embodiments one or more method steps may be skipped altogether. Optional features of various device and system embodiments may be included in some embodiments and not in others. Therefore, the foregoing description is provided primarily for exemplary purposes and should not be interpreted to limit the scope of the invention as it is set forth in the claims.
The examples and illustrations included herein show, by way of illustration and not of limitation, specific embodiments in which the subject matter may be practiced. As mentioned, other embodiments may be utilized and derived there from, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. Such embodiments of the inventive subject matter may be referred to herein individually or collectively by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept, if more than one is, in fact, disclosed. Thus, although specific embodiments have been illustrated and described herein, any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.
This patent application is a continuation of U.S. patent application Ser. No. 16/827,594, filed Mar. 23, 2020, titled “METHODS AND APPARATUSES FOR DENTAL IMAGES,” which is a continuation of U.S. patent application Ser. No. 15/803,718, filed on Nov. 3, 2017, titled “METHODS AND APPARATUSES FOR DENTAL IMAGES,” now U.S. Pat. No. 10,595,966, which claims priority to U.S. Provisional Patent Application No. 62/417,985, filed on Nov. 4, 2016 and titled “METHODS AND APPARATUSES FOR DENTAL IMAGES,” each of which is herein incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
2171695 | Harper | Sep 1939 | A |
2467432 | Kesling | Apr 1949 | A |
2531222 | Kesling | Nov 1950 | A |
3379193 | Monaghan | Apr 1968 | A |
3385291 | Martin | May 1968 | A |
3407500 | Kesling | Oct 1968 | A |
3478742 | Bohlmann | Nov 1969 | A |
3496936 | Gores | Feb 1970 | A |
3533163 | Kirschenbaum | Oct 1970 | A |
3556093 | Quick | Jan 1971 | A |
3600808 | Reeve | Aug 1971 | A |
3660900 | Andrews | May 1972 | A |
3683502 | Wallshein | Aug 1972 | A |
3738005 | Cohen et al. | Jun 1973 | A |
3860803 | Levine | Jan 1975 | A |
3885310 | Northcutt | May 1975 | A |
3916526 | Schudy | Nov 1975 | A |
3922786 | Lavin | Dec 1975 | A |
3950851 | Bergersen | Apr 1976 | A |
3983628 | Acevedo | Oct 1976 | A |
4014096 | Dellinger | Mar 1977 | A |
4195046 | Kesling | Mar 1980 | A |
4253828 | Coles et al. | Mar 1981 | A |
4255138 | Frohn | Mar 1981 | A |
4324546 | Heitlinger et al. | Apr 1982 | A |
4324547 | Arcan et al. | Apr 1982 | A |
4348178 | Kurz | Sep 1982 | A |
4419992 | Chorbajian | Dec 1983 | A |
4433956 | Witzig | Feb 1984 | A |
4478580 | Barrut | Oct 1984 | A |
4500294 | Lewis | Feb 1985 | A |
4505673 | Yoshii | Mar 1985 | A |
4519386 | Sullivan | May 1985 | A |
4526540 | Dellinger | Jul 1985 | A |
4575330 | Hull | Mar 1986 | A |
4575805 | Moermann et al. | Mar 1986 | A |
4591341 | Andrews | May 1986 | A |
4609349 | Cain | Sep 1986 | A |
4611288 | Duret et al. | Sep 1986 | A |
4656860 | Orthuber et al. | Apr 1987 | A |
4663720 | Duret et al. | May 1987 | A |
4664626 | Kesling | May 1987 | A |
4676747 | Kesling | Jun 1987 | A |
4755139 | Abbatte et al. | Jul 1988 | A |
4757824 | Chaumet | Jul 1988 | A |
4763791 | Halverson et al. | Aug 1988 | A |
4764111 | Knierim | Aug 1988 | A |
4793803 | Martz | Dec 1988 | A |
4798534 | Breads | Jan 1989 | A |
4836778 | Baumrind et al. | Jun 1989 | A |
4837732 | Brandestini et al. | Jun 1989 | A |
4850864 | Diamond | Jul 1989 | A |
4850865 | Napolitano | Jul 1989 | A |
4856991 | Breads et al. | Aug 1989 | A |
4877398 | Kesling | Oct 1989 | A |
4880380 | Martz | Nov 1989 | A |
4886451 | Cetlin | Dec 1989 | A |
4889238 | Batchelor | Dec 1989 | A |
4890608 | Steer | Jan 1990 | A |
4935635 | O'Harra | Jun 1990 | A |
4936862 | Walker et al. | Jun 1990 | A |
4937928 | van der Zel | Jul 1990 | A |
4941826 | Loran et al. | Jul 1990 | A |
4952928 | Carroll et al. | Aug 1990 | A |
4964770 | Steinbichler et al. | Oct 1990 | A |
4975052 | Spencer et al. | Dec 1990 | A |
4983334 | Adell | Jan 1991 | A |
4997369 | Shafir | Mar 1991 | A |
5002485 | Aagesen | Mar 1991 | A |
5011405 | Lemchen | Apr 1991 | A |
5017133 | Miura | May 1991 | A |
5027281 | Rekow et al. | Jun 1991 | A |
5035613 | Breads et al. | Jul 1991 | A |
5037295 | Bergersen | Aug 1991 | A |
5055039 | Abbatte et al. | Oct 1991 | A |
5100316 | Wildman | Mar 1992 | A |
5103838 | Yousif | Apr 1992 | A |
5121333 | Riley et al. | Jun 1992 | A |
5123425 | Shannon et al. | Jun 1992 | A |
5128879 | Erdman et al. | Jul 1992 | A |
5130064 | Smalley et al. | Jul 1992 | A |
5131843 | Hilgers et al. | Jul 1992 | A |
5131844 | Marinaccio et al. | Jul 1992 | A |
5139419 | Andreiko et al. | Aug 1992 | A |
5145364 | Martz et al. | Sep 1992 | A |
5176517 | Truax | Jan 1993 | A |
5204670 | Stinton | Apr 1993 | A |
5242304 | Truax et al. | Sep 1993 | A |
5245592 | Kuemmel et al. | Sep 1993 | A |
5273429 | Rekow et al. | Dec 1993 | A |
5278756 | Lemchen et al. | Jan 1994 | A |
5306144 | Hibst et al. | Apr 1994 | A |
5328362 | Watson et al. | Jul 1994 | A |
5335657 | Terry et al. | Aug 1994 | A |
5338198 | Wu et al. | Aug 1994 | A |
5340309 | Robertson | Aug 1994 | A |
5342202 | Deshayes | Aug 1994 | A |
5368478 | Andreiko et al. | Nov 1994 | A |
5372502 | Massen et al. | Dec 1994 | A |
D354355 | Hilgers | Jan 1995 | S |
5382164 | Stern | Jan 1995 | A |
5395238 | Andreiko et al. | Mar 1995 | A |
5431562 | Andreiko et al. | Jul 1995 | A |
5440326 | Quinn | Aug 1995 | A |
5440496 | Andersson et al. | Aug 1995 | A |
5447432 | Andreiko et al. | Sep 1995 | A |
5452219 | Dehoff et al. | Sep 1995 | A |
5454717 | Andreiko et al. | Oct 1995 | A |
5456600 | Andreiko et al. | Oct 1995 | A |
5474448 | Andreiko et al. | Dec 1995 | A |
RE35169 | Lemchen et al. | Mar 1996 | E |
5499633 | Fenton | Mar 1996 | A |
5528735 | Strasnick et al. | Jun 1996 | A |
5533895 | Andreiko et al. | Jul 1996 | A |
5540732 | Testerman | Jul 1996 | A |
5542842 | Andreiko et al. | Aug 1996 | A |
5543780 | McAuley et al. | Aug 1996 | A |
5549476 | Stern | Aug 1996 | A |
5562448 | Mushabac | Oct 1996 | A |
5570182 | Nathel et al. | Oct 1996 | A |
5587912 | Andersson et al. | Dec 1996 | A |
5605459 | Kuroda et al. | Feb 1997 | A |
5607305 | Andersson et al. | Mar 1997 | A |
5614075 | Andre | Mar 1997 | A |
5621648 | Crump | Apr 1997 | A |
5626537 | Danyo et al. | May 1997 | A |
5645420 | Bergersen | Jul 1997 | A |
5645421 | Slootsky | Jul 1997 | A |
5651671 | Seay et al. | Jul 1997 | A |
5655653 | Chester | Aug 1997 | A |
5659420 | Wakai et al. | Aug 1997 | A |
5683243 | Andreiko et al. | Nov 1997 | A |
5683244 | Truax | Nov 1997 | A |
5691539 | Pfeiffer | Nov 1997 | A |
5692894 | Schwartz et al. | Dec 1997 | A |
5725376 | Poirier | Mar 1998 | A |
5725378 | Wang | Mar 1998 | A |
5737084 | Ishihara | Apr 1998 | A |
5740267 | Echerer et al. | Apr 1998 | A |
5742700 | Yoon et al. | Apr 1998 | A |
5769631 | Williams | Jun 1998 | A |
5774425 | Ivanov et al. | Jun 1998 | A |
5790242 | Stern et al. | Aug 1998 | A |
5799100 | Clarke et al. | Aug 1998 | A |
5800174 | Andersson | Sep 1998 | A |
5816800 | Brehm et al. | Oct 1998 | A |
5818587 | Devaraj et al. | Oct 1998 | A |
5823778 | Schmitt et al. | Oct 1998 | A |
5848115 | Little et al. | Dec 1998 | A |
5857853 | van Nifterick et al. | Jan 1999 | A |
5866058 | Batchelder et al. | Feb 1999 | A |
5879158 | Doyle et al. | Mar 1999 | A |
5880961 | Crump | Mar 1999 | A |
5880962 | Andersson et al. | Mar 1999 | A |
5882192 | Bergersen | Mar 1999 | A |
5904479 | Staples | May 1999 | A |
5934288 | Avila et al. | Aug 1999 | A |
5957686 | Anthony | Sep 1999 | A |
5964587 | Sato | Oct 1999 | A |
5971754 | Sondhi et al. | Oct 1999 | A |
5975893 | Chishti et al. | Nov 1999 | A |
5980246 | Ramsay et al. | Nov 1999 | A |
5989023 | Summer et al. | Nov 1999 | A |
6044309 | Honda | Mar 2000 | A |
6049743 | Baba | Apr 2000 | A |
6053731 | Heckenberger | Apr 2000 | A |
6068482 | Snow | May 2000 | A |
6099303 | Gibbs et al. | Aug 2000 | A |
6099314 | Kopelman et al. | Aug 2000 | A |
6123544 | Cleary | Sep 2000 | A |
6152731 | Jordan et al. | Nov 2000 | A |
6154676 | Levine | Nov 2000 | A |
6183248 | Chishti et al. | Feb 2001 | B1 |
6186780 | Hibst et al. | Feb 2001 | B1 |
6190165 | Andreiko et al. | Feb 2001 | B1 |
6200133 | Kittelsen | Mar 2001 | B1 |
6201880 | Elbaum et al. | Mar 2001 | B1 |
6212435 | Lattner et al. | Apr 2001 | B1 |
6217334 | Hultgren | Apr 2001 | B1 |
6230142 | Benigno et al. | May 2001 | B1 |
6231338 | de Josselin de Jong et al. | May 2001 | B1 |
6239705 | Glen | May 2001 | B1 |
6243601 | Wist | Jun 2001 | B1 |
6263234 | Engelhardt et al. | Jul 2001 | B1 |
6299438 | Sahagian et al. | Oct 2001 | B1 |
6309215 | Phan et al. | Oct 2001 | B1 |
6315553 | Sachdeva et al. | Nov 2001 | B1 |
6328745 | Ascherman | Dec 2001 | B1 |
6334073 | Levine | Dec 2001 | B1 |
6350120 | Sachdeva et al. | Feb 2002 | B1 |
6364660 | Durbin et al. | Apr 2002 | B1 |
6382975 | Poirier | May 2002 | B1 |
6402510 | Williams | Jun 2002 | B1 |
6402707 | Ernst | Jun 2002 | B1 |
6405729 | Thornton | Jun 2002 | B1 |
6436058 | Krahner et al. | Aug 2002 | B1 |
6450167 | David et al. | Sep 2002 | B1 |
6450807 | Chishti et al. | Sep 2002 | B1 |
6482298 | Bhatnagar | Nov 2002 | B1 |
6499995 | Schwartz | Dec 2002 | B1 |
6515593 | Stark et al. | Feb 2003 | B1 |
6516805 | Thornton | Feb 2003 | B1 |
6520772 | Williams | Feb 2003 | B2 |
6524101 | Phan et al. | Feb 2003 | B1 |
6540707 | Stark et al. | Apr 2003 | B1 |
6572372 | Phan et al. | Jun 2003 | B1 |
6573998 | Cohen-Sabban | Jun 2003 | B2 |
6594539 | Geng | Jul 2003 | B1 |
6597934 | de Jong et al. | Jul 2003 | B1 |
6602070 | Miller et al. | Aug 2003 | B2 |
6611783 | Kelly et al. | Aug 2003 | B2 |
6613001 | Dworkin | Sep 2003 | B1 |
6616579 | Reinbold et al. | Sep 2003 | B1 |
6623698 | Kuo | Sep 2003 | B2 |
6624752 | Klitsgaard et al. | Sep 2003 | B2 |
6626180 | Kittelsen et al. | Sep 2003 | B1 |
6640128 | Vilsmeier et al. | Oct 2003 | B2 |
6697164 | Babayoff et al. | Feb 2004 | B1 |
6702765 | Robbins et al. | Mar 2004 | B2 |
6702804 | Ritter et al. | Mar 2004 | B1 |
6705863 | Phan et al. | Mar 2004 | B2 |
6729876 | Chishti et al. | May 2004 | B2 |
6749414 | Hanson et al. | Jun 2004 | B1 |
6830450 | Knopp et al. | Dec 2004 | B2 |
6885464 | Pfeiffer et al. | Apr 2005 | B1 |
6890285 | Rahman et al. | May 2005 | B2 |
7036514 | Heck | May 2006 | B2 |
7106233 | Schroeder et al. | Sep 2006 | B2 |
7112065 | Kopelman et al. | Sep 2006 | B2 |
7121825 | Chishti et al. | Oct 2006 | B2 |
7138640 | Delgado et al. | Nov 2006 | B1 |
7142312 | Quadling et al. | Nov 2006 | B2 |
7166063 | Rahman et al. | Jan 2007 | B2 |
7184150 | Quadling et al. | Feb 2007 | B2 |
7192273 | McSurdy | Mar 2007 | B2 |
7220124 | Taub et al. | May 2007 | B2 |
7286954 | Kopelman et al. | Oct 2007 | B2 |
7292759 | Boutoussov et al. | Nov 2007 | B2 |
7302842 | Biester et al. | Dec 2007 | B2 |
7338327 | Sticker et al. | Mar 2008 | B2 |
D565509 | Fechner et al. | Apr 2008 | S |
7351116 | Dold | Apr 2008 | B2 |
7357637 | Liechtung | Apr 2008 | B2 |
7450231 | Johs et al. | Nov 2008 | B2 |
7458810 | Bergersen | Dec 2008 | B2 |
7460230 | Johs et al. | Dec 2008 | B2 |
7462076 | Walter et al. | Dec 2008 | B2 |
7463929 | Simmons | Dec 2008 | B2 |
7500851 | Williams | Mar 2009 | B2 |
D594413 | Palka et al. | Jun 2009 | S |
7544103 | Walter et al. | Jun 2009 | B2 |
7553157 | Abolfathi et al. | Jun 2009 | B2 |
7561273 | Stautmeister et al. | Jul 2009 | B2 |
7577284 | Wong et al. | Aug 2009 | B2 |
7596253 | Wong et al. | Sep 2009 | B2 |
7597594 | Stadler et al. | Oct 2009 | B2 |
7609875 | Liu et al. | Oct 2009 | B2 |
D603796 | Sticker et al. | Nov 2009 | S |
7616319 | Woollam et al. | Nov 2009 | B1 |
7626705 | Altendorf | Dec 2009 | B2 |
7632216 | Rahman et al. | Dec 2009 | B2 |
7633625 | Woollam et al. | Dec 2009 | B1 |
7637262 | Bailey | Dec 2009 | B2 |
7668355 | Wong et al. | Feb 2010 | B2 |
7670179 | Müller | Mar 2010 | B2 |
7695327 | Bäuerle et al. | Apr 2010 | B2 |
7698068 | Babayoff | Apr 2010 | B2 |
7724378 | Babayoff | May 2010 | B2 |
D618619 | Walter | Jun 2010 | S |
7731508 | Borst | Jun 2010 | B2 |
7735217 | Borst | Jun 2010 | B2 |
7780460 | Walter | Aug 2010 | B2 |
7787132 | Körner et al. | Aug 2010 | B2 |
7791810 | Powell | Sep 2010 | B2 |
7796243 | Choo-Smith et al. | Sep 2010 | B2 |
7806727 | Dold et al. | Oct 2010 | B2 |
7813787 | de Josselin de Jong et al. | Oct 2010 | B2 |
7824180 | Abolfathi et al. | Nov 2010 | B2 |
7828601 | Pyczak | Nov 2010 | B2 |
7845969 | Stadler et al. | Dec 2010 | B2 |
7854609 | Chen et al. | Dec 2010 | B2 |
7862336 | Kopelman et al. | Jan 2011 | B2 |
7872760 | Ertl | Jan 2011 | B2 |
7874836 | McSurdy | Jan 2011 | B2 |
7874849 | Sticker et al. | Jan 2011 | B2 |
7878801 | Abolfathi et al. | Feb 2011 | B2 |
7892474 | Shkolnik et al. | Feb 2011 | B2 |
7907280 | Johs et al. | Mar 2011 | B2 |
7929151 | Liang et al. | Apr 2011 | B2 |
7947508 | Tricca et al. | May 2011 | B2 |
7959308 | Freeman et al. | Jun 2011 | B2 |
7963766 | Cronauer | Jun 2011 | B2 |
7986415 | Thiel et al. | Jul 2011 | B2 |
8017891 | Nevin | Sep 2011 | B2 |
8026916 | Wen | Sep 2011 | B2 |
8027709 | Arnone et al. | Sep 2011 | B2 |
8038444 | Kitching et al. | Oct 2011 | B2 |
8054556 | Chen et al. | Nov 2011 | B2 |
8077949 | Liang et al. | Dec 2011 | B2 |
8083556 | Stadler et al. | Dec 2011 | B2 |
D652799 | Mueller | Jan 2012 | S |
8108189 | Chelnokov et al. | Jan 2012 | B2 |
8118592 | Tortorici | Feb 2012 | B2 |
8126025 | Takeda | Feb 2012 | B2 |
8144954 | Quadling et al. | Mar 2012 | B2 |
8160334 | Thiel et al. | Apr 2012 | B2 |
8201560 | Dembro | Jun 2012 | B2 |
8215312 | Garabadian et al. | Jul 2012 | B2 |
8240018 | Walter et al. | Aug 2012 | B2 |
8279450 | Oota et al. | Oct 2012 | B2 |
8292617 | Brandt et al. | Oct 2012 | B2 |
8294657 | Kim et al. | Oct 2012 | B2 |
8296952 | Greenberg | Oct 2012 | B2 |
8297286 | Smernoff | Oct 2012 | B2 |
8306608 | Mandelis et al. | Nov 2012 | B2 |
8314764 | Kim et al. | Nov 2012 | B2 |
8332015 | Ertl | Dec 2012 | B2 |
8354588 | Sticker et al. | Jan 2013 | B2 |
8366479 | Borst et al. | Feb 2013 | B2 |
8465280 | Sachdeva et al. | Jun 2013 | B2 |
8477320 | Stock et al. | Jul 2013 | B2 |
8488113 | Thiel et al. | Jul 2013 | B2 |
8520922 | Wang et al. | Aug 2013 | B2 |
8520925 | Duret et al. | Aug 2013 | B2 |
8556625 | Lovely | Oct 2013 | B2 |
8570530 | Liang | Oct 2013 | B2 |
8573224 | Thornton | Nov 2013 | B2 |
8577212 | Thiel | Nov 2013 | B2 |
8650586 | Lee et al. | Feb 2014 | B2 |
8675706 | Seurin et al. | Mar 2014 | B2 |
8723029 | Pyczak et al. | May 2014 | B2 |
8743923 | Geske et al. | Jun 2014 | B2 |
8767270 | Curry et al. | Jul 2014 | B2 |
8768016 | Pan et al. | Jul 2014 | B2 |
8771149 | Rahman et al. | Jul 2014 | B2 |
8839476 | Adachi | Sep 2014 | B2 |
8870566 | Bergersen | Oct 2014 | B2 |
8878905 | Fisker et al. | Nov 2014 | B2 |
8896592 | Boltunov et al. | Nov 2014 | B2 |
8899976 | Chen et al. | Dec 2014 | B2 |
8936463 | Mason et al. | Jan 2015 | B2 |
8948482 | Levin | Feb 2015 | B2 |
8956058 | Rösch | Feb 2015 | B2 |
8992216 | Karazivan | Mar 2015 | B2 |
9022792 | Sticker et al. | May 2015 | B2 |
9039418 | Rubbert | May 2015 | B1 |
9084535 | Girkin et al. | Jul 2015 | B2 |
9108338 | Sirovskiy et al. | Aug 2015 | B2 |
9144512 | Wagner | Sep 2015 | B2 |
9192305 | Levin | Nov 2015 | B2 |
9204952 | Lampalzer | Dec 2015 | B2 |
9242118 | Brawn | Jan 2016 | B2 |
9261358 | Atiya et al. | Feb 2016 | B2 |
9336336 | Deichmann et al. | May 2016 | B2 |
9351810 | Moon | May 2016 | B2 |
9375300 | Matov et al. | Jun 2016 | B2 |
9408743 | Wagner | Aug 2016 | B1 |
9433476 | Khardekar et al. | Sep 2016 | B2 |
9439568 | Atiya et al. | Sep 2016 | B2 |
9444981 | Bellis et al. | Sep 2016 | B2 |
9500635 | Islam | Nov 2016 | B2 |
9506808 | Jeon et al. | Nov 2016 | B2 |
9545331 | Ingemarsson-Matzen | Jan 2017 | B2 |
9584771 | Mandelis et al. | Feb 2017 | B2 |
9610141 | Kopelman et al. | Apr 2017 | B2 |
9675430 | Verker et al. | Jun 2017 | B2 |
9693839 | Atiya et al. | Jul 2017 | B2 |
9744006 | Ross | Aug 2017 | B2 |
9795461 | Kopelman et al. | Oct 2017 | B2 |
9861451 | Davis | Jan 2018 | B1 |
9936186 | Jesenko et al. | Apr 2018 | B2 |
10123706 | Elbaz et al. | Nov 2018 | B2 |
10130445 | Kopelman et al. | Nov 2018 | B2 |
10159541 | Bindayel | Dec 2018 | B2 |
10248883 | Borovinskih et al. | Apr 2019 | B2 |
10449016 | Kimura et al. | Oct 2019 | B2 |
10470847 | Shanjani et al. | Nov 2019 | B2 |
10504386 | Levin et al. | Dec 2019 | B2 |
10517482 | Sato et al. | Dec 2019 | B2 |
10528636 | Elbaz et al. | Jan 2020 | B2 |
10585958 | Elbaz et al. | Mar 2020 | B2 |
10595966 | Carrier, et al. | Mar 2020 | B2 |
10606911 | Elbaz et al. | Mar 2020 | B2 |
10613515 | Cramer et al. | Apr 2020 | B2 |
10639134 | Shanjani et al. | May 2020 | B2 |
10813720 | Grove et al. | Oct 2020 | B2 |
10885521 | Miller et al. | Jan 2021 | B2 |
20010038705 | Rubbert et al. | Nov 2001 | A1 |
20020010568 | Rubbert et al. | Jan 2002 | A1 |
20020015934 | Rubbert et al. | Feb 2002 | A1 |
20030009252 | Pavlovskaia et al. | Jan 2003 | A1 |
20030139834 | Nikolskiy et al. | Jul 2003 | A1 |
20030190575 | Hilliard | Oct 2003 | A1 |
20030207224 | Lotte | Nov 2003 | A1 |
20030224311 | Cronauer | Dec 2003 | A1 |
20040009449 | Mah et al. | Jan 2004 | A1 |
20040019262 | Perelgut | Jan 2004 | A1 |
20040058295 | Bergersen | Mar 2004 | A1 |
20040094165 | Cook | May 2004 | A1 |
20040167646 | Jelonek et al. | Aug 2004 | A1 |
20050023356 | Wiklof et al. | Feb 2005 | A1 |
20050031196 | Moghaddam et al. | Feb 2005 | A1 |
20050037312 | Uchida | Feb 2005 | A1 |
20050048433 | Hilliard | Mar 2005 | A1 |
20050100333 | Kerschbaumer et al. | May 2005 | A1 |
20050181333 | Karazivan et al. | Aug 2005 | A1 |
20050186524 | Abolfathi et al. | Aug 2005 | A1 |
20050244781 | Abels et al. | Nov 2005 | A1 |
20060084024 | Farrell | Apr 2006 | A1 |
20060099546 | Bergersen | May 2006 | A1 |
20060154198 | Durbin et al. | Jul 2006 | A1 |
20060223032 | Fried et al. | Oct 2006 | A1 |
20060223342 | Borst et al. | Oct 2006 | A1 |
20060234179 | Wen et al. | Oct 2006 | A1 |
20070046865 | Umeda et al. | Mar 2007 | A1 |
20070053048 | Kumar et al. | Mar 2007 | A1 |
20070087300 | Willison et al. | Apr 2007 | A1 |
20070184402 | Boutoussov et al. | Aug 2007 | A1 |
20070231765 | Phan et al. | Oct 2007 | A1 |
20070238065 | Sherwood et al. | Oct 2007 | A1 |
20080045053 | Stadler et al. | Feb 2008 | A1 |
20080090208 | Rubbert | Apr 2008 | A1 |
20080115791 | Heine | May 2008 | A1 |
20080176448 | Muller et al. | Jul 2008 | A1 |
20080242144 | Dietz | Oct 2008 | A1 |
20090030347 | Cao | Jan 2009 | A1 |
20090040740 | Muller et al. | Feb 2009 | A1 |
20090061379 | Yamamoto et al. | Mar 2009 | A1 |
20090061381 | Durbin et al. | Mar 2009 | A1 |
20090075228 | Kumada et al. | Mar 2009 | A1 |
20090142724 | Rosenblood | Jun 2009 | A1 |
20090148805 | Kois | Jun 2009 | A1 |
20090210032 | Beiski et al. | Aug 2009 | A1 |
20090218514 | Klunder et al. | Sep 2009 | A1 |
20090298017 | Boerjes et al. | Dec 2009 | A1 |
20090305540 | Stadler et al. | Dec 2009 | A1 |
20100045902 | Ikeda et al. | Feb 2010 | A1 |
20100145898 | Malfliet et al. | Jun 2010 | A1 |
20100152599 | DuHamel et al. | Jun 2010 | A1 |
20100165275 | Tsukamoto et al. | Jul 2010 | A1 |
20100167225 | Kuo | Jul 2010 | A1 |
20100179789 | Sachdeva | Jul 2010 | A1 |
20100231577 | Kim et al. | Sep 2010 | A1 |
20100312484 | DuHamel et al. | Dec 2010 | A1 |
20110081625 | Fuh | Apr 2011 | A1 |
20110102549 | Takahashi | May 2011 | A1 |
20110102566 | Zakian et al. | May 2011 | A1 |
20110143673 | Landesman et al. | Jun 2011 | A1 |
20110235045 | Koerner et al. | Sep 2011 | A1 |
20110269092 | Kuo et al. | Nov 2011 | A1 |
20110316994 | Lemchen | Dec 2011 | A1 |
20120081786 | Mizuyama et al. | Apr 2012 | A1 |
20120086681 | Kim et al. | Apr 2012 | A1 |
20120129117 | McCance | May 2012 | A1 |
20120147912 | Moench et al. | Jun 2012 | A1 |
20120172678 | Logan et al. | Jul 2012 | A1 |
20120281293 | Gronenborn et al. | Nov 2012 | A1 |
20120295216 | Dykes et al. | Nov 2012 | A1 |
20120322025 | Ozawa et al. | Dec 2012 | A1 |
20130089828 | Borovinskih et al. | Apr 2013 | A1 |
20130095446 | Andreiko et al. | Apr 2013 | A1 |
20130103176 | Kopelman et al. | Apr 2013 | A1 |
20130110469 | Kopelman | May 2013 | A1 |
20130163627 | Seurin et al. | Jun 2013 | A1 |
20130201488 | Ishihara | Aug 2013 | A1 |
20130235165 | Gharib et al. | Sep 2013 | A1 |
20130252195 | Popat | Sep 2013 | A1 |
20130266326 | Joseph et al. | Oct 2013 | A1 |
20130280671 | Brawn et al. | Oct 2013 | A1 |
20130286174 | Urakabe | Oct 2013 | A1 |
20130293824 | Yoneyama et al. | Nov 2013 | A1 |
20130323664 | Parker | Dec 2013 | A1 |
20130323671 | Dillon et al. | Dec 2013 | A1 |
20130323674 | Hakomori et al. | Dec 2013 | A1 |
20140061974 | Tyler | Mar 2014 | A1 |
20140081091 | Abolfathi et al. | Mar 2014 | A1 |
20140122027 | Andreiko et al. | May 2014 | A1 |
20140265034 | Dudley | Sep 2014 | A1 |
20140272774 | Dillon et al. | Sep 2014 | A1 |
20140294273 | Jaisson | Oct 2014 | A1 |
20140313299 | Gebhardt et al. | Oct 2014 | A1 |
20140329194 | Sachdeva et al. | Nov 2014 | A1 |
20140363778 | Parker | Dec 2014 | A1 |
20150002649 | Nowak et al. | Jan 2015 | A1 |
20150079531 | Heine | Mar 2015 | A1 |
20150097315 | DeSimone et al. | Apr 2015 | A1 |
20150097316 | DeSimone et al. | Apr 2015 | A1 |
20150102532 | DeSimone et al. | Apr 2015 | A1 |
20150140502 | Brawn et al. | May 2015 | A1 |
20150164335 | Van Der Poel et al. | Jun 2015 | A1 |
20150173856 | Lowe et al. | Jun 2015 | A1 |
20150230885 | Wucher | Aug 2015 | A1 |
20150238280 | Wu et al. | Aug 2015 | A1 |
20150238283 | Tanugula et al. | Aug 2015 | A1 |
20150306486 | Logan et al. | Oct 2015 | A1 |
20150320320 | Kopelman et al. | Nov 2015 | A1 |
20150325044 | Lebovitz | Nov 2015 | A1 |
20150338209 | Knüttel | Nov 2015 | A1 |
20160000332 | Atiya et al. | Jan 2016 | A1 |
20160003610 | Lampert et al. | Jan 2016 | A1 |
20160051345 | Levin | Feb 2016 | A1 |
20160064898 | Atiya et al. | Mar 2016 | A1 |
20160067013 | Morton et al. | Mar 2016 | A1 |
20160128624 | Matt | May 2016 | A1 |
20160135924 | Choi et al. | May 2016 | A1 |
20160135925 | Mason et al. | May 2016 | A1 |
20160163115 | Furst | Jun 2016 | A1 |
20160228213 | Tod et al. | Aug 2016 | A1 |
20160242871 | Morton et al. | Aug 2016 | A1 |
20160246936 | Kahn | Aug 2016 | A1 |
20160296303 | Parker | Oct 2016 | A1 |
20160328843 | Graham et al. | Nov 2016 | A1 |
20170007365 | Kopelman et al. | Jan 2017 | A1 |
20170007366 | Kopelman et al. | Jan 2017 | A1 |
20170007367 | Li et al. | Jan 2017 | A1 |
20170049326 | Alfano et al. | Feb 2017 | A1 |
20170056131 | Alauddin et al. | Mar 2017 | A1 |
20170265970 | Verker | Sep 2017 | A1 |
20170325690 | Salah et al. | Nov 2017 | A1 |
20180000565 | Shanjani et al. | Jan 2018 | A1 |
20180028064 | Elbaz et al. | Feb 2018 | A1 |
20180153648 | Shanjani et al. | Jun 2018 | A1 |
20180153649 | Wu et al. | Jun 2018 | A1 |
20180153733 | Kuo | Jun 2018 | A1 |
20180192877 | Atiya et al. | Jul 2018 | A1 |
20180280118 | Cramer | Oct 2018 | A1 |
20180353264 | Riley et al. | Dec 2018 | A1 |
20180360567 | Xue et al. | Dec 2018 | A1 |
20190021817 | Sato et al. | Jan 2019 | A1 |
20190029784 | Moalem et al. | Jan 2019 | A1 |
20190076214 | Nyukhtikov et al. | Mar 2019 | A1 |
20190099129 | Kopelman et al. | Apr 2019 | A1 |
20190175303 | Akopov et al. | Jun 2019 | A1 |
20190175304 | Morton et al. | Jun 2019 | A1 |
20200229901 | Carrier et al. | Jul 2020 | A1 |
Number | Date | Country |
---|---|---|
517102 | Nov 1977 | AU |
3031677 | Nov 1977 | AU |
1121955 | Apr 1982 | CA |
102726051 | Oct 2012 | CN |
105962966 | Sep 2016 | CN |
2749802 | May 1978 | DE |
69327661 | Jul 2000 | DE |
102005043627 | Mar 2007 | DE |
202010017014 | Mar 2011 | DE |
102011051443 | Jan 2013 | DE |
102014225457 | Jun 2016 | DE |
0428152 | May 1991 | EP |
490848 | Jun 1992 | EP |
541500 | May 1993 | EP |
714632 | May 1997 | EP |
774933 | Dec 2000 | EP |
731673 | May 2001 | EP |
1941843 | Jul 2008 | EP |
2437027 | Apr 2012 | EP |
2447754 | May 2012 | EP |
1989764 | Jul 2012 | EP |
2332221 | Nov 2012 | EP |
2596553 | Dec 2013 | EP |
2612300 | Feb 2015 | EP |
2848229 | Mar 2015 | EP |
463897 | Jan 1980 | ES |
2455066 | Apr 2014 | ES |
2369828 | Jun 1978 | FR |
2930334 | Oct 2009 | FR |
1550777 | Aug 1979 | GB |
53-058191 | May 1978 | JP |
04-028359 | Jan 1992 | JP |
08-508174 | Sep 1996 | JP |
2007260158 | Oct 2007 | JP |
2008523370 | Jul 2008 | JP |
04184427 | Nov 2008 | JP |
2009000412 | Jan 2009 | JP |
2009018173 | Jan 2009 | JP |
2009205330 | Sep 2009 | JP |
2011087733 | May 2011 | JP |
2013007645 | Jan 2013 | JP |
201735173 | Feb 2017 | JP |
10-1266966 | May 2013 | KR |
10-2016-041632 | Apr 2016 | KR |
10-2016-0071127 | Jun 2016 | KR |
WO91004713 | Apr 1991 | WO |
WO94010935 | May 1994 | WO |
WO98032394 | Jul 1998 | WO |
WO98044865 | Oct 1998 | WO |
WO02017776 | Mar 2002 | WO |
WO02062252 | Aug 2002 | WO |
WO02095475 | Nov 2002 | WO |
WO03003932 | Jan 2003 | WO |
WO2006096558 | Sep 2006 | WO |
WO2006133548 | Dec 2006 | WO |
WO2009085752 | Jul 2009 | WO |
WO2009089129 | Jul 2009 | WO |
WO2009146788 | Dec 2009 | WO |
WO2009146789 | Dec 2009 | WO |
WO2010123892 | Oct 2010 | WO |
WO2012007003 | Jan 2012 | WO |
WO2012064684 | May 2012 | WO |
WO2012074304 | Jun 2012 | WO |
WO2013058879 | Apr 2013 | WO |
WO2014068107 | May 2014 | WO |
WO2014091865 | Jun 2014 | WO |
WO2015015289 | Feb 2015 | WO |
WO2015063032 | May 2015 | WO |
WO2015176004 | Nov 2015 | WO |
WO2016004415 | Jan 2016 | WO |
WO2016042393 | Mar 2016 | WO |
WO2016061279 | Apr 2016 | WO |
WO2016084066 | Jun 2016 | WO |
WO2016099471 | Jun 2016 | WO |
WO2016113745 | Jul 2016 | WO |
WO2016116874 | Jul 2016 | WO |
WO2017006176 | Jan 2017 | WO |
Entry |
---|
US 8,553,966 B1, 10/2013, Alpern et al. (withdrawn) |
AADR. American Association for Dental Research; Summary of Activities; Los Angeles, CA; p. 195; Mar. 20-23, (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1980. |
Alcaniz et al.; An Advanced System for the Simulation and Planning of Orthodontic Treatments; Karl Heinz Hohne and Ron Kikinis (eds.); Visualization in Biomedical Computing, 4th Intl. Conf, VBC '96, Hamburg, Germany; Springer-Verlag; pp. 511-520; Sep. 22-25, 1996. |
Alexander et al.; The DigiGraph Work Station Part 2 Clinical Management; J. Clin. Orthod.; pp. 402-407; (Author Manuscript); Jul. 1990. |
Align Technology; Align technology announces new teen solution with introduction of Invisalign teen with mandibular advancement; 2 pages; retrieved from the internet (http://investor.aligntech.com/static-files/eb4fa6bb-3e62-404f-b74d-82059366a01b); Mar. 6, 2017. |
Allesee Orthodontic Appliance: Important Tip About Wearing the Red White & Blue Active Clear Retainer System; Allesee Orthodontic Appliances-Pro Lab; 1 page; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date); 1998. |
Allesee Orthodontic Appliances: DuraClearTM; Product information; 1 page; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1997. |
Allesee Orthodontic Appliances; The Choice Is Clear: Red, White & Blue . . . The Simple, Affordable, No-Braces Treatment; ( product information for doctors); retrieved from the internet (http://ormco.com/aoa/appliancesservices/RWB/doctorhtml); 5 pages on May 19, 2003. |
Allesee Orthodontic Appliances; The Choice Is Clear: Red, White & Blue . . . The Simple, Affordable, No-Braces Treatment; (product information), 6 pages; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 2003. |
Allesee Orthodontic Appliances; The Choice is Clear: Red, White & Blue . . . The Simple, Affordable, No-Braces Treatment;(Patient information); retrieved from the internet (http://ormco.com/aoa/appliancesservices/RWB/patients.html); 2 pages on May 19, 2003. |
Allesee Orthodontic Appliances; The Red, White & Blue Way to Improve Your Smile; (information for patients), 2 pages; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1992. |
Allesee Orthodontic Appliances; You may be a candidate for this invisible no-braces treatment; product information for patients; 2 pages; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 2002. |
Altschuler et al.; Analysis of 3-D Data for Comparative 3-D Serial Growth Pattern Studies of Oral-Facial Structures; AADR Abstracts, Program and Abstracts of Papers, 57th General Session, IADR Annual Session, Mar. 29, 1979-Apr. 1, 1979, New Orleans Marriot; Journal of Dental Research; vol. 58. Special Issue A, p. 221; Jan. 1979. |
Altschuler et al.; Laser Electro-Optic System for Rapid Three-Dimensional (3D) Topographic Mapping of Surfaces; Optical Engineering; 20(6); pp. 953-961; Dec. 1981. |
Altschuler et al.; Measuring Surfaces Space-Coded by a Laser-Projected Dot Matrix; SPIE Imaging Applications for Automated Industrial Inspection and Assembly; vol. 182; pp. 187-191; Oct. 10, 1979. |
Altschuler; 3D Mapping of Maxillo-Facial Prosthesis; AADR Abstract #607; 2 pages total; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1980. |
Alves et al.; New trends in food allergens detection: toward biosensing strategies; Critical Reviews in Food Science and Nutrition; 56(14); pp. 2304-2319; doi: 10.1080/10408398.2013.831026; Oct. 2016. |
Andersson et al.; Clinical Results with Titanium Crowns Fabricated with Machine Duplication and Spark Erosion; Acta Odontologica Scandinavica; 47(5); pp. 279-286; Oct. 1989. |
Andrews, The Six Keys to Optimal Occlusion Straight Wire, Chapter 3, L.A. Wells; pp. 13-24; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1989. |
Bandodkar et al.; All-printed magnetically self-healing electrochemical devices; Science Advances; 2(11); 11 pages; e1601465; Nov. 2016. |
Bandodkar et al.; Self-healing inks for autonomous repair of printable electrochemical devices; Advanced Electronic Materials; 1(12); 5 pages; 1500289; Dec. 2015. |
Bandodkar et al.; Wearable biofuel cells: a review; Electroanalysis; 28 (6); pp. 1188-1200; Jun. 2016. |
Bandodkar et al.; Wearable chemical sensors: present challenges and future prospects; Acs Sensors; 1(5); pp. 464-482; May 11, 2016. |
Barone et al.; Creation of 3D multi-body orthodontic models by using independent imaging sensors; Sensors; 13(2); pp. 2033-2050; Feb. 5, 2013. |
Bartels et al.; An Introduction to Splines for Use in Computer Graphics and Geometric Modeling; Morgan Kaufmann Publishers; pp. 422-425; Jan. 1, 1987. |
Baumrind et al, “Mapping the Skull in 3-D,” reprinted from J. Calif. Dent. Assoc, 48(2), 11 pages; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) Fall Issue 1972. |
Baumrind et al.; A Stereophotogrammetric System for the Detection of Prosthesis Loosening in Total Hip Arthroplasty; NATO Symposium on Applications of Human Biostereometrics; SPIE; vol. 166: pp. 112-123; Jul. 9-13, 1978. |
Baumrind; A System for Cranio facial Mapping Through the Integration of Data from Stereo X-Ray Films and Stereo Photographs; an invited paper submitted to the 1975 American Society of Photogram Symposium on Close-Range Photogram Systems; University of Illinois; pp. 142-166; Aug. 26-30, 1975. |
Baumrind; Integrated Three-Dimensional Craniofacial Mapping: Background, Principles, and Perspectives; Seminars in Orthodontics; 7(4); pp. 223-232; Dec. 2001. |
Begole et al.; A Computer System for the Analysis of Dental Casts; The Angle Orthodontist; 51(3); pp. 252-258; Jul. 1981. |
Bernard et al.; Computerized Diagnosis in Orthodontics for Epidemiological Studies: A Progress Report; (Abstract Only), J Dental Res. Special Issue, vol. 67, p. 169, paper presented at International Association for Dental Research 66th General Session, Montreal, Canada; Mar. 9-13, 1988. |
Bhatia et al.; A Computer-Aided Design for Orthognathic Surgery; British Journal of Oral and Maxillofacial Surgery: 22(4); pp. 237-253; Aug. 1, 1984. |
Biggerstaff et al.; Computerized Analysis of Occlusion in the Postcanine Dentition; American Journal of Orthodontics; 61(3); pp. 245-254; Mar. 1972. |
Biggerstaff; Computerized Diagnostic Setups and Simulations; Angle Orthodontist; 40(I); pp. 28-36; Jan. 1970. |
Biostar Operation & Training Manual. Great Lakes Orthodontics, Ltd. 199 Fire Tower Drive, Tonawanda, New York 14150-5890, 20 pages; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1990. |
Blu et al.; Linear interpolation revitalized; IEEE Transactions on Image Processing; 13(5); pp. 710-719; May 2004. |
Bourke; Coordinate System Transformation; 1 page; retrieved from the internet (http://astronomy.swin.edu.au/~pbourke/projection/coords) on Nov. 5, 2004; Jun. 1996. |
Boyd et al.; Three Dimensional Diagnosis and Orthodontic Treatment of Complex Malocclusions With the Invisalign Appliance; Seminars in Orthodontics; 7(4); pp. 274-293; Dec. 2001. |
Brandestini et al.; Computer Machined Ceramic Inlays: In Vitro Marginal Adaptation; J. Dent. Res. Special Issue; (Abstract 305); vol. 64; p. 208; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1985. |
Brook et al.; An Image Analysis System for the Determination of Tooth Dimensions from Study Casts: Comparison with Manual Measurements of Mesiodistal Diameter; Journal of Dental Research; 65(3); pp. 428-431; Mar. 1986. |
Burstone et al.; Precision Adjustment of the Transpalatal Lingual Arch: Computer Arch Form Predetermination; American Journal of Orthodontics; 79(2); pp. 115-133; Feb. 1981. |
Burstone; Dr. Charles J. Burstone on The Uses of the Computer in Orthodontic Practice (Part 1); Journal of Clinical Orthodontics; 13(7); pp. 442-453; (interview); Jul. 1979. |
Burstone; Dr. Charles J. Burstone on The Uses of the Computer in Orthodontic Practice (Part 2); Journal of Clinical Orthodontics; 13(8); pp. 539-551 (interview); Aug. 1979. |
Cardinal Industrial Finishes; Powder Coatings; 6 pages; retrieved from the internet (http://www.cardinalpaint.com) on Aug. 25, 2000. |
Carnaghan, An Alternative to Holograms for the Portrayal of Human Teeth; 4th Int'l. Conf. on Holographic Systems, Components and Applications; pp. 228-231; Sep. 15, 1993. |
Chaconas et al.; The DigiGraph Work Station, Part 1, Basic Concepts; Journal of Clinical Orthodontics; 24(6); pp. 360-367; (Author Manuscript); Jun. 1990. |
Chafetz et al.; Subsidence of the Femoral Prosthesis, A Stereophotogrammetric Evaluation; Clinical Orthopaedics and Related Research; No. 201; pp. 60-67; Dec. 1985. |
Chiappone; Constructing the Gnathologic Setup and Positioner; Journal of Clinical Orthodontics; 14(2); pp. 121-133; Feb. 1980. |
Chishti et al.; U.S. Appl. No. 60/050,342 entitled "Procedure for moving teeth using a series of retainers," filed Jun. 20, 1997. |
Cottingham; Gnathologic Clear Plastic Positioner; American Journal of Orthodontics; 55(1); pp. 23-31; Jan. 1969. |
Crawford; CAD/CAM in the Dental Office: Does It Work?; Canadian Dental Journal; 57(2); pp. 121-123; Feb. 1991. |
Crawford; Computers in Dentistry: Part 1: CAD/CAM: The Computer Moves Chairside, Part 2: F. Duret - A Man With A Vision, Part 3: The Computer Gives New Vision—Literally, Part 4: Bytes 'N Bites The Computer Moves From The Front Desk To The Operatory; Canadian Dental Journal; 54(9); pp. 661-666; Sep. 1988. |
Crooks; CAD/CAM Comes to USC; USC Dentistry; pp. 14-17; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) Spring 1990. |
CSI Computerized Scanning and Imaging Facility; What is a maximum/minimum intensity projection (MIP/MinIP); 1 page; retrieved from the internet (http://csi.whoi.edu/context/what-maximumminimum-intensity-projection-mipminip); Jan. 4, 2010. |
Cureton; Correcting Malaligned Mandibular Incisors with Removable Retainers; Journal of Clinical Orthodontics; 30(7); pp. 390-395; Jul. 1996. |
Curry et al.; Integrated Three-Dimensional Craniofacial Mapping at the Craniofacial Research Instrumentation Laboratory/University of the Pacific; Seminars in Orthodontics; 7(4); pp. 258-265; Dec. 2001. |
Cutting et al.; Three-Dimensional Computer-Assisted Design of Craniofacial Surgical Procedures: Optimization and Interaction with Cephalometric and CT-Based Models; Plastic and Reconstructive Surgery; 77(6); pp. 877-885; Jun. 1986. |
DCS Dental AG; The CAD/CAM 'DCS Titan System' for Production of Crowns/Bridges; DCS Production; pp. 1-7; Jan. 1992. |
Defranco et al.; Three-Dimensional Large Displacement Analysis of Orthodontic Appliances; Journal of Biomechanics; 9(12); pp. 793-801; Jan. 1976. |
Dental Institute University of Zurich Switzerland; Program for International Symposium on Computer Restorations: State of the Art of the CEREC-Method; 2 pages; May 1991. |
Dentrac Corporation; Dentrac document; pp. 4-13; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1992. |
Dent-X; Dentsim... Dent-X's virtual reality 3-D training simulator... A revolution in dental education; 6 pages; retrieved from the internet (http://www.dent-x.com/DentSim.htm); on Sep. 24, 1998. |
Di Muzio et al.; Minimum intensity projection (MiniP); 6 pages; retrieved from the internet (https://radiopaedia.org/articles/minimum-intensity-projection-minip) on Sep. 6, 2018. |
Doruk et al.; The role of the headgear timer in extraoral co-operation; European Journal of Orthodontics; 26; pp. 289-291; Jun. 1, 2004. |
Doyle; Digital Dentistry; Computer Graphics World; pp. 50-52 and p. 54; Oct. 2000. |
Dummer et al.; Computed Radiography Imaging Based on High-Density 670 nm VCSEL Arrays; International Society for Optics and Photonics; vol. 7557; p. 75570H; 7 pages; (Author Manuscript); Feb. 24, 2010. |
Duret et al.; CAD/CAM Imaging in Dentistry; Current Opinion in Dentistry; 1 (2); pp. 150-154; Apr. 1991. |
Duret et al.; CAD-CAM in Dentistry; Journal of the American Dental Association; 117(6); pp. 715-720; Nov. 1988. |
Duret; The Dental CAD/CAM, General Description of the Project; Hennson International Product Brochure, 18 pages; Jan. 1986. |
Duret; Vers Une Prosthese Informatisee; Tonus; 75(15); pp. 55-57; (English translation attached); 23 pages; Nov. 15, 1985. |
Economides; The Microcomputer in the Orthodontic Office; Journal of Clinical Orthodontics; 13(11); pp. 767-772; Nov. 1979. |
Ellias et al.; Proteomic analysis of saliva identifies potential biomarkers for orthodontic tooth movement; The Scientific World Journal; vol. 2012; Article ID 647240; doi:10.1100/2012/647240; 7 pages; Jul. 2012. |
Elsasser; Some Observations on the History and Uses of the Kesling Positioner; American Journal of Orthodontics; 36(5); pp. 368-374; May 1, 1950. |
English translation of Japanese Laid-Open Publication No. 63-11148 to inventor T. Ozukuri (Laid-Open on Jan. 18, 1988) pp. 1-7. |
Faber et al.; Computerized Interactive Orthodontic Treatment Planning; American Journal of Orthodontics; 73(1); pp. 36-46; Jan. 1978. |
Felton et al.; A Computerized Analysis of the Shape and Stability of Mandibular Arch Form; American Journal of Orthodontics and Dentofacial Orthopedics; 92(6); pp. 478-483; Dec. 1987. |
Florez-Moreno; Time-related changes in salivary levels of the osteotropic factors sRANKL and OPG through orthodontic tooth movement; American Journal of Orthodontics and Dentofacial Orthopedics; 143(1); pp. 92-100; Jan. 2013. |
Friede et al.; Accuracy of Cephalometric Prediction in Orthognathic Surgery; Journal of Oral and Maxillofacial Surgery; 45(9); pp. 754-760; Sep. 1987. |
Friedrich et al.; Measuring system for in vivo recording of force systems in orthodontic treatment - concept and analysis of accuracy; J. Biomech.; 32(1); pp. 81-85; (Abstract Only) Jan. 1999. |
Futterling et al.; Automated Finite Element Modeling of a Human Mandible with Dental Implants; WSCG '98 - Conference Program; 8 pages; retrieved from the Internet (https://dspace5.zcu.cz/bitstream/11025/15851/1/Strasser_98.pdf); on Aug. 21, 2018. |
Gao et al.; 3-D element Generation for Multi-Connected Complex Dental and Mandibular Structure; IEEE Proceedings International Workshop in Medical Imaging and Augmented Reality; pp. 267-271; Jun. 12, 2001. |
Gim-Alldent Deutschland, “Das DUX System: Die Technik,” 3 pages; (English Translation Included); (year of pub. sufficiently earlier than effective US filing date and any foreign priority date); 2002. |
Gottleib et al.; JCO Interviews Dr. James A. McNamara, Jr., on the Frankel Appliance: Part 2: Clinical Management; Journal of Clinical Orthodontics; 16(6); pp. 390-407; retrieved from the internet (http://www.jco-online.com/archive/print_article.asp?Year=1982&Month=06&ArticleNum+); 21 pages; Jun. 1982. |
Grayson; New Methods for Three Dimensional Analysis of Craniofacial Deformity, Symposium: Computerized Facial Imaging in Oral and Maxillofacial Surgery; American Association of Oral and Maxillofacial Surgeons; 48(8) suppl 1; pp. 5-6; Sep. 13, 1990. |
Grest, Daniel; Marker-Free Human Motion Capture in Dynamic Cluttered Environments from a Single View-Point, PhD Thesis; 171 pages; Dec. 2007. |
Guess et al.; Computer Treatment Estimates In Orthodontics and Orthognathic Surgery; Journal of Clinical Orthodontics; 23(4); pp. 262-268; 11 pages; (Author Manuscript); Apr. 1989. |
Heaven et al.; Computer-Based Image Analysis of Artificial Root Surface Caries; Abstracts of Papers #2094; Journal of Dental Research; 70:528; (Abstract Only); Apr. 17-21, 1991. |
Highbeam Research; Simulating stress put on jaw. (ANSYS Inc.'s finite element analysis software); 2 pages; retrieved from the Internet (http://static.highbeam.com/t/toolingampproduction/november011996/simulatingstressputonfa..); on Nov. 5, 2004. |
Hikage; Integrated Orthodontic Management System for Virtual Three-Dimensional Computer Graphic Simulation and Optical Video Image Database for Diagnosis and Treatment Planning; Journal of Japan KA Orthodontic Society; 46(2); pp. 248-269: 56 pages; (English Translation Included); Feb. 1987. |
Hoffmann et al.; Role of Cephalometry for Planning of Jaw Orthopedics and Jaw Surgery Procedures; Informatbnen, pp. 375-396; (English Abstract Included); Mar. 1991. |
Hojjatie et al.; Three-Dimensional Finite Element Analysis of Glass-Ceramic Dental Crowns; Journal of Biomechanics; 23(11); pp. 1157-1166; Jan. 1990. |
Huckins; CAD-CAM Generated Mandibular Model Prototype from MRI Data: AAOMS, p. 96; (Abstract Only); (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1999. |
Imani et al.; A wearable chemical-electrophysiological hybrid biosensing system for real-time health and fitness monitoring; Nature Communications; 7; 11650; doi: 10.1038/ncomms11650; 7 pages; May 23, 2016. |
Invisalign, “You were made to move. There's never been a better time to straighten your teeth with the most advanced clear aligner in the world”; Product webpage; 2 pages; retrieved from the internet (www.invisalign.com/) on Dec. 28, 2017. |
JCO Interviews; Craig Andreiko , DDS, MS on the Elan and Orthos Systems; Interview by Dr. Larry W. White; Journal of Clinical Orthodontics; 28(8); pp. 459-468; 14 pages; (Author Manuscript); Aug. 1994. |
JCO Interviews; Dr. Homer W. Phillips on Computers in Orthodontic Practice, Part 2; Journal of Clinical Orthodontics; 17(12); pp. 819-831; 19 pages; (Author Manuscript); Dec. 1983. |
Jeerapan et al.; Stretchable biofuel cells as wearable textile-based self-powered sensors; Journal of Materials Chemistry A; 4(47); pp. 18342-18353; Dec. 21, 2016. |
Jerrold; The Problem, Electronic Data Transmission and the Law; American Journal of Orthodontics and Dentofacial Orthopedics; 113(4); pp. 478-479; 5 pages; (Author Manuscript); Apr. 1998. |
Jia et al.; Epidermal biofuel cells: energy harvesting from human perspiration; Angewandie Chemie International Edition; 52(28); pp. 7233-7236; Jul. 8, 2013. |
Jia et al.; Wearable textile biofuel cells for powering electronics; Journal of Materials Chemistry A; 2(43); pp. 18184-18189; Oct. 14, 2014. |
Jones et al.; An Assessment of the Fit of a Parabolic Curve to Pre- and Post-Treatment Dental Arches; British Journal of Orthodontics; 16(2); pp. 85-93; May 1989. |
Kamada et al.; Case Reports On Tooth Positioners Using LTV Vinyl Silicone Rubber; J. Nihon University School of Dentistry; 26(1); pp. 11-29; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1984. |
Kamada et al.; Construction of Tooth Positioners with LTV Vinyl Silicone Rubber and Some Case Reports; J. Nihon University School of Dentistry; 24(1); pp. 1-27; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1982. |
Kanazawa et al.; Three-Dimensional Measurements of the Occlusal Surfaces of Upper Molars in a Dutch Population; Journal of Dental Research; 63(11); pp. 1298-1301; Nov. 1984. |
Kesling et al.; The Philosophy of the Tooth Positioning Appliance; American Journal of Orthodontics and Oral surgery; 31(6); pp. 297-304; Jun. 1945. |
Kesling; Coordinating the Predetermined Pattern and Tooth Positioner with Conventional Treatment; American Journal of Orthodontics and Oral Surgery; 32(5); pp. 285-293; May 1946. |
Kim et al.; A wearable fingernail chemical sensing platform: pH sensing at your fingertips; Talanta; 150; pp. 622-628; Apr. 2016. |
Kim et al.; Advanced materials for printed wearable electrochemical devices: A review; Advanced Electronic Materials; 3(1); 15 pages; 1600260; Jan. 2017. |
Kim et al.; Noninvasive alcohol monitoring using a wearable tattoo-based iontophoretic-biosensing system; ACS Sensors; 1(8); pp. 1011-1019; Jul. 22, 2016. |
Kim et al.; Non-invasive mouthguard biosensor for continuous salivary monitoring of metabolites; Analyst; 139(7); pp. 1632-1636; Apr. 7, 2014. |
Kim et al.; Wearable salivary uric acid mouthguard biosensor with integrated wireless electronics; Biosensors and Bioelectronics; 74; pp. 1061-1068; 19 pages; (Author Manuscript); Dec. 2015. |
Kleeman et al.; The Speed Positioner; J. Clin. Orthod., 30(12); pp. 673-680; Dec. 1996. |
Kochanek; Interpolating Splines with Local Tension, Continuity and Bias Control; Computer Graphics; 18(3); pp. 33-41; Jan. 1, 1984. |
Kumar et al.; All-printed, stretchable Zn-Ag2O rechargeable battery via hyperelastic binder for self-powering wearable electronics; Advanced Energy Materials; 7(8); 8 pages; 1602096; Apr. 2017. |
Kumar et al.; Biomarkers in orthodontic tooth movement; Journal of Pharmacy Bioallied Sciences; 7(Suppl 2); pp. S325-S330; 12 pages; (Author Manuscript); Aug. 2015. |
Kumar et al.; Rapid maxillary expansion: A unique treatment modality in dentistry; J. Clin. Diagn. Res.; 5(4); pp. 906-911; Aug. 2011. |
Kunii et al.; Articulation Simulation for an Intelligent Dental Care System; Displays; 15(3); pp. 181-188; Jul. 1994. |
Kuroda et al.; Three-Dimensional Dental Cast Analyzing System Using Laser Scanning; American Journal of Orthodontics and Dentofacial Orthopedics; 110(4); pp. 365-369; Oct. 1996. |
Laurendeau et al.; A Computer-Vision Technique for the Acquisition and Processing of 3-D Profiles of 7 Dental Imprints: An Application in Orthodontics; IEEE Transactions on Medical Imaging; 10(3); pp. 453-461; Sep. 1991. |
Leinfelder et al.; A New Method for Generating Ceramic Restorations: a CAD-CAM System; Journal of the American Dental Association; 118(6); pp. 703-707; Jun. 1989. |
Manetti et al.; Computer-Aided Cefalometry and New Mechanics in Orthodontics; Fortschr Kieferorthop; 44; pp. 370-376; 8 pages; (English Article Summary Included); (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1983. |
McCann; Inside the ADA; J. Amer. Dent. Assoc.; 118; pp. 286-294; Mar. 1989. |
McNamara et al.; Invisible Retainers; J. Clin. Orthod.; pp. 570-578; 11 pages; (Author Manuscript); Aug. 1985. |
McNamara et al.; Orthodontic and Orthopedic Treatment in the Mixed Dentition; Needham Press; pp. 347-353; Jan. 1993. |
Moermann et al.; Computer Machined Adhesive Porcelain Inlays: Margin Adaptation after Fatigue Stress; IADR Abstract 339; J. Dent. Res.; 66(a):763; (Abstract Only); (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1987. |
Moles; Correcting Mild Malalignment—As Easy As One, Two, Three; AOA/Pro Corner; 11(2); 2 pages; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 2002. |
Mormann et al.; Marginale Adaptation von adhäsiven Porzellaninlays in vitro; Separatdruck aus: Schweiz. Mschr. Zahnmed.; 95; pp. 1118-1129; 8 pages; (Machine Translated English Abstract); (year of pub. sufficiently earlier than effective US filing date and any foreign priority date); 1985. |
Nahoum; The Vacuum Formed Dental Contour Appliance; N. Y. State Dent. J.; 30(9); pp. 385-390; Nov. 1964. |
Nash; Cerec CAD/CAM Inlays: Aesthetics and Durability in a Single Appointment; Dentistry Today; 9(8); pp. 20, 22-23 and 54; Oct. 1990. |
Nedelcu et al.; “Scanning Accuracy And Precision In 4 Intraoral Scanners: An In Vitro Comparison Based On 3-Dimensional Analysis”; J. Prosthet. Dent.; 112(6); pp. 1461-1471; Dec. 2014. |
Nishiyama et al.; A New Construction of Tooth Repositioner by LTV Vinyl Silicone Rubber; The Journal of Nihon University School of Dentistry; 19(2); pp. 93-102; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1977. |
Ogawa et al.; Mapping, profiling and clustering of pressure pain threshold (PPT) in edentulous oral mucosa; Journal of Dentistry; 32(3); pp. 219-228; Mar. 2004. |
Ogimoto et al.; Pressure-pain threshold determination in the oral mucosa; Journal of Oral Rehabilitation; 29(7); pp. 620-626; Jul. 2002. |
Parrilla et al.; A textile-based stretchable multi-ion potentiometric sensor; Advanced Healthcare Materials; 5(9); pp. 996-1001; May 2016. |
Paul et al.; Digital Documentation of Individual Human Jaw and Tooth Forms for Applications in Orthodontics; Oral Surgery and Forensic Medicine Proc. of the 24th Annual Conf. of the IEEE Industrial Electronics Society (IECON '98); vol. 4; pp. 2415-2418; Sep. 4, 1998. |
Pinkham; Foolish Concept Propels Technology; Dentist, 3 pages , Jan./Feb. 1989. |
Pinkham; Inventor's CAD/CAM May Transform Dentistry; Dentist; pp. 1 and 35, Sep. 1990. |
Ponitz; Invisible retainers; Am. J. Orthod.; 59(3); pp. 266-272; Mar. 1971. |
Procera Research Projects; Procera Research Projects 1993 - Abstract Collection; 23 pages; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1993. |
Proffit et al.; The first stage of comprehensive treatment: alignment and leveling; Contemporary Orthodontics, 3rd Ed.; Chapter 16; Mosby Inc.; pp. 534-537; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 2000. |
Proffit et al.; The first stage of comprehensive treatment: alignment and leveling; Contemporary Orthodontics (Second Ed.); Chapter 15; Mosby-Year Book; St. Louis, Missouri; pp. 470-533; Oct. 1993. |
Raintree Essix & ARS Materials, Inc., Raintree Essix, Technical Magazine Table of contents and Essix Appliances, 7 pages; retrieved from the internet (http://www.essix.com/magazine/default.html) on Aug. 13, 1997. |
Redmond et al.; Clinical Implications of Digital Orthodontics; American Journal of Orthodontics and Dentofacial Orthopedics; 117(2); pp. 240-242; Feb. 2000. |
Rekow et al.; CAD/CAM for Dental Restorations—Some of the Curious Challenges; IEEE Transactions on Biomedical Engineering; 38(4); pp. 314-318; Apr. 1991. |
Rekow et al.; Comparison of Three Data Acquisition Techniques for 3-D Tooth Surface Mapping; Annual International Conference of the IEEE Engineering in Medicine and Biology Society; 13(1); pp. 344-345 (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1991. |
Rekow; A Review of the Developments in Dental CAD/CAM Systems; Current Opinion in Dentistry; 2; pp. 25-33; Jun. 1992. |
Rekow; CAD/CAM in Dentistry: A Historical Perspective and View of the Future; Journal Canadian Dental Association; 58(4); pp. 283, 287-288; Apr. 1992. |
Rekow; Computer-Aided Design and Manufacturing in Dentistry: A Review of the State of the Art; Journal of Prosthetic Dentistry; 58(4); pp. 512-516; Dec. 1987. |
Rekow; Dental CAD-CAM Systems: What is the State of the Art?; The Journal of the American Dental Association; 122(12); pp. 43-48; Dec. 1991. |
Rekow; Feasibility of an Automated System for Production of Dental Restorations, Ph.D. Thesis; Univ. of Minnesota, 250 pages, Nov. 1988. |
Richmond et al.; The Development of the PAR Index (Peer Assessment Rating): Reliability and Validity.; The European Journal of Orthodontics; 14(2); pp. 125-139; Apr. 1992. |
Richmond et al.; The Development of a 3D Cast Analysis System; British Journal of Orthodontics; 13(1); pp. 53-54; Jan. 1986. |
Richmond; Recording The Dental Cast In Three Dimensions; American Journal of Orthodontics and Dentofacial Orthopedics; 92(3); pp. 199-206; Sep. 1987. |
Rudge; Dental Arch Analysis: Arch Form, A Review of the Literature; The European Journal of Orthodontics; 3(4); pp. 279-284; Jan. 1981. |
Sahm et al.; “Micro-Electronic Monitoring of Functional Appliance Wear”; Eur J Orthod.; 12(3); pp. 297-301; Aug. 1990. |
Sahm: Presentation of a wear timer for the clarification of scientific questions in orthodontic orthopedics; Fortschritte der Kieferorthopadie; 51 (4); pp. 243-247; (Translation Included) Jul. 1990. |
Sakuda et al.; Integrated Information-Processing System in Clinical Orthodontics: An Approach with Use of a Computer Network System; American Journal of Orthodontics and Dentofacial Orthopedics; 101(3); pp. 210-220; 20 pages; (Author Manuscript) Mar. 1992. |
Schafer et al.; "Quantifying patient adherence during active orthodontic treatment with removable appliances using microelectronic wear-time documentation"; Eur J Orthod.; 37(1); pp. 1-8; doi:10.1093/ejo/cju012; Jul. 3, 2014. |
Schellhas et al.; Three-Dimensional Computed Tomography in Maxillofacial Surgical Planning; Archives of Otolaryngology—Head and Neck Surgery; 114(4); pp. 438-442; Apr. 1988. |
Schroeder et al.; Eds.; The Visualization Toolkit; Prentice Hall PTR, New Jersey; Chapters 6, 8 & 9 (pp. 153-210, 309-354, and 355-428); (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1998. |
Shilliday; Minimizing finishing problems with the mini-positioner; American Journal of Orthodontics; 59(6); pp. 596-599; Jun. 1971. |
Shimada et al.; Application of optical coherence tomography (OCT) for diagnosis of caries, cracks, and defects of restorations; Current Oral Health Reports; 2(2); pp. 73-80; Jun. 2015. |
Siemens; Cerec—Computer-Reconstruction, High Tech in der Zahnmedizin; 15 pages; (Includes Machine Translation); (year of pub. sufficiently earlier than effective US filing date and any foreign priority date); 2004. |
Sinclair; The Readers' Corner; Journal of Clinical Orthodontics; 26(6); pp. 368-372; 5 pages; retrieved from the internet (http://www.jco-online.com/archive/print_article.asp?Year=1992&Month=06&ArticleNum=); Jun. 1992. |
Sirona Dental Systems GmbH, CEREC 3D, Manuel utilisateur, Version 2.0X (in French); 114 pages; (English translation of table of contents included); (year of pub. sufficiently earlier than effective US filing date and any foreign priority date); 2003. |
Stoll et al.; Computer-aided Technologies in Dentistry; Dtsch Zahnärztl Z 45, pp. 314-322; (English Abstract Included); (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1990. |
Sturman; Interactive Keyframe Animation of 3-D Articulated Models; Proceedings Graphics Interface '84; vol. 86; pp. 35-40; May-Jun. 1984. |
The American Heritage, Stedman's Medical Dictionary; Gingiva; 3 pages; retrieved from the internet (http://reference.com/search/search?q=gingiva) on Nov. 5, 2004. |
The Dental Company Sirona; Cerec Omnicam and Cerec Bluecam brochure: The first choice in every case; 8 pages; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 2014. |
Thera Mon; “Microsensor”; 2 pages; retrieved from the internet (www.english.thera-mon.com/the-product/transponder/index.html); on Sep. 19, 2016. |
Thorlabs; Pellin broca prisms; 1 page; retrieved from the internet (www.thorlabs.com); Nov. 30, 2012. |
Tiziani et al.; Confocal principle for macro and microscopic surface and defect analysis; Optical Engineering; 39(1); pp. 32-39; Jan. 1, 2000. |
Truax; Truax Clasp-Less(TM) Appliance System; The Functional Orthodontist; 9(5); pp. 22-24, 26-28; Sep.-Oct. 1992. |
Tru-Tain Orthodontic & Dental Supplies, Product Brochure, Rochester, Minnesota 55902, 16 pages; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date) 1996. |
U.S. Department of Commerce, National Technical Information Service; Holodontography: An Introduction to Dental Laser Holography; School of Aerospace Medicine, Brooks AFB, Tex.; 40 pages; Mar. 1973. |
U.S. Department of Commerce, National Technical Information Service; Automated Crown Replication Using Solid Photography SM; Solid Photography Inc., Melville NY,; 20 pages; Oct. 1977. |
Vadapalli; Minimum intensity projection (MinIP) is a data visualization; 7 pages; retrieved from the internet (https://prezi.com/tdmttnmv2knw/minimum-intensity-projection-minip-is-a-data-visualization/) on Sep. 6, 2018. |
Van Der Linden et al.; Three-Dimensional Analysis of Dental Casts by Means of the Optocom; Journal of Dental Research; 51(4); p. 1100; Jul.-Aug. 1972. |
Van Der Linden; A New Method to Determine Tooth Positions and Dental Arch Dimensions; Journal of Dental Research; 51(4); p. 1104; Jul.-Aug. 1972. |
Van Der Zel; Ceramic-Fused-to-Metal Restorations with a New CAD/CAM System; Quintessence International; 24(A); pp. 769-778; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date); 1993. |
Van Hilsen et al.; Comparing potential early caries assessment methods for teledentistry; BMC Oral Health; 13(16); doi: 10.1186/1472-6831-13-16; 9 pages; Mar. 2013. |
Varady et al.; Reverse Engineering of Geometric Models - An Introduction; Computer-Aided Design; 29(4); pp. 255-268; 20 pages; (Author Manuscript); Apr. 1997. |
Verstreken et al.; An Image-Guided Planning System for Endosseous Oral Implants; IEEE Transactions on Medical Imaging; 17(5); pp. 842-852; Oct. 1998. |
Warunek et al.; Physical and Mechanical Properties of Elastomers in Orthodontic Positioners; American Journal of Orthodontics and Dentofacial Orthopedics; 95(5); pp. 388-400; 21 pages; (Author Manuscript); May 1989. |
Warunek et al.; Clinical Use of Silicone Elastomer Appliances; JCO; 23(10); pp. 694-700; Oct. 1989. |
Watson et al.; Pressures recorded at the denture base-mucosal surface interface in complete denture wearers; Journal of Oral Rehabilitation; 14(6); pp. 575-589; Nov. 1987. |
Wells; Application of the Positioner Appliance in Orthodontic Treatment; American Journal of Orthodontics; 58(4); pp. 351-366; Oct. 1970. |
Wikipedia; Palatal expansion; 3 pages; retrieved from the internet (https://en.wikipedia.org/wiki/Palatal_expansion) on Mar. 5, 2018. |
Williams; Dentistry and CAD/CAM: Another French Revolution; J. Dent. Practice Admin.; 4(1); pp. 2-5 Jan./Mar. 1987. |
Williams; The Switzerland and Minnesota Developments in CAD/CAM; Journal of Dental Practice Administration; 4(2); pp. 50-55; Apr./Jun. 1987. |
Windmiller et al.; Wearable electrochemical sensors and biosensors: a review; Electroanalysis; 25(1); pp. 29-46; Jan. 2013. |
Wireless Sensor Networks Magazine; Embedded Teeth for Oral Activity Recognition; 2 pages; retrieved on Sep. 19, 2016 from the internet (www.wsnmagazine.com/embedded-teeth/); Jul. 29, 2013. |
Wishan; New Advances in Personal Computer Applications for Cephalometric Analysis, Growth Prediction, Surgical Treatment Planning and Imaging Processing; Symposium: Computerized Facial Imaging in Oral and Maxillofacial Surgery; p. 5; Presented on Sep. 13, 1990. |
Witt et al.; The wear-timing measuring device in orthodontics-cui bono? Reflections on the state-of-the-art in wear-timing measurement and compliance research in orthodontics; Fortschr Kieferorthop.; 52(3): pp. 117-125; (Translation Included) Jun. 1991. |
Wolf; Three-dimensional structure determination of semi-transparent objects from holographic data; Optics Communications; 1(4); pp. 153-156; Sep. 1969. |
WSCG'98—Conference Program, The Sixth International Conference in Central Europe on Computer Graphics and Visualization '98; pp. 1-7; retrieved from the Internet on Nov. 5, 2004, (http://wscg.zcu.cz/wscg98/wscg98.htm); Feb. 9-13, 1998. |
Xia et al.; Three-Dimensional Virtual-Reality Surgical Planning and Soft-Tissue Prediction for Orthognathic Surgery; IEEE Transactions on Information Technology in Biomedicine; 5(2); pp. 97-107; Jun. 2001. |
Yamada et al.; Simulation of fan-beam type optical computed-tomography imaging of strongly scattering and weakly absorbing media; Applied Optics; 32(25); pp. 4808-4814; Sep. 1, 1993. |
Yamamoto et al.; Optical Measurement of Dental Cast Profile and Application to Analysis of Three-Dimensional Tooth Movement in Orthodontics; Front. Med. Biol. Eng., 1(2); pp. 119-130; (year of pub. sufficiently earlier than effective US filing date and any foreign priority date); 1988. |
Yamamoto et al.; Three-Dimensional Measurement of Dental Cast Profiles and Its Applications to Orthodontics; Conf. Proc. IEEE Eng. Med. Biol. Soc.; 12(5); pp. 2052-2053; Nov. 1990. |
Yamany et al.; A System for Human Jaw Modeling Using Intra-Oral Images; Proc. of the 20th Annual Conf. of the IEEE Engineering in Medicine and Biology Society; vol. 2; pp. 563-566; Oct. 1998. |
Yoshii; Research on a New Orthodontic Appliance: The Dynamic Positioner (D.P.); III. The General Concept of the D.P. Method and Its Therapeutic Effect, Part 1, Dental and Functional Reversed Occlusion Case Reports; Nippon Dental Review; 457; pp. 146-164; 43 pages; (Author Manuscript); Nov. 1980. |
Yoshii; Research on a New Orthodontic Appliance: The Dynamic Positioner (D.P.); I. The D.P. Concept and Implementation of Transparent Silicone Resin (Orthocon); Nippon Dental Review; 452; pp. 61-74; 32 pages; (Author Manuscript); Jun. 1980. |
Yoshii; Research on a New Orthodontic Appliance: The Dynamic Positioner (D.P.), II. The D.P. Manufacturing Procedure and Clinical Applications; Nippon Dental Review; 454; pp. 107-130; 48 pages; (Author Manuscript); Aug. 1980. |
Yoshii; Research on a New Orthodontic Appliance: The Dynamic Positioner (D.P.); III—The General Concept of the D.P. Method and its Therapeutic Effect, Part 2. Skeletal Reversed Occlusion Case Reports; Nippon Dental Review; 458; pp. 112-129; 40 pages; (Author Manuscript); Dec. 1980. |
Zhou et al.; Biofuel cells for self-powered electrochemical biosensing and logic biosensing: A review; Electroanalysis; 24(2); pp. 197-209; Feb. 2012. |
Zhou et al.; Bio-logic analysis of injury biomarker patterns in human serum samples; Talanta; 83(3); pp. 955-959; Jan. 15, 2011. |
Number | Date | Country
---|---|---
20210068923 A1 | Mar 2021 | US
Number | Date | Country
---|---|---
62417985 | Nov 2016 | US
 | Number | Date | Country
---|---|---|---
Parent | 16827594 | Mar 2020 | US
Child | 16952072 | | US
Parent | 15803718 | Nov 2017 | US
Child | 16827594 | | US