The disclosure relates generally to 3-D diagnostic imaging and more particularly to apparatus and methods for guided surgery with dynamic updating of image display according to treatment progress.
Guided surgery techniques have grown in acceptance among medical and dental practitioners, allowing more effective use of image acquisition and processing utilities and providing image data that is particularly useful to the practitioner at various stages in the treatment process. Using guided surgery tools, for example, the practitioner can quickly check the positioning and orientation of surgical instruments and verify correct angles for incision, drilling, and other invasive procedures where accuracy can be a particular concern.
The capability for radiographic volume imaging, using tools such as cone-beam computed tomography (CBCT), has been particularly helpful for improving the surgical planning process. Intraoral volume imaging, for example, makes it possible for the practitioner to study bone and tissue structures of a patient in detail, such as for implant positioning. Surgical planning tools, applied to the CBCT volume image, help the practitioner to visualize and plan where drilling needs to be performed and to evaluate factors such as amount of available bone structure, recommended drill depth, clearance obstructions, and other variables. Symbols for drill paths or other useful markings can be superimposed onto the volume image display so that these can be viewed from different perspectives and used for guidance during the procedure.
One problem with radiographic volume imaging for surgical guidance relates to updating. Once a drilling or other procedure has begun, and as it continues, the volume image that was originally used for surgical planning becomes progressively less accurate as a guide to ongoing work. Removal or displacement of tissue may not be accurately represented in the volume image display, so that further guidance may not be as reliable as the initial surgical plan.
A number of conventional surgical guidance imaging systems address the update problem by providing fiducial markers of some type, positioned on the patient's skin or attached to adjacent teeth or nearby structures, or positioned on the surgical instrument itself. Fiducial markers are then used as guides for updating the volume image content. There are drawbacks with this type of approach, however, including obstruction or poor visibility, added time and materials needed for mounting the fiducial markers or marking the surface of the patient, patient discomfort, and other difficulties. Moreover, fiducial markers only provide reference landmarks for the patient anatomy or surgical instrumentation; additional computation is still required in order to update the volume display to show procedure progress. The display itself becomes increasingly less accurate as to actual conditions. Similar limitations relate to inaccurate surface depiction; when using the radiographic image content, changes to the surface contour due to surgical procedures, such as due to incision, drilling, tooth removal, or implant placement, are not displayed.
Among solutions proposed for surgical guidance, fiducial markers, and related techniques for combined image content are those described in U.S. Patent Application Publication Nos. 2006/0281991 by Fitzpatrick et al.; 2008/0183071 by Strommer et al.; 2008/0262345 by Fichtinger et al.; 2012/0259204 by Carrat et al.; 2010/0168562 by Zhao et al.; 2006/0165310 by Newton; 2013/0063558 by Phipps; 2011/0087332 by Bojarski et al.; and 2010/0298712 by Pelissier et al.; in U.S. Pat. No. 6,122,541 to Cosman et al.; and in PCT Publications WO 2012/149548 A2 by Siewerdsen et al.; WO 2012/068679 by Dekel et al.; WO 2013/144208 by Daon; and WO 2010/086374 by Lavalee et al.
Structured light imaging is one familiar technique that has been successfully applied for surface characterization. In structured light imaging, a pattern of illumination is projected toward the surface of an object from a given angle. The pattern can use parallel lines of light or more complex periodic features, such as sinusoidal lines, dots, or repeated symbols, and the like. The light pattern can be generated in a number of ways, such as using a mask, an arrangement of slits, interferometric methods, or a spatial light modulator, such as a Digital Light Processor from Texas Instruments Inc., Dallas, Tex. or similar digital micromirror device. Multiple patterns of light may be used to provide a type of encoding that helps to increase robustness of pattern detection, particularly in the presence of noise. Light reflected or scattered from the surface is then viewed from another angle as a contour image, taking advantage of triangulation in order to analyze surface information based on the appearance of contour lines or other patterned illumination.
Intraoral structured light imaging is now becoming a valuable tool for the dental practitioner, who can obtain this information by scanning the patient's teeth using an inexpensive, compact intraoral scanner, such as the Model CS3500 Intraoral Scanner from Carestream Dental, Atlanta, Ga. However, structured light imaging only provides information about the surface contour at the time of scanning. This information can quickly become inaccurate as a dental procedure progresses.
There is a need for automated surgical guidance apparatus and methods that can help practitioners to plan and execute procedures such as the placement of implants and other devices. Capable imaging tools for both internal structures and surface contours have been developed. However, there is a need to make this information accessible to the practitioner during the surgical procedure, without requiring cumbersome display apparatus and without distracting the practitioner from concentration on the surgical treatment site.
It is an object of the present disclosure to advance the art of dental surgical guidance. Apparatus and methods can be provided that take advantage of volume image reconstruction and contour surface image characterization to present real-time guidance images to the dental surgical practitioner.
Another aspect of this application is to address, in whole or in part, at least the foregoing and other deficiencies in the related art.
It is another aspect of this application to provide, in whole or in part, at least the advantages described herein.
These objects are given only by way of illustrative example, and such objects may be exemplary of one or more embodiments of the disclosure. Other desirable objectives and advantages inherently achieved by the disclosure may occur or become apparent to those skilled in the art. The invention is defined by the appended claims.
According to one aspect of the disclosure, there is provided a method for acquiring and updating a 3-D surface of a dentition that can include a) acquiring a collection of 3-D image content of the dentition from different points of view using a 3-D scanning device; b) gradually forming the 3-D surface of the dentition using a matching algorithm that aggregates 3-D images from the 3-D image content based on a determination of overlap of each 3-D image relative to the 3-D surface of the dentition; wherein for each newly acquired 3-D image, i) when the newly acquired 3-D image partly overlaps with the 3-D surface of the dentition, augmenting the 3-D surface of the dentition with the portion of the newly acquired 3-D image that does not overlap with the 3-D surface of the dentition, and ii) when the newly acquired 3-D image completely overlaps with the 3-D surface of the dentition, updating the 3-D surface of the dentition in real time by replacing the corresponding portion of the 3-D surface of the dentition with the contents of the newly acquired 3-D image, so that the corresponding portion of the 3-D surface of the dentition no longer contributes to the updated 3-D surface of the dentition. In one aspect, the position of the 3-D scanning device relative to the 3-D surface of the dentition can be determined in real time by comparing the size and the shape of the overlap to the cross-section of the field of view of the 3-D scanning device, where the size and the shape of the overlap of the newly acquired 3-D image are used to determine the distance and the angles from which the 3-D image was acquired relative to the 3-D surface of the dentition.
According to one aspect of the disclosure, there is provided a method for updating display of a dentition to a practitioner that can include obtaining 3-D surface contour image content that includes a dentition treatment region; obtaining radiographic volume image content that includes the dentition treatment region; combining the 3-D surface contour image content and the radiographic volume image content into a single 3-D virtual model that comprises the dentition treatment region; obtaining instructions that define a surgical treatment plan related to the treatment region; repeating the steps of a1) acquiring new 3-D contour images of the dentition treatment region that include physical dental objects in the dentition treatment region from different points of view using a 3-D scanning device, and a2) updating the 3-D surface of the dentition treatment region in real time by replacing the corresponding portion of the 3-D surface of the dentition treatment region with the contents of the newly acquired 3-D contour images, where the corresponding portion of the 3-D surface of the dentition no longer contributes to the updated 3-D surface of the dentition; and repeating the steps of b1) sensing the position of a surgical instrument mounted to the 3-D scanning device at a surgical site within the dentition treatment region, relative to the single 3-D virtual model; b2) updating the single 3-D virtual model according to the surgical treatment plan; and b3) determining a field of view of the practitioner, detecting a tooth surface in the dentition treatment region in the practitioner's field of view, and displaying at least a portion of the updated single 3-D virtual model onto the field of view, oriented to the field of view and registered to the actual tooth surface as seen from the practitioner's field of view.
The foregoing and other objects, features, and advantages of the disclosure will be apparent from the following more particular description of the embodiments of the disclosure, as illustrated in the accompanying drawings.
The elements of the drawings are not necessarily to scale relative to each other.
The following is a detailed description of exemplary embodiments, reference being made to the drawings in which the same reference numerals identify the same elements of structure in each of the several figures.
Where they are used, the terms “first”, “second”, and so on, do not necessarily denote any ordinal or priority relation, but may be used for more clearly distinguishing one element or time interval from another.
The term “exemplary” indicates that the description is used as an example, rather than implying that it is an ideal. The terms “subject” and “object” may be used interchangeably to identify the object of an optical apparatus or the subject of an image.
The term “in signal communication” as used in the application means that two or more devices and/or components are capable of communicating with each other via signals that travel over some type of signal path. Signal communication may be wired or wireless. The signals may be communication, power, data, or energy signals which may communicate information, power, and/or energy from a first device and/or component to a second device and/or component along a signal path between the first device and/or component and second device and/or component. The signal paths may include physical, electrical, magnetic, electromagnetic, optical, wired, and/or wireless connections between the first device and/or component and second device and/or component. The signal paths may also include additional devices and/or components between the first device and/or component and second device and/or component.
In the context of the present disclosure, the terms “pixel” and “voxel” may be used interchangeably to describe an individual digital image data element, that is, a single value representing a measured image signal intensity. Conventionally an individual digital image data element is referred to as a voxel for 3-dimensional or volume images and a pixel for 2-dimensional (2-D) images. For the purposes of the description herein, the terms voxel and pixel can generally be considered equivalent, describing an image elemental datum that is capable of having a range of numerical values. Voxels and pixels have attributes of both spatial location and image data code value.
Volumetric imaging data is obtained from a volume radiographic imaging apparatus, such as a computed tomography system or the CBCT system 120 described herein.
In the context of the present disclosure, a 3-D image or “3-D image content” can include:
“Patterned light” is used to indicate light that has a predetermined spatial pattern, such that the light has one or more features such as one or more discernable parallel lines, curves, a grid or checkerboard pattern, or other features having areas of light separated by areas without illumination. In the context of the present disclosure, the phrases “patterned light” and “structured light” are considered to be equivalent, both used to identify the light that is projected onto the head of the patient in order to derive contour image data.
In the context of the present disclosure, a single projected line of light is considered a “one dimensional” pattern, since the line has an almost negligible width, such as when projected from a line laser, and has a length that is its predominant dimension. Two or more of such lines projected side by side, either simultaneously or in a scanned arrangement, can be used to provide a two-dimensional pattern.
The terms “3-D model” and “point cloud” may be used synonymously in the context of the present disclosure. The dense point cloud is formed using techniques familiar to those skilled in the volume imaging arts, which generally identify, from the point cloud, vertex points corresponding to surface features. The dense point cloud can be generated using the reconstructed contour data from one or more reflectance images. Dense point cloud information serves as the basis for a high-density polygon model, such as can be used for a 3-D surface of the dentition, including the teeth and gum surface.
In the context of the present disclosure, the terms “virtual view” and “virtual image” are used to connote computer-generated or computer-processed images that are displayed to the viewer. The virtual image can be formed by the display optics using any of a number of well-known techniques that employ convergence or divergence of light. A magnifying glass, as a simple example, provides a virtual image of its object. A virtual image is not formed on a display surface; instead, the optical system provides light at angles that give the appearance of an actual object at a position in the viewer's field of view, even though no object is actually at that position. With a virtual image, the apparent image size is independent of the size or location of a display surface, and the source object or source imaged beam for a virtual image can be small. In contrast to systems that project a real image on a screen or display surface, forming a virtual image provides a more realistic viewing experience: the virtual image appears to be some distance away and appears, to the viewer, to be superimposed onto or against real-world objects in the field of view (FOV) of the viewer.
In the context of the present disclosure, an image is considered to be “in register” with a subject that is in the field of view when the image and subject are visually aligned from the perspective of the observer. As the term “registered” is used in the current disclosure, a registered feature of a computer-generated or virtual image is sized, positioned, and oriented on the display so that its appearance represents the planned or intended size, position, and orientation for the corresponding object, correlated to the field of view of the observer. Registration is in three dimensions, so that, from the view perspective of the dental practitioner/observer, the registered feature is rendered at the position and angular orientation that is appropriate for the patient who is in the treatment chair and within the visual field of the observing practitioner. Thus, for example, where the computer-generated feature is a registered virtual image for a drill hole or drill axis for a patient's tooth, and where the observer is looking into the mouth of the patient, the display of the drill hole or axis can appear as if superimposed or overlaid within the mouth, sized, oriented, and positioned at the actual tooth for drilling and/or the dentition surgical site, as seen from the detected perspective of the observer. The relative opacity of superimposed content and/or registered virtual content can be modulated to allow ease of visibility of both the real-world view and the virtual image content that is superimposed thereon. In addition, because the virtual image content can be digitally generated, the superimposed content and/or registered content can be removed or its appearance changed in order to provide improved visibility of the real-world scene in the field of view or in order to provide various types of information to the practitioner.
In the context of the present disclosure, the term “real-time image” refers to an image that is actively acquired from the patient or displayed during a procedure in such a way that the image reflects the actual status of the procedure with no more than a few seconds' lag time, with imaging system response time as the primary factor in determining lag time. Thus, for example, a real-time display of drill position would closely approximate the actual drill position or targeted position, offset in time only by the delay time needed to process and display the image after being acquired or processed from stored image data.
In the context of the present disclosure, the term “highlighting” for a displayed feature has its conventional meaning as is understood to those skilled in the information and image display arts. In general, highlighting uses some form of localized display enhancement to attract the attention of the viewer. Highlighting a portion of an image, such as an individual tooth or a set of teeth or other structure(s) can be achieved in any of a number of ways, including, but not limited to, annotating, displaying a nearby or overlaying symbol, outlining or tracing, display in a different color or at a markedly different intensity or gray scale value than other image or information content, blinking or animation of a portion of a display, or display at higher sharpness or contrast.
In the context of the present disclosure, the terms “viewer”, “operator”, and “user” are considered to be equivalent and refer to the viewing practitioner, technician, or other person who views and manipulates a contour image that is formed from a combination of multiple structured light images on a display monitor.
A “viewer instruction”, “operator instruction”, or “operator command” can be obtained from explicit commands entered by the viewer or may be implicitly obtained or derived based on some other user action, such as making an equipment setting, for example. With respect to entries entered on an operator interface, such as an interface using a display monitor and keyboard, for example, the terms “command” and “instruction” may be used interchangeably to refer to an operator entry.
In the context of the present disclosure, the term “at least one of” is used to mean one or more of the listed items can be selected. The term “about” indicates that the value listed can be somewhat altered, as long as the alteration does not result in nonconformance of the process or structure to the illustrated embodiment.
In the context of the present disclosure, the term “coupled” is intended to indicate a mechanical association, connection, relation, or linking between two or more components, such that the disposition of one component affects the spatial disposition of a component to which it is coupled. For mechanical coupling, two components need not be in direct contact, but can be linked through one or more intermediary components.
Embodiments of the present disclosure are directed to the need for improved status tracking and guidance for the practitioner during a surgical procedure using a volume image and an augmented reality display, wherein the display of the volume image content is continuously refreshed to reflect the progress of the drill or other surgical instrument. Advantageously, radiographic volume image content for internal structures can be combined with surface contour image content for outer surface features to form a single 3-D virtual model, so that the combined 3-D image content displays to the practitioner as a virtual model providing a surgical plan that can be continuously updated as work on the patient progresses. Certain exemplary embodiments can register the updatable single 3-D virtual model to the detected field of view of the practitioner.
Real-time feedback can be presented to the practitioner on the conventional display monitor 74 or on a wearable display such as a head-mounted device (HMD) 110. A scanning imaging apparatus 70 is disposed to continuously monitor the progress of a surgical instrument 112 as the treatment procedure progresses.
Alternately, 3-D image content can be obtained by acquiring and processing radiographic image data from a scanned cast, such as a molded appliance obtained from the patient.
In structured light imaging, a pattern of lines, or other structured pattern, is projected from illumination array 10 toward the surface of an object from a given angle. The projected pattern from the surface is then viewed from another angle as a contour image, taking advantage of triangulation in order to analyze surface information based on the appearance of contour lines. Phase shifting, in which the projected pattern is incrementally shifted spatially for obtaining additional measurements at the new locations, is typically applied as part of structured light imaging, used in order to complete the contour mapping of the surface and to increase overall resolution in the contour image.
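By way of illustration only, a common four-step variant of the phase-shifting approach just described computes, at each pixel, the phase of the projected sinusoid from four captures with the pattern shifted by a quarter period between captures. The sketch below shows only the wrapped-phase computation (phase unwrapping is omitted); the names are illustrative and not part of the disclosed apparatus.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four images of a sinusoidal pattern, each shifted
    by 90 degrees relative to the previous capture (standard four-step
    phase shifting). Inputs are same-shaped intensity arrays."""
    # With I_k = A + B*cos(phase + k*pi/2): i4 - i2 = 2B*sin(phase),
    # i1 - i3 = 2B*cos(phase), so atan2 recovers the wrapped phase.
    return np.arctan2(i4 - i2, i1 - i3)
```

The wrapped phase locates each pixel within one period of the projected pattern; unwrapping then resolves the period ambiguity before triangulation.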
By projecting and capturing images that show structured light patterns of this type multiple times, the image of the contour line on the camera simultaneously locates a number of surface points of the imaged object.
By knowing the instantaneous position of the scanner and the instantaneous position of the line of light within an object-relative coordinate system when the image was acquired, a computer equipped with appropriate software can use triangulation methods to compute the coordinates of numerous illuminated surface points. As the plane of light is moved to eventually intersect some or all of the surface of the object, the coordinates of an increasing number of points are accumulated. As a result of this image acquisition, a point cloud of vertex points or vertices can be identified and used to characterize the surface contour.
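By way of illustration only, the following sketch shows the ray/plane triangulation computation just described under simplifying assumptions: a pinhole camera at the origin, a projector offset along the x-axis by a known baseline, and a single light plane at a known angle. The coordinate convention and all names and parameters are illustrative, not part of the disclosed apparatus.

```python
import numpy as np

def pixel_ray(u, v, fx, fy, cx, cy):
    """Unit ray through pixel (u, v) for a pinhole camera at the origin."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def triangulate_line_points(pixels, fx, fy, cx, cy, baseline, theta):
    """Intersect camera rays with the projected light plane.

    The light plane passes through the projector at (baseline, 0, 0) with
    normal n = (cos(theta), 0, -sin(theta)) (an illustrative convention).
    """
    n = np.array([np.cos(theta), 0.0, -np.sin(theta)])
    p0 = np.array([baseline, 0.0, 0.0])      # a point on the light plane
    points = []
    for (u, v) in pixels:
        d = pixel_ray(u, v, fx, fy, cx, cy)
        denom = n.dot(d)
        if abs(denom) < 1e-9:                # ray nearly parallel to plane
            continue
        t = n.dot(p0) / denom                # ray/plane intersection distance
        if t > 0:
            points.append(t * d)             # a 3-D surface point
    return np.asarray(points)
```

Accumulating such points over successive positions of the light plane yields the point cloud of vertices noted above.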
The surface data for surface contour characterization, also referred to as a surface data set, is obtained by a process that derives individual points from the structured images, typically in the form of a point cloud, wherein the individual points represent points along the surface of the imaged tooth or other feature. A close approximation of the surface object can be generated from a point cloud by connecting adjacent points and forming polygons, each of which closely approximates the contour of a small portion of the surface. Alternately, surface data can be obtained from the volumetric voxel data, such as data from a CBCT apparatus. Surface voxels can be identified and distinguished from voxels internal to the volume using threshold techniques or boundary detection using gray levels, for example. Thus, the term “surface” can be used to indicate data that is obtained either by processing volumetric data from a radiography-based system or as contour data acquired from a scanner or camera using structured or patterned light. While different file formats can be used to represent surface data, a number of systems that show surface features of various objects use the STL (STereoLithography) file format originally developed for 3-D computer-aided design systems.
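As a rough sketch of how adjacent points can be connected into polygons and stored, the following assumes the surface points are organized on the scanner's image grid (an H×W×3 array); the grid-to-triangle rule and the file-writing helper are illustrative only, not the disclosed implementation.

```python
import numpy as np

def grid_to_triangles(points):
    """Connect adjacent points of an HxWx3 organized point grid into
    triangles, two per grid cell, approximating the surface contour."""
    h, w, _ = points.shape
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            p00, p01 = points[r, c], points[r, c + 1]
            p10, p11 = points[r + 1, c], points[r + 1, c + 1]
            tris.append((p00, p10, p01))
            tris.append((p01, p10, p11))
    return tris

def write_ascii_stl(tris, path, name="dentition_surface"):
    """Write triangles in the ASCII STL format mentioned above."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for a, b, c in tris:
            n = np.cross(b - a, c - a)           # facet normal
            n = n / (np.linalg.norm(n) or 1.0)   # guard degenerate facets
            f.write(f" facet normal {n[0]:e} {n[1]:e} {n[2]:e}\n")
            f.write("  outer loop\n")
            for p in (a, b, c):
                f.write(f"   vertex {p[0]:e} {p[1]:e} {p[2]:e}\n")
            f.write("  endloop\n endfacet\n")
        f.write(f"endsolid {name}\n")
```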
Embodiments of the present disclosure not only allow for updating of mesh 140, but also allow for its expansion according to structured light image data over areas adjacent to treatment region R.
Update of the existing mesh 140 can also be accomplished in a similar way to extension of the mesh.
In certain exemplary embodiments, the existing mesh 140 can be updated when a newly acquired 3-D image (e.g., newly acquired 3-D image 142) partly overlaps with the 3-D surface of the existing mesh 140, by augmenting the existing mesh 140 with the portion of the newly acquired 3-D image that does not overlap with the existing mesh 140. Further, when the newly acquired 3-D image completely overlaps with the existing mesh 140, the existing mesh 140 can be updated in real time by replacing the corresponding portion of the existing mesh 140 with the contents of the newly acquired 3-D image. In other words, complete overlap occurs when the newly acquired 3-D image falls within the boundaries of the existing mesh 140 or completely covers a portion of the existing mesh that is totally included within the boundaries of the existing mesh 140. In one embodiment, the corresponding portion of the existing mesh 140 that was replaced no longer contributes to the updated mesh 140.
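The partial/complete overlap rule can be summarized in code. The sketch below represents the mesh and the newly acquired 3-D image as point sets and uses a nearest-neighbor distance test as a stand-in for the overlap determination; the tolerance value and the point-set representation are assumptions for illustration, not the disclosed implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def integrate_scan(mesh_pts, scan_pts, tol=0.2):
    """Fold a newly acquired 3-D image (point set scan_pts, Nx3) into the
    existing mesh point set (Mx3). `tol` is an assumed proximity threshold
    (mm) deciding whether a scanned point overlaps the existing surface."""
    near_mesh = cKDTree(mesh_pts).query(scan_pts)[0] < tol
    if not near_mesh.any():
        return mesh_pts                      # no overlap: cannot register scan
    if not near_mesh.all():
        # Partial overlap: extend the mesh with only the new,
        # non-overlapping portion of the scan.
        return np.vstack([mesh_pts, scan_pts[~near_mesh]])
    # Complete overlap: replace the corresponding portion of the mesh with
    # the new scan, so the stale points no longer contribute.
    stale = cKDTree(scan_pts).query(mesh_pts)[0] < tol
    return np.vstack([mesh_pts[~stale], scan_pts])
```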
In certain exemplary method and/or apparatus embodiments, determining a position of an intraoral scanner relative to the existing mesh 140 in real time can be performed by comparing the size and the shape of the overlap to the cross-section of the field of view of the intraoral scanner. Preferably, the size and the shape of the overlap of a newly acquired 3-D image are used to determine the distance and the angles from which the newly acquired 3-D image was acquired relative to the 3-D surface of the existing mesh 140.
In one exemplary embodiment, determining a position of an intraoral scanner relative to the existing mesh 140 in real time can preferably be performed when at least 50% of the newly acquired 3-D image overlaps the existing mesh 140. However, in certain exemplary method and/or apparatus embodiments, this determination can be performed when 20%-100% of the newly acquired 3-D image overlaps the existing mesh 140. In some exemplary embodiments, the determination can be performed when greater than 75% or greater than 90% of the newly acquired 3-D image overlaps the existing mesh 140.
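To illustrate how pose estimation can be gated by the overlap fraction, the sketch below uses a generic nearest-neighbor rigid alignment (the Kabsch method) as a stand-in for the size-and-shape comparison against the scanner's field-of-view cross-section described above; the thresholds and tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_scan_pose(mesh_pts, scan_pts, min_fraction=0.5, tol=0.2):
    """One overlap-gated rigid alignment step.

    Returns (R, t) mapping scan coordinates into mesh coordinates, or None
    when too little of the new image overlaps the existing mesh (50% is the
    preferred gate above; values down to ~20% may still be workable)."""
    dists, idx = cKDTree(mesh_pts).query(scan_pts)
    inliers = dists < tol                    # points that overlap the mesh
    if inliers.mean() < min_fraction:
        return None                          # not enough overlap to register
    src = scan_pts[inliers]
    dst = mesh_pts[idx[inliers]]
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                 # guard against reflection
        vt[-1] *= -1
        r = vt.T @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t
```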
The capability to generate, extend, and update the mesh 140 can be provided by a scanner that is coupled to the surgical instrument itself, as described in more detail subsequently. This arrangement enables real-time information to be acquired and related to the surgical site within the treatment area and/or position of the surgical instrument relative to the mesh and/or practitioner. Continuous tracking of this information enables visualization tools associated with the treatment system to display timely instructional information for the practitioner.
An embodiment of the present disclosure can be used for providing assistance according to a surgical treatment plan, such as an implant plan that has been developed using existing volume image content and a set of 2-D contour images of the patient. Implant planning, for example, uses image information in order to help determine the location of an implant fixture relative to nearby teeth and to structures in and around the jaw, including nerve, sinus, and other features. Software utilities for generating an implant plan or other type of surgical plan are known to those skilled in the surgical arts and have recognized value for helping to identify the position, dimensions, hole size and orientation, and overall geometry of an incision, implant, prosthetic device, or other surgical feature. Surgical treatment plans can be displayed as a reference to the practitioner during a procedure, such as on a separate display monitor that is viewable to the practitioner. However, conventional display approaches have a number of noteworthy limitations. Among the problems with conventional surgical plan display is the need to focus somewhere other than on the patient; the practitioner must momentarily look away from the incision or drill site in order to view the referenced surgical plan. Additionally, the plan is not updated once the procedure begins, so that the displayed information can become increasingly less accurate, such as where surface material is removed or moved aside. An embodiment of the present disclosure addresses these problems by providing surgical plan data that is continuously updated using ongoing surface scanning, together with augmented reality display tools registered to the field of view of the practitioner.
The implant plan can initially use 3-D information from both volumetric imaging, such as from a CBCT apparatus, and surface contour imaging, such as from a structured light scanning device. The two sets of data, volumetric and surface contour, registered relative to each other and to the initial implant plan, can give the practitioner useful information related to both visible surfaces and tissue hidden beneath the surface. Advantageously, as execution of the plan progresses, embodiments of the present disclosure allow recomputation and updating of the displayed surface, based on work performed by the practitioner.
HMD devices and related wearable devices that have cameras, sensors, and other integrated components are known in the art and are described, for example, in U.S. Pat. Nos. 6,091,546 to Spitzer et al.; 8,582,209 to Amirparviz; 8,576,276 to Bar-Zeev et al.; and in U.S. Patent Application Publication 2013/0038510 to Brin et al. HMD devices are capable of superimposing image content onto the field of view of the wearer, so that virtual or computer-generated image content appears to the viewer along with the real-world object that lies in the field of view, such as a tooth or other anatomy.
For the superimposition of computer-generated image features as virtual images from the surgical plan onto the real-world view of the patient's mouth in field of view 124, the virtual image content must be registered to the patient anatomy as seen from the perspective of the practitioner.
Registration of mesh content with the field of view can be performed by the apparatus described herein.
According to an embodiment of the present disclosure, a registration sequence is provided, in which the practitioner follows initial procedural instructions for setting up registration coordinates, such as to scan the region of interest using an intra-oral camera 24.
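Once such a registration is established, expressed for example as a rotation R and translation t from model coordinates to the coordinates of the HMD's scene camera, superimposing a planned feature reduces to projecting its 3-D coordinates into the detected field of view. A minimal sketch follows, assuming a pinhole model for the scene camera; the intrinsics and names are illustrative assumptions.

```python
import numpy as np

def project_to_view(points_3d, r, t, fx, fy, cx, cy):
    """Project registered 3-D plan features (e.g., the two endpoints of a
    planned drill axis, Nx3) into image coordinates of the practitioner's
    detected field of view."""
    cam = (r @ points_3d.T).T + t            # model -> camera coordinates
    u = fx * cam[:, 0] / cam[:, 2] + cx      # perspective projection
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)          # Nx2 pixel positions
```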
Progress indicators can be provided by highlighting a particular tooth or treatment area of the mouth or other anatomy by the display of overlaid image content generated from processor 90.
According to an embodiment of the present disclosure, progress indicators are provided by overlaid virtual images according to system tracking of treatment progress at the surgical site. For drilling a tooth, image content can show the practitioner features such as drill location, drill axis, depth still needed according to the surgical plan, and completed depth thus far, for example. As the drill nears the required depth, image content can be changed to reflect the treatment status and thus help to prevent the practitioner from drilling too deeply. Display color can be used, for example, to indicate when drilling is near-complete or complete. Display color can also be used to indicate proper angle of approach or drill axis and to indicate whether or not the current drill angular position is suitably aligned with the intended axis or should be adjusted.
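The color-coded status logic described above might be summarized as follows; the specific colors, depth margin, and angular tolerance are illustrative assumptions, not values taken from the disclosure.

```python
def drill_status_color(planned_depth_mm, drilled_depth_mm,
                       angle_error_deg, angle_tol_deg=2.0):
    """Pick a display color for the overlaid drill guidance following the
    color-coding idea above (all thresholds illustrative)."""
    if angle_error_deg > angle_tol_deg:
        return "red"                         # drill axis misaligned: adjust
    remaining = planned_depth_mm - drilled_depth_mm
    if remaining <= 0:
        return "blue"                        # target depth reached: stop
    if remaining < 0.5:
        return "orange"                      # near-complete: slow down
    return "green"                           # proceed per plan
```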
According to an embodiment, image content is superimposed on the practitioner's FOV only when treatment thresholds or limits are reached, such as when a drilled hole is at the target depth or when the angle of a drill or other instrument is incorrect. In one embodiment, deviation information can be registered onto the field of view and oriented to the field of view when the sensed position of a surgical instrument is contrary to the surgical treatment plan. Exemplary deviation information is a representation (e.g., orientation) of the surgical instrument and correction information in accordance with the surgical treatment plan, displayed in the practitioner's field of view and registered to the actual object as seen from the practitioner's field of view. With continual monitoring of the surgical site by the camera that is coupled with the surgical instrument, up-to-date information is available on treatment progress and can be refreshed continually so that treatment status can be reported with accuracy.
Real-time images from treatment region R in the practitioner's FOV can be obtained from a camera and from one or more image sensors provided in a number of different ways.
Other possible types of sensors that can be used to indicate instrument location or orientation include optical sensors, including sensors that employ lasers, and ultrasound sensors, as well as a range of mechanical, Hall effect, and other sensor types.
It has been noted that structured light imaging is only one of a number of methods for obtaining and updating surface contour information for intraoral features. Other methods that can be used include multi-view imaging techniques that obtain 3-D structural information from 2-D images of a subject, taken at different angles about the subject. Processing for multi-view imaging can employ a “structure-from-motion” (SFM) imaging technique, a range imaging method that is familiar to those skilled in the image processing arts. Multi-view imaging and some applicable structure-from-motion techniques are described, for example, in U.S. Patent Application Publication No. 2012/0242794 entitled “Producing 3D images from captured 2D video” by Park et al., incorporated herein in its entirety by reference. Other methods for characterizing the surface contour use focus or triangulation of surface features, such as by obtaining and comparing images taken at the same time from two different cameras at different angles relative to the subject treatment region.
Force monitoring can be applied to help indicate how much force should be applied, such as in order to extract a particular tooth, given information obtained through images of the tooth. Force monitoring can also help to track progress throughout the procedure. Sensing can be provided to help indicate when the practitioner should stop or change direction of an instrument, or when to stop to avoid other structures. Excessive force application can also be sensed and can cause the system to alert the practitioner to a potential problem. The system can exercise further control by monitoring and changing the status or speed of various tools according to detected parameters. Drill speed can be adjusted for various conditions or the drill or other instrument slowed or stopped according to status sensing and progress reporting. Radio-frequency (RF) sensing devices can also be used to help guide the orientation, positioning, and application of surgical and other instruments.
According to an embodiment of the present disclosure, the tool head of a drill or other surgical instrument 60 can be automatically swapped or otherwise moved in order to allow imaging of a surface 20 or element being treated. A telescopic extension can be provided to help limit or define the extent of depth or motion of a tool or instrument.
According to an alternate embodiment of the present disclosure, camera 154 and associated scanner 84 components can be clipped or otherwise coupled to a drill or other dental instrument 150.
Camera 154 and associated scanner 84 components can similarly be clipped to other types of dental instruments, such as probes, for example. Camera 154 and associated scanner 84 components can also be integrally designed into the drill or other instrument 150, so that it is an integral part of the dental instrument 150. Camera 154 can be separately energized from the dental instrument 150 so that image capture takes place with appropriate timing. Exemplary types of dental instruments 150 for coupling with camera 154 and associated scanner 84 components can include drills, probes, inspection devices, polishing devices, excavators, scalers, fastening devices, and plugging devices.
It should be noted that step S110 of
According to an aspect of the present embodiment, surgical instrument 60 (
The schematic views of
Image sensing circuitry 210 is provided by camera 154 of intra-oral scanner 84 that is coupled to instrument 60 control logic. The camera of sensing circuit 210 provides ongoing image capture and processing in order to generate and update mesh M. In certain exemplary embodiments, the mesh M can be updated in real time when a newly acquired 3-D contour image partly overlaps with the 3-D surface of the mesh M, by adding to the mesh M the portion of the newly acquired 3-D contour image that does not overlap with the mesh M. Further, the existing mesh M can be updated in real time by replacing the corresponding portion of the existing mesh M with the contents of a newly acquired 3-D contour image that completely overlaps with the existing mesh M. In one embodiment, the corresponding portion of the existing mesh M that was replaced no longer contributes to the updated mesh and can be stored for later use or discarded.
Projector 270 of scanner 84 directs a pattern P of light of a prescribed shape onto the surface of the treatment region R. In certain embodiments, determining a position of an intra-oral scanner 84 relative to the existing mesh M in real time can be performed by comparing the size and the shape of the overlap on the mesh M to the cross-section of the field-of-view of the intraoral scanner. Preferably, the size and the shape of the overlap (e.g., position of the projected light pattern P on the mesh M) of a newly acquired 3-D contour image is used to determine the distance and the angles from which the newly acquired 3-D contour image was acquired relative to the 3-D surface of the existing mesh M. In an alternative embodiment, combined information about relative distortion or deformation of size and shape of the projected pattern P of light and the detected surface contour of the mesh M within pattern P allow calculation of distance d between projector 270 and the surface and calculation of the angle of instrument 60 relative to a normal N to a reference point on the surface or other angular reference. For example, the outline of projected pattern P is distorted according to the deviation of projector 270 angle from normal, as well as according to the varying slope and contour of the surface. For example, the light beam that forms projected pattern P can have a rectangular or circular cross-section as output from projector 270. However, the distortion of the pattern P outline on the surface can be used to compute distance and angle that indicates the position of intra-oral scanner 84, taking into account the slope and features of the imaged surface.
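A much-simplified sketch of this distance-and-angle computation follows, assuming a locally flat surface, a rectangular pattern of known divergence, and tilt about a single axis; the geometric model and all parameter names are illustrative assumptions, not the disclosed computation.

```python
import numpy as np

def scanner_distance_and_tilt(patch_w, patch_h, fov_half_angle,
                              aspect_nominal):
    """Infer scanner standoff and surface tilt from the size and shape of
    the projected pattern's footprint on the mesh.

    patch_w, patch_h -- measured footprint extents on the mesh (mm)
    fov_half_angle   -- half-angle of the pattern's width (radians)
    aspect_nominal   -- footprint height/width ratio at normal incidence
    """
    # Standoff: at distance d, the pattern width is 2*d*tan(half-angle).
    distance = patch_w / (2.0 * np.tan(fov_half_angle))
    # Tilt about the width axis stretches the footprint's height by
    # roughly 1/cos(tilt), so the measured aspect ratio encodes the angle.
    stretch = (patch_h / patch_w) / aspect_nominal
    tilt = np.arccos(np.clip(1.0 / stretch, -1.0, 1.0))
    return distance, np.degrees(tilt)
```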
In certain exemplary method and/or apparatus embodiments for updating display of a dentition to a practitioner, first, 3-D surface contour image content such as a 3-D mesh and/or radiographic volume image content such as a 3-D volume reconstruction that includes a dentition treatment region can be obtained. Then, the 3-D surface contour image content and the radiographic volume image content can be combined into a single 3-D virtual model that includes the dentition treatment region. Next, the practitioner's field of view can be detected, and at least a portion of the single 3-D virtual model can be displayed, preferably superimposed onto and oriented to the practitioner's field of view so as to be registered to the actual dentition treatment region as seen from the practitioner's field of view. Next, or concurrently with the previous steps, a surgical treatment plan related to the dentition treatment region can be obtained and preferably displayed by corresponding virtual image data in the practitioner's field of view.
Then, repeatedly and preferably in real time, the 3-D surface of the dentition treatment region is updated by replacing the corresponding portion of the 3-D surface of the dentition treatment region with the contents of newly acquired 3-D images of the dentition treatment region that comprise physical dental objects in the dentition treatment region, taken from different points of view using a 3-D intra-oral scanning device. In one embodiment, the replaced corresponding portion of the 3-D surface of the dentition no longer contributes to the updated 3-D surface. Concurrently, the position of a surgical instrument, preferably mounted to the 3-D intra-oral scanning device, is determined relative to the single 3-D virtual model and can be displayed, for example by corresponding virtual image data in the practitioner's field of view. Also concurrently, the superimposed single 3-D virtual model can be updated according to the surgical treatment plan and continuously or intermittently displayed in the practitioner's field of view, registered to actual objects in the dentition treatment region as seen from the practitioner's field of view.
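The overall cycle described in the preceding paragraphs can be summarized as a loop. Every name in the sketch below is an illustrative placeholder for the corresponding step described above, not a defined API of the disclosed system.

```python
def guidance_loop(scanner, tracker, display, virtual_model, plan):
    """Top-level update cycle (all names are hypothetical placeholders)."""
    while plan.in_progress():
        scan = scanner.acquire()                         # new 3-D contour image
        virtual_model.replace_overlapping_surface(scan)  # real-time mesh update
        pose = tracker.instrument_pose(scan)             # instrument vs. model
        fov = tracker.practitioner_fov()                 # detected field of view
        display.render(virtual_model.registered_to(fov), pose, plan)
        if plan.deviation(pose):                         # contrary to plan?
            display.show_deviation(plan.correction(pose), fov)
```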
Further, deviation information can be provided to the practitioner, superimposed onto the practitioner's field of view by corresponding virtual image data oriented to the field of view, when the sensed position of a surgical instrument is contrary to the surgical treatment plan. In one embodiment, the deviation information can be an orientation of the surgical instrument and correction information in accordance with the surgical treatment plan, displayed in the practitioner's field of view and registered to the actual dentition treatment region as seen from the practitioner's field of view.
Deviation information can also be provided for other types of guided dental surgery and treatment plans. For example, the deviation information can include information related to and/or necessary to guide a surgical dental instrument to the entrance of a root canal of a selected tooth, or information related to and/or necessary to excavate the root canal, such as the position, angle, and orientation of the surgical dental instrument. Additional deviation information can relate to other dental practice areas, including endodontics and restorations.
In the context of the present disclosure, the term “camera” relates to a device that is enabled to acquire a reflectance, 2D digital image from reflected visible or NIR (near-infrared) light, such as structured light that is reflected from the surface of teeth and supporting structures.
Exemplary method and/or apparatus embodiments of the present disclosure provide depth-resolved volume imaging for obtaining signals that characterize the surfaces of teeth, gum tissue, and other intraoral features where saliva, blood, or other fluids may be present. Depth-resolved imaging techniques are capable of mapping surfaces as well as subsurface structures up to a certain depth. Using certain exemplary method and/or apparatus embodiments of the present disclosure can provide the capability to identify fluid within a sample, such as saliva on and near tooth surfaces, and to compensate for fluid presence and reduce or eliminate distortion that could otherwise corrupt surface reconstruction.
Descriptions of the present invention will be given in terms of an optical coherence tomography (OCT) imaging system. The invention can also be implemented using photo-acoustic or ultrasound imaging systems. For more detailed information on photo-acoustic and ultrasound imaging, reference is made to Chapter 7, “Handheld Probe-Based Dual Mode Ultrasound/Photoacoustics for Biomedical Imaging” by Mithun Kuniyil Ajith Singh, Wiendelt Steenbergen, and Srirang Manohar, in “Frontiers in Biophotonics for Translational Medicine”, pp. 209-247. Reference is also made to an article by Minghua Xu and Lihong V. Wang, entitled “Photoacoustic imaging in biomedicine”, Review of Scientific Instruments 77 (2006), pp. 041101-1 to 041101-21.
Depending on the type of excitation and response signals, detection circuitry 1860 accordingly processes the light signal for OCT or the acoustic signal for ultrasound and photo-acoustic imaging.
It should be noted that the B-scan drive signal 2192 drives the actuatable scanning mechanics, such as a galvanometer or micro-electromechanical (MEMS) mirror, for the raster scanner of the OCT probe 1846.
From the above description, it can be appreciated that a significant amount of data is acquired over a single B-scan sequence. In order to process this data efficiently, a Fast Fourier Transform (FFT) is used, transforming the spectral-based signal data to corresponding spatial-based data from which image content can more readily be generated.
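A minimal sketch of this spectral-to-spatial transformation follows, turning one acquired spectrum into a depth profile (an A-scan, as described next). It assumes the spectrum has already been resampled to be uniform in wavenumber, as Fourier-domain OCT requires; the windowing choice and names are illustrative.

```python
import numpy as np

def a_scan_from_spectrum(spectrum):
    """One spectral interference measurement -> one depth-resolved line."""
    fringe = spectrum - spectrum.mean()          # remove the DC term
    fringe = fringe * np.hanning(fringe.size)    # apodize to reduce sidelobes
    return np.abs(np.fft.rfft(fringe))           # magnitude vs. depth

def b_scan(spectra):
    """Stack A-scans from one B-scan sweep into a 2-D OCT image."""
    return np.stack([a_scan_from_spectrum(s) for s in spectra], axis=1)
```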
In Fourier-domain OCT, the A-scan corresponds to one line of spectrum acquisition, which generates one line of depth-resolved (z-axis) OCT signal. The B-scan data generate a 2-D OCT image as a row R along the corresponding scanned line. Raster scanning is used to obtain multiple B-scan data by incrementing the raster scanner acquisition in the C-scan direction.
For ultrasound and for photo-acoustic imaging apparatus 1800, the probe 1846 transducer for signal feedback must be acoustically coupled to sample T, such as using a coupling medium. The acoustic signal that is acquired typically goes through various gain control and beam-forming components, then through signal processing for generating display data.
Embodiments of the present disclosure use depth-resolved imaging techniques to help counteract the effects of fluid in intraoral imaging, allowing 3D surface reconstruction without introducing distortion due to fluid content within the intraoral cavity. However, in order to effectively account for and compensate for fluid within the mouth, some problems remain to be addressed when using the 3D imaging methods described herein.
Among the problems with the imaging modalities described for 3D surface imaging is the shift of image content due to light or sound propagation in fluid. With either OCT or ultrasound methods, the retro-reflected signals from the imaged features provide information resolvable to different depth layers, depending on the relative time of flight of light or sound. Thus, the round-trip propagation path length of light or sound within the fluid can cause some amount of distortion due to differences between propagation speeds of light or sound in fluid and in air. OCT can introduce a position shift due to the refractive index difference between the surrounding fluid medium and air. The shift is 2Δnd, wherein Δn is the difference in refractive index between fluid and air and d is the thickness of the fluid. The factor 2 arises from the round-trip propagation of light through distance d.
Similarly, ultrasound has a shift effect caused by a change in the speed of sound in the fluid. The calculated shift is Δc×2d, wherein Δc is the speed difference of sound between air and fluid.
Photoacoustic imaging relies on pulsed light energy to stimulate thermal expansion of probed tissue in the sample. The excitation points used are the locations of the acoustic sources. Photoacoustic devices capture these acoustic signals and reconstruct the 3-D depth-resolved signal depending on the receiving time of the sound signals. If the captured signal is from the same path as the light, then the depth shift is Δc×d, where Δc is the speed difference of sound between air and fluid and d is the thickness of the fluid.
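For the OCT case, where the round-trip relationship above is explicit, the shift and its removal can be sketched as follows. The numeric values are illustrative only (a refractive-index difference of roughly 0.33 between water and air), and estimating the fluid thickness d is the segmentation and calibration task described below.

```python
def oct_depth_shift(delta_n, fluid_thickness):
    """Apparent OCT depth shift: round-trip optical path excess 2*dn*d,
    with dn the refractive-index difference (fluid vs. air) and d the
    fluid thickness, per the relationship stated above."""
    return 2.0 * delta_n * fluid_thickness

def corrected_depth(measured_depth, fluid_thickness, delta_n):
    """Remove the fluid-induced shift from a measured depth, assuming the
    probed point lies below a fluid layer of known (estimated) thickness."""
    return measured_depth - oct_depth_shift(delta_n, fluid_thickness)

# Example: dn ~ 0.333 over 0.5 mm of saliva shifts the apparent surface
# position by about 0.33 mm.
print(oct_depth_shift(0.333, 0.5))   # -> 0.333 (mm)
```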
The thickness of the region is determined through a calibrated relationship between the coordinate system inside the OCT probe and the physical coordinates of the teeth, dependent on the optical arrangement and scanner motion inside the probe. Geometric calibration data are obtained separately by using a calibration target of a given geometry. Scanning of the target and obtaining the scanned data establishes a basis for adjusting the registration of scanned data to 3D space and compensating for errors in scanning accuracy. The calibration target can be a 2D target, imaged at one or more positions, or a 3D target.
Various image segmentation algorithms can be used for the processing described above.
Processing for photoacoustic and ultrasound imaging is similar to that described above for OCT.
Consistent with one embodiment, the present disclosure utilizes a computer program with stored instructions that control system functions for image acquisition and image data processing for image data that is stored and accessed from an electronic memory. As can be appreciated by those skilled in the image processing arts, a computer program of an embodiment of the present disclosure can be utilized by a suitable, general-purpose computer system, such as a personal computer or workstation that acts as an image processor, when provided with a suitable software program so that the processor operates to acquire, process, and display data as described herein. Many other types of computer systems architectures can be used to execute the computer program of the present disclosure, including an arrangement of networked processors, for example.
The computer program for performing the method of the present disclosure may be stored in a computer readable storage medium. This medium may comprise, for example: magnetic storage media such as a magnetic disk (for example, a hard drive or removable device) or magnetic tape; optical storage media such as an optical disc, optical tape, or machine-readable optical encoding; solid-state electronic storage devices such as random access memory (RAM) or read-only memory (ROM); or any other physical device or medium employed to store a computer program. The computer program for performing the method of the present disclosure may also be stored on a computer readable storage medium that is connected to the image processor by way of the internet or another network or communication medium. Those skilled in the image data processing arts will further readily recognize that the equivalent of such a computer program product may also be constructed in hardware.
It is noted that the term “memory”, equivalent to “computer-accessible memory” in the context of the present disclosure, can refer to any type of temporary or more enduring data storage workspace used for storing and operating upon image data and accessible to a computer system, including a database. The memory could be non-volatile, using, for example, a long-term storage medium such as magnetic or optical storage. Alternately, the memory could be of a more volatile nature, using an electronic circuit, such as random-access memory (RAM) that is used as a temporary buffer or workspace by a microprocessor or other control logic processor device. Display data, for example, is typically stored in a temporary storage buffer that is directly associated with a display device and is periodically refreshed as needed in order to provide displayed data. This temporary storage buffer can also be considered to be a memory, as the term is used in the present disclosure. Memory is also used as the data workspace for executing and storing intermediate and final results of calculations and other processing. Computer-accessible memory can be volatile, non-volatile, or a hybrid combination of volatile and non-volatile types.
It is understood that the computer program product of the present disclosure may make use of various image manipulation algorithms and processes that are well known. It will be further understood that the computer program product embodiment of the present disclosure may embody algorithms and processes not specifically shown or described herein that are useful for implementation. Such algorithms and processes may include conventional utilities that are within the ordinary skill of the image processing arts. Additional aspects of such algorithms and systems, and hardware and/or software for producing and otherwise processing the images or co-operating with the computer program product of the present disclosure, are not specifically shown or described herein and may be selected from such algorithms, systems, hardware, components and elements known in the art.
Exemplary embodiments according to the application can include various features described herein, individually or in combination.
While the invention has been illustrated with respect to one or more implementations, alterations and/or modifications can be made to the illustrated examples without departing from the spirit and scope of the appended claims. In addition, while a particular feature of the invention may have been disclosed with respect to only one of several implementations, such feature can be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular function. Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims, and all changes that come within the meaning and range of equivalents thereof are intended to be embraced therein.
Number | Date | Country | Kind
---|---|---|---
PCT/IB2016/000325 | Feb 2016 | IB | international

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2017/054260 | 2/23/2017 | WO | 00