The present invention relates generally to three-dimensional imaging, and more particularly to intraoral three-dimensional imaging.
Digital dental impressions utilize intraoral scanning to generate three-dimensional digital models of an intraoral three-dimensional surface of a subject. Digital intraoral scanners often use structured light three-dimensional imaging. The surface of a subject's teeth may be highly reflective and somewhat translucent, which may reduce the contrast in the structured light pattern reflecting off the teeth.
US 2019/0388193 to Saphier et al., which is assigned to the assignee of the present application and is incorporated herein by reference, describes an apparatus for intraoral scanning including an elongate handheld wand that has a probe. One or more light projectors and two or more cameras are disposed within the probe. The light projectors each have a pattern generating optical element, which may use diffraction or refraction to form a light pattern. Each camera may be configured to focus between 1 mm and 30 mm from a lens that is farthest from the camera sensor. Other applications are also described.
US 2019/0388194 to Atiya et al., which is assigned to the assignee of the present application and is incorporated herein by reference, describes a handheld wand including a probe at a distal end of the elongate handheld wand. The probe includes a light projector and a light field camera. The light projector includes a light source and a pattern generator configured to generate a light pattern. The light field camera includes a light field camera sensor, which includes an image sensor having an array of sensor pixels, and an array of micro-lenses disposed in front of the image sensor such that each micro-lens is disposed over a sub-array of the sensor pixels. Other applications are also described.
US 2020/0404243 to Saphier et al., which is assigned to the assignee of the present application and is incorporated herein by reference, describes a method for generating a 3D image, including driving structured light projector(s) to project a pattern of light on an intraoral 3D surface, and driving camera(s) to capture images, each image including at least a portion of the projected pattern, each one of the camera(s) comprising an array of pixels. A processor compares a series of images captured by each camera and determines which of the portions of the projected pattern can be tracked across the images. The processor constructs a three-dimensional model of the intraoral three-dimensional surface based at least in part on the comparison of the series of images. Other embodiments are also described.
Applications of the present invention include systems and methods related to calibration of the optical system of an intraoral scanner. Typically, the intraoral scanner includes an elongate wand (e.g., an elongate handheld wand) with a probe at a distal end of the wand. The probe has a transparent window through which light rays enter and exit the probe. One or more cameras are disposed within the probe and arranged within the probe such that the one or more cameras receive rays of light from an intraoral cavity in a non-central manner, i.e., the relationship between points in 3D world-coordinate space and corresponding points on the camera sensors of the one or more cameras is described by a set of camera rays for which there is no single point in space through which all of the camera rays pass (in contrast, for example, to a pinhole camera model). The non-central manner in which the one or more cameras receive the rays of light from the intraoral cavity introduces image distortion in scanned images captured by the intraoral scanner. In order to accurately digitally reconstruct the scanned intraoral surface, a computer processor generates a three-dimensional model of the intraoral surface based on the scanned images from the one or more cameras, using a reconstruction algorithm that compensates for the image distortion specifically introduced by the non-central manner in which the one or more cameras receive the rays of light from the intraoral cavity, as further described hereinbelow.
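By way of non-limiting illustration, the following sketch (in Python, with hypothetical array names) contrasts a central pinhole model, in which all camera rays pass through a single shared origin, with the per-pixel ray table implied by the non-central model described herein:

```python
import numpy as np

# Pinhole (central) model: every pixel ray passes through one shared origin.
def pinhole_ray(u, v, fx, fy, cx, cy):
    direction = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return np.zeros(3), direction / np.linalg.norm(direction)

# Non-central model: each pixel (u, v) has its own ray origin and direction,
# taken from a calibration-derived lookup table (one entry per sensor pixel).
def non_central_ray(u, v, origins, directions):
    return origins[v, u], directions[v, u]  # no single point shared by all rays
```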
For some applications, the compensation for the image distortion specifically introduced by the non-central manner in which the one or more cameras receive the rays of light from the intraoral cavity is performed by analyzing the scanning data, e.g., the scanned images, using camera calibration data indicating a set of camera rays respectively corresponding to points (u,v) on a sensor of the one or more cameras, each of the camera rays having a distinct origin and direction. To generate the camera calibration data, calibration images of a 2D camera calibration target having a plurality of distinct calibration features are captured while the 2D camera calibration target is disposed at a respective plurality of distances, in a given direction z in space. A relationship is modeled between (a) points in 3D space defined by an x,y,z coordinate system and (b) corresponding points (u,v) on a sensor of the one or more cameras, as a set of camera rays, using a model in which (x,y) as a function of (u,v) varies linearly with distance along z, as further described hereinbelow.
It is known in the field of intraoral scanning to place a sleeve having a transparent sleeve-window over the probe of an intraoral scanner, e.g., for hygienic reasons. For some applications, the entire sleeve is transparent, such that the sleeve-window refers to the specific area of the transparent sleeve which lines up with the transparent window of the probe when the sleeve is placed over the probe. For some applications, the sleeve itself is not transparent, but has a transparent sleeve-window which lines up with the transparent window of the probe when the sleeve is placed over the probe. The transparent sleeve-window through which light rays exit and enter the probe during scanning typically changes the overall optical conditions. In order to achieve high-accuracy 3D digital reconstruction of the scanned intraoral surface, the changes in the optical conditions due to the presence of a sleeve are accounted for during a calibration process of the intraoral scanner. The inventors have realized that if the optical changes caused by the addition of a sleeve are known, e.g., by knowing the optical properties of a sleeve-window and how they affect light rays passing through the sleeve-window, then the intraoral scanner may be initially calibrated without the presence of a sleeve, and the calibration data mathematically updated to account for the presence of a sleeve-window. As further described hereinbelow, this reduces the complexity of the physical calibration system and provides the flexibility to generate calibration data for a plurality of different sleeve-windows, each with differing optical properties, all based on initial calibration data acquired without the presence of an additional calibration window representing a sleeve.
For some applications, the intraoral scanner includes an elongate handheld wand used to obtain scanning data of an intraoral surface. The elongate handheld wand includes (i) a probe at a distal end of the handheld wand, the probe having a transparent window, (ii) one or more cameras disposed within the probe and arranged to receive light from the intraoral surface through the transparent window of the probe, and (iii) one or more structured light projectors arranged to project structured light onto the intraoral surface through the transparent window of the probe. Typically, the one or more cameras and the one or more structured light projectors go through an initial calibration during manufacturing in order to obtain initial calibration data. The initial calibration data for the one or more cameras and/or the one or more structured light projectors may be updated mathematically based on a known optical property of a given sleeve-window, as further described hereinbelow. For some applications, during a scan, a computer processor (i) receives the initial calibration data for the one or more cameras and/or the one or more structured light projectors, (ii) receives an indication, e.g., input from a dentist, of the presence of a sleeve-window positioned between the intraoral surface and the transparent window of the probe, and (iii) accesses the updated model of the initial calibration data for the combination of the transparent window of the probe and the sleeve-window.
For some applications, images of the intraoral surface are captured under non-structured light, e.g., broad spectrum light and/or Near Infra-Red (NIR). Typically, the elongate handheld wand of the intraoral scanner includes one or more non-structured illumination sources disposed within the handheld wand. Ideally, in order to capture images of the intraoral surface under non-structured light, the non-structured light should uniformly illuminate the intraoral surface. However, due to the close proximity of the scanned surface to the probe, the illumination from the one or more non-structured light sources is incident on the intraoral surface in a non-uniform manner, i.e., images of the intraoral surface are captured using the one or more cameras under non-uniform illumination from the one or more non-structured illumination sources.
In order to compensate for the non-uniformity of the illumination, for each manufactured intraoral scanner, a mathematical model of the illumination from the specific one or more non-structured illumination sources of the intraoral scanner is calculated and stored as part of the initial calibration data for the intraoral scanner. A computer processor analyzes the images captured by the one or more cameras under the non-uniform illumination from the one or more non-structured illumination sources and compensates for the non-uniformity of the illumination using the calibration data generated based on the mathematical model of the illumination from the specific one or more non-structured illumination sources of the apparatus. For some applications, the mathematical model of the illumination includes the location of each of the one or more non-structured illumination sources as seen in a 3D world-coordinate space by the one or more cameras. For some applications, the mathematical model of the illumination also includes a measure of relative illumination (e.g., vignetting) for each of the one or more cameras, and an estimated illumination intensity-per-angle emitted from each of the one or more non-structured illumination sources.
Alternatively or additionally, for some applications, the computer processor analyzes images captured by the one or more cameras under the non-uniform illumination from the one or more non-structured illumination sources and compensates for the non-uniformity of the illumination using calibration data indicating an amount of light received at each point (u,v) on a sensor of the one or more cameras for different respective distances from the one or more cameras. To generate the calibration data, calibration images are captured, using the one or more cameras, of a 2D calibration target. The capturing of the calibration images is performed while the 2D calibration target is disposed at a respective plurality of distances, in a given direction z in space, from the one or more cameras. A mathematical function is fit to the calibration images corresponding to an amount of light received at each point (u,v) on a sensor of the one or more cameras for each of the respective plurality of distances in the z direction.
Typically, there are a plurality of different types of illumination sources in the intraoral scanner, e.g., structured light projectors utilizing lasers in different colors, broad spectrum LEDs, and NIR LEDs. In order to achieve high accuracy scanning, small deviations in the camera optics for different respective wavelengths, e.g., chromatic aberrations, are accounted for in the initial calibration process of the one or more cameras. Thus, for some applications, the initial calibration of one or more cameras of the intraoral scanner is repeated under illumination from each type of illumination that may be used during scanning. As described hereinabove, to calibrate the one or more cameras a 2D camera calibration target having a plurality of distinct calibration features is used, e.g., a checkerboard pattern with additional unique markings such as letters and numbers. During calibration of the one or more cameras, for each repetition of the calibration the 2D camera calibration target is uniformly backlit with one of the various types of illumination that the one or more cameras may encounter during intraoral scanning, such that the initial camera calibration data includes calibration data for each wavelength the one or more cameras may encounter during intraoral scanning.
For some applications, in order to simplify the calibration system, a single uniform illumination source includes multiple arrays of light emitting diodes (LEDs), each array having one of the various different wavelengths. The LEDs of each of the respective arrays are spaced such that when each of the respective arrays is activated individually, the uniform illumination source uniformly illuminates the 2D camera calibration target. For some applications, the uniform illumination source includes (i) a first array of light emitting diodes (LEDs), each LED of the first array having a first wavelength between 400 and 700 nanometers, (ii) a second array of LEDs interleaved with the first array, each LED of the second array having a second wavelength between 400 and 700 nanometers, the second wavelength distinct from the first wavelength, (iii) a third array of LEDs interleaved with the first and second arrays, each LED of the third array having a third wavelength between 800 and 2500 nanometers, and (iv) a fourth array of LEDs interleaved with the first, second, and third arrays, each LED of the fourth array emitting broadband light.
There is therefore provided, in accordance with some applications of the present invention, apparatus for intraoral scanning, the apparatus including:
For some applications, the one or more cameras are rigidly fixed within the probe.
For some applications, the probe includes a transparent window, and the non-central manner in which the one or more cameras receive the rays of light from the intraoral cavity is due to the rays of light passing through the transparent window of the probe.
For some applications, for each of the one or more cameras, an angle between an optical axis of the camera and an axis that is normal to the transparent window is 7-45 degrees.
There is further provided, in accordance with some applications of the present invention, a method for intraoral scanning, the method including:
For some applications, modeling the relationship includes modeling the set of rays as a function that takes a given u,v,z and outputs a corresponding x,y, the function in the form of:
F(u,v,z)=G1(u,v)+z*G2(u,v)
There is further provided, in accordance with some applications of the present invention, apparatus for intraoral scanning, the apparatus including:
For some applications, modeling the relationship includes modeling the set of rays as a function that takes a given u,v,z and outputs a corresponding x,y, the function in the form of:
F(u,v,z)=G1(u,v)+z*G2(u,v)
There is further provided, in accordance with some applications of the present invention, a method for intraoral scanning, the method including:
For some applications, the optical property of the sleeve-window includes a thickness of the sleeve-window and an index of refraction of the sleeve-window.
For some applications:
For some applications:
For some applications:
There is further provided, in accordance with some applications of the present invention, apparatus for intraoral scanning, the apparatus including:
For some applications, the computer processor is configured to compensate for the non-uniformity of the illumination using calibration data generated based on the mathematical model of the illumination from the specific one or more non-structured light sources of the apparatus, the mathematical model including the location of each of the one or more non-structured illumination sources as seen in a 3D world-coordinate space by the one or more cameras via images of a reflective calibration target.
For some applications, the computer processor is configured to compensate for the non-uniformity of the illumination using calibration data generated based on the mathematical model of the illumination from the specific one or more non-structured light sources of the apparatus, the mathematical model including the location of each of the one or more non-structured illumination sources as seen in a 3D world-coordinate space by the one or more cameras via images of the reflective calibration target that are acquired prior to the apparatus being packaged for commercial sale.
For some applications, the computer processor is configured to update the mathematical model of the illumination from the specific one or more non-structured light sources of the apparatus after a given use of the handheld wand to scan an intraoral surface of a patient and before a subsequent use of the handheld wand to scan an intraoral surface of a patient.
For some applications, the mathematical model of the illumination from the specific one or more non-structured illumination sources of the apparatus includes an estimated illumination intensity-per-angle emitted from each of the one or more non-structured illumination sources.
For some applications, the estimated illumination intensity-per-angle emitted from each of the one or more non-structured illumination sources is estimated based on calibration images captured using the one or more cameras of a diffusive calibration target illuminated with the one or more non-structured illumination sources.
For some applications, the computer processor is configured to further compensate for the non-uniformity of the illumination of the one or more non-structured illumination sources using calibration data indicative of a measure of relative illumination for each of the one or more cameras. For some applications, camera-vignette calibration data indicative of a measure of vignetting is used for each of the one or more cameras.
For some applications, the camera-vignette calibration data is generated by:
For some applications, the probe has a transparent window through which the one or more non-structured illumination sources are configured to emit light onto an intraoral surface, and the mathematical model includes (i) the distance of a calibration target from the transparent window of the probe and (ii) Fresnel reflections from the transparent window.
For some applications, the one or more non-structured illumination sources include one or more broad spectrum illumination sources.
For some applications, the one or more non-structured illumination sources include one or more Near Infra-Red (NIR) illumination sources.
There is further provided, in accordance with some applications of the present invention, apparatus for intraoral scanning, the apparatus including:
For some applications, capturing includes capturing calibration images of a solid-color 2D calibration target.
There is further provided, in accordance with some applications of the present invention, apparatus including a uniform illumination source for calibration of one or more cameras disposed within an intraoral scanner, the uniform illumination source configured to illuminate a camera calibration target and including:
For some applications, the total number of LEDs is between 16 and 100.
The present invention will be more fully understood from the following detailed description of applications thereof, taken together with the drawings, in which:
Reference is now made to
Reference is now made to
In order to geometrically calibrate each camera 22, these applications of the present invention use a relationship between points in 3D world-coordinate space and corresponding points (u,v) on the camera sensors. Thus, the correspondence between (i) a given vector 30′ in 3D world-coordinate space along which light from a particular point in 3D world-coordinate space enters probe 24 and (ii) a specific point (u,v) on the sensor is defined by the relationship. Part (a) of
Generally speaking, for a camera which operates in a central manner, camera rays in 3D world-coordinate space corresponding to each pixel on the camera sensor can be found by projecting rays from every pixel through a virtual pin-hole representing a point at which all of the camera rays intersect. It is noted that cameras 22 may be modeled as central cameras; however, factors of the overall optical system of handheld wand 20 cause cameras 22 to act in a non-central manner (further described hereinbelow with reference to
For some applications, handheld wand 20 performs the intraoral scanning using structured light illumination. One or more structured light projectors 54 are disposed within handheld wand 20 and project a pattern of structured light, e.g., a pattern of spots, onto intraoral surface 38. In order for computer processor 40 to generate a three-dimensional model of intraoral surface 38 based on images from one or more cameras 22, computer processor 40 solves a “correspondence problem,” where a correspondence between pattern elements in the structured light pattern and pattern elements seen by a camera 22 viewing the pattern is determined. For some applications, computer processor 40 compensates for the image distortion specifically introduced by the non-central manner in which one or more cameras 22 receive rays of light from the intraoral cavity by altering the coordinates of one or more of the structured light pattern elements as seen by one or more cameras 22 in order to account for the non-central manner in which the one or more cameras 22 receive rays of light from the intraoral cavity.
Reference is now made to
For example, 2D camera calibration target 42 may be a checkerboard target, e.g., a black and white checkerboard target, with the corner intersections of squares of different colors used as distinct calibration features 44. Alternatively or additionally, camera calibration target 42 has unique markings, such as letters and numbers, so that the relative positioning of the one or more cameras 22 may be determined. Each distinct calibration feature is given a corresponding coordinate value (x,y). Each respective distance D1 is given a value in the z-direction, with z=0 being at an arbitrarily selected distance from one or more cameras 22. For example, z=0 may be selected as a distance very near transparent window 32 of probe 24 or sleeve-window 58 (as described hereinbelow with reference to
For each calibration image captured with camera calibration target 42, points (u,v) on sensor 46 which correspond to the distinct calibration features are known, and the (x,y,z) coordinates of the distinct calibration features in 3D world-coordinate space are known. Correspondence is solved such that each point (u,v) on sensor 46 which detects a distinct calibration feature corresponds to a known point in 3D world-coordinate space. Typically, a set of rays is then modeled as a function that takes a given (u,v,z) and outputs a corresponding (x,y), i.e., the function describes a transformation R3→R2: (u,v,z)→(x,y) that is linear in z. The function is typically in the form of:
For some applications, functions G1 and G2 can be defined as high-order polynomial functions:

G1(u,v)=Σk ak*gk(u,v), G2(u,v)=Σk bk*gk(u,v),

where the gk(u,v) are polynomial basis terms (e.g., monomials of the form u^i*v^j) and ak, bk are coefficients to be fitted. The terms can now be written as a linear equation in the unknown coefficients:

x=Σk ak*gk(u,v)+z*Σk bk*gk(u,v),

and similarly for y.
The problem can then be separated into finding a function of (u,v,z) that outputs x and a function of (u,v,z) that outputs y. For stability, the input data is normalized to be within [−1,1]. It is noted that the basis functions gk may be polynomials, or any other type of function, e.g., radial basis functions (RBFs).
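By way of non-limiting illustration, the following sketch shows one way the fit described above could be implemented (Python/NumPy; the monomial basis, polynomial degree, and normalization details are illustrative assumptions rather than a definitive implementation, and a real implementation would also store the normalization for later evaluation):

```python
import numpy as np

def monomial_basis(u, v, deg=3):
    # g_k(u, v): all monomials u^i * v^j with i + j <= deg
    return np.stack([u**i * v**j
                     for i in range(deg + 1)
                     for j in range(deg + 1 - i)], axis=1)

def fit_camera_model(u, v, z, x, y, deg=3):
    # Normalize inputs to [-1, 1] for numerical stability
    norm = lambda t: 2 * (t - t.min()) / (t.max() - t.min()) - 1
    un, vn, zn = norm(u), norm(v), norm(z)
    G = monomial_basis(un, vn, deg)
    # F(u,v,z) = G1(u,v) + z*G2(u,v): columns for the a_k terms and
    # the z-weighted b_k terms, solved separately for x and for y
    A = np.hstack([G, zn[:, None] * G])
    coeffs_x, *_ = np.linalg.lstsq(A, x, rcond=None)
    coeffs_y, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs_x, coeffs_y
```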
Reference is now made to
Reference is now made to
Reference is now made to
The initial calibration data for the one or more cameras 22 models the relationship between points in 3D space (x,y,z) and corresponding points (u,v) on sensors 46 of the one or more cameras 22 as a set of camera rays, each camera ray having an origin Oi and a direction, as follows:
For each camera 22, the points (u,v) on sensor 46 form a grid of points.
For each point on the (u,v) grid of sensor 46, Eqn. 1 is used to find two (x,y,z) points, Pt1 and Pt2, in 3D world-coordinate space, each at a distinct z-value.
A camera ray Ri is formed for each point on the (u,v) grid of sensor 46 by setting the origin of camera ray Ri to be at Pt1 and defining the direction of camera ray Ri as Pt2 minus Pt1 (Pt2−Pt1).
Thus, the initial calibration data for each camera 22 includes a camera ray Ri corresponding to each point (u,v) on camera sensor 46, each camera ray Ri having an origin Oi and a direction.
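A minimal sketch of this ray-construction step follows (Python/NumPy; eval_F is assumed to be the fitted per-camera model of Eqn. 1, and the choice of the two z-values is illustrative):

```python
import numpy as np

def build_camera_rays(eval_F, u_grid, v_grid, z1=0.0, z2=10.0):
    """eval_F(u, v, z) -> (x, y): the fitted per-camera model (Eqn. 1).
    Returns an origin O_i and a unit direction per point on the (u, v) grid."""
    origins, directions = [], []
    for u, v in zip(u_grid.ravel(), v_grid.ravel()):
        x1, y1 = eval_F(u, v, z1)
        x2, y2 = eval_F(u, v, z2)
        pt1 = np.array([x1, y1, z1])   # ray origin O_i is set at Pt1
        pt2 = np.array([x2, y2, z2])
        d = pt2 - pt1                  # direction is Pt2 - Pt1
        origins.append(pt1)
        directions.append(d / np.linalg.norm(d))
    return np.array(origins), np.array(directions)
```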
For the one or more structured light projectors 54, each structured light projector 54 projects a plurality of structured light features, each projected structured light feature corresponding to a distinct projector ray Rj. The initial calibration data for one or more structured light projectors 54 defines an origin Oj and a direction vector for each projector ray Rj, as follows:
Each structured light projector 54 is activated to project a pattern onto a 2D diffusive calibration target 66 (shown in
Using one or more cameras 22, projector calibration images are captured while 2D diffusive calibration target 66 is disposed at a plurality of distinct z-values.
Based on the projector calibration images, (u,v) coordinates on camera sensors 46 are found for each detected pattern feature (e.g., spot) Sj at each z-value of 2D diffusive calibration target 66.
Using the initial calibration data for the one or more cameras 22, for each detected pattern feature Sj captured while 2D diffusive calibration target 66 is disposed at a given z-value, an (x,y,z) point can be found in 3D world-coordinate space at the intersection of (i) the camera ray Ri corresponding to the (u,v) coordinate of the detected pattern feature Sj, and (ii) a z-plane at the z-value of 2D diffusive calibration target 66.
Such (x,y,z) points are found for each detected pattern feature Sj at a plurality of different z-values.
A correspondence algorithm is then solved in order to determine which of the (x,y,z) points in 3D world-coordinate space corresponds to each projector ray Rj.
For each projector ray Rj, a line is then fitted to the (x,y,z) points corresponding to that projector ray Rj in order to obtain an origin Oj and direction vector for each projector ray Rj.
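By way of non-limiting illustration, the line fitting for each projector ray may be sketched as follows (Python/NumPy; a principal-component line fit is one reasonable choice, though other fitting methods could equally be used):

```python
import numpy as np

def fit_projector_ray(points_xyz):
    """Fit a 3D line to the (x, y, z) points attributed to one projector
    ray R_j; returns an origin O_j and a unit direction vector."""
    pts = np.asarray(points_xyz, dtype=float)
    origin = pts.mean(axis=0)                # O_j: centroid of the points
    _, _, vt = np.linalg.svd(pts - origin)   # principal direction of the
    direction = vt[0]                        # point cloud is the ray line
    if direction[2] < 0:                     # orient all rays along +z
        direction = -direction
    return origin, direction
```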
In step 62, computer processor 40 receives an indication of the presence of sleeve-window 58 positioned between intraoral surface 38 and transparent window 32 of probe 24. The optical properties of sleeve-window 58, e.g., a thickness of sleeve-window 58 and the index of refraction of sleeve-window 58, are typically known. Thus, the effect of the optical properties of sleeve-window 58 on light rays passing through sleeve-window 58 can be mathematically modeled. Each ray of light passing through sleeve-window 58 is shifted such that the origin of the light ray is altered while the propagation direction of the light ray remains unchanged. Thus, for a sleeve-window of given optical properties, (a) an updated model of the initial calibration data for one or more cameras 22 is calculated by mathematically shifting origin Oi for each camera ray Ri, and (b) an updated model of the initial calibration data for one or more structured light projectors 54 is calculated by mathematically shifting origin Oj for each projector ray Rj. For some applications, updated sets of camera rays and projector rays may be stored for a plurality of different sleeve-windows having different respective optical properties. Thus, in step 64, in response to the indication that sleeve-window 58 is positioned between intraoral surface 38 and transparent window 32 of probe 24, i.e., in response to an indication that sleeve 56 has been placed over probe 24, computer processor 40 accesses an updated model of the initial calibration data for the combination of transparent window 32 of probe 24 and sleeve-window 58, the updated model calculated from the initial calibration data based on an optical property of sleeve-window 58.
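By way of non-limiting illustration, the origin shift for a single ray may be sketched as follows (Python/NumPy), assuming the sleeve-window is modeled as a flat plate of known thickness and index of refraction oriented normal to the z-direction; a ray passing through such a plate exits parallel to its entry direction but laterally displaced, and that displacement is applied to the ray origin:

```python
import numpy as np

def refract(d, n_hat, n_ratio):
    # Snell's law in vector form; d and n_hat are unit vectors,
    # n_ratio = n1 / n2, with n_hat facing against the incoming ray
    cos_i = -np.dot(d, n_hat)
    sin2_t = n_ratio**2 * (1.0 - cos_i**2)
    return n_ratio * d + (n_ratio * cos_i - np.sqrt(1.0 - sin2_t)) * n_hat

def shift_ray_origin(origin, direction, window_z, thickness, n_glass):
    """Shift a calibrated ray's origin to account for a flat sleeve-window
    normal to z: the exit ray is parallel to the entry ray, only displaced."""
    d = direction / np.linalg.norm(direction)
    n_hat = np.array([0.0, 0.0, -1.0])        # window normal, facing the ray
    p = origin + d * ((window_z - origin[2]) / d[2])  # entry point, front face
    d_in = refract(d, n_hat, 1.0 / n_glass)   # direction inside the glass
    q = p + d_in * (thickness / d_in[2])      # exit point on the back face
    p_no_glass = p + d * (thickness / d[2])   # exit point with no glass present
    delta = q - p_no_glass                    # purely lateral displacement
    return origin + delta, direction          # direction is unchanged
```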
Updating the initial calibration data mathematically to account for the presence of sleeve-window 58 allows handheld wand 20 to be used for intraoral scanning with a variety of different types of sleeves, i.e., sleeves having a variety of different optical properties, without having to redo the initial calibration process of cameras 22 and/or structured light projectors 54. Updating the initial calibration data mathematically to account for the presence of sleeve-window 58 also allows the initial calibration process to be performed without having to add a physical window to the calibration jig in order to simulate the presence of sleeve-window 58. Thus, the calibration jig is simpler and can be kept more stable.
Reference is now made to
Reference is now made to
In step 68, computer processor 40 receives projector-ray calibration data indicating, for each projector ray Rj, at least one projector-ray parameter. For some applications, the projector-ray parameter for each projector ray Rj may be a shape of the projected structured light feature corresponding to projector ray Rj based on (1) a plurality of calibration images of the projected structured light feature on a calibration target, e.g., 2D diffusive calibration target 66, and (2) a known respective angle at which projector ray Rj was incident on the calibration target for each calibration image. For example, the shape of structured light feature Sj may be an ellipse which changes in response to the angle at which projector ray Rj is incident on the calibration target. Additionally or alternatively, the projector-ray parameter for each projector ray Rj may be an intensity of projector ray Rj based on a plurality of calibration images of the projected structured light feature on a calibration target, e.g., 2D diffusive calibration target 66. Furthermore, the projector-ray parameter(s) typically change smoothly with depth. For some applications, the projector-ray calibration data received by computer processor 40 includes how each projector-ray parameter changes with depth.
In step 70, computer processor 40 receives scanning data of intraoral surface 38, the scanning data comprising images of a plurality of projected structured light features on intraoral surface 38. In step 72, computer processor 40 runs a correspondence algorithm to identify which projected structured light feature on intraoral surface 38 corresponds to each respective projector ray Rj, using the at least one projector-ray parameter per projector ray Rj as an input to the correspondence algorithm. For example, the projector-ray parameter(s) for each projector ray Rj may help the correspondence algorithm identify which projected structured light features match which respective projector rays Rj. Additionally or alternatively, the projector-ray parameter(s) for each projector ray Rj may help the correspondence algorithm differentiate between image features that are true projected structured light features and image features that are false alarms.
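By way of non-limiting illustration, one hypothetical way such parameters could feed a correspondence search is sketched below (Python; the DetectedFeature type, the candidate tuples, and the score weighting are illustrative assumptions, not the actual correspondence algorithm):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class DetectedFeature:          # hypothetical detected-blob summary
    ellipse_axes: np.ndarray    # fitted ellipse major/minor axes
    intensity: float

def match_score(detected, expected_axes, expected_intensity, w=0.5):
    """Hypothetical score comparing a detected feature against calibrated
    projector-ray parameters evaluated at a candidate depth; lower is better."""
    shape_err = np.linalg.norm(detected.ellipse_axes - expected_axes)
    return shape_err + w * abs(detected.intensity - expected_intensity)

def best_ray_for_feature(detected, candidates):
    """candidates: list of (ray_id, expected_axes, expected_intensity) tuples,
    one per (projector ray, depth) hypothesis. A poor best score may indicate
    a false alarm rather than a true projected feature."""
    return min(candidates, key=lambda c: match_score(detected, c[1], c[2]))
```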
Reference is now made to
Reference is now made to
Thus, the mathematical model of the illumination from the specific one or more non-structured illumination sources 74 of each specific handheld wand 20 includes the location of each of one or more non-structured illumination sources 74 as optically seen in 3D world-coordinate space by one or more cameras 22 via images of reflective calibration target 80. It is noted that the mathematical model of the non-uniform illumination includes the location of each non-structured illumination source 74 as optically seen by one or more cameras 22, and not the actual physical location at which each non-structured illumination source 74 is disposed within handheld wand 20. For example, if non-structured illumination source(s) 74 are disposed within a handle of handheld wand 20 and light from non-structured illumination source(s) 74 is led to probe 24 by a light pipe or optical fiber, then the mathematical model of the non-uniform illumination includes the location of the tip of each light pipe or optical fiber that emits light out of probe 24, as optically seen by one or more cameras 22 via reflective calibration target 80.
For some applications, the size and/or shape of each non-structured illumination source 74, as seen by one or more cameras 22 via reflective calibration target 80, is stored as part of the calibration data. During a scan of a patient's intraoral surface 38, parts of intraoral surface 38 may be reflective, and as such one or more cameras 22 may see a reflected non-structured illumination source 74 in addition to projected pattern features. Having the size and/or shape of each non-structured illumination source 74 as seen by the cameras stored in the calibration data may help computer processor 40 remove the reflection of non-structured illumination source 74 from the correspondence algorithm.
For some applications, the calibration images of reflective calibration target 80 are acquired prior to the handheld wand 20 being packaged for commercial sale, i.e., as part of a manufacturing process of handheld wand 20. The mathematical model of the non-uniform illumination may also be updated once handheld wand 20 is already in commercial use based on captured scans of patients using that handheld wand. Thus, for some applications, computer processor 40 updates the mathematical model of the illumination from the specific one or more non-structured light sources 74 of a given handheld wand 20 after a given use of handheld wand 20 to scan an intraoral surface 38 of a patient and before a subsequent use of handheld wand 20 to scan an intraoral surface 38 of a patient.
For some applications, the mathematical model of the non-uniform illumination from the specific one or more non-structured illumination sources 74 of a given handheld wand 20 further includes camera-vignette calibration data indicative of a measure of relative illumination (e.g., vignetting) for each of the one or more cameras 22, and computer processor 40 further compensates for the non-uniformity of the illumination of one or more non-structured illumination sources 74 using the camera-vignette calibration data. The camera-vignette calibration data is generated by capturing, using one or more cameras 22, calibration images of 2D camera calibration target 42 (shown in
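By way of non-limiting illustration, generating and applying such a relative-illumination (vignette) map may be sketched as follows (Python/NumPy; assumes flat-field images of a uniformly backlit calibration target are available):

```python
import numpy as np

def vignette_map(flat_field_images):
    """Estimate a per-pixel relative-illumination (vignette) map from images
    of a uniformly backlit calibration target: 1.0 at the brightest pixel,
    falling off toward the sensor edges."""
    mean_img = np.mean(np.stack(flat_field_images).astype(float), axis=0)
    return mean_img / mean_img.max()

def compensate_vignetting(image, rel_illum, eps=1e-6):
    # Divide out the camera's relative-illumination falloff
    return image.astype(float) / np.maximum(rel_illum, eps)
```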
Reference is now made to
The estimated illumination intensity-per-angle emitted from each non-structured illumination source 74 is estimated based on calibration images captured using one or more cameras 22 of a 2D diffusive calibration target, such as 2D diffusive calibration target 66 shown in
For some applications, the known positions of each non-structured illumination source 74 and camera-vignette calibration data are used when analyzing the calibration images used for estimating the intensity profile of each non-structured illumination source 74. Using known parameters of the specific non-structured illumination sources 74, e.g., known parameters of the LEDs, computer processor 40 simulates images of what 2D diffusive calibration target 66 would look like under non-uniform illumination from non-structured illumination sources 74. Actual calibration images of 2D diffusive calibration target 66 under non-uniform illumination from non-structured illumination sources 74 are captured using one or more cameras 22. Computer processor 40 then compares the simulated images to the actual images and uses a minimization process to update the parameters until the simulated images match the actual images, thus arriving at a parametric profile of the intensity and angular distribution of the illumination from the specific non-structured illumination sources 74 per given handheld wand 20.
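By way of non-limiting illustration, the simulate-compare-minimize loop may be sketched as follows (Python with SciPy; the simulate function, i.e., the renderer of the diffusive target under the modeled LED illumination, and its parameterization are illustrative assumptions supplied by the caller):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_intensity_profile(simulate, captured_images, init_params):
    """simulate(params) -> list of simulated images of the diffusive target
    under the modeled LED illumination (hypothetical renderer). Refines the
    LED intensity/angular parameters until the simulated images match the
    captured calibration images."""
    target = np.concatenate([im.ravel() for im in captured_images]).astype(float)

    def residual(params):
        sim = np.concatenate([im.ravel() for im in simulate(params)])
        return sim - target

    return least_squares(residual, init_params).x
```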
For some applications, the mathematical model of the non-uniform illumination from the specific one or more non-structured illumination sources 74 of a given handheld wand 20 includes (i) the distance of a calibration target, e.g., reflective calibration target 80, or 2D diffusive calibration target 66, from transparent window 32 of probe 24 and (ii) Fresnel reflections from transparent window 32 of probe 24.
Alternatively or additionally, computer processor 40 analyzes images captured by one or more cameras 22 under the non-uniform illumination from one or more non-structured illumination sources 74 and compensates for the non-uniformity of the illumination using calibration data indicating an amount of light received at each point (u,v) on sensor 46 of one or more cameras 22 for different respective distances from the cameras. To generate the calibration data indicating an amount of light received at each point (u,v) on sensor 46, calibration images are captured, using the one or more cameras, of a 2D calibration target, e.g., a solid-color 2D calibration target, e.g., 2D diffusive calibration target 66. The capturing of the calibration images is performed while the 2D calibration target is disposed at a respective plurality of distances, in the z direction in space, from one or more cameras 22. Computer processor 40 fits a mathematical function to the calibration images corresponding to an amount of light received at each point (u,v) on sensor 46 of one or more cameras 22 for each of the respective plurality of distances in the z direction.
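By way of non-limiting illustration, fitting a per-pixel function of depth to such calibration images may be sketched as follows (Python/NumPy; the polynomial-in-z form is an illustrative choice of mathematical function):

```python
import numpy as np

def fit_light_vs_depth(calib_images, z_values, deg=2):
    """Fit, per sensor point (u, v), a polynomial in z giving the amount of
    light received from the calibration target at that distance."""
    stack = np.stack([im.astype(float) for im in calib_images])  # (n_z, H, W)
    n_z, H, W = stack.shape
    # np.polyfit supports many columns at once: one column per pixel
    coeffs = np.polyfit(np.asarray(z_values, float), stack.reshape(n_z, -1), deg)
    return coeffs.reshape(deg + 1, H, W)

def expected_light(coeffs, z):
    # Evaluate the fitted per-pixel polynomial at depth z
    return sum(c * z**p for p, c in enumerate(coeffs[::-1]))
```

An image captured at an estimated depth z could then be normalized by expected_light(coeffs, z), under the same illustrative assumptions.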
Reference is now made to
For some applications, in order to simplify the calibration system, a single uniform illumination source 86 includes multiple arrays of light emitting diodes (LEDs) 88, each array having one of the various different wavelengths. The LEDs 88 of each of the respective arrays are spaced such that when each of the respective arrays is activated individually, uniform illumination source 86 uniformly illuminates 2D camera calibration target 42.
For the broadband illumination and NIR illumination, uniform illumination source 86 includes a respective array of the same LEDs that are used in handheld wand 20. Structured light projector(s) 54 typically use blue and green laser light. In order to avoid speckle noise during the calibration process, green and blue LEDs having similar central wavelengths to the blue and green lasers of structured light projector(s) 54 are used in uniform illumination source 86. Thus, uniform illumination source 86 includes:
A first array of light emitting diodes (LEDs) 88, each specific LED 90 of the first array having a first wavelength between 400 and 700 nanometers, e.g., blue LEDs 90 having a similar central wavelength to a blue-laser structured light projector 54. For some applications, blue LEDs 90 of the first array are sized to be 2×2 mm.
A second array of LEDs 88 interleaved with the first array, each specific LED 92 of the second array having a second wavelength between 400 and 700 nanometers, the second wavelength distinct from the first wavelength, e.g., green LEDs 92 having a similar central wavelength to a green-laser structured light projector 54. For some applications, green LEDs 92 of the second array are sized to be 2×2 mm.
A third array of LEDs 88 interleaved with the first and second arrays, each specific LED 94 of the third array having a third wavelength between 800 and 2500 nanometers, i.e., NIR LEDs 94. For some applications, NIR LEDs 94 of the third array are sized to be 1.85×1.85 mm.
A fourth array of LEDs 88 interleaved with the first, second, and third arrays, each specific LED 96 of the fourth array emitting broadband light. For some applications, the broadband LEDs 96 of the fourth array are sized to be 1.5×1.5 mm.
For some applications, the total number of LEDs 88 in uniform illumination source 86 is between 16 and 100. For some applications, the arrays of LEDs 88 in uniform illumination source 86 are arranged such that the overall shape of uniform illumination source 86 is a square. A square is used so that uniform illumination source 86 matches the size and shape of 2D camera calibration target 42, which is uniformly backlit using uniform illumination source 86. If an alternative shape is used for 2D camera calibration target 42, then a corresponding alternative shape may be used for uniform illumination source 86.
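By way of non-limiting illustration, one hypothetical interleaving of the four LED arrays on a square grid may be sketched as follows (Python; the grid size and pitch are illustrative assumptions within the 16-100 LED range):

```python
import numpy as np

def interleaved_led_grid(n=6, pitch_mm=10.0):
    """Assign the four LED types to a square grid in a repeating 2x2 tile so
    that each array, activated alone, still covers the target uniformly."""
    types = np.array([["blue", "green"], ["nir", "broadband"]])
    grid = []
    for r in range(n):
        for c in range(n):
            grid.append((c * pitch_mm, r * pitch_mm, types[r % 2, c % 2]))
    return grid   # for n=6: 36 LEDs total, 9 of each type, evenly interleaved
```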
Applications of the invention described herein can take the form of a computer program product accessible from a computer-usable or computer-readable medium (e.g., a non-transitory computer-readable medium) providing program code for use by or in connection with a computer or any instruction execution system, such as computer processor 40. For the purpose of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Typically, the computer-usable or computer readable medium is a non-transitory computer-usable or computer readable medium.
Examples of a computer-readable medium include a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD. For some applications, cloud storage, and/or storage in a remote server is used.
A data processing system suitable for storing and/or executing program code will include at least one processor (e.g., computer processor 40) coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. The system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments of the invention.
Network adapters may be coupled to the processor to enable the processor to become coupled to other processors or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the C programming language or similar programming languages.
It will be understood that the methods described herein can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer (e.g., computer processor 40) or other programmable data processing apparatus, create means for implementing the functions/acts specified in the methods described in the present application. These computer program instructions may also be stored in a computer-readable medium (e.g., a non-transitory computer-readable medium) that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the methods described in the present application. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the methods described in the present application.
Computer processor 40 is typically a hardware device programmed with computer program instructions to produce a special purpose computer. For example, when programmed to perform the methods described herein, the computer processor typically acts as a special purpose computer processor. Typically, the operations described herein that are performed by computer processors transform the physical state of a memory, which is a real physical article, to have a different magnetic polarity, electrical charge, or the like depending on the technology of the memory that is used.
It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof that are not in the prior art, which would occur to persons skilled in the art upon reading the foregoing description.
This patent application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 63/433,379, filed Dec. 16, 2022.