The present disclosure relates generally to three-dimensional imaging, and more particularly to intraoral three-dimensional imaging.
Digital dental impressions utilize intraoral scanning to generate three-dimensional digital models of an intraoral three-dimensional surface of a subject. Digital intraoral scanners may perform structured light three-dimensional imaging using a combination of structured light projectors and cameras disposed within the intraoral scanner.
US 2019/0388193 to Saphier et al., which is assigned to the assignee of the present application and is incorporated herein by reference, describes an apparatus for intraoral scanning including an elongate handheld wand that has a probe. One or more light projectors and two or more cameras are disposed within the probe. The light projectors each have a pattern generating optical element, which may use diffraction or refraction to form a light pattern. Each camera may be configured to focus between 1 mm and 30 mm from a lens that is farthest from the camera sensor. Other applications are also described.
US 2019/0388194 to Atiya et al., which is assigned to the assignee of the present application and is incorporated herein by reference, describes a handheld wand including a probe at a distal end of the elongate handheld wand. The probe includes a light projector and a light field camera. The light projector includes a light source and a pattern generator configured to generate a light pattern. The light field camera includes a light field camera sensor. The light field camera sensor includes (a) an image sensor including an array of sensor pixels and (b) an array of micro-lenses disposed in front of the image sensor such that each micro-lens is disposed over a sub-array of the array of sensor pixels. Other applications are also described.
US 2020/0404243 to Saphier et al., which is assigned to the assignee of the present application and is incorporated herein by reference, describes a method for generating a 3D image, including driving structured light projector(s) to project a pattern of light on an intraoral 3D surface, and driving camera(s) to capture images, each image including at least a portion of the projected pattern, each one of the camera(s) comprising an array of pixels. A processor compares a series of images captured by each camera and determines which of the portions of the projected pattern can be tracked across the images. The processor constructs a three-dimensional model of the intraoral three-dimensional surface based at least in part on the comparison of the series of images. Other embodiments are also described.
Applications of the present disclosure include systems and methods relating to stitching together incoming scans from an intraoral scanner to generate a 3D image of an intraoral 3D surface being scanned. Typically, the intraoral scanner includes an elongate wand (e.g., an elongate handheld wand) with a probe at a distal end of the wand. The probe has a transparent window through which light rays enter and exit the probe. One or more cameras are disposed and arranged within the probe such that the one or more cameras receive rays of light from an intraoral cavity through the transparent window. One or more structured light projectors are disposed within the wand and arranged to project a pattern of structured light features onto the intraoral surface. One or more broadband light projectors are disposed within the handheld wand and arranged to illuminate the intraoral surface such that 2D images of the intraoral 3D surface under broadband illumination may be captured during the intraoral scanning.
Images captured by the one or more cameras of the projected pattern of structured light features on the intraoral 3D surface are used to generate a 3D image of the intraoral 3D surface being scanned. Typically, the images captured in each structured light image frame are used to compute a 3D point cloud of 3D positions of projected features of the pattern of light on the intraoral 3D surface, for example using techniques described in US 2019/0388193 to Saphier et al., US 2019/0388194 to Atiya et al., and US 2020/0404243 to Saphier et al. The computed point clouds from each structured light image frame are stitched together in real-time, e.g., soft real-time, during the intraoral scanning, as further described hereinbelow, in order to generate the 3D image of the intraoral 3D surface while the intraoral scan is ongoing.
In accordance with some applications of the present disclosure, a simultaneous localization and mapping (SLAM) algorithm is used to track motion of the wand and to generate 3D images. A SLAM algorithm typically iteratively determines the position of the intraoral scanner relative to the scanned object (“localization”) and then uses this position to update knowledge of the scanned object (“mapping”). For some applications, one or more SLAM algorithms described in US 2020/0404243 to Saphier et al. may be used.
It is noted that registering 3D point clouds together refers to orienting and/or positioning the 3D point clouds relative to each other so that the 3D point clouds can be stitched together into a larger combined 3D point cloud. The use of the word “stitch” (in any form) throughout the present application is interchangeable with “register,” and refers to the process of registration and stitching of 3D point clouds together. Thus, for example, “stitching seed” as used hereinbelow is interchangeable with “registration seed” as used hereinbelow.
During scanning of an intraoral 3D surface, a processing device (e.g., a computer processor) receives a new “scan” for each structured light image frame. Thus, during intraoral scanning of the intraoral 3D surface, the processing device receives a plurality of consecutive intraoral scans of the intraoral 3D surface generated by the intraoral scanner. The processing device computes a 3D point cloud of 3D positions of projected features of the pattern of light on the intraoral 3D surface for each new scan. The 3D point clouds are stitched to each other and displayed on a user display so that the user of the intraoral scanner sees the 3D image being generated during the scanning. Typically, the processor works to stitch the incoming 3D point clouds to each other in soft real-time (further described hereinbelow). For example, the processor may update the 3D image on the display 3-100 times per second, e.g., 10-60 times per second, e.g., 40-60 times per second.
Typically, the features of the projected pattern in each structured light image frame are sparse, which, in turn, results in a sparse 3D point cloud for each intraoral scan. Typically, each intraoral scan has at least 100 and/or less than 1000 3D points in the corresponding 3D point cloud. The sparsity of the 3D point clouds presents challenges in registering the 3D point clouds to each other. Thus, for some applications, instead of registering each 3D point cloud to the 3D point cloud corresponding to the previous intraoral scan, each incoming scan of a subset of the plurality of intraoral scans received during the intraoral scanning is registered to a registration seed (also referred to hereinbelow as “stitching seed”). The registration seed is a subset of the intraoral scans that immediately precede the incoming scan and that have previously been stitched together. That is, a 3D point cloud corresponding to an incoming intraoral scan is registered to a larger 3D point cloud containing 3D points from a set of intraoral scans, thus increasing the amount of information available for the registration. It is noted that as used throughout this application, including in the claims, a “subset” is a set each of whose elements is an element of an inclusive set. Thus, a subset includes the original set or any smaller grouping of the elements in the original set.
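By way of non-limiting illustration, assembling such a registration seed from the most recent previously stitched scans might be sketched as follows (the scan-count and point-budget limits are taken from the ranges described herein, but the function and names are illustrative only, not the actual implementation):

```python
import numpy as np

def build_registration_seed(stitched_scans, max_scans=8, max_points=5000):
    """Combine the most recent previously stitched scans (each an (N, 3)
    array of 3D points in a common coordinate frame) into one denser
    target cloud -- the registration seed -- trimmed to a point budget."""
    seed, total = [], 0
    for scan in reversed(stitched_scans[-max_scans:]):
        if seed and total + len(scan) > max_points:
            break                                # stay within the point budget
        seed.append(scan)
        total += len(scan)
    return np.vstack(seed)

# Each incoming sparse scan (~100-1000 points) is registered against the
# seed rather than against the single preceding scan:
scans = [np.random.rand(300, 3) for _ in range(10)]
seed = build_registration_seed(scans)            # combined target cloud
```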
There is therefore provided, in accordance with some applications of the present invention, a method including, using at least one computer processor:
For some applications, each intraoral scan includes an image of a distribution of 400-3000 discrete unconnected spots of light projected on the intraoral 3D surface.
There is further provided, in accordance with some applications of the present invention, a method including, using at least one computer processor:
For some applications, each intraoral scan includes an image of a distribution of 400-3000 discrete unconnected spots of light projected on the intraoral 3D surface.
For some applications, the segment of the 3D model is updated at a rate of 3-100 times per second.
For some applications, the registration seed includes a subset of the last 2-8 intraoral scans generated by the intraoral scanner during the intraoral scanning.
For some applications, the method further includes, using the at least one computer processor, selecting a quantity of intraoral scans to be included in the registration seed such that the registration seed includes images of a combined total of 500-5000 discrete unconnected spots of light projected on the intraoral 3D surface.
For some applications:
For some applications:
For some applications:
For some applications:
For some applications, assessing the extent of overlap includes:
For some applications:
For some applications, extracting the plurality of features includes extracting 2-50 features.
For some applications, inputting the extracted features into a classifier algorithm includes inputting the extracted features into a neural network.
For some applications, inputting the extracted features into a classifier algorithm includes inputting the extracted features into a support vector machine.
For some applications:
For some applications, the method further includes, using the at least one computer processor:
For some applications, the method further includes, using the at least one computer processor, discarding the oldest candidate set when the number of stored candidate sets surpasses a threshold number.
For some applications, the threshold number is between 2 and 10.
For some applications, the method further includes, using the at least one computer processor:
For some applications, the method further includes, using the at least one computer processor:
For some applications, the segment of the 3D model is a main segment of the 3D model, and the method further includes, using the at least one computer processor:
For some applications, iteratively repeating steps (e) and (f) until a given backup candidate set grows large enough to be considered a backup segment of the 3D model of the intraoral surface further includes attempting to register the backup segment of the 3D model to the main segment of the 3D model.
For some applications:
For some applications, if, upon performing step (II), the attempted registration of the updated backup segment to the main segment or of the new backup segment to the main segment fails, the method further comprises iteratively repeating the waiting of step (II), wherein if no registration of any backup segment to the main segment is successful for a threshold amount of time, the method further includes storing the main segment of the 3D model as a first part of the 3D model of the intraoral surface and using the last backup segment stored as the beginning of a new main segment for a second part of the 3D model of the intraoral surface.
For some applications, attempting to register the backup segment of the 3D model to the main segment of the 3D model comprises using a Kalman filter algorithm.
For some applications, using a Kalman filter algorithm includes:
For some applications, the method further includes, using the at least one computer processor, receiving, during the intraoral scanning, a plurality of 2D images of the intraoral 3D surface captured under broadband illumination, and wherein using the Kalman filter algorithm further includes, using the at least one computer processor:
The present invention will be more fully understood from the following detailed description of applications thereof, taken together with the drawings, in which:
Reference is now made to
For some applications, the field of view of the intraoral scanner is wide. For example, the field of view of each of the cameras may be at least 45 degrees, e.g., at least 80 degrees, e.g., 85 degrees. Optionally, the field of view of each of the cameras may be less than 120 degrees, e.g., less than 90 degrees. In addition, for some applications, the features of the projected pattern in each structured light image frame are sparse, e.g., each intraoral scan includes an image of a distribution of discrete unconnected spots of light projected on the intraoral 3D surface. For example, each structured light projector may project at least 400 and/or less than 3000 features, e.g., spots of light. Both the wide field of view of the intraoral scanner and the sparsity of the projected features present challenges to the stitching of the 3D point clouds, in particular to the stitching of 3D point clouds from structured light image frames that occur early in an intraoral scan.
Some additional challenges are as follows:
The overall immediate stitching algorithm 20 is used by processing device 22 to build the 3D image of the intraoral 3D surface while the 3D surface is being scanned, i.e., to register the plurality of intraoral scans to each other (step 26, also referred to herein as “chain stitching”) during the intraoral scanning so as to update a segment 27 of a 3D model of the intraoral 3D surface, and to output a view of the 3D model, e.g., of a segment of the 3D model, of the intraoral surface to a display (step 29). Immediate stitching algorithm 20 includes multiple components. The stitching of each incoming scan to a set of previously stitched scans is referred to herein as “chain stitching” (step 26). The chain stitching component (step 26) of immediate stitching algorithm 20 handles most of the incoming scans and attempts to stitch each new scan to part of an existing “segment,” i.e., a set of previously stitched scans. As described further hereinbelow, there are also dedicated components in the stitching algorithm for (i) starting a new segment (step 28), (ii) detecting and excluding moving tissue from consideration (step 30), and (iii) handling cases where the chain stitching fails to stitch an incoming scan to an existing segment, referred to herein as “recovery stitching” (step 32).
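For orientation only, the interplay of these components may be sketched as the following simplified control flow (the stubs are schematic placeholders for steps 26, 30, and 32, not the actual implementation):

```python
import numpy as np

# Placeholder stubs for steps 30, 26, and 32 (schematic only):
def exclude_moving_tissue(scan):
    return scan                              # step 30: would keep rigid points only

def chain_stitch(segment, scan):
    segment.append(scan)                     # step 26: would ICP-stitch to the seed
    return True                              # and report success/failure

def recovery_stitch(backup, scan):
    backup.append(scan)                      # step 32: stitch to a backup segment

def immediate_stitching(incoming_scans):
    segment, backup = [], []                 # segment 27 and its backup
    for scan in incoming_scans:              # step 24: each new incoming scan
        rigid = exclude_moving_tissue(scan)
        if not chain_stitch(segment, rigid): # chain stitching failed ->
            recovery_stitch(backup, rigid)   # recovery stitching (step 32)
        # step 29: the display view would be updated here
    return segment, backup

immediate_stitching([np.random.rand(300, 3) for _ in range(5)])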
Typically, the processor works to stitch the incoming 3D point cloud to the existing segment in soft real-time such that there are enough updates per second to the 3D image being shown on the user display and low enough latency (i.e., the time between when a new structured light image frame is captured and when the user sees the corresponding update to the 3D image on the user display) so that the user of the intraoral scanner experiences a real-time effect of the 3D image being generated on the user display as the scan is ongoing. For example, the processor may update the 3D image on the display 3-100 times per second, e.g., 10-60 times per second, e.g., 40-60 times per second. Thus, for some applications, the latency time is less than 300 milliseconds (ms), e.g., less than 200 ms, e.g., less than 150 ms, e.g., less than 100 ms, e.g., less than 50 ms, e.g., less than 15 ms.
Chain stitching component (step 26) of immediate stitching algorithm 20 is based on Iterative Closest Point (ICP) stitching between point clouds, in which a source cloud is stitched to a target cloud. The segment to which the incoming scans are stitched is a target cloud that grows linearly as scans are added to it. Each incoming intraoral scan is a source cloud. As described hereinabove, the source cloud for each incoming scan is sparse, thus presenting challenges in stitching each incoming scan to the single scan last stitched to the segment. Thus, each incoming scan is stitched to a plurality of previously stitched scans. For some applications, chain stitching component 26 registers each incoming scan of a subset of the plurality of intraoral scans received during the intraoral scanning to a subset of the intraoral scans that immediately precede the incoming scan and that have been previously registered together. The subset of the intraoral scans that immediately precede the incoming scan, to which the incoming scan is registered, is referred to herein as a “stitching seed” 34 (used herein interchangeably with “registration seed”). Thus, for some applications, processing device 22 (i) computes a source 3D point cloud from the distribution of discrete unconnected features, e.g., spots, of light captured in each incoming scan and (ii) computes a target 3D point cloud from the registration seed. The ICP algorithm (step 40, shown in
In order to save processing time, once a segment is large enough, only a subset of points in the segment, e.g., a subset consisting of recent points added to the target cloud, are used as the stitching seed. For example, the registration seed may include a subset of the last 2-8 intraoral scans generated by the intraoral scanner during the intraoral scanning. For example, each incoming scan may be stitched to the segment using only the last few thousand points, e.g., at least 500 points and/or less than 5000 points, added to the target cloud. Thus, for some applications, processing device 22 selects a quantity of intraoral scans to be included in the registration seed such that the registration seed includes images of a combined total of at least 500 and/or less than 5000 discrete unconnected features, e.g., spots, of light projected on the intraoral 3D surface. For some applications, the ICP stitching takes less than 10 ms, e.g., less than 5 ms, e.g., 4 ms to converge. Incoming intraoral scans that are added to current segment 27 are added to the registration seed of current segment 27. Thus, as more incoming scans are added, processing device 22 trims the data stored in the registration seed (step 33), i.e., trims the number of scans included in the registration seed, thus updating registration seed 34.
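A minimal point-to-point ICP of the general kind employed in step 40 (nearest-neighbor correspondences followed by a closed-form rigid fit; a generic textbook sketch rather than the scanner's actual implementation) may be written as:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_register(source, target, iterations=20):
    """Register a sparse source cloud (incoming scan) to a denser target
    cloud (the registration seed). Returns R, t such that R @ p + t maps
    source points p onto the target."""
    tree = cKDTree(target)                   # nearest-neighbor search structure
    R_tot, t_tot = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(src)             # correspondences: closest target points
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)          # closed-form (Kabsch) rigid fit
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T              # guard against reflections
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step        # apply the incremental update
        R_tot = R_step @ R_tot               # accumulate the total transform
        t_tot = R_step @ t_tot + t_step
    return R_tot, t_tot
```

In the chain stitching described above, the source cloud would be the points of the incoming scan and the target cloud the registration seed; the validators of steps 50 and 52, described hereinbelow, would then accept or reject the converged result.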
For some applications, the chain stitching component of the immediate stitching algorithm combines ICP with a moving tissue detection (MTD) algorithm (step 30). Typically, in addition to the structured light projectors, there are also one or more non-structured light projectors which project broadband light, e.g., white light, into the intraoral cavity. The one or more cameras capture 2D images under the broadband illumination from the non-structured light projectors. The MTD algorithm pairs the 2D images with raw scan data (step 36) in order to determine which parts of the scan include moving tissue. For each incoming scan, the points in the source cloud are given a rigid or non-rigid classification based on the MTD algorithm (step 38). Typically, only the points classified as rigid are used for the ICP stitching to the target cloud.
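Schematically, this gating amounts to masking the source cloud before it enters the ICP (a simplified sketch; the mask here is a random stand-in for the MTD classifier output):

```python
import numpy as np

def rigid_points_only(source_cloud, rigid_mask):
    """Keep only the points the MTD classifier labeled as projected on
    rigid tissue (e.g., teeth or gums); points on moving tissue such as
    the tongue are excluded from the ICP input."""
    return source_cloud[np.asarray(rigid_mask, dtype=bool)]

scan = np.random.rand(400, 3)                # source cloud from an incoming scan
mask = np.random.rand(400) > 0.2             # stand-in for the MTD output
icp_input = rigid_points_only(scan, mask)    # only rigid points go to the ICP
```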
Thus, for some applications, processing device 22 receives, during the intraoral scanning, a plurality of 2D images of the intraoral 3D surface captured under broadband illumination and pairs one or more of the plurality of 2D images with each incoming scan (step 36). Based on the pairing, in step 38 processing device 22 classifies a subset of the points in the source 3D point cloud (computed from the incoming scan) as corresponding to a feature, e.g., spot, of light projected on rigid tissue (e.g., teeth or gums). Processing device 22 uses as an input to the ICP algorithm only the points in the source 3D point cloud that are classified as corresponding to a feature, e.g., spot, of light projected on rigid tissue (step 39, shown in
Reference is now made to
It is noted that use of a Kalman filter to predict the expected position of the intraoral scanner based on data from the IMU is by way of example and is not intended to be limiting. For some applications, other methods of predicting the expected position of the intraoral scanner based on data from the IMU may be utilized, e.g., integration of IMU samples. It is also noted that the expected position of the intraoral scanner may also be predicted without the use of data from an IMU. For example, for some applications, the expected position of the intraoral scanner may be predicted using the previous motion trajectory of the intraoral scanner without the use of an IMU, e.g., by interpolation and extrapolation of the motion of the intraoral scanner using Lie algebra and exponentiation.
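As a simplified, non-limiting sketch of such IMU-free extrapolation, a constant-velocity prediction may re-apply the last frame-to-frame motion, with the rotational part handled on the rotation group (names are hypothetical; the actual prediction may differ):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_next_pose(R_prev, t_prev, R_last, t_last):
    """Constant-velocity extrapolation: re-apply the last frame-to-frame
    motion. A fractional step could instead be taken by scaling the
    rotation vector, R.from_rotvec(alpha * dR.as_rotvec()) -- i.e., the
    exponential map on the rotation group."""
    dR = R_last * R_prev.inv()               # relative rotation between frames
    dt = t_last - t_prev                     # relative translation between frames
    return dR * R_last, t_last + dt          # predicted pose for the next frame

R_prev, R_last = R.identity(), R.from_rotvec([0.0, 0.0, 0.05])
t_prev, t_last = np.zeros(3), np.array([0.1, 0.0, 0.0])
R_pred, t_pred = predict_next_pose(R_prev, t_prev, R_last, t_last)
```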
Once the ICP stitching (step 40) converges and the new scan is added to the segment, the position of the intraoral scanner is typically added to the previous trajectory input to the Kalman filter in anticipation of the next scan. Most of the time the ICP converges to a fairly accurate estimate of the actual scan position within the 3D model of the intraoral 3D surface. However, sometimes it may converge to a wrong answer. As described hereinabove, an error in the stitching of one scan may propagate into a model-breaking stitching artifact. In order to avoid this, the ICP stitching is further paired with two algorithms that independently validate its output, referred to herein as “overlap grading” (step 50) and “motion grading” (step 52).
The overlap grading algorithm (step 50) attempts to estimate the overlap between each new incoming scan and the seed of the current segment to which it was stitched. That is, for some applications, subsequent to the ICP algorithm (step 40) registering the source 3D point cloud (computed from the incoming scan) to the target 3D point cloud (computed from the registration seed), processing device 22 assesses an extent of overlap between the source 3D point cloud and the target 3D point cloud (step 50). If the new scan is correctly stitched to the seed, then it is expected that at least a significant part of the new incoming scan and the seed should match almost exactly. The purpose of overlap grading is to “grade” the stitching quality of two point clouds, namely the new incoming scan and the seed of the current segment. The overlap between the new incoming scan and the seed of the current segment (essentially the overlap between the surfaces scanned in consecutive structured light image frames) is affected by the motion speed of the intraoral scanner as the dentist or dental technician moves it around the intraoral cavity. As a result, it is possible that there may be low overlap between consecutive scans. In order to allow fluent stitching between the point clouds, the inventors have developed the overlap grading algorithm to enable stitching between the point clouds even if the overlap is small, as described hereinbelow.
For some applications, the overlap grading algorithm extracts features from two clouds, namely 3D point clouds of the new scan (source 3D point cloud) and the seed of the current segment (target 3D point cloud). After the ICP stitching attempts to stitch between the new scan and the seed of the current segment, a sub-set of matching points between the new scan and the seed of the current segment is produced. Using the pairs of matching points, the processing device extracts several features (step 54). As described hereinabove, for some applications, processing device 22 receives, during the intraoral scanning, a plurality of 2D images of the intraoral 3D surface captured under broadband illumination. Thus, for some applications, for each pair of matching points, at least one of the extracted features is a non-geometrical feature based on the plurality of 2D images of the intraoral 3D surface captured during capturing of the incoming scan.
For example, one or more of the extracted features may be from the following list:
For some applications, at least 2 and/or less than 50, e.g., at least 5, e.g., at least 20, e.g., at least 40, e.g., 42 features are extracted. The extracted features are then fed into a classifier algorithm, e.g., a neural network or a support vector machine, that analyzes the inputs and outputs a single overlap-grading-score in the range [−1, 1], with zero being valued as a neutral decision. If the overlap-grading-score is lower than a threshold, then processing device 22 rejects the registration of the incoming scan by the ICP algorithm. Thus, a stitched scan is considered to have passed the overlap grading if the score is above the threshold.
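Schematically, and with placeholder parameters standing in for the trained classifier (which, as noted above, may be a neural network or support vector machine), the grading reduces to scoring a feature vector and thresholding:

```python
import numpy as np

def overlap_grade(features, weights, bias, threshold=0.0):
    """Map the extracted feature vector to a single overlap-grading score
    in [-1, 1] -- here tanh of a linear model, a stand-in for the trained
    classifier -- and accept the registration only above the threshold."""
    score = float(np.tanh(np.dot(weights, features) + bias))
    return score, score > threshold

features = np.random.rand(42)                # e.g., 42 extracted features
weights, bias = np.random.randn(42), 0.0     # stand-ins for learned parameters
score, accepted = overlap_grade(features, weights, bias)
```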
As described hereinabove, for some applications, processing device 22 receives, during the intraoral scanning, data pertaining to motion of the intraoral scanner from an inertial measurement unit (IMU) disposed within the intraoral scanner (step 42). The motion grading algorithm (step 52) validates the results (step 48) of the ICP stitching by comparing them against data from the IMU (step 56). Thus, for some applications, processing device 22 (i) assesses the registration by the ICP algorithm of the source 3D point cloud to the target 3D point cloud by comparing the registration to data from the IMU, and (ii) rejects the registration of the incoming scan by the ICP algorithm in response to the registration of the source 3D point cloud to the target 3D point cloud contradicting the data from the IMU. For some applications, the inertia of the moving intraoral scanner (based on the previous movement of the scanner) may also be used in the motion grading algorithm.
If the position of a stitched scan indicates that the intraoral scanner moved in a way that contradicts the scanner's IMU data, then the stitching of that scan is considered to have failed the motion grading validator. For example, a dentist performing an intraoral scan cannot instantly reverse the direction of motion of the intraoral scanner; thus, if the position of the stitched scan indicates an instant reversal of direction of the intraoral scanner, then the stitching of that scan is considered to have failed the motion grading validator. The motion grading decision is computed by statistical learning, together with some of the metrics from the overlap grading algorithm. For some applications, motion metrics include shift and angular decline from the last motion position of the intraoral scanner. For some applications, some of the motion metrics compare (i) the projected trajectory of the intraoral scanner from its last position to (ii) the position where the new scan actually stitched to the seed of the current segment. Additionally, a scanner motion speed limit may be used to reject scanning movements that are too fast.
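A simplified sketch of such a motion check follows (the speed limit, units, and reversal threshold are illustrative assumptions, not values from the actual implementation):

```python
import numpy as np

def motion_grade(pos_last, pos_stitched, prev_velocity, dt, max_speed=200.0):
    """Reject a stitched position that implies (i) motion faster than a
    scanner speed limit (200 mm/s here, an illustrative value) or (ii) a
    near-instant reversal of the previous direction of motion."""
    velocity = (pos_stitched - pos_last) / dt       # motion implied by the stitch
    speed = np.linalg.norm(velocity)
    if speed > max_speed:
        return False                                # scanning movement too fast
    prev_speed = np.linalg.norm(prev_velocity)
    if speed > 0 and prev_speed > 0:
        cosine = velocity @ prev_velocity / (speed * prev_speed)
        if cosine < -0.9:                           # ~instant direction reversal
            return False
    return True
```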
For some applications a stitched scan must be accepted by the overlap grading and motion grading validators in order to pass further and become part of a segment (step 58). For some applications, if the motion of the intraoral scanner is smooth enough relative to the previous motion of the intraoral scanner, then the overlap grading may be skipped.
Reference is now made to
For some applications, when no registration seed 34 is available, a plurality of scans are tracked as candidate seeds 65, e.g., up to a threshold number of initial scans. For some applications, the threshold may be between 2 and 10, e.g., up to 5 initial scans are tracked as candidate seeds. Subsequently, each new incoming scan (step 24), along with its corresponding IMU data (step 42) and MTD data (step 38), is inputted to the chain stitching component (step 26) of the immediate stitching algorithm to try to stitch the incoming scan to one of candidate seeds 65. If the stitching is successful (decision diamond 60), then that scan is added to the candidate seed 65 (step 62) to which it was stitched and that particular candidate seed “grows.” If the stitching is not successful (decision diamond 60), then the chain stitching component of the immediate stitching algorithm attempts to stitch the scan to the next available candidate seed 65. Decision diamond 64 indicates that if there is another available candidate seed 65, then the chain stitching component attempts to stitch the scan to the next available candidate seed, and if there is no other available candidate seed 65, then the scan itself is added as a new candidate seed 65. That is, if the scan is not successfully stitched to any of the current candidate seeds being tracked, then that scan itself becomes an additional candidate seed. If at any point there are too many candidate seeds, then older candidate seeds may be removed as newer ones are added in order to save processing time and space in memory. For some applications, once there are too many candidate seeds, instead of simply removing the oldest candidate seed each time a new candidate seed is added, processing device 22 may selectively remove the candidate seed that has the lowest number of data points. This process is repeated for each incoming new scan until one of the candidate seeds grows large enough to be considered the start of a new segment (decision diamond 68). At that point the other candidate seeds are discarded. The seed that grew large enough to be considered the start of a new segment is reported as the start of a new segment (step 70) and subsequent incoming scans are stitched to the seed of the new segment as described hereinabove.
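In outline (the function names, the stitching callback, and the parameter values are hypothetical), the candidate-seed bookkeeping just described may be sketched as:

```python
def start_segment(scan, candidate_seeds, try_stitch, max_seeds=5, seed_size=6):
    """Try to stitch the incoming scan to each tracked candidate seed; if no
    candidate accepts it, the scan itself becomes a new candidate. When too
    many candidates accumulate, the candidate holding the fewest data points
    is dropped. Returns a seed once it is large enough to start a segment."""
    for seed in candidate_seeds:
        if try_stitch(seed, scan):               # decision diamond 60
            seed.append(scan)                    # step 62: this candidate grows
            if len(seed) >= seed_size:           # decision diamond 68
                candidate_seeds.clear()          # discard the other candidates
                return seed                      # step 70: start of a new segment
            return None
    candidate_seeds.append([scan])               # scan becomes a new candidate
    if len(candidate_seeds) > max_seeds:
        smallest = min(candidate_seeds, key=lambda s: sum(len(x) for x in s))
        candidate_seeds.remove(smallest)         # drop the data-poorest candidate
    return None
```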
An example workflow of start segment logic (step 28), performed by processing device 22, is as follows:
Reference is now made to
If an incoming scan fails to stitch to seed 34 of the current (“main”) segment (step 72), instead of losing the data, the recovery stitching component keeps a “backup segment” 74. When a scan fails to stitch to seed 34 of “main” segment 27, recovery stitching component (step 32) attempts to stitch the scan to a seed 73 of backup segment 74 using the same chain stitching component (step 26) of the immediate stitching algorithm 20 as described hereinabove. When used in the recovery stitching, the chain stitching is referred to herein as step 26′. The backup segment is initiated using the same start-segment component 28 of the immediate stitching algorithm as described hereinabove. When used in the recovery stitching, the start segment is referred to as step 28′. For example, the first few scans sent to recovery stitching 32 become candidate seeds for backup segment 74, and, as scans that fail to be stitched to seed 34 of main segment 27 are sent to backup segment 74, whichever candidate seed in the recovery stitching grows successfully becomes the start of backup segment 74. Overlap grading (step 50) and motion grading (step 52) are also used to verify the ICP stitching to the seed 73 of backup segment 74.
For some applications, the inability to stitch an incoming scan (step 24), along with its corresponding IMU data (step 42) and MTD data (step 38) to seed 34 of main segment 27 may be due to (a) a sporadic movement of the intraoral scanner that causes the stitching to fail overlap grading (step 50) and/or motion grading (step 52) verifiers or (b) the scanner being moved and now scanning a different region of the intraoral 3D surface. In both cases (a) and (b), processing device 22 continues to attempt to stitch new incoming scans to seed 34 of main segment 27. In the case of a sporadic movement of the intraoral scanner it is likely that subsequent consecutive incoming scans will stitch successfully to seed 34 of main segment 27, in which case the scan that was sent to the recovery stitching is discarded and backup segment 74 is reset (step 76). In the case where the scanner is now scanning a different region of the intraoral 3D surface, it is reasonable that subsequent consecutive incoming scans will also fail to stitch to seed 34 of main segment 27. Thus, backup segment 74 grows in parallel as the consecutive scans that failed to stitch seed 34 of main segment 27 are stitched to seed 73 of backup segment 74 (as described hereinabove).
As described hereinabove, the ICP stitching in the chain stitching component of the immediate stitching algorithm only stitches to seed 34 of current segment 27, i.e., the trailing subset of recent points added to the segment. Therefore, it is reasonable that although the scans that are sent to backup segment 74 failed to stitch to seed 34 of main segment 27, there may still be some overlap between backup segment 74 and some part of main segment 27 other than seed 34. Thus, recovery stitching component (step 32) of immediate stitching algorithm 20 attempts to stitch backup segment 74 to main segment 27 using a Kalman filter stitching algorithm (step 82) referred to herein as “KF stitching.” For the KF stitching (step 82), a target 3D point cloud is computed from the entire main segment 27 and a source 3D point cloud is computed from backup segment 74.
The KF stitching (step 82) is based on two concepts. The first concept is to split the main segment, i.e., the target 3D point cloud, into a plurality of sections and to try a modified ICP between the backup segment (i.e., the source 3D point cloud computed from the backup segment) and the plurality of sections of the main segment (i.e., the target 3D point cloud) with different initial guesses (step 78). The modified ICP stitching (step 78) in the KF stitching algorithm (step 82) uses many iterations, each with a small sample of points, e.g., less than 30 points, e.g., less than 20 points, e.g., 18 points. In each iteration, the target 3D point cloud position is modified with the Kalman filter state defining the position and rotation of the target cloud. The target cloud points are arranged in buckets according to their normals. The buckets are randomly sampled and the points from each bucket are also randomly sampled, thus giving uniformly distributed normals for each sample. Every few iterations, e.g., every 8 iterations, the overlap of the source cloud (i.e., the backup segment) and the target cloud (i.e., the main segment) is measured (step 84) using a sample of approximately 2000 points. If the current overlap is measured to be better than the previous overlap (from a few iterations ago), then the KF stitching between the backup segment and the main segment continues. If the overlap is not better than the previous overlap, then the Kalman filter state is reset and the modified ICP starts again from a different place in the main segment.
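The bucket-and-sample step may be illustrated as follows (the quantization of the normals and the sample size are illustrative assumptions):

```python
import numpy as np

def bucket_sample_by_normals(points, normals, n_samples=18):
    """Arrange target points in buckets according to their (quantized)
    normal directions, then pick a random bucket and a random point within
    it for each sample, giving each small ICP sample an even spread of
    surface orientations."""
    buckets = {}
    for i, n in enumerate(np.round(normals * 2).astype(int)):
        buckets.setdefault(tuple(n), []).append(i)   # coarse direction bucket
    bucket_list = list(buckets.values())
    picks = [int(np.random.choice(
                 bucket_list[np.random.randint(len(bucket_list))]))
             for _ in range(n_samples)]
    return points[picks]

points = np.random.rand(2000, 3)
normals = points / np.linalg.norm(points, axis=1, keepdims=True)  # stand-ins
sample = bucket_sample_by_normals(points, normals)   # one 18-point ICP sample
```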
As described hereinabove, for some applications, processing device 22 receives, during the intraoral scanning, a plurality of 2D images of the intraoral 3D surface captured under broadband illumination. The second concept in the KF stitching algorithm is to look for similar images (using image hashes) in the backup segment and the main segment and attempt to stitch scans that are near each other (step 80). That is, processing device 22 looks for 2D images of the intraoral 3D surface corresponding to backup segment 74 that are similar to 2D images of the intraoral 3D surface corresponding to main segment 27. For some applications, these image similarities are used by processing device 22 as a useful initial guess for the modified ICP (step 78) described hereinabove with respect to the first concept in the KF stitching (step 82).
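As a non-limiting illustration of the image-hash idea, a simple average hash compared by Hamming distance could flag similar 2D images between the two segments (the specific hash used by the algorithm is not specified here, so this is an assumed stand-in):

```python
import numpy as np

def average_hash(image, size=8):
    """A simple average hash: pool the 2D image down to size x size blocks
    and threshold each block at the mean brightness."""
    h, w = image.shape
    img = image[:h - h % size, :w - w % size]
    pooled = img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return pooled > pooled.mean()            # 8x8 boolean hash

def similar_images(hash_a, hash_b, max_differing_bits=10):
    """Hamming distance between hashes as a cheap similarity test, used to
    propose initial guesses for the modified ICP (step 78)."""
    return np.count_nonzero(hash_a != hash_b) <= max_differing_bits

frame_a = np.random.rand(480, 640)           # stand-in 2D broadband images
frame_b = frame_a + 0.01 * np.random.rand(480, 640)
print(similar_images(average_hash(frame_a), average_hash(frame_b)))
```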
If the KF stitching succeeds, then the backup segment is joined to the main segment (step 86). For some applications, the backup segment may have accumulated a large number of scans, such that when the backup segment is joined to the main segment it may result in a sudden influx of scans that need to be processed downstream, and the dentist using the intraoral scanner may suddenly see a large part of the 3D model appear from nowhere on the display. Therefore, for some applications, a “recovery from recovery” logic is implemented to trim the backup segment into a reasonably small batch of scans before it is merged with the main segment (optional step 85). This improves the downstream processing, and the user experiences only a short lag in the scanning before the stitching appears to resume.
It is possible, however, that the KF stitching will not be able to stitch together the backup segment and the main segment. For example, there could be no overlap between the main segment and the backup segment if the scanner has been moved to an entirely different region in the intraoral cavity. In order to avoid the user-experienced time lag from being too long, there is a time limit placed on the recovery stitching (decision rectangle 88). If after a certain threshold period of time, e.g., a few seconds, e.g., 1 second, the backup segment is not rejoined to the main segment (answer to decision rectangle 88 is “yes,” i.e., the user does not see the stitching resume on the display) then one of the following options is implemented (based on decision rectangle 90):
Thus, as long as the dentist using the intraoral scanner keeps the scanner hovering over a particular region of the intraoral 3D surface (assuming the region is a scannable region, e.g., not hovering over moving tissue such as the patient's tongue), the recovery stitching component of the immediate stitching algorithm typically causes the scanning to resume within one or several seconds without losing the data accumulated during the time lag.
An example workflow of recovery stitching (step 32), performed by processing device 22, is as follows:
For some applications, the example workflow may further include:
For some applications, the example workflow may further include:
For some applications, iteratively repeating steps (e) and (f) until a given backup candidate set grows large enough to be considered a backup segment 74 of the 3D model of the intraoral surface further includes attempting to register the backup segment of the 3D model to the main segment of the 3D model, e.g., using a Kalman filter algorithm (step 82).
For some applications:
For some applications, if, upon performing step (II), the attempted registration of the updated backup segment to the main segment or of the new backup segment to the main segment fails, the method further comprises iteratively repeating the waiting of step (II), wherein if no registration of any backup segment to the main segment is successful for a threshold amount of time (decision rectangle 90), the workflow further includes storing main segment 27 of the 3D model as a first part of the 3D model of the intraoral surface and using the last backup segment 74 stored as the beginning of a new main segment 27′ for a second part of the 3D model of the intraoral surface (step 92).
Applications of the disclosure described herein can take the form of a computer program product accessible from a computer-usable or computer-readable medium (e.g., a non-transitory computer-readable medium) providing program code for use by or in connection with a computer or any instruction execution system, such as the processing device disclosed hereinabove. For the purpose of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Typically, the computer-usable or computer readable medium is a non-transitory computer-usable or computer readable medium.
Examples of a computer-readable medium include a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD. For some applications, cloud storage, and/or storage in a remote server is used.
A data processing system suitable for storing and/or executing program code will include at least one processor (e.g., processing device 22 disclosed hereinabove) coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. The system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments of the disclosure.
Network adapters may be coupled to the processor to enable the processor to become coupled to other processors or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, Python, or the like and conventional procedural programming languages, such as the C programming language or similar programming languages.
It will be understood that the methods described herein can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer (e.g., processing device 22 disclosed hereinabove) or other programmable data processing apparatus, create means for implementing the functions/acts specified in the methods described in the present application. These computer program instructions may also be stored in a computer-readable medium (e.g., a non-transitory computer-readable medium) that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the methods described in the present application. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the methods described in the present application.
The processing device disclosed hereinabove is typically a hardware device programmed with computer program instructions to produce a special purpose computer. For example, when programmed to perform the methods described herein, the processing device typically acts as a special purpose processing device. Typically, the operations described herein that are performed by computer processors transform the physical state of a memory, which is a real physical article, to have a different magnetic polarity, electrical charge, or the like depending on the technology of the memory that is used.
It will be appreciated by persons skilled in the art that the present disclosure is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present disclosure includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof that are not in the prior art, which would occur to persons skilled in the art upon reading the foregoing description.
The present application claims the priority of U.S. 63/452,916 to Ayal et al., filed Mar. 17, 2023, which is incorporated herein by reference.