The present application is directed to a system that optically scans an environment, such as a building, and in particular to a mobile scanning system that generates two-dimensional (2D) and three-dimensional (3D) scans of the environment.
A 3D triangulation scanner, also referred to as a 3D imager, is a portable device having a projector that projects light patterns on the surface of an object to be scanned. One or more cameras, having predetermined positions and alignments relative to the projector, record images of the light pattern on the surface of the object. The three-dimensional coordinates of elements in the light pattern can be determined by trigonometric methods, such as by using triangulation. Other types of 3D measuring devices may also be used to measure 3D coordinates, such as those that use time-of-flight techniques (e.g., laser trackers, laser scanners, or time-of-flight cameras) for measuring the amount of time it takes for light to travel to the surface and return to the device.
It is desired to have a handheld 3D measurement device that is easier to use and that provides additional capabilities and performance. Technical challenges with such scanners include that, during the scanning process, the scanner acquires, at different times, a series of images of the patterns of light formed on the object surface. These multiple images are then registered relative to each other so that the position and orientation of each image relative to the other images are known. Where the scanner is handheld, various techniques have been used to register the images. One common technique uses features in the images to match overlapping areas of adjacent image frames. This technique works well when the object/environment being measured has many features relative to the field of view of the scanner. However, if the object/environment contains a relatively large flat or curved textureless surface, the images may not properly register relative to each other.
Today, processing capability of a handheld scanner is limited by the capability of processors within the handheld scanner. Faster processing of scan data would be desirable. Greater flexibility and speed in post-processing scan data would also be desirable.
Accordingly, while existing handheld 3D triangulation scanners are suitable for their intended purpose, the need for improvement remains.
According to one or more embodiments, a system includes a scanner device configured to capture a set of point clouds comprising 3D points. The system further includes one or more processors that are operable to detect, during the capture of the set of point clouds, at least one first anchor object in a first part of a first point cloud, the first part belonging to a first time interval centered at time t1. The one or more processors also detect at least one second anchor object in a second part of the first point cloud, the second part belonging to a second time interval centered at time t2. The one or more processors further identify at least one correspondence between the first anchor object and the second anchor object. The one or more processors further align the first part of the first point cloud with the second part of the first point cloud based on the at least one correspondence between the first anchor object and the second anchor object.
According to one or more embodiments, a method for a scanner device to capture a set of point clouds, each of the point clouds comprising 3D points, includes detecting, during the capture of the set of point clouds, at least one first anchor object in a first part of a first point cloud, the first part belonging to a first time interval centered at time t1. The method further includes detecting at least one second anchor object in a second part of the first point cloud, the second part belonging to a second time interval centered at time t2. The method further includes identifying at least one correspondence between the first anchor object and the second anchor object. The method further includes aligning the first part of the first point cloud with the second part of the first point cloud based on the at least one correspondence between the first anchor object and the second anchor object.
According to one or more embodiments, a computer program product includes a memory device having one or more computer executable instructions stored thereon, the computer executable instructions when executed by one or more processors cause the one or more processors to perform a method for a scanner device to capture a set of point clouds, each of the point clouds comprising 3D points. The method includes detecting, during the capture of the set of point clouds, at least one first anchor object in a first part of a first point cloud, the first part belonging to a first time interval centered at time t1. The method further includes detecting at least one second anchor object in a second part of the first point cloud, the second part belonging to a second time interval centered at time t2. The method further includes identifying at least one correspondence between the first anchor object and the second anchor object. The method further includes aligning the first part of the first point cloud with the second part of the first point cloud based on the at least one correspondence between the first anchor object and the second anchor object.
These and other advantages and features will become more apparent from the following description taken in conjunction with the drawings.
The subject matter, which is regarded as the invention, is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
The detailed description explains embodiments of the invention, together with advantages and features, by way of example, with reference to the drawings.
Embodiments of the present disclosure provide technical solutions to technical challenges in existing environment scanning systems. The scanning systems can capture two-dimensional (2D) or three-dimensional (3D) scans. Such scans can include 2D maps, 3D point clouds, or a combination thereof. The scans can include additional components, such as annotations, images, textures, measurements, and other details.
Embodiments of the present disclosure provide a hand-held scanning platform that is sized and weighted to be carried by a single person. Embodiments of the present disclosure provide for a mobile scanning platform that may be used to scan an environment in an autonomous or semi-autonomous manner to generate a map of the environment.
Typically, when capturing three-dimensional (3D) coordinates of points in an environment, several scans are captured by a scanner system ("scanner"), which are then combined or "stitched" together to generate the map. The collection of 3D coordinates is sometimes referred to as a "point cloud," since when it is visually displayed the 3D coordinates are represented as points. The 3D points and images recorded with a single shot of a scanner can be referred to as a "frame," and a "scan" is a sequence of recorded frames. For completing such scans, a scanner, such as the FARO® SCANPLAN®, FARO® SWIFT®, FARO® FREESTYLE®, or any other scanning system, incrementally builds the point cloud of the environment while the scanner is moving through the environment. In one or more embodiments of the present invention, the scanner simultaneously tries to localize itself on this map that is being generated. An example of a handheld scanner is described in U.S. patent application Ser. No. 15/713,931, the contents of which are incorporated by reference herein in their entirety. This type of scanner may also be combined with another scanner, such as a time-of-flight scanner, as is described in commonly owned U.S. patent application Ser. No. 16/567,575, the contents of which are incorporated by reference herein in their entirety. It should be noted that the scanners listed above are just examples, and the type of scanner used in one or more embodiments does not limit the features of the technical solutions described herein.
A technical challenge with such combining of the scans is registering the scans, i.e., arranging the scans in the correct orientation, scale, etc., so that the scans are aligned to form an accurate replica of the environment. For this purpose, targets can be used, which are special points/regions that serve as reliable reference points for improving the data quality of the scans and for registering multiple scans. Alternatively, Cloud2Cloud or other types of registration techniques can also be used to register one or more scans with each other. Existing systems use artificial markers for identifying such targets; examples of such targets include checkerboards, spheres, or stickers having a predetermined size. For example, the artificial marker can be a predetermined label that is placed in the environment, which the operator of the scanner can identify as a target. However, placing such artificial markers can be time consuming, and in some cases impractical if the environment does not allow placement of such markers.
Embodiments of the present disclosure address such technical challenges in generating the point cloud 130 of an environment by facilitating identification of target points using existing, or "natural," locations from the environment itself, without additional steps requiring placement of artificial markers. The embodiments accordingly improve the point cloud generation process. For example, the embodiments herein facilitate using, as anchor objects, features and objects that have been captured in the point cloud and that already exist in the environment. Further, different types of features can be used as anchor objects based on what is selected by the operator during scanning.
An anchor object is an object in a captured point cloud that is detectable from a range of viewing directions, distances, and orientations. An anchor object is stored in the data captured by the scanner 120 by storing one or more positions, orientations, degrees of freedom, and other measurements of the scanner 120 when capturing the anchor object. Different types of anchor objects provide different constraints in different degrees of freedom when matching two or more instances of the anchor objects in different point clouds. In one or more embodiments of the present invention, anchor objects of the same type facilitate using the same constraint type. For example, an intersection of three planes can constrain the position and orientation of the scanner 120 in all six degrees of freedom (6DOF). Alternatively, a well-defined texture on a plane or other defined geometry (e.g., coded marker, checkerboard, QR code, projected light pattern) can also be used as the anchor object, with potentially fewer degrees of freedom depending on the object type. Alternatively still, a sphere or other 3D object, a single plane, or a line (intersection of two planes) can also be used.
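By way of a non-limiting illustration, the following sketch shows one possible way to record anchor objects together with the degrees of freedom each type constrains. The type names, fields, and DOF table are illustrative assumptions rather than a prescribed data model.

```python
# Illustrative sketch only; names and fields are assumptions, not the
# scanner's actual data model.
from dataclasses import dataclass
from enum import Enum

import numpy as np


class AnchorType(Enum):
    THREE_PLANES = "three_planes"  # virtual intersection point of three planes
    TWO_PLANES = "two_planes"      # line formed by two intersecting planes
    SINGLE_PLANE = "single_plane"
    SPHERE = "sphere"
    CODED_MARKER = "coded_marker"  # constrained DOF depend on marker geometry


# Degrees of freedom constrained when two instances of an anchor object of
# the given type are matched (see also the discussion of constraints below).
CONSTRAINED_DOF = {
    AnchorType.THREE_PLANES: 6,  # full position and orientation
    AnchorType.TWO_PLANES: 5,
    AnchorType.SINGLE_PLANE: 3,
    AnchorType.SPHERE: 3,        # center position only
}


@dataclass
class AnchorObject:
    anchor_type: AnchorType
    position: np.ndarray      # anchor coordinates in the map frame
    orientation: np.ndarray   # e.g., a 3x3 rotation matrix, where applicable
    scanner_pose: np.ndarray  # 4x4 pose of the scanner 120 at capture time
```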
By facilitating identification of such features as anchor objects, embodiments of the present disclosure provide several advantages and improvements over existing scanning systems and processes. Such improvements include, but are not limited to, improvements to registration within a single scan; improvements to re-finding the trajectory of the scanner 120 within and during scans (moving back to a location from where an anchor object can be captured when tracking has been lost, or determining location based on a capture of an anchor object); and improvements to registration of point clouds from a scan, or from multiple scans, with each other. The registration of multiple scans can be performed with or without "snap-in" functionality. Snap-in facilitates live registration of a new scan with another, existing scan during capture of the new scan. Further, in one or more embodiments, registration of multiple scans can include registering a first scan that is captured by a first scanner with a second scan that is captured by a second scanner. The scans can be from different types of scanners, such as FREESTYLE®, FOCUS®, etc. The registration of the multiple scans and/or point clouds can be done offline in one or more embodiments of the present invention. Alternatively, in one or more embodiments of the present invention, aligning the point clouds can be performed substantially in real time, as the data is being captured by the scanner 120.
Anchor objects can be defined during scanning, for example, when scanning an area, part of which does not have 3D geometry or textures that enable the scanner to orient itself in 3D space. For example, such an area can include a flat wall with few textures that is to be scanned by the scanner 120. In such cases, the scanner 120 can set an anchor object by defining a location in space where there are sufficient features or 3D geometry, scan the wall, and come back to scan the anchor object. The parts of the scan captured after returning to the anchor object do not suffer from the loss in tracking quality incurred when scanning the wall. The anchor objects are used for loop closure, which facilitates compensating for any loss in quality during the scanning. It should be noted that the above scenario is one example case, and that embodiments of the present disclosure can be used in various other scenarios with other anchor objects. Also, apart from a wall, there are other elements/objects that pose a challenge for scanning. As discussed herein, the location of the anchor object does not need to be a natural location (e.g., a corner), but rather may be a point in space that is defined by locations within the environment, such as the intersection of three unconnected planes that define a point in space.
The anchor object can be selected by an operator manually in one or more embodiments. Alternatively, in one or more embodiments, the anchor object is automatically selected by the scanner 120 based on identification of a natural feature or object from a predetermined list. For example, the scanner 120 continuously monitors the captured frames to detect the presence of one or more natural features from a list of features that have been identified as those that can be used to identify anchor objects. Examples of features that can be used for identifying anchor objects include: planes, lines, 3D points, markers of different types (coded marker, checkerboards, QR code, etc.), projected light patterns, natural texture features, spheres and other 3D objects, columns, and combinations thereof. In yet other embodiments, the anchor object can be detected semi-automatically, where the scanner 120 generates and provides feedback that identifies an object/feature in the frame that is being captured as a potential anchor object. The operator can subsequently select the feature as an anchor object.
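As a non-limiting illustration of one such detection, the following sketch computes the virtual anchor point defined by three detected planes, assuming a plane detector (e.g., a RANSAC-based fit) has already supplied each plane's unit normal and offset; the function name and the minimum-angle threshold are illustrative assumptions.

```python
import numpy as np


def intersect_three_planes(normals, offsets, min_angle_deg=15.0):
    """Return the virtual intersection point of three planes n_i . x = d_i,
    or None if any pair of planes is too close to parallel for the point to
    be well defined (cf. the minimum-angle requirement discussed below).

    normals: (3, 3) array of unit plane normals
    offsets: length-3 array of plane offsets d_i
    """
    normals = np.asarray(normals, dtype=float)
    offsets = np.asarray(offsets, dtype=float)
    min_cos = np.cos(np.radians(min_angle_deg))
    for i in range(3):
        for j in range(i + 1, 3):
            # |cos| close to 1 means the two normals are nearly parallel.
            if abs(np.dot(normals[i], normals[j])) > min_cos:
                return None
    # Solve the 3x3 linear system N x = d for the intersection point x.
    return np.linalg.solve(normals, offsets)
```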
In one or more embodiments of the present invention, the anchor objects are used to perform a loop closure algorithm to determine and compensate for any error (“drift”) that can accumulate as the scanner 120 travels along a trajectory/path.
For example, the states of the scanner 120 at successive timestamps can be represented as nodes of a graph, and the measured relative transforms between pairs of states can be represented as edges, forming a pose graph over which the loop closure is computed. A visualization of the resulting graph is provided in the drawings.
The scanner 120 at timestamp $t$ has state $x_t = [p^T, \psi]^T$, where $p$ is a 2D vector that represents the position in the plane and $\psi$ is the orientation in radians. The measurement of the relative transform between the scanner states at two timestamps $a$ and $b$ is given as $z_{ab} = [\hat{p}_{ab}^T, \hat{\psi}_{ab}]^T$.

The error between the measurement and the predicted measurement is computed as:

$$e_{ab} = \begin{bmatrix} R_a^T (p_b - p_a) - \hat{p}_{ab} \\ \psi_b - \psi_a - \hat{\psi}_{ab} \end{bmatrix}$$

Here, the orientations can be normalized to the range $[-\pi, \pi)$ in one or more embodiments, and $R_a$ is the rotation matrix given by:

$$R_a = \begin{bmatrix} \cos\psi_a & -\sin\psi_a \\ \sin\psi_a & \cos\psi_a \end{bmatrix}$$
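A minimal sketch of this error computation follows, assuming scanner states are stored as [px, py, psi] arrays; the function name is illustrative.

```python
import numpy as np


def pose_error(x_a, x_b, z_ab):
    """Residual between the measured relative transform z_ab = [p_hat, psi_hat]
    and the transform predicted from two scanner states x = [px, py, psi]."""
    p_a, psi_a = x_a[:2], x_a[2]
    p_b, psi_b = x_b[:2], x_b[2]
    p_hat, psi_hat = z_ab[:2], z_ab[2]
    c, s = np.cos(psi_a), np.sin(psi_a)
    R_a = np.array([[c, -s], [s, c]])
    e_p = R_a.T @ (p_b - p_a) - p_hat
    # Normalize the angular error to the range [-pi, pi).
    e_psi = (psi_b - psi_a - psi_hat + np.pi) % (2 * np.pi) - np.pi
    return np.array([e_p[0], e_p[1], e_psi])
```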
The same portion of the environment can be scanned from distinct positions. The coordinates of a virtual point, i.e., a point in a first point cloud, that is captured from a first position are compared with the coordinates of the same virtual point in a second point cloud that is captured from a second position. The error, i.e., the difference in the coordinates, can be computed and used for loop closure. It should be noted that the above calculations are one possible technique for performing loop closure, and that in one or more embodiments of the technical solutions described herein, different calculations can be used. For example, odometry-based calculations can be used for performing the loop closure in some embodiments.
Embodiments herein facilitate performing the loop closure using the anchor objects that are detected using locations in the environment as described herein. The relative observation of an anchor object from the scanner 120 delivers accurate position information and can correct the position of the scanner 120 in the absolute world, removing absolute inaccuracies accumulated during the mapping process. The more anchor objects that are observed (with good accuracy), the better the position accuracy of the scanner 120 and, consequently, the absolute accuracy of the point cloud 130 scanned by the scanner 120. It should be noted that, as used herein, "absolute accuracy" is the accuracy of measurements of a scanned point cloud compared to a ground truth. For example, suppose a side wall of a building has a real length of 100 m, but the side wall measures 101.1 m when measured by the scanner 120. In this case, there is an absolute error of 1.1 m over the 100 m distance. Such errors in the scan positions are mitigated using loop closure.
It should be noted that for multiple planes to be used to identify an anchor object, the planes need to have a minimum angle with respect to one another, for example, for their intersection point to be used as the anchor object. While orthogonal planes are depicted in the examples in the figures herein, in other embodiments, the planes may not be orthogonal with respect to each other. Also, although intersections of planes are depicted and described as the anchor object in the embodiments described herein, in other embodiments, examples of features that can be used for identifying anchor objects include: planes, lines, 3D points, markers of different types (coded marker, checkerboards, QR code, etc.), projected light patterns, natural texture features, spheres and other 3D objects, and combinations thereof.
Referring to the flowchart of a method 2200 for capturing and aligning point clouds using anchor objects, the method includes detecting, by the scanner 120, at least one anchor object 2320 in one or more captured frames. In an embodiment, the detection of the anchor object 2320 is automatic.
Alternatively, in an embodiment, the detection of the anchor object 2320 can be semi-automatic. Alternatively yet, in another embodiment, the detection can be manual. For example, the operator points the scanner 120 towards an area with three planes, and selects a user interface element to indicate that the anchor object 2320 is being selected. The system 100 defines the virtual intersection point by finding the intersection of three planes. The detection of the planes can be automatic upon the operator providing the user interface command. In another embodiment, the detection of the planes in the frame is done by the operator manually by selecting the three planes in the frame via the user interface.
The scanner 120 continues to capture point clouds in multiple frames at multiple other scan positions and can capture the anchor object 2320, at block 2204. In one or more examples, the scanner 120 saves the positions, i.e., coordinates, of the anchor objects in a data structure such as a list of positions. Every position in the data structure is directly linked to the data structure that is used to store the measurements of the corresponding portion of the environment.
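A minimal sketch of such bookkeeping, under the assumption that map portions are linked to anchor positions by index, follows; the names are illustrative.

```python
# Illustrative bookkeeping only; variable and function names are assumptions.
anchor_positions = []  # anchor-object coordinates, in capture order
scan_portions = []     # measurement data of the corresponding map portions


def record_anchor(anchor_xyz, portion):
    """Save an anchor position and link it, by index, to the portion of the
    environment data captured around it; the index enables a constant-time
    lookup when the loop closure is later applied."""
    anchor_positions.append(anchor_xyz)
    scan_portions.append(portion)
    return len(anchor_positions) - 1
```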
The same anchor object 2320 can be viewed from multiple positions in the map, and accordingly, can be captured in multiple sets of frames and scans by the scanner 120. The presence of the same anchor object in multiple sets of frames and scans, which are captured by the scanner 120, is referred to as instances of the anchor object. Accordingly, multiple sets of frames captured from different positions include separate instances of the anchor object 2320.
That is, in a second frame, captured at a time interval t2, a second instance of the anchor object 2320 can be captured. The second instance can be captured from a second location of the scanner 120. In one or more embodiments of the present invention, the second instance of the anchor object 2320 is captured manually, automatically, or semi-automatically. In one or more embodiments of the present invention, the scanner device 120 triggers the capture of the second instance of the anchor object 2320 based on one or more anchor-recording criteria, at block 2203. Triggering the capture of the second instance of the anchor object 2320 can include providing feedback to the scanner device 120 to automatically reorient itself to capture the second instance. Alternatively, the feedback can be communicated to the operator to cause such a reorientation and capture of the second instance.
An anchor-recording criterion is based at least in part on one or more of the following: the time difference between the time interval t1 and the time interval t2; a number of tracking features that the scanner 120 has captured/recorded so far; a number of anchor objects that the scanner 120 has captured/recorded so far; the age of the anchor object 2320; a number of 3D points that the scanner 120 has captured in the set of frames or in the point cloud; the linear and rotational velocity and acceleration of the scanner 120; and the spatial separation between the first and the second part of the point cloud, where the first part includes the first instance of the anchor object 2320 and the second part includes the second instance of the anchor object 2320. The age of an anchor object 2320 indicates the number of consecutive frames that have passed since the anchor object 2320 was detected. Typically, the greater the age, the more helpful it is to re-capture the anchor object for a good point cloud alignment.
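The following sketch combines these criteria into a single trigger; every threshold value below is an illustrative assumption, not a value prescribed by this disclosure.

```python
def should_recapture_anchor(dt_seconds, n_tracking_features, n_anchors,
                            anchor_age_frames, n_points, speed,
                            spatial_separation):
    """Return True when an anchor object should be (re)captured; all
    threshold values are illustrative assumptions."""
    return (
        dt_seconds > 30.0             # long gap between intervals t1 and t2
        or n_tracking_features < 50   # tracking is becoming feature-poor
        or n_anchors == 0             # no anchor recorded yet
        or anchor_age_frames > 900    # anchor not re-observed for many frames
        or n_points < 10_000          # sparse point cloud so far
        or speed > 1.5                # fast motion increases drift (m/s)
        or spatial_separation > 10.0  # cloud parts are far apart (meters)
    )
```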
In one or more embodiments of the present invention, the movement of the scanner 120, particularly when the scanner 120 is automatically controlled, is guided toward the second instance of the anchor object 2320 based on the anchor-recording criteria. Alternatively, or in addition, the feedback is displayed on a user interface of the scanner 120. Accordingly, the operator receives feedback indicating that an anchor object 2320 has to be captured to improve the quality of the ongoing scan. In one or more embodiments of the present invention, the feedback also includes guidance to an anchor object 2320 that can be scanned.
The scanner 120 can change orientation or distance from when the first instance of the anchor object 2320 was captured. Accordingly, points captured between the time intervals t1 and t2 can have different orientations and/or distances of the scanner with respect to the anchor object 2320. Further, the points captured during the first time interval t1 can have different orientations and/or distances with respect to the anchor object 2320 compared to the points captured during the second time interval t2.
Referring again to the flowchart of the method 2200, at least one correspondence is identified between the first instance and the second instance of the anchor object 2320.
The correspondence can be used to define a pose constraint of the scanner 120 in one or more embodiments of the present invention. The pose constraint can be defined using the positions and orientations of the instances of the anchor object 2320 and one or more parameters of the scanner 120, for example, orientation, center of gravity, or any other such measured value of the scanner 120. The one or more parameters used to define the constraint can depend on the type of the anchor object 2320. The constraint that is determined is in addition to other constraints, such as planes that are automatically identified during scanning.
There are many different types of constraints defined, e.g., based on texture features or planes that are detected in single frames. In one or more embodiments, the difference in position and orientation between the two instances of the anchor object 2320 in two separate sets of frames can lead to a constraint that sets this difference to zero. The measurement error is computed based on the same anchor object 2320 being viewed in the point cloud captured in two separate sets of frames.
The two separate sets of frames can be captured from distinct poses of the scanner 120. The scanner 120 can automatically identify that the same anchor object 2320 is being viewed again, and initiate the loop closure, in one or more embodiments. Alternatively, the operator can indicate to the scanner 120 that the same anchor object 2320 is being viewed, and to initiate loop closure. For example, specific gestures or user interface elements can be used by the operator to initiate the loop closure.
In one or more embodiments of the present disclosure, after moving the scanner 120 to position 1520 (or any other position), the operator selects a "select anchor object" instruction 1214 at the position 1520.
The difference 1530 between the recorded position of the previous instance of the anchor object 2320 and the present position of the anchor object 2320 is used as the error correction to update and correct the mapping positions, in one or more embodiments. Alternatively, or in addition, the error correction leads to a pose constraint to be used in a bundling algorithm.
The pose constraint(s) that is(are) determined are input into a bundling algorithm that provides a position and orientation estimate for each frame that is captured by the scanner 120, at block 2208. The bundling algorithm models and solves optimization problems, and can be used to solve non-linear least squares problems with bounds constraints. Given a set of measured image feature locations and correspondences, the goal of bundle adjustment is to find the 3D point positions and camera parameters that minimize the reprojection error. This optimization problem is usually formulated as a non-linear least squares problem, where the error is the squared L2 norm of the difference between the observed feature location and the projection of the corresponding 3D point on the image plane of the camera.
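As a non-limiting sketch of how pose constraints could be fed to such a solver, the following uses a generic non-linear least-squares routine (scipy.optimize.least_squares) standing in for the bundling algorithm, reusing the pose_error residual sketched earlier; a full bundle adjustment would additionally include reprojection residuals of the 3D points.

```python
import numpy as np
from scipy.optimize import least_squares

# pose_error(x_a, x_b, z_ab) is the residual function sketched earlier.


def optimize_poses(initial_poses, constraints):
    """initial_poses: (N, 3) array of [px, py, psi] scanner states.
    constraints: list of (a, b, z_ab) relative-transform measurements,
    including anchor-derived loop-closure constraints whose measured
    difference between the two instances is zero."""
    initial_poses = np.asarray(initial_poses, dtype=float)
    n = len(initial_poses)

    def residuals(flat):
        poses = flat.reshape(n, 3)
        res = [poses[0] - initial_poses[0]]  # gauge: fix the first pose
        for a, b, z_ab in constraints:
            res.append(pose_error(poses[a], poses[b], np.asarray(z_ab)))
        return np.concatenate(res)

    result = least_squares(residuals, initial_poses.ravel())
    return result.x.reshape(n, 3)
```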
The pose constraints are input to the bundling algorithm as parameters, for example, by storing the pose constraints in an electronic file, which is then input to the bundling algorithm. Alternatively, the pose constraints are input as parameters to one or more function calls in a computer program that implements the bundling algorithm. The bundling algorithm can be accessed via an application programming interface, in one or more embodiments. The bundling algorithm itself can be implemented using techniques that are known or will be developed.
Embodiments herein facilitate identifying and providing inputs to the bundling algorithm. The correction that the bundling algorithm provides is based on the type of anchor object, because different degrees of freedom can be corrected with different types of anchor objects. For example, using one plane as the anchor object defines 3 degrees of freedom, whereas using three planes to define an anchor object defines all 6 degrees of freedom.
As part of the loop closure, the method 2200 further includes using the measurement error 1530 to correct the frames captured by the scanner 120, at block 2210. The portion of the point cloud 130 that has been scanned and stored since capturing the anchor object 2320 in the first frame is updated using the measurement error 1530, in one or more embodiments. In one or more examples, a loop closure operation is executed on the point cloud 130, and parts of the point cloud 130 are corrected in order to match the real pose, which is the starting position 1510, with the estimated pose, which is the different position 1520. The loop closure algorithm calculates a displacement for each part of the point cloud 130 that is shifted using the bundling algorithm. It should be appreciated that while embodiments herein illustrate the loop closure in two dimensions, the loop closure may be performed in three dimensions as well.
In one or more examples, the scanner 120 determines the anchor objects linked to each portion of the point cloud 130. In one or more examples, a lookup is performed over the data structure that saves the list of anchor objects. The lookup is a constant-time operation, such as an array lookup. The scanner 120 applies the displacement vector for a portion of the point cloud 130 to the corresponding scan positions saved in the data structure and saves the resulting displaced (or revised) scan positions back into the data structure. The scanner 120 computes displaced scan positions for each of the saved scan positions 1610 in the data structure. The procedure can be repeated every time the loop closure algorithm is applied.
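A minimal sketch of this correction step follows, assuming each saved scan position carries the index of the point-cloud portion it belongs to; the array names are illustrative.

```python
import numpy as np


def apply_loop_closure(scan_positions, portion_index, displacements):
    """Apply the per-portion displacement vectors computed by the loop
    closure to the saved scan positions and return the revised positions.

    scan_positions: (N, 3) saved scan positions
    portion_index:  (N,) index of the point-cloud portion of each position
    displacements:  (M, 3) displacement vector for each portion
    """
    scan_positions = np.asarray(scan_positions, dtype=float)
    portion_index = np.asarray(portion_index, dtype=int)
    displacements = np.asarray(displacements, dtype=float)
    # Each scan position is shifted by the displacement of its portion.
    return scan_positions + displacements[portion_index]
```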
The displaced scan positions represent corrected scan positions of the scans that can be used directly, without applying further computationally expensive point cloud registration algorithms. The accuracy of the scan positions 1610 depends on the sensor accuracy of the scanner 120.
Alternatively, or in addition, the instances of the anchor object 2320 are used to align point clouds, such as aligning the first part of the point cloud captured at t1 that includes the first instance and the second part of the point cloud captured at t2 that includes the second instance. The alignment of the point cloud is done for one or multiple degrees of freedom based on the anchor object 2320. The alignment of the point clouds is performed based on alignment of the instances of the anchor object 2320.
In one or more embodiments of the present invention, the alignment of the point cloud is based on at least one quality parameter assigned to at least one of the detected instances of the anchor object 2320.
It should be noted that additional anchor objects can be scanned in one or more time intervals, and that the examples described herein describe using only a single anchor object 2320 for explanation purposes. For example, in the time interval t1, along with the anchor object 2320, another anchor object is detected and scanned. The other anchor object can be any other type of anchor object. The other anchor object can be used to align the point cloud(s) in place of, or in addition to the anchor object 2320. In another example, a first instance of the anchor object 2320 is captured at time interval t1, a second instance of the anchor object 2320 is captured at time interval t2, a third instance at the time interval t3, and so on. A combination of two or more of these instances can be used to align parts of the point cloud, or one or more point clouds that are scanned at the respective time intervals.
For example, the third instance of the anchor object 2320 can be detected and included in a second point cloud, different from the first point cloud that included the first and second instances of the anchor object 2320. The method described herein can then be used to align the second point cloud with the first point cloud based on the correspondence between the instances of the anchor object 2320.
In one or more embodiments of the present invention, the first and second point clouds are captured by the same scanner 120. The alignment can be performed in substantially real time while the scanner 120 is scanning the environment. Alternatively, the alignment can be performed offline, after the scanning is completed. In other embodiments, the first and second point clouds are captured by two separate scanners 120. In one or more embodiments of the present invention, the two separate scanners are different types of scanners, for example, a FOCUS® scanner and a FREESTYLE® scanner. The alignment of the point clouds can be performed using snap-in functionality in one or more embodiments of the present invention. For example, the snap-in functionality allows the operator to visualize the FOCUS® point cloud while recording a FREESTYLE® point cloud and to align the two point clouds during the recording process, e.g., using one or more anchor objects.
Further yet, the instances of the anchor object 2320 can be used for colorizing the point cloud that is captured. In one or more embodiments of the present invention, the instances of the anchor object 2320 are visualized in the user interface during capture of the point clouds and/or during alignment of the point clouds. An instance of the anchor object 2320 can be visualized using a specific icon, cursor, or any other notation. Further, different instances of the same anchor object 2320 can be depicted using a common notation, e.g., all instances of a first anchor object are denoted using a first icon, and all instances of a second anchor object are denoted using a second icon. Alternatively, or in addition, instances of anchor objects are denoted using the same icon regardless of the anchor object to which they belong.
In one or more embodiments of the present invention, the visualization of an instance of the anchor object 2320 is modified/revised as the alignment process is performed. For example, a color of the visualization of the instance changes (e.g., from green to blue) as the alignment is performed. In one or more embodiments of the present invention, the visualization attribute is changed to notify that the alignment has been successful.
It should be noted that "landmarks," which are used in existing systems, can also be used to define anchor objects as used in embodiments herein. Typically, a landmark is a natural feature (e.g., an actual intersection of three edges) or an artificial target that is placed in the environment, and is not a virtual point in the point cloud like the anchor object.
By using the anchor object(s), loop closure can be performed during live scanning as described herein. For example, when the area is rescanned, the data from scans are aligned between two (or more) captures of the anchor object 2320. In one or more embodiments, immediately after capturing scans of an area in the vicinity of the anchor object 2320 and returning to the anchor object 2320, the points in the scans captured between the two captures of the anchor object 2320 are aligned. Such alignment can be done prior to viewing the point clouds that are captured. Embodiments herein can, accordingly, facilitate tracking re-finding for the scanner 120. The operator can re-orient the scanner 120 to identify its position in the environment by re-scanning the anchor object 2320 and determining the pose of the scanner 120 using the bundling algorithm as described herein.
The anchor objects that are selected and recorded can be of various types and qualities. It is possible to use a quality parameter of an anchor object to define the weight of that anchor object with respect to other anchor objects or bundling constraints. The quality parameter indicates a noise level and/or a confidence level associated with an anchor object. For example, if the anchor object is captured in a frame with a noise level below a predetermined threshold, the quality parameter associated with that instance of the anchor object is higher than the quality parameter assigned to an instance of the anchor object in a frame that has a noise level above the predetermined threshold. The noise level can be caused by several factors, including ambient lighting, reflectance of surface(s), stability of the scanner, ranging noise, and so on. The quality parameter can be assigned automatically by the scanner based on the noise level in the frame. Alternatively, the quality parameter is user defined by the operator. In one or more embodiments, only instances of the anchor object with at least a predetermined level of the quality parameter are used for performing loop closure.
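One way such a quality parameter could map to a constraint weight is sketched below; the mapping, the threshold values, and the names are illustrative assumptions.

```python
def anchor_weight(noise_level, noise_scale=0.005, min_quality=0.2):
    """Map a per-instance noise estimate (e.g., ranging noise in meters) to
    a weight for the corresponding bundling constraint; instances below the
    minimum quality are dropped. All values are illustrative assumptions."""
    quality = 1.0 / (1.0 + noise_level / noise_scale)
    return quality if quality >= min_quality else 0.0  # 0.0 = do not use
```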
Also, different types of anchor objects define different constraints (e.g. a combination of three planes defines all 6 degrees of freedom, two planes only 5 degrees of freedom, one plane only 3 degrees of freedom). Examples of anchor objects include planes, lines, 3D points, markers of different types (FREESTYLE® marker, checkerboards, QR code, etc.), projected light patterns, natural texture features, and possible combinations thereof.
It should be noted that although the method 2200 is described to capture one anchor object 2320, in other embodiments, multiple anchor objects can be captured and used as described herein.
Further, if the same anchor object is detected in multiple scans, the anchor object is used as a constraint for matching the multiple scans, for example, during stitching of the scans. The anchor objects are also used for initialization to generate submaps consisting of multiple sets of matched data. This matching may be implemented as a nonlinear optimization with a cost function. In this method, multiple scans are captured as described herein.
Additionally, the anchor objects can be reused as indicators for loop closure in cases where the anchor objects can be identified globally, e.g., line/wall segments identified through their lengths. If multiple such anchor objects are identified between two submaps, the loop closure can be evaluated using the timestamps and the anchor objects for the alignment of the multiple submaps.
In one or more embodiments of the present disclosure, the computing device 110 and/or a display (not shown) of the scanner 120 provides a live view of the point cloud 130 of the environment being scanned by the scanner 120 using the set of sensors 122. The point cloud 130 can be a 2D or 3D representation of the environment seen through the different sensors. The point cloud 130 can be represented internally as a grid map. A grid map is a 2D or 3D arranged collection of cells (2D) or voxels (3D), each representing an area of the environment. In one or more embodiments of the present disclosure, the grid map stores, for every cell/voxel, a probability indicating whether the cell area is occupied or not. Other representations of the point cloud 130 can be used in one or more embodiments of the present disclosure.
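A minimal sketch of such a 2D grid map with per-cell occupancy probabilities follows; the cell size, the update rule, and the sensor-model values are illustrative assumptions.

```python
import numpy as np


class OccupancyGrid:
    """2D grid map storing, per cell, the probability that the cell is
    occupied; 0.5 denotes an unobserved cell."""

    def __init__(self, width_cells, height_cells, cell_size=0.05):
        self.cell_size = cell_size  # meters per cell (illustrative)
        self.prob = np.full((height_cells, width_cells), 0.5)

    def update(self, x, y, occupied, p_hit=0.7, p_miss=0.4):
        """Binary Bayes update of the cell containing the point (x, y)."""
        i = int(y / self.cell_size)
        j = int(x / self.cell_size)
        p = self.prob[i, j]
        likelihood = p_hit if occupied else p_miss
        self.prob[i, j] = (p * likelihood) / (
            p * likelihood + (1.0 - p) * (1.0 - likelihood))
```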
Embodiments of the present disclosure facilitate improvements to results of loop closure, and consequently an improved scanning system for generating maps of an environment.
In an embodiment, the scanner 10 includes a sensor unit 320 with cameras and a projector, together with electronics 310 that include a circuit baseboard 312, which cooperate as described below.
Signals from the infrared (IR) cameras 301A, 301B and the registration camera 303 are fed from camera boards through cables to the circuit baseboard 312. Image signals 352A, 352B, 352C from the cables are processed by the computing module 330. In an embodiment, the computing module 330 provides a signal 353 that initiates emission of light from the laser pointer 305. A TE control circuit communicates with the TE cooler within the infrared laser 309 through a bidirectional signal line 354. In an embodiment, the TE control circuit is included within the SoC FPGA 332. In another embodiment, the TE control circuit is a separate circuit on the baseboard 312. A control line 355 sends a signal to the fan assembly 307 to set the speed of the fans. In an embodiment, the controlled speed is based at least in part on the temperature as measured by temperature sensors within the sensor unit 320. In an embodiment, the baseboard 312 receives and sends signals to buttons 2214, 2211, 2212 and their LEDs through the signal line 356. In an embodiment, the baseboard 312 sends over a line 361 a signal to an illumination module 360 that causes white light from the LEDs to be turned on or off.
In an embodiment, bidirectional communication between the electronics 310 and the electronics 370 is enabled by an Ethernet communications link 365. In an embodiment, the Ethernet link is provided by the cable 60. In an embodiment, the cable 60 attaches to the mobile PC 401 through the connector on the bottom of the handle. The Ethernet communications link 365 is further operable to provide or transfer power to the electronics 310 through the use of a custom Power over Ethernet (PoE) module 372 coupled to the battery 374. In an embodiment, the mobile PC 370 further includes a PC module 376, which in an embodiment is an Intel® Next Unit of Computing (NUC) processor. The NUC is manufactured by Intel Corporation, with headquarters in Santa Clara, Calif. In an embodiment, the mobile PC 370 is configured to be portable, such as by attaching to a belt and being carried around the waist or shoulder of an operator.
In an embodiment, a triangulation scanner includes a projector and a camera separated by a baseline distance, in which the projector emits a ray of light 511 from a point 516 on an illuminated projector pattern generator 512 onto an object surface 530, and the camera images the illuminated pattern.
The ray of light 511 intersects the surface 530 in a point 532, which is reflected (scattered) off the surface and sent through the camera lens 524 to create a clear image of the pattern on the surface 530 on a photosensitive array 522. The light from the point 532 passes in a ray 521 through the camera perspective center 528 to form an image spot at the corrected point 526. The position of the image spot is mathematically adjusted to correct for aberrations of the camera lens. A correspondence is obtained between the point 526 on the photosensitive array 522 and the point 516 on the illuminated projector pattern generator 512. As explained herein below, the correspondence may be obtained by using a coded or an uncoded pattern of projected light. Once the correspondence is known, the angles a and b that the projected and reflected rays make with the baseline may be determined, and the 3D coordinates of the point 532 may then be calculated by triangulation using the known baseline distance between the projector and the camera.
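A minimal sketch of that planar triangulation calculation follows, assuming the angles a and b are measured from the baseline toward the object; the function name and the frame convention are illustrative.

```python
import numpy as np


def triangulate(baseline, angle_a, angle_b):
    """Planar triangulation: given the baseline length between the projector
    and camera perspective centers and the angles a and b (radians) that the
    projected and reflected rays make with the baseline, return the (x, z)
    coordinates of the surface point in the baseline frame."""
    ta, tb = np.tan(angle_a), np.tan(angle_b)
    # The two rays meet where z = x * tan(a) = (baseline - x) * tan(b).
    x = baseline * tb / (ta + tb)
    z = x * ta
    return x, z
```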
Consider the situation in which three devices, for example two cameras and one projector, observe or project corresponding points P1, P2, and P3, where each device has a reference plane and each pair of devices shares a pair of epipoles. A correspondence among the points P1, P2, and P3 may then be checked for consistency as follows.
To check the consistency of the image point P1, intersect the plane P3-E31-E13 with the reference plane 860 to obtain the epipolar line 864. Intersect the plane P2-E21-E12 to obtain the epipolar line 862. If the image point P1 has been determined consistently, the observed image point P1 will lie on the intersection of the calculated epipolar lines 862 and 864.
To check the consistency of the image point P2, intersect the plane P3-E32-E23 with the reference plane 870 to obtain the epipolar line 874. Intersect the plane P1-E12-E21 to obtain the epipolar line 872. If the image point P2 has been determined consistently, the observed image point P2 will lie on the intersection of the calculated epipolar line 872 and epipolar line 874.
To check the consistency of the projection point P3, intersect the plane P2-E23-E32 with the reference plane 880 to obtain the epipolar line 884. Intersect the plane P1-E13-E31 to obtain the epipolar line 882. If the projection point P3 has been determined consistently, the projection point P3 will lie on the intersection of the calculated epipolar lines 882 and 884.
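These consistency checks can be expressed with standard epipolar geometry: the epipolar line that a point in one image induces in another is obtained from the fundamental matrix between the two devices, and the observed point must lie at the intersection of the two induced lines. A minimal sketch follows; the fundamental matrices F21 and F31 are assumed known from calibration, and the tolerance is an illustrative assumption.

```python
import numpy as np


def epipolar_consistent(p1, p2, p3, F21, F31, tol_px=1.0):
    """Check that an observed point p1 lies at the intersection of the
    epipolar lines induced in its image by corresponding points p2 and p3
    on the other two devices.

    p1, p2, p3: homogeneous image points (u, v, 1)
    F21, F31:   fundamental matrices mapping points in images 2 and 3
                to epipolar lines in image 1 (assumed from calibration)
    """
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    line_from_2 = F21 @ p2                   # epipolar line in image 1
    line_from_3 = F31 @ p3                   # second epipolar line in image 1
    x = np.cross(line_from_2, line_from_3)   # homogeneous line intersection
    if abs(x[2]) < 1e-12:
        return False                         # lines are parallel
    x = x / x[2]
    return np.linalg.norm(x[:2] - p1[:2] / p1[2]) <= tol_px
```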
The redundancy of information provided by using a 3D imager having three devices (such as two cameras and one projector) enables a correspondence among projected points to be established even without analyzing the details of the captured images and projected pattern features. Suppose, for example, that the three devices include two cameras and one projector. Then a correspondence among projected and imaged points may be directly determined based on the mathematical constraints of the epipolar geometry, as may be seen in the example described below.
By establishing correspondence based on epipolar constraints, it is possible to determine 3D coordinates of an object surface by projecting uncoded spots of light. An example of projection of uncoded spots is described below in relation to the scanner 900, in which a projector 910 projects uncoded spots of light through its perspective center onto an object 920, each spot corresponding to a point on a reference plane 914.
The point or spot of light 922 on the object 920 is projected as a ray of light 926 through the perspective center 932 of a first camera 930, resulting in a point 934 on the image sensor of the camera 930. The corresponding point 938 is located on the reference plane 936. Likewise, the point or spot of light 922 is projected as a ray of light 928 through the perspective center 942 of a second camera 940, resulting in a point 944 on the image sensor of the camera 940. The corresponding point 948 is located on the reference plane 946. In an embodiment, a processor 950 is in communication with the projector 910, first camera 930, and second camera 940. The processor determines a correspondence among points on the projector 910, first camera 930, and second camera 940. In an embodiment, the processor 950 performs a triangulation calculation to determine the 3D coordinates of the point 922 on the object 920. An advantage of a scanner 900 having three device elements, either two cameras and one projector or one camera and two projectors, is that correspondence may be determined among projected points without matching projected feature characteristics. In other words, correspondence can be established among spots on the reference planes 936, 914, and 946 even without matching particular characteristics of the spots. The use of the three devices 910, 930, 940 also has the advantage of enabling identifying or correcting errors in compensation parameters by noting or determining inconsistencies in results obtained from triangulation calculations, for example, between two cameras, between the first camera and the projector, and between the second camera and the projector.
It should be appreciated that while 3D coordinate data may be used for training, the methods described herein for verifying the registration of landmarks may be used with either two-dimensional or three-dimensional data sets.
Technical effects and benefits of the disclosed embodiments include, but are not limited to, increasing scan quality and a visual appearance of scans acquired by the 3D coordinate measurement device.
Turning now to a computer system 2100 that can implement one or more embodiments described herein: as shown in the accompanying block diagram, the computer system 2100 has one or more central processing units (processors) 2101, coupled via a system bus 2102 to a system memory 2103 and to various other components.
The computer system 2100 comprises an input/output (I/O) adapter 2106 and a communications adapter 2107 coupled to the system bus 2102. The I/O adapter 2106 may be a small computer system interface (SCSI) adapter that communicates with a hard disk 2108 and/or any other similar component. The I/O adapter 2106 and the hard disk 2108 are collectively referred to herein as a mass storage 2110.
Software 2111 for execution on the computer system 2100 may be stored in the mass storage 2110. The mass storage 2110 is an example of a tangible storage medium readable by the processors 2101, where the software 2111 is stored as instructions for execution by the processors 2101 to cause the computer system 2100 to operate, such as is described herein below with respect to the various figures. Examples of computer program products and the execution of such instructions are discussed herein in more detail. The communications adapter 2107 interconnects the system bus 2102 with a network 2112, which may be an outside network, enabling the computer system 2100 to communicate with other such systems. In one embodiment, a portion of the system memory 2103 and the mass storage 2110 collectively store an operating system, which may be any appropriate operating system, such as the z/OS or AIX operating system from IBM Corporation, to coordinate the functions of the various components.
Additional input/output devices are shown as connected to the system bus 2102 via a display adapter 2115 and an interface adapter 2116. In one embodiment, the adapters 2106, 2107, 2115, and 2116 may be connected to one or more I/O buses that are connected to the system bus 2102 via an intermediate bus bridge (not shown). A display 2119 (e.g., a screen or a display monitor) is connected to the system bus 2102 by the display adapter 2115, which may include a graphics controller to improve the performance of graphics-intensive applications and a video controller. A keyboard 2121, a mouse 2122, a speaker 2123, etc., can be interconnected to the system bus 2102 via the interface adapter 2116, which may include, for example, a Super I/O chip integrating multiple device adapters into a single integrated circuit. Suitable I/O buses for connecting peripheral devices such as hard disk controllers, network adapters, and graphics adapters typically include common protocols, such as the Peripheral Component Interconnect (PCI). Thus, as configured, the computer system 2100 includes processing capability in the form of the processors 2101, storage capability including the system memory 2103 and the mass storage 2110, input means such as the keyboard 2121 and the mouse 2122, and output capability including the speaker 2123 and the display 2119.
In some embodiments, the communications adapter 2107 can transmit data using any suitable interface or protocol, such as the internet small computer system interface, among others. The network 2112 may be a cellular network, a radio network, a wide area network (WAN), a local area network (LAN), or the Internet, among others. An external computing device may connect to the computer system 2100 through the network 2112. In some examples, an external computing device may be an external webserver or a cloud computing node.
It is to be understood that the block diagram of the computer system 2100 is not intended to indicate that the computer system 2100 is to include all of the components shown; rather, the computer system 2100 can include any appropriate fewer or additional components.
It will be appreciated that aspects of the present disclosure may be embodied as a system, method, or computer program product and may take the form of a hardware embodiment, a software embodiment (including firmware, resident software, micro-code, etc.), or a combination thereof. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable medium(s) having computer-readable program code embodied thereon.
One or more computer-readable medium(s) may be utilized. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In one aspect, the computer-readable storage medium may be a tangible medium containing or storing a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium, and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer-readable medium may contain program code embodied thereon, which may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. In addition, computer program code for carrying out operations for implementing aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
It will be appreciated that aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block or step of the flowchart illustrations and/or block diagrams, and combinations of blocks or steps in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Terms such as processor, controller, computer, DSP, FPGA are understood in this document to mean a computing device that may be located within an instrument, distributed in multiple elements throughout an instrument, or placed external to an instrument.
While the invention has been described in detail in connection with only a limited number of embodiments, it should be readily understood that the invention is not limited to such disclosed embodiments. Rather, the invention can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the invention. Additionally, while various embodiments of the invention have been described, it is to be understood that aspects of the invention may include only some of the described embodiments. Accordingly, the invention is not to be seen as limited by the foregoing description but is only limited by the scope of the appended claims.
This application claims the benefit of U.S. patent application Ser. No. 63/031,877, filed May 29, 2020, and U.S. patent application Ser. No. 63/066,445, filed Aug. 17, 2020, the disclosures of which are incorporated by reference herein in their entireties.