The subject matter disclosed herein relates to a laser scanner and in particular to a laser scanner able to acquire and display multiple parameters related to a scanned object.
Laser scanners are a type of device that utilizes a light source to measure and determine the three-dimensional coordinates of points on the surface of an object. Laser scanners are typically used for scanning closed or open spaces such as interior areas of buildings, industrial installations and tunnels. Laser scanners are used for many purposes, including industrial applications and accident reconstruction applications. A laser scanner can be used to optically scan and measure objects in a volume around the scanner through the acquisition of data points representing objects within the volume. Such data points are obtained by transmitting a beam of light onto the objects and collecting the reflected or scattered light to determine the distance, two angles (i.e., an azimuth angle and a zenith angle), and optionally a gray-scale value. This raw scan data is collected, stored and sent to a processor or processors to generate a three-dimensional image representing the scanned area or object. In order to generate the image, at least three values are collected for each data point. These three values may include the distance and two angles, or may be transformed values, such as the x, y, z coordinates.
One type of laser scanner (LS) can scan a nearly complete spherical volume in a short period of time. By moving an LS through a scan area, an accurate 3D point cloud may be captured; however, no color data or spherical image is recorded.
A laser scanner may also include a camera mounted on or integrated into the laser scanner for gathering digital images of the environment. In addition, the digital camera images may be transmitted to a processor to add color to the scanner image. In order to generate a color scanner image, at least six values (three positional values, such as x, y, z; and three color values, such as red, green and blue, or "RGB") are collected for each data point.
Accordingly, while existing laser scanners are suitable for their intended purposes, what is needed is a laser scanner that has certain features of embodiments of the present invention.
According to an exemplary embodiment, a three-dimensional (3D) measuring device may include a spherical laser scanner (SLS) structured to generate a 3D point cloud of an area; a plurality of cameras, each camera of the plurality of cameras being structured to capture a color photographic image; a controller operably coupled to the SLS and the plurality of cameras; and a base on which the SLS is mounted. The controller may include a processor and a memory. The controller may be configured to add color data to the 3D point cloud based on the color photographic images captured by the plurality of cameras. The plurality of cameras may be provided on the base and spaced apart in a circumferential direction around a pan axis of the SLS. The plurality of cameras may be fixed relative to the pan axis.
According to an exemplary embodiment, a base for use with a spherical laser scanner (SLS) structured to generate a 3D point cloud may include a base body structured to mount the spherical laser scanner; a plurality of cameras mounted on the base body and spaced apart in a circumferential direction around a pan axis of the SLS, each of the plurality of cameras being structured to capture a color photographic image; and a controller operably coupled to the SLS and the plurality of cameras. The controller may include a processor and a memory. The controller may be configured to add color data to the 3D point cloud based on color photographic images captured by the plurality of cameras.
According to an exemplary embodiment, a method for measuring three-dimensional (3D) data of an area may include generating, with a spherical laser scanner (SLS), a 3D point cloud of the area; capturing, with a plurality of cameras, color photographic images of the area; and adding color data to the 3D point cloud based on the color photographic images. The plurality of cameras may be mounted on a base and spaced apart in a circumferential direction around a pan axis of the SLS.
According to an exemplary embodiment, a method for measuring three-dimensional (3D) data of an area may include providing a spherical laser scanner (SLS) and a camera mounted on a moveable carrier; moving the carrier along a movement path within the area; while the carrier is being moved along the movement path, generating, with the SLS, a 3D point cloud of the area; capturing, with the camera, a plurality of color photographic images at a predetermined distance interval along the movement path; and adding color data to the 3D point cloud based on the plurality of color photographic images.
These and other advantages and features will become more apparent from the following description taken in conjunction with the drawings.
The subject matter, which is regarded as the invention, is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
The detailed description explains embodiments of the invention, together with advantages and features, by way of example with reference to the drawings.
Laser scanning devices, such as SLS 20, for example, are used to acquire data describing geometry, such as surfaces, within an environment. These devices use a coherent light source to optically scan the environment and receive reflected beams of light. By knowing the direction and orientation of a beam of light and the amount of time it takes to transmit and receive the light beam, the scanning device can determine the 3D coordinates of the surface point from which the light reflected. A controller operably coupled to SLS 20 can aggregate the 3D coordinates from a plurality of surface points into a 3D point cloud of the area being scanned. It should be appreciated that while embodiments herein may refer to the light emitted by the scanner as a "laser", this is for example purposes and the claims should not be so limited. In other embodiments, the light may be generated by another light source, such as but not limited to superluminescent light emitting diodes (LEDs).
SLS 20 may include a scanner body 22 and a mirror 24 structured to steer light emitted from the coherent light source. SLS 20 may further include a motor that rotates the scanner body 22 around an azimuthal or pan axis 40. In the illustrated embodiment, the body 22 has a spherical or semi-spherical shape that allows the rotation of the body 22 about the axis 40 while providing a desired level of IP protection (e.g. IEC standard 60529). Additionally, SLS 20 may include an additional motor, galvanometer, or similar device to rotate mirror 24 around a zenith or tilt axis 42. Sensors such as rotary encoders or other suitable devices may be used to measure the azimuth angle and zenith angle as SLS 20 scans each point in the area. The combination of rotation around pan axis 40 and rotation around tilt axis 42 allows the SLS 20 to scan substantially the entire volume in a 360 degree arc around SLS 20.
Base 30 may include one or more two-dimensional (2D) photographic cameras 32 capable of capturing a color photographic image. It should be appreciated that the cameras 32 are rotationally fixed relative to the body 22. Examples of possible camera models used as camera 32 may include, but are not limited to, model MC124CG-SY (12.4 megapixel) manufactured by XIMEA Corp., model CB500CG-CM (47.5 megapixel) manufactured by XIMEA Corp., or model acA4024-29uc (12 megapixel) manufactured by Basler AG. It will be understood that other manufacturers or models may be used for camera 32. Additionally, in at least an embodiment, camera 32 may be a global shutter camera. It should be appreciated that while embodiments herein refer to a "photographic" camera, this may be any suitable image sensor and associated optical assembly configured to acquire digital images within the field of view.
In an exemplary embodiment in which more than one camera 32 is included, the cameras 32 may be spread out in a plane perpendicular to the pan axis 40 of SLS 20. For example, cameras 32 may be arranged spaced apart in a circumferential direction around an outer edge of base 30 (see
In an exemplary embodiment, cameras 32 may be positioned and oriented so that SLS 20 is not visible in the captured photographic images. Additionally, cameras 32 (and/or their lenses) may be of a sufficiently small size so as to not block the laser beam of SLS 20 as a scan area is scanned. In situations where the floor is of low interest to the operator, some blocking of SLS 20 by cameras 32 may be acceptable.
Scanning device 10 may further include a controller 50 that includes a processor 52 and a memory 54 (
Controller 50 may be configured to control the operation of SLS 20. For example, controller 50 may control rotation of scanner body 22 around pan axis 40 and the rotation of mirror 24 around tilt axis 42. Additionally, controller 50 may receive information such as pan angle, tilt angle, and distance to surface and/or time of flight information, and processor 52 may use this information to calculate a 3D coordinate of the scanned point. The 3D coordinate may be stored in memory 54 along with 3D coordinates of other points to generate a 3D point cloud of the area being scanned. Scan points may also be associated with identifying information such as a timestamp or the location of the SLS 20 within the coordinate system to facilitate integration with scans taken at other locations or with color photographic images captured by cameras 32.
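By way of illustration, the coordinate calculation performed by processor 52 may resemble the following minimal sketch, which converts one (distance, pan angle, tilt angle) measurement into Cartesian coordinates. The function name and the convention that the tilt angle is measured from the vertical pan axis are assumptions for illustration only, not a description of any actual firmware.

```python
import math
import time

def scan_point_to_xyz(distance, pan_angle, tilt_angle):
    """Convert one scan measurement into Cartesian coordinates.

    pan_angle:  azimuth about pan axis 40, in radians
    tilt_angle: zenith angle about tilt axis 42, measured from
                the (vertical) pan axis, in radians
    """
    x = distance * math.sin(tilt_angle) * math.cos(pan_angle)
    y = distance * math.sin(tilt_angle) * math.sin(pan_angle)
    z = distance * math.cos(tilt_angle)
    return x, y, z

# Each point may be stored with identifying information such as a
# timestamp, as described above.
point_cloud = []
point_cloud.append((*scan_point_to_xyz(2.5, 0.1, 1.2), time.time()))
```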
Controller 50 may also be configured to control operation of cameras 32. For example, controller 50 may operate cameras 32 to capture a color photographic image, which may be stored in memory 54. The stored color photographic images may also be associated with identifying information such as a timestamp or the location of the SLS within the coordinate system to facilitate integration of the photographs with scan data.
Controller 50 may also be configured to add color data to the 3D point cloud captured by SLS 20 based on the color photographic images captured by cameras 32. For example, because the 3D coordinates of each point of the 3D point cloud in a fixed coordinate system are known, and because the position and orientation of cameras 32 relative to SLS 20 are fixed and known, controller 50 can assign coordinates to the captured photographic images within the coordinate system of the 3D point cloud captured by SLS 20. In other words, in an exemplary embodiment, SLS 20 may be recording a point cloud while being pushed on a small cart, mobile platform, or carrier (see
As described herein, in at least an embodiment, controller 50 may control the camera to capture a plurality of sequential images, and each of the plurality of sequential images may overlap with temporally adjacent sequential images. Using photogrammetric principles, controller 50 may use these overlapping images to generate an image-based 3D point cloud. Subsequently, controller 50 may match points of the 3D point cloud generated by SLS 20 to points of the image-based 3D point cloud. Controller 50 may then attribute color data from the points in the image-based 3D point cloud to corresponding points in the 3D point cloud generated by SLS 20. Additionally, controller 50 may use a feature matching algorithm to fine-tune the position and orientation of a camera within the point cloud based on identified features. Color information can be transferred at the position of the identified features.
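A minimal sketch of the color-attribution step is shown below, assuming both point clouds are already registered in a common coordinate system and held as NumPy arrays. The array names and the distance threshold are illustrative assumptions; the description above does not prescribe a particular matching implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_color(scanner_xyz, image_xyz, image_rgb, max_dist=0.05):
    """Attribute RGB values from an image-based point cloud to the
    nearest points of the scanner-generated cloud, skipping matches
    farther apart than max_dist (meters)."""
    tree = cKDTree(image_xyz)                  # index the image-based cloud
    dist, idx = tree.query(scanner_xyz, k=1)   # nearest neighbor per point
    colors = np.zeros((len(scanner_xyz), 3), dtype=np.uint8)
    valid = dist <= max_dist
    colors[valid] = image_rgb[idx[valid]]
    return colors, valid
```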
As seen in
When scanning device 10 or scanning device 12 is provided on carrier 60, controller 50 may be configured to control the camera to capture a plurality of photographic images at a predetermined fixed distance interval as the SLS and the camera move through the scan area. For example, controller 50 may control camera 32 to take a photographic image for every 1 meter travelled by carrier 60. It will be understood, however, that the 1 meter interval is exemplary only and the capture interval may be varied according to the specific needs of the scan job being performed. Further, it should be appreciated that in other embodiments, the triggering of the acquisition of images may be based on another parameter, such as the acquisition of a predetermined number of data points by the SLS 20 for example.
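A runnable sketch of the distance-interval trigger follows; the odometry readings are simulated, and the print statement stands in for whatever camera interface the device exposes.

```python
def should_capture(traveled, last_capture, interval=1.0):
    """Return True once the carrier has moved `interval` meters
    (1 meter is the example interval above) since the last capture."""
    return traveled - last_capture >= interval

# Example: odometry readings (meters traveled) as carrier 60 moves.
last = 0.0
for traveled in [0.2, 0.7, 1.1, 1.6, 2.3]:
    if should_capture(traveled, last):
        print(f"capture image at {traveled:.1f} m")  # stand-in for camera trigger
        last = traveled
```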
In order to determine the position of SLS 20 and camera(s) 32 as carrier 60 moves through the scan area, SLS 20 and controller 50 may perform a simultaneous localization and mapping methodology to determine a position of SLS 20 within the coordinate system. Because the position and orientation of cameras 32 relative to SLS 20 are known, the position of SLS 20 determined by the simultaneous localization and mapping calculation can be used to determine the position of cameras 32 when the photographic images are captured. Additionally, cameras 32 may also be used for tracking, either in real-time during the scanning or as a refinement step in post-processing. In some embodiments, the device 10 may include additional sensors, such as an inertial measurement unit or encoders on the wheels of the carrier 60, for example. The data from these additional sensors may be fused with the photographic images to localize the carrier 60 and SLS 20 within the environment. Controller 50 may also use interpolation methods to add points to the 3D point cloud in post-processing. In an exemplary embodiment, camera 32 may have a higher resolution than the 3D point cloud acquired by SLS 20. In this case, the captured photographic images may be used to assist in the interpolation to add points to the 3D point cloud.
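Because the scanner-to-camera transform is fixed, the camera pose at each capture can be composed from the localization estimate of the scanner pose. A sketch using 4x4 homogeneous transforms follows; the matrix names and offset values are illustrative assumptions.

```python
import numpy as np

def camera_pose(T_world_scanner, T_scanner_camera):
    """Compose the world pose of a camera from the scanner pose
    estimated by simultaneous localization and mapping and the fixed,
    calibrated scanner-to-camera transform (both 4x4 homogeneous)."""
    return T_world_scanner @ T_scanner_camera

# Example: scanner at (1, 2, 0) with no rotation; camera mounted
# 0.1 m below the scanner on base 30 (values illustrative).
T_ws = np.eye(4); T_ws[:3, 3] = [1.0, 2.0, 0.0]
T_sc = np.eye(4); T_sc[:3, 3] = [0.0, 0.0, -0.1]
print(camera_pose(T_ws, T_sc)[:3, 3])   # camera position in world frame
```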
In at least an embodiment, for a desired surface texture completeness, the 3D point cloud generated by SLS 20 may be triangulated into a mesh. The mesh may be a local mesh, i.e., per scanned room or per object in the room, or, alternatively, a global mesh of the scanned area. The full resolution texture may be attached to the mesh.
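The description does not prescribe a triangulation method; as one possible approach, the sketch below uses the open-source Open3D library's Poisson surface reconstruction to build a mesh from the point cloud.

```python
import numpy as np
import open3d as o3d

def mesh_from_cloud(points_xyz):
    """Triangulate an Nx3 point cloud into a surface mesh."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points_xyz))
    pcd.estimate_normals()   # Poisson reconstruction needs oriented normals
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)
    return mesh
```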
In at least an embodiment, controller 50 may calculate a virtual panoramic image based on the 3D point cloud with the associated color data from the captured photographic images. The panoramic image may be based on the mesh or the interpolated point cloud. For example, controller 50 may choose a position and orientation in 3D. Controller 50 may calculate a two-dimensional (2D) angle to each 3D point visible from the position and orientation, and the color information may be used to color a pixel in a synthetic equirectangular image (i.e., an image which spans 360 degrees horizontally and 180 degrees vertically). Holes in the image may be filled in with color information from the raw photographic images. For example, based on known 3D points and the corresponding pixel positions in the raw photographic image, the homography between the raw photographic image and the virtual panoramic position can be calculated by controller 50. This may be calculated in a specific region of interest (ROI). The raw photographic image (or the relevant ROI) may be transformed based on the retrieved homography and warped according to its position in the equirectangular image. The resulting patch may be used to color pixels in the panoramic image.
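A minimal sketch of the panoramic projection follows, splatting colored 3D points into a synthetic equirectangular image from a chosen viewpoint. The image dimensions and names are illustrative, and the homography-based hole filling described above is omitted.

```python
import numpy as np

def render_equirect(points, colors, eye, width=2048, height=1024):
    """Project colored points (Nx3 xyz, Nx3 uint8 RGB) into an image
    spanning 360 degrees horizontally and 180 degrees vertically,
    centered at `eye`. Naive splat: the last point written to a pixel
    wins; a fuller implementation would depth-order the points."""
    d = points - eye
    r = np.maximum(np.linalg.norm(d, axis=1), 1e-9)
    azimuth = np.arctan2(d[:, 1], d[:, 0])                   # -pi..pi
    elevation = np.arcsin(np.clip(d[:, 2] / r, -1.0, 1.0))   # -pi/2..pi/2
    u = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)
    image = np.zeros((height, width, 3), dtype=np.uint8)
    image[v, u] = colors
    return image
```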
It will be understood that the scanning of points, capturing of images, and processing and storage of the data may consume a large amount of the resources of scanning device 10. In order to reduce the resource requirements, in at least an embodiment of scanning device 10, camera 32 may be switchable between a first resolution and a second resolution, where the first resolution is higher than the second resolution. Controller 50 may be configured to control camera 32 to capture a plurality of low resolution images at the second resolution. Controller 50 may be further configured to evaluate each low resolution image of the plurality of low resolution images as it is captured by the camera. Controller 50 may be further configured to, in response to a low resolution image of the plurality of low resolution images satisfying a predetermined condition, control the camera to capture a high resolution image at the first resolution. At least an embodiment of the predetermined condition will be described in further detail herein. The captured high resolution image may be used by controller 50 as the color photographic image used to add color data to the 3D point cloud.
In an exemplary embodiment in which multiple cameras 32 with variable resolution are provided, controller 50 may be configured to independently evaluate the predetermined condition for each of the multiple cameras 32. Further, controller 50 can control each camera 32 to capture a high resolution image independently.
In at least an embodiment, the predetermined condition evaluated by controller 50 may include detection of one or more features in the low resolution photographic images. The feature detection may use any known feature detector, such as but not limited to the SIFT, SURF and BRIEF methods. Regardless of the feature detection process used, one or more features (edges, corners, interest points, blobs or ridges) may be identified in the low resolution images. In at least an embodiment, controller 50 may evaluate the difference in identified features in subsequent low resolution images. The evaluation may be performed using a FLANN feature matching algorithm or other suitable feature matching algorithm. In at least an embodiment, controller 50 may determine to capture a high resolution image when the difference in identified features between low resolution images is greater than a predetermined amount. In other words, when controller 50 is controlling camera 32 to capture images at a predetermined frequency or distance interval, there may be little substantive difference between sequential photographic images, and it may unduly burden the resources of scanning device 10 to process and store photographic images that are substantially similar. Accordingly, by utilizing low resolution images and evaluating differences between the images, controller 50 can determine to capture a high resolution image only when there is sufficient difference to make the capture worthwhile. Thus, the resource consumption of scanning device 10 can be managed.
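As one possible realization of this condition, the sketch below uses OpenCV's SIFT detector and a FLANN matcher on consecutive low-resolution grayscale frames. The ratio-test constant and the trigger threshold are assumed tuning values, not parameters from the description.

```python
import cv2

sift = cv2.SIFT_create()
# FLANN with a KD-tree index, suitable for SIFT's float descriptors.
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))

def needs_high_res(prev_gray, curr_gray, min_unmatched=0.5):
    """Return True when enough features fail to match between
    consecutive low-resolution frames, i.e. the view has changed
    enough to justify a high-resolution capture."""
    _, des_prev = sift.detectAndCompute(prev_gray, None)
    kp_curr, des_curr = sift.detectAndCompute(curr_gray, None)
    if des_prev is None or des_curr is None or not kp_curr:
        return True   # treat featureless or first frames as "changed"
    matches = flann.knnMatch(des_curr, des_prev, k=2)
    # Lowe's ratio test keeps only confident matches.
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    return 1.0 - len(good) / len(kp_curr) >= min_unmatched
```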
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module,” “unit,” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that may contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the laser scanner; partly on the laser scanner as a stand-alone software package; partly on the laser scanner and partly on a connected computer; partly on the laser scanner and partly on a remote computer; or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the laser scanner through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer program instructions.
These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer readable medium that may direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention.
While the invention has been described in detail in connection with only a limited number of embodiments, it should be readily understood that the invention is not limited to such disclosed embodiments. Rather, the invention can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the invention. Additionally, while various embodiments of the invention have been described, it is to be understood that aspects of the invention may include only some of the described embodiments. Accordingly, the invention is not to be seen as limited by the foregoing description, but is only limited by the scope of the appended claims.
This application claims the benefit of U.S. Provisional Application Ser. No. 62/889,219 filed Aug. 20, 2019, the entire disclosure of which is incorporated herein by reference.