The present application claims priority to Chinese patent application No. 201820937584.9, filed with the China National Intellectual Property Administration on Jun. 15, 2018 and entitled “3D Information Detection Device”, which is incorporated herein by reference in its entirety.
The present application relates to the technical field of acquiring 3D information about an object, and in particular to a 3D information detection device.
With the development of technology, the visual performance of machines keeps improving. The image acquisition device of a machine is its core component and determines the positioning accuracy of the machine. In the field of machine vision, ordinary image acquisition devices can only obtain two-dimensional information about objects and cannot obtain 3D (three-dimensional) information about them, which obviously cannot meet actual requirements.
In order to obtain 3D information about objects, a current image acquisition device includes a projector and a camera, which together establish a structured light detection system. However, the projector and the camera in such a device operate independently of each other and can only acquire two-dimensional images of objects. The acquired two-dimensional images must then be processed by a PC (personal computer) before the 3D information about the objects can be obtained. Because the 3D information can only be obtained with the cooperation of a PC, it cannot be obtained directly.
The present application provides a 3D information detection device to solve the problem that 3D information about objects cannot be obtained directly in the current manner of obtaining such information.
In order to solve the above problem, the present application uses the following technical solution.
A 3D information detection device includes a DLP projector, a camera, a controller and an image processing module, wherein the DLP projector is configured for projecting structured light to an object; the camera is configured for acquiring an image of the object to which the structured light has been projected; the image processing module is connected to the camera and configured for processing the image to obtain 3D information about the object; and the controller is connected to both the DLP projector and the camera and controls the operations thereof.
Optionally, the DLP projector (100) includes a DLP driving device (110) and an optical projector (120), the DLP driving device (110) is connected to the optical projector (120) and drives the optical projector (120) to project encoded structured light to the object, and the DLP driving device (110) is connected to the controller (300).
Optionally, the controller (300) is integrated on the DLP driving device (110).
Optionally, the DLP projector (100) further includes a housing (600), within which the controller (300), the DLP driving device (110), the optical projector (120) and the image processing module (400) are all arranged.
Optionally, the 3D information detection device further includes a mounting base (500), wherein the camera (200) and the DLP projector (100) are both arranged on the mounting base (500), and the camera (200) is located on one side of the DLP projector (100).
Optionally, the camera (200) is movably arranged on the mounting base (500) and is movable in a direction close to or away from the DLP projector (100).
Optionally, the camera (200) includes a camera body and a camera lens, the camera body is movably engaged with the mounting base (500), and the camera lens is rotatably engaged with the camera body.
Optionally, an angle between an optical axis of the camera lens of the camera (200) and an optical axis of a projection lens of the DLP projector (100) ranges from 5° to 45°.
Optionally, there are two cameras (200), and the two cameras (200) are symmetrically arranged on two sides of the DLP projector (100).
Optionally, the projection lens of the DLP projector (100) is an on-axis lens; and/or, the image processing module (400) is a Graphics Processing Unit.
The technical solution used in the present application can achieve the following beneficial effects.
In the 3D information detection device disclosed in the present application, the controller can control the operations of the DLP projector and the camera, enabling them to form a coordinated trigger unit. The camera can capture, in time, the structured light projected by the DLP projector, and the captured images can be processed in time by the image processing module. In this detection process, since the image processing module directly performs 3D analysis and processing on the images acquired by the camera, which include the projection of the DLP projector, the 3D information about an object can be obtained directly without the cooperation of a PC.
The drawings described herein are used to provide a further understanding of the application and constitute a part of the application. The exemplary embodiments of the present application and the description thereof are used to explain the application, and do not constitute an improper limitation on the application. In the figures:
100: DLP projector, 110: DLP driving device, 120: optical projector, 200: camera, 300: controller, 400: image processing module, 500: mounting base, 600: housing.
To make the purpose, technical solutions and advantages of the present application clearer, the present application will be described clearly and completely in conjunction with the specific embodiments and the corresponding drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without any inventive effort fall within the protection scope of the present application.
The technical solutions provided in various embodiments of the present application will be described below in detail with reference to the accompanying drawings.
Please refer to
The DLP projector 100 is a projection device based on DLP (Digital Light Processing) technology, which digitally processes image signals before projecting light. The DLP projector 100 is an encoding-enabled projector and can be configured to project structured light to an object (the object under test), wherein the structured light can be encoded structured light. In operation, the DLP projector 100 projects a series of encoded patterns, which are formed by structured light. The structured light technology is used hereinafter to calculate 3D information about the object, which can improve the detection accuracy.
The structured light can be analyzed through the structured light technology, which is an active triangulation technology. The basic principle is as follows. A light projecting device projects a controllable light spot, light strip or light plane onto the surface of an object to form characteristic points, and the structured light projected onto the surface is then modulated by the height of the object. The modulated structured light is acquired as an image and transmitted to an analysis device for analysis and calculation, from which the 3D information about the object, that is, its three-dimensional data, can be obtained.
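As an illustrative sketch only (the symbols below are not taken from this application but follow the usual rectified pinhole camera-projector model), the active-triangulation relation can be written as:

```latex
% Illustrative triangulation relation under a rectified pinhole model:
%   b : baseline between the projector and the camera
%   f : camera focal length, expressed in pixels
%   d : disparity, i.e. the shift of the observed stripe relative to
%       its position on a flat reference plane
Z = \frac{b\,f}{d}
```

A surface point that shifts the projected stripe by a larger disparity $d$ lies closer to the device; evaluating this relation per pixel yields the depth map described above.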
The camera 200 is configured for acquiring an image of the object to which the structured light has been projected. The image processing module 400 is connected to the camera 200 and configured for processing the image to obtain the 3D information about the object. As described above, the specific processing and the calculation of the 3D information are well-known technology and will not be described repeatedly here. Specifically, the image processing module 400 can be based on the x86 system architecture and use a GPU to obtain an image from the camera 200 and apply a planar structured light algorithm to the image. In one specific implementation, the image processing module 400 can be, but is of course not limited to, a GPU (Graphics Processing Unit).
The controller 300 is connected to both the DLP projector 100 and the camera 200 and controls the operations thereof.
In one implementation, the controller 300 can control the DLP projector 100 to operate first and then control the camera 200 to operate after a preset time. It can be understood that the preset time may be 1 second, 2 seconds, etc., which is not limited in the embodiments of the present application. In this case, each time the DLP projector 100 projects one pattern, the camera 200 takes one shot after the preset time, and the image processing module 400 can process the picture taken by the camera 200 and then calculate the 3D information about the object.
In another implementation, the controller 300 can control the DLP projector 100 and the camera 200 to operate synchronously. In this case, each time the DLP projector 100 projects one pattern, the camera 200 takes one shot at the same time, and the image processing module 400 can process the picture taken by the camera 200 in real time and then calculate the 3D information about the object. In an optional solution, the controller 300 can act as a synchronization controller that controls the synchronization of the DLP projector 100 and the camera 200. The controller 300 can be controlled by software or by a hardware circuit. In this technical field, the simultaneous operation of two devices can be controlled in many ways, which are not listed here. Specifically, the controller 300 can be an FPGA (Field Programmable Gate Array) chip.
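The two trigger modes described above can be sketched in software. This is a minimal illustrative model, not the FPGA implementation referred to in this application; `project` and `capture` are hypothetical stand-ins for the real projector and camera driver calls.

```python
import time
from threading import Barrier, Thread


class SyncController:
    """Illustrative sketch of the two trigger modes described above."""

    def __init__(self, project, capture, preset_delay=0.0):
        self.project = project          # callable: project one pattern
        self.capture = capture          # callable: take one shot, return image
        self.preset_delay = preset_delay

    def trigger_delayed(self):
        """Mode 1: projector operates first, camera after the preset time."""
        self.project()
        time.sleep(self.preset_delay)
        return self.capture()

    def trigger_sync(self):
        """Mode 2: release projector and camera together via a barrier."""
        barrier = Barrier(2)
        result = {}

        def run_project():
            barrier.wait()              # both threads start together
            self.project()

        def run_capture():
            barrier.wait()
            result["image"] = self.capture()

        threads = [Thread(target=run_project), Thread(target=run_capture)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return result["image"]
```

For instance, `SyncController(projector_fire, camera_grab, preset_delay=1.0).trigger_delayed()` would project one pattern, wait one second, and return the captured frame.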
In the 3D information detection device disclosed in the embodiments of the present application, the controller 300 can control the DLP projector 100 and the camera 200 to operate synchronously, enabling them to form a coordinated trigger unit. The camera 200 can capture, in time, the structured light projected by the DLP projector 100, and the captured image can be processed in time by the image processing module 400. In this detection process, since the image processing module directly performs 3D analysis and processing on the image acquired by the camera, which includes the projection of the DLP projector, the 3D information about the object can be obtained directly without the cooperation of a PC.
Further, in the embodiment of the present application, the DLP projector 100, the camera 200 and the image processing module 400 are integrated as one unit, such that the 3D information about the object can be provided directly to users through a 3D algorithm. This greatly facilitates use and does not require users to perform any further processing.
In the detection device disclosed in the embodiment of the present application, the DLP projector 100 projects a series of encoded patterns (formed by structured light) onto an object using the DLP projection technology. The camera 200 then acquires an image of the object onto whose surface the patterns have been projected. Finally, the image processing module 400 applies a decoding algorithm to the image taken by the camera 200, from which the depth information about the surface of the object can be accurately restored. In addition, the camera 200 itself acquires the two-dimensional information in the image, and the 3D information about the object is finally calculated using the structured light technology. The 3D information detection device disclosed in the embodiment of the present application can be widely used in the fields of robot positioning, 3D scanning, 3D measurement and the like.
Using the DLP projection technology, different patterns can be encoded flexibly, and planar structured light can then be projected with higher precision. The DLP projector 100 can encode structured light in a Gray-code encoding format or a sine encoding format. The specific implementation of encoding structured light using the Gray-code or sine encoding format is known to those skilled in the art and will not be described repeatedly here.
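The Gray-code encoding mentioned above can be sketched as follows. This is a generic binary-reflected Gray-code stripe generator and decoder, offered only as an illustration of the well-known technique; it is not the encoding actually stored in the application's device.

```python
import math


def gray_code(n):
    """Binary-reflected Gray code of n (adjacent codes differ by one bit)."""
    return n ^ (n >> 1)


def gray_to_binary(g):
    """Invert gray_code: recover the plain binary value."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b


def gray_code_patterns(width):
    """One 0/1 stripe row per bit plane, most significant bit first.

    Projecting these ceil(log2(width)) patterns in sequence assigns each
    projector column a unique Gray-code word.
    """
    num_bits = max(1, math.ceil(math.log2(width)))
    return [[(gray_code(x) >> bit) & 1 for x in range(width)]
            for bit in range(num_bits - 1, -1, -1)]


def decode_column(bits):
    """Recover the projector column from the MSB-first bits seen at a pixel."""
    g = 0
    for b in bits:
        g = (g << 1) | b
    return gray_to_binary(g)
```

Because adjacent columns differ in only one bit plane, a single pixel misclassification shifts the decoded column by at most one, which is the usual reason Gray code is preferred over plain binary stripes.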
The DLP projector 100 can include a DLP driving device 110 and an optical projector 120. The DLP driving device 110 is connected to the optical projector 120 and drives the optical projector 120 to project encoded structured light to an object. The DLP driving device 110 is connected to the controller 300, which can be integrated on the DLP driving device. In this way, space can be saved, and it is convenient for the controller to control the synchronization of the camera 200 and the DLP driving device 110. The controller 300 can control the DLP driving device 110 and thus the projection of the optical projector 120. The optical projector 120 includes a projection lens, which can be a 12 mm or 8 mm lens. Specifically, the projection lens can focus at a working distance of 500 mm, 1000 mm, etc. Of course, the focusing distance is not limited to the above-mentioned distances. The optical projector 120 can use a DMD (Digital Micromirror Device) chip produced by TI (Texas Instruments) to carry out the DLP projection.
The optical projector 120 can include an LED light source with the three colors of red, green and blue, which enables the DLP light source to project structured light of different colors. In this case, the DLP projector 100 can provide patterns of different colors according to different scenes.
In a specific implementation, the DLP driving device 110 can include an FPGA module, which controls the generation of the Gray-code encoded patterns. The generated encoded patterns are stored in memory and then projected by the optical projector 120.
The camera 200 disclosed in the embodiment of the present application includes a camera body and a camera lens. The camera 200 can use an image sensor of 1.3 million pixels, 3 million pixels, or another number of pixels. The image sensor can be a high-speed area array CCD (Charge-Coupled Device) image sensor or a CMOS (Complementary Metal Oxide Semiconductor) image sensor, to which, of course, the present application is not limited. In addition, the camera lens can be a standard FA (Factory Automation) lens. The focal length of the standard FA lens can be 8 mm or 12 mm, to which, of course, the present application is not limited.
The DLP projector 100 can include a housing 600, within which the controller 300, the DLP driving device 110, the optical projector 120 and the image processing module 400 can all be arranged, to facilitate the on-site mounting of the above-mentioned components. In an optional solution, the housing 600 can be a metal housing, which provides better heat dissipation and can thus dissipate in time the heat generated by the components arranged within it.
The 3D information detection device disclosed in the embodiment of the present application can further include a mounting base 500. The camera 200 and the DLP projector 100 can both be arranged on the mounting base 500, with the camera 200 located on one side of the DLP projector 100; that is to say, the camera 200 can be located on either side of the DLP projector 100. During mounting, the mounting base 500 can first be fixed on the detection site and then provide mounting positions for the DLP projector 100 and the camera 200. Of course, it is also possible to mount the DLP projector 100 and the camera 200 on the mounting base 500 first and then mount the formed assembly on site.
In an optional solution, the camera 200 is movably arranged on the mounting base 500, and is movable in a direction close to or away from the DLP projector 100, so that the position of the camera 200 can be adjusted to achieve the purpose of adjusting the shooting position.
As described above, the camera 200 can include a camera body and a camera lens. The camera body can be movably engaged with the mounting base 500, and the camera lens is rotatably engaged with the camera body, so that the shooting angle of the camera 200 can be adjusted flexibly.
In the 3D information detection device disclosed in the embodiment of the present application, there can be two cameras 200, which can be symmetrically arranged on the two sides of the DLP projector 100. The use of two cameras 200 can better remedy the blind area in the field of view of a single camera 200, thereby improving the detection accuracy. Of course, when there is only one camera 200, the 3D information about an object can also be detected.
When there are two cameras 200, the distance between them can be referred to as the baseline distance. According to the triangulation principle, the greater the baseline distance, the higher the depth resolution obtained during shooting. The above-mentioned cameras 200 are movably arranged on the mounting base 500, so the baseline distance between the two cameras 200 can be adjusted flexibly, thereby flexibly adjusting the depth resolution. Users can adjust the baseline distance between the two cameras 200 according to the operating environment.
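The relation between baseline distance and depth resolution stated above can be illustrated with the standard rectified-stereo triangulation formulas. The function names and the numbers in the usage note are illustrative assumptions, not values from this application.

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Rectified-stereo triangulation: Z = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_mm  -- distance between the two camera centers
    disparity_px -- pixel shift of a point between the two views
    """
    return focal_px * baseline_mm / disparity_px


def depth_step(focal_px, baseline_mm, depth_mm, disparity_step_px=1.0):
    """Depth change caused by the smallest resolvable disparity step:

        dZ ~= Z**2 / (f * B) * dd

    A larger baseline B gives a smaller (finer) depth step, i.e. higher
    depth resolution, which is the effect described in the text.
    """
    return depth_mm ** 2 / (focal_px * baseline_mm) * disparity_step_px
```

For example, with an assumed 1000 px focal length and a point at 2000 mm, widening the baseline from 50 mm to 100 mm halves the depth step per pixel of disparity.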
In order to improve the detection effect, in an optional solution, the end face of the projection lens of the DLP projector 100 and the end faces of the camera lenses of the two cameras 200 can be located on the same straight line, with the projection lens located midway between the camera lenses of the two cameras 200; that is to say, the camera lenses of the two cameras 200 are symmetrically arranged on the two sides of the projection lens.
In a specific implementation, the angle between the optical axis of the camera lens of the camera 200 and the optical axis of the projection lens of the DLP projector 100 can range from 5° to 45°. Of course, the above-mentioned structure of the camera 200 allows the shooting direction of the camera lens to be adjusted, so this angle can be adjusted more flexibly.
Since the camera lens of the camera 200 can rotate, the 3D information detection device can perform 3D scanning within a large range, and the detection range can thus be broadened.
In the embodiment of the present application, the projection lens of the DLP projector 100 can use an off-axis lens. As shown in
The above description of the embodiments of the present application focuses on the differences between the various embodiments. Different optimization features of the various embodiments can be combined to form better embodiments, as long as there is no conflict between them; for the conciseness of the text, they are not described repeatedly here. What is described above are only some embodiments of the present application and is not intended to limit it. Various modifications and changes can be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of this application shall be included in the scope of the claims of this application.
Number | Date | Country | Kind |
---|---|---|---|
201820937584.9 | Jun 2018 | CN | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN2019/078168 | 3/14/2019 | WO | 00 |