The disclosure generally relates to structured light depth sensors.
Structured light depth sensors are widely used in face recognition, gesture recognition, 3D scanning, and precision machining, and can be divided into time-identification and space-identification principles. Face recognition and gesture recognition mostly use the space-identification technique, in view of the required identification speed and the limited sensing distance.
A structured light depth sensor can calculate the depth of an object by using a projector to project structured lights onto the object. In the prior art, the projector is usually composed of a plurality of point light sources arranged irregularly, which is not easily obtained, and the block set for detecting and calculating the depth of the object is too large, resulting in low accuracy. Therefore, there is room for improvement within the art.
Many aspects of the present disclosure can be better understood with reference to the drawings. The components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the views.
It will be appreciated that for simplicity and clarity of illustration, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein can be practiced without these specific details. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the relevant features being described. The drawings are not necessarily to scale and the proportions of certain parts have been exaggerated to better illustrate details and features of the present disclosure. The description is not to be considered as limiting the scope of the embodiments described herein.
Several definitions that apply throughout this disclosure will now be presented. The term “comprising” means “including, but not necessarily limited to”; it specifically indicates open-ended inclusion or membership in a so-described combination, group, series, and the like. The term “coupled” is defined as connected, whether directly or indirectly through intervening components, and is not necessarily limited to physical connecting. The connecting can be such that the objects are permanently connected or releasably connected.
As shown in
As shown in
The structured light depth sensor 100 can sense the depth of the object 400 as follows. When the processor 30 controls the projector 10 to project one of the structured lights with a certain code to obtain a coded image 60, the structured light reflected by the object 400 is captured by the camera 20 to obtain a captured image 70. The processor 30 compares a coding block 61 set in the coded image 60 with the captured image 70 to match a capturing block 71 in the captured image 70. A distance between the coding block 61 and an extension line of the projector 10 is defined as xl. A distance between the capturing block 71 and an extension line of the camera 20 is defined as xr. A disparity (δ) of the coding block 61 and the capturing block 71 is equal to the difference between xl and xr. The processor 30 calculates the depth (D) of the object 400 according to the formula D=B*F/δ.
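The following is a minimal Python sketch of the triangulation expressed by D=B*F/δ. It assumes, as is conventional for this kind of formula although not restated in this excerpt, that B is the baseline between the projector 10 and the camera 20 and F is the focal length of the camera 20; the function name and example values are hypothetical.

```python
# Minimal sketch of the triangulation in the formula D = B * F / delta.
# Assumption (not restated in this excerpt): B is the baseline between the
# projector 10 and the camera 20, F is the focal length of the camera 20.
# All names and example values are hypothetical.

def depth_from_disparity(x_l: float, x_r: float, B: float, F: float) -> float:
    """Return the depth D of the object 400 for one matched block pair."""
    delta = x_l - x_r          # disparity of coding block 61 vs. capturing block 71
    if delta == 0:
        raise ValueError("zero disparity: object is effectively at infinity")
    return B * F / delta       # D = B * F / delta


# Example: x_l = 12.0 px, x_r = 9.0 px, B = 40 mm, F = 800 px -> D ≈ 10666.7 mm
print(depth_from_disparity(12.0, 9.0, 40.0, 800.0))
```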
S401, setting a detecting block 90;
S402, generating the structured lights with different codes;
S403, sequentially projecting the structured lights with different codes to the object 400 to obtain a plurality of coded images 60;
S404, capturing the structured lights reflected by the object 400 to obtain a plurality of captured images 70;
S405, integrating the plurality of coded images 60 into a coded image group according to a projection order, and integrating the plurality of captured images 70 into a captured image group according to a capture order;
S406, extracting the coding block 61 in each of the plurality of coded images 60 according to the position and size of the detecting block 90;
S407, integrating a plurality of coding blocks 61 into a coding block group in a same order as the coded image group;
S408, extracting the capturing block 71 in each of the plurality of captured images 70 according to the position and size of the detecting block 90;
S409, integrating a plurality of capturing blocks 71 into a capturing block group in a same order as the captured image group;
S410, matching the coding block group and the capturing block group;
S411, calculating the disparity between the coding block group and the capturing block group matched with the coding block group;
S412, calculating the depth of the object 400.
Specifically, if the images in the coding block group and the images in the capturing block group are in the same order, the coding block group and the capturing block group are successfully matched.
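A hedged sketch of blocks S405 to S411 is given below. It treats each block group as the ordered sequence of bright/dark patterns that one detecting-block window shows across the projections, and declares two groups matched when their sequences agree in the same order, as described above. The tuple-of-bits representation and the helper names are assumptions made only for illustration.

```python
# Hedged sketch of blocks S405-S411: a "block group" is the ordered sequence of
# patterns one detecting-block window shows across the projections. Two groups
# match when their sequences are identical. The representation of each image
# as a 1-D row of bright (1) / dark (0) samples is an illustrative assumption.

def extract_group(images, position, size):
    """Cut the detecting block 90 out of every image, in projection order."""
    return tuple(tuple(img[position:position + size]) for img in images)

def match_capturing_block(coded_images, captured_images, coding_pos, size):
    """Find the capturing-block position whose group equals the coding block group."""
    coding_group = extract_group(coded_images, coding_pos, size)
    width = len(captured_images[0])
    for pos in range(width - size + 1):
        if extract_group(captured_images, pos, size) == coding_group:
            return pos                    # position (xr) of the matched capturing block 71
    return None                           # no match found


# Example: the captured rows are the coded rows shifted by one sample.
coded    = [[1, 1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 1, 1, 0], [1, 0, 1, 0, 1, 0, 1]]
captured = [[0, 1, 1, 1, 1, 0, 0], [0, 1, 1, 0, 0, 1, 1], [0, 1, 0, 1, 0, 1, 0]]
print(match_capturing_block(coded, captured, coding_pos=0, size=4))  # -> 1
```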
In the first embodiment, there are seven point light sources 111. The seven point light sources 111 are labeled as a first point light source 111a, a second point light source 111b, a third point light source 111c, a fourth point light source 111d, a fifth point light source 111e, a sixth point light source 111f, and a seventh point light source 111g. A method by which the processor 30 obtains the depth of the object 400 by generating three structured lights with different codes is described as follows.
S501, the processor 30 controls the projector 10 to project a first structured light to the object 400. When the first structured light is generated, the first point light source 111a, the second point light source 111b, the third point light source 111c, and the fourth point light source 111d are in the bright state, and the fifth point light source 111e, the sixth point light source 111f, and the seventh point light source 111g are in the dark state.
S502, the processor 30 controls the camera 20 to capture the first structured light reflected by the object 400.
S503, the processor 30 controls the projector 10 to project a second structured light to the object 400. When the second structured light is generated, the first point light source 111a, the second point light source 111b, the fifth point light source 111e, and the sixth point light source 111f are in the bright state, and the third point light source 111c, the fourth point light source 111d, and the seventh point light source 111g are in the dark state.
S504, the processor 30 controls the camera 20 to capture the second structured light reflected by the object 400.
S505, the processor 30 controls the projector 10 to project a third structured light to the object 400. When the third structured light is generated, the first point light source 111a, the third point light source 111c, the fifth point light source 111e, and the seventh point light source 111g are in the bright state, and the second point light source 111b, the fourth point light source 111d, and the sixth point light source 111f are in the dark state.
S506, the processor 30 controls the camera 20 to capture the third structured light reflected by the object 400.
S507, the processor 30 matches the coding block group and the capturing block group.
S508, the processor 30 calculates the disparity (δ) and the depth (D) of the object 400.
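Reading S501, S503, and S505 together, each of the seven point light sources 111 receives a distinct three-bit bright/dark code over the three projections, which is what allows the coding block group and the capturing block group to be matched unambiguously. The short Python sketch below merely tabulates those codes; the list layout is illustrative only.

```python
# Illustrative tabulation of the three projections in S501, S503, and S505.
# Each row lists which of the seven point light sources 111 are bright (1)
# or dark (0); the per-source 3-bit codes that result are all distinct.

projections = [
    # a  b  c  d  e  f  g        (first .. seventh point light source 111)
    [1, 1, 1, 1, 0, 0, 0],   # first structured light  (S501)
    [1, 1, 0, 0, 1, 1, 0],   # second structured light (S503)
    [1, 0, 1, 0, 1, 0, 1],   # third structured light  (S505)
]

codes = [tuple(p[i] for p in projections) for i in range(7)]
print(codes)                 # (1,1,1), (1,1,0), (1,0,1), (1,0,0), (0,1,1), (0,1,0), (0,0,1)
print(len(set(codes)) == 7)  # True: every point light source 111 has a unique code
```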
The number of the point light sources 111 is not limited to seven. When the number of the point light sources 111 is defined as m (m≥2), the minimum number of projections is defined as n and satisfies the relationship 2^(n−1)≤m≤2^n. The structured lights generated by the multi-point light source 11 over the n projections are patterned as follows: one (or 2^0) point light source 111 in the bright state appears at intervals of one (or 2^0) point light source 111 in the dark state; two (or 2^1) point light sources 111 in the bright state appear at intervals of two (or 2^1) point light sources 111 in the dark state; . . . ; 2^(n−1) point light sources 111 in the bright state appear at intervals of 2^(n−1) point light sources 111 in the dark state.
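A hedged generator for this general rule is sketched below: given m point light sources 111, it computes the minimum number of projections n = ⌈log2 m⌉ (so that 2^(n−1)≤m≤2^n) and builds the k-th pattern by alternating runs of 2^k bright and 2^k dark sources. For m = 7 it reproduces the three patterns of the first embodiment; the function name and the projection ordering (largest run first) are assumptions.

```python
import math

def structured_light_patterns(m: int):
    """Generate the n binary patterns for m point light sources 111 (m >= 2).

    n satisfies 2**(n-1) <= m <= 2**n, i.e. n = ceil(log2(m)). The k-th
    pattern alternates runs of 2**k bright and 2**k dark sources; patterns
    are returned largest run first, as in the first embodiment. The name
    and ordering are illustrative assumptions only.
    """
    n = math.ceil(math.log2(m))
    patterns = []
    for k in reversed(range(n)):                     # run lengths 2**(n-1), ..., 2**0
        run = 2 ** k
        patterns.append([1 if (i // run) % 2 == 0 else 0 for i in range(m)])
    return patterns


# For m = 7 this reproduces the patterns of S501, S503, and S505:
for p in structured_light_patterns(7):
    print(p)
# [1, 1, 1, 1, 0, 0, 0]
# [1, 1, 0, 0, 1, 1, 0]
# [1, 0, 1, 0, 1, 0, 1]
```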
S701, the processor 30 controls the projector 10 to project a first structured light to the object 400. The sixty-four point light sources 111 are grouped in fours, and the sixteen groups have sixteen different lighting patterns.
S702, the processor 30 controls the camera 20 to capture the first structured light reflected by the object 400.
S703, the processor 30 controls the projector 10 to project a second structured light to the object 400. Among the thirty-two point light sources 111 arranged in a row, one point light source 111 in the bright state appears at intervals of one point light source 111 in the dark state; that is, point light sources in the bright state and point light sources in the dark state are arranged alternately.
S704, the processor 30 controls the camera 20 to capture the second structured light reflected by the object 400.
S705, the processor 30 matches the coding block group and the capturing block group.
S706, the processor 30 calculates the disparity (δ) and the depth (D) of the object 400.
The number of the point light sources 111 is not limited to sixty-four. In other embodiments, the number of the point light sources 111 used in the sensing method can be less than sixty-four, as long as the lighting patterns of the groups are different from each other when the first structured light is generated.
The process at block S703 can avoid the case where, among six consecutive point light sources 111, the lighting pattern of the first four point light sources 111 and the lighting pattern of the last four point light sources 111 are the same in S701. It should be noted that the lighting pattern of the point light sources 111 is not limited to that described in S703, in which one point light source 111 in the bright state appears at intervals of one point light source 111 in the dark state, as long as the lighting pattern of the first four point light sources 111 and the lighting pattern of the last four point light sources 111 among six consecutive point light sources 111 are different.
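The sketch below illustrates one way, under stated assumptions, to realize the two projections of this embodiment: for the first structured light (S701), group i of four point light sources 111 is assigned the 4-bit binary pattern of i, which yields sixteen different lighting patterns; for the second structured light (S703), bright and dark sources alternate. The particular assignment of patterns to groups is an assumption; as noted above, any arrangement in which the group patterns differ from one another would serve.

```python
# Hedged sketch of the two projections in the second embodiment (S701/S703).
# Assumption: group i of four point light sources 111 is given the 4-bit
# binary pattern of i, which provides the sixteen different lighting patterns
# required by S701. Any assignment with sixteen distinct group patterns would do.

def first_structured_light():
    """64 sources, grouped in fours, 16 distinct 4-bit group patterns (S701)."""
    pattern = []
    for group in range(16):
        pattern.extend((group >> b) & 1 for b in (3, 2, 1, 0))  # 4-bit code of the group index
    return pattern

def second_structured_light():
    """Bright and dark point light sources arranged alternately (S703)."""
    return [1 if i % 2 == 0 else 0 for i in range(64)]


p1, p2 = first_structured_light(), second_structured_light()
groups = [tuple(p1[4 * g: 4 * g + 4]) for g in range(16)]
print(len(set(groups)))   # 16 -> the sixteen group patterns are all different
print(p2[:8])             # [1, 0, 1, 0, 1, 0, 1, 0] -> alternating bright/dark
```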
Compared with the prior art, the present disclosure employs the processor 30 to control the multi-point light source 11 to sequentially project the structured lights with different codes onto the object 400 to obtain the coded images 60, to control the camera 20 to sequentially capture the structured lights reflected by the object 400, and to calculate the depth of the object 400 according to the parameter information and the distance information stored in the storage device 40. By arranging the point light sources 111 equidistantly, the present disclosure can reduce the size of the detecting block 90, thereby improving sensing accuracy.
It is believed that the present embodiments and their advantages will be understood from the foregoing description, and it will be apparent that various changes may be made thereto without departing from the spirit and scope of the disclosure or sacrificing all of its material advantages, the examples hereinbefore described merely being exemplary embodiments of the present disclosure.