The present disclosure relates to three-dimensional (3D) sensing by optical means.
A computing device, such as a smart phone, with facial recognition function includes a housing, a depth sensor, and a color camera. The depth sensor and the color camera are mounted inside the housing at the top front of the computing device to facilitate face recognition when a user looks at the computing device. The depth sensor is configured to capture data as to the depth of the user's face, and the color camera is configured to capture data as to the color of the user's face. The depth sensor includes two depth cameras. The depth cameras and the color camera need to be optically aligned inside the housing. However, optical alignment of the depth cameras and the color camera is often difficult. Additionally, the depth cameras and the color camera may become misaligned due to handling and other everyday forces applied to the computing device.
Accordingly, there is room for improvement in the art.
Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein can be practiced without these specific details. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the relevant features being described. Also, the description is not to be considered as limiting the scope of the embodiments described herein. The drawings are not necessarily to scale and the proportions of certain parts may be exaggerated to better illustrate details and features of the present disclosure.
The base 110 includes a first side portion 111, a second side portion 112, and a cross portion 113. The cross portion 113 is connected between the first side portion 111 and the second side portion 112. The first side portion 111, the cross portion 113, and the second side portion 112 are disposed in a straight line. The second side portion 112 has a mounting hole 112b. The cross portion 113 is recessed for receiving the speaker 220 or other components inside the housing 230.
The depth sensor 120 captures data as to the depth of the user's face. The depth sensor 120 includes a first depth camera unit 121, a second depth camera unit 122, and a light emitting unit 150.
The first depth camera unit 121 includes a first circuit board 141, a first depth camera 121a, and a first connector 143. The first circuit board 141 is mounted on the first side portion 111 of the base 110. The first circuit board 141 has a first circuit board component 141a. The first depth camera 121a is electrically connected to the first circuit board component 141a. The first depth camera 121a is an infrared time-of-flight depth camera. The first connector 143 is electrically connected to the first circuit board 141. The first connector 143 is located outside of the first side portion 111 of the base 110. The first depth camera 121a is electrically connected to components inside the housing 230, through the first connector 143.
The second depth camera unit 122 includes a second circuit board 142, a second depth camera 122a, and a second connector 144. The second circuit board 142 is mounted on the second side portion 112 of the base 110. The second circuit board 142 has a second circuit board component 142a. The second depth camera 122a is electrically connected to the second circuit board component 142a. The second depth camera 122a is an infrared time-of-flight depth camera. The second connector 144 is electrically connected to the second circuit board 142. The second connector 144 is located outside of the second side portion 112 of the base 110. The second depth camera 122a is electrically connected to components inside the housing 230, through the second connector 144.
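For illustration only, a time-of-flight depth camera such as the first depth camera 121a or the second depth camera 122a estimates distance from the round-trip travel time of emitted infrared light. The following is a minimal sketch of that relationship; the function name and sample timing value are assumptions for illustration and form no part of this disclosure:

```python
# Time-of-flight ranging: emitted light travels to the target and back,
# so the target distance is half the round-trip optical path.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_depth(round_trip_time_s: float) -> float:
    """Return target distance in meters given a round-trip time in seconds."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A face about 0.5 m from the sensor returns light after roughly 3.34 ns.
distance_m = tof_depth(3.336e-9)
```

The nanosecond-scale timing involved is why the light emitter 151 is driven by a dedicated light controller 152 rather than by general-purpose circuitry.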
The light emitting unit 150 includes a light emitter 151 and a light controller 152. The light emitter 151 and the light controller 152 are electrically connected to a side portion of the first circuit board 141. The light emitter 151 is an infrared LED device. The light controller 152 is configured to control the light emitter 151.
The color camera unit 160 is configured to capture data as to color of the user's face. The color camera unit 160 is mounted on the second side portion 112 of the base 110. The color camera unit 160 includes a third circuit board 162, a color camera 161, and a third connector 163. The third circuit board 162 connects the color camera 161 to the third connector 163. The color camera 161 is disposed through the mounting hole 112b of the second side portion 112 of the base 110. The color camera 161 is an RGB camera. The third connector 163 is located outside of the second side portion 112. The color camera 161 is electrically connected to components inside the housing 230, through the third connector 163.
The first cover 131 has a first depth camera receiving opening 131a and a light emitter receiving opening 131b. The first cover 131 covers the first side portion 111 of the base 110 such that the first depth camera 121a is received in the first depth camera receiving opening 131a and the light emitter 151 is received in the light emitter receiving opening 131b.
The second cover 132 has a second depth camera receiving opening 132a and a color camera receiving opening 132b. The second cover 132 covers the second side portion 112 of the base 110 such that the second depth camera 122a is received in the second depth camera receiving opening 132a and the color camera 161 is received in the color camera receiving opening 132b.
The first depth camera 121a, the light emitter 151, the color camera 161, and the second depth camera 122a are disposed in a straight line. The first depth camera 121a, the second depth camera 122a, the color camera 161, and the light emitter 151 can be optically aligned after being mounted on the base 110 and before being mounted inside the housing 230. Optical alignment of the first depth camera 121a, the second depth camera 122a, the color camera 161, and the light emitter 151 can thus be performed outside of the housing 230, with greater accuracy. Additionally, the first depth camera 121a, the second depth camera 122a, the color camera 161, and the light emitter 151 are firmly held in place by the base 110, the first cover 131, and the second cover 132 to avoid displacement and misalignment.
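Because the cameras are rigidly fixed to the single base 110, their relative positions remain known after alignment, so per-pixel depth readings can be back-projected into a shared coordinate frame. The following sketch uses a standard pinhole camera model; the intrinsic parameters (fx, fy, cx, cy) and sensor resolution are assumed values for illustration and form no part of this disclosure:

```python
def depth_pixel_to_point(u: float, v: float, z: float,
                         fx: float, fy: float, cx: float, cy: float):
    """Back-project pixel (u, v) with measured depth z (meters) to a
    3D point (x, y, z) in the camera's coordinate frame (pinhole model)."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# The principal-point pixel of an assumed 640x480 sensor maps straight ahead.
point = depth_pixel_to_point(320, 240, 0.5, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

If the cameras were to shift relative to one another after alignment, such back-projected points would no longer share a common frame, which is why the base 110 and the covers 131 and 132 rigidly retain the components.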
The embodiments shown and described above are only examples. Many details are often found in this field of art thus many such details are neither shown nor described. Even though numerous characteristics and advantages of the present technology have been set forth in the foregoing description, together with details of the structure and function of the present disclosure, the disclosure is illustrative only, and changes may be made in the detail, especially in matters of shape, size, and arrangement of the parts within the principles of the present disclosure, up to and including the full extent established by the broad general meaning of the terms used in the claims. It will therefore be appreciated that the embodiments described above may be modified within the scope of the claims.
Number | Date | Country | Kind
---|---|---|---
201810205418.4 | Mar 2018 | CN | national