This disclosure relates to the field of optical technology, and particularly to an infrared projector, an imaging device, and a terminal device.
With recent advancements in hardware and algorithms, a depth camera is now small enough to be integrated into a portable device such as a smart phone (e.g., iPhone X and OPPO Find X). With the depth camera, many applications have been developed, such as Face ID, virtual reality (VR), augmented reality (AR), gesture control, 3D measurement, and Animoji® (an animated emoji feature of iOS). These commercial applications drive the need for more accurate and higher-resolution 3D shape measurement techniques.
Disclosed herein are implementations of an infrared projector, an imaging device, and a terminal device.
The infrared projector provided herein includes an infrared source, a light reflective section, a light filtering section, and at least one driving component. The infrared source is configured to emit infrared light. The light reflective section is configured to receive and reflect the infrared light from the infrared source. The light filtering section is configured to receive the infrared light reflected by the light reflective section. The at least one driving component is configured to drive at least one of the light reflective section and the light filtering section to move.
The imaging device provided herein includes an infrared projector and an infrared camera. The infrared projector includes an infrared source, a light reflective section, a light filtering section, and at least one driving component. The infrared source is configured to emit infrared light. The light reflective section is configured to receive and reflect the infrared light emitted from the infrared source. The light filtering section is configured to receive the infrared light reflected by the light reflective section and let the infrared light pass through to be projected on an object to form a point cloud. The at least one driving component is disposed in at least one of the light reflective section and the light filtering section and configured to change a light path from the light reflective section to the object. The infrared camera is configured to capture an image of the object according to the point cloud.
The terminal device provided herein includes an infrared projector, an infrared camera, and a housing accommodating the infrared projector and the infrared camera. The infrared projector includes an infrared source, a light reflective section, a light filtering section, and at least one driving component. The infrared source is configured to emit infrared light. The light reflective section is configured to receive and reflect the infrared light emitted from the infrared source. The light filtering section is configured to receive the infrared light reflected by the light reflective section and let the infrared light pass through to be projected on an object to form a point cloud. The at least one driving component is disposed in at least one of the light reflective section and the light filtering section and configured to change a light path from the light reflective section to the object. The infrared camera is configured to capture an image of the object according to the point cloud.
The disclosure is best understood from the following detailed description when read in conjunction with the accompanying drawings. It is emphasized that, according to common practice, the various features of the drawings are not to scale. On the contrary, the dimensions of the various features are arbitrarily expanded or reduced for clarity.
Embodiments of the disclosure will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.
Initially, abbreviations and definitions of key terms are given below to facilitate the understanding of the disclosure.
Super resolution imaging: Super resolution imaging is a class of techniques that enhance resolution beyond the resolution limit of an imaging system, acquiring higher-resolution and more accurate depth information. Super resolution imaging techniques are used in general image processing and in super-resolution microscopy.
3D measurement: 3D measurement is a technique that can scan the 3D shape and the depth information of objects in a scene.
3D sensor: 3D sensor, also known as 3D scanner, is a device that analyses a real-world object or environment to collect data on its shape and possibly its appearance (e.g. color). The collected data can then be used to construct digital three-dimensional models. The purpose of a 3D sensor is usually to create a 3D model. This 3D model consists of a point cloud of geometric samples on the surface of the subject. These points can then be used to extrapolate the shape of the subject (a process called reconstruction). If color information is collected at each point, then the colors on the surface of the subject can also be determined.
Point cloud: a point cloud is a set of data points in space. Point clouds are generally produced by 3D scanners, which measure a large number of points on the external surfaces of objects around them. As the output of 3D scanning processes, point clouds are used for many purposes, including to create 3D CAD models for manufactured parts, for metrology and quality inspection, and for a multitude of visualization, animation, rendering, and mass customization applications.
In order to obtain the depth information of images, many manufacturers have carried out research and development in recent years. At present, there are two mature technologies, that is, time of flight (TOF) and structured light.
TOF: this technology emits infrared light using a light emitting diode (LED) or a laser diode (LD), and the infrared light illuminates the surface of the object and then reflects back. Since the speed of light (v) is known, an infrared light image sensor can be used to measure the reflection time (t) of positions at different depths of the object, and the distance (depth) of different positions of the object can be calculated by a simple mathematical formula, namely d = v·t/2, since the light travels the path to the object twice.
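The time-of-flight calculation above can be sketched in a few lines; the function name and the example timing value below are illustrative assumptions, not taken from the disclosure:

```python
# Illustrative TOF depth sketch (hypothetical names and values).
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_time_s: float) -> float:
    """Depth from measured round-trip time: the light travels to the
    object and back, so the one-way distance is half the total path."""
    return C * round_trip_time_s / 2.0

# A round trip of ~6.67 ns corresponds to roughly 1 m of depth.
depth_m = tof_depth(6.67e-9)
```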
Structured light: this technology uses a laser diode or a digital light processor (DLP) to produce different light patterns, which are reflected by different depths of the object, causing distortion of the light patterns. For example, when light in a straight stripe is projected onto a finger, since the finger has a three-dimensional arc shape, the straight stripe is reflected back as an arc-shaped stripe. After the arc-shaped stripe enters the infrared image sensor, the three-dimensional structure of the finger can be derived by using the arc-shaped stripe.
In the related art, depth maps captured with TOF cameras have very low data quality: the image resolution is rather limited and the level of random noise contained in the depth maps is very high. Considering this, Schuon et al. present LidarBoost, a 3D depth super-resolution method that combines several low-resolution noisy depth images of a static scene from slightly displaced viewpoints, and merges them into a high-resolution depth image.
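The idea behind multi-frame depth super-resolution can be illustrated with a much-simplified sketch. The actual LidarBoost method solves a regularized optimization; the toy function below merely places sub-pixel-shifted low-resolution samples onto a finer grid (all names and values are hypothetical):

```python
# Much-simplified multi-frame depth super-resolution sketch.
# Hypothetical, for illustration only; not the LidarBoost algorithm.
import numpy as np

def merge_shifted_depth_maps(frames, shifts, scale=2):
    """frames: list of (h, w) low-resolution depth maps.
    shifts: per-frame integer (dy, dx) offsets in high-res pixels.
    Returns an (h*scale, w*scale) depth map; NaN where no sample landed."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = np.arange(h) * scale + dy
        xs = np.arange(w) * scale + dx
        acc[np.ix_(ys, xs)] += frame  # scatter samples onto fine grid
        cnt[np.ix_(ys, xs)] += 1
    with np.errstate(invalid="ignore"):
        return acc / cnt  # average overlaps; 0/0 -> NaN where unseen
```

Two 2×2 frames shifted by (0, 0) and (1, 1) already fill half of a 4×4 grid; more displaced frames fill the rest.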
The drawback of LidarBoost is that it can only be applied to static scenes, and cannot be used for non-static scenes, such as scanning a smiling user.
In U.S. patent application Ser. No. 14/322,887 of Texas Instruments Inc., super-resolution in structured light imaging is provided. The '887 case, however, limits the depth camera to the “structured light” technique. Moreover, the '887 case only considers one way of shifting the projected patterns, i.e., shifting the camera, and therefore is not flexible enough. In addition, the '887 case does not consider the device size constraint on portable devices.
In view of this, we propose technical solutions that can capture high-resolution depth images of dynamic scenes with super resolution. The disclosure provides a super-resolution technique for depth cameras, which can acquire a high-resolution depth image by combining a plurality of images of a scene. Particularly, in addition to static scenes, the super-resolution technique provided herein can be applied to non-static scenes such as scanning a smiling user, and there is no need to shift the camera to shift projected patterns (point cloud) on an object such as a user face. Owing to the small device size, a product implementing the technical solutions can also be easily integrated into a smart phone.
The following aspects of the disclosure contribute to its advantages and each will be described in detail below.
The terminal device 10 may further include a fingerprint sensor for fingerprint recognition.
The 3D imaging device may further include a flood illuminator 46 and sensors, such as a proximity sensor 48 and an ambient light sensor 49. The flood illuminator 46 and the proximity sensor 48 can be integrated into one module.
The device of
When an object is close to a mobile phone equipped with the 3D imaging device, for example, the proximity sensor 48 or any other structured light sensor will be activated first to determine whether there is face information. Once it is determined that there is face information, the dot projector 44 will be started to project more than 30,000 infrared light points on the user face to form a point cloud illustrated in
Generally, the resolution of the 3D imaging device depends on several factors, such as the density of the point cloud generated by the dot projector, the resolution of an IR camera, and the distance between the 3D imaging device and the scanned object. The natural way to increase the imaging resolution is to increase the density of the point cloud, such that more sampling points can be obtained. At the same time, the resolution of the infrared camera also needs to be increased to identify these points. Here, we provide a different way to increase the resolution of the 3D imaging device, with an actuating or driving mechanism. With aid of the technical solutions provided herein, it is possible to achieve super-resolution results without increasing the resolutions of the point cloud and the IR camera.
According to implementations of the disclosure, an infrared projector is provided.
The infrared source 52 is configured to emit infrared light. The light reflective section 54 is configured to receive and reflect the infrared light from the infrared source 52. The light filtering section 56 is an optical element and is configured to receive the infrared light reflected by the light reflective section 54. For example, the purpose of the light filtering section is to convert the infrared light into structured light or a point cloud. The at least one driving component 58 is configured to drive at least one of the light reflective section 54 and the light filtering section 56 to move. For example, the at least one driving component 58 may be coupled with the light reflective section 54, coupled with the light filtering section 56, or coupled with both the light reflective section 54 and the light filtering section 56. The term “couple” used herein can be comprehended as direct connection, attachment, and the like. In order to save internal space of the infrared projector, the driving component(s) 58 can be attached to or bound with the light reflective section 54 and/or the light filtering section 56. As used in this context, the term “at least one of A and B” means A, B, or both A and B, and the term “A and/or B” means A, B, or both A and B. With this principle in mind, one of ordinary skill in the art will understand that the expression “at least one driving component 58 is configured to drive at least one of the light reflective section 54 and the light filtering section 56 to move” means that the at least one driving component 58 may be configured to drive the light reflective section 54 to move, drive the light filtering section 56 to move, or drive both the light reflective section 54 and the light filtering section 56 to move.
In case multiple components are included in the light reflective section 54, as will be detailed below, the at least one driving component 58 may be configured to drive all or part of the components of the light reflective section 54 to move. In order to drive multiple components of the light reflective section 54 to move, multiple driving components 58 will sometimes be needed accordingly. The term “move” used herein should be broadly interpreted; for example, it may be interchanged with the terms “vibrate”, “shift”, and the like, and may refer to “move in a vertical direction”, “move in a horizontal direction”, “move or rotate axially”, and other motions which can change the incidence angle or exit angle of infrared light, or change the light path or transmission direction of infrared light. The disclosure is not particularly limited in this regard.
In one implementation, the at least one driving component 58 is structured such that the light reflective section 54 can be driven to move.
In one implementation, as illustrated in
As to the position relationship between the first reflective component 541 and the second reflective component 542, the present disclosure is not particularly limited. For example, the first reflective component 541 and the second reflective component 542 can be arranged horizontally such that one component is next to the other. As illustrated in
The first reflective component 541 and the second reflective component 542 can be a reflective mirror, a reflective plate, or other means with light reflective functions. In the following, a mirror is taken as an example of the reflective component for illustrative purposes only, without any intent to restrict the disclosure.
As can be seen from
Similarly, the driving component 58 can be disposed at the second reflective component 542 rather than the first reflective component 541 and in this case, the first reflective component 541 can be configured as a fixed mirror.
Alternatively, although not illustrated in the figures, two driving components 58 may be used to further enhance the actuating effect. For example, one driving component 58 is attached to the first reflective component 541 and the other driving component 58 is attached to the second reflective component 542.
As still another example, different from the structures of
The foregoing driving component 58 can be implemented with an actuator, for example; one example of the actuator is illustrated in
The advantages of a micro-mirror actuator are based upon its small size, low weight, and minimum power consumption. Further advantages arise along with the integration possibilities. For example, owing to its small size, the micro-mirror actuator can be disposed close to the infrared source. In addition, with aid of the micro-mirror actuator, the optical path is folded into a small space and the projector can be easily integrated into a smart phone.
Besides, although each technical solution provided herein has its own advantages over the other solutions provided herein, under fast resonant conditions it is feasible and beneficial to use the micro-mirror actuator for high-frequency scanning, so as to resist the inertia of the infrared projector.
The foregoing depicts situations where the driving component 58 is configured to drive the light reflective section 54 to move. In addition to the above-identified structure, or alternatively, the driving component 58 is configured to drive the light filtering section 56 to move. As illustrated in
The light filtering section 56 can be a diffractive optical element (DOE) or a mask with evenly or unevenly distributed small light through holes.
Based on this,
In order to expedite the understanding of the disclosure, certain examples will be described.
In the following, a mask with evenly distributed light through holes is taken as an example of the light filtering section 56 of the disclosure, and the mask is mounted on a three-mode horizontal translational actuator, that is, an actuator that can move horizontally. In this situation, the actuator can either keep the point cloud in position, or shift it to the left or to the right. As illustrated in
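The three-position shifting described above can be sketched as follows; the dot coordinates and the shift step are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical sketch: three horizontal mask positions (left, center,
# right) triple the horizontal sampling density of the point cloud.
def shifted_dot_columns(base_columns, step=1):
    """base_columns: x-coordinates of dots with the mask centered.
    Returns the union of dot columns over the three actuator positions:
    shifted left, kept in place, and shifted right."""
    columns = set()
    for offset in (-step, 0, step):
        columns.update(x + offset for x in base_columns)
    return sorted(columns)

# A mask whose dots sit every 3 units yields, over three frames,
# a sample at every unit along the horizontal axis.
dense = shifted_dot_columns([0, 3, 6], step=1)
```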
We can further increase the super-resolution ability of the infrared projector by combining multiple actuators. For example, as illustrated in
For example, here, suppose two actuators are adopted and one actuator moves horizontally while the other actuator moves vertically. Referring to
Instead of shifting the infrared projector evenly, that is, shifting the mask evenly, we can randomly shift the infrared projector or mask to cover different sets of locations as long as we can retrieve the geometry information accurately.
Obviously, the present implementation does not particularly specify the actuator for achieving the infrared projector, and any other configurations may be employed as long as they are appropriate. For example, a multi-mode actuator which can move horizontally and vertically can be used to achieve the same purpose as using two horizontal translational actuators.
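How combined horizontal and vertical shifting, whether by two actuators or one multi-mode actuator, multiplies the number of distinct dot positions can be sketched as follows (all names and offsets are hypothetical):

```python
# Illustrative sketch: combining horizontal and vertical shifts
# multiplies the distinct sample locations per projected dot.
def combined_dot_positions(base_dots, h_offsets, v_offsets):
    """base_dots: iterable of (x, y) dot positions for the centered mask.
    Returns the set of positions covered across all shift combinations."""
    return {
        (x + dx, y + dy)
        for (x, y) in base_dots
        for dx in h_offsets
        for dy in v_offsets
    }

# Two horizontal and two vertical positions turn each projected dot
# into four distinct sample locations.
positions = combined_dot_positions([(0, 0)], h_offsets=(0, 1), v_offsets=(0, 1))
```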
For example, it is assumed that we use a point cloud of 30,000 dots and a depth camera of 90 Hz, the present disclosure will yield slightly different results compared with the related art. As can be seen from
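The trade-off involved can be illustrated with back-of-the-envelope arithmetic; the assumption that a fixed number of consecutive shifted frames are merged into one depth image is ours, for illustration only, and not a figure stated in the disclosure:

```python
# Hypothetical arithmetic: merging shifted frames trades frame rate
# for effective point-cloud density.
def super_resolution_tradeoff(dots_per_frame, camera_hz, positions):
    """Returns (effective_dots, combined_frame_rate_hz) when `positions`
    consecutive shifted frames are merged into one depth image."""
    return dots_per_frame * positions, camera_hz / positions

# e.g., 30,000 dots at 90 Hz merged over 3 shift positions gives a
# 90,000-sample depth image at 30 Hz.
eff_dots, eff_hz = super_resolution_tradeoff(30_000, 90, 3)
```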
It should be noted that
Besides, in the related art where no actuator is employed, if the scanned surface, such as a user face, has smaller variation, lower resolution will be obtained; whereas in this disclosure, even if the scanned surface has larger variation, higher resolution can still be obtained.
The foregoing infrared projector is small enough to be integrated into a terminal device such as a smart phone. Based on this and with the understanding that the infrared projector provided herein is applicable more generally to any 3D mapping, scanning, or imaging environments, embodiments of the disclosure further provide an imaging device and a terminal device.
According to embodiments of the disclosure, an imaging device is further provided. As illustrated in
As illustrated in
The infrared source 52 is configured to emit infrared light. The light reflective section 54 is configured to receive and reflect the infrared light emitted from the infrared source 52. The light filtering section 56 is configured to receive the infrared light reflected by the light reflective section and let the infrared light pass through to be projected on an object to form a point cloud. The at least one driving component 58 is disposed in at least one of the light reflective section 54 and the light filtering section 56 and configured to change a light path from the light reflective section 54 to the object, that is, change exit angles of the infrared light at the light filtering section 56.
The infrared camera 60 is coupled with the infrared projector 50 and is configured to capture an image of the object according to the point cloud formed by the infrared projector 50. For example, the infrared camera 60 is configured to read the dot pattern of the point cloud, capture its infrared image, draw a precise and detailed depth map of the user face, and send the data to a processor of a terminal device, for matching for example.
The at least one driving component can include one or more of the actuators mentioned above with reference to the accompanying drawings.
In one implementation, the light filtering section 56 is disposed on one of the at least one driving component. For example, the light filtering section 56, which may be embodied as a DOE, is mounted on an actuator, as illustrated in
In another implementation, as illustrated in
In
Still possibly, the actuator does not necessarily need to be integrated with a mirror; in fact, individual components which can be combined to achieve the purpose of shifting the infrared light exiting the light filtering section 56 can be employed. Besides, in
Based on the above, for example, with the structure of
As still another example, based on the structure of
According to still another embodiment of the disclosure, a terminal device is provided. The terminal device can take the form of any kind of device with 3D scanning, mapping, or imaging functions, such as mobile devices, mobile stations, mobile units, machine-to-machine (M2M) devices, wireless units, remote units, user agents, mobile clients, and the like. Examples of the terminal device include but are not limited to a mobile communication terminal, a wired/wireless phone, a personal digital assistant (PDA), a smart phone, and a vehicle-mounted communication device.
Referring back to
In one implementation, the at least one driving component comprises an actuator equipped with a mirror (micro-mirror actuator) and is arranged in the light reflective section, the light reflective section further comprises a light reflective component such as a mirror, a reflective plate, or other reflective mechanism.
The micro-mirror actuator can be disposed closer to the infrared source than the reflective component. In this case, the actuator is configured to receive and reflect, via the mirror, the infrared light from the infrared source, and the reflective component is configured to receive the infrared light from the actuator and reflect the infrared light received from the actuator to the light filtering section.
Alternatively, compared with the reflective component, the micro-mirror actuator can be disposed farther away from the infrared source and closer to the light filtering section. In this case, the reflective component is configured to receive and reflect the infrared light from the infrared source, and the actuator is configured to receive the infrared light from the reflective component and reflect, via the mirror, the infrared light received from the reflective component to the light filtering section.
With aid of the infrared projector, the imaging device, or the terminal device provided herein, a much smoother and sharper-edged 3D shape for various applications, such as VR and AR, can be obtained. It is also possible to enable better 3D object measurement even with low-resolution point clouds or low-resolution infrared cameras.
For details not provided herein, reference is made to the foregoing infrared projector and imaging device. Embodiments or features thereof can be combined or substituted with each other without conflicts.
One of ordinary skill in the art can understand that all or part of the operations of the infrared projector, the imaging device, and the terminal device can be completed by a computer program instructing related hardware, and the program can be stored in a non-transitory computer readable storage medium. In this regard, according to embodiments of the disclosure, a non-transitory computer readable storage medium is provided. The non-transitory computer readable storage medium is configured to store at least one computer readable program which, when executed by a computer, causes the computer to carry out all or part of the operations described in the disclosure. Examples of the non-transitory computer readable storage medium include but are not limited to read only memory (ROM), random access memory (RAM), a disk or optical disk, and the like.
While the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.
The present disclosure is a continuation-application of International (PCT) Patent Application No. PCT/CN2019/102062 filed Aug. 22, 2019, which claims priority of U.S. Provisional Patent Application No. 62/722,769, filed on Aug. 24, 2018, the entire contents of all of which are hereby incorporated by reference.
Related application data: U.S. Provisional Application No. 62722769, filed August 2018 (US); parent application PCT/CN2019/102062, filed August 2019; child application Ser. No. 17176815 (US).