This disclosure relates to image formation in digital photography, in particular to generating an enhanced time-of-flight depth map for an image.
Time-of-flight (ToF) sensors are used to measure the distance between objects captured by the camera and the sensor plane. As illustrated in
In ToF imaging, the depth map results from a RAW measurement which correlates the reflected light with a phase-shifted input pulse, as illustrated in
Within the dashed circle at 201, small objects have been oversmoothed or smoothed out. For example, the distance of the thin black cables is not correctly measured. At 202, however, the gradient is recovered well even for this visually challenging part of the image; the ToF sensor therefore provides a correct gradient measurement in this reflecting area. Such textureless areas are difficult for classical depth estimation approaches to handle. In the solid circles, shown at 204 and 205, it can be seen that dark objects, as well as far-away objects, are not correctly captured. Moreover, the low-resolution depth image (in this example 240×180 pixels) suffers additional information loss through image alignment, which can be seen in the lower right corner at 203 of
There have been several attempts to overcome the drawbacks of low quality ToF data by either enriching the data with another input source or utilizing the capabilities of machine learning through data pre-processing.
Methods employing deep learning for ToF include Su, Shuochen, et al., “Deep end-to-end time-of-flight imaging”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2018, where it is proposed to use an end-to-end learning pipeline that maps the RAW correlation signal to a depth map. A network is trained on a synthetic dataset. This method can generalize, to some extent, to real data.
In another approach, U.S. Pat. No. 9,760,837 B1 describes a method for depth estimation with time-of-flight. This method utilises the RAW correlation signal and produces a same-resolution depth output.
In Agresti, Gianluca, et al., “Deep learning for confidence information in stereo and tof data fusion”, Proceedings of the IEEE International Conference on Computer Vision 2017, and Agresti, Gianluca, and Pietro Zanuttigh, “Deep learning for multi-path error removal in ToF sensors”, Proceedings of the European Conference on Computer Vision 2018, classical stereo vision is fused with time-of-flight sensing to increase resolution and accuracy of both synthetically created data modalities. The RGB data input pipeline is not learned and thus the use of the RGB is only indirect by leveraging the predicted stereo depth maps. The ToF data is separately reprojected and upsampled with a bilateral filter.
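The bilateral upsampling referenced above can be sketched as follows. This is a minimal joint bilateral upsampling in pure Python, illustrating the general classical technique rather than the cited works' exact filter; the window size, the sigma values and the function name are illustrative assumptions:

```python
import math

def joint_bilateral_upsample(low_depth, guide, sigma_s=1.0, sigma_r=0.1):
    """Joint bilateral upsampling (illustrative sketch).

    Each high-resolution output pixel averages nearby low-resolution depth
    samples, weighted by spatial distance (sigma_s) and by range similarity
    in the high-resolution guide image (sigma_r), so depth edges follow
    intensity edges in the guide.
    """
    scale = len(guide) // len(low_depth)
    h_hi, w_hi = len(guide), len(guide[0])
    h_lo, w_lo = len(low_depth), len(low_depth[0])
    out = [[0.0] * w_hi for _ in range(h_hi)]
    for i in range(h_hi):
        for j in range(w_hi):
            ci, cj = i // scale, j // scale  # nearest low-resolution sample
            acc, norm = 0.0, 0.0
            for di in (-1, 0, 1):            # small 3x3 support, an assumption
                for dj in (-1, 0, 1):
                    li = min(max(ci + di, 0), h_lo - 1)
                    lj = min(max(cj + dj, 0), w_lo - 1)
                    gi = min(li * scale, h_hi - 1)
                    gj = min(lj * scale, w_hi - 1)
                    w_space = math.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                    w_range = math.exp(-((guide[i][j] - guide[gi][gj]) ** 2)
                                       / (2 * sigma_r ** 2))
                    acc += w_space * w_range * low_depth[li][lj]
                    norm += w_space * w_range
            out[i][j] = acc / norm
    return out
```

Because the weights are normalised, a constant low-resolution depth map is upsampled to the same constant value regardless of the guide content.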
U.S. Pat. No. 8,134,637 B2 proposes a method to super-resolve the depth image of a ToF sensor with the help of an RGB image, without learning. This is not a one-step approach: each of its multiple individual modules propagates an error, and these errors accumulate through the pipeline.
It is desirable to develop a method for generating an enhanced ToF depth map for an image.
According to a first aspect there is provided an image processing system configured to receive an input time-of-flight depth map representing the distance of objects in an image from a camera at a plurality of locations of pixels in the respective image, and in dependence on that map to generate an improved time-of-flight depth map for the image, the input time-of-flight depth map having been generated from at least one correlation image representing the overlap between emitted and reflected light signals at the plurality of locations of pixels at a given phase shift, the system being configured to generate the improved time-of-flight depth map from the input time-of-flight depth map in dependence on a colour representation of the respective image and at least one correlation image.
Thus, the input ToF depth map may be enriched with features from the RAW correlation signal and processed with co-modality guidance from aligned colour images. The system therefore utilizes cross-modality advantages. ToF depth errors may also be corrected. Missing data may be recovered and multi-path ambiguities may be resolved via RGB-guidance.
The colour representation of the respective image may have a higher resolution than the input time-of-flight depth map and/or the at least one correlation image. This may increase the resolution of the improved time-of-flight depth map.
The system may be configured to generate the improved time-of-flight depth map by means of a trained artificial intelligence model. The trained artificial intelligence model may be an end-to-end trainable neural network. Because the pipeline is trainable end-to-end, accessing all three different modalities (colour, depth and RAW correlation) at the same time may improve the overall recovered depth map. This may increase the resolution, accuracy and precision of the ToF depth map.
The model may be trained using at least one of: input time-of-flight depth maps, correlation images and colour representations of images.
The system may be configured to combine the input time-of-flight depth map with the at least one correlation image to form a correlation-enriched time-of-flight depth map. Enriching the input ToF depth map with the encoded features from a low-resolution RAW correlation signal may help to reduce depth errors.
The system may be configured to generate the improved time-of-flight depth map by hierarchically upsampling the correlation-enriched time-of-flight depth map in dependence on the colour representation of the respective image. This may help to improve and sharpen depth discontinuities.
The improved time-of-flight depth map may have a higher resolution than the input time-of-flight depth map. This may allow for improvement when rendering images captured by a camera.
The colour representation of the respective image may be a colour-separated representation. It may be an RGB representation. This may be a convenient colour representation to use in the processing of the depth map.
According to a second aspect there is provided a method for generating an improved time-of-flight depth map for an image in dependence on an input time-of-flight depth map representing the distance of objects in the image from a camera at a plurality of locations of pixels in the respective image, the input time-of-flight depth map having been generated from at least one correlation image representing the overlap between emitted and reflected light signals at the plurality of locations of pixels at a given phase shift, the method comprising generating the improved time-of-flight depth map from the input time-of-flight depth map in dependence on a colour representation of the respective image and at least one correlation image.
Thus, the input ToF depth map may be enriched with features from the RAW correlation signal and processed with co-modality guidance from aligned colour images. The method therefore utilizes cross-modality advantages. ToF depth errors may also be corrected. Missing data may be recovered and multi-path ambiguities may be resolved via RGB-guidance.
The colour representation of the respective image may have a higher resolution than the input time-of-flight depth map and/or the at least one correlation image. This may improve the resolution of the improved time-of-flight depth map.
The method may comprise generating the improved time-of-flight depth map by means of a trained artificial intelligence model. The trained artificial intelligence model may be an end-to-end trainable neural network. Because the pipeline is trainable end-to-end, accessing all three different modalities (colour, depth and RAW correlation) at the same time may improve the overall recovered depth map. This may increase the resolution, accuracy and precision of the ToF depth map.
The method may further comprise combining the input time-of-flight map with the at least one correlation image to form a correlation-enriched time-of-flight depth map. Enriching the input ToF depth map with the encoded features from a low-resolution RAW correlation signal may help to reduce depth errors.
The method may further comprise hierarchically upsampling the correlation-enriched time-of-flight depth map in dependence on the colour representation of the respective image. This may help to improve and sharpen depth discontinuities and may improve the resolution of the improved time-of-flight depth map.
The present application will now be described by way of example with reference to the accompanying drawings. In the drawings:
The input time-of-flight depth map 301 is generated from at least one RAW correlation image representing the overlap between emitted and reflected light signals at the plurality of locations of pixels at a given phase shift. As is well known in the art, using the speed of light, this RAW correlation image data is processed to generate the input ToF depth map. This processing of the RAW correlation data to form the input ToF depth map may be performed separately from the pipeline, or in an initialisation step of the pipeline. The noisy ToF input depth 301 is fed into the learning framework (labelled ToF upsampling, ToFU), indicated at 302.
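The conversion from RAW correlation samples to depth mentioned above can be sketched as follows. This assumes the common four-bucket scheme, in which the reflected signal is correlated with the emitted signal shifted by 0°, 90°, 180° and 270°; the function names and the modulation frequency are illustrative assumptions, not the specific processing of this disclosure:

```python
import math

C_LIGHT = 3.0e8  # speed of light in m/s

def simulate_correlations(depth, f_mod, amplitude=1.0, offset=0.5):
    """Ideal (noise-free) correlation samples C(theta) = A*cos(phi - theta) + B
    for the four standard phase shifts, given a true depth in metres."""
    phase = 4.0 * math.pi * f_mod * depth / C_LIGHT
    return [amplitude * math.cos(phase - math.radians(t)) + offset
            for t in (0, 90, 180, 270)]

def depth_from_correlations(c0, c90, c180, c270, f_mod):
    """Recover depth from four phase-shifted correlation samples.

    The differences cancel the constant offset B:
    c0 - c180 ~ cos(phi), c90 - c270 ~ sin(phi).
    """
    phase = math.atan2(c90 - c270, c0 - c180)
    phase = phase % (2.0 * math.pi)  # wrap to [0, 2*pi)
    # One full phase cycle corresponds to the unambiguous range c / (2 * f_mod).
    return C_LIGHT * phase / (4.0 * math.pi * f_mod)
```

For a 20 MHz modulation frequency the unambiguous range is c / (2·f) = 7.5 m; depths beyond it wrap around, which is one source of the invalid far-away regions discussed above.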
The pipeline also takes as an input a colour representation 303 of the respective image for which the input depth map 301 has been generated. In this example, the colour representation is a colour-separated representation, specifically an RGB image. More generally, the colour representation may comprise one or more channels.
The pipeline also takes as an input at least one RAW correlation image, as shown at 304. Therefore, multi-modality input data is utilised.
The system is configured to generate an improved time-of-flight depth map 305 from the input time-of-flight depth map 301 in dependence on the colour representation of the respective image 303 and at least one correlation image 304.
The system and method will now be described in more detail below with reference to
In this example, an end-to-end neural network comprises an encoder-decoder convolutional neural network with a shallow encoder 401 and a decoder 402 with guided upsampling and depth injection. The shallow encoder 401 takes RAW correlation images 403 as input. The network encodes the RAW correlation information 403 at the original resolution of 1/1 from the ToF sensor to extract deep features for depth prediction.
During the decoding stage, the input ToF depth data, shown at 404 (which may be noisy and corrupted), is injected (i.e. combined with the RAW correlation data) at the original resolution 1/1 and is then hierarchically upsampled to four times the original resolution with RGB guidance. The input ToF depth information is injected in the decoder at the ToF input resolution stage, thus helping the network to predict depth information at metric scale.
RGB images at 2× and 4× the original resolution of the ToF depth map, shown at 405 and 406 respectively, are utilized during guided upsampling (GU) to support the residual correction of a directly upsampled depth map and to enhance boundary precision at depth discontinuities.
The noisy ToF depth data 404 is therefore injected and upsampled to four times the original resolution with RGB guidance to generate an enhanced ToF depth map, shown at 407.
Co-injection of RGB and RAW correlation image modalities helps to super-resolve the input ToF depth map by leveraging additional information to fill the holes (black areas in the input ToF depth map), predict further away regions and resolve ambiguities due to multi-path reflections.
The above-described method may reliably recover the depth for the whole scene, despite the depth injection from the input ToF depth map being noisy and corrupted and far away pixel values being invalidated. The guided upsampling helps to improve and sharpen depth discontinuities. In this example, the final output is four times the resolution of the original input ToF depth map. However, the depth map may also be upsampled to higher resolutions.
In summary, the layers utilized in the decoder are as follows:

Layers before Depth Injection: 1× 2D-UpConvolution (from 1/2 input resolution to 1/1 input resolution)

Depth Injection, for each input:
Concatenation
4× ResNetBlock
Residual = 2D-Convolution
Output = Depth + Residual

Guided Upsampling (GU 1): convolution of the concatenation, followed by bilinear upsampling (depth prediction at 2× input resolution)

Layers before GU 2: 1× 2D-UpConvolution of the convolution of the concatenation (from 2× input resolution to 4× input resolution)

GU 2, for each input:
Concatenation
4× ResNetBlock
Residual = 2D-Convolution
Output = Depth + Residual
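The guided upsampling pattern above (Output = Depth + Residual) can be sketched as follows. The bilinear upsampling is real, runnable code; the residual is passed in as an argument here because in the network it would come from learned, RGB-guided convolutions, which this stdlib-only sketch does not implement:

```python
def bilinear_resize(grid, out_h, out_w):
    """Bilinearly resize a 2-D grid (list of lists), align-corners style."""
    h, w = len(grid), len(grid[0])
    out = []
    for i in range(out_h):
        y = i * (h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = int(y)
        y1 = min(y0 + 1, h - 1)
        fy = y - y0
        row = []
        for j in range(out_w):
            x = j * (w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = int(x)
            x1 = min(x0 + 1, w - 1)
            fx = x - x0
            top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
            bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

def guided_upsample_step(depth, residual):
    """One GU step: directly upsample the depth by 2x, then add a residual
    correction (in the network, the residual would be predicted from the
    concatenated features under RGB guidance)."""
    up = bilinear_resize(depth, 2 * len(depth), 2 * len(depth[0]))
    return [[u + r for u, r in zip(up_row, res_row)]
            for up_row, res_row in zip(up, residual)]
```

Repeating this step twice yields the 4× output resolution of the exemplary pipeline, with the residual sharpening depth discontinuities at each level.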
Equations (1)-(4) below describe an exemplary Loss Function. For training the proposed network, the pixel-wise difference between the predicted inverse depth to mimic a disparity and the ground truth is minimized by exploiting a robust norm for fast convergence together with a smoothness term:
L_Total = ω_S · L_Smooth + ω_D · L_Depth  (1)

where:

L_Smooth = Σ_p |∇D(p)|^T · e^(−|∇I(p)|)  (2)

and:

L_Depth = Σ_p ω_Scale · |D(p) − D_Pred(p)|_Barron  (3)

where |·|_Barron is the Barron loss as proposed in Barron, “A General and Adaptive Robust Loss Function”, CVPR 2019, in the special form of a smoothed L1 norm:

f(x) = √((x/2)² + 1) − 1  (4)
ω_Scale accounts for the contribution of L_Depth at each scale level, D is the inverse depth and I is the RGB image. As the disparity values at lower scale levels scale accordingly (for example, half the resolution results in half the disparity value), the loss term should be scaled inversely by the same scale parameter. Additionally, the number of pixels decreases quadratically with every scale level, resulting in a scale weight for equal contribution of each scale level of ω_Scale = Scale · Scale² = Scale³.
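The loss terms above can be sketched in a few lines. The smoothness term is shown in a 1-D form for brevity; the function names are illustrative and the sums run over the same pixels p as in equations (2) and (3):

```python
import math

def barron_smooth_l1(x):
    """Smoothed L1 norm, the special case of the Barron loss in Eq. (4):
    f(x) = sqrt((x/2)^2 + 1) - 1. Quadratic near zero, ~|x|/2 for large |x|."""
    return math.sqrt((x / 2.0) ** 2 + 1.0) - 1.0

def scale_weight(scale):
    """Weight for equal contribution of each scale level: disparities shrink
    linearly with the downsampling factor and pixel counts shrink
    quadratically, giving w = scale * scale^2 = scale^3."""
    return scale ** 3

def depth_loss(d_gt, d_pred, scale):
    """Per-scale depth term of Eq. (3) over flattened inverse-depth values."""
    return scale_weight(scale) * sum(
        barron_smooth_l1(a - b) for a, b in zip(d_gt, d_pred))

def smoothness_loss(inv_depth, image):
    """Edge-aware smoothness of Eq. (2), 1-D sketch: depth gradients are
    penalised less where the image gradient is strong (a depth edge is
    likely to coincide with an intensity edge)."""
    total = 0.0
    for p in range(len(inv_depth) - 1):
        grad_d = abs(inv_depth[p + 1] - inv_depth[p])
        grad_i = abs(image[p + 1] - image[p])
        total += grad_d * math.exp(-grad_i)
    return total
```

A flat inverse-depth signal incurs zero smoothness penalty, and a perfect prediction incurs zero depth loss, as expected from equations (2) and (3).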
In one implementation, for generating training data, together with accurate depth ground truth, a physics-based rendering pipeline (PBRT) may be used together with Blender, as proposed in Su, Shuochen, et al., “Deep end-to-end time-of-flight imaging”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2018. A low resolution version of the depth is clipped and corrupted with noise to simulate the ToF depth input signal.
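The clipping and corruption step described above can be sketched as follows. The range limit, noise level and function name are illustrative assumptions, not the cited pipeline's settings:

```python
import random

def simulate_tof_input(depth_gt, max_range=7.5, noise_sigma=0.05, seed=0):
    """Simulate a corrupted ToF input depth map from ground truth (sketch):
    values beyond the unambiguous range are invalidated (set to 0.0,
    corresponding to the black holes in the input ToF depth map) and the
    remaining values are perturbed with Gaussian noise."""
    rng = random.Random(seed)  # seeded for reproducible training data
    out = []
    for row in depth_gt:
        out_row = []
        for d in row:
            if d > max_range:
                out_row.append(0.0)  # invalid / hole
            else:
                out_row.append(max(0.0, d + rng.gauss(0.0, noise_sigma)))
        out.append(out_row)
    return out
```

Downsampling the result (e.g. with a bilinear resize) then yields the low-resolution, noisy input that the network learns to correct and super-resolve.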
Further exemplary results for different scenes are shown in
In
Therefore depth injection may guide the network to predict a well-defined depth at lower resolution which gets refined with RGB-guidance during the hierarchical guided upsampling to recover the depth at four times the original resolution.
The transceiver 905 is capable of communicating over a network with other entities 910, 911. Those entities may be physically remote from the camera 901. The network may be a publicly accessible network such as the internet. The entities 910, 911 may be based in the cloud. Entity 910 is a computing entity. Entity 911 is a command and control entity. These entities are logical entities. In practice they may each be provided by one or more physical devices such as servers and data stores, and the functions of two or more of the entities may be provided by a single physical device. Each physical device implementing an entity comprises a processor and a memory. The devices may also comprise a transceiver for transmitting and receiving data to and from the transceiver 905 of camera 901. The memory stores in a non-transient way code that is executable by the processor to implement the respective entity in the manner described herein.
The command and control entity 911 may train the artificial intelligence models used in the pipeline. This is typically a computationally intensive task, even though the resulting model may be efficiently described, so it may be efficient for the development of the algorithm to be performed in the cloud, where it can be anticipated that significant energy and computing resource is available. It can be anticipated that this is more efficient than forming such a model at a typical camera.
In one implementation, once the algorithms have been developed in the cloud, the command and control entity can automatically form a corresponding model and cause it to be transmitted to the relevant camera device. In this example, the pipeline is implemented at the camera 901 by processor 904.
In another possible implementation, an image may be captured by the camera sensor 902 and the image data may be sent by the transceiver 905 to the cloud for processing in the pipeline. The resulting image could then be sent back to the camera 901, as shown at 912 in
Therefore, the method may be deployed in multiple ways, for example in the cloud, on the device, or alternatively in dedicated hardware. As indicated above, the cloud facility could perform training to develop new algorithms or refine existing ones. Depending on the compute capability near to the data corpus, the training could either be undertaken close to the source data, or could be undertaken in the cloud, e.g. using an inference engine. The method may also be implemented at the camera, in a dedicated piece of hardware, or in the cloud.
The present application therefore uses an end-to-end trainable deep learning pipeline for ToF depth super-resolution, which enriches the input ToF depth map with the encoded features from a low-resolution RAW correlation signal. The composed feature maps are hierarchically upsampled with co-modality guidance from aligned higher-resolution RGB images. By injecting the encoded RAW correlation signal, the ToF depth is enriched by the RAW correlation signal for domain stabilization and modality guidance.
The method utilizes cross-modality advantages. For example, ToF works well in low light or for textureless areas, while RGB works well in bright scenes or scenes with darker-textured objects.
Because the pipeline is trainable end-to-end and accesses all three different modalities (RGB, depth and RAW correlation) at the same time, the modalities can mutually improve the overall recovered depth map. This may increase the resolution, accuracy and precision of the ToF depth map.
ToF depth errors may also be corrected. Missing data may be recovered, as the method measures farther away regions, and multi-path ambiguities may be resolved via RGB-guidance.
The network may utilise supervised or unsupervised training. The network may utilise multi-modality training, with synthetic correlation, RGB, ToF depth and ground truth renderings for direct supervision.
Additional adjustments may be made to the output ToF depth map in dependence on the ground truth image.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that aspects of the present application may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the application.
This application is a continuation of International Application No. PCT/EP2019/071232, filed on Aug. 7, 2019, which is hereby incorporated by reference in its entirety.
Parent application: PCT/EP2019/071232, Aug 2019, US
Child application: 17586034, US