This application is a U.S. National Stage Application under 35 U.S.C. 371 of International Patent Application No. PCT/EP2020/077179, filed Sep. 29, 2020, which is incorporated herein by reference in its entirety.
This application claims the benefit of European Patent Application No. 19306245.2, filed Sep. 30, 2019, which is incorporated herein by reference in its entirety.
The present embodiments relate generally to image processing, and more particularly to computing and using depth maps for captured images based on transmitted camera parameters.
Conventional cameras capture light from a three-dimensional scene on a two-dimensional sensor device sensitive to visible light. The light-sensitive technology used in such imaging devices is often based on semiconductor components capable of converting photons into electrons, such as charge-coupled devices (CCD) or complementary metal oxide semiconductor (CMOS) sensors. A digital image photosensor, for example, typically includes an array of photosensitive cells, each cell being configured to capture incoming light. A 2D image providing spatial information is obtained from a measurement of the total amount of light captured by each photosensitive cell of the image sensor device. While the 2D image can provide information on the intensity and the color of the light at spatial points of the photosensor(s), no information is provided on the direction of the incoming light.
Generating 3D or 4D renderings from 2D captured images is complex, as visual perceptions have to be created after the fact. Two important considerations in creating accurate visual perceptions have to do with parallax estimation and depth-map calculation. A depth map is an image or image channel that contains information relating to the distance of the surfaces of scene objects from a viewpoint. In other words, depth maps are special images where each pixel records the distance (or the inverse of the distance, or any information which is a function of the distance) between the objects observed at that position and a camera. A depth map may be computed, for example, using several cameras observing the same field of view and deducing depth from the variation of parallax between views. In practice, estimated depth maps show spurious pixels. Many factors make depth-map estimation difficult, including objects being partially masked from one camera to the next; variation of the light reflected from an object observed at different positions; surfaces with no or few textures, making parallax estimation difficult; and sensitivity variation among cameras.
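As a concrete illustration of the relation between parallax and depth, the following minimal sketch converts a disparity map measured between two rectified, horizontally aligned cameras into a depth map using the classical relation depth = focal length × baseline / disparity. The function name and example values are illustrative assumptions, not part of the embodiments described here.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert a disparity map (pixels) into a depth map (meters).

    Assumes two rectified, horizontally aligned cameras: depth = f * B / d.
    Pixels with zero or negative disparity are marked invalid (NaN).
    """
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(disparity.shape, np.nan)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Example: a 2x2 disparity map from a camera pair with f = 1000 px and B = 0.1 m.
print(disparity_to_depth([[10.0, 20.0], [0.0, 5.0]], 1000.0, 0.1))
```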
Parallax is an important concept in visual perception. It can be defined as a displacement or difference in the apparent position of an object viewed along two different lines of sight, and it is measured by the angle of inclination between those two lines. Each human eye has a slightly different, overlapping line of sight, which is what allows depth perception to be achieved. Parallax also affects optical instruments that view objects from slightly different angles.
In videos and streaming content, providing stereoscopic visual perception becomes even more complicated. Sometimes multiple views of the same scene captured at different angles are provided to create the appropriate parallax and depth map. However, storage and processing become challenging because the related data is extensive. For example, to provide motion parallax, data relating to a multi-view content is needed. The content must be dense enough to provide sufficient overlap between views, yet with different viewing angles, to allow the effect to be created. This is one key element any compression algorithm must exploit and address in order to reduce the amount of data to be transmitted (which also needs to take into account the respective camera parameters). Unfortunately, the prior art does not provide easy and practical techniques in this arena. Consequently, it is desirable to provide techniques that require less data to be captured and used to provide three- and four-dimensional visual perspectives.
A method and system are provided for processing image content. The method comprises receiving information about a content image captured by at least one camera. The content includes a multi-view representation of an image including both distorted and undistorted areas. The camera parameters and image parameters are then obtained and used to determine which areas are undistorted and which areas are distorted in said image. This is used to calculate a depth map of the image using the determined undistorted and distorted information. A final stereoscopic image is then rendered using the distorted and undistorted areas and the calculated depth map.
Different embodiments will now be described, by way of example only, with reference to the accompanying drawings.
Most image captures provide two-dimensional images. To create three or four dimensional renderings from these images, different techniques can be used. For example, two or more views of the scene can be used for its reconstruction, using a stereo pair of calibrated or uncalibrated cameras, through multiple images taken with a single camera, or by capturing the same scene from different angles such as when using a light field/plenoptic camera.
To enable the recreation of multi-dimensional visual perceptions, the transmitted multi-view content needs to include pertinent information, such as depth information, to be effective. When more than one camera or angle is used, a depth map for each camera is needed, in a well-defined Multi-View plus Depth (MVD) format. This information is often transmitted as the input in a format used for extensions of technologies such as the High Efficiency Video Coding (HEVC) standard for video compression/decompression.
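To make the MVD idea concrete, the following minimal sketch groups, for each view, a texture image, a depth map at the same resolution, and the associated camera parameters. The class and field names are hypothetical illustrations, not the actual MVD or HEVC syntax.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ViewPlusDepth:
    """One view of an MVD content: texture, per-pixel depth and camera parameters."""
    texture: np.ndarray      # H x W x 3 RGB image
    depth: np.ndarray        # H x W depth map, same pixel resolution as the texture
    intrinsics: np.ndarray   # 3 x 3 intrinsic matrix K
    rotation: np.ndarray     # 3 x 3 camera orientation R in the reference CS
    translation: np.ndarray  # 3 x 1 camera position T in the reference CS

    def __post_init__(self):
        assert self.texture.shape[:2] == self.depth.shape, \
            "MVD requires the depth map to match the texture resolution"

# A multi-view content is then simply a list of ViewPlusDepth objects, one per camera.
```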
As discussed, to provide motion parallax, a multi-view content must be dense enough to provide sufficient overlap between views with different viewing angles to allow the effect; since this requires a lot of captured information, the compression algorithm becomes important in reducing the amount of data to be transmitted. In the former 3D-HEVC and MV-HEVC extensions of the HEVC codec, inter-view predictions were introduced. At that time, multi-view camera systems were mostly considered as horizontal-only systems and the prediction mechanisms exploited only the horizontal direction. Therefore, inter-view differences were defined as a horizontal disparity, and it was possible to calculate a corresponding pixel in another view using this disparity. Current camera arrays are no longer horizontal-only but are instead arranged in 2D or even 3D configurations. Calculating a corresponding pixel in a neighbouring view requires more complex processing, which must take into account the respective camera parameters. To address these issues and shortcomings, additional information characterizing the cameras, such as distortion information, should be provided. In one embodiment, a camera-pair mode can be introduced to represent the matrix of coefficients used to calculate pixel positions in respective views.
An MPEG-I program targeting the delivery of content (such as 6DoF content) can allow the end user to move inside the content and to perceive parallax. The rendered content at the client side should be adapted in real time to the head movements of the observer. To create this parallax, one should deliver not only the usual 2D content but also content corresponding to what is not viewed from the initial angle but could be viewed from a different one when the viewer moves his or her head. This content is typically captured by a camera array, each camera seeing the scene from a slightly different angle and a different position. The distance between cameras roughly determines the amount of parallax the system will be able to provide. The amount of data needed to transmit a multi-view content in such a case may be very large. Furthermore, to be able to synthesize intermediate views and render any viewing position correctly, some depth maps must be transmitted along with the texture. The MVD format has already been used in the past to deliver such content; it was, for instance, already used as the input format for the 3D-HEVC extension of HEVC. In this standard, camera parameters were transmitted as SEI messages to be used at the decoder side.
In some instances, especially when renderings are volumetrically extensive, camera parameters are mandatory in order to precisely calculate the corresponding positions of a given point in space in any of the input views. For example, in 3D-HEVC, multi-view contents are only provided from horizontally aligned cameras and are then rectified. This means that the different views were pre-processed in order to have their respective camera principal points on a same grid. This also means that, for a given point in space, the distance between its positions in two different views corresponding to two different cameras was a disparity expressed only in the horizontal direction.
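For such rectified, horizontally aligned cameras, finding the corresponding pixel in a neighbouring view reduces to a purely horizontal shift. The sketch below shows this special case, with the disparity derived from the depth, the focal length and the horizontal baseline; it is an illustrative simplification rather than the syntax of any standard.

```python
def corresponding_pixel_rectified(u, v, depth, focal_px, baseline_m):
    """Corresponding pixel in a rectified, horizontally aligned neighbouring view.

    The vertical coordinate is unchanged and the horizontal coordinate is
    shifted by the disparity d = f * B / z.
    """
    disparity = focal_px * baseline_m / depth
    return u - disparity, v
```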
When multiple cameras are used that are neither horizontally aligned nor rectified, and no pre-processing such as distortion correction is considered, some form of calibration is desirable and camera parameters become important. The camera parameters that are needed include the following:
Intrinsic parameters deal with the camera's internal characteristics, such as its focal length, skew, distortion, and image center. Extrinsic parameters, on the other hand, describe its position and orientation in the world. Knowing the intrinsic parameters is an essential first step for 3D computer vision, as it allows estimation of the scene's structure in Euclidean space and removal of the lens distortion, which degrades accuracy. In geometric optics, distortion is a deviation from rectilinear projection, a projection in which straight lines in a scene remain straight in an image. It is a form of optical aberration.
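The intrinsic parameters listed above are conventionally gathered into a 3×3 matrix. The helper below builds such a matrix from the focal lengths, skew and principal point; the parameter names are generic placeholders, not syntax elements of the described embodiments.

```python
import numpy as np

def intrinsic_matrix(fu, fv, cu, cv, skew=0.0):
    """Build the 3x3 intrinsic matrix K from focal lengths (pixels),
    principal point (pixels) and skew coefficient."""
    return np.array([[fu, skew, cu],
                     [0.0, fv, cv],
                     [0.0, 0.0, 1.0]])
```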
At the decoder side, the camera parameters are extracted from the stream and computations are performed to determine corresponding pixel positions in different views (for view prediction in the decoding process, for instance). These computations include matrix products and inverse matrix calculations, which can be quite computationally intensive. In order to reduce the decoder complexity, it is possible to pre-compute these camera parameters at the encoder side and to transmit them in the bitstream in a form that is easier to use from a decoder perspective.
Another limitation presented by the prior art is the way camera parameters are described (section G.14.2.6 of the HEVC standard) and the amount of calculation they require before being used. Each value of each rotation or translation matrix is given in scientific notation, corresponding to a sign (1 bit), an exponent (6 bits) and a mantissa (a variable number of bits). Intrinsic parameters (focal length, skew and principal point) are also described using the same notation. This notation requires some calculation before being usable at the decoder side. In an alternative embodiment, it is possible to send in parallel a 32-bit fixed-point version of these parameters to simplify calculations at the decoder side.
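As an illustration of sending a 32-bit fixed-point version of a parameter in parallel, the sketch below converts a floating-point camera parameter to a signed 32-bit fixed-point value and back. The choice of 16 fractional bits is an arbitrary assumption for the example, not a value mandated by the text.

```python
def to_fixed_point(value, frac_bits=16):
    """Encode a float as a signed 32-bit fixed-point integer (frac_bits fractional bits)."""
    scaled = int(round(value * (1 << frac_bits)))
    # Clamp to the signed 32-bit range so the result fits in a 32-bit word.
    return max(-(1 << 31), min((1 << 31) - 1, scaled))

def from_fixed_point(fixed, frac_bits=16):
    """Decode a signed 32-bit fixed-point integer back to a float."""
    return fixed / float(1 << frac_bits)

# A focal length of 1234.5678 px survives the round trip with ~1/65536 precision.
print(from_fixed_point(to_fixed_point(1234.5678)))
```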
In one embodiment, one way to simplify calculations at the decoder side is to remove part of the calculations to be done when manipulating camera parameters. As discussed later, the entire calculation that provides the position of a given point in space from one view to another can be performed in a very precise manner. This allows converting a position corresponding to one camera into the corresponding position for another camera. In one embodiment, a pre-calculated matrix can be provided in order to reduce the amount of calculation needed, particularly on the decoder side.
In another embodiment, when camera parameters have been associated with the acquisition of each view, techniques can be used that allow for the transmission of these camera parameters in a form that both reduces the amount of data to transmit and eases the calculations at the decoder side.
In addition, to ease understanding of the concepts that are presented, a multi-view and depth format is provided as the input format for an encoder. (Multi-view plus depth means that, for each view, the RGB content is associated with a depth map at the same pixel resolution. This depth map may be generated by any means (calculation, measurement, etc.) known by those skilled in the art.) In one embodiment, to correctly exploit such content from multiple cameras, a calibration phase is required to determine the relative positions of the cameras (extrinsic parameters) and the individual camera parameters (intrinsic parameters) such as the focal length or the principal point position.
In one embodiment, this calibration phase is done before the shooting, using specific test patterns and associated software. In order to understand the techniques developed and used in conjunction with some of the embodiments described herein, some background material regarding the compression of multi-view and depth content needs to be explored. For this purpose, it is useful to consider an example that uses various views of different points in space and calculates corresponding pixel positions in different views for at least one of these points in space. In one embodiment, as shown in the corresponding figure, the calculation proceeds as follows.
In this embodiment, the intrinsic and extrinsic parameters are used to allow for the calculation of P′, given information relating to point P. Consider a camera calibrated as a plain pinhole camera, and let

$$K=\begin{pmatrix} f_u & \gamma & c_u \\ 0 & f_v & c_v \\ 0 & 0 & 1 \end{pmatrix}$$

be its intrinsic matrix, where $f_u$ and $f_v$ denote the focal length in pixels, $\gamma$ the skew coefficient, and $(c_u, c_v)$ the principal point.
In one embodiment, if $M = (X\;Y\;Z)^T$ are the coordinates of a given point in the Coordinate System (CS) of the camera, the coordinates of its image projection $m = (u\;v)^T$ are given (in pixels) by:

$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} \equiv K \cdot \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}$$

where the symbol $\equiv$ denotes the equivalence relation between homogeneous vectors:

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} \equiv \begin{pmatrix} x/z \\ y/z \\ 1 \end{pmatrix} \quad (z \neq 0)$$
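A minimal sketch of this projection is given below: a 3D point expressed in the camera CS is multiplied by K and the homogeneous result is normalized by its third component. It assumes the point lies in front of the camera (Z > 0).

```python
import numpy as np

def project_point(K, point_cam):
    """Project a 3D point (X, Y, Z) given in the camera CS to pixel coordinates (u, v).

    Implements (u, v, 1)^T == K . (X, Y, Z)^T up to the homogeneous equivalence,
    i.e. division by the third component.
    """
    m = K @ np.asarray(point_cam, dtype=np.float64)
    return m[0] / m[2], m[1] / m[2]
```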
Let $P = (R\;\;T) \in \mathbb{R}^{3\times 4}$ denote the pose matrix of the camera, where $R \in \mathbb{R}^{3\times 3}$ and $T \in \mathbb{R}^{3\times 1}$ respectively denote the camera's orientation and position in a reference Coordinate System (CS). The camera's extrinsic matrix is defined by:

$$Q = \left(R^{-1}\;\;-R^{-1}\cdot T\right) \in \mathbb{R}^{3\times 4}$$
If $M_{cam}$ and $M_{ref}$ denote the coordinates of the same point respectively in the camera CS and in the reference CS, then:

$$M_{cam} = Q \cdot \begin{pmatrix} M_{ref} \\ 1 \end{pmatrix} \qquad\text{and}\qquad M_{ref} = P \cdot \begin{pmatrix} M_{cam} \\ 1 \end{pmatrix}$$

This can be further understood by reviewing the corresponding figure.
For a given camera and a current view, let $c$ be its index. Let $(u, v)$ be the current pixel, and $z$ be its presumed depth. The corresponding match $(u', v')$ in a reference view #$c'$ is:

$$\begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix} \equiv K_{c'}\cdot R_{c'}^{-1}\cdot\left(z\cdot R_c\cdot K_c^{-1}\cdot\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} + T_c - T_{c'}\right) \qquad (1)$$
Given these parameters and equation (1), it is possible to calculate corresponding pixel positions in different views for one point in space while transmitting, per camera, its intrinsic matrix $K_c$ and its pose (rotation $R_c$ and translation $T_c$).
In order to precompute the projection of one pixel onto another view, instead of transmitting the intrinsic and extrinsic matrices, it is possible to transmit, for each group of two cameras, the required products of matrices corresponding to equation (1). Replacing $P$ by $P = (R\;\;T)$ and $Q$ by $Q = (R^{-1}\;\;-R^{-1}\cdot T)$, and calculating the right-hand part of the equation:

$$\begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix} \equiv z\cdot K_{c'}\cdot R_{c'}^{-1}\cdot R_c\cdot K_c^{-1}\cdot\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} + K_{c'}\cdot R_{c'}^{-1}\cdot\left(T_c - T_{c'}\right)$$

and then finally:

$$\begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix} \equiv z\cdot A_{cc'}\cdot\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} + B_{cc'}$$

In terms of storage, the two-by-two camera parameters approach therefore requires only a 3×3 matrix $A_{cc'}$ and a 3×1 vector $B_{cc'}$ per camera pair, where:

$$A_{cc'} = K_{c'}\cdot R_{c'}^{-1}\cdot R_c\cdot K_c^{-1} \qquad\text{and}\qquad B_{cc'} = K_{c'}\cdot R_{c'}^{-1}\cdot\left(T_c - T_{c'}\right)$$
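The sketch below precomputes the camera-pair matrix A_cc' and vector B_cc' from the intrinsic and pose parameters of two cameras, and then uses them to project a pixel with its depth into the other view, following the equations above. The function names are hypothetical and the code is a plain illustration, not the encoder or decoder implementation.

```python
import numpy as np

def camera_pair_parameters(K_c, R_c, T_c, K_cp, R_cp, T_cp):
    """Precompute the per-pair parameters of the equations above:
    A = K_c' . R_c'^-1 . R_c . K_c^-1  and  B = K_c' . R_c'^-1 . (T_c - T_c')."""
    R_cp_inv = np.linalg.inv(R_cp)
    A = K_cp @ R_cp_inv @ R_c @ np.linalg.inv(K_c)
    B = K_cp @ R_cp_inv @ (T_c - T_cp)
    return A, B

def reproject(u, v, z, A, B):
    """Corresponding pixel in the reference view: (u', v', 1)^T == z.A.(u, v, 1)^T + B,
    followed by the homogeneous normalization."""
    m = z * (A @ np.array([u, v, 1.0])) + np.asarray(B).reshape(3)
    return m[0] / m[2], m[1] / m[2]
```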
In theory, any combination of camera pairs can be transmitted, which means n² sets of information for n cameras. Nevertheless, in one embodiment, the prediction of the view to be decoded (using a view already decoded) does not require all the combinations. Only a given number of camera pairs are required, following the usual dependencies between encoded views. The number of pairs to be transmitted ("number_of_camera_pairs") is therefore more likely to be on the order of 2·n instead of n².
This embodiment is also illustrated in the corresponding figure.
Multi-View Contents Presenting Optical Distortion.
The previous description was based on undistorted content, which means the original content from the camera has been modified in order to remove the distortion brought by the optical system. Now consider content without correcting this distortion. The pinhole model fails to provide accurate correspondences because of the geometric distortions occurring in actual optical systems. First, let $M = (X\;Y\;Z)^T$ be a 3D point in the CS of a given camera, and consider the corresponding normalized homogeneous vector $(s\;t\;1)^T \equiv (X\;Y\;Z)^T$. Taking the optical distortions into account, the image projection equation becomes:

$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} \equiv K\cdot\begin{pmatrix} W(s, t) \\ 1 \end{pmatrix}$$

with $W: \mathbb{R}^2 \to \mathbb{R}^2$ denoting the forward warping operator induced by the distortion. $W$ is usually a polynomial and is therefore defined by a set of coefficients in floating-point format: $\{a_k\}_{k\leq N}$.
There is a variety of distortion models in the literature. Zhang, for example, only considers the first two terms of radial distortion (Z. Zhang, "A flexible new technique for camera calibration", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, November 2000):

$$W(s, t) = (1 + d_r)\cdot\begin{pmatrix} s \\ t \end{pmatrix}$$

where $d_r = a_1\cdot r^2 + a_2\cdot r^4$ and $r = \sqrt{s^2 + t^2}$ denotes the radius of the projection. On the other hand, in his popular Matlab toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/), Bouguet uses a more sophisticated 5-coefficient model that also considers tangential distortion and higher-order radial distortion.
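The sketch below applies these two families of models to undistorted normalized coordinates (s, t): the two-coefficient radial model described above, and a five-coefficient model with tangential and sixth-order radial terms in the spirit of the Bouguet toolbox. The (k1, k2, p1, p2, k3) coefficient ordering is a common convention assumed for the example, not necessarily the ordering used by the embodiments.

```python
def distort_radial2(s, t, a1, a2):
    """Two-coefficient radial model: W(s, t) = (1 + a1*r^2 + a2*r^4) * (s, t)."""
    r2 = s * s + t * t
    dr = a1 * r2 + a2 * r2 * r2
    return (1.0 + dr) * s, (1.0 + dr) * t

def distort_full5(s, t, k1, k2, p1, p2, k3):
    """Five-coefficient model with radial (k1, k2, k3) and tangential (p1, p2) terms."""
    r2 = s * s + t * t
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    sd = radial * s + 2.0 * p1 * s * t + p2 * (r2 + 2.0 * s * s)
    td = radial * t + p1 * (r2 + 2.0 * t * t) + 2.0 * p2 * s * t
    return sd, td
```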
Inverting such polynomial models would lead to a rational fraction, which would induce pointless computational complexity. It is quite straightforward to approximate the undistortion warping (the wording "undistortion", meaning "inverse distortion", corresponds to the warping from the distorted rays that end up on the image sensor in the optical system back to the undistorted rays in the object world) by a polynomial of the same degree.
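One possible way to obtain such an approximate inverse polynomial, sketched below under the assumption of the two-coefficient radial model, is to sample the forward distortion over a range of radii and fit the inverse coefficients by least squares. This is an illustrative fitting procedure, not the one mandated by the embodiments.

```python
import numpy as np

def fit_radial_undistortion(a1, a2, r_max=1.5, samples=200):
    """Fit inverse radial coefficients (b1, b2) such that
    r ~= (1 + b1*rd^2 + b2*rd^4) * rd, where rd = (1 + a1*r^2 + a2*r^4) * r."""
    r = np.linspace(1e-3, r_max, samples)
    rd = (1.0 + a1 * r ** 2 + a2 * r ** 4) * r  # forward-distorted radii
    # Solve rd + b1*rd^3 + b2*rd^5 ~= r in the least-squares sense.
    A = np.stack([rd ** 3, rd ** 5], axis=1)
    b1, b2 = np.linalg.lstsq(A, r - rd, rcond=None)[0]
    return b1, b2
```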
Several embodiments for distorted contents can now be explored. The first one requires polynomial computations but restricts the metadata to their most compact form. The subsequent ones improve the in-loop performance but require the pre-computation of an undistortion warp map.
In this embodiment, depending on the model applied, the number of parameters needed to describe the distortion can vary. The first information to transmit is the model applied (among a list of known models). The number of parameters is deduced from the model. Both the distortion and the undistortion information are sent to avoid calculating the undistortion coefficients at the decoding side. In terms of syntax, the transmission of such information is reflected in the corresponding figure.
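A minimal sketch of what such metadata could look like is given below: a model identifier from which the coefficient count is deduced, plus both the distortion and undistortion coefficient lists so that the decoder does not have to invert the model. The field names and the model list are hypothetical and are not the actual bitstream syntax.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical model list; the number of coefficients is deduced from the model id.
COEFF_COUNT = {"radial_2": 2, "radial_tangential_5": 5}

@dataclass
class DistortionMetadata:
    model_id: str                     # which known distortion model is used
    distortion_coeffs: List[float]    # coefficients of W (forward distortion)
    undistortion_coeffs: List[float]  # coefficients of the approximate inverse of W

    def __post_init__(self):
        expected = COEFF_COUNT[self.model_id]
        assert len(self.distortion_coeffs) == expected
        assert len(self.undistortion_coeffs) == expected
```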
When considering the distortion, equation (1) becomes a chain that first undistorts the current pixel, then changes the coordinate system, and finally re-distorts in the reference view:

$$\begin{pmatrix} s \\ t \\ 1 \end{pmatrix} \equiv K_c^{-1}\cdot\begin{pmatrix} u \\ v \\ 1 \end{pmatrix}, \qquad M_{ref} = P_c\cdot\begin{pmatrix} z\cdot\begin{pmatrix} W_c^{-1}(s, t) \\ 1 \end{pmatrix} \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} s' \\ t' \\ 1 \end{pmatrix} \equiv Q_{c'}\cdot\begin{pmatrix} M_{ref} \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix} \equiv K_{c'}\cdot\begin{pmatrix} W_{c'}(s', t') \\ 1 \end{pmatrix}$$

And, looking back to rotation matrices and translation vectors, this can be reformulated as:

$$\begin{pmatrix} s' \\ t' \\ 1 \end{pmatrix} \equiv R_{c'}^{-1}\cdot\left(z\cdot R_c\cdot\begin{pmatrix} W_c^{-1}(s, t) \\ 1 \end{pmatrix} + T_c - T_{c'}\right), \qquad \begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix} \equiv K_{c'}\cdot\begin{pmatrix} W_{c'}(s', t') \\ 1 \end{pmatrix}$$
It should also be noted that, because of the distortions, the math cannot be performed as a single linear algebra operation. In addition, this embodiment requires the storage of two polynomials, $W_c$ and $W_c^{-1}$, and two 2×3 matrices per camera.
This is illustrated in the corresponding figure.
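The reprojection chain for distorted content described above (normalize, undistort, change of coordinate system, re-distort, re-project) is spelled out step by step in the sketch below. The helper names are hypothetical, and the distortion and undistortion polynomials are assumed to be available as callables mapping normalized (s, t) to (s, t).

```python
import numpy as np

def reproject_distorted(u, v, z, K_c, R_c, T_c, K_cp, R_cp, T_cp,
                        undistort_c, distort_cp):
    """Match of distorted pixel (u, v) with depth z from view c into distorted view c'."""
    s, t, _ = np.linalg.inv(K_c) @ np.array([u, v, 1.0])  # normalized, still distorted
    su, tu = undistort_c(s, t)                            # undistorted normalized coords
    point_c = z * np.array([su, tu, 1.0])                 # 3D point in the camera c CS
    point_ref = R_c @ point_c + np.asarray(T_c).reshape(3)                       # reference CS
    point_cp = np.linalg.inv(R_cp) @ (point_ref - np.asarray(T_cp).reshape(3))   # camera c' CS
    sp, tp = point_cp[0] / point_cp[2], point_cp[1] / point_cp[2]
    sd, td = distort_cp(sp, tp)                           # re-apply the distortion of c'
    up, vp, _ = K_cp @ np.array([sd, td, 1.0])
    return up, vp
```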
In another embodiment, an undistortion warp map $M_c^{undist}: \mathbb{R}^2 \to \mathbb{R}^2$ is pre-computed, storing for each pixel $(u, v)$ of view #$c$ its undistorted normalized coordinates $W_c^{-1}(s, t)$. Equation (5) then becomes a simple look-up followed by the linear change of coordinate system, which can be reformulated as:

$$\begin{pmatrix} s' \\ t' \\ 1 \end{pmatrix} \equiv R_{c'}^{-1}\cdot\left(z\cdot R_c\cdot\begin{pmatrix} M_c^{undist}[u, v] \\ 1 \end{pmatrix} + T_c - T_{c'}\right), \qquad \begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix} \equiv K_{c'}\cdot\begin{pmatrix} W_{c'}(s', t') \\ 1 \end{pmatrix}$$
In terms of storage, this embodiment requires one polynomial $W_c$, one undistortion map $M_c^{undist}$ and one 2×3 matrix per camera.
It should also be noted that the pre-computation of the undistortion warp map makes it possible to save one half of the polynomial math. Warp maps may present a lower resolution than the input images. In that case, warped positions are interpolated from pre-computed nodes. A subsampling factor can be applied in both the horizontal and vertical directions in order to reduce the amount of information to transmit. This is illustrated further in the corresponding figure.
In another embodiment, instead of defining a subsampling factor for the undistortion warp map (Subsampling_factor_X and Subsampling_factor_Y), the horizontal and vertical sizes of the undistortion map are transmitted directly.
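The sketch below illustrates a subsampled undistortion warp map: values are pre-computed only at grid nodes (every subsampling factor pixels) and warped positions for in-between pixels are obtained by bilinear interpolation, as described above. The (H_nodes, W_nodes, 2) map layout is an assumption of the example.

```python
import numpy as np

def sample_warp_map(warp_map, u, v, subsampling_x, subsampling_y):
    """Bilinearly interpolate a subsampled warp map at pixel position (u, v).

    warp_map has shape (H_nodes, W_nodes, 2); node (j, i) holds the value
    pre-computed for pixel (i * subsampling_x, j * subsampling_y).
    """
    gx, gy = u / subsampling_x, v / subsampling_y
    i0 = min(int(np.floor(gx)), warp_map.shape[1] - 1)
    j0 = min(int(np.floor(gy)), warp_map.shape[0] - 1)
    i1 = min(i0 + 1, warp_map.shape[1] - 1)
    j1 = min(j0 + 1, warp_map.shape[0] - 1)
    fx, fy = min(gx - i0, 1.0), min(gy - j0, 1.0)
    top = (1 - fx) * warp_map[j0, i0] + fx * warp_map[j0, i1]
    bottom = (1 - fx) * warp_map[j1, i0] + fx * warp_map[j1, i1]
    return (1 - fy) * top + fy * bottom
```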
Warp maps can also be used to avoid the remaining polynomial math, by defining a distortion warp map $M_c^{dist}: \mathbb{R}^2 \to \mathbb{R}^2$ that stores, for each undistorted position, the corresponding distorted position in view #$c$, i.e. the result of applying the forward warping $W_c$ at that position.
In this case, equation (4) becomes a composition of look-ups and linear operations only: the current pixel is first mapped through $M_c^{undist}$, the result is transformed into view #$c'$ as above, and the obtained undistorted position is mapped back through $M_{c'}^{dist}$, so that no polynomial has to be evaluated in the decoding loop:

$$\begin{pmatrix} s' \\ t' \\ 1 \end{pmatrix} \equiv R_{c'}^{-1}\cdot\left(z\cdot R_c\cdot\begin{pmatrix} M_c^{undist}[u, v] \\ 1 \end{pmatrix} + T_c - T_{c'}\right), \qquad \begin{pmatrix} u' \\ v' \end{pmatrix} = M_{c'}^{dist}\left[\tilde{u}', \tilde{v}'\right] \quad\text{with}\quad \begin{pmatrix} \tilde{u}' \\ \tilde{v}' \\ 1 \end{pmatrix} \equiv K_{c'}\cdot\begin{pmatrix} s' \\ t' \\ 1 \end{pmatrix}$$
In terms of transmission, this embodiment requires two warp maps $M_c^{dist}$ and $M_c^{undist}$ per camera, in addition to the 3×3 matrix $A_{cc'}$ and the 3×1 vector $B_{cc'}$ per couple of cameras. This is captured in the corresponding figure.
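Putting the pieces together, the sketch below computes a correspondence using only table look-ups and linear operations, with no polynomial evaluation in the loop, following the reformulated equations above. It assumes nearest-node look-ups into the maps, hypothetical helper and parameter names, and per-pair linear parameters taken here as the rotation/translation products of the equations, which is an assumption of the example.

```python
import numpy as np

def match_with_warp_maps(u, v, z, undist_map_c, R_pair, T_pair, K_cp, dist_map_cp):
    """Correspondence in view c' for distorted pixel (u, v) of view c with depth z,
    using only pre-computed warp maps and per-pair linear parameters
    (R_pair = R_c'^-1 . R_c and T_pair = R_c'^-1 . (T_c - T_c'))."""
    su, tu = undist_map_c[int(round(v)), int(round(u))]  # undistorted normalized coords in view c
    p = z * (R_pair @ np.array([su, tu, 1.0])) + np.asarray(T_pair).reshape(3)
    sp, tp = p[0] / p[2], p[1] / p[2]                    # undistorted normalized coords in view c'
    uu, vu, _ = K_cp @ np.array([sp, tp, 1.0])           # undistorted pixel position in view c'
    return dist_map_cp[int(round(vu)), int(round(uu))]   # back to the distorted grid of c'
```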
| Number | Date | Country | Kind |
|---|---|---|---|
| 19306245 | Sep. 2019 | EP | regional |

| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/EP2020/077179 | Sep. 29, 2020 | WO | |

| Publishing Document | Publishing Date | Country | Kind |
|---|---|---|---|
| WO2021/063919 | Apr. 8, 2021 | WO | A |
| Number | Name | Date | Kind |
|---|---|---|---|
| 20220084300 | Izumi | Mar. 2022 | A1 |

| Number | Date | Country |
|---|---|---|
| WO 2010037512 | Apr. 2010 | WO |
Other Publications:

Zhang et al., "A Flexible New Technique for Camera Calibration", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 22, no. 11, Nov. 2000, 5 pages.

Yea et al., "View Synthesis Prediction for Multiview Video Coding", Signal Processing: Image Communication, vol. 24, Elsevier, Mitsubishi Electric Research Laboratories, Cambridge, Massachusetts, USA, Oct. 19, 2008, 14 pages.

Anonymous, "High Efficiency Video Coding", International Telecommunication Union, Telecommunication Standardization Sector of ITU, Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services, Coding of moving video, Recommendation ITU-T H.265, Nov. 2019, 712 pages.

Bouguet, Jean-Yves, "Camera Calibration Toolbox for Matlab", URL: http://www.vision.caltech.edu/bouguetj/calib_doc, last updated Oct. 14, 2015, 4 pages.
| Number | Date | Country |
|---|---|---|
| 20220311986 A1 | Sep. 2022 | US |