The present invention relates to determining a virtual viewpoint for television.
Television is likely the most important visual information system of the past decades, and it has indeed become a commodity of modern human life. With a conventional TV, the viewer's viewpoint for a particular video is determined and fixed by that of the acquisition camera. Recently, a new technology has emerged, free viewpoint television (FTV), which promises to bring a revolution to TV viewing. The premise of FTV is to provide the viewer the freedom of choosing his/her own viewpoint for watching the video, by providing multiple video streams captured by a set of cameras. In addition to home entertainment, the FTV concept can also be used in related domains such as gaming and education. The user-chosen viewpoint(s) need not coincide with those of the acquisition cameras. Accordingly, FTV is not merely a simple view change by switching cameras (as is possible with some DVDs for a couple of preset views). The FTV technology requires a whole spectrum of technologies, ranging from acquisition hardware and coding technology to bandwidth management techniques and standardization for interoperability. One of the particular technologies needed to implement FTV is virtual view synthesis.
The essence of virtual view synthesis is, given a set of images (or video) acquired from different viewpoints, to construct a new image that appears to be acquired from yet another viewpoint. This multiple-image technique is also sometimes referred to as image-based rendering (IBR).
In the FTV application, it is unlikely that camera calibration information will be available (e.g., imagine shooting a movie with multiple cameras which would need to be calibrated each time they are moved). This renders IBR methods requiring full camera calibration generally inapplicable in most cases. Moreover, before virtual view synthesis, the virtual view needs to be specified. Existing IBR techniques use a variety of ways to achieve this. For example, the virtual view specification may be straightforward when the entire setup is fully calibrated. Alternatively, the virtual view specification may be based on the user's manual picking of some points, including the projection of the virtual camera center. None of these approaches is readily applicable to the FTV application with uncalibrated cameras, where an ordinary user needs an intuitive way of specifying a desired (virtual) viewpoint.
What is desirable is a framework for the rendering problem in FTV based on IBR. The approach preferably takes multiple images from uncalibrated cameras as the input. Further, while a virtual view is synthesized mainly from two principal views chosen by a viewer, other views may also be employed to improve the quality. Starting with two optimal (user-chosen) views also contributes to reducing the number of required views. In addition, a technique for specifying the virtual view with uncalibrated cameras is desirable, thus providing a practical solution to view specification in the FTV application without requiring either full camera calibration or complicated user interaction, both of which are impractical for FTV.
The foregoing and other objectives, features, and advantages of the invention will be more readily understood upon consideration of the following detailed description of the invention, taken in conjunction with the accompanying drawings.
The preferred embodiment of the rendering solution should not merely involve mathematical rendering techniques but should also be modeled in such a manner as to reflect a perspective on how the FTV application should configure the entire system, including how cameras should ideally be positioned and how a user should interact with the rendering system.
In most cases multiple synchronized views of the same scene are captured by a set of fixed but otherwise uncalibrated cameras. Moving cameras pose no theoretical problem if the weak calibration is done for every frame. In practice, it may be assumed that the cameras are fixed at least for a video shot, and thus the weak calibration is needed only once per shot. In most cases multiple video streams are available to a viewer. The viewer specifies a virtual viewpoint and requests that the system generate a virtual video corresponding to that viewpoint.
In a typical IBR approach, since no explicit 3D reconstruction and re-projection is performed, the same physical point may in general have a different color in the virtual view than in any of the given views, even without considering occlusion. The differences among views can range from slight to dramatic, depending on the viewing angles, the illumination and reflection models, etc. Therefore, the IBR approach should preferably observe the limitation that the virtual view not be too far from the given views; otherwise unrealistic colors may result.
With this consideration, one may further assume that the cameras used in an FTV program are located strategically so that the most potentially interesting viewpoints lie among the given views. For the convenience of a viewer, this can be simplified to the following: the virtual view is defined as one lying between any two (or more) user-chosen views from the given multiple ones. The choice of the two views can be quite intuitive and transparent in practice: for example, a viewer may feel that view 1 is farther to the left than desired, while view 2 is farther to the right than desired; then the desired virtual view should be somewhere generally between view 1 and view 2.
Thus, the system may address the following two aspects to support the FTV application: (1) given the multiple video streams from uncalibrated cameras and any two (or more) user-chosen views, synthesize a virtual view generally between the two (or more) views; and (2) provide the viewer an intuitive way of specifying the virtual viewpoint in relation to the given available views.
As defined above, one may have a set of video streams with two that are the closest to the user's desired viewpoint. In an uncalibrated system, the notion of closest may not be well defined, and accordingly, the user may select the pair of views. It is desirable to make maximum use of the two specified views although other views (user selected or not) can likewise be used. For identification purposes, one may refer to the two user-chosen views as the basis images. The basis images are dynamically selected based on the user's choice and not specifically based upon specially positioned cameras.
The particular preferred approach to virtual view synthesis consists of the following steps:
1. Pair-wise weak calibration of all views to support potentially any pair that a viewer may choose. The calibration may exclude some views, especially if one view is generally between a pair of other views.
2. Color-segmentation-based correspondence between the two basis views, where other views are taken into consideration, if desired.
3. Forward warping from basis views to the virtual view with a disparity map.
4. For unfilled pixels, perform a backward search on the auxiliary views to find a dominant, disparity-consistent color.
The system may be based upon using n cameras. The basis views may be denoted as basis camera 1 and basis camera 2, and the remaining views as auxiliary cameras 3 to n. Fundamental matrices between the basis and the auxiliary cameras, denoted F_13, F_23, . . . , F_1n, F_2n, are calculated with a feature detector and the random sample consensus (RANSAC) algorithm. The fundamental matrix between the basis cameras is F_12. Computation of the fundamental matrices need only be done once unless the cameras are moved. The fundamental matrices between the basis views and the virtual view are denoted F_10 and F_20, respectively.
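By way of illustration, this weak-calibration step may be realized with off-the-shelf tools. The following sketch uses OpenCV with ORB features and RANSAC; the detector choice and the parameter values are illustrative assumptions rather than requirements of the technique.

    import cv2
    import numpy as np

    def estimate_fundamental(img1, img2):
        """Estimate the fundamental matrix between two uncalibrated views
        using feature matching and RANSAC (illustrative parameter choices)."""
        orb = cv2.ORB_create(nfeatures=2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        # Brute-force Hamming matching with cross-check for reliability.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
        pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
        # RANSAC rejects outlier matches while fitting F.
        F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
        return F, inliers

Consistent with the above, such a routine would be run once per camera pair per shot, since the matrices remain valid while the cameras are fixed.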
With the fundamental matrices determined, for any point x in camera 1, its corresponding point x′ in camera 2 is constrained via the fundamental matrix by x′^T F_12 x = 0, which can be used to facilitate the search for the disparity d. A third corresponding point in an auxiliary camera k, denoted x_k, is determined from x_k^T F_1k x = 0 and x_k^T F_2k x′ = 0. Once the correspondence between x and x′ is determined, the virtual view pixel x″ can be determined by forward mapping, where x″ satisfies both x″^T F_10 x = 0 and x″^T F_20 x′ = 0. These relationships are illustrated in the accompanying drawings.
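In homogeneous coordinates, this forward mapping amounts to intersecting two epipolar lines: F_10 x is the epipolar line of x in the virtual view, F_20 x′ is that of x′, and their cross product gives the intersection point x″. A minimal numpy sketch of this transfer, assuming the fundamental matrices are already available, is:

    import numpy as np

    def transfer_point(x1, x2, F10, F20):
        """Map corresponding points x1 (basis view 1) and x2 (basis view 2)
        to the virtual view via the intersection of their epipolar lines."""
        x1h = np.array([x1[0], x1[1], 1.0])   # homogeneous coordinates
        x2h = np.array([x2[0], x2[1], 1.0])
        l1 = F10 @ x1h            # epipolar line of x1 in the virtual view
        l2 = F20 @ x2h            # epipolar line of x2 in the virtual view
        xv = np.cross(l1, l2)     # two homogeneous lines meet at their cross product
        return xv[:2] / xv[2]     # back to inhomogeneous pixel coordinates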
Even with the epipolar constraint described above, it is still necessary to search along an epipolar line for the disparity of a given point x. To establish the correspondence between x and x′, one may first use graph-cut-based segmentation to segment each of the basis views. All pixels within each segment are assumed to have the same disparity, i.e., to lie on the same fronto-parallel plane. Over-segmentation is favored for more accurate modeling, and each segment is limited to be no wider and no higher than 15 pixels, which is a reasonable value for a traditional NTSC TV frame with a pixel resolution of 720×480.
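By way of illustration, the over-segmentation may be approximated with Felzenszwalb's graph-based segmentation from scikit-image, used here as a stand-in for the graph-cut-based segmentation; the parameter values and the grid-based subdivision enforcing the 15-pixel bound are illustrative assumptions.

    import numpy as np
    from skimage.segmentation import felzenszwalb

    def oversegment(image, max_size=15):
        """Over-segment an image (a small `scale` favors over-segmentation),
        then subdivide segments on a grid so that no segment exceeds
        max_size pixels in width or height (illustrative approach)."""
        labels = felzenszwalb(image, scale=50, sigma=0.8, min_size=20)
        h, w = labels.shape
        # Combine each label with its max_size x max_size grid cell,
        # guaranteeing the bound on segment extent.
        gy, gx = np.mgrid[0:h, 0:w]
        grid = (gy // max_size) * (w // max_size + 1) + (gx // max_size)
        combined = labels.astype(np.int64) * (grid.max() + 1) + grid
        # Relabel to consecutive integers.
        _, relabeled = np.unique(combined, return_inverse=True)
        return relabeled.reshape(h, w)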
Each segment may be warped to the other basis image under a candidate disparity by the epipolar constraint described above (also see the accompanying drawings), yielding a matching score for that disparity.
In addition to using the matching score from the other basis image, one may incorporate all the auxiliary images by computing the final matching score for a segment S_j in basis image i (denoted S_ij) with disparity d as
m_ij(d) = max_k { m_ijk(d) }  (1)
where m_ijk(d) is the matching score of segment S_ij in any other basis or auxiliary camera k. Note that d is defined for the basis views, and searching in the auxiliary views is equivalent to checking which d gives rise to the most color consistency among the views, whose relation is shown in the accompanying drawings.
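A minimal sketch of equation (1) follows. The particular score function used here (negative mean absolute color difference of a warped segment, so that larger is better) is an illustrative assumption; the technique does not mandate a specific score.

    import numpy as np

    def segment_score(seg_pixels, warped_pixels):
        """Illustrative matching score: negative mean absolute color
        difference between a segment and its warp into another view
        (higher means better agreement)."""
        return -np.mean(np.abs(seg_pixels.astype(float) -
                               warped_pixels.astype(float)))

    def final_score(per_view_scores):
        """Equation (1): m_ij(d) = max over other views k of m_ijk(d).
        per_view_scores: dict mapping disparity d -> list of scores m_ijk(d)."""
        return {d: max(scores) for d, scores in per_view_scores.items()}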
Furthermore, instead of deciding on a single d based on the above matching score, one may use that score in the following iterative optimization procedure. The basic technique is to update the matching score of each color segment based on its neighboring segments of similar color in order to enforce disparity smoothness:
where φ is the set of neighboring segments with similar color (defined by Euclidean color distance under a pre-determined threshold), β is an inhibition constant (set to 2 for computational simplicity) controlling the convergence speed, and k is the iteration index. The system may use the following stopping criteria: at any iteration k, if for any d the score of S_ij exceeds the threshold, the updating process for this segment stops at the next iteration; the entire procedure terminates when it converges (i.e., no segments need to be updated). The technique typically converges within 10 iterations, and thus the number of iterations may be fixed at 10.
The above procedure is performed for both basis views, and the disparity map is further verified by a left-right consistency check; only those segments with consistent results are used for synthesizing the virtual view (thus some segments may not be used, resulting in an incomplete disparity map).
Using the verified disparity map and the two basis views, an initial estimate of the virtual view can be synthesized by forward warping. For a pixel x in basis view 1 and x′ in basis view 2, their corresponding pixel on the virtual view will be x″ whose color is computed as
RGB(x″) = (1 − α) RGB(x) + α RGB(x′)  (3)
with α being a coefficient controlling the contribution of each basis view (which may be set to the same α defined elsewhere for view specification). Forward warping preserves texture details well, and it can easily be implemented in hardware, making real-time rendering easier.
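A minimal sketch of this forward-warping step is given below under simplifying assumptions (purely horizontal disparity between rectified basis views); the general setting instead uses the epipolar transfer described earlier. Pixels left unfilled are marked for the hole-filling step that follows.

    import numpy as np

    def forward_warp(view1, view2, disparity, alpha):
        """Warp two basis views into a virtual view and blend per equation (3).
        Simplification: horizontal disparity only. Unfilled pixels stay -1
        ("black holes") for the later hole-filling step."""
        h, w, _ = view1.shape
        virtual = -np.ones_like(view1, dtype=float)
        for y in range(h):
            for x in range(w):
                d = disparity[y, x]
                if np.isnan(d):          # segment failed the consistency check
                    continue
                x2 = int(round(x - d))               # match in basis view 2
                xv = int(round(x - alpha * d))       # position in virtual view
                if 0 <= x2 < w and 0 <= xv < w:
                    # Equation (3): blend the two basis views with weight alpha.
                    virtual[y, xv] = ((1 - alpha) * view1[y, x]
                                      + alpha * view2[y, x2])
        return virtual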
In the initial virtual view given by forward warping, it is not uncommon to see many uncovered pixels, which may be denoted "black holes". These black holes are due to the incomplete disparity map, for example because of occlusions. For each black-hole pixel, one may check its neighbors for a pixel that has been assigned a color value from the initial synthesis. The disparity of that pixel is then used for a backward search on the images. Unlike other disparity or depth searching algorithms that do an exhaustive search over the entire disparity space, the preferred system searches within a limited range around the disparity of the "valid" neighbors (those with assigned color). The search objective function is defined as:
where d_n is the disparity of a valid neighbor pixel and p_dn is its color.
Even after the search and propagation processes, there may still be "black holes" left where points cannot be seen in either of the basis cameras. To address this, the same search and propagation method as described above may be used, but drawing colors from the auxiliary views.
It should be noted that there is no guarantee that all pixels will be covered by the above procedure. For example, a few isolated noisy pixels may remain, or the scene may not be covered by all the cameras. Linear interpolation can handle the former situation, while the latter can be alleviated by constraining the free viewpoint range, which is already part of the preferred assumption (i.e., the virtual view is always between two views, and the cameras are strategically positioned).
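By way of illustration, the hole-filling step may be sketched as follows, continuing the earlier forward-warping sketch. Since the precise objective function is given above only in part, the color-consistency criterion used here (choosing, within a small range around a valid neighbor's disparity, the candidate whose two basis-view colors agree best) is an illustrative reconstruction.

    import numpy as np

    def fill_holes(virtual, view1, view2, disparity, alpha, search=3):
        """Fill 'black hole' pixels (marked -1) by limited backward search:
        try disparities near a valid neighbor's disparity and keep the most
        color-consistent candidate (illustrative objective)."""
        h, w, _ = virtual.shape
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if virtual[y, x, 0] >= 0:
                    continue                      # already filled
                # Find a valid neighbor with an assigned disparity.
                neighbors = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                dn = next((disparity[j, i] for j, i in neighbors
                           if not np.isnan(disparity[j, i])), None)
                if dn is None:
                    continue
                best, best_cost = None, np.inf
                for d in np.arange(dn - search, dn + search + 1):
                    x1 = int(round(x + alpha * d))        # source in view 1
                    x2 = int(round(x - (1 - alpha) * d))  # source in view 2
                    if not (0 <= x1 < w and 0 <= x2 < w):
                        continue
                    cost = np.sum(np.abs(view1[y, x1].astype(float)
                                         - view2[y, x2].astype(float)))
                    if cost < best_cost:    # most color-consistent disparity
                        best, best_cost = (x1, x2), cost
                if best is not None:
                    x1, x2 = best
                    virtual[y, x] = ((1 - alpha) * view1[y, x1]
                                     + alpha * view2[y, x2])
        return virtual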
A complete virtual view obtained by following the entire preferred process is shown in the accompanying drawings.
A viewpoint can be specified by a translation vector and a rotation matrix with respect to any given view, determining its position and direction. But it is unrealistic to ask a TV viewer to do this. A practical method is to start with a real view and let the viewer move to a desired viewpoint in reference to that view. This relative viewpoint moving, in an interactive manner, is much more convenient for the user. Thus the system should permit interpolating continuous virtual views from one view to another. The interpolation can be controlled by a single parameter α: when α = 0, the basis view 1 is the current view; as α increases to 1, the viewpoint changes gradually to the other basis view 2. A mockup user interface is illustrated in the accompanying drawings.
The calibrated case is considered first, as it is instructive, although the ultimate goal is to deal with the uncalibrated case. The preferred interface is similar to that shown in the accompanying drawings. The projection matrices of the two basis views may be written as
P_1 = K_1 R_1 [I | −C_1],  P_2 = K_2 R_2 [I | −C_2]  (5)
For this case, one is typically concerned only with the relative relationship between the two views. By applying the following homography transform to each of the projection matrices,
P_i′ = P_i H  (6)
where

    H = [ R_1^T  C_1 ]
        [ 0^T     1  ]  (7)
one converts the cameras to the canonical form

    P_1′ = K_1 [I | 0],  P_2′ = K_2 R_2′ [I | −C_2′]

i.e., the first camera's center is the origin, and camera 2 is related to camera 1 by the rotation R_2′ = R_2 R_1^T and the translation C_2′ = R_1 (C_2 − C_1).
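A small numpy sketch of this normalization, using the form of H in equation (7), is:

    import numpy as np

    def to_canonical(K1, R1, C1, K2, R2, C2):
        """Apply H of equation (7) so camera 1 becomes K1[I|0] and camera 2
        becomes K2 R2' [I|-C2'] with R2' = R2 R1^T, C2' = R1 (C2 - C1)."""
        P1 = K1 @ R1 @ np.hstack([np.eye(3), -C1.reshape(3, 1)])
        P2 = K2 @ R2 @ np.hstack([np.eye(3), -C2.reshape(3, 1)])
        H = np.eye(4)
        H[:3, :3] = R1.T          # R1^{-1} equals R1^T for a rotation matrix
        H[:3, 3] = C1
        return P1 @ H, P2 @ H     # P1 H = K1 [I | 0]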
One can specify the virtual view based on the canonical form. Suppose the camera matrix for the virtual view is:
P_0′ = K_0′ R_0′ [I | −C_0′]  (8)
One can use α to parameterize the path between basis views 1 and 2. Equation (8) then becomes
P_0′(α) = K_0′(α) R_0′(α) [I | −C_0′(α)]  (9)
For the camera intrinsic matrix, the gradual change from view 1 to view 2 may be viewed as camera 1 changing its focal length and principal point gradually to those of camera 2 (if the two cameras are identical, then this has no effect, as desired). Thus, one may interpolate the intrinsic matrix and obtain K_0′(α) as:
K_0′(α) = (1 − α) K_1 + α K_2  (10)
For R_0′(α), suppose
R_i′ = [r_i, s_i, t_i]^T  (11)
where r_i, s_i and t_i represent the x-axis, y-axis and z-axis, respectively. One may construct R_0′(α) = [r_0(α), s_0(α), t_0(α)]^T as follows:
t_0(α) = ((1 − α) t_1 + α t_2) / ‖(1 − α) t_1 + α t_2‖
s′ = (1 − α) s_1 + α s_2
r_0(α) = (s′ × t_0(α)) / ‖s′ × t_0(α)‖
s_0(α) = t_0(α) × r_0(α)  (12)
The first step in equation (12) constructs the new z-axis as the interpolation of the two original z-axes. One then interpolates a temporary y-axis s′. Note that s′ may not be perpendicular to the new z-axis, but with it one can construct the new x-axis r_0(α) from the new z-axis and the temporary y-axis. Finally, one constructs the new y-axis as the cross product of the new z-axis and the new x-axis.
Finally, one can construct the new camera center using linear interpolation:
C_0′(α) = (1 − α) C_1′ + α C_2′  (13)
From equation (13), the new camera center lies on the line connecting the two camera centers, resulting in degeneracy of the epipolar constraint, and thus it should not be used directly for virtual view synthesis (see the accompanying drawings). Instead, the interpolated center [x_v, y_v, z_v]^T may be perturbed slightly off the baseline, e.g.,

    C_0′(α) = [x_v, y_v + γ, z_v]^T

where γ is a small offset.
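The interpolation of equations (10), (12) and (13), including the small offset γ that moves the center off the baseline, may be sketched as follows; the value of γ is an illustrative assumption.

    import numpy as np

    def interpolate_camera(K1, K2, R1, R2, C1, C2, alpha, gamma=1e-2):
        """Interpolate intrinsics, rotation, and center between two basis
        views per equations (10), (12), (13); gamma offsets the center off
        the baseline to avoid epipolar degeneracy (illustrative value)."""
        K = (1 - alpha) * K1 + alpha * K2                     # equation (10)
        r1, s1, t1 = R1                                       # rows: x, y, z axes
        r2, s2, t2 = R2
        t = (1 - alpha) * t1 + alpha * t2                     # new z-axis
        t = t / np.linalg.norm(t)
        s_tmp = (1 - alpha) * s1 + alpha * s2                 # temporary y-axis
        r = np.cross(s_tmp, t)                                # new x-axis
        r = r / np.linalg.norm(r)
        s = np.cross(t, r)                                    # new y-axis
        R = np.vstack([r, s, t])                              # equation (12)
        C = (1 - alpha) * C1 + alpha * C2                     # equation (13)
        C = C + np.array([0.0, gamma, 0.0])                   # off-baseline offset
        return K, R, C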
This entire process is illustrated in the accompanying drawings.
Now the uncalibrated case is considered, i.e., how one can achieve similar results from only the fundamental matrices. Given a fundamental matrix F_12, the corresponding canonical camera matrices are:
P_1 = [I | 0],  P_2 = [[e′]_× F_12 + e′ v^T | λ e′]  (14)
where e′ is the epipole in image 2 with F_12^T e′ = 0, v can be any 3-vector, and λ is a non-zero scalar. Note that the reconstructed P_2 is determined only up to a projective transformation. Clearly, a randomly chosen v cannot be expected to result in a reasonable virtual view if the virtual view is based on a P_2 defined by such a v. It is therefore desirable to obtain the P's from an approximately estimated essential matrix. First, the essential matrix is estimated by a simple approximation scheme. The essential matrix has the form:
E_12 = K_2^T F_12 K_1  (15)
For unknown camera matrices K, although auto-calibration can recover the focal length at the expense of tedious computation, it is not a practical option for the FTV application (unless the information is obtained at the acquisition stage). As an approximation, one may set the parameters of the camera matrix based on the image width w and height h:
f = (w + h) / 2
p_x = w / 2,  p_y = h / 2  (16)
so that K becomes:

    K = [ f  0  p_x ]
        [ 0  f  p_y ]
        [ 0  0   1  ]  (17)

Further, one assumes that both cameras have a similar configuration and uses the same K to get the essential matrix E_12. An essential matrix can be decomposed into a skew-symmetric matrix and a rotation matrix as:
E_12 = [t]_× R  (18)
where R and t can be viewed as the rotation matrix and translation vector of camera 2 relative to camera 1. Now one has
P_1 = K [I | 0],  P_2 = K [R | t]  (19)
and thus the corresponding fundamental matrices can be recovered. This approach has proved effective with multiple sets of data, even though only the estimate of equation (16) is used without knowing the actual camera internal matrices.
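By way of illustration, this approximation and decomposition may be sketched with OpenCV as follows. The decomposition yields multiple candidate rotations; selecting among the four (R, t) candidates, e.g., by a cheirality check, is omitted here.

    import cv2
    import numpy as np

    def approximate_cameras(F12, w, h):
        """Recover approximate canonical cameras from a fundamental matrix
        using the image-size-based intrinsics of equations (16)-(17)."""
        f = (w + h) / 2.0                       # equation (16)
        K = np.array([[f, 0, w / 2.0],
                      [0, f, h / 2.0],
                      [0, 0, 1.0]])             # equation (17)
        E12 = K.T @ F12 @ K                     # equation (15), same K both sides
        # Decompose E = [t]x R per equation (18); OpenCV returns two
        # candidate rotations and a translation direction.
        Ra, Rb, t = cv2.decomposeEssentialMat(E12)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # equation (19)
        P2 = K @ np.hstack([Ra, t])             # one of the four candidates
        return P1, P2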
Although it may seem that one is going back to the calibrated case by estimating the essential matrix, the scheme is quite different from true full calibration: one cannot expect to use the approximation of equation (16) for estimating the true rotation and translation needed for specifying the virtual view as in the calibrated case. It is, however, reasonable to use the approximation in the interpolation scheme illustrated by equations (12) and (13).
A simulated free viewpoint moving path obtained with this approach is shown in the accompanying drawings.
The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding equivalents of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.
This application claims the benefit of U.S. Provisional Application No. 60/737,076, filed Nov. 15, 2005.