This application is a National Phase application under 35 U.S.C. § 371 of PCT Application No. PCT/CN2016/070712, filed Jan. 12, 2016, the entire contents of which are hereby incorporated herein by reference.
The present invention relates to panoramic stereo video systems for capturing, processing, compressing, and displaying 3D panoramic stereo videos, and more particularly, to methods and apparatus for calibrating camera arrays in the panoramic stereo video systems.
Multi-camera arrays have been widely used in stereo vision and panoramic video systems. Properly calibrating the extrinsic parameters of these cameras is the prerequisite for running any complex vision or video algorithm, as these algorithms will often fail if the spatial relationship between the cameras is not accurately defined. State-of-the-art camera calibration methods perform well when there is a large overlapping region in the cameras' fields of view (FOV). However, in cases where the overlapping region is small or absent, the conventional methods tend to fail due to the low resolution of the overlapping region, or the failure to find a common pattern in the overlapping region.
Therefore, there is a need for robust calibration methods for cameras having small or even no overlapping field of view.
To address issues in the prior art, embodiments of the present invention provide a method and apparatus for calibrating a camera array in a panoramic stereo video system where the cameras have small or even no common field of view.
In accordance with an embodiment of the present invention, an image acquisition apparatus for capturing images to generate a panorama is provided, the image acquisition apparatus comprising: a camera array comprising a plurality of cameras configured to capture a plurality of images to generate a panorama, wherein the relative positions among the plurality of cameras are constant; and a camera calibration unit for calibrating the plurality of cameras; wherein when the camera array moves from a first position to a second position, the plurality of cameras are configured to capture images at the first position and the second position, and the camera calibration unit is configured to determine a homogeneous transformation matrix between a first camera and a second camera based on the images captured by the first camera and the second camera at the first position and the second position.
Preferably, the camera calibration unit is configured to identify a feature in the images taken by the first camera at the first position and the second position, and estimate a rotation of the first camera from the first position to the second position based on the feature.
Preferably, the camera array is configured to move randomly from the first position to the second position.
Preferably, an axis of the camera array at the first position is not parallel to the axis of the camera array at the second position.
Preferably, the first camera and the second camera have no overlapping field of view.
In accordance with another embodiment of the present invention, a method of calibrating a camera array comprising a plurality of cameras configured to capture a plurality of images to generate a panorama is provided, wherein the relative positions among the plurality of cameras are constant, the method comprising: moving the camera array from a first position to a second position; measuring a homogeneous transformation matrix of a reference point on the camera array between the first position and the second position; capturing images at the first position and the second position by a first camera and a second camera on the camera array; and determining a homogeneous transformation matrix between the first camera and the second camera based on the images captured by the first camera and the second camera at the first position and the second position.
Preferably, the method further comprises identifying a feature in the images taken by the first camera at the first position and the second position, and estimating a rotation of the first camera from the first position to the second position based on the feature.
Preferably, the method further comprises moving the camera array from a first position to a second position randomly.
Preferably, an axis of the camera array at the first position is not parallel to the axis of the camera array at the second position.
Preferably, the first camera and the second camera have no overlapping field of view.
In accordance with embodiments of the present invention, the cameras in the camera array can be calibrated efficiently and accurately by simply taking pictures simultaneously at different positions, wherein the cameras are not required to have an overlapping field of view.
To better illustrate the technical features of the embodiments of the present invention, various embodiments of the present invention will be briefly described in conjunction with the accompanying drawings.
To better illustrate the purpose, technical features, and advantages of the embodiments of the present invention, various embodiments of the present invention will be further described in conjunction with the accompanying drawings. It is obvious that the drawings are but exemplary embodiments of the present invention, and that a person of ordinary skill in the art may derive additional drawings without deviating from the principles of the present invention.
In accordance with embodiments of the present invention, a panoramic stereo video system including multi-camera video capturing, data processing, stereo video coding, transmission, and 3D displaying is provided. The panoramic stereo video system employs real-time multi-view video capturing, image rectification and pre-processing, and region of interest (ROI) based stereo video compression. After the transmission and decoding process, a head-mounted display (HMD) headset is used to display the left and right views.
1. System Overview
2. Data Capturing Unit
As shown in
2.1. Camera Array
As shown in
It should be noted that while the camera array is depicted as a regular octagon in
2.2. Camera Calibration
To stitch images captured by the cameras together and generate 3D effects, it is necessary to obtain both the internal and extrinsic parameters of the cameras. The extrinsic parameters include the rotation and translation among the cameras, so that the images captured by different cameras can be rectified and aligned in the horizontal direction. Also, there may be distortions in the images captured by the cameras, and to obtain images free of the distortions, it is necessary to know the camera distortion parameters. These parameters are obtained during the camera calibration process.
2.2.1. Calibration of Internal and Distortion Parameters
The internal and distortion parameters of the cameras can be obtained using a variety of methods, such as the calibration method proposed by Zhengyou Zhang, and tools like MATLAB can be employed to obtain these parameters.
2.2.2. Calibration of Extrinsic Parameters
After obtaining the internal parameters of the cameras, a method based on structure from motion is employed to obtain the rotations and translations among the cameras. This method has the following advantages.
Efficiency: there is no need to calibrate the cameras pair by pair. Rather, all cameras capture videos of a scene simultaneously during calibration, and the extrinsic parameters for all the cameras can be obtained at the same time.
Accuracy: in pattern-based calibration methods, the pattern needs to be captured by two adjacent cameras, which often results in lower resolution for the pattern and reduces the accuracy of the calibration. In our structure from motion based method, the movement of each camera will be estimated independently to obtain the parameters, and there is no need for adjacent cameras to have an overlapping field of view. Thus, we can place the cameras closer to the scene to be captured to achieve better accuracy.
Extensibility: since our method does not require an overlapping field of view for adjacent cameras, it is even applicable to cameras placed in back-to-back positions.
2.3. Data Capturing Method
Data from the 16 cameras are captured and stored using software, and then provided to the data processing unit. Image data for each frame captured by the cameras can be collected using software such as FFmpeg and DirectShow (or DShow). The frames captured by each camera are compressed and then stored as video files. Since there are a number of cameras, the frames captured by the cameras need to be synchronized, such as using time stamps. For example, each frame captured by the cameras can be affixed with a time stamp, and placed in a queue so that it is synchronized with other frames with the same time stamp. The synchronized frames are coded into video streams, and stored locally or transmitted through the network together.
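The time-stamp-based synchronization described above can be sketched as follows. This is a minimal illustration, not the actual capture software; the function name `synchronize` and the tuple layout of the frame records are assumptions for the example. A frame set is emitted only once every camera has contributed a frame for a given time stamp, mirroring the queue-based matching in the text.

```python
from collections import defaultdict

def synchronize(frames, num_cameras):
    """Group frames carrying the same time stamp across all cameras.

    `frames` is an iterable of (camera_id, timestamp, data) records, as
    might be produced by per-camera capture queues. A synchronized frame
    set is emitted only when all `num_cameras` cameras have contributed
    a frame for that time stamp.
    """
    buckets = defaultdict(dict)   # timestamp -> {camera_id: frame data}
    synced = []
    for cam_id, ts, data in frames:
        buckets[ts][cam_id] = data
        if len(buckets[ts]) == num_cameras:
            # every camera has delivered this time stamp; emit the set
            synced.append((ts, buckets.pop(ts)))
    return synced
```

In a real system the buckets would also be bounded, so that a camera that drops a frame does not stall the pipeline indefinitely.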
3. Data Processing Unit
As shown in
3.1 Distortion Correction and Preprocessing
The frames captured by the cameras are warped in accordance with the distortion parameters obtained during the calibration process to obtain frames free of distortions. In order to enhance the accuracy of image alignment and stitching, the frames are filtered first to reduce noises.
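As an illustration of the warping step, the sketch below inverts a two-coefficient radial distortion model on normalized image coordinates by fixed-point iteration. The two-coefficient model and the helper name `undistort_points` are assumptions for the example; the actual distortion model and its parameters come from the calibration process described above.

```python
import numpy as np

def undistort_points(pts, kc, iters=10):
    """Invert the radial distortion model x_d = x * (1 + k1*r^2 + k2*r^4)
    by fixed-point iteration.

    `pts` are distorted points in normalized image coordinates, shape (N, 2);
    `kc` holds the two radial distortion coefficients (k1, k2).
    """
    k1, k2 = kc
    x = pts.copy()
    for _ in range(iters):
        # re-evaluate the distortion factor at the current estimate
        r2 = (x ** 2).sum(axis=1)
        factor = 1 + k1 * r2 + k2 * r2 ** 2
        x = pts / factor[:, None]
    return x
```

For moderate distortion the iteration converges in a handful of steps; production code would typically build a full inverse warp map once and resample every frame through it.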
3.2. Image Alignment
Image alignment is performed on each pair of cameras located on each side of the octagon, and the images captured by each pair of cameras are aligned in the horizontal direction. In accordance with an embodiment of the present invention, each frame captured by the pairs of cameras is warped to a plane that is parallel to the optical axes of the pair of cameras.
4. Panoramic Video Stitching
The camera array has 8 pairs of cameras. The frames captured by all the left cameras are projected onto a cylinder, and then stitched into a panoramic image. A panoramic video can be obtained by repeating the above steps for all the frames captured by the left cameras. Another panoramic video can be obtained by processing the frames captured by the right cameras in the same fashion. These two panoramic videos form a panoramic stereo video.
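The projection onto a cylinder can be sketched as an inverse warp map, here under the assumption of a pinhole camera with focal length f in pixels; this is a common implementation pattern, not necessarily the exact projection used in the system.

```python
import numpy as np

def cylindrical_warp_maps(width, height, f):
    """Backward maps from cylinder pixel (x, y) to source-frame pixel (x, y),
    for use with an inverse-warping resampler. f is the focal length in px."""
    cx, cy = width / 2.0, height / 2.0
    xc, yc = np.meshgrid(np.arange(width) - cx, np.arange(height) - cy)
    theta = xc / f                        # angle around the cylinder axis
    h = yc / f                            # height on the unit cylinder
    # unproject from the unit cylinder back into the pinhole camera
    x_src = f * np.tan(theta) + cx
    y_src = h * f / np.cos(theta) + cy
    return x_src.astype(np.float32), y_src.astype(np.float32)
```

Each camera's warped frame is then placed on the cylinder at an angular offset given by its calibrated rotation, and overlapping seams are blended to form the panorama.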
5. Data Display Unit
As shown in
Optionally, apart from these cameras, the camera array is connected to an Inertial Measurement Unit (IMU) which can be used to get the motion of the camera array. The IMU can be a motion sensor. The IMU can be placed at a fixed position on the camera array, such as at the center of the camera array, or at the same position as one of the cameras.
Alternatively, the motion of the camera array from the starting position to ending position can be measured by other means, and transmitted to the calibration unit separately.
It should be noted that while
p_i,k: The position of camera i after the k-th movement.
X_i,j: The homogeneous transformation matrix between camera i and camera j.
A_i,k: The homogeneous transformation matrix of camera i from position k to position k+1, where R_A_i,k and t_A_i,k denote its rotation and translation components, respectively.
X_i: The homogeneous transformation matrix between camera i and the IMU.
B_k: The homogeneous transformation matrix of the IMU from position k to position k+1, which satisfies A_i,k X_i = X_i B_k.
In the calculations below, all the cameras are modeled by the pinhole camera model with distortions, and the intrinsic matrices K_i and the distortion parameters kc_i are assumed to be known.
In accordance with an embodiment of the present invention, the calibration method includes the following steps:
1. Move the camera array n times, and make all the cameras take a picture simultaneously at each position. Record the IMU data and compute B_k. For better performance, rotations about parallel rotation axes should be avoided.
2. Estimate R_A_i,k, the rotation of camera i from position k to position k+1, using a structure from motion method on the images captured by camera i.
3. Estimate R_X_i, the rotation component of X_i, by minimizing a defined distance.
4. Find t_X_i, the translation component of X_i, by minimizing the re-projection error, such that
∠(u_i,k,l, Y_i,k,l) ≤ γ
∠(v_i,k,l, R_X_i …) ≤ γ
for some error limit γ for all the movements and cameras, where R_X_i denotes the rotation component of X_i.
5. Get the extrinsic parameters between camera i and camera j through:
X_i,j = X_i⁻¹ X_j
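The rotation and translation steps above follow the classical hand-eye calibration equation A_i,k X_i = X_i B_k. The sketch below is a minimal numpy illustration of solving that equation for one camera (rotation via axis alignment, translation via linear least squares); it is not the patent's exact estimator, which additionally enforces the feature-angle constraints of step 4.

```python
import numpy as np

def rot_axis_angle(axis, angle):
    # Rodrigues' formula: rotation matrix from an axis and an angle
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def rotation_log(R):
    # rotation axis scaled by angle (valid for 0 < angle < pi)
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle * v / (2 * np.sin(angle))

def hand_eye(As, Bs):
    """Solve A_k X = X B_k for the 4x4 transform X, given >= 2 motions
    with non-parallel rotation axes (cf. step 1's note)."""
    # Rotation: axis(R_A) = R_X axis(R_B); align the axis sets by SVD (Kabsch)
    a = np.stack([rotation_log(A[:3, :3]) for A in As])
    b = np.stack([rotation_log(B[:3, :3]) for B in Bs])
    U, _, Vt = np.linalg.svd(a.T @ b)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # keep det(R_X) = +1
    R_X = U @ D @ Vt
    # Translation: (R_A - I) t_X = R_X t_B - t_A, stacked least squares
    M = np.concatenate([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_X = np.linalg.lstsq(M, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, t_X
    return X
```

With X_i recovered for every camera against the common IMU reference, step 5's pairwise extrinsics follow directly as X_i,j = X_i⁻¹ X_j.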
Step 701: moving the camera array from a first position to a second position.
Step 702: determining the movement of the camera array between the first position and the second position. Here, the movement of the camera array can be specified using a homogeneous transformation matrix of a reference point on the camera array between the first position and the second position.
Step 703: capturing images at the first position and the second position by a first camera and a second camera. Here, we simply need to take pictures at these two positions using these two cameras.
Step 704: estimating the rotation of the first camera from the first position to the second position. A structure from motion method can be used to first identify a feature in the images taken by the first camera at the first position and the second position, and then estimate the rotation of the first camera from the first position to the second position based on the position of the identified feature in the pictures.
Step 705: determining a rotation between the first camera and the reference point by minimizing a defined distance.
Step 706: determining a transformation between the first camera and the reference point by minimizing a re-projection error.
Step 707: determining a homogeneous transformation matrix between the first camera and the second camera based on the images captured by the first camera and the second camera at the first position and the second position. Here, the relative rotation and translation between the first camera and the second camera can be calculated from the parameters determined in the previous steps.
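The per-camera rotation estimate of step 704 can be illustrated with the classical 8-point algorithm on normalized image coordinates. This is a standard structure from motion technique assumed for the example, not an estimator mandated by the steps above; the cheirality (points-in-front) test that selects between the two returned rotation candidates is omitted for brevity.

```python
import numpy as np

def rotation_candidates(x1, x2):
    """Estimate the relative camera rotation from >= 8 noise-free point
    correspondences x1, x2 (each (N, 2), normalized camera coordinates).
    Returns the two rotation candidates from the essential-matrix
    decomposition; a full pipeline picks the valid one by cheirality."""
    # epipolar constraint x2h^T E x1h = 0, one linear row per correspondence
    x1h = np.column_stack([x1, np.ones(len(x1))])
    x2h = np.column_stack([x2, np.ones(len(x2))])
    A = np.einsum('ni,nj->nij', x2h, x1h).reshape(len(x1), 9)
    E = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector, row-major E
    # decompose E; fix determinant signs so both factors are rotations
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    return U @ W @ Vt, U @ W.T @ Vt
```

The translation recovered from the essential matrix is only known up to scale, which is why steps 705 and 706 resolve the metric translation against the IMU-measured motion of the reference point.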
The calibration method has the following advantages:
1. Efficiency. There is no need to calibrate the cameras pair by pair, or to hold up the chessboard for calibration. We just need to take photos of a scene simultaneously from different positions.
2. Accuracy. When the overlapping field of view of two cameras is distant from the cameras, as is often the case in the setting of pattern-based calibration methods, the pattern's resolution becomes very low, as the pattern needs to be in the common area of the field of view of the two cameras. This will adversely affect the accuracy of the calibration. However, our method estimates the motion of different cameras independently, and does not require a common area in the field of view of the cameras. Thus, the cameras can be placed closer to the scene, making the calibration more accurate.
3. Extensibility. Since the structure from motion method does not require the cameras to have an overlapping field of view, our method is applicable even when the cameras are placed back to back. This also reduces the accumulated error, since we can calculate the transformation from any camera coordinate system to the reference camera coordinate system.
In sum, in accordance with embodiments of the present invention, the cameras in the camera array can be calibrated efficiently and accurately by simply taking pictures simultaneously at different positions, wherein the cameras are not required to have an overlapping field of view.
The various modules, units, and components described above can be implemented as an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code; or other suitable hardware components that provide the described functionality. The processor can be a microprocessor provided by Intel, or a mainframe computer provided by IBM.
Note that one or more of the functions described above can be performed by software or firmware stored in memory and executed by a processor, or stored in program storage and executed by a processor. The software or firmware can also be stored and/or transported within any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a “computer-readable medium” can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, a portable computer diskette (magnetic), a random access memory (RAM) (magnetic), a read-only memory (ROM) (magnetic), an erasable programmable read-only memory (EPROM) (magnetic), a portable optical disc such as a CD, CD-R, CD-RW, DVD, DVD-R, or DVD-RW, or flash memory such as compact flash cards, Secure Digital (SD) cards, USB memory devices, memory sticks, and the like.
The various embodiments of the present invention are merely preferred embodiments, and are not intended to limit the scope of the present invention, which includes any modification, equivalent, or improvement that does not depart from the spirit and principles of the present invention.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN2016/070712 | 1/12/2016 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2017/120776 | 7/20/2017 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
8228560 | Hooper | Jul 2012 | B2 |
9338439 | Grossmann et al. | May 2016 | B2 |
9392274 | Wang et al. | Jul 2016 | B2 |
10313656 | Sadi et al. | Jun 2019 | B2 |
10477182 | Schwarz et al. | Nov 2019 | B2 |
20020145660 | Kanade et al. | Oct 2002 | A1 |
20040001138 | Weerashinghe et al. | Jan 2004 | A1 |
20040189674 | Zhang et al. | Sep 2004 | A1 |
20070052794 | Ha et al. | Mar 2007 | A1 |
20080002023 | Cutler | Jan 2008 | A1 |
20110069148 | Jones et al. | Mar 2011 | A1 |
20110235706 | Demircin et al. | Sep 2011 | A1 |
20110255589 | Saunders et al. | Oct 2011 | A1 |
20120154518 | Zargarpour et al. | Jun 2012 | A1 |
20120236934 | Chen et al. | Sep 2012 | A1 |
20120249730 | Lee | Oct 2012 | A1 |
20130044186 | Jin et al. | Feb 2013 | A1 |
20130063549 | Schnyder et al. | Mar 2013 | A1 |
20130176390 | Chen et al. | Jul 2013 | A1 |
20140098185 | Davari et al. | Apr 2014 | A1 |
20140125771 | Grossmann et al. | May 2014 | A1 |
20140218550 | Chuang et al. | Aug 2014 | A1 |
20140300691 | Saito et al. | Oct 2014 | A1 |
20140307045 | Richardt et al. | Oct 2014 | A1 |
20140355685 | Chen et al. | Dec 2014 | A1 |
20150348580 | van Hoff et al. | Dec 2015 | A1 |
Number | Date | Country |
---|---|---|
102055982 | May 2011 | CN |
102243432 | Nov 2011 | CN |
102547357 | Jul 2012 | CN |
103020985 | Apr 2013 | CN |
103108187 | May 2013 | CN |
103179405 | Jun 2013 | CN |
103516995 | Jan 2014 | CN |
104169965 | Nov 2014 | CN |
104247432 | Dec 2014 | CN |
104255027 | Dec 2014 | CN |
104424642 | Mar 2015 | CN |
104580933 | Apr 2015 | CN |
104822058 | Aug 2015 | CN |
105049850 | Nov 2015 | CN |
106797459 | May 2017 | CN |
106797460 | May 2017 | CN |
1679659 | Jul 2006 | EP |
2793187 | Oct 2014 | EP |
2010085361 | Jul 2010 | WO |
2012136388 | Oct 2012 | WO |
2013151883 | Oct 2013 | WO |
2015085406 | Jun 2015 | WO |
2016048017 | Mar 2016 | WO |
2018064110 | Apr 2018 | WO |
Entry |
---|
Extended European Search Report dated May 10, 2019, issued in European Application No. 16884543.6 (12 pages). |
Heiko Schwarz et al., “Description of 3D Video Coding Technology Proposal by Fraunhofer HHI (HEVC compatible, configuration B)”, MPEG Meeting, International Organisation for Standardisation, ISO/IEC JTC1/SC29/WG11, Coding of Moving Pictures and Audio, Nov. 22, 2011 (46 pages). |
Minghui Wang et al., “Region-of-interest based dynamical parameter allocation for H.264/AVC encoder”, Picture Coding Symposium, May 6, 2009 (4 pages). |
PCT International Search Report and the Written Opinion dated Sep. 22, 2016, issued in related International Application No. PCT/CN2016/070823 (6 pages). |
PCT International Preliminary Report on Patentability dated Jul. 26, 2018, issued in related International Application No. PCT/CN2016/070823 (5 pages). |
PCT International Search Report and the Written Opinion dated Oct. 17, 2016, issued in related International Application No. PCT/CN2016/071238 (7 pages). |
PCT International Preliminary Report on Patentability dated Jul. 26, 2018, issued in related International Application No. PCT/CN2016/071238 (5 pages). |
PCT International Search Report and the Written Opinion dated Oct. 19, 2016, issued in related International Application No. PCT/CN2016/070712 (6 pages). |
PCT International Preliminary Report on Patentability dated Jul. 26, 2018, issued in related International Application No. PCT/CN2016/070712 (5 pages). |
Kun Feng et al., “A Bit Allocation Method for Multi-view Video Coding Based on Stereoscopic Visual Saliency”, Journal of Optoelectronics ⋅ Laser, vol. 24, No. 10, Oct. 31, 2013, pp. 1995-2001. |
Haksub Kim et al., “Saliency Prediction on Stereoscopic Videos”, IEEE Transactions on Image Processing, vol. 23, No. 4, Apr. 30, 2014, pp. 1476-1490. |
Extended European Search Report dated Sep. 11, 2019, issued in related European Application No. 16884365.4 (9 pages). |
First Search dated Jan. 15, 2020, issued in related Chinese Patent Application No. 201680078524.X (2 pages). |
First Office Action dated Jan. 22, 2020, issued in related Chinese Patent Application No. 201680078524.X, with English machine translation (29 pages). |
First Search dated Dec. 20, 2019, issued in related Chinese Patent Application No. 201680078558.9 (2 pages). |
First Office Action dated Dec. 30, 2019, issued in related Chinese Patent Application No. 201680078558.9, with English machine translation (8 pages). |
Number | Date | Country | |
---|---|---|---|
20190028693 A1 | Jan 2019 | US |