Method for Improving Image Stitching Accuracy with Lens Distortion Correction and Device for Implementing the Same

Abstract
The invention provides a method for improving image stitching accuracy and a device for implementing the same, which apply to multi-camera systems for wide-angle image generation. Lens distortion causes mismatches between the features in the overlapping regions of the images captured by the multi-camera system, and these mismatches are visible on the stitched wide-angle image. The method and device for improving image stitching accuracy correct the lens distortion before stitching the images captured by the multi-camera system, so that the features in the overlapping regions are matched and a seamless wide-angle image is generated by the stitching engine.
Description

BRIEF DESCRIPTIONS OF THE DRAWINGS


FIG. 1 is a schematic view of the conventional multi-camera system.



FIG. 2 illustrates an example of lens distortion.



FIG. 3 illustrates an example of image transformation inaccuracy caused by lens distortion.



FIG. 4 is a schematic view of the multi-camera system according to an embodiment of the present invention.



FIG. 5 illustrates an example of lens distortion correction by the multi-camera system of the present invention.



FIG. 6 illustrates an example of improving image transformation accuracy by lens distortion correction.



FIG. 7 illustrates the details of the means for correcting lens distortion and the image stitching engine shown in FIG. 4.



FIG. 8 is a schematic view of the optimized multi-camera system according to another embodiment of the present invention.





DETAILED DESCRIPTION

The invention will be better understood from the detailed description given below and from the accompanying drawings of the embodiments of the invention. However, well-known details are omitted herein in order not to unnecessarily obscure the present invention.


Referring to FIG. 4, a multi-camera system 400 according to the present invention is illustrated. The multi-camera system 400 comprises a camera array 401, an image stitching engine 402, optional aperture controllers 403, and lens distortion correction means 404. The lens distortion correction means 404 may utilize any suitable algorithm for correcting lens distortion. Taking the polynomial algorithm as an example, the coefficients of the polynomial are calculated to fit the curve of the lens data. The coefficients are stored as lens parameters, and the lens distortion correction maps each pixel to its undistorted position by applying the polynomial to each pixel. A mapping table can be applied to any lens distortion correction algorithm: the corrected position of each pixel is stored in a table, and the table is calculated from the applied mathematical model. In the case of applying the mapping table, the lens parameters are the data of the table, and the mapped coordinates after correction may be fractional. Different interpolations may be applied to calculate the image data at each pixel, depending on the cost and quality required by the system.
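For illustration only, the following is a minimal Python sketch of a polynomial lens distortion correction implemented through a mapping table and bilinear interpolation. The radial polynomial model, the helper names build_undistort_map and bilinear_remap, and the coefficients k1 and k2 are assumptions made for this sketch and do not describe the lens parameters of any particular embodiment.

```python
# Minimal sketch of polynomial lens distortion correction via a mapping table.
# The radial model and the coefficients k1, k2 are illustrative assumptions.
import numpy as np

def build_undistort_map(width, height, k1, k2, cx=None, cy=None):
    """Precompute, for every output (undistorted) pixel, the possibly
    fractional source coordinate in the distorted image (the mapping table)."""
    cx = width / 2.0 if cx is None else cx
    cy = height / 2.0 if cy is None else cy
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float64)
    x, y = xs - cx, ys - cy
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2   # radial polynomial distortion model
    map_x = cx + x * scale                 # fractional source x for each pixel
    map_y = cy + y * scale
    return map_x, map_y

def bilinear_remap(image, map_x, map_y):
    """Resample the distorted image at the fractional coordinates in the map."""
    h, w = image.shape[:2]
    x0 = np.clip(np.floor(map_x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(map_y).astype(int), 0, h - 2)
    fx = np.clip(map_x - x0, 0.0, 1.0)
    fy = np.clip(map_y - y0, 0.0, 1.0)
    if image.ndim == 3:                    # broadcast weights over colour channels
        fx, fy = fx[..., None], fy[..., None]
    img = image.astype(np.float64)
    top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
    bot = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
    return (top * (1 - fy) + bot * fy).astype(image.dtype)

# Hypothetical usage:
# corrected = bilinear_remap(frame, *build_undistort_map(w, h, k1=1e-7, k2=0.0))
```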


If the multi-camera system 400 does not include the aperture controllers 403, i.e. the apertures of the cameras in the camera array 401 are fixed, the lens parameters are independent of the brightness of the images and remain constant. Conversely, if the multi-camera system 400 includes the aperture controllers 403, i.e. the apertures of the cameras in the camera array 401 are controllable, an exposure control signal is sent by the image stitching engine 402 to the aperture controllers 403 based on the brightness of the images to perform feedback control of the apertures of the camera array 401. The lens parameters used for lens distortion correction vary with the diameters of the apertures; hence the lens parameters are calculated based on the aperture control.
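By way of example only, the following sketch shows how lens parameters could be selected as a function of the current aperture diameter. The calibration table, the aperture values, and the coefficient values are hypothetical; in practice the coefficients would be measured for each supported aperture setting during lens calibration.

```python
# Illustrative sketch: selecting lens parameters as a function of aperture.
# The calibration table below is hypothetical.
import bisect

# aperture diameter (mm) -> polynomial coefficients (k1, k2), sorted by aperture
CALIBRATION = [
    (2.0, (-1.8e-7, 1.0e-14)),
    (4.0, (-2.1e-7, 1.3e-14)),
    (8.0, (-2.6e-7, 1.9e-14)),
]

def lens_params_for_aperture(diameter_mm):
    """Linearly interpolate calibrated coefficients for the current aperture."""
    apertures = [a for a, _ in CALIBRATION]
    if diameter_mm <= apertures[0]:
        return CALIBRATION[0][1]
    if diameter_mm >= apertures[-1]:
        return CALIBRATION[-1][1]
    i = bisect.bisect_left(apertures, diameter_mm)
    (a0, (k1a, k2a)), (a1, (k1b, k2b)) = CALIBRATION[i - 1], CALIBRATION[i]
    t = (diameter_mm - a0) / (a1 - a0)
    return (k1a + t * (k1b - k1a), k2a + t * (k2b - k2a))
```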


Compared with the example of barrel distortion shown in FIG. 2, FIG. 5 illustrates an example of correcting pin-cushion distortion by the multi-camera system of the present invention. As shown, the features in the overlapping region of each image are corrected and match the features in the overlapping region of the adjacent image.



FIG. 6 is a schematic view of improving image transformation accuracy by lens distortion correction. Because the points are no longer distorted, the transformation matrix is accurately calculated from the undistorted points, and the image captured by the rotated camera is therefore accurately transformed to the reference plane.
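As an illustration of this point, the sketch below estimates a planar transformation (homography) from undistorted point correspondences using a direct linear transform. The function name and the SVD-based solution are assumptions of this sketch and are not mandated by the embodiment; the point lists are placeholders.

```python
# Sketch: estimating the planar transformation (homography) from point
# correspondences *after* lens distortion correction, so the linear model fits.
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Direct Linear Transform from >= 4 undistorted correspondences.
    src_pts, dst_pts: arrays of shape (N, 2)."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```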



FIG. 7 illustrates the details of the lens distortion correction means 404 and the image stitching engine 402 shown in FIG. 4. The image stitching engine 402 mainly includes “image transformation means” and “image blending and stitching means”. The image transformation means generates transformed coordinates based on transformation parameters. The generated coordinates may be fractional, so the image data is then calculated by interpolation. Some transformations (e.g. planar transformation) stretch the image, while others squeeze the image and leave a black portion on the margin of the image. The extra black portion generated by the transformation is removed from the transformed image by cropping. The image blending and stitching means generates the wide-angle image based on the image data acquired from the image transformation means.
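For clarity, the following sketch illustrates the behaviour of such an image transformation means: inverse coordinate mapping with a planar transformation, sampling at the resulting fractional coordinates, and cropping of the black margin. The homography H, the output size, and the nearest-neighbour lookup (used here for brevity in place of a finer interpolation such as the bilinear resampler sketched earlier) are assumptions of this sketch.

```python
# Sketch of the image transformation means: inverse mapping, sampling, cropping.
import numpy as np

def warp_and_crop(image, H, out_w, out_h):
    """Inverse-map each output pixel through H, sample the source image, and
    crop away the black margin left where the mapping falls outside the source."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    dst = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)]).astype(np.float64)
    src = np.linalg.inv(H) @ dst                       # output -> source coordinates
    src_x, src_y = src[0] / src[2], src[1] / src[2]    # may be fractional
    valid = (src_x >= 0) & (src_x <= w - 1) & (src_y >= 0) & (src_y <= h - 1)
    sx = np.clip(np.rint(src_x), 0, w - 1).astype(int) # nearest neighbour for brevity
    sy = np.clip(np.rint(src_y), 0, h - 1).astype(int)
    out = np.zeros((out_h, out_w) + image.shape[2:], dtype=image.dtype)
    out.reshape(-1, *image.shape[2:])[valid] = image[sy[valid], sx[valid]]
    # Crop: keep only the rows and columns that contain at least one valid pixel.
    vmask = valid.reshape(out_h, out_w)
    rows = np.flatnonzero(vmask.any(axis=1))
    cols = np.flatnonzero(vmask.any(axis=0))
    if rows.size == 0 or cols.size == 0:
        return out[:0, :0]
    return out[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```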


Further, the lens distortion correction means 404 generates corrected coordinates based on the lens parameters. As described above, interpolation is required for correcting lens distortion; therefore additional image buffers should be included in the multi-camera system 400 to store the corrected image data after lens distortion correction. The additional image buffers are not a problem in the case of software stitching, but imply more memory in a hardware implementation. For a multi-camera system handling high-resolution, high-frame-rate video, the memory access bandwidth and the memory density are both high, so the additional image buffers might exceed the memory access bandwidth limit. As a result, the present invention further provides another optimized multi-camera system in which the lens distortion correction means is combined with the image transformation means to save the additional image buffers.
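As an illustrative, back-of-the-envelope estimate (the resolution, frame rate, pixel depth, and camera count below are assumed example values, not figures taken from this disclosure), the extra memory traffic caused by writing and re-reading one additional full-frame buffer per camera can be gauged as follows.

```python
# Illustrative estimate of the extra memory traffic added by a separate
# distortion-correction buffer. All figures below are assumed example values.
width, height, fps, bytes_per_pixel, cameras = 1920, 1080, 30, 3, 4

frame_bytes = width * height * bytes_per_pixel
# Each extra buffer is written once (corrected image) and read once
# (by the transformation stage) per frame, per camera.
extra_bandwidth = 2 * frame_bytes * fps * cameras
print(f"extra DRAM traffic: {extra_bandwidth / 1e6:.0f} MB/s")   # ~1493 MB/s
```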



FIG. 8 illustrates an optimized multi-camera system 800 including lens distortion correction means according to another embodiment of the present invention. As described above, the lens distortion correction means is combined with the image transformation means in the multi-camera system 800. As shown, the corrected coordinates after lens distortion correction are inputted into the image transformation means, which further performs coordinate mapping based on the transformation parameters. The interpolation is then performed only once to calculate the final image data. Hence no extra image buffers are required for the multi-camera system 800, and the image quality is not degraded by the extra interpolation that a separate lens distortion correction stage would require.
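For illustration, the following sketch shows one way the two coordinate mappings could be composed so that the raw frame is resampled only once. The inverse planar transformation, the radial lens model, and the parameter names are assumptions carried over from the earlier sketches; they are not the only way to realize the combined means.

```python
# Sketch of the combined correction/transformation: compose the coordinate
# mappings first, then resample the raw frame in a single interpolation pass.
import numpy as np

def combined_map(out_w, out_h, H, k1, k2, cx, cy):
    """For each output pixel: apply the inverse planar transformation to get an
    undistorted source coordinate, then apply the lens model to find where that
    sample lies in the original (distorted) frame. One fractional coordinate per
    output pixel -> a single interpolation pass and no intermediate buffer."""
    ys, xs = np.mgrid[0:out_h, 0:out_w].astype(np.float64)
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = np.linalg.inv(H) @ pts                        # transformation mapping
    ux = (src[0] / src[2]).reshape(out_h, out_w) - cx   # undistorted, centred
    uy = (src[1] / src[2]).reshape(out_h, out_w) - cy
    r2 = ux * ux + uy * uy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2                # lens distortion mapping
    return cx + ux * scale, cy + uy * scale             # final fractional coords

# Hypothetical usage, reusing the bilinear resampler sketched earlier:
# final = bilinear_remap(raw_frame, *combined_map(w, h, H, k1, k2, w / 2, h / 2))
```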


Although only a few embodiments of the invention have been described in detail above, those of ordinary skill in the art will understand that many modifications are possible without departing from the novel features of the invention. Such modifications are intended to be included within the scope of this invention as defined by the following claims.

Claims
  • 1. A device for improving image stitching accuracy, comprising: a camera array consisting of a plurality of cameras for capturing images; means for correcting lens distortion; and an image stitching engine for transforming the captured images and stitching the transformed images into a seamless wide-angle image.
  • 2. The device according to claim 1, further comprising a plurality of optional aperture controllers, wherein the aperture controllers are connected to the cameras respectively, and the image stitching engine sends an exposure control signal to the aperture controllers based on brightness of the captured images to control apertures of the cameras.
  • 3. The device according to claim 2, wherein the means for correcting lens distortion comprises coordinate mapping and interpolation, the coordinate mapping generates corrected coordinates for the captured images based on lens parameters, and the lens parameters vary with the apertures of the cameras.
  • 4. The device according to claim 3, wherein the means for correcting lens distortion utilizes polynomial algorithm, and the lens parameters are polynomial coefficients or a mapping table of the corrected coordinates.
  • 5. The device according to claim 1, wherein the image stitching engine comprises means for transforming the captured images and means for blending and stitching the transformed images, and wherein the means for transforming the captured images comprises coordinate mapping and interpolation and cropping, the means for transforming the captured images generates transformed coordinates based on transformation parameters.
  • 6. The device according to claim 5, wherein the means for correcting lens distortion is combined with the means for transforming the captured images.
  • 7. The device according to claim 6, wherein the interpolation is performed only once.
  • 8. A method for improving image stitching accuracy, comprising: capturing images by a camera array; correcting lens distortion; and transforming the captured images and stitching the transformed images into a seamless wide-angle image.
  • 9. The method according to claim 8, wherein the camera array comprises a plurality of cameras to which a plurality of aperture controllers are connected respectively, and the method further comprises sending an exposure control signal to the aperture controllers based on brightness of the captured images to control apertures of the cameras.
  • 10. The method according to claim 9, wherein correcting the lens distortion comprises coordinate mapping and interpolation, the coordinate mapping generates corrected coordinates for the captured images based on lens parameters, and the lens parameters vary with the apertures of the cameras.
  • 11. The method according to claim 10, wherein correcting the lens distortion utilizes polynomial algorithm, and the lens parameters are polynomial coefficients or a mapping table of the corrected coordinates.
  • 12. The method according to claim 8, wherein transforming the captured images comprises coordinate mapping and interpolation and cropping, and transforming the captured images generates transformed coordinates based on transformation parameters.
  • 13. The method according to claim 12, wherein correcting the lens distortion is combined with transforming the captured images.
  • 14. The method according to claim 13, wherein the interpolation is performed only once.
Priority Claims (1)
Number Date Country Kind
95112790 Apr 2006 TW national