The present invention relates to a device and a method for creating an image, and particularly, although not exclusively, to a device and a method for creating a panoramic image.
An image may represent an object or a scene which may be virtual or real in nature. In some applications, images may be created by transforming recorded image data to a printing or light signal with different colors and light intensities. The image data may be recorded using a camera or an imaging device which may record the colors and light of the original objects or scene.
Due to limitations in optics and camera sensors, the viewing angle of a camera is usually restricted to a predetermined range. On some occasions, the camera may not be able to capture a desired image which covers a view angle larger than the designed range.
In accordance with a first aspect of the present invention, there is provided a method for creating an image, comprising the steps of identifying data representing an overlapping region in each of a first and a second sets of image data, wherein the first and the second sets of image data are arranged to represent a first image and a second image respectively, and at least one common feature in both the first image and the second image is identified in the overlapping region; discarding a portion of the data representing a portion of the overlapping region in each of the first and the second sets of image data to obtain a first and a second sets of modified image data; and combining the first and the second sets of modified image data to a third set of image data arranged to represent a third image including at least a portion of each of the first image and the second image.
In an embodiment of the first aspect, a portion of the data representing a portion of the overlapping region is defined by a divider within the overlapping region and at least one boundary of the overlapping region.
In an embodiment of the first aspect, the divider is defined at or proximate to a center position between the boundaries of the overlapping region.
In an embodiment of the first aspect, the method further comprises the step of transforming the divider into a linear line using interpolation.
In an embodiment of the first aspect, the method further comprises the step of performing a regression analysis on a plurality of feature points associated with the at least one common feature to define the divider.
In an embodiment of the first aspect, the regression analysis includes a least squares analysis.
In an embodiment of the first aspect, the method further comprises the step of comparing the first and the second sets of image data so as to determine the plurality of feature points.
In an embodiment of the first aspect, the method further comprises the step of discarding at least one of the plurality of feature points based on at least one selection criteria.
In an embodiment of the first aspect, the at least one selection criteria includes at least one of: a position of the feature point beyond a predetermined region in the first and/or the second image; and a position of the feature point beyond the overlapping region in the first and/or the second sets of image data.
In an embodiment of the first aspect, the third image is a panoramic image.
In accordance with a second aspect of the present invention, there is provided a device for creating an image, comprising a data processing module arranged to identify data representing an overlapping region in each of a first and a second sets of image data, wherein the first and the second sets of image data are arranged to represent a first image and a second image respectively, and at least one common feature in both the first image and the second image is identified in the overlapping region; and an image processing module arranged to discard a portion of the data representing a portion of the overlapping region in each of the first and the second sets of image data to obtain a first and a second sets of modified image data, and arranged to combine the first and the second sets of modified image data to a third set of image data arranged to represent a third image including at least a portion of each of the first image and the second image.
In an embodiment of the second aspect, a portion of the data representing a portion of the overlapping region is defined by a divider within the overlapping region and at least one boundary of the overlapping region.
In an embodiment of the second aspect, the divider is defined at or proximate to a center position between the boundaries of the overlapping region.
In an embodiment of the second aspect, the data processing module is further arranged to transform the divider into a linear line using interpolation.
In an embodiment of the second aspect, the data processing module is further arranged to perform a regression analysis on a plurality of feature points associated with the at least one common feature to define the divider.
In an embodiment of the second aspect, the regression analysis includes a least squares analysis.
In an embodiment of the second aspect, the data processing module is further arranged to compare the first and the second sets of image data so as to determine the plurality of feature points.
In an embodiment of the second aspect, the data processing module is further arranged to discard at least one of the plurality of feature points based on at least one selection criteria.
In an embodiment of the second aspect, the at least one selection criteria includes at least one of a position of the feature point beyond a predetermined region in the first and/or the second image; and a position of the feature point beyond the overlapping region in the first and/or the second sets of image data.
In an embodiment of the second aspect, the third image is a panoramic image.
Embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings in which:
The inventors have, through their own research, trials and experiments, devised that image stitching may be used in the generation of panoramic images and related applications. For example, seamless image stitching which minimizes false edges may be applied. Alternatively, stitching images may involve minimal solutions for the geometric parameters of a camera rotating about its optical center.
In some other examples, an automatic stitching method such as SIFT may be used. SIFT may be further improved with a structure deformation method for processing the stitching task. Alternatively, a stitching method utilizing SIFT and a mean seamless cloning technique may be applied.
The stitching method may be used in processing panoramic images from cell phones and cameras. However, some of these methods may not keep the original information obtained, and thus may not be suitable for applications involving space environment problems.
With reference to
In an illustrative example, the panoramic camera is one of the precision instruments on a lunar rover called the "Yutu" rover, and is used to obtain images of the moon's surface in a lunar surface exploration mission, the Chang'E-3 mission. Stitched panoramic images may provide useful information about the moon for a better understanding of the environmental conditions around the landing site.
Preferably, a continuous image of the lunar surface helps in observing the lunar surface, which allows the study of lunar dust, craters and stones. In addition, it is also helpful for designing a lunar rover having the ability to navigate and avoid obstacles automatically in an unknown environment.
The panoramic camera (PCAM) is one of the rover's major scientific payloads. With two cameras installed on the mast of the Yutu rover, it is able to capture colorized and panchromatic image data. The main performance parameters of PCAM are shown in Table 1. The processed data is sampled and returned by the Chang'E-3 rover Yutu, and it contains head data and record data. The head data describes the rules of data distribution and the sample attitude (see Table 2). According to the head data, the recorded data array follows a constant R-G-B order. The length of the colorized image data is 12192768 (2352×1728×3). To process the image data, the first stage is layer separation.
Ti = (i + Lh) mod 3, where i = 1, 2, 3, . . . , 12192768. Ti is a judgement variate which decides the layer of record i; Lh represents the length of the head data, and i + Lh is the location of record i in the PCAM data.
If Ti is equal to 1, the record belongs to the red layer; if Ti equals 2 or 3, the data belongs to the green or blue layer respectively. The records are thus distinguished into R, G and B arrays, the three arrays are transformed into three 2352×1728 matrices, and finally the lunar surface high definition (HD) image is obtained by compounding these three matrices.
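The layer separation above can be sketched as follows. This is a minimal illustration, not the flight software: it assumes the record stream is interleaved in constant R-G-B order after the head data, and that the layer offsets work out to 0, 1 and 2 (the exact offsets depend on Lh mod 3 per the relation above).

```python
import numpy as np

WIDTH, HEIGHT = 2352, 1728

def separate_layers(data, head_len):
    # Drop the head data; the rest is the interleaved record stream.
    records = np.frombuffer(data, dtype=np.uint8)[head_len:head_len + WIDTH * HEIGHT * 3]
    # Ti = (i + Lh) mod 3 assigns each record to the R, G or B layer;
    # with a constant R-G-B order this amounts to taking every third
    # record (offsets 0, 1, 2 are assumed here for simplicity).
    r = records[0::3].reshape(HEIGHT, WIDTH)
    g = records[1::3].reshape(HEIGHT, WIDTH)
    b = records[2::3].reshape(HEIGHT, WIDTH)
    # Compound the three 2352x1728 matrices into the HD colour image.
    return np.dstack([r, g, b])
```

The function name and signature are illustrative only; the actual head-data parsing would follow the layout of Table 2.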
With reference to
In order to get panoramic images, the camera(s) may be arranged to change the capture angle for shooting different pictures. With reference to
Besides capturing horizontal images, the PCAM may rotate to an elevation angle of about −36° to capture close-range pictures. With reference to
Without wishing to be bound by theory, a stitching method which consults the format of the image and the image's concealed information may reduce mistakes or errors. In accordance with an embodiment of the present invention, there is provided a method of stitching or generating images which involves least-loss filling (LLF). The features decide the stitching direction and position and thus affect the stitching quality, so finding an appropriate feature operator may be an important stage of this method. A Gaussian kernel may be applied to generate the scale space; considering the speed and effectiveness of the Speeded Up Robust Features (SURF) operator, it may be used as the core operator of LLF.
Preferably, SIFT and/or SURF may be applied in a stitching process for panoramic images, which provides advantages such as distinctiveness, quantity, high speed and extendibility. Furthermore, these two methods may partly solve problems such as projective transformation, illumination, occlusion, and scenes with clutter and noise, and therefore may suit the problems in PCAM images.
With reference to
Stage 1 Scale-space extrema detection
Stage 2 Key point localization
Stage 3 Orientation assignment
Stage 4 Feature point descriptor
In the step of scale-space extrema detection, SIFT keeps the Gaussian template unchanged and only changes the size of the image in the different octaves. In SURF, the size of the image is unchanged; the detected image is obtained by varying the size of the Gaussian blur template.
Referring to
SURF and SIFT use the same method in the step of key point localization, but use different ways of orientation assignment. SIFT specifies direction parameters for each keypoint and makes the operator rotation-invariant by utilizing the gradient directions of the neighbourhood pixels and the distribution characteristics of the keypoint.
To generate the orientation histogram, a revolution of 360 degrees is divided into 36 intervals, and then the height of each interval is determined. The interval with the peak height hm is defined as the main direction. To enhance robustness, any other interval whose height is above 0.8×hm may be considered an auxiliary direction of the keypoint. Each keypoint then includes three tags: location, scale and direction. In the final step of SIFT, a 4×4×8 = 128-dimensional vector describes a feature.
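The main/auxiliary direction selection described above can be sketched as below. This is an illustrative simplification: the Gaussian weighting of gradient magnitudes used in full SIFT is omitted, and the function name is an assumption.

```python
import numpy as np

def keypoint_orientations(grad_angles_deg, grad_mags):
    # Divide the 360-degree revolution into 36 intervals of 10 degrees,
    # weighting each gradient direction by its magnitude.
    angles = np.asarray(grad_angles_deg, float) % 360
    hist, _ = np.histogram(angles, bins=36, range=(0, 360),
                           weights=np.asarray(grad_mags, float))
    peak = int(np.argmax(hist))
    h_m = hist[peak]
    main = peak * 10 + 5  # centre of the peak interval
    # Any other interval higher than 0.8 * h_m is kept as an auxiliary
    # direction, which enhances robustness.
    aux = [i * 10 + 5 for i, h in enumerate(hist)
           if i != peak and h > 0.8 * h_m]
    return main, aux
```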
For SURF, the dimension of the vector may be decreased to 64. In the stage of matching feature points, SURF has fewer vector components to compare, which may reduce computing time. With reference to Table 3 below, processing time, scale, changes, rotation, blur, illumination and affine performance are compared. In one example embodiment, considering scale-invariance, illumination and computing time, SURF may be applied as the main feature operator in least-loss filling.
With reference to
In this embodiment, the data processing module 402 and/or the image processing module 404 may be implemented as components of a camera, such as an image processor 406 integrated in a camera for processing the image captured by the camera. Alternatively, the data processing module 402 and the image processing module 404 may be individually implemented in separate processors 406, or may be implemented in a processing unit 406 or a computer arranged to process images imported from external sources.
In some examples of image capturing, image data may be recorded as light intensities of different colours sensed at different positions by the camera sensor. With different sets of image data, through image processing, images may be “restored” to represent the objects/scene being captured or recorded by the camera. By analysing and modifying the recorded image data, the image may be modified accordingly.
Preferably, a first image 408 and a second image 410 corresponding to the source images may be processed by the data processing module 402 and the image processing module 404, so as to generate a third image 412 which includes portions of the first image 408 and the second image 410. Preferably, the third image 412 is a panoramic image which may comprise portions of two or more source images 408, 410.
Preferably, the data processing module 402 may identify the overlapping region in each of the first and the second sets of image data, i.e. the image portion representing a substantially similar region with similar features on the images. At least one common feature may be identified in the overlapping region of both the first image 408 and the second image 410. In general, identifying more common features is preferable.
In some example embodiments, the source images 408, 410 may include overlapping regions of different sizes or portions. For example, images taken by the CE-3 PCAM may be different from those taken by the LROC (Lunar Reconnaissance Orbiter Camera), and may not include a large overlapped area. Thus it may be difficult to accurately find the overlapping regions on the images.
The inventors devised that the problem may be solved as follows. If an image is divided, up and down or left and right, into two slices and each of the slices has twenty percent of overlapped region, a feature algorithm may be applied to obtain the features on these two images. In this case, the feature operators from the overlapped regions in the two images are completely equal. By applying a feature matching algorithm to process the feature operators, pairs of matching features in the two overlapped regions may be obtained.
Preferably, a portion of the data representing a portion of the overlapping region is defined by a divider within the overlapping region and at least one boundary of the overlapping region. The overlapping region may be defined by two boundaries, and a divider is arranged to divide the overlapping region into substantially two halves, i.e. the divider may be defined at or proximate to a center position between the boundaries of the overlapping region. One of the two halves may be discarded or deleted in the image stitching process such that the images may be combined without the identical features being repeatedly included in the panoramic image.
After discarding certain areas of the overlapping region in each of the first and the second images 408, 410, the first and the second sets of image data are transformed into a first and a second sets of modified image data, which may be further used to define the third set of image data arranged to represent a third image 412.
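The discard-and-combine step can be sketched as follows for the left-right model. This is a minimal illustration assuming a known overlap width in columns and a straight vertical divider at the centre of the overlap; the function name is an assumption.

```python
import numpy as np

def stitch_left_right(left, right, overlap):
    # The divider sits at the centre of the `overlap`-column-wide
    # overlapping region. The left image is kept up to the divider and
    # the right image from the divider onwards, so the common features
    # appear only once in the combined image.
    half = overlap // 2
    return np.hstack([left[:, :left.shape[1] - (overlap - half)],
                      right[:, half:]])
```

Note that the resulting width is W1 + W2 − overlap, i.e. the overlapped columns are counted only once.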
With reference to
The shaded areas in
By using the least squares method on the matched features, each of the two images should include a divider line. Since the two overlapped regions should be completely equal, the two divider lines should correspondingly be completely equal.
Subsequently, by discarding the region to the left of the divider line from the right image, and the region to the right of the divider line from the left image, the two slices/portions may be stitched into a single image perfectly; the up-and-down divide model is similar. However, for real CE-3 images, the two images to be stitched are not divided from a single image, so their overlapped regions are not the same. Since the camera shoots the same object, the overlapped regions of the images to be stitched are similar, and the positions of the features change as the objects in the overlapped region change. The divider line thus represents the same objects and environment even when the camera changes its shooting angle; nevertheless, using the same method, the two divider lines are not exactly equal either.
Preferably, the data processing module 402 is further arranged to transform the divider into a linear line using interpolation. Such an interpolation process transforms the two divider lines into horizontal or vertical lines to improve the stitching work.
To guarantee that there is no loss of original information while stitching images, LLF keeps the integrity and continuity of the data. With reference to
In this example, the method may be implemented as the Least-Loss Filling (LLF) method discussed above, and may be carried out by the data processing module 402 and the image processing module 404.
LLF may involve three major stages as follows:
Stage 1 Feature point matching
Stage 2 Least Square
Stage 3 Complement Image
With reference to
Preferably, the data processing module 402 is further arranged to discard at least one of the feature points based on at least one selection criteria. The selection criteria may include a position of the feature point beyond a predetermined region in the first image 408 and/or the second image 410, such as a predetermined distance between the feature point and a reference location or a predetermined difference between the two distances, or the position of the feature point beyond (not within) the overlapping region in the first and/or the second sets of image data.
Matching points with large distances, or features whose locations are not in the overlapped region, may be discarded or ignored. LLF may be used to stitch Chang'E-3 panoramic images automatically given the matching points and the capture mechanism. Stitching of the two source images may consider five directions or arrangements: unable to stitch, above-under, under-above, left-right and right-left, with reference to
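The match-filtering criteria above can be sketched as follows. This is an illustrative sketch: the overlap region is assumed to be given as an axis-aligned box, and the distance threshold and function name are assumptions.

```python
import numpy as np

def filter_matches(pts1, pts2, overlap_box, max_dist):
    # pts1 and pts2 are N x 2 arrays of matched feature coordinates;
    # overlap_box = (x_min, y_min, x_max, y_max).
    pts1 = np.asarray(pts1, float)
    pts2 = np.asarray(pts2, float)
    # Criterion 1: discard matches whose displacement is too large.
    dist = np.linalg.norm(pts1 - pts2, axis=1)
    # Criterion 2: discard features lying outside the overlapped region.
    x_min, y_min, x_max, y_max = overlap_box
    inside = ((pts1[:, 0] >= x_min) & (pts1[:, 0] <= x_max) &
              (pts1[:, 1] >= y_min) & (pts1[:, 1] <= y_max))
    keep = (dist <= max_dist) & inside
    return pts1[keep], pts2[keep]
```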
Preferably, the data processing module 402 is further arranged to perform a regression analysis on the plurality of feature points associated with the at least one common feature to define the divider. For example, the regression analysis may include a least square analysis, which involves the following matrix:
The matrix M(xi(1), yi(1), xi(2), yi(2)) is divided into MI1(xi(1), yi(1)) and MI2(xi(2), yi(2)); xi(1) is compared with xi(2), and yi(1) with yi(2), to determine the stitching direction. The length of the abscissa in I1 is Lx and the length of the ordinate in I1 is Ly. If
then I1 is on the left side of I2;
then I1 is on the right side of I2;
then I1 is above I2;
then I1 is under I2; otherwise the two images do not have an overlapped region. The data processing module 402 may compare the first and the second sets of image data so as to determine the plurality of feature points based on the abovementioned relations.
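A direction decision of this kind can be sketched as follows. Note the original threshold conditions are not reproduced in the text above, so this sketch substitutes a simple mean-offset test against the image lengths Lx and Ly; the Lx/4 and Ly/4 thresholds, the coordinate convention (y increasing downward) and the function name are all assumptions.

```python
import numpy as np

def stitch_direction(pts1, pts2, Lx, Ly):
    pts1 = np.asarray(pts1, float)
    pts2 = np.asarray(pts2, float)
    # Mean displacement of matched points from I1 to I2.
    dx = np.mean(pts1[:, 0] - pts2[:, 0])
    dy = np.mean(pts1[:, 1] - pts2[:, 1])
    if abs(dx) >= abs(dy):
        # Overlap on I1's right edge and I2's left edge means the
        # matched x coordinates are large in I1 and small in I2.
        if dx > Lx / 4:
            return "I1 left of I2"
        if dx < -Lx / 4:
            return "I1 right of I2"
    else:
        if dy > Ly / 4:
            return "I1 above I2"
        if dy < -Ly / 4:
            return "I1 under I2"
    return "no overlapped region"
```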
By using least squares to process MI1(xi(1), yi(1)) and MI2(xi(2), yi(2)), the dividing lines l1(x, y) and l2(x, y) between the useful area and the useless area may be determined. The dividing line may be represented according to the following relation:
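A least squares fit of a dividing line through the matched feature points can be sketched as below. The document's own relation for the line is not reproduced above, so this sketch assumes a straight-line model y = a·x + b; the function name is an assumption.

```python
import numpy as np

def fit_divider(points):
    # points is an N x 2 array of matched feature coordinates.
    pts = np.asarray(points, float)
    # Design matrix for y = a*x + b; solve min ||A[a, b]^T - y||^2.
    A = np.column_stack([pts[:, 0], np.ones(len(pts))])
    (a, b), *_ = np.linalg.lstsq(A, pts[:, 1], rcond=None)
    return a, b
```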
With reference to
For example, in the case of the horizontal model, let the maximum of x be the boundary of the image width. If xi is less than xmax, di pixels are inserted in that row to ensure that xi of each row is equal to xmax, according to the following relation:
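The row-complementing step can be sketched as below. This is an illustrative sketch: the original relation governing the inserted pixel values is not reproduced here, so a constant fill value stands in for it, and the function name is an assumption.

```python
import numpy as np

def complement_image(rows, x_max, fill=0):
    # For the horizontal model: pad each row on the right so that its
    # length x_i reaches x_max, inserting d_i = x_max - x_i pixels.
    out = np.full((len(rows), x_max), fill, dtype=float)
    for i, row in enumerate(rows):
        # The original data is kept in full, so no information is lost.
        out[i, :len(row)] = row
    return out
```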
After complementing the two images, the correct stitch position depends on δy, the malposition of the two images to be stitched, which is defined as

δy = mean(l1(x, y), y) − mean(l2(x, y), y)    (6)
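Relation (6) amounts to a difference of mean y coordinates along the two divider lines, as in this short sketch (the function name and the point-list representation of the lines are assumptions):

```python
import numpy as np

def stitch_offset(l1_points, l2_points):
    # delta_y = mean y of divider line l1 minus mean y of divider
    # line l2: the malposition of the two images to be stitched.
    l1 = np.asarray(l1_points, float)
    l2 = np.asarray(l2_points, float)
    return float(l1[:, 1].mean() - l2[:, 1].mean())
```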
These embodiments may be advantageous in that the device for generating an image may be used to stitch multiple images to form a continuous panoramic image with minimal error induced in the final panoramic image. The regression analysis of the feature points in the overlapping regions may allow the determination of the best possible divider line for combining the two source images at the best possible position and alignment.
The inventors also carried out experiments in accordance with several embodiments of the present invention. In the experiments, the image data is obtained from actual panoramic image data returned by the Chang'E-3 rover. The experiments involve many different images taken in different shooting environments and directions, including images with many stones, lunar dust, craters, clutter, shadow, sky (black areas) and different illumination.
As described, Chang'E-3 panoramic images may include five different stitching cases, and the experiments include the different stitching situations and directions. With reference to
With reference to
With reference to
With reference to
With reference to
The method in accordance with the present invention, such as LLF, is also compared with other stitching methods such as the linear least squares method, the B-spline least squares method and the many-knot-spline least squares method. With reference to
With reference to
With reference to
With reference to
It will also be appreciated that where the methods and systems of the present invention may be wholly or partly implemented by computing systems, any appropriate computing system architecture may be utilised. This includes stand-alone computers, network computers and dedicated hardware devices. Where the terms "computing system" and "computing device" are used, these terms are intended to cover any appropriate arrangement of computer hardware capable of implementing the function described.
It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the invention as shown in the specific embodiments without departing from the spirit or scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.
Any reference to prior art contained herein is not to be taken as an admission that the information is common general knowledge, unless otherwise indicated.