The present invention relates to the technical field of image processing, and more particularly to an image processing method for immediately producing panoramic images.
Because traditional film cameras can only capture scenes within a capture angle ranging from 30° to 50°, they cannot capture a panoramic scene in one single picture. Recently, with the emergence of digital cameras and the advance of image processing technologies, conventional photography technologies have become able to use multiple camera lenses to capture panoramic scenes over an entire angular image capturing range, such that the panoramic scenes can be processed into a 360-degree panoramic image presented on a single picture by using a multi-image stitching technology.
Please refer to
Although the conventional image processing method for producing 360-degree panoramic images is now widely practiced in the form of an App (application software), the inventors of the present invention find that the conventional image processing method still has some drawbacks and shortcomings in practical application. The drawbacks are summarized as the following two points:
Accordingly, in view of the conventional image processing method showing many drawbacks and shortcomings in practical applications, the inventors of the present application have made great efforts in inventive research thereon and eventually provided an image processing method for immediately producing panoramic images.
The primary objective of the present invention is to provide an image processing method for immediately producing panoramic images. Differing from conventional image processing technologies, which cannot immediately produce 360-degree panoramic images, the present invention particularly provides an image processing method for immediately producing panoramic images. In this method, two fisheye cameras are used for capturing video information; then, after the video information is treated with a video encoding process and a streaming process, a streaming video is transmitted to an electronic device by a wired or wireless technology. Therefore, an image processing application program installed in the electronic device is able to subsequently treat the streaming video with a video decoding process, a panoramic coordinates converting process, an image stitching process, and an edge-preserving smoothing process in turn, so as to eventually show a sphere panorama on the display of the electronic device. Moreover, by the utilization of a digital signal processor, the image processing application program is able to further process the sphere panorama into a plain panorama, a fisheye panorama, or a human-eye panorama.
In order to achieve the primary objective of the present invention, the inventors of the present invention provide an embodiment of the image processing method for immediately producing panoramic images, which is applied in an electronic device and comprises the following steps:
The invention as well as a preferred mode of use and advantages thereof will be best understood by referring to the following detailed description of an illustrative embodiment in conjunction with the accompanying drawings, wherein:
To more clearly describe an image processing method for immediately producing panoramic images according to the present invention, embodiments of the present invention will be described in detail with reference to the attached drawings hereinafter.
The image processing method for immediately producing panoramic images proposed by the present invention can be applied in an electronic device such as a digital camera, a smartphone, a tablet PC, or a notebook in the form of an App (application software). Thus, after a user completes an image capturing operation (or a video recording operation) by using an image capturing module, the App immediately transforms an image captured by the image capturing operation (or a plurality of image frames obtained from the video recording operation) into a panoramic image (or a panoramic video).
It is worth explaining that the aforesaid image capturing module can be an independent camera device or a camera module of the electronic device. Moreover, the image frames are transmitted from the image capturing module to the electronic device by a wired transmission technology or a wireless transmission technology. On the other hand, as engineers skilled in the field of image processing know, both the terms "one frame of image" and "an image frame" mean one photograph, and a video or video stream consists of a plurality of image frames.
Please refer to
First of all, the method proceeds to step (1) for treating at least one image capturing module with a parameter calibration process.
In the mathematical equation, FOV means the field of view of the image capturing module, and W and Wover represent an image width and an image overlapping width of two of the image frames, respectively.
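To make the role of these parameters concrete, the following is a minimal Python sketch of how the overlap width could relate to a fisheye field of view wider than 180 degrees; since the equation itself is not reproduced above, this specific form and the function name overlap_width are illustrative assumptions, not the patent's formula.

```python
# Hypothetical sketch: if the portion of the fisheye FOV beyond 180
# degrees is what two opposing lenses share, the overlapping width is
# that same portion of the image width. This form is an assumption.
def overlap_width(FOV_deg: float, W: int) -> float:
    """FOV_deg: field of view of the lens (> 180); W: image width in px."""
    return W * (FOV_deg - 180.0) / FOV_deg

# e.g. a 210-degree lens and a 1280-px-wide frame give roughly 183 px
print(overlap_width(210.0, 1280))  # -> 182.857...
```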
After step (1) is completed, the method proceeds to step (2) for using the at least one image capturing module to capture at least two image frames. Subsequently, the method proceeds to step (3) for treating the at least two image frames with a panoramic coordinates converting process, so as to produce at least two panoramically-coordinated image frames. Herein, it needs to be further explained that, although
Please refer to
A plurality of latitude-longitude coordinates are obtained after the latitude-longitude coordinate converting process is completed. It needs to be explained that (θ, φ) shown in the coordinate conversion formulas represents a latitude-longitude coordinate; moreover, PI, W and H represent the circumference ratio (π), an image width and an image height, respectively. Furthermore, the latitude-longitude coordinates are subsequently treated with a 3D vector converting process in order to produce a plurality of 3D vectors, wherein the 3D vector converting process is carried out by using three vector conversion formulas defined as follows:
spX = cos φ × sin θ (3)
spY = cos φ × cos θ (4)
spZ = sin φ (5)
In the above-presented three vector conversion formulas, (θ, φ) and (spX, spY, spZ) represent a latitude-longitude coordinate and a 3D vector coordinate, respectively. Subsequently, the obtained 3D vectors are treated with a projection converting process for producing a plurality of projected latitude-longitude coordinates, wherein the projection converting process is carried out by using three conversion formulas defined as follows:
In the three conversion formulas, (r, θ*, φ*) and (spX, spY, spZ) represent a projected latitude-longitude coordinate and a 3D vector coordinate, respectively; moreover, FOV means the field of view of the image capturing module, and W represents an image width. Eventually, two calculation formulas are used to calculate a plurality of original image coordinates of the at least two image frames based on the projected latitude-longitude coordinates, such that the at least two panoramically-coordinated image frames are produced. The two calculation formulas are defined as follows:
X* = Cx + r × cos θ* (9)
Y* = Cy + r × sin θ* (10)
In the two calculation formulas, (X*, Y*) represents a panorama coordinate, and (Cx, Cy) represents a lens center coordinate of the fisheye lens, which is obtained after the parameter calibration process is finished.
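Since the intermediate formulas (1)-(2) and (6)-(8) are not reproduced above, the following Python sketch shows one plausible end-to-end form of the step (3) conversion under common equirectangular and equidistant-fisheye conventions; the assumed forms of the elided formulas are marked in the comments and should not be read as the patent's exact equations.

```python
import numpy as np

def panorama_to_fisheye(x, y, W, H, FOV, Cx, Cy):
    """Map a panorama pixel (x, y) back to an original fisheye pixel.
    W, H: panorama width/height; FOV: lens field of view in degrees;
    (Cx, Cy): lens center from the parameter calibration process."""
    # Formulas (1)-(2) (assumed): pixel -> latitude-longitude coordinate
    theta = (x / W - 0.5) * 2.0 * np.pi      # longitude in [-pi, pi]
    phi = (y / H - 0.5) * np.pi              # latitude in [-pi/2, pi/2]

    # Formulas (3)-(5): latitude-longitude -> 3D unit vector
    spX = np.cos(phi) * np.sin(theta)
    spY = np.cos(phi) * np.cos(theta)
    spZ = np.sin(phi)

    # Formulas (6)-(8) (assumed equidistant fisheye projection):
    # 3D vector -> projected latitude-longitude coordinate (r, theta*)
    theta_star = np.arctan2(spZ, spX)
    alpha = np.arctan2(np.hypot(spX, spZ), spY)   # angle off the lens axis
    r = alpha / np.radians(FOV) * W               # radial distance in px

    # Formulas (9)-(10): projected coordinate -> original fisheye pixel
    X_star = Cx + r * np.cos(theta_star)
    Y_star = Cy + r * np.sin(theta_star)
    return X_star, Y_star
```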
After step (3) is completed, the method proceeds to step (4) for treating the at least two panoramically-coordinated image frames with an image stitching process, so as to obtain a single panoramic image frame. To complete step (4), it is firstly necessary to select a first sub-region from an image overlapping region of the two panoramically-coordinated image frames; that is, to select a left sub-region from the image overlapping region located on the right side of the left image frame of the two panoramically-coordinated image frames. Next, a plurality of left feature points are found in the left sub-region by using a fixed interval sampling method, and subsequently a plurality of first feature-matching points matching the left feature points are found in one of the two panoramically-coordinated image frames by using a pattern recognition method.
After the first feature-matching points are found, it is further necessary to select a second sub-region from the image overlapping region of the two panoramically-coordinated image frames; that is, to select a right sub-region from the image overlapping region located on the left side of the right image frame of the two panoramically-coordinated image frames. Next, a plurality of right feature points are found in the right sub-region by using the fixed interval sampling method, and subsequently a plurality of second feature-matching points matching the right feature points are found in the other of the two panoramically-coordinated image frames by using the pattern recognition method.
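As one way to realize the fixed interval sampling method and the pattern recognition method described above, the following sketch samples patches at a fixed interval inside a sub-region and matches them by normalized template matching with OpenCV; the interval, patch size, and 0.8 score threshold are illustrative assumptions rather than values given by the patent.

```python
import cv2

def match_overlap_features(sub_region, other_frame, interval=32, patch=16):
    """Return pairs ((x, y) in sub_region, (x, y) in other_frame)."""
    matches = []
    h, w = sub_region.shape[:2]
    # fixed interval sampling: take a small patch every `interval` pixels
    for y in range(0, h - patch, interval):
        for x in range(0, w - patch, interval):
            template = sub_region[y:y + patch, x:x + patch]
            # pattern recognition: normalized cross-correlation search
            result = cv2.matchTemplate(other_frame, template,
                                       cv2.TM_CCOEFF_NORMED)
            _, score, _, best = cv2.minMaxLoc(result)
            if score > 0.8:  # keep only confident matches
                matches.append(((x, y), best))
    return matches
```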
After the first feature-matching points and the second feature-matching points are obtained, the App is able to stitch the two panoramically-coordinated image frames based on the first feature-matching points and the second feature-matching points, such that the panoramic image frame is produced. Furthermore, as engineers skilled in the field of image processing know, the panoramic image frame obtained by stitching the two panoramically-coordinated image frames must be subsequently treated with an edge smoothing process in order to eliminate the stitch seam.
When executing the edge smoothing process, it is firstly necessary to find the center point of the image overlapping region of the left image frame and the right image frame, and then to use the following mathematical equation to carry out a first image blending process.
In the above-presented mathematical equation, PL′ represents a new pixel of the left image frame of the two panoramically-coordinated image frames stitched to each other; moreover, PL0 and PR represent the original pixel of the left image frame and the original pixel of the right image frame, respectively. In addition, WL means a left width of the image overlapping region, and WL0 represents a distance from a specific pixel in the left image frame to a left boundary of the left image frame. After the first image blending process is completed, a second image blending process is subsequently carried out by using a mathematical equation defined as follows:
In the above-presented mathematical equation, PR′ represents a new pixel of the right image frame of the two panoramically-coordinated image frames stitched to each other; moreover, PR0 and PL represent the original pixel of the right image frame and the original pixel of the left image frame, respectively. In addition, WR means a right width of the image overlapping region, and WR0 represents a distance from a specific pixel in the right image frame to a right boundary of the right image frame.
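Because the two blending equations themselves are not reproduced above, the following sketch shows a standard linear (alpha) blend across the overlap, which is one common way to realize such a pair of image blending processes; treating the two passes as a single symmetric distance-weighted sum is an assumption made for brevity.

```python
import numpy as np

def blend_overlap(left_cols, right_cols):
    """left_cols / right_cols: float arrays holding the overlap columns
    of the stitched left and right frames, same shape (H, W_over[, C])."""
    W_over = left_cols.shape[1]
    blended = np.empty_like(left_cols)
    for i in range(W_over):
        alpha = i / float(W_over - 1)  # 0 at the left edge, 1 at the right
        # the left frame dominates near its own boundary and fades out
        blended[:, i] = (1.0 - alpha) * left_cols[:, i] \
                        + alpha * right_cols[:, i]
    return blended
```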
After the image stitching process and the edge smoothing process are completed, the method proceeds to step (5) for treating the panoramic image frame with a display mode converting process, in order to make the panoramic image frame be shown on a display of the electronic device 3 in a specific display mode, such as a spherical panoramic display mode, a plain panoramic display mode, a fisheye panoramic display mode, a human-eye panoramic display mode, or a projection panoramic display mode. As such, the panoramic image frame can be shown on the display of the electronic device in the form of a sphere panorama, a plain panorama, a fisheye panorama, a human-eye panorama, or a projection panorama. As engineers skilled in the field of image processing know, the display mode converting process is completed by using a programmable image processor or a digital signal processor. Besides, the display mode converting process can also be completed by a programmable image processing library such as OpenGL® 1.5, DirectX®, or Shader Model 3.0 built into a display card of the electronic device 3.
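As an illustration of one such display mode conversion, the following sketch re-samples a sphere (equirectangular) panorama into a perspective, human-eye-like view using a pinhole model; the function name, output size, and viewing parameters are illustrative assumptions, and a real implementation would typically run on the GPU through a library such as OpenGL®.

```python
import numpy as np

def sphere_to_perspective(pano, out_w, out_h, view_fov_deg,
                          yaw=0.0, pitch=0.0):
    """Render a perspective view from an equirectangular panorama."""
    H, W = pano.shape[:2]
    # focal length of the pinhole model from the desired view FOV
    f = (out_w / 2.0) / np.tan(np.radians(view_fov_deg) / 2.0)
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                         np.arange(out_h) - out_h / 2.0)
    # unit ray direction for each output pixel (camera looks along +Z)
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # ray direction -> latitude-longitude on the sphere panorama
    theta = np.arctan2(dirs[..., 0], dirs[..., 2]) + yaw
    phi = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0)) + pitch
    # nearest-neighbour sampling of the equirectangular panorama
    px = (((theta / (2.0 * np.pi)) + 0.5) * (W - 1)).astype(int) % W
    py = np.clip((((phi / np.pi) + 0.5) * (H - 1)).astype(int), 0, H - 1)
    return pano[py, px]
```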
Referring to
Herein, it needs to be further explained that the above descriptions of the embodiment of the image processing method of the present invention are made by taking one L-frame wide-angle image and one R-frame wide-angle image as examples. However, the image processing method of the present invention can also be applied to process a video stream. For instance, after a plurality of panoramic image frames are obtained from step (5), a panoramic video can be produced by treating each of the panoramic image frames with a video coding process in a time series of the image frames.
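As one possible realization of that video coding step, the following sketch writes the panoramic image frames to a video file in capture order using OpenCV's VideoWriter; the codec, file name, and frame rate are illustrative choices.

```python
import cv2

def encode_panoramic_video(panoramic_frames, path="panorama.mp4", fps=30):
    """panoramic_frames: list of same-sized BGR frames in time series."""
    h, w = panoramic_frames[0].shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(path, fourcc, fps, (w, h))
    for frame in panoramic_frames:   # keep the original capture order
        writer.write(frame)
    writer.release()
```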
Therefore, through the above descriptions, the image processing method for immediately producing panoramic images provided by the present invention has been introduced completely and clearly; in summary, the present invention includes the advantages of:
(1) Differing from conventional image processing technologies, which cannot immediately produce 360-degree panoramic images, the present invention provides an image processing method for immediately producing panoramic images. In this method, two calibrated fisheye cameras are used for capturing video information; then, after the video information is treated with a video encoding process and a streaming process, the streaming video is transmitted to an electronic device in a wired or wireless way. Therefore, an image processing application program installed in the electronic device can be used for treating the streaming video with a video decoding process, a panoramic coordinates converting process, an image stitching process, and an edge-preserving smoothing process, so as to eventually show a sphere panorama on the display of the electronic device. Moreover, by using a programmable image processor or a digital signal processor, the image processing application program is able to show a plain panorama, a fisheye panorama, or a human-eye panorama on the display of the electronic device after treating the sphere panorama with a visual field converting process.
The above description is made on embodiments of the present invention. However, the embodiments are not intended to limit the scope of the present invention, and all equivalent implementations or alterations within the spirit of the present invention still fall within the scope of the present invention.