This application claims the priority benefit of Taiwan application serial no. 98120873, filed Jun. 22, 2009. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
1. Field of the Invention
The present invention relates to an image transformation method and more particularly to a method for obtaining a three dimensional (3D) image using a two dimensional (2D) image and a corresponding depth image.
2. Description of Related Art
In a 3D image display, a barrier and a view design are often used in combination with binocular parallax to enable the human eyes to perceive a 3D image.
In a conventional process of making a 3D image, images or views suitable for the left and the right eyes are read from a memory, and the two images are processed and outputted. However, such a method requires a significant amount of memory space and consumes more resources. Furthermore, the afore-mentioned images usually undergo a motion process that leaves holes at some of the pixel positions. The conventional technology interpolates these pixel holes by copy interpolation, which, however, generates less smooth and less natural images.
An exemplary embodiment of the present invention provides an image transformation method in which a 3D image is obtained according to a 2D image and a corresponding depth image.
An exemplary embodiment of the present invention provides an image transformation method adapted to an image display device. The image transformation method includes obtaining a 2D image and a corresponding depth image. According to the depth image and N gain values GMw, a motion process is performed on the 2D image to obtain N motion images, wherein a pixel motion value of each of the motion images varies with the corresponding gain value GMw, wherein N and w are positive integers and 1≦w≦N. A plurality of corresponding view images are obtained by respectively performing an interpolation process on the motion images. A synthesis process is performed on the view images to obtain a 3D image.
In one exemplary embodiment of the present invention, the afore-mentioned motion process calculates the pixel motion value in the motion images with the following formula:
Sw(i,j)=(D(i,j)/a)*GMw, wherein Sw(i,j) represents the pixel motion value of the wth motion image, (i,j) represents coordinates of a pixel at the ith column and jth row in the wth motion image, D(i,j) represents a pixel value of the depth image, a is a constant, and i and j are positive integers.
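The formula above can be sketched as follows. This is a minimal illustration, assuming an 8-bit depth image and integer truncation of D(i,j)/a (one quantization choice); the function name and sample values are illustrative, not from the specification.

```python
# Sketch of the pixel motion value formula S_w(i,j) = (D(i,j) / a) * GM_w,
# assuming D(i,j) is an 8-bit depth value and D(i,j)/a is truncated.

def motion_value(depth, a, gain):
    """Return the pixel motion value for one pixel.

    depth : D(i,j), the depth-image pixel value (e.g. 0..255)
    a     : quantization constant (integer greater than 0)
    gain  : GM_w, the gain value of the w-th motion image
    """
    return (depth // a) * gain  # integer truncation of D(i,j)/a

# Example: depth 130, a = 64, gain 1 -> (130 // 64) * 1 = 2
print(motion_value(130, 64, 1))
```

A larger gain value thus scales the same depth into a larger motion value, which is how the N motion images differ from one another.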
In one exemplary embodiment of the present invention, the afore-mentioned constant a is an integer greater than 0.
In one exemplary embodiment of the present invention, the afore-mentioned plurality of motion images are obtained by moving pixel positions in the 2D image according to the corresponding pixel motion value Sw(i,j).
In one exemplary embodiment of the present invention, the afore-mentioned pixel motion value Sw(i,j) represents an amount of motion of the pixel to the left or the right.
In one exemplary embodiment of the present invention, the above-mentioned step of respectively performing the interpolation process on the motion images to obtain a plurality of corresponding view images includes selecting an average of a plurality of pixels adjacent to the pixel hole to interpolate the pixel hole according to a pixel motion direction of each motion image.
In one exemplary embodiment of the present invention, the above-mentioned step of respectively performing the interpolation process on the motion images to obtain a plurality of corresponding view images includes selecting a median of a plurality of pixels adjacent to the pixel hole to interpolate the pixel hole according to a pixel motion direction of each motion image.
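The two interpolation variants above (average and median of adjacent pixels, selected according to the pixel motion direction) can be sketched as follows. The neighbour count of 2, the side from which neighbours are taken, and the function name are illustrative assumptions, not details fixed by the specification.

```python
# Sketch of interpolating one pixel hole from adjacent pixels, using either
# the average or the median of n neighbours; which side the neighbours come
# from follows the pixel motion direction (an illustrative assumption).
from statistics import median

def fill_hole(row, j, direction="right", mode="average", n=2):
    """Interpolate the hole at row[j] from n adjacent defined pixels."""
    if direction == "right":
        candidates = range(j - 1, j - 1 - n, -1)   # neighbours to the left
    else:
        candidates = range(j + 1, j + 1 + n)       # neighbours to the right
    neighbours = [row[k] for k in candidates
                  if 0 <= k < len(row) and row[k] is not None]
    if mode == "average":
        return sum(neighbours) // len(neighbours)
    return int(median(neighbours))

row = [10, 20, None, 40]           # None marks the pixel hole
print(fill_hole(row, 2, "right"))  # average of 20 and 10 -> 15
```

Averaging blends neighbouring intensities, which is what makes the resulting view images smoother than the copy interpolation of the conventional technology.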
In one exemplary embodiment of the present invention, the above-mentioned step of performing a synthesis process on the view images to obtain a 3D image includes displaying the view images on the display device according to the pixel positions corresponding to the view images.
In one exemplary embodiment of the present invention, a computer program product is provided which, when loaded into a computer, executes the following steps. A 2D image and a corresponding depth image are obtained. According to the depth image and N gain values GMw, a motion process is performed on the 2D image to obtain N motion images, wherein a pixel motion value of each of the motion images varies with the corresponding gain value GMw, wherein N and w are positive integers and 1≦w≦N. A plurality of corresponding view images are obtained by respectively performing an interpolation process on the motion images. A synthesis process is performed on the view images to obtain a 3D image.
According to the above, the present invention provides an image transformation method in which a 3D image is obtained according to a 2D image and a corresponding depth image. In addition to obtaining a plurality of smooth and natural view images which are synthesized as a 3D image, the present invention also conserves memory usage. The image transformation method of the present invention is applicable for image display devices, computer accessible recording media, computer program products, or embedded systems.
In order to make the aforementioned and other features and advantages of the present invention more comprehensible, several embodiments accompanied with figures are described in detail below.
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
In step S302, using the depth image and the N gain values GMw, the motion process is performed on the 2D image to obtain the N motion images, wherein the pixel motion value of each of the N motion images varies with the corresponding gain value GMw, wherein N and w are positive integers and 1≦w≦N. In detail, Sw(i,j) represents the pixel motion value of the pixel at the ith column and jth row in the wth motion image. Furthermore, p(i,j) and mw(i,j) respectively represent the pixel values of the pixels at the ith column and jth row of the 2D image 401 and the first motion image 403, (i,j) are the corresponding coordinates, w=1, and i and j are positive integers. As such, a motion process is performed on the pixel value p(i,j) according to the pixel motion value Sw(i,j) to obtain the pixel value mw(i,j), wherein the pixel motion value Sw(i,j) varies with the gain value GMw. In other words, the N motion images are obtained by moving pixel positions in the 2D image according to the corresponding pixel motion value Sw(i,j). For example, in order to obtain the first motion image 403, assume the pixel motion value S1(11,12)=2 (S1(11,12) corresponds to the pixel value p(11,12)); the pixel value p(11,12) is then shifted to the right by two pixels. That is to say, after the motion process, the pixel value m1(13,12) is the pixel value p(11,12). Furthermore, if a rightward shift is adopted as the pixel motion direction, then mw(i+Sw(i,j),j)=p(i,j), i.e. the pixel motion value Sw(i,j) represents an amount of motion of the pixel to the right.
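The rightward shift mw(i+Sw(i,j),j)=p(i,j) described above can be sketched for a single image row. This is a one-dimensional simplification for illustration; positions that receive no pixel stay undefined and are exactly the pixel holes discussed later.

```python
# Minimal sketch of the motion process for one image row, assuming a
# rightward shift direction: m_w(i + S_w(i,j), j) = p(i, j).
# Positions that receive no pixel remain None and are the pixel holes.

def motion_row(pixels, shifts):
    """Shift each pixel right by its motion value; return the motion row."""
    out = [None] * len(pixels)   # None marks an undefined pixel
    for i, p in enumerate(pixels):
        dest = i + shifts[i]     # i + S_w(i, j)
        if 0 <= dest < len(out):
            out[dest] = p        # a later pixel may overwrite (occlusion)
    return out

row    = [11, 22, 33, 44, 55]
shifts = [0, 0, 2, 2, 2]         # S_w values, e.g. from (D(i,j)//a) * GM_w
print(motion_row(row, shifts))   # [11, 22, None, None, 33]
```

Note how the pixels with larger motion values move away from those with smaller ones, opening the holes that the interpolation process of step S303 must fill.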
Furthermore, the pixel motion value Sw(i,j) is obtained according to the depth image 402 and the gain value GMw, wherein Sw(i,j)=(D(i,j)/a)*GMw, and the constant a and the gain value GMw may be chosen based on design experience.
On the other hand, the motion process is performed on the 2D image 401 according to the depth value D(i,j) and the gain value GM2, i.e. according to the pixel motion value S2(i,j)=(D(i,j)/a)*GM2, to obtain the second motion image 405.
The constant a is used to quantize the depth value D(i,j), and may be, for example, a positive integer. Assume the depth value D(i,j) is an integer gray value, 0≦D(i,j)≦255, and a=64; then 0≦D(i,j)/a<4. Furthermore, if D(i,j)/a is truncated, the result is an integer greater than or equal to 0 and smaller than or equal to 3. According to the above, the constant a may be used to adjust the motion distance given by the pixel motion value Sw(i,j).
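The quantization described above can be checked directly. Under the stated assumptions (8-bit gray depth values and a=64), the truncated quotient takes only the integer values 0 through 3, so the constant a caps how far any pixel can move before the gain is applied.

```python
# Verifying the quantization example: with 0 <= D(i,j) <= 255 and a = 64,
# the truncated quotient D(i,j)//a is an integer in the range 0..3.
a = 64
levels = sorted({d // a for d in range(256)})
print(levels)  # [0, 1, 2, 3]
```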
The motion images after the motion process may have pixel holes. For example, the first motion image 403 may have a pixel hole 410, i.e. the diagonal lines in the first motion image 403. Similarly, the diagonal lines in the second motion image 405 are also pixel holes. Specifically, in the motion process in step S302, not all pixel values of the first motion image 403 are obtained through the pixel motion value Sw(i,j) of the 2D image 401. In other words, pixel values at some positions in the first motion image 403 are not defined and the positions of these undefined pixel values are pixel holes. To interpolate the pixel holes, in step S303, a plurality of corresponding view images are obtained by performing an interpolation process on the motion images.
According to the above interpolation, the first view image 404 is obtained.
In step S304, the synthesis process is performed on the view images to obtain the 3D image, wherein the synthesis process is designed in accordance with the panel.
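One common way a barrier panel arranges view images is column interleaving, and the synthesis of step S304 can be sketched on that assumption. The assignment of view w to column i mod N is purely illustrative; as stated above, the actual mapping is designed in accordance with the panel.

```python
# Hedged sketch of one possible synthesis process: column-interleaving the
# view images, a common layout for barrier panels.  Mapping view (i % N) to
# column i is an illustrative assumption; the real mapping depends on the
# panel design.

def synthesize_row(view_rows):
    """Interleave one row of N view images column by column."""
    n = len(view_rows)
    width = len(view_rows[0])
    return [view_rows[i % n][i] for i in range(width)]

left  = ["L0", "L1", "L2", "L3"]
right = ["R0", "R1", "R2", "R3"]
print(synthesize_row([left, right]))  # ['L0', 'R1', 'L2', 'R3']
```

With a barrier in front of such interleaved columns, each eye sees only its own view's columns, producing the binocular parallax described in the related art.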
Although the above image transformation method adopts two view images, embodiments of the present invention may also adopt six view images.
On the other hand, according to the image transformation method of the present embodiment, subsequently in step S303, the interpolation process is performed on the above motion images to obtain the plurality of view images. For example, the interpolation process may adopt the average interpolation on six pixel values. As such, the average interpolation of six pixel values is performed on each of the motion images.
According to the image transformation method of the present embodiment, subsequently in step S304, the synthesis process is performed on the view images to obtain the 3D image. In other words, the synthesis process is performed on the view images obtained above.
According to the above, the view images obtained in the embodiments of the present invention are smoother and more natural than the view images obtained by conventional interpolation. Moreover, the embodiments of the present invention also have the advantage of conserving memory usage. In a conventional process of making a 3D image, images or views suitable for the left and the right eyes are read from a memory, and the two images are processed and outputted. In contrast, according to the descriptions in the above embodiments, a 2D image and a depth image are received from a memory and a motion process is used to obtain a plurality of motion images for subsequent processes. As such, when the panel adopts more view images, e.g. six view images, six images suitable for the left eye and six images suitable for the right eye need to be read in the conventional method for the subsequent synthesis process. Compared to the conventional technology, in the above embodiments, only a 2D image and a depth image need to be received from the memory, and a motion process is used to obtain six motion images for subsequent processes. Interpolation is then performed on the six motion images to obtain six view images, wherein, when each view image is generated, its data overwrites the corresponding motion image, thereby reducing memory usage. Moreover, in the above embodiments, the required memory space does not increase as the panel uses more view images.
According to the above, the present invention provides an image transformation method in which a 3D image is obtained according to a 2D image and a corresponding depth image. First, by using the depth image and a plurality of gain values, a motion process is performed on the 2D image to obtain a plurality of motion images. Subsequently, an interpolation process is performed on the motion images to thereby obtain a plurality of smooth and natural view images in addition to reducing memory usage. Finally, a synthesis process is performed on the view images to obtain a 3D image. The image transformation method of the present invention is applicable for image display devices, computer accessible recording media, computer program products, or embedded systems.
Although the present invention has been described with reference to the above embodiments, it will be apparent to one of ordinary skill in the art that modifications to the described embodiments may be made without departing from the spirit of the invention. Accordingly, the scope of the invention is defined by the attached claims rather than by the above detailed descriptions.
Number | Date | Country | |
---|---|---|---|
20100322535 A1 | Dec 2010 | US |