This application claims the benefit of Korean Patent Application No. 10-2008-0112825, filed on Nov. 13, 2008 in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference.
1. Field
One or more embodiments relate to a display apparatus and method that may display a high depth three-dimensional (3D) image, and more particularly, to a technology that may separate an image into a near-sighted image and a far-sighted image, output the near-sighted image using a light field method, and output the far-sighted image using a multi-view method, thereby preventing the image from being blurred or overlapped and outputting a high quality image.
2. Description of the Related Art
A three-dimensional (3D) display apparatus denotes an image display apparatus that may three-dimensionally display an image. In order to more realistically embody a 3D effect, the 3D display apparatus should sufficiently provide depth cues to make it possible for a user to perceive the 3D effect. This is different from a two-dimensional (2D) display apparatus. The depth cues may include a stereo disparity, a convergence, an accommodation, a motion parallax, and the like.
Representative methods for an auto-stereoscopic display apparatus, which does not use glasses, include a multi-view method and a light field method. However, when embodying a 3D image, the multi-view method may cause blurring of the image and visual fatigue when displaying a near-sighted image that is positioned between a display panel and a user. The light field method may blur the image when displaying a far-sighted image that is positioned behind the display panel.
Accordingly, there is a need for research on a 3D display technology that may overcome the limits of existing 3D display technologies and may prevent blurring or overlapping of an image, thereby enhancing image quality.
Additional aspects and/or advantages will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the invention.
According to an aspect of one or more embodiments, there is provided a display method, including: separating an input image into a near-sighted image and a far-sighted image; imaging the near-sighted image using a light field method and imaging the far-sighted image using a multi-view method; and weaving and outputting the imaged near-sighted image and the imaged far-sighted image.
In this instance, the method may further include extracting a depth of the input image to generate a depth map. The separating of the input image may include separating the input image into the near-sighted image and the far-sighted image based on the depth map.
Also, the separating of the input image may include separating, as the near-sighted image, an image that is positioned between a display panel and a user, and separating, as the far-sighted image, an image that is positioned behind the display panel.
Also, the method may further include performing an interpolation or an extrapolation for the input image, when a number of viewpoints of the input image is different from a number of viewpoints of an output image to be output.
Also, the method may further include: verifying a location of a user; and controlling a sweet spot of an output image to be output according to the location of the user.
According to another aspect of one or more embodiments, there is provided a display apparatus, including: an image separating unit to separate an input image into a near-sighted image and a far-sighted image; a near-sighted image imaging unit to image the near-sighted image using a light field method; a far-sighted image imaging unit to image the far-sighted image using a multi-view method; an image weaving unit to weave the imaged near-sighted image and the imaged far-sighted image; and an image output unit to output the weaved image.
In this instance, the display apparatus may further include a depth extraction unit to extract a depth of the input image to generate a depth map.
Also, the display apparatus may further include an image interpolation unit to perform an interpolation or an extrapolation for the input image, when a number of viewpoints of the input image is different from a number of viewpoints of an output image to be output.
Also, the display apparatus may further include: a location verification unit to verify a location of a user; and a control unit to control a sweet spot of an output image to be output according to the location of the user.
Additional aspects, features, and/or advantages of embodiments will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the disclosure.
These and/or other aspects and advantages will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below to explain the present disclosure by referring to the figures.
Referring to
In operation S120, the display method may image the near-sighted image using a light field method, and may image the far-sighted image using a multi-view method. Hereinafter, operation S120 will be further described in detail with reference to
Referring to
In operation S220, the display method may encode the far-sighted image to a perspective image.
Specifically, the display method may encode the near-sighted image to the orthogonal image to output the near-sighted image using the light field method. Also, the display method may encode the far-sighted image to the perspective image in order to output the far-sighted image using the multi-view method.
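The geometric difference between the two encodings can be illustrated with a small sketch. This is purely illustrative; the function, its parameters, and the geometry are assumptions for exposition, not taken from the embodiments. In an orthogonal (light field) encoding, every panel column emits a parallel ray for a given view, while in a perspective (multi-view) encoding, the rays of one view converge on a viewpoint located at a viewing distance in front of the panel.

```python
import math

def ray_angle(col, n_cols, view_angle, mode, view_distance=2.0):
    """Illustrative ray angle emitted by one panel column for one view.

    'orthogonal'  (light field): all columns emit parallel rays,
                  so the angle is identical for every column.
    'perspective' (multi-view): the rays of a view converge on a
                  viewpoint at view_distance in front of the panel.
    """
    if mode == "orthogonal":
        return view_angle
    # Column position in panel-width units, centered on the panel.
    x = (col / (n_cols - 1)) - 0.5
    return view_angle - math.atan2(x, view_distance)
```

For the center column the two encodings agree; toward the panel edges the perspective rays tilt inward toward the viewpoint, which is what allows the far-sighted content to be displayed without blurring under the multi-view method.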
Referring again to
Referring to
In operation S320, the display method may transfer the weaved image signal to a display panel to output an image.
Specifically, the display method may sequentially weave the near-sighted image and the far-sighted image, which are imaged to the orthogonal image and the perspective image respectively, into a single image, thereby making a single image frame. Accordingly, when the weaved image signal is transferred to a 3D display panel, it is possible to output an actual image.
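The weaving step can be sketched as a per-pixel composite of the two imaged layers into a single frame. This is a minimal sketch under the assumption that each layer marks positions it does not cover with None; the function name and data layout are illustrative, not from the embodiments.

```python
def weave_layers(near, far):
    """Weave two imaged layers into a single frame (illustrative sketch).

    Each layer is a 2D list of pixel values; positions a layer does not
    cover hold None. Near-layer pixels take precedence, since the
    near-sighted content lies in front of the panel.
    """
    return [[n if n is not None else f for n, f in zip(nrow, frow)]
            for nrow, frow in zip(near, far)]
```

For example, weaving a near layer `[[7, None]]` with a far layer `[[None, 3]]` yields the single frame `[[7, 3]]`.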
According to an embodiment, the display method may further include extracting a depth of the image to generate a depth map. For example, when the input image is received in a stereo format, a multi-view format, or the like, it is possible to extract a depth of the input image to generate the depth map. Accordingly, the input image is separated into a near-sighted image and a far-sighted image based on the generated depth map. When the input image is in a 3D format having color and depth information, the input image can be separated into the near-sighted image and the far-sighted image without generating the depth map.
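The depth-based separation described above can be sketched as a simple threshold against the panel plane. The function, the sign convention (negative depth meaning in front of the panel), and the use of None for uncovered positions are all assumptions made for this sketch.

```python
def separate_by_depth(pixels, depth_map, panel_depth=0.0):
    """Split an image into near- and far-sighted layers by depth.

    Depth values below panel_depth are treated as lying between the
    panel and the user (near-sighted); all others as lying behind the
    panel (far-sighted). Uncovered positions are marked with None.
    """
    near, far = [], []
    for prow, drow in zip(pixels, depth_map):
        near.append([p if d < panel_depth else None
                     for p, d in zip(prow, drow)])
        far.append([p if d >= panel_depth else None
                    for p, d in zip(prow, drow)])
    return near, far
```

Given pixels `[[1, 2]]` and a depth map `[[-0.5, 1.0]]`, the first pixel lands in the near layer and the second in the far layer.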
Also, according to an embodiment, when a number of viewpoints of the input image is different from a number of viewpoints of an output image to be output, the display method may further include performing an interpolation or an extrapolation for the input image. For example, when the input image is a 6-viewpoint image and the output image is a 24-viewpoint image, the display method may perform the interpolation or the extrapolation for the input image to output the input image as the 24-viewpoint image.
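The viewpoint resampling can be sketched with linear blending between neighbouring views. In this sketch each view is stood in for by a single number rather than a full image, and the function name is an assumption; a real implementation would blend whole images or synthesize intermediate views.

```python
def resample_views(views, n_out):
    """Resample a view sequence to n_out viewpoints (illustrative).

    Each view is represented by a single number standing in for a whole
    image; neighbouring views are blended linearly to interpolate the
    missing viewpoints.
    """
    n_in = len(views)
    out = []
    for i in range(n_out):
        t = i * (n_in - 1) / (n_out - 1)   # fractional source index
        lo = int(t)
        hi = min(lo + 1, n_in - 1)
        frac = t - lo
        out.append(views[lo] * (1 - frac) + views[hi] * frac)
    return out
```

For instance, resampling a 6-viewpoint input to 24 viewpoints preserves the first and last views and fills the rest by interpolation.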
Also, according to an embodiment, the display method may further include verifying a location of a user, and controlling a sweet spot of an output image to be output according to the location of the user. Here, the operation of controlling the sweet spot of the output image will be further described in detail with reference to
Referring to
As described above, each of a multi-view display method and a light field display method may adopt a different method to obtain and display an image. However, the multi-view display method and the light field display method may be embodied through the same display structure. Specifically, both methods may attach a lenticular lens onto a 2D display panel to thereby display a 3D image, or may be embodied in a form of a multi-projector. Accordingly, both the near-sighted image and the far-sighted image can be output cleanly by separating the input image into a multi-view image and a light field image according to a depth.
Also, according to an embodiment, the input image may be separated into the near-sighted image and the far-sighted image. The near-sighted image may be imaged and be output using the light field method. The far-sighted image may be imaged and be output using the multi-view method. Through this, the user may view the enhanced image without blurring or overlapping of the image.
Referring to
Referring to
In operation S620, it may be determined whether to display the 3D image in front of a display panel or behind the display panel. In operation S630, the 3D image may be separated into a near-sighted image and a far-sighted image.
In operation S640, the near-sighted image and the far-sighted image may be generated into a light field image and a multi-view image, respectively. Specifically, the near-sighted image may be encoded to an orthogonal image, and the far-sighted image may be encoded to a perspective image.
In operation S650, the encoded images may be sequentially weaved into a single image to thereby generate a single image frame. In operation S660, a final image signal where the near-sighted image and the far-sighted image are weaved may be transferred to a 3D display to thereby display an actual image.
Referring to
In operation S720, a color image containing color information and a depth image containing depth information may be extracted from the stereo image.
In operation S730, a near-sighted image and a far-sighted image may be separated using the color image and the depth image. In this instance, the near-sighted image and the far-sighted image may be separated depending on whether an image is output from a region located closer to the user relative to a display panel, or from a region located farther from the user relative to the display panel. For this, the near-sighted image or the far-sighted image may be separated by comparing an image value with a predetermined parameter value.
In operation S740, when the image is the near-sighted image, a light field image may be generated to output the near-sighted image using a light field method. Specifically, the near-sighted image may be encoded to an orthogonal image for the output of the light field method.
In operation S750, when the image is the far-sighted image, a multi-view image may be generated to output the far-sighted image using a multi-view method. Specifically, the far-sighted image may be encoded to a perspective image for the output of the multi-view method.
In operation S760, the imaged near-sighted image and the far-sighted image may be weaved to generate a single image frame.
As described above, according to an embodiment, since the near-sighted image and the far-sighted image that are imaged using respective different methods are weaved and thereby output, it is possible to clearly display both the near-sighted image and the far-sighted image, without causing blurring or overlapping of an image.
Referring to
The image separating unit 810 may separate an input image into a near-sighted image and a far-sighted image. The near-sighted image and the far-sighted image may be separated depending on an output location relative to a display unit, or may be determined through a comparison with a predetermined parameter value.
The near-sighted image imaging unit 820 may image the near-sighted image using a light field method. Accordingly, the near-sighted image may be encoded to an orthogonal image.
The far-sighted image imaging unit 830 may image the far-sighted image using a multi-view method. Accordingly, the far-sighted image may be encoded to a perspective image.
The image weaving unit 840 may weave the imaged near-sighted image and the far-sighted image. Specifically, the near-sighted image and the far-sighted image may be weaved to generate a single frame image.
The image output unit 850 may output the weaved image.
The depth extraction unit may extract a depth of the input image to generate a depth map. For example, when the input image is a stereo image or a multi-view image, the depth extraction unit may extract the depth to generate the depth map of the image.
When a number of viewpoints of the input image is different from a number of viewpoints of an output image to be output, the image interpolation unit may perform an interpolation or an extrapolation for the input image.
The location verification unit may verify a location of a user. The control unit may control a sweet spot of the output image according to the location of the user. For example, the control unit may change the sweet spot of the output image in correspondence to a location change according to a motion of the user.
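One simple way to realize such sweet-spot control is to rotate the repeating view-to-subpixel mapping as the tracked user moves sideways. This sketch, including the function name and the idea of expressing the user's lateral offset in view units, is an assumption for illustration rather than the apparatus's stated mechanism.

```python
def shift_sweet_spot(view_order, user_offset):
    """Rotate the view-to-subpixel mapping by user_offset views.

    When the tracked user moves sideways, rotating the repeating view
    sequence shifts the sweet spot so that the central views keep
    facing the user. user_offset is the lateral offset in view units.
    """
    k = user_offset % len(view_order)
    return view_order[k:] + view_order[:k]
```

For example, with a 4-view cycle `[0, 1, 2, 3]`, a one-view offset rotates the mapping to `[1, 2, 3, 0]`, steering the sweet spot to the user's new position.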
As described above, according to an embodiment, an input image may be separated into a near-sighted image and a far-sighted image. The near-sighted image and the far-sighted image may be imaged and output using different methods, respectively. Through this, it is possible to embody a 3D display apparatus that may prevent blurring or overlapping of an image and that enables the user to view the image in a relatively wider view range without feeling a visual fatigue.
The aforementioned display type or structure is only an example. Thus, when embodying a 3D display apparatus, there may be some difference. Specifically, a projector method may be used to embody a multi-view image and a light field image. Also, a micro array lens may be adopted instead of using a lenticular lens. Any modification found in embodying this display apparatus, or in generating the multi-view image and the light field image may be included in the spirit and scope of the embodiments.
The high depth 3D image display method according to the above-described example embodiments may be recorded in computer-readable media including program instructions to implement various operations embodied by a computer. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD ROM disks and DVDs; magneto-optical media such as optical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like. Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter. The described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described example embodiments, or vice versa.
Although a few embodiments have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the disclosure, the scope of which is defined by the claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---|
10-2008-0112825 | Nov 2008 | KR | national |