The present invention relates to a method and system for rendering a stereoscopic view. Such a method may be applied to stereoscopy in general and, more particularly, to the design and presentation of three-dimensional graphical user interfaces (GUIs).
PCT application WO2010/046824 relates to a system and method of processing an input three-dimensional video signal comprising multiple views by determining a far disparity estimate and a near disparity estimate, adapting the input signal by shifting it by a disparity shift, and generating an overlay within the spatial region of the shifted signal. It addresses the problem of mitigating the disadvantages of having encoded the disparity relationship in stereo or multiview content for three-dimensional displays when applying overlays. The source data includes disparity information.
It is known from U.S. Pat. No. 6,441,815, “Method and system for high performance computer-generated virtual environments”, that a stereoscopic system may employ a stereo pair of images as texture maps and apply them to polygons situated in a given scene, i.e. at least two polygons: one for the left eye and one for the right eye. The image drawn for the right eye of the viewer uses the right-eye image from the stereo pair applied to a polygon beyond the portal frame. The left eye sees the identical environment from its own perspective, and the left-eye image from the stereo pair is applied to the polygon beyond the portal. Such a solution achieves improved quality of rendered objects, which gain depth in a three-dimensional scene. Additionally, it defines a two-step scene generation process comprising pre-rendering and final rendering.
Experiments have shown that in special circumstances where pre-rendering of textures is utilized (for example photographs, computer graphics images or live video, as defined in U.S. Pat. No. 6,441,815), scene objects may be situated such that, despite their specific placement in a pre-rendered scene, the final rendering phase results in a different apparent placement of the objects. Such a situation will be referred to herein as objects mixup. For example, objects mixup may refer to a situation in which an object A intended to be placed behind an object B is seen in the stereoscopic viewing system as being in front of the object B. This is a result of conflicting instructions received by the human brain with respect to depth cues present in the particular three-dimensional scene.
The experiments underlying the present invention have shown that objects mixup may be caused by texture parallax, namely a situation in which the right-eye texture and the left-eye texture of the object are different and exhibit a parallax, i.e. a given characteristic point of the object is present at different positions in the left-eye texture and in the right-eye texture. The parallax depends on the distance between the object and the cameras, as well as on the camera settings: their spread, toe-in angle, etc.
The technical problem of the prior art is that the known three-dimensional content presentation methods have no mechanisms to prevent objects mixup when the textures of an object have a parallax. U.S. Pat. No. 6,441,815 does not teach that texture parallax would lead to perceived objects mixup.
No other known documents disclose the above-mentioned objects mixup effect in a stereoscopic image, and they therefore do not anticipate the problem on which the present invention is based.
The present invention aims to provide a method and system for rendering a stereoscopic view which avoid perceived objects mixup when object textures have a parallax.
The object of the invention is a method for rendering a stereoscopic view, the method comprising the steps of: defining a stereoscopic scene view comprising a representation of a scene object; receiving a left-eye texture and a right-eye texture for the scene object; and generating a left-eye scene view comprising the left-eye texture applied to the representation of the scene object and a right-eye scene view comprising the right-eye texture applied to the representation of the scene object. The method further comprises the steps of, prior to generating the left-eye scene view and the right-eye scene view, determining a value of the texture parallax between the left-eye texture and the right-eye texture and offsetting the left-eye texture and the right-eye texture by half of the value of the texture parallax in opposite directions, such as to provide an offset left-eye texture and an offset right-eye texture with a texture parallax equal to zero.
Determining the value of the texture parallax and offsetting the left-eye texture and the right-eye texture can be performed at a pre-rendering stage to provide the offset left-eye texture and the offset right-eye texture to be received for the scene object at the final rendering stage.
The value of the texture parallax can be determined at a pre-rendering stage, to provide the left-eye texture, the right-eye texture and the value of the texture parallax to be received for the scene object at the final rendering stage.
The value of the texture parallax can be determined at a pre-rendering stage as a function of the relative positioning of the cameras, the scene object and the screen.
The region of the offset texture outside the original texture region can be filled with the screen image visible at the pre-rendering stage.
The value of the texture parallax of the textures received at the final rendering stage can be determined by image analysis.
The region of the offset texture outside the original texture region can be made transparent.
The value of the texture parallax can be determined as a difference between the position of a reference point of the scene object on the right-eye texture and its position on the left-eye texture.
The value of the texture parallax can be determined as a minimum, maximum, average or median value of the differences between the positions of a plurality of reference points of the scene object on the right-eye texture and on the left-eye texture.
Another object of the invention is a computer program comprising program code means for performing all the steps of the method of the invention when said program is run on a computer, as well as a computer-readable medium storing computer-executable instructions that perform all the steps of the computer-implemented method of the invention when executed on a computer.
The object of the invention is also a system for rendering a stereoscopic view, the system comprising a scene composer configured to define a stereoscopic scene view comprising a representation of a scene object, and a scene generator configured to generate a left-eye scene view comprising a left-eye texture applied to the representation of the scene object and a right-eye scene view comprising a right-eye texture applied to the representation of the scene object. The system further comprises a texture recenterer configured to determine a value of the texture parallax between the left-eye texture and the right-eye texture, to offset the left-eye texture and the right-eye texture by half of the value of the texture parallax in opposite directions such as to provide an offset left-eye texture and an offset right-eye texture with a texture parallax equal to zero, and to provide the offset textures to the scene generator.
The texture recenterer and the scene generator may form a part of a final rendering subsystem.
The texture recenterer may form a part of a pre-rendering subsystem.
The invention will now be described with reference to the accompanying drawings.
In a stereoscopic environment there is provided a separate scene object and a separate texture for each eye. The scene view 102 received by the right camera 111 comprises a representation of the object 112 in place 101, whereas the scene view 102 received by the left camera 106 comprises a representation of the object 112 in place 107.

As can be seen in FIG. 1, the left camera 106 has a field of view defined by lines 103 and 105 and the right camera 111 has a field of view defined by lines 108 and 110. The cameras 106, 111 view the object 112 from different angles 104 and 109, which results, in the present case, in a negative object parallax 113 (defined as the difference between the position of the object on the right-eye scene 101 and on the left-eye scene 107) and in that the object 112 is seen as being in front of the screen plane 102. Such a camera arrangement is used to capture texture images of the object 112 for both eyes. The left camera 106 will capture a texture for the left eye, while the right camera 111 will capture a texture for the right eye. These textures will later be used to generate stereoscopic imagery at an image generating device. The image generating device will utilize pre-generated textures, i.e. textures captured or pre-rendered prior to the image generation process. Hence, the full scene generation process is a two-step procedure comprising the steps of pre-rendering and final rendering.
Both cameras see an identical environment from different perspectives. In order to extract textures, according to a typical method known from the prior art, the same part of the environment is selected from the full scene, as seen in FIG. 2.
As can be seen in FIG. 4, the representations of the scene object 403, 404 appear at different positions in the two textures, i.e. the extracted textures have a texture parallax.
The prior art systems recognized the improvement in quality obtained when using textures with parallax, but failed to identify that in certain situations the texture parallax causes problems of significant relevance in the development of stereoscopic imagery.
The present invention utilizes a concept, hereinafter called recentering, to compensate for potential object parallax when object textures have a texture parallax. Recentering limits contradictory depth cues throughout the whole image, irrespective of scene object placement or grouping.
In a standard method for rendering a stereoscopic view, a stereoscopic scene view is defined with a representation of a scene object. Next, two views are generated: a left-eye scene view comprising a left-eye texture applied to the representation of the scene object, and a right-eye scene view comprising a right-eye texture applied to the representation of the scene object.
The present method enhances the standard rendering method in that, prior to generating the left-eye scene view and the right-eye scene view, the textures are recentered. Recentering involves determining a value of the texture parallax between the left-eye texture and the right-eye texture and offsetting the textures by half of that value in opposite directions, such as to provide textures with no texture parallax, as shown in FIG. 6.
The object 603 is positioned relative to the screen 604 as in FIG. 1.
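By way of illustration only, the recentering step may be sketched as follows in Python; the function names and the use of RGBA pixel arrays with an integer parallax in pixels are assumptions of this example rather than features of the claimed method.

```python
# A minimal sketch of recentering, assuming numpy RGBA textures and an
# integer texture parallax measured in pixels.
import numpy as np

def shift_horizontally(texture: np.ndarray, dx: int) -> np.ndarray:
    """Return a copy of an H x W x 4 RGBA texture shifted by dx pixels.

    The region uncovered by the shift is left fully transparent
    (alpha = 0), so the background can show through at final rendering.
    """
    out = np.zeros_like(texture)
    w = texture.shape[1]
    if dx >= 0:
        out[:, dx:] = texture[:, :w - dx]
    else:
        out[:, :w + dx] = texture[:, -dx:]
    return out

def recenter(left: np.ndarray, right: np.ndarray, parallax: int):
    """Offset both eye textures by half the texture parallax, in opposite
    directions, so that the offset pair has a texture parallax of zero.

    The texture parallax is the position of a reference point on the
    right-eye texture minus its position on the left-eye texture, so
    shifting the left texture by +parallax/2 and the right texture by
    -parallax/2 cancels it. Odd values are split so that the two shifts
    still differ by exactly `parallax`.
    """
    half = parallax // 2
    return (shift_horizontally(left, half),
            shift_horizontally(right, half - parallax))
```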
An exemplary practical implementation may apply to a situation in which a textures supplier first renders textures for a defined environment. Later, a GUI designer may design a scene in which stereoscopic objects are placed. It will often be the case that the object used for texture pre-rendering is not situated in the same place in the designed scene. Therefore, a further step is necessary to prevent objects from being rendered such that they appear in the scene at locations other than those the GUI designer selected.
The texture parallax can be determined either by image analysis or by calculations at the pre-rendering stage, when the positioning of the object with respect to the screen and the cameras is known.
The following equation can be used:
d/D=IPD/(IPD−img)
wherein:
d is the distance of the cameras from the object,
D is the distance of the cameras from the screen,
IPD is the spacing between the cameras, and
img is the texture parallax.
Solving this equation for the texture parallax gives:
img=IPD−IPD*D/d.
The distance of the cameras from the object may be defined in various ways, as the object is not a single point and may extend along the depth coordinate. For example, the distance may be calculated to the closest point of the object, to the farthest point of the object or to the middle point of the object. The choice is made by the scene designer.
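For illustration, the above equation may be evaluated directly at the pre-rendering stage. The following sketch assumes consistent length units; the function, parameter and convention names are assumptions of this example.

```python
# A minimal sketch of computing the texture parallax img = IPD - IPD * D / d
# at the pre-rendering stage, with the reference distance d taken to the
# closest, farthest or middle point of the object, as chosen by the designer.

def texture_parallax(ipd: float, screen_distance: float,
                     object_near: float, object_far: float,
                     convention: str = "middle") -> float:
    if convention == "closest":
        d = object_near
    elif convention == "farthest":
        d = object_far
    else:  # "middle"
        d = (object_near + object_far) / 2.0
    return ipd - ipd * screen_distance / d

# Example: cameras spaced 0.065 m apart, screen 3 m away, object spanning
# 1.5 m to 2.5 m from the cameras. With d = 2 m < D = 3 m the parallax is
# negative, i.e. the object is seen in front of the screen plane.
print(texture_parallax(0.065, 3.0, 1.5, 2.5))  # 0.065 * (1 - 3/2) = -0.0325
```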
After the texture parallax is calculated, the textures may be provided to the rendering engine together with the value of the texture parallax. Alternatively, the textures may be offset in the pre-rendering stage so as to obtain textures with no parallax and the offset textures may be provided to the rendering engine.
Another method may estimate or compute the texture parallax value by analyzing the pre-generated textures. The final rendering engine will then offset the textures by half of the texture parallax prior to rendering the scene.
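One conventional way to carry out such an analysis is a block-matching search, sketched below on grayscale textures using the sum of absolute differences; this is an example of one possible technique, not an algorithm prescribed by the invention, and all names are illustrative.

```python
# A minimal sketch of estimating the texture parallax of a pre-generated
# texture pair by image analysis: a central block of the left-eye texture
# is matched against horizontal shifts of the right-eye texture.
import numpy as np

def estimate_texture_parallax(left: np.ndarray, right: np.ndarray,
                              max_shift: int = 64) -> int:
    """Return the horizontal shift of the right-eye texture relative to
    the left-eye texture (the texture parallax, in pixels) minimizing the
    sum of absolute differences (SAD) over a central reference block."""
    h, w = left.shape
    y0, y1 = h // 4, 3 * h // 4          # central reference block
    x0, x1 = w // 4, 3 * w // 4
    block = left[y0:y1, x0:x1].astype(np.float64)

    best_shift, best_sad = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if x0 + s < 0 or x1 + s > w:
            continue  # shifted block would fall outside the texture
        candidate = right[y0:y1, x0 + s:x1 + s].astype(np.float64)
        sad = np.abs(candidate - block).sum()
        if sad < best_sad:
            best_sad, best_shift = sad, s
    return best_shift
```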
It is to be noted that the pre-rendering and final rendering stages can be performed either on one device or on different devices. For example, pre-rendering can be executed on a personal computer or workstation, while the final rendering can be executed on a set-top box, a smartphone, a television set or a tablet device.
When the textures are offset at the pre-rendering stage, the region 605, 606 of the offset texture outside the original texture region 203, 206 can be filled with the screen image visible at the pre-rendering stage.
When the textures are offset at the final rendering stage, the region 605, 606 of the offset texture outside the original texture region 203, 206 can be made transparent such as to be filled with the background image.
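Both fill strategies may be illustrated by a single variant of the offsetting operation; the function name and the optional screen_image argument are assumptions of this example.

```python
# A minimal sketch of the two fill strategies for the region of the offset
# texture lying outside the original texture region: filled from the screen
# image at the pre-rendering stage, or left transparent at the final
# rendering stage so the background shows through.
from typing import Optional

import numpy as np

def shift_with_fill(texture: np.ndarray, dx: int,
                    screen_image: Optional[np.ndarray] = None) -> np.ndarray:
    # Pre-rendering stage: start from the screen image visible behind the
    # texture (same H x W x 4 shape assumed). Final rendering stage: start
    # from a fully transparent canvas instead.
    out = (screen_image.copy() if screen_image is not None
           else np.zeros_like(texture))
    w = texture.shape[1]
    if dx >= 0:
        out[:, dx:] = texture[:, :w - dx]
    else:
        out[:, :w + dx] = texture[:, -dx:]
    return out
```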
In another embodiment, when a depth map for the stereoscopic texture is available, the values of the parallaxes of each point of the texture may easily be computed from the aforementioned equation d/D=IPD/(IPD−img), wherein d is the distance to the object, D is the distance to the screen, IPD is the interpupillary distance and img is the stereoscopic parallax. If the stereoscopic textures are created in a controlled environment, for instance when they are rendered by means of computer-generated imagery, the exact values of the stereoscopic parallaxes may be known a priori.
Given the values of the parallaxes at various points of the stereoscopic texture, the reference texture parallax value used for offsetting the textures to perform recentering may be chosen as the minimal parallax of the image, the maximal parallax, the average parallax, the median parallax, or by any other convention.
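Assuming a depth map expressed in the same length units as the camera and screen distances, these two steps may be sketched as follows; the convention names are illustrative only.

```python
# A minimal sketch: per-point parallaxes computed from a depth map via
# img = IPD - IPD * D / d, then reduced to a single reference texture
# parallax by a chosen convention (minimum, maximum, average or median).
import numpy as np

def parallax_map(depth: np.ndarray, ipd: float,
                 screen_distance: float) -> np.ndarray:
    """Per-point parallax from a depth map of (strictly positive)
    distances d between the cameras and each imaged point."""
    return ipd - ipd * screen_distance / depth

def reference_parallax(parallaxes: np.ndarray,
                       convention: str = "median") -> float:
    """Reduce a parallax map to the single value used for recentering."""
    reducers = {"min": np.min, "max": np.max,
                "average": np.mean, "median": np.median}
    return float(reducers[convention](parallaxes))
```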
Although determining stereoscopic parallaxes on a stereoscopic texture is a non-trivial problem, there are various solutions for depth map estimation and stereo matching. Examples of such methods are presented in US2003/0231792A1, “System and Method for Progressive Stereo Matching of Digital Images”; U.S. Pat. No. 5,577,130, “Method and Apparatus for Determining the Distance between an Image and an Object”; and U.S. Pat. No. 7,333,652B2, “System and Method for Efficiently Performing a Depth Map Recovery Procedure”. Further information may be found in: Gaël Sourimant, “Depth maps estimation and use for 3DTV”, Technical Report No. 0379, INRIA Rennes Bretagne Atlantique, Rennes, France, February 2010.
It can be easily recognized by one skilled in the art that the aforementioned method for rendering a stereoscopic view may be performed and/or controlled by one or more computer programs. Such computer programs are typically executed by utilizing the computing resources of a computing device, such as a personal computer, a personal digital assistant, a cellular telephone, a receiver or decoder of digital television, or the like. Applications are stored in non-volatile memory (for example flash memory) or volatile memory (for example RAM) and are executed by a processor. The stereoscopic view generating device according to the present invention, i.e. a TV set or a set-top box, optionally comprises such a memory. These memories are exemplary recording media for storing computer programs comprising computer-executable instructions performing all the steps of the computer-implemented method according to the technical concept presented herein.
The present invention may be implemented using any display, for example a computer monitor, a television display or a stereoscopic projector.
While the invention presented herein has been depicted, described, and defined with reference to particular preferred embodiments, such references and examples of implementation in the foregoing specification do not imply any limitation on the invention. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader scope of the technical concept. The presented preferred embodiments are exemplary only and are not exhaustive of the scope of the technical concept presented herein. Accordingly, the scope of protection is not limited to the preferred embodiments described in the specification, but is limited only by the claims that follow.