This application claims the benefit, under 35 U.S.C. § 365, of International Application PCT/EP2018/064510, filed Jun. 1, 2018, which was published in accordance with PCT Article 21(2) on Dec. 20, 2018, in English, and which claims the benefit of European Patent Application No. 17305712.6, filed Jun. 12, 2017.
The present disclosure relates to the rendering of multi view content. Particularly, but not exclusively, the present disclosure is directed to the rendering of multi view multimedia content on a display screen depending on the user's position.
This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present disclosure that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Multi view content (so-called Light Field content), whether a still image or a video, can be obtained by a Light Field acquisition system, such as a fixed camera array composed of a plurality of distinct cameras, or a plenoptic camera formed by an array of micro-lenses placed in front of an image sensor. For each frame of a Light Field video or each Light Field image, the Light Field acquisition system is able to provide a set of multi-views, a set of depth maps and the associated system acquisition parameters.
When rendering Light Field content on a conventional display screen (e.g. a 2D TV screen), a user can benefit from the parallax capability offered by the Light Field content, which provides a parallax rendering effect and a Virtual Reality (VR) experience. The parallax effect gives a sense of depth and makes the user feel the volume of objects or characters of a scene.
Depending on the Light Field acquisition system and the user's position (in particular the position of the head or eyes), the scene coverage can have limitations that lead the user to perceive holes or black surfaces at the edges of the screen.
The present disclosure has been devised with the foregoing in mind.
In a general form, the disclosure concerns a method configured to be associated with the display of a multi view content on a display screen depending on a position of a user's head, wherein said method comprises:
In an embodiment, the positioning zone and the triggering area can both have a pyramidal shape.
In an embodiment, the one or more incentive effects can comprise at least one of:
In an embodiment, the darkening effect can increase when the angle of view associated with the user's head position located within the triggering area increases.
In an embodiment, the darkening effect can increase linearly with the angle of view of the user's head position.
In an embodiment, the parallax intensity effect can decrease the speed of movement of elements appearing in the multi view content displayed on the screen, when the angle of view associated with the user's head position located within the triggering area increases.
In an embodiment, the one or more incentive effects can be reversible.
In an embodiment, the multi view content having been acquired by an acquisition device, the positioning zone can be established based on one or more obtained acquisition parameters of the acquisition device and one or more obtained parameters of the display screen.
In an embodiment, the pyramidal shape of the positioning zone can be defined by a horizontal angle of view of the acquisition device and a vertical angle of view of the acquisition device.
In an embodiment, the pyramidal shape can be centered with respect to the display screen.
In an embodiment, the positioning zone can be defined by a minimum distance from the display screen.
In an embodiment, said minimum distance from the display screen can correspond to the maximum distance between:
In an embodiment, the horizontal minimum distance can be obtained from the following equation:
with w_screen the width of the display screen, and α the horizontal angle of view of the acquisition device.
In an embodiment, the vertical minimum distance can be obtained from the following equation:
with h_screen the height of the display screen, and β the vertical angle of view of the acquisition device.
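The equations themselves are not reproduced in this text; a plausible form, assuming the apex of the pyramidal positioning zone lies on the screen's central axis and the pyramid must just span the screen, is:

$$
z_{\min,h} = \frac{w_{screen}}{2\,\tan(\alpha/2)}, \qquad
z_{\min,v} = \frac{h_{screen}}{2\,\tan(\beta/2)}
$$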
In an embodiment, the positioning zone can be defined by a maximum distance from the display screen.
In an embodiment, said maximum distance can be obtained from a minimum height and a maximum height between which the user's gaze can be located.
In an embodiment, said maximum distance can be obtained from an intersection of the pyramidal shape and a horizontal band defined by said minimum and maximum heights.
The present disclosure further concerns an apparatus adapted for providing information to a user observing a multi view content displayed on a screen according to the user's head position,
wherein it comprises at least one memory and at least one processing circuitry configured to:
The present disclosure also concerns an apparatus adapted for providing information to a user observing a multi view content displayed on a screen according to the user's head position, wherein it comprises:
In an embodiment, the positioning zone and the triggering area can both have a pyramidal shape.
In an embodiment, the one or more incentive effects can comprise at least one of:
In an embodiment, the darkening effect can decrease the brightness of the display screen when the angle of view associated with the user's head position located within the triggering area increases, and conversely.
In an embodiment, the parallax intensity effect can decrease the speed of movement of elements appearing in the multi view content displayed on the screen, when the angle of view associated with the user's head position located within the triggering area increases.
In an embodiment, the one or more incentive effects can be reversible.
In an embodiment, the apparatus can be configured to display the positioning zone and/or the triggering area.
Besides, the present disclosure is further directed to a non-transitory program storage device, readable by a computer, tangibly embodying a program of instructions executable by the computer to perform a method configured to be associated with the display of a multi view content on a display screen depending on a position of a user's head, wherein said method comprises:
The present disclosure also concerns a computer program product which is stored on a non-transitory computer readable medium and comprises program code instructions executable by a processor for implementing a method configured to be associated with the display of a multi view content on a display screen depending on a position of a user's head,
wherein said method comprises:
The method according to the disclosure may be implemented in software on a programmable apparatus. It may also be implemented solely in hardware, solely in software, or in a combination thereof.
Some processes implemented by elements of the present disclosure may be computer implemented. Accordingly, such elements may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as “circuit”, “module” or “system”. Furthermore, such elements may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since elements of the present disclosure can be implemented in software, the present disclosure can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like.
The disclosure thus provides a computer-readable program comprising computer-executable instructions to enable a computer to perform the method as previously described.
Certain aspects commensurate in scope with the disclosed embodiments are set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of certain forms the disclosure might take and that these aspects are not intended to limit the scope of the disclosure. Indeed, the disclosure may encompass a variety of aspects that may not be set forth below.
The disclosure will be better understood and illustrated by means of the following embodiments and execution examples, in no way limitative, with reference to the appended figures on which:
Wherever possible, the same reference numerals will be used throughout the figures to refer to the same or like parts.
The following description illustrates the principles of the present disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its scope.
All examples and conditional language recited herein are intended for educational purposes to aid the reader in understanding the principles of the disclosure and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, read only memory (ROM) for storing software, random access memory (RAM), and nonvolatile storage.
In the claims hereof, any element expressed as a means and/or module for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
In addition, it is to be understood that the figures and descriptions of the present disclosure have been simplified to illustrate elements that are relevant for a clear understanding of the present disclosure, while eliminating, for purposes of clarity, many other elements found in typical digital multimedia content delivery methods, devices and systems. However, because such elements are well known in the art, a detailed discussion of such elements is not provided herein. The disclosure herein is directed to all such variations and modifications known to those skilled in the art.
The rendering system 1 comprises a capturing device 101, a processing apparatus 102, an acquisition device 103, an image projection computing apparatus 104 and a display device 105 equipped with a display screen 106.
It should be appreciated that the image projection computing apparatus 104 and the display device 105 can be combined together to form a standalone device, while they have been represented separately in
The capturing device 101 can be formed by a webcam, a video camera, or the like, configured to shoot the face of a user in front of the capturing device 101. The capturing device 101 can be arranged in communication with the processing apparatus 102.
The processing apparatus 102 is configured to receive multimedia content (such as a video) captured by the capturing device 101. From the received content, the processing apparatus 102 can determine the position of the user's head with respect to the display screen 106 and can further track the movements of the user's head using known tracking algorithms.
The acquisition device 103 is configured to acquire a multi view content (corresponding to a scene 200), such as a multi view still image or a multi view video. As an illustrative, but non-limitative example, the acquisition device can be formed by a fixed camera array composed of a plurality of distinct cameras regularly arranged or by a plenoptic camera comprising an array of micro-lenses positioned in front of an image sensor. In a variant or complement compliant with the present principles, the acquisition device can be a virtual acquisition device (e.g. a virtual camera array) to obtain computer-generated imagery (CGI). For each acquired multi view image or each frame of a multi view video, the acquisition device 103 can provide a set of multi-views, a set of depth maps and associated system acquisition parameters.
The image projection computing apparatus 104 can receive both data associated with the user's head position and movements from the processing apparatus 102 and the acquired multi view content (image or video) delivered by the acquisition device 103. Based on the received information, the image projection computing apparatus 104 is configured to determine a projection of the multi view content to be displayed on the display device 105 as a function of the position of the user's head.
The projection of the acquired multi view content (set of different images associated with depth maps) on the screen 106 is the result of:
When the acquisition device 103 is a camera array, the two following matrices are estimated by calibration for each camera:
Considering a pixel (u, v) of the sensor of a camera of the acquisition device 103, its color (referenced RGB) and depth (referenced z(u, v, c)) are available (image and associated depth map). The pixel (u, v) can be un-projected in the 3D world coordinate system by using the following equation:
wherein z(u, v) is the depth of the pixel at position (u, v) in the image. For natural images, this depth has been estimated with known algorithms.
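As a minimal numerical sketch of this un-projection (assuming, for illustration only, that the two calibration matrices mentioned above are a 3×3 intrinsic matrix K and a 4×4 camera-to-world pose matrix P; these names are not taken from the disclosure):

```python
import numpy as np

def unproject(u, v, z_uv, K, P):
    """Un-project pixel (u, v) with depth z_uv into the 3D world coordinate system.

    K : 3x3 intrinsic matrix of the camera (estimated by calibration).
    P : 4x4 camera-to-world pose matrix (estimated by calibration).
    """
    # Back-project the pixel to a 3D point in the camera coordinate system,
    # scaled by its depth.
    x_cam = z_uv * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # Express the point in homogeneous coordinates and move it to the world CS.
    x_world = P @ np.append(x_cam, 1.0)
    return x_world[:3]
```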
In the following, the 3D visualization coordinate system (CS) associated with screen 106 is considered as the reference one (identified by (Xw, Yw, Zw) in
For the re-projection, the following OpenGL matrix of projection can be used:
wherein:
Such an OpenGL matrix is described, for example, in the document "OpenGL Programming Guide, 9th edition, Appendix E", by Dave Shreiner, Graham Sellers, John Kessenich and the Khronos OpenGL ARB Working Group, published by Addison-Wesley.
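The matrix itself is not reproduced in this text; for reference, the standard OpenGL asymmetric-frustum (off-axis) projection matrix described in that document, with l, r, b, t the left, right, bottom and top extents of the near plane and n, f the near and far distances, has the form:

$$
P = \begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0\\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0\\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n}\\
0 & 0 & -1 & 0
\end{pmatrix}
$$

Its third row is presumably the origin of the constants A and B appearing in the expression Z′ = A − B/z_eye discussed below.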
A virtual camera (arranged at the user's head position) also needs to be placed in that 3D visualization CS. The following translation matrix T_eye (representing the movement of the user's head with respect to the screen 106) is used to compute the image viewed by the user on the screen 106:
A 3D point is further transformed using the following equation:
and then projected into the displayed image by making the 4D vector homogeneous:
wherein z_eye is the Z coordinate of the 3D point viewed in a virtual camera coordinate system (attached to the user's head), while Z′ is the depth stored in the Z buffer during computation of the displayed image.
It should be noted that the minus sign reflects the fact that, in the OpenGL representation, the Z axis is oriented towards the eye, so that all 3D points have a negative Z value. The z_eye value is a metric value, while Z′ = A − B/z_eye is a function of z_eye in a format convenient for the Z-buffer algorithm.
To project a pixel in the MVD (multi-view plus depth) format onto the screen 106 observed by the user, the following equation is considered:
Thus, thanks to the rendering system 1, the closer the user's head is to the screen 106, the larger the portion of the acquired scene 200 that the user sees; the further the user moves away from the screen 106, the smaller the visible sub-part of the scene.
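A minimal sketch of this head-dependent re-projection chain, assuming the illustrative un-projection helper above, an OpenGL-style projection matrix proj, and the user's head position (tx, ty, tz) expressed in the screen (visualization) coordinate system; the exact sign conventions of T_eye are an assumption here:

```python
import numpy as np

def project_to_screen(x_world, proj, head_pos):
    """Project a 3D world point into the displayed image as seen from the head position."""
    tx, ty, tz = head_pos
    # Translation matrix T_eye: move the world into the virtual camera
    # attached to the user's head (sign convention assumed for illustration).
    T_eye = np.array([[1.0, 0.0, 0.0, -tx],
                      [0.0, 1.0, 0.0, -ty],
                      [0.0, 0.0, 1.0, -tz],
                      [0.0, 0.0, 0.0, 1.0]])
    p = proj @ (T_eye @ np.append(x_world, 1.0))
    # Make the 4D vector homogeneous: screen coordinates plus Z-buffer depth.
    return p[:3] / p[3]
```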
The display device 105 can be any kind of device equipped with a screen, such as a TV set, a tablet, a smartphone, a laptop, a PDA, a head-mounted device, or the like.
As illustrated in
Thus, the rendering system 1 is configured to provide a parallax effect depending on the user's head position in front of the display device 105 when displaying the multi view content on the screen 106. In particular, the parallax effect can be defined by the relative positions of several objects (elements) of the scene 200, as observed by the user. The greater the difference in depth between objects, the more the observed relative positions change as the user moves.
To prevent the user from reaching the limits of the displayed multi view content (e.g. leading to the display of black bands 300 on the edges of the display screen as shown in
In the following, the method 400 is operated by the image projection computing apparatus 104. Naturally, in a variant or complement, said method 400 can be implemented by another element of the rendering system 1, such as the display device 105 or a standalone element (not shown in the Figures).
In an embodiment, as shown in
In an embodiment, the method 400 comprises, in step 401, the reception, by the image projection computing apparatus 104, of acquisition parameters of the acquisition device 103 and parameters of the display screen 106.
The acquisition parameters can comprise the horizontal angle of view α and the vertical angle of view β of the acquisition device 103, as shown in
The method 400 further comprises, in step 402, the determination of a minimum distance from the display screen to define the positioning zone 500, for instance by a dedicated means 104A of the image projection computing apparatus 104.
In an embodiment, when the aspect ratio (relationship between width and height) of the multi view content captured by the acquisition device 103 differs from the aspect ratio associated with the display screen 106, the minimum distance zmin corresponds to the maximum between:
In a variant, when the aspect ratio of the multi view content captured by the acquisition device 103 is the same as that of the display screen 106, the minimum distance z_min corresponds to the horizontal minimum distance mentioned above, which is then equal to the vertical minimum distance.
Thus, the top of the pyramidal shape of the positioning zone 500 is arranged at the minimum distance z_min and centered with respect to the display screen 106.
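As an illustrative sketch of this computation (assuming the apex of the pyramid lies on the screen's central axis, as in the geometric form suggested earlier):

```python
import math

def min_viewing_distance(w_screen, h_screen, alpha, beta):
    """Minimum distance z_min of the positioning zone apex from the screen.

    alpha, beta : horizontal and vertical angles of view of the acquisition
    device, in radians. The apex must be far enough for the pyramid defined
    by (alpha, beta) to cover the whole screen.
    """
    z_horizontal = (w_screen / 2.0) / math.tan(alpha / 2.0)
    z_vertical = (h_screen / 2.0) / math.tan(beta / 2.0)
    # When the content and screen aspect ratios differ, the larger of the
    # two distances applies; otherwise both are equal.
    return max(z_horizontal, z_vertical)
```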
As shown in
In a variant or complement, the method 400 can further comprise, in step 403, the definition, by the means 104A, of a maximum distance z_max from the display screen 106, obtained from a minimum height h_min and a maximum height h_max between which the user's gaze can be located, as shown in
As depicted in
In an embodiment according to the disclosure, the generated positioning zone 500 to observe the multi view content can be displayed on the screen 106 for instance through the user interface.
In a further embodiment, as shown in
In particular, the method 700 can comprise, in a step 701, the modification of the current positioning zone 500, for instance upon user's input through a dedicated user interface (which can be the same as the one described with regards to method 400).
As illustrated in
The global modification (represented by the matrix of transformation H as previously defined) is defined, in a step 702, by:
wherein S_xy is the scaling matrix and T_z is the translation matrix in depth.
As shown in the
z_cz = z_min + |T_z|
α_cz = α × s_xy
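As a small illustrative sketch of these relations (assuming, as the equations above suggest, a uniform in-plane scaling factor s_xy and a signed depth translation T_z supplied through the user interface):

```python
def modified_zone(z_min, alpha, s_xy, t_z):
    """Apex distance and opening angle of the modified positioning zone.

    s_xy : uniform scaling factor applied to the positioning zone (user input).
    t_z  : translation of the positioning zone along the depth axis (user input).
    """
    z_cz = z_min + abs(t_z)    # new apex distance from the screen
    alpha_cz = alpha * s_xy    # new horizontal opening angle
    return z_cz, alpha_cz
```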
By considering the above-described equation for obtaining the projection of the multi view content on the screen 106 as a function of the user's head position:
with the new transformation matrix H, the image projection computing apparatus 104 adapts, in step 703, the projection of the displayed multi view content to the definition of the new positioning zone 800.
Thus, when the new positioning zone 800 has been expanded by the user for instance thanks to the user interface, the displayed multi view content is adapted (for example by an adaptation means 104B of the computing apparatus 104) so that the user can move in the new positioning zone without reaching the limits (such as black bands or occulted areas) of the display of multi view content.
It should be appreciated that a front translation (i.e. when the positioning zone is moved towards the screen as shown in
In an illustrative but non-limitative example, the set of dual arrows (not shown) can be displayed on the screen 106 and selected by the user either directly, by touching the arrows when the screen is a touch screen, or through a keyboard or a dedicated remote control.
In a further embodiment shown in
To that end, the method 900 comprises, in a step 901, the generation of a positioning zone 500 to observe the multi view content (for instance by the means 104A of the computing apparatus 104), according to the method 400 as previously described.
In a step 902, the method 900 further comprises the definition (e.g. by a module 104C of the computing apparatus 104) of a triggering area 550 arranged, at least partially, within the positioning zone 500. In a variant, the triggering area can be arranged outside the positioning zone 500, for instance contiguous to the positioning zone 500.
As shown in
When the user's head position is located within said triggering area 550, the method 900 further comprises, in a step 903, the triggering of one or more incentive effects to encourage the user to stay within the positioning zone 500. The step 903 can be implemented by a triggering means 104D of the image projection computing apparatus 104.
In an embodiment, an incentive effect can be at least one of:
Naturally, one or more incentive effects can be triggered concurrently by the computing apparatus 104.
In particular, the darkening effect can increase (e.g. the brightness of the screen 106 decreases, the screen 106 becomes darker) when the angle of view (horizontal or vertical) associated with the user's head position located within the triggering area 550 increases, and conversely. When the angle of view of the user's head position reaches a maximum angle (the horizontal angle α_max and/or the vertical angle β_max), the screen 106 becomes completely dark or black. It should be appreciated that the darkening effect decreases (i.e. the brightness of the screen 106 increases, the screen 106 becomes brighter) when the user's head moves away from a border of the triggering area 550 towards the center of the positioning zone 500.
In addition, while the darkening effect has been described as applied to the screen 106, it can also be applied, in a variant or complement, directly to the multimedia content itself (without modifying the brightness of the screen 106).
As depicted by the curve of
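A minimal sketch of such a darkening law, assuming (as in the linear embodiment mentioned earlier) a brightness factor that falls linearly from 1 at the inner border of the triggering area (angle α_incentive) to 0 at α_max:

```python
def brightness_factor(angle, angle_incentive, angle_max):
    """Factor applied to the screen (or content) brightness for the darkening effect.

    Returns 1.0 inside the positioning zone proper, decreases linearly within
    the triggering area, and reaches 0.0 (fully dark) at angle_max.
    """
    if angle <= angle_incentive:
        return 1.0
    if angle >= angle_max:
        return 0.0
    return 1.0 - (angle - angle_incentive) / (angle_max - angle_incentive)
```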
Besides, the parallax intensity effect allows a modification of the speed of movement of elements appearing in the multi view content displayed on the screen 106, when the angle of view associated with the user's head position located within the triggering area 550 increases, and conversely.
To that end, in an embodiment, the image projection computing apparatus 104 can use a computation angle associated with the angle of view of the user's head. Said computation angle can be obtained from the relationship defined, for instance, by the exemplary curve shown in
Thus, the parallax effect perceived by the user corresponds to the parallax effect that would be observed at an angle different from the angle of view associated with the user's position, so that the parallax effect appears attenuated to the user observing the screen 106.
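A possible sketch of such a computation-angle mapping (purely illustrative; the actual relationship is given by an exemplary curve of the disclosure): the computation angle follows the real head angle inside the positioning zone and then grows more slowly inside the triggering area, which slows the apparent motion of scene elements.

```python
def computation_angle(angle, angle_incentive, angle_max, attenuation=0.5):
    """Angle used to compute the projection when the parallax intensity effect is active.

    Inside the triggering area the computation angle increases more slowly than
    the real head angle, so the parallax appears attenuated to the user.
    attenuation : slope applied beyond angle_incentive (illustrative value).
    """
    if angle <= angle_incentive:
        return angle
    capped = min(angle, angle_max)
    return angle_incentive + attenuation * (capped - angle_incentive)
```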
In an embodiment, when the visual cue (for instance arrows) is implemented, one or more arrows can be displayed when the angle of view associated with the user's head is between α_incentive and α_max, and/or between β_incentive and β_max. The arrows can be oriented towards the center of the positioning zone 500 to encourage the user to move away from the borders of the latter. Once the user's head is in the positioning zone 500 but no longer in the triggering area 550, the arrows can disappear. In a further complement, the arrows can blink to draw the user's attention. The blinking rate can depend on the position of the user's head within the triggering area (e.g. the closer the user's head is to the outer borders of the triggering area, the higher the blinking rate).
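As a last illustrative sketch (the mapping below is an assumption; the disclosure only states that the blinking rate increases towards the outer borders of the triggering area):

```python
def blink_rate(angle, angle_incentive, angle_max, max_rate_hz=4.0):
    """Blinking rate of the visual-cue arrows.

    Returns 0 Hz at the inner border of the triggering area and rises to
    max_rate_hz (illustrative value) at the outer border angle_max.
    """
    if angle <= angle_incentive:
        return 0.0
    ratio = min((angle - angle_incentive) / (angle_max - angle_incentive), 1.0)
    return max_rate_hz * ratio
```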
References disclosed in the description, the claims and the drawings may be provided independently or in any appropriate combination. Features may, where appropriate, be implemented in hardware, software, or a combination of the two.
Reference herein to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one implementation of the method and device described. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments necessarily mutually exclusive of other embodiments.
Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.
Although certain embodiments only of the disclosure have been described herein, it will be understood by any person skilled in the art that other modifications, variations, and possibilities of the disclosure are possible. Such modifications, variations and possibilities are therefore to be considered as falling within the spirit and scope of the disclosure and hence forming part of the disclosure as herein described and/or exemplified.
The flowchart and/or block diagrams in the Figures illustrate the configuration, operation and functionality of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, or blocks may be executed in an alternative order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of the blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. While not explicitly described, the present embodiments may be employed in any combination or sub-combination.
Number | Date | Country | Kind |
---|---|---|---|
17305712 | Jun 2017 | EP | regional |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2018/064510 | 6/1/2018 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2018/228833 | 12/20/2018 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5900849 | Gallery | May 1999 | A |
6097394 | Levoy | Aug 2000 | A |
7792423 | Raskar | Sep 2010 | B2 |
8416289 | Akeley | Apr 2013 | B2 |
8803918 | Georgiev | Aug 2014 | B2 |
8823771 | Jeong | Sep 2014 | B2 |
8933862 | Lapstun | Jan 2015 | B2 |
8995785 | Knight | Mar 2015 | B2 |
9060093 | Chu | Jun 2015 | B2 |
9098112 | Cheng | Aug 2015 | B2 |
9132352 | Rabin | Sep 2015 | B1 |
9214013 | Venkataraman | Dec 2015 | B2 |
9299183 | Vesely | Mar 2016 | B2 |
9451924 | Bernard | Sep 2016 | B2 |
9520106 | An | Dec 2016 | B2 |
10033986 | Pitts | Jul 2018 | B2 |
10104370 | Fusama | Oct 2018 | B2 |
10116867 | Blonde | Oct 2018 | B2 |
10182183 | Sabater | Jan 2019 | B2 |
10313633 | Rosenberg | Jun 2019 | B2 |
10390005 | Nisenzon | Aug 2019 | B2 |
10444931 | Akeley | Oct 2019 | B2 |
10474227 | Carothers | Nov 2019 | B2 |
10540818 | Akeley | Jan 2020 | B2 |
10545215 | Karafin | Jan 2020 | B2 |
10679361 | Karnad | Jun 2020 | B2 |
10679373 | Riemens | Jun 2020 | B2 |
10852838 | Bradski | Dec 2020 | B2 |
20040075735 | Marmaropoulos | Apr 2004 | A1 |
20040156631 | Redert | Aug 2004 | A1 |
20050123171 | Kobayashi | Jun 2005 | A1 |
20070285554 | Givon | Dec 2007 | A1 |
20080088935 | Daly | Apr 2008 | A1 |
20100091092 | Jeong | Apr 2010 | A1 |
20110102423 | Nam | May 2011 | A1 |
20110157154 | Bernard | Jun 2011 | A1 |
20110254926 | Ushiki | Oct 2011 | A1 |
20110273466 | Imai | Nov 2011 | A1 |
20110316987 | Komoriya | Dec 2011 | A1 |
20120182299 | Bowles | Jul 2012 | A1 |
20130044103 | Lee | Feb 2013 | A1 |
20130050412 | Shinohara | Feb 2013 | A1 |
20130093752 | Yuan | Apr 2013 | A1 |
20130278719 | Rusert | Oct 2013 | A1 |
20140028662 | Liao | Jan 2014 | A1 |
20140036046 | Takefumi | Feb 2014 | A1 |
20140066178 | Kelly | Mar 2014 | A1 |
20140146148 | Maciocci | May 2014 | A1 |
20140306963 | Sun | Oct 2014 | A1 |
20150042557 | Narita | Feb 2015 | A1 |
20150156470 | Didyk | Jun 2015 | A1 |
20150189261 | Kaneko | Jul 2015 | A1 |
20150215600 | Norkin | Jul 2015 | A1 |
20150235408 | Gross | Aug 2015 | A1 |
20150334369 | Bruls | Nov 2015 | A1 |
20150362743 | Lanman | Dec 2015 | A1 |
20160029012 | Bruls | Jan 2016 | A1 |
20160201776 | Takase | Jul 2016 | A1 |
20160214016 | Stafford | Jul 2016 | A1 |
20160234481 | Borel | Aug 2016 | A1 |
20160284048 | Rekimoto | Sep 2016 | A1 |
20160309081 | Frahm | Oct 2016 | A1 |
20170003507 | Raval | Jan 2017 | A1 |
20170085867 | Baran | Mar 2017 | A1 |
20170091983 | Sebastian | Mar 2017 | A1 |
20170148186 | Holzer | May 2017 | A1 |
20170188002 | Chan | Jun 2017 | A1 |
20170243373 | Bevensee | Aug 2017 | A1 |
20170280126 | Van der Auwera | Sep 2017 | A1 |
20180020204 | Pang | Jan 2018 | A1 |
20180046874 | Guo | Feb 2018 | A1 |
20180097867 | Pang | Apr 2018 | A1 |
20180253884 | Burnett, III | Sep 2018 | A1 |
20190054734 | Baran | Feb 2019 | A1 |
20190236796 | Blasco Claret | Aug 2019 | A1 |
20190385323 | Doyen | Dec 2019 | A1 |
20200059635 | Katsumata | Feb 2020 | A1 |
20200074658 | Yu | Mar 2020 | A1 |
20200134849 | Blasco Claret | Apr 2020 | A1 |
20200410635 | Varanasi | Dec 2020 | A1 |
20210103148 | Hua | Apr 2021 | A1 |
Number | Date | Country |
---|---|---|
101301857 | Nov 2008 | CN |
102117180 | Jul 2011 | CN |
103209313 | Jul 2013 | CN |
104145234 | Nov 2014 | CN |
103281550 | Mar 2015 | CN |
104798370 | Jul 2015 | CN |
105488771 | Apr 2016 | CN |
106210465 | Dec 2016 | CN |
106210538 | Dec 2016 | CN |
106501635 | Mar 2017 | CN |
2863635 | Apr 2015 | EP |
3099055 | Nov 2016 | EP |
2488905 | Sep 2012 | GB |
H10504917 | May 1998 | JP |
2005165848 | Jun 2005 | JP |
2016541035 | Dec 2016 | JP |
6113337 | Apr 2017 | JP |
2423018 | Jun 2011 | RU |
2014150963 | Mar 2017 | RU |
2013132886 | Sep 2013 | WO |
2013180192 | Dec 2013 | WO |
2014149403 | Sep 2014 | WO |
2015122108 | Aug 2015 | WO |
WO2018228918 | Dec 2018 | WO |
Entry |
---|
Dodgson, Neil A., “Analysis of the viewing zone of multi-view autostereoscopic displays”, Proceedings of SPIE, vol. 4660, May 2002, pp. 254-265. |
Dodgson, N. A., “Analysis of the viewing zone of the Cambridge autostereoscopic display”, Applied Optics, vol. 35, No. 10, Apr. 1, 1996, pp. 1705-1710. |
Geng, Wenjing, et. al. “Flat3D: Browsing Stereo Images on a Conventional Screen” International Conference on Multimedia Modeling, (2015), pp. 546-558. |
Feldmann, Ingo, et. al., “Navigation Dependent Nonlinear Depth Scaling”. Proceedings of 23rd International Picture Coding Symposium, Apr. 23-25, 2003, 4 pages. |
Buchanan, P., et. al., “Creating a View Dependent Rendering System For Mainstream Use”. IEEE 23rd International Conference Image and Vision Computing, (2008), 6 pages. |
Řeřábek, Martin, et. al., “Motion Parallax Based Restitution of 3D Images on Legacy Consumer Mobile Devices”. IEEE 13th International Workshop on Multimedia Signal Processing, (2011), 5 pages. |
Hoang, Anh Nguyen, et. al., “A Virtual Reality System Using View-Dependent Stereoscopic Rendering”. IEEE International Conference on Information Science & Applications (ICISA), (2014), 4 pages. |
Angco, Marc Jordan G., et. al., “Depth Perception Through Adaptive 3D View Perspective and Motion Parallax”. IEEE Asia Pacific Conference on Wireless and Mobile, (2015), pp. 83-88. |
Levin, Anat, et. al., “Understanding Camera Trade-Offs Through A Bayesian Analysis Of Light Field Projections”. Proceedings of European Conference on Computer Vision, (2008), pp. 1-14. |
Ng, Ren, et. al., “Digital Light Field Photography”. A Dissertation Submitted to the Department of Computer Science and the Committee on Graduate Studies of Stanford University, Jul. 2006, 203 pages. |
Wanner, Sven, et. al., “Generating EPI Representation of a 4D Light Fields with a Single Lens Focused Plenoptic Camera”. International Symposium on Visual Computing (ISVC), (2011), pp. 90-101. |
Merkle, Philipp, et. al., “Efficient Prediction Structures for Multiview Video Coding” IEEE Transactions on Circuits and Systems For Video Technology, vol. 17, No. 11, Nov. 2007, pp. 1461-1473. |
Hirsch, Matthew, et. al., “A Compressive Light Field Projection System”. ACM Transactions on Graphics vol. 33, No. 4, (2014), 20 pages. |
Muddala, Suryanarayana, et. al., “Depth-Included Curvature Inpainting for Disocclusion Filing in View Synthesis”. International Journal on Advances in Telecommunications, vol. 6 No. 3 & 4, (2013), pp. 132-142. |
Fehn, Christoph, et. al., “Key Technologies for an Advanced 3D-TV System”. Three-Dimensional TV, Video, and Display III, Proceedings of SPIE, vol. 5599, (2004), pp. 66-80. |
International Search Report and Written Opinion of the International Searching Authority for PCT/EP2018/065033, dated Sep. 20, 2018. |
International Preliminary Report on Patentability for PCT/EP2018/065033 dated Dec. 17, 2019, 6 pages. |
Shreiner, Dave, et. al., “Homogeneous Coordinates and Transformation Matrices”. OpenGL Programming Guide 8th edition, Appendix E, Mar. 2013, pp. 829-834. |
International Search Report and Written Opinion of the International Searching Authority for PCT/EP2018/064510 dated Aug. 13, 2018, 11 pages. |
International Preliminary Report on Patentability for PCT/EP2018/064510 dated Dec. 17, 2019, 8 pages. |
Number | Date | Country | |
---|---|---|---|
20200195914 A1 | Jun 2020 | US |