1. Field of the Invention
The invention herein relates to data compression of the light field imaging information used by light field electronic displays to display ultra-high-resolution 3D images utilizing techniques such as holography, integral imaging, stereoscopy, multi-view imaging, video and the like. The invention has particular application to light field displays having common, industry-standard interfaces, such as HDMI, DisplayPort, MIPI, etc., for which the data transfer bandwidth of the imaging information into the light field displays is known to be challenging.
2. Prior Art
In prior art light fields, neighboring hogels exhibit similar anglet data. The hogel appears to the viewer as a single point source, which could be implemented as a single lens of a micro-lens array above the light field display pixels (see U.S. Pat. No. 8,928,969). The reproduced 3D image, also known as a light field frame, consists of the complete set of hogels generated by the light field display. A light field video consists of a time-sequence of light field frames. Typically, an application processor pre-processes the input light field image data, such as real images acquired by cameras and/or rendered computer-generated images, and transfers the data to the light field display. In order to provide the necessary bandwidth between the application processor and light field displays having the common interfaces currently available, such as HDMI, DisplayPort, MIPI, etc., the input signal must be divided among several interfaces, which is cumbersome if not, in fact, infeasible due to data size limitations.
Data compression prior to transmission is employed to cope with the extreme volume of light field image data used by light field displays. Recently published methods for light field compression, such as the ones in Magnor, M. and Girod, B. “Data Compression for Light-Field Rendering,” IEEE Trans. on Circuits and Systems for Video Technology, 10(3), 338-343 (2000) and Conti, C.; Lino, J.; Nunes, P.; Soares, L. D.; Lobato Correia, P., “Spatial prediction based on self-similarity compensation for 3D holoscopic image and video coding,” in Image Processing (ICIP), 2011 18th IEEE International Conference on, vol., no., pp. 961-964, 11-14 Sep. 2011, follow the usual approach of prediction, transformation and residue quantization, similar to the methods adopted by prior art 3D video coding standards (Ohm, J.-R., “Overview of 3D video coding standardization,” In International Conference on 3D Systems and Applications, Osaka, 2013). The drawback of these compression approaches is that they process the incoming data in frame buffers, which become extremely large when compressing high-resolution (and thus high volume) data, and necessarily introduce undesirable video latency for real-time display applications.
Another prior art solution for light field data compression is to “sub-sample” the views in the image generation procedure and reconstruct the suppressed views directly at the light field display. For example, in Yan, P.; Xianyuan, Y., “Integral image compression based on optical characteristic,” Computer Vision, IET, vol. 5, no. 3, pp. 164-168, May 2011 and Yan Piao; Xiaoyuan Yan, “Sub-sampling elemental images for integral imaging compression,” Audio Language and Image Processing (ICALIP), 2010 International Conference on, pp. 1164-1168, 23-25 Nov. 2010, the light field is sub-sampled based on the optical characteristics of the display system. A formal approach to light field sampling is described in Jin-Xiang Chai, Xin Tong, Shing-Chow Chan, and Heung-Yeung Shum, “Plenoptic sampling,” Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '00), 2000 and Gilliam, C.; Dragotti, P. L.; Brookes, M., “Adaptive plenoptic sampling,” Image Processing (ICIP), 2011 18th IEEE International Conference on, pp. 2581-2584, 11-14 Sep. 2011. Although these prior art methods provide a significant reduction in bit rates, the compression rate is undesirably highly content-dependent. Moreover, these methods usually rely on complicated view synthesis algorithms (for example, see Graziosi et al, “Methods For Full Parallax Compressed Light Field 3D Imaging Systems”, United States Patent Application Publication No. 2015/0201176 A1, published Jul. 16, 2015; “View Synthesis Reference Software (VSRS) 3.5,” wg11.sc29.org, March 2010; C. Fehn, “3D-TV Using Depth-Image-Based Rendering (DIBR),” in Proceedings of Picture Coding Symposium, San Francisco, Calif., USA, December 2004; Mori Y, Fukushima N, Yendo T, Fujii T, Tanimoto M (2009) View generation with 3D warping using depth information for FTV. Signal Processing: Image Commun 24(1-2):65-72; and Tian D, Lai P, Lopez P, Gomila C (2009) View synthesis techniques for 3D video. In: Proceedings applications of digital image processing XXXII, Vol. 7443, pp 74430T-1-11) requiring very large frame buffers, floating-point logic units, and several memory transfers. Thus, sub-sampling solutions require considerable display device computational resources (Bhaskaran, V., “65.1: Invited Paper: Image/Video Compression—A Display Centric Viewpoint,” SID Symposium Digest of Technical Papers, vol. 39, no. 1, 2008).
Some compression methods have been developed specifically for stereoscopic video displays. For example, frame-compatible encoding methods for left and right views are described in Vetro, A.; Wiegand, T.; Sullivan, G. J., “Overview of the Stereo and Multiview Video Coding Extensions of the H.264/MPEG-4 AVC Standard,” in Proceedings of the IEEE, vol. 99, no. 4, pp. 626-642, April 2011. These methods encode 3D stereoscopic video by down-sampling the video via bundling two contiguous frames into one new frame, either temporally or spatially (horizontally or vertically). Examples of frame-packing include side-by-side, where two frames are horizontally down-sampled and arranged next to each other, and top-bottom frame packing, where the two frames are vertically down-sampled and arranged on top of each other. By bundling two frames into one, the rate is reduced by half. Another advantage of this approach is that the decoding method is a very simple view reconstruction that can be implemented directly at the stereoscopic display. However, these encoding methods always perform the same data sub-sampling regardless of the image content, which results in less than optimal image quality.
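The side-by-side packing described above can be illustrated with a minimal sketch, assuming simple column decimation; practical encoders low-pass filter before down-sampling to limit aliasing:

```python
import numpy as np

def pack_side_by_side(left, right):
    # Side-by-side frame packing: each view is horizontally down-sampled
    # by dropping every other column, then the two half-width views are
    # placed next to each other in one frame the size of a single input.
    half_l = left[:, ::2]
    half_r = right[:, ::2]
    return np.concatenate([half_l, half_r], axis=1)

left = np.zeros((4, 4), dtype=int)   # stand-in left view
right = np.ones((4, 4), dtype=int)   # stand-in right view
packed = pack_side_by_side(left, right)
# The packed frame has the size of one input frame: the rate is halved.
```

Note that, as stated above, the decimation is fixed and content-independent: the same columns are dropped regardless of the image.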
In Graziosi, D. B., Alpaslan, Z. Y. and El-Ghoroury, H. S., “Compression for full-parallax light field displays”, Proceedings of SPIE-IS&T Electronic Imaging, 9011, (2014), Graziosi, D. B., Alpaslan, Z. Y. and El-Ghoroury, H. S., “Depth assisted compression of full parallax light fields”, Proceedings of SPIE-IS&T Electronic Imaging, 9011, (2015) and Graziosi et al, “Methods For Full Parallax Compressed Light Field 3D Imaging Systems”, United States Patent Application Publication No. 2015/0201176 A1, a more sophisticated method for light field compression is described. The prior art compression method therein analyzes the composition of the entire light field scene and selects a subset of hogels from among all the hogels associated with the light field for transmission to the light field display, wherein the suppressed hogels are generated from the received hogels. To achieve even higher compression ratios, these prior art compression methods adopt transform and entropy encoding. These methods would benefit from an enhanced compression method that reduces the required decoding processing by performing a piece-wise analysis of the scene and omitting the transform and entropy encoding step. The reduction in decoding time and processing would beneficially lead to a smaller memory footprint and reduced latency, which is ideal for display interfaces using commonly available memory and processors.
As is known in the prior art, there are extremely high-resolution displays that require the use of multiple interfaces to receive source image data. In Alpaslan, Z. Y., El-Ghoroury, H. S., “Small form factor full parallax tiled light field display,” in SPIE Conference on Stereoscopic Displays and Applications XXVI, 2015, a high-resolution light field display formed by tiling multiple small pixel-pitch devices (U.S. Pat. Nos. 7,623,560, 7,767,479, 7,829,902, 8,049,231, 8,243,770 and 8,567,960) is described. The light field display described therein incorporates multiple input interfaces to compensate for the bandwidth limitation of the individual display interfaces commonly used. The lack of high-bandwidth interfaces motivated subsequent development of compression algorithms.
The Video Electronics Standards Association (VESA) Display Stream Compression (DSC) algorithm is a proposed standard for compression of raw video data to be sent to high-resolution displays. The VESA DSC encoder is visually faithful; i.e., the artifacts introduced by compression are hardly perceived by the viewer. The VESA DSC algorithm utilizes sophisticated prediction techniques mixed with very simple entropy encoding methods and was designed with display interfaces in mind; hence, it performs all of its processing on a line-by-line basis and has a very precise rate control procedure to maintain the bit rate below the limited bandwidth of common display interfaces. However, the VESA DSC algorithm does not utilize the block coding structure approach used in common video compression methods and does not take advantage of the highly correlated image structure present in light fields, both of which can provide significant compression gains.
In applications where the intensities of light rays do not change perceptibly as the rays propagate, the light field can be parameterized using two parallel planes, or equivalently four variables (Levoy, M. and Hanrahan, P., “Light Field Rendering,” Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 96). This parameterization was used therein to capture a light field and reconstruct novel viewpoints of the light field by utilizing light ray interpolation. In order to obtain reconstructed views with high quality and realistic results, oversampling of the variables was required. This imposes a high demand on the capturing and transmission procedures, which must then generate and transmit a huge amount of data. The use of compression methods such as the VESA DSC can reduce the data requirements for transmission interfaces. Nevertheless, such a procedure is still based on prediction and entropy coding, which increases the computational resources required at the display driver. Furthermore, it does not take advantage of the structure of light field images, with their high degree of correlation between hogels.
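The two-plane parameterization and light ray interpolation described above can be illustrated with a minimal sketch, assuming a regular camera grid; the function name and array layout are illustrative, not those of the cited work:

```python
import numpy as np

def sample_ray(lf, u, v, s, t):
    # Bilinear interpolation of one ray from a two-plane light field
    # L(u, v, s, t): (u, v) indexes the camera plane (continuous here),
    # (s, t) the image plane (integer pixel coordinates).
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    u1 = min(u0 + 1, lf.shape[0] - 1)   # clamp at the grid border
    v1 = min(v0 + 1, lf.shape[1] - 1)
    return ((1 - du) * (1 - dv) * lf[u0, v0, s, t]
            + du * (1 - dv) * lf[u1, v0, s, t]
            + (1 - du) * dv * lf[u0, v1, s, t]
            + du * dv * lf[u1, v1, s, t])

# Toy light field: radiance varies only along the u axis.
lf = np.zeros((2, 2, 1, 1))
lf[1, :, :, :] = 1.0
val = sample_ray(lf, 0.5, 0.0, 0, 0)  # midway between the two cameras
```

Without sufficiently dense sampling along (u, v), this interpolation ghosts; this is the oversampling demand noted above.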
The aforementioned prior art fails to provide the high-quality, low-computational-load transmission methods required for practical implementation of a high-resolution, full parallax light field display. What is needed is a compression method that takes advantage of the correlation between hogels and that avoids the computational loading and latency associated with prior art compression methods.
In the following description, like drawing reference numerals are used for like elements, even in different drawings. The matters defined in the description, such as detailed construction and elements, are provided to assist in a comprehensive understanding of the exemplary embodiments. However, the present invention can be practiced without those specifically defined matters. Also, well-known functions or constructions are not described in detail, since they would obscure the invention with unnecessary detail. In order to understand the invention and to see how it may be carried out in practice, a few embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
A “light field” is a vector function that completely describes the amount of light flowing in every direction through every point in space, including its direction, amplitude, frequency, and phase. In particular, the collection of light rays emitted from a scene composed of 3D objects is considered to be a light field. By emitting light with modulated color, intensity and direction, light field displays are able to reproduce a light field of scenes to display 3D objects without the need for glasses and with reduced vergence accommodation conflict (VAC).
To reproduce 3D images, light field displays incorporate a sufficient number of pixels, with each pixel generating a collimated light bundle that modulates the color and intensity of a view of the light field in a unique direction, designated and referred to as an “anglet”. A holographic element, or “hogel”, consists of the group of neighboring anglets pointing in all viewing directions reproducible by the display. In a hogel, anglets are created by modulating the group of pixels assigned to that hogel. The three types of modulation are: modulation of color by mixing color primaries, modulation of intensity by adjusting the drive time or drive current of the pixels, and modulation of direction based on pixel location. Typically, displays have three color primaries (red, green and blue); however, some displays may have more or fewer than three primaries. For a light field that includes color, each hogel can be considered to be multiple hogels, one for each specific color or color primary in the light field.
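The modulation of direction by pixel location might be illustrated as follows; the hogel size and field of view are assumed parameters for illustration, not values from the disclosure:

```python
import math

def anglet_angles(px, py, hogel_size, fov_deg=90.0):
    # Directional modulation from pixel location: a pixel's offset from
    # its hogel's center, passed through the lenslet, sets the emission
    # angle of its anglet. A linear pixel-offset-to-angle map is assumed.
    center = (hogel_size - 1) / 2.0
    half_fov = math.radians(fov_deg) / 2.0
    theta_x = (px - center) / center * half_fov  # horizontal angle (rad)
    theta_y = (py - center) / center * half_fov  # vertical angle (rad)
    return theta_x, theta_y

tx, ty = anglet_angles(4, 4, 9)       # center pixel of a 9x9 hogel
tx_edge, _ = anglet_angles(8, 4, 9)   # rightmost pixel of the same row
```

The center pixel emits straight ahead, while border pixels emit at the edge of the hogel's viewing cone.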
The invention herein discloses content-aware light field coding methods that achieve straightforward compression gains coupled with constant bit-rate coding. Specifically, the invention utilizes the content of the light field scenes to make dynamic decisions on the piece-wise decimation of the light field information in such a way as to achieve a fixed compression ratio. This fixed-rate compression approach provides simple encoding and decoding with minimal latency, allowing the use of commonly available interfaces and processors. The compression method can also be incorporated into the acquisition procedure, so that the suppressed data is not stored in memory, and into the rendering procedure, so that suppressed data is not rendered. The compression method can be incorporated in either acquisition or rendering, or in both; when incorporated in both, the amount of compression may increase. Hence, both memory and rendering processing requirements are greatly reduced.
The disclosed light field compression method expands the capabilities of prior art light field devices and methods by making it feasible to transmit high-resolution, full parallax light fields for various light field imaging systems utilizing common interfaces. Additional objectives and advantages of this invention will become apparent from the following detailed description of a preferred embodiment thereof that proceeds with reference to the accompanying drawings.
The disclosed invention can readily be applied to light fields that have been acquired in different ways. For example, a light field can be acquired by utilizing an array of 2D cameras, with the cameras arranged on a regular grid. This form of camera array arrangement can be emulated by placing a microlens array in front of the image sensor, such as is described in U.S. Pat. No. 9,179,126 or implemented in commercial light field cameras. The cameras in such an array can be arranged in a plane that is perpendicular to the viewing direction with only horizontal and vertical displacement or placed in a non-regular arrangement, such as the L16 camera proposed by a company named Light. Non-regular camera arrangements can be used, but this results in a more complicated view projection. The advantage of a regular grid is that anglets from the hogels are related to each other by horizontal or vertical shifts only, which decreases the related processor load for hogel reconstruction.
To display a light field, the associated display apparatus frequently utilizes a microlens array on top of the pixel grid for directional modulation of the light rays. The microlens array preferentially divides the pixel grid into regular square areas, corresponding to identical hogels arranged regularly. If the input signal is not acquired with a camera array having the same optical properties as the display's microlens array, then a light field data reformatting is required.
The invention herein combines content-adaptive encoding and scene analysis-based light field sub-sampling methods. Regardless of the scene content, the resulting down-sampled light field achieves a fixed compression ratio. In the described embodiment of the invention, a fixed 2:1 compression is achieved by rearranging the light field such that adjacent pairs of hogels are compressed to the size of a single hogel. Similarly, by combining 2×2 hogels, 4×1 hogels or 1×4 hogels into the size of a single hogel, a fixed 4:1 compression is achieved by natural extensions of the methods presented here. Higher combinations of hogels are contemplated as falling within the scope of the invention, resulting in commensurately higher compression ratios.
The distinction between simple frame-compatible encoding and the methods of this invention is that the former uses pixel sub-sampling only, while the methods of this invention make dynamic decisions, on a frame-by-frame basis and according to the content of the light field, among three different sub-sampling methods. Hence, this invention provides compression that dynamically adapts to the content while maintaining a fixed compression ratio and a simple decoding method.
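A per-pair mode decision of this general kind can be sketched as follows; the three candidate modes shown (keep-first, keep-second, average) are illustrative stand-ins for the sub-sampling methods of the invention, not the specific modes disclosed:

```python
import numpy as np

def compress_pair(h0, h1):
    # Fixed 2:1 compression of an adjacent hogel pair: try each candidate
    # sub-sampling mode, measure the reconstruction error over the pair,
    # and keep the best mode plus one hogel's worth of data.
    candidates = {
        0: h0.astype(float),               # keep first hogel only
        1: h1.astype(float),               # keep second hogel only
        2: (h0.astype(float) + h1) / 2.0,  # average of the pair
    }
    best_mode, best_err, best_data = None, None, None
    for mode, data in candidates.items():
        # The decoder reconstructs both hogels from the one stored hogel.
        err = np.sum((h0 - data) ** 2) + np.sum((h1 - data) ** 2)
        if best_err is None or err < best_err:
            best_mode, best_err, best_data = mode, err, data
    return best_mode, best_data  # mode flag + one hogel: fixed 2:1 ratio

mode, data = compress_pair(np.zeros((2, 2)), np.ones((2, 2)))
```

Whatever mode wins, exactly one hogel of data plus a small mode flag is transmitted, so the compression ratio stays fixed while the decision adapts to content.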
As shown in
In United States Patent Application Publication No. 2015/0201176 A1, the concept of compressed rendering was introduced, wherein compression is applied in the rendering process to suppress data that can be eliminated at the source and simply not generated or rendered. In an embodiment of this invention, the renderer suppresses the light field data such that only half of the total light field data is rendered, with a commensurate reduction in processing load. When the light field data is not fully rendered, additional data compression, computational and power savings can be achieved by first selecting which hogels should be rendered and then applying compression to those hogels after they are rendered or as they are rendered (the rendering and compression operations can be combined).
The foregoing disclosure describes 2:1 compression methods in detail, which methods may be practiced in hardware, firmware or software. As previously mentioned, the methods of the present invention may be expanded to higher compression ratios. By way of example, to extend the methods to 4:1 compression, one can simply apply the described method to two pairs of adjacent hogels, typically but not necessarily in a 2×2 hogel pattern, and then select the hogel of the 2×2 hogel pattern with the best fidelity metric.
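The 4:1 selection step might be sketched as follows, with a mean-squared-error fidelity metric assumed for illustration (the disclosure does not fix a specific metric here):

```python
import numpy as np

def compress_quad(hogels):
    # Illustrative 4:1 extension: from four hogels (e.g. a 2x2 pattern),
    # keep the single hogel that best represents the group under a
    # mean-squared-error fidelity metric.
    best_i, best_err = 0, None
    for i, cand in enumerate(hogels):
        c = cand.astype(float)
        # Error if every hogel in the group were replaced by this candidate.
        err = sum(float(np.sum((h - c) ** 2)) for h in hogels)
        if best_err is None or err < best_err:
            best_i, best_err = i, err
    return best_i, hogels[best_i]  # index + one hogel: fixed 4:1 ratio

# Three identical hogels and one outlier: a majority hogel is kept.
idx, kept = compress_quad([np.zeros((2, 2))] * 3 + [np.ones((2, 2))])
```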
Those skilled in the art will readily appreciate that various modifications and changes can be applied to the embodiments of the invention without departing from its scope defined in and by the appended claims. It should be appreciated that the foregoing examples of the invention are illustrative only, and that the invention can be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
This application claims the benefit of U.S. Provisional Patent Application No. 62/255,744 filed Nov. 16, 2015.
Number | Name | Date | Kind |
---|---|---|---|
5613048 | Chen et al. | Mar 1997 | A |
6009188 | Cohen et al. | Dec 1999 | A |
6091767 | Westerman | Jul 2000 | A |
6097394 | Levoy et al. | Aug 2000 | A |
6252989 | Geisler et al. | Jun 2001 | B1 |
6549308 | Camahort | Apr 2003 | B1 |
6738533 | Shum et al. | May 2004 | B1 |
6963431 | Holzbach et al. | Nov 2005 | B2 |
7404645 | Margulis | Jul 2008 | B2 |
7623560 | El-Ghoroury et al. | Nov 2009 | B2 |
7767479 | El-Ghoroury et al. | Aug 2010 | B2 |
7829902 | El-Ghoroury et al. | Nov 2010 | B2 |
7978407 | Connor | Jul 2011 | B1 |
8049231 | El-Ghoroury et al. | Nov 2011 | B2 |
8098265 | El-Ghoroury et al. | Jan 2012 | B2 |
8155456 | Babacan et al. | Apr 2012 | B2 |
8243770 | El-Ghoroury et al. | Aug 2012 | B2 |
8284237 | Chen et al. | Oct 2012 | B2 |
8315476 | Georgiev | Nov 2012 | B1 |
8334889 | Blanche | Dec 2012 | B2 |
8401316 | Babacan et al. | Mar 2013 | B2 |
8567960 | El-Ghoroury et al. | Oct 2013 | B2 |
8681185 | Guncer | Mar 2014 | B2 |
8854724 | El-Ghoroury et al. | Oct 2014 | B2 |
8913004 | Bozarth et al. | Dec 2014 | B1 |
8928969 | Alpaslan et al. | Jan 2015 | B2 |
8970646 | Guncer | Mar 2015 | B2 |
9097890 | Miller et al. | Aug 2015 | B2 |
9110504 | Lewis et al. | Aug 2015 | B2 |
9129183 | Venkataraman et al. | Sep 2015 | B2 |
9179126 | El-Ghoroury et al. | Nov 2015 | B2 |
9195053 | El-Ghoroury et al. | Nov 2015 | B2 |
9274608 | Katz et al. | Mar 2016 | B2 |
9524682 | El-Ghoroury et al. | Dec 2016 | B2 |
9529191 | Sverdrup et al. | Dec 2016 | B2 |
9538182 | Mishourovsky et al. | Jan 2017 | B2 |
9681069 | El-Ghoroury et al. | Jun 2017 | B2 |
9712764 | El-Ghoroury et al. | Jul 2017 | B2 |
9769365 | Jannard | Sep 2017 | B1 |
9965982 | Lapstun | May 2018 | B2 |
20020067521 | Holzbach et al. | Jun 2002 | A1 |
20050018911 | Deever | Jan 2005 | A1 |
20070035706 | Margulis | Feb 2007 | A1 |
20070229653 | Matusik et al. | Oct 2007 | A1 |
20080043095 | Vetro et al. | Feb 2008 | A1 |
20080043096 | Vetro et al. | Feb 2008 | A1 |
20080187305 | Raskar | Aug 2008 | A1 |
20080198920 | Yang et al. | Aug 2008 | A1 |
20090086170 | El-Ghoroury et al. | Apr 2009 | A1 |
20090268970 | Babacan et al. | Oct 2009 | A1 |
20090278998 | El-Ghoroury et al. | Nov 2009 | A1 |
20100003777 | El-Ghoroury et al. | Jan 2010 | A1 |
20100007804 | Guncer | Jan 2010 | A1 |
20100046848 | Witzgall | Feb 2010 | A1 |
20100056274 | Uusitalo et al. | Mar 2010 | A1 |
20100066921 | El-Ghoroury et al. | Mar 2010 | A1 |
20100091050 | El-Ghoroury et al. | Apr 2010 | A1 |
20100156894 | Holler et al. | Jun 2010 | A1 |
20100220042 | El-Ghoroury et al. | Sep 2010 | A1 |
20100225679 | Guncer | Sep 2010 | A1 |
20100231585 | Weiblen | Sep 2010 | A1 |
20100265385 | Knight et al. | Oct 2010 | A1 |
20100309287 | Rodriguez | Dec 2010 | A1 |
20110058021 | Chen et al. | Mar 2011 | A1 |
20110134227 | Shin | Jun 2011 | A1 |
20110255592 | Sung et al. | Oct 2011 | A1 |
20110261050 | Smolic et al. | Oct 2011 | A1 |
20120033113 | El-Ghoroury et al. | Feb 2012 | A1 |
20120050481 | Chen et al. | Mar 2012 | A1 |
20120069154 | Talstra et al. | Mar 2012 | A1 |
20120105310 | Sverdrup et al. | May 2012 | A1 |
20120183232 | Babacan et al. | Jul 2012 | A1 |
20120213270 | Baraniuk et al. | Aug 2012 | A1 |
20120309455 | Klose et al. | Dec 2012 | A1 |
20120327139 | Margulis | Dec 2012 | A1 |
20130010057 | Borel et al. | Jan 2013 | A1 |
20130050786 | Lucente | Feb 2013 | A1 |
20130077880 | Venkataraman | Mar 2013 | A1 |
20130077882 | Venkataraman et al. | Mar 2013 | A1 |
20130141895 | Alpaslan et al. | Jun 2013 | A1 |
20130222633 | Knight et al. | Aug 2013 | A1 |
20130258451 | El-Ghoroury et al. | Oct 2013 | A1 |
20130282639 | Potkonjak | Oct 2013 | A1 |
20130286053 | Fleck et al. | Oct 2013 | A1 |
20130286178 | Lewis et al. | Oct 2013 | A1 |
20130294504 | Mishourovsky et al. | Nov 2013 | A1 |
20130321581 | El-Ghoroury et al. | Dec 2013 | A1 |
20130342644 | Rusanovskyy | Dec 2013 | A1 |
20140002675 | Duparre et al. | Jan 2014 | A1 |
20140079336 | Venkataraman et al. | Mar 2014 | A1 |
20140092281 | Nisenzon | Apr 2014 | A1 |
20140098189 | Deng et al. | Apr 2014 | A1 |
20140146201 | Knight et al. | May 2014 | A1 |
20140168062 | Katz et al. | Jun 2014 | A1 |
20140210823 | Maguire, Jr. | Jul 2014 | A1 |
20140219558 | Teng | Aug 2014 | A1 |
20140232822 | Venkataraman et al. | Aug 2014 | A1 |
20140267228 | Ofek et al. | Sep 2014 | A1 |
20140285429 | Simmons | Sep 2014 | A1 |
20140292620 | Lapstun | Oct 2014 | A1 |
20140340434 | El-Ghoroury | Nov 2014 | A1 |
20140347361 | Alpaslan | Nov 2014 | A1 |
20140375856 | Kaneko | Dec 2014 | A1 |
20150033539 | El-Ghoroury et al. | Feb 2015 | A1 |
20150178939 | Bradski et al. | Jun 2015 | A1 |
20150201176 | Graziosi | Jul 2015 | A1 |
20150264223 | Akenine-Moller et al. | Sep 2015 | A1 |
20150296203 | Lucente | Oct 2015 | A1 |
20150312560 | Deering et al. | Oct 2015 | A1 |
20150319456 | Le Floch | Nov 2015 | A1 |
20160021355 | Alpaslan et al. | Jan 2016 | A1 |
20160028935 | El-Ghoroury et al. | Jan 2016 | A1 |
20160182782 | El-Ghoroury et al. | Jun 2016 | A1 |
20160191765 | El-Ghoroury et al. | Jun 2016 | A1 |
20160191823 | El-Ghoroury et al. | Jun 2016 | A1 |
20160360177 | Graziosi et al. | Dec 2016 | A1 |
20170142427 | Graziosi | May 2017 | A1 |
20170184776 | El-Ghoroury et al. | Jun 2017 | A1 |
20170264879 | Zhou | Sep 2017 | A1 |
Number | Date | Country |
---|---|---|
101430426 | May 2009 | CN |
CN-103503023 | Jan 2014 | CN |
104081414 | Oct 2014 | CN |
104216128 | Dec 2014 | CN |
1978754 | Oct 2008 | EP |
WO-2008146190 | Dec 2008 | WO |
WO-2010108024 | Sep 2010 | WO |
WO-2011065738 | Jun 2011 | WO |
WO-2012149971 | Nov 2012 | WO |
WO-2013049699 | Apr 2013 | WO |
WO-2015106031 | Jul 2015 | WO |
WO-2016172384 | Oct 2016 | WO |
Entry |
---|
Balogh, Tibor et al., “Real-time 3D light field transmission”, SPIE Photonics Europe, vol. 7724, Apr. 16, 2010, pp. 772406-1 to 772406-7. |
Huang, Fu-Chung et al., “The Light Field Stereoscope: Immersive Computer Graphics via Factored Near-Eye Light Field Displays with Focus Cues”, ACM Transactions on Graphics, vol. 34, No. 4, Article 60, Aug. 2015, pp. 60:1 to 60:12. |
Kovacs, Peter T. et al., “Overview of the Applicability of H.264/MVC for Real-Time Light-Field Applications”, 2014 3DTV-Conference: The True Vision—Capture, Transmission and Display of 3D Video (3DTV-CON), IEEE, Jul. 2, 2014, 4 pp. total. |
Masia, Belen et al., “Display Adaptive 3D Content Remapping”, Computers and Graphics, vol. 37, No. 8, Dec. 1, 2013, pp. 983-996. |
Conti, Caroline et al., “Light-Field Video Coding Using Geometry-Based Disparity Compensation”, 2014 3DTV-Conference: The True Vision—Capture, Transmission and Display of 3D Video (3DTV-CON), IEEE, Jul. 2, 2014, pp. 1-4. |
Dricot, A. et al., “Integral Images Compression Scheme Based on View Extraction”, 2015 23rd European Signal Processing Conference (EUSIPCO), Aug. 31, 2015, pp. 101-105. |
Li, Yun et al., “Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities”, IEEE Transactions on Image Processing, vol. 25, No. 1, Jan. 2016, pp. 80-91. |
Zhang, Cha et al., “A survey on image-based rendering—representation, sampling and compression”, Signal Processing: Image Communication, vol. 19, 2004, pp. 1-28. |
“International Search Report and Written Opinion of the International Searching Authority dated Feb. 2, 2017; International Application No. PCT/US2016/062357”, Feb. 2, 2017. |
Aggoun, Amar et al., “Immersive 3D Holoscopic Video System”, IEEE Multimedia Magazine, Special Issue on 3D Imaging Techniques and Multimedia Applications, vol. 20, No. 1, Jan.-Mar. 2013, pp. 28-37. |
Akeley, Kurt et al., “A Stereo Display Prototype with Multiple Focal Distances”, ACM Trans. Graph. (SIGGRAPH), vol. 23, 2004, pp. 804-813. |
Alpaslan, Zahir Y. et al., “Development and Deployment of a Tiled Full Parallax Light Field Display System”, Proceedings of the SPIE, Applications of Digital Image Processing XXXIX, vol. 9971, Sep. 27, 2016, pp. 99710J-1 to 99710J-8. |
Alpaslan, Zahir Y. et al., “Parametric Characterization of Perceived Light Field Display Resolution”, SID Symposium Digest of Technical Papers, vol. 47, No. 1, May 2016, pp. 1241-1245. |
Alpaslan, Zahir Y. et al., “Small Form Factor Full Parallax Tiled Light Field Display”, Proceedings of Electronic Imaging, SPIE—IS&T, vol. 9391, Feb. 9, 2015, pp. 93910E-1 to 93910E-10. |
Arai, Jun et al., “Integral Three-Dimensional Television Using a 33-Megapixel Imaging System”, Journal of Display Technology, vol. 6, No. 10, Oct. 2010, pp. 422-430. |
Arai, Jun , “Three-Dimensional Television System Based on Spatial Imaging Method Using Integral Photography”, International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2012, May 7-9, 2012, pp. 5449-5452. |
Balogh, Tibor , “The HoloVizio System”, Stereoscopic Displays and Virtual Reality Systems XIII, Proceedings of the SPIE—IS&T Electronic Imaging, vol. 6011, Jan. 27, 2006, pp. 60550U-1 to 60550U-12. |
Bhaskaran, Vasudev , “65.1: Invited Paper: Image/Video Compression—A Display Centric Viewpoint”, SID Symposium Digest of Technical Papers, vol. 38, No. 1, 2008, pp. 990-993. |
Cakmakci, Ozan et al., “Head-Worn Displays: A Review”, Journal of Display Technology, vol. 2, No. 3, Sep. 2006, pp. 199-216. |
Candes, Emmanuel et al., “Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?”, 2004, pp. 1-39. |
Candes, Emmanuel J. et al., “Robust Uncertainty Principles: Exact Signal Reconstruction From Highly Incomplete Frequency Information”, IEEE Transactions on Information Theory, vol. 52, No. 2, Feb. 2006, pp. 489-509. |
Chai, Jin-Xiang et al., “Plenoptic Sampling”, Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques—SIGGRAPH '00, 2000, pp. 307-318. |
Chen, Jianhong et al., “True Color Optical Simulation of Integral Imaging 3D Display”, Proceedings of the International Display Workshops, vol. 21, Dec. 3, 2014, pp. 848-851.
Chen, Wei et al., “New Requirements of Subjective Video Quality Assessment Methodologies for 3DTV”, Video Processing and Quality Metrics 2010 (VPQM), Scottsdale, United States, 2010, 6 pp. total.
Conti, Caroline et al., “Spatial Prediction Based on Self-Similarity Compensation for 3D Holoscopic Image and Video Coding”, 2011 18th IEEE International Conference on Image Processing (ICIP), Sep. 11-14, 2011, pp. 961-964.
Curless, Brian et al., “A Volumetric Method for Building Complex Models from Range Images”, Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, 1996, pp. 1-10.
Donoho, David L., “Compressed Sensing”, IEEE Transactions on Information Theory, vol. 52, No. 4, Apr. 2006, pp. 1289-1306.
El-Ghoroury, Hussein S. et al., “Quantum Photonic Imager (QPI): A New Display Technology and Its Applications”, Proceedings of the International Display Workshops, vol. 21, Dec. 3, 2014, pp. 1202-1205.
El-Ghoroury, Hussein S. et al., “Quantum Photonic Imager (QPI): A Novel Display Technology that Enables more than 3D Applications”, SID Symposium Digest of Technical Papers, vol. 46, No. 1, May 2015, pp. 371-374.
Fehn, Christoph, “A 3D-TV Approach Using Depth-Image-Based Rendering (DIBR)”, Proceedings of Picture Coding Symposium, San Francisco, CA, USA, Dec. 2004, 6 pp. total.
Fehn, Christoph, “Depth-Image-Based Rendering (DIBR), Compression and Transmission for a New Approach on 3D-TV”, Proc. of SPIE Stereoscopic Displays and Virtual Reality Systems XI, 2004, pp. 93-104.
Forman, Matthew C. et al., “Objective Quality Measurement of Integral 3D Images”, Proc. SPIE 4660, Stereoscopic Displays and Virtual Reality Systems IX, 155, 2002, 8 pp. total.
Furihata, Hisayoshi et al., “Novel view synthesis with residual error feedback for FTV”, Stereoscopic Displays and Applications XXI, Proceedings of the SPIE—IS&T Electronic Imaging, vol. 7542, Jan. 2010, pp. 75240K-1 to 75240K-12.
Gilliam, Christopher et al., “Adaptive Plenoptic Sampling”, 2011 18th IEEE International Conference on Image Processing, 2011, pp. 2581-2584.
Gortler, Steven J. et al., “The Lumigraph”, Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '96), 1996, pp. 43-52.
Graziosi, Danillo B. et al., “Compression for Full-Parallax Light Field Displays”, Proceedings of SPIE—The International Society for Optical Engineering, Feb. 2014, 14 pp. total.
Graziosi, Danillo B. et al., “Compression for Full-Parallax Light Field Displays”, Stereoscopic Displays and Applications XXV, Proc. of SPIE—IS&T Electronic Imaging, vol. 9011, Mar. 6, 2014, pp. 90111A-1 to 90111A-14.
Graziosi, Danillo B. et al., “Depth assisted compression of full parallax light fields”, Stereoscopic Displays and Applications XXVI, Proceedings of SPIE—IS&T Electronic Imaging, vol. 9391, Mar. 17, 2015, pp. 93910Y-1 to 93910Y-15.
Guenter, Brian et al., “Foveated 3D Graphics”, ACM SIGGRAPH Asia, Nov. 2012, 10 pp. total.
Halle, Michael W. et al., “Fast computer graphics rendering for full parallax spatial displays”, Proc. SPIE 3011, Practical Holography XI and Holographic Materials III, Apr. 10, 1997, 8 pp. total.
Halle, Michael W., “Multiple Viewpoint Rendering for Three-Dimensional Displays”, PhD Thesis, Program in Media Arts and Sciences, School of Architecture and Planning, Massachusetts Institute of Technology, 1997, 164 pp. total.
Heide, Felix et al., “Adaptive Image Synthesis for Compressive Displays”, Proc. of SIGGRAPH 2013 (ACM Transactions on Graphics), vol. 32, No. 4, 2013, 11 pp. total.
Hoffman, David M. et al., “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue”, Journal of Vision, vol. 8, No. 3, 2008, pp. 1-30.
Holliman, Nicolas S. et al., “Three-Dimensional Displays: A Review and Applications Analysis”, IEEE Transactions on Broadcasting, vol. 57, No. 2, Jun. 2011, pp. 362-371.
Hoshino, H. et al., “Analysis of resolution limitation of integral photography”, J. Opt. Soc. Am. A, vol. 15, No. 8, Aug. 1998, pp. 2059-2065.
Hu, Xinda et al., “Design and Assessment of a Depth-Fused Multi-Focal-Plane Display Prototype”, Journal of Display Technology, vol. 10, No. 4, Apr. 2014, pp. 308-316.
Hua, Hong et al., “A 3D integral imaging optical see-through head-mounted display”, Optics Express, vol. 22, No. 11, May 28, 2014, pp. 13484-13491.
International Organisation for Standardisation, “Call for Proposals on 3D Video Coding Technology”, ISO/IEC JTC1/SC29/WG11, MPEG2011/N12036, Geneva, Switzerland, Mar. 2011, 20 pp. total.
International Organisation for Standardisation, “Use Cases and Requirements on Free-viewpoint Television (FTV)”, ISO/IEC JTC1/SC29/WG11, MPEG2013/N14104, Geneva, Switzerland, Oct. 2013, 12 pp. total.
International Telecommunication Union, “H.264, Series H: Audiovisual and MultiMedia Systems, Infrastructure of audiovisual services—Coding of moving video, Advanced video coding for generic audiovisual services”, ISO/IEC 14496-10:2003, Coding of Audiovisual Objects—Part 10: Advanced Video Coding, ITU-T Recommendation H.264, Mar. 2005, 343 pp. total.
Isaksen, Aaron et al., “Dynamically Reparameterized Light Fields”, Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '00), 2000, pp. 297-306.
Iwadate, Yuichi et al., “Generating Integral Image from 3D Object by Using Oblique Projection”, 18th International Display Workshops 2011 (IDS '11), Dec. 7-9, 2011, pp. 269-272.
Iwasawa, Shoichiro et al., “REI: an automultiscopic projection display”, Proceedings of 3DSA2013, Selected paper 1, 2013, pp. 1-4.
Jang, Jae-Young et al., “3D Image Correlator using Computational Integral Imaging Reconstruction Based on Modified Convolution Property of Periodic Functions”, Journal of the Optical Society of Korea, vol. 18, No. 4, Aug. 2014, pp. 388-394.
Javidi, Bahram et al., “Three-Dimensional Holographic Image Sensing and Integral Imaging Display”, Journal of Display Technology, vol. 1, No. 2, Dec. 2005, pp. 341-346.
Kim, Changil, “Scene Reconstruction from a Light Field”, https://graphics.ethz.ch/~kimc/publications/changil-kim-ms-thesis-2010-compressed.pdf, 2010, 72 pp. total.
Koike, T., “Theory, Design, and Application of 4-D Light Field Display”, Ph.D. Dissertation, University of Tokyo, Mar. 23, 2009, 133 pp. total.
Kundu, Shinjini, “Light Field Compression Using Homography and 2D Warping”, 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Mar. 25-30, 2012, pp. 1349-1352.
Lanman, Douglas et al., “Near-Eye Light Field Displays”, ACM Transactions on Graphics (TOG), vol. 32, Issue 6, Article 220, Nov. 2013, 27 pp. total.
Lee, Cheon et al., “View Synthesis using Depth Map for 3D Video”, Proceedings of 2009 APSIPA Annual Summit and Conference, Sapporo, Japan, 2009, pp. 350-357.
Levoy, Marc et al., “Light Field Rendering”, Computer Graphics, SIGGRAPH 96 Proceedings, 1996, pp. 31-42.
Lippmann, M. G., “Epreuves reversibles. Photographies integrales.”, Comptes-Rendus Academie des Sciences, vol. 146, 1908, pp. 446-451.
Liu, Shujie et al., “New Depth Coding Techniques With Utilization of Corresponding Video”, IEEE Transactions on Broadcasting, vol. 57, No. 2, Jun. 2011, pp. 551-561.
Lucente, M., “Computational holographic bandwidth compression”, IBM Systems Journal, vol. 35, Nos. 3&4, 1996, pp. 349-365.
Lucente, Mark, “Diffraction-Specific Fringe Computation for Electro-Holography”, Doctoral Thesis Dissertation, MIT Dept. of Electrical Engineering and Computer Science, Sep. 1994, 171 pp. total.
Lucente, Mark, “Holographic bandwidth compression using spatial subsampling”, Optical Engineering, Special Section on Electronic Holography, Jun. 1996, pp. 1-25.
Lucente, Mark, “Interactive Computation of Holograms Using a Look-up Table”, Journal of Electronic Imaging, vol. 2, No. 1, pp. 28-34, Jan. 1993, 14 pp. total.
Lucente, Mark, “Interactive holographic displays: the first 10 years”, Book chapter for “Holography—The First 50 Years”, Draft: 2003, 17 pp. total.
Lucente, Mark, “Interactive three-dimensional holographic displays: seeing the future in depth”, For special issue of SIGGRAPH's Computer Graphics publication on Current, New, and Emerging Display Systems, May 1997, 17 pp. total.
Magnor, Marcus et al., “Data Compression for Light-Field Rendering”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 10, No. 3, Apr. 2000, pp. 338-343.
Maimone, Andrew et al., “Computational Augmented Reality Eyeglasses”, 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Oct. 1-4, 2013, pp. 29-38.
Maimone, Andrew et al., “Focus 3D: Compressive Accommodation Display”, ACM Transactions on Graphics, vol. 32, No. 5, 2013, 13 pp. total.
Malvar, Henrique S. et al., “Lifting-based reversible color transformations for image compression”, Proc. of SPIE of Applications of Digital Image Processing, vol. 7073, 2008, pp. 707301-1 to 707301-10.
Marwah, Kshitij et al., “Compressive Light Field Photography using Overcomplete Dictionaries and Optimized Projections”, Proc. of SIGGRAPH 2013 (ACM Transactions on Graphics, 32, 4), 2013, 12 pp. total.
Masia, Belen et al., “A survey on computational displays: Pushing the boundaries of optics, computation, and perception”, Computers & Graphics, vol. 37, 2013, pp. 1012-1038.
Matsubara, Rie et al., “Light field display simulation for light field quality assessment”, Proceedings of the Stereoscopic Displays and Applications Conference XXVI (SPIE—IS&T), vol. 9391, Feb. 9-11, 2015, pp. 93910G-1 to 93910G-15.
Microsoft, “Microsoft HoloLens”, downloaded from https://www.microsoft.com/en-us/hololens, admitted prior art, 5 pp. total.
Mori, Yuji et al., “View generation with 3D warping using depth information for FTV”, Signal Processing: Image Communication, vol. 24, 2009, pp. 65-72.
Morvan, Yannick et al., “Platelet-based coding of depth maps for the transmission of multiview images”, Proceedings of the SPIE, Stereoscopic Displays and Applications, vol. 6055, Feb. 2006, 12 pp. total.
Ng, Ren, “Fourier Slice Photography”, ACM Trans. Graph., vol. 24, No. 3, Jul. 2005, pp. 735-744.
Oculus VR, LLC, “Oculus Gear VR”, downloaded from https://www.oculus.com/gear-vr/, admitted prior art, 9 pp. total.
Oculus VR, LLC, “Oculus Rift”, downloaded from https://www.oculus.com/rift/, admitted prior art, 15 pp. total.
Oh, Kwan-Jung et al., “Depth Reconstruction Filter and Down/Up Sampling for Depth Coding in 3-D Video”, IEEE Signal Processing Letters, vol. 16, No. 9, Sep. 2009, pp. 747-750.
Oh, Kwan-Jung et al., “Hole-Filling Method Using Depth Based In-Painting for View Synthesis in Free Viewpoint Television (FTV) and 3D Video”, Picture Coding Symposium (PCS) 2009, May 6-8, 2009, 4 pp. total.
Ohm, Jens-Rainer, “Overview of 3D Video Coding Standardization”, Proceedings of the Three Dimensional Systems and Applications (3DSA) International Conference 2013, 2013, pp. 1-4.
Olsson, Roger et al., “A Combined Pre-Processing and H.264-Compression Scheme for 3D Integral Images”, 2006 IEEE International Conference on Image Processing, 2006, pp. 513-516.
Olsson, Roger et al., “A Depth Dependent Quality Metric for Evaluation of Coded Integral Imaging Based 3D-Images”, 3DTV Conference, 2007, 4 pp. total.
Park, Jae-Hyeung et al., “Recent progress in three-dimensional information processing based on integral imaging”, Applied Optics, vol. 48, No. 34, Dec. 1, 2009, pp. H77-H94.
Piao, Yan et al., “Sub-sampling Elemental Images for Integral Imaging Compression”, International Conference on Audio Language and Image Processing (ICALIP), 2010, pp. 1164-1168.
Razavi, R. et al., “Low-delay video control in a personal area network for augmented reality”, IET Image Processing, vol. 2, No. 3, 2008, pp. 150-162.
Reed, Nathan, “Depth Precision Visualized”, retrieved online at https://developer.nvidia.com/content/depth-precision-visualized, Jul. 15, 2015, 11 pp. total.
Shi, Shasha et al., “Efficient Compression Method for Integral Images Using Multi-View Video Coding”, 2011 18th IEEE International Conference on Image Processing, 2011, pp. 137-140.
Shum, Heung-Yeung et al., “Survey of Image-Based Representations and Compression Techniques”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, No. 11, Nov. 2003, pp. 1020-1037.
Sjostrom, Marten et al., “Improved Depth-Image-Based Rendering Algorithm”, 3DTV Conference: The True Vision—Capture, Transmission and Display of 3D Video (3DTV-CON), 2011, 4 pp. total.
Sloan, Peter-Pike et al., “Time Critical Lumigraph Rendering”, Proceedings of the 1997 ACM SIGGRAPH Symposium on Interactive 3D Graphics, 1997, 7 pp. total.
Smolic, Aljoscha et al., “Coding Algorithms for 3DTV—A Survey”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, No. 11, Nov. 2007, pp. 1606-1621.
Solh, Mashhour et al., “Depth Adaptive Hierarchical Hole-Filling for DIBR-Based 3D Videos”, Proceedings of the SPIE, Three-Dimensional Image Processing (3DIP) and Applications II, vol. 8290, 2012, pp. 829004-1 to 829004-11.
Sullivan, Gary J. et al., “The H.264/AVC Advanced Video Coding Standard: Overview and Introduction to the Fidelity Range Extensions”, SPIE Conference on Applications of Digital Imaging Processing XXVII, Special Session on Advances in the New Emerging Standard: H.264/AVC, Aug. 2004, pp. 1-21.
Sutherland, Ivan E., “A head-mounted three dimensional display”, Proceedings of the Fall Joint Computer Conference, 1968, pp. 757-764.
Takahashi, Keita, “Theoretical Analysis of View Interpolation With Inaccurate Depth Information”, IEEE Transactions on Image Processing, vol. 21, No. 2, Feb. 2012, pp. 718-732.
Takaki, Yasuhiro, “High-Density Directional Display for Generating Natural Three-Dimensional Images”, Proceedings of the IEEE, vol. 94, No. 3, Mar. 2006, pp. 654-663.
Tanimoto, Masayuki et al., “Reference Software of Depth Estimation and View Synthesis for FTV/3DV”, International Organisation for Standardisation, ISO/IEC JTC1/SC29/WG11, MPEG2008/M15836, Busan, Korea, Oct. 2008, 5 pp. total.
Texas Instruments, “DLP Technology for Near Eye Display, Application Report”, Literature No. DLPA051A, available online at http://www.ti.com/lit/wp/dlpa051a/dlpa051a.pdf, Sep. 2014, 18 pp. total.
Tian, Dong et al., “View Synthesis Techniques for 3D Video”, Applications of Digital Image Processing XXXII, Proceedings of the SPIE, vol. 7443, 2009, pp. 74430T-1 to 74430T-11.
Urey, Hakan et al., “State of the Art in Stereoscopic and Autostereoscopic Displays”, Proceedings of the IEEE, vol. 99, No. 4, Apr. 2011, pp. 540-555.
Vetro, Anthony et al., “Overview of the Stereo and Multiview Video Coding Extensions of the H.264/MPEG-4 AVC Standard”, Proceedings of the IEEE, vol. 99, No. 4, Apr. 2011, pp. 626-642.
Walls, Frederick et al., “VESA Display Stream Compression”, downloaded at http://www.vesa.org/wp-content/uploads/2014/04/VESA_DSC-ETP200.pdf, Mar. 3, 2014, pp. 1-5.
Wang, Zhou et al., “Image Quality Assessment: From Error Visibility to Structural Similarity”, IEEE Transactions on Image Processing, vol. 13, No. 4, Apr. 2004, pp. 600-612.
Wegner, Krzysztof et al., “Enhanced View Synthesis Reference Software (VSRS) for Free-viewpoint Television”, International Organisation for Standardisation, ISO/IEC JTC1/SC29/WG11, MPEG2013/M31520, Geneva, Switzerland, Oct. 2013, 4 pp. total.
Wetzstein, Gordon et al., “Compressive Light Field Displays”, IEEE Computer Graphics and Applications, vol. 32, Issue 5, Sep./Oct. 2012, pp. 6-11.
Wetzstein, Gordon et al., “Tensor Displays: Compressive Light Field Synthesis using Multilayer Displays with Directional Backlighting”, 2012 Proceedings of ACM SIGGRAPH Transactions on Graphics (TOG), vol. 31, Issue 4, Article 80, Jul. 2012, 11 pp. total.
Wikipedia, “List of refractive indices”, https://en.wikipedia.org/wiki/List_of_refractive_indices, Dec. 7, 2003, 5 pp. total.
X Company, “Glass”, downloaded from http://www.google.com/glass/start/, which redirects to https://x.company/glass/, admitted prior art, 6 pp. total.
Yan, P. et al., “Integral image compression based on optical characteristic”, IET Computer Vision, vol. 5, No. 3, 2011, pp. 164-168.
Yang, Lu et al., “Artifact reduction using reliability reasoning for image generation of FTV”, Journal of Visual Communication and Image Representation, vol. 21, 2010, pp. 542-560.
Yang, Lu et al., “Error Suppression in View Synthesis Using Reliability Reasoning for FTV”, 3DTV Conference: The True Vision—Capture, Transmission and Display of 3D Video (3DTV-CON), Jun. 2010, 4 pp. total.
Yi, Faliu et al., “Fast 3D Computational Integral Imaging Using Graphics Processing Unit”, Journal of Display Technology, vol. 8, No. 12, Dec. 2012, pp. 714-722.
Yi, Faliu et al., “Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit”, Applied Optics, vol. 53, No. 13, May 1, 2014, pp. 2777-2786.
Yoo, Hoon, “Artifact analysis and image enhancement in three-dimensional computational integral imaging using smooth windowing technique”, Optics Letters, vol. 36, No. 11, Jun. 1, 2011, pp. 2107-2109.
Zhang, Cha et al., “Compression of Lumigraph with Multiple Reference Frame (MRF) Prediction and Just-in-time Rendering”, Proceedings of the 2000 Data Compression Conference (DCC 2000), Snowbird, UT, USA, Mar. 28-30, 2000, IEEE Comput. Soc., Los Alamitos, CA, USA, pp. 253-262.
Zhao, Yin et al., “Boundary Artifact Reduction in View Synthesis of 3D Video: From Perspective of Texture-Depth Alignment”, IEEE Transactions on Broadcasting, vol. 57, No. 2, Jun. 2011, pp. 510-522.
Zhao, Yin et al., “Suppressing Texture-Depth Misalignment for Boundary Noise Removal in View Synthesis”, 28th Picture Coding Symposium (PCS2010), Nagoya, Japan, Dec. 8-10, 2010, pp. 30-33.
Nam, Dongkyung et al., “Flat Panel Light-Field 3-D Display: Concept, Design, Rendering, and Calibration”, Proceedings of the IEEE, vol. 105, No. 5, May 2017, pp. 876-891.
“3-D Display Technologies—New Trends of Space Expression”, CMC Publishing Co., Ltd., Toshio Honda, Supervising Editor, Dec. 4, 2013, pp. 80-84.
Shimizu, Shinya et al., “View Scalable Multiview Video Coding Using 3-D Warping With Depth Map”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, No. 11, Nov. 2007, pp. 1485-1495.
“Extended Search Report dated May 20, 2019; European Patent Application No. 16867077.6”, May 20, 2019.
Li, Yun et al., “Coding of Plenoptic Images by Using a Sparse Set and Disparities”, 2015 IEEE International Conference on Multimedia and Expo (ICME), Jun. 29, 2015, pp. 1-6.
Number | Date | Country
---|---|---
20170142427 A1 | May 2017 | US

Number | Date | Country
---|---|---
62255744 | Nov 2015 | US