1. Field of the Invention
This invention relates generally to image and video displays, and more particularly to 3D light field imaging systems and methods of constructing 3D light field imaging systems with a large viewing angle, high frequency and extended depth. The term “light field” describes the transmission and modulation of light, including its direction, amplitude, frequency and phase, and therefore encompasses imaging systems that utilize techniques such as holography, integral imaging, stereoscopy, multi-view imaging, free-viewpoint TV (FTV) and the like. The invention described herein details easy-to-manufacture, high-brightness, wide-viewing-angle 3D light field imaging systems that exhibit no color break-up and do not sacrifice image resolution or image depth.
2. Prior Art
3D displays have been gaining popularity since the introduction of glasses-based 3D TVs by all the major TV manufacturers in 2010. The biggest shortcoming of the currently available technology has been identified as the 3D glasses themselves (see Refs. [6], [7] and [8]), which can be categorized as either active or passive. In general, glasses-based 3D display technology is uncomfortable for viewers to use for long periods and poses challenges for people who require prescription glasses.
Autostereoscopic displays use directional modulators (such as parallax barriers or lenticular sheets) attached to a display surface to create a 3D effect without requiring glasses. Commercially available autostereoscopic displays typically use horizontal parallax to present the 3D information to the viewer. The main problems of such autostereoscopic display technology are the limited viewing angle and the limited resolution per view, resulting in a lower quality 3D image. Using this autostereoscopic technology, a display's pixels are divided into two or more views to create motion parallax in the horizontal direction. Within the limits of a predefined viewing box, a viewer can see a stereoscopic 3D picture and in some cases even a 3D view of the same scene from a slightly different angle. However, because the pixels are divided equally among the views, each view has a much smaller resolution than the actual 3D image, resulting in low resolution images. In addition, within the viewing box, the viewer has to keep his or her head vertical, otherwise the 3D effect disappears.
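The resolution penalty described above is simple to quantify. The following sketch (illustrative only; the panel size and view count are example numbers, not figures from this document) shows how the per-view resolution of a horizontal-parallax-only autostereoscopic display shrinks as pixels are divided among views:

```python
# Hypothetical illustration: dividing a panel's horizontal pixels among N views
# in a lenticular autostereoscopic display. Only the horizontal pixel count is
# divided, since the parallax is horizontal-only.

def per_view_resolution(panel_h: int, panel_v: int, n_views: int):
    """Resolution available to each view of an N-view lenticular display."""
    return panel_h // n_views, panel_v

# A 1920x1080 panel split into 8 views leaves only 240 horizontal pixels per view.
print(per_view_resolution(1920, 1080, 8))  # -> (240, 1080)
```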
A more natural 3D effect is achieved with full parallax 3D display technology. Two of the most widely sought after full parallax digital 3D display technologies are integral imaging and holography. In addition to horizontal parallax, these technologies also have vertical parallax, such that a vertical movement of the viewer will show a different view of the 3D scene. Full parallax displays generally have an order of magnitude or more views than horizontal parallax only displays. Arranging these views densely creates a very natural 3D image that does not change by a user moving or tilting his head. Two of the main differences between the digital holography and integral imaging technologies are the view generation and illumination requirements. In digital holography, view generation requires Fourier transformation and is more computationally intensive than integral imaging view generation, which predominantly requires pixel shuffling and relatively much less computation. Digital holography also requires a coherent light source for illumination of its fringes which limits the brightness and size of the 3D image achievable. Therefore integral imaging is the preferred method when it comes to view generation and illumination requirements in pragmatically generating full parallax 3D images.
In the characteristic equations of integral imaging, RI is the resolution of the image, PI is the pixel pitch of the image, Δzm is the maximum achievable depth in the generated 3-D image, Ω is the viewing angle, Rx is the spatial modulator device resolution and Px is the spatial modulator device pixel pitch. As seen from Eq. 1, Eq. 2 and Eq. 3, as the spatial modulator device resolution increases, the overall 3-D integral image quality increases. This means the best image quality is obtained with the smallest pixel pitch. This relationship can be interpreted as the total image quality bandwidth of an integral imaging display. For an integral imaging display this quality is limited by the pixel pitch of the spatial modulator, and to improve any one of these parameters a designer is usually forced to make the combination of the other two parameters worse.
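Eq. 1 through Eq. 3 themselves do not survive in this text. As a hedged sketch only (the exact form below is an assumption consistent with the surrounding discussion, not a verbatim reconstruction of the missing equations), the trade-off can be summarized as a fixed quality budget set by the spatial modulator:

```latex
% Assumed summary of the missing Eq. 1--Eq. 3 (not a verbatim reconstruction):
% image resolution R_I, achievable depth \Delta z_m and viewing angle \Omega
% share a budget K fixed by the modulator resolution R_x and pixel pitch P_x.
\[
  R_I \cdot \Delta z_m \cdot \Omega \;\le\; K(R_x, P_x),
  \qquad
  \frac{\partial K}{\partial R_x} > 0,
  \quad
  \frac{\partial K}{\partial P_x} < 0 .
\]
```

Improving any one factor on the left-hand side without raising $R_x$ or shrinking $P_x$ forces the product of the other two factors down, which is the trade-off the text describes.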
Viewing Angle Improvement Methods
There have been attempts at improving the viewing angle of integral imaging displays without worsening the other two parameters, but these attempts were impractical to manufacture, offered little improvement, or degraded an aspect of a display not explained in the characteristic equation so much that the resulting image was unusable.
Viewing angle improvement methods for integral imaging displays can be categorized into five main methods: mask based methods (i.e., moving mask, sliding mask, two displays and a mask, movable pinhole array, two masks, multi axis relay, moving lens and tilted barriers, orthogonal polarization switching sheet, two displays with orthogonal polarization switching, transparent display device as a dynamic mask, and three displays), micro convex mirror arrays, multi-viewer tracking, high refractive index media, and curved lens array methods (i.e., curved lens array with barrier and curved projection with a Fresnel lens).
In mask based methods, a mask is placed between the display device and the lens array so that each unmasked lens has access to an elemental image region that is larger than its diameter.
The main problem associated with the mask based methods is that at any time only half of the lenses in the lens array have access to all the pixels in the display device. This means that either the pixels in the display device have to be shared using time multiplexing, increasing the complexity and the processing requirements of the device, or the number of pixels has to be increased by adding another display device, increasing the space occupied by the display system. The maximum reported viewing angle achieved by mask based methods is 60 degrees.
Micro convex mirror array systems project an array of elemental images onto an array of acrylic convex lenses. The 4% reflection that comes from these lenses creates a very dim virtual image with up to 70 degrees viewing angle. In virtual mode the lens array is between the observer and the 3D image (i.e., the image appears to be floating behind the lens array). There are four problems associated with this method. The first problem is the manufacturability of micro convex mirror arrays: as of now there is no practical way to manufacture them, so the method has to rely on the reflection from a convex lens array. The second problem is brightness: because the reflectance of the convex lenses is only 4%, the image is very dim. The third problem is that supporting only virtual-mode integral images limits the usability and attractiveness of the display. The fourth problem is the space taken by the system: as with any front projection system, the projectors have to be located behind the observers to project the image onto the screen, limiting the usability of the display system.
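The 4% figure cited above is the normal-incidence Fresnel reflectance of an air/acrylic interface. The short check below confirms it (the refractive index used for acrylic is a typical literature value, assumed here rather than taken from this document):

```python
# Fresnel normal-incidence reflectance at a dielectric interface, illustrating
# why the image formed by reflection off an acrylic convex lens array is so dim.

def fresnel_reflectance(n1: float, n2: float) -> float:
    """Normal-incidence power reflectance between media of index n1 and n2."""
    return ((n2 - n1) / (n2 + n1)) ** 2

r = fresnel_reflectance(1.0, 1.49)  # air -> acrylic (n ~ 1.49, assumed)
print(f"{r:.1%}")  # roughly 4% of the projected light is reflected
```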
Multi-viewer tracking methods determine an observer's position and adjust the elemental images to appear undistorted at that position. Tracking the user and adjusting the elemental images can be done for more than one user and requires time multiplexing. The viewing angle achievable by this method is limited by the amount of adjustment that can be made to the elemental images and by the lens numerical aperture. There are three problems associated with this method. First, time multiplexing increases the image processing requirements of the system and reduces the brightness of the images available to each observer. Second, an external system is needed to track the movements of the observer(s) and synchronize them with the elemental image generation, increasing the space requirements and complexity of the system. Third, this method creates significant crosstalk among the lenses in the lens array, degrading image quality.
In the high refractive index medium method, a high index medium is inserted between the display device and a plano-convex lens array. This arrangement creates a system with increased numerical aperture, which translates into wider viewing angles. The maximum viewing angle reported by this method is 64 degrees. There are two problems associated with this method. First, because the numerical aperture is increased, the image depth and/or pixel size suffers. Second, as the refractive index of the medium increases, its transmissivity decreases, reducing the total brightness of the system.
In curved lens array methods, a curved lens array, or a flat lens array combined with a large aperture lens, is used to create a real mode integral image. The viewing angle is enhanced by the added curvature from the arrangement of the lenses in the lens array or from the large aperture lens. This is a rear projection based method in which barriers are used between the lenses in the lens array to prevent crosstalk. The maximum reported viewing angle for this method is 60 degrees. There are disadvantages to this method. First, manufacturing difficulties prevent the making of thin curved lens arrays and large aperture lenses. Second, because this is a rear projection method, the space requirements are larger than those of regular integral imaging displays.
In addition to the specific problems of the prior art methods described above, all of these methods share common image quality problems arising from the display technology they use. When the underlying display technology is flat panel LCD or LCOS projection, the pixels consist of spatially adjacent red, green and blue sub-pixels. In the 3D image these sub-pixels create color break-up and reduce the quality of the image. When the underlying technology is DLP projection, the color break-up can be reduced by overlapping the red, green and blue components of the image; however, the light source and illumination optics required to operate this technology, combined with the complex electronics synchronization and projection optics, require a large space to operate, limiting the usefulness of the technology.
Depth and Resolution Improvement Methods
Published depth improvement methods for integral imaging displays can be characterized into nine different methods. The most basic depth improvement method is the depth priority integral imaging in which the lens array is located at its focal length away from the display device. This enables the display system to achieve the highest depth of field by displaying real and virtual images at the same time.
In these methods, lenslets with different focal lengths are arranged spatially and combined with a time-multiplexed movement of the lens array; the lenslets increase the depth of field by adding multiple central depth planes.
Multi layered display devices work by using two or more display devices (some transparent) to create multiple central depth planes and overlapping marginal depth planes.
By zooming the elemental images electronically, the depth can be controlled. If this is done fast enough with time multiplexing, the depth can be controlled by changing the elemental images alone, although the image resolution also changes.
A variation of the multiple display method is multiple displays with time multiplexing. Usually one device is dedicated to displaying a virtual image and another to displaying a real image, and by using masks the viewing angle is also improved.
Composite lens array or stepped lens array based systems increase the number of central depth planes by fabricating elemental lenses on two different lens planes. The main idea is to change the lens-to-display distance. The lenses can also be moved with time multiplexing to fill the whole volume.
Similar effects can be achieved using different optical path lengths. Examples of this include using polarization devices with which image planes are multiplexed with two different central depth planes using a beam splitter. In another method a mirror barrier array which rotates mirrors by 45 degrees increases the optical path length from the display to the lens array and causes a new central depth plane to materialize.
In the following description, like drawing reference numerals are used for the like elements, even in different drawings. The matters defined in the description, such as detailed construction and elements, are provided to assist in a comprehensive understanding of the exemplary embodiments. However, the present invention can be practiced without those specifically defined matters. Also, well-known functions or constructions are not described in detail since they would obscure the invention with unnecessary detail. In order to understand the invention and to see how it may be carried out in practice, a few embodiments of it will now be described, by way of non-limiting example only, with reference to accompanying drawings, in which:
The invention described herein details an easy to manufacture, high brightness, no color break up, wide viewing angle integral imaging system that does not sacrifice image resolution or image depth.
With the availability of high brightness, vertical stacking, small pixel pitch displays, Ref. [10], [12] and [13], a full parallax integral imaging display with a wide viewing angle and improved picture quality is feasible. This invention presents a method of improving the viewing angle of an integral imaging display without degrading the image resolution or the image depth. The invention eliminates the color distortions found in other integral imaging displays and achieves very high brightness in a very small volume.
These new types of emissive solid state displays have 10 μm or smaller pixel pitch, high dynamic range, and fast modulation speed, Refs. [12], [13]. These displays, which can be driven by multi-core matched instruction set processors, Ref. [14], bonded to their photonic layer, are able to perform complex image processing operations on their input data without the need for a supporting graphics computer. Examples of these complex processing operations include light field rendering, video decompression or video decoding, Refs. [4] and [5], real time color and luminance uniformity correction, color gamut adjustments, Ref. [15], gamma adjustments and the like. Displays that have their own video decompression capabilities can accept compressed input, which makes them ideal for use in full parallax light field imaging given the high bandwidth requirements imposed by the full parallax property. Since these emissive solid state displays with small pixel pitch require a very small number of connections (power, input data and some control lines) to operate, they can be utilized in unique spatio-optical, Ref. [16], and spatio-temporal, Ref. [17], light field modulator configurations. In these unique spatio-optical and spatio-temporal light field modulator configurations, micro patterned screens can be combined with the emissive displays, and the combined structure can be moved with the help of gimbals to increase the total viewing angle, the total number of directional light rays, the total depth displayed by the display and the displayed image resolution.
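The bandwidth argument above can be made concrete with a back-of-the-envelope calculation. The sketch below (all numbers are hypothetical example parameters, not specifications from this document) shows why an uncompressed full parallax light field quickly exceeds any practical link rate, making on-display decompression attractive:

```python
# Raw data rate of a full-parallax light field display: every hogel (spatial
# sample) emits many directional views, so the pixel count multiplies out fast.

def raw_light_field_rate_gbps(hogels_x: int, hogels_y: int,
                              views_x: int, views_y: int,
                              bits_per_pixel: int = 24, fps: int = 60) -> float:
    """Uncompressed bit rate in Gbit/s for a given hogel/view configuration."""
    bits_per_second = (hogels_x * hogels_y * views_x * views_y
                       * bits_per_pixel * fps)
    return bits_per_second / 1e9

# Even a modest 500x500-hogel display with 50x50 views per hogel:
print(raw_light_field_rate_gbps(500, 500, 50, 50))  # -> 900.0 Gbit/s
```

At 900 Gbit/s uncompressed, accepting compressed input and decoding at the display, as the text describes, is essentially mandatory.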
This invention details new methods of using emissive solid state displays with small pixel pitch and processing capabilities to create full parallax light field displays.
Prior art methods described in the previous section cannot be used to create practical displays due to their limitations by the use of displays with low dynamic range, low brightness, low modulation speed, and no compressed input capability. The invention presented herein makes use of an imager with high dynamic range, high brightness, high modulation speed and compressed input capability. Therefore it can overcome all the prior art limitations in practical ways.
In one embodiment of the present invention, the imager provides two dimensional (2D) modulation of the pixels spatially in the imager plane, while the micro patterned screen attached to the imager provides three dimensional (3D) modulation by modulating the imager pixels directionally with collimated, converging or diverging light beams, where the pixel location determines the direction of the modulation and the distance between the imager and the screen determines the collimation of the light beam. Thus in one embodiment of this invention the imager directly provides a 2D modulated image to the variable focal length lens assembly, and in another embodiment the combination of the imager and the micro patterned screen provides a 3D modulated image to the variable focal length lens assembly. The variable focal length lens assembly modulates the two dimensional image in a direction perpendicular to the imager axis and works as a floating device (or relay) to create a real image between the lens assembly and the viewer. An important aspect of the variable focal length lens assembly of the present invention is that it is able to change its focal length at a rate that would not be perceived by the human visual system (HVS), for example at a rate of change of at least 60 Hz. As the focal length of the lens assembly changes, at a rate of at least 60 Hz, the two dimensional or three dimensional image that emanates from the imager and micro patterned screen is placed at a different depth location, while the change in the depth at which the images are formed is not perceived by the HVS. When the image input to the variable focal length lens assembly is a two dimensional (2D) image, the fast modulation of the lens focal length combined with a fast modulation of the 2D image on the imager creates a series of 2D images arranged back to back in a plane perpendicular to the imager plane.
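The depth-placement behavior of the variable focal length relay can be sketched with the thin-lens equation. This is an idealized model, not the actual lens assembly of the invention; the object distance and focal-length sweep values below are illustrative assumptions:

```python
# Sketch of how a fast variable-focal-length relay places successive frames at
# different depths (thin-lens model; u and the focal sweep are assumed values).

def image_distance(f_mm: float, u_mm: float) -> float:
    """Thin-lens equation 1/f = 1/u + 1/v, solved for image distance v."""
    return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

u = 100.0  # imager (object) distance from the lens, in mm
for f in (40.0, 45.0, 50.0):  # three focal-length states within one 60 Hz sweep
    print(f"f={f} mm -> image at {image_distance(f, u):.1f} mm")
```

Sweeping f from 40 mm to 50 mm moves the real image from about 67 mm to 100 mm, so a sequence of 2D frames synchronized to the sweep lands back to back at different depths, as the text describes.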
The micro patterned screen functions as an optical interface and as a directional light modulator that enables the use of a larger or smaller diameter lens assembly and also fills the gaps between the image slices by giving them a small volume. The combination of the imager, the micro patterned screen and the lens assembly is enough to create a volumetric image formed from slices of shallow depth three dimensional images. However, because of the limited directional modulation capabilities of a micro patterned screen, this volumetric 3D image will have a very limited viewing angle and will not be able to display enough perspectives to create a floating image sensation. When the image input to the variable focal length lens assembly is three dimensional (3D), the variable focal length lens assembly can be programmed to arrange multiple volumetric images back to back in an edge matched, overlapped or non-matching configuration to create larger volumetric images. In both 2D and 3D input modes, the modulated images can be intentionally separated in space to achieve data compression or power savings without affecting the perceived image quality, since the human visual system can blend multiple images presented at about a 0.6 diopter distance from each other.
The rotation arms help with the additional directional modulation of the volumetric image created by the imager, screen and lens assembly combination. The directional modulation that is achieved by the rotation arms can be thought of as a lens with no aberrations and a very large aperture. The fast volumetric modulation capabilities combined with the two axis rotation enable creation of different perspective views and a more realistic 3D image sensation that can be viewed by multiple viewers without requiring glasses or causing discomfort.
In another embodiment of this invention the micro patterned screen can be moved in small increments in the axes parallel to the imager to increase the number of modulated directional beams. The imager and micro patterned screen combination can be moved by the same rotational arms to reduce system volume and simplify image processing or the micro patterned screen can be placed in a stationary configuration to reduce system power consumption.
In another embodiment of this invention the variable focal length lens assembly can be moved with the imager and micro patterned screen by the rotational arms to minimize the total volume of the system and simplify the image processing algorithms. Alternatively, the lens assembly can be stationary to reduce system power consumption.
Image Formation and Rendering Considerations
The light field display system of this invention can be used to create large viewing angle, large depth and high resolution real 3D images or virtual 3D images by changing the elemental image creation and multiplexing methods.
It is an established fact in vision science that when two or more images are presented to the visual system with a 0.6 diopter distance between adjacent images, the human visual system fills in the empty volume between the adjacent images by depth-blending them. The rendering of the images for this purpose is done considering the depth of the images in diopters and is explained extensively in the prior art. Another embodiment of the present invention renders and displays images that are 0.6 diopters apart from each other to enable depth blending by the human visual system. Creating and displaying images with 0.6 diopter separation can reduce the total number of images that need to be created and displayed to achieve a substantially large image volume. Even though 0.6 diopters is enough separation to create depth blending in the human visual system, this distance can be adjusted to achieve higher contrast images or different effects.
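Because the 0.6 diopter spacing is uniform in diopters rather than in metric distance, surprisingly few planes cover a large depth volume. The sketch below (near/far limits are illustrative assumptions, not values from the text) counts the planes needed from a given near distance out to optical infinity:

```python
import math

# Count display planes at a fixed dioptric spacing, as used for depth blending.
# The 0.6 D step is the separation cited in the text; near/far limits are assumed.

def depth_planes(near_m: float, far_diopters: float = 0.0, step_d: float = 0.6):
    """Return plane positions in diopters, from the near limit out to far_diopters."""
    near_d = 1.0 / near_m  # nearest plane, in diopters
    n = math.ceil((near_d - far_diopters) / step_d) + 1
    return [max(far_diopters, near_d - k * step_d) for k in range(n)]

planes = depth_planes(0.25)  # 25 cm out to optical infinity (0 D)
print(len(planes))  # -> 8 planes cover the whole range at 0.6 D spacing
```

Eight images spaced 0.6 D apart span everything from 25 cm to infinity, which is why depth blending sharply reduces the number of images that must be rendered and displayed.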
In another embodiment of this invention the 3D imaging data presented to the display system can be rendered with methods of compressed rendering and display matched compression.
Those skilled in the art will readily appreciate that various modifications and changes can be applied to the embodiments of the invention without departing from its scope defined in and by the appended claims. It should be appreciated that the foregoing examples of the invention are illustrative only, and that the invention can be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The disclosed embodiments, therefore, should not be considered to be restrictive in any sense. The scope of the invention is indicated by the appended claims, rather than the preceding description, and all variations which fall within the meaning and range of equivalents thereof are intended to be embraced therein.
This application is a continuation of International Application No. PCT/US2014/029623 Filed Mar. 14, 2014 which claims the benefit of U.S. Provisional Patent Application No. 61/800,818 filed Mar. 15, 2013.
Number | Name | Date | Kind |
---|---|---|---|
5059008 | Flood et al. | Oct 1991 | A |
5691836 | Clark | Nov 1997 | A |
5986811 | Wohlstadter | Nov 1999 | A |
6151167 | Melville | Nov 2000 | A |
6433907 | Lippert et al. | Aug 2002 | B1 |
6795221 | Urey | Sep 2004 | B1 |
6795241 | Holzbach | Sep 2004 | B1 |
6803561 | Dunfield | Oct 2004 | B2 |
6924476 | Wine et al. | Aug 2005 | B2 |
6937221 | Lippert et al. | Aug 2005 | B2 |
6999238 | Glebov et al. | Feb 2006 | B2 |
7061450 | Bright et al. | Jun 2006 | B2 |
7071594 | Yan et al. | Jul 2006 | B1 |
7106519 | Aizenberg et al. | Sep 2006 | B2 |
7190329 | Lewis et al. | Mar 2007 | B2 |
7193758 | Wiklof et al. | Mar 2007 | B2 |
7209271 | Lewis et al. | Apr 2007 | B2 |
7215475 | Woodgate et al. | May 2007 | B2 |
7232071 | Lewis et al. | Jun 2007 | B2 |
7334901 | El-Ghoroury | Feb 2008 | B2 |
7369321 | Ren et al. | May 2008 | B1 |
7400439 | Holman | Jul 2008 | B2 |
7482730 | Davis et al. | Jan 2009 | B2 |
7486255 | Brown et al. | Feb 2009 | B2 |
7551280 | Yanai | Jun 2009 | B2 |
7580007 | Brown et al. | Aug 2009 | B2 |
7619807 | Baek et al. | Nov 2009 | B2 |
7623560 | El-Ghoroury et al. | Nov 2009 | B2 |
7630118 | Onvlee | Dec 2009 | B2 |
7724210 | Sprague | May 2010 | B2 |
7767479 | El-Ghoroury et al. | Aug 2010 | B2 |
7791810 | Powell | Sep 2010 | B2 |
7829902 | El-Ghoroury et al. | Nov 2010 | B2 |
7835079 | El-Ghoroury et al. | Nov 2010 | B2 |
7841726 | Conner | Nov 2010 | B2 |
7952809 | Takai | May 2011 | B2 |
7957061 | Connor | Jun 2011 | B1 |
8009358 | Zalevsky et al. | Aug 2011 | B2 |
8049231 | El-Ghoroury et al. | Nov 2011 | B2 |
8098265 | El-Ghoroury et al. | Jan 2012 | B2 |
8243770 | El-Ghoroury et al. | Aug 2012 | B2 |
8567960 | El-Ghoroury et al. | Oct 2013 | B2 |
8681185 | Guncer | Mar 2014 | B2 |
8754829 | Lapstun | Jun 2014 | B2 |
8854724 | El-Ghoroury | Oct 2014 | B2 |
8928969 | Alpaslan | Jan 2015 | B2 |
8933862 | Lapstun | Jan 2015 | B2 |
8970646 | Guncer | Mar 2015 | B2 |
9036255 | Loney | May 2015 | B2 |
9195053 | El-Ghoroury | Nov 2015 | B2 |
9423626 | Choi | Aug 2016 | B2 |
9494787 | Bagwell | Nov 2016 | B1 |
9560339 | Borowski | Jan 2017 | B2 |
20020033932 | Yamamoto et al. | Mar 2002 | A1 |
20020167485 | Hedrick | Nov 2002 | A1 |
20030071813 | Chiabrera | Apr 2003 | A1 |
20030103047 | Chiabrera | Jun 2003 | A1 |
20030107804 | Dolgoff | Jun 2003 | A1 |
20040212550 | He | Oct 2004 | A1 |
20050041296 | Hsiao et al. | Feb 2005 | A1 |
20050068454 | Afsenius | Mar 2005 | A1 |
20050082963 | Miyazaki et al. | Apr 2005 | A1 |
20050088079 | Daniels | Apr 2005 | A1 |
20050088453 | Ten | Apr 2005 | A1 |
20050168699 | Suzuki et al. | Aug 2005 | A1 |
20050179868 | Seo et al. | Aug 2005 | A1 |
20060028400 | Lapstun | Feb 2006 | A1 |
20060098285 | Woodgate et al. | May 2006 | A1 |
20060181770 | Lee | Aug 2006 | A1 |
20060238545 | Bakin | Oct 2006 | A1 |
20060238723 | El-Ghoroury | Oct 2006 | A1 |
20060244918 | Cossairt et al. | Nov 2006 | A1 |
20070046898 | Conner | Mar 2007 | A1 |
20070058260 | Steenblik et al. | Mar 2007 | A1 |
20070109813 | Copeland et al. | May 2007 | A1 |
20070199645 | Yanai | Aug 2007 | A1 |
20070263298 | El-Ghoroury et al. | Nov 2007 | A1 |
20080043014 | Tachi et al. | Feb 2008 | A1 |
20080055903 | Akiyama | Mar 2008 | A1 |
20080117491 | Robinson | May 2008 | A1 |
20080136981 | Kawakami et al. | Jun 2008 | A1 |
20080144174 | Lucente et al. | Jun 2008 | A1 |
20080170293 | Lucente et al. | Jul 2008 | A1 |
20080218853 | El-Ghoroury et al. | Sep 2008 | A1 |
20080278808 | Redert | Nov 2008 | A1 |
20090015918 | Morozumi et al. | Jan 2009 | A1 |
20090086170 | El-Ghoroury et al. | Apr 2009 | A1 |
20090219954 | Gollier | Sep 2009 | A1 |
20090278998 | El-Ghoroury et al. | Nov 2009 | A1 |
20100003777 | El-Ghoroury et al. | Jan 2010 | A1 |
20100007804 | Guncer | Jan 2010 | A1 |
20100026960 | Sprague | Feb 2010 | A1 |
20100066921 | El-Ghoroury et al. | Mar 2010 | A1 |
20100091050 | El-Ghoroury et al. | Apr 2010 | A1 |
20100208342 | Olsen | Aug 2010 | A1 |
20100220042 | El-Ghoroury et al. | Sep 2010 | A1 |
20100225679 | Guncer | Sep 2010 | A1 |
20100245957 | Hudman et al. | Sep 2010 | A1 |
20100259605 | So et al. | Oct 2010 | A1 |
20110075257 | Hua | Mar 2011 | A1 |
20110095184 | Tachibana et al. | Apr 2011 | A1 |
20110096156 | Kim et al. | Apr 2011 | A1 |
20110122239 | Baik et al. | May 2011 | A1 |
20110134220 | Barbour et al. | Jun 2011 | A1 |
20110181706 | Harrold et al. | Jul 2011 | A1 |
20110304614 | Yasunaga | Dec 2011 | A1 |
20120033113 | El-Ghoroury et al. | Feb 2012 | A1 |
20120062988 | Watanabe | Mar 2012 | A1 |
20120200681 | Yoshida et al. | Aug 2012 | A1 |
20120200810 | Horikawa | Aug 2012 | A1 |
20120242615 | Teraguchi | Sep 2012 | A1 |
20120305746 | Moon et al. | Dec 2012 | A1 |
20120307223 | Van Zwet et al. | Dec 2012 | A1 |
20120307357 | Choi | Dec 2012 | A1 |
20120312957 | Loney | Dec 2012 | A1 |
20130033586 | Hulyalkar | Feb 2013 | A1 |
20130141895 | Alpaslan | Jun 2013 | A1 |
20130182225 | Stout | Jul 2013 | A1 |
20130258451 | El-Ghoroury | Oct 2013 | A1 |
20130271763 | Li et al. | Oct 2013 | A1 |
20130285885 | Nowatzyk et al. | Oct 2013 | A1 |
20130300840 | Borowski | Nov 2013 | A1 |
20130321675 | Cote | Dec 2013 | A1 |
20140035959 | Lapstun | Feb 2014 | A1 |
20140063489 | Steffey et al. | Mar 2014 | A1 |
20140240809 | Lapstun | Aug 2014 | A1 |
20140253993 | Lapstun | Sep 2014 | A1 |
20140292620 | Lapstun | Oct 2014 | A1 |
20140307064 | Horimai et al. | Oct 2014 | A1 |
20150033539 | El-Ghoroury | Feb 2015 | A1 |
20150277129 | Hua | Oct 2015 | A1 |
20160147067 | Hua | May 2016 | A1 |
20160360125 | Yamamoto et al. | Dec 2016 | A1 |
20180108711 | Teraguchi | Apr 2018 | A1 |
Number | Date | Country |
---|---|---|
102007771 | Apr 2011 | CN |
9-54281 | Feb 1997 | JP |
9-109455 | Apr 1997 | JP |
2007-025601 | Feb 2007 | JP |
2007-525690 | Sep 2007 | JP |
2008-304572 | Dec 2008 | JP |
2008-545550 | Dec 2008 | JP |
2009-506384 | Feb 2009 | JP |
2010-068435 | Mar 2010 | JP |
2010-510643 | Apr 2010 | JP |
2010-117398 | May 2010 | JP |
2011-100090 | May 2011 | JP |
201126205 | Aug 2011 | TW |
WO-2004094896 | Nov 2004 | WO |
WO-2006125224 | Nov 2006 | WO |
WO-2008064361 | May 2008 | WO |
Entry |
---|
“International Search Report and Written Opinion of the International Searching Authority Dated Jul. 24, 2014; International Application No. PCT/US2014/029623”, (dated Jul. 24, 2014). |
Arai, Jun , “Depth-control method for integral imaging”, Optics Letters, vol. 33, No. 3, (Feb. 1, 2008), pp. 279-281. |
Arai, Jun , et al., “Effects of focusing on the resolution characteristics of integral photography”, J. Opt. Soc. Am. A, vol. 20, No. 6, (Jun. 2003), pp. 996-1004. |
Baasantseren, Ganbat , et al., “Computational Integral Imaging with Enhanced Depth Sensitivity”, Journal of Information Display, vol. 10, No. 1, (Mar. 2009), pp. 1-5. |
Baasantseren, Ganbat , et al., “Integral floating-image display using two lenses with reduced distortion and enhanced depth”, Journal of the SID, vol. 18, No. 7, (2010), pp. 519-526. |
Baasantseren, Ganbat , et al., “Viewing angle enhanced integral imaging display using two elemental image masks”, Optics Express, vol. 17, No. 16, (Aug. 3, 2009), pp. 14405-14417. |
Bagheri, Saeed , et al., “A Fast Optimization Method for Extension of Depth-of-Field in Three-Dimensional Task-Specific Imaging Systems”, Journal of Display Technology, vol. 6, No. 10, (Oct. 2010), pp. 412-421. |
Castro, Albertina , et al., “Integral imaging with large depth of field using an asymmetric phase mask”, Opt. Express, vol. 15, (2007), pp. 10266-12073. |
Choi, Heejin, et al., “Depth- and viewing-angle-enhanced 3-D/2-D switchable display system with high contrast ratio using multiple display devices and a lens array”, Journal of the SID, 15/5, (2007), pp. 315-320.
Choi, Heejin, et al., “Depth-enhanced integral imaging using two parallel display devices”, Proceedings of the Pacific Rim Conference on Lasers and Electro-Optics 2005. CLEO/Pacific Rim 2005, (Aug. 2005), pp. 201-202.
Choi, Heejin, et al., “Depth-enhanced integral imaging with a stepped lens array or a composite lens array for three-dimensional display”, Proceedings of the 16th Annual Meeting of the IEEE Lasers and Electro-Optics Society, 2003. LEOS 2003, vol. 2, (Oct. 27-28, 2003), pp. 730-731.
Choi, Heejin, et al., “Improved analysis on the viewing angle of integral imaging”, Applied Optics, vol. 44, No. 12, (Apr. 20, 2005), pp. 2311-2317.
Choi, Heejin, et al., “Multiple-viewing-zone integral imaging using a dynamic barrier array for three-dimensional displays”, Optics Express, vol. 11, No. 8, (Apr. 21, 2003), pp. 927-932.
Choi, Heejin, et al., “Wide-viewing-angle 3D/2D convertible display system using two display devices and a lens array”, Optics Express, vol. 13, No. 21, (Oct. 17, 2005), pp. 8424-8432.
Date, Munekazu, et al., “Depth reproducibility of multiview depth-fused 3-D display”, Journal of the SID, vol. 18, No. 7, (2010), pp. 470-475.
Goodman, Joseph W., “Introduction to Fourier Optics, Third Edition”, Roberts & Company Publishers, (2005), pp. 138-145, 154-162, 186-212, 355-367.
Hahn, Joonku, et al., “Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators”, Optics Express, vol. 16, No. 16, (Aug. 4, 2008), pp. 12372-12386.
Hudson, Alex, “Could 3D TV be dangerous to watch?”, BBC News, http://news.bbc.co.uk/2/hi/programmes/click_online/9378577.stm, (Jan. 28, 2011), 3 pp. total.
Hyun, Joobong, et al., “Curved Projection Integral Imaging Using an Additional Large-Aperture Convex Lens for Viewing Angle Improvement”, ETRI Journal, vol. 31, No. 2, (Apr. 2009), pp. 105-110.
Jang, Ju-Seog, et al., “Depth and lateral size control of three-dimensional images in projection integral imaging”, Optics Express, vol. 12, No. 16, (Aug. 9, 2004), pp. 3778-3790.
Jang, Ju-Seog, et al., “Three-dimensional projection integral imaging using micro-convex-mirror arrays”, Optics Express, vol. 12, No. 6, (Mar. 22, 2004), pp. 1077-1083.
Jang, Jae-Young, et al., “Viewing angle enhanced integral imaging display by using a high refractive index medium”, Applied Optics, vol. 50, No. 7, (Mar. 1, 2011), pp. B71-B76.
Javidi, Bahram, et al., “New developments in active and passive 3D image sensing, visualization, and processing”, Proc. of SPIE, vol. 5986, (2005), pp. 598601-1 to 598601-11.
Javidi, Bahram, et al., “Orthoscopic, long-focal-depth integral imaging by hybrid method”, Proc. of SPIE, vol. 6392, (2006), pp. 639203-1 to 639203-8.
Jung, Sungyong, et al., “Depth-enhanced integral-imaging 3D display using different optical path lengths by polarization devices or mirror barrier array”, Journal of the SID, 12/4, (2004), pp. 461-467.
Jung, Sungyong, et al., “Viewing-angle-enhanced integral 3-D imaging using double display devices with masks”, Opt. Eng., vol. 41, No. 10, (Oct. 2002), pp. 2389-2390.
Jung, Sungyong, et al., “Viewing-angle-enhanced integral three-dimensional imaging along all directions without mechanical movement”, Optics Express, vol. 11, No. 12, (Jun. 16, 2003), pp. 1346-1356.
Jung, Sungyong, et al., “Wide-viewing integral three-dimensional imaging by use of orthogonal polarization switching”, Applied Optics, vol. 42, No. 14, (May 10, 2003), pp. 2513-2520.
Kavehvash, Zahra, et al., “Extension of depth of field using amplitude modulation of the pupil function for bio-imaging”, Proc. of SPIE, vol. 7690, (2010), pp. 76900O-1 to 76900O-8.
Kim, Joohwan, et al., “A depth-enhanced floating display system based on integral imaging”, Proceedings of the 2006 SPIE-IS&T Electronic Imaging, SPIE vol. 6055, 60551F, (2006), pp. 60551F-1 to 60551F-9.
Kim, Youngmin, et al., “Depth-enhanced integral floating imaging system with variable image planes using polymer-dispersed liquid-crystal films”, OSA Optics and Photonics Spring Congress, St. Petersburg, Florida, USA, paper JMA2, (2008), 3 pp. total.
Kim, Yunhee, et al., “Depth-enhanced three-dimensional integral imaging by use of multilayered display devices”, Applied Optics, vol. 45, No. 18, (Jun. 20, 2006), pp. 4334-4343.
Kim, Hwi, et al., “Image volume analysis of omnidirectional parallax regular-polyhedron three-dimensional displays”, Optics Express, vol. 17, No. 8, (Apr. 13, 2009), pp. 6389-6396.
Kim, Yunhee, et al., “Point light source integral imaging with improved resolution and viewing angle by the use of electrically movable pinhole array”, Optics Express, vol. 15, No. 26, (Dec. 24, 2007), pp. 18253-18267.
Kim, Youngmin, et al., “Projection-type integral imaging system using multiple elemental image layers”, Applied Optics, vol. 50, No. 7, (Mar. 1, 2011), pp. B18-B24.
Kim, Hwi, et al., “The use of a negative index planoconcave lens array for wide-viewing angle integral imaging”, Optics Express, vol. 16, No. 26, (Dec. 22, 2008), pp. 21865-21880.
Kim, Joowhan, et al., “Viewing region maximization of an integral floating display through location adjustment of viewing window”, Optics Express, vol. 15, No. 20, (Oct. 1, 2007), pp. 13023-13034.
Kim, Yunhee, et al., “Viewing-angle-enhanced integral imaging system using a curved lens array”, Optics Express, vol. 12, No. 3, (Feb. 9, 2004), pp. 421-429.
Lee, Byoungho, et al., “Viewing-angle-enhanced integral imaging by lens switching”, Optics Letters, vol. 27, No. 10, (May 15, 2002), pp. 818-820.
Martinez-Corral, Manuel, et al., “Integral imaging with extended depth of field”, Proc. of SPIE, vol. 6016, (2005), pp. 601602-1 to 601602-14.
Martinez-Corral, Manuel, et al., “Integral imaging with improved depth of field by use of amplitude-modulated microlens arrays”, Applied Optics, vol. 43, No. 31, (Nov. 1, 2004), pp. 5806-5813.
Martinez-Corral, Manuel, et al., “Orthoscopic, long-focal-depth 3D Integral Imaging”, Proc. of SPIE, vol. 6934, (2006), pp. 69340H-1 to 69340H-9.
Martinez-Cuenca, Raul, et al., “Enhanced depth of field integral imaging with sensor resolution constraints”, Optics Express, vol. 12, No. 21, (Oct. 18, 2004), pp. 5237-5242.
Martinez-Cuenca, R., et al., “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system”, Optics Express, vol. 15, No. 24, (Nov. 26, 2007), pp. 16255-16260.
Martinez-Cuenca, Raul, et al., “Extended Depth-of-Field 3-D Display and Visualization by Combination of Amplitude-Modulated Microlenses and Deconvolution Tools”, Journal of Display Technology, vol. 1, No. 2, (Dec. 2005), pp. 321-327.
Min, Sung-Wook, et al., “Analysis of an optical depth converter used in a three-dimensional integral imaging system”, Applied Optics, vol. 43, No. 23, (Aug. 10, 2004), pp. 4539-4549.
Min, Sung-Wook, et al., “New Characteristic Equation of Three-Dimensional Integral Imaging System and its Application”, Japanese Journal of Applied Physics, vol. 44, No. 2, (2005), pp. L71-L74.
Navarro, H., et al., “3D integral imaging display by smart pseudoscopic-to-orthoscopic conversion (SPOC)”, Optics Express, vol. 18, No. 25, (Dec. 6, 2010), pp. 25573-25583.
Navarro, Hector, et al., “Method to Remedy Image Degradations Due to Facet Braiding in 3D Integral-Imaging Monitors”, Journal of Display Technology, vol. 6, No. 10, (Oct. 2010), pp. 404-411.
Okano, Fumio, et al., “Depth Range of a 3D Image Sampled by a Lens Array with the Integral Method”, IEEE 3DTV-CON, (2009), 4 pp. total.
Okoshi, Takanori, “Three-Dimensional Imaging Techniques”, Academic Press, Inc. Publishers, (1976), pp. 43-123, 295-349, 351-357.
Park, Soon-Gi, et al., “2D/3D convertible display with enhanced 3D viewing region based on integral imaging”, Proc. of the SPIE, 7524, (2010), 9 pp. total.
Park, Jae-Hyeung, et al., “Analysis of viewing parameters for two display methods based on integral photography”, Applied Optics, vol. 40, No. 29, (Oct. 10, 2001), pp. 5217-5232.
Park, Chan-Kyu, et al., “Depth-extended integral imaging system based on a birefringence lens array providing polarization switchable focal lengths”, Optics Express, vol. 17, No. 21, (Oct. 12, 2009), pp. 19047-19054.
Park, Jae-Hyeung, et al., “Integral imaging with multiple image planes using a uniaxial crystal plate”, Optics Express, vol. 11, No. 16, (Aug. 11, 2003), pp. 1862-1875.
Park, Gilbae, et al., “Multi-viewer tracking integral imaging system and its viewing zone analysis”, Optics Express, vol. 17, No. 20, (Sep. 28, 2009), pp. 17895-17908.
Park, Jae-Hyeung, et al., “Recent progress in three-dimensional information processing based on integral imaging”, Applied Optics, vol. 48, No. 34, (Dec. 1, 2009), pp. H77-H94.
Ponce-Diaz, Rodrigo, et al., “Digital Magnification of Three-Dimensional Integral Images”, Journal of Display Technology, vol. 2, No. 3, (Sep. 2006), pp. 284-291.
Saavedra, G., et al., “Digital slicing of 3D scenes by Fourier filtering of integral images”, Optics Express, vol. 16, No. 22, (Oct. 27, 2008), pp. 17154-17160.
Song, Yong-Wook, et al., “3D object scaling in integral imaging display by varying the spatial ray sampling rate”, Optics Express, vol. 13, No. 9, (May 2, 2005), pp. 3242-3251.
Stern, Adrian, et al., “3-D computational synthetic aperture integral imaging (COMPSAII)”, Optics Express, vol. 11, No. 19, (Sep. 22, 2003), pp. 2446-2451.
The Telegraph, “Samsung warns of dangers of 3D television”, The Telegraph, http://www.telegraph.co.uk/technology/news/7596241/Samsung-warns-of-dangers-of-3D-television.html, (Apr. 16, 2010), 2 pp. total.
Tolosa, A., et al., “Optical implementation of micro-zoom arrays for parallel focusing in integral imaging”, J. Opt. Soc. Am. A, vol. 27, No. 3, (Mar. 2010), pp. 495-500.
Wakabayashi, Daisuke, “Panasonic, Japan Work on 3-D Safety”, The Wall Street Journal, http://blogs.wsj.com/digits/2011/01/06/panasonic-working-with-japan-on-3-d-standards/, (Jan. 6, 2011), 2 pp. total.
“Office Action Dated Jun. 12, 2017; Taiwanese Patent Application No. 103109831”, (dated Jun. 12, 2017).
“Notice of Allowance Dated Jan. 25, 2018; Taiwanese Patent Application No. 103109831”, (dated Jan. 25, 2018).
Number | Date | Country
---|---|---
20140347361 A1 | Nov 2014 | US
Number | Date | Country
---|---|---
61800818 | Mar 2013 | US
 | Number | Date | Country
---|---|---|---
Parent | PCT/US2014/029623 | Mar 2014 | US
Child | 14452329 | | US