1. Field of the Invention
Embodiments of the present invention relate generally to the field of three-dimensional (3D) displays, and more specifically to systems and methods for true-3D display suitable for multiple viewers without use of glasses or tracking of viewer position, where each of the viewers' eyes sees a slightly different scene (stereopsis), and where the scene viewed by each eye changes as the eye changes position (parallax).
2. Related Art
Over the last 100 years, significant effort has gone into developing three-dimensional (3D) displays. Existing 3D display technologies include DMD (digital micromirror device, Texas Instruments) projection of illumination onto a spinning disk in the interior of a globe (Actuality Systems); a volumetric display consisting of multiple LCD scattering panels that are alternately made clear or scattering to image a 3D volume (LightSpace/Vizta3D); stereoscopic systems requiring the user to wear goggles ("Crystal Eyes" and others); two-plane stereoscopic systems (actually dual 2D displays with a parallax barrier, e.g., the Sharp Actius RD3D); and lenticular stereoscopic arrays (many tiny lenses pointing in different directions, e.g., the Philips nine-angle display, SID, Spring 2005). Most of these systems are either not particularly successful at producing a true 3D perspective at the user's eye or are inconvenient to use, as evidenced by the fact that the reader probably will not find one in his or her office. The Sharp notebook provides only two views (left eye and right eye, with a single angle for each eye), and the LightSpace display appears to produce very nice images, but only in a limited volume (located entirely inside the monitor), and it would be very cumbersome to use as a projection display.
Beyond these technologies, there are efforts in both Britain and Japan to produce a true holographic display. Holography was invented in the late 1940s by Gabor and began to flourish with the invention of the laser and off-axis holography. The British work has actually produced a display with a ~7 cm extent and an 8-degree field of view (FOV). While this is impressive, it requires 100 million pixels (Mpixels) to produce this 7 cm field in monochrome and, due to the laws of physics, displays far more data than the human eye can resolve from working viewing distances. A working 50 cm (20 inch) color holographic display with a 60-degree FOV would require 500 nanometer (nm) pixels (at least after optical demagnification, if not physically) and more than a Terapixel (1,000 billion pixels). These numbers are unworkable for the foreseeable future, and even restricting the display to horizontal parallax only (HPO, i.e., three-dimensional in the horizontal plane only) brings the requirement down only to about 3 Gpixels (3 billion pixels) per frame. This is still unworkable and provides an order of magnitude more data than the human eye requires at this display size and normal working distances. Typical high-resolution displays have 250-micron pixels; a holographic display with 500 nm pixels would be a factor of 500 denser. Clearly, a true holographic display would contain far more data than the human eye needs or can even make use of at normal viewing distances, and much of this incredible data density would simply go to waste.
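The pixel-count estimates above can be checked with simple arithmetic. The sketch below assumes (these are assumptions, not statements from the text) a square 50 cm by 50 cm screen, square pixels, and that HPO keeps the 500 nm horizontal pitch while relaxing the vertical pitch to a conventional 250 microns:

```python
# Back-of-envelope check of the holographic-display pixel counts quoted above.
screen = 0.5            # assumed screen width/height in meters (50 cm display)
holo_pitch = 500e-9     # 500 nm pixel pitch needed for a 60-degree FOV
conv_pitch = 250e-6     # typical high-resolution display pixel pitch

cols = screen / holo_pitch          # columns of holographic pixels
full = cols * cols                  # full-parallax pixel count
hpo = cols * (screen / conv_pitch)  # horizontal parallax only

print(f"full parallax: {full:.0e} pixels")                # 1e+12, i.e. a Terapixel
print(f"HPO:           {hpo:.0e} pixels")                 # 2e+09, order of 3 Gpixels
print(f"density ratio: {conv_pitch / holo_pitch:.0f}x")   # the factor-of-500 claim
```

The results reproduce the Terapixel, roughly-3-Gpixel, and factor-of-500 figures cited above.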
A volumetric 3D display has been proposed by Balogh and developed by Holografika. This system does not create an image on the viewing screen, but rather projects beams of light from the viewing screen to form images by intersecting the beams at a pixel point in space (either real—beams crossing between the screen and viewer, or virtual—beams apparently crossing behind the screen as seen by the viewer). Resolution of this type of device is greatly limited by the divergence of the beams leaving the screen, and the required resolution (pixel size and total number of pixels) starts to become very high for significant viewing volumes.
Eichenlaub teaches a method for generating multiple autostereoscopic (3D without glasses) viewing zones (typically eight are mentioned) using a high-speed light valve and beam-steering apparatus. This system does not have the continuously varying viewing zones desirable for a true 3D display, and has a large amount of very complicated optics. Neither does it teach how to place the optics in multiple horizontal lines (separated by small vertical angles) so that continuously variable autostereoscopic viewing is achieved. It also has the disadvantage of generating all images from a single light valve (thus requiring the very complicated optical systems), which cannot achieve the bandwidth required for continuously variable viewing zones.
Nakamura, et al., have proposed an array of micro-LCD displays with projection optics, small apertures, and a giant Fresnel lens. The apertures segregate the image directions, and the giant Fresnel lens focuses the images on a vertical diffuser screen. This system has a number of problems, including: 1) extremely poor use of light (most of the light is discarded by the apertures); 2) a large number of exceedingly expensive optics, or alternatively very poor image quality; and 3) very expensive electronics to drive the 2D array of micro-LCD displays.
Thomas has described an angular slice true 3D display with full horizontal parallax and a large viewing angle and field of view. The display however requires a large number of projectors to operate, and is therefore relatively expensive.
Embodiments of the present invention include 3D displays. One embodiment has a display screen that consists of a convergent reflector and a narrow angle diffuser. The 3D display has an array of 2D image projectors that project 2D images onto the display screen to form 3D imagery for a viewer to see. The convergent reflector of the display screen enables full-screen fields of view for the viewer using only a few projectors (at least one, but nominally two or more for 3D viewing). The narrow angle diffuser of the display screen provides control over the angular information in the 3D imagery such that the viewer sees a different image with each eye (stereopsis) and, as the viewer moves her head, she sees different images as well (parallax). Accordingly, several advantages of one or more aspects are as follows: to provide no-glasses-required 3D imagery to a viewer without head tracking or other cumbersome devices; to present both depth and parallax without requiring exotic rendering geometries or camera optics to generate 3D content; and to require only a few projectors to generate full-screen fields of view for both eyes. Other advantages of one or more aspects will be apparent from a consideration of the drawings and ensuing description.
One embodiment is a system having one or more 2D image projectors and a display screen which is optically coupled to the 2D image projectors. The 2D image projectors are configured to project individual 2D images substantially in focus on the display screen. The display screen is configured to optically converge each projected 2D image from the corresponding 2D image projector to a corresponding viewpoint, where the ensemble of the viewpoints form an eyebox. Each pixel from each of the 2D images is projected from the display screen into a small angular slice to enable a viewer observing the display screen from within the eyebox to see a different image with each eye. The image seen by each eye varies as the viewer moves his or her head with respect to the display screen.
The 2D image projectors may consist of lasers and scanning micro-mirrors that are optically coupled to the lasers, so that the 2D image projectors lenslessly project the 2D images on the display screen. The 2D image projectors driven by laser light sources may allow the 2D images to be substantially in focus at all locations (i.e., in all planes transverse to the optical axis of the system). The system may be configured to generate each of the 2D images from a perspective of the viewpoints in the eyebox, and to provide each of the 2D images to the corresponding projector. The system may be configured to anti-alias the 2D images according to an angular slice horizontal projection angle δθ between the projectors. The system may obtain one or more of the 2D images by rendering 3D data from a 3D dataset, or one or more still or video cameras (e.g., from 3D cameras, such as image plus depth-map cameras). The system may convert video streams into the 3D dataset and then render the 2D images from the 3D dataset. The system may obtain some of the 2D images by shifting or interpolation from others of the 2D images obtained from the cameras, and may substantially proportionally match a depth of field of the cameras to a depth of field for the system. The 2D image projectors may form a plurality of separate groups to form multiple eyeboxes, from which viewers may each observe the display. Each eyebox may be large enough for a plurality of viewers. The shape of the display screen may be selected from the group consisting of cylinders, spheres, parabolas, ellipsoids and aspherical shapes.
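The shifting-or-interpolation idea mentioned above can be illustrated with a minimal sketch. Everything here is an assumption for illustration (the function name, the inverse-depth disparity model, and the painter's-algorithm ordering are not from the original text): an intermediate projector view is derived from one image-plus-depth camera frame by shifting each pixel horizontally in proportion to its disparity, writing far pixels first so nearer pixels correctly occlude them.

```python
import numpy as np

def shift_view(image, depth, baseline_frac):
    """Derive a shifted viewpoint from an image-plus-depth frame (illustrative).

    image:  (H, W) intensity frame from one camera viewpoint
    depth:  (H, W) depth map in arbitrary units (larger = farther)
    baseline_frac: how far the synthetic viewpoint moves along the baseline
    """
    h, w = image.shape
    out = np.zeros_like(image)
    # Simple inverse-depth disparity model: nearer pixels shift more.
    disparity = baseline_frac * (1.0 / np.maximum(depth, 1e-6))
    # Far-to-near ordering so near pixels overwrite far ones (painter's algorithm).
    for idx in np.argsort(-depth.ravel()):
        y, x = divmod(int(idx), w)
        xs = int(x + disparity[y, x] + 0.5)  # nearest-integer target column
        if 0 <= xs < w:
            out[y, xs] = image[y, x]
    return out

# Example: the far background (depth 100) barely moves, while a near feature
# (depth 2) shifts by two columns for this viewpoint offset.
img = np.zeros((4, 4)); img[1, 1] = 1.0
dep = np.full((4, 4), 100.0); dep[1, 1] = 2.0
shifted = shift_view(img, dep, baseline_frac=4.0)
```

A production system would also fill disocclusion holes and interpolate between two captured views rather than extrapolate from one, but the sketch shows the depth-dependent shift that the text refers to.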
Numerous alternative embodiments are also possible.
Other objects and advantages of the invention may become apparent upon reading the following detailed description and upon reference to the accompanying drawings.
While the invention is subject to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and the accompanying detailed description. It should be understood, however, that the drawings and detailed description are not intended to limit the invention to the particular embodiment which is described. This disclosure is instead intended to cover all modifications, equivalents and alternatives falling within the scope of the present invention as defined by the appended claims. Further, the drawings may not be to scale, and may exaggerate one or more components in order to facilitate an understanding of the various features described herein.
The present invention and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as not to unnecessarily obscure the present invention in detail.
One embodiment of the 3D display is illustrated in
In one embodiment, the display 101 provides horizontal parallax only (HPO) 3D imagery to the viewer 10. For HPO, the diffuser 45 reflects and diffuses incident light over a wide range vertically (say 20 degrees or more; the vertical diffusion angle is chosen so that light of adequate and similar intensity reaches the viewer from the top and bottom of the diffuser), but only over a very small angle horizontally (say one degree or so). An example of this type of asymmetric reflective diffuser is a holographically produced Light Shaping Diffuser from Luminit LLC (1850 West 205th Street, Torrance, Calif. 90501, USA). Luminit's diffusers are holographically etched, high-efficiency diffusers, referred to as holographic optical elements (HOEs). Luminit is able to apply a reflective coating (a very thin, conformable layer of, for example, aluminum or silver) to the etched surfaces to form reflective diffusers. Other types of diffusers (not necessarily HOEs) with similar horizontal and vertical characteristics (e.g., arrays of micro-lenses) are usable, along with other possible reflective coatings (e.g., a silver/gold alloy). Similarly, a thin-film HOE diffuser over the top of a reflector can perform the same function.
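A quick check shows why a roughly one-degree horizontal diffusion supports stereopsis. The viewing distance and interocular separation below are assumed typical values, not figures from the text:

```python
import math

# Rough check that ~1 degree horizontal diffusion keeps the eyes in
# different angular slices. All numbers are assumed typical values.
viewing_distance = 0.6   # meters, assumed desktop viewing distance
diffusion_deg = 1.0      # horizontal diffusion angle discussed above
interocular = 0.065      # meters, typical adult eye separation

spread = viewing_distance * math.tan(math.radians(diffusion_deg))
print(f"light spread at the viewer: {spread * 100:.1f} cm")  # about 1.0 cm
# The ~1 cm spread is well under the 6.5 cm eye separation, so each eye
# falls within a different angular slice and sees a different image.
assert spread < interocular
```

If the diffusion angle were much larger, both eyes would receive the same projected image and stereopsis would be lost; much smaller, and gaps would appear between adjacent slices.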
In one embodiment, referring now to
Referring again to
Maximal (full screen) rays from each projector in the array 120 define a boundary for the eyebox 70 (
The extent of the eyebox 70 in
Spatial blurring is the apparent defocusing of the 3D imagery as a function of visual depth within a given scene. Objects that visually appear at the diffuser 45 are always in focus, and objects that appear further away in 3D space than the diffuser 45 have increasing apparent defocus. An acceptable range of spatial blurring around the diffuser 45 for the typical viewer is known as depth of field. A depth of field 94 is illustrated in
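The relationship between spatial blurring and depth of field described above can be sketched with a small-angle model. The linear blur model and the specific numbers below are assumptions for illustration, not parameters from the text: a point rendered at depth z from the diffuser is synthesized from views separated by the angular slice, so its apparent blur grows roughly linearly with |z|.

```python
import math

def blur_width(z_m, slice_deg):
    """Assumed small-angle model: blur grows linearly with depth from the diffuser."""
    return abs(z_m) * math.radians(slice_deg)

def depth_of_field(max_blur_m, slice_deg):
    """Depth range around the diffuser where blur stays below max_blur_m."""
    half = max_blur_m / math.radians(slice_deg)
    return -half, half

# With an assumed 1-degree angular slice and 2 mm of acceptable blur, the
# usable depth of field is roughly +/- 11 cm around the diffuser.
near, far = depth_of_field(0.002, 1.0)
print(f"depth of field: {near * 100:.0f} cm to {far * 100:.0f} cm")
```

Under this model, halving the angular slice (more projectors per degree) doubles the usable depth of field for the same acceptable blur.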
The horizontal angular displacement 20 and the diffuser 45 with limited horizontal angular diffusion are elements that work jointly to present 3D imagery to the viewer 10. In
For example, in
The drawings in
A block diagram in
The algorithm 600 uses a rendering step 640 to generate the appropriate 2D images required to drive each projector in the array 120. The rendering step 640 uses parameters from a calibration step 610 to configure and align the 2D images such that as the viewer 10 moves his head within the eyebox 70, he sees blended 3D imagery without distortions from inter-projector misalignments or intra-projector mismatches. A user (perhaps the viewer 10) is able to control the rendering step 640 through a 3D user control step 630. This step 630 allows the user to change manually or automatically parameters such as the apparent parallax among the 2D images, the scale of the 3D data, the virtual depth of field and other rendering variables.
The rendering step 640 uses a 2D image projection specific to each projector as defined by parameters from the calibration step 610. For a particular projector, the 2D image projection has a viewpoint within the eyebox 70, such as the viewpoint 11 in
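The per-projector projection described above, with its viewpoint placed in the eyebox, can be sketched as a pinhole projection onto the screen plane. All geometry here is assumed for illustration (screen plane at z = 0, viewpoints at an assumed 0.6 m distance separated by an assumed eye spacing); it is not the calibrated projection of the actual system.

```python
def project_point(point, viewpoint):
    """Project a 3D scene point onto the z = 0 screen plane toward a viewpoint."""
    px, py, pz = point
    vx, vy, vz = viewpoint
    t = vz / (vz - pz)  # parameter where the ray from viewpoint to point crosses z = 0
    return (vx + t * (px - vx), vy + t * (py - vy))

# A point on the screen plane projects to the same place from every viewpoint,
# while a point behind the screen projects to different places for the two
# eyes' viewpoints -- which is exactly the parallax the display reproduces.
on_screen = (0.1, 0.0, 0.0)            # point lying on the diffuser plane
behind = (0.0, 0.0, -0.5)              # point 0.5 m behind the diffuser
left_eye = (-0.03, 0.0, 0.6)           # assumed viewpoints in the eyebox
right_eye = (0.03, 0.0, 0.6)

print(project_point(on_screen, left_eye))   # same screen position from both eyes
print(project_point(behind, left_eye))      # projection pulled toward the left eye
print(project_point(behind, right_eye))     # projection pulled toward the right eye
```

Rendering one such 2D image per projector, each from its calibrated viewpoint, is what lets the ensemble of projected images reconstruct stereopsis and parallax within the eyebox.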
An additional embodiment is shown in
An additional embodiment is shown in
An additional embodiment is shown in
Referring now to
An additional embodiment is a 3D display 105 shown in
The advantage of this type of convergent angular slice true-3D display is that many fewer projectors are required to produce a full horizontal parallax 3D image (view changes continuously with horizontal motion) than with a flat-screen angular slice display (ASD). Note that the projectors can be located to the side of the viewer or below the viewer just as well as above the viewer.
Diffusion before Convergence—
An additional embodiment is shown in
The reflected chief rays (for example rays 191 and 192) from each projector converge to form viewpoints within an eyebox 72. Given a radius of curvature R for the mirror 50, the horizontal extent of the eyebox 72 is defined in a manner similar to the ray geometry in
Although a depth of field for the display 201 is centered at the diffusion screen 40, the apparent location of the depth of field to the viewer 10 follows convergent mirror geometry for object and image distances. For example in one embodiment, if the diffusion screen 40 is a distance 0.5 R from the mirror 50, then the apparent center for the depth of field approaches infinity.
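The behavior described above follows directly from the concave-mirror relation 1/s_o + 1/s_i = 2/R (focal length f = R/2): an object at the focal plane, s_o = 0.5 R, images at infinity. A short check, with an assumed illustrative radius of curvature:

```python
def image_distance(s_o, R):
    """Concave-mirror imaging: 1/s_o + 1/s_i = 2/R, i.e. f = R/2."""
    f = R / 2.0
    if abs(s_o - f) < 1e-12:
        return float("inf")  # object at the focal plane images at infinity
    return 1.0 / (1.0 / f - 1.0 / s_o)

R = 2.0  # assumed 2 m radius of curvature for illustration
print(image_distance(0.5 * R, R))   # inf: screen at 0.5 R, as stated above
print(image_distance(0.75 * R, R))  # 3.0 m: moving the screen pulls the image in
```

Thus the separation between diffusion screen and mirror acts as a design knob for where the viewer perceives the center of the depth of field.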
Diffusion after Convergence—
An additional embodiment is shown in
An additional embodiment is a full parallax 3D display. Full parallax means that the viewer sees a different view not only with horizontal head movements (as in HPO) but also with vertical head movements. One can think of HPO as the ability for the viewer to look around objects horizontally, and full parallax as the ability to look around objects both horizontally and vertically. Full parallax is achieved with a diffuser that has both a narrow horizontal angular diffusion and a narrow vertical angular diffusion. (Recall that HPO requires only narrow diffusion in the horizontal while the vertical has broad angular diffusion.) As noted previously, the angular diffusion is tightly coupled with the angular displacement of the projectors in the array. Again, recall that HPO requires proportionally matching the horizontal angular displacement 20 (
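The projector-count consequence of full parallax can be made concrete. The field-of-view and slice angles below are assumed illustrative values, not figures from the text: HPO needs only a single horizontal row of projectors, while full parallax needs a two-dimensional grid whose count is the product of the horizontal and vertical slice counts.

```python
import math

def projectors_needed(h_fov_deg, v_fov_deg, h_slice_deg, v_slice_deg=None):
    """Projectors to tile the eyebox field of view with angular slices (illustrative)."""
    n_h = math.ceil(h_fov_deg / h_slice_deg)
    if v_slice_deg is None:
        return n_h                                   # HPO: broad vertical diffusion, one row
    return n_h * math.ceil(v_fov_deg / v_slice_deg)  # full parallax: a 2D grid

# Assumed 30-degree horizontal and 20-degree vertical fields with 1-degree slices:
print(projectors_needed(30, 20, 1.0))       # HPO: 30 projectors
print(projectors_needed(30, 20, 1.0, 1.0))  # full parallax: 600 projectors
```

This multiplicative cost is why the HPO embodiments, which exploit broad vertical diffusion, are emphasized throughout the description.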
From the descriptions above, a number of advantages of some embodiments of the angular convergent true 3D display become evident, without limitation:
(a) No special glasses, head tracking devices or other instruments are required for a viewer to see 3D imagery, thus avoiding the additional cost, complexity, and annoyances for the viewer associated with such devices.
(b) No moving parts such as spinning disks, rasterizing mirrors or shifting spatial multiplexers are required, which thereby increases the mechanical reliability and structural integrity.
(c) Since image projectors, by construction, project 2D images such that rays diverge from the projector lens, the use of a convergent reflector has the advantage of focusing these rays into the eyebox. This property simplifies rendering the 2D images that form the 3D imagery: standard projection geometries, in which the horizontal and vertical projection foci share approximately the same location, can be used to form the 2D images, without the need for non-standard projections such as anamorphic projections, in which the horizontal and vertical foci do not share the same location. Thus, 2D imagery from digital (still or video) cameras with standard lenses can be used to drive the projectors directly, without additional processing to account for the divergent projector rays.
(d) The convergence at the eyebox of the projected 2D images permits the use of a single projector in the array to achieve a full-screen field of view to a viewer in the eyebox. Additional projectors simply increase the size of the eyebox and the parallax in the displayed 3D imagery for the viewer. Thus, only a few projectors (nominally two or more) are required for viewing full-screen 3D imagery, which reduces system cost.
(e) The separation of the diffuser and the convergent mirror permits the adjustment of the apparent center for the depth of field (relative to the viewer) in accordance with convergent mirror geometry for object and image distances. This adjustment has the advantage to display 3D imagery with an apparent depth of field required by a particular application.
Accordingly, the reader will see that the 3D display of the various embodiments can be used by viewers to see 3D imagery without special glasses, head tracking, or other constraints. The viewer sees a different view with each eye and can move his or her head to see different views and look around objects in the 3D imagery.
Although the description above contains many specificities, these should not be construed as limiting the scope of the embodiments but as merely providing illustrations of some of several embodiments. For example, the convergent reflectors can have different shapes such as cylindrical, spherical, toroidal, etc.; the display screen can consist of a single convergent reflective diffuser, of a transmitting diffuser followed by a convergent mirror, of a convergent mirror followed by a transmitting diffuser, etc.; the 2D images driving the image projectors can be derived from renderings of 3D data, video streams from one or more cameras, video images converted to 3D data and then rendered, etc.
The benefits and advantages which may be provided by the present invention have been described above with regard to specific embodiments. These benefits and advantages, and any elements or limitations that may cause them to occur or to become more pronounced are not to be construed as critical, required, or essential features of any or all of the claims. As used herein, the terms “comprises,” “comprising,” or any other variations thereof, are intended to be interpreted as non-exclusively including the elements or limitations which follow those terms. Accordingly, a system, method, or other embodiment that comprises a set of elements is not limited to only those elements, and may include other elements not expressly listed or inherent to the claimed embodiment.
While the present invention has been described with reference to particular embodiments, it should be understood that the embodiments are illustrative and that the scope of the invention is not limited to these embodiments. Many variations, modifications, additions and improvements to the embodiments described above are possible. It is contemplated that these variations, modifications, additions and improvements fall within the scope of the invention as detailed within the following claims.
This application claims the benefit of U.S. Provisional Patent Application 61/704,285, filed Sep. 21, 2012, which is incorporated by reference as if set forth herein in its entirety.