Current technologies enable the creation of large format analog holograms that provide stunning realism. Such holograms are able to give viewers the impression of a complete three-dimensional (3D) picture frozen in time. For example, back in 1972, the Cartier jewelry store displayed a hologram of a hand holding jewelry in its storefront window on 5th Avenue in New York City. The hologram reportedly looked so realistic that an elderly woman passing by tried to attack the virtual hand floating in the air. Additionally, for static objects such as historical museum artifacts, archaeological findings, architectural models, prototypes, etc., full color holography offers a realism on par with visual inspection of the actual object.
An illustrative three-dimensional (3D) display system includes a reference spatial light modulator configured to generate a reference wavefront. The system also includes an object spatial light modulator configured to generate an object wavefront. The system further includes a Hogel basis display positioned between the reference spatial light modulator and the object spatial light modulator. The Hogel basis display is configured to receive the reference wavefront and the object wavefront. The Hogel basis display is also configured to generate a light field based at least in part on interference between the reference wavefront and the object wavefront.
An illustrative method of displaying 3D objects includes generating, by a reference spatial light modulator, a reference wavefront. The method also includes generating, by an object spatial light modulator, an object wavefront. The method also includes receiving, by a Hogel basis display positioned between the reference spatial light modulator and the object spatial light modulator, the reference wavefront and the object wavefront. The method further includes generating, by the Hogel basis display, a light field based at least in part on interference between the reference wavefront and the object wavefront.
Other principal features and advantages of the disclosure will become apparent to those skilled in the art upon review of the following drawings, the detailed description, and the appended claims.
Illustrative embodiments of the disclosure will hereafter be described with reference to the accompanying drawings.
In the past, three-dimensional (3D) display design has been largely dominated by physicists and electrical engineers who are strongly focused on accurate physical synthesis of optical fields, with little attention paid to the importance of the displayed content. This approach ignores a critical piece of insight shared by all modern researchers in computer science, data science, statistics, image processing, machine learning, and astronomy. This insight is that the space of all possible naturally occurring high dimensional signals spans an exceptionally small portion of the space occupied by all possible signals. Most of the space is occupied by noise. Described herein are hologram display methods and systems that capitalize on this notion to fundamentally transform the principles of 3D image synthesis towards a new paradigm of data-driven 3D display design.
A number of different three-dimensional (3D) display technologies have developed over the years, including integral/lenticular/barrier/lightfield displays, holographic photography, digital holographic printers, and holovideo displays. Brief descriptions of these technologies are included below.
Integral and parallax barrier displays were introduced more than a century ago, leading to the development of integral/lenticular/barrier/lightfield displays. Originally these displays were presented in the form of static 3D displays with fixed imagery. In operation, these displays essentially convert the pixels mapped onto a two-dimensional (2D) surface into a set of four-dimensional (4D) rays, where the total number of rays that can be generated is equal to the number of pixels (the space-bandwidth product (SBP)). The advantage of producing static imagery is an SBP in excess of 10^12 over meter-scale sizes due to well-established advances in print technology. The same principle has been demonstrated extensively in various forms with large format liquid crystal displays (LCDs) and rear projection screens with other types of spatial light modulators (SLMs). Using SLMs provides an advantage over print, namely that dynamic/programmable content can be displayed. However, the display of dynamic/programmable content comes at a cost of reduced SBP (on the order of 10^6 instead of 10^12), dramatically reducing the fidelity of 3D imagery that can be displayed. This limitation can be somewhat overcome with the use of multiple SLMs, but unfortunately the cost per SLM has remained prohibitively expensive so as to limit SBP to the order of 10^6-10^8 pixels/rays, even for the most state of the art displays.
Analog holography (or holographic photography) was popularized by the pioneering research of Leith and Upatnieks that followed the invention of the laser in the 1960's and 1970's. Analog holography uses high resolution photosensitive film to record interference fringes with a resolution close to the wavelength of light. In a large format hologram (e.g., 1 m×1 m) generated with this technology, the information density is high, resulting in 3D images that are displayed with stunning realism. In some cases the imagery is so realistic that it is almost impossible to distinguish from a real object. Unfortunately, interferometric holography is restricted to imagery that can be recorded in a laboratory environment, and the high information density recorded on an analog hologram is extremely difficult to replicate using programmable SLMs.
The Spatial Imaging group pioneered the concept of a Digital Holographic Printer in the 1980's and 1990's. The primary concept was to digitally record a high resolution light field, either by synthesizing 3D computer graphics renderings, or by capturing a large multitude of imagery from different viewing positions. Decoupling the acquisition and printing process allowed, for the first time, 3D imagery to be displayed of objects located outside of laboratory environments. At the same time, because the holograms were recorded on very high resolution analog film, the process enabled the recording of light fields with SBP approaching that of the analog holograms that were popular in the 60's and 70's.
In the 1990's, Zebra Imaging, Inc. was formed to commercialize digital holographic printer technology, and the concept of a “Hogel,” or holographic pixel was introduced. A Hogel is a fundamental unit of a digital holographic print, or the equivalent of a pixel in 2D print. One Hogel is printed at a time, and that Hogel encodes the set of rays that should emanate from that spatial location on the hologram surface. This technology was used to produce some of the most stunningly realistic 3D digital prints ever created. Zebra Imaging also forayed into light field displays, producing some stunning dynamic 3D imagery that capitalized on the available expertise in SLM technology and holographic printing. However, these light field displays suffered from the same limitations in SBP as conventional integral/parallax barrier displays.
Regarding holovideo displays, when Stephen Benton invented the rainbow hologram at Polaroid in the 1970's, he thought he had made the first step in developing a practical implementation of holographic video. Benton knew that the SBP of analog holograms was beyond anything that could be achieved with commercial SLM technology available at the time. However, he realized that by reducing the hologram to Horizontal Parallax Only (HPO), the required SBP could be achieved over reasonably small hologram sizes. This led to the first implementations of holographic video, which utilized high bandwidth multi-channel acousto-optic modulators, high speed scanning mirrors, and a very large objective lens. The system had various incarnations that produced relatively wide field of view (FOV) HPO 3D imagery (˜30 degrees) with a reasonably large image size (˜100 millimeters (mm)), but was notoriously difficult to align and synchronize. Further developments in holovideo utilized high bandwidth ferro-electric SLMs coupled with optically-addressable SLMs to achieve similar SBP with dramatically simplified alignment and synchronization. Higher bandwidth SLMs have also been explored using alternative physical processes, but the above-described holovideo displays remain the closest implementations to achieving practical large area, wide-FOV holographic fringe projection.
While engineers and scientists have nearly mastered the recording and display of static 3D imagery, dynamic display has proved to be an elusively challenging task. As discussed above, many visually compelling 3D displays have been built in the last few decades. However, scientists and engineers have yet to truly deliver on the ultimate goal of a dynamic, holographic quality 3D display. The data sizes and rates required to display dynamic, holographic quality 3D imagery over a large area and FOV become more manageable each year that Moore's law advances. However, the data sizes still remain so astronomically large as to be fundamentally impractical for decades to come. What is needed is a transformational approach to the problem of 3D display design that fundamentally rethinks the problem and breaks the current trend of incremental advances.
Content adaptive 3D displays operate on a fundamentally different principle than other types of 3D displays. Content adaptive 3D displays operate based on the concept that two layered 2D SLMs can be considered as a rank-1 factorization of a 4D light field. As a result, by quickly sequencing through a set of images on two displays at a rate higher than the integration time of the human eye, a high rank approximation of a light field can be achieved. The technique was the first to utilize the limited SBP afforded by commercial SLMs to produce 3D imagery using a content-specific light field basis. This original idea can be extended to include multiple stacked SLMs, as well as various combinations of low resolution light fields and 2D SLMs.
The embodiments described herein are based in part on the above-described concept of content adaptive 3D displays. In content adaptive displays, however, the bases used are different for each light field displayed: the bases are rank-1 factorizations of a given light field, essentially a singular value decomposition (SVD) basis learned from that single light field. The 3D display concepts introduced herein are inspired by this notion of displaying light fields in a learned basis. However, the proposed systems introduce the concept of displaying imagery in a fixed light field basis that is learned from a training database of example light fields. Furthermore, the proposed system utilizes the concept of a Hogel screen to display light fields with exceptionally high SBP (e.g., 10^12 rays), while utilizing only a small number of degrees of freedom on a programmable SLM (e.g., 10^6 pixels).
The proposed methods and systems sidestep the infeasibly difficult problem of creating a 3D display that is able to reproduce all possible light fields that are within the realm of physical possibility. Optical physics dictates that recreating an accurate 3D optical field with a hemispherical FOV involves light modulation at the resolution of a single wavelength (e.g., 0.5 micrometers (µm)). Hemispherical FOV over a 1 meter (m) viewing surface requires a space-bandwidth product (SBP) on the order of (1 m / 0.5 µm)^2 = 4×10^12, which is a very large number. Currently available programmable spatial light modulators (SLMs) are unable to achieve such high SBP at a reasonable cost. However, there are methods to create optical recordings that exceed this SBP, doing so only at a fixed point in time. The methods and systems described herein capitalize upon these optical recording techniques to produce 3D displays with unprecedented realism and the ability to be programmed dynamically using only the limited degrees of freedom offered by commercially available SLM technology.
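As a quick sanity check on these numbers, the following minimal Python snippet (an illustrative calculation, not part of the original disclosure; the aperture and feature sizes are the example values quoted above) reproduces the SBP estimate and compares it to a typical single-SLM pixel count.

```python
# Space-bandwidth product (SBP) estimate for a hemispherical-FOV display.
# Illustrative arithmetic only; 1 m aperture and 0.5 um feature size are the
# example values from the text.

aperture_m = 1.0       # display width/height (m)
feature_m = 0.5e-6     # modulation feature size ~ one wavelength (m)

sbp_required = (aperture_m / feature_m) ** 2
sbp_slm = 2e6          # dynamic degrees of freedom of a single commercial SLM (text's estimate)

print(f"Required SBP:     {sbp_required:.1e}")            # ~4e12
print(f"Single-SLM SBP:   {sbp_slm:.1e}")
print(f"Shortfall factor: {sbp_required / sbp_slm:.1e}")  # ~2e6
```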
Specifically, the proposed approach is to develop a display that only aims to reproduce the set of light fields that are within the realms of physical plausibility. The advantage of this approach is that it can drastically decrease the degrees of freedom necessary to dynamically program a display. The approach is based on the largest set of physically plausible light fields that can be recovered from the approximately 2×10^6 dynamic degrees of freedom offered by a single commercially available SLM. The core idea is outlined in
As discussed with reference to
The Hogel basis screen of
In operation, a light field l(x, u) is emitted from a display surface, where the 2D lateral coordinates on the display surface are x = (x, y), and the coordinates u = (u, v) are the tangents of ray angles emitted from the surface. The light field is discretized into n ∈ [1, . . . , N] spatial regions (Hogels) on the display surface, so that the set of rays emitted from the nth spatial location can be represented on the display as l_n(u) (see
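To make the matrix notation used in the equations that follow concrete, the NumPy sketch below (toy sampling counts only; these are illustrative assumptions, not values from the text) reshapes a discretized 4D light field l(x, y, u, v) into the per-Hogel form of N spatial samples with M rays each.

```python
import numpy as np

# Toy discretization of a 4D light field l(x, u) into Hogels.
Nx, Ny, Mu, Mv = 16, 16, 32, 32                  # illustrative sampling counts
lightfield = np.random.rand(Nx, Ny, Mu, Mv)      # l(x, y, u, v) ray intensities

N = Nx * Ny      # number of Hogels (spatial regions)
M = Mu * Mv      # rays per Hogel (angular samples)

# Matrix form: column n holds the rays l_n exiting Hogel n.
L = lightfield.reshape(N, M).T                   # shape (M, N)
l_n = L[:, 0]                                    # rays emitted from the first Hogel
print(L.shape, l_n.shape)
```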
The basic concept of creating an HBS is outlined in
E_n(x, z) ∝ Σ_{k=1}^{K} |Q_z[r_k(x) + P[I_k(u)]]|²,   Eq. 1:

where Q_z represents the 3D propagator inside the film (e.g., the Huygens propagator) and P[I_k(u)] represents the field incident on the Hogel when the image I_k(u) is displayed on the O-SLM. After the set of K exposures is performed for each of the N Hogel locations, the holographic screen produces a 3D phase modulation function φ_n(x, z) ∝ E_n(x, z) proportional to its exposure. Provided that the hologram thickness d is sufficiently large so that the Bragg condition successfully filters out holograms captured with reference beams separated by an angle Δϕ ≈ λ/d, then each of the recorded images may be reproduced independently by illuminating the hologram with the corresponding reference beam:
D[r_k(x), φ_n(x, z)] ∝ P[I_k(u)],   Eq. 2:

where D[r, φ] represents the function that propagates the incident field r through the phase modulation function φ recorded inside the volume hologram. Equation 2 expresses that the light field emitted from a reference beam incident on the nth Hogel with incident angle ϕ_k is proportional to the kth image displayed on the O-SLM and recorded on the hologram. More generally, if the nth Hogel is illuminated with a weighted combination of reference beams, the total incident field on the nth Hogel is:
r_n(x) = Σ_{k=1}^{K} α_{n,k} · r_k(x).   Eq. 3:

In Equation 3, α_{n,k}² represents the intensity of the ray incident on the nth Hogel location from the kth illumination direction, and the set of coefficients α_{n,k} therefore represents a low-resolution (LR) light field. The intensity of the rays exiting the Hogel is a linear combination of the images displayed on the O-SLM during Hogel recording, as shown in
l_n(u) = |Σ_{k=1}^{K} α_{n,k} · D[r_k(x), φ_n(x, z)]|² = |Σ_{k=1}^{K} α_{n,k} · P[I_k(u)]|².   Eq. 4:

It can be assumed that there is a discretization of spatial coordinates relative to the M pixels on the O-SLM, corresponding to M different ray angles exiting each Hogel surface, so that P[I_k(u)] → h_k ∈ ℂ^M. The discretized form of Equation 4 then becomes:
l_n = |Σ_{k=1}^{K} α_{n,k} · h_k|² = |H α_n|²,   Eq. 5:

where α_n ∈ ℝ_+^K represents the set of rays incident on the nth Hogel, l_n ∈ ℝ_+^M represents the set of rays exiting the nth Hogel, H = [h_1, . . . , h_K] ∈ ℂ^{M×K} is the Hogel basis encoded in the volume hologram, and the |·| operator is taken elementwise. When the identical Hogel basis is recorded at each position on the film, it can be expressed as:
L = |H A|²,   Eq. 6:

where A ∈ ℝ_+^{K×N} is a matrix representing the incident LR light field consisting of N spatial samples and only K ray samples, and L ∈ ℝ_+^{M×N} represents the emitted 4D light field with M >> K ray samples for each spatial location. Equations 4-6 express the fundamental principle of the Hogel basis 3D display, namely that the set of rays diffracted from each Hogel is a linear combination of the K images displayed on the O-SLM during Hogel recording. In particular, when M >> K, the SBP of the display is significantly increased, because a HR light field is recorded as a fixed pattern in the volume holographic film. The HBS has a very large SBP = M·N ≈ 10^8-10^9. To display a 3D image, a LR light field that includes a set of rays with intensities A ∈ ℝ_+^{K×N} is projected onto the HBS. The incident light field has a very small SBP = K·N ≈ 10^6. However, the light field emitted from the HBS has an SBP equal to that of the HBS, and produces a scattered intensity that closely matches the ground truth HR light field L ∈ ℝ_+^{M×N}. The key to utilizing this principle lies in the careful selection of an appropriate Hogel basis such that a large variety of naturally occurring HR light fields may be reconstructed using only K degrees of freedom per Hogel.
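The forward model of Equations 5-6 is straightforward to simulate numerically. The sketch below is a minimal illustration with arbitrary toy dimensions, assuming a complex-valued basis H and nonnegative coefficient amplitudes A (whose squares are the incident ray intensities); it is not a simulation of any specific hardware described above.

```python
import numpy as np

rng = np.random.default_rng(0)

M, K, N = 4096, 16, 256   # rays per Hogel, basis size, number of Hogels (toy values)

# Hogel basis encoded in the volume hologram: K complex fields of M rays each.
H = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))

# LR light field: nonnegative amplitude coefficients, K per Hogel, N Hogels.
A = rng.random((K, N))

# Emitted HR light field (Eq. 6): elementwise squared magnitude of the coherent sum.
L = np.abs(H @ A) ** 2            # shape (M, N), emitted SBP = M*N >> K*N

# Per-Hogel form (Eq. 5): rays leaving Hogel n are |H @ alpha_n|^2.
n = 0
l_n = np.abs(H @ A[:, n]) ** 2
assert np.allclose(l_n, L[:, n])

print(f"incident SBP = {K * N:,} rays")
print(f"emitted SBP  = {M * N:,} rays")
```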
The forward model expressed by Equation 6 represents a factorization of the light field scattering from a Hogel basis screen. The degree to which a light field may be accurately approximated with this factorization is determined by how well Equation 6 matches the ground truth ray intensities of a large set of naturally occurring HR light fields. In particular, one may have access to a large database of P naturally occurring HR light fields L_p ∈ ℝ_+^{M×N}, p ∈ [1, . . . , P]. Then, the problem of learning the optimal Hogel basis boils down to solving the minimization problem:

H*, {A_p*} = argmin_{H, {A_p}} Σ_{p=1}^{P} [ ε(L_p − |H A_p|²) + R(A_p) ],   Eq. 7:

where ε(·) represents the loss function which ensures fidelity of the basis representation relative to the training data L_p, and R(·) represents a regularization function on the basis coefficients. For instance, setting ε(·) = ‖·‖_2 and R(·) = ‖·‖_1 yields a coupled sparse dictionary learning (e.g., K-SVD) and phase-retrieval problem. The solution to the optimization problem expressed by Equation 7 produces a fixed Hogel basis H that can be used to represent any light field in terms of the learned basis coefficients A. Solutions to other variants of loss functions may be found using other methods, for instance using Deep Neural Networks (DNNs) that are trained using stochastic gradient descent.
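One simple way to approach Equation 7 numerically is alternating (proximal) gradient descent on H and the per-light-field coefficients A_p. The sketch below is only a schematic instance of that idea under strong simplifying assumptions not made in the text: the basis is treated as real-valued, the loss is a plain squared error, and the ℓ1 penalty on A is handled with a nonnegative soft-threshold step. It is not the K-SVD or DNN method referenced above.

```python
import numpy as np

rng = np.random.default_rng(1)

M, K, N, P = 256, 12, 64, 20        # toy dimensions
lam, step, iters = 1e-3, 1e-3, 1000 # l1 weight, step size, iterations

# Training set: P "ground truth" HR light fields (random stand-ins here).
L_train = [rng.random((M, N)) for _ in range(P)]

H = 0.1 * rng.standard_normal((M, K))          # real-valued basis (simplification)
A = [rng.random((K, N)) for _ in range(P)]     # nonnegative coefficients per light field

for _ in range(iters):
    grad_H = np.zeros_like(H)
    for p in range(P):
        Z = H @ A[p]                  # field before the elementwise square
        R = L_train[p] - Z**2         # residual of |H A|^2 against ground truth
        grad_H += -4.0 * (R * Z) @ A[p].T
        grad_A = -4.0 * H.T @ (R * Z)
        # Proximal step: gradient descent on A, then nonnegative soft threshold (l1).
        A[p] = np.maximum(A[p] - step * grad_A - step * lam, 0.0)
    H -= step * grad_H / P

err = np.mean([np.linalg.norm(L_train[p] - (H @ A[p])**2) for p in range(P)])
print(f"mean residual norm after training: {err:.3f}")
```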
In addition to building prototypes of the system, development of the proposed system also includes development of optimization algorithms to learn an optimal Hogel basis from a database of high resolution light fields. The light fields can be generated synthetically using one or more physically-based open source 3D computer graphics rendering packages, such as Blender, PBRT, Mitsuba, etc. Though the synthesized light fields do not exactly mimic high resolution captured light fields, they do exhibit a very high degree of similarity. In particular, the physically based renderers include subtle light transport effects such as soft shadows, sub-surface scattering, spatially varying bidirectional reflectance distribution functions (BRDFs), etc.
In an illustrative embodiment, the prototype system depicted in
The 3D display system of
For playback, the Hogel basis screen is photo-developed, the object LCD/SLM 310 is removed, and an image is programmed on the reference LCD/SLM 305. When just two pixels of the reference LCD/SLM 305 are turned on, each pixel produces a ray that activates a different Hogel, and each Hogel reproduces a different object LCD/SLM image I_k ∈ ℂ^M, depending on the angle kΔϕ incident to the Hogel screen (as shown in the right-hand portion of
In some embodiments, the display format of the proposed system may cover the learning and recording of a fixed angular spectrum basis for the purpose of synthesizing dynamic 3D images with exceedingly large SBP, while using a single, low resolution, programmable SLM. In alternative embodiments, different techniques and/or hardware may be used. For example, in one embodiment, multiple SLMs can be used in the system to increase the fidelity of projected 3D display imagery over larger display sizes. Additionally, Hogel basis screens can be manufactured over much larger areas using a step-and-repeat approach (e.g., similar to the approach used in digital holographic printing).
In some embodiments, the system can be based on learning a fixed 2D angular spectrum basis. However, the concept can also be generalized to consider a full, spatial-angular 4D light field basis. Such an approach would enable increased fidelity in 3D display quality with fewer degrees of freedom, but would come at the cost of increased complexity in Hogel basis screen recording, integrated display components (e.g., more SLMs), or both. Additionally, the principles described herein can be used to generate high resolution compressive 2D display content and/or other possible extensions such as a high fidelity 4D reflectance display, a high fidelity 8D reflectance display, etc.
In an illustrative embodiment, the same concept used to convert a low resolution light field into a high resolution light field using the high SBP Hogel basis screen can also be used in reverse. For example, the LCD/SLM used in the playback of
In an operation 410, an angular spectrum basis is displayed on an object LCD/SLM. The angular spectrum basis can result in an object wavefront. In an operation 415, interference between the reference wavefront and the object wavefront is recorded on the Hogel basis screen. In an operation 420, the image on the reference LCD/SLM is shifted so that the reference beam angle changes. In an operation 425, a new image is displayed on the object LCD/SLM. In an illustrative embodiment, the operations 420 and 425 can occur simultaneously (or nearly simultaneously). The interference pattern between the reference wavefront and the object wavefront is again recorded on the Hogel basis screen in the operation 415. In an illustrative embodiment, the operations 415, 420, and 425 continue to repeat a plurality of times until the desired number (K) of different exposures is reached for each Hogel.
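The exposure sequence of operations 410-425 can be summarized as a simple nested loop. The Python sketch below is illustrative only; `reference_slm`, `object_slm`, `shutter`, and `basis_images` are hypothetical control interfaces and data standing in for whatever hardware drivers an actual prototype would use, and the exposure time and angular step are arbitrary placeholder values.

```python
# Illustrative exposure-control loop for recording K basis images per Hogel.
# The hardware interfaces below are hypothetical stand-ins, not specified in the text.

def record_hogel_basis(reference_slm, object_slm, shutter, basis_images,
                       exposure_s=0.5, delta_phi_deg=1.0):
    """Record K angularly multiplexed exposures (one per basis image) for one Hogel."""
    for k, image_k in enumerate(basis_images):           # K basis images I_k
        reference_slm.set_beam_angle(k * delta_phi_deg)  # operation 420: shift reference angle
        object_slm.display(image_k)                      # operations 410/425: object wavefront
        shutter.expose(exposure_s)                       # operation 415: record interference

# A full screen repeats this for each of the N Hogel positions, e.g., by stepping
# the film with a mechanical stage between calls to record_hogel_basis(...).
```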
The processor 505 can be any type of computer processor known in the art, and can include a plurality of processors and/or a plurality of processing cores. The processor 505 can include a controller, a microcontroller, an audio processor, a graphics processing unit, a hardware accelerator, a digital signal processor, etc. Additionally, the processor 505 may be implemented as a complex instruction set computer processor, a reduced instruction set computer processor, an x86 instruction set computer processor, etc. The processor 505 is used to run the operating system 510, which can be any type of operating system.
The operating system 510 is stored in the memory 515, which is also used to store programs, network and communications data, peripheral component data, light field information, the 3D display application 535, and other operating instructions. The memory 515 can be one or more memory systems that include various types of computer memory such as flash memory, random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), a universal serial bus (USB) drive, an optical disk drive, a tape drive, an internal storage device, a non-volatile storage device, a hard disk drive (HDD), a volatile storage device, etc.
The I/O system 525 is the framework which enables users and peripheral devices to interact with the computing system 500. The I/O system 525 can include a mouse, a keyboard, one or more displays, a speaker, a microphone, etc. that allow the user to interact with and control the computing system 500. The I/O system 525 also includes circuitry and a bus structure to interface with peripheral computing devices such as power sources, USB devices, peripheral component interconnect express (PCIe) devices, serial advanced technology attachment (SATA) devices, high definition multimedia interface (HDMI) devices, proprietary connection devices, etc. In an illustrative embodiment, the I/O system 525 is configured to receive inputs and operating instructions from a user.
The network interface 530 includes transceiver circuitry that allows the computing system to transmit and receive data to/from other devices such as remote computing systems, servers, websites, etc. The network interface 530 enables communication through the network 540, which can be in the form of one or more communication networks and devices. For example, the network 540 can include a cable network, a fiber network, a cellular network, a wi-fi network, a landline telephone network, a microwave network, a satellite network, etc. and any devices/programs accessible through such networks. The network interface 530 also includes circuitry to allow device-to-device communication such as Bluetooth® communication.
The 3D display application 535 includes hardware and/or software, and is configured to perform any of the operations described herein. Software of the 3D display application 535 can be stored in the memory 515. As an example, the 3D display application 535 can include computer-readable instructions to control the reference LCD/SLM and/or the reference wave, control the object LCD/SLM and/or the object wave, control movement of the mechanical stepper, control exposure of Hogels on the Hogel basis display, control playback of 3D imagery, identify optimal basis, etc.
The computing system 500 is in communication with a remote processing system 545 via the network 540. In an illustrative embodiment, the remote processing system 545 can be used to perform any of the processing operations described herein. In some embodiments, the remote processing system 545 can house some or all of the 3D display application 535. In an alternative embodiment, the remote processing system 545 may not be used.
As discussed herein, the proposed methods and systems utilize established optical recording techniques to produce 3D displays that have unprecedented realism and can be programmed using only low-resolution SLM technology. The technical approach to achieve such systems is to use data-driven techniques to learn a representation of the set of physically plausible light fields using only the approximately 10^6 programmable rays offered by a single commercially available SLM. The core idea is outlined in
The HBS-DNN algorithm illustrated in
In one embodiment, a megapixel SLM is used to create a low resolution (LR) light field. The LR light field is projected onto a Hogel Basis Screen (HBS). The HBS has recorded a fixed basis, producing high resolution (HR) light field imagery with unprecedented 3D realism. The basis recorded on the HBS is fixed in one embodiment, and dynamic content can be displayed by modifying the image on the SLM. The displays can be single color (wavelength) or full color, depending on the implementation.
Another embodiment is directed to a single Hogel 2D display that uses multiplexed volume holography. The hardware and software of such a system can be developed to test and expand upon the basic principles of the 3D display concept (in 2D). In such an implementation, a volume hologram that includes a single holographic pixel, or Hogel, is used to multiplex numerous 2D basis images onto an approximately 1 mm×1 mm area on a photopolymer photographic film. The basis images are learned from a dataset of example 2D images, then recorded onto the volume holographic film by interfering with a reference beam. After the hologram is developed, a high-resolution 2D image is reconstructed using a low-resolution SLM coupled with the recorded volume hologram.
Development of a single Hogel 2D display involves development of a rendering framework to model the physical propagation of light through volume diffraction gratings. The framework has two important uses for the overall program. First, it can be used to verify theoretical relationships between a volume hologram, the maximum number of multiplexed holograms that can be stored, and the diffraction efficiency of each hologram. Formally, Equation 1 can be numerically implemented, which allows for calculation of the phase modulation function φ(x, z) produced from a set of holographic exposures. The function D[r_k(x), φ_n(x, z)] from Eqn. (2) can also be numerically implemented, which propagates the input field r(x) through the phase modulation function φ(x, z) recorded inside the volume hologram. Accurate numerical calculation of D[·,·] involves a suitable choice of optical propagation models. Candidate propagation models include Huygens/ASM propagation, Beam propagation, Coupled Wave Theory (CWT), Finite Difference Time Domain methods, etc. Second, the output of the optical propagator can be used to compute the predicted Hogel basis for a given recorded exposure pattern φ(x, z). Once an accurate estimate of the Hogel basis is determined, it is possible to decompose an HR light field into an LR light field using Equation 6, and determine the HR light field produced when the LR light field is incident on the HBS using the forward model in Equation 5. The rendering framework enables end-to-end simulation of 3D light fields produced by the 3D display prototypes described herein.
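As one example of a candidate propagation model, the sketch below implements a standard angular spectrum method (ASM) propagator in Python/NumPy. It is a generic textbook implementation offered for illustration; it is not the specific rendering framework described here, and the grid size, pixel pitch, wavelength, and propagation distance are arbitrary example values.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z using the angular spectrum method.

    field      : 2D complex array sampled on a square (dx, dx) grid
    wavelength : optical wavelength (same length units as dx and z)
    dx         : sample spacing
    z          : propagation distance
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)

    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)

    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate a small square aperture illuminated by a plane wave.
n, dx, wl = 512, 1e-6, 0.5e-6          # 512x512 grid, 1 um pitch, 0.5 um wavelength
aperture = np.zeros((n, n), dtype=complex)
aperture[n//2-32:n//2+32, n//2-32:n//2+32] = 1.0
out = angular_spectrum_propagate(aperture, wl, dx, z=200e-6)
print(np.abs(out).max())
```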
A system can also be developed to multiplex multiple images into a photopolymer volumetric hologram using a transmission geometry.
After being exposed, the developed hologram can be used as a Hogel basis screen (HBS) to display high resolution 2D imagery. The principle of operation for the single Hogel 2D display is illustrated in
Another embodiment is directed to a dynamic compressive 3D display using a Hogel basis screen. In one implementation, the above-referenced single Hogel 2D display can be extended to form a fully-functioning multiplexed digital holographic printer, which can then be used to manufacture a Hogel Basis Screen (HBS). A step-and-repeat process can be used to record a 2D grid of Hogels onto a 10 cm×10 cm HBS. The printed HBS will be coupled with an SLM producing a LR light field of approximately 10^6 rays. After exiting the HBS, a HR light field will be produced with 10^8-10^9 rays, enabling dynamic 3D content with significantly greater 3D realism.
To implement the dynamic compressive 3D display, algorithms are developed to learn an optimal Hogel basis from a database of high-resolution light fields and to reconstruct HR light fields from their respective Hogel basis coefficients (LR light field). New HBS-DNN auto-encoder optimization algorithms can also be developed for learning the Hogel basis H and estimating the Hogel basis coefficients A from an HR light field L. The network architecture can encode physical propagators in the form of network weights in the decoder network, which maps the basis and coefficients (H, A) to an output light field. The optimized weights in the decoder after training correspond to the learned Hogel basis H, which is printed onto the HBS screen. The HBS-DNN is also used to optimize a set of weights for an autoencoder. For the testing phase of the HBS-DNN, a target HR light field L is fed forward through the trained encoder and the corresponding coefficients A are computed. As illustrated in
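A minimal PyTorch sketch of this autoencoder idea follows. It is an assumption-laden illustration rather than the HBS-DNN architecture itself: each training sample is a single Hogel's ray vector l_n ∈ ℝ_+^M, the encoder is a small fully connected network that outputs nonnegative coefficients, the decoder is the forward model l ≈ |H α|² with a real-valued basis H as its only learnable weights, and the training data are random stand-ins.

```python
import torch
import torch.nn as nn

M, K = 1024, 16          # rays per Hogel, basis size (toy values)

class HogelAutoencoder(nn.Module):
    def __init__(self, m=M, k=K):
        super().__init__()
        # Encoder: HR Hogel rays -> nonnegative basis coefficients alpha.
        self.encoder = nn.Sequential(
            nn.Linear(m, 256), nn.ReLU(),
            nn.Linear(256, k), nn.ReLU(),   # ReLU keeps coefficients nonnegative
        )
        # Decoder: the physical forward model l_hat = |H alpha|^2,
        # with the (real-valued) Hogel basis H as its learnable weights.
        self.H = nn.Parameter(0.01 * torch.randn(m, k))

    def forward(self, l):
        alpha = self.encoder(l)
        l_hat = (alpha @ self.H.T) ** 2
        return l_hat, alpha

model = HogelAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_weight = 1e-4

# Stand-in training data: random nonnegative "HR light field" Hogel vectors.
data = torch.rand(512, M)

for epoch in range(200):
    l_hat, alpha = model(data)
    loss = nn.functional.mse_loss(l_hat, data) + l1_weight * alpha.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, model.H plays the role of the learned Hogel basis to be printed,
# and model.encoder maps a target HR light field to its LR coefficients.
```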
To construct the 3D display, a fully functioning multiplexed digital holographic printer can be constructed in one embodiment (as illustrated in
Still referring to
Another embodiment is directed to a programmable 3D reflectance field display. Typically, 2D/3D imagery is presented with illumination that is determined at the time of capture. Reflectance displays capture the response of 2D/3D scenes to different incoming lighting directions, so the visual display can be made consistent with environmental illumination conditions. In one implementation, the aforementioned multiplexed digital holographic printer can be used to multiplex multiple holograms of a 3D object illuminated from different lighting directions onto a single piece of holographic film. The multiplexed hologram can be played back using environmental lighting, providing realistic illumination of 3D objects with visual cues that are highly consistent with the environment, including shading, shadows, caustics, and reflections.
To implement a system for digital holographic printing of 3D reflectance fields, a multiplexed digital holographic printer and associated software can be used for displaying dynamic 3D content with unprecedented realism. The printer can be used to manufacture an HBS, and 3D content is programmed by projecting different LR light fields onto the HBS. In this embodiment, the multiplexed digital printer is used to record a set of holograms with fixed 3D content and varying reflectance. First, a 3D reflectance field is generated by rendering or acquiring a HR light field of a 3D scene under all possible point source illuminations. From the set of rendered images, the scene can be computationally relit so as to produce realistic light-material interactions in a synthetic environment, which is a technique used in the visual effects industry to create renderings of real actors in computer generated environments. This 3D reflectance field can be used to demonstrate a programmable 3D reflectance field display. Typically, the illumination of 2D/3D imagery is fixed at the time of capture. However, the proposed multiplexed digital holographic printer will allow one to record all HR light fields at once onto the holographic film, and then replay them in a linear combination so that the 3D field is computationally illuminated with the desired environmental lighting. The computational relighting process is identical to the HBS display process and is illustrated in
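Computational relighting of a recorded reflectance field amounts to a weighted linear combination of the images captured under individual point-source illuminations. The NumPy sketch below illustrates that principle in its simplest form; the array names, sizes, and environment-map weighting are illustrative assumptions, not the recording pipeline described above.

```python
import numpy as np

rng = np.random.default_rng(2)

J, Hpix, Wpix = 64, 128, 128   # number of point-source directions, image size (toy values)

# Reflectance field: one image of the scene per point-source illumination direction.
reflectance_field = rng.random((J, Hpix, Wpix))

# Environment lighting sampled at the same J directions (e.g., from an environment map).
env_weights = rng.random(J)
env_weights /= env_weights.sum()

# Relit image: linear combination of the per-direction images, weighted by the lighting.
relit = np.tensordot(env_weights, reflectance_field, axes=1)   # shape (Hpix, Wpix)
print(relit.shape, relit.min(), relit.max())
```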
Yet another embodiment is directed to a method and system for performing holographic photography conservation. The system can be used to study the degradation of holographic emulsions in works stored at various locations around the world (e.g., the MIT Museum, etc.). In such an embodiment, instrumentation techniques can be used to document the 3D images recorded on holograms stored in a collection. Established imaging techniques can also be used to study the material properties of the holograms and provide a conservation study of holographic film-based media.
The photographic information embedded in a hologram is in the form of interference fringes that are on the order of 1-2 microns. To view a hologram, the recorded interference pattern is reconstructed by illuminating the hologram with the appropriate illumination. A photograph only captures a small portion of the information encoded in a hologram. However, it is possible to record the entire 3D wavefront produced by a hologram if imagery can be captured with a resolution on the order of 1-2 microns. In the past, digital recording of the interference pattern encoded on a hologram was not considered possible. However, high resolution digital cameras are now widely available and inexpensive. Typically, acquisition of 3D wavefronts involves an interferometry setup, which can only be performed in a laboratory setting and is therefore impractical for transporting to a museum for conservation purposes. However, ptychographic imaging techniques for acquiring and reconstructing 3D wavefronts without interferometry have been developed, and the principle has been demonstrated at both optical and X-ray wavelengths.
In one embodiment, a ptychography instrument can be developed that includes just a laser, focusing optics, a high-resolution focal plane array, a 2D scanning gantry, and associated control electronics/software. The ptychography instrument can be a small modification to a 2D scanning gantry. Ptychography algorithms previously developed can be used to reconstruct 3D wave fronts from diffraction patterns captured using the scanning gantry.
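For reference, the core object-update step of a standard iterative ptychographic reconstruction (the ePIE algorithm) is sketched below in NumPy. This is a generic textbook variant under idealized assumptions (known probe, far-field diffraction, noiseless intensities); it is not necessarily the algorithm used with the instrument described here.

```python
import numpy as np

def epie_object_update(obj, probe, positions, intensities, alpha=1.0, iters=50):
    """Generic ePIE-style object reconstruction from far-field diffraction intensities.

    obj         : complex initial object estimate (2D array, modified in place)
    probe       : complex probe field (2D array, smaller than obj)
    positions   : list of (row, col) top-left scan positions of the probe on the object
    intensities : measured far-field intensities, one per position (same shape as probe)
    """
    ph, pw = probe.shape
    probe_norm = np.max(np.abs(probe) ** 2)
    for _ in range(iters):
        for (r, c), I in zip(positions, intensities):
            view = obj[r:r + ph, c:c + pw]
            psi = probe * view                              # exit wave at this position
            Psi = np.fft.fft2(psi)
            Psi = np.sqrt(I) * np.exp(1j * np.angle(Psi))   # enforce measured amplitudes
            psi_new = np.fft.ifft2(Psi)
            # ePIE object update using the known probe.
            obj[r:r + ph, c:c + pw] = view + alpha * np.conj(probe) / probe_norm * (psi_new - psi)
    return obj
```

In practice the probe is usually refined jointly with the object; only the object update is shown here for brevity.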
Additionally, commercial imaging and material characterization instruments can be used to study the 3D chemical composition of the photochemical materials in holograms. In particular, two main imaging modalities can be used, whose complementary information can be fused together to better understand the material structure, how it relates to the physical appearance, and hence the experience of the viewer. In one implementation, a portable scanning X-ray fluorescence (XRF) instrument can be used in concert with a portable scanning optical coherence tomography (OCT) imager to probe the selection of holograms. The XRF and OCT imagery can be captured of the same region, either scanned simultaneously with the same gantry or aligned using image registration software. The XRF instrument provides access to the elemental composition of materials, providing information about the relative concentration of these materials. The OCT instrument provides information about the 3D density structure of the materials. The XRF and OCT information can be fused to develop a complete picture of the 3D material structure of regions of the hologram. Comparative studies across different holograms will allow one to probe material structures that are correlated with different stages of deterioration.
The embodiments described herein can be used to develop single color or full-color holographic displays. Additionally, larger display sizes (e.g. 1 m×1 m) can be used. Incorporating multiple SLMs will also increase the fidelity of projected 3D display imagery over larger display sizes. In alternative embodiments, other variations and extensions are also envisioned.
The word “illustrative” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “illustrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Further, for the purposes of this disclosure and unless otherwise specified, “a” or “an” means “one or more.”
The foregoing description of illustrative embodiments of the disclosure has been presented for purposes of illustration and of description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosure. The embodiments were chosen and described in order to explain the principles of the disclosure and as practical applications of the disclosure to enable one skilled in the art to utilize the disclosure in various embodiments and with various modifications as suited to the particular use contemplated. It is intended that the scope of the disclosure be defined by the claims appended hereto and their equivalents.
The present application claims the priority benefit of U.S. Provisional Patent App. No. 62/731,280 filed on Sep. 14, 2018, the entire disclosure of which is incorporated by reference herein.
International Application: PCT/US2019/051292, filed Sep. 16, 2019 (WO).