1. Field of the Invention
This invention relates to techniques for speckle reduction in holographic optical systems, in particular holographic image display systems.
2. Description of the Related Art
Speckle is a problem in holographic image display systems, in particular those which display an image on a two-dimensional (though not necessarily planar) screen. This is because images are generated using coherent light and when this falls on a surface, unevenness at the wavelength scale or greater causes interference in the eye of the observer and hence speckle in the displayed image. Inter-pixel interference also results in an effect which has a visual appearance similar to speckle, although in this case the effect arises independently of the properties of the surface or the observer's eye.
One technique which may be employed to reduce speckle when replaying a holographic image is described in EP 0 292 209 A. This describes the fabrication of a composite hologram using separate exposures with different speckle fields generated using a diffuser. A technique for speckle reduction in a non-holographic image display system can be found in WO2006/104704. Other similar background prior art can be found in EP 1 292 134 A, US2006/0012842, WO98/24240, and U.S. Pat. No. 6,747,781. A system using a 1-dimensional spatial light modulator, scanned to generate a 2D image, is described in “Hadamard speckle contrast reduction”, J. I. Trisnadi, Optics Letters 29, 11-13 (2004) and in Trisnadi, Jahja I., “Speckle contrast reduction in laser projection displays”, Silicon Light Machines, Sunnyvale, Calif. 94089, as well as U.S. Pat. No. 7,214,946, U.S. Pat. No. 7,046,446 and related patents.
We have previously described techniques for displaying an image holographically (see, for example, WO 2005/059660, WO 2006/134398 and WO 2006/134404, all hereby incorporated by reference in their entirety). These techniques, which display multiple temporal sub-frames for a single image frame, have advantages in reducing inter-pixel interference caused by adjacent pixels having decorrelated phase values, and additional advantages in reducing speckle because temporally sequential sub-frames produce images with substantially independent spatial phase structure, leading to independent speckle patterns which average in the eye of the observer. However further speckle reduction is desirable.
According to certain embodiments of the present invention there is therefore provided a holographic image display system for displaying an image holographically on a display surface, the system comprising: a spatial light modulator (SLM) to display a hologram; a light source to illuminate said displayed hologram; projection optics to project light from said illuminated displayed hologram onto said display surface to form a holographically generated two-dimensional image, said projection optics being configured to form, at an intermediate image surface, an intermediate two-dimensional image corresponding to said holographically generated image; a diffuser located at said intermediate image surface; and an actuator mechanically coupled to said diffuser to, in operation, move said diffuser to randomise phases over pixels of said intermediate image to reduce speckle in an image displayed by the system.
Broadly speaking embodiments of the system provide a phase pattern across the intermediate image of the displayed hologram enabling a plurality of different, in embodiments independent, speckle patterns to be generated which, if displayed sufficiently quickly, average within the eye to reduce speckle. The phase pattern may be random or computed/pseudo-random, that is having a random characteristic to function as a light diffuser, but with a controlled, more particularly limited, spatial (angular) intensity distribution for the diffused light.
In embodiments the intermediate image surface comprises a Fourier transform plane of the phase imprint of the SLM. A version of the displayed image is formed at this surface, at a resolution determined (in part) by the number of pixels of the SLM.
In many cases the intermediate image surface comprises a plane but in embodiments it may be a curved surface, depending upon the location of the surface within the projection optics and on whether the display surface is curved. (We have previously described techniques for projecting onto a curved display surface, for example for a head-up display, in our co-pending applications GB0706264.9 and U.S. 60/909,394 hereby incorporated in their entireties by reference herein).
In embodiments the projection optics may comprise demagnification optics such as a beam expander or reverse Keplerian telescope, one or more lenses of which may be encoded in the hologram displayed on the SLM. In some preferred embodiments the SLM comprises a reflective SLM.
In preferred embodiments the holographic image display system comprises an “OSPR-type” system (described in detail later) in which multiple temporal subframes are displayed for each displayed image (frame). In embodiments of this technique the phases of pixels of successive frames are pseudo-random, albeit that in some preferred implementations noise generated by one subframe is compensated for in one or more subsequent subframes (which technique the inventors term adaptive (AD) OSPR).
Broadly speaking, OSPR reduces speckle noise power at spatial frequencies up to a frequency dependent on the inverse pixel pitch in the intermediate image plane. If a minimum feature size or pixel pitch of the diffuser is no smaller than a pixel pitch of the image in the intermediate image plane, then the diffuser has the effect of adding more temporal subframes, since the OSPR procedure effectively randomises the pixel phases of each successive subframe.
In a non-OSPR holographic image display system the effect is somewhat akin to that inherently achieved by OSPR. With OSPR, however, preferably a pixel pitch or feature size of the diffuser is less than that of the intermediate holographically generated image, in which case speckle is reduced at higher spatial frequencies than would otherwise be the case, up to a spatial frequency determined by the inverse of the diffuser pixel pitch or feature size.
Since the intermediate image and diffuser are both two-dimensional preferably the diffuser pixel pitch is less than the intermediate image pixel pitch in each of two corresponding orthogonal directions (x and y directions) in the intermediate image plane. In some preferred implementations the diffuser is moved in two dimensions (x and y directions) to reduce “streaking” in the image.
We have previously described, in PCT/GB2007/050291 (hereby incorporated in its entirety by reference herein) techniques for displaying colour images holographically. Thus in embodiments the light source may provide illumination at more than one wavelength, and may include beam expanding/combining or other optics. In some implementations of a colour display system pixels of different colours (wavelengths) may have substantially the same size in the displayed image plane. In other implementations the pixel sizes are generally proportional to the wavelengths of the incident light. In this latter type of system it is preferable for a pixel of the diffuser to be smaller than a smallest intermediate image pixel size, for example a blue wavelength intermediate image pixel size.
It has been found, in practice, that noise reduction resulting from OSPR-type phase randomisation and speckle reduction from a diffuser with pixels smaller than those of the intermediate image multiply together to give substantially lower perceived speckle and reduce speckle contrast.
The diffuser may comprise ground glass with a feature size less than a pixel pitch of the intermediate image (for example a phase change of at least, say, π/4 over this distance). However, because ground glass is typically rough over a range of length scales, if feature sizes at this scale are present then generally there will also be smaller features. These will tend to scatter the light over a wide range of angles, a proportion of the light being scattered beyond an acceptance angle of a final lens of the projection optics, thus resulting in a reduced intensity displayed image. It is therefore useful to employ a specially computed diffuser to match the acceptance angle of a final lens of the projection optics and/or optimise the depth of field of the system. (This is described in more detail later). A pixellated quantised phase diffuser may be employed without significant penalty.
Additionally or alternatively it can be useful to employ a diffuser with a minimum feature size constraint, for example 1/10th, 1/50th or 1/100th of a pixel pitch in the intermediate image plane (this depends on the collection angle of the final lens). To limit the minimum feature size whilst providing diffuser features less than an intermediate image pixel pitch a pixellated quantised phase diffuser may be employed. This may be a binary phase diffuser with phases of, for example, 0 and π, or more than two phase levels may be employed. In embodiments the pixels of the diffuser may have a pixel pitch of less than 5 μm, 4 μm, 3 μm, 2 μm or 1 μm. A pixel of the diffuser may have one of a plurality of quantised phase levels. Thus, in embodiments, the diffuser comprises a pixellated array with one of two phases for each pixel chosen with a 50% probability. The phase pattern may be random or computed/pseudo-random, that is having a random characteristic to function as a light diffuser, but with a controlled, more particularly limited, spatial (angular) intensity distribution for the diffused light.
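The following NumPy sketch (not from the specification; the array size and pitch are illustrative assumptions) shows one way to generate a pixellated binary phase diffuser of the kind described above, with each pixel taking one of two phases with 50% probability.

```python
import numpy as np

# Minimal sketch: a pixellated binary-phase diffuser in which each pixel takes
# one of two phase levels (0 or pi) with 50% probability, as described above.
# The array size and pixel pitch are illustrative values only.
rng = np.random.default_rng(seed=0)

n_pixels = 512            # diffuser pixels per side (illustrative)
pitch_um = 1.5            # diffuser pixel pitch, e.g. less than the intermediate image pitch

phase_levels = np.array([0.0, np.pi])
diffuser_phase = rng.choice(phase_levels, size=(n_pixels, n_pixels))

# Complex transmission of the (ideally lossless) phase-only diffuser.
diffuser = np.exp(1j * diffuser_phase)

print(f"fraction of pi pixels: {np.mean(diffuser_phase == np.pi):.3f}")
```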
The actuator may comprise a motor but in preferred embodiments a piezoelectric actuator is employed. Preferably the stroke of the actuator is sufficient for at least 2, 5 or 10 different phase patterns (diffuser pixels) to be imposed on an intermediate image pixel. Thus, depending upon pixel sizes, in some preferred embodiments the piezoelectric actuator has a stroke of at least 5 μm, more preferably at least 10 μm. For the different speckle patterns caused by the different imposed phase patterns to integrate within the human eye the different phase patterns should be imposed sufficiently quickly for the speckle patterns to average within an observer's eye, for example in less than 1/30th, preferably less than 1/60th of a second. A routine experiment can be employed to trade off stroke length and speed of movement, and the actuator may be operated either on-resonance or off-resonance (also taking into account the desirability of low audio noise from the actuator). For example the actuator may operate at a frequency of between 10 Hz and 10 kHz.
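As a rough worked example of the stroke/speed trade-off just described, the short sketch below computes a minimum stroke and drive rate from an assumed diffuser pitch, number of distinct phase patterns per pixel, and eye integration time; all values are illustrative assumptions, not figures from the specification.

```python
# Rough arithmetic for the actuator requirements discussed above (all values
# are illustrative assumptions).
diffuser_pitch_um = 1.5      # assumed diffuser pixel pitch
patterns_per_period = 10     # distinct phase patterns per intermediate-image pixel
eye_integration_s = 1 / 60   # assumed averaging time of the observer's eye

min_stroke_um = patterns_per_period * diffuser_pitch_um
# One sweep of the full stroke per integration period:
drive_frequency_hz = 1 / eye_integration_s

print(f"minimum stroke  : {min_stroke_um:.1f} um")
print(f"drive frequency : {drive_frequency_hz:.0f} Hz (or higher, e.g. on-resonance)")
```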
As previously mentioned, the above described techniques are particularly advantageous in a holographic image display system which generates a plurality of temporal holographic subframes for display in rapid succession on the SLM such that corresponding temporal subframe images on the display surface average in an observer's eye to give the impression of the displayed image. This technique can reduce speckle in the projected image up to a spatial frequency dependent on the (inverse) intermediate image pixel pitch in the Fourier transform plane of the SLM, and, in cases where the diffuser pixel pitch is less than this, speckle at increased spatial frequencies can be reduced. Where the diffuser pixel pitch is not less than that of the intermediate, holographically-generated image, effectively the “OSPR-effect” is enhanced.
Thus, in a related aspect certain embodiments of the invention provide a method of reducing speckle in a holographic image display system for holographically displaying an image comprising a plurality of pixels on a display surface, the system comprising: a spatial light modulator (SLM) to display a hologram; a light source to illuminate said displayed hologram; projection optics to project light from said illuminated displayed hologram onto said display surface to form a holographically generated two-dimensional image, said projection optics being configured to form, at an intermediate image surface, an intermediate two-dimensional image corresponding to said holographically generated image; and a diffuser located at said intermediate image surface, the system being configured to generate a plurality of temporal holographic subframes for display in rapid succession on said SLM such that corresponding temporal subframe images on said display surface average in an observer's eye to give the impression of said displayed image; the method comprising moving said diffuser to provide within the area of each said pixel a plurality of different phases sufficiently quickly for a resulting changing speckle pattern to be integrated in the eye of a human observer to reduce a perceived level of speckle.
Preferably the diffuser has pixels of a pitch less than that of an intermediate, holographically-generated image in the system such that speckle is reduced at a spatial frequency higher than a maximum spatial frequency of the displayed image. Preferably the diffuser is moved by more than 2, 5 or 10 diffuser pixels within at least the time duration of an image frame, optionally within the duration of one or more temporal subframes.
In some preferred embodiments of the method the diffuser has a pseudo-random computed pattern which is computed to provide a predetermined intensity distribution for light diffused by the diffuser. In embodiments the diffuser may be computed as a phase hologram; in some preferred embodiments it has a pattern of pixels of quantised phase whose values are computed to achieve a desired pattern, more particularly an angular intensity distribution, of light diffused by the diffuser. In some embodiments this pattern is computed such that the diffuser diffracts or diffuses light into a disc, preferably this disc at least substantially filling a clear aperture of an imaging device (lens or mirror) imaging the diffuser. In embodiments the diffuser pattern, more particularly the disc, is computed so that the diffused light substantially matches an acceptance angle of an imaging device imaging the intermediate image surface of the projection optics. Additionally or alternatively the (pseudo-random) computed pattern on the diffuser may be calculated to optimise a depth of field of the system, more particularly to maximise speckle reduction for a determined depth of field of the holographic image display system (for example, a design depth of field or a measured depth of field). In this latter case the pattern on the diffuser may be configured to diffract light into a disc, the disc having a light intensity which falls off with radius from the centre according to a power law, more particularly dependent on the square of the radius, in embodiments as (1 − r²). In some preferred embodiments a disc of the diffused light at least substantially fills a clear aperture of the imaging device, typically the output optics, and may “over fill” the input aperture of the output optics.
Although the above described techniques are particularly advantageous with systems configured to generate a displayed image holographically they may also be employed with image projection systems which employ coherent light, or at least partially coherent light, to illuminate a transmissive or reflective SLM (for example, a digital light processor, DLP) displaying an image for projection on to a display surface rather than displaying a hologram.
Thus certain embodiments of the invention also provide an image display system to project an image onto a display surface using at least partially coherent light, the system comprising: a spatial light modulator (SLM) to display a two-dimensional image; an at least partially coherent light source to illuminate said displayed image on said SLM; projection optics to project light from said illuminated display image onto said display surface to form a two-dimensional image, said projection optics being configured to form an intermediate two-dimensional image corresponding to said displayed image; a pixellated, quantised phase diffuser located at a position of said intermediate two-dimensional image; and a piezoelectric actuator mechanically coupled to said diffuser to, in operation, move said diffuser to change a speckle pattern of said projected two-dimensional image, whereby, in operation, said changing speckle pattern of said projected two-dimensional image resulting from movement of said diffuser averages in a human observer's eye to reduce a perceived level of speckle.
In some preferred embodiments the diffuser has a pixel pitch less than the intermediate image pixel pitch. In embodiments the diffuser has a pixel pitch of less than 5 μm, 4 μm, 3 μm, 2 μm or 1 μm. Preferably, the actuator (preferably a piezoelectric actuator) is configured to move the diffuser in two dimensions.
These and other aspects of the invention will now be further described, by way of example only, with reference to the accompanying figures in which:
FIGS. 24a and 24b show, schematically, block diagrams of first and second examples of holographic image display systems implementing embodiments of the invention; and
FIGS. 25a and 25b show, respectively, a schematic diagram of a colour holographic image display system embodying the invention, and details of a mechanical configuration for the system of FIG. 25a.
We have previously described, in UK patent application number 0512179.3 filed 15 Jun. 2005, incorporated in its entirety by reference herein, a holographic projection module comprising a substantially monochromatic light source such as a laser diode; a spatial light modulator (SLM) to (phase) modulate the light to provide a hologram for generating a displayed image; and a demagnifying optical system to increase the divergence of the modulated light to form the displayed image. Without the demagnifying optics the size (and distance from the SLM) of a displayed image depends on the pixel size of the SLM, smaller pixels diffracting the light more to produce a larger image. Typically an image would need to be viewed at a distance of several metres or more. The demagnifying optics increase the diffraction, thus allowing an image of a useful size to be displayed at a practical distance. Moreover the displayed image is substantially focus-free: that is the image is substantially in focus over a wide range or at all distances from the projection module.
A wide range of different optical arrangements can be used to achieve this effect but one particularly advantageous combination comprises first and second lenses with respective first and second focal lengths, the second focal length being shorter than the first and the first lens being closer to the spatial light modulator (along the optical path) than the second lens. Preferably the distance between the lenses is substantially equal to the sum of their focal distances, in effect forming a (demagnifying) telescope. In some embodiments two positive (i.e., converging) simple lenses are employed although in other embodiments one or more negative or diverging lenses may be employed. A filter may also be included to filter out unwanted parts of the displayed image, for example a bright (zero order) undiffracted spot or a repeated first order image (which may appear as an upside down version of the displayed image).
This optical system (and those described later) may be employed with any type of system or procedure for calculating a hologram to display on the SLM in order to generate the displayed image. However we have some particularly preferred procedures in which the displayed image is formed from a plurality of holographic sub-images which visually combine to give (to a human observer) the impression of the desired image for display. Thus, for example, these holographic sub-frames are preferably temporally displayed in rapid succession so as to be integrated within the human eye. The data for successive holographic sub-frames may be generated by a digital signal processor, which may comprise either a general purpose DSP under software control, for example in association with a program stored in non-volatile memory, or dedicated hardware, or a combination of the two such as software with dedicated hardware acceleration. Preferably the SLM comprises a reflective SLM (for compactness) but in general any type of pixellated microdisplay which is able to phase modulate light may be employed, optionally in association with an appropriate driver chip if needed.
Referring now to
Still referring to
Lens pair L3 and L4 (with focal lengths f3 and f4 respectively) form a demagnification lens pair. This effectively reduces the pixel size of the modulator, thus increasing the diffraction angle; as a result, the image size increases, the increase being equal to the ratio of f3 to f4. A spatial filter may be included to filter out unwanted parts of the displayed image, for example a zero order undiffracted spot or a repeated first order (conjugate) image, which may appear as an upside down version of the displayed image, depending upon how the hologram for displaying the image is generated.
Continuing to refer to
The DSP 100 may comprise dedicated hardware and/or Flash or other read-only memory storing processor control code to implement a hologram generation procedure, in preferred embodiments in order to generate sub-frame phase hologram data for output to the SLM 24.
We now describe a preferred procedure for calculating hologram data for display on SLM 24. We refer to this procedure, in broad terms, as One Step Phase Retrieval (OSPR), although strictly speaking in some implementations it could be considered that more than one step is employed (as described for example in GB0518912.1 and GB0601481.5, incorporated in their entirety by reference herein, where “noise” in one sub-frame is compensated in a subsequent sub-frame).
Thus we have previously described, in UK Patent Application No. GB0329012.9, filed 15 Dec. 2003, a method of displaying a holographically generated video image comprising plural video frames, the method comprising providing for each frame period a respective sequential plurality of holograms and displaying the holograms of the plural video frames for viewing the replay field thereof, whereby the noise variance of each frame is perceived as attenuated by averaging across the plurality of holograms.
Broadly speaking in our preferred method the SLM is modulated with holographic data approximating a hologram of the image to be displayed. However this holographic data is chosen in a special way, the displayed image being made up of a plurality of temporal sub-frames, each generated by modulating the SLM with a respective sub-frame hologram. These sub-frames are displayed successively and sufficiently fast that in the eye of a (human) observer the sub-frames (each of which has the spatial extent of the displayed image) are integrated together to create the desired image for display.
Each of the sub-frame holograms may itself be relatively noisy, for example as a result of quantising the holographic data into two (binary) or more phases, but temporal averaging amongst the sub-frames reduces the perceived level of noise. Embodiments of such a system can provide visually high quality displays even though each sub-frame, were it to be viewed separately, would appear relatively noisy.
A scheme such as this has the advantage of reduced computational requirements compared with schemes which attempt to accurately reproduce a displayed image using a single hologram, and also facilitates the use of a relatively inexpensive SLM.
Here it will be understood that the SLM will, in general, provide phase rather than amplitude modulation, for example a binary device providing relative phase shifts of zero and π (+1 and −1 for a normalised amplitude of unity). In preferred embodiments, however, more than two phase levels are employed, for example four-phase modulation (0, π/2, π, 3π/2), since with only binary modulation the hologram results in a pair of images, one spatially inverted with respect to the other, losing half the available light, whereas with multi-level phase modulation where the number of phase levels is greater than two this second image can be removed. Further details can be found in our earlier application GB0329012.9 (ibid), hereby incorporated by reference in its entirety.
Although embodiments of the method are computationally less intensive than previous holographic display methods it is nonetheless generally desirable to provide a system with reduced cost and/or power consumption and/or increased performance. It is particularly desirable to provide improvements in systems for video use which generally have a requirement for processing data to display each of a succession of image frames within a limited frame period.
We have also described, in GB0511962.3, filed 14 Jun. 2005, a hardware accelerator for a holographic image display system, the image display system being configured to generate a displayed image using a plurality of holographically generated temporal sub-frames, said temporal sub-frames being displayed sequentially in time such that they are perceived as a single reduced-noise image, each said sub-frame being generated holographically by modulation of a spatial light modulator with holographic data such that replay of a hologram defined by said holographic data defines a said sub-frame, the hardware accelerator comprising: an input buffer to store image data defining said displayed image; an output buffer to store holographic data for a said sub-frame; at least one hardware data processing module coupled to said input data buffer and to said output data buffer to process said image data to generate said holographic data for a said sub-frame; and a controller coupled to said at least one hardware data processing module to control said at least one data processing module to provide holographic data for a plurality of said sub-frames corresponding to image data for a single said displayed image to said output data buffer.
In this preferably a plurality of the hardware data processing modules is included for processing data for a plurality of the sub-frames in parallel. In preferred embodiments the hardware data processing module comprises a phase modulator coupled to the input data buffer and having a phase modulation data input to modulate phases of pixels of the image in response to an input which preferably comprises at least partially random phase data. This data may be generated on the fly or provided from a non-volatile data store. The phase modulator preferably includes at least one multiplier to multiply pixel data from the input data buffer by input phase modulation data. In a simple embodiment the multiplier simply changes the sign of the input data.
An output of the phase modulator is provided to a space-frequency transformation module such as a Fourier transform or inverse Fourier transform module. In the context of the holographic sub-frame generation procedure described later these two operations are substantially equivalent, effectively differing only by a scale factor. In other embodiments other space-frequency transformations may be employed (generally frequency referring to spatial frequency data derived from spatial position or pixel image data). In some preferred embodiments the space-frequency transformation module comprises a one-dimensional Fourier transformation module with feedback to perform a two-dimensional Fourier transform of the (spatial distribution of the) phase modulated image data to output holographic sub-frame data. This simplifies the hardware and enables processing of, for example, first rows then columns (or vice versa).
In preferred embodiments the hardware also includes a quantiser coupled to the output of the transformation module to quantise the holographic sub-frame data to provide holographic data for a sub-frame for the output buffer. The quantiser may quantise into two, four or more (phase) levels. In preferred embodiments the quantiser is configured to quantise real and imaginary components of the holographic sub-frame data to generate a pair of sub-frames for the output buffer. Thus in general the output of the space-frequency transformation module comprises a plurality of data points over the complex plane and this may be thresholded (quantised) at a point on the real axis (say zero) to split the complex plane into two halves and hence generate a first set of binary quantised data, and then quantised at a point on the imaginary axis, say 0j, to divide the complex plane into a further two regions (complex component greater than 0, complex component less than 0). Since the greater the number of sub-frames the less the overall noise this provides further benefits.
Preferably one or both of the input and output buffers comprise dual-ported memory. In some particularly preferred embodiments the holographic image display system comprises a video image display system and the displayed image comprises a video frame.
In an embodiment, the various stages of the hardware accelerator implement a variant of the algorithm given below, as described later. The algorithm is a method of generating, for each still or video frame $I = I_{xy}$, sets of $N$ binary-phase holograms $h^{(1)} \ldots h^{(N)}$. Statistical analysis of the algorithm has shown that such sets of holograms form replay fields that exhibit mutually independent additive noise.
1. Let $G_{xy}^{(n)} = I_{xy}\exp\!\left(j\phi_{xy}^{(n)}\right)$ where $\phi_{xy}^{(n)}$ is uniformly distributed between 0 and $2\pi$ for $1 \le n \le N/2$ and $1 \le x,y \le m$
2. Let $g_{uv}^{(n)} = F^{-1}\!\left[G_{xy}^{(n)}\right]$ where $F^{-1}$ represents the two-dimensional inverse Fourier transform operator, for $1 \le n \le N/2$
3. Let $m_{uv}^{(n)} = \Re\!\left\{g_{uv}^{(n)}\right\}$ for $1 \le n \le N/2$
4. Let $m_{uv}^{(n+N/2)} = \Im\!\left\{g_{uv}^{(n)}\right\}$ for $1 \le n \le N/2$
5. Let
$$h_{uv}^{(n)} = \begin{cases} -1 & \text{if } m_{uv}^{(n)} < Q^{(n)} \\ +1 & \text{if } m_{uv}^{(n)} \ge Q^{(n)} \end{cases}$$
where $Q^{(n)} = \operatorname{median}\!\left(m_{uv}^{(n)}\right)$ and $1 \le n \le N$.
Step 1 forms $N$ targets $G_{xy}^{(n)}$ equal to the amplitude of the supplied intensity target $I_{xy}$, but with independent identically-distributed (i.i.d.), uniformly-random phase. Step 2 computes the $N$ corresponding full complex Fourier transform holograms $g_{uv}^{(n)}$. Steps 3 and 4 compute the real part and imaginary part of the holograms, respectively. Binarisation of each of the real and imaginary parts of the holograms is then performed in step 5: thresholding around the median of $m_{uv}^{(n)}$ ensures equal numbers of −1 and 1 points are present in the holograms, achieving DC balance (by definition) and also minimal reconstruction error. In an embodiment, the median value of $m_{uv}^{(n)}$ is assumed to be zero. This assumption can be shown to be valid and the effects of making this assumption are minimal with regard to perceived image quality. Further details can be found in the applicant's earlier application (ibid), to which reference may be made.
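A minimal NumPy sketch of steps 1 to 5 follows; the toy target image and number of holograms are placeholders, and the target is used directly as the amplitude, as in step 1.

```python
import numpy as np

def ospr_binary_holograms(target, n_holograms):
    """Sketch of steps 1-5 above.  `target` is the supplied target I_xy (used
    directly as the amplitude, as in step 1); `n_holograms` is N, assumed even,
    since each random-phase pass yields one hologram from the real part and one
    from the imaginary part."""
    rng = np.random.default_rng()
    holograms = []
    for _ in range(n_holograms // 2):
        # Step 1: target with independent, uniformly distributed random phase.
        phase = rng.uniform(0.0, 2.0 * np.pi, size=target.shape)
        G = target * np.exp(1j * phase)
        # Step 2: full complex (inverse) Fourier-transform hologram.
        g = np.fft.ifft2(G)
        # Steps 3-5: take real and imaginary parts and binarise about the median.
        for m_uv in (g.real, g.imag):
            q = np.median(m_uv)
            holograms.append(np.where(m_uv < q, -1.0, 1.0))
    return holograms

# Toy usage: the replay-field noise averages down as more subframes are summed.
target = np.zeros((64, 64))
target[20:44, 20:44] = 1.0
subframes = ospr_binary_holograms(target, n_holograms=8)
replay = np.mean([np.abs(np.fft.fft2(h)) ** 2 for h in subframes], axis=0)
```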
The purpose of the phase-modulation block shown in the embodiment of
The quantisation hardware that is shown in the embodiment of
There are many different ways in which phase-modulation data, as shown in
In another embodiment, pre-calculated phase modulation data is stored in a look-up table and a sequence of address values for the look-up table is produced, such that the phase-data read out from the look-up table is random. In this embodiment, it can be shown that a sufficient condition to ensure randomness is that the number of entries in the look-up table, N, is greater than the value, m, by which the address value increases each time, that m is not an integer factor of N, and that the address values ‘wrap around’ to the start of their range when N is exceeded. In a preferred embodiment, N is a power of 2, e.g. 256, such that address wrap around is obtained without any additional circuitry, and m is an odd number such that it is not a factor of N.
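A short sketch of this look-up-table scheme is given below; the table size, step value and seed are illustrative, with wrap-around obtained by a bitwise AND because N is a power of two.

```python
import numpy as np

# Sketch of the look-up-table scheme described above: N pre-computed phase
# values, an address that steps by an odd increment m on each access, and
# wrap-around obtained "for free" because N is a power of two.
N = 256                                   # LUT entries (power of 2)
m = 37                                    # odd step, hence not a factor of N
rng = np.random.default_rng(1)
phase_lut = rng.uniform(0.0, 2.0 * np.pi, size=N)   # pre-calculated phase data

def lut_phase_stream(n_samples, start=0):
    """Yield n_samples phase values by stepping through the LUT with wrap-around."""
    addr = start
    for _ in range(n_samples):
        yield phase_lut[addr]
        addr = (addr + m) & (N - 1)       # wrap-around == bitwise AND when N = 2**k

samples = list(lut_phase_stream(1024))
```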
In other implementations the operations illustrated in
In the OSPR approach we have described above, subframe holograms are generated independently and thus exhibit independent noise. However the generation process for each subframe can take into account the noise generated by the previous subframes in order to cancel it out, effectively “feeding back” the perceived image formed after, say, $n$ OSPR frames to stage $n+1$ of the procedure, forming a closed-loop system. Such an adaptive (AD) OSPR procedure uses feedback as follows: each stage $n$ of the algorithm calculates the noise resulting from the previously-generated holograms $H_1$ to $H_{n-1}$, and factors this noise into the generation of the hologram $H_n$ to cancel it out. As a result, noise variance falls as $1/N^2$ (where a target image $T$ outputs a set of $N$ holograms). More details can be found in WO2007/031797 and WO2007/085874.
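The sketch below is a loose illustration of this feedback idea only; the exact AD-OSPR procedure is given in the cited applications, and the residual-target formulation and energy normalisation used here are assumptions made for the example.

```python
import numpy as np

def adospr_sketch(target, n_subframes, seed=0):
    """Loose illustration of the feedback idea above: each new binary subframe
    hologram is generated against the residual between the desired cumulative
    image and the replay intensity actually produced so far.  Illustrative
    reconstruction only, not the exact procedure of WO2007/031797/WO2007/085874."""
    rng = np.random.default_rng(seed)
    cumulative = np.zeros_like(target)          # intensity delivered so far
    holograms = []
    for n in range(1, n_subframes + 1):
        # Intensity still needed so that n subframes sum to n * target.
        residual = np.clip(n * target - cumulative, 0.0, None)
        phase = rng.uniform(0.0, 2.0 * np.pi, size=target.shape)
        g = np.fft.ifft2(np.sqrt(residual) * np.exp(1j * phase))
        h = np.where(g.real < np.median(g.real), -1.0, 1.0)   # binary phase hologram
        holograms.append(h)
        replay = np.abs(np.fft.fft2(h)) ** 2
        replay *= target.sum() / replay.sum()                 # normalise energy per subframe
        cumulative += replay                                  # feed back the actual noise
    return holograms

subframes = adospr_sketch(np.ones((64, 64)), n_subframes=4)
```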
The OSPR algorithm can be generalised to the case of calculating Fresnel holograms by replacing the Fourier transform step by a discrete Fresnel transform. One significant advantage associated with binary Fresnel holograms is that the diffracted near-field does not contain a conjugate image.
Referring back to
It is possible to remove the lens L3 from the optical system by employing a Fresnel hologram which encodes the equivalent lens power. The output image from the projector would still be in-focus at all distances from the output lens L4 but due to the characteristics of near-field propagation, is free from the conjugate image artifact. L3 is the larger of the lens pair, as it has the longer focal length, and removing it from the optical path significantly reduces the size and weight of the system.
The same technique can also be applied to the beam-expansion lens pair L1 and L2, which perform the reverse function to the pair L3 and L4. It is therefore possible to share a lens between the beam-expansion and demagnification assemblies, which can be represented as a lens function encoded onto a Fresnel hologram. This results in a holographic projector which requires only two small, short focal length lenses. The remaining lenses are encoded onto a hologram, which is used in a reflective configuration.
Referring back to steps 1 to 5 of the above-described OSPR procedure, step 2 was previously a two-dimensional inverse Fourier transform. To implement a Fresnel hologram, also encoding a lens, as described above an inverse Fresnel transform is employed in place of the previously described inverse Fourier transform.
The discrete Fresnel transform can be expressed in terms of a Fourier transform:

$$H_{xy} = F_{xy}^{(1)} \cdot F\!\left[F_{uv}^{(2)}\, h_{uv}\right]$$
The inverse Fresnel transform may take the form:
In effect the factors $F^{(1)}$ and $F^{(2)}$ turn the Fourier transform into a Fresnel transform of the hologram $h$. The size of each hologram pixel is $\Delta_x \times \Delta_y$, and the total size of the hologram is (in pixels) $N \times M$. In the above, $z$ defines the focal length of the holographic lens. Finally, the sample spacing in the replay field is

$$\Delta_x' = \frac{\lambda z}{N \Delta_x}, \qquad \Delta_y' = \frac{\lambda z}{M \Delta_y}$$

so that the dimensions of the replay field are

$$N\Delta_x' \times M\Delta_y' = \frac{\lambda z}{\Delta_x} \times \frac{\lambda z}{\Delta_y},$$

consistent with the size of the replay field in the Fraunhofer diffraction regime.
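A NumPy sketch of a single-FFT discrete Fresnel transform of this form is given below. The exact expressions for $F^{(1)}$ and $F^{(2)}$ are not reproduced in this extract, so the standard Fresnel quadratic phase factors are assumed; the wavelength, focal length and pixel pitch in the usage example are placeholders.

```python
import numpy as np

def fresnel_transform(h, wavelength, z, dx, dy):
    """Sketch of a discrete Fresnel transform in the single-FFT form
    H_xy = F1_xy * FFT[F2_uv * h_uv].  The quadratic phase factors F1 and F2
    below are the standard Fresnel propagation factors (assumed, since the
    specification's exact expressions are not reproduced here)."""
    N, M = h.shape
    k = 2.0 * np.pi / wavelength
    u = (np.arange(N) - N // 2) * dx                 # hologram-plane coordinates
    v = (np.arange(M) - M // 2) * dy
    U, V = np.meshgrid(u, v, indexing="ij")
    # Replay-field sample spacing: lambda*z/(N*dx) by lambda*z/(M*dy).
    dxp = wavelength * z / (N * dx)
    dyp = wavelength * z / (M * dy)
    X, Y = np.meshgrid((np.arange(N) - N // 2) * dxp,
                       (np.arange(M) - M // 2) * dyp, indexing="ij")

    F2 = np.exp(1j * k / (2.0 * z) * (U ** 2 + V ** 2))        # inner quadratic phase
    F1 = (np.exp(1j * k * z) / (1j * wavelength * z)
          * np.exp(1j * k / (2.0 * z) * (X ** 2 + Y ** 2)))    # outer quadratic phase
    return F1 * np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(F2 * h)))

# Example: propagate a random binary-phase hologram over an assumed focal distance.
h = np.where(np.random.default_rng(2).random((256, 256)) < 0.5, -1.0, 1.0)
H = fresnel_transform(h, wavelength=532e-9, z=0.2, dx=8e-6, dy=8e-6)
```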
The transform shown in
For more details reference may be made to the applicant's co-pending international patent application number PCT/GB2007/050157 filed 27 Mar. 2007, hereby incorporated by reference in its entirety.
Referring to
The system 1000 comprises red 1002, green 1006, and blue 1004 collimated laser diode light sources, for example at respective wavelengths of 638 nm, 532 nm and 445 nm. Each light source comprises a laser diode 1002 and, if necessary, a collimating lens and/or beam expander. Optionally the respective sizes of the beams are scaled to the respective sizes of the holograms, as described later. The red, green and blue light beams are combined in two dichroic beam splitters 1010a, b, as shown and the combined beam is provided to a reflective spatial light modulator 1012 (although in other embodiments a transmissive SLM may be employed).
The combined optical beam is provided to demagnification optics 1014 which project the holographically generated image onto a screen 1016. As illustrated, the extent of the red field is greater than that of the blue field, determined by the (constant) SLM pixel pitch and the respective wavelengths of the illuminating light. In operation red, green and blue fields are time multiplexed, for example by driving the laser diodes in a time-multiplexed manner, to create a full colour display.
Theoretically the demagnifying optics 1014 could be configured to demagnify by different factors for different wavelengths by, in effect, introducing controlled “chromatic” aberration. Alternatively the lens power may be adjusted in accordance with the colour of light illuminating the SLM, to select different demagnifications in synchrony with the different colours of the SLM. Preferably, however, adjustment for the different degrees of diffraction of the different colours of light by the SLM is compensated for when calculating the hologram that is to be displayed on the SLM, as described in detail in PCT/GB2007/050291, hereby incorporated by reference.
The dashed line 1018 shows an intermediate image plane, that is a Fourier transform plane of the SLM, at which a speckle-reducing diffuser (described below) may be located.
For a static viewer position, the apparent structure size of the speckle field will depend upon the resolution limit (and hence pupil size) of the imaging system used. Following the theoretical analysis described in “Statistical Optics” (J. W. Goodman, Wiley Classics Library Edition, 2000) we assume that the scattering screen is ideal (i.e. a flat [−π, π] phase distribution) on scales approximating a wavelength. The spectral distribution of the speckle pattern is then given by the autocorrelation function of the aperture. For the case of a square pupil, this looks like the function seen in
The effect of the pupil size on the speckle field can be determined within the model. Here the screen is modelled as an ideal scatterer with a pixel size of 20 μm across a 100×100 pixel area in a 256×256 pixel scene (
By selecting a structure size of 20 μm for the screen, the physical size of the replay field is then f (= 20 cm) × λ (= 532 nm)/Δ (≈ 20 μm) ≈ 5.3 mm across, which is approximately the size of the anatomical pupil in dim viewing conditions (typically 5-8 mm). This replay field can then be apertured by a circular pupil to approximate the light passing through the eye (
For a screen size of 2 mm×2 mm, and an ocular focal length of ~17 mm, the magnification of the image will be around 17 mm/200 mm ≈ 1/12, leading to a total image size of 167 μm×167 μm. Given that the typical size of a cone photoreceptor in the eye is around 2.5 μm near the fovea, with a spacing of ~3 μm, the 200×200 pixels in the retinal image would have to be coherently downsampled to around 50×50 pixels. This means that groups of 4×4 pixels in the replay field will have to be summed coherently to determine the intensity and phase of the light incident upon each of the cones in the eye. This effect was not taken into consideration for the first set of simulations, which assumed that the image was being taken with a camera of longer focal length and finer pixel resolution than the eye (
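The short script below simply re-checks the arithmetic quoted in the two paragraphs above, using the values as stated in the text.

```python
# Worked check of the numbers quoted above (values as stated in the text).
f_m     = 0.20       # focal length, 20 cm
lam_m   = 532e-9     # green wavelength
delta_m = 20e-6      # screen structure (pixel) size
replay_mm = f_m * lam_m / delta_m * 1e3
print(f"replay field width      : {replay_mm:.2f} mm")        # ~5.3 mm

eye_f_mm, screen_dist_mm, screen_mm = 17.0, 200.0, 2.0
magnification = eye_f_mm / screen_dist_mm                      # ~1/12
retina_um = screen_mm * 1e3 * magnification
print(f"retinal image width     : {retina_um:.0f} um")         # ~167 um

cone_spacing_um = 3.0
print(f"resolvable retinal cells: {retina_um / cone_spacing_um:.0f}")  # ~50 per side
```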
To avoid the edge distortion caused by the aberrations in the pupil plane, the retinal image of the screen was cropped to 80% in the x and y dimensions before determining the statistics of the intensity pattern observed (
Testing the model: One of the primary tests to determine whether the intensity pattern predicted by the model was actually caused by a speckle effect was to compare the spectral properties of the intensity distribution to that predicted by theory. To test this, four different aperture sizes were used which filled 25%, 50%, 75% and 100% of the width of the replay field area. Smaller aperture sizes produced retinal images with coarser intensity patterns, in accordance with that observed in experiment and in theory (
Further tests to determine whether the intensity pattern seen is indeed due to coherent noise include measuring the speckle contrast and the population statistics of the intensity patterns. In theory, the ratio of the standard deviation to the mean intensity should be unity for a true speckle pattern. For the above simulations, the speckle contrast values for the different aperture sizes were (25%) 0.92, (50%) 0.90, (75%) 0.87, (100%) 0.68. The variation in the pixel value histograms is as shown in
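An illustrative re-creation of this test is sketched below: an ideal random-phase scatterer over a 100×100 pixel patch of a 256×256 scene, imaged through a circular pupil of varying fill fraction, with the speckle contrast taken as std/mean over the central 80% of the screen image. The geometry and seed are assumptions, so the printed values will only qualitatively follow the trend quoted above.

```python
import numpy as np

def speckle_contrast(aperture_fraction, n_scene=256, n_screen=100, rng=None):
    """Ideal scatterer (uniform [-pi, pi] phase) over an n_screen patch of an
    n_scene scene, imaged through a circular pupil filling the given fraction
    of the replay-field width; returns std/mean of the cropped image intensity
    (a fully developed speckle pattern gives a value near 1)."""
    rng = rng or np.random.default_rng(0)
    field = np.zeros((n_scene, n_scene), dtype=complex)
    lo = (n_scene - n_screen) // 2
    field[lo:lo + n_screen, lo:lo + n_screen] = np.exp(
        1j * rng.uniform(-np.pi, np.pi, (n_screen, n_screen)))

    pupil_plane = np.fft.fftshift(np.fft.fft2(field))
    y, x = np.indices((n_scene, n_scene)) - n_scene / 2
    radius = aperture_fraction * n_scene / 2
    image = np.abs(np.fft.ifft2(np.fft.ifftshift(
        pupil_plane * ((x ** 2 + y ** 2) <= radius ** 2)))) ** 2

    crop = int(0.8 * n_screen) // 2                 # keep the central 80% of the screen
    c = n_scene // 2
    roi = image[c - crop:c + crop, c - crop:c + crop]
    return roi.std() / roi.mean()

for frac in (0.25, 0.50, 0.75, 1.00):
    print(f"aperture {frac:.2f} of field width -> contrast {speckle_contrast(frac):.2f}")
```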
The reason for the increasing deviation of the simulations from the ideal speckle characteristics with aperture size can be explained in terms of the aperture PSF. In order to be truly simulating speckle, we require that each pixel in the image plane is the result of contributions from multiple pixels in the object plane. As the image plane can be described as the convolution of the image with the PSF of the aperture, the PSF must be sufficiently wide to contain multiple pixels within the PSF area. From
From this we can conclude that in order to obtain a realistic simulation of speckle, we require that the aperture occupies <25% of the replay field area, but in order for this area to correspond with a dimension of 5.3 mm, we require that the structure size of the screen falls by a factor of at least 4 to around 5 μm, i.e. we quadruple the resolution of the simulation. The alternative is to keep the simulation resolution the same and reduce the physical size of the screen by a factor of four in each dimension (0.25 mm²). However, in this case, after applying the aperture we rely on increasingly fewer data points in the simulation to construct the image plane from, leading to errors. This can be seen by reducing the aperture sizes by a factor of 4 and maintaining the same resolution (
Calculating the spectral power distributions for the intensity field using these simulation parameters shows how close the model fits with theoretical predictions (
We now consider invariance with ocular aberrations: The statistical properties of laser speckle patterns do not change with the introduction of aberrations into the imaging system. If the intensity patterns predicted by the model are caused by coherent noise then the graphs shown above should be the same with the introduction of aberrations in the pupil plane. Aberrations of varying powers were shown not to affect the statistical properties of the intensity patterns formed in the image plane. Sample statistics taken from these aberrations are shown in
We now describe some experimental results:
In summary, speckle can be accurately simulated when multiple points in the object contribute to each point in the image plane (i.e. the PSF of the imaging system is sufficiently large). The number of points passing through the aperture of the imaging system must also be sufficiently large (≥256 points) to produce an intensity pattern which follows speckle statistics. From the range of aperture sizes modelled it can be seen from
Furthermore, the moving diffuser can be seen to significantly reduce the spectral power of a speckle field measured experimentally at lower spatial frequencies. However, using too coarse a diffuser scatters light outside the collection angle of the final lens, significantly decreasing the brightness.
Using a pixellated binary phase diffuser scatters light inside the collection cone of the final projection lens. The pixel size of the diffuser is sufficiently small to generate ~10 speckle patterns within a 10 μm distance. This range is then sufficiently small to allow piezo actuation. The appearance of speckle in the final image is then decreased to a level which is tolerable to the viewer.
Referring now to
FIG. 24b shows an alternative optical configuration using a reflective SLM in which the functions of lenses L2 and L3 are shared in a single lens 28 which may, in embodiments, be encoded in the hologram displayed on the SLM 24, as previously described. In this example system a waveplate 34 is employed to rotate the polarisation of the incident beam for the beamsplitter.
A holographically generated intermediate image is formed at the Fourier transform plane of the demagnifying optics, at which a piezoelectrically driven pixellated diffuser 2402 is located. Again the diffuser 2402 is linked by an arm (shown schematically) to a piezo-electric actuator 2404, coupled to a driver 2406. Optionally in this and the previously described arrangement an aperture may also be included in this plane to block off one or more of zero order (undiffracted light), the conjugate image, and higher diffraction orders.
Referring now to
Embodiments of the system enable:
1. Temporal Reduction of Speckle, reducing the power of the speckle spectrum within a finite spatial frequency bandwidth.
As previously mentioned, OSPR reduces the appearance of speckle by randomising the phase of each pixel in the projected subframe image. The higher the rate at which the subframes are projected, the lower the power of the speckle spectrum (within a spatial frequency bandwidth determined by the pixel pitch of the holographic image). Put another way, the speckle contrast is reduced by a factor of 1/√N, where N is the number of subframes within the integration time of the eye (see the numerical sketch after this list). Using a diffuser in the intermediate image plane allows the rate at which the phase changes over the scale of a diffuser pixel to be increased beyond that achievable by the microdisplay alone. This effect could be achieved by increasing the subframe rate of the microdisplay, but this would use additional processing power.
2. Spatial reduction of speckle, increasing the bandwidth over which power in the speckle spectrum can be reduced.
When the diffuser is placed in the plane of the holographic image (i.e. the intermediate image plane), the image of the diffuser is projected onto the wall. Preferably the diffuser is substantially transparent, so that substantially only the phase of the projected image is affected by the diffuser. Without a diffuser, the phase within a pixel of the projected image (generated using OSPR) is uniform at a given instant in time (but varies randomly over time). Reducing the pixel pitch of the diffuser below that of the holographic image acts to reduce the area in the projected image over which the phase is uniform at a given instant in time. Over the integration time of the eye, regions in the projected image which have phases that vary randomly with respect to each other will produce multiple speckle patterns that will average out. To the eye it then appears as though these regions are incoherent with respect to each other.
Moving the diffuser rapidly generates random phases on a scale that is smaller than the projected image pixel. This effect could additionally or alternatively be achieved by increasing the number of pixels of the microdisplay (i.e. the spatial resolution of the projected image), but again this would use additional processing power.
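Referring back to the 1/√N scaling mentioned under point 1, the following NumPy sketch (pattern size and seed are arbitrary assumptions) averages N statistically independent, fully developed speckle intensity patterns and confirms numerically that the contrast falls approximately as 1/√N.

```python
import numpy as np

# Quick numerical check of the 1/sqrt(N) speckle-contrast scaling quoted above:
# average N independent, fully developed speckle intensity patterns and
# measure std/mean.  (Illustrative only; pattern size and seed are arbitrary.)
rng = np.random.default_rng(3)

def one_speckle_pattern(n=256):
    # FFT of a unit-amplitude field with uniform random phase gives a
    # fully developed speckle intensity pattern (contrast ~ 1).
    field = np.fft.fft2(np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n))))
    return np.abs(field) ** 2

for N in (1, 4, 16, 64):
    avg = np.mean([one_speckle_pattern() for _ in range(N)], axis=0)
    contrast = avg.std() / avg.mean()
    print(f"N = {N:3d}: contrast = {contrast:.3f}  (1/sqrt(N) = {1 / np.sqrt(N):.3f})")
```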
Embodiments of the technique are implemented in a system which generates two-dimensional images holographically. This, inter alia, relaxes the time constraint on the diffuser. The diffuser can now complete the number of cycles used to reduce/remove speckle once every video frame rather than once every image row or once every image pixel. This substantially facilitates the use of a piezo-actuated diffuser.
In some preferred implementations a bending piezoelectric actuator is employed, in embodiments coupled to an arm holding the diffuser. In embodiments of the miniature holographic projector system, the stroke distance of the diffuser (~10 μm) is sufficiently small to allow a bending piezo actuator to be used. Further, by attaching two piezo benders at right angles it is possible to achieve movement of the diffuser in two dimensions, preferably in two substantially orthogonal directions. This is helpful to avoid the appearance of “streaking” in the image. The frequency of the diffuser movement is preferably such that the period is less than 1/60 s. In embodiments the frequency of the diffuser movement may be such that the period is less than 1 sub-frame interval. However the effect appears to saturate at high speeds. Thus in embodiments an actuator frequency of 300-400 Hz was preferred over a higher frequency such as ~2 kHz (which can cause audible noise).
We now describe a first example process for designing and constructing the diffuser. In this example a random binary ([0, π]) phase pattern described over a pixellated array with a 1.5 μm pitch is used. This was for a holographic image display system with a 3 μm intermediate image pixel pitch; one preferred ratio of the pixel pitch of the holographic image to the diffuser pixel pitch appears to be approximately 2:1. The diffuser was generated using a photo-lithography process (exposing, developing and etching a photoresist pattern on glass). This gives a flat diffuser surface profile that covers the phase range [0, π]. This helps to avoid light being scattered outside the final projection lens, increasing displayed image intensity, and reduces other artefacts caused by larger feature sizes. By contrast with a ground glass diffuser, a binary phase, pixellated diffuser has a predictable spatial frequency structure and hence a predictable cone of angles over which light is scattered. By adjusting the pixel pitch of the binary phase diffuser, the range of angles over which the light is scattered can be closely controlled. This is useful for finding a good balance between reduced speckle contrast and maximising both image brightness and projector throw angle.
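As an illustration of how the pixel pitch controls the scatter cone, the estimate below assumes that the finest grating a pixellated binary-phase diffuser can express has a period of two pixels, so the largest first-order diffraction angle is roughly arcsin(λ/(2 × pitch)); the wavelength and pitches are illustrative values.

```python
import numpy as np

# Estimate of the scatter cone for a pixellated binary-phase diffuser: the
# finest grating it can express has a period of two pixels, so the largest
# first-order diffraction angle is roughly asin(lambda / (2 * pitch)).
wavelength = 532e-9            # assumed green wavelength
pitch = 1.5e-6                 # the 1.5 um pitch mentioned above
half_angle = np.degrees(np.arcsin(wavelength / (2 * pitch)))
print(f"maximum scatter half-angle ~ {half_angle:.1f} degrees")

# Doubling the pitch roughly halves the scatter cone, which is how the pitch
# trades speckle reduction against brightness and projector throw angle.
print(f"3.0 um pitch -> ~ {np.degrees(np.arcsin(wavelength / 6e-6)):.1f} degrees")
```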
In other arrangements, however, the optimal pixellated, quantised phase diffuser is a specially-computed pattern which may appear random to the eye but in fact offers improved performance in speckle reduction and throughput. Essentially a truly random phase diffuser will spread light into a “sinc”-type intensity distribution. However this distribution is not optimal for speckle reduction and one can generally do better than this to achieve improved speckle reduction. One preferred option is for the diffuser to be computed as a phase hologram which diffracts light into a uniform disc of a chosen angle, selected to fill (preferably substantially exactly) the clear aperture of the projection lens imaging the diffuser. This gives the optimal speckle reduction for a given clear aperture, but does not maximise depth of field.
Another alternative is for the diffuser to be computed as a phase hologram which diffracts light into a disc whose intensity falls off with radius according to 1 − r². This gives optimal speckle reduction for a given depth of field, but diffracts light into a larger clear aperture than the uniform disc diffuser described above (hence potentially necessitating a larger/more complex projection lens).
For either of the above diffusers the spatial distribution of phases still looks random to the eye (these “computed” diffusers are not readily distinguishable by eye from truly “random” ones) but their effect on distributing incident light is quite different from a truly random phase diffuser. Background material on diffuser design can be found in “Binary optic diffuser design”, Fedor, Adam S. Proc. SPIE Vol. 4557, p. 378-385, Micromachining and Microfabrication Process Technology VII, Jean Michel Karam; John Yasaitis; Eds.
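The specification does not state how these disc-target diffusers are computed, so the sketch below shows one plausible approach: a Gerchberg-Saxton style iteration that shapes the far field into either a uniform disc or a 1 − r² disc, followed by quantisation to two phase levels. The array size, disc radius and iteration count are assumptions.

```python
import numpy as np

def disc_target_diffuser(n=256, disc_radius_frac=0.4, falloff=None, iters=50):
    """Sketch: compute a phase-only diffuser whose far-field intensity is a
    disc, using a Gerchberg-Saxton style iteration.  `falloff=None` gives a
    uniform disc; falloff="quadratic" gives the 1 - r^2 profile mentioned
    above.  (One plausible method only; not the specification's procedure.)"""
    y, x = (np.indices((n, n)) - n / 2) / (disc_radius_frac * n / 2)
    r2 = x ** 2 + y ** 2
    target = (r2 <= 1.0).astype(float)
    if falloff == "quadratic":
        target *= np.clip(1.0 - r2, 0.0, None)
    target_amp = np.fft.ifftshift(np.sqrt(target))    # DC-at-corner ordering for fft2

    rng = np.random.default_rng(4)
    phase = rng.uniform(0, 2 * np.pi, (n, n))
    for _ in range(iters):
        far = np.fft.fft2(np.exp(1j * phase))
        far = target_amp * np.exp(1j * np.angle(far))  # impose the far-field amplitude
        phase = np.angle(np.fft.ifft2(far))            # keep only the phase at the diffuser

    # Quantise to the binary phase levels {0, pi} for a pixellated binary diffuser.
    return np.where(np.cos(phase) >= 0.0, 0.0, np.pi)

diffuser_phase = disc_target_diffuser(falloff="quadratic")
```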
The skilled person will understand that applications for the techniques we have described are not limited to holographic image display systems displaying images on a planar or curved 2D screen but may also be employed when displaying or projecting an image or pattern on any surface using coherent light, in particular holographically.
Applications for the techniques we have described include in particular (but are not limited to) the following: mobile phone; PDA; laptop; digital camera; digital video camera; games console; in-car cinema; navigation systems (in-car or personal, e.g. wristwatch GPS); head-up and helmet-mounted displays for automobiles and aviation; watch; personal media player (e.g. MP3 player, personal video player); dashboard mounted display; laser light show box; personal video projector (a “video iPod®” concept); advertising and signage systems; computer (including desktop); remote control unit; an architectural fixture incorporating a holographic image display system; more generally any device where it is desirable to share pictures and/or for more than one person at once to view an image.
No doubt many effective alternatives will occur to the skilled person and it will be understood that the invention is not limited to the described embodiments and encompasses modifications apparent to those skilled in the art lying within the spirit and scope of the claims appended hereto.
Number | Date | Country | Kind |
---|---|---|---|
0800167.9 | Jan 2008 | GB | national |
This application is a continuation-in-part of International Application No. PCT/GB2008/051211, filed Dec. 19, 2008, designating the United States and published in English on Jul. 16, 2009 as WO 2009/087358 and incorporated in its entirety by reference herein, which claims the benefit of United Kingdom Appl. No. 0800167.9, filed Jan. 7, 2008 and incorporated in its entirety by reference herein.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/GB2008/051211 | Dec 2008 | US
Child | 12831195 | | US