Developers of information storage devices continue to seek increased storage capacity. As part of this development, memory systems employing holographic optical techniques, referred to as holographic memory systems, holographic storage systems, and holographic data storage systems, have been suggested as alternatives to conventional memory devices.
Holographic memory systems may read/write data to/from a photosensitive storage medium. When storing data, holographic memory systems often record the data by storing a hologram of a 2-dimensional (2D) array, commonly referred to as a “page,” where each element of the 2D array represents a single data bit. This type of system is often referred to as a “page-wise” memory system. Holographic memory systems may store the holograms as a pattern of varying refractive index and/or absorption imprinted into the storage medium.
Holographic systems may perform a data write (also referred to as a data record operation, data store operation, or write operation) by combining two coherent light beams, such as laser beams, at a particular location within the storage medium. Specifically, a data-encoded signal beam, also called a data beam, is combined with a reference light beam to create an interference pattern in the photosensitive storage medium. The interference pattern induces material alterations in the storage medium to form a hologram.
Holographically stored data may be retrieved from the holographic memory system by performing a read (or reconstruction) of the stored data. The read operation may be performed by projecting a reconstruction or probe beam into the storage medium at the same angle, wavelength, phase, position, etc., or compensated equivalents thereof, as the reference beam used to record the data. The hologram and the reference beam interact to reconstruct the signal beam.
The reconstructed signal beam (aka a reconstructed data beam) may then be detected by a power-sensitive detector and processed for delivery to an output device. The irradiance impinging on the detector can be written as:
I(x, y) = |E_S(x, y) + E_N(x, y)|²
I(x, y) = |E_S|² + |E_N|² + 2|E_S||E_N| cos φ_S−N
where E_S(x, y) and E_N(x, y) are the scalar complex amplitudes of the holographic signal and the coherent optical noise, respectively. The relative phase difference between the two fields, φ_S−N, is effectively random, so the cosine factor in the final term swings randomly between +1 and −1. This term, in which the signal is multiplied by the noise rather than added to it, is a limiting noise factor in the practical development of holographic data storage.
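As a rough numerical sketch of this behavior (the amplitudes here are hypothetical, chosen only for illustration), the following simulates how the random signal-noise cross term swings the detected irradiance far more than the additive noise power alone would:

```python
import numpy as np

rng = np.random.default_rng(0)

E_s = 1.0          # signal field amplitude (hypothetical, arbitrary units)
E_n = 0.1          # coherent noise field amplitude (1% of signal power)
phi = rng.uniform(0.0, 2.0 * np.pi, 100_000)  # random signal-noise phase

# Direct-detection irradiance: signal power, noise power, and cross term
I = E_s**2 + E_n**2 + 2.0 * E_s * E_n * np.cos(phi)

# The cross term swings between +/- 2|E_s||E_n| = +/-0.2 here, i.e.
# 20x larger than the additive noise power |E_n|^2 = 0.01 alone.
```

The spread of `I` (roughly 0.81 to 1.21 for these values) illustrates why coherent noise, mixing multiplicatively with the signal, is more damaging than additive noise of the same power.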
Direct detection has several limitations. First, because hologram diffraction efficiency is driven to the lowest possible level in order to maximize the number of pages that may be stored, the read signals may be weak and require long exposure times to detect. Second, the laser light used to perform the read-out is necessarily coherent, so optical noise sources such as scatter and intersymbol interference (ISI, or pixel-to-pixel crosstalk from blur) may mix coherently with the desired optical signal, degrading signal quality more than additive noise of the same power would. As such, there may be a need to improve the signal level of the detected hologram and improve the signal-to-noise ratio (SNR).
Quadrature Homodyne Detection
One way to boost the SNR is to use homodyne detection. In homodyne detection, the reconstructed signal beam interferes with a coherent beam, known as a local oscillator (LO) or LO beam, at the detector to produce an interference pattern that represents a given data page stored in the holographic memory. The detector array produces a signal (e.g., a photocurrent) whose amplitude is proportional to the detected irradiance, which can be written as:
I_homo = |E_LO + E_S + E_N|²
I_homo = |E_LO|² + |E_S|² + |E_N|² + 2|E_LO||E_S| cos φ_LO−S + 2|E_LO||E_N| cos φ_LO−N + 2|E_S||E_N| cos φ_S−N
where E_LO is the complex amplitude of the LO. If the amplitude of the LO is much larger than the amplitude of the reconstructed signal beam, then the terms not involving E_LO become negligible. This has the effect of amplifying the signal, eliminating nonlinear effects of coherent noise, and allowing the detection of phase as well as amplitude.
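A small numerical sketch of this amplification effect, using hypothetical amplitudes and phases (not values from any particular system), shows the LO-signal cross term dominating the terms that do not involve the LO:

```python
import numpy as np

E_lo, E_s, E_n = 10.0, 0.1, 0.01             # LO 100x the signal amplitude
phi_lo_s, phi_lo_n, phi_s_n = 0.0, 1.0, 2.0  # arbitrary example phases

# Homodyne irradiance, term by term, matching the expansion above
I_homo = (E_lo**2 + E_s**2 + E_n**2
          + 2*E_lo*E_s*np.cos(phi_lo_s)
          + 2*E_lo*E_n*np.cos(phi_lo_n)
          + 2*E_s*E_n*np.cos(phi_s_n))

signal_term = 2*E_lo*E_s*np.cos(phi_lo_s)  # 2.0: the amplified signal
direct_term = E_s**2                       # 0.01: what direct detection sees
noise_cross = 2*E_s*E_n*np.cos(phi_s_n)    # ~ -0.0008: now negligible
```

For these values the LO boosts the signal term by a factor of 2|E_LO|/|E_S| = 200 over the direct-detection term, while the signal-noise cross term shrinks to a negligible fraction of the total.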
To reproduce the data page accurately, however, the LO should be optically phase-locked with the reconstructed data page signal in both time and space such that the LO constructively interferes with each and every data pixel in the hologram simultaneously. However, alignment tolerances, lens aberrations, wavelength and temperature sensitivities, and a host of other minute deviations from perfection may introduce small variations in the flatness of the “phase carrier” wavefront bearing the reconstructed data page. For binary modulation, the “phase carrier” wavefront may be defined as the wavefront of the data page had all pixels been in the ‘one’ state. Thus, successfully performing page-wide homodyne detection in such a manner may involve expensive, sophisticated adaptive optic elements and control algorithms in order to phase-match the local oscillator to the hologram (or vice versa). As such, performing homodyne detection is generally not practical in commercial holographic data storage systems.
Another approach to increasing the SNR of the reconstructed data page is quadrature homodyne detection as disclosed in U.S. Pat. No. 7,623,279, which is entitled “Method for holographic data retrieval by quadrature homodyne detection” and which has a filing date of Nov. 24, 2009. In quadrature homodyne detection, the reconstructed signal beam interferes with two versions of an imprecise local oscillator to produce a pair of interference patterns, e.g., one after another on the detector array. The two versions of the imprecise local oscillator are in quadrature, i.e., there is a 90-degree phase difference between them. As a result, the low-contrast areas in the interference pattern between the first version of the imprecise local oscillator and the reconstructed signal beam appear as high-contrast areas in the interference pattern between the second version of the imprecise local oscillator and the reconstructed signal beam, and vice versa. Combining the two interference patterns yields a completely high-contrast interference pattern that encodes all of the information in the reconstructed data page.
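The complementary-contrast behavior can be sketched numerically. This is an idealized model (plane-wave LO, hypothetical amplitudes, DC terms subtracted exactly), not a simulation of any particular detection pipeline:

```python
import numpy as np

lo = 10.0                          # local-oscillator amplitude (hypothetical)
s = 0.1                            # reconstructed-signal amplitude
phi = np.linspace(0, 2*np.pi, 8)   # unknown signal/LO phase across the page

I_0  = np.abs(lo + s*np.exp(1j*phi))**2                      # first pattern
I_90 = np.abs(lo*np.exp(1j*np.pi/2) + s*np.exp(1j*phi))**2   # LO shifted 90 deg

# Subtracting the DC terms leaves the 2|E_LO||E_S| cross terms
c = I_0  - lo**2 - s**2    # 2*lo*s*cos(phi): vanishes where phi ~ +/-90 deg
q = I_90 - lo**2 - s**2    # 2*lo*s*sin(phi): high contrast exactly there

# Combining the quadrature pair recovers the full fringe amplitude at
# every phase: hypot(c, q) = 2*lo*s regardless of phi
amp = np.hypot(c, q)
```

Wherever the first pattern's fringe term `c` is weak, the second pattern's term `q` is strong, so their combination carries the full data page.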
The inventors have recognized that, despite its advantages over direct detection and conventional homodyne detection, quadrature homodyne detection has several disadvantages as well. In particular, quadrature homodyne detection yields an additive common intensity noise term that can reduce the signal-to-noise ratio (SNR) of the detected images. The inventors have also recognized that this common intensity noise term can be reduced, suppressed, or even completely eliminated by using a technique called “n-rature homodyne detection.”
In one example of n-rature homodyne detection, a coherent light source, such as a laser, generates a beam of coherent light that is split into a probe beam and a local oscillator beam with a beam splitter. The probe beam illuminates at least one hologram in the holographic storage medium so as to generate at least one reconstructed signal beam, which represents at least some of the information stored in the holographic storage medium. The reconstructed signal beam interferes with the local oscillator beam to produce a plurality of spatial interference patterns, each of which is imaged by at least one detector to form a respective image in a plurality of images. (For example, the spatial interference patterns can be detected in series with a single detector or in parallel with multiple detectors, depending on whether there is a single reconstructed signal beam/local oscillator beam pair or multiple pairs.)
The resulting plurality of spatial interference patterns comprises (i) a first spatial interference pattern generated by interference of the reconstructed signal beam with the local oscillator beam at a first phase difference between the at least one reconstructed signal beam and the at least one local oscillator beam and (ii) a second spatial interference pattern generated by interference of the reconstructed signal beam with the local oscillator beam at a second phase difference between the reconstructed signal beam and the local oscillator beam. The first and second phase differences can be implemented with one or more phase retarders in the path of the reconstructed signal beam and/or the path of the local oscillator beam. The first and second phase differences are selected so as to substantially cancel common intensity noise in a representation of the information in the hologram. This representation can be generated by a processor coupled to the detector.
In another embodiment, the probe beam illuminates an in-phase hologram and a quadrature hologram in the holographic storage medium so as to generate at least one reconstructed signal beam that represents both the in-phase hologram and the quadrature hologram. The reconstructed signal beam interferes with at least one local oscillator beam to produce at least three spatial interference patterns, each of which is detected by a detector. A processor coupled to the detector forms a first representation and a second representation based on the at least three spatial interference patterns. The first representation and the second representation are analogous to ĨA and ĨB, respectively, described below in a Combining N-Rature Homodyne Detected Images section. The first and second representations can be referred to collectively as a quadrature image pair.
In yet another embodiment, at least one probe beam illuminates the holographic storage medium so as to generate at least one reconstructed signal beam that represents at least some information stored in the holographic storage medium. The reconstructed signal beam interferes with at least one local oscillator beam to produce a first interference pattern, which a detector senses as described above and below. For m = 2 to n, where n is an integer greater than 2, a phase retarder increments (or decrements) the phase difference between the local oscillator beam and the reconstructed signal beam by about 2π/n modulo 2π. For each phase difference, the reconstructed signal beam interferes with the local oscillator beam so as to produce an mth interference pattern. The detector senses each of these interference patterns.
In still another embodiment, at least one probe beam illuminates the holographic storage medium so as to generate at least one reconstructed signal beam that represents at least some information stored in the holographic storage medium. A detector senses a plurality of spatial interference patterns resulting from interference of the reconstructed signal beam with at least one local oscillator beam. A processor coupled to the detector demodulates, from at least one of these spatial interference patterns, a spatial wavefront modulation representing a misalignment of the local oscillator beam's wavefront with respect to the reconstructed signal beam's wavefront. The processor also generates a representation of the information retrieved from the hologram based on the spatial interference patterns.
In a further embodiment, at least one probe beam illuminates the holographic storage medium so as to generate at least one reconstructed signal beam that represents at least some information stored in the holographic storage medium. A detector senses a plurality of spatial interference patterns resulting from interference of the reconstructed signal beam with at least one local oscillator beam. A processor operably coupled to the detector generates a representation of the information stored in the hologram based on the spatial interference patterns. The processor also removes non-signal terms from the representation based on an image of the at least one local oscillator.
In still a further embodiment, at least one probe beam illuminates the holographic storage medium so as to generate at least one reconstructed signal beam that represents at least some information stored in the holographic storage medium. A detector acquires at least one image of interference between the reconstructed signal beam and a local oscillator beam. A processor coupled to the detector compares a first portion of the image to a reserved block in the hologram. The processor upsamples the resulting comparison to a spatial resolution of the information stored in the hologram so as to generate an upsampled comparison. The processor then resamples the image at the spatial resolution of the information stored in the hologram based on the upsampled comparison.
In still another embodiment, at least one probe beam illuminates the holographic storage medium so as to generate at least one reconstructed signal beam that represents at least some information stored in the holographic storage medium. A detector acquires at least one image of interference between the reconstructed signal beam and a local oscillator beam. A processor coupled to the detector generates a representation of the information stored in the hologram based on the image, estimates misfocus in the representation, and compensates for the misfocus in the representation.
It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein. It should also be appreciated that terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.
The skilled artisan will understand that the drawings primarily are for illustrative purposes and are not intended to limit the scope of the inventive subject matter described herein. The drawings are not necessarily to scale; in some instances, various aspects of the inventive subject matter disclosed herein may be shown exaggerated or enlarged in the drawings to facilitate an understanding of different features. In the drawings, like reference characters generally refer to like features (e.g., functionally similar and/or structurally similar elements).
A holographic storage medium can record holograms that encode variations in the phase and/or amplitude of the field of an incident optical signal beam. For instance, a holographic storage medium can record information encoded on an optical carrier using phase shift keying (PSK) as a hologram. It can also record information encoded on an optical carrier using both phase and amplitude modulation, e.g., using quadrature amplitude modulation (QAM). Recording information encoded partially or completely in the phase of the signal beam (e.g., using PSK, QAM, etc.) is referred to herein as “coherent channel modulation.” And multiplexing holograms in the holographic storage medium by utilizing both dimensions of the complex phase plane (e.g., as in QPSK and QAM) is referred to herein as “phase-multiplexed holography.”
Combining coherent channel modulation and/or phase multiplexing with other holographic multiplexing techniques, including angle multiplexing, spatial multiplexing, and/or polytopic multiplexing, offers several advantages over other approaches to holographic data storage. First, phase multiplexing increases the storage density of the holographic storage medium. Second, PSK modulation may reduce or eliminate the DC component in the reconstructed signal beam. Third, PSK may also reduce or eliminate cross-talk caused by gratings formed between pixels in the holographic recording medium (aka intra-signal modulation).
To take full advantage of coherent channel modulation and/or phase multiplexing, the recorded holograms are typically read using a coherent channel technique, such as homodyne detection. Conventional homodyne detection requires a local oscillator that is locked, to within a fraction of an optical wavelength, in temporal and spatial phase to the reconstructed signal beam. Unfortunately, achieving this degree of phase stability can be impractical under normal operating conditions. Quadrature homodyne detection does not require precise spatial phase locking, but is subject to additive common intensity noise as explained above and below.
Fortunately, n-rature homodyne detection operates with relatively imprecise spatial phase locking and suppresses or eliminates the common intensity noise that affects quadrature homodyne detection. In n-rature homodyne detection, a local oscillator interferes with the reconstructed signal beam to produce the first of n>2 interference patterns, each of which is sensed by a detector. The modulo 2π phase difference between the local oscillator and the reconstructed signal beam is changed by 2π/n, then the detector senses the second interference pattern, and so on until all n images have been detected. The detected images can be combined, e.g., into in-phase and quadrature images representing in-phase and quadrature holograms, then processed to remove undesired spatial wavefront modulation caused by misalignment of the components in the holographic storage system, aberrations, etc. The data stored in the holograms can be retrieved from the demodulated images.
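The combination step can be sketched numerically. In this idealized model (plane-wave beams, hypothetical amplitudes, a constant additive term standing in for the common intensity noise), the n frames are weighted by the complex roots of unity matching the phase steps, so every term that is identical in all frames sums to zero:

```python
import numpy as np

n = 4                              # number of exposures per hologram, n > 2
lo, s = 10.0, 0.1                  # LO and signal amplitudes (hypothetical)
phi = 0.7                          # unknown signal phase to recover
common = 0.3                       # common intensity noise in every frame

m = np.arange(n)
theta = 2*np.pi*m/n                # LO phase steps between exposures
I = np.abs(lo*np.exp(1j*theta) + s*np.exp(1j*phi))**2 + common

# Weight each frame by exp(-i*theta_m) and sum: the terms identical in
# every frame (|E_LO|^2, |E_S|^2, common noise) multiply the roots of
# unity and cancel exactly, leaving only the LO-signal cross term.
I_tilde = np.sum(I * np.exp(-1j*theta)) / n

amp = np.abs(I_tilde)              # |E_LO||E_S|: amplified signal amplitude
rec_phi = -np.angle(I_tilde)       # recovered signal phase
```

Here `amp` equals `lo*s` and `rec_phi` equals `phi` exactly, even though the common intensity noise `common` was present in every frame, illustrating why the weighted combination suppresses the noise term that limits quadrature homodyne detection.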
Compared to direct detection, coherent detection offers a higher SNR, higher sensitivity, and/or lower bit-error rate (BER) at a given optical power level. Moreover, the gain can be adjusted by varying the amplitude of the local oscillator. And unlike direct detection, it can be used to retrieve phase-modulated data as well as amplitude-modulated data.
Coherent Channel Modulation for Holographic Data Storage
As shown in
The system 100 further includes a beam splitter 120 that splits the collimated light beam 121 into a nascent reference/probe beam 122 and a nascent signal/local oscillator beam 123. The nascent signal/local oscillator beam 123 is so-named because it can, depending on configuration of the system 100, be used to generate either a nascent signal beam 126 for recording a hologram, or a local oscillator 125 for (n-rature) homodyne detection. The nascent reference/probe beam 122 is so-named because it can, depending on configuration of the system 100, be used to generate a reference beam 133 for recording a hologram or a probe beam 134 for generating a reconstructed signal beam 124.
In operation, the nascent reference/probe beam 122 propagates to beam directing device 127, whereupon it is directed as a reference beam 133 through reference beam converging lens 151. The beam directing device 127 typically, but not necessarily, comprises a mirror galvanometer configured to rotate through a defined range, the rotation being depicted by rotation arrow 129. The beam directing device 127 is thus adapted to direct the reference beam 133 through the reference beam converging lens 151 at various angles.
The reference beam 133 is focused onto a reflecting beveled edge of a knife-edge mirror 156 by the reference beam converging lens 151, whereupon the knife-edge mirror 156 reflects the reference beam 133 and thereby directs the beam 133 through the objective lens 145 and into the recording medium 158. When the holographic system 100 resides in read mode (illustrated in
As illustrated in
Monocular Holographic Data Storage—Record Mode
The polarizing beam splitter (PBS) 139 is configured to allow the p-polarized nascent signal beam 126 to pass through to a data encoding element 140. The data encoding element, or spatial light modulator (SLM) 140, is illuminated by the nascent signal beam 126, into which a data page is embedded to generate a signal beam 143. The SLM 140 can be implemented as a Mohave model reflective, ferroelectric liquid crystal based SLM comprising 1216×1216 pixels operating in binary mode. The pixel pitch is 10.7 μm×10.7 μm and the pixels occupy an area of 13.0 mm×13.0 mm. (The Mohave SLM was formerly manufactured by Displaytech.) Other embodiments comprise various SLMs including, but not limited to, transmissive SLMs, other reflective SLMs, and gray-scale phase SLMs. In some embodiments, a data encoding element comprises other means for encoding data in a signal beam, the other means including, but not limited to, a data mask.
In binary amplitude shift keying (BASK) mode, the SLM pixels typically operate by maintaining or changing polarization of reflected light in response to voltage applied to the pixels, in order to create light and dark pixels. Typically, an SLM pixel in a dark state receives p-polarized light of the nascent signal beam 126 and reflects p-polarized light, which passes back through the PBS 139 along the same transit path (but in the opposite direction) as the incoming nascent signal beam 126. Thus light from the dark state pixels is directed away from the recording medium 158, and the dark state pixels are “dark” to the medium 158. Conversely, an SLM pixel in a light state typically rotates polarization of incoming p-polarized light to reflect s-polarized light that is subsequently reflected by the PBS 139 on a path to the recording medium 158. Thus light state pixels are “light” to the medium 158. Half wave plate 138 may be removed for operating the holography system 100 in ASK mode.
For recording in PSK mode, half wave plate 138 typically resides between the SLM 140 and the PBS 139. Accordingly, the SLM 140, which in the absence of half wave plate 138 is configured for binary intensity modulation, is adapted to binary phase modulation. To effect phase modulation, the half wave plate 138 is installed in front of the SLM 140 as illustrated, oriented at 11.25°. Accordingly, the incoming nascent signal beam 126, arriving from the PBS with a polarization of 0°, is rotated to either +45° or −45° depending on whether an SLM pixel's optic axis is at 0° or 45°, and both polarization states are transmitted by the PBS with equal intensity. Where the SLM pixel optic axis is at 0°, the signal beam 143 has a phase difference of 180° compared to where the SLM pixel optic axis is at 45°, and the SLM pixels are thus phase modulated.
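The double-pass geometry can be checked with Jones calculus. This is an illustrative sketch, not the system's actual design procedure; it assumes the fixed plate sits at 11.25° (the angle consistent with the ±45° outputs described above) and models the reflective SLM pixel as a half-wave plate with its axis at 0° or 45°:

```python
import numpy as np

def hwp(theta):
    """Jones matrix of a half-wave plate with fast axis at angle theta."""
    c, s = np.cos(2*theta), np.sin(2*theta)
    return np.array([[c, s], [s, -c]])

deg = np.pi / 180
half_wave = hwp(11.25 * deg)     # fixed plate in front of the SLM

p_in = np.array([1.0, 0.0])      # incoming p-polarized nascent signal beam

# Double pass: through the plate, reflect off an SLM pixel (acting as a
# half-wave plate with axis at 0 deg or 45 deg), back through the plate.
out_0  = half_wave @ hwp(0 * deg)  @ half_wave @ p_in
out_45 = half_wave @ hwp(45 * deg) @ half_wave @ p_in
```

Both outputs emerge at ±45°: their s components (the components routed toward the recording medium) have equal magnitude but opposite sign, i.e. a 180° phase difference, which is exactly the binary phase modulation described above.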
The holographic memory system 100 may also record data by modulating the phase and the amplitude of the incoming nascent signal beam 126 with the SLM 140. In 16-quadrature amplitude modulation (QAM) mode, for example, each SLM pixel may impart a respective portion of the incoming nascent signal beam 126 with one of four amplitude states and one of four phase states distributed across the I-Q plane. This yields a signal beam 143 that encodes the data page as 4-bit symbols, which can be recorded and read from the holographic storage medium 158. Other suitable phase and amplitude modulation techniques include partial response maximum likelihood signaling, which is explained in greater detail below.
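A minimal sketch of such a 16-level mapping, using a hypothetical constellation (four amplitude states crossed with four phase states, as described above; the specific levels and bit assignment are illustrative, not taken from the system):

```python
import numpy as np

# Hypothetical constellation: four amplitudes x four phases in the I-Q plane
amplitudes = np.array([0.25, 0.5, 0.75, 1.0])
phases = np.array([0.0, np.pi/2, np.pi, 3*np.pi/2])

def symbol_to_field(sym4):
    """Map a 4-bit symbol (0-15) to the complex field one SLM pixel would
    impart: high two bits select amplitude, low two bits select phase."""
    return amplitudes[(sym4 >> 2) & 0b11] * np.exp(1j * phases[sym4 & 0b11])

# A tiny 2x4 "page" of 4-bit symbols encoded as complex pixel fields
page = np.array([[symbol_to_field(s) for s in row]
                 for row in [[0, 5, 10, 15], [3, 6, 9, 12]]])
```

Each entry of `page` is a complex amplitude, so recovering the data requires a phase-sensitive read-out such as the homodyne techniques described in this disclosure.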
After being modulated by data encoding element 140 to contain a pixel image, the signal beam of the system 100 is typically directed by the PBS 139 through a second switchable half wave plate (2nd SHWP) 146, which is configured to transmit s-polarized light when the holographic system 100 is in record mode. Accordingly, the signal beam 143 emerges from the 2nd SHWP s-polarized. The signal beam 143 subsequently propagates through a 4F imaging assembly 150 comprising converging lenses 154. An optical filter, shown in
Because the knife-edge mirror 156 resides in a path of signal beam 143, the mirror 156 obscures some of the signal beam 143. Accordingly, the knife-edge mirror 156 causes some occlusion of the signal beam 143 as the beam 143 propagates past the mirror 156. However, because the knife-edge mirror 156 is typically only 500 μm thick (along the y axis), it typically occludes only 16 rows of pixels in the signal beam 143, and signal beam degradation is thus relatively minor. In the system 100, 16 to 32 rows of pixels are rendered inactive in order to ensure that pixels occluded by the knife-edge mirror contain no data. In addition, the occluded pixels can be omitted from the SLM data format so the occluded pixels contain no data. Omission of the pixels results in relatively small loss of data recording capacity.
After passing the knife-edge mirror 156, the signal beam 143 passes through the objective lens 145, which directs the signal beam 143 into recording medium 158. The recording medium typically comprises a photosensitive recording layer 160 sandwiched between two substrate structures 163. An interference pattern is created where the signal beam 143 and the reference beam 133 interfere with each other. Where the interference pattern resides within the photosensitive recording layer 160 of the recording medium 158, a hologram 148 is recorded. The substrate structures 163 typically comprise Zeonor® polyolefin thermoplastic, and the photosensitive recording layer typically includes photosensitive monomers in a polymeric matrix. Variations comprise substrates including, but not limited to sapphire, polycarbonate, other polymers, or glass. Suitable recording mediums are well known to persons of ordinary skill in the art, and embodiments of recording mediums are disclosed in U.S. Pat. Nos. 8,133,639 and 8,323,854. Variations of recording medium include, but are not limited to, photorefractive crystals and film containing dispersed silver halide particles. As used in this specification and appended claims, a recording medium is sometimes referred to as a photosensitive recording medium, a photosensitive storage medium, a storage medium, a photopolymer medium, or a medium.
Monocular Holographic Data Storage—Read Mode
For purposes of the holographic system 100, the beam that emerges s-polarized from the 1st SHWP is considered the local oscillator 125, because its s-polarization orients it to be reflected by the PBS 139 toward the detector 142. Because the nascent signal/local oscillator beam 123 that is destined to become the local oscillator 125 is phase adjusted by the variable phase retarder 130, it is expedient to state that the local oscillator 125 has its phase adjusted by the variable phase retarder 130.
When performing n-rature homodyne detection, the variable phase retarder 130 retards the phase of the local oscillator 125 with respect to the phase of the reconstructed signal beam 124 by an amount equal to about 2πm/n, where n>2 is the total number of images being acquired of a particular hologram and m is the index of the current image being detected. For n=4, for example, the system 100 collects four images at relative phase differences of π/2, π, 3π/2, and 2π. These phase differences may be determined and selected by a processor (not shown) operably coupled to the variable phase retarder 130 and/or to the detector 142.
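The phase schedule described above can be expressed as a small helper. The function name is hypothetical, introduced only for illustration; it reproduces the π/2, π, 3π/2, 2π example for n = 4:

```python
import math

def phase_schedule(n):
    """Hypothetical helper: LO phase retardation 2*pi*m/n for the mth of
    n images of a hologram (m = 1..n), as used in n-rature detection."""
    if n <= 2:
        raise ValueError("n-rature homodyne detection requires n > 2")
    return [2 * math.pi * m / n for m in range(1, n + 1)]
```

For example, `phase_schedule(4)` yields the four relative phase differences π/2, π, 3π/2, and 2π, and `phase_schedule(3)` yields the 120°-spaced steps 2π/3, 4π/3, and 2π.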
The variable phase retarder 130 can be implemented as one or more components in a transmissive geometry, as shown in
The reconstructed signal beam 124 is created by illuminating hologram 148 with probe beam 134. The reconstructed signal beam 124 propagates part way through the holographic system 100 in a direction opposite that of the signal beam 143. The 2nd SHWP 146 is configured to transmit p-polarized light when the holographic system 100 is in read mode. Accordingly, the s-polarized reconstructed signal beam has its polarization rotated 90° by the 2nd SHWP 146 to emerge p-polarized. The p-polarized reconstructed signal beam 124 is thus oriented to pass through the PBS 139 and combine with the local oscillator 125 to form a combined beam 131. Combined beam 131 thus includes the p-polarized reconstructed signal beam and the s-polarized local oscillator 125. Analyzer 141 acts on the combined beam 131 to modulate relative strengths of the reconstructed signal beam 124 and the local oscillator 125 that make up the combined beam 131. The analyzer 141 can be a polarizer that can be oriented to pass more or less light depending on polarization of the light.
Typically, but not necessarily, the intensity of the local oscillator 125 is about 100 times the intensity of the reconstructed signal beam 124, and the analyzer 141 is oriented to transmit about 16.7% of the local oscillator (s-polarized) portion of the combined beam 131 and about 83.3% of the reconstructed signal beam (p-polarized) portion of the combined beam 131. Accordingly, upon detection of the combined beam 131 by the detector 142, the intensity of the local oscillator portion of the combined beam is about 20 times the intensity of the reconstructed signal beam portion. Detection of the combined beam 131 by the detector typically includes detecting an interference pattern created by interference of the local oscillator portion of the combined beam with the reconstructed signal beam portion of the combined beam.
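The arithmetic behind the 20:1 detected ratio can be checked directly (a sketch using the nominal figures quoted above):

```python
# Nominal values from the description above: LO at 100x the signal
# intensity, analyzer transmitting 16.7% of the LO portion and 83.3%
# of the reconstructed-signal portion of the combined beam.
I_lo, I_sig = 100.0, 1.0
t_lo, t_sig = 0.167, 0.833

# Detected LO-to-signal intensity ratio after the analyzer
ratio = (I_lo * t_lo) / (I_sig * t_sig)   # ~ 20
```

The analyzer thus trades away most of the LO power while preserving most of the weak signal, leaving the LO strong enough for homodyne gain but not so strong that it saturates the detector.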
In some embodiments, the analyzer 141 transmits different proportions of the local oscillator and reconstructed signal beam portions of the combined beam 131. For example, in a variation, the analyzer is oriented to transmit 45% to 98% of the reconstructed signal beam portion and 55% to 2% of the local oscillator portion of the combined beam. In another variation, the analyzer is oriented to transmit 60% to 93% of the reconstructed signal beam portion and 40% to 7% of the local oscillator portion of the combined beam. In yet another variation, the analyzer is oriented to transmit 75% to 90% of the reconstructed signal beam portion and 25% to 10% of the local oscillator portion of the combined beam. In some embodiments, the analyzer can be omitted; for example, if a non-polarizing beam splitter is used instead of a polarizing beam splitter and the local oscillator and reconstructed signal beam are in the same polarization state so that they can interfere with each other.
The holographic system 100 enables practice of both phase quadrature multiplexing and homodyne detection, by virtue of configurations allowing the phase retarder to adjust the phase of both signal beams and local oscillator beams. The system 100 is further adapted to recording in both ASK and PSK modes, by using the same SLM 140 either with or without half wave plate 138, respectively. In ASK mode, for example, the half wave plate 138 can be rotated so as not to transform the incident beam's polarization state.
However, the holographic system 100 is but one exemplary embodiment of components adapted to optical data recording, detection, and data channel modulation according to the present invention. Persons skilled in the art will recognize that other arrangements of light sources, data encoding elements, detectors, half wave plates, polarizing beam splitters and other system components can be devised that enable optical data recording, detection, and channel modulation, including recording and retrieval of holograms using ASK, PSK, phase quadrature multiplexing, and homodyne detection techniques described herein.
Read-Only Holographic Data Storage System
As illustrated in
In the embodiment shown in
Objective lens 204 may be, for example, any type of lens, such as those commercially available, or a custom lens, e.g., as disclosed in U.S. Pat. No. 7,532,374, which is incorporated herein by reference in its entirety. Exemplary lenses include, for example, high numerical aperture (NA) aspheric storage lenses. Lens 204 may also be located one focal length (i.e., the focal length of lens 204) from holographic storage medium 202 so that the storage medium is located at a Fourier plane of SLM 218. These lenses and their locations are exemplary and in other embodiments, including the monocular system shown in
The reconstructed signal beam 234 may then be combined with a local oscillator (LO) beam 236 by NPBS 208. Local oscillator beam 236 may be, for example, a plane wave. Further, local oscillator beam 236 may be generated from a portion of the probe beam 232, so that local oscillator beam 236 is temporally coherent with the reconstructed signal beam 234. The local oscillator beam 236 is injected or introduced into the reconstructed object path so that it is collinear with and has the same polarization state as the reconstructed signal beam 234, although the local oscillator beam 236 need not have any special phase relationship to reconstructed signal beam 234. The power of the reflected local oscillator beam 236 may be set to some power level to effect or cause the desired amount of optical gain and dynamic signal range (e.g., 100 times the nominal power of the reconstructed signal beam). This may be accomplished by splitting off a portion of the main laser beam used for generating the probe beam 232 using a fixed or variable beamsplitter as readily understood in the art.
Local oscillator beam 236 may pass through variable phase retarder 222 prior to being injected or introduced into the signal path where local oscillator beam 236 may be combined with reconstructed signal beam 234. Variable phase retarder 222 may be any type of device capable of changing the phase of local oscillator beam 236, such as, for example, a Nematic Liquid Crystal (NLC) variable phase retarder 222. For example, variable phase retarder 222 may be configured to switch between three or more states in which the active axis of the NLC material is electrically modulated to impart the desired phase differences (e.g., 0°-120°-240°; 0°-90°-180°-270°; and so on) between local oscillator beam 236 and reconstructed signal beam 234. Variable phase retarder 222 may switch between these states in response to signals from processor 280.
NPBS 208 combines the local oscillator beam 236 and reconstructed signal beam 234 to produce combined beam 238. NPBS 208 may include a partially reflective coating that allows 95% of light to pass through the NPBS 208 and 5% of light to be reflected. In such an example, 95% of reconstructed signal beam 234 will pass through NPBS 208 and 5% will be reflected away. Similarly, 95% of local oscillator beam 236 will pass through NPBS 208 while 5% of local oscillator beam 236 is reflected and combined with reconstructed signal beam 234. Thus, in this example, combined beam 238 comprises 95% of the reconstructed signal beam 234 and 5% of the local oscillator beam 236. Further, in this example, the portions of the local oscillator beam 236 (i.e., the portion passing through NPBS 208) and reconstructed signal beam 234 (i.e., the portion reflected by NPBS 208) not used for generating combined beam 238 may be passed to a device, such as, for example, a beam block for absorbing these unused portions of beams 234 and 236.
The combined beam 238 may then pass through lens 210 which focuses the combined beam 238. Lens 210 may be located, for example, so that its front focal plane is the back focal plane of lens 204. The focused combined beam 238 may then pass through polytopic aperture 212 which may be located, for example, one focal length from lens 210. Polytopic aperture 212 may be used to filter noise from combined beam 238. Combined beam 238 may then pass through lens 214, which may be located, for example, one focal length from polytopic aperture 212. Lens 214 may expand combined beam 238 so that beam 238 has a fixed diameter. Combined beam 238 may then enter PBS 216 which, because of the polarization of combined beam 238, directs combined beam 238 towards detector 220, which detects the received image. Detector 220 may be any device capable of detecting combined beam 238, such as, for example, a complementary metal-oxide-semiconductor (CMOS) detector array or charged coupled device (CCD). Although in the embodiment,
Where the local oscillator beam 236 and the reconstructed signal beam 234 have substantially the same phase they will interfere constructively to produce a representation of the reconstructed data page at the detector 220. Where the local oscillator beam 236 and the reconstructed signal beam 234 have substantially opposite phases, then they may interfere destructively to produce an inverted representation of the reconstructed data page at the detector 220. Where the local oscillator beam 236 and the reconstructed signal beam have substantially orthogonal phases (i.e., difference near ±90°), then they may produce a washed-out (low contrast) representation of the reconstructed data page at the detector.
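The three contrast regimes above follow from the standard two-beam interference relation for the irradiance at each detector pixel. As a minimal sketch (the irradiance values are assumed for illustration, with the local oscillator much stronger than the signal):

```python
import numpy as np

# Assumed irradiances: strong local oscillator, weak reconstructed signal.
I_LO, I_S = 100.0, 1.0

def detected(dphi_deg):
    # Two-beam interference: I = I_LO + I_S + 2*sqrt(I_LO*I_S)*cos(dphi)
    return I_LO + I_S + 2.0 * np.sqrt(I_LO * I_S) * np.cos(np.radians(dphi_deg))

print(detected(0.0))    # 121.0: constructive interference, bright representation
print(detected(180.0))  # 81.0: destructive interference, inverted representation
print(detected(90.0))   # ~101.0: quadrature, washed-out (no signal contrast)
```

Note that the quadrature case sits exactly midway between the bright and inverted values, which is why it carries no usable contrast.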
This simplified diagram of the holographic memory system 200 of
Coherent Channel Modulation for Data Recording
As explained above with respect to
Phase Quadrature Holographic Multiplexing
Phase quadrature holographic multiplexing (PQHM) can be considered analogous to quadrature phase shift keying (QPSK) in traditional communications theory. The ability to detect the phase of a hologram presents an opportunity to increase storage density. A second hologram can be recorded with each reference beam (e.g., two holograms at each reference beam angle for angle multiplexing), with little to no cross talk between the holograms provided they have a 90° difference in phase. We refer to this method as PQHM. More generally, we refer to methods of recording holograms in both orthogonal phase dimensions as phase-multiplexing. Phase-multiplexing therefore includes, but is not limited to, holograms recorded using PQHM (i.e., QPSK), higher-order PSK, and QAM holographic recording methods. Conversely, BPSK is not considered a phase-multiplexing method.
PQHM can provide a doubling of storage density, and opens the door to other advanced channel techniques. Furthermore, PQHM can be used to increase both recording and recovery speeds.
In general, holographic recording is performed by illuminating a photosensitive medium with an interference pattern formed by two mutually coherent beams of light. In one embodiment, the light induces a refractive index change that is linearly proportional to the local intensity of the light, i.e.,

Δn({right arrow over (r)})=StI({right arrow over (r)})=St|ER({right arrow over (r)})+ES({right arrow over (r)})|2  (1)
where Δn({right arrow over (r)}) is the induced refractive index change, S is the sensitivity of the recording medium, t is the exposure time, and {right arrow over (r)}={x, y, z} is the spatial coordinate vector. I({right arrow over (r)}) is the spatially-varying intensity pattern, which is in turn decomposed into a coherent summation of two underlying optical fields, ER({right arrow over (r)}) and ES({right arrow over (r)}), representing the complex amplitudes of the reference beam and the signal beam, respectively. The unary * operator represents complex conjugation.
In this case, both the reference beam and the signal beams corresponding to an individual stored bit are plane waves (or substantially resemble plane waves), though other page-oriented recording techniques are also suitable for PQHM recording. Generally, the reference and signal beams may be written as exp(jφR)ER ({right arrow over (r)}) and exp (jφS)ES({right arrow over (r)}) respectively, where phases φR and φS have been explicitly factored out. Then,
Δn({right arrow over (r)})=St[|ER({right arrow over (r)})|2+|ES({right arrow over (r)})|2+Re{exp(jΔφ)(ER*({right arrow over (r)})ES({right arrow over (r)})+ER({right arrow over (r)})ES*({right arrow over (r)}))}] (2)
where Δφ=φS−φR is the difference between the phases of the two recording beams.
Eq. (2) shows that the phase of the interference term may be controlled by controlling the phase difference of the recording beams. If two holograms are recorded sequentially using the same reference and signal beams ER({right arrow over (r)}) and ES({right arrow over (r)}) while changing Δφ by 90°, then the holograms will have a quadrature relationship to each other. Each planar grating component in the Fourier decompositions of the interference terms of the two holograms will be identical to the corresponding component of the other hologram, except for a 90° phase difference. This reflects the 90° phase shift between the grating fringes of each component of the second hologram with respect to the grating fringes of each component of the first hologram; thus, the recorded gratings are substantially spatially orthogonal to each other. Similarly, the two holograms will be reconstructed in quadrature when the medium is illuminated by an appropriate probe beam. Because their gratings are orthogonal, the two holograms occupy different degrees of freedom within the address space of the recording medium, even though they use the same bands of angular spectrum.
The quadrature-multiplexed holograms can be denoted as the in-phase (I) hologram and the quadrature (Q) hologram (or the I quadrature and the Q quadrature of the same hologram). If the signal beams of two sequentially recorded holograms are not identical, but are instead modulated with two different data patterns, then the quadrature relationship can still be maintained so long as the modulation scheme does not produce substantial out-of-quadrature gratings within each individual hologram. For example, binary amplitude shift keying (ASK) works for this purpose because ‘ones’ are represented by gratings in the 0° phase in the I hologram and by gratings in the 90° phase in the Q hologram (with ‘zeros’ being represented by the absence of a grating in both holograms). Similarly, for binary PSK (phase shift keying) modulation, ‘ones’ and ‘zeros’ are respectively represented by 0° and 180° gratings in the I hologram, and by 90° and 270° gratings in the Q hologram. Thus, an orthogonal ±90° relationship is maintained between the I and Q holograms of each Fourier component.
Phase-quadrature recording may be physically effected by changing the optical path length of one (or possibly both) of the recording beams to produce a net phase difference of Δφ=90° (or ±90° plus some whole number of waves). For instance, the optical path length can be increased or decreased by moving one or more mirrors in the beam path(s) using a piezo, galvanometer, or micro-electro-mechanical system (MEMS). One or both of the beams may also be phase-modulated with a suitable SLM, such as a switchable liquid crystal SLM.
The I and Q holograms can be modulated and recorded over a sequence of exposures with a binary SLM or in parallel during a single exposure with a gray-scale SLM with at least four modulation levels. When using QPSK modulation, for example, the gray-scale SLM modulates each pixel into one of the four quadrature phase states of 0°, 90°, 180°, and 270°. In this case, the binary state of each pixel in the I and Q images together may be encoded into a single state of the gray-scale phase SLM in a manner that produces quadrature-multiplexed gratings indistinguishable from those produced by sequential writing. The single exposure for parallel quadrature recording requires only 1/√2 times the optical energy used to record two sequential exposures, so medium consumption (M/# usage) is reduced by a factor of 0.707. Recording rates may be increased by up to the same factor.
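The encoding of an (I, Q) pixel pair into a single gray-scale phase state can be sketched as a phasor sum of the two binary-PSK gratings described above. The mapping below is an illustrative assumption; up to a 45° global phase offset, the four composite states are the quadrature states of the gray-scale SLM:

```python
import numpy as np

def composite_pixel(i_bit, q_bit):
    # I page (binary PSK): 'one' -> grating at 0 deg, 'zero' -> 180 deg
    # Q page (binary PSK): 'one' -> 90 deg, 'zero' -> 270 deg
    p_i = np.exp(1j * np.radians(0.0 if i_bit else 180.0))
    p_q = np.exp(1j * np.radians(90.0 if q_bit else 270.0))
    # Single-exposure pixel field standing in for two sequential exposures.
    return p_i + p_q

states = {(i, q): int(round(np.degrees(np.angle(composite_pixel(i, q))) % 360.0))
          for i in (0, 1) for q in (0, 1)}
# Four distinct composite phases, 90 degrees apart:
# {(1, 1): 45, (0, 1): 135, (0, 0): 225, (1, 0): 315}
```

Each composite state has amplitude √2, so a pure-phase SLM driven at these four levels reproduces the pair of quadrature-multiplexed gratings in one exposure.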
If desired, a gray-scale SLM can be implemented by cascading two or more binary SLMs in series, or by cascading non-binary SLMs in series to produce the four or more phase/amplitude states. In embodiments employing modulation schemes different from binary PSK, the four states might not correspond to the four quadrature states of 0°, 90°, 180°, and 270°. For example, parallel phase-quadrature recording of two binary ASK-modulated holograms might be accomplished with an SLM (or cascaded series of SLMs) that produces two bright states at phase 0° and 90° with 1/√2 amplitude, a bright state at phase 45° with unity amplitude, and a dark state.
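The stated amplitude of the "both pages on" state can be verified with a quick phasor sum, using the ASK mapping assumed in the text:

```python
import numpy as np

a = 1.0 / np.sqrt(2.0)  # amplitude of each single-'one' bright state

# Both pages record a 'one': the 0-degree and 90-degree components of
# amplitude 1/sqrt(2) add to a single state of unity amplitude at 45 degrees.
both = a * np.exp(1j * 0.0) + a * np.exp(1j * np.pi / 2.0)

amplitude = abs(both)                    # 1.0
phase_deg = np.degrees(np.angle(both))   # 45.0
```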
Recording Phase Quadrature Holographic Multiplexed (PQHM) Data Pages
As shown in
Data is encoded in the first signal beam in the form of a first data page. The first data page typically includes pixels of varied intensity created using a data encoding element 140 illustrated in
The first reference beam is a plane wave reference beam, and the recording medium 158 typically, but not necessarily, comprises a combination of photoactive polymerizable material and a support matrix, with the combination typically residing on a substrate or sandwiched between two substrates. Other storage media familiar to persons skilled in the art can also be used, including but not limited to LiNbO3 crystals and film containing dispersed silver halide particles. Other methods of optical data recording can use reference beams other than plane wave reference beams, including but not limited to spherical beams and in-plane cylindrical waves.
During the first operation 302, the variable phase retarder 130 in
In the second operation 304, the recording medium 158 records a second interference pattern, called a quadrature (Q) hologram, created by interference of a second reference beam with a second signal beam. The variable retarder 130 is set at a second phase position for the second operation 304, and the second hologram is recorded in the recording medium with the second signal beam being in a second phase state. The second phase state differs from the first phase state by 90°. In other words, the first signal beam has a phase difference of 90° from the second signal beam. Recording the second hologram is typically performed by opening and closing the shutter (not shown).
The second operation 304 is typically performed with the system 100 configured as shown in
However, both the first and second data pages include reserved blocks comprising known pixel patterns. As used herein, the term “reserved block” refers to a region of known pixel pattern(s) that is encoded in a page stored in the holographic storage medium. A reserved block residing at a specific location in the first data page is typically matched by a complementary reserved block residing at an identical specific location in the second data page, wherein the reserved blocks have complementary pixel patterns. The reserved blocks and processing using the reserved blocks are discussed in greater detail below.
In the second operation 304, the second hologram is recorded in a substantially identical location in the photosensitive recording medium 158 as the first hologram, such that the first and second holograms overlap completely to share a common space. However, because the first signal beam and the second signal beam have a phase difference from each other of 90°, the first and second holograms have a phase difference from each other of 90°. In other words, refractive index gratings of each and every Fourier component of the first and second holograms have a phase difference from each other of ±90°. The first and second holograms are thus said to be phase quadrature multiplexed, and form a phase quadrature hologram pair, sometimes referred to as a phase quadrature pair or a PQHM pair. Each hologram of a phase quadrature hologram pair is a species of phase-multiplexed hologram.
In other embodiments, first and second holograms are recorded using first and second signal beams that do not have a phase difference, which is to say the first and second signal beams have a phase difference from each other of 0°. In such case, a phase difference between the first and second holograms of a phase quadrature pair can be achieved using first and second reference beams that have a phase difference from each other of 90°. Persons skilled in the art will recognize that a phase difference between two reference beams can be achieved by placing a phase retarder in a path of the first and second reference beams. The phase difference can also be manipulated by adjusting the phases of both the signal beams and the reference beams so as to have a relative phase difference that switches between 0° and 90°.
Upon completion of the second operation 304 in the PQHM recording process, a processor or other suitable component in or operably coupled to the holographic data storage system 100 determines whether there are any more data pages to be recorded in operation 306. If so, then the holographic data storage system 100 shifts to a new angle, position, dynamic aperture setting, and/or other multiplexing setting in operation 308, then repeats operations 302, 304, and 306 to record the desired number of multiplexed data pages. Once the multiplexed data pages have been recorded, recording ends in operation 310.
Higher-Order Phase Shift Keying (PSK)
In another embodiment, the holographic memory systems disclosed herein may be used to record and recover data modulated with higher-order PSK constellations. PSK encoding may be extended generally to incorporate any number of phase states—for example, 8-PSK. In one embodiment, 8-PSK recording is performed by recording a data page composed using a gray-scale phase SLM with each pixel taking one of eight phase states, e.g., 0°, 45°, 90°, 135°, 180°, 225°, 270°, or 315°. Higher-order PSK holograms may be detected using a modified quadrature homodyne detection or n-rature homodyne detection algorithm. Any number and distribution of phase states may be thus accommodated.
Higher-order PSK holograms can also be recorded sequentially using a binary phase SLM (0° and 180°), and a separate phase retarder in a manner analogous to the sequential PQHM recording method disclosed immediately above. Note, however, that for PSK orders higher than four, the sequentially-recorded images may not constitute independent binary data pages. 8-PSK, for example, involves the sequential exposure of four SLM images but may yield only three bits per pixel of data (not four) since the number of data bits may be equal to the log base 2 of the number of phase states.
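The bit-count relation stated above can be checked directly (a trivial illustration rather than an implementation):

```python
import math

# 8-PSK constellation: eight evenly spaced phase states, per the text.
phase_states = [m * 45.0 for m in range(8)]  # 0, 45, ..., 315 degrees

# Data bits per pixel equal the log base 2 of the number of phase states.
bits_per_pixel = math.log2(len(phase_states))  # -> 3.0
```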
Quadrature Amplitude Modulation (QAM)
The holographic memory systems disclosed herein may also be used to record and recover data modulated in both amplitude and phase. 16-QAM, for example, is a well-known method for encoding 4 bits per symbol using a constellation of typically 4×4 states distributed uniformly in the I-Q plane. Generally, any digital QAM constellation may be recorded holographically using a phase and amplitude-modulating SLM capable of providing an appropriate number of phase and amplitude states, or with a sequence of exposures of varying amplitude and phase using a binary phase SLM. Any number and distribution of states may be thus accommodated.
Partial Response Maximum Likelihood (PRML) Signaling
Partial response maximum likelihood (PRML) signaling can also be used to increase the density with which phase- and amplitude-modulated data is stored in a holographic medium. PRML signaling is used in communications and magnetic storage applications where, for instance, the bits on a data track are packed so closely that four or six individual magnetic flux reversal response pulses may overlap each other. This allows the channel to operate at four or six times the data density it would achieve if the pulses were completely separated. The cost for this improved performance is increased complexity in the form of a decoder that can recover the original data from the convolved signal. Typically, a Viterbi or Bahl, Cocke, Jelinek, and Raviv (BCJR) decoder is used to select the optimal data pattern consistent with the observed signal. In the case of Viterbi decoding, the detector is optimal in the maximum likelihood sense, hence the term partial response, maximum likelihood.
In PRML signaling, a partial response resampling filter creates an output which resembles the convolution of the binary data pattern with some specific channel impulse response, h. For holographic data storage, this resampling filter can be employed in two spatial dimensions rather than in one temporal dimension for time-varying images. In particular, the optical response of neighboring SLM pixel images overlap with each other (blur) when the spatial resolution grows coarser. The superposition of overlapping pixel fields resembles a linear convolution that is amenable to PRML processing. Because homodyne detection permits the detection of optical amplitude, it enables PRML signaling by linearizing the channel response. Put differently, PRML signaling for holographic data storage can be implemented by detecting the optical amplitudes of the overlapping pixel fields using a homodyne detection technique (e.g., n-rature homodyne detection), then applying PRML processing techniques on the detected optical amplitudes.
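The two-dimensional pixel blur described above can be modeled as window sums over the data page, equivalent to a valid-mode convolution with a 2×2 all-ones response. This is a sketch; the page pattern is an arbitrary assumption:

```python
import numpy as np

page = np.array([[1, 0, 1],
                 [0, 1, 1],
                 [1, 1, 0]])  # small binary data page (assumed pattern)

# PR1-2D-style target h = [[1, 1], [1, 1]]: each detector sample is the
# superposition (sum) of four overlapping neighboring SLM pixel responses.
rows, cols = page.shape
blurred = np.zeros((rows - 1, cols - 1), dtype=int)
for i in range(rows - 1):
    for j in range(cols - 1):
        blurred[i, j] = page[i:i + 2, j:j + 2].sum()
# blurred -> [[2, 3], [3, 3]]: multi-level samples a 2D Viterbi/BCJR decoder
# would unwind to recover the original binary page.
```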
Implementing partial response signaling for holographic data storage may reduce the signaling bandwidth, increase the signaling capacity for a given bandwidth, or both. Partial response signaling may also allow the designer to select a target channel response that is closer to the native, physical response of the channel, thus reducing noise amplification due to aggressive equalization.
Once the hologram has been recorded successfully, it can be read out using n-rature homodyne detection, quadrature homodyne detection, or any other technique that yields the complex amplitude of the reconstructed signal beam. First, the signal beam is reconstructed in operation 408 by illuminating the hologram with a probe beam, e.g., as described with respect to
One form of partial response suitable for use in the process of
Normally, the discrete response sequence of
For the page-oriented channel of holographic data storage, a two-dimensional response may be used.
However, a pixel-matched holographic data storage system may not be practical for all uses. In order to implement the system with an oversampled detector, corner alignment can be performed in postprocessing using a modified version of the resampling method disclosed below. The full response oversampling process disclosed below uses the 4×4 detector pixel window closest to each SLM pixel image and applies coefficients optimized (selected) to determine the state of that SLM pixel alone; conversely, the partial response resampling method selects the 4×4 detector pixel window closest to the corner of four SLM pixel images and applies coefficients optimized (selected) to determine the sum of the four SLM pixel responses. Coefficients can be determined by simulation using a modified version of the computer code used to derive the full response resampling coefficients as described below with respect to
Optical Equalization for PRML
Imaging through a square polytopic aperture in the Fourier plane produces a sinc-shaped point spread function (impulse response) in the image plane. Without being bound by any particular theory, a sinc function may be considered to be the lowest-bandwidth response that leads to an isolated non-zero value at the sampling point x=0, but zero at all other integer sampling points, as shown in
In order to effect the desired discrete partial response shape of h=[1 1] (considering only one spatial dimension for simplicity), a reduced (minimal) bandwidth channel comprising two displaced sinc functions as shown in
where Δpix is the SLM pixel spacing, λ is the wavelength of the light, and ƒ is the Fourier transform lens focal length. (Again, only one dimension is considered for simplicity.) This is to say that the square aperture should be apodized with a single null-to-null cosine half-cycle in amplitude transmittance, which corresponds to cos2 in intensity transmittance. For a phase conjugate polytopic architecture employing a double pass through the polytopic aperture (once upon recording, and then again upon read-out), the expression for the intensity transmittance, T(x), of the aperture is:
Having produced an optical response resembling the desired PR1-2D target, the data may be decoded using a two-dimensional version of the Viterbi algorithm, the BCJR algorithm, the Iterative Multi-Strip algorithm, or any other suitable algorithm.
In other embodiments, one skilled in the art will readily recognize how to modify the foregoing analysis to implement other partial response classes, e.g., PR2 (response (1+D)2), or EPR2 (response (1+D)3). For a PR2 partial response, the discrete partial response shape is h=[1 2 1]; for an EPR2 partial response, the discrete partial response shape is h=[1 2 2 1]. Similarly, one skilled in the art will readily see how to implement noise-predictive maximum-likelihood detection (NPML).
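In one dimension, these partial response targets act as simple convolutions of the binary data with the discrete response h. A minimal sketch with an assumed data sequence:

```python
import numpy as np

data = np.array([1, 0, 1, 1, 0])     # assumed binary data sequence

pr1 = np.convolve(data, [1, 1])      # PR1 target h = [1 1]
pr2 = np.convolve(data, [1, 2, 1])   # PR2 target h = [1 2 1]

# pr1 -> [1 1 1 2 1 0]; pr2 -> [1 2 2 3 3 1 0]. A Viterbi or BCJR decoder
# then selects the binary sequence most likely to have produced these
# overlapped, multi-level samples.
```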
Single Sideband Holographic Recording
Single sideband holographic recording involves removing redundant spectral components of the holographic signal in order to increase storage density. In a holographic data storage system that employs polytopic multiplexing, for example, the redundant spectral components can be removed by occluding half of the polytopic aperture. Since the polytopic aperture is placed in a Fourier plane of the signal beam, the complex amplitude distribution in the plane is conjugate-symmetric about the origin, so long as the signal beam is real-valued (as it is typically for binary modulation schemes, such as PSK and ASK). In real-world cases where the signal is not purely real-valued but is instead modulated onto a phase carrier that varies slowly across the image field (i.e., the signal is conjugate-symmetric about the phase carrier, rather than about 0° phase), single sideband holographic recording can be implemented with a phase carrier that is resolved by coherent detection (e.g., n-rature homodyne detection). In n-rature homodyne detection, the resolution depends on the reserved block spacing (discussed below).
In the systems shown in
Single-sideband multiplexing introduces an imaginary component in the detected signal that is normally not present due to cancellation of the imaginary parts in the conjugate sidebands of a double-sideband recording. To restore the original signal, the imaginary part of the signal (as expressed in the recorded phase basis) is discarded or suppressed, e.g., by recovering only the real part of the signal. For example, the real part of the signal can be retrieved without the imaginary part by reading a single-sideband hologram using n-rature homodyne detection, quadrature homodyne detection, or any other suitable digital or optical reconstruction technique.
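The redundancy of the conjugate sidebands, and recovery from a single sideband via the real part, can be sketched with a one-dimensional discrete Fourier transform. The sample signal is an arbitrary assumption standing in for a real-valued (e.g., binary PSK) signal field:

```python
import numpy as np

x = np.array([1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0])  # real-valued signal
N = len(x)

X = np.fft.fft(x)
# For real x the spectrum is conjugate-symmetric, so the negative-frequency
# sideband is redundant: occlude it and double the kept sideband to compensate.
X[N // 2 + 1:] = 0.0
X[1:N // 2] *= 2.0

# The inverse transform now has an imaginary component that was previously
# cancelled by the conjugate sideband; keep only the real part, as in the text.
x_rec = np.fft.ifft(X).real
# x_rec matches x
```

This is the standard analytic-signal construction, chosen here because it mirrors the occluded-aperture geometry described above.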
n-Rature Homodyne Detection
As introduced above, n-rature homodyne detection is a coherent channel detection process for reconstructing holographically stored data, including data encoded using the coherent channel modulation techniques disclosed herein. In n-rature homodyne detection, one or more detectors sense n images (e.g., IA, IB, IC, etc.) of a particular data page. Each of these images is produced by sensing the interference pattern between a local oscillator beam and a reconstructed signal beam diffracted by the holographic storage medium. For the mth image Im, the phase difference between the local oscillator beam and the reconstructed signal beam is 2πm/n, where n≧3 is the total number of images. This can be accomplished, for instance, by incrementing or decrementing the modulo 2π phase difference between the local oscillator and the reconstructed signal beam by 2π/n for subsequent images. This phase difference can be implemented using a liquid crystal-based phase modulator, an electro-optic phase modulator, a movable mirror, or any other suitable phase modulator in the path of the local oscillator beam, the probe beam, or the reconstructed data (signal) beam. The phase differences of successive images do not have to be arranged in any particular sequence or order; rather, the phase difference can be varied as desired such that the resulting n images can be ordered in phase difference increments or decrements of 2π/n.
Common Intensity Noise Suppression by N-Rature Homodyne Detection
Though n-rature homodyne detection involves more holographic exposures than conventional homodyne detection or quadrature homodyne detection, it enjoys other benefits, including the rejection of common intensity noise. Eliminating common intensity noise using n-rature homodyne detection increases the SNR of the detected signal. To see why, consider n-rature homodyne detection versus quadrature homodyne detection. In quadrature homodyne detection, the detector acquires two images, IA and IB, of the interference between the reconstructed signal beam and the local oscillator beam. The phase difference between the reconstructed signal beam and the local oscillator beam is shifted by 90° for one of the images. As a result, the irradiance of the detected images can be expressed as:
IA=ILO+IS+2|ELO||ES|cos(Δφ)  (5)

IB=ILO+IS+2|ELO||ES|cos(Δφ+90°)
where IS is the signal (reconstructed data) beam irradiance, ILO is the local oscillator irradiance, and Δφ is the phase difference between the signal beam and the local oscillator. |ES| and |ELO| are the magnitudes of the optical fields, i.e., |ES|=√IS, |ELO|=√ILO. The signal magnitude |ES| may be estimated by:
The final term in Eq. (6) represents common intensity noise and is an additive noise source in the estimate of |ES|. Common intensity noise is so denoted because it includes components of the detected images proportional to the direct intensity terms, ILO and IS, in addition to the desired interference term. Although increasing the local oscillator intensity may increase the SNR of the detected signal, it will not necessarily eliminate the common intensity noise in quadrature homodyne detection.
Conversely, using n-rature homodyne detection suppresses or eliminates common intensity noise and can increase the SNR of the detected signal. To see how, consider the case of n-rature homodyne detection where n=3. The detector images may be written as:
IA=ILO+IS+2|ELO||ES|cos(Δφ)  (7)

IB=ILO+IS+2|ELO||ES|cos(Δφ+120°)

IC=ILO+IS+2|ELO||ES|cos(Δφ+240°)
Then |ES| may be estimated by
where ΔφA, ΔφB, and ΔφC represent the respective phase differences between the three pairs of local oscillator and signal beams. Typically, these phase differences are shifted with respect to each other by 120°, so |ES| may be estimated by:
Eq. (9) shows that the common intensity noise term has cancelled. This approach may be generalized for any n≧3 since
is a constant factor applied to the signal term and
is the factor applied to the term including the common intensity noise. In the ideal case, this factor sums exactly to zero, reflecting a perfect cancellation of common intensity noise. In a real implementation, where the actual phase differences might deviate slightly from the ideal spacing of 2π/n, this factor merely tends towards zero, thus substantially cancelling common intensity noise. Because the factor typically does not sum exactly to zero in practice, a small amount of common intensity noise may remain.
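Since Eqs. (8) and (9) are not reproduced above, the cancellation can still be checked numerically with a complex-weighted combination of the n images. The estimator below is an assumption for illustration (not the patent's exact expression), but it relies on the same property: the n-th roots of unity sum to zero, so all direct and common terms drop out:

```python
import numpy as np

n = 3
E_LO, E_S, dphi = 10.0, 0.3, 0.7  # assumed field magnitudes and (unknown) phase offset

# The n detector images of Eq. (7), each with the same added constant offset
# standing in for common intensity noise.
images = [E_LO**2 + E_S**2
          + 2.0 * E_LO * E_S * np.cos(dphi + 2.0 * np.pi * m / n)
          + 5.0
          for m in range(n)]

# Complex weights exp(j*2*pi*m/n) sum to zero for n >= 3, so the direct
# terms I_LO, I_S and the common offset cancel identically; only the
# interference term survives.
combo = sum(im * np.exp(1j * 2.0 * np.pi * m / n) for m, im in enumerate(images))
E_S_est = abs(combo) * (2.0 / n) / (2.0 * E_LO)
# E_S_est recovers E_S = 0.3 despite the added common intensity noise
```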
In a refinement to the n-rature algorithm above, cancellation forcing may be employed. In practice, the coefficients for combining the images in Eq. (8), cos(ΔφA), cos(ΔφB), and cos(ΔφC), can be determined by correlation operations on the detected images. Perfect common intensity noise cancellation occurs when these coefficients sum to zero, but this may not be the case in practice due to measurement noise or phase errors in the constituent images. In such a case, cancellation forcing may be practiced by adjusting the coefficients to sum to zero, for example by subtracting the mean (i.e., 1/n of the sum) from each coefficient.
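Cancellation forcing itself is a one-line adjustment. A minimal sketch, with small assumed phase errors perturbing the ideal 120° spacing:

```python
import numpy as np

# Ideal 120-degree spacing perturbed by assumed measurement phase errors.
phases_deg = np.array([0.0, 120.0, 240.0]) + np.array([2.0, -1.5, 0.5])
coeffs = np.cos(np.radians(phases_deg))

# Force the coefficients to sum to zero by removing their mean from each one.
forced = coeffs - coeffs.mean()

# coeffs.sum() is slightly nonzero (residual common intensity noise would
# survive); forced.sum() is zero to machine precision.
```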
Reading Holographically Stored Data Using N-Rature Homodyne Detection
In the embodiment shown in
The probe beam diffracts off the in-phase hologram and the quadrature hologram to generate a reconstructed signal beam, which interferes with the local oscillator as discussed above. A detector senses this interference pattern, and a memory stores a representation of the detected interference pattern as one of the n n-rature homodyne images. In optional operation 505, a processor coupled to the memory may subtract the background image acquired in operation 504 from the interference pattern detected in operation 504 in order to remove direct terms (e.g., non-signal components, such as ILO and IS) from the representation of the interference pattern. Examples of operation 505, also called detector image modification, are described in greater detail below.
If there are more images to recover, e.g., as determined by the processor in operation 506, the phase difference between the local oscillator and the reconstructed signal beam is shifted by 360°/n using the variable phase retarder 130 in operation 508. Put differently, the mth local oscillator in the set of n local oscillators has a relative phase of (360°×m)/n. Accordingly, where n=3, each of the three local oscillators is shifted by a phase difference of 120° (360°/3) with respect to the other local oscillators. The system repeats operations 504 and 506 in an iterative fashion at incremented phase differences of 360°/n until all n images have been captured.
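The capture loop of operations 504-508 can be sketched as follows. The retarder class and `capture` callable are hypothetical stand-ins for the variable phase retarder and detector, not a real device API.

```python
class MockRetarder:
    """Hypothetical stand-in for the variable phase retarder 130."""
    def __init__(self):
        self.phase_deg = 0.0

    def shift(self, delta_deg):
        # Increment the LO/signal phase difference (operation 508).
        self.phase_deg = (self.phase_deg + delta_deg) % 360.0

def capture_nrature_images(n, retarder, capture):
    # Capture an image (operation 504), then shift the phase difference
    # by 360 degrees / n and repeat until all n images are stored.
    images = []
    for m in range(n):
        images.append(capture(retarder.phase_deg))
        if m < n - 1:
            retarder.shift(360.0 / n)
    return images

# For n = 3, the images are captured at relative phases 0, 120, 240 degrees.
phases_seen = capture_nrature_images(3, MockRetarder(), lambda p: p)
```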
The n images undergo postprocessing in operations 509 and 512. In optional operation 509, a processor combines the n images captured by the detector into a pair of images suitable for further processing as explained below. This combination operation 509 can be applied to images reproduced from holograms recorded using ASK, PSK, QAM, and single-sideband recording techniques. The resulting image pair, referred to as a quadrature image pair, undergoes spatial wavefront demodulation in operation 512. (Spatial wavefront modulation is explained in greater detail with respect to
The process shown in
The operations shown in
Generating and Detecting Interference Patterns
The 2nd SHWP 146 is configured to transmit p-polarized light when the holographic system 100 is in read mode. Accordingly, each of the n s-polarized reconstructed signal beams has its polarization rotated 90° by the 2nd SHWP 146 to emerge p-polarized and thus propagates through the PBS 139 towards the detector 142. The first SHWP 144 is oriented to transmit an s-polarized local oscillator 125, which reflects off PBS 139 towards detector 142 to combine with the corresponding reconstructed signal beam 124. If the local oscillator 125 and the reconstructed signal beam 124 are aligned with each other, they propagate substantially collinearly towards the detector 142, thereby forming combined beam 131.
An analyzer 141 (e.g., a linear polarizer) between the PBS 139 and the detector 142 transmits projections of the s-polarized local oscillator 125 and the p-polarized reconstructed signal beam 124 into a particular polarization state (e.g., a linear diagonal polarization state). Changing the polarization state transmitted by the analyzer (e.g., by rotating the analyzer 141 about the optical axis of the combined beam 131) changes the relative strengths of the reconstructed signal beam portions and local oscillator portions of the n combined beams transmitted to the detector 142. This varies the modulation depth of the detected interference pattern and the gain experienced by the reconstructed signal beam. The detector 142 senses the interference pattern generated by the local oscillator and the reconstructed signal beam and produces an electronic signal (e.g., a current or voltage) whose amplitude is proportional to the detected irradiance.
The local oscillators 125 may be generated at a fixed wavelength that is substantially identical to the wavelength of the probe beam 133. The fixed wavelength should remain constant over time, within the capability of the holographic system 100 to maintain a constant wavelength. Persons skilled in the art recognize that small, unintentional variations in wavelength are typically unavoidable. For example, laser mode hops, current variation, and temperature variation can limit wavelength stability for light beams in any holographic system.
Incrementing the Phase Difference Between the LO and Signal Beam
The phase difference between the local oscillator 125 and the reconstructed signal beam 124 can be incremented in operation 508 with the variable phase retarder 130 shown in
The phase of the first of the n local oscillators does not have to be adjusted. However, variations include methods where the first local oscillator phase is adjusted. For some methods of coherent optical data detection, local oscillator phases are adjusted using a variable phase retarder. In some variations, local oscillator phases are adjusted by other means, including but not limited to changing a path length of the local oscillator prior to combining the local oscillator with a reconstructed signal beam.
Other exemplary embodiments of methods of optical data recording, detection, and channel modulation include, but are not limited to, embodiments where n=4, n=5, or n=6. For example, in a method of coherent optical data detection where n=4 (thus having 4 reconstructed signal beams, 4 local oscillators, and 4 combined beams), each of the 4 local oscillators has a phase difference of 90° (360°/4) from two of the 4 local oscillators, and each of the 4 local oscillators has a phase difference of at least 90° from the 3 other local oscillators. In another embodiment, where n=5 (thus having 5 reconstructed signal beams, 5 local oscillators, and 5 combined beams), each of the 5 local oscillators has a phase difference of 72° (360°/5) from two of the 5 local oscillators, and each of the 5 local oscillators has a phase difference of at least 72° from the 4 other local oscillators. In still another embodiment, where n=6 (thus having 6 reconstructed signal beams, 6 local oscillators, and 6 combined beams), each of the 6 local oscillators has a phase difference of 60° (360°/6) from two of the 6 local oscillators, and each of the 6 local oscillators has a phase difference of at least 60° from the 5 other local oscillators.
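The phase relationships recited above for n=4, n=5, and n=6 can be verified numerically. This sketch assumes the m-th local oscillator has relative phase (360° × m)/n, as stated earlier.

```python
def lo_phases(n):
    # The m-th local oscillator has relative phase (360 degrees * m) / n.
    return [360.0 * m / n for m in range(n)]

def circular_diff(a, b):
    # Phase difference on a circle, folded into [0, 180] degrees.
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def check_phase_set(n):
    phases = lo_phases(n)
    step = 360.0 / n
    for p in phases:
        diffs = [circular_diff(p, q) for q in phases if q != p]
        # Each LO differs by exactly 360/n degrees from two of the others...
        assert sorted(diffs)[:2] == [step, step]
        # ...and by at least 360/n degrees from all of the others.
        assert min(diffs) >= step - 1e-9
```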
Combining N-Rature Homodyne Detected Images
In operation 509 of the n-rature detection process shown in
In quadrature homodyne detection, a quadrature combined image can be produced by combining the detected images:
where ÊI is the estimated optical field of the signal at the detector, ĨA and ĨB constitute a quadrature image pair which may have been produced by combination of n n-rature images as described above, or directly by quadrature homodyne detection. PA and PB are upsampled peak strength maps based on reserved block patterns distributed throughout the images (described in greater detail below with respect to
ÊI and ÊQ are traditionally referred to as the in-phase and quadrature components of the signal. For the case of PQHM, independent data pages may be written in each component, doubling the storage density. Furthermore, ÊQ may be computed from the same quadrature image pair using the same upsampled peak strength maps used for ÊI; it is not even necessary to perform correlation operations for the reserved block patterns in the quadrature (Q) page, which may differ from those in the in-phase (I) page. In this manner, a PQHM system can recover two data pages from two holographic exposures, achieving the same detection rate as direct detection.
In an alternative embodiment, quadrature homodyne recombination can be performed twice independently to recover each of the signals. In this case, the reserved block cross-correlations are performed using the known reserved block patterns of the Q page in addition to those of the I page. Q image recombination is performed using Eq. (12) with the Q reserved block pattern cross-correlation peak strengths. In another alternative embodiment, the cross-correlations for both known reserved block patterns are performed, and the results are combined into a single low-noise estimate of the detected phase basis, which is then used in the recombination of both images.
Similar principles apply to reconstruct PQHM data recovered using n-rature homodyne detection and spatial fringe demodulation, both of which are disclosed above and in greater detail below. The n n-rature images so detected may be combined into a quadrature image pair as above, or they may be used to directly determine ÊI and ÊQ estimated optical fields as follows. For n-rature homodyne detection with n=3, the expressions in Eqs. (12) and (13) become:
Higher values of n may be accommodated analogously. Note that in these expressions, PA, PB, . . . correspond to the upsampled reserved block correlations for the I image reserved block patterns. One skilled in the art would also recognize that it is possible to modify the expressions in Eqs. (14) and (15) to instead employ correlations for the Q image reserved block patterns, or to incorporate both.
Detector Image Modification
In operation 510 of the n-rature detection process shown in
By convention, the modified image produced by detector image modification is designated with a tilde to distinguish it from the original detector image, e.g., IA→ĨA. This modification may be performed in several different ways. Typically,
IA = ILO + IS + 2|ELO||ES| cos(Δφ)  (16)

ĨA = 2|ELO||ES| cos(Δφ)
In one embodiment, ĨA is computed from IA simply by subtracting the mean of IA. Typically, ILO is spatially constant and IS is comparatively small, so subtracting the mean does a fair job of isolating the third (interference) term of Eq. (16).
Subtracting the mean can be generalized to performing a filtering operation on IA to produce ĨA. For example, a spatial high-pass filter would subtract not only the mean but also other slowly varying components caused by, say, intensity variations in the local oscillator. This can improve performance compared to simply subtracting the mean.
The device can also subtract a reference image from the raw image(s) to produce the modified image(s). This reference image might be generated analytically or empirically. In an embodiment, the reference image is generated simply by taking a detector image of the local oscillator in the absence of a reconstructed signal beam, such as in operation 502 of the n-rature homodyne detection process shown in
These detector image modification embodiments are not mutually exclusive. That is, any combination of mean subtraction, spatial filtering, and reference image subtraction can be applied to the n-rature homodyne images.
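The reference-image and mean-subtraction variants described above can be sketched on a synthetic detector image built per Eq. (16); all numerical values here are illustrative.

```python
import numpy as np

# Synthetic detector image per Eq. (16): direct terms plus interference.
x = np.arange(256)
I_LO = 100.0 * np.ones((256, 256))               # local oscillator direct term
I_S = 4.0 * np.ones((256, 256))                  # (weak) signal direct term
fringe = 40.0 * np.cos(2 * np.pi * 8 * x / 256)  # 2|ELO||ES|cos(dphi) term
I_A = I_LO + I_S + fringe                        # fringe broadcasts along rows

def modify_detector_image(I_raw, I_ref=None):
    # Reference-image subtraction (if a LO-only image from operation 502
    # is available), then mean subtraction to remove remaining direct terms.
    I = I_raw.astype(float)
    if I_ref is not None:
        I = I - I_ref
    return I - I.mean()

I_tilde = modify_detector_image(I_A, I_ref=I_LO)
```

With a spatially constant signal direct term and a zero-mean fringe, the combined modification recovers the interference term exactly; in practice it is only approximate.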
Spatial Wavefront Estimation and Demodulation
Homodyne detection, including n-rature homodyne detection, involves estimating the phase difference between the signal phase carrier and the local oscillator, e.g., using reserved blocks distributed across the data page. In some cases, misalignment between the wavefront of the local oscillator and the wavefront of the signal beam introduces an additional, undesired phase difference, or spatial wavefront modulation, that manifests in the detected images as a spatial fringe pattern. The periodicity and orientation of the spatial wavefront modulation may vary due to real-world perturbations, such as vibration, heating, or misalignment of components in the optical path. Generally, these perturbations tend to increase the frequency of the spatial wavefront modulation. If the frequency of the spatial wavefront modulation is too high, the estimate of the phase difference may degrade due to aliasing.
In optional operation 554, the processor detects and cross-correlates selected reserved blocks in each calibration page as described below with respect to
In operation 558, the processor detects all or substantially all reserved blocks in each calibration page and uses these reserved blocks to estimate the remaining spatial modulation wavefront affecting the corresponding calibration page. In operation 560, the processor sums the remaining spatial modulation wavefront with the predetermined fringe pattern and the least-squares quadratic fit. It then removes the resulting sum from the images of the data pages retrieved from the holographic storage medium and combines the images of each data page into a corresponding quadrature image pair in operation 562. The processor decodes the demodulated quadrature image pairs to recover the information represented by the data pages in operation 564.
Spatial Wavefront Modulation Due to Medium Positioning Errors
Medium positioning errors during recovery tend to have a large impact on the spatial wavefront modulation. For instance, consider a holographic recording medium placed at or near a Fourier plane of the SLM. In such a geometry, the optical fields during a recovery operation can be described, in the Fresnel approximation, using well-known Fourier optics principles:
where g(x,y) is the optical field at the detector and F(vx, vy) is the Fourier transform of the recorded optical field as if it were emitted entirely from the recorded Fourier plane within the medium. By convention, the notation
denotes this Fourier transform scaled by x=λƒvx and y=λƒvy. hl is a scalar factor, λ is the recording wavelength, and ƒ is the focal length of the Fourier transform lens (i.e., the recording objective lens). d is the propagation distance from the recorded Fourier plane to the lens, and x and y are the Cartesian coordinates at the detector.
Equation (17) represents the classical Fourier transform property of a lens, but also includes a quadratic phase factor that becomes significant when the recorded Fourier plane is not placed exactly in the lens focal plane, i.e., when d≠ƒ. This condition may represent a height (z axis) error of the position of the recorded hologram with respect to the optical head during recovery. Thus, measurement of the phase difference between the signal beam and the local oscillator and extraction of the quadratic component (for example by projection onto a Zernike or Seidel basis) constitutes a highly accurate estimate of the medium height error. According to one embodiment, this estimate of the medium height error may be used to adjust the relative height of the medium during recovery, e.g., with a servo or other suitable actuator controlled by a processor that estimates the position error(s). According to another embodiment, the estimate of the medium height error may be used to adjust a quadratic phase factor in a local oscillator or reference beam.
Similarly, transverse positioning errors (x and y axes) produce characteristic tilt factors in Δφ. According to the shift property of the Fourier transform,
shifting a function ƒ(x) by x0 in the spatial domain introduces a phase factor of exp(−jkxx0) to its Fourier transform F(kx). Similarly, shifting a function in the y direction by y0 introduces an exp(−jkyy0) factor. Thus, medium position errors in the x and y directions may be measured by extracting the respective tilt components. This may likewise be done with Zernike or Seidel coefficients, or by Fourier transform. According to embodiments of this invention, shift errors may be used to adjust the relative x,y position of the recording medium, again using servos or other suitable actuators controlled by a processor that estimates the transverse position error(s).
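Extracting a tilt component from a measured phase map can be sketched as a least-squares plane fit. The mapping from fitted slope back to a physical medium shift assumes the Fourier-plane scaling vx = x/(λƒ) stated above; the wavelength, focal length, and shift values are illustrative.

```python
import numpy as np

def tilt_components(dphi, x, y):
    # Least-squares fit dphi(x, y) ~ a*x + b*y + c; a and b are tilt terms.
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
    coeffs, *_ = np.linalg.lstsq(A, dphi.ravel(), rcond=None)
    return coeffs  # (a, b, c)

# A medium shift x0 produces a phase ramp exp(-j 2*pi*vx*x0) with
# vx = x / (wavelength * f), i.e., slope a = -2*pi*x0 / (wavelength * f).
wavelength, f = 405e-9, 10e-3      # illustrative values
x0_true = 2e-6                     # 2 micron transverse medium shift
coord = np.linspace(-1e-3, 1e-3, 64)
x, y = np.meshgrid(coord, coord)
dphi = -2 * np.pi * x0_true / (wavelength * f) * x
a, b, c = tilt_components(dphi, x, y)
x0_est = -a * wavelength * f / (2 * np.pi)   # recovered medium shift
```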
Predetermined Fringe Demodulation
In operation 552 of the process shown in
In one embodiment, an aberration function corresponding to the design-nominal or as-built performance of the optical system is used as a predetermined wavefront. This approach can enable the use of cheaper or smaller lenses or other optical components. This predetermined wavefront may also be modified according to current conditions to account for changes in environmental conditions (e.g., temperature, vibration), wavelength, etc.
One or more predetermined wavefronts can also be used to remove the known phase aberrations imparted by a phase mask. Phase masks are commonly used to mitigate the effects of the “DC hot spot” and inter-pixel noise in ASK-modulated data. The use of a phase mask along with a predetermined wavefront estimate allows the application of spatial wavefront demodulation to ASK modulation and other modulation schemes that use phase masks.
Reserved Blocks Cross-Correlations for Estimating Spatial Wavefront Modulation
In operations 552, 554, and 556 of the process shown in
The reserved blocks may serve other purposes as well. With oversampled images, for example, the reserved blocks can serve as fiducials for image alignment measurement. Since the reserved block data patterns are known, they may also be used for signal-to-noise ratio (SNR) calculation. In addition, the specific patterns employed for the reserved blocks may also be selected to eliminate or reduce pattern-dependent autocorrelation noise, e.g., by reducing cross correlations between oversampled versions of regions of the reserved block patterns, rather than the original binary versions.
In order to prevent noise from neighboring data pixels from impacting the alignment measurement, cross correlations may be calculated over an upsampled reserved block target pattern corresponding to only an interior region of the reserved block. For example, an 8×8 pixel binary reserved block pattern can be selected such that the cross correlation of the inner 6×6 pixel sub-block with any of the other eight edge-bordering 6×6 pixel sub-blocks is zero. Similarly an interior region corresponding to the inner 6×6 pixel sub-block can be used to derive the target pattern.
Overlaid on an inner 6×6 pixel sub-block of the binary reserved block pattern 606 is an 8×8 grid 616 showing the locations corresponding to target pattern pixels. Upsampled reserved block target pattern 626 shows the results of upsampling the inner 6×6 pixel sub-block simply by integrating the values of binary reserved block pattern 606 within each of the grid cells of the overlaid 8×8 grid 616. In other embodiments, the process of upsampling may be enhanced to incorporate an optical point spread function (PSF) in a manner that will be readily apparent to one skilled in the art.
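The grid-integration upsampling described above can be sketched generically: each cell of the overlaid grid accumulates the area-weighted values of the underlying pixels. The stand-in 6×6 pattern below is illustrative, not an actual reserved block pattern, and the optical-PSF enhancement is omitted.

```python
import numpy as np

def upsample_by_area(block, out_shape):
    # Integrate `block` over each cell of an out_shape grid laid over it,
    # e.g., a 6x6 inner sub-block integrated over an overlaid 8x8 grid.
    in_h, in_w = block.shape
    out_h, out_w = out_shape
    out = np.zeros(out_shape)
    for i in range(out_h):
        for j in range(out_w):
            # Cell (i, j) covers this rectangle in block-pixel coordinates.
            y0, y1 = i * in_h / out_h, (i + 1) * in_h / out_h
            x0, x1 = j * in_w / out_w, (j + 1) * in_w / out_w
            for r in range(int(np.floor(y0)), int(np.ceil(y1))):
                for c in range(int(np.floor(x0)), int(np.ceil(x1))):
                    overlap_y = min(y1, r + 1) - max(y0, r)
                    overlap_x = min(x1, c + 1) - max(x0, c)
                    out[i, j] += block[r, c] * overlap_y * overlap_x
    return out

inner = (np.arange(36).reshape(6, 6) % 2).astype(float)  # stand-in pattern
target = upsample_by_area(inner, (8, 8))
```

Because the two grids tile the same area, the total integral of the pattern is preserved by the upsampling.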
The cross-correlation produces cross-correlation matrix 610, wherein sampled peak 612 is identified as the largest value. The location of 612 can be used for image alignment in resampling. A processor or controller may also interpolate the location of correlation peak 612 to interpolated location 614 for sub-pixel resolution (e.g., by using a centroid operation). An array of cross correlation peak location information and/or interpolated cross correlation peak location information constitutes a quiver alignment array.
In yet another embodiment of PQHM recording, the reserved block patterns in in-phase (I) and quadrature (Q) pages are identical, resulting in the detection of Δφ with a 45° offset, i.e., halfway between the I and Q holograms, rather than aligned with the I (or alternatively, Q) hologram. In such a case the offset can be subtracted, and quadrature homodyne detection performed as usual.
The peak is then determined as the largest value in combined correlation matrix 662 (XRMS), for instance pixel 664. The processor then chooses a 2×2, 2×3, 3×2 or 3×3 peak neighborhood of pixels constituting the peak neighborhood 660 according to a peak neighborhood rule. For example, a peak neighborhood might include both the pixel to the left of the peak and the pixel to the right of the peak if the values of those two pixels are within 50% of each other; otherwise it might include only the larger of the two. Similarly, the neighborhood might include both the pixel above the peak and the pixel below the peak if their values are within 50% of each other; otherwise it might include only the larger of the two. The peak itself is included in the peak neighborhood. Pixels diagonal to the peak would then be included if they have three neighbors included in the peak neighborhood according to the previous rules, yielding the 2×2, 2×3, 3×2 or 3×3 peak neighborhood. Once the peak neighborhood has thus been established within combined correlation matrix 662, the processor locates the corresponding neighborhoods in sampled correlation matrices 658 and 678, and sums the values of the pixels in those neighborhoods to yield peak strengths PA, PB, . . . The peak strengths of all reserved blocks are combined to form peak strength maps that can be used to estimate the spatial modulation as explained below.
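The peak neighborhood rule above can be sketched as follows. This sketch interprets "within 50% of each other" as the smaller value being at least half of the larger, which is one reasonable reading of the rule, and assumes the peak is not on the matrix border.

```python
import numpy as np

def peak_neighborhood(M):
    """Choose a 2x2..3x3 peak neighborhood per the rule described above."""
    r, c = np.unravel_index(np.argmax(M), M.shape)
    included = {(r, c)}                      # the peak is always included

    def pick_pair(p1, v1, p2, v2):
        # Include both pixels if within 50% of each other, else the larger.
        if min(v1, v2) >= 0.5 * max(v1, v2):
            included.update([p1, p2])
        else:
            included.add(p1 if v1 > v2 else p2)

    pick_pair((r, c - 1), M[r, c - 1], (r, c + 1), M[r, c + 1])  # left/right
    pick_pair((r - 1, c), M[r - 1, c], (r + 1, c), M[r + 1, c])  # up/down
    # A diagonal pixel joins if the peak and both adjacent side pixels are
    # already included, i.e., it has three included neighbors.
    for dr in (-1, 1):
        for dc in (-1, 1):
            if (r + dr, c) in included and (r, c + dc) in included:
                included.add((r + dr, c + dc))
    return included

M = np.array([
    [0, 1, 5, 1, 0],
    [0, 4, 9, 3, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
], dtype=float)
hood = peak_neighborhood(M)
```

Here left (4) and right (3) are within 50% of each other, while below (1) is not within 50% of above (5), so the result is a 2×3 neighborhood spanning rows 0-1 and columns 1-3.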
Regions that are in high contrast and non-inverted in the detected images show large positive peak strength values (approaching +1) in
Δφ̂(x, y) = tan⁻¹[PB(x, y), PA(x, y)]  (18)
where Δφ̂ is the estimate of Δφ(x, y) and tan⁻¹ is the four-quadrant arctangent. In n-rature detection, the n detector images may be combined into quadrature and processed according to Eq. (18). Alternatively, Δφ(x, y) may be estimated directly by
Δφ̂ = tan⁻¹(√3(PC − PB), 2PA − PB − PC)  (19)
in the case where n=3, by
Δφ̂ = tan⁻¹(PB − PD, PA − PC)  (20)
when n=4, and by
when n=5. Similar expressions for other values of n may be determined by those skilled in the art.
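Eqs. (18) and (19) map directly onto a four-quadrant arctangent. The synthetic check below assumes the k-th peak strength varies as cos(Δφ + k·360°/n); the sign and ordering convention in a real system may differ, which would flip the sign of the recovered phase.

```python
import numpy as np

def phase_from_quadrature(PA, PB):
    # Eq. (18): four-quadrant arctangent of the quadrature peak strengths.
    return np.arctan2(PB, PA)

def phase_from_3rature(PA, PB, PC):
    # Eq. (19): direct estimate from three peak-strength maps (n = 3).
    return np.arctan2(np.sqrt(3.0) * (PC - PB), 2.0 * PA - PB - PC)

# Synthetic peak strengths for a known phase difference of 0.7 rad.
dphi = 0.7
PA, PB, PC = (np.cos(dphi + k * 2 * np.pi / 3) for k in range(3))
est3 = phase_from_3rature(PA, PB, PC)
est2 = phase_from_quadrature(np.cos(dphi), np.sin(dphi))
```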
In some embodiments, the spatial modulation estimate Δφ̂ is represented as a function including samples corresponding to each pixel, or to groups of pixels. In other embodiments, the spatial modulation estimate Δφ̂ is represented in non-sampled form, for example by Zernike or Seidel coefficients for components of interest. For example, the spatial modulation estimate Δφ̂ can be represented compactly by three “position registers” indicating the tip, tilt, and focus terms of the wavefront corresponding to the medium positioning errors.
Least-Squares Fitting for Estimating Spatial Wavefront Modulation
If desired, the spatial phase modulation estimate Δφ̂ derived from the peak strength maps (e.g., including those shown in
Calibration Pages for Spatial Wavefront Demodulation
In some cases, the spatial modulation Δφ may contain frequency components higher than those that can be resolved by the reserved blocks in the data pages. If desired, the higher frequency components can be detected and estimated with calibration holograms interspersed among the other holograms. In one embodiment of this technique, a calibration page includes a higher density of reserved blocks than a data page and is therefore able to resolve higher fringe frequencies. For example, if a data page includes 8×8 pixel reserved blocks interspersed on a 64×64 reserved block sample grid, a calibration page could include reserved blocks of the same form on an 8×8 pixel grid along with data interspersed among the reserved blocks. Alternatively, a calibration page could consist essentially of reserved blocks, and would be able to sample spatial frequencies eight times higher than the reserved blocks in a data page. The Δφ̂ estimate could then be produced using methods almost entirely identical to those used during homodyne detection decoding, for example by applying equation (18), (19), (20), or (21).
Alternatively, a uniform Δφ calibration page could be recorded (i.e., all pixels in the same state), and the phase Δφ̂ could be determined based on the detected interference pattern. In other embodiments, larger or smaller reserved block patterns could be used, and multiple sizes could be used simultaneously. For example, a separate set of correlation operations could be performed on 4×4 pixel subsections of the reserved blocks to produce a higher-resolution, but noisier estimate of Δφ, and then this estimate could be combined with the standard-resolution estimate to produce a superior quality estimate. Lower-resolution but less noisy estimates could similarly be produced from larger patterns, e.g., 16×16 reserved block patterns.
If desired, calibration pages may be inserted so that the spatial phase modulation estimate Δφ̂ may be recalculated any time the spatial phase modulation Δφ is likely to change significantly. For books of angle-multiplexed holograms, for example, Δφ may change significantly when moving to a new book because of the mechanical uncertainty in the x,y (or r, θ in the case of a disk-shaped medium) movement. Hence, it is advantageous to include a Δφ calibration page at the first recovery angle of each book. In at least one embodiment, a Δφ calibration page is recorded and recovered at the first angular address of each book, and the resulting Δφ̂ estimate is used to demodulate all of the remaining pages in the book. In such an implementation, the overhead incurred by Δφ calibration pages can be relatively low since there may be hundreds of holograms in a book.
In other embodiments, the fraction of calibration pages may be increased to provide redundancy, or to account for other sources of changes to the spatial wavefront modulation. For example, if x,y (or r, θ) mechanical moves are performed within a book (“short-stacking”), then a Δφ calibration page could be recorded at the first angular address of each short stack.
Blind De-Aliasing for Estimating Spatial Wavefront Modulation
In still another embodiment, a higher-quality estimate Δφ̂ of the spatial wavefront modulation may be generated by performing blind de-aliasing on an aliased estimate of the spatial wavefront modulation. For example, suppose an aliased estimate is produced by interpolating reserved block samples in an ordinary data page recovered with a large tilt component. In such a case, the estimate Δφ̂ will exhibit a Fourier peak at a spatial frequency which is an aliased version of the true frequency. The set of true frequencies that will alias to the observed frequency is discrete, so it is possible to blindly replace the observed frequency with candidates from this set and retry the page decoding operation. If the page is decoded correctly according to cyclic redundancy check (CRC) codes or similar integrity checks within the recorded data, then the chosen candidate spatial frequency is likely the true spatial frequency. This procedure can be repeated with or without actually repeating holographic exposures until either the correct spatial frequency is found, or until the set of reasonable candidate spatial frequencies is exhausted.
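The candidate-and-retry loop described above can be sketched as follows. The `page_decodes` callback is a hypothetical stand-in for the CRC-checked page decode, and the frequency units are illustrative.

```python
def candidate_true_freqs(f_obs, f_samp, k_max=3):
    # Discrete set of spatial frequencies that alias to f_obs when the
    # reserved blocks sample the fringe pattern at rate f_samp.
    cands = set()
    for k in range(k_max + 1):
        for f in (k * f_samp + f_obs, k * f_samp - f_obs):
            if f >= 0:
                cands.add(f)
    return sorted(cands)

def blind_dealias(f_obs, f_samp, page_decodes):
    # Retry the page decode with each candidate until an integrity check
    # (e.g., CRC) passes, or the candidate set is exhausted.
    for f in candidate_true_freqs(f_obs, f_samp):
        if page_decodes(f):
            return f
    return None

f_true = 1.3                 # cycles per sample-grid unit, illustrative
f_samp = 1.0
f_obs = f_true % f_samp      # aliased observation: 0.3
found = blind_dealias(f_obs, f_samp, lambda f: abs(f - f_true) < 1e-9)
```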
Spatial Wavefront Demodulation (Aka LO Fringe Demodulation)
As explained above, given an estimate Δφ̂ of the undesired spatial wavefront modulation, a phase factor corresponding to Δφ̂ may be demodulated (removed) from detected holographic images using a suitable processor, e.g., as in operations 558 and 564 of the spatial wavefront demodulation process shown in
Local oscillator spatial wavefront demodulation may be performed at different stages of acquiring and processing the detected holographic images. For instance, demodulation can be performed at any stage while the images are still at the detector resolution (as opposed to the SLM resolution), e.g., before coarse alignment determination (if any); after coarse alignment but before reserved block correlation operations; and/or after reserved block correlation operations. In practice, it is advantageous to perform detector domain local oscillator fringe demodulation before any coarse alignment or reserved block correlation operations, as those operations can benefit from fringe demodulation.
For quadrature homodyne detection, demodulated images I′A and I′B can be expressed in terms of the estimate Δφ̂ of the spatial wavefront modulation and the raw images IA and IB:
I′A = IA cos(−Δφ̂) + IB sin(−Δφ̂)  (22)

I′B = −IA sin(−Δφ̂) + IB cos(−Δφ̂)
The demodulated images I′A and I′B may be processed using the desired quadrature homodyne detection or resampled quadrature homodyne detection process. (If desired, raw images recovered using n-rature homodyne detection can be combined to form a pair of in-phase and quadrature images that can be demodulated according to Eq. (22). Combining the n-rature homodyne detection images before demodulation reduces the memory and processing burdens associated with the spatial wavefront demodulation process.)
The effect of the −Δφ̂ phase factor in each term in Eq. (22) is to largely cancel the existing Δφ within the raw images. For tilt fringe components, this is analogous to baseband demodulation of the carrier frequency from a frequency-modulated signal. The demodulated images retain a phase factor of the difference Δφ−Δφ̂. However, even if this difference is relatively large (i.e., the estimate is poor), it is likely to reduce the frequency of remaining fringe components, thus reducing the likelihood of fringe pattern aliasing in subsequent processing.
For n-rature homodyne detection (including resampled n-rature homodyne detection, discussed below), the demodulation process is similar. For n=3:
I′A = IA cos(−Δφ̂) + IB cos(120°−Δφ̂) + IC cos(240°−Δφ̂)  (23)

I′B = IA cos(−120°−Δφ̂) + IB cos(−Δφ̂) + IC cos(120°−Δφ̂)

I′C = IA cos(−240°−Δφ̂) + IB cos(−120°−Δφ̂) + IC cos(−Δφ̂)
Different descriptions of these processes, perhaps including constant phase offsets or trigonometric functions, may be formulated without departing from the scope of the invention.
The number of images may also be changed in the demodulation process. An example of this principle is in the combination of three or more n-rature detection images into two images constituting a quadrature image pair, which may subsequently be processed using quadrature homodyne detection techniques instead of n-rature homodyne detection techniques. For n=3:
I′A = IA cos(−Δφ̂) + IB cos(120°−Δφ̂) + IC cos(240°−Δφ̂)  (24)

I′B = IA cos(90°−Δφ̂) + IB cos(90°+120°−Δφ̂) + IC cos(90°+240°−Δφ̂)
Consolidating the n n-rature detection images into two images preserves the common intensity noise cancellation of n-rature homodyne detection while reducing memory use and computation in the later stages, since the number of images has been reduced. Equations (23) and (24) may be generalized to demodulate Δφ̂ from any starting number of images n into any finishing number of images m:
where the image subscripts A, B, C, etc. . . . have been replaced by the numbers 0, 1, 2, etc. . . . , (e.g., I′B becomes I′1). n is the initial number of images, and m is the final number. The phase shift values, θn and θm, may be given by:
In this manner, image sets may be converted from quadrature or n-rature of any n to quadrature or n-rature of any m.
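A sketch of this n-to-m conversion follows. It assumes the generalized combination I′_j = Σ_k I_k cos(θ_k − θ′_j − Δφ̂), with θ_k = 360°k/n and, matching Eqs. (23)-(24), θ′_j = 360°j/m for an m-rature output but a 90° offset for a quadrature pair (m = 2); the exact sign and offset conventions may differ in a given implementation.

```python
import numpy as np

def demodulate(images, dphi_hat, m):
    """Convert n detector images into m demodulated images, removing the
    estimated spatial modulation dphi_hat in the process."""
    n = len(images)
    theta_n = 2 * np.pi * np.arange(n) / n
    # Quadrature pair uses a 90-degree offset; m-rature uses 360/m spacing.
    theta_m = (np.array([0.0, np.pi / 2]) if m == 2
               else 2 * np.pi * np.arange(m) / m)
    return [sum(I * np.cos(tk - tj - dphi_hat)
                for I, tk in zip(images, theta_n))
            for tj in theta_m]

# Synthetic n = 3 images: common intensity C plus fringes carrying a data
# phase of 0.7 rad and a spatially varying modulation (illustrative values).
x = np.linspace(0.0, 2 * np.pi, 128)
dphi = 0.3 * x                       # spatial wavefront modulation
C, S, phi_data = 5.0, 2.0, 0.7
imgs = [C + S * np.cos(phi_data + dphi - tk)
        for tk in 2 * np.pi * np.arange(3) / 3]
IpA, IpB = demodulate(imgs, dphi, m=2)
```

With a perfect estimate, the common intensity C cancels and the output quadrature pair is fringe-free: I′A = (3S/2)cos(φdata) and I′B = (3S/2)sin(φdata), constant across the page.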
Adaptive Optical Fringe Demodulation
Fringe demodulation may also be effected during detection, instead of or in addition to during post-processing, by physically changing the wavefront of the local oscillator and/or signal beam to more closely match each other. For instance, a beam steering device, such as a piezo-mounted mirror or other beam deflector, can be used to adjust a tilt component of the local oscillator's and/or signal beam's spatial wavefront. The quadratic component of the local oscillator's and/or signal beam's spatial wavefront can be adjusted with a zoom lens, SLM, deformable mirror, or other suitable optical element. Similarly, an adaptive optics element, such as an SLM or a deformable mirror, can be used to adjust an arbitrary component of Δφ as determined, e.g., by Zernike or Seidel coefficients.
Refocusing by Beam Propagation
If desired, the quadratic phase error associated with error(s) in the position of the detector with respect to the focal plane of the hologram can be estimated and/or compensated using a suitable beam propagation algorithm. It is well known that the digitally-sampled complex optical field distribution at one transverse plane can be algorithmically transformed to that of another transverse plane by means of a beam propagation algorithm. Thus, in cases where a focus error exists, the out-of-focus detected image can be converted to an in-focus image if the focus error is known. In the case where the focus error is not known, the controller may iteratively try beam propagation refocusing using different propagation distances, selecting the distance that optimizes a given figure of merit, such as the SNR of the detected signal.
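The iterative refocusing described above can be sketched with an angular-spectrum propagator, one standard beam propagation algorithm. The intensity-variance sharpness metric and all numerical values are illustrative assumptions, not the method prescribed by the text (which suggests, e.g., SNR).

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    # Transform a sampled complex field at one transverse plane into the
    # field a distance dz away (angular-spectrum beam propagation).
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=dx), np.fft.fftfreq(ny, d=dx))
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * dz), 0.0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

def sharpness(field):
    # Simple figure of merit: variance of the intensity image.
    inten = np.abs(field) ** 2
    return float(np.mean((inten - inten.mean()) ** 2))

def refocus(field, wavelength, dx, dz_candidates):
    # Try each candidate propagation distance; keep the sharpest result.
    return max(dz_candidates, key=lambda dz: sharpness(
        angular_spectrum_propagate(field, dz, wavelength, dx)))

# Illustrative check: a Gaussian spot defocused by 10 mm, then refocused.
wavelength, dx, dz = 405e-9, 10e-6, 10e-3
coord = (np.arange(64) - 32) * dx
X, Y = np.meshgrid(coord, coord)
spot = np.exp(-(X**2 + Y**2) / (30e-6) ** 2).astype(complex)
blurred = angular_spectrum_propagate(spot, dz, wavelength, dx)
restored = angular_spectrum_propagate(blurred, -dz, wavelength, dx)
```

Propagating forward and then backward by the same distance recovers the original field, and the candidate search correctly identifies the refocusing distance for the defocused spot.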
Resampling and Enhanced Resampling of Phase Quadrature Multiplexed Images
In a typical holographic data storage system, the resolution of the detector may exceed the resolution of the data page. In the system 100 of
The process of
In the process of
Conversely,
Upsampling from the reserved block resolution [rb] to the detector page/SLM resolution [SLM] is computationally simpler than upsampling to detector resolution [det] because the reserved blocks may be positioned on a rectilinear grid within the SLM image. Upsampling may thus be performed by relatively simple processes, such as inserting an integral number of values in each dimension, e.g., using a bi-linear interpolation algorithm. Upsampling from the reserved block to the detector resolution, by contrast, involves upsampling reserved block information that does not necessarily lie on a rectilinear grid due to real-world image distortions. In addition, the upsampling ratio may be a non-integer that varies throughout the image. Thus the process of upsampling from the reserved block resolution [rb] to the SLM resolution [SLM] may be both simpler and more accurate than upsampling from the reserved block resolution [rb] to the detector resolution [det].
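The simple rectilinear-grid case described above can be sketched as separable linear interpolation by an integer factor:

```python
import numpy as np

def bilinear_upsample(grid, factor):
    """Upsample a 2-D array of reserved-block values by an integer factor
    in each dimension using separable linear (bi-linear) interpolation.
    This mirrors the simple case in the text, where reserved blocks lie
    on a rectilinear grid and the upsampling ratio is a constant integer."""
    rows, cols = grid.shape
    r_src = np.arange(rows)
    c_src = np.arange(cols)
    r_dst = np.linspace(0, rows - 1, factor * (rows - 1) + 1)
    c_dst = np.linspace(0, cols - 1, factor * (cols - 1) + 1)
    # Interpolate along rows first, then along columns.
    tmp = np.array([np.interp(r_dst, r_src, grid[:, c])
                    for c in range(cols)]).T
    return np.array([np.interp(c_dst, c_src, tmp[r, :])
                     for r in range(tmp.shape[0])])
```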
The enhanced resampling process shown in
Note that for n-rature homodyne detection, the processes shown in
Mathematics of Resampling and Enhanced Resampling
Resampling of the Quadrature Combined Image for quadrature homodyne detection or n-rature homodyne detection may be performed in the same manner as resampling in a direct detection channel. In this approach, the position of each SLM pixel image upon the detector is established by locating the positions of the reserved blocks within the page image. Resampling is then performed by choosing a set of detector pixel values I near to the SLM pixel image (e.g., the nearest 4×4 window of detector pixels), and applying a set of resampling coefficients w, i.e.,
d̂ = Iw  (27)
where d̂ is the estimated data value d of the SLM pixel image, e.g., d∈{−1,+1} for BPSK data. w may be chosen to minimize the squared error between d̂ and the actual data, d, over many detection cases. Furthermore, differing w coefficient sets may be optimized and applied for differing alignment cases; e.g., 256 different w coefficient sets could be used, corresponding to differing 2D fractional pixel alignment cases of the 4×4 window of detector pixels with respect to the SLM pixel image.
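A sketch of training and applying the resampling coefficients w by least squares, per equation (27); the 4×4 window size follows the example in the text, and a full implementation would keep one trained w per fractional-alignment case (e.g., 256 sets, keyed by the quantized 2D alignment):

```python
import numpy as np

def train_resampling_weights(windows, data):
    """Least-squares fit of resampling coefficients w minimizing
    ||X w - d||^2 over many detection cases (equation (27)).
    windows: (N, 16) flattened 4x4 detector-pixel windows
    data:    (N,) known SLM pixel values (e.g., +/-1 for BPSK)."""
    w, *_ = np.linalg.lstsq(windows, data, rcond=None)
    return w

def resample(window, w):
    """Estimate one SLM pixel value from its 4x4 detector window: d = I w."""
    return window.ravel() @ w
```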
For enhanced resampling, the detector pixel value sets are combined before the resampling coefficients are applied:

d̂ = [cA·Ã + cB·B̃ + cC·C̃ + . . . ]w  (28)
where cA, cB, . . . represent the combination coefficients for the respective detector pixel value sets. In one embodiment, these combination coefficients may be determined by the cosine projection of the data page upon the local oscillator used to detect it as measured by the reserved block correlation peak strengths, e.g.,
where PA, PB, . . . are the Upsampled A Peaks, B Peaks, etc., for the corresponding SLM pixel image as determined from the reserved block correlation operations on the corresponding IA, IB, . . . detector images. In another embodiment, the normalizing denominators in the cosine projections may be omitted, e.g.,
cA = PA, cB = PB, . . .  (30)
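Equations (28) and (30) together can be sketched as follows, using the unnormalized combination coefficients of equation (30); the normalized cosine-projection form of equation (29) is not reproduced here:

```python
import numpy as np

def enhanced_resample(windows, peaks, w):
    """Enhanced resampling of one SLM pixel image (equation (28)) with
    unnormalized combination coefficients c_A = P_A, c_B = P_B, ...
    (equation (30)).

    windows: (n, 4, 4) detector-pixel windows, one per n-rature image
    peaks:   (n,) upsampled reserved-block peak strengths for this pixel
    w:       (16,) resampling coefficient set
    """
    # combined = c_A*A~ + c_B*B~ + ... , summed over the image index
    combined = np.tensordot(peaks, windows, axes=1)
    return combined.ravel() @ w
```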
In still other embodiments, the combination coefficients may be determined from the cosine projections of the reserved blocks from a different data page, e.g., when performing phase quadrature holographic multiplexing as disclosed above. For example, the data value of the corresponding SLM pixel in the I (in phase) data page could be estimated by applying equations (28) and (29) as presented, and then the data value of the corresponding SLM pixel image in the Q (quadrature) image could be estimated by using different combination coefficients, e.g.,
where φQ is the known phase difference between the I and Q images, e.g., preferably 90°. In this manner, separate correlation and upsampling operations need not be performed for the reserved block patterns in the Q image; instead, the entire image combination and resampling process may be accomplished using the reserved block patterns of the I image.
Experimental Coherent Channel Modulation and Detection
This section includes results of an experimental demonstration of the storage and recovery of holographic data at an areal density of 2.0 Tbit/in². This demonstration is but one example of the present technology and should not be taken to limit the scope of this disclosure or the appended claims.
This demonstration involved reading phase quadrature holographic multiplexed data pages using n-rature homodyne detection with n=4. For this demonstration, 220 holograms were recorded in each “book” (spatial location) using angle multiplexing. A grid of 6×9 books at a pitch of 304 μm was recorded using polytopic multiplexing, yielding a raw areal bit density of 2.004 Tbit/in². Dynamic aperture multiplexing was also employed. The holograms were recorded in a 1.5 mm thick layer of photopolymer media with a total M/# (dynamic range) of 173. Each of the 220 holograms was recorded using sequential PQHM recording, and thus contains two separate data pages (an in-phase page and a quadrature page) separated in phase by 90°. Among the 220 holograms were four Δφ calibration holograms, also recorded using PQHM. The holograms were recovered using n-rature homodyne detection with n=4, then demodulated and combined using the techniques disclosed herein.
Fourth, the predetermined fringe demodulation pattern, Δφ̂PRE, shown in
Fifth, a subset of the reserved blocks in the demodulation page was chosen and cross-correlations were performed to establish spatially-sampled peak strengths and phase measurements. A least-squares fit of these measurements was performed to produce Δφ̂LS, a quadratic wavefront best fitting the sampled data. The best-fit quadratic wavefront is shown in
Sixth, the best-fit quadratic wavefront shown in
Δφ̂Demod = Δφ̂PRE + Δφ̂LS + Δφ̂RB  (32)
In the demonstration, this demodulation wavefront, derived from the calibration hologram at angle 10, was used to demodulate the data holograms at angles 0 through 54, excluding the calibration hologram at angle 10 itself. To recover the hologram at angle 2, for example, the reference beam was aligned to the data hologram at angle 2, and four n-rature data detector images of the data hologram were exposed. The local oscillator image was subtracted (pixel-wise) from each of the four data detector images. Then the measured fringe demodulation pattern, Δφ̂Demod, was demodulated from the four data detector images. The demodulation was performed using n-rature to quadrature fringe demodulation to yield two data images constituting a quadrature image pair.
The two data images were then resampled to produce ÊI and ÊQ images, containing estimates of the optical field for each recorded pixel in the I and Q data pages, respectively, using resampled quadrature homodyne detection in combination with PQHM recovery. During the resampling process, the I and Q reserved blocks were jointly detected in the two data images and used to produce a wavefront Δφ̂dat representing the fringe pattern still present in the data hologram after removal of the demodulation pattern, Δφ̂Demod. This fringe pattern Δφ̂dat is shown in
In a device storing data, the ÊI and ÊQ images could subsequently be used to generate soft decision estimates of the state of each pixel, which would be fed into a soft-decision decoder to reduce the bit error rate of the recorded user data to an acceptably low level, e.g., 10⁻¹⁵. The pixels in the ÊI and ÊQ images may, however, also be used to generate a hard decision about the binary state of each pixel by simple threshold detection. Comparing this decision to the true value produces a raw bit error rate, which is diagnostic of the quality of the recording channel and may be used to determine the amount of forward error correction required for the soft-decision decoder. Additionally, bit error maps may be produced showing the location of erroneously detected pixels within the data pages. Bit error rates may be converted into equivalent signal-to-noise ratios, e.g., denoted BSNR and defined as the SNR in decibels of an additive white Gaussian noise (AWGN) channel achieving the same bit error rate. In other words, BSNR is an SNR back-calculated from the bit error rate rather than from the 1/0 separation, as with normal SNR.
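The BSNR back-calculation can be sketched as follows, assuming the common AWGN threshold-detection mapping BER = ½·erfc(s/√2) for linear SNR s (the text does not fix the exact mapping) and reporting 20·log10(s) in decibels, treating s as an amplitude ratio:

```python
import math

def bsnr_db(ber):
    """Back-calculate an equivalent SNR in dB from a measured bit error
    rate, assuming (hypothetically) BER = 0.5 * erfc(s / sqrt(2)) for an
    AWGN channel with hard threshold detection. Inverted by bisection."""
    if not 0.0 < ber < 0.5:
        raise ValueError("ber must lie in (0, 0.5)")
    f = lambda s: 0.5 * math.erfc(s / math.sqrt(2)) - ber
    lo, hi = 0.0, 40.0  # f(lo) > 0 and f(hi) < 0 bracket the root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:   # BER still too high -> need larger SNR
            lo = mid
        else:
            hi = mid
    return 20.0 * math.log10(0.5 * (lo + hi))
```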
Additionally, a portion of the laser light was diverted through a diffuser, and then blended with the signal beam in order to simulate coherent optical noise with a broad angular spectrum. The ratio of optical signal to optical noise power was varied across a test range, and detector images were collected and processed for all three detection variants.
Terminology
The terms and phrases as indicated in quotation marks (“ ”) in this section are intended to have the meaning ascribed to them in this Terminology section applied to them throughout this document, including in the claims, unless clearly indicated otherwise in context. Further, as applicable, the stated definitions are to apply, regardless of the word or phrase's case, to the singular and plural variations of the defined word or phrase.
References in the specification to “one embodiment”, “an embodiment”, “another embodiment”, “a preferred embodiment”, “an alternative embodiment”, “one variation”, “a variation” and similar phrases mean that a particular feature, structure, or characteristic described in connection with the embodiment or variation is included in at least one embodiment or variation of the invention. The phrases “in one embodiment”, “in one variation” and similar phrases, as used in various places in the specification, are not necessarily meant to refer to the same embodiment or the same variation.
The term “approximately,” as used in this specification and appended claims, refers to plus or minus 10% of the value given.
The term “about,” as used in this specification and appended claims, refers to plus or minus 20% of the value given.
The terms “generally” and “substantially,” as used in this specification and appended claims, mean mostly, or for the most part.
While various inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
The above-described embodiments can be implemented in any of numerous ways. For example, embodiments of designing and making the technology disclosed herein may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.
Further, it should be appreciated that a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, or a tablet computer. Additionally, a computer may be embedded in a device not generally regarded as a computer but with suitable processing capabilities, including a Personal Digital Assistant (PDA), a smart phone or any other suitable portable or fixed electronic device.
Also, a computer may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards, and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in other audible format.
Such computers may be interconnected by one or more networks in any suitable form, including a local area network or a wide area network, such as an enterprise network, and intelligent network (IN) or the Internet. Such networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks or fiber optic networks.
The various methods or processes (e.g., of designing and making the coupling structures and diffractive optical elements disclosed above) outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine.
In this respect, various inventive concepts may be embodied as a computer readable storage medium (or multiple computer readable storage media) (e.g., a computer memory, one or more floppy discs, compact discs, optical discs, magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other non-transitory medium or tangible computer storage medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the invention discussed above. The computer readable medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present invention as discussed above.
The terms “program” or “software” are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of embodiments as discussed above. Additionally, it should be appreciated that according to one aspect, one or more computer programs that when executed perform methods of the present invention need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present invention.
Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments.
Also, data structures may be stored in computer-readable media in any suitable form. For simplicity of illustration, data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a computer-readable medium that convey relationship between the fields. However, any suitable mechanism may be used to establish a relationship between information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationship between data elements.
Also, various inventive concepts may be embodied as one or more methods, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.”
The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
As used herein in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e. “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the claims, shall have its ordinary meaning as used in the field of patent law.
As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
In the claims, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedure, Section 2111.03.
This application is a continuation of, and claims priority from, co-pending PCT Application No. PCT/US2015/028356, filed Apr. 29, 2015 and entitled “Methods and Apparatus for Coherent Holographic Data Channels,” which claims priority to U.S. Application No. 61/986,083, filed Apr. 29, 2014, and entitled “N-rature Homodyne Detection.” The above applications are incorporated herein by reference, in their entireties. This application is also a continuation-in-part of, and claims priority from, co-pending U.S. application Ser. No. 14/484,060, filed Sep. 11, 2014 and entitled “Methods and Devices for Coherent Optical Data Detection and Coherent Data Channel Modulation,” which claims priority from the following U.S. patent applications: U.S. Application No. 61/876,725, filed Sep. 11, 2013 and entitled “Multi-Terabyte Holographic Data Storage Systems”; and U.S. Application No. 61/941,974, filed Feb. 19, 2014, and entitled “Reflective Holographic Storage Medium.” The above applications are incorporated herein by reference, in their entireties.
Number | Date | Country
---|---|---
61876725 | Sep 2013 | US
61941974 | Feb 2014 | US
61986083 | Apr 2014 | US

Relation | Number | Date | Country
---|---|---|---
Parent | PCT/US2015/028356 | Apr 2015 | US
Child | 14484060 | | US

Relation | Number | Date | Country
---|---|---|---
Parent | 14484060 | Sep 2014 | US
Child | 14831291 | | US