This application relates generally to projection systems and methods of driving a projection system.
Digital projection systems typically utilize a light source and an optical system to project an image onto a surface or screen. The optical system includes components such as mirrors, lenses, waveguides, optical fibers, beam splitters, diffusers, spatial light modulators (SLMs), and the like. Some projection systems are based on SLMs that implement spatial amplitude modulation. In such a system, the light source provides a light field that embodies the brightest level that can be reproduced on the image, and light is attenuated (e.g., discarded) in order to create the desired scene levels. In such a configuration, light that is not projected to form any part of the image is attenuated or discarded. Alternatively, a projection system may be configured such that light is steered instead of attenuated. Such systems may operate by generating a complex phase signal and providing the signal to a modulator, such as a phase light modulator (PLM).
Various aspects of the present disclosure relate to circuits, systems, and methods for projection display using phase light modulation to generate a precise and accurate reproduction of a target image.
In one exemplary aspect of the present disclosure, there is provided a projection system comprising: a light source configured to emit a light in response to an image data; a phase light modulator configured to receive the light from the light source and to apply a spatially-varying phase modulation on the light, thereby to steer the light and generate a projection light; and a controller configured to control the light source, control the phase light modulator, and iteratively for each of a plurality of subframes within a frame of the image data: determine a reconstruction field, map the reconstruction field to a modulation field, scale an amplitude of the modulation field, map the modulation field to a subsequent-iteration reconstruction field, and provide a phase control signal based on the modulation field to the phase light modulator.
In another exemplary aspect of the present disclosure, there is provided a method of driving a projection system comprising emitting a light by a light source, in response to an image data; receiving the light by a phase light modulator; applying a spatially-varying phase modulation on the light by the phase light modulator, thereby to steer the light and generate a projection light; and iteratively, with a controller configured to control the light source and the phase light modulator, for each of a plurality of subframes within a frame of the image data: determining a reconstruction field, mapping the reconstruction field to a modulation field, scaling an amplitude of the modulation field, mapping the modulation field to a subsequent-iteration reconstruction field, and providing a phase control signal based on the modulation field to the phase light modulator.
These and other more detailed and specific features of various embodiments are more fully disclosed in the following description, reference being had to the accompanying drawings, in which:
This disclosure and aspects thereof can be embodied in various forms, including hardware or circuits controlled by computer-implemented methods, computer program products, computer systems and networks, user interfaces, and application programming interfaces; as well as hardware-implemented methods, signal processing circuits, memory arrays, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), and the like. The foregoing summary is intended solely to give a general idea of various aspects of the present disclosure, and does not limit the scope of the disclosure in any way.
In the following description, numerous details are set forth, such as circuit configurations, timings, operations, and the like, in order to provide an understanding of one or more aspects of the present disclosure. It will be readily apparent to one skilled in the art that these specific details are merely exemplary and not intended to limit the scope of this application.
Moreover, while the present disclosure focuses mainly on examples in which the various circuits are used in digital projection systems, it will be understood that this is merely one example of an implementation. It will further be understood that the disclosed systems and methods can be used in any device in which there is a need to project light; for example, cinema, consumer and other commercial projection systems, heads-up displays, virtual reality displays, and the like.
In comparative projection systems based on a PLM, generating the complex phase signal has presented challenges. For example, comparative algorithms may create images that appear similar to a target image but have an excess or deficit of light at unpredictable locations, or may have other artifacts which ruin the quality of the reproduction. If the comparative projection systems are dual-modulation systems, light deficits, unless addressed by the algorithm, may only be overcome by increasing the diffuse illumination. This may be prohibitive due to cost and/or efficiency considerations. The present disclosure provides for phase modulation image projection systems involving single (i.e., phase-only) or multiple stages of modulation, which implement features such as algorithms that can create accurate reconstructions of the target image and, in the case of multiple modulation stages, create a reproduction that is appropriate for compensation by the downstream modulator(s).
The projection system 100 further includes a controller 114 configured to control various components of the projection system 100, such as the light source 101 and/or the PLM 105. In some implementations, the controller 114 may additionally or alternatively control other components of the projection system 100, including but not limited to the illumination optics 103, the first projection optics 107, and/or the second projection optics 111. The controller 114 may be one or more processors, such as a central processing unit (CPU) of the projection system 100. The illumination optics 103, the first projection optics 107, and the second projection optics 111 may respectively include one or more optical components such as mirrors, lenses, waveguides, optical fibers, beam splitters, diffusers, and the like.
The light source 101 may be, for example, a laser light source and the like. Generally, the light source 101 is any light emitter which emits, e.g. coherent, light. In some aspects of the present disclosure, the light source 101 may comprise multiple individual light emitters, each corresponding to a different wavelength or wavelength band. The light source 101 emits light in response to an image signal provided by the controller 114. The image signal includes image data corresponding to a plurality of frames to be successively displayed. The image signal may originate from an external source in a streaming or cloud-based manner, may originate from an internal memory of the projection system 100 such as a hard disk, may originate from a removable medium that is operatively connected to the projection system 100, or combinations thereof.
The filter 109 may be provided to mitigate effects caused by internal components of the projection system 100. In some systems, the PLM 105 (which will be described in more detail below) may include a cover glass and cause reflections, device switching may temporarily cause unwanted steering angles, and various components may cause scattering. To counteract this and decrease the floor level of the projection system 100, the filter 109 may be a Fourier (“DC”) filter component configured to block a portion of the fourth light 108. Thus, the filter 109 may increase contrast by reducing the floor level from light near zero angle, which will correspond to such elements as cover-glass reflections, stroke transition states, and the like. This DC block region may be actively used by the algorithm to prevent certain light from reaching the screen. In some aspects of the present disclosure, the filter 109 prevents the undesired light from reaching the screen by steering said light to a light dump located outside the active image area, in response to control from the controller 114.
The liquid crystal layer 240 is disposed between the first electrode layer 220 and the second electrode layer 230, and includes a plurality of liquid crystals 241. The liquid crystals 241 are particles which exist in a phase intermediate between a solid and a liquid; in other words, the liquid crystals 241 exhibit a degree of directional order, but not positional order. The direction in which the liquid crystals 241 tend to point is referred to as the "director." The liquid crystal layer 240 modifies incident light entering from the cover glass 250 based on the birefringence Δn of the liquid crystals 241, which may be expressed as the difference between the refractive index in a direction parallel to the director and the refractive index in a direction perpendicular to the director. From this, the maximum optical path difference may be expressed as the birefringence multiplied by the thickness of the liquid crystal layer 240. This thickness is set by the spacer 260, which seals the PLM 200 and ensures a set distance between the cover glass 250 and the silicon backplane 210. The liquid crystals 241 generally orient themselves along electric field lines between the first electrode layer 220 and the second electrode layer 230.
The yoke 321 may be formed of or include an electrically conductive material so as to permit a biasing voltage to be applied to the mirror plate 322. The mirror plate 322 may be formed of any highly reflective material, such as aluminum or silver. The electrodes 330 are configured to receive a first voltage and a second voltage, respectively, and may be individually addressable. Depending on the values of a voltage on the electrodes 330 and a voltage (for example, the biasing voltage) on the mirror plate 322, a potential difference exists between the mirror plate 322 and the electrodes 330, which creates an electrostatic force that operates on the mirror plate 322. The yoke 321 is configured to allow vertical movement of the mirror plate 322 in response to the electrostatic force. The equilibrium position of the mirror plate 322, which occurs when the electrostatic force and a spring-like force of the yoke 321 are equal, determines the optical path length of light reflected from the upper surface of the mirror plate 322. Thus, individual ones of the plurality of controllable reflective elements are controlled to provide a number (as illustrated, three) of discrete heights and thus a number of discrete phase configurations or phase states. As illustrated, each of the phase states has a flat profile. In some aspects of the present disclosure, the electrodes 330 may be provided with different voltages from one another so as to impart a tilt to the mirror plate 322. Such tilt may be utilized with a light dump of the type described above.
The PLM 300 may be capable of high switching speeds, such that the PLM 300 switches from one phase state to another on the order of tens of μs, for example. In order to provide for a full cycle of phase control, the total optical path difference between a state where the mirror plate 322 is at its highest point and a state where the mirror plate 322 is at its lowest point should be approximately equal to the wavelength λ of incident light. Thus, the height range between the highest point and the lowest point should be approximately equal to λ/2.
Regardless of which particular architecture is used for the PLM 105, it is controlled by the controller 114 to take particular phase configurations on a pixel-by-pixel basis. Thus, the PLM 105 utilizes an array of the respective pixel elements, such as a 960×540 array. The number of pixel elements in the array may correspond to the resolution of the PLM 105. Due to the beam-steering capabilities of the PLM 105, light may be steered to any location on the reconstruction image plane. The reconstruction image plane is not constrained to the same pixel grid as the PLM 105. The reconstruction image plane may be located anywhere between the PLM 105 and the first projection optics 107. In a dual-modulation configuration, for example, the reconstruction image is imaged onto the secondary modulator 105′ through the first projection optics 107. As the PLM 105 is capable of a fast response time, high-resolution moving images may be generated on the reconstruction image plane. The operation of the PLM 105 may be affected by the data bandwidth of the projection system 100, stroke quantization of the PLM 105, and/or the response time of the PLM 105. The maximum resolution may be determined by the point-spread function (PSF) of the light source 101 and by parameters of various optical components in the projection system 100. Because the PLM 105 in accordance with the present disclosure is capable of a fast response time, multiple phase configurations can be presented in succession for a single frame, which are then integrated by the human eye into a high-quality image.
The fast response time of the PLM 105 may be leveraged by a method that uses an iterative back-and-forth wave-propagation loop to estimate the PLM phase configuration to reconstruct a target light field (e.g., a target image). The iterative back-and-forth wave-propagation loop may be based on a loop as described in, for example, commonly-owned U.S. patent application Ser. No. 16/650,545, the entire contents of which are herein incorporated by reference. In the reference example, a random or quasi-random phase is used as an initialization seed for the iterative wave-propagation loop. By doing so, for the same target image in each subframe, the wave-propagation loop produces a different phase configuration that reconstructs an approximation of the target image. Due at least in part to the fast response time of the PLM 105, presenting these reconstructed images (subframes) in quick succession may lead to a temporally integrated image that may mitigate artifacts (e.g., when the PLM 105 transitions from one phase configuration to the next). Such a method may use a low-pass or band-pass angular filter (e.g., an algorithmic filter) selected for a particular balance between reconstruction quality and steering efficiency. In some implementations of such a method, substantial light may be missing from reconstructed image features. In certain applications (e.g., imaging) and/or for certain device architectures (e.g., dual-modulation), this missing light may lead to reduced efficacy. For example, in the case of dual-modulation, the primary modulator is only capable of attenuating the light field, not of adding energy to it. If this were counteracted by increasing the illumination power, the efficiency of the beam-steering modulation stage may be decreased and/or the cost of the illumination components may be increased. These effects may be avoided by providing a particular wave-propagation loop instead of an increase in illumination power.
The wave-propagation loop operates to establish a bidirectional mapping between the phasor field at the modulation plane M(x, y)=AM(x, y)∠ϕM(x, y) (also known as a "modulation field") and the phasor field at the reconstruction plane R(x′, y′)=AR(x′, y′)∠ϕR(x′, y′) (also known as a "reconstruction field"), where A represents the amplitude component and ∠ϕ represents the phase component. The variables x and y represent pixel coordinates. This bidirectional mapping may be any numerical wave propagation, including but not limited to Fresnel or Rayleigh-Sommerfeld methods. The mapping may be denoted by P(M(x, y))→R(x′, y′) and its respective converse P−1(R(x′, y′))→M(x, y), where P is the wave propagation function. In this example, the modulation plane refers to the plane of the PLM 105, which can only modulate phase, and the reconstruction plane refers to a plane where the reconstruction image is formed, which may be located anywhere between the PLM 105 and the first projection optics 107, i.e., optically downstream of the PLM 105. The reconstruction plane (or field) is located inside the projector (projector system). The reconstruction plane (or field) R(x′, y′) is located at an optical distance in the near field relative to the modulation plane (or field) M(x, y). In contrast, in conventional Gerchberg-Saxton algorithms, mapping does not occur between two complex planes but rather, for the same definition of the Fourier transform, between a complex plane and infinity, i.e., the far field. Mapping between the modulation plane and the reconstruction plane is more efficient, in terms of the amount of energy steered into the right locations of the reconstruction plane, than mapping between a complex plane and infinity. Definitions of near field and far field optical distances depend on the specific implementation, e.g., the type of PLM, design constraints, etc.
For example, in cinema projectors, the near field optical distance may be on the order of a few centimeters or a few tens of centimeters, while a far field optical distance may be on the order of a few meters. In conventional Gerchberg-Saxton algorithms, since mapping is performed via the Fourier transform, mapping is inaccurate in the near field. In an example of the present disclosure, the wave propagation function P mapping the modulation field to the reconstruction field is not a Fourier transform. In a dual-modulation configuration, the reconstruction image is imaged onto the secondary modulator 105′ through the first projection optics 107. In a single-modulation configuration, the reconstruction image is imaged directly onto the screen through the first projection optics 107 and the second projection optics 111.
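For illustration only, the bidirectional mapping P / P−1 described above may be sketched numerically using the angular-spectrum method, one valid near-field propagation. The function name, grid parameters, and the choice of the angular-spectrum method (rather than, e.g., Fresnel or Rayleigh-Sommerfeld) are assumptions of this sketch, not part of the disclosure:

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Angular-spectrum propagation of a complex field over a distance dz.

    Forward propagation (dz > 0) plays the role of P; a negative dz gives
    the converse mapping P^-1. `dx` is the pixel pitch of the sampling grid.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=dx)          # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function of free space; evanescent components (arg <= 0)
    # are suppressed rather than allowed to blow up.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function is unitary over the propagating band, propagating by dz and then by −dz recovers the original band-limited field; this invertibility is what lets the loop alternate between the modulation and reconstruction planes.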
The modulation field may then be calculated by back-propagating the reconstruction field according to the following expression (1):
P−1(R0(x′, y′)) → M0(x, y)  (1)
In expression (1), M0(x, y)=AM0(x, y)∠ϕM0(x, y). The modulation field may then be subject to additional processing to account for physical processes or PLM characteristics, such as phase stroke quantization. Before forward-propagating the modulation field to obtain its corresponding reconstruction field, its amplitude-component may be dropped, i.e., may be set to 1, leading to the following expression (2):
P(1∠ϕM0(x, y)) → R1(x′, y′)  (2)
In expression (2), R1(x′, y′)=AR1∠ϕR1. In this iteration, the amplitude-component of the reconstruction field is then typically replaced by the target field, and the cycle repeats again; that is, the resulting field is backwards-propagated resulting in its corresponding modulation field according to the following expression (3):
P−1(√I(x′, y′) ∠ϕR1(x′, y′)) → M1(x, y)  (3)
The corresponding modulation field from expression (3) is then subjected to additional processing similar to that described above, its amplitude-component is dropped, and so on. This iterative process repeats, thus forming a wave-propagation loop. This loop may be summarized as follows.
At operation 401, the amplitude, phase, and an index variable n (which may, for example, indicate the current iteration) are initialized for a frame of image data. For example, the amplitude AR0(x′, y′) is initialized to √I(x′, y′), the phase ϕR0(x′, y′) is initialized to some initial value (e.g., a value near the expected phase, a random or pseudo-random seed, etc.), and the index n is set to 0. The iterative wave-propagation loop is then performed, including several operations. At operation 402, the reconstruction field Rn(x′, y′) is set to AR0(x′, y′)∠ϕRn(x′, y′). Next, at operation 403, the reconstruction field is mapped to the modulation field using expression (1). Note that the subscript 0 in expression (1) corresponds to the index n, which is 0 for the first iteration in the loop. At operation 404, the amplitude component of the modulation field is set to a predetermined value. For example, the amplitude component of the modulation field may be set to 1. At operation 405, the resulting field is mapped to the reconstruction field for the next iteration using expression (2). In expression (2), the subscripts 0 on the left and 1 on the right indicate the index n, which is 0 for the first iteration in the loop and 1 for the next iteration in the loop. The loop is repeated for n=0 . . . N, where N is the number of iterations. In some examples N may be predetermined; in other examples, the number of iterations may be dynamically determined; that is, the iterative loop may be automatically terminated (e.g., when the reconstruction field achieves or exceeds a target quality). Thus, at operation 406, the index n is compared to the value N. If n<N, the index n is incremented at operation 407 and the loop begins again with operation 402. If n=N, the phase-component of the modulation field (as described above) is displayed on the PLM so as to apply a spatially-varying phase modulation to the second light 104.
The method then proceeds to the next frame through operation 408 and begins again at operation 401 for the new frame or subframe, depending on whether the current subframe being processed is the last subframe within a given frame.
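The loop of operations 401-407 can be sketched as follows. This is a minimal illustration under stated assumptions: the propagation pair P/P−1 is passed in as a pair of callables, and all function and variable names are hypothetical:

```python
import numpy as np

def phase_retrieval_loop(target_intensity, P, P_inv, n_iters=10, rng=None):
    """Sketch of the iterative wave-propagation loop (operations 401-407).

    Seeds the reconstruction phase randomly, back-propagates to the modulation
    plane, drops the amplitude (the PLM is phase-only), forward-propagates,
    and re-imposes the target amplitude on each iteration.
    """
    rng = np.random.default_rng() if rng is None else rng
    A_target = np.sqrt(target_intensity)                      # A_R0 = sqrt(I)
    phi_R = rng.uniform(0, 2 * np.pi, target_intensity.shape) # random seed
    for _ in range(n_iters):
        R = A_target * np.exp(1j * phi_R)    # op 402: impose target amplitude
        M = P_inv(R)                         # op 403: expression (1)
        M = np.exp(1j * np.angle(M))         # op 404: amplitude set to 1
        R_next = P(M)                        # op 405: expression (2)
        phi_R = np.angle(R_next)             # keep phase for the next iteration
    return np.angle(M)                       # phase configuration for the PLM
```

For a quick self-test, a plain FFT pair may stand in for P/P−1, although, as noted above, the disclosure's propagation is a near-field mapping rather than a Fourier transform.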
The wave-propagation loop described above may be modified to speed up convergence and/or increase the final quality of the reconstruction field. These effects may be realized by implementing a regularization factor, which adjusts the target amplitudes of the subsequent iteration with the feedback of the reconstruction error ε(x′, y′) from the current iteration. Regularization may provide improved reconstruction image quality at the cost of only a very small increase in computational complexity (e.g., corresponding to the overhead of the regularization factor). The reconstruction error for a given subframe n is given by the following expression (4):
ε(x′, y′) = √I(x′, y′) − AR,n(x′, y′)  (4)
A gain function γ(ε(x′, y′)) may also be defined using, as two examples, the following expressions (5a) or (5b):
γ(ε(x′, y′))=sign(ε(x′, y′))·|ε(x′, y′)|β (5a)
γ(ε(x′, y′))=β·G(|ε(x′, y′)|) (5b)
In expressions (5a) and (5b), β is a gain factor. In expression (5b), a blurring filter G (e.g., a Gaussian filter) is applied. The regularization operation may then be performed for the subsequent subframe n+1 according to the following expressions (6a) or (6b):
AR,n+1(x′, y′) = √I(x′, y′) + γ(εn(x′, y′))  (6a)
AR,n+1(x′, y′) = AR,n(x′, y′) + γ(εn(x′, y′))  (6b)
Herein, regularization using expression (6a) is referred to as a “first regularization” method and regularization using expression (6b) is referred to as a “second regularization” method.
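The regularization step of expressions (4), (5a), and (6a)/(6b) can be sketched as below. This is a sketch only: the function name and the default gain factor β are assumptions, and the amplitude term in the second regularization method is taken here to be the current subframe's reconstruction amplitude:

```python
import numpy as np

def regularize_target(I, A_Rn, beta=0.5, method="first"):
    """Feed the reconstruction error back into the next subframe's target.

    I     -- target intensity I(x', y')
    A_Rn  -- reconstruction amplitude of the current subframe
    beta  -- gain factor of expression (5a) (assumed default)
    """
    eps = np.sqrt(I) - A_Rn                    # expression (4)
    gain = np.sign(eps) * np.abs(eps) ** beta  # gain function, expression (5a)
    if method == "first":
        return np.sqrt(I) + gain               # first regularization, (6a)
    return A_Rn + gain                         # second regularization, (6b)
```

With β = 1, the gain reduces to the raw error, so the first method simply over-corrects the target amplitude by the amount the current subframe fell short.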
To implement regularization, the method described above may be modified as follows.
At operation 501, the amplitude, phase, and an index variable n (which may, for example, indicate an iteration) are initialized for a frame of image data. For example, the amplitude AR0(x′, y′) is initialized to √I(x′, y′), the phase ϕR0(x′, y′) is initialized to some initial value (e.g., a value near the expected phase, a random or pseudo-random seed, etc.), and the index n is set to 0. The iterative wave-propagation loop is then performed, including several operations. At operation 502, the reconstruction field Rn(x′, y′) is set to AR0(x′, y′)∠ϕRn(x′, y′). Next, at operation 503, the reconstruction field is mapped to the modulation field using expression (1). At operation 504, the amplitude component of the modulation field is set to a predetermined value. For example, the amplitude component of the modulation field is set to 1. At operation 505, the resulting field is mapped to the reconstruction field for the next iteration using expression (2). At operation 506, a regularization factor is applied using, for example, expression (6a) in the first regularization method or expression (6b) in the second regularization method. The loop is repeated for n=0 . . . N, where N is the number of iterations. Thus, at operation 507, the index n is compared to the value N. If n<N, the index n is incremented at operation 508 and the loop begins again with operation 502. If n=N (whether N is predetermined or dynamically determined as described above), the phase-component of the modulation field (as described above) is displayed on the PLM so as to apply a spatially-varying phase modulation to the second light 104. The method then proceeds to the next frame through operation 509 and begins again at operation 501 for the new frame or subframe, depending on whether the current subframe being processed is the last subframe within a given frame.
The effects of the wave-propagation loop and of regularization on convergence speed and image quality are illustrated in the accompanying drawings.
The convergence quality (represented by the y-axes in the accompanying drawings) may be assessed using a metric such as the peak signal-to-noise ratio (PSNR).
The wave-propagation loop with iterative regularization produces phase configurations that reproduce the relative intensities in the reconstructed light field. In some implementations, this may be produced under the assumption that the entirety of the illumination is steered to the primary modulator to make up the reconstruction light field, and the burden of dimming excess light is thus placed on the primary modulator. For certain applications (e.g., for high dynamic range image projection), the light field is dimmed to meet the limited contrast ratio of the primary modulator. This dimming may be achieved by providing a filter, by globally dimming the illumination, or by using a beam-steering dump.
A beam-steering dump may be implemented as part of the wave-propagation loop. In such an implementation, the beam-steering dump allows the wave-propagation loop to converge to a phase configuration that steers any excess energy into the dump region, while achieving the absolute intensity levels within the reconstruction image. Moreover, the beam-steering dump region operates as a float region inside which the values are unconstrained and free to fluctuate; therefore, the float region relaxes the constraints within the wave-propagation loop, which may allow convergence onto a solution.
After back-propagation, the resulting modulation field Mn(x, y) includes a phase component 1030, which may be represented as ϕMn(x, y), and an amplitude component 1040, which may be represented as AMn(x, y). Both the phase component 1030 and the amplitude component 1040 of the modulation field include an addressable region (1031 and 1041, respectively) and an unaddressable region (1032 and 1042, respectively). The unaddressable regions 1032 and 1042 are outside of the modulation region of the PLM 105 and therefore may be set to zero. At this point in the loop, values in the addressable region 1041 (the amplitude component 1040 of the modulation field) may be set to √Killum, the square root of the flat illumination intensity Killum (in nits), which may be a single value or a 2D map and may be treated as a constant. Setting the regions to these values imposes a relationship between the amplitude values at the modulation field and the amplitude values at the reconstruction field. This may enable the loop to automatically converge onto a reconstruction field in which the values within the active region 1021 approximate the target absolute levels, and the values within the dump region 1022 contain any excess energy due to (for example) the target image using less energy than is provided by the light source.
At the n=N iteration, the addressable region 1031 of the phase component 1030 of the modulation field is output as an interim phase-component 1050 (e.g., a phase configuration 1050) for the PLM 105. Otherwise, after forward-propagation the loop produces a reconstruction field Rn+1(x′, y′) which includes the interim phase-component 1050, which may be represented as ϕR,n+1(x′, y′), and an interim amplitude-component 1060, which may be represented as AR,n+1(x′, y′). The active region 1061 of the interim amplitude-component 1060 may then be subjected to regularization 1070, while the dump region 1062 of the interim amplitude-component 1060 may be left untouched. The regularization 1070 may use the first regularization method (i.e., using expression (6a)) or the second regularization method (i.e., using expression (6b)). The interim phase-component 1050 and the interim amplitude-component 1060, which includes the active region 1061 (after regularization) and the dump region 1062, are then used, respectively, as the initial reconstruction phase field 1010 and the initial reconstruction amplitude field 1020, which includes the active region 1021 and the dump region 1022, for the next iteration through the loop. During the forward-propagation and regularization, values within the various dump regions may be unprocessed and thus left untouched.
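One pass through the loop with a beam-steering dump can be sketched as follows, assuming boolean masks for the PLM's addressable region and for the active (image) region of the reconstruction plane; all names are hypothetical, and the propagation pair is again passed in as callables:

```python
import numpy as np

def loop_step_with_dump(R, P, P_inv, addressable, active, I_target, k_illum,
                        beta=0.5):
    """One iteration of the loop with a beam-steering dump region.

    `addressable` masks the PLM's modulation region (its complement is set to
    zero). `active` masks the image region of the reconstruction plane; its
    complement is the float/dump region and is left untouched, so excess
    energy is free to accumulate there.
    """
    M = P_inv(R)  # back-propagate reconstruction field to the modulation plane
    # Outside the addressable area the field is zeroed; inside, the amplitude
    # is pinned to the flat illumination level sqrt(K_illum).
    M = np.where(addressable, np.sqrt(k_illum) * np.exp(1j * np.angle(M)), 0.0)
    R_next = P(M)                                 # forward-propagate
    A, phi = np.abs(R_next), np.angle(R_next)
    eps = np.sqrt(I_target) - A                   # reconstruction error (4)
    gain = np.sign(eps) * np.abs(eps) ** beta     # gain function (5a)
    # Regularize the active region only; the dump region keeps its values.
    A = np.where(active, np.sqrt(I_target) + gain, A)
    return A * np.exp(1j * phi), np.angle(M)
```

The returned pair is the reconstruction field for the next iteration and the current phase configuration; at n = N the addressable part of the latter would be sent to the PLM.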
Dumping large amounts of light into the dump regions may, in some implementations, result in light bleeding into the image region and effectively degrading the image. Although the primary modulator functions to increase the effective contrast ratio of the beam-steered light field, the primary modulator also provides some light-dumping capabilities. Therefore, the wave-propagation loop may be adjusted such that it automatically converges onto a solution that dumps excess light by using both the dump regions and the primary modulator. Such an adjustment may include modifying the regularization expression such that it makes the wave-propagation loop aware of the primary modulator's capabilities (i.e., its contrast ratio), thereby allowing it to produce more accurate phase configurations that interchangeably use the dump region and the primary modulator, leading to better on-screen absolute levels.
A contrast-awareness function may be defined using the following expression (7):
In expression (7), c represents the contrast ratio of the primary modulator and the function clip (X, A, B) represents a clipping function which clips the value of X to be in the interval [A, B]. As a result, the reconstruction error for a given subframe n becomes the following expression (8):
ε(x′, y′) = √I(x′, y′) − √(C(x′, y′)·AR,n(x′, y′)²)  (8)
Expression (8) may be used for the error in the gain function of expression (5b) instead of expression (4). This will make the regularization operation of (6b) aware of the contrast ratio of the primary modulator and thus provide the above-noted benefits.
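Since expression (7) itself is not reproduced here, the sketch below takes the contrast-awareness map C(x′, y′) as a given input and only illustrates the clip helper referenced in the text and the modified error of expression (8); the function names are hypothetical:

```python
import numpy as np

def clip(X, A, B):
    """The clip(X, A, B) function referenced in the text:
    limit the values of X to the interval [A, B]."""
    return np.minimum(np.maximum(X, A), B)

def contrast_aware_error(I, A_Rn, C):
    """Contrast-aware reconstruction error per expression (8).

    C is the contrast-awareness map of expression (7), computed elsewhere
    from the primary modulator's contrast ratio c via the clip function;
    it is treated as a given input in this sketch.
    """
    return np.sqrt(I) - np.sqrt(C * A_Rn ** 2)
```

Where C = 1 this reduces to expression (4); where C < 1 the error is measured against a dimmed copy of the reconstruction, reflecting light the primary modulator can absorb itself.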
Some open-loop integration schemes feed the phase algorithm the same input image for every subframe and use the randomness of each individual solution to integrate into an image having less noise (e.g., a higher SNR). In some examples, up to 100 individual solutions (or more) may be generated, each for a corresponding subframe within a frame. Even if, for example, a diffraction-aware algorithm is used, each subframe tends to exhibit roll-off toward the edges of the frame and may exhibit randomness; thus, the resulting integrated light field exhibits roll-off and may present an image with increased blurring and reduced contrast. These effects occur in addition to the level issues (e.g., overshoot and undershoot) for individual solutions described above.
In practice, it may be difficult or impossible to predict the exact outcome of a diffraction-aware phase algorithm, especially when using a random phase distribution as an initial state. However, integrating many solutions for the same image may provide information regarding the phase algorithm itself. As such, it may be possible to use the results of previous integrations to correct for deficiencies in subsequent integrations, and thereby achieve a more accurate target image. This may be accomplished using a feedback loop acting on the intensities fed into the phase algorithm for each sub-frame. In one example, the feedback loop is applied outside of and independent from the phase algorithm itself, referred to as “outer-loop feedback” or OLFB. OLFB may be used in addition to other iterative methods such as wave-propagation or iterative regularization, or may be used by itself.
The OLFB method may be implemented by performing a series of operations for each integration within a subframe except for the first integration. One example of an OLFB method is illustrated in
E = c1(T) − c2(iLFS)  (9)
In expression (9), c1 and c2 are conditioning functions. Next, at operation 1403, the error signal is combined with the input intensities to generate new input intensities I′ for the phase algorithm. This may be represented by the following expression (10):

I′ = I + g(E)  (10)

In expression (10), g is a conditioning function. The conditioning functions c1 and c2 scale their respective arguments T and iLFS so they both have the same total energy. The conditioning function g applies a gain to the error to amplify the correction and speed convergence. The error signal E and the input intensities I′ are updated for each subframe n=1 . . . N, where N is the number of subframes or integrations in a frame, using results from the previous iteration or from multiple previous iterations. Thus, at operation 1404, the index n is compared to the value N. If n<N, the index n is incremented at operation 1405 and the loop begins again with operation 1402. If n=N, the method proceeds to the next frame through operation 1406 and begins again at operation 1401 for the new frame. The total number of iterations N may be chosen to balance image quality and computational requirements. In some implementations, N≥6. In one example, N=6.
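The OLFB loop of expressions (9) and (10) can be sketched in numpy as follows. Here `solve_phase` and `simulate_lightfield` are hypothetical callables standing in for the phase algorithm and its diffraction simulation; the conditioning functions c1 and c2 are both realized as an energy-matching scale, and g as a simple gain, per the description above:

```python
import numpy as np

def olfb_frame(T, solve_phase, simulate_lightfield, N=6, gain=1.0):
    """Outer-loop feedback (OLFB) over N subframes, per expressions (9)-(10).

    T: target intensities for the frame.
    solve_phase(intensities) -> phase configuration for one subframe.
    simulate_lightfield(phase) -> simulated intensity for that subframe.
    Returns the list of per-subframe phase solutions.
    """
    def condition(x, ref_energy):
        # c1 / c2: scale the argument so its total energy matches ref_energy.
        return x * (ref_energy / max(x.sum(), 1e-12))

    energy = T.sum()
    I_in = T.copy()           # input intensities for the first subframe
    iLFS = np.zeros_like(T)   # integrated lightfield simulation
    phases = []
    for n in range(1, N + 1):
        phi = solve_phase(I_in)
        phases.append(phi)
        iLFS += simulate_lightfield(phi)
        if n < N:
            # Expression (9): error between conditioned target and integration.
            E = condition(T, energy) - condition(iLFS, energy)
            # Expression (10): apply gained error to the input intensities.
            I_in = np.clip(I_in + gain * E, 0.0, None)
    return phases
```

The clip keeps the corrected intensities non-negative, a practical guard not spelled out in the text.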
The effects of the OLFB as compared to an open-loop method are illustrated in
The differences between
Moreover, OLFB generates a cleaner and more accurate image compared to the open-loop method. This can be seen by the lower amount of noise in curves 1703a and 1703b compared to the curves 1702a and 1702b. This is also illustrated in more detail in
Curve 1802 increases more quickly than curve 1801, and has a much higher maximum value. For example, the y-value of curve 1802 at x=6 is higher than the y-value of curve 1801 at x=100; this indicates that the PSNR of the OLFB method with only six integrations is higher than the PSNR of the open-loop method with 100 integrations. While not illustrated in
As noted above, because a PLM can only redirect light (as opposed to discarding it), achieving absolute intensities in the reconstructed image may involve dumping the excess energy by means other than the PLM. In one example, if a Fourier (DC) filter is present in the optical path (e.g., as or with the filter 109), all unmodulated light (i.e., light traveling straight through) after the PLM will be discarded. It is then possible to control the amount of light that reaches the reconstruction plane by limiting the area of the modulator that is used to create the image (referred to as the “active area”). The light in the inactive area of the modulator will then be discarded in the Fourier filter. Additionally or alternatively, it is possible to create a beam-steering dump region around the reconstructed image that will contain the excess energy.
The beam-steering dump region may be implemented using the OLFB method. This may be performed instead of the implementation of the dump region in the iterative regularization process described above. The OLFB method facilitates dumping because, while the diffraction efficiency is not known a priori, it may be calculated (either for the image and dump together, or for either individually) after solving for the first sub-frame and may be updated at each subsequent integration. The diffraction efficiency will generally remain constant across all integrations, despite the target being updated for each subframe. Thus, using the diffraction efficiency of the previous integration to scale the image portion leads to accurate results. In this implementation, the phase algorithm itself does not implement a dumping scheme and instead merely solves for a normalized square target.
In one example, the dump region is implemented as two bands of equal intensities above and below the reconstructed image, making the overall reconstruction window square and thus preserving the steering requirements (i.e., the maximum steering angle is unchanged). Moreover, providing a square reconstruction window reduces computational overhead for the two-dimensional FFTs used to simulate diffraction. A gap may be reserved between the dump region and the edges of the image region, thereby to prevent light in the dump from bleeding back onto the image portion when a PSF is used (e.g., for multi-modulation systems). In such an implementation, the image energy may first be scaled up by some estimated diffraction efficiency, and subsequently updated by the actual calculated efficiency value.
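A minimal sketch of building such a square target with two dump bands, assuming a wide image and simple parameter names of my own choosing (`total_energy` for the light budget, `eta` for the diffraction-efficiency estimate, `gap` for the reserved dark rows):

```python
import numpy as np

def build_square_target(image, total_energy, eta=1.0, gap=8):
    """Pad an image with two equal-intensity dump bands (above and below)
    so the reconstruction window is square, with a dark gap so dump light
    does not bleed onto the image. Names and defaults are assumptions.

    image:        target intensities, shape (h, w) with w > h.
    total_energy: full light budget that must be steered somewhere.
    eta:          diffraction-efficiency estimate used to pre-scale the
                  image portion; updated from the previous integration.
    gap:          rows left dark between the image and each dump band.
    """
    h, w = image.shape
    band = (w - h) // 2                    # rows per dump band (square window)
    img_part = image / eta                 # pre-compensate diffraction loss
    excess = max(total_energy - img_part.sum(), 0.0)
    dump_rows = max(band - gap, 1)
    level = excess / (2 * dump_rows * w)   # equal intensity in both bands
    target = np.zeros((h + 2 * band, w))
    target[:dump_rows, :] = level          # top dump band
    target[-dump_rows:, :] = level         # bottom dump band
    target[band:band + h, :] = img_part    # image in the middle
    return target
```

In use, `eta` would start as an estimate and be replaced by the efficiency calculated after the first subframe, per the OLFB scheme above.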
When the target image includes small and very bright features, diffraction-aware phase algorithms may introduce an artifact on screen that manifests as a flare around the object, with a large portion of the energy spreading both horizontally and vertically. Such flaring and the effects of the above algorithms thereon are illustrated in
Without limitation to any one particular mathematical theory, it is believed that the flare effect occurs when the propagation operator, which has a circular lens-like shape, reaches the edge of the modulator and is clipped into a rectangular shape. The brighter the object, the larger the area of the modulator that is allocated for the object; in other words, the larger the “lens.” This is true not only for lens-like algorithms, but also for diffraction-aware algorithms. When the “lens” for a bright object hits the aperture of the projection system, its propagation function is multiplied with the corresponding portion of the rectangular aperture. The resulting lightfield generated by the lens, which is the reconstructed object, is thus convolved with the Fourier transform of the rectangular aperture, which is a two-dimensional sinc function. Therefore, the flaring is apparent mostly on the horizontal and vertical axes.
To address the presence of flare, an active area may be defined on the modulator. The active area has a geometric shape whose Fourier transform does not produce strong vertical or horizontal flaring. For example, the active area may be a circle or an ellipse. The active area is used to create the image and, in some implementations, a portion of the dump region. The remaining area (“non-active area”) of the modulator is solely dedicated to steering light into a dump. To achieve this, the method first solves for the dump alone using either the entire modulator or only the non-active area. This may be done in advance (e.g., as part of the system's initialization). The diffraction-aware algorithm is modified to only use the active area of the modulator. The non-active area remains configured with the dump solution throughout the process. The effect of this is to simulate a modulator with the desired geometric shape.
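The active-area scheme can be sketched as follows: a boolean mask selects the largest inscribed ellipse, and the final phase configuration keeps the precomputed dump solution everywhere outside the active area (function names are illustrative):

```python
import numpy as np

def elliptical_active_area(shape):
    """Boolean mask for the largest ellipse inscribed in an (H, W) modulator."""
    H, W = shape
    y, x = np.ogrid[:H, :W]
    cy, cx = (H - 1) / 2, (W - 1) / 2
    return ((x - cx) / (W / 2)) ** 2 + ((y - cy) / (H / 2)) ** 2 <= 1.0

def combine_phase(phi_image, phi_dump, mask):
    """Active area carries the image solution; the non-active area keeps
    the precomputed dump solution throughout, per the scheme above."""
    return np.where(mask, phi_image, phi_dump)
```

Because the image-forming portion of the modulator is now elliptical rather than rectangular, its Fourier transform no longer produces strong sinc lobes along the horizontal and vertical axes.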
The shape is chosen such that its area is large enough to steer enough energy toward the image. For phase modulators with a 16:9 aspect ratio (i.e., wider than tall), the largest inscribed circle occupies approximately 40% of the area. As such, a circular active area may be appropriate for target images whose energy is less than 40% of the available energy. If the required energy is greater, an elliptical area may instead be chosen. The largest inscribed ellipse occupies approximately 78% of the area, and thus an elliptical active area may accommodate target images whose energy is up to 78% of the available energy.
In other implementations, flaring may be addressed through the use of an optical filter located in the Fourier plane of the beam-steered lightfield. This optical filter may resemble a crosshair and block all steering frequencies that are strictly vertical or horizontal. While this may block some frequencies that make up the target energy, it will also block the frequencies corresponding to the flaring effect. The phase configuration may be generated in such a way that strictly horizontal and vertical frequencies are not used to achieve the target image, thereby avoiding the blocking of target energy frequencies. In one example, this is implemented with an angular spectrum filter within the wave-propagation loop that generates the phase configuration.
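The crosshair-shaped angular spectrum filter can be sketched as a mask applied in the (centered) Fourier plane of the simulated field; the `half_width` parameter, the helper names, and the choice to zero a narrow band around each frequency axis are assumptions for illustration:

```python
import numpy as np

def crosshair_filter(shape, half_width=1):
    """Mask that zeroes strictly horizontal/vertical spatial frequencies:
    a band of +/- half_width bins around each frequency axis in the
    fftshifted (center-DC) Fourier plane."""
    H, W = shape
    mask = np.ones((H, W), dtype=float)
    c = half_width
    mask[H // 2 - c: H // 2 + c + 1, :] = 0.0  # band through horizontal axis
    mask[:, W // 2 - c: W // 2 + c + 1] = 0.0  # band through vertical axis
    return mask

def apply_angular_filter(field, mask):
    """Apply the mask inside a wave-propagation iteration."""
    F = np.fft.fftshift(np.fft.fft2(field))
    return np.fft.ifft2(np.fft.ifftshift(F * mask))
```

Applying this mask within each wave-propagation iteration trains the solution away from the blocked axes, so the physical crosshair filter removes flare without removing target energy.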
As noted above, the operation of the PLM 105 may be affected by factors including phase stroke quantization of the PLM 105. Some PLM architectures (e.g., those based on MEMS technology) provide a relatively low number of phase strokes. For an n-bit PLM device, the number of codewords is 2^n. Additionally, the phase stroke quantization, which converts phase values into codewords for the PLM phase configuration, may be non-uniform.
The wave-propagation loop may account for the PLM phase stroke quantization by subjecting the phase component of the modulation field to the same quantization process. In various implementations, the phase quantization of the modulation field may be performed in some or all of the iterations of the wave-propagation loop.
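A sketch of such a quantization step, supporting a possibly non-uniform table of 2^n phase strokes (the nearest-neighbor, wrap-aware rule is my own illustrative choice):

```python
import numpy as np

def quantize_phase(phi, codeword_phases):
    """Snap each phase value to the nearest realizable PLM phase stroke.

    phi:             wrapped phase values in [0, 2*pi).
    codeword_phases: array of the 2**n phase strokes the device can
                     realize (may be non-uniformly spaced).
    Returns (quantized phase, codeword indices).
    """
    # Distance on the phase circle, accounting for 2*pi wrap-around.
    diff = np.angle(np.exp(1j * (phi[..., None] - codeword_phases)))
    idx = np.abs(diff).argmin(axis=-1)
    return codeword_phases[idx], idx
```

Running this on the phase component of the modulation field in some or all wave-propagation iterations lets the loop converge to a solution the quantized device can actually display.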
As can be seen from
Systems, methods, and devices in accordance with the present disclosure may take any one or more of the following configurations.
(1) A projection system comprising a light source configured to emit a light in response to an image data; a phase light modulator configured to receive the light from the light source and to apply a spatially-varying phase modulation on the light, thereby to steer the light and generate a projection light; and a controller configured to control the light source, control the phase light modulator, and iteratively for each of a plurality of subframes within a frame of the image data: determine a reconstruction field, map the reconstruction field to a modulation field, scale an amplitude of the modulation field, map the modulation field to a subsequent-iteration reconstruction field, and provide a phase control signal based on the modulation field to the phase light modulator.
(2) The projection system according to (1), wherein the modulation field is a plane of the phase light modulator which modulates a phase of the light, and wherein the reconstruction field is a plane on which a reconstruction image is formed.
(3) The projection system according to any one of (1) to (2), wherein the controller is configured to, iteratively for each of the plurality of subframes within the frame of the image data, apply a regularization factor to the reconstruction field, and wherein the regularization factor adjusts a target amplitude of the subsequent-iteration reconstruction field using a gain based on a reconstruction error of a current iteration.
(4) The projection system according to any one of (1) to (3), wherein scaling the amplitude of the modulation field includes setting an amplitude component of the modulation field to 1.
(5) The projection system according to any one of (1) to (4), wherein the controller is configured to pad the reconstruction field with a dump region before mapping the reconstruction field to the modulation field.
(6) The projection system according to any one of (1) to (5), wherein the controller is configured to, iteratively for each of a plurality of iterations within a subframe except a first iteration, generate an error signal by comparing an integrated lightfield simulation of a current iteration to a target image.
(7) The projection system according to any one of (1) to (6), further comprising a secondary modulator configured to receive and modulate the projection light, wherein the phase light modulator includes a plurality of pixel elements arranged in an array, and circuitry configured to modify respective states of the plurality of pixel elements in response to the phase control signal.
(8) A method of driving a projection system comprising emitting a light by a light source, in response to an image data; receiving the light by a phase light modulator; applying a spatially-varying phase modulation on the light by the phase light modulator, thereby to steer the light and generate a projection light; and iteratively, with a controller configured to control the light source and the phase light modulator, for each of a plurality of subframes within a frame of the image data: determining a reconstruction field, mapping the reconstruction field to a modulation field, scaling an amplitude of the modulation field, mapping the modulation field to a subsequent-iteration reconstruction field, and providing a phase control signal based on the modulation field to the phase light modulator.
(9) The method according to (8), wherein the modulation field is a plane of the phase light modulator which modulates a phase of the light, and wherein the reconstruction field is a plane on which a reconstruction image is formed.
(10) The method according to any one of (8) to (9), further comprising, iteratively for each of the plurality of subframes within the frame of the image data, applying a regularization factor to the reconstruction field, wherein the regularization factor adjusts a target amplitude of the subsequent-iteration reconstruction field using a gain based on a reconstruction error of a current iteration.
(11) The method according to any one of (8) to (10), wherein scaling the amplitude of the modulation field includes setting an amplitude component of the modulation field to 1.
(12) The method according to any one of (8) to (11), further comprising padding the reconstruction field with a dump region before mapping the reconstruction field to the modulation field.
(13) The method according to any one of (8) to (12), further comprising, iteratively for each of a plurality of iterations within a subframe except a first iteration, generating an error signal by comparing an integrated lightfield simulation of a current iteration to a target image.
(14) The method according to any one of (8) to (13), further comprising receiving and modulating the projection light by a secondary modulator, wherein the phase light modulator includes a plurality of pixel elements arranged in an array, and circuitry configured to modify respective states of the plurality of pixel elements in response to the phase control signal.
(15) A non-transitory computer-readable medium storing instructions that, when executed by a processor of a projection device, cause the projection device to perform operations comprising the method according to any one of (8) to (14).
With regard to the processes, systems, methods, heuristics, etc. described herein, it should be understood that, although the steps of such processes, etc. have been described as occurring according to a certain ordered sequence, such processes could be practiced with the described steps performed in an order other than the order described herein. It further should be understood that certain steps could be performed simultaneously, that other steps could be added, or that certain steps described herein could be omitted. In other words, the descriptions of processes herein are provided for the purpose of illustrating certain embodiments, and should in no way be construed so as to limit the claims.
Accordingly, it is to be understood that the above description is intended to be illustrative and not restrictive. Many embodiments and applications other than the examples provided would be apparent upon reading the above description. The scope should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in the technologies discussed herein, and that the disclosed systems and methods will be incorporated into such future embodiments. In sum, it should be understood that the application is capable of modification and variation.
All terms used in the claims are intended to be given their broadest reasonable constructions and their ordinary meanings as understood by those knowledgeable in the technologies described herein unless an explicit indication to the contrary is made herein. In particular, use of the singular articles such as “a,” “the,” “said,” etc. should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments incorporate more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
Number | Date | Country | Kind
--- | --- | --- | ---
21164809.2 | Mar 2021 | EP | regional
This application claims priority to U.S. Provisional Application No. 63/165,846, filed Mar. 25, 2021, and EP Application No. 21164809.2, filed Mar. 25, 2021, all of which are incorporated herein by reference in their entirety.
Filing Document | Filing Date | Country | Kind
--- | --- | --- | ---
PCT/US2022/021823 | 3/24/2022 | WO |
Number | Date | Country
--- | --- | ---
63165846 | Mar 2021 | US