Projection-based displays project images onto projection surfaces, such as a wall or a screen, to display video or still pictures for viewing. Such displays can include cathode-ray tube (CRT) displays, liquid crystal displays (LCDs), and spatial light modulator (SLM) displays. For example, SLMs can be useful in heads-up displays (HUDs), cinema, televisions, presentation projectors, near-eye displays, automotive console displays, light field displays, and high dynamic range projectors.
In accordance with at least one example of the disclosure, a device includes at least one processor configured to partition a source image including image components into sub-images, each including a corresponding image component of the image components, and process each sub-image to produce a target image to be projected for each sub-image of the sub-images. The device also includes one or more light sources coupled to the at least one processor and configured to project an incident light, and a phase projection-based display device coupled to the at least one processor and optically coupled to the one or more light sources and configured to modulate, based on the target image of each sub-image, the incident light to separately project the sub-images.
In accordance with at least one example of the disclosure, a vehicle includes a projector device mounted in the vehicle and comprising at least one processor configured to partition a source image including image components into multiple sub-images, each including a corresponding image component of the image components, and process each sub-image to produce a target image to be projected for each sub-image. The projector device also includes one or more light sources coupled to the at least one processor and configured to project an incident light, and a phase projection-based display device coupled to the at least one processor and optically coupled to the one or more light sources and configured to modulate, based on the target image of each sub-image, the incident light to separately project the sub-images, where the phase projection-based display device is configured to project the sub-images on a projection surface on a front windshield of the vehicle.
In accordance with at least one example of the disclosure, a method includes obtaining, by a processor, a source image including image components, and partitioning, by the processor, the source image including image components into multiple sub-images, where each sub-image of the sub-images has a same size as the source image, and each sub-image includes a different image component of the image components from the source image. The method also includes processing each sub-image to produce a target image to be projected for each sub-image, and sequentially modulating, by a phase projection-based display device and based on the target image of each sub-image, incident light from one or more light sources to project each sub-image separately and produce a far-field image for each target image.
Projection-based displays can include SLMs, which display projected images by changing the intensity of projected light across the displayed image pixels. For example, SLM displays include micro-electromechanical system (MEMS) based SLMs, such as digital micromirror devices (DMDs). SLM displays also include liquid crystal-based SLMs, such as LCDs and liquid crystal on silicon (LCoS) devices. An SLM modulates the intensity of the projected light by controlling optical elements to manipulate the light and accordingly form the pixels of a displayed image. For example, in the case of a DMD, the optical elements are adjustable tilting micromirrors that are tilted by applying voltage. In the case of liquid crystal-based SLMs, the optical elements are liquid crystals that are controlled by voltage to modulate the intensity of the light across the image pixels.
The display system can also be based on one of various display methods. For example, according to a time multiplexing method of light projection, light for different color modes (e.g., blue, green, and red) is emitted in sequence in time by respective light sources, such as lasers or light emitting diodes (LEDs). The color modes can be projected by time multiplexing the respective light sources to display an image in full color. The sequence of switching the light sources for the respective color modes is set at a rate sufficiently high to allow the human eye to integrate a sequence of projected colored modes of the image into a single colored image.
Projection-based displays can also include phase light modulators (PLMs) for projecting images. A PLM can be a MEMS device including micromirrors that have adjustable heights with respect to the PLM surface. The heights of the micromirrors can be adjusted by applying voltages. The micromirrors may be controlled with different voltages to form a diffraction surface on the PLM. A controller can control, by applying voltage, the micromirrors individually to form the diffraction surface. For example, each micromirror can be coupled to respective electrodes for applying a voltage and controlling the micromirror independently from the other micromirrors of the PLM. The diffraction surface is a reflective surface that alters the phase of light incident from light sources. The phase-altering reflective surface represents a hologram, also referred to herein as a phase hologram, for projecting illumination patterns of light that form an image on an image projection surface. The hologram is formed as a diffraction surface by adjusting the heights of the micromirrors of the PLM. The heights of the micromirrors may be adjusted to form a certain hologram by controlling the voltages applied to each micromirror. The hologram is formed based on a source image that is to be displayed by projecting the light on the projection surface.
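The relationship between micromirror height and reflected phase can be sketched as follows. Assuming piston-style micromirrors, light reflected from a mirror displaced by a height h travels an extra round-trip distance of 2h, giving a phase shift of 4πh/λ; the function and variable names below are illustrative, not from this description.

```python
import numpy as np

# Illustrative sketch (not the patent's implementation): mapping a desired
# per-pixel phase to a piston-style micromirror height. Reflected light
# travels an extra round-trip distance 2h, so phi = 4*pi*h / wavelength.

def phase_to_height(phase, wavelength):
    """Convert a desired phase (radians) to a mirror height in meters."""
    phase = np.mod(phase, 2 * np.pi)          # wrap phase to [0, 2*pi)
    return phase * wavelength / (4 * np.pi)

wavelength = 532e-9  # assumed green light source, 532 nm
hologram_phase = np.random.uniform(0, 2 * np.pi, size=(4, 4))  # example 4x4 hologram
heights = phase_to_height(hologram_phase, wavelength)
# A full 2*pi phase ramp needs only half a wavelength of mirror travel.
assert np.all(heights < wavelength / 2)
```

Note that a half-wavelength of travel (about 266 nm here) already covers the full 2π phase range, which is why small MEMS displacements suffice for holographic modulation.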
An image can be projected by the PLM to a certain distance, also referred to herein as a far field. The far field can be located on a projection surface, such as a screen, which represents a far field imaging plane of the image. When the PLM is illuminated by an incident light, the light is modulated by the hologram formed at the surface of the PLM and projected towards the far field, causing an image to appear on the projection surface, which is also referred to herein as a far-field image. The incident light is composed of light waves having a phase which is altered by the hologram, producing the modulated light that is projected towards the far field. The brightness of the image varies across the image because of constructive and destructive interference of the light waves at the far field on the projection surface. The projection of the modulated light from the PLM to the projection surface, or the far field imaging plane, can be represented mathematically by a Fourier transform from a source plane located at the surface of the PLM to a far field imaging plane located at the projection surface. The Fourier transform is a mathematical function that transforms the hologram at the source plane to the far-field image at the far field imaging plane. The Fourier transform can transform a first mathematical value representing the hologram with certain phase and amplitude values into a second mathematical value representing the far field image with transformed phase and amplitude values.
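The Fourier-transform relationship described above can be sketched numerically. This is an illustrative model that assumes an unnormalized two-dimensional FFT as the propagation from the PLM plane to the far field; the names are made up for the sketch.

```python
import numpy as np

# Sketch of the far-field relationship: the far-field image is modeled as the
# Fourier transform of the complex field at the PLM surface. A phase-only
# hologram has unit amplitude and spatially varying phase.

plm_phase = np.random.uniform(0, 2 * np.pi, size=(64, 64))  # phase hologram
source_field = np.exp(1j * plm_phase)           # phase-only: unit amplitude
far_field = np.fft.fftshift(np.fft.fft2(source_field))
far_intensity = np.abs(far_field) ** 2          # brightness on the screen

# Energy is conserved up to the FFT scaling (Parseval's theorem for
# numpy's unnormalized fft2: sum|F|^2 = N * sum|f|^2).
assert np.isclose(far_intensity.sum(),
                  source_field.size * (np.abs(source_field) ** 2).sum())
```

The bright and dark variation in `far_intensity` corresponds to the constructive and destructive interference of the phase-altered light waves at the far field.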
The controller can switch each of the PLM micromirrors between multiple discrete and different heights to form the hologram that modulates the reflected incident light from the light sources. This switching of the PLM micromirrors can be at a speed that projects a sequence of images in time at a faster rate than other display devices such as LCoS displays. For example, the PLM can project a sequence of images in time at a rate of tens of kilohertz (kHz). If used as a phase-only light modulator, the PLM can also have a diffraction efficiency higher than a MEMS based SLM, such as a DMD. The DMD, having two stable states (e.g., on and off states), can be useful to create binary holograms if operated in a binary diffraction mode. In this mode, the DMD may have lower efficiency than a PLM. For example, the PLM can provide a diffraction efficiency of approximately 89 percent (%) in comparison to a diffraction efficiency of approximately 10% for the DMD if operated as a binary phase modulator. The diffraction efficiency of a display system is a measure of the amount (e.g., energy) of light projected onto an image relative to the amount of light provided by the light sources. The majority of the efficiency loss in direct imaging-based SLMs, such as liquid crystal-based SLMs and DMDs, is caused by blocking some of the light to form dark pixels or regions in the displayed image. In the case of PLMs, the dark pixels are produced by the destructive interference of light waves in the modulated light without blocking a portion of the light. The PLM also projects a sequence of images in time at a faster rate than liquid crystal-based SLMs, such as LCDs and LCoS devices that may project the sequence of images at rates within hundreds of hertz. The relatively faster projection rate of the PLM in comparison to liquid crystal-based SLMs allows the human visual system (HVS) to integrate more images displayed within a certain time to perceive single images. For example, the HVS can generally integrate 30 displayed image frames per second or less as a single perceived image.
Display systems including SLMs and PLMs can also add noise in the displayed image. The noise can cause a fluctuation or deviation in the correct color shades in regions of the displayed image. Phase-only light modulators can produce images with some noise content. The noise can be exacerbated by the quantization of the phase values in image processing. For example, modulating light by a PLM produces a quantization-induced noise in the displayed image because the fixed number and positions of the micromirrors on the PLM form a fixed number and position of pixels in the image. The quantization noise is caused by processing an analog signal representing the image as a digital signal representing the image pixels. The error caused by this approximation of the analog signal with the digital signal is referred to as quantization noise. The quantization noise may be a random error that causes random variation and hence error in the displayed brightness or color shades across the image pixels, which can reduce the difference between bright and dark pixels and regions of the image, also referred to herein as contrast. Noise can also be added to the displayed image by the image processing algorithm implemented by the display system, such as the algorithm that generates the PLM hologram for projecting the image. For example, the image processing algorithm can be an iterative algorithm that starts from a random or estimated initial phase image for the hologram value based on the image value, which can introduce initial noise. The hologram value is then updated over multiple iterations to reach or converge to a final hologram value that can be loaded onto the PLM. Increasing noise in the displayed image increases the error in brightness or color shades in the image pixels, which reduces the ratio between maximum brightness and minimum brightness in the displayed image pixels, also referred to herein as contrast ratio.
This description includes examples for implementing a time-sequential projection by a PLM to reduce noise and increase contrast in the displayed image. The time-sequential projection includes partitioning an image to be displayed, also referred to herein as a source image, into a plurality of images, referred to herein as sub-images. The sub-images can include different image components of the source image. Accordingly, each sub-image can include a portion of the bright regions in the source image, and a remaining larger dark region which can be represented by more dark pixels. Each sub-image can then be processed separately and projected, such as by loading a hologram on the PLM. The processing/optimization algorithm (e.g., the Gerchberg-Saxton (GS) algorithm) to generate the hologram performs with higher accuracy when the source image has fewer on/bright pixels compared to dark pixels. Consequently, the processing/optimization algorithm can generate an optimized hologram that, when loaded onto the PLM, produces an image with noise localized to the smaller bright region and surrounding areas. The sub-images are projected in sequence in time at a certain rate (e.g., hundreds of hertz to kHz) that is supported by the projection-based display device, such as the PLM, and that allows the HVS to integrate the images into a single combined image that represents the source image. This rate is referred to herein as the HVS image integration rate. The localization of the noise to the smaller bright regions in the sub-images reduces noise across the single combined image, increasing the contrast across the image as perceived by the HVS.
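The partitioning step can be sketched as follows, under the assumption that the image components have already been labeled; the `labels` array and the function name are illustrative, not from this description.

```python
import numpy as np

# Illustrative sketch: split a source image into same-size sub-images, each
# keeping one labeled image component and leaving the rest dark.

def partition(source, labels, component_ids):
    """Return one sub-image per component; each has the source's full size."""
    subs = []
    for cid in component_ids:
        sub = np.zeros_like(source)     # all-dark image of the same size
        mask = labels == cid
        sub[mask] = source[mask]        # copy only this component's pixels
        subs.append(sub)
    return subs

source = np.array([[9, 9, 0, 0],
                   [0, 0, 0, 7],
                   [0, 0, 0, 7]])
labels = np.array([[1, 1, 0, 0],        # assumed component labeling
                   [0, 0, 0, 2],
                   [0, 0, 0, 2]])
subs = partition(source, labels, [1, 2])
# Summing the sub-images reconstructs the source (components do not overlap).
assert np.array_equal(sum(subs), source)
```

Each sub-image keeps the source image's full pixel dimensions but contains far more dark pixels, which is what lets the hologram optimization localize its noise.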
This time-sequential projection based on partitioning the source image can be implemented in projection-based displays with optical elements, such as the PLM micromirrors, that can be switched at speeds which provide the HVS image integration rate. The projection-based displays can include liquid crystal-based displays with liquid crystals that have switching speeds capable of projecting images at rates within tens of kHz. For example, ferroelectric liquid crystal on silicon (FLCoS) devices include ferroelectric liquid crystals (FLCs) that have a faster voltage response than other liquid crystal devices (e.g., LCoS and LCDs) and can project images at a rate above 1 kHz. Other examples of liquid crystal-based devices with switching speeds that allow the time-sequential projection of sub-images to display a source image can include dual frequency liquid crystal (DFLC) devices, which contain liquid crystals with positive and negative dielectric anisotropy that have a response time within sub-milliseconds.
The modulated light 120 may be modulated by the display device 110 to project still images or moving images, such as video, onto the projection surface 130. The modulated light 120 may be formed as a combination of light with multiple color modes provided by the display device 110. The display device 110 includes an apparatus 150 for modulating and projecting the modulated light 120. The apparatus 150 can include one or more light sources (not shown) for providing light, also referred to herein as incident light, with different wavelengths for the color modes. The color modes can be projected simultaneously or by time multiplexing the respective light sources. In examples, the incident light at the different wavelengths is modulated and reflected by a phase projection-based display device such as a PLM (not shown) with micromirrors to produce the modulated light 120 for displaying images or video on the projection surface 130. In other examples, the apparatus 150 includes other phase projection-based display devices, such as FLCoS devices, with optical elements that can be switched at speeds comparable to those of the PLM micromirrors and display images at rates up to tens of kHz.
The display device 110 also includes one or more controllers 190 coupled to the apparatus 150 for controlling the components of the display device 110 to display the images or video. For example, the one or more controllers 190 can include a first controller (not shown) for controlling the PLM, or other light projection devices with equal switching speeds, to modulate the incident light of different wavelengths from the respective light sources. The one or more controllers 190 may also include a second controller (not shown) for controlling the light sources. The display device 110 may further include one or more input/output devices (not shown), such as an audio input/output device, a key input device, a display, and the like. The display device 110 can include a cover 195 through which the modulated light 120 is projected from the display device 110. The cover 195 is a transparent cover made of a dielectric material, such as glass or plastic, that transmits the modulated light 120 from the apparatus 150 to the projection surface 130. The cover 195 also protects the components of the display device 110 from outside elements.
In examples, the display system 100 is a HUD system, where the projection surface 130 is a windshield in a vehicle.
The vehicle HUD system 200 can be an AR HUD system that projects images 206 with image components 207, such as in the form of text, graphics, etc. The image components 207 indicate information useful to the driver 203 or a front passenger. For example, the components 207 of the image 206 can include a road trajectory line to guide the driver 203 on the road, route information and conditions (e.g., temperature, weather), gauges or text indicating vehicle information or conditions (e.g., speed, gas), messages, alerts, warnings, and the like. The displayed images 206 and image components 207 may not obstruct the view of the driver 203 and may not require the driver 203 to look, while driving, away from a line of sight in front of the vehicle 202. The projection surface 130 of the HUD system 200 can also include a holographic optical element (HOE) 208. The HOE 208 is an optical structure that can be manufactured on or in a glass material, such as the front windshield 204. For example, the HOE 208 can be a single layer of a diffraction grating or can be formed by multiple layers of diffraction gratings. The HOE 208 is a transparent diffraction surface that modulates light projected on the HOE 208 to produce 3D images that can be viewed looking through the projection surface 130. The modulated light 120 can be projected by the apparatus 150 of the display device 110 onto the HOE 208 of the projection surface 130 to display to the driver 203 the images 206 as 3D images. The 3D images can be perceived with a perception of depth in the direction of the line of sight in front of the vehicle 202.
In examples, the apparatus 150 of the display device 110 in the display system 100 or the vehicle HUD system 200 includes a phase projection-based display device, such as a PLM or an FLCoS, that is optically coupled to one or more light sources. The phase projection-based display device is configured to modulate the phase of an incident light from the light sources to produce a modulated light for projecting images.
In examples, as shown in
The micromirrors 215 of the PLM 210 are adjustable MEMS reflectors which form a grid of pixels on the surface of the PLM 210. The heights of the micromirrors 215 with respect to the surface can be adjusted by applying voltages to the PLM 210. The first controller 242 controls the PLM 210 by changing the voltages applied to the PLM 210 to adjust the heights of the micromirrors 215 forming a certain hologram on the surface of the PLM 210. The hologram is a diffraction surface that is formed by providing different heights of the micromirrors 215 across the grid of pixels on the surface of the PLM 210. This diffraction surface modulates and reflects the incident light 250 from the one or more light sources 220 to project the modulated light 120.
The incident light 250 includes one or more color modes at respective wavelengths that are transmitted from the one or more light sources 220 to the PLM 210 through the focusing optics 230. The light sources 220 can be three light sources that provide three color modes at three respective wavelengths, such as for blue, green, and red light. The three color modes may provide three basic color components for displaying an image in full color. In examples, the light sources 220 can be three laser light sources that transmit the incident light 250 for three color modes at three respective wavelengths or ranges of wavelengths. The color modes can be projected by time multiplexing and in respective time durations that determine the color shades in the displayed image pixels. The focusing optics 230 can include one or more lenses that collimate and focus the incident light 250 onto the micromirrors 215. The spot of the incident light 250 on the micromirrors 215 can be focused to provide equal illumination across the grid of pixels for the different color modes.
The light sources 220 can be controlled by a controller 190 (e.g., the second controller 244), to project the incident light 250 for each color mode at a time to the PLM 210 with time multiplexing. Accordingly, each light source 220 is switched on at a time in a certain sequence and rate to project the incident light 250 for a respective color mode on the PLM 210. The rate can be within the HVS image integration rate to perceive the time multiplexed color modes in the displayed image as a single full color image. For example, the image projection rate can be between 30 frames per second and 60 frames per second.
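The timing budget for time-multiplexed color modes can be illustrated with back-of-envelope arithmetic. The 60 frames per second figure is from the range given above; the 20 kHz switching rate below is an assumed example within the tens-of-kHz range mentioned for the PLM.

```python
# Illustrative timing for time-multiplexed color fields (assumed numbers).

frame_rate = 60.0                  # perceived full-color frames per second
n_colors = 3                       # blue, green, and red fields per frame
frame_period_ms = 1000.0 / frame_rate
color_field_ms = frame_period_ms / n_colors

print(f"frame period: {frame_period_ms:.2f} ms")    # one full-color frame
print(f"per-color field: {color_field_ms:.2f} ms")  # one color mode's slot

# A modulator switching at an assumed 20 kHz leaves ample headroom: each
# color field could hold many sub-frames (e.g., for sub-image sequencing).
switch_period_ms = 1000.0 / 20000.0
sub_frames_per_color = color_field_ms / switch_period_ms
assert sub_frames_per_color > 100
```

This headroom is what makes it possible to project several sub-images per color field while staying within the HVS image integration rate.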
The hologram or diffraction surface formed by the micromirrors 215 can also split the reflected incident light 250 into multiple light beams, also referred to herein as diffraction orders 265, that form the modulated light 120. In this example, the diffraction surface can include a structure of repeated surface patterns, also referred to herein as a diffraction grating, that is formed by the micromirrors 215. The surface patterns are repeated periodically in a direction across the surface, which causes the incident light 250 to split along the same direction into the diffraction orders 265 in the modulated light 120. The repeated surface patterns of the diffraction surface alter the phases of the light waves that form the incident light 250, and the light waves are reflected from the surface of the PLM 210 with different phases. The light waves having different phases are reflected by the diffraction surface in different directions, which forms the diffraction orders 265. Accordingly, the diffraction orders 265 are reflected away from the surface of the PLM 210 at different reflection angles, also referred to herein as diffraction angles. The diffraction angles of the diffraction orders 265 depend on the incident angle of the incident light 250, the period of the repeated surface patterns of the diffraction surface, and the wavelength of the incident light 250. The diffraction orders 265 in the modulated light 120 may also have different intensities.
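The dependence of the diffraction angles on the incident angle, grating period, and wavelength follows the standard grating equation, sin θm = sin θi + mλ/d for order m. The sketch below uses illustrative values (a 532 nm wavelength and an assumed grating period), not parameters from this description.

```python
import math

# Sketch of the grating equation for the diffraction angles:
#   sin(theta_m) = sin(theta_i) + m * wavelength / period
# All numeric values below are illustrative assumptions.

def diffraction_angle_deg(order, wavelength, period, incident_deg=0.0):
    s = math.sin(math.radians(incident_deg)) + order * wavelength / period
    if abs(s) > 1.0:
        return None  # this order is evanescent (does not propagate)
    return math.degrees(math.asin(s))

wavelength = 532e-9   # assumed green light
period = 10.8e-6      # assumed grating period (e.g., a two-mirror pattern)
for m in (-1, 0, 1):
    angle = diffraction_angle_deg(m, wavelength, period)
    print(f"order {m:+d}: {angle} deg")
# The zero order leaves at the incident angle; higher orders fan out on
# either side, spaced more closely for longer periods or shorter wavelengths.
```

The same relation explains why the diffraction angles shift with wavelength, so each color mode of the incident light produces its own set of order directions.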
An image can be projected in a certain diffraction order 265 in the modulated light 120 on a far field image plane 270. In this example, the diffraction order 265 on which the image is projected has a higher intensity than other diffraction orders 265 in the modulated light 120. For example, the diffraction order 265 on which the image is projected may have more than 50% of the overall light intensity in the modulated light 120. In other examples, the same image can be projected by the PLM 210 simultaneously on multiple diffraction orders 265. In this example, multiple copies of the image can be projected on the far field image plane 270, where each copy of the image belongs to a respective diffraction order 265. The diffraction orders 265 can include a light beam in a center position between the diffraction orders 265, also referred to herein as a zero-order light. In examples, the image can be projected on the zero-order light. In other examples, the images can be projected on one or more diffraction orders 265 other than the zero-order light. In this example, the zero-order light can be useful to increase the illumination and accordingly brightness of the displayed images, or can be blocked from the far field image plane 270, such as to remove nonuniform illumination by the zero-order light in the displayed image. In examples, the images projected on multiple diffraction orders 265 can be combined into a single displayed image by focusing optics (not shown) between the far field image plane 270 and a projection surface, such as the projection surface 130.
In examples, the apparatus 209 in
In other examples, the display device 110 may include an apparatus with multiple pairs of PLMs and respective light sources, each pair corresponding to a color mode provided from a respective light source. In this example, the PLMs can modulate simultaneously the respective color modes from the respective light sources. The multiple pairs of PLMs and light sources can increase the diffraction efficiency and the projected intensity of each color mode and accordingly increase image quality and power efficiency of the display device 110.
In other examples, a display system may include a liquid crystal-based display that projects a modulated light for displaying images. The liquid crystals in the device can have switching speeds within the same range as PLM micromirrors and can project images at rates comparable to PLMs, such as in the kHz range.
In examples, as shown in
The first controller 442 may be an analog controller that can control by voltage each of the cells of liquid crystals in the liquid crystal device 410. The amount of voltage can be controlled to change the level of light intensity and accordingly brightness in the image pixels. The liquid crystals are controlled to project an image on the modulated light 120 to a far field image plane 470. The far field image plane 470 can be located at, or projected (e.g., by projection optics) on, the projection surface 130. The second controller 444 can be a digital controller configured to switch the one or more light sources 420 on and off, or an analog controller that controls and changes the level of light intensity of the incident light 450 from the one or more light sources 420.
In examples, the liquid crystal device 410 can be an FLCoS device that includes FLCs. FLCs can be reoriented, also referred to herein as switched, by voltage at speeds that allow projecting the modulated light 120 and accordingly displaying images at a rate above 100 kHz. For example, FLCs can be switched within durations of less than 100 microseconds (μs). The FLCoS device can include FLCs positioned between a glass layer and a pixelated reflective complementary metal-oxide-semiconductor (CMOS) chip. The CMOS chip includes an array of fixed micromirrors, such as aluminum micromirrors, and a circuit configured to receive video signals and convert the signals into digital voltages. The voltages are independently applied to each of the micromirrors, switching a respective cell of FLCs, which projects a pixel of the image. Depending on the voltage applied to each pixel, the FLCs are oriented in a certain respective direction, which modulates the polarization of the incident light 450 as reflected by the micromirrors, generating the modulated light 120.
The image 500 displayed in the HUD system can include noise, which appears as random variation of brightness or color shades across the image 500. The noise can cause changes in the color shades of image pixels, such as in bright image pixels appearing darker and dark image pixels appearing brighter. The noise can reduce the quality of images displayed in the HUD system and accordingly reduce visibility to the viewer, such as to a driver through the windshield of the vehicle. The noise includes quantization noise caused by processing a digital source image to display the image 500. For example, the digital source image is processed to generate a PLM hologram by adjusting the micromirrors 215 of the PLM 210 in the apparatus 209 or to switch the FLCs of the liquid crystal device 410 in the apparatus 400. The amount of noise in the image 500 is also dependent on the image processing algorithm implemented to process the source image. The increase in noise increases the error in displayed brightness or color shades in the image pixels which reduces contrast across the image.
In examples, the apparatus 209 or the apparatus 400 can be controlled by the one or more controllers 190 to display images, such as in the HUD system, according to a time-sequential projection that increases contrast in the displayed images. To increase contrast in the displayed image, a source image, which can be stored in digital format, is processed by partitioning the source image into a plurality of digital sub-images. The sub-images can include respective and different image components of the source image. The sub-images are processed and projected in sequence in time to overlap on a surface, such as the projection surface 130. Each sub-image can have the same size (e.g., in pixels) as the source image and include a respective component of the source image and a remaining dark region. Each sub-image can be processed separately and projected in sequence in time at a rate allowing the HVS to integrate the overlapping sub-images into a single combined image that represents the source image. Because the sub-images include portions of the source image, the sub-images also include more dark pixels than the source image. Accordingly, the noise added to each projected sub-image can be localized to fewer bright pixels than the source image. The localization of the noise in the sub-images reduces noise across the single combined image perceived by the HVS, increasing the contrast in the image.
For example, the source image can be the image 500 in the case of a HUD system. The image 500 can be partitioned into five digital sub-images that include, respectively, the five image components 501 to 505 in the image 500. In this example, each digital sub-image generated from the image 500 can include one respective image component from the image components 501 to 505. In other examples, fewer than five sub-images can be generated from the image 500, where a sub-image can include more than one image component from the image components 501 to 505. Each of the digital sub-images has the same size as the image 500. The sub-images are then projected toward the windshield, such as by the PLM 210 or the liquid crystal device 410 on the modulated light 120, in sequence in time at a rate that allows the HVS to integrate the sub-images into the image 500. For example, the HVS image integration rate is supported by the switching speed for the micromirrors 215 of the PLM 210 or the FLCs of the liquid crystal device 410. Each digital sub-image is projected in a respective order in the time sequence.
In examples, a digital source image, such as of the image 500, can be processed to generate a PLM hologram in the apparatus 209 based on an iterative algorithm. The iterative algorithm is based on performing Fourier transforms over multiple iterations of calculation to update phase information for generating the hologram based on the source image. The phase information is updated at each iteration until a Fourier transform of the hologram based on the updated phase information of the source image is within a certain error threshold. The iterative algorithm can be based on the GS algorithm, also referred to herein as GS calculation method. The GS algorithm is an optimization algorithm where constraints, such as the desired source image in the far-field plane and phase-only representation in the PLM plane, are iteratively applied with the Fourier transform of the converged phase-only hologram to provide the desired source image.
The GS algorithm is initialized with a random phase pattern in the PLM plane. The Fourier transform of the phase pattern is computed to obtain the light distribution in the diffracted plane/far-field plane. The obtained amplitude in the far-field plane is then replaced with the desired amplitude of the source image. The phase in the far-field plane is left untouched. The updated far-field plane complex image is inverse Fourier transformed to the PLM plane and only the phase values are retained. The process with the source image amplitude constraint in the far-field plane and the phase-only constraint in the PLM plane is repeated over multiple iterations until the updated values of the source image and the hologram converge based on the error threshold. The GS algorithm is described in a paper titled, "A Practical Algorithm for the Determination of Phase from Image and Diffraction Plane Pictures," R. W. Gerchberg and W. O. Saxton, Cavendish Laboratory, Cambridge, United Kingdom (1971), which is hereby incorporated by reference herein in its entirety.
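The GS iteration described above can be sketched as follows. This is a minimal illustrative implementation assuming an unnormalized FFT as the propagation model between the PLM plane and the far-field plane; the names and the fixed iteration count are assumptions, not the exact method of this description.

```python
import numpy as np

# Minimal Gerchberg-Saxton sketch: alternate between the far-field amplitude
# constraint (the desired source image) and the phase-only constraint in the
# PLM plane, using an unnormalized FFT as the propagation model.

def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Initialize with a random phase pattern in the PLM plane.
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        # 2. Fourier transform the phase-only field to the far-field plane.
        far = np.fft.fft2(np.exp(1j * phase))
        # 3. Replace the far-field amplitude with the desired source image
        #    amplitude; leave the far-field phase untouched.
        far = target_amplitude * np.exp(1j * np.angle(far))
        # 4. Inverse transform to the PLM plane and retain only the phase.
        phase = np.angle(np.fft.ifft2(far))
    return phase

# Target: one bright square (a single "image component") on a dark background.
target = np.zeros((64, 64))
target[28:36, 28:36] = 1.0
hologram = gerchberg_saxton(target)
reconstruction = np.abs(np.fft.fft2(np.exp(1j * hologram)))
# Most of the light lands in the bright region; the dark background keeps
# only residual noise.
assert reconstruction[target > 0].mean() > reconstruction[target == 0].mean()
```

A sparse target like this one, with few bright pixels, is exactly the case where the algorithm concentrates its residual noise in and around the bright region, which is the motivation for partitioning the source image into sub-images.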
In other examples, the digital source image can be processed with other iterative algorithms or calculation methods based on performing Fourier transforms over multiple iterations to update the phase information for the hologram based on the source image. Such algorithms, including the GS algorithm, generate noise in the displayed image because they represent the full complex value (e.g., with amplitude and phase) of a hologram by a phase-only hologram (e.g., with constant amplitude and random phase). Since the amplitude in the PLM plane is not modulated, the algorithm can introduce noise in the displayed image in the far-field plane. Further, noise may be added by the random phase values generated for the source image in the initial iteration. The noise may be distributed across the entire displayed image, such as in both the dark and bright regions of the displayed image. The presence of this noise, including the noise in the dark regions of the displayed image, reduces the image contrast. The noise can be expressed mathematically as a combination of the mean of the intensity in a region and the variance in the intensity of the region. If noise increases, the intensity variance increases in the displayed image. If noise decreases, the intensity variance decreases.
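As an illustration of the region-based metric just described (the function name and region format are hypothetical), the mean and variance of intensity in a rectangular region can be computed directly; the standard-deviation-to-mean ratio is shown as one common way to combine the two quantities:

```python
import numpy as np

def region_noise(image, rows, cols):
    """Return (mean, variance, std/mean) of light intensity in a region.
    rows and cols are slice objects selecting a rectangular patch."""
    patch = np.asarray(image, dtype=float)[rows, cols]
    mean = patch.mean()
    var = patch.var()
    # std/mean is one common combination of the two statistics.
    ratio = np.sqrt(var) / mean if mean > 0 else 0.0
    return mean, var, ratio
```

A uniformly lit region yields zero variance (no visible noise), while speckle in a nominally dark region raises both the variance and the ratio, which is what degrades the displayed contrast.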
In the iterative algorithms, the noise may be related to the size and locations of the bright regions in the source image. For example, if the bright regions in the image are localized to an area of the image, the noise can also be localized to the same area and reduced in the remaining dark areas of the displayed image, resulting in higher contrast in such areas. If the source image is partitioned into multiple sub-images of the same size as the source image, where each sub-image includes one or some of the image components of the source image without the remaining image components, the sub-images can contain localized bright regions surrounded by remaining dark regions. In this example, by processing and projecting the sub-images separately, the noise in the displayed sub-images may be localized in and around the bright regions and is reduced in the remaining dark regions. If the sub-images are also projected in sequence in time within the HVS image integration rate, the source image can be displayed with higher contrast because the noise is localized to the bright regions and is reduced in the dark regions in the image.
As shown in
The curves 901A and 902A indicate a decrease in the contrast of displayed images as the APL in the respective source images increases. This decrease in contrast can be caused by an increase in noise in the GS algorithm, which is proportional to the APL. The curves 901A and 902A also show higher contrast in the case of source images with higher quantization. For example, a source image with an APL of 48% provides a displayed image with a contrast of approximately 65:1 if the source image is quantized with 8 bits and a contrast of approximately 40:1 if quantized with 4 bits. This increase in contrast can be caused by a decrease in quantization noise in the GS algorithm.
The data points in
For example, if the quantization of the source image 700 in
The high gain light intensity in the displayed image 1000B also increases the noise in comparison to the displayed image 1000A. The noise can be quantified by the mean of the light intensity across the image and can increase if the illumination in the image is increased. This makes the noise more visible in the displayed image 1000B, where it appears as gray pixels outside the displayed image components 1001B-1005B. The higher noise also causes lower contrast in the displayed image 1000B than in the displayed image 1000A.
The high gain light intensity in the displayed image 1000D also causes the noise to increase and appear more visible in comparison to the displayed image 1000C. The noise appears in the displayed image 1000D as gray pixels outside the displayed image components 1001D-1005D. As shown in
For example, the intensity mean in the corner area 1090B of the displayed image 1000B is equal to 29.7 on a scale of gray values and the intensity mean in the corner area 1090D of the displayed image 1000D is equal to 13.24 on the same scale. The scale of gray values represents the brightness of pixels in the image and ranges from 0 to 255, with 0 representing a fully dark pixel and 255 representing a saturated bright pixel. The reduced mean intensity in the corner area 1090D of the displayed image 1000D represents lower noise and higher contrast in comparison to the corner area 1090B of the displayed image 1000B. Visible noise in the corners of a displayed image is also referred to herein as the postcard effect. The postcard effect can be reduced or cancelled by the time-sequential projection of sub-images with lower APL and within the HVS image integration rate. This projection method is useful for displaying images with few image components (e.g., five or ten components) that are separated by dark pixels or regions. For example, the time-sequential projection method is useful in an AR HUD system where images with few image components are displayed and the postcard effect is mitigated to improve visibility.
At step 1101, a source image including one or more image components is obtained, by a processor, for display. At step 1102, the source image including image components is partitioned by the processor into multiple sub-images with the same pixel size as the source image. Accordingly, each sub-image from the sub-images includes a corresponding image component of the image components in the source image. In examples, the image components in the source image can be detected with an image recognition algorithm or processing method for identifying objects in an image. Examples of image recognition algorithms or processing methods can include machine learning by neural networks, training and test data/models, or other object detection and image recognition methods. The image components can also be detected with metadata provided or included in the source image. For example, the source image can be generated with image components and metadata indicating information about the image components. The metadata can be included in the same file as the source image or in a metadata file processed with the source image. The metadata can include information such as the size, color, and location of the image components in the image. After detecting the image components, each image component can be separately included in a respective sub-image, where the remaining area in the sub-image can be empty of objects and represented by black pixels. The image component can have the same location in the respective sub-image as in the source image. In examples, multiple image components from the source image can be included in the same respective sub-image if the image components are detected within a certain threshold distance in the source image, such as relative to other image components or the image size.
Each sub-image has the same number of pixels as the source image and includes a different image component from the source image. Because each sub-image includes only one or a few of the image components of the source image, each sub-image also has a lower APL than the source image. The number of sub-images of the source image is determined based on the number of image components in the source image. In the case where each sub-image includes only one image component of the source image, the number of sub-images is equal to the number of image components in the source image. The source image can be stored in digital format in memory in the form of pixel values in bits, which represent the color shades across the source image. The digital source image is partitioned by a processor into sub-images that each have the same number of pixels as the source image and a different image component of it. The source image may include few image components (e.g., fewer than ten image components) that display data (e.g., text, graphics, etc.) and that are separated by dark pixels or regions in the source image. For example, the source image 700 can be partitioned into the sub-images 710-750 by the processor 248 in the apparatus 209 or the processor 448 in the apparatus 400. The sub-images 710-750 include the respective image components 701-705.
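The partitioning step can be sketched as follows, assuming the image components have already been detected as bounding boxes; the function name and the box format are hypothetical and shown only to illustrate that each sub-image keeps the source image's size, places its component at the same location, and leaves the remaining area as black pixels.

```python
import numpy as np

def partition_into_subimages(source, component_boxes):
    """Split a source image into same-size sub-images, one per component.
    component_boxes: list of (row0, row1, col0, col1) bounding boxes."""
    sub_images = []
    for r0, r1, c0, c1 in component_boxes:
        # Start from an all-black sub-image the same size as the source.
        sub = np.zeros_like(source)
        # Copy the component into the same location it has in the source,
        # so the projected sub-images overlap into the original layout.
        sub[r0:r1, c0:c1] = source[r0:r1, c0:c1]
        sub_images.append(sub)
    return sub_images
```

Because each sub-image keeps only one component, its average picture level is lower than that of the source image, which is the property the method exploits to localize noise.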
At step 1103, each sub-image is processed, by the processor, to produce a target image to be projected for each sub-image. For example, the sub-images can be processed separately according to an iterative algorithm, such as the GS algorithm, to calculate respective holograms for projecting the sub-images. For each sub-image, the hologram is obtained by the algorithm based on a Fourier transform relationship between the hologram and the respective target image, which represents the far-field image. For example, the sub-images 710-750 are each processed by the GS algorithm to generate a respective hologram for the PLM 210 in the apparatus 209. The values of the processed sub-images can be sent as voltage values to control the micromirrors 215 of the PLM 210 (e.g., by the first controller 242). In examples, the sub-images can be processed by algorithms other than the GS algorithm, which may or may not be iterative algorithms, to generate the respective holograms for projecting the sub-images. The sub-images can also be processed by the same or other algorithms, such as for switching liquid crystals in liquid crystal devices, to project the sub-images separately.
At step 1104, incident light from one or more light sources is sequentially modulated by a phase projection-based display device based on the target image of each sub-image to project each sub-image separately and produce a far-field image for each target image. The sub-images of the source image can be projected in sequence in time at a certain rate within the HVS image integration rate for perceiving the projected sub-images as a single image representing the source image.
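As a rough illustration of the timing constraint in step 1104, assuming an HVS image integration rate of about 60 Hz (a commonly cited figure used here only as an assumption, not a value specified in this description), the required modulation rate scales with the number of sub-images cycled through:

```python
def min_modulation_rate_hz(num_sub_images, integration_rate_hz=60.0):
    """Each sub-image must recur at least at the assumed HVS integration
    rate, so a modulator cycling through N sub-images in sequence must
    run N times faster than that rate."""
    return num_sub_images * integration_rate_hz
```

For example, five sub-images projected in sequence would require the modulator to present sub-frames at 300 Hz or faster under this assumption, so that the HVS integrates the sequence into a single perceived image.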
For example, the phase projection-based display device can be the PLM 210 in the apparatus 209. For each processed sub-image of the sub-images 710-750, the PLM 210 separately modulates the incident light 250 by a respective hologram formed by the micromirrors 215 according to the voltage values of the respective sub-image. Accordingly, the PLM 210 projects each sub-image in sequence in time on the modulated light 120 at a rate within the HVS image integration rate. This allows the HVS to integrate the overlapping sub-images (e.g., displayed on the projection surface 130) into a single displayed image 800B that represents the source image 700. The sub-images 710-750 can be projected by the PLM 210 on one or more diffraction orders 265 in the modulated light 120, which may include the zero-order light.
In other examples, the liquid crystal device 410 modulates the incident light 450 by switching the FLCs according to the voltage values of each processed sub-image of the sub-images 710-750 separately. Accordingly, the liquid crystal device 410 projects each sub-image in sequence in time on the modulated light 120 at a rate within the HVS image integration rate. The phase projection-based display device can be an FLCoS device including FLCs. The FLCs can be switched by voltage to electrically adjust the optical refractive index across the surface and accordingly produce a phase-altering reflective surface that represents a phase hologram. The sub-images can be processed separately according to iterative algorithms, such as the GS algorithm, or other algorithms to obtain the respective phase holograms on the FLCoS to project the sub-images.
The method 1100 can reduce noise and increase contrast in the projected sub-images in comparison to projecting the source image without partitioning and with a higher APL. The increase in contrast can allow reducing the quantization of the pixel values (e.g., in bits) in the processing of the sub-images. Reducing the quantization can reduce processing time and power cost in the system. For example, if the pixel values of the source image 700 are stored in 8-bit values, the pixel values can be approximated by 4-bit values. The source image 700 is hence partitioned into the sub-images 710-750 with 4-bit pixel values.
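A minimal sketch of the 8-bit to 4-bit approximation mentioned above (the helper name is illustrative): dropping the least-significant bits maps the 0-255 range onto 0-15.

```python
import numpy as np

def requantize(pixels, src_bits=8, dst_bits=4):
    """Approximate src_bits pixel values with dst_bits values by
    discarding the least-significant bits (e.g., 8-bit 0-255 -> 4-bit
    0-15), halving the bits processed per pixel in this example."""
    shift = src_bits - dst_bits
    return (np.asarray(pixels, dtype=np.uint16) >> shift).astype(np.uint8)
```

This trades fine gray-shade resolution for reduced processing time and power, which the partitioning method can afford because the per-sub-image contrast is already improved.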
The term “couple” is used throughout the specification. The term may cover connections, communications, or signal paths that enable a functional relationship consistent with this description. For example, if device A generates a signal to control device B to perform an action, in a first example device A is coupled to device B, or in a second example device A is coupled to device B through intervening component C if intervening component C does not substantially alter the functional relationship between device A and device B such that device B is controlled by device A via the control signal generated by device A.
A device that is “configured to” perform a task or function may be configured (e.g., programmed and/or hardwired) at a time of manufacturing by a manufacturer to perform the function and/or may be configurable (or re-configurable) by a user after manufacturing to perform the function and/or other additional or alternative functions. The configuring may be through firmware and/or software programming of the device, through a construction and/or layout of hardware components and interconnections of the device, or a combination thereof.
A system or device that is described herein as including certain components may instead be adapted to be coupled to those components to form the described structure, device, or apparatus. For example, an apparatus described as including one or more devices (such as PLMs, FLCs or light sources), one or more optical elements (such as lenses), and/or one or more electronic components (such as controllers, processors, or memories) may instead have at least some of the components integrated into a single component which is adapted to be coupled to the remaining components either at a time of manufacture or after a time of manufacture, for example, by an end-user and/or a third-party.
While certain components may be described herein as being of a particular process technology, these components may be exchanged for components of other process technologies. Devices described herein are reconfigurable to include the replaced components to provide functionality at least partially similar to functionality available prior to the component replacement.
Unless otherwise stated, “about,” “approximately,” or “substantially” preceding a value means +/− 10 percent of the stated value. Modifications are possible in the described examples, and other examples are possible within the scope of the claims.