The present disclosure generally relates to near-eye display technology, and more particularly, to a method and a system for correcting nonuniformity of a near-eye display.
Near-eye displays (NEDs) may be provided as an augmented reality (AR) display, a virtual reality (VR) display, a head-up display (HUD), a head-mounted display (HMD), or other displays. An NED generally includes an image generator and optical paths that include optical combiners. The image generator is commonly a projector with micro displays (e.g., micro-LED (light-emitting diode), micro-OLED (organic light-emitting diode), LCOS (liquid crystal on silicon), or DLP (digital light processing)) and an integrated optical lens. The optical combiner includes reflective and/or diffractive optics, such as a freeform mirror/prism, a birdbath, cascaded mirrors, or a grating coupler (waveguide). A virtual image is rendered from an NED to human eyes with or without ambient light.
Uniformity is a key performance metric for evaluating the imaging quality of an NED. Nonuniformity can be caused by imperfections of the display pixels and of the optical paths that guide the light emitted by the display; it appears as variation in the global distribution as well as variation in local zones known as Mura. The visual artifact may take the form of a mottled appearance, a bright spot, a black spot, or a cloudy appearance. For NEDs such as AR/VR displays, the visual artifact is also observable on the virtual image rendered in the display system, where nonuniformity may appear in both luminance and chromaticity. Moreover, the visual artifact caused by nonuniformity is more noticeable than in traditional displays because the display is closer to the human eyes.
Therefore, there is a need for correcting the nonuniformity of an NED.
Embodiments of the present disclosure provide a method for correcting nonuniformity of an NED including a first display and a second display. The method includes: generating and displaying a test pattern for the first display and the second display; obtaining, in response to the test pattern, a first image at an end of a first optical path coupled to the first display and a second image at an end of a second optical path coupled to the second display; fusing the first image and the second image to generate a fusion image; and determining a correction scheme for correcting nonuniformity of the NED based on the fusion image.
Embodiments of the present disclosure provide a system for correcting nonuniformity of an NED including a first display and a second display. The system includes: a light measuring device (LMD) configured to obtain, in response to a test pattern being displayed by the first display and the second display, a first image at an end of a first optical path coupled to the first display and a second image at an end of a second optical path coupled to the second display; and a processor configured to fuse the first image and the second image to generate a fusion image, and determine a correction scheme for correcting nonuniformity of the NED based on the fusion image.
Embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing a set of instructions that are executable by one or more processors of a device to cause the device to perform operations for correcting nonuniformity of an NED including a first display and a second display, the operations including: generating and displaying a test pattern for the first display and the second display; obtaining, in response to the test pattern, a first image at an end of a first optical path coupled to the first display and a second image at an end of a second optical path coupled to the second display; fusing the first image and the second image to generate a fusion image; and determining a correction scheme for correcting nonuniformity of the NED based on the fusion image.
Embodiments and various aspects of the present disclosure are illustrated in the following detailed description and the accompanying figures. Various features shown in the figures are not drawn to scale.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the invention. Instead, they are merely examples of apparatuses and methods consistent with aspects related to the invention as recited in the appended claims. Particular aspects of the present disclosure are described in greater detail below. The terms and definitions provided herein control, if in conflict with terms and/or definitions incorporated by reference.
In some embodiments, NED 110 includes an image generator 111. Image generator 111 is provided as one or more micro displays (e.g., one display for one eye), such as micro-LED displays, micro-OLED displays, LCOS displays, or DLP displays, and each of the micro displays can be configured as a light engine with an additional projector lens. In some embodiments, the display may be coupled with a plurality of lenses (also referred to as a "lens group," "designed optics," etc.) for adjusting the image displayed by the micro display in a manner suitable for human eyes. The micro display of image generator 111 includes a micro light-emitting array that forms an active emitting area. The projected image from the light engine, through the designed optics, is transferred to human eyes via an optical path including an optical combiner (not shown). The optics of the optical combiner can be reflective and/or diffractive optics, such as a freeform mirror/prism, a birdbath, cascaded mirrors, or a grating coupler (waveguide).
Image generator 111 includes a driving integrated circuit (IC, not shown in the figures) for driving the micro display.
It is to be noted that "left" and "right" mentioned in the present disclosure are from the perspective of a person (i.e., a user or a viewer of system 200) as shown in the figures.
As can be appreciated, first micro display 210 can be used to display a right image, while second micro display 220 can be used to display a left image captured or rendered at a different angle from the right image. When the left image and the right image are viewed simultaneously, the brain of the viewer combines the two images into a three-dimensional scene. However, if the uniformity within either the left image or the right image, or the uniformity between the left image and the right image, is not ideal, the "sense of place" of the three-dimensional scene created by these two images can be affected. The nonuniformity can be caused by one or more of first micro display 210, second micro display 220, lens group 221, or lens group 222. For example, when driven by a signal indicating the same intensity of brightness, some pixels in either or both of first micro display 210 and second micro display 220 may be brighter than others.
Processing module 104 is configured to evaluate and improve the uniformity of the virtual image rendered by NED 110. In some embodiments, processing module 104 can be included in a computer or a server. In some embodiments, processing module 104 can be deployed in the cloud, which is not limited herein. In some embodiments, processing module 104 can include one or more processors.
At step S402, one or more test patterns are generated for the first display and the second display to display. For example, with further reference to AR system 300, the one or more test patterns can be generated for first micro display 310 and second micro display 350.
In some embodiments, all the pixels in first micro display 310 and second micro display 350 need correction. In this situation, multiple test patterns can be generated for the displays to measure their performance in different imaging situations. For example, first micro display 310 and second micro display 350 include three kinds of fundamental pixels (red pixels, green pixels, and blue pixels); hence complete nonuniformity correction of AR system 300 may require testing all of these pixels. That is, multiple test patterns can be generated to light all the red pixels, green pixels, and blue pixels for observing their properties. As first micro display 310 and second micro display 350 are typically driven by signals in RGB chroma space, in some embodiments the multiple test patterns can include a standard red pattern, a standard green pattern, and a standard blue pattern in RGB chroma space. In this manner, all the red pixels, green pixels, and blue pixels of first micro display 310 and second micro display 350 can be lit by the standard red pattern, the standard green pattern, and the standard blue pattern in sequence.
In some embodiments, the test pattern can be a white pattern in RGB chroma space. As can be appreciated, compared with chroma nonuniformity, luminance nonuniformity is more easily observed by a viewer. Typically, a white dot in the test pattern is rendered by red pixels, green pixels, and blue pixels in the display. Hence, a white pattern will light at least a portion of the red pixels, green pixels, and blue pixels, and correcting this portion of the pixels will alleviate the nonuniformity of and/or between first micro display 310 and second micro display 350.
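By way of a non-limiting illustration, the following Python sketch shows how such solid-color test patterns might be generated in software; the 1920×1080 panel resolution and the 8-bit full-drive values are assumptions made for the example, not values taken from the present disclosure.

```python
import numpy as np

def solid_pattern(width, height, rgb):
    """Return a full-screen single-color test pattern in RGB chroma space."""
    pattern = np.zeros((height, width, 3), dtype=np.uint8)
    pattern[:, :] = rgb  # broadcast the color over every pixel
    return pattern

# Standard red, green, blue, and white patterns at full drive level;
# the panel resolution below is a placeholder, not from the disclosure.
W, H = 1920, 1080
red = solid_pattern(W, H, (255, 0, 0))
green = solid_pattern(W, H, (0, 255, 0))
blue = solid_pattern(W, H, (0, 0, 255))
white = solid_pattern(W, H, (255, 255, 255))
```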
As described above, waveguide 322 and optical combiner 323 are provided in optical path 320, while waveguide 362 and optical combiner 363 are provided in optical path 360. The imaging quality of AR system 300 can also be affected by optical path 320 or optical path 360. Specifically, at least a part of the nonuniformity in AR system 300 may be caused by optical combiner 323 and optical combiner 363. With the correction method provided in the present disclosure, this source of nonuniformity can be eliminated or at least diminished.
At sub-step S502, the first image and the second image are downsampled to obtain a first intermediate image and a second intermediate image having a target resolution. The images captured by an imaging module may have a fine resolution (e.g., 9000×6000), which may be too large for image processing. In some embodiments, the resolution of these images can be lowered by pixel decimation. As used herein, pixel decimation refers to a process by which the number of pixels in an image is reduced, e.g., downsampled. For example, the first image and the second image can each be downsampled to a target resolution of 640×480, as illustrated in the sketch below.
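A minimal sketch of such pixel decimation is given below, assuming a simple nearest-neighbor scheme that keeps one pixel per evenly spaced row and column (block averaging would be an equally valid reading of "decimation"); the stand-in capture size is arbitrary.

```python
import numpy as np

def decimate(image, target_w=640, target_h=480):
    """Reduce an image to the target resolution by pixel decimation,
    i.e., keeping one pixel per evenly spaced row and column."""
    h, w = image.shape[:2]
    rows = np.linspace(0, h - 1, target_h).astype(int)
    cols = np.linspace(0, w - 1, target_w).astype(int)
    return image[np.ix_(rows, cols)]

# Stand-in for a captured luminance map (e.g., 9000x6000 in practice).
captured = np.random.rand(600, 900)
small = decimate(captured)
assert small.shape == (480, 640)
```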
At sub-step S504, the first intermediate image and the second intermediate image are fused to form the fusion image.
The fusing method described above to form the fusion image is also applicable to the standard green pattern and the standard blue pattern, and is not repeated here. In the present disclosure, although images denoted as "CIE X-RED-LEFT", "CIE Z-RED-LEFT", "CIE X-RED-RIGHT", "CIE Z-RED-RIGHT", "CIE X-RED", and "CIE Z-RED" are used to represent a picture's chroma components, they can be expressed in grayscale showing the intensity of the X, Y, and Z components of each pixel.
In some embodiments, the first image and the second image are represented in grayscale, and the grayscale value of a pixel at a target location (e.g., with coordinates (x, y) in a 640×480 image) of the fusion image is equal to the grayscale value of the pixel at the target location of the first intermediate image plus the grayscale value of the pixel at the target location of the second intermediate image. This process can be represented by the following formula:

$$Gray\_fusion(x, y) = Gray\_left(x, y) + Gray\_right(x, y)$$

wherein $Gray\_fusion(x, y)$ denotes the grayscale value of the pixel at the target location of the fusion image, $Gray\_left(x, y)$ denotes the grayscale value of the pixel at the target location of the left image, and $Gray\_right(x, y)$ denotes the grayscale value of the pixel at the target location of the right image.
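A minimal sketch of this fusion step, assuming the per-pixel addition of the formula above, might look as follows.

```python
import numpy as np

def fuse(gray_left, gray_right):
    """Fuse two equally sized grayscale maps by per-pixel addition,
    following the formula above."""
    assert gray_left.shape == gray_right.shape
    # Promote to float so 8-bit inputs cannot overflow when summed.
    return gray_left.astype(np.float64) + gray_right.astype(np.float64)
```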
In some other embodiments, a grayscale value of a pixel in a target location of the fusion image can be calculated in a more complex manner that is more consistent with the working mechanism of the human visual system. The present disclosure is not limited by the manner in which the grayscale value of a pixel in a target location is calculated.
In some embodiments, the correction method is applied to both luminance nonuniformity and chromaticity nonuniformity. Hence, at sub-step S504, both the luminance components and the chromaticity components of the first intermediate image and the second intermediate image are fused to form the fusion image. In some embodiments, the correction method may focus on luminance nonuniformity. Hence, at sub-step S504 only the luminance components of the first intermediate image and the second intermediate image are fused to form the fusion image.
At sub-step S602, a target grayscale value is set for the fusion image according to a distribution of the grayscale values of the pixels in the fusion image. In some embodiments, the target grayscale value is the average of the grayscale values of the pixels in the fusion image. In some other embodiments, the target grayscale value is the grayscale value with the greatest probability in the distribution. The target value generally reflects the statistical status of the fusion image and can be used to represent the fusion image.
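The following sketch illustrates both options for choosing the target grayscale value; the 256-bin histogram used to locate the most probable value is an assumption made for the example.

```python
import numpy as np

def target_grayscale(fusion_image, mode="mean"):
    """Select the target grayscale value for the fusion image, either as
    the average value or as the most probable value of the distribution."""
    if mode == "mean":
        return fusion_image.mean()
    # Most probable value via a histogram; 256 bins is an assumption.
    hist, edges = np.histogram(fusion_image, bins=256)
    peak = int(np.argmax(hist))
    return (edges[peak] + edges[peak + 1]) / 2.0
```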
From the target grayscale values determined under the standard red pattern, the standard green pattern, and the standard blue pattern, a target matrix $[M_{3\times3}]_{obj}$ can be constructed:

$$[M_{3\times3}]_{obj} = \begin{bmatrix} X_R & X_G & X_B \\ Y_R & Y_G & Y_B \\ Z_R & Z_G & Z_B \end{bmatrix}$$
In this 3×3 matrix, $X_R$ represents the target grayscale value of chroma component X under the standard red pattern, $Y_G$ represents the target grayscale value of luminance component Y under the standard green pattern, $Z_B$ represents the target grayscale value of chroma component Z under the standard blue pattern, and so on. As can be understood, this matrix implements both luminance correction and chromaticity correction. If only luminance correction is to be applied, then only the second row $[Y_R\ Y_G\ Y_B]_{obj}$ needs to be determined, and the matrix becomes a 1×3 matrix.
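As a non-limiting illustration, such a target matrix might be assembled as follows; the numerical values are arbitrary placeholders rather than measured data.

```python
import numpy as np

# Hypothetical target grayscale values; each column corresponds to the
# standard red, green, or blue pattern, each row to CIE X, Y, or Z.
XR, XG, XB = 41.2, 35.5, 18.0
YR, YG, YB = 21.3, 71.5, 7.2
ZR, ZG, ZB = 1.9, 11.9, 95.0

M_obj = np.array([[XR, XG, XB],
                  [YR, YG, YB],
                  [ZR, ZG, ZB]])

# Luminance-only correction keeps just the second row as a 1x3 matrix.
M_obj_luma = M_obj[1:2, :]
```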
At the next sub-step, a correction matrix is determined for each pixel based on the target matrix and the actual pixel values of the fusion image. In other words, the correction matrix $[M_{3\times3}]_{corr}$ can be determined through the following equation:

$$[M_{3\times3}]_{corr} = [M_{3\times3}]_{obj} \times [M_{3\times3}]_{px}'$$
wherein $[M_{3\times3}]_{px}'$ denotes the inverse of the matrix $[M_{3\times3}]_{px}$, and $[M_{3\times3}]_{px}$ contains the actual pixel values that can be determined from the fusion image. As can be appreciated, an $[M_{3\times3}]_{corr}$ can be generated for each of the 640×480 pixels.
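A minimal sketch of this computation is shown below; it assumes the target matrix left-multiplies the inverse of the measured matrix, consistent with the equation above, and the field shape of per-pixel matrices is an assumption for the example.

```python
import numpy as np

def correction_matrix(M_obj, M_px):
    """Compute [M3x3]_corr = [M3x3]_obj x inverse([M3x3]_px) for one pixel,
    where M_px holds the actual values measured from the fusion image."""
    return M_obj @ np.linalg.inv(M_px)

def correction_field(M_obj, M_px_field):
    """Vectorized variant over a (480, 640, 3, 3) field of per-pixel
    measured matrices, yielding one correction matrix per pixel."""
    return np.einsum('ij,hwjk->hwik', M_obj, np.linalg.inv(M_px_field))
```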
In some embodiments, the gamma operation in the display driving system can also be considered when determining the correction scheme. For example, the correction matrix can be updated by a gamma operator $\gamma$ for each pixel in the fusion image:

$$[M_{3\times3}]_{corr\_2} = \gamma\left([M_{3\times3}]_{corr}\right)$$
When $[M_{3\times3}]_{corr\_2}$ is applied for correction, the driving system of the NED will not perform another gamma correction on the pixels.
In some embodiments, the determined correction matrices can be saved for further processing. In some other embodiments, the determined correction matrix of each pixel in the fusion image can be used to update the driving system of the NED. The driving system can then drive the first display and the second display with driver files updated with the corresponding correction matrices. For example, when a display pixel of the first display or the second display is driven by a signal $[R\ G\ B]^T$ in RGB chroma space, the driving system can correct this driving signal to $[M_{3\times3}]_{corr} \times [R\ G\ B]^T$, which is actually used to drive the pixel. When the gamma operation is considered, the driving system can instead correct the driving signal to $[M_{3\times3}]_{corr\_2} \times [R\ G\ B]^T$.
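As a non-limiting illustration, applying a pixel's correction matrix to its driving signal might look as follows; the 8-bit drive range and the example matrix are assumptions made for the sketch.

```python
import numpy as np

def corrected_signal(M_corr, rgb):
    """Apply a pixel's correction matrix to its RGB driving signal."""
    out = M_corr @ np.asarray(rgb, dtype=np.float64)
    # Clip back into the valid drive range; 8-bit drive is an assumption.
    return np.clip(out, 0, 255)

# Example: a mid-gray drive corrected by a slightly attenuating matrix.
M_corr = np.eye(3) * 0.95
print(corrected_signal(M_corr, (128, 128, 128)))  # -> [121.6 121.6 121.6]
```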
To assess the improvement achieved by the correction scheme, the uniformity of the first display and the second display can be evaluated before and after correction. In some embodiments, the method for correcting nonuniformity further includes the following steps (not shown): displaying the test pattern on the first display and the second display by the updated driving system; and verifying an updated uniformity of the first display and the second display according to an updated fusion image of the test pattern.
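As one way to quantify the before/after comparison, a simple min/max uniformity score is sketched below; the present disclosure does not fix a particular metric, so this convention is only an assumption.

```python
import numpy as np

def uniformity(fusion_image):
    """A simple uniformity score as the ratio of minimum to maximum
    grayscale; scores closer to 1.0 indicate a more uniform image."""
    img = np.asarray(fusion_image, dtype=np.float64)
    return img.min() / img.max()

# After updating the driving system and re-displaying the test pattern,
# the score of the updated fusion image should move closer to 1.0.
```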
Some embodiments of the present disclosure further provide a non-transitory computer-readable storage medium storing a set of instructions that are executable by one or more processors of a device to cause the device to perform any of the above-described methods for correcting nonuniformity of an NED.
It should be noted that relational terms herein such as “first” and “second” are used only to differentiate an entity or operation from another entity or operation, and do not require or imply any actual relationship or sequence between these entities or operations. Moreover, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
As used herein, unless specifically stated otherwise, the term “or” encompasses all possible combinations, except where infeasible. For example, if it is stated that a database may include A or B, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or A and B. As a second example, if it is stated that a database may include A, B, or C, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C.
In the foregoing specification, embodiments have been described with reference to numerous specific details that can vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made. Other embodiments can be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims. It is also intended that the sequence of steps shown in the figures be only for illustrative purposes and not be limited to any particular sequence of steps. As such, those skilled in the art can appreciate that these steps can be performed in a different order while implementing the same method.
In the drawings and specification, there have been disclosed exemplary embodiments. However, many variations and modifications can be made to these embodiments. Accordingly, although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation.
This disclosure claims the benefits of priority to PCT Application No. PCT/CN2023/143061, filed on Dec. 29, 2023, which is incorporated herein by reference in its entirety.