The present disclosure generally relates to near-eye display technology, and more particularly, to a method and system for detecting a visual artefact of a near-eye display.
Near-eye displays (NEDs) may be provided as augmented reality (AR) displays, virtual reality (VR) displays, head-up displays, head-mounted displays, or other displays. A NED generally comprises an image generator and an optical combiner. The image generator is commonly a projector integrating a micro display (e.g., micro-LED, micro-OLED, LCOS, or DLP) and optical lenses. The optical combiner includes reflective and/or diffractive optics, such as a freeform mirror/prism, a birdbath, cascaded mirrors, or a grating coupler (waveguide). The NED renders a virtual image to the human eye, with or without ambient light.
Visual artefacts are a key issue in evaluating the image quality of displays, and the issue is more serious for a NED because the NED is close to the human eye. The visual artefacts may include defects, mura/nonuniformity, ghosts, etc. A defect in a virtual image rendered by a NED is generally a visual artefact originating from one or more defective pixels of the image generator (e.g., micro-LED or micro-OLED). The defect may manifest as a brighter, darker, or dead dot, line, or zone. Nonuniformity is a severe visual artefact that dramatically degrades image quality; it may manifest as a mottled, bright, or black spot, or a cloud-like appearance. For NEDs such as AR/VR displays, the visual artefact is readily observable in the virtual image.
However, due to pixel-level crosstalk and the low resolution of the virtual image in NEDs, it is challenging to detect such visual artefacts in measurement.
Embodiments of the present disclosure provide a method for detecting a visual artefact of a near-eye display. The method includes generating a plurality of sub-test patterns, each of the sub-test patterns corresponding to a portion of the pixels of a source pixel array of the near-eye display being turned on, wherein a full test pattern comprising all of the plurality of sub-test patterns corresponds to each of the pixels of the source pixel array being turned on at least once; obtaining a plurality of sub-virtual images corresponding to the plurality of sub-test patterns; obtaining a final virtual image by integrating the plurality of sub-virtual images; and detecting the visual artefact, if any, based on the final virtual image.
Embodiments of the present disclosure provide a system for detecting a visual artefact of a near-eye display. The system includes a light measuring device (LMD) configured to: obtain a plurality of sub-virtual images corresponding to a plurality of sub-test patterns; and detect the visual artefact based on a final virtual image; and a processor configured to: generate the plurality of sub-test patterns, each of the sub-test patterns corresponding to a portion of the pixels of a source pixel array of the near-eye display being turned on, wherein a full test pattern comprising all of the plurality of sub-test patterns corresponds to each of the pixels of the source pixel array being turned on at least once; and integrate the plurality of sub-virtual images to obtain the final virtual image.
Embodiments and various aspects of the present disclosure are illustrated in the following detailed description and the accompanying figures. Various features shown in the figures are not drawn to scale.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the invention. Instead, they are merely examples of apparatuses and methods consistent with aspects related to the invention as recited in the appended claims. Particular aspects of the present disclosure are described in greater detail below. The terms and definitions provided herein control, if in conflict with terms and/or definitions incorporated by reference.
In some embodiments, NED 110 can include an image generator 111, and an optical combiner, also referred to herein as image optics (not shown in
Processing module 140 is configured to evaluate and improve uniformity/nonuniformity for the virtual image rendered in NED 110. In some embodiments, processing module 140 can be included in a computer or a server. In some embodiments, processing module 140 can be deployed in the cloud, which is not limited herein. In some embodiments, processing module 140 can include one or more processors.
In some embodiments, a driver provided as a driving module (not shown in
In some embodiments, for example in an AR application, ambient light is provided by the ambient light module. The ambient light module is configured to generate a uniform light source with a corresponding color (such as D65), which can support a measurement taken against an ambient light background and the simulation of various scenarios such as daylight, outdoor, or indoor conditions.
In order to improve the performance of detecting visual artefacts of a NED, the present disclosure provides a method for detecting such visual artefacts.
In some embodiments of the present disclosure, a source pixel array can be turned on to perform, i.e., display, a plurality of sub-test patterns, and corresponding sub-virtual images are obtained, respectively. In each sub-test pattern, a portion of the pixels of the source pixel array are turned on. All the sub-test patterns together constitute a full test pattern. Therefore, all the sub-virtual images can be integrated into a final virtual image in which all the pixels are turned on. Since only a portion of the pixels are turned on for each sub-test pattern, pixel-level crosstalk may be reduced.
At step 402, a plurality of sub-test patterns are generated. Each sub-test pattern corresponds to a portion of the pixels of the source pixel array being turned on, and a full test pattern, which includes all of the plurality of sub-test patterns, corresponds to all pixels of the source pixel array being turned on at least once.
In some embodiments, the sub-test patterns can overlap. That is, a pixel can be included in two or more sub-test patterns, and such a pixel is turned on more than once.
In some embodiments, the source pixel array is divided into several zones, and each zone includes a plurality of pixels.
In some embodiments, for efficiency, the sub-test patterns can be obtained by scanning the pixels in each zone in a predetermined order, for example, from top to bottom and from left to right, or in any other order, as long as all the pixels of the source pixel array are eventually scanned.
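The zone-based scanning described above can be sketched as follows. This is an illustrative implementation only; the function name, the equal rectangular zones, and the raster scan order are assumptions, not requirements of the method. Each generated pattern turns on exactly one pixel per zone, and scanning every in-zone position covers every source pixel exactly once.

```python
import numpy as np

def generate_sub_test_patterns(rows, cols, zone_h, zone_w):
    """Generate sub-test patterns for a rows x cols source pixel array.

    The array is divided into equal zones of zone_h x zone_w pixels
    (rows and cols are assumed divisible by the zone size).  Each
    sub-test pattern turns on exactly one pixel per zone, and the
    turned-on position is scanned through every in-zone position in
    raster order, so the union of all patterns turns on every pixel
    exactly once.
    """
    patterns = []
    for dy in range(zone_h):
        for dx in range(zone_w):
            p = np.zeros((rows, cols), dtype=bool)
            p[dy::zone_h, dx::zone_w] = True  # one pixel on per zone
            patterns.append(p)
    return patterns
```

Because the in-zone offset is shared by all zones, adjacent turned-on pixels are always at least the zone pitch apart, which is the spacing condition the disclosure relies on to suppress pixel-level crosstalk.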
In some embodiments, the source pixel array 500 can be divided unequally. The number of zones depends on the source pixel array (e.g., the test sample). In some embodiments, the number of source pixels in a zone is from 1 to 10,000.
In some embodiments, in each sub-test pattern, the distance between two adjacent turned-on pixels is greater than the distance between two physically adjacent pixels. That is, for each sub-test pattern, two physically adjacent pixels are not turned on at the same time, so that pixel-level crosstalk can be reduced. Referring back to
At step 404, a plurality of sub-virtual images are obtained. Each sub-virtual image corresponds to one sub-test pattern, so the plurality of sub-virtual images correspond to the plurality of sub-test patterns.
In some embodiments, a first sub-virtual image is rendered for a first sub-test pattern and captured by an LMD. A second sub-virtual image is then rendered for a second sub-test pattern and captured. The rendering and capturing are repeated until all the sub-test patterns have been performed, so that all the sub-virtual images are obtained.
At step 702, a sub-test pattern is performed. The sub-test pattern is generated by the image generator of NED 110, and in each sub-test pattern, only a portion of the pixels are turned on.
At step 704, a sub-virtual image is rendered for the sub-test pattern and presented to LMD 122. Since only a portion of the pixels are turned on for the sub-test pattern, crosstalk among pixels may be reduced, so that the optical characteristics of the turned-on pixels are clearer.
At step 706, the sub-virtual image is captured. In some embodiments, the sub-virtual image is captured by the LMD 122. In some embodiments, the captured sub-virtual image is a high-resolution image. For example, the output of step 706 can be a high-resolution image (e.g., 10000×10000 pixels) of the virtual image for a sub-test pattern. A registration of the virtual image to the source image can be applied as well.
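One minimal form of the registration mentioned above can be sketched as block-averaging the high-resolution capture onto the source pixel grid. This sketch assumes the capture is already geometrically aligned and differs only in scale; a real setup would first correct distortion and misalignment, and the function name is hypothetical.

```python
import numpy as np

def register_to_source(captured, src_rows, src_cols):
    """Map a high-resolution captured image onto the source pixel grid.

    Assumes the capture is aligned with the source array and only
    oversampled; each source pixel's value is the mean of the capture
    block that images it (edge remainders are trimmed).
    """
    h, w = captured.shape
    bh, bw = h // src_rows, w // src_cols
    trimmed = captured[:bh * src_rows, :bw * src_cols]
    return trimmed.reshape(src_rows, bh, src_cols, bw).mean(axis=(1, 3))
```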
At step 708, it is determined whether all the sub-test patterns have been performed. Referring to the example shown in
Other steps in
In some embodiments, the captured sub-virtual images are temporarily stored in the LMD 122 and transmitted to a processor as a batch after all the sub-virtual images are captured. In some embodiments, each captured sub-virtual image is transmitted to the processor as soon as it is captured, and stored in the processor for further processing.
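The render-and-capture loop of steps 702 to 708 can be sketched as follows. The `ned.display(pattern)` and `lmd.capture()` interfaces are hypothetical stand-ins for the driver module, NED 110, and LMD 122; the disclosure does not prescribe a particular API.

```python
def acquire_sub_virtual_images(ned, lmd, patterns):
    """Render each sub-test pattern on the NED and capture the
    resulting sub-virtual image with the light measuring device.

    `ned` and `lmd` are hypothetical device handles; the loop repeats
    until all sub-test patterns have been performed (step 708).
    """
    images = []
    for pattern in patterns:
        ned.display(pattern)          # steps 702/704: perform pattern, render image
        images.append(lmd.capture())  # step 706: capture the sub-virtual image
    return images
```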
At step 406, a final virtual image is obtained by integrating the plurality of sub-virtual images. After all the sub-virtual images are obtained, they are integrated (or, equivalently, combined or synthesized) into a final virtual image. Since each pixel has been turned on at least once in at least one sub-test pattern, the final virtual image can be considered a virtual image with all the pixels turned on. In some embodiments, the final virtual image is obtained as image data of the full virtual image for further processing. The image data of the respective sub-virtual images are integrated to obtain the image data of the final virtual image, without a full virtual image being displayed to and captured by LMD 122.
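The integration of step 406 can be sketched as follows, assuming non-overlapping sub-test patterns (each source pixel lit in exactly one sub-virtual image). Under that assumption a per-pixel maximum across the registered sub-virtual images reconstructs an image in which every pixel appears lit; the maximum rule is an illustrative choice, and other rules (e.g., summation) are equally possible.

```python
import numpy as np

def integrate_sub_virtual_images(sub_images):
    """Combine sub-virtual images into the final virtual image.

    Assumes non-overlapping sub-test patterns, so each pixel is lit in
    exactly one sub-image; taking the per-pixel maximum across the
    stack keeps each pixel's lit value and discards the dark readings.
    """
    stack = np.stack(sub_images, axis=0)
    return stack.max(axis=0)
```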
At step 408, the visual artefact, if any, is detected based on the final virtual image. The visual artefact can be detected at the pixel level; that is, a pixel-level defect can be identified. The visual artefact, if any, may include defects, mura/nonuniformity, ghosts, etc. In some embodiments, optical characteristics of the final virtual image are obtained, and the visual artefact is then detected based on the optical characteristics. In some embodiments, the optical characteristics comprise luminance and/or chromaticity.
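A minimal sketch of detection based on luminance follows. The median-deviation rule and the 20% threshold are illustrative assumptions only; the disclosure does not specify a detection criterion, and a practical detector would also consider chromaticity and local neighbourhood statistics.

```python
import numpy as np

def detect_artefacts(luminance, rel_tol=0.2):
    """Flag pixel-level visual artefacts in the final virtual image.

    A pixel is flagged when its luminance deviates from the image
    median by more than rel_tol (20% here, an illustrative threshold).
    Returns a boolean map: True where an artefact is suspected.
    """
    ref = np.median(luminance)
    return np.abs(luminance - ref) > rel_tol * ref
```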
Referring to
In some embodiments of the present disclosure, a device for detecting a visual artefact of a NED is provided. The device can perform the above-described methods. Referring back to
In some embodiments, the processing module 140 is further configured to divide the source pixel array into two or more zones, each zone including a plurality of pixels; and obtain the plurality of sub-test patterns, each sub-test pattern corresponding to one pixel being turned on in each zone.
In some embodiments, the processing module 140 is further configured to generate the sub-test patterns by turning on the one pixel in each zone in a preset order.
In some embodiments, the processing module 140 is configured to generate the sub-test patterns by turning on the one pixel in each zone in a same order.
In some embodiments, the NED 110 is further configured to render a sub-virtual image for a sub-test pattern. The imaging module 120 is configured to capture the corresponding sub-virtual image. The driver module (not shown) is further configured to control the NED 110 and the imaging module 120 to repeat the rendering and the capturing until all of the plurality of sub-test patterns have been performed.
In some embodiments, the imaging module 120 is further configured to obtain optical characteristics of the final virtual image; and detect a visual artefact based on the optical characteristics.
The details of each step performed by system 100 are the same as the methods described above with reference to
The embodiments may further be described using the following clauses:
1. A method for detecting a visual artefact of a near-eye display, comprising:
2. The method according to clause 1, wherein generating the plurality of sub-test patterns further comprises:
3. The method according to clause 2, wherein generating the plurality of sub-test patterns further comprises:
4. The method according to clause 2 or 3, wherein the two or more zones are divided equally, each of the zones comprising a same number of pixels.
5. The method according to clause 4, wherein each of the zones has a same shape.
6. The method according to clause 5, wherein generating the plurality of sub-test patterns further comprises:
7. The method according to any one of clauses 1 to 6, wherein in each of the sub-test patterns, a distance between two adjacent turned-on pixels is greater than a distance between two adjacent pixels.
8. The method according to any one of clauses 1 to 7, wherein obtaining the plurality of sub-virtual images corresponding to the plurality of sub-test patterns further comprises:
9. The method according to any one of clauses 1 to 8, wherein the sub-virtual image is obtained by a light measuring device (LMD).
10. The method according to any one of clauses 1 to 9, wherein detecting the visual artefact based on the final virtual image further comprises:
11. The method according to clause 10, wherein the optical characteristics comprise luminance and/or chromaticity.
12. The method according to any one of clauses 1 to 11, wherein the visual artefact is detected at a pixel level.
13. A system for detecting a visual artefact of a near-eye display, comprising:
14. The system according to clause 13, wherein the near-eye display comprises a source pixel array, and the processor is further configured to:
15. The system according to clause 14, wherein the processor is configured to generate the plurality of sub-test patterns by turning on the one pixel in each of the zones in a preset order.
16. The system according to clause 14 or 15, wherein the two or more zones are divided equally, each of the zones comprising a same number of pixels.
17. The system according to clause 16, wherein each of the zones has a same shape.
18. The system according to clause 17, wherein the processor is configured to generate the plurality of sub-test patterns by turning on the one pixel in each of the zones in a same order.
19. The system according to any one of clauses 13 to 18, wherein the near-eye display is further configured to render a sub-virtual image for a sub-test pattern; the LMD is configured to capture the sub-virtual image; and the system further comprises a driver configured to control the near-eye display and LMD to repeat the rendering and the capturing until all the plurality of sub-test patterns have been performed.
20. The system according to any one of clauses 13 to 19, wherein the LMD is further configured to:
21. The system according to clause 20, wherein the optical characteristics comprise luminance and/or chromaticity.
22. The system according to any one of clauses 13 to 21, wherein the visual artefact is detected at a pixel level.
It should be noted that relational terms herein such as “first” and “second” are used only to differentiate an entity or operation from another entity or operation, and do not require or imply any actual relationship or sequence between these entities or operations. Moreover, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
As used herein, unless specifically stated otherwise, the term “or” encompasses all possible combinations, except where infeasible. For example, if it is stated that a database may include A or B, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or A and B. As a second example, if it is stated that a database may include A, B, or C, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C.
In the foregoing specification, embodiments have been described with reference to numerous specific details that can vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made. Other embodiments can be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims. It is also intended that the sequence of steps shown in figures are only for illustrative purposes and are not intended to be limited to any particular sequence of steps. As such, those skilled in the art can appreciate that these steps can be performed in a different order while implementing the same method.
In the drawings and specification, there have been disclosed exemplary embodiments. However, many variations and modifications can be made to these embodiments. Accordingly, although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation.
| Number | Date | Country | Kind |
|---|---|---|---|
| PCT/CN2022/139060 | Dec 2022 | WO | international |
The present disclosure claims priority to and the benefits of PCT Application No. PCT/CN2022/139060, filed on Dec. 14, 2022, which is incorporated herein by reference in its entirety.